
Forget Math to Become a Great Computer Scientist?

Coryoth writes "A new book is trying to claim that computer science is better off without maths. The author claims that early computing pioneers such as Von Neumann and Alan Turing imposed their pure mathematics background on the field, and that this has hobbled computer science ever since. He rejects the idea of algorithms as a good way to think about software. Can you really do computer science well without mathematics? And would you want to?"
  • by ScottyH ( 791307 ) on Sunday July 08, 2007 @08:03AM (#19788307)
    The author is saying that without the pioneers of the science CS wouldn't be intertwined with mathematics. So yeah, it is fundamental now, but in the absence of the original contributors it may have looked quite different. I personally find the argument a bit difficult to swallow, but then again, I'm inside the box.
  • There are of course parts of CS that are less involved in math, but it is still overall a fundamental part.

    Not even that.... Computer Science is a subsection of Maths. That's it.... Theoretically, you can complete CS without ever touching a computer.

    I was never the best at maths (even though I beat the best of our class in the final maths exam, but that must have been pure luck. He is a maths PhD at Harvard now, so....). Luckily the parts of maths that are useful to CS were within my reach ;-)

  • by 3seas ( 184403 ) on Sunday July 08, 2007 @08:15AM (#19788367) Homepage Journal
    What the short review seems to be saying is that the author recognizes it's not just math.

    How in depth the book goes I do not know, but I do know I've been on about the abstraction perspective for nearly two decades, communicating it to everyone I can, including those in positions at universities.

    I have noticed these last few years that there are others beginning to grasp the bigger picture, such as J. Wing of CMU and her "Computational Thinking" perspective http://www.cs.cmu.edu/computational_thinking.html [cmu.edu], and P. Denning of GMU and his "Great Principles of Computing" http://cs.gmu.edu/cne/pjd/GP/GP-site/welcome.html [gmu.edu], and I'm sure there are others.

    Now I see this short review of the book "Computer Science Reconsidered: The Invocation Model of Process Expression"... yet I have not seen from any of them software, or even an outline of such, that anyone can use to explore and apply the presented perspective. And we all know that really understanding something as it applies to computers requires the actual use of a computer in the learning process, for verification of understanding.

    So, here is mine: http://threeseas.net/vicprint/Virtual_Interaction_Configuration.html [threeseas.net], which the link I gave in the parent post points to.

    It's all about Abstraction Physics, no matter how you present it or what you call it. The evidence is in the inability to avoid using the mentioned set of action constants, with or without computers. Know what you do, in everything you do!
  • by neillewis ( 137544 ) on Sunday July 08, 2007 @08:37AM (#19788467)
    A lot of the criticism of this guy seems to be knee-jerk defensiveness. Read his papers on 'NULL Convention Logic' and its applicability to asynchronous circuit design and you will see where he is coming from.
  • Actually, "Informatics" (which is, as you say, an incorrect term in English) is used in other languages to label "Computer Science". In Dutch it is "Informatica", in German it is "Informatik" and in French is "Informatique" (sorry, I now am at the boundaries of my own language skills). All there translate to "Computer Science".

    I have to admit that I prefer the English term, because it says much more than the Dutch, French and German terms. Fact is: "Informatics" is the same thing as "Computer Science".

    Go to Wikipedia, search for "Computer Science", and see what the languages I mention translate it to. (Try "Nederlands", "Français" and "Deutsch" in the left-hand column.)

  • by michaelmalak ( 91262 ) <michael@michaelmalak.com> on Sunday July 08, 2007 @08:58AM (#19788589) Homepage
    Of the $6.3 billion that Fannie Mae had to restate to the SEC in 2006 (covering the 1998-2006 timeframe), $1 billion [ofheo.gov] was due to "End User Computing", presumably an error in a Microsoft Excel spreadsheet:

    OFHEO expressed concerns about Fannie Mae's reliance on end-user computing systems and the lack of strong controls that led to the $1 billion computational error and directed the Enterprise to take remedial action.
    Fannie Mae now requires its IT department to develop applications and has made mandatory many best practices that were previously recommended, such as unit tests, strict source code control, strict deployment control, and a software management process modeled after PMI.
  • by Anonymous Coward on Sunday July 08, 2007 @09:24AM (#19788763)
    Because it doesn't say anything at all like he's claiming it says.

    It was garden variety executive directed securities fraud. Not errors created by poor VB scripts in Excel.

    VI. MISAPPLICATIONS OF GAAP, WEAK INTERNAL CONTROLS, AND
    IMPROPER EARNINGS MANAGEMENT
    As noted in previous chapters of this report, the extreme predictability of the financial results reported by Fannie Mae from 1998 through 2003 was an illusion deliberately and systematically created by senior management. This chapter provides specific examples of how senior executives exploited the weaknesses of the Enterprise's accounting to accomplish improper earnings management and misapply Generally Accepted Accounting Principles (GAAP), and how they used a variety of transactions and accounting manipulations to fine-tune the Enterprise's annual earnings results. Those actions aimed to perpetuate management's reputation for achieving smooth and predictable double-digit growth in earnings per share and for keeping Fannie Mae's risk low, while assuring maximum funding of the pool from which senior management would receive bonus payments under the Enterprise's Annual Incentive Plan as well as maximum payments under other, longer-term executive compensation plans.
    To provide context for the technical material that follows, the chapter first expands on several issues raised in the previous chapters by elaborating on the concept of improper earnings management and describing the circumstances that demonstrate that Fannie Mae senior management must have been aware of the evolving official concerns about such practices.

    Following those discussions, the chapter reviews the improper accounting policies and control weaknesses that created opportunities for inappropriate manipulation of earnings at the Enterprise. The chapter then describes inappropriate accounting undertaken to avoid recording other-than-temporary impairment losses to avoid earnings volatility. The chapter concludes with discussions of several additional techniques used by senior management to fine-tune reported earnings results.
    The actions and inactions of Fannie Mae senior management described in this chapter constituted unsafe and unsound practices that involved failures to comply with a number of statutory and other requirements. Several independent authorities, for example, require the Enterprise to verify and submit financial information. The Fannie Mae Charter Act--the statute that created the Enterprise--specifically requires that quarterly and annual reports of financial conditions and operations be prepared in accordance with GAAP. The Federal Housing Enterprises Financial Safety and Soundness Act of 1992, OFHEO's organic statute, requires Fannie Mae to provide OFHEO with reports on its financial condition and operations.

    Similarly, regulations promulgated by OFHEO under that statute require the Enterprise to prepare and submit financial and other disclosures that include supporting financial information and certifications, on matters such as its financial condition, the results of its operations, business developments, and management's expectations.

    Moreover, in accordance with applicable safety and soundness authorities, Fannie Mae should have had an effective system of internal controls in place under which:
    policies and procedures would be sufficient to assure that the organizational structure of the Enterprise and the assignment of responsibilities within that structure would provide clear accountability;
    policies and procedures would be adequate to manage and safeguard assets, and assure compliance with applicable law and regulation;
    policies and procedures would assure reports and documents would be generated that are timely, complete, and sufficient for directors and management to make informed decisions by providing relevant information with an appropriate level of detail; and
    policies and procedures for managing changes in risk would be sufficient to permit the prudent management of balance sh
  • by nomadic ( 141991 ) <nomadicworldNO@SPAMgmail.com> on Sunday July 08, 2007 @10:41AM (#19789291) Homepage
    I look forward to reviewing some of this guy's code.

    Knock yourself out. [theseusresearch.com] Whether you agree or disagree with this guy, it's obvious his credentials put him at a level above 95% of the people criticizing him here.
  • Re:Lemme guess (Score:2, Informative)

    by nomadic ( 141991 ) <nomadicworldNO@SPAMgmail.com> on Sunday July 08, 2007 @10:51AM (#19789381) Homepage
    The author really sucks at math but heard that there's big bucks in the computer stuff, right?

    No. [theseusresearch.com]
  • by smilindog2000 ( 907665 ) <bill@billrocks.org> on Sunday July 08, 2007 @12:55PM (#19790407) Homepage
    The web site often refers to the reliability of hardware compared to software, and argues that we need a software development methodology modeled on hardware design. I disagree. Hardware is often incredibly difficult to design: we use procedural Verilog code to describe each process, and the rest is manual connections between them. At the manual connection level, we're no more productive designing circuits than we were when we used schematic entry. There's something called the "Design Gap", which refers to the fact that we don't yet know how to make chip designers productive enough for them to use up all the transistors now available on chips. This is one reason so many design teams were off-shored to India. We often can't afford the salaries of US engineers to design modern chips.

    Many people have pointed out that programmers can describe the function of a chip in C several times faster than chip designers can in Verilog. The point is that chips need to work no matter what system they're used in, while somewhat unreliable software can be OK. I'm releasing a 500K-line program to a client this weekend. It has bugs. However, it passes hundreds of designs without failure, indicating most customers likely won't run into them. We built that system with only a dozen-ish man-years of effort, but AFAIK it's more complex than any chip ever designed. Our software methodology converges on reliability, but only to a certain quality point. Going beyond that quality point wastes resources that are needed for the next project. To get to Intel Pentium level of quality would take a team the size of Intel's processor design group.

    There are several companies that now directly convert C (or SystemC - basically C++) code to hardware. The idea is to greatly improve hardware design productivity, but it only partially works. The bottleneck is verification. If the link between the description and the hardware is difficult to see and understand, debugging and verification become a nightmare. A C model can be written much faster, but who cares, since it's the verification that takes all the effort? Maybe they can figure out how to improve verification flows, but until then, plain old procedural C/C++ programming with solid coding methodologies will continue to kick the pants off of hardware design in terms of productivity.
  • by Anonymous Brave Guy ( 457657 ) on Sunday July 08, 2007 @01:07PM (#19790499)

    Having written a software renderer myself, I am very well aware how such libraries work, and I can tell you that very little higher math is involved in making them, and all that math has already been done.

    But that wasn't what I was talking about, as I thought was pretty clear from my previous comment. Sure, the rendering itself is fairly straightforward, but how do you decide what to render? And yes, I do do this for a living, and the maths and algorithms involved in serious CAD (for example) are not trivial.

    No, we are talking about Software Engineering, which is what most people here do.

    You might be. The rest of us are talking about Computer Science. The clue is in the title of the discussion.

    Bullshit. Nobody has developed anything useful in the field of computer science in more than a decade. All the higher-level theory has been done already.

    Sure. So tell me again the theory of how to get a compiler to automatically subdivide work and hand out chunks to different threads, so we can take advantage of all this wholesome multi-core goodness the chip vendors are providing us with these days?
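    For contrast, here is a minimal sketch (my own illustration, not from the comment above) of what programmers currently do by hand in Python: explicitly split the work into chunks and hand each chunk to a worker thread. The function names and chunking scheme are assumptions for illustration only; the comment's point stands that no compiler theory yet performs this subdivision automatically.

        from concurrent.futures import ThreadPoolExecutor

        def work(x):
            return x * x  # stand-in for the real per-item computation

        def parallel_map(data, workers=4):
            # Split the data into roughly equal chunks, one per worker thread.
            size = max(1, (len(data) + workers - 1) // workers)
            chunks = [data[i:i + size] for i in range(0, len(data), size)]
            with ThreadPoolExecutor(max_workers=workers) as pool:
                # Each thread processes its chunk independently; results are re-joined.
                partial_results = pool.map(lambda chunk: [work(x) for x in chunk], chunks)
                return [y for part in partial_results for y in part]

        print(parallel_map(list(range(10))))  # [0, 1, 4, 9, ...]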

  • by Anonymous Coward on Sunday July 08, 2007 @01:30PM (#19790715)
    Sorry, it is possible to write a worst-case O(n lg n) Quicksort. That's worst-case, not expected-worst-case or average-case. The trick is to use an O(n) selection algorithm to find the median of each subset and partition about that. Partitioning around the median guarantees that the rest of the algorithm stays in O(n lg n) as long as finding the median can be done in no more than O(n). The reason no one ever implements it that way is that the overhead involved in the O(n) Select makes the average-case performance worse. In contrast, if you perform a randomized partition, you get an expected-worst-case time of O(n lg n). This means that, while you can still get O(n^2) worst-case time, you can't force it to happen by running it on the same data. In order to get the worst-case to happen, you'd not only have to get unlucky with your input data, you'd also have to get unlucky with the sequence of numbers coming from your random number generator. (In other words, some malicious person can't slow down your algorithm by handing you a worst-case data set.)

    The only description of the worst-case O(n lg n) Quicksort I could quickly find for free online is slide 16 of these lecture notes (PPT) [virginia.edu], but there aren't many details there. More can be found in the chapter on Order Statistics in "Introduction to Algorithms", 2nd. ed., by Cormen, Leiserson, Rivest, and Stein.
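    For readers who want to see the construction concretely, here is a minimal Python sketch of a worst-case O(n lg n) quicksort that partitions around the exact median, found with the deterministic median-of-medians selection algorithm described above. It is an illustrative assumption of how the pieces fit together, not code from the cited lecture notes or textbook, and it deliberately ignores the constant-factor overhead the comment mentions.

        def median_of_medians(items, k):
            # Return the k-th smallest element (0-based) of items in O(n) worst case.
            if len(items) <= 5:
                return sorted(items)[k]
            # Median of each group of five, then recurse to find their median as the pivot.
            medians = [sorted(group)[len(group) // 2]
                       for group in (items[i:i + 5] for i in range(0, len(items), 5))]
            pivot = median_of_medians(medians, len(medians) // 2)
            lows = [x for x in items if x < pivot]
            pivots = [x for x in items if x == pivot]
            highs = [x for x in items if x > pivot]
            if k < len(lows):
                return median_of_medians(lows, k)
            if k < len(lows) + len(pivots):
                return pivot
            return median_of_medians(highs, k - len(lows) - len(pivots))

        def quicksort(items):
            # Worst-case O(n lg n): every partition is around the true median.
            if len(items) <= 1:
                return items
            pivot = median_of_medians(items, len(items) // 2)
            lows = [x for x in items if x < pivot]
            pivots = [x for x in items if x == pivot]
            highs = [x for x in items if x > pivot]
            return quicksort(lows) + pivots + quicksort(highs)

        print(quicksort([9, 3, 7, 1, 8, 2, 5, 4, 6, 0]))  # [0, 1, 2, ..., 9]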
  • Re:Damn straight! (Score:5, Informative)

    by NickFortune ( 613926 ) on Sunday July 08, 2007 @02:30PM (#19791157) Homepage Journal

    No, seriously. It seems that his entire argument is directed towards changing semantics to take the emphasis off of the mathematical underpinnings of computer science. Rar.

    I think he's astroturfing for the pro-patents lobby.

    One of the reasons you can't patent software in the EU (and probably many other places) is that algorithms are essentially mathematical constructs, and maths is generally regarded as unpatentable.

    So maybe one of the big software houses has decided that the next time they go to court over patents, it might be useful to have a scholarly book saying how algorithms are not in fact math based, and should therefore be patentable.

    It would also explain the odd references to circuit boards - which are another arguing point in the patent debate. If it has a physical expression, the argument goes, then it can't be maths.

  • Re:Damn straight! (Score:3, Informative)

    by gordo3000 ( 785698 ) on Sunday July 08, 2007 @04:48PM (#19792153)
    Just wondering, but have you actually read what the Harvard president said? I find most people haven't. What he did say was much milder than anything like "girls are stupider than boys", and unfortunately this backlash really kills an interesting line of study.

    It is well known that men tend in much greater numbers towards both genius and retardation. Part of his reasoning was that there may just be a smaller pool to draw from for the very top, coupled with fundamental differences in brain development between the sexes. For example, it's now known women have far more white matter and far less gray matter than men do, and some people believe this may be a reason why men tend to excel in the mathematics realm (due to the differences in how these areas of the brain approach problems). Quick link: http://today.uci.edu/news/release_detail.asp?key=1261 [uci.edu]

    I say more research is needed, but people shouldn't be surprised if they find later on that they basically crushed a good man's reputation because of some politically correct bullshit. It's said common sense is just the set of biases we develop by the time we are 18; maybe we'll find one day that men and women are just built to process (and therefore excel) differently.
