Programming Education IT Technology

Dijkstra's Manuscripts Available Online 251

Bodrius writes "Salon has a short but interesting article called GOTO considered joyful, about E. W. Dijkstra's manuscripts, as published by the University of Texas, and their bloggish nature. I'm not sure if the blog analogy is that accurate, but the articles are a must read for computer scientists and geeks in general." (Annoying but free click-through system for non-subscribers.)
  • by Alranor ( 472986 ) on Wednesday July 09, 2003 @10:11AM (#6399943)
    Um, since when is perl legacy technology?
  • by CoolVibe ( 11466 ) on Wednesday July 09, 2003 @10:13AM (#6399963) Journal
    Dijkstra had very distinctive and very readable handwriting. It certainly influenced mine. I don't know which pens he used, but I do agree there is something about writing stuff by hand. For one, you write slower than you think. And it can be a really meditative experience, putting words to paper by hand.
  • Re:Compelling? (Score:4, Insightful)

    by garcia ( 6573 ) * on Wednesday July 09, 2003 @10:14AM (#6399969)
    you really need to RTFA and his documents first.

    As a person only vaguely interested in CS I can say that I was more intrigued by the fact that he hand-wrote his documents and added personal notes about what he was feeling at the time (see my note [slashdot.org] about which pen he was using), all of which is VERY interesting to me.

    For me, these little things are far more interesting than what topics he happened to be discussing.

    His "blog-like" notes are probably better to read than JoSchmoe049169666420's because they come from a very well-known professor who was in touch with the CS academic community.

    That's my worthless .02 at least.
  • by errl ( 43525 ) on Wednesday July 09, 2003 @10:15AM (#6399980) Homepage
    The article states that Dijskstra has said:

    "Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians."

    I do not agree with this. In pure mathematics there is not much to think about besides mathematics, while programming involves many other aspects, creativity for example. So if you are a poor mathematician but have the other qualities that programming needs, you would have an easier time with programming than with pure mathematics, I think.
  • by Anonymous Coward on Wednesday July 09, 2003 @10:15AM (#6399985)
    Only in the United Corporations Of America my friend...
  • by no reason to be here ( 218628 ) on Wednesday July 09, 2003 @10:19AM (#6400008) Homepage
    Well, he started writing in the 1960s, so it was pretty non-trivial to fire up your computer and peck away at a keyboard in some very primitive text editor with (if one were lucky) a tiny amber monochrome display. At the point he started writing his JOURNAL (sorry, I just fucking hate the word "blog"), pen and paper was the easiest, most reliable, and most expedient option (also remember that at that time, mathematicians and engineers were still using slide rules). By the time word processing became a more viable option, he was entrenched in the habit of keeping a paper journal. Furthermore, until the advent of the portable computer, if you wanted to write in your journal regardless of where you were, pen and paper was the only option. Personally, I'd like to see more people keep pen and paper journals; one can tell a lot about people from their handwriting.
  • by Blitzshlag ( 685207 ) on Wednesday July 09, 2003 @10:22AM (#6400022)
    His teachings are not language specific.
  • Ever hear of OSPF (Score:4, Insightful)

    by alen ( 225700 ) on Wednesday July 09, 2003 @10:22AM (#6400026)
    I heard it's still pretty popular for routing traffic on the Internet.
  • Algorithms? (Score:4, Insightful)

    by TrekkieGod ( 627867 ) on Wednesday July 09, 2003 @10:29AM (#6400084) Homepage Journal
    Dijkstra developed some very efficient algorithms, and algorithms span all computer languages, even if I were to agree with you that C++ and perl are no longer used...which I don't.

    What comes to mind first is Dijkstra's Shortest Path Algorithm [tokushima-u.ac.jp]. And hey, look...that page has Java programs. In fact, take a look at a Java applet [toronto.edu] to better understand the algorithm.

  • by Anonymous Coward on Wednesday July 09, 2003 @10:30AM (#6400094)
    Salon asks for help [slashdot.org]

    Another slashdot karma whore.
  • by jejones ( 115979 ) on Wednesday July 09, 2003 @10:30AM (#6400095) Journal
    Yes, Dijkstra is still relevant. That you should think he has anything to do with C++ is strange, and makes me wonder whether you're familiar with Dijkstra's work at all. Take a look at EWD 1243, and you'll see that he thought it was just another one of the messes pushed as the savior of us all. I dare say he'd say the same for Java and C#, which will be the legacy technology of tomorrow.

    Dijkstra's work on writing programs so as to be confident in their correctness from the start is very relevant--how much do you think people would be willing to pay for an OS written that way?
  • by Rogerborg ( 306625 ) on Wednesday July 09, 2003 @10:38AM (#6400159) Homepage

    > You could change the expiration on the temporary cookie they give you to get perminent access. Of course, this would be illegal.

    I was winding myself up to sneer, but then I realized that this would be [circumventing] a technological measure that effectively controls access to a work protected under [Title 17] [warwick.ac.uk].

    While we're at it, remember that "No person shall [...] offer to the public [or] provide [...] any technology [...] or part thereof that is primarily designed or produced for the purpose of circumventing a technological measure that effectively controls access to a work protected under [Title 17]."

    Citizen, remain at your console while the Secret Service analyzes the case against you and decides your guilt and an appropriate punishment.

  • by nullp0inter ( 687739 ) on Wednesday July 09, 2003 @10:41AM (#6400173)
    Maybe your comment was intended to be a troll, but if you think C++ and perl are no longer relevant you're deluding yourself, Doctor Burns. Maybe you have moved on to more "bold and beautiful" things, but C++ is still widely in use, for much more than just the tech support night shift. Just because you aren't using them does not make them, or Dijkstra's writings, which are not specific to any programming language, irrelevant.
  • by errl ( 43525 ) on Wednesday July 09, 2003 @10:42AM (#6400183) Homepage
    I do see, and understand, your point. Maybe my point needs some clarification.

    I too think that good mathematicians very often make good programmers, and the other way around. The problem I see with Dijkstra's statement is that he says (as I understand it) that poor mathematicians would do better as pure mathematicians than they would as programmers. However you see it, there is more mathematics in pure mathematics than there is in programming. And thus if you have the other qualities needed in programming, but you are not very good at maths, you would make a better programmer than pure mathematician (though maybe not a very good one either). I hope that makes my point a bit clearer.
  • by Cpt. Fwiffo ( 42356 ) on Wednesday July 09, 2003 @10:52AM (#6400249)
    I think you are very much mistaken there.

    I study CS @ Eindhoven University, where he came and taught a lot (he and his compatriots were good at programming methodology: http://www.win.tue.nl/pm/ - horrible-looking webpage). Trust me, it shows. Most of the 'hardcore' faculty members were friends/ex-students/what have you, and work the way he did. Dijkstra (and the folks at my faculty) did not bother with implementations of programming languages, nor with what function to call for what. They all strive to understand the nature of the problem, and from that they try to derive the solution.
    That's a totally different approach to programming, and it is a *lot* of work. However, it shows in areas where simplicity is key. There is a reason why Dijkstra used semaphores (what do you think Java uses? A minimal sketch is at the end of this comment). Or have you ever seen a good proof of Peterson's Algorithm? (I know Feijen and van Gasteren gave a generic derivation in 'On a Method of Multiprogramming', but that's just me having had to read it because it's part of my study there, of course. A book which delves into seemingly simple problems, but then builds up a framework which can tackle much bigger problems than you would expect.)

    The problems of single-process computing are easy. For those of you who program in that world, I'm not trying to criticize or anything (I personally know that it's still damn hard from time to time), but there are no synchronisation problems, for one. To ensure these are all systematically handled you really want a proof that nothing can go wrong. Java and exceptions? Fine, it's just a way to get away with bad programming. There are a lot of places where you simply cannot get away with dirty programming: you don't want your car to deadlock going at 90 mph, now do you? You want to be absolutely positive that it will *never* happen. That means either having done extensive testing (which you can only hope was sufficient), or having a formal proof that it cannot go wrong.

    That is why Dijkstra restricted himself to the 'very hard problems'. The easy ones you can mess up and still not have too many problems. The hard ones are problematic if they fail.
    He did not believe in cluttered code. Everything should be there for a reason, and should be proven to be there, and exactly there, for that reason.


    "If I've learned one thing it's that in IS/IT/CS you either adapt and move on or you end up doing tech support on the midnight shift. Plain and simple. I think Fred Brooks touched on this in his book "The Mythical Man Month" when he said that computer programming will never be a mature field because to excel in it you must always be changing your language focus.


    To excel in computer programming you must be smart enough to tackle the really hard problems. That means tackling problems in the problem domain. You don't need languages for that, you need proof. Languages are but a tool for describing a solution and verifying your proof. Some languages describe it more easily than others, yes, but the solution is the same.

    I can write a C to Haskell to C++ to Prolog to Java compiler. Pretty straightforward too. The languages are the same. You just don't want to see the spaghetti which comes out of a program once I'm through with it. And that's the reason why you use a specific language for solving a problem: some languages are simply much easier to express the solution in.
    However, that does NOT solve the problem, it merely makes it easier to program the solution understandably.

    Dijkstra was above all a scientist, and thus had to convince the scientific community of his ideas. This is normally done by using formal methods which describe both the problem and the solution in such a way that they can be easily understood.

    That is still the holy grail for many solutions: how can they be written such that they can be understood more easily?

    But I'm starting to rant here...
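
    P.S. Since I brought up semaphores and Java: here is a minimal toy sketch (my own throwaway code, class and variable names made up, not taken from the book above) of a binary semaphore guarding a shared counter with java.util.concurrent.Semaphore:

        import java.util.concurrent.Semaphore;

        // Toy example: a binary semaphore (Dijkstra's P and V) guarding a shared counter.
        class SemaphoreDemo {
            private static final Semaphore mutex = new Semaphore(1); // one permit = binary semaphore
            private static int counter = 0;

            public static void main(String[] args) throws InterruptedException {
                Runnable worker = () -> {
                    for (int i = 0; i < 100_000; i++) {
                        try {
                            mutex.acquire();          // P(mutex): wait for the permit
                            try {
                                counter++;            // critical section
                            } finally {
                                mutex.release();      // V(mutex): hand the permit back
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                };
                Thread t1 = new Thread(worker), t2 = new Thread(worker);
                t1.start(); t2.start();
                t1.join();  t2.join();
                System.out.println(counter);          // always 200000 with the semaphore in place
            }
        }

    Drop the acquire/release pair and the two threads race on counter++, which is exactly the kind of bug you cannot test your way out of.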
  • by CompWerks ( 684874 ) on Wednesday July 09, 2003 @10:52AM (#6400255)
    that supports the KISS principle when it comes to CS
  • by h00pla ( 532294 ) on Wednesday July 09, 2003 @11:05AM (#6400342) Homepage
    I love fountain pens and I really get my ideas flowing when I use them, even if it is to turn around and code the thing with emacs. I have tried to use emacs or other editors to flesh out preliminary ideas, but it just doesn't have the same appeal to me. I believe I read something about air traffic controllers still doing part of their job on paper because they can't get the same results with computer programs. It has something to do with that meditative experience you're talking about, IIRC.

  • by Morgaine ( 4316 ) on Wednesday July 09, 2003 @11:11AM (#6400392)
    (1) He obviously can't tell the difference between pure and applied mathematics and

    That conclusion is not obvious. Given that the real world introduces complications that can be ignored in the world of pure mathematics, his (presumed) premise that "if applied is hard, the weaker might better stick with pure" makes logical sense.

    (2) How come all the loser mathematicians who can't hack it end up becoming programmers?.

    Both "loser" and "can't hack it" are just pejoratives that mean nothing in practice if you're trying to make a logical argument, and "end up becoming programmers" is patently false. So the statement is just plain empty of value.

    I've never had much respect for Dijkstra. I have even less now.

    Well, as a personal statement of your dislike for someone, it requires no rational justification and hence cannot be faulted. Whether others will feel a consequent lack of respect for you is hard to say, but it's pretty safe to assume that they won't be impressed by your ability to reason.
  • by Dun Malg ( 230075 ) on Wednesday July 09, 2003 @11:17AM (#6400433) Homepage
    The problem I see with Dijkstra's statement is that he says (as I understand it) that poor mathematicians would do better as pure mathematicians than they would as programmers.

    Perhaps he meant something along the lines of "if you're a poor mathematician, don't compound your poor choice of career by becoming a programmer instead, because programming is still math." I don't think he meant that pure mathematics is the easier course of study, only that programming isn't necessarily easier either.

  • by rose_bud4201 ( 651146 ) on Wednesday July 09, 2003 @11:41AM (#6400639) Homepage Journal
    I'm glad to hear that programming is a brain-dead job. That makes my college courses and job that much easier - apparently I can stop working my ass off to write good, efficient programs which people can actually use and start writing useless perl scripts like everyone else, no? Thank you for successfully insulting every decent programmer out there.
    Oh, and have you ever really looked at a real algorithm? Algorithms are mathematics, pure and simple. Mathematics has everything to do with programming. Case in point: Dijkstra's Algorithm [northwestern.edu] (a rough sketch follows below). Not one of the really heavy math ones, granted, but in view of the topic I think it's appropriate.
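
    Here is that rough sketch in Java, from memory (my own throwaway code, class and method names made up, assuming an adjacency-list graph with non-negative edge weights; it is not the code behind the link above):

        import java.util.*;

        // Sketch of single-source shortest paths: a priority queue of tentative
        // distances plus edge relaxation, nothing more.
        class ShortestPaths {
            // edges[u] holds {v, weight} pairs for every edge u -> v.
            static int[] dijkstra(List<int[]>[] edges, int source) {
                int n = edges.length;
                int[] dist = new int[n];
                Arrays.fill(dist, Integer.MAX_VALUE);
                dist[source] = 0;

                // Min-heap of {node, tentative distance}, closest node first.
                PriorityQueue<int[]> pq =
                    new PriorityQueue<>(Comparator.comparingInt((int[] a) -> a[1]));
                pq.add(new int[]{source, 0});

                while (!pq.isEmpty()) {
                    int[] top = pq.poll();
                    int u = top[0], d = top[1];
                    if (d > dist[u]) continue;           // stale queue entry, skip it
                    for (int[] e : edges[u]) {
                        int v = e[0], w = e[1];
                        if (dist[u] + w < dist[v]) {     // relax the edge u -> v
                            dist[v] = dist[u] + w;
                            pq.add(new int[]{v, dist[v]});
                        }
                    }
                }
                return dist;                             // MAX_VALUE marks "unreachable"
            }
        }

    A priority queue and edge relaxation is all there is to it, and it's essentially the same computation an OSPF router runs over its link-state database, as someone pointed out above.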
  • by LilMikey ( 615759 ) on Wednesday July 09, 2003 @11:53AM (#6400715) Homepage
    You obviously don't know what programming is if you're lumping programmers in with 'perl/php script kiddies.'
  • by Zeinfeld ( 263942 ) on Wednesday July 09, 2003 @11:56AM (#6400734) Homepage
    (1) He obviously can't tell the difference between pure and applied mathematics and
    (2) How come all the loser mathematicians who can't hack it end up becoming programmers?.

    Well I have something of an advantage here, having actually read the original notes rather than the article about them. Back in the late 1980s I spent an afternoon reading them. Dijkstra used to send the notes out to what he considered the major computer science labs. Since Oxford was run by Tony Hoare it was obviously on the list.

    At the time some of the other students thought that this practice was somewhat pretentious, tending to imply a somewhat elevated self-opinion. Today, of course, everyone down to the lowliest grad student shares far more mundane thoughts in their blogs.

    What Dijkstra was actually doing in the article referred to was pointing out that there was nothing intrinsically superior about 'pure' mathematics. At the time, computer science was regularly condescended to as an inferior form of mathematics.

    Where Dijkstra was wrong is that comp sci is not a branch of mathematics at all; it is, as my tutor Tony Hoare pointed out, 'an engineering profession'. When this was first proposed the idea was somewhat controversial; today almost every programmer regards themselves as an engineer.

    I think that in fact we have to go a bit further and understand that the highest levels of programming are actually more akin to architecture. It combines art and engineering, just as engineering combines science and mathematics.

    There are plenty of architects and engineers who could never make much progress in the pure sciences. But take the best architects and the best engineers and you will often find that not only were most capable of being top class scientists, in many cases they actually were.

  • by Anonymous Coward on Wednesday July 09, 2003 @12:02PM (#6400772)
    Because programming is a brain-dead job. You're just writing instructions for a computer to do the real work.

    Holy crap! You don't even know what a computer is! Computers do the easy, repetitive work very, very quickly. The instructions are the real work.

    Mathematics is not arithmetic.

  • by syle ( 638903 ) * <syle@waygate. o r g> on Wednesday July 09, 2003 @12:58PM (#6401122) Homepage
    Whoever modded this up is abusing their points. Whether you agree with him or not about Bush's politics, you can take your pick between (a) troll, or (b) offtopic.

    Notice how I didn't respond to the trolling part? Good. Now, you don't either.

  • Re:Subject (Score:2, Insightful)

    by murr ( 214674 ) on Wednesday July 09, 2003 @05:47PM (#6403436)
    One of the major points he made before he left, and somewhat adamantly at that, was that software is so poor in quality nowadays because developers don't really bother to come up with formal proofs of correctness for their programs.

    Evidence that Dijkstra was not particularly in touch with what most software nowadays is about. It's not that it's fundamentally impossible to prove a large program correct, i.e., prove that its postcondition follows from its precondition, but that for many programs, coming up with those postconditions would be an enormous development effort in itself (a toy sketch of what such a specification looks like follows at the end of this comment).

    Like many mathematicians, Dijkstra seems to have had a somewhat overly optimistic view of how immune mathematical reasoning itself is to bugs. I believe that in the general case, the proof for a program will be larger than the program itself, and will be written in a language that is more complex, has poorer abstraction capabilities, and less machine support than the programming language of the program. It would stand to reason that the proof would have at least as many bugs as the software.
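
    To be concrete about the vocabulary: here is a made-up toy example of a specification written next to the code it describes. The expensive part for real software isn't checking something this small; it's writing the postcondition down at all for a large, fuzzy requirement:

        // Hypothetical toy: the spec sits next to the code as comments.
        class Spec {
            // Precondition:  a.length > 0
            // Postcondition: result >= a[i] for every i, and result == a[j] for some j
            static int max(int[] a) {
                int m = a[0];
                for (int i = 1; i < a.length; i++) {
                    // Invariant at this point: m is the maximum of a[0..i-1]
                    if (a[i] > m) m = a[i];
                }
                return m;
            }
        }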
  • Re:Subject (Score:2, Insightful)

    by notfancy ( 113542 ) <matias@@@k-bell...com> on Thursday July 10, 2003 @12:43PM (#6407878) Homepage

    Forgive me if you find me rude, but offhand dismissal without cogent arguing really taxes my patience.

    Aside from it being preposterous to expect all the developers in the world to write formal proofs for their programs,

    Why would that be so, exactly? Dijkstra was especially vocal against this "can't do" attitude. I don't ask for a compelling argument, just for a reasonable one.

    this statement is at best a wild assumption. He is proposing that the lack of use of a particular (his) potential solution to a problem is the root cause of the problem.

    That's true, but how exactly is that bad? He believed, with a passion bordering on mania, that his method is effective. Again, if you have alternative explanations for the problem that are reasonable, I'd love to hear them.

    Also, I've got to doubt that formal proofs would be worth nearly the tradeoff in terms of time. If you think about it, a program is itself akin to a proof of correctness. If a coder makes a mistake in his initial code, it seems likely he will repeat that error in a formal proof. Peer review could improve the failure rate, but that is a whole nother ballgame.

    First off, I think that trading thinking time for debugging/QA testing time is a definite savings (i.e., it makes sense from an economic point of view). Secondly, regarding repeated mistakes: yes and no. Yes, you can err in the proof. However, in my experience, errors in a proof feel very different from errors in a program. There's a little voice in your head telling you "can't be, can't be", and it won't rest until you go back, recheck your proof, and find your errors.

    Anecdotal evidence is no evidence at all, I know, so I offer the following argument: consider the proof and the code as two different embodiments of the same solution; doing it twice cuts the probability of errors (not trivial, fifteen-second-to-spot mistakes, but hard errors) by half.

    Another argument in favor is that, should an error remain, it's easier (i.e., mechanical) to check the proof; if the code is annotated with the proof steps, it's natural to check for agreement.

    I'm a convert; I've found errors in my code that never surfaced in five years of heavy usage but were nonetheless there, just by employing (very simple) formal reasoning (a small sketch follows at the end of this comment). You don't need to acquire much knowledge (a very good grasp of logic; a moderate one of elementary integer functions like floor, minimum and maximum; a modest one of number theory), but you do need constant practice to change mindsets.

    Eighty percent of code is, allow me a loaded word, "trivial" in the sense that yes, you could have pointer-manipulation bugs, a reversed condition in a loop, whatever; but for the remaining twenty percent of code, stopping and pondering the problem makes the road towards the solution considerably smoother.
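
    To show the flavour of that "very simple formal reasoning", a made-up sketch: integer division by repeated subtraction, annotated with the invariant you would check the code against. Checking that each annotation still holds is exactly the mechanical agreement check I mentioned above:

        // Hypothetical toy: quotient and remainder by repeated subtraction.
        class DivMod {
            // Precondition:  n >= 0 && d > 0
            // Postcondition: n == q * d + r && 0 <= r && r < d
            static int[] divMod(int n, int d) {
                int q = 0, r = n;
                // Invariant: n == q * d + r && r >= 0  (holds initially: q == 0, r == n)
                while (r >= d) {
                    r -= d;   // move one d out of r ...
                    q += 1;   // ... and into q * d: n == q * d + r still holds, and r >= 0 since r >= d held
                }
                // On exit the invariant holds and r < d, which is the postcondition.
                return new int[]{q, r};
            }
        }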
