Dijkstra's Manuscripts Available Online
Bodrius writes "Salon has a short but interesting article called GOTO considered joyful, about E. W. Dijkstra's manuscripts, as published by the University of Texas, and their bloggish nature.
I'm not sure if the blog analogy is that accurate, but the articles are a must read for computer scientists and geeks in general." (Annoying but free click-through system for non-subscribers.)
Re:Is Dykstra still relevant today? (Score:3, Insightful)
Re:Can someone shed more light on his misc. info? (Score:5, Insightful)
Re:Compelling? (Score:4, Insightful)
As a person only vaguely interested in CS, I can say I was most intrigued by the fact that he hand-wrote his documents and included personal notes about what he was feeling at the time (my note [slashdot.org] about what pen he was using), all of which are VERY interesting to me.
For me, these little things are far more interesting than what topics he happened to be discussing.
His "blog-like" notes are probably better to read than JoSchmoe049169666420's because they are coming from very well-known professor who was in touch with the CS academic community.
That's my worthless
Statement I don't agree on (Score:5, Insightful)
"Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians."
I do not agree with this. In pure mathematics there is not much to think about besides the mathematics itself. Programming involves many other aspects, creativity for example. So if you are a poor mathematician but have the other qualities that programming requires, I think you would have an easier time with programming than with pure mathematics.
Re:Subscription not necessary (Score:2, Insightful)
Re:Can someone shed more light on his misc. info? (Score:5, Insightful)
Re:Is Dykstra still relevant today? (Score:5, Insightful)
Ever hear of OSPF (Score:4, Insightful)
Algorithms? (Score:4, Insightful)
What comes to mind right at first is Dijkstra's Shortest Path Algorithm [tokushima-u.ac.jp]. And hey, look...that page has java programs. In fact, take a look at a Java applet [toronto.edu] to better understand the algorithm.
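The linked applet pages come and go, so here is a minimal sketch of Dijkstra's shortest-path algorithm in Python; the graph and node names below are made up for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for non-negative edge weights.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source.
    """
    dist = {source: 0}
    heap = [(0, source)]            # (distance-so-far, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                # stale heap entry; u was already relaxed
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Illustrative graph: adjacency lists with edge weights.
g = {
    "a": [("b", 7), ("c", 9), ("f", 14)],
    "b": [("c", 10), ("d", 15)],
    "c": [("d", 11), ("f", 2)],
    "d": [("e", 6)],
    "f": [("e", 9)],
}
print(dijkstra(g, "a"))  # shortest distance to "e" is 20, via a->c->f->e
```

The heap makes this the standard O((V+E) log V) formulation; Dijkstra's original paper predates binary heaps and ran in O(V^2), which is still the better choice on dense graphs.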
Re:Full Text (Subscribers Only Article) (Score:1, Insightful)
Another slashdot karma whore.
Re:Is Dykstra still relevant today? (Score:5, Insightful)
Dijkstra's work on writing programs so as to be confident in their correctness from the start is very relevant--how much do you think people would be willing to pay for an OS written that way?
Re:Subscription not necessary (Score:5, Insightful)
> You could change the expiration on the temporary cookie they give you to get permanent access. Of course, this would be illegal.
I was winding myself up to sneer, but then I realized that this would be [circumventing] a technological measure that effectively controls access to a work protected under [Title 17] [warwick.ac.uk].
While we're at it, remember that "No person shall [...] offer to the public [or] provide [...] any technology [...] or part thereof that is primarily designed or produced for the purpose of circumventing a technological measure that effectively controls access to a work protected under [Title 17]."
Citizen, remain at your console while the Secret Service analyzes the case against you and decides your guilt and an appropriate punishment.
Re:Is Dykstra still relevant today? (Score:2, Insightful)
Re:Statement I don't agree on (Score:2, Insightful)
I too think that good mathematicians very often make good programmers, and the other way around. The problem I see with Dijkstra's statement is that he says (as I understand it) that poor mathematicians would do better as pure mathematicians than they would as programmers. However you look at it, there is more mathematics in pure mathematics than there is in programming. So if you have the other qualities needed in programming but are not very good at maths, you would make a better programmer than pure mathematician (though maybe not a very good one at either). I hope that makes my point a bit clearer.
Re:Is Dykstra still relevant today? (Score:5, Insightful)
I study CS at Eindhoven University, where he taught a lot (he and his colleagues were strong in programming methodology: http://www.win.tue.nl/pm/ - horrible-looking webpage). Trust me, it shows. Most of the 'hardcore' faculty members were friends, ex-students, what have you, and work the way he did. Dijkstra (and the folks at my faculty) did not bother himself with implementations of programming languages, nor with which function to call for what. They all strive to understand the nature of the problem, and from that they try to derive the solution.
That's a totally different approach to programming, and it's a *lot* of work. However, it pays off in areas where simplicity is key. There is a reason why Dijkstra used semaphores (what do you think Java uses?). Or have you ever seen a good proof of Peterson's Algorithm? (I know Feijen and van Gasteren gave a generic derivation in 'On a Method of Multiprogramming', but that's just me having had to read it because it's part of my study there, of course. A book that delves into seemingly simplistic problems, but then builds a framework that can tackle much bigger problems than you would expect.)
The problems of single-process computing are easy. For those of you who write such programs, I'm not trying to criticize (I know first-hand that it's still damn hard from time to time), but there are no synchronisation problems, for one. To ensure those are all handled systematically, you really want a proof that nothing can go wrong. Java and exceptions? Fine, but that's just a way to get away with bad programming. There are a lot of places where you simply cannot get away with dirty programming: you don't want your car to deadlock at 90 mph, now would you? You want to be absolutely positive that it will *never* happen. That means either doing extensive testing (and hoping it was sufficient) or having a formal proof that it cannot go wrong.
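For the curious, Peterson's algorithm mentioned above fits in a few lines. This is an illustrative sketch, not a derivation in the Feijen/van Gasteren style, and it leans on CPython's interpreter lock to supply the sequential consistency the textbook proof assumes; on real hardware you would need memory barriers.

```python
import threading

# Peterson's mutual-exclusion protocol for two threads -- a classic example
# of the kind of program whose correctness Dijkstra-style reasoning is meant
# to establish. Sketch only: CPython's GIL happens to give the sequentially
# consistent memory the proof assumes; real hardware needs barriers.
flag = [False, False]   # flag[i]: thread i wants to enter its critical section
turn = 0                # tie-breaker: whose turn it is to wait
counter = 0             # shared state the critical section protects

def worker(i, n):
    global turn, counter
    other = 1 - i
    for _ in range(n):
        flag[i] = True
        turn = other                       # politely yield the tie-break
        while flag[other] and turn == other:
            pass                           # busy-wait until it is safe to enter
        counter += 1                       # critical section: a non-atomic update
        flag[i] = False                    # leave the critical section

threads = [threading.Thread(target=worker, args=(i, 10000)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000 -- no lost updates, despite the non-atomic increment
```

The subtlety that makes the hand proof worthwhile is the `turn = other` line: without the tie-breaker, two polite threads can each wait for the other forever.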
That is why Dijkstra held himself to the 'very hard problems'. You can mess up the easy ones and still not have too many problems. The hard ones are problematic if they fail.
He did not believe in cluttered code. Everything should be there for a reason, should be proven to be there and exactly there for a reason.
To excel in Computer Programming you must be smart enough to tackle the really hard problems. That means tackling problems in the problem domain. You don't need languages for that, you need proof. Languages are but a tool for describing a solution and verifying your proof. Some languages describe more easily than others, yes, but the solution is the same.
I can write a C to Haskell to C++ to Prolog to Java compiler. Pretty straightforward, too. The languages are equivalent. You just don't want to see the spaghetti that comes out of a program once I'm through with it. And that's the reason you use a specific language for solving a problem: some languages are simply much easier to express the solution in.
However, that does NOT solve the problem; it merely makes it easier to program a solution understandably.
Dijkstra was above all a scientist, and thus had to convince the scientific community of his ideas. This is normally done by using formal methods that describe both the problem and the solution in such a way that they can be easily understood.
That is still the holy grail for many solutions: how can they be written such that they can be understood more easily?
But I'm starting to rant here...
I'm impressed by anyone (Score:2, Insightful)
Re:Can someone shed more light on his misc. info? (Score:3, Insightful)
Your logic is outstandingly poor (Score:5, Insightful)
That conclusion is not obvious. Given that the real world introduces complications that can be ignored in the world of pure mathematics, his (presumed) premise that "if applied is hard, the weaker might better stick with pure" makes logical sense.
(2) How come all the loser mathematicians who can't hack it end up becoming programmers?
Both "loser" and "can't hack it" are just pejoratives that mean nothing in practice if you're trying to make a logical argument, and "end up becoming programmers" is patently false. So the statement is just plain empty of value.
I've never had much respect for Dijkstra. I have even less now.
Well, as a personal statement of your dislike for someone, it requires no rational justification and hence cannot be faulted. Whether others will feel a consequent lack of respect for your own self as a result is hard to say, but it's pretty safe to assume that they won't be impressed by your ability to reason.
Re:Statement I don't agree on (Score:3, Insightful)
Perhaps he meant something along the lines of "if you're a poor mathematician, don't compound your poor choice of career by becoming a programmer instead, because programming is still math." I don't think he meant that pure mathematics is the easier course of study, only that programming isn't necessarily easier either.
Re:Full Text (Subscribers Only Article) (Score:3, Insightful)
Oh, and have you ever really looked at a real algorithm? They are mathematics, pure and simple. Mathematics has everything to do with programming. Case in point: Dijkstra's Algorithm [northwestern.edu]. Not one of the really heavy math ones, granted, but in view of the topic I think it's appropriate.
Re:Full Text (Subscribers Only Article) (Score:2, Insightful)
Re:Full Text (Subscribers Only Article) (Score:4, Insightful)
(2) How come all the loser mathematicians who can't hack it end up becoming programmers?
Well I have something of an advantage here, having actually read the original notes rather than the article about them. Back in the late 1980s I spent an afternoon reading them. Dijkstra used to send the notes out to what he considered the major computer science labs. Since Oxford was run by Tony Hoare it was obviously on the list.
At the time some of the other students thought that this practice was somewhat pretentious, tending to imply a somewhat elevated self-opinion. Today of course everyone from the lowliest grad student shares far more mundane thoughts in their blogs.
What Dijkstra was actually doing in the article referred to was pointing out that there was nothing intrinsically superior about 'pure' mathematics. At the time computer science was regularly condescended to as an inferior form of mathematics.
Where Dijkstra was wrong is that comp sci is not a branch of mathematics at all; it is, as my tutor Tony Hoare pointed out, 'an engineering profession'. When this was first proposed the idea was somewhat controversial; today almost every programmer regards themselves as an engineer.
I think that in fact we have to go a bit further and understand that the highest levels of programming are actually more akin to architecture. It combines art and engineering, just as engineering combines science and mathematics.
There are plenty of architects and engineers who could never make much progress in the pure sciences. But take the best architects and the best engineers and you will often find that not only were most capable of being top class scientists, in many cases they actually were.
Re:Full Text (Subscribers Only Article) (Score:1, Insightful)
Holy crap! You don't even know what a computer is! Computers do the easy, repetitive work very, very quickly. The instructions are the real work.
Mathematics is not arithmetic.
Re: Subscription not necessary (Score:3, Insightful)
Notice how I didn't respond to the trolling part? Good. Now, you don't either.
Re:Subject (Score:2, Insightful)
Evidence that Dijkstra was not particularly in touch with what most software nowadays is about. It's not that it's fundamentally impossible to prove a large program correct, i.e., prove that its postcondition follows from its precondition, but that for many programs, coming up with those postconditions would be an enormous development effort itself.
Like many mathematicians, Dijkstra seems to have had a somewhat overly optimistic view of how immune mathematical reasoning itself is to bugs. I believe that in the general case the proof of a program will be larger than the program itself, and will be written in a language that is more complex, has poorer abstraction capabilities, and has less machine support than the programming language of the program. It would stand to reason that the proof would have at least as many bugs as the software.
Re:Subject (Score:2, Insightful)
Forgive me if you find me rude, but offhand dismissal without cogent arguing really taxes my patience.
Aside from it being preposterous to expect all the developers in the world to write formal proofs for their programs,
Why would that be so, exactly? Dijkstra was especially vocal against this "can't do" attitude. I don't ask for a compelling argument, just for a reasonable one.
this statement is at best a wild assumption. He is proposing that the lack of use of a particular (his) potential solution to a problem is the root cause of the problem.
That's true, but how exactly is that bad? He believed that his method is effective with a passion bordering on mania. Again, if you have alternative explanations for the problem that are reasonable, I'd love to hear them.
Also, I've got to doubt that formal proofs would be worth the tradeoff in terms of time. If you think about it, a program is itself akin to a proof of correctness. If a coder makes a mistake in his initial code, it seems likely he will repeat that error in a formal proof. Peer review could improve the failure rate, but that is a whole other ballgame.
First off, I think that trading thinking time for debugging/QA testing time is a definite savings (i.e., it makes sense from an economics point of view). Secondly, regarding repeated mistakes: yes and no. Yes, you can err in the proof. However, in my experience, errors in a proof feel very different from errors in a program. There's a little voice in your head telling you "can't be, can't be", and it doesn't rest until you go back, recheck your proof, and find your errors.
Anecdotal evidence is no evidence at all, I know, so I offer the following argument: consider the proof and the code as two different embodiments of the same solution; doing it twice cuts the probability of errors (not trivial, fifteen-second-to-spot mistakes, but hard errors) by half.
Another argument for is that should an error remain, it's easier (i.e., mechanical) to check the proof; if the code is annotated with the proof steps, it's natural to check for agreement.
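A toy illustration of annotating code with its proof steps (my own example, not one from Dijkstra's notes): the asserts restate the precondition, loop invariant, and postcondition of a Hoare-style argument, so checking the code against the proof is mechanical.

```python
def isqrt(n):
    """Largest r with r*r <= n, for n >= 0.

    The asserts restate the Hoare-style annotations: precondition,
    loop invariant (checked every iteration), and postcondition.
    Illustrative only; a real proof would also argue termination
    (the bound n - r*r strictly decreases).
    """
    assert n >= 0                            # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        assert r * r <= n                    # invariant: r is still a valid root
        r += 1
    assert r * r <= n < (r + 1) * (r + 1)    # postcondition
    return r

print(isqrt(10))  # 3, since 3*3 <= 10 < 4*4
```

The postcondition is exactly the specification, so if the final assert can be shown to follow from the invariant and the negated loop guard, the code and its proof agree line by line.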
I'm a convert; I've found errors in my code that never surfaced in five years of heavy usage but were nonetheless there, just by employing (very simple) formal reasoning. You don't need to acquire much knowledge (a very good grasp of logic; a moderate one of elementary integer functions like floor, minimum and maximum; a modest one of number theory) but you need constant practice to change mindsets.
Eighty percent of code is, allow me a loaded word, "trivial" in the sense that yes, you could have pointer manipulation bugs, a reversed condition in a loop, whatever; but for the twenty percent of remaining code, stopping and pondering about the problem makes the road down towards the solution considerably smoother.