Programming IT Technology

The Poetry Of Programming 416

Lumpish Scholar writes "Sun's Richard Gabriel (possibly the only person with both a Ph.D. in computer science and an MFA in poetry) talks about "the connections between creativity, software, and poetry": "People say, 'Well, how come we can't build software the way we build bridges?' The answer is that we've been building bridges for thousands of years, and while we can make incremental improvements to bridges, the fact is that every bridge is like some other bridge that's been built.... But in software ... we're rolling out -- if not the first -- at most the seventh or eighth version. We've only been building software for 50 years, and almost every time we're creating something new.""
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • The "only"? (Score:5, Informative)

    by sphealey ( 2855 ) on Thursday December 05, 2002 @10:57AM (#4818158)
    I would have to seriously question the statement that Mr. Gabriel is "possibly the only person with a Ph.D. in computer science and an MFA in poetry". Many computer people I have met have a lifelong fascination with language and literature, particularly the academic types who pursue Ph.D.s. I would guess that there are a fair number of people out there with that combination of degrees.

    sPh

  • by Bodrius ( 191265 ) on Thursday December 05, 2002 @11:03AM (#4818193) Homepage
    The approach to studying physics is also replicating well-known experiments with shoddy equipment, no experience, and predicted results.

    This is not to educate scientists to repeat the same experiments over and over again. It's just that you cannot be expected to understand complex physics and create new experiments for new theories if you haven't seen and tried the building blocks first-hand.

    They don't teach you to solve the Towers of Hanoi because it's a "common problem". They teach you to use recursion to solve problems, and to recognize a "recursion problem" by its characteristics, by using Towers of Hanoi as a common example.
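    The Towers of Hanoi point is easy to see in code. A minimal sketch in C (the function and peg names are my own, not from any particular textbook): moving n disks reduces to two moves of n-1 disks plus one move of the largest disk, which is exactly the shape of problem recursion is taught to recognize.

    ```c
    #include <stdio.h>

    /* Move n disks from peg 'from' to peg 'to', using 'via' as scratch.
     * Returns the number of moves performed (2^n - 1). */
    long hanoi(int n, char from, char to, char via) {
        if (n == 0)
            return 0;                        /* base case: nothing to move */
        long moves = hanoi(n - 1, from, via, to);  /* clear the smaller stack aside */
        printf("move disk %d: %c -> %c\n", n, from, to);
        return moves + 1 + hanoi(n - 1, via, to, from); /* put it back on top */
    }

    int main(void) {
        hanoi(3, 'A', 'C', 'B');             /* prints the 2^3 - 1 = 7 moves */
        return 0;
    }
    ```

    The characteristic to recognize is the self-similar subproblem: the same operation on a smaller input, with a trivial base case.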

  • Re:Wrong (Score:3, Informative)

    by RazzleFrog ( 537054 ) on Thursday December 05, 2002 @11:07AM (#4818218)
    I am guessing that you are an engineer. Bridges are extremely complex. Every bridge presents a new challenge. Watch a special on the building of the Brooklyn Bridge or the Tacoma Narrows. Read about the challenges of the proposed Strait of Messina (Sicily) and Gibraltar bridges.

    As for whether you can tell if a bridge is right or not: the Koror-Babeldaob Bridge stood for 20 years before collapsing.
  • Re:Get real (Score:3, Informative)

    by Malc ( 1751 ) on Thursday December 05, 2002 @11:12AM (#4818249)
    Self-discipline seems to be a key factor between good and bad developers. Especially when it comes to languages like C++.

    I've met people who are amazingly creative and churn out very innovative code... yet are incapable of testing it and making it production quality. Then I've met overly anal people who snuff out creativity in all the people around them, producing code that is late and uninspiring. The best developers are somewhere in between.

    I've noticed that many of the best developers were once or still are musicians... perhaps musical discipline is good training for being a software engineer. I also read an article in the National Post recently that published the results of a reasonably sized study: students educated in the arts, including music, also achieved better results in maths and science.
  • by eval ( 8638 ) on Thursday December 05, 2002 @11:14AM (#4818262) Homepage
    It's not strictly true that we can only incrementally improve bridges. Consider that steel was first used in a suspension bridge in the late 1800s. Before this, suspension bridges were fairly limited in span length, due to the strength of the materials. Then the Brooklyn Bridge was built (the paradigm) and in fewer than 50 years, the Golden Gate Bridge (and many others). So in only a few decades, the limits of bridge design were expanded by at least an order of magnitude. (That's not much in CPU terms, but in the world of big things like bridges, it's pretty impressive.)

    Anyone wanting a good read on the subject of bridges, I suggest "The Great Bridge" by McCullough, the story of the building of the Brooklyn Bridge. Most of it's about Washington Roebling (who took over when his father, John Roebling, the original designer of the bridge, died before construction actually started). It's a truly inspirational story for anyone who calls themselves an engineer.
  • by tuxliner ( 589414 ) on Thursday December 05, 2002 @11:24AM (#4818330)
    Perl poetry [arminco.com]
  • by rmohr02 ( 208447 ) <mohr.42@osu. e d u> on Thursday December 05, 2002 @12:01PM (#4818572)
    There's a family guy episode (There's Something About Paulie - Episode #23) in which the Griffins get a new car with a computer. They're messing around with the languages while getting directions, and when they switch to Russian the computer says "In Soviet Russia, car drives YOU." Then, later in the episode, when telling which way to turn at a fork in the road, the computer says "In Soviet Russia, road forks YOU."
  • Some Observations (Score:2, Informative)

    by small_box_of_stuff ( 258902 ) on Thursday December 05, 2002 @03:47PM (#4820555)
    I spend all day writing software, side by side with EEs and MEs and others designing and building control systems and other machines, and have been doing so for the last 10 years.

    The thing is, they screw up their designs just as often as I do, they build things that don't work well the first time just as often as I do, and they release stuff that doesn't do what the customer wants just as often as I do. And the outside companies we work with are worse.

    It's a complete fallacy that more mature engineering disciplines are able to somehow make things that work all the time, right up front. I heard this in school long ago, and without any experience I took it as true. After just a few years of working with hardware engineers, I found it was complete crap.

    The crux of the issue is this. Building hardware is complicated. Building software is complicated. Building anything with a couple of hundred thousand parts in it is very hard. To do it right takes talented and motivated people, lots of time, and lots of money. Things need to be well organized, well planned, and well executed.

    I've seen a few people on this story post that many traditionally engineered things that people hold up as examples of how to do it right, such as bridges, are much simpler than software. That is very true. Most circuits that our EEs build here, let's say they have 50,000 things in them. That's quite a lot. But look at the things: the parts that make up a circuit are all very simple. One input, one output, performing a very simple operation. It's actually a lot less complicated than a big piece of software. Write a program with 50,000 addition, subtraction, and boolean logic statements in it, and you'll find you've got a very simple program. Take a look at an assembly dump of a simple hello world program, and you'll find just as many things.

    Invariably, when someone says that engineering works with less complicated things than software, someone trots out a 747 or a space shuttle by way of counterexample. It's true, these things are well engineered, work right, and are insanely complicated.

    They also took many (10-12?) years to go from idea to working tool, and took billions of dollars to make. Find me a software customer willing to sit around for more than a year, and I'll be excited. Find me someone who doesn't think $600K for a piece of software is insanely expensive and I'll be just as excited. The space shuttle software is often taken as an example of what could be done with good software engineering, but people don't realize that the documentation budget for the space shuttle software is larger than many software companies' entire revenue stream. The space shuttle software team's customers are people who understand that if the software isn't done right, people and billions of dollars worth of equipment will be destroyed. You know what that means: they have the staff, budget, and time to do things right.

    You can't compare things like the space shuttle to a six-month project to make a data entry program. Don't even bother. And don't think that the problems you have in a six-month data entry project can in any way be solved by tools designed and proven to work on the space shuttle.

    People have come to expect good software very quickly and cheaply. That comes with some problems, and it's very hard to combat them. The programmers are often very poorly trained, the budget is tight, and the software is a moving target. I have yet to see a program I started work on not change substantially between the time I started and the end. Spending two months at the beginning designing things to the level of a 747 is stupid, because by the time the item gets out the door, it will have changed from a plane to a boat. Those two months were wasted.

  • by SpringRevolt ( 1046 ) on Thursday December 05, 2002 @08:35PM (#4822896)
    Sadly, I am a little late to see this posting, but here you go nevertheless....

    Strangely, IMHO, missed by the Slashdot editors (on second thought, perhaps I am not so surprised) and by the article itself is the paper for which Richard Gabriel is famous. The paper is Lisp: Good News, Bad News, How to Win Big [mit.edu], which includes the section "The Rise of Worse is Better", which he wrote while at Lucid.

    Richard Gabriel was respected and quoted by JWZ (of Lucid/XEmacs and Netscape fame).

    It was the ideas in Worse is Better that ESR rehashed and that became The Cathedral and the Bazaar.

    i.e. Linux was developed using the "New Jersey" approach and GNU was developed using the MIT approach. The following passage illustrates this:


    Two famous people, one from MIT and another from Berkeley (but working on Unix) once met to discuss operating system issues. The person from MIT was knowledgeable about ITS (the MIT AI Lab operating system) and had been reading the Unix sources. He was interested in how Unix solved the PC loser-ing problem. The PC loser-ing problem occurs when a user program invokes a system routine to perform a lengthy operation that might have significant state, such as IO buffers. If an interrupt occurs during the operation, the state of the user program must be saved. Because the invocation of the system routine is usually a single instruction, the PC of the user program does not adequately capture the state of the process. The system routine must either back out or press forward. The right thing is to back out and restore the user program PC to the instruction that invoked the system routine so that resumption of the user program after the interrupt, for example, re-enters the system routine. It is called PC loser-ing because the PC is being coerced into loser mode, where loser is the affectionate name for user at MIT.

    The MIT guy did not see any code that handled this case and asked the New Jersey guy how the problem was handled. The New Jersey guy said that the Unix folks were aware of the problem, but the solution was for the system routine to always finish, but sometimes an error code would be returned that signaled that the system routine had failed to complete its action. A correct user program, then, had to check the error code to determine whether to simply try the system routine again. The MIT guy did not like this solution because it was not the right thing.

    The New Jersey guy said that the Unix solution was right because the design philosophy of Unix was simplicity and that the right thing was too complex. Besides, programmers could easily insert this extra test and loop. The MIT guy pointed out that the implementation was simple but the interface to the functionality was complex. The New Jersey guy said that the right tradeoff has been selected in Unix: namely, implementation simplicity was more important than interface simplicity.


    i.e. Linus used the "Worse is Better" method and RMS (ahem... :) did not; thus the GNU kernel, however good it is, is delayed somewhat while they Do The Right Thing.
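    The "always finish, sometimes return an error" convention from the quoted passage is still visible in POSIX today: read() can return -1 with errno set to EINTR when a signal interrupts it, and a correct program simply retries. A minimal sketch in C, assuming a POSIX system (read_retry is my own hypothetical helper name, not a standard function):

    ```c
    #include <errno.h>
    #include <unistd.h>

    /* The "New Jersey" interface in practice: the kernel routine always
     * finishes, possibly with an EINTR error, and the extra test-and-loop
     * is left to the caller, exactly as described in the passage above. */
    ssize_t read_retry(int fd, void *buf, size_t count) {
        ssize_t n;
        do {
            n = read(fd, buf, count);     /* retry only if a signal interrupted us */
        } while (n == -1 && errno == EINTR);
        return n;
    }
    ```

    Implementation simplicity won over interface simplicity: the kernel never has to back a process out of a half-finished system call, at the cost of every caller carrying this small loop.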

    I encourage you to read the whole of Good News, Bad News - it contains insightful material on things other than Lisp (I should declare an interest, in that I am a Scheme programmer).
