Why Programming Still Stinks 585

Andrew Leonard writes "Scott Rosenberg has a column on Salon today about a conference held in honor of the twentieth anniversary of the publication of 'Programmers at Work.' Among the panelists saying interesting things about the state of programming today are Andy Hertzfeld, Charles Simonyi, Jaron Lanier, and Jef Raskin."
  • panel link (Score:5, Informative)

    by AnonymousCowheart ( 646429 ) on Saturday March 20, 2004 @10:18PM (#8624805)
    Here is the link [sdexpo.com] to the people on the panel; it talks about their backgrounds.
    • Re:panel link (Score:5, Insightful)

      by nlper ( 638076 ) on Saturday March 20, 2004 @11:26PM (#8625182)

      Reviewing the list of contributors, it's interesting to note that some of them had already stopped programming back when they were interviewed. So why should we listen to them opine about software development techniques today?

      My pet peeve on the list would have to be Jef Raskin [sourceforge.net], who's far better at self-promotion than at actual coding. Had people actually listened to his ideas in the early days of the Macintosh project, they would have delivered a machine without a mouse or other features most people associate with the Mac. (As Andy Hertzfeld puts it, he's not the father of the Macintosh so much as the eccentric uncle.) [folklore.org]

      However, if you want to hear him repeat the same things he's been saying for the last 20 years, he'll be keynoting the Desktop Linux Summit [desktoplinuxsummit.org]. No doubt he'll be beating the horse's skeleton that mice, icons, and the windowing interface are what's holding Linux back on the desktop. (MacOS X be damned!) Using those special "leap" keys that made the Canon Cat so successful, now that's the future!

      Tyler

    • Hungarian Notation (Score:5, Informative)

      by Speare ( 84249 ) on Saturday March 20, 2004 @11:56PM (#8625314) Homepage Journal

      The linked page didn't mention that Charles Simonyi is the Hungarian for whom the term 'Hungarian Notation' is named.

      Hungarian Notation is that Microsoft Windows naming strategy where the first few characters of a variable name hint to the reader at its data type. So hToolbox tips you off that it's a HANDLE, even without scrolling your editor up a couple of pages to find out; papszEnvironment would likewise tell a Win32 devotee that it's a Pointer to an Array of Pointers to Zero-terminated Strings.
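
      For anyone who hasn't seen it in the wild, here's a minimal sketch of the convention in C (the typedef stands in for the real Win32 HANDLE so the snippet compiles on its own; the variable names are made up for illustration):

      #include <stdio.h>

      typedef void *HANDLE;            /* stand-in for the Win32 HANDLE type */

      int main(void)
      {
          HANDLE hToolbox = NULL;      /* h     -> a HANDLE                  */
          unsigned long dwStyle = 0;   /* dw    -> a 32-bit unsigned "DWORD" */
          char *pszTitle = "demo";     /* psz   -> pointer to zero-terminated string */
          char *apszEnv[] = { "A=1", "B=2", NULL };
          char **papszEnvironment = apszEnv; /* papsz -> pointer to an array of psz */
          int cWindows = 1;            /* c     -> a count */

          printf("%s %lu %d\n", pszTitle, dwStyle, cWindows);
          (void)hToolbox;              /* silence unused-variable warnings */
          (void)papszEnvironment;
          return 0;
      }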

      It's not the first such instance of binding data type and name, and it won't be the last. For example, FORTRAN compilers have long had implicit typing: any variable not otherwise declared that starts with I, J, K, L, M, or N is assumed to be an integer, while most other variables default to a real (floating-point) type. So FORCE(LAMBDA) directs the code to a real scalar from a real array, given an integer index. Many programmers start a routine with IMPLICIT NONE to disable this assumptive behavior, as mistakes are easy to make when you let the machine decide things for you.

      BASIC used sigils at the end of variable names (NAME$, COUNT#), and scripting languages like Perl use sigils that precede the name (%phones, @log, $count).

      • Re: (Score:3, Insightful)

        Comment removed based on user account deletion
        • which notation is nothing but an irritant when code is well-structured

          Which happens how many times when you buy software from anybody?

          Which happens how many times when your own development team is faced with insane deadlines and unrealistic specs?

          Which happens how many times in the post-dot-com boom?
      • by jhoger ( 519683 ) on Sunday March 21, 2004 @02:20AM (#8625911) Homepage
        It makes variable names butt ugly.

        The farthest I am willing to go is to end my pointer variable names with the letter p, and pointer-to-pointers with pp. Array names don't get a suffix, since they're not really pointers.

        It makes the code leaps and bounds clearer without the hideous ugliness of Hungarian notation. And pointers are where the bad errors are, so that's where the bang for the buck is...
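
        Concretely, the convention looks something like this (a sketch; the names are hypothetical):

        #include <stdio.h>

        /* Pointers end in p, pointers-to-pointers in pp; arrays and
           scalars stay bare. */
        int main(void)
        {
            int count = 3;               /* plain scalar: no suffix */
            int values[3] = { 1, 2, 3 }; /* array: no suffix, not a "real" pointer */
            int *valuep = values;        /* pointer: trailing p */
            int **valuepp = &valuep;     /* pointer to pointer: trailing pp */

            printf("%d %d %d\n", count, *valuep, **valuepp);
            return 0;
        }
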
        • by arkanes ( 521690 ) <arkanes.gmail@com> on Sunday March 21, 2004 @03:24AM (#8626135) Homepage
          It's useful for what it does, but it does make code damn fugly (and near-unreadable, for me), and there are far better tools now for figuring out what a variable is.
        • by pipacs ( 179230 ) on Sunday March 21, 2004 @04:46AM (#8626319)
          It makes variable names butt ugly.

          Another problem is that if you change types, you have to change the names, too. Or if you don't, you end up with completely misleading code.

          • by Ed Avis ( 5917 )
            I feel that if you want to use a notation where the type of a variable determines part of its name, then this should be checked by the compiler. Furthermore, it should be easy to strip off the type prefixes when editing code (for those who don't like them) and add them back, since they can be automatically determined from the variable's type.

            Manually prefixing each variable name with redundant information is the kind of extra work that I'd rather have the computer do for me.
      • by prockcore ( 543967 ) on Sunday March 21, 2004 @05:17AM (#8626401)
        even without scrolling your editor up a couple of pages to find out; papszEnvironment would likewise tell a Win32 devotee that it's a Pointer to an Array of Pointers to Zero-terminated Strings.

        No, it wouldn't. It would tell a Win32 devotee that it started out that way; it may not be that way now.

        Look at how many "lp" variables are in the Win32 headers.

        Hungarian Notation is the most horrible concept ever because it always ends up lying. I bet that's why MS is so slow to fix buffer overflows: changing a variable from an int to a long is an arduous process.
      • by fforw ( 116415 ) on Sunday March 21, 2004 @05:21AM (#8626410) Homepage
        The linked page didn't mention that Charles Simonyi is the Hungarian for whom the term 'Hungarian Notation' is named.

        My first thought was:
        why should I listen to what m_plzstrSimonyi has to say about programming?

      • by john.r.strohm ( 586791 ) on Sunday March 21, 2004 @10:31AM (#8627289)
        Hungarian notation was originally developed as a band-aid (tm) for the near-complete lack of type checking in C. When all ints are created equal, and may be freely assigned to each other, and pointers must routinely be type-coerced to something else, and the compiler refuses to help the programmer keep things straight, something like Hungarian notation becomes necessary.

        Hungarian notation declined after Stroustrup added strong typing to C++. It is worth noting that Stroustrup never even considered NOT doing strong typing in C++. (Read his book on the design and evolution of C++.) Distaste on the part of hard-line C programmers for strong typing also declined, after C++ forced them to eat their broccoli, and they discovered it actually tasted pretty good (by catching errors that would otherwise not have been found nearly as easily).

        It is also worth noting that Hungarian notation never caught on in any language other than C. In particular, Ada programmers never bothered with it: the Ada type system was rich enough that it could do everything that Hungarian notation pretended to do, and enforce it, by requiring compilers to refuse to compile type-incorrect programs.

        (Somewhere, recently, I saw something about a commercial satellite project that was forced to use Ada, because there was no C/C++ compiler available for a recently-declassified microprocessor. Their programmers screamed bloody murder at the idea. The screams stopped when they noticed that the Ada compiler was catching errors for them.)
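
        To make that concrete, here's a sketch (an invented example) of the kind of mix-up C happily accepts: Hungarian prefixes tried to flag it by hand, while a stronger type system rejects it outright.

        #include <stdio.h>

        typedef int Fahrenheit;   /* in C a typedef is only an alias...      */
        typedef int Celsius;      /* ...not a new type the compiler enforces */

        static void report(Celsius temp) { printf("%d degrees C\n", temp); }

        int main(void)
        {
            Fahrenheit boiling = 212;
            report(boiling);      /* wrong unit, no diagnostic: all ints are created equal */
            return 0;
        }
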
      • This has got to be the perfect troll!!

        This post immediately leads to an (off-topic) flame war about the relative merits (and lack thereof) of said notation, AND at the same time it gets modded up to 5!

        I bow down before your superior posting skills...
    • by Vagary ( 21383 ) <jawarren AT gmail DOT com> on Sunday March 21, 2004 @12:50AM (#8625596) Journal
      If anybody else has gone to Simonyi's company [intentsoft.com] and still can't figure out what they're talking about (mostly because it's vapourware at the moment, I believe), may I direct you to this Wiki [program-tr...mation.org]. It turns out that he thinks source transformation tools will change the world.

      I'm told that my university is one of the leading source transformation research centres in the world, but the only interesting things they're producing right now are for understanding legacy systems. So yes, there's probably a lot of money in source transformation, but it's also boring as hell.
  • by stephanruby ( 542433 ) on Saturday March 20, 2004 @10:21PM (#8624825)
    Want to read the whole article? You have two options: Subscribe now, or watch a brief ad and get a free day pass. If you're already a subscriber log in here.

    No thanks. The first two paragraphs didn't make me want to read any more. I'll wait for the comments of the slashdotters to appear.

  • by GoofyBoy ( 44399 ) on Saturday March 20, 2004 @10:21PM (#8624830) Journal
    In response to a Salon article on the state of programming today, GoofyBoy posted a witty and insightful comment. It sparked a large thread of apologists and public outrage from a wide range of Slashdot readers and trolls.

    Want to read the whole comment? You have two options: Subscribe now, or watch a brief ad and get a free day pass. If you're already a subscriber log in here.
  • by Imperator ( 17614 ) <{slashdot2} {at} {omershenker.net}> on Saturday March 20, 2004 @10:24PM (#8624846)
    Someone with a @salon.com address submits a story to slashdot linking to a Salon article. That article costs money to read. Slashdot posts the story anyway.

    Could an AC please post the full text?
    • by Anonymous Coward
      In some quarters today, it's still a controversial proposition to argue that computer programming is an art as well as a science. But 20 years ago, when Microsoft Press editor Susan Lammers assembled a collection of interviews with software pioneers into a book titled "Programmers at Work," the idea was downright outlandish. Programming had long been viewed as the domain of corporate engineers and university computer scientists. But in the first flush of the personal computer era, the role of software innov…
      • by torokun ( 148213 ) on Saturday March 20, 2004 @10:52PM (#8625002) Homepage
        This is some serious copyright infringement, man. Ripping an article verbatim and posting it on another site.

        • That's an interesting point. Somebody will often post a copy or mirror of an article or web site if the original has been slashdotted. The copy is presented because the original is unavailable for technical reasons: it is the author's intent to keep the page up, but there isn't enough bandwidth. Is that still copyright infringement if permission wasn't obtained in advance? What about Google's cache?
      • by WasterDave ( 20047 ) <davep@noSPAm.zedkep.com> on Saturday March 20, 2004 @11:16PM (#8625131)
        Oh, for fuck's sake. Sooner or later somebody doing this - anonymous or not - is going to get Slashdot sued.

        Fucking stop it. It's a copyrighted piece of work, it belongs to someone else, it is their right to control it.

        Dave
    • by anthonyrcalgary ( 622205 ) on Saturday March 20, 2004 @10:31PM (#8624886)
      Are you seriously suggesting that anyone would read the article anyway? No, this is the time to broadcast one's opinions in a fashion loosely connected with what we think the article might be about.
      • Well, I took the bait and read it.

        It's surprisingly full of fluff. I admire the challenge to go somewhere new and interesting, but am equally appalled by the article's lack of direction.

        It's about as coherent as pointing out that there are 360 degrees around you, and they are all hopeful and promising, then asking you "Where do you want to go today?" while reminding you that you're in your hometown.

        The dismissal of open source as a non-innovator is questionable, and the statements about programming it…
    • by Otter ( 3800 ) on Saturday March 20, 2004 @10:37PM (#8624926) Journal
      Could an AC please post the full text?

      They have free day passes, FYI.

      To summarize, though:

      • Charles Simonyi has a new company that he claims will change everything.
      • Jaron Lanier is still happy to inform you that he's a genius and everyone else is stupid. Don't count on him to do anything, though.
      • Andy Hertzfeld sounds like he's gearing up to lose more money on Linux desktop software.
      • Salon continues to suck up to Linux users.
      That's pretty much it. Don't count on anything more useful out of these guys, except maybe Simonyi.
    • by timothy ( 36799 ) on Saturday March 20, 2004 @10:55PM (#8625017) Journal
      Ads can be (are not always) annoying, in any medium, but they make the content possible.

      Radio ads drone on seemingly forever, but they pay for me to listen to Coast to Coast AM once in a while, or NPR (whose ads, in the form of begging, are even worse, but whose content is better). Television ads, on programs not caught by TiVo, can be obnoxious, too.

      The Salon article *can* cost money (that is, you can subscribe to Salon to read it), but you can also watch an ad (or you can click on the ad and carefully look away from it) and then read the article for free. That's what I do. Sites not run as charities need to pay for their content somehow: Even some commercial websites don't make money per se, but are justified by other means (goodwill, information spreading leading to sales, etc), and some are free to read and make money with banners. Salon, unlike some sites, has provided two ways to read their stuff, meaning (I hope) that they stay in business, since I like some of their original stories. Note that reading Salon by the watch-ad/get-daypass means doesn't require you to give them demographic information, answer surveys, surrender your email, click checkboxes to avoid (yeah right!) spam, choose a password, or pay any money.

      Probably someone will come up with a way to block the content of the interstitial Salon ads: the arms race continues. But I prefer their approach to the increasing number of news sources that require registration and/or a paid subscription. The New York Times is annoying but hard to ignore as a news source, enough so that we link to it from Slashdot despite the required registration process; other papers, barring unusual circumstances, we won't link to, because it's annoying to keep so many username/password combinations and have to log in to read their content.

      And the fact that it's someone from Salon who submitted ... so what? An editor or writer with a publication or website can submit just like anyone else; I'm glad when they're up-front about it. Would you rather A. Leonard had submitted more sneakily from a throwaway hotmail account? :)

      Cheers,

      timothy

  • by The Spie ( 206914 ) on Saturday March 20, 2004 @10:24PM (#8624848) Homepage
    If they did this as a round table, I would have been sad to have missed it. You just know that at some point in the discussion, Raskin and Hertzfeld would have gotten into a fistfight over who the real father of the Mac was. "Two geeks enter, one geek leaves..."
  • by bsDaemon ( 87307 ) on Saturday March 20, 2004 @10:26PM (#8624860)
    Please keep in mind that, being only nearly 20, I don't have the depth of personal experience of, say, someone who was around when UNIX was first rolled out. However, I have been in my day an avid C and BSD (mostly FreeBSD, but some NetBSD) user.
    Honestly, from where I sit (you may agree or not), programming and computer stuff in general has become a lot less like a science or craft, and more like a factory job. In the early days, programmers were physicists, engineers, and mathematicians. Today programmers are just programmers. More and more computer science departments are teaching using Java. Why? Because it helps people to understand how the computer works? No. Simply because it's what the industry is using.
    I had 4 technicians from Cox over at my house yesterday because my parents couldn't figure out what was wrong with the cable modem. They were the most filthy, disgusting bunch I have ever seen and were dressed more like gas station attendants than professionals. Why? Because that sort of work has become blue-collar and low-rent.
    Programmers are no longer expected to be educated beyond their field. They are being educated to produce software, not to be COMPUTER SCIENTISTS. How many graduates of, say, ITT Tech would actually understand Knuth, even if they had ever heard of him? Likely not many. That is why software sucks. That is why the programming "trade" sucks. And that is why companies can send the jobs abroad to people who work for peanuts. Programming is just like stamping "Ford" on the grill in a Detroit assembly plant these days, and nothing more.
    • by matusa ( 132837 ) <chisel.gmail@com> on Saturday March 20, 2004 @10:42PM (#8624951) Homepage
      While that is true, it's also naive to think it would be any other way. Why? Think about every other profession! The opportunity to do something creative is reserved for a select, lucky few. I say lucky few because everyone has met great people in crap jobs.

      So the question of course becomes--how do you dodge the bullet of crap positions? Doing well in academia is probably unfortunately the best solution. I'm going to CMU next year, and had a nice talk with one of their CS profs about some OpenGL + C++ projects I'm working on, plus some AI research I started. I will get to work on these things, however I will also have to try not to swallow cyanide while dying through Java classes teaching me how to program in a way so dumbed down that even the greatest imbecile can't screw up.

      This of course touches upon a great sadness of modern engineering training for me: you don't get taught to think--you get taught to use prescribed methods. Why? Ostensibly to never reinvent the wheel, but also to get retards to do the job fine.

      Let's not be stupidly depressed about everything, however. Trying to shoot to the very top has always required talent and hard work, and always been possible.

      I think programming is truly great, truly beautiful. This afternoon I made some money writing some boring PHP code, but also worked on my personal projects, and I'll work to have the tides change in the future.
      • by Doomdark ( 136619 ) on Saturday March 20, 2004 @11:58PM (#8625322) Homepage Journal
        however I will also have to try not to swallow cyanide while dying through Java classes teaching me how to program in a way so dumbed down that even the greatest imbecile can't screw up.

        ...

        This afternoon I made some money writing some boring PHP code, but also worked on my personal projects,

        While I whole-heartedly agree with the points you are making, it's worth mentioning that there's nothing fundamentally wrong with either Java or PHP that leads to boring lowest-common-denominator programming: it's possible to do interesting, advanced and sophisticated things using both, as well as with their countless alternatives. Except for some elitists who claim one has to use, say, functional programming languages to do anything interesting, or "no pain no gain" hard-core low-level language fanatics, most truly good programmers understand it's not the tools that make exceptional advances; it's the craftsmen that use them.

      • by Stinking Pig ( 45860 ) on Sunday March 21, 2004 @01:21AM (#8625703) Homepage
        Lemme get this straight: doing well in academia is the answer, but the only good thing in your academic curriculum is a side research project, because your main classes don't teach thinking?

        I'll say this for the English degree, which I highly recommend to any young geek looking at schools: you learn to think, you learn to communicate, and you learn to differentiate shit from shinola. Surviving through a hardcore postmodernist-influenced seminar will prepare you for any amount of corporate meetings.
    • And it's precisely this attitude of yours towards your "common man" that really makes computing suck, in general. Your entire post reeks of "elitism," harking back to a time when computer programmers were some sort of elite bunch that people "just depended upon" to make the magic database "go". Now that computing has been brought to the masses, for better or for worse, you feel that you and your 4-8 years of education in the computing field are threatened by a bunch of ITT graduates who don't have the theor…
      • Any moron fresh out of ITT or CLN with a degree pasted to their face by their own drool can churn out a reeking pile of code that will work.

        However, without the theoretical knowledge to back that basic syntax knowledge up, it won't work well.

        The grandparent post mentions coding being a "factory job". The commoditization of coding IS a huge problem. Coding WELL is not easy. However, because these half-wits that barely dragged their sorry asses through high school can go to CLN and pick up the latest "Microsoft cert du jour" or whatever other worthless piece of paper they offer, the overall expectation of coding is dropping. I couldn't tell you how often I've been ordered to cut critical corners by clueless bosses who don't understand coding on any level deeper than how to throw syntax together to create a brittle shell of a program that will work just long enough to take the customer's money and run (a favorite quote: "... *I* was never taught that." - spoken in the manner of someone who can't imagine they don't know everything).

        The problem isn't that we're elite. The problem is that good programming is no less complex or time-consuming a task now than it was 20 years ago. Why is it elite to try and explain that to someone when they tell you not to bother with such and such critical piece or this basic security test? It's not elite, it's just that we've been flooded by so many bozos that wouldn't know good programming practice if it bit them in the balls that we're constantly deluged by sub-par workers and everyone has come to accept that sub-par work as the norm.

        • by Milo77 ( 534025 ) on Sunday March 21, 2004 @01:32AM (#8625748)
          I agree - coding well isn't easy, but my CS degree really only half prepared me for being able to code well. Yeah, I learned a lot of theory and understand more or less exactly what's going on on that processor, and this has enabled me to write some pretty clever code. But I've also learned that cleverly written code (code that's "better" in a purely academic sense) can be some of the worst code. It reminds me of that awful hacker creed: "it was difficult to write, it should be difficult to read." It's sad, but true for most of the "software developers" I've met. They'll write terribly clever code, but in the end they've done a disservice to the project's long-term viability. I saw another quote this week (off the UNO website, off Slashdot earlier this week) [paraphrase]: "the code will get written once, but read many, many times - if you make it easy to read, you and others will benefit in the long run." (you get the gist)

          It is sad what we've allowed to happen to software engineering - a disgrace really.
        • by ebuck ( 585470 ) on Sunday March 21, 2004 @01:58AM (#8625837)
          Agreed, good programming is not easy, and more of the avenues into the field should require a stronger basis in theory (language design, automata, OS internals, compilers, underpinnings of good database design, etc.)

          But good programming should be less complex now than it was before. That's the whole impetus of language design. That's the reason the last of the big languages to roll out is the same "simple" Java bashed in a few previous posts.

          In a previous post, I couldn't fathom the divergence of thought that denounced Java as a language while espousing the really cool C++/OpenGL stuff the poster is doing. C++ has a syntax that's unwieldy and awkward, mastered by comparatively few, and full of "compatibility" weaknesses shared by its older brother, C. It's almost as if it were thrown in there subconsciously to say, "Look, I am an uber-elite programmer. OpenGL and C++. Watch me whine as I use something that has a clean, clear syntax."

          I'd hate to hear him gripe about PASCAL.

          "Really cool work", can be done in any language, and the proliferation of languages shows that there's many solutions to the same problem.

          His bashing the language for its simplicity was as insightful as bashing good error checking, testing array boundaries, or enforced garbage collection. Note that technological improvements can lessen the impact of these annoyances, but when a language's design is flawed, only deep education of the masses (as in: don't do this, you'll regret it) can save the language.
    • by xtal ( 49134 ) on Saturday March 20, 2004 @10:45PM (#8624971)
      I agree with you, but only partly. Another problem is that some people are interested in programming applications as an end in itself - e.g. their whole life revolves around implementing solutions to other people's problems. The guy from Cox probably couldn't care less about Knuth - it's just what he's being told to do. Perhaps this isn't so much a problem as it is a side-effect of the need for programming services.

      That's because business has a need to get their problems solved, and finds the most effective tool to do it - in this case, generic problem solvers or programmers. This is work that is easily outsourced.

      Back in the day, the guy programming was solving problems to make -his- life easier. It's not a stark distinction, but one that needs to be made. My formal training is as an EE; I took MANY more advanced mathematics courses than the CS people, at least at the undergraduate level. We did a grand total of three programming courses, all of them offered by the CS faculty, and when I was there, we were taught Modula-2. It's since moved to Java. They don't start out teaching the virtual machine or bytecode, either. Pointer? Eh?

      Anyway, back to my point - I used Matlab, C, Assembly, you name it in my digital systems courses. We were not taught those things; we were expected to know them or learn them on our own to solve the problem at hand.

      Using a calculator to solve a problem and making the calculator are different things.
    • Shoveling Data (Score:5, Interesting)

      by nycsubway ( 79012 ) on Saturday March 20, 2004 @11:01PM (#8625050) Homepage
      That sounds like most IT jobs. I've found that IT is different from research and academia. Where I work, at an insurance company, I started referring to what I do as shoveling data, because my entire job can be summed up in one flow chart: begin, open file, read file, process data, end of file? No: read file. End of file? Yes: close file. End of program.
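
      In C, that whole flow chart is about a dozen lines (the file name and the "process" step are placeholders):

      #include <stdio.h>

      /* The flow chart above, literally: begin, open file, read file,
         process data, loop until end of file, close file, end. */
      int main(void)
      {
          char line[256];
          FILE *fp = fopen("records.dat", "r");        /* open file */
          if (fp == NULL)
              return 1;
          while (fgets(line, sizeof line, fp) != NULL) /* read file, EOF check */
              printf("processed: %s", line);           /* process data */
          fclose(fp);                                  /* close file */
          return 0;                                    /* end of program */
      }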

      It's mindless. The problem with programming today is that, yes, it has become a commodity - something people expect you to be able to sit and do for 8 hours a day, continuously, without thinking or having any input of your own as to whether what you're doing is really worth it.

      There is no creativity in the corporate world; I think that's why so many people choose to work on open source software.

    • by KrispyKringle ( 672903 ) on Saturday March 20, 2004 @11:17PM (#8625134)
      I know a number of people just bitched you out for this post, so I'm going to try to keep it brief. ;) Just a few points, in no particular order.

      You refer to the cable guys as if they are the epitome of computer science. They aren't computer scientists. They almost certainly aren't even programmers. Perhaps to the completely ignorant, all computer-related jobs are the same, but they aren't. Most jobs as a technician are crap. Slightly above that would be the post of admin: keeping something up to date, installing new software. Above that, some network and system admins have interesting jobs designing new systems, implementing creative solutions to problems, and so forth. Programmers have a similar opportunity to do creative coding, but often it's just another solution to another problem - not something that sounds like a lotta fun. And above that would be computer science. Research. Whole different ball game.

      I think this is the root of your confusion. You see more blue-collar technical jobs. This doesn't mean less research is going on, though. Back in the day, the only people who interacted with computers were academics and researchers. There was no ITT tech. Now, in addition to the academics and researchers (of whom there are actually almost certainly many many more), there are hordes of unwashed masses actually (heaven forbid) using computers as tools, rather than just for the academic prospects themselves. Point is, the research is still there; in fact, there's far more of it. But there are also more and more other uses. This isn't a bad thing; it's a good thing.

      In case you don't see what I mean, look at it this way. Your complaint could be summarized with an analogous complaint about the watch industry. Back in the 1800s, the only watches available were really classy, expensive, work-of-art kinda things. A gentleman's accessory. Now, any old Joe on the street has one; they come in all sorts of cheap, disposable, low-quality shitty versions. But that doesn't mean there are fewer high-quality versions; in fact, there are more. TAG Heuer, Rolex, Citizen, Suunto... the competition to make the greatest precision timepiece is quite tough, I suspect. Point is, there's a lotta shit out there now that wasn't there in 1800, but plenty more nice watches as well.

      Hmm. I guess I didn't really keep that brief. Sorry.

      • by slamb ( 119285 ) on Sunday March 21, 2004 @02:02AM (#8625849) Homepage
        You refer to the cable guys as if they are the epitome of computer science. They aren't computer scientists. They almost certainly aren't even programmers. Perhaps to the completely ignorant, all computer-related jobs are the same, but they aren't. Most jobs as a technician are crap.

        Agreed.

        Slightly above that would be the post of admin: keeping something up to date, installing new software. Above that, some network and system admins have interesting jobs designing new systems, implementing creative solutions to problems, and so forth. Programmers have a similar opportunity to do creative coding, but often it's just another solution to another problem - not something that sounds like a lotta fun. And above that would be computer science. Research. Whole different ball game.

        Here's where you lose me. I don't agree that computer science is "above" programming. In fact, I'd say that programming is the union of computer science and software engineering. Superior programming requires contributions from both fields.

        Software engineering is nothing to sneer at. It encompasses version control, coding style, rigorous specifications, code review, bug tracking, API documentation, user manuals, user interface design and testing, unit/regression/acceptance testing, etc. There's some real artistry involved. It's easy for us programmers to neglect these, saying they aren't hard but we don't have the interest or resources. But in reality, when I really try to do even the most seemingly mundane of these tasks, I find there's a lot more skill involved than I previously realized.

        The experience has also made me skeptical of the idea of farming software engineering tasks off wholesale to specialized people. For example, writing good API documentation requires the involvement of the people who designed the interface. Likewise good user manuals require the people who designed the UI and administered the user tests. Pure technical writers can maybe even write the bulk of the prose, but if they can't do what they document, they don't have a prayer of noting subtleties without help.

        Summary: I think actually designing, implementing, and even proving the correctness/efficiency of algorithms is a much smaller part of the whole than we like to admit. The other things are not only valuable and difficult, but also should be done by the same people.

    • by torokun ( 148213 ) on Saturday March 20, 2004 @11:20PM (#8625150) Homepage
      It's clear to me why you wrote this post from a subjective standpoint -- I thought the same way when I was 20. Even when I was 24. I don't think the same way now at 28.

      Why? Because I've seen through experience that (1) most people can't learn the hard CS stuff, and (2) 95% of projects don't require it. The sad fact is that "Computer Science" is only really applicable to solving "hard problems," writing compilers, designing languages, or to AI and its kin. It is, in general, not applicable to business applications.

      It used to be, but what happened? Computers got faster. Here's the progression in my career...

      1. Kudos for optimizing memory management and speed in C/C++, or even assembler.
      2. Questioning the need for such optimizations and pushing profiling before such work if it would take a significant amount of time
      3. Questioning the need for ever doing optimization, questioning the value of low-level languages.
      4. Pushing high-level languages (web-based solutions / VB) for everything unless a clear need exists.
      5. Sending everything to India.
      This just wouldn't work unless most apps simply didn't need the work that we as computer scientists want to put into them. Knuth is a perfect example -- he spent years and years getting TeX perfect just so he could see his books typeset perfectly. We have that sort of perfectionist bent.

      But it's all driven by money in the end, unless you're in academia or research... A very few people are in positions doing both technically hard stuff and making money for it. These would be like Wolfram Research, Google, some game companies (although I hear they're sweatshops, but what isn't nowadays?)...

      In the end, you're correct that a lot of hard problems can only be handled by people trained in CS. These would be the things mentioned, along with parallelism, threading, and optimization issues... But it's also true that most of the products out there don't need these. We're sifting these categories apart now, and unfortunately, it's just a fact that not much yummy stuff is left.

      But when you have garbage collection, a raging fast machine, and graphical IDE's, even if someone puts crappy code together, as long as they make a decent API, it's going to work after they try to compile it half a million times.

    • by heyitsme ( 472683 ) on Saturday March 20, 2004 @11:28PM (#8625192) Homepage
      As a student at a major Big Ten University (tm) I can easily tell that your perception is a bit skewed. The old cliche "you get what you put into it" applies to many things in life, and computer science is no different.

      My school's core computer science curriculum is in Java. Language of instruction is a moot point to a rather great extent. You can learn as much from a data structures class taught in Java as you can from one taught in $language_of_choice. The idea is to learn how things work fundamentally, and then apply those ideas practically. A linked list in Java works the same as a linked list in C. It's not about Java being the "industry standard" as you call it; it's about Java being a perfectly modern and capable programming language.
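
      For what it's worth, here's a minimal C version of the list every data structures class builds; a Java version differs only in syntax and in letting the garbage collector do the freeing:

      #include <stdio.h>
      #include <stdlib.h>

      struct node {
          int value;
          struct node *next;
      };

      int main(void)
      {
          struct node *head = NULL;
          for (int i = 3; i >= 1; i--) {      /* push 3, 2, 1: list is 1 -> 2 -> 3 */
              struct node *n = malloc(sizeof *n);
              if (n == NULL)
                  return 1;
              n->value = i;
              n->next = head;
              head = n;
          }
          for (struct node *n = head; n != NULL; n = n->next)
              printf("%d ", n->value);        /* prints: 1 2 3 */
          putchar('\n');
          while (head != NULL) {              /* free the whole list */
              struct node *next = head->next;
              free(head);
              head = next;
          }
          return 0;
      }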

      Your next analogy of the cable repairmen almost prompted me to moderate your post as +1 Funny, but when I found out you were not joking I decided to write this reply instead. To even equate a cable repair person with a computer scientist is pure madness. Even if they were programmers, how is getting the cable modem working a good metric of "computer stuff in general" being "a lot less like a science or craft and more like a factory job", or even relevant to the discussion of computer programmers vs. computer scientists at all?

      None of your points even remotely explains what you consider the fundamental problem: "why software sucks... why the programming 'trade' sucks... why companies can send the jobs abroad to people who work for peanuts." The fact is, not all software sucks, many people love their jobs in the industry, and these people are getting paid well to do them. Most of the computer scientists you speak of don't work in the private sector; you can find them at government [fnal.gov] research [llnl.gov] institutions [ornl.gov].

      To say that these types of people don't currently exist, and that current CS curricula can't produce scientists of this caliber, is nothing short of ignorant.
      • by Admiral Burrito ( 11807 ) on Sunday March 21, 2004 @02:45AM (#8625978)
        You can learn as much from a data structures class taught in Java as you can from one taught in $language_of_choice. The idea is to learn how things work fundamentally, and then apply those ideas practically. A linked list in Java works the same as a linked list in C.

        The thing is, Java is somewhat high-level. There are things that go on under the hood that you won't learn about, but that once in a while pop up anyway. For example, being taught Java, you won't learn about the difference between memory allocated on the heap and memory allocated on the stack. And yet...

        This does not work (it doesn't even compile):

        String x = "a";
        (new Runnable() { public void run() { x = "b"; } } ).run();
        System.out.println(x);

        There's nothing wrong with the code; the problem is that Java pretends to support closures but really doesn't. To use x in the anonymous inner class, you need to declare it final. But if you declare it final, you can't do the x = "b" assignment.

        I'm familiar with C, so I understand the difference between the heap and the stack. I can infer that x (the reference to the string, not the string itself) is allocated on the stack. It is not uncommon for instances of anonymous inner classes to outlive the stack frame they were created in, so the compiler doesn't know whether or not x (on the stack) will still exist when the object's run() method is called. So it makes a _copy_ of x, but in order to pretend that it is still x, the compiler wants you to declare it final so that the original and the copy can never get out of sync.

        Having experience with C, I know the heap is a safe place to put things that may need to outlive the current stack frame:

        final String[] x = new String[] { "a" };
        (new Runnable() { public void run() { x[0] = "b"; } } ).run();
        System.out.println(x[0]);

        It's ugly, but it works. The reference called x needs to be declared final (because it's on the stack) but the reference contained in the array does not need to be final (because it's on the heap).

        Because of my experience with lower-level stuff, I understand how Java is faking its support for closures, and how to work around the limitations. This is only one example; there are many other times when understanding things from a closer-to-the-metal perspective gives insights that would be lost if things were only understood from a high level. Joel Spolsky summed this up fairly well: Leaky Abstractions [joelonsoftware.com]

    • by noonien_soong ( 723097 ) on Sunday March 21, 2004 @01:53AM (#8625815)
      Let me get this straight. You're 19 years old, you've played around with computers a little bit "in your day," you've read some Knuth, and you're smart enough that you've come to see yourself as superior to the average blue-collar worker. This gives you so much insight into the world of software engineering that you can discern exactly what the problem is---programmers aren't as smart as you are, and if they were, software wouldn't suck. Would you say that's a fair characterization of your argument? Because that's how you come off.

      The first thing you have to realize is that software is being written at a much higher rate now than it was back in the days when programmers were physicists and mathematicians. That's because it no longer takes a rocket scientist to write a program, and that is a Good Thing (TM). If the programming learning curve hadn't come down, we wouldn't be living in the wonderful information society that we are today, because there simply aren't enough rocket scientists to hammer out every PHP script and database app one might desire. The technical aptitude of the people working on the foundations of computing is certainly not degrading as programming becomes more democratized, because those people's talent and skills are not forged in some intro programming class. Every hacker knows he learned his art outside the classroom. So it's not exactly logical to assume that colleges could turn out a crop of brilliant innovators if only they used Haskell, read Knuth, and taught all their CS students physics and numerical analysis on the side. Clearly, what would actually happen if one simply cut the lowly Java-bred programmers out of the programming craft is that a lot less bad software would be written, a little less good software would be written, and all of it would cost more. That's not exactly an improvement.

      None of this is to say that there are not pervasive (and addressable) problems in modern software engineering. Those problems are simply much more endemic to the state of the art of programming than they are to any particular group of people. As many have pointed out, software today is brittle. It is frequently opaque, offering users and programmers alike only the most rudimentary means of debugging. The bread-and-butter software used by everyone every day is often monolithic, designed for one purpose and impossible to customize without intensive study. Think about it for a moment, and you'll realize that much of the functionality of a typical document-editing application is duplicated in any other. In principle, such functionality could be factored out. But I can't digress too far into that here - let me continue with the list of reasons why software sucks. Integrated development environments give programmers a view of a project that scales poorly with complexity, software is incredibly difficult to build from source (unless you use Java--heaven forbid that should find its way into the bubble of intellectual purity you inhabit), and perhaps worst of all, the design decisions and architecture of software are usually not expressed clearly anywhere except in source code, where they are obscured by all manner of syntactic complexities, compiler optimizations, and details that aren't significant to the overall intent of the code. These things--all the things that make software complex, which make it difficult for groups to work together on large software projects (as you would understand if you'd ever worked on one)--are some of the real hurdles to be overcome in software engineering. ITT Tech and outsourcing to India are NOT the problem.

      I haven't said much about how to solve any of these problems. But I've said a lot, so I'm going to stop now. I highly encourage you to get some more experience and perspective before you make sweeping and arrogant generalizations. College-aged know-it-alls with overblown rhetoric are a dime a dozen. Real problem solvers are rare.

  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Saturday March 20, 2004 @10:28PM (#8624871)
    Comment removed based on user account deletion
  • Programming Skills (Score:3, Interesting)

    by Anonymous Coward on Saturday March 20, 2004 @10:31PM (#8624889)
    It's such a simple concept. The more of anything we have, the more the mediocre stands out. With millions of writers, we get self help books, assorted garbage, and several really excellent works.

    Programming has an artistic side; the creativity, vision, and insanity required to apply oneself to a project are much like those needed to author a book. Many have the skills and know the principles, but even then, few have the internal extra to create.

    I may know language and syntax, but I'm nowhere near the league of Shakespeare, Tolkien, Asimov, or Clancy. Fortunately for me, they are nowhere near my league when it comes to putting code together.

    We have millions of coders - 60 percent will have average skills, 20 percent will be below average (or plain suck), and 20 percent will be above average, including that rare 2 percent of the absolutely insane, don't let them out on weekends, make sure they get fed, check they haven't peed themselves brand of genius.

  • by BeerSlurpy ( 185482 ) on Saturday March 20, 2004 @10:36PM (#8624924)
    It is entirely possible to survive in many companies as a bad programmer who nonetheless manages to be productive and produce seemingly non-buggy code. They may even appear to be especially hardworking and motivated because of the poor design that they have to spend extra time working around as they add features.

    The forces that allow this phenomenon to self-perpetuate are:
    =Lack of people who know how to manage engineers properly, know how to recognize good ones and bad ones and how to motivate the ones you have to be productive.
    =Lack of good project management skills that inevitably leads to crunched schedules and poor quality code, also lack of perception on the part of management as to why software is having problems with performance, bugs or schedule to complete
    =Lack of desire to retain good engineers or cultivate improvement in the junior ones
    =Lack of communication between engineering and whoever is giving them work, especially regarding desired features and schedule
    =Lack of quality control, lack of oversight, lack of checkpoints in project progress

    It doesn't help that the concept of "good engineering" is so hard to measure - a few things are "obviously bad," but most things are not. Even if someone is being completely wrong-headed about one particular concept, it is entirely possible that they are exceptionally strong in many other areas within that field. It eventually boils down to "the proof being in the pudding," with the pudding being exceptionally complex to make and subject to the whims of the royal pudding tasters when done.

  • by Illserve ( 56215 ) on Saturday March 20, 2004 @10:43PM (#8624958)
    For programming to get "good" it's going to have to get unfun. No more will long haired super cool geniuses plug away for hours on end.

    It'll have to be a managed engineering process with all the fun and excitement of a CPA convention.

    • by alienmole ( 15522 ) on Saturday March 20, 2004 @11:10PM (#8625099)
      For programming to get "good" it's going to have to get unfun. No more will long haired super cool geniuses plug away for hours on end.

      It'll have to be a managed engineering process with all the fun and excitement of a CPA convention.

      This only works when no innovation is necessary. You can't CPA-ify innovation (at least, no-one has ever succeeded at that). That's why big companies have to buy small companies, and why big companies run research departments for the long-haired super cool geniuses to play.
  • by G4from128k ( 686170 ) on Saturday March 20, 2004 @10:43PM (#8624961)
    Moore's Law is one reason why software still stinks. Instead of perfecting systems within the confines of a limited amount of resources, it's too easy to just assume more MHz, MB, and Mbps.

    With exponentially increasing resources, nothing ever stabilizes and everyone knows it. If people design software with the assumption that it will be totally obsolete and replaced in 18 months, they create software that is so badly designed that it must be replaced in 18 months.

    Until hardware performance plateaus and people get off the upgrade-go-round, programming will be sloppy and ugly.
    • I'm gonna have to disagree with the notion that lack of scarcity leads to bad design.

      I think that, more often, low-level optimization locks us into a bad design; look at Mac System Software 9 and lower, or Windows before XP, for an extreme example of this. Locks and crashes caused by apps were common because the task scheduler and memory model were created with scarcity in mind - developers at Apple and MS knew better ways to do things, but were locked in by those decisions made based on ea…

    • I half agree. The complete lack of regard that most programmers have for the amount of system resources available to them is, in most cases, entirely tragic. There are a lot of programs that could have a smaller memory footprint or run much faster if any time were taken to make these things fundamental to the design.

      That said, I've never really heard anything good about the code that developers have to write for systems like the PS2, which simultaneously have a wealth and a lack of resources. The PS2 has a lot o…
  • by gilmet ( 601408 ) on Saturday March 20, 2004 @10:44PM (#8624970) Homepage
    The two are remarkably similar. As time goes on, roles analogous to those found in the production of physical machines/structures (such as concept artists, architects, engineers, construction workers) will be defined for digital creation. Actually, this has already happened. Perhaps what's lagging behind is the partitioning of education that leads to these professions?
  • by JordanH ( 75307 ) on Saturday March 20, 2004 @10:47PM (#8624988) Homepage Journal
    From the article:
    Simonyi believes the answer is to unshackle the design of software from the details of implementation in code. "There are two meanings to software design," he explained on Tuesday. "One is, designing the artifact we're trying to implement. The other is the sheer software engineering to make that artifact come into being. I believe these are two separate roles -- the subject matter expert and the software engineer."

    Giving the former group tools to shape software will transform the landscape, according to Simonyi. Otherwise, you're stuck in the unsatisfactory present, where the people who know the most about what the software is supposed to accomplish can't directly shape the software itself: All they can do is "make a humble request to the programmer." Simonyi left Microsoft in 2002 to start a new company, Intentional Software, aimed at turning this vision into something concrete.

    It's difficult to believe that Simonyi could be ignorant of the many, many years of development of CASE tools and AI projects that have promised to build software systems from specifications.

    In 1980, a Professor told a lecture hall of Sophomore Computer Science students, myself included, that almost none of them would have jobs in programming, because in just a few years we would have AI systems that would build software systems from specifications that subject specialists could input.

    I don't think we are even a little bit closer to that dream today than we were 24 years ago.

    Maybe I'm confusing things here, though. Specifications aren't exactly the same as design. I know that I've sat through some CASE tool presentations where they implied that the work was all done when the design was done, but they were doing some pretty fast hand waving. I believe that those tools did not live up to the promises of their marketing.

    Am I off-base here? Has Simonyi cracked this problem with something entirely new?

  • by hak1du ( 761835 ) on Saturday March 20, 2004 @10:50PM (#8624996) Journal
    Sorry, I don't think any of those people have much credibility left: they have been in the business for decades, and they have had enormous name recognition. We have seen the kind of software they produce. If they knew how to do better, they have had more opportunities than anybody else to fix it. I think it's time to listen to someone else.
  • by rice_burners_suck ( 243660 ) on Saturday March 20, 2004 @10:50PM (#8624997)

    I admittedly haven't read the article (yet), but I'd like to include a few reasons of my own why programming stinks. As you might guess, I am a programmer.

    My friends and I compare a lot of computer things to car things. Most likely, we do that because we are enthusiasts of both. Fast cars and fast software are very similar in many respects.

    A little background information on cars is necessary to gain the full effect of my argument about programming. Although the next three paragraphs may seem unnecessary at first glance, I assure you that I am a careful writer and that you should read them.

    Car enthusiasts fall into quite a few categories. For example, people who restore classic Mopar or Chevy cars enjoy making everything look like "mint" condition. Usually, every part of the car is so spotless and beautiful that you could eat off the engine. On the other end of the classic car spectrum, there are those who will tub out the entire car and concentrate only on performance features. These cars may not look like much, but they'll break your neck if you push the gas too hard. And of course, there is an entire spectrum of preferences between these two ideals.

    In most of these categories, the hard-core enthusiasts like to do the ENTIRE job themselves. They won't let anyone else touch their cars. The wannabes will usually contract out nearly everything, because they want the glamor of showing up at car shows and showing off their machine, but can't hold a screwdriver and don't know the difference between a 6-point wrench and an Allen wrench. And of course, there is an entire spectrum of car knowledge, experience, and do-it-yourself levels in between these two extremes.

    Somewhere in the middle of the two extremes are people like my friends and me. We do a lot of work ourselves, but when it's a complex or high-risk job, or if we don't feel like doing it because it's boring and time-consuming, we'll have a professional do it. There are auto mechanics who do pretty much any job, and there are mechanics who specialize in a specific area. For example, I have my radiator guy, my transmission guy, my engine rebuilding guy, my chrome plating guy, my carpet guy, my headliner guy, and the list goes on and on. I use each specific person for the job he excels at because I understand thoroughly what I am about to explain.

    Programmers are a lot like the car enthusiasts I've just described (myself included). Some prefer to do EVERYTHING, like that guy who wrote 386BSD and wouldn't accept other people's code improvements. (The project got forked and now you've got the *BSDs, and that guy is no longer involved as far as I know.) Some prefer to concentrate only on a specific area of software, such as graphics, numerical algorithms, kernel schedulers, assembly optimizations, databases, text processing, and the list goes on and on forever. Even an area such as graphics can break down into a plethora of categories, such as charting software, user interfaces, etc.

    The biggest reason that software sucks, in my opinion, is the very same reason that the automotive repair industry sucks. I wouldn't be surprised if programmers are just as hated as car mechanics. The programmer's boss is just like the old lady who takes her car to the mechanic. Neither knows anything about the job at hand. The only thing they know is that it costs them big and the results suck.

    For the programmer's boss, the software contains bugs, is difficult and confusing for the customer to use, and takes much too long to develop, so the market window closes, the project goes over budget, and maybe higher management cancels the project altogether.

    For the little old lady, the car broke down. The mechanic wants to fix it properly. But doing so will take weeks (believe me). The symptoms are caused by one or more problems, which require several new parts and quite a lot of labor to repair. The parts may be hard to find. The old ones may need to be rebuilt. And generally, people don't like renting a car for t…

  • Kind of disappointed (Score:3, Interesting)

    by Comatose51 ( 687974 ) on Saturday March 20, 2004 @11:00PM (#8625040) Homepage
    The article doesn't provide much of the actual discussion, so it's really hard for me to decide whether I agree with the experts. The article implies that there are problems with software; that much is nothing new. Software is fragile and implementation is difficult. However, the article doesn't really get at the reason, other than to say we lack the necessary tools. So while I agree with that much, it's nothing shocking or particularly insightful. It's disappointingly shallow for a Salon article.

    The only really shocking part to me was the Bill Gates quote. He's either an open-source man at heart or just a hypocrite. :P
  • by miu ( 626917 ) on Saturday March 20, 2004 @11:00PM (#8625043) Homepage Journal
    From the article:
    "Making programming fundamentally better might be the single most important challenge we face -- and the most difficult one." Today's software world is simply too "brittle" -- one tiny error and everything grinds to a halt: "We're constantly teetering on the edge of catastrophe." Nature and biological systems are much more flexible, adaptable and forgiving, and we should look to them for new answers. "The path forward is being biomimetic."
    This is easy to say, but what to do about it? A CPU is controlled by a set of registers and the contents of a stack; even if you virtualize those things (JVM, Smalltalk, .NET, ...) and give them access controls, you still have a system that is subject to massive failure once a single part of it fails.

    So for this biomimetic approach to work would require a dramatically different machine architecture from what we have now. Of course it would also require rewrites of all existing operating systems and much existing application and library software. So 'emulate biological systems' is a nice easy answer that does not really answer anything in the near term.
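
    About the best we can do on today's architectures is contain failures at the application level rather than prevent them. A minimal C++ sketch of that kind of containment (my own illustration, nothing from the article):

      #include <functional>
      #include <iostream>
      #include <stdexcept>
      #include <vector>

      // Give each component its own failure domain, so one throwing
      // part degrades the system instead of halting all of it.
      void runIsolated(const std::vector<std::function<void()> >& tasks)
      {
          for (std::size_t i = 0; i < tasks.size(); ++i) {
              try {
                  tasks[i]();
              } catch (const std::exception& e) {
                  std::cerr << "component " << i << " failed, continuing: "
                            << e.what() << "\n";
              }
          }
      }

      int main()
      {
          std::vector<std::function<void()> > tasks;
          tasks.push_back([] { std::cout << "network up\n"; });
          tasks.push_back([] { throw std::runtime_error("config parser blew up"); });
          tasks.push_back([] { std::cout << "UI up\n"; });  // still runs after the failure
          runIsolated(tasks);
      }

    It's a far cry from biology, of course -- the point is only that even this much containment has to be built by hand today.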

    • one tiny error and everything grinds to a halt

      This is easy to say, but what to do about it?

      Simple, you don't make programs so stupid. Here's what a program does. "OH SHIT! THAT WASN'T SUPPOSED TO HAPPEN! EXCEPTION! Oh shit, there's no exception handler. CORE DUMP!"

      The problem comes down to 'that wasn't supposed to happen.' It reminds me of my 2nd year CSC course when my prof said "assume all your input is correct." WTF? They never teach you about error handling in university (not in any curriculum
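
      The defensive alternative costs only a few lines. A minimal sketch (my own example):

        #include <iostream>
        #include <sstream>
        #include <string>

        // Instead of "assume all your input is correct": parse, check,
        // and recover, so bad input is an error message, not a core dump.
        bool readAge(const std::string& line, int& age)
        {
            std::istringstream in(line);
            return (in >> age) && age >= 0 && age <= 150;
        }

        int main()
        {
            std::string line;
            while (std::getline(std::cin, line)) {
                int age;
                if (readAge(line, age))
                    std::cout << "ok: " << age << "\n";
                else
                    std::cerr << "bad input, ignoring: \"" << line << "\"\n";
            }
        }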

  • by sashang ( 608223 ) on Saturday March 20, 2004 @11:12PM (#8625111)
    "One is, designing the artifact we're trying to implement. The other is the sheer software engineering to make that artifact come into being. I believe these are two separate roles -- the subject matter expert and the software engineer."
    Funny chap, talking about how design and implementation should be separate. It seems a bit ironic, considering he was the one who created the Word doc format, where the layout and content are all packed into one file. Most decent solutions separate the layout from the content (e.g., LaTeX, HTML/CSS). If Simonyi were a web programmer, he'd be laying out his HTML with tables.
  • by slamb ( 119285 ) on Saturday March 20, 2004 @11:22PM (#8625165) Homepage

    The article is crap. A typical snippet:

    "There's this wonderful outpouring of creativity in the open-source world," Lanier said. "So what do they make -- another version of Unix?"

    Jef Raskin jumped in. "And what do they put on top of it? Another Windows!"

    "What are they thinking?" Lanier continued. "Why is the idealism just about how the code is shared -- what about idealism about the code itself?"

    This is similar to many earlier articles disparaging the WIMP (Windows, Icons, Mouse, Pointer) model: a bunch of "visionaries" see that we've used the same model for some time and are therefore convinced it is horribly limiting, and that we use it solely because the people who actually build systems have less imagination than the people who write these kinds of articles. [*] They never offer more than the vaguest suggestions of a better model, and they certainly never take the time to explore a new model long enough to ensure it really is workable (much less actually an improvement).

    In fact, this article is so vacuous that I'm not sure what they think stinks about software, much less why. And certainly not how to fix it.

    [*] In fairness, this article mentions people who have done some impressive work in the past (and is thus atypical of the genre). But still, I do not see any suggestions for a fundamentally better model or even any concrete problems with the existing one.

    • by Anonymous Coward
      But still, I do not see any suggestions for a fundamentally better model or even any concrete problems with the existing one.

      It's the editor!

      No, not the IDE, it's the model we use.

      I've coded almost daily for over 25 years, so ponder this!

      We're working the parse tree once removed and are forced into prose by the ancient conventions and a pathetic need to print.

      We lack a model in which we work the parse trees symbolically/pictorially using the keyboard. One where we can zoom structurally, rotate and slice
  • From the soapbox (Score:5, Insightful)

    by DaveAtFraud ( 460127 ) on Saturday March 20, 2004 @11:41PM (#8625242) Homepage Journal
    I have been working professionally in software development for not quite 24 years, with experience in aerospace/defense, established commercial, "dot com," and post-dot-com startup companies, plus I dabble in Linux. Still, this is a series of single data points taken in different industries at different times, so take what I have to say with a grain of salt.

    The worst programming problem is unrealistic expectations on the part of management. What a project will really cost and how long it will really take are always too much and too long, so budgets and schedules get cut. At least aerospace/defense makes an attempt to figure this out and bid the contract accordingly. The commercial world looks at when the next trade show is, or something else equally irrelevant, and then declares the software has to be done by then with the staff that's available. They end up getting what they paid for and blaming the programmers when it crashes (see my sig; yes, I do software QA). Established commercial companies aren't quite as bad, but there is still a tendency for making the sale to trump any realistic determination of what can be developed with the time and resources available. The resources may be there, but there is a tendency to try to produce a baby in one month by getting nine women pregnant, and then to wonder why there is no baby after the month is up in spite of the detailed published schedules.

    In contrast, I think one of the primary reasons free/open source software tends to be of significantly higher quality is that these factors don't come into play. A feature or program either is ready or it is not. If it is not, it stays as a development project until it either dies of apathy or enough people are attracted to it to make it into something real. For established projects, you have people like Linus who "own" the project and ensure that contributions only get incorporated if they pass muster.

    I find it amusing that one of the criticisms of FOSS is that its schedules are unpredictable. The reality is that software development schedules ARE somewhat unpredictable,* but at least the FOSS development process recognizes this and focuses on the quality of the program, rather than pretending the unpredictability doesn't exist and coughing up something that isn't really done to meet someone else's absurd schedule.

    * If someone develops the same sort of software over and over again (think IBM) they will eventually gain enough experience to have a reasonable shot at scheduling and resourcing a project correctly. The fewer data points you have, the less likely you are to get it right.
  • by ciggieposeur ( 715798 ) on Saturday March 20, 2004 @11:48PM (#8625278)
    Quoting the article: "Giving the [software architects] tools to shape software will transform the landscape, according to Simonyi. Otherwise, you're stuck in the unsatisfactory present, where the people who know the most about what the software is supposed to accomplish can't directly shape the software itself: All they can do is 'make a humble request to the programmer.'"

    As a programmer who recently stopped working for a very very very large computer firm that sells both hardware and software, let me say that Simonyi's point makes zero sense. Tools already exist to "shape software," and they are known as programming languages like Visual Basic, C, C++, C#, Java, Perl, PHP, Python, etc...

    I'm frankly sick of architects (that's the term for people who say they design software but don't actually design software) who bemoan the gap between their glorious visions and the real products their teams end up producing. These people need to click "Close" on their UML models and go get their hands dirty by writing parts of the production code. Then they'll understand the real-world constraints that their codeless design didn't account for, like internationalization, performance bottlenecks, user authentication, heterogeneous networked environments, and ACID transaction support (to name the first few).

    Oh yeah, and the reason open-source developers wrote a Unix-like operating system (Linux) and put a Windows-like interface on top of it (X11 + GNOME/KDE) is that these are both very reasonable and mature solutions for a variety of computing needs. If any of you architects out there want something besides Linux that conveniently abstracts away 99.9% of the hardware interaction yet also provides an easy-to-learn interface for general users, you are more than welcome to write it yourself. Or you can model it in UML, click some buttons, and hope it compiles.

    Why do I think software sucks? Because market droids and architects who forgot how to program get together and promise their customers AI in only six months.

  • by HarveyBirdman ( 627248 ) on Saturday March 20, 2004 @11:58PM (#8625324) Journal
    Oh it *is* an art.

    Sadly it's the type of art where they paint with feces.

  • by MagikSlinger ( 259969 ) on Sunday March 21, 2004 @12:34AM (#8625520) Homepage Journal
    I hate programming now. I loathe the thought of it. Not because I hate the act of programming, but because of the systems I have to work with.

    Sure, in the nice old days the C64 and IBM PC were fairly easy to code for, but they also gave you very little bang for the buck. The nice thing was that a couple of hours of programming could get something decent out.

    Now it can take me a couple of hours to do even a simple notepad application from scratch. I'm forever writing lines of code to fill in structures or respond to all the events an API wants.

    The computers got more powerful, and the APIs got more powerful too, but now I spend so much time filling out basic structures that I don't need. I'd rather a lot of that stuff were user-configurable or stored in an XML file somewhere. I don't want to have to know about allocating and positioning fonts! I just want to dump my text in a nice scrolling box.

    It's like a bureaucratic nightmare writing code now. Sure, there's MFC, etc., but that's like the "easy" tax form: the moment you want to do just one thing differently, you're back to square one. And the learning curve, sheesh!

    That's why I like projects like XUL. We've made the APIs so programmer-centric that I can't breathe anymore. I just want to code the important stuff and let someone else make the GUI pretty.
    • by soft_guy ( 534437 ) on Sunday March 21, 2004 @01:33AM (#8625753)
      From your post, it sounds to me like you are doing win32 programming. It also sounds like you are trying to use a bunch of those Microsoft calls that end in "Ex".

      You're right. Those damn calls have so many goddamned parameters and complicated structures to fill out, it feels like you're going on a snipe hunt every time you want to make a system call.
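
      For anyone who hasn't suffered it, here is roughly the ceremony a bare window costs in raw win32 -- a sketch from memory, assuming a plain ANSI build, so treat the details as illustrative:

        #include <windows.h>

        /* Every window needs a registered class and a message pump
           before you get so much as a blank rectangle. */
        LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
        {
            if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
            return DefWindowProc(hwnd, msg, wp, lp);
        }

        int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nShow)
        {
            WNDCLASS wc = {};
            wc.lpfnWndProc   = WndProc;
            wc.hInstance     = hInst;
            wc.lpszClassName = "NotepadIsh";
            RegisterClass(&wc);

            HWND hwnd = CreateWindow("NotepadIsh", "Notepad-ish",
                                     WS_OVERLAPPEDWINDOW, CW_USEDEFAULT,
                                     CW_USEDEFAULT, 640, 480,
                                     NULL, NULL, hInst, NULL);
            ShowWindow(hwnd, nShow);

            /* ...and there is still no edit control, no font, no scrolling. */
            MSG msg;
            while (GetMessage(&msg, NULL, 0, 0) > 0) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            return (int)msg.wParam;
        }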

      Try Qt. It's not quite as efficient as win32, but for GUIs it generally doesn't matter. For other things, the key is to make as few system calls as possible and instead rely on the C/C++ standard libraries.
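
      The same notepad-ish job in Qt is nearly nothing -- a sketch (using the modern Qt spelling, setWindowTitle; treat it as illustrative):

        #include <QApplication>
        #include <QTextEdit>

        int main(int argc, char *argv[])
        {
            QApplication app(argc, argv);
            QTextEdit editor;                      // scrolling, fonts, clipboard for free
            editor.setWindowTitle("Notepad-ish");
            editor.show();
            return app.exec();
        }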

      I'm lucky. I get to write commercial software for MacOS X using Cocoa. Still, the bugs I have are mostly due to having to make system calls. When I'm writing code to hold and manipulate data using the STL, I have very few bugs. Where I run into trouble is when I want to use something like Quicktime or other system APIs where I don't really know what the calls are doing and there are a lot of undocumented gotchas.

  • by MagicDude ( 727944 ) on Sunday March 21, 2004 @12:44AM (#8625560)
    One reason I believe that programming is bad these days is that the market is flooded with poor programmers. At my school, many freshmen come in with lofty goals about being computer systems engineers, or electrical engineers, or something like that, but they get their little freshman butts kicked by the curriculum and end up dropping engineering. Most people I've seen this happen to end up switching to Computer Science or IT. Combine these people with those who came into college to do CS or IT, and you have a glut of people entering the market trying to score programming jobs. And some of these people just fell into it, with no real interest other than graduating college. I think CS curricula should be more intensive so they aren't used as a fallback major for those who can't hack it in other fields.
  • by Hamster Of Death ( 413544 ) on Sunday March 21, 2004 @01:00AM (#8625630)
    Nobody knows how it should be done. Plain and simple. Sure there are 50 different ways to shuffle 1's and 0's around to produce something that kind of solves a problem someone may or may not have (customers seldom even know the problem they are paying to solve).

    Add to that the string of PHBs, the development team, and the testing team, and you end up with people who don't really understand the problem solving it with tools that may or may not work and that they might not even fully understand. (Have you proved GCC correct lately? How about .NET?)

    So until we can get a method that ties programming to what it really is (problem solving) we get to poke about blindly in the dark to find our 'solution' and hope it shuts up the customer long enough for them to write the check. We're slowly getting there, but because programming is still a new thing it hasn't been remotely fully explored yet.

    There's lots of room to figure out how to make a computer solve a problem once it's defined. Finding the problem is a major portion of the battle. The rest comes in finding a repeatable, provable way to solve it.

    Until that happens, you'll need your blindfold and poking stick handy.
  • by rusty0101 ( 565565 ) on Sunday March 21, 2004 @01:09AM (#8625666) Homepage Journal
    ... doesn't fit on a bumper sticker, so no one is really concerned about it.

    Seriously.

    Physics is about finding the one-inch formula, one of which is E = mc^2.

    Accounting is about making sure that the accounts balance. Profit is the difference between revenue and cost: Cost plus Profit equals Revenue. Accountants recognize that they are part of both the "Cost" and the "Profit." Good business management recognizes that eliminating the Cost will also eliminate the Revenue, which will also eliminate the Profit.

    Programming is not currently about finding the least expensive way to solve a problem. It is about finding a usable way to help people accomplish their desires.

    Computer Science, such as it is, is in most cases a euphemism for Software Engineering. The goal of Software Engineering, when I was going to school, was to write "provable" software: software that you can "prove" does exactly what the "engineer" who wrote it intended, nothing more, nothing less. If the developer writes to variable "n" in function "A", and the design says that variable "n" is only applicable to function "A", then when function "B" changes a variable "n", it should not affect what function "A" expects its own "n" to be.
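
    To make that concrete, a tiny C++ illustration of the guarantee (my own example):

      #include <iostream>

      int B()
      {
          int n = 99;   // a different n entirely -- local to B
          return n;
      }

      int A()
      {
          int n = 1;    // scoped to A: nothing B does can touch it
          B();
          return n;     // provably still 1
      }

      int main()
      {
          std::cout << A() << "\n";   // prints 1
      }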

    That is a very simplified version of what Software Engineering is all about. Software development is supposed to be about using the tools software engineers have built to create useful software. All too often it is about using tools other software developers have created instead, because those developers got tired of waiting for Software Engineers to get over being elitist and actually put together provable designs.

    A suspension bridge is a beautiful piece of engineering, and very often a beautiful piece of hardware as well. The Tacoma Narrows disaster is what happens when a piece of engineering comes across a situation that the engineer was not expecting and didn't design for.

    Likewise, software developers are using unproven tools to accomplish various tasks. Then they are asked to work around the problems that come up when their tools encounter situations that the developers of those tools never imagined would be asked of them.

    As a result, we currently get buffer overflows, memory overruns, and hundreds of other problems that can best be described as annoyances and at worst as security flaws.

    ------

    Alternatives to the WIMP design, as well as the Unix understructure.

    While Lanier bemoans the fact that we have not "surpassed" the Unix and windowing mode of computer architecture and interface, at best he can be said to have waved his hands at a new direction (VR). The only "improvement" I have observed is the time-based document stream view that has come out of MIT, and even that can be considered a WMP view (minus the icons, at the moment). For some people this may very well be an improvement, though I think it is only useful to people who manage information time-centrically. In other words, I think it's a great way to manipulate stuff like e-mail, but it probably wouldn't be of particular use in managing a bookstore inventory.

    "Mind Mapping" seems to me to be a "better" way of managing information, but I don't know that it is a great idea as an interface for a computer. Perhaps that's based upon my own limitation as my input to a computer is a small set of serial interfaces, rarely used concurrently, and the output is a couple of serial interfaces, and a "screen" or "window" of data that I process as information.

    As long as that is what my interface to a computer is, I will probably run into limitations as to what I can expect from a computer. Those limitations are going to affect how I interact with the computer, as well as how I, and others, develop software for the computer.

    Rather than bemoaning the current state of the art as being of the same
  • by jjohnson ( 62583 ) on Sunday March 21, 2004 @01:26AM (#8625720) Homepage
    I think comparing the progress that software development makes to the progress of hardware development is a fundamental mistake. Moore's Law depends upon the properties of the materials we're using and our ability to exploit them--chip fabrication, heat conduction, power consumption--not upon our ability to design chips. It's not the chip layout that's improving, it's our ability to milk what's already in the materials, which yields exponential growth. We're not responsible for that growth curve, the materials are. Doubling the length of our lever gets us the ability to move four times the weight.

    Software, on the other hand, is all about design. Of course it's not going to double in power every 18 months--we're not doubling in design ability every 18 months. If the computing power of our hardware were limited by our chip-design abilities, it would be improving just as slowly.
  • by Mr.Oreo ( 149017 ) on Sunday March 21, 2004 @01:57AM (#8625834)
    This is coming from a game programming point of view, but I think it applies to all facets of software development. Programming sucks these days because of the communities it has created.

    I'm not going to be a Yancy and specify where these points aren't applicable. Take what you read here with a grain of salt, but I guarantee you can apply one of these to an experience you've had.

    - Zealot Trolls. Answering someone's question with a code solution that contains even the pettiest OO fault, even if it has nothing to do with OOP, will get you nothing but a bunch of OOP zealots on your ass, saying "WRONG! That shouldn't be public" or "WRONG. The destructor should be virtual" or "WRONG. Should pass by reference." You get the point. There are more and more trolls on boards these days looking to stroke their egos by posting extremely minor corrections to mostly correct solutions.

    - Wheel Engineers. Stop making 3D engines. Stop making WinSock networking wrappers. Stop making ray-tracers. Stop making things that have been done 1000x before unless a) it's for fun/educational purposes, or b) you're going to do something someone else hasn't. Even if there's _one_ thing in your coding project that someone hasn't done before, it's definitely worth creating. Red Faction was the last game, IMO, that added anything new to the world of 3D engines other than "taking advantage of x graphics card feature." Physics is another area for innovation in game engines. Please stop re-inventing the wheel and giving it some cheesy name.

    - Meatless Code. Anyone who has worked with the 3DS Max SDK knows what I'm talking about. Important data is fragmented everywhere and accessed in 10 different ways. You spend more time reading the API docs than you do programming. I was reading through some ASP.NET code the other day, and it took 45 lines to update a table with an SQL command. I read through it, and it could have been done with 5 narrow lines of Perl. With C++, you could probably spend a solid two weeks writing generic "manager" code that does absolutely nothing. Programmers need to learn to draw the line between "productive" code and "silly" code. Having a DataObjectFactoryCreatorManager class for a "ping" program is a bit silly.
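
    To illustrate the 5-line end of that spectrum: here is roughly what such a table update amounts to when the plumbing stays out of the way -- a sketch against the SQLite C API, with an invented table and values:

      #include <cstdio>
      #include <sqlite3.h>

      int main()
      {
          sqlite3* db = 0;
          if (sqlite3_open("app.db", &db) != SQLITE_OK) return 1;

          char* err = 0;
          /* The whole "update a table" job -- no manager factories. */
          if (sqlite3_exec(db, "UPDATE users SET active = 1 WHERE id = 42;",
                           0, 0, &err) != SQLITE_OK) {
              std::fprintf(stderr, "update failed: %s\n", err);
              sqlite3_free(err);
          }
          sqlite3_close(db);
          return 0;
      }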

    If I could do the world a favour, it would be to send all coders a letter that simply said, "You are not the best. Live with it." I dread reading another reply to a simple question in which some dork awkwardly throws in that he's "a 20-year C programmer who wrote a compiler on a TSK-110ZaxonBeta when I was 11." No one cares about your background unless they ask, or it's relevant.

    Other than that, programming is fine. Except for Java.
  • by mabu ( 178417 ) on Sunday March 21, 2004 @04:59AM (#8626352)
    The problem is not the art, it's the "artists".

    Programmers are analogous to lawyers now. It used to be that passion and genuine interest were why most people were in this business. Now most people arbitrarily pick CompSci because they think it will give them career stability, and really giving a damn about the art of programming doesn't matter much. So, like lawyers, you have this new breed of people in the industry who are just there for the money and have no appreciation for the work and the accomplishment. You don't see lawyers trying to use their craft to change the world; you see them chasing ambulances. Likewise, you don't see programmers these days trying to make things better; you see them promoting ASP, Java, PL/SQL, and a horde of other get-bys so they can collect their check and move on.

  • by lokedhs ( 672255 ) on Sunday March 21, 2004 @05:34AM (#8626445)
    Every couple of months some know-it-all, usually with a few degrees in CS, comes out saying "programming sucks," "we haven't evolved in 20 years," "we need better tools that can automate things," and usually finishes off with "this can't go on! We're working on a tool that will transform programming!"

    Then you usually don't hear from them again. Want to know why? Because they're wrong.

    The fact is that regardless of what methodologies are used when developing software, in the end you are simply giving the computer instructions about what to do. No matter how many layers of tools you add on top of this, in the end you want to give the computer instructions for how it should solve the problem at hand. What it all boils down to is that the more complex the problem, the more detailed your instructions to the computer must be.

    Allow me to give an example. If you have some calculations to perform, you can do them in a spreadsheet app, but when your formulas grow more complex you start scripting the spreadsheet. After a while even that isn't enough, and you write a separate VB (or other scripting language) app to do it for you. Again, the problem might grow to the point where even your scripting language can't handle it, and you end up with a full app implemented in Java or C++ to solve your original problem. If you happen to be a CS professor, you will start thinking, "Why did I have to write this app to solve this simple problem? Programming sucks! We haven't evolved in 20 years! I'm going to write an app that takes the complexity out of programming!" You will publish an article on this, and then you'll spend the next couple of years trying to solve a problem that doesn't exist.

    Are you old enough to remember the craze about 4GL? The reasoning behind that is exactly the same as what Charles Simonyi says:

    Giving the former group tools to shape software will transform the landscape, according to Simonyi. Otherwise, you're stuck in the unsatisfactory present, where the people who know the most about what the software is supposed to accomplish can't directly shape the software itself: All they can do is "make a humble request to the programmer." Simonyi left Microsoft in 2002 to start a new company, Intentional Software, aimed at turning this vision into something concrete.

    Right. Bringing programming to non-programmers. Think about it. Does it make sense? As soon as you start programming, you ARE a programmer. Why, then, would you want to limit yourself to a limiting point-and-click tool? This is where 4GL failed. While it made it very easy to connect a few database tables together, the real business logic was hard or even impossible to create, and the resulting apps were extremely difficult to understand and impossible to maintain.

    One of the best tools to come along for programming in recent years is, in my opinion, IDEA [intellij.net]. It's a Java IDE that doesn't assume the programmer is stupid and doesn't understand programming; rather, it automatically creates the boilerplate code for you while you write. You still work with the code, you just don't have to write all of it.

    There's an enormous difference between IDEA and the "4GL mindset." While IDEA acknowledges that the best way of writing code is by typing in the instructions that will actually run, the 4GL mindset assumes that people are incapable of learning this and need fancy icons instead. Allow me to clarify: icons are not a good way of representing computer code.

    It feels to me that the people who claim these things have realised that they are not the world's best programmers. They realised that programming can be hard, but instead of acknowledging this and trying to get better, they decide that the "state of programming today" is at fault. If they don't know how to write good code, it's got to be the tools' fault. It couldn't possibly be that some non-academics are better programmers than they are, now could it?

    So

  • by varj ( 581639 ) on Sunday March 21, 2004 @07:03AM (#8626617) Journal
    Lack of soap!
  • by blahplusplus ( 757119 ) on Sunday March 21, 2004 @08:57AM (#8626908)
    The problem is near-infinite investment of time and work versus a finite amount of resources, for small customer payoffs. It takes massive investments of time to get the computer to do even the most basic interface and problem-solving tasks, even today. Tool and compiler/language development needs to get better. Right now many programs are too fragile, very hard for end users to modify (without recompilation), and far from robust. Think of it this way: in an ideal world, any program would run on any platform without having to be recompiled, and any necessary hardware/software dependencies would be automagically emulated (assuming you have the CPU power).

    Computer scientists have yet to create decent "building blocks" and tools that don't require a thorough understanding of how the tool itself was made. You don't expect a construction worker to know how his tools were manufactured or how they work internally; he can use them for their intended purpose without ever having to understand any of that. Too often, programmers are required to have cross-disciplinary knowledge of their tools that should not be necessary to get the job done.

    We also have weak analytic and development tools for helping other programmers, and new programmers, demystify what is going on without lengthy comments explaining what sections of code mean. That alone should tip you off: if you have to comment and explain something that ought to be as obvious as reading plain English, you've got a severe weakness on multi-person projects that shouldn't be there. No one ever gets confused about "Today I went to the supermarket and bought some food" versus "Today I picked up some food at the supermarket." Yet learning a programming language today is like learning a foreign language: you're forced to learn the rules and syntax of how the language works AND how it parses, plus a million other little "gotchas" in how it interprets your code.

    Right now, the tools we use to create things are simply in the dark ages. How many lines of code does it take to create buttons, lists, input boxes, and programmer- and user-friendly functionality from scratch? It takes a massive investment of time and energy today just to create the meaningful building blocks, let alone full programs.
  • by Anonymous Coward on Sunday March 21, 2004 @09:34AM (#8627046)
    I have been working in industry for a bit over 7 years and have made a few observations about what I believe makes software suck:
    - People try to rely on process to fix "common sense." By common sense I mean being careful, meticulous, and using one's brain in each situation one comes across. Granted, I work in a large corporation and this may not apply to some smaller companies, but I have noticed that when we find a bug, or have some other development issue, management tacks on more process to fix it. It's of course normal for people to make mistakes, but sometimes you just have people who continually use poor judgment--get rid of those people and get others who can do the job right!
    - Today's engineers have a large tendency to overarchitect, doing MUCH more than is required: overthinking what changes may occur in the future and designing code around that idea (not bad in and of itself, but I have found people get carried away here). What happened to the KISS principle?
    - I may sound horribly outdated, but I have serious questions as to whether OOP has bought us anything as an industry. Sure, when used properly, I believe it can have some benefits. BUT I think it gives the programmer so many powerful tools that incompetent programmers (of which there are many) turn those tools into powerful weapons. Most of the projects I have dealt with have been C based (~90%); the rest were some sort of OOP (the remaining 10%). Even though the quantity of procedural code greatly outnumbered the OOP code, the two most confusing, sh*ttiest pieces of garbage were OOP--I don't think this is a coincidence. I have found that people can learn the techniques and tools of OOP, but often they fail to understand the philosophy and why it was developed in the first place. Abuse of OOP creates masses of crap code that needs to be maintained.
