
How To Make Software Projects Fail 905

Bob Abooey writes: "SoftwareMarketSolution has an interesting interview with Joel Spolsky, of Joel on Software fame. Joel, a former programmer at Microsoft, discusses some of the reasons he thinks some very popular software companies or projects fail, including Netscape, Lotus 1-2-3, Borland, etc." This interview brings out some mild boiler-room stories which sound like they could be the basis of a good book, along the lines of Soul of a New Machine.
  • Code rewrite (Score:3, Informative)

    by Dominic_Mazzoni ( 125164 ) on Tuesday December 04, 2001 @10:15PM (#2657603) Homepage
    A lot of the article is about whether or not you should ever rewrite code.

    SMS: Joel, what, in your opinion, is the single greatest development sin a software company can commit?

    Joel: Deciding to completely rewrite your product from scratch, on the theory that all your code is messy and bug prone and is bloated and needs to be completely rethought and rebuilt from ground zero.

    SMS: Uh, what's wrong with that?

    Joel: Because it's almost never true. It's not like code rusts if it's not used. The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they've been fixed. There's nothing wrong with it.

    Joel blasts rewriting code some more, but doesn't really get into alternatives. Instead he talks about forcing programmers to get with the program, and if they don't, fire them.

    Isn't there sometimes a happy medium between completely rewriting the whole codebase and continuing to hack it up? For example, maybe you can identify certain modules that can be isolated and rewritten, then tested rigorously against the old code to make sure they're functionally identical. Or you could separate the old code into a library that just does the computational part of a program, and then write a new GUI around it from scratch.

    He takes Netscape as an example, saying the worst mistake they made was to rewrite it from scratch.

    I admit that it would have been nice if they released the source code to Netscape 4.x, and not just Mozilla. Even if the code was the most gawd-awful thing in the world, in the years since Mozilla started don't you think we (the open-source community) could have at least fixed some of the more annoying bugs in Netscape?

  • by Anonymous Coward on Tuesday December 04, 2001 @10:28PM (#2657651)
    As the article says, Microsoft has the money to keep two projects going at the same time.. the rules don't really apply to them.

    They might have had two code bases but they weren't stepping on each other's feet.
  • by tshak ( 173364 ) on Tuesday December 04, 2001 @10:47PM (#2657722) Homepage
    Actually, NT was a completely different product line (servers, high-end workstations), with the eventual goal of replacing another product line (home PCs). Now, with XP, the "old" product line has been deprecated. It's also not riddled with 16-bit code from the DOS era, as your post implies. Keep in mind that there's a difference between rewriting functionality, or even emulating functionality (for backward compatibility), and a complete rewrite of the entire codebase (in the millions of lines of code!).
  • by kelv ( 305876 ) on Tuesday December 04, 2001 @11:05PM (#2657803)
    The Rockefeller line is incorrect.

    He asked how many drops of solder they were using and why. He was given an answer of around 48 (I don't have the book Titan in front of me at the moment), but they could not give him a good reason for this number.

    Thus he required them to start at that number and decrease it until the process was no longer satisfactory, then increase by 1 from that point. This ensured they used the least amount of solder that performed to the required level.

    That's simply smart and efficient use of resources.
  • by os2fan ( 254461 ) on Tuesday December 04, 2001 @11:07PM (#2657816) Homepage
    Yes, Windows NT has compatibility layers. But then so do Windows 2.x, 3.x, 9x, OS/2, Linux and so on. But the support of DOS in Windows, and the support of OS/2 in NT, is fundamentally different from the support of DOS in OS/2 or Linux, or POSIX or DOS in NT. In the first group, the calls are handled by the core operating system itself. In the second group, the calls are handled by a subsystem loaded when there is a need for it, i.e. an external plugin.

    The support for OS/2 in NT is fundamentally different from the DOS/Win16 or POSIX layers, even by what MS says.

    The Resource Kit says that the only subsystem you have to disable to get C2 compliance is the OS/2 subsystem: i.e., it's the only one that calls directly to the kernel. The DOS/Win31 and POSIX subsystems do not call into the kernel.

    NT4 kernel has ONLY OS/2 compatibility. The bulk of the operating system is not available until the shell loads and the user logs on.

    All os2ss.exe and os2.exe do is shunt the calls directly to the kernel for execution. The file in question is there for OS/2 apps to call if they want console messages; it is a MKMSGF file, after all.

    But TechNet seems to go a fair bit into OS/2 and its subsystem, while the POSIX subsystem is just there as an afterthought: "We support POSIX".

    No, NT has correct version numbers because they follow from OS/2.

  • by 1in10 ( 250285 ) on Tuesday December 04, 2001 @11:48PM (#2657982)
    Rubbish, they did start from scratch.

    http://www.gerbilbox.com/newzilla/netscape6/general01.php

    As it states right on mozilla's own FAQ:

    In 1998 the original plan was for Netscape 5 to be based on MozillaClassic, with Netscape 6 based on the Gecko rendering engine. But MozillaClassic was scrapped in favor of a code rewrite late that year, and Gecko has been the heart of the Mozilla browser ever since. So when MozillaClassic was scrapped, the "Netscape 5" moniker was scrapped with it.
  • Re:Good point (Score:2, Informative)

    by haruharaharu ( 443975 ) on Tuesday December 04, 2001 @11:53PM (#2658000) Homepage

    Comments in code are like sex - even when it's bad, it's still better than nothing

    Totally wrong. While bad sex is still good, bad comments are worse than useless - they deceive you as to the point of the code, or they document what the programmers wished they were doing, while the code merrily trashes your data.

  • by SoftwareJanitor ( 15983 ) on Wednesday December 05, 2001 @12:05AM (#2658050)
    I agree with you up to a point with IE. Microsoft certainly isn't making any money on it, and doesn't look like they have any idea how. But the difference is that a company with a monopoly to fall back on can afford to keep around loss leaders whereas a startup can't. In the case of IE, I believe that Microsoft can, and is, trying to use it to leverage into other markets by trying to build proprietary lock ins to their .NET server products and lock everyone else out of those markets.

  • by Drahca ( 410495 ) on Wednesday December 05, 2001 @12:11AM (#2658075)
    An important factor in Linux' cost is its maintenance. Linux requires a *lot* of maintenance, work doable only by the relatively few high-paid Linux administrators that put themselves - of course willingly - at a great place in the market. Linux seems to be needing maintenance continuously, to keep it from breaking down.

    And you base that on? I admit that Linux takes a lot of work to set up and is badly documented. So as an administrator you need to know a lot of stuff and have experience. Therefore good sysadmins are scarce, and earn a decent living. But when Linux is actually running it just doesn't break down like that and doesn't require hours of maintenance, it just works!

    About the ext2 filesystem: it has had its best days and should be replaced. Ext3 is not the *long-term* answer, and I sincerely hope Linux advocates realize this. The reason Red Hat is using ext3 as its new default FS is simple: there is no valid alternative. Ext3 is the *only* new Linux FS which is included in the newest kernel release, and it is mature and fully tested. There are, however, alternatives on the way, such as XFS and ReiserFS. I hope one of those filesystems will become the new default FS for Linux. It is, after all, better to have one good FS than several mediocre ones.

    Back to Linux' cost. Factor in also the fact that crashes happen much more often on Linux than on other unices. On other unices, crashes usually are caused by external sources like power outages. Crashes in Linux are a regular thing, and nobody seems to know what causes them, internally. Linux advocates try to hide this fact by denying crashes ever happen. Instead, they have frequent "hardware problems".

    Please! If this were true, then why oh why are SGI and IBM (and many others) putting so much effort into Linux? Ah right, to patch it and make it work...

    The steep learning curve compared to about any other operating system out there is a major factor in Linux' cost. The system is a mix of features from all kinds of unices, but not one of them is implemented right. A Linux user has to live with badly coded tools which have low performance, mangle data seemingly at random and are not in line with their specification. On top of that a lot of them spit out the most childish and unprofessional messages, indicating that they were created by 14-year olds with too much time, no talent and a bad attitude.

    No argument here. Even if you stick to software from the distros themselves you end up with buggy programs with GUIs that resemble, well, I don't know anything quite so bad. They give error messages like: "Error: Unkown Error" or "Oops!" or "valoir parser" (French). This really is a problem, and I hope it will be addressed in the future.

    I could go on and on and on, but the conclusion is clear. Linux is not an option for any one who seeks a professional OS with high performance, scalability, stability, adherence to standards, etc

    Standards... like POSIX? Scalability... you mean as in SMP, Beowulf? Performance... as in used on many a webserver?

    Face it, Linux is evolving rapidly, and perfect as it may not be, it is one damned usable, scalable, well performing, stable, FREE OS!
  • by rsclient ( 112577 ) on Wednesday December 05, 2001 @01:39AM (#2658336) Homepage
    Um... Netscape wasn't making money on the browser? Are you stark, raving bonkers? They were hauling in money by the barrel -- Jim Clark says (in his book) that Netscape was seduced by how easy it was to charge for a browser and therefore didn't diversify into other areas.

    *you* may not have paid for your copy of Netscape -- and a lot of individuals didn't -- but lots of companies were happy to pay for site licenses.
  • by ttfkam ( 37064 ) on Wednesday December 05, 2001 @04:22AM (#2658689) Homepage Journal
    I have to admit, my rant wasn't intended to target you specifically. It was more of a general rant. Maybe I'm just cranky because k5 is still down.

    And I agree that comments aren't a substitute for documentation. Use cases and UML don't lend themselves to source code comments. However, my comment about "Cliff's Notes" still stands. Comments judiciously sprinkled within code make casual scanning for meaning much faster and easier no matter your coding skill. Coding skill simply speeds up the case where comments are not present.

    In answer to your question, while certain logical constructs are best expressed in code, overall concepts are usually just as easy if not easier to express in a natural language.

    Unless you are writing programs and libraries that are only going to be used from end to end by other programmers, the outside world is dictating needs, requirements, and problems. That outside world speaks a natural language. If you cannot map the problem set and the solution set to a natural language, I submit that you do not have an adequate knowledge of the problem.

    However in this case, comments would be a side effect.

    Programming languages describe and work with the nuts and bolts. No programming language that I am aware of is sufficiently abstract to directly map to complex real world problems; no creative use of partial template specialization in C++, dynamic classloading in Java, dynamic function definition in Scheme, etc. can do this.

    Conversely, no natural language that I am aware of can adequately describe a truly logical, deterministic domain either (just check out legal documents for proof). There needs to be more translation between the two.

    For now, those bridges are comments and documentation.

    -----

    For a less abstract argument, let's leave computers out of it for a second. Two people from Indianapolis can communicate with each other better than either can communicate with someone from Sydney. They all speak English, but the dialects, the idioms, and the inside jokes may not translate. So when we write for general distribution, we try to keep those local colloquialisms to a minimum (and no, I admit that I haven't been a perfect example of this in my posts :-/).

    Now keeping this analogy in mind, a programmer has an intimate relationship with a computer/compiler/system libraries/whatever. Together they have many inside jokes, goofy idioms, and function prototypes that mean absolutely nothing to someone else who, not being necessarily stupid, simply has a different focus or area of expertise.

    While it may be more effort, and it may reduce some of the free-wheeling fun you would have had if you were alone with your CRT, doesn't it seem appropriate to make it more comprehensible, if for no other reason than to allow others who specifically don't know as much as you to learn from a good example? Wouldn't that help to encourage others to continue using your code instead of scrapping it, just as the interview topic suggests? Wouldn't it help to keep a few wheels from being reinvented?

    Maybe I'm too idealistic. Maybe I'm just not jaded enough. But for me, part of being a lead software developer or a senior software engineer or a project lead is not simply to crank out a mountain of code while others look on and a select few help out. We don't code forever. Some of us get sick of it before too long. Some go into management. Doesn't it make sense to try and *teach* the ones coming up to do what we do? For me, it does. But, again, I am lazy sometimes. A few extra comments means that more than half of the time, I don't need to be asked what something I wrote was intending to do regardless of whether or not they are as good a coder as I am. They just read the comments.

    ...I'm also prone to being long-winded. ;-)
  • by achurch ( 201270 ) on Wednesday December 05, 2001 @04:42AM (#2658726) Homepage

    In answer to your question, while certain logical constructs are best expressed in code, overall concepts are usually just as easy if not easier to express in a natural language.

    This is more or less what I was trying to get across (not all that effectively, I guess); comments are most helpful when they give the reader more information than they can easily get from reading the code, and code that's easy enough to understand without comments shouldn't be commented "on principle". I guess it's just that my idea of "easy" is a bit different from that of other people. <shrug>

    At any rate, I do comment and document my code (or at least, I'm trying to do better than I have in the past), and I agree with the point you make with your Indianapolis and Sydney analogy. I guess I'm just cranky today, too...

  • by DunbarTheInept ( 764 ) on Wednesday December 05, 2001 @07:41AM (#2658942) Homepage
    Comments should explain WHY, not HOW.

    If all the comments are doing is telling you exactly what you already knew from being moderately literate in the language, then they are just ugly chunks of text that get in the way of reading the program.

    But that doesn't mean verbose comments are bad. If the verbosity is dedicated to telling you *why* something is being done, rather than giving a play-by-play description of how, then it is very useful. If I see a for-loop that counts backward from ( array_size -1 ) down to zero, don't give me a comment that says "counting backward in a loop". I can TELL that. But what I can't necessarily tell at a glance is *why* the author chose to count backward instead of forward - what was the algorithmic purpose to doing it that way - THAT is what I want to see comments explaining. And with THAT type of comment I am very happy when it comes with a lot of verbosity.

    The worst examples of useless verbosity are when you see code written by someone who has *just* learned a new programming language and is unfamiliar with the "culture" of that language. They tend to document things that everyone already knows like the back of their hand. (For example, a novice C programmer tends to go into excessive detail about the use of null chars to terminate strings.)
  • by jeremyp ( 130771 ) on Wednesday December 05, 2001 @09:40AM (#2659203) Homepage Journal
    Microsoft had a near monopoly in the BASIC interpreter market because there weren't really any alternatives. If we look right back to the beginning, it was Paul Allen and Bill Gates who wrote the *first* BASIC interpreter for a microcomputer. So you could say they were technical innovators in those days.

    I thought the reasons why MS-DOS got its monopoly position were well known. Basically, IBM were going to license CP/M-86 for their new computer, but the guy who ran Digital Research was out flying his aeroplane when IBM went to see him, and nobody else was prepared to sign the NDA. When IBM went to see Bill Gates, however, he was prepared to take them seriously and sold them an operating system he didn't have at the time. The rest is history.

    Everybody complains about M$ and how they use dirty tricks and their monopoly situation to stamp on the opposition, but they fail to ask the question: why have they got the ability to do that? It's because their competitors all made bad business decisions.

    Some examples:

    Apple lost the opportunity to dominate the desktop in the mid '80s because they refused to license their operating system without an actual Macintosh, which was perceived as a very expensive desktop system.

    Word Perfect lost its dominance of the PC word processor market because it failed to realise that Windows was the future for PCs.

    Lotus lost its position of dominance in the spreadsheet market because Lotus failed to perceive the importance of Win32. Although it has to be said that this only accelerated the inevitable because by then M$ dominated the word processing market and for only a bit more money you could buy Word with an adequate spreadsheet and presentation graphics package which all integrated together nicely.
  • by Chris Mattern ( 191822 ) on Wednesday December 05, 2001 @10:46AM (#2659485)
    > Let's compare:
    >
    > for (i = 0; i < array_size; i++)
    > free(array[i]);
    > free(array);
    >
    > and now let's look at:
    >
    > // get rid of the array
    > for (i = 0; i < array_size; i++)
    > free(array[i]);
    > free(array);
    >
    > Has your life *really* been so harmed? Is this
    > *really* so terrible?

    Well, yes. Because instead of a comment that
    states the blindingly obvious, we *could* have
    had:

    // search is done, so get rid of array
    for (i = 0; i < array_size; i++)
    free(array[i]);
    free(array);

    stating *why* we're getting rid of the array.

    Chris Mattern
  • by Junks Jerzey ( 54586 ) on Wednesday December 05, 2001 @11:14AM (#2659598)
    Like Joel, I have been programming for 20 years, so I'm certainly not trolling just because what I have to say isn't the in thing with the core of the Slashdot audience.

    I read Joel's interview yesterday, before it was mentioned here. Good interview, I thought, he makes lots of good points. But the debate about it here has nothing whatsoever to do with what was said there. Many of the comments key off of the word "Microsoft" and so immediately assume that the interview is crap and has something to do with justifying Microsoft's monopoly position (are these people really bots?).

    Most of the comments, though, are taking little bits of advice and twisting them around into mini-lectures about commenting style or programming issues, or they're simply being used as jumping-off points for the poster's own spouting. Let me make this perfectly clear:

    These people are not professional programmers.

    Anyone who has been through the wringer of commercial software development, and not just a few classes and some tiny open source projects, wouldn't be so religious about such trivialities. Real software development is different. It is not a battle between the Evil Bad Commenters and the Perfection of Beautiful Computer Science (or more correctly What My Professor Said in Class Last Semester). That's not how it works at all. All programmers know about commenting, about indentation style, and so on. There's more to developing commercial products, though: deadlines, missed features, last minute requests from the client, strict requirements for supported platforms, and so on. In this kind of environment, commenting style is a very minor issue (not to say it isn't important, but ranting about it is like ranting to an experienced guitarist about your pet music theories--when you barely know how to play guitar at all). A good way of spotting such people is to ask them what they think of "goto." Odds are you'll get all sorts of vitriol about the evils of goto and the benefits of structured programming and how you should never, never, ever, even if your life depended on it use a goto. An experienced programmer would shrug and say "sometimes they are useful, sometimes not."

    My advice: Learn, practice, work on projects, and over the years you'll become a pro. A college student without significant software engineering experience is not in a position to rant about how commercial development doesn't fit his ideals. The true sign of experienced developers is that they've been through it all and have enough experience that they don't feel the need to rant every chance they get--or at all.
  • by Anonymous Coward on Wednesday December 05, 2001 @11:32AM (#2659706)

    for (i = 0; i < array_size; i++)
    free(array[i]);
    free(array);

    and now let's look at:

    // get rid of the array
    for (i = 0; i < array_size; i++)
    free(array[i]);
    free(array);


    To me, there's no difference really. So sure, the comment's a good idea as it doesn't hurt too much.

    On the other hand, look at this code:

    i = 0;
    while (i < array_size) {
    free(array[i]);
    i = i + 1;
    }
    free(array);

    These snippets are completely identical functionally. However, one is much, much better than the other (guess which one is better, my snippet or yours).

    Your snippet is far superior to the code I posted above as your code is idiomatic. In C, the standard way to loop through an array is with a for loop - a for loop that starts at zero and tests with less-than (not less-than-or-equals) against the allocated size of the array.

    So when I see your for loop, I can simply glance at it and I know what it's doing. However, when I see some non-idiomatic construct like the while loop I posted above, I have to stop for a short moment and mentally verify that it does the right thing. The while example would be idiomatic for many other languages (such as Python).

    For a more concrete example, look at this:

    for (p = list; p != NULL; p = p->next)
    printf("data = %d\n", p->data);

    and compare it to this:

    /* print out the list */
    p = list;
    while (p) {
    printf("data = %d\n", p->data);
    p = p->next;
    }

    The second one has a comment, but I claim the first is superior.
  • by Anonymous Coward on Wednesday December 05, 2001 @02:03PM (#2660613)

    "Unfortunately, if I write software that doesn't suck, doesn't need patches, and does what you want, you'll buy one copy (Netware 3 [novell.com], WinZip [winzip.com], Eudora [emailman.com]) and in 2 years I'll be bankrupt."


    Sure... if you sit on your ass like a moron and just wait for the revenue stream to run out. On the other hand, if you wanted to keep the revenue coming, you might want to work on another project, and use the revenue and customer goodwill generated from the first success to leverage it. Wouldn't that make more sense from a business standpoint?
  • Re:API specs? (Score:2, Informative)

    by BlueWonder ( 130989 ) on Wednesday December 05, 2001 @04:22PM (#2661320)

    Well, nobody ever told me explicitly that -5000 + 5000 = 0, but I was able to deduce it.

    Exactly my point: It can be deduced from axioms ("specs") about integer numbers, addition, etc that -5000 + 5000 = 0. Therefore, I would not assume that asc(chr(x)) equals x unless it can be deduced from the specs.

  • by pdc ( 19855 ) on Thursday December 06, 2001 @02:26PM (#2665963) Homepage

    Here’s a similar problem I had. I wanted to generate web pages using ASP, but I wanted to use UTF-8 as the character encoding. The easiest solution seemed to be to make my UTF-8 encoding routine return a string, which I then pass to Response.Write. Make sense?

    This seems to work, but behind the scenes there is a conversion between 8- and 16-bit character sets and back. UTF-8 encoding produces a sequence of bytes. I was storing these in a VB string, which stores Unicode character data, so it transcoded my byte sequence as if it were character data encoded in Windows' ANSI encoding (MbcsToWideChar). But this is OK, because Response.Write does the inverse conversion: it takes UTF-16 character data and writes what it thinks is Windows ANSI character data (WideCharToMbcs).

    It all goes horribly wrong if WideCharToMbcs(MbcsToWideChar(x)) is not equal to x, for some value x used in the UTF-8 encoding. Normally this isn't a problem (for the platforms I have tested it on), but the recent addition of the EURO SIGN to Unicode and Windows ANSI has caused me trouble: some code in SQL Server does the transcoding itself rather than using the APIs, which causes this identity to be invalid... :-(

  • Re:the Dodo (Score:1, Informative)

    by Anonymous Coward on Saturday December 08, 2001 @01:39AM (#2674847)
    I believe you are wrong. The Dodo died out due to its evolutionary failure to protect ground nests. The Dodo was never hunted to extinction for sport or food (that is an old myth), nor was it stupid (as suggested in another post). FYI: it made for poor food due to its minor breast muscles (and fatty rump) and was of little value for sport. The Dodo was killed off for evolutionary reasons: the Dutch introduced pigs and other animals to Mauritius, and these ate the eggs in the ground nests.
