How To Make Software Projects Fail 905
Bob Abooey writes: "SoftwareMarketSolution has an interesting interview with Joel Spolsky, of Joel on Software fame. Joel, a former programmer at Microsoft, discusses some of the reasons he thinks some very popular software companies or projects fail, including Netscape, Lotus 123, Borland, etc." This interview brings out some mild boiler-room stories which sound like they could be the basis of a good book, along the lines of Soul of a New Machine.
Isn't it obvious? (Score:2, Flamebait)
I imagine the interview goes something like:
Joel: We drove them all out of business.
Re:Isn't it obvious? (Score:4, Insightful)
Re:Isn't it obvious? (Score:4, Insightful)
It is funny how every company he talks about lost to MS. Seriously though, one of the things he does say is:
Fortunately for Microsoft, they did this with parallel teams, and had never stopped working on the old code base, so they had something to ship, making it merely a financial disaster, not a strategic one.
IOW, have more money than God and throw it at any problem you're having trouble with. The minnows in the pond get beaten up by the 800lb gorilla (or something).
Re:Isn't it obvious? (Score:4, Insightful)
Didn't work for IBM in the early 90's, didn't work for Detroit in the late 70's and early 80's, still doesn't work for the government.
Re:Isn't it obvious? (Score:3, Insightful)
> The internet? Highways? Electric power (which failed privately)?
Show me a government-owned power company in the USA. Regulation is not ownership; much (not all) of it comes about because the monopoly was government-enforced in the first place, so they now have to control it. I work at a social service nonprofit, and housing comes to mind as something the government consistently screws up (all the effective housing programs here are other nonprofits that don't receive government funds). This is all of course US-centric -- other nations' governments might do some of these things well. It just further demonstrates that having the money does not mean it gets spent wisely in every case, which is the point you seem to be trying to sidetrack me from.
Re:Isn't it obvious? (Score:4, Offtopic)
OK, I'll mention a few: the TVA (i.e., the Tennessee Valley Authority, which lifted an entire region out of utter abject poverty during the Depression), SoCal's DWP (which not only distributes water and generates power, but also manages to generate power while distributing water), Sacramento's Municipal Utility District (SMUD), which generates and distributes most of the power in north-central CA, and finally the BPA (i.e., the Bonneville Power Administration), which built and still operates most of the hydroelectric power generation and transmission in the Columbia watershed. The Northwest has a lower cost of living partly due to low power costs (though it isn't guaranteed and has been rising) and low water costs (likely to continue given near-term global warming effects). Water, Power (and soon, Broadband) are _exactly_ the infrastructure investments that our government does well and should control. Private utilities are very vulnerable to economic fluctuations, where their executives' self-interest leads them to try foolish deals and daft accounting tricks in search of short-term performance, while government can weather tough quarters (and years) without worrying about the stock analysts.
In case you hadn't noticed, the major private power utilities in California are in bankruptcy and are desperately beseeching the State to bail them out (and might yet stick the tax and rate paying citizens after all, given how cozy their lobbyists have been with the CA PUC, Legislature, and Executive branch fixers, just about forever). One can only hope that the CA government and regulators now realize that the public is watching with interest and will nail them if they screw it up further, so they might fix it properly.
Of course, private utility executives and board members never do get held accountable, nor do their government co-conspirators, but if things were to be really just, there'd be a few of them hung from lamp-posts in San Francisco before this is over. Screwing the public for private gain is just the sort of thing that deserves "extreme prejudice."
Government utilities are a good thing, mostly (WPPSS notwithstanding, but that was a _private_ boondoggle admittedly triggered by a BPA error). Private utilities are simply disasters waiting to ripen, explode, and be discovered, unless they are regulated into castrated quasi-governmental entities. The term "private utility" really is an oxymoron.
Perhaps you should read the article (Score:5, Insightful)
You may want to check out this article by Robert Cringely: Microsoft's C# Language Might Be the Death of Java, but Sun's the One to Blame [pbs.org].
A lot of truth in that...
Re:Perhaps you should read the article (Score:5, Insightful)
It is a pretty bad situation for a market to be in if any one company is so big that all they have to do is wait for their competitors to make a mistake in order to be able to crush them. When any one company wields so much power, it makes it nearly impossible to sustain any sort of competition. Not to mention that when a market is ruled by an 800lb gorilla, all of the smaller players are pretty much forced to take more risks and make other decisions differently than companies do in a market where there are at least two or three players splitting up significant chunks of market share. Sometimes those risks pay off brilliantly, sometimes they are stupid mistakes.
Re:Perhaps you should read the article (Score:3, Interesting)
Going on a rant here: this is why I believe Eazel failed. They held on to their file management program, but failed to realize it would not make them money when they needed it most. This is what I believe happened with Netscape also. They could not figure out a way to make Navigator profitable, but kept developing it. It would probably have been a good idea to release the source code then, while MS would only have been comfortable going as far as making IE no-charge (thus giving Netscape the upper hand). I also believe IE was a failure for Microsoft, though people don't realize it. Now that IE is free, MS makes no money on it, and does not, IMO, know how to either. The result is that Microsoft is stuck developing the world's most popular web browser for free, with no way to recoup development costs. A total loss for Netscape? I don't think so.
Re:Perhaps you should read the article (Score:3, Insightful)
IE is no more a loss leader than any other piece of windows is. Doesn't matter what they tell the judge, it's a piece of windows. It's probably saved them a lot of money in the long run as well by not having to implement and integrate various viewers into other programs. Winhelp is gone, the help system now is 99.9% IE.
Even if it is a loss leader, it still sells Windows. This can scarcely be considered a failure. Bob and Comic Chat were failures.
Re:Perhaps you should read the article (Score:3, Informative)
IE does make money - as part of the OS sale (Score:3)
Re:Perhaps you should read the article (Score:3, Insightful)
>> too, but since they were powered by monopoly profits in OSes (and earlier
>> on by licences for BASIC in ROMs), they could afford to wait out their
>> mistakes and just keep throwing money at the problems until they
>> straightened them out.
> I believe the key to Microsoft's success is knowing when to let go of
> a bad idea. Such as MS Bob, MS Chat and various others. It is when you
> still believe in a product which doesn't make money that you fail.
What worked for microsoft was to enter a market, take control of it, and then find new markets to take on.
When they got big enough they did that in parallel.
They started with BASIC, then MS-DOS, and then tried to maintain control of the OS (with Windows) at the same time as writing applications to take on people like Lotus. Then they went for the server market, the database market, etc.
They can afford to kill off stupid/failing products because they have a number of revenue sources.
Borland screwed up all their markets, and managed to scrape through with a set of development tools.
Lotus lost on the apps, but managed to hold on to Notes long enough to get bought out.
Netscape lost the browser war, and got sold off for spare parts (NetCenter and Enterprise Server).
If you're only competing in one market, then you have to get it right pretty much every time (Oracle). If you drop the ball you will end up losing your market, and getting bought out (Informix).
MS didn't win because they had a monopoly; they won because they used their monopoly (both for power and resources) to let them diversify into any market they wanted.
Re:Perhaps you should read the article (Score:5, Insightful)
But wait... that wasn't Microsoft's first monopoly position. Even before MS-DOS they basically had a lock-hold on BASIC interpreters, which were one of the most critical parts of the pre-IBM PC desktop. Apple's BASIC, Commodore's, and Radio Shack's were all licensed from Microsoft. Most CP/M machines also bundled Microsoft BASIC. In fact a strong case could be made that the MS-DOS monopoly which grew into the Windows monopoly was itself leveraged from the BASIC monopoly.
There is a difference between pointing to monopoly power as the primary reason for Microsoft's success and calling it the only reason. Saying that Microsoft's monopoly power had nothing to do with the failures of other companies is as wrong as saying that those other companies made no mistakes.
As for Microsoft's marketing, I am not so sure I would call it 'shrewd' so much as pervasive and persistent. They've outspent just about everyone else for years, with the possible exception of IBM, but that is easy when you have monopoly profits to fall back on. It would be hard for a startup to outspend Microsoft on advertising, even in a niche.
Re:Perhaps you should read the article (Score:3, Insightful)
Not quite, buckeroo. In the late 1980's platforms from Apple and Commodore (and even Atari to some extent) were very viable, and continued to be up until around 1990.
Apple, Atari and Commodore all put together, even as early as 1987, could barely have scratched into the 2-digit market share range. And of those, certainly the only one making any headway in business sales was Apple. Even then, Apple was never very successful outside of the desktop publishing and graphics niches. Atari and Commodore were never serious competitors to Microsoft. Heck, Commodore even sold MS-DOS based x86 boxes. Hardly much of a competitor.
At which point DR-DOS and OS/2 were very viable competitors with Microsoft and had large chunks of marketshare.
Oh please. Neither of those ever sold any significant numbers. Microsoft already had over an 80% market share by then. I'm not saying that they weren't viable products, but that is different than being viable competitors. OS/2 and DR-DOS were never viable competitors at least in part because neither of them could ever crack the preload market. Microsoft was able to keep them locked out through restrictive contract terms amongst other issues.
Having viable products in competition with yours doesn't necessarily prevent you from having a monopoly. If you have a seriously dominant market share, then you can have a monopoly even without being officially granted a government-sanctioned monopoly, and without necessarily being the only product available. It seems to be a popular thing for Microsoft apologists to claim Microsoft isn't (or wasn't) a monopoly by making the definition of a monopoly so restrictive that practically nothing qualifies.
It wasn't until 1995-1996 that Microsoft became a real monopoly, when everything else just disappeared into the nowheresphere.
Please. Both Atari and Commodore were out of business by the 1994 timeframe, and neither were more than gasping for air for the last several years of their lives. Apple is essentially the only non-x86 holdout in micros. Microsoft certainly extended their monopoly during that time period, but it was far from the turning point.
You're also grasping for straws on the BASIC thing.
How so? Microsoft was making a royalty on virtually every microcomputer sold in the 8 bit days. About the only exceptions were Atari and TI, who had their own proprietary BASIC. Just like with most packaged PC's these days, it was virtually impossible to avoid paying the Microsoft tax even back then. Sounds like a de-facto monopoly to me.
Re:Perhaps you should read the article (Score:3, Interesting)
What a horrid column. Cringely just gets worse and worse. For example:
Java is buggy? A language is buggy? What's that supposed to mean? Perhaps he means that Sun's Java compiler is buggy or their JRE is buggy, but neither is true. As software goes, both are extremely un-buggy. Unless he actually has some evidence to the contrary. No, I'm just kidding; I realize evidence would be completely out of place in that column.
I'm sorry, Gates and Ballmer have bet their company on Java? I guess since he doesn't actually research his writing, why should he check it over afterwards?
Seriously, I've read more insightful comments browsing slashdot at -1.
Good point (Score:5, Insightful)
"My theory is that this happens because it's harder to read code than to write it."
He couldn't be more right. I've recently been asked to port some code from another group in the company. Upon first reading it, I found global variables being referenced from everywhere, and it looked terrible.
The more I looked at it though, the easier it got to read, and having an existing code base to work from made things much easier.
Plus, when I have problems with it, I can blame it on a "design error" by the previous programmers!
Re:Good point (Score:5, Insightful)
Re:Good point (Score:2, Insightful)
Re:Good point (Score:5, Insightful)
Re:Good point (Score:5, Funny)
Re:Good point (Score:3, Insightful)
Likewise, I've left comments in code to the effect of, "I know this is broken, but we had to have it yesterday. It should be written better using x, y, and z techniques"; this flags to future developers (and me) that it's a FIXME and points a route for the fix.
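A concrete sketch of that kind of flag comment (the function, and the suggested better techniques, are invented here for illustration):

```c
#include <string.h>

/* FIXME: quick hack shipped under deadline pressure. This linear scan
 * is O(n) per lookup; it should be rewritten with a hash table or a
 * sorted index when there is time. */
int find_user(const char *names[], int count, const char *target)
{
    for (int i = 0; i < count; i++)
        if (strcmp(names[i], target) == 0)
            return i;
    return -1; /* not found */
}
```

The comment names the debt and points a route out of it, which is exactly what a future maintainer needs.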
Re:Good point (Score:3, Funny)
Re:Good point (Score:4, Insightful)
or (3).. (Score:4, Insightful)
or (3), incessantly repeated nerdisms such as "if it was hard to write it should be hard to read" instill an improper sense into young, impressionable programmers.
Re:Good point (Score:5, Insightful)
All things in moderation--including comments (Score:3, Insightful)
I'll agree with your other points, but this:
Copious Comments - Lots of comments, clearly written and explanatory. [...] The best comment I heard was from a friend about a former coworker's code: "It's English with some C++ thrown in between the comments."
is nonsense. If your code isn't written well enough to make it obvious to another programmer what it does, then no amount of documentation will help the poor sop who comes along after you and has to maintain your code. Programming languages are just that--languages--and can be used to express concepts just as well (better, in some cases) than human languages. For example, if I see:
for (i = 0; i < array_size; i++)
    free(array[i]);
free(array);
then it's perfectly obvious that it's freeing the contents of an array; I don't need you to tell me so in a comment, and in fact if you do, it gets in the way. As one of my university professors said, "Use comments to tell me something I don't know." Comments are good things, of course--used sparingly. But there is such a thing as "too much of a good thing."
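The professor's rule can be made concrete with a hedged sketch (the allocator name load_rows is invented for the example): the first comment merely restates the code, while the second records something the code cannot say.

```c
#include <stdlib.h>

void free_rows(char **rows, size_t count)
{
    /* Redundant comment: "free each row, then free the array"
     * restates what any reader can see below. */
    /* Useful comment: each row was allocated separately by the
     * (hypothetical) load_rows(), so freeing only the outer array
     * would leak every row. */
    for (size_t i = 0; i < count; i++)
        free(rows[i]);
    free(rows);
}
```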
Hubris, laziness, and impatience (Score:5, Insightful)
for (i = 0; i < array_size; i++)
    free(array[i]);
free(array);
and now let's look at:
// get rid of the array
for (i = 0; i < array_size; i++)
    free(array[i]);
free(array);
Has your life *really* been so harmed? Is this *really* so terrible? Comments should not be written with the thought that your university professor would know what everything else means. Comments should be written so that all of those folks without a PhD in CompSci. know what it means.
What if the next joe to hit your code doesn't have a degree? What if the recently-hired intern was just handed a "C in 21 days" book and told by the manager to "fix it" because the programming team is snowed in (or similarly unavailable) and the customer is screaming? (Yeah, try and tell me that's never happened...)
A fine use of comments is (for example) every ten lines to say, in general, what is going on. One thing I used to do is write a comment at least every 10-15 lines. Why? When the next joe who comes along has to read/edit my code, scanning through some periodically placed comments will *always* be quicker and easier than reading the code.
The code effectively shows my implementation, but may not show my intention. I have coded for years. I started dreaming in code several years ago. Shortly thereafter, the code actually worked when I typed it in the next morning. That isn't the point. How good a coder you are isn't the point.
When you have a hundred thousand lines of code to go through, comments become like "Cliff's Notes." For the quick patch (probably the majority of code being written by most people), comments are invaluable. Who cares if I didn't read Moby Dick if I can still pass the pop quiz? If I need to make an in-depth study, I can still do so, but thank god for the "Cliff's Notes."
Now then, on to the "proper" use of comments.
1. Write out what you are planning to do in English (or whatever else may be the dominant language in your development group). Fill in every step in the problem. This is NOT pseudo-code. This is akin to: Find out who www.yahoo.com is, open a connection, ask for the main page, and check to see if our cache is still valid. If the cache is stale (the Yahoo page has been updated), get a new copy of the main page. If the cache is still valid, pull the page from cache instead. Drop the page into the "ready" bin and send a message to the user that the page is here.
2. Make a copy and label it "documentation."
3. Go back to the original, fill in all of the logic in whatever programming language at the appropriate points in your "documentation," and label it "source file."
This means that your documentation is done, your code is adequately commented, and your algorithm and intent(!) are clearly defined for both your co-workers (and yourself when you have to fix something ten months from now). If you can't spell out the problem and the solution in your primary native language, you sure as hell better not be trying to spell it out in a programming language that members of your team have only been using for two years.
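As a toy sketch of step 3 (substituting a bare cache-freshness check for the real HTTP fetch, with invented names), the English plan of step 1 survives directly as the comments:

```c
#include <time.h>

/* Plan, written first in English (step 1) and kept as comments:
 * check whether our cached copy of the page is still valid; if the
 * page has been updated since we cached it, the cache is stale and
 * we must fetch a new copy; otherwise we can serve from cache. */

/* Returns 1 if the cached copy may be served, 0 if a fresh fetch is
 * needed. Both arguments are seconds since the epoch. */
int cache_is_valid(time_t cached_at, time_t page_modified_at)
{
    /* The cache is stale exactly when the page changed after we
     * cached it. */
    return page_modified_at <= cached_at;
}
```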
The only excuse not to do the above is laziness. For some people, laziness is not considered a bad thing. It was noted as being one of the main virtues of a hacker -- hubris, laziness, and impatience. Hell, according to this measure, I myself am lazy from time to time. But cut the bravado, the beating of the chest, the battle cries of "I'm smart enough to figure this out, so should you be," and call a spade a spade. Avoiding comments means that you are being lazy.
Re:Programming for programmers (Score:3, Informative)
And I agree that comments aren't a substitute for documentation. Use cases and UML don't lend themselves to source code comments. However, my comment about "Cliff's Notes" still stands. Comments judiciously sprinkled within code make casual scanning for meaning much faster and easier, no matter your coding skill. Coding skill merely speeds up the case where comments are not present.
In answer to your question, while certain logical constructs are best expressed in code, overall concepts are usually just as easy if not easier to express in a natural language.
Unless you are writing programs and libraries that are only going to be used from end to end by other programmers, the outside world is dictating needs, requirements, and problems. That outside world speaks a natural language. If you cannot map the problem set and the solution set to a natural language, I submit that you do not have an adequate knowledge of the problem.
However in this case, comments would be a side effect.
Programming languages describe and work with the nuts and bolts. No programming language that I am aware of is sufficiently abstract to directly map to complex real world problems; no creative use of partial template specialization in C++, dynamic classloading in Java, dynamic function definition in Scheme, etc. can do this.
Conversely, no natural language that I am aware of can adequately describe a truly logical, deterministic domain either (just check out legal documents for proof). There needs to be more translation between the two.
For now, those bridges are comments and documentation.
-----
For a less abstract argument, let's leave computers out of it for a second. Two people from Indianapolis can communicate with each other better than either can communicate with someone from Sydney. They all speak English, but the dialects, the idioms, and the inside jokes may not translate. So when we write for general distribution, we try to keep those local colloquialisms to a minimum (and no, I admit that I haven't been a perfect example of this in my posts).
Now keeping this analogy in mind, a programmer has an intimate relationship with a computer/compiler/system libraries/whatever. Together they have many inside jokes, goofy idioms, and function prototypes that mean absolutely nothing to someone else who, not being necessarily stupid, simply has a different focus or area of expertise.
While it may be more effort, and it may reduce some of the free-wheeling fun you would have had if you were alone with your CRT, doesn't it seem appropriate to make it more comprehensible if for no other reason that to allow others that specifically don't know as much as you to learn from a good example? Wouldn't that help to encourage others to continue using your code instead of scrapping it just as the interview topic suggests? Wouldn't it help to keep a few less wheels from being reinvented?
Maybe I'm too idealistic. Maybe I'm just not jaded enough. But for me, part of being a lead software developer or a senior software engineer or a project lead is not simply to crank out a mountain of code while others look on and a select few help out. We don't code forever. Some of us get sick of it before too long. Some go into management. Doesn't it make sense to try and *teach* the ones coming up to do what we do? For me, it does. But, again, I am lazy sometimes. A few extra comments means that more than half of the time, I don't need to be asked what something I wrote was intending to do regardless of whether or not they are as good a coder as I am. They just read the comments.
...I'm also prone to being long-winded.
Re:Programming for programmers (Score:3, Informative)
In answer to your question, while certain logical constructs are best expressed in code, overall concepts are usually just as easy if not easier to express in a natural language.
This is more or less what I was trying to get across (not all that effectively, I guess); comments are most helpful when they give the reader more information than they can easily get from reading the code, and code that's easy enough to understand without comments shouldn't be commented "on principle". I guess it's just that my idea of "easy" is a bit different from that of other people. <shrug>
At any rate, I do comment and document my code (or at least, I'm trying to do better than I have in the past), and I agree with the point you make with your Indianapolis and Sydney analogy. I guess I'm just cranky today, too...
Verbosity isn't bad as long as it's WHY not HOW. (Score:3, Informative)
If all the comments are doing is telling you exactly what you already knew from being moderately literate in the language, then they are just ugly chunks of text that get in the way of reading the program.
But that doesn't mean verbose comments are bad. If the verbosity is dedicated to telling you *why* something is being done, rather than giving a play-by-play description of how, then it is very useful. If I see a for-loop that counts backward from (array_size - 1) down to zero, don't give me a comment that says "counting backward in a loop". I can TELL that. But what I can't necessarily tell at a glance is *why* the author chose to count backward instead of forward - what was the algorithmic purpose of doing it that way - THAT is what I want to see comments explaining. And with THAT type of comment I am very happy when it comes with a lot of verbosity.
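For instance (a made-up case, not from the column), here is a backward-counting loop where a WHY comment earns its keep, because the code alone cannot explain the direction:

```c
/* Remove every zero from arr in place, returning the new length.
 * WHY we count backward: after a removal, the elements to the right
 * shift left by one. Iterating backward means the shift only touches
 * indices we have already visited, so nothing is skipped; a naive
 * forward loop would skip the element that slides into the freed
 * slot. */
int remove_zeros(int *arr, int len)
{
    for (int i = len - 1; i >= 0; i--) {
        if (arr[i] == 0) {
            for (int j = i; j < len - 1; j++)
                arr[j] = arr[j + 1];
            len--;
        }
    }
    return len;
}
```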
The worst examples of useless verbosity are when you see code written by someone who has *just* learned a new programming language and is unfamiliar with the "culture" of that language. They tend to document things that everyone already knows like the back of their hand. (For example, a novice C programmer tends to go into excessive detail about the use of null chars to terminate strings.)
Re:All things in moderation--including comments (Score:3, Insightful)
The point of comments is not to say WHAT you're doing... as you say, that's obvious. The point is to say WHY you're doing it. Any programmer can see that I'm mangling d and calling a function; it might be useful to add a comment saying why, though. Then, when you're trying to improve performance, or to figure out why d changes value subtly in this routine, you can rewrite the code with the original intent in hand. Comments are good things - you should use them copiously to explain your thinking. Any compiler can figure out WHAT you're doing; a human being can thus do likewise. Only a sentient being can determine WHY you're doing it. Use your comments to communicate with sentient beings. It may take a paragraph to explain a single line of code. A page of code may require only a single line of comment. Use your own judgement, but always remember that your target audience is someone completely inexperienced with your project and somewhat inexperienced with the language you're using.
Re:Good point (Score:4, Interesting)
Let me explain myself. I have been the type of programmer you speak of. I have written copiously commented code. I have properly formatted my code and used standardized function names and such. After all, I was taught in college to write and comment my code so that any programmer could walk in off the street and understand it easily; that made it easy to replace me and I was.
It seems that when you follow good programming practice, you end up destroying your job security; and as silly as it sounds... it appears to be sooth.
Jaded in a realistic world.
Job security? (Score:4, Insightful)
Your perspective assumes your company requires a fixed amount of software. Think more imaginatively.
Better documentation means you can shove maintenance to a more junior programmer with less pushback.
Also, without good documentation, it's a b*tch to try to outsource/hand off pieces of the code you don't want to bother writing.
Besides, I don't care how well documented your code is, you should always be able to convince a boss that it's more efficient for you to make changes to it (even at a higher salary) than some cheaper guy who has never seen the code before.
--LP
Hungarian notation considered harmful (Score:5, Insightful)
I find the standard C declaration
char *strcpy(char *dest, const char *src);
much easier to read than the Windows-style Help, which is full of stuff like "LPCSTR lpBuf" and suchlike. The idea commonly called "Hungarian Notation" says that a variable name should include the type of the variable as a prefixed abbreviation in front of the name. This leads to stuff like:
byte[] baBuf;
whereas without Hungarian, it might be called:
byte[] message;
which would be much more meaningful.
Especially in object-oriented programming, the type of a variable is the least important piece of information about the variable, and has no place being abbreviated and prefixed to the name. The most important thing about a variable is what the programmer is using it for, and that is what the name should tell another programmer. If somebody really wants to know the type of a variable, then their editor or IDE should tell them. If it doesn't tell them automatically, they can look at the variable declaration, which states exactly what type the variable is. If programmers want the variable name to tell them the type, then what is the point of declarations? And why bother putting a comment near the declaration saying what the variable is for, if nobody is going to read the declaration or the comment anyway because they are just going to look at the Hungarian warts?
The argument that Hungarian notation reduces the possibility of assigning variables of different types to each other is long dead, now that compilers are well capable of throwing errors if any incompatible type assignment is attempted. I think that Hungarian notation is dead, or at least should be.
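A minimal sketch of the contrast (identifiers invented for the example): the Hungarian prefix repeats what the declaration and the compiler already know, while the descriptive name carries information only the programmer has.

```c
#include <string.h>

/* Hungarian style: char szBuf[64];  -- "sz" (zero-terminated string)
 * merely repeats the declared type. */
/* Descriptive style: the name says what the buffer is FOR; the
 * declaration and the compiler's type checker already cover the
 * "what type is it" question. */
static char recipient_name[64];

void set_recipient(const char *name)
{
    /* Copy at most 63 chars and guarantee termination. */
    strncpy(recipient_name, name, sizeof recipient_name - 1);
    recipient_name[sizeof recipient_name - 1] = '\0';
}

const char *get_recipient(void)
{
    return recipient_name;
}
```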
Re:Good point NOT (Score:4, Insightful)
Excellent point. My philosophy about commercial software is this:
Never forget why you're writing the code to begin with. The point is to get working, stable code out the door as fast as possible. Anything that does not accomplish this is a waste of time.
Doing your architecture work is fine, but do it on a whiteboard in your cube with your co-workers. Don't waste time holding formal design meetings and drafting useless documentation and diagrams because, honestly, nobody will ever read them.
Modularize/componentize your code as much as possible. Document what the module as a whole does, what data it requires, and what data it returns. You shouldn't have to waste time commenting every single piece of code. If you've modularized and documented what the module does, any decent programmer can figure out the rest.
In addition to not hiring idiots, don't hire people who love to wallow in bureaucracy and process. Besides not getting jack shit done, they impede everybody else.
If you want to comment and spend hours drawing Visio diagrams, fine. Wait until after the product is released to do this. These do not accomplish the goal (see point #1).
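The module-level documentation of point 3 might look like this (the module and its interface are invented for the sketch): a header that states purpose, inputs, and outputs, with the body left for the reader.

```c
/*
 * Module: rate_limiter (hypothetical example)
 *
 * What it does:    decides whether a client may make another request.
 * What it needs:   the client's request count so far, and the
 *                  per-window cap.
 * What it returns: 1 if the request is allowed, 0 if the client is
 *                  over the cap.
 *
 * The body is deliberately not commented line by line; a decent
 * programmer can figure out the rest from this header.
 */
int rate_limiter_allow(int requests_so_far, int cap)
{
    return requests_so_far < cap;
}
```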
Chris
Re:Good point NOT (Score:3, Insightful)
I'm surprised you haven't yet been modded down as a Troll, you are so far off-base. See Dave Parnas' "Software Fundamentals" for a detailed exposition of why your approach is wrong, wrong, wrong.
Your objective should be to get working, stable, maintainable, supportable code out of the door. This means it's commented and documented (and, yes, diagrams count as documentation). If you fail to do this, you're failing your users and (ultimately) your company.
Re:You're wrong about Hungarian (Score:3, Interesting)
No, he developed Hungarian Notation in the 70s as a small part of his PhD thesis at Xerox PARC. His thesis has a very interesting concept in it: imagine that programming is a game where two programmers are given the same task to work on, independently. They are allowed to communicate before beginning, but not once they start. If they produce identical code, they win big.
The point is that this is exactly how the software industry works: if you and I followed a set of conventions that caused us to generate the same code, we could maintain each other's code with no extra effort (and rewriting it would produce the same result again :)
It illustrates what is wrong with Perl's "there's more than one way to do it".
How To Make Software Projects Fail: (Score:5, Funny)
Step 1: Hire a manager who has never shipped software.
Step 2: Put him in charge of software development.
Step 3: Do nothing as priorities change weekly and deadlines slip away.
Step 4: Do nothing to stem exodus of clued-in employees to less-screwed companies.
Step 5: Force remaining employees to work 15 hour days. Provide subtle reminders that there's a recession out there.
Step 6: Do nothing as even non-clued-in employees flee.
Step 7: Hire a sweatshop in China to crank out code; make this sound like a good idea.
There, that was pretty easy. And, to be honest, everything beyond Step 1 pretty much happens on its own.
Re:How To Make Software Projects Fail: (Score:4, Funny)
Soul of a New Machine (slightly OT) (Score:3)
problem = clueless management/directors/execs etc. (Score:5, Interesting)
Then a skilled/talented developer and/or engineer wants more money. The employer does nothing to retain them - thus the skilled/talented employee leaves.
Now who maintains the code?
The other problem is bringing in short-term consultants for long-term projects. The non-technical people who make these executive decisions don't seem to see the value of KEEPING the code maintained by the talented/skilled person who BEGAN the development on it.
I know a lot of consultants read
Another problem is hiring non-technical managers to manage technical people. At my last job, we had a manager who quit his job on an automobile manufacturer's production line to take a job as the manager of a group of Unix admins. This "bumper jockey" had NO CLUE what we did for a living, and treated us like a bunch of unionized UAW slobs, not like professionals.
How can a non-technical boss effectively manage technical people???
Also - how about all the Ceo, Cio, Cto, eieio - types with their big salaries, catered lunches, etc... A lot of them have NO programming or hands-on technical experience. Hell - I've had the CTO come up to me and tell me that "The Internet was broken" when he knocked the dongle out of the side of his laptop - severing the network connection. And this guy is our Chief Technology Officer???? *lmao*
I'm not saying that only technological people can make technology companies work - but I do feel that managers should take some sort of hands-on classes to learn some basic programming and internet skills so they have SOME SORT OF CLUE about what WE all do for a living!
Death by Engineers (Score:5, Insightful)
This leads to a whole host of problems:
Many of them tend to think they're smarter than people in non-engineering roles.
Pursuant to this, they don't think PR and marketing and sales are "hard" or really even "important".
Again after #1, they're always right when in disagreement with marketing or sales guys.
Most of them haven't developed in a decade or more, so now they know just enough to be dangerous -- micromanaging technical details they don't understand well enough, chasing unnecessarily after bleeding-edge tech, etc.
Fail to understand that not everyone wants to always work 14 hours a day.
Laugh off meetings, so that eventually nobody in the company knows what's going on.
As a result, nobody's heard of us (no marketing budget, no trade shows, no nothing) and nobody's buying our products (engineers tend to make lousy sales guys; despite what they might believe, nobody wants to listen to a 3-hour ridiculously detailed presentation on your product).
There's got to be a happy medium someplace.
Re:problem = clueless management/directors/execs e (Score:2)
It is rapidly becoming harder for them to maintain their code, and develop new software that might earn them a profit.
CEO salaries (Score:3, Interesting)
Or is it that CEOs work that much harder than they did ten or twenty years ago?
The guys at the top did NOT pull themselves up by the bootstraps. They were hoisted up on the backs of others. Don't get me wrong. I have met some truly brilliant minds that were in charge of some companies. But I've also met some true boneheads who still gleefully pulled in $500K/yr+. We're moving headlong back into the days of the robber baron who rules over his fiefdom with an iron fist. Yeah, those were good times. Well, of course! When was the last time you took a pay cut for some peon...er...someone else's job?
Rewrite vs compatibility (Score:4, Interesting)
Obviously, MS's biggest problem, though, is that they don't know when to give up and actually rewrite. For instance, it seems that the Windows series of operating systems are all made with the intent of being backwards compatible and reusing core parts going back to early DOS systems. Backwards compatibility and code reuse are nice and all, but there is a limit to it and a time to give up.
It will however be interesting to see what comes out of the "total rewrite" of IIS.
"never a good idea to do a complete rewrite" (Score:4, Troll)
Re:"never a good idea to do a complete rewrite" (Score:3, Interesting)
The reason they needed to go for a 32-bit system is that the hacks built into DOS were just not enough for their (then) expanding programs. Put simply, they exhausted DOS and were looking for help to get 32 bits under way. Hence the MS-IBM cooperation on OS/2, and NT from MS's fragment of it. FAT32 is another extension of the fs to carry the FAT ideas onto larger disks.
The original name of NT was "OS/2 NT". It's just an evolution of an IBM product that they had code-sharing rights for. Ironically, the first version number of NT [ie 3.1] is correct: versions 1 and 2 were the common OS/2 base.
Apparently the one true MS invention was MS BOB. It has a massive entry in technet. Enough said.
Re:"never a good idea to do a complete rewrite" (Score:2)
Re:"never a good idea to do a complete rewrite" (Score:2)
This system is not present in Win9x or WinME. So I would still stick with the notion that NT is a modified OS/2, probably with VMS hacks in it. But I can't see them not recycling something that works and that they can use.
Re:"never a good idea to do a complete rewrite" (Score:3, Informative)
The support for OS/2 in NT is fundamentally different from the DOS/Win16 or POSIX layers, even by what MS says.
The Resource Kit says that the only subsystem you have to disable to get C2 compliance is the OS/2 subsystem: ie it's the only one that calls directly to the kernel. The DOS/Win31 and POSIX subsystems do not call to the kernel.
NT4 kernel has ONLY OS/2 compatibility. The bulk of the operating system is not available until the shell loads and the user logs on.
All os2ss.exe and os2.exe do is shunt the calls direct to the kernel for execution. The file in question is there for OS/2 apps to call if they want console messages, it is a MKMSGF file, after all.
But the technet seems to go a fair bit into OS/2 and the subsystem, while the POSIX system is just there as an afterthought. "We support POSIX".
No, NT has correct version numbers because they follow from OS/2.
Re:"never a good idea to do a complete rewrite" (Score:3, Insightful)
That is being generous. It is my personal opinion that Microsoft only included the POSIX subsystem in order to be "checklist compliant" to bid on government and commercial contracts that require POSIX compliance. Furthering my suspicions, they basically made the POSIX subsystem more or less useless by crippling it, and making it virtually impossible to run POSIX applications and Windows applications at the same time in any sort of useful manner. So bad was the POSIX subsystem that Interix and Cygwin were created to replace it as well as numerous other porting tools (WindU, MainWin), etc. Microsoft has since borgified Interix in order to make sure it never becomes any sort of serious way to wean Windows users off to other OSes. They want it to only be a one-way street.
Re:"never a good idea to do a complete rewrite" (Score:3, Informative)
NT was for nothing (Score:3, Insightful)
No one seems to be taking up this position in their replies to you, so I'll give it a try...
Maybe NT really was a mistake. Maybe Microsoft would be even richer if they had just kept evolving Win9x and let it accrete more features.
Did Microsoft really gain anything from NT? I don't mean gain things that important to geeks (reliability, performance, cleanliness, etc), I mean gain anything that is important to being commercially successful.
Name one feature that NT has and 9x doesn't, which has resulted in increased revenue for Microsoft.
Before everyone points at Microsoft ..... (Score:4, Interesting)
"Everyone thinks, poor Netscape, they were a victim of MS practices" - yes, they were, and yes, they innovated, but come to think of it, NS4 was crappy software that sucked.
"Poor Real Networks, MS is integrating all that stuff into the OS." - Good riddance
We blame Microsoft because their software sucks, and their practices suck
Only now, with Linux and Open Source, can WE the users contribute to what we want, not what some guy's proposed business model wants. I mean seriously
ICQ pioneered instant messaging, but give me a break, the thing's been in beta for years and uses up more memory than most anything.
My note to all burgeoning software companies - Make me something that doesn't suck, and I'll pay for it; don't force me to upgrade every 20 minutes to a more bloated piece of crap that is nothing more than a "portal" for all those neat advertising engines you've snuck in there... and I swear, if I hear someone say "monetize the desktop"
Re:Before everyone points at Microsoft ..... (Score:3, Interesting)
Think about it--can you name a non-microsoft app using OLE that actually works well? They can't *all* be fragile pieces of shit due to implementation incompetence.
Re:Before everyone points at Microsoft ..... (Score:2)
It just seems that all these software companies blame Microsoft because they can't keep up, and that might be true, but these guys aren't helping their own situations any.
Maybe what - 60% MS monopolistic prices, 40% own vendor incompetence. Sure, 60% is high, and unfair, but there is never an excuse for incompetence either.
Re:Before everyone points at Microsoft ..... (Score:2)
Your OS/2 comment is bang on. Technically, OS/2 beat win95 in every way (back in 92/93). But IBM charged money for the dev kits; MS gave theirs away (at least for the first little bit). IBM failed to get hardware vendors onboard; MS paid hardware vendors to write drivers for win95. IBM ignored the home market; MS put the home market square in their marketing sights. IBM's OS division couldn't even convince IBM's PC division to ship OS/2.
OS/2 2.0 had the jump on win94 (ninety-four) by a good year. When win95 finally shipped, OS/2 had had two years to catch the market but did nothing.
Re:Before everyone points at Microsoft ..... (Score:5, Insightful)
Unfortunately, if I write software that doesn't suck, doesn't need patches, and does what you want, you'll buy one copy (Netware 3 [novell.com], WinZip [winzip.com], Eudora [emailman.com]) and in 2 years I'll be bankrupt.
If I write software with tons of broken features and requiring constant upgrades for 'compatibility' and security (SAP [sap.com], QuickBooks [quickbooks.com], and Windows 95 [microsoft.com]), I'm guaranteed plenty of repeat customers.
Now if you'll excuse me, I need to go buy a $100 ink cartridge for my $30 printer.
Re:Before everyone points at Microsoft ..... (Score:3)
Ford, for example, stays in business not because Escorts suck so bad that you have to get the newest one every year to be road-compliant, but rather because people depend on their products and WANT to come back for the new ones. (Okay, maybe Ford is a bad example, but you get the gist.)
Name a business thats never made a mistake (Score:3, Interesting)
They all do, or they all will make a mistake. Pointing out that Netscape or Real lost because of a mistake is disingenuous, because EVERY business makes some sort of mistake. They spend too much time adding on buggy features, or they spend so much time getting it stable that they lack features, or both at the same time.
But Microsoft's monopoly position means that they're almost immune from mistakes. They can afford to have 3 teams rewriting code. They can afford to be a loss-leader for YEARS. They can write crap, but make sure it gets users from version 2.0.
And Microsoft makes mistakes, mistakes that would put any other software house out of business. Look at how late they got into the internet, and how many companies they bought out to catch up. The billions of dollars spent developing IE.
At that level, Microsoft doesn't need to give a fuck if they make a mistake. They have immunity from mistakes. They can use their monopoly to hide it, cover it up, or get a second chance later. Others, without a monopoly, cannot afford the expense of keeping up.
Re:Before everyone points at Microsoft ..... (Score:3, Interesting)
Microsoft was OEM agnostic, IBM was not. That's what doomed OS/2.
He certainly is into lunch, isn't he? (Score:2, Interesting)
Anyhow, I think he gives horrible advice from a computer science standpoint. "It doesn't matter how bad, buggy, kludgy, and crufty a code base is, never ever rewrite it". If you don't understand what the code does, if it's impossible to read, don't worry! That's the sign of good code!
It says a lot about Microsoft's tactics: do whatever takes the absolute least effort possible, and charge as much as you possibly can for it. All of those other companies failed because they were focused on quality, while Microsoft was focused on nothing but the bottom line.
Here's what I think is the worst software sin: writing shitty code and pretending it's not shitty. Regardless of how much gloss you put on it, bad code is rotten to the core, and it reflects that in stability and security. Why on earth do you think Microsoft falls flat on its face in those areas?
I remember a story about JD Rockefeller. He was touring one of his oil works, and he saw someone soldering the oil cans shut. He asked him how much solder he uses on each can. The man told him, something like 48. Rockefeller said "from now on, use 36". That's exactly the type of corner-cutting companies like Microsoft do. That's not good for the customers, it's not good for society, and it's not good capitalism.
Re:He certainly is into lunch, isn't he? (Score:3, Insightful)
And this is the difference between academic code and commercial code. Ever looked at 90% of the research projects from graduate students? Most of them barely work, or only work on one specific set of hardware (and not anything else), or require a huge set of work-arounds to get the code up and running. The reason for this is because this is theoretical work. It doesn't matter how well it works (well, unless your thesis is on optimization, but that's different), only that it works well enough to demonstrate your research. Commercial software is the complete opposite. It has to work, and work well, on many different configurations of hardware, and many different versions of software (Windows 95 vs 98 vs 98se vs ME vs XP, Windows NT4 vs 2K vs XP, Mac OS 7.x vs 8.x vs 9.x vs OS X, and so on), or your potential customers aren't going to buy it.
You didn't read the interview, did you? One of the main points in there was that by throwing away your old code and re-writing from scratch, you're throwing away years of experience and bug fixes. To use the example he gave of the nasty function that was supposed to do something simple but had a whole bunch of seemingly useless extra crap in it, the point was that all those extra little things that you'd throw away in a rewrite were necessary bug fixes. You throw them away, and unless you wrote those bugfixes in the first place (not likely, and even if you did, not likely you would remember), you lose all that information. That means that your new, "cleaner" version is very likely going to have similar bugs. Perhaps even the same bugs you fixed in your older, crufty code. Rewriting your code from the ground up is not focusing on "quality" (it may be focusing on "quality of code", which is a pretty useless standard so long as your code is proprietary), but instead focusing on "triviality". The bottom line is that in business (any business, not just the software development business), the bottom line is what's important. If you don't like that, stick to academia. You'll be happier there, and your potential peers in the commercial arena will be happier having you there.
Irrelevant red herring, and a bad example to boot. You're equating a potentially dangerous situation (in your example, less solder means a less solid joint, which means the oil could leak) with a harmless situation (reusing your old code, crufty as it is). In one case you're making a conscious decision to be less safe, while in the other you're making a conscious decision to leverage the work that's already been done.
As I said, I don't think you read the article. If it makes financial sense (you will sell enough copies to recoup your extra development cost and extended time to market), then there's nothing wrong with re-writing code (though Joel did suggest looking at the old codebase while writing the new, so that you won't miss any of those one-off bug fixes that are necessary but are also the source of the cruft). The problem is that it rarely makes much sense. Especially when you're in a competitive market (as all the examples he gave were -- when you're competing against Microsoft, the worst possible thing you can do is be more concerned about code quality than making a featureful, usable product that's available quickly).
Re:He certainly is into lunch, isn't he? (Score:4, Interesting)
What you're seeing there _is_ capitalism- it just happens to be 'laissez-faire'. Under current conditions, those guys are the only ones who survive, because they 'eat the lunch' of everybody else and make sure there's no choice to resort to, by hook or by crook. In strict laissez-faire as it's practiced in the modern world, there is no concept of 'society' at all. It's 100% Union Carbide and there is no such place as Bhopal...
Now, it's important to remember that there are OTHER types of capitalism, but to claim laissez-faire isn't capitalism seems a bit wrong. The trouble here is that you are aware of society and things like consequences to actions, perhaps you are aware of stuff like game theory that proves 'best doesn't always win' and you object to the rules of the game being virtually nonexistent, because you see what happens and you don't like it.
However, to do something about it you'll have to encourage a different sort of capitalism than the laissez-faire one we live with, and until then it will be about 'eating lunch' and to hell with society, customers, or even basic fitness to the task.
what a load of crap (Score:2, Insightful)
What happens when you kludge it a couple more times?
Eventually you need to go back and redesign the section you are working on from the ground up, with all of your goals in mind.
This is not throwing out the old knowledge; it's learning from it, and there are plenty of examples where it's worked out for the best.
What was spelled out in the interview was a recipe for a buggy mess.
Code rewrite (Score:3, Informative)
SMS: Joel, what, in your opinion, is the single greatest development sin a software company can commit?
Joel: Deciding to completely rewrite your product from scratch, on the theory that all your code is messy and bug prone and is bloated and needs to be completely rethought and rebuilt from ground zero.
SMS: Uh, what's wrong with that?
Joel: Because it's almost never true. It's not like code rusts if it's not used. The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they've been fixed. There's nothing wrong with it.
Joel blasts rewriting code some more, but doesn't really get into alternatives. Instead he talks about forcing programmers to get with the program, and if they don't, fire them.
Isn't there sometimes a happy medium between completely rewriting the whole codebase and continuing to hack it up? For example, maybe you can identify certain modules that can be isolated and rewritten, then tested rigorously against the old code to make sure they're functionally identical. Or you could separate the old code into a library that just does the computational part of a program, and then write a new GUI around it from scratch.
He takes Netscape as an example, saying the worst mistake they made was to rewrite it from scratch.
I admit that it would have been nice if they released the source code to Netscape 4.x, and not just Mozilla. Even if the code was the most gawd-awful thing in the world, in the years since Mozilla started don't you think we (the open-source community) could have at least fixed some of the more annoying bugs in Netscape?
Re:Code rewrite (Score:3, Insightful)
This happy medium is described well by Bruce Eckel in Thinking in C++ [eckelobjects.com]. He says in the chapter on design (paraphrased): "don't worry that getting some aspects of a design wrong will mean you have to rewrite everything. You won't - properly-written classes shield you from your mistakes." This is from the section that talks about the problems that occur early on in implementation, but applies equally to rewrites.
For example, maybe you can identify certain modules that can be isolated and rewritten, then tested rigorously against the old code to make sure they're functionally identical.
This is called refactoring and is now a widely-accepted industry standard practice for improving a codebase without rewriting it from scratch. The official web site is here [refactoring.com].
He's only partly correct about the rewrite thing (Score:3, Insightful)
For other kinds of systems I'm going to have to argue that rewrites can be very beneficial.
For systems where the development team has access to a regression test suite and the old (working) code at the same time, rewrites are much more easily done. You simply treat the existing code like a prototype: something that captures all of your requirements, but using a design that didn't end up working out (read as: turned into a freaking hairball as time passed!). You work through the old code, understand it, and then build up a new design that works out cleanly in all of the places where the old code was a hack.
When you are all done (or as you are working), you hit it with the test suite. This works out best if your process requires that all of those pesky little bugs found in the old code had to have test cases to reproduce them. (Obviously there are limits to this
Anyway, I think Joel's statement was just a little too broad. He's correct in some cases, but not all. Of course maybe I'm just one of those overly confident coder types
why are we listening to this guy? (Score:5, Funny)
Hold on, this man worked at Microsoft from 1991 to 1994. He led the Excel team. He led the VB team. This was Win16. Excel is great now, but do you remember how much it sucked before Office 95? And who the heck used VB for 3.1?
Even better: he wrote the Juno e-mail application. Believe me, there was no fine engineering there. Why does he know better than any other Tom, Dick, or Harry what makes software projects tick?
True, but you're missing the point (Score:2)
Likewise, Juno wasn't great from a technical perspective, but that's not why it's a giant FUBAR. It's the business model that's crippling the company - how many times do we need to see that even limited ad-sponsored ISPs Just Don't Work?
Would I go to Joel for advice on how to actually write code for a given project? I don't think so, no - he's almost certainly a good programmer, but there are plenty of other people out there who're probably more skilled. But from the perspective of managing software development - well, Joel does have a lot of experience doing THAT successfully (at least at MS).
One last thought - I wouldn't hold Juno against him as proof of business stupidity - he wrote that client in a simpler time, an innocent time. A time when click-throughs really were worth something - or so the marketing mavens thought.
Re:why are we listening to this guy? (Score:3, Insightful)
Um.. possibly because this guy has a better track record than *you* do when it comes to pushing out reasonable-quality *commercial* software, and on time?
Not "from scratch" (Score:4, Insightful)
Netscape made the "single worst strategic mistake that any software company can make" by deciding to rewrite their code from scratch.
Netscape didn't rewrite the browser from scratch. Back in April 1998, Communicator 4 was the current version; to get from there to the open-source Mozilla browser, everything that couldn't be distributed (code from other companies, and security code with export restrictions) was stripped out of the source code. What was left was made available as the start of Mozilla. It didn't even compile at first, but Mozilla didn't start from scratch.
Admittedly, the fact that this next-generation browser hardly worked at all for more than three years did keep Netscape from capturing any market share, but the browser had already been commoditized, and the battle had already been lost.
I think that the real browser battle is yet to come -- when the bulletproof and iron-clad Mozilla, carefully fine-tuned to scratch every developer's personal itch, is finally ready sometime next year to take on whatever Microsoft has got. I think that's when the real interesting things will happen -- not just on the technical and marketing fronts, but also on the legal front, as Microsoft finds ways to make sure Mozilla isn't a threat...
You misunderstand (Score:3, Interesting)
By the time the Mozilla project was announced, Netscape had already lost its market share and had a product that was clearly inferior to IE 4. Considering the number of bugs in the initial release of IE 4, making an inferior product was no easy feat.
Jamie Zawinski [jwz.org] has a great deal to say about this period in Netscape's history.
--Shoeboy
Sure-fire way of making a software project fail: (Score:4, Insightful)
No, I'm not cynical. Honest.
Program Manager != Programmer (Score:4, Interesting)
From the interview's lead-in material:
At Microsoft, the job title of Program Manager is given to the people that design the software. They dream it up, write the specs, hold countless meetings, and basically lay the path for the developers to follow. The developers (Software Design Engineer) are tasked with actually programming that software (and thus would be considered "programmers"). Just to round out the roster, the Software Design Engineers in Test (SDET) write the testing suites used by the test teams, and the Software Test Engineers apply those suites to the code following a test plan that they create. In that hierarchy, only the SDE and SDET jobs could be accurately described as "programmers".
Note that this is actually not so cut and dried: SDEs often do design work and test work, and SDETs often do the work of SDTs. PMs don't program, however (well, aside from JavaScript & HTML prototypes, anyway).
The point? Calling this Joel an ex-Microsoft programmer is misleading, because he was not. However, the position he held at Microsoft actually lends more credence to his views on design than if he were actually an ex-programmer, as part of the job description of a program manager is doing software design.
(Brief descriptions of all these job titles can be found at Microsoft's college site [microsoft.com].)
This guy is a turd! (Score:5, Interesting)
However, I feel slimy for just reading that stuff. Here is what I got:
1. Bugs are fine if they get your product delivered.
2. Load in useless features to drive sales, knowing that your code will suck.
3. Once you have gobs of crap code and a large user base, there will never exist the possibility of re-designing things (eg, WinXP) since it doesn't matter that code sucks (see point 1) and all that counts is revenue.
4. Being efficient is a waste of time. Let the hardware catch up with the crap code.
5. The customer never has valid input anyway.
6. Do it fast and furious, even if January 1900 is broken. Consumers are idiots anyway.
These may be great for sales, but ultimately you will build crap. Garbage in, garbage out. I would rather design good software that was well designed and efficient than vomit up mounds of bloat that will ultimately topple under its own weight.
Software built poorly will never hold up over time. If you look at how little UNIX has had to change over the past 30 years to keep up with "The Internet Age" versus the amount of work done to get XP "working", the future looks bleak for Microsoft. In 20 years, their OS will need 25GB of RAM simply to boot up. Of course, this seems not to concern them.
Re:This guy is a turd! (Score:2, Flamebait)
As for the points you bring up, you can't possibly understand writing software meant to be sold. Bugs are a part of anything; do you think your mom's car rolled out of the factory absolutely bug-free? Features do drive sales, and sales provide a means for continued development and the feeding of one's family. Not everyone lives with mommy and daddy. Completely trashing old code is oftentimes retarded; clean up dirty patches and whatnot, but you don't scrap working code entirely. Writing ultra-efficient software is often a waste of time since you're hammered by schedules. Today's screamer is a POS in 18 months, and product life cycles are often only a little bit longer than that, especially in business environments. People running Windows 95B on old 166 Pentiums are probably still using Office 95 or 97; they don't give a shit about new features. New features are the concern of people who really rely on new doodads and whistles. Customers know shit about development most of the time, and you can often predict what they are going to say. Read the fucking article, man; he goes into why customers' suggestions mean shit.
Hmm, sort of. . . (Score:3, Interesting)
I do somewhat agree with you on the other points, though I'm going to take the liberty of doing some "charitable" interpreting of Joel on a couple of the points:
2. Load in useless features to drive sales, knowing that your code will suck.
I think, with respect to this, Joel isn't interested in useless features. What he is basically saying is that if users REALLY want a feature, you're stupid to take the attitude that "I know better than you: you don't need this feature". You just lose customers that way. Remember, the customer is always right.
3. Once you have gobs of crap code and a large user base, there will never exist the possibility of re-designing things (eg, WinXP) since it doesn't matter that code sucks (see point 1) and all that counts is revenue.
Well, although I do believe there are certain situations where a complete re-write is in order, I think he makes a valid point. I think Joel (again I'm interpreting here) would say that it is better to revise the current code, clean the current buggy code up and "perfect" it rather than to start over. After all, starting over doesn't even guarantee that the new code will be any less crufty than the old code, just different! (Although sometimes your design was fundamentally flawed to begin with and you need to start over to deal with the intrinsic problems in it, but hopefully those kinds of problems can also be dealt with by revision instead of starting over completely.) Start over with a new code base and you sometimes just end up with new bugs. Plus, as he points out, not releasing an updated product to the market for 3 or 4 years REALLY hurts a technology company.
4. Being efficient is a waste of time. Let the hardware catch up with the crap code.
Hmm, he does sound a bit like he's saying that. But, to be charitable again, I'll interpret him as meaning that it's not worth spending a lot of money and time to get small incremental performance increases or size decreases. But, obviously you're not going to set out to make your code as inefficient as possible. And he does have somewhat of a point about Moore's law. How many people are still using WordPerfect 5.1? Undoubtedly there are still a few people... but is Corel making any money from those people? Probably not, and since Corel is a company that needs (desperately at this point) to make money, they are going to add features that users want, that they think will give them a competitive advantage over Microsoft, even if that means increasing the size of the program a little bit.
Re:This guy is a turd! (Score:4, Interesting)
These may be great for sales, but ultimately you will build crap. Garbage in, garbage out. I would rather design good software that was well designed and efficient than vomit up mounds of bloat that will ultimately topple under its own weight.
Everything he said was focused on achieving commercial success. He gave solid examples of times when companies did not do as he suggests and failed commercially. I can't think of too many examples of companies that have succeeded overwhelmingly by doing otherwise.
On the other hand, he did not talk about sculpting perfect software people will use for decades to come. I can think of software that succeeded in this respect, and it didn't do it by following his advice. (TeX comes to mind.)
For example, he talked about bloatware, saying it is a good thing. "Features make users' lives better if they use them, and don't usually hurt if they don't." I disagree with this when talking about "hurt" as "making the software more painful to use" instead of "cutting sales". Extra features introduce more bugs and take away from the time programmers could be fixing other bugs. They shouldn't be added until everything else is perfectly solid and possibly not even then.
Really, I think there is a time to listen to what this guy has to say and a time to completely ignore it. If you're developing a commercial project, his advice has a certain merit. If you're doing something as a hobby, producing a good piece of software you're proud of is much more important than producing a product before your competitors.
Another interesting read... (Score:4, Interesting)
Yes, there are a lot of companies who have been squashed (or, as Joel would say, "Had their lunch eaten") by Microsoft in large part because of Microsoft's money/marketing, but there are also a lot of companies that nose dived into failure because of their own ignorant business and technology decisions.
While Microsoft may not like the costs and annoyance of court cases and DOJ action, it must give them some satisfaction because most of those companies bringing suit against Microsoft are doing so because they think that's their best option. I would argue that for these plaintiffs, making better products would be a "better option."
A little unrealistic (Score:5, Insightful)
I agree with the spirit of what he is saying - often the "rewrite" is an ego thing, one programmer wanting to write his own code instead of reading someone else's - but there is no doubt that most serious professional programmers have looked at code that simply needs to be thrown away.
I know, I know! (Score:2)
Bloatware (Score:2, Insightful)
I have learnt a lot of good practices from one [joelonsoftware.com] or two [joelonsoftware.com] of Spolsky's articles, and for that I was prepared to put up with his cocky know-all attitude and routine rubbishing of every software company except the ones he has stock in, but lately he is full of tendentious statements like
So the space it takes on the hard drive is the only cost of bloatware? Try downloading IE 6 on a dialup connection and then check your phone bill.
not rewriting is why ME makes me nauseous (Score:3, Interesting)
It's supposed to be a simple function to display a window or something, but for some reason it takes up two pages and has all these ugly little hairs and stuff on it and nobody knows why. OK. I'll tell you why. Those are bug fixes. One of them fixes that bug that Jill had when she tried to install the thing on a computer that didn't have Internet Explorer. Another one fixes a bug that occurs in low memory conditions. Another one fixes some bug that occurred when the file is on a floppy disk and the user yanks out the diskette in the middle. That LoadLibrary call is sure ugly but it makes the code work on old versions of Windows 95. When you throw that function away and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.
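The function Joel describes can be sketched in miniature. Everything below is invented for illustration (the plugin loader, the thresholds, the bug scenarios); the point is just that each "ugly hair" is one field bug encoded as a guard:

```python
# A hedged sketch of the "ugly hairs" described above: every early return
# or special case stands in for one real, hard-won bug fix. All names and
# scenarios here are hypothetical.
MIN_MEM = 4 * 1024 * 1024  # hypothetical low-memory threshold, in bytes

def free_memory():
    # Stub standing in for a real OS query; pretend we always have 64 MB.
    return 64 * 1024 * 1024

def cache_locally(path):
    # Fix for the yanked-diskette bug: copy to a stable location first.
    return "/tmp/cache/" + path.split("/")[-1]

def load_plugin(path, has_ie=True, on_floppy=False):
    """A 'display a window'-style function after years of bug fixes."""
    if not has_ie:
        return "fallback-renderer"   # fix: install on a box without IE
    if free_memory() < MIN_MEM:
        return "low-memory-stub"     # fix: crash under low memory
    if on_floppy:
        path = cache_locally(path)   # fix: user yanks the diskette mid-read
    return "loaded:" + path
```

Delete this function and rewrite it "cleanly", and each of those guards has to be rediscovered the hard way.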
Now, maybe I'm just ignorant because I've never really developed anything with the 640k barrier or had to haggle with XMS and EMS and other whatnot, but it seems to me that those bugfixes are needed because there's something else fundamentally wrong with the code. IMO, sometimes you just have to rewrite, because your code is just fundamentally wrong.
Take DOS, for example. Microsoft added Windows. Then they made a bootloader, added real multitasking, and called it Windows 95, which wasn't that bad. Then they made some fixes, and called it Windows 98, which supported newer hardware, but was more unstable. Then only God and the developers at Microsoft know what happened to Windows ME, but that's when the bugfixes started causing bugs themselves. I mean, plugging in a USB printer shouldn't freeze the entire system!
Microsoft knew that they had a fundamentally wrong approach to an OS, so they wrote NT and 2K, and are now phasing out ME in favor of XP. XP replaces ME because ME is crap. However, this dude doesn't seem to realize that his own company isn't following his "wisdom."
Maybe I'm just cynical, but I wonder if there is some kind of ulterior motive here.
Valid point, but something else to think about: (Score:3, Insightful)
As far as Microsoft redesigning the OS with NT:
Microsoft is one of the few companies who can afford, financially, to have parallel development teams. When Microsoft started building NT they had help in funding the development because IBM was helping them (remember that NT started out of the OS/2 project that Microsoft was working on with IBM). And later on they had made such a fortune on Win95 + Office 95/97 that they had more than enough money to fund parallel development.
Microsoft realized that, at least in the early versions of NT, normal users would have a hard time administering NT and running a lot of the programs they wanted to use under NT. That is why they paid developers to keep developing Win9X/ME, EVEN AFTER they decided to redesign the OS. I would say that Win2k Pro is the first version of NT that most users would have no more problems with than if they were using Win9x.
Since most companies can't afford to keep parallel development teams in order to maintain the old product until the new product is "ready" for all their users, it usually (though not always, I think) makes more sense financially to try to evolve and revise the current code base.
Point 2: Microsoft IS, in most cases, following exactly the strategy that Joel outlines. Take Internet Explorer, for example. Up until IE4, IE just plain sucked as a browser. Microsoft kept revising and evolving it, though. With 4 it still had lots of annoying things about it, but was generally usable. With 5.x they fixed more bugs and got a lot of things working fairly well (of course, there were still some annoyances, especially from a web developer's perspective, like a bad implementation of Cascading Style Sheets, which still isn't quite right; I'd hasten to add, though, that Netscape 4.x's implementation of CSS is much worse: sometimes valid CSS would CRASH some of the 4.x browsers). Now they've released IE 6. Still not perfect, but adequate and productive for most users. The point of my writing about IE like this is that Microsoft has been able to revise its browser relatively quickly, whereas Mozilla/Netscape 6 has become really usable only in the last 4 months or so (here's where I point out that I'm composing this in Mozilla under Linux).
So, I'd say that Joel's point is somewhat valid, and that Microsoft, in fact, does follow his logic, in most cases (Office, SQL Server/BackOffice, IE, etc).
He missed one (Score:5, Interesting)
MS successfulness != code quality (Score:3, Insightful)
My God. So this is what Microsoft code looks like? It's a miracle it can be maintained at all. This sounds like sloppy coding by trial-and-error, at its worst. Code filled with "ugly little hairs and stuff" that "nobody knows why" is almost a guaranteed recipe for buggy, unstable code. If all these "bug fixes" were properly commented to begin with there would be no argument as to why they should be kept. Thank God for open source, where programmers are _proud_ to show off their code (well, a lot of them, anyway).
I would attribute the successfulness of Microsoft, and the failure of others, to factors other than the quality of its code.
Re:MS successfulness != code quality (Score:3, Insightful)
You don't code much do you? The point is that the environment you are given may not work as told.
This sounds like sloppy coding by trial-and-error, at its worst.
Not always:
Example: the hardware on device X has a timing hazard under conditions Y. Even if your code is perfect, it will not work under every condition (that is the real world). So under that condition you do something that seems to have nothing to do with the design under normal conditions. Voila! Problem solved; your code is more robust.
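A tiny sketch of that kind of workaround (the device and its "erratum" are entirely invented): on imaginary device X, the first read after wake can return stale data, so the code reads twice and keeps the second value. The comment is what saves the 10th maintainer.

```python
# Hedged illustration only: a defensive double-read of the kind described
# above. read_register is whatever callable your (hypothetical) driver
# exposes for reading the device.
def robust_read(read_register):
    _ = read_register()    # WHY: hypothetical device-X erratum: the first
                           # read after wake may return stale data
    return read_register()

def make_fake_register(values):
    # Test double standing in for real hardware: yields canned readings.
    it = iter(values)
    return lambda: next(it)
```

To someone reading `robust_read` cold, the discarded first read looks like a pointless "hair" - until they read the comment.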
In the land of make-believe you can just ask the hardware maker to recall their million+ units to fix your silly little issue, but we live in reality.
Those "ugly little hairs" may not make sense to the 10th maintainer on a project that was ported three platforms ago (even if commented). I've seen a lot of open source code, and some of it is pretty damn unreadable. I've seen libraries shipped with no API reference or samples, drivers with no documentation, and the list goes on. I guess "real men" figure it out just by reading the source, while wasting hours trying to get the code to work in their project.
Poorly documented open source is just as closed as proprietary code.
Not relevent to free software movement (Score:3, Interesting)
The fundamental difference between free software and commercial software is that free software is about the product, commercial software places a higher value on the profit than the product.
The classic engineering design method is to build something, break it, build it again break it again, etc etc.
I've never heard of good engineering design coming from build it, break it, fix just the bit that failed, break it, fix just the bit that failed, etc.
The whole program is one product; patches don't always fit in nicely with the overall program flow, and that's when a program becomes ugly.
Ugly code is more costly to free software because it stops people from wanting to get involved. If commercial software is ugly they just pay them more money or something; management doesn't care how much programmers like reading the code, they just care that it works and it's on time and on budget.
Thinking "simply"... (Score:3, Insightful)
Everything this guy says tells programmers to consider the bottom line, the almighty dollar. This attitude works in other industries, but will eventually bite them in the ass. (Automotive anyone?)
He's actually giving us directions on how to beat MS. So, if you are producing such software and are in this to make a fortune today instead of tomorrow... take notes; they will be invaluable... for the near future.
However, we all know this is the worst advice for those of us who use and program open source software. We want simple code. We want it to do just the basics. If it's too basic for you, here's the code, feel free to add to it.
Remember the automotive industry? Japan (and Germany) started out with simple basic cars and trucks. And the typical American car buyer? "They're so small, so plain and slow." Hmmm, now these little 4-cylinders are blowing the doors off of the bigger American cars. Because each time they built their cars, they started out simple and refined each part before they added on.
MS is winning today, but soon people will want their programs and OSes the way they demand their cars now: reliable and economical. It will happen, but how long will it take?
To succeed in commercial software... (Score:5, Interesting)
Yes, it's true. If you make a major mistake, you get killed, often by Microsoft. Some people think it's a pretty sad state. I just think it's capitalism and evolution. Dodo birds are extinct, and so is Visicalc. I don't want to be extinct, so I try to learn from the mistakes of the companies that have tried to go up against Microsoft. It's easy for me because I was inside and I know something about the way R&D worked at Microsoft. I've tried to share many of these lessons on my site.
To succeed in commercial software you have to get beyond being shrill and angry about Microsoft. You have to be cool headed and smart and study the past and make the right decisions for your company, not the right decisions for some arbitrary sense of aesthetics (although of course I am as big a fan of clean documented code as anyone.)
Production code is not so pretty. Open source code is not so pretty. No real code is all that pretty. It takes time to study it, understand it, and read it, to understand how it got the way it is. The more widely the code is used, the more true that is. For Fog Creek's latest software product, CityDesk [fogcreek.com], we stayed up one night tracking down a bug that only happened on Chinese Windows, where asc(chr(x)) turned out not to be equal to x, an assumption we had been making. How many of you ever thought about getting your code to work on Chinese Windows? No matter how well that piece of code was designed, I'm sorry, I've been programming for 20 years and I never realized that asc(chr(x)) was not always equal to x on some platforms, and I designed it wrong, and until someone tried it on Chinese Windows, I never would have known. Now the code uses byte arrays instead of strings and doesn't have that problem. There's a nice comment in the code saying "use byte arrays instead of strings because of MBCS versions of Windows." The code now works perfectly, but the byte arrays are a little bit uglier than strings. If ten years from now somebody rewrites CityDesk from scratch, I'll guarantee you that 95% of the Windows programmers working today would make the same mistake again, and stay up all night again.
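Joel's asc(chr(x)) trap can be reproduced in miniature with Python's legacy code-page codecs. This is a sketch, not his VB code: it just shows that on a multibyte code page (cp936, the simplified-Chinese codec), treating a single byte as a character does not round-trip, because many byte values are lead bytes of two-byte characters.

```python
# Does the byte value x survive a byte -> character -> byte round trip
# on the given code page? On single-byte code pages it always does; on
# MBCS code pages like cp936 it frequently does not.
def asc_chr_roundtrips(x, codepage="cp936"):
    ch = bytes([x]).decode(codepage, errors="replace")
    return ch.encode(codepage, errors="replace") == bytes([x])

# On cp936, lead bytes of double-byte characters break the round trip.
broken = [x for x in range(256) if not asc_chr_roundtrips(x)]
```

Plain ASCII values like 0x41 ('A') round-trip fine, which is exactly why the bug hides until someone runs the code on Chinese Windows.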
If a piece of your code is ugly and doesn't work, by all means, rewrite that piece. If it's ugly and works perfectly, you're wasting valuable hours rewriting it, time that could be spent doing something that will gain you market share. If you really have an undecipherable mess of spaghetti, 9 times out of 10 you're just being lazy about deciphering it because it seems like more fun to create it from scratch, but it's the ultimate in arrogance to think that your newly created from-scratch version is going to be all that great.
Chars and byte arrays are both the wrong solution (Score:3, Insightful)
Nope.
The right solution, which many would take if doing it again today, is to do it all in Unicode.
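A quick sanity check of that point (Python used here purely as a calculator): once you manipulate Unicode code points rather than code-page bytes, the round trip always holds, so encoding and decoding only need to happen at the edges of the program.

```python
# ord(chr(x)) == x holds for every Unicode code point, so code that works
# with code points never hits the asc(chr(x)) != x trap that bites code
# working with code-page bytes.
assert all(ord(chr(x)) == x for x in range(0x110000))
```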
Re:To succeed in commercial software... (Score:5, Insightful)
Early on in my programming career, I was looking at such a piece of code - something which was simple when first written (in C) 5 years previously, but had accumulated so many mods that the function was pushing 1000 lines (ugh). Some guy had put in a change, commented it as "for performance improvement", then commented it out with an extra comment explaining that it was taken out because it didn't work. At the time I thought, "Why is this guy making himself look like a dick?" Later I got it: he'd left it like that just to stop some other poor bastard from making the same mistake (it was a calculation we were constantly being pushed to make faster).
If you do something weird or clever, for f*ck's sake put a comment in (do as I say, not as I do).
Comment rules (Score:3, Interesting)
x = y + 3; /* add 3 to y */
Come out of that sort of training, and it takes a while to realize that comments aren't stupid, your teacher was. Comments should NOT say what a line of code does. Except in obfuscated code contests, if what one line does isn't obvious, either the writer or the reader is incompetent.
Every module and function should have a block of comments saying what the module or function does, what the function arguments are, and especially giving the specs and telling what assumptions are hidden in the interface. If possible, go back after software testing is done and note the conditions under which the module was tested -- this is invaluable for re-use, since it avoids the assumption that "this part was thoroughly tested", when it wasn't actually tested for what is now being done with it. Every time someone has to update the program, it's a "code re-use" for the modules that weren't touched...
In large functions, you might want to put in comments for each major block of code, but first think about whether the function ought to be split up... Line by line comments are needed only when you are doing something that is unusual, not straightforward, or handling an issue not stated in the specs; say WHY you are doing it that way, or why you have to do it at all.
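The advice above, as a sketch. The billing rule, names, and numbers are all invented; what matters is the shape: a header block with specs, arguments, and tested range, plus a line comment only where there is a WHY to state.

```python
def apply_late_fee(balance, days_overdue):
    """Return balance with the late fee applied.

    Spec: 1.5% per full 30-day period overdue, capped at 3 periods
    (hypothetical billing rule). Arguments: balance in dollars,
    days_overdue as a non-negative int. Tested for days_overdue in
    0..120; negative values are outside the spec.
    """
    periods = min(days_overdue // 30, 3)  # WHY: spec caps the fee at 3 periods
    return balance * (1 + 0.015 * periods)

# Bad line comment (restates the code):
#   periods = min(days_overdue // 30, 3)  # divide by 30, take min with 3
# Good line comment (states the WHY): the cap comment above.
```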
Joel is almost completely and totally wrong... (Score:4, Interesting)
However, the mistake is not in doing the rewrite, but in not managing the process well.
The number one reason for doing a rewrite is for a cleaner, more stable architecture for writing new features against. The need for a new architecture is discovered in the process of adding new code to the old, and discovering issues that were not adequately addressed in the older version, or in learning better methodologies, or in the existence of better tools and programming processes.
Programs that have been improved by a total rewrite...
Windows NT/XP over DOS (and DOS windows)
Excel over Multiplan
Word For Windows over Word for DOS.
Adobe Indesign over PageMaker
Quake over Doom
Quake II over Quake
That is off the top of my head...
ALL really successful software was originally generated by extremely small teams of EXCELLENT *STAR* quality programmers. (There are not very many of them. If you don't believe that programming is a talent industry, you don't really understand what it takes to make successful commercial software.)
The only other real option is unlimited resources (time and money), and it seems that where this exists is at Microsoft, and in some open-source projects.
The biggest problem comes from management believing that a random team of programmers can create a new platform from scratch, and that it can be done on a schedule that permits dropping the old code base.
ID does it by continuing to build new platforms with very small extremely talented teams.
ADOBE and Microsoft did it, with lots of time energy and effort, with parallel development against the old code base.
But this does not mean that it shouldn't be done. Those that do not rewrite eventually lose, because they are not able to respond to the market on the old code base, and are not able to make the kind of advances required to get existing customers to upgrade, or to get new customers to switch or start using the product if they weren't already convinced.
Managers are going to be ill-served in the long term by reading Joel's thoughts on the process, and ultimately the companies they work for will have their lunch eaten by new competitors that are not burdened by legacy code but still understand well the problem space they are trying to solve.
Oh the religion, oh the pain (Score:5, Informative)
I read Joel's interview yesterday, before it was mentioned here. Good interview, I thought, he makes lots of good points. But the debate about it here has nothing whatsoever to do with what was said there. Many of the comments key off of the word "Microsoft" and so immediately assume that the interview is crap and has something to do with justifying Microsoft's monopoly position (are these people really bots?).
Most of the comments, though, are taking little bits of advice and twisting them around into mini-lectures about commenting style or programming issues, or they're simply being used as jumping off points for the poster's own spouting. Let me make this perfectly clear:
These people are not professional programmers.
Anyone who has been through the wringer of commercial software development, and not just a few classes and some tiny open source projects, wouldn't be so religious about such trivialities. Real software development is different. It is not a battle between the Evil Bad Commenters and the Perfection of Beautiful Computer Science (or more correctly What My Professor Said in Class Last Semester). That's not how it works at all. All programmers know about commenting, about indentation style, and so on. There's more to developing commercial products, though: deadlines, missed features, last minute requests from the client, strict requirements for supported platforms, and so on. In this kind of environment, commenting style is a very minor issue (not to say it isn't important, but ranting about it is like ranting to an experienced guitarist about your pet music theories--when you barely know how to play guitar at all). A good way of spotting such people is to ask them what they think of "goto." Odds are you'll get all sorts of vitriol about the evils of goto and the benefits of structured programming and how you should never, never, ever, even if your life depended on it, use a goto. An experienced programmer would shrug and say "sometimes they are useful, sometimes not."
My advice: Learn, practice, work on projects, and over the years you'll become a pro. A college student without significant software engineering experience is not in a position to rant about how commercial development doesn't fit his ideals. The true sign of experienced developers is that they've been through it all and have enough experience that they don't feel the need to rant every chance they get--or at all.
Death March (Score:3, Insightful)
Whenever I hear of a software project failing, I think of this book because it explains in gory details what happens when software is treated like fast food instead of architecture.
When Joel Spolsky gripes about "rewriting" as the cause of failure, he's both right and wrong at the same time. Rewrites don't kill projects; MISMANAGED rewrites kill projects.
There are some other points that raise my suspicions about Spolsky's training and experience. Since the 90's, there has been a big effort in the industry to develop large scale products with some semblance of reuse -- one of the determining reasons for the lurch into object-oriented programming.
Spolsky's descriptions sound to me like he's still thinking of code, and of failed projects, that lacked modularity. Nor did he give much attention to other major factors such as FEATURE CREEP, where a small system becomes spaghetti over years and years of maintenance. Same with scalability: challenges definitely occurred in the past decade or so with the massive changes in processors, operating systems, and their associated APIs/internals.
But again, it all gets down to one's approach. If you treat software development like you're flipping burgers for the lunch crowd instead of building architecture, then you're going to have to deal with the indigestion that comes along with it.
Re:Taking lessons from...Better Yet Check this one (Score:2, Insightful)
Can you spot the "seat of the pants/never piss off the installed base"-oriented design fallacies just in that one paragraph?
1. More features are always better features
2. Coders are not responsible for optimization
3. Hardware vendors must not change h/w designs that would break installed base, even to improve their architecture
4. Your s/w is SOOOO... important that shipping delays for optimization/tuning/additional debugging are not to be accepted
further, from the rest of the interview:
6. There's never a reason to rewrite extant code, EVER... (here's my nominee for that reason -- Microsoft Outlook)
7. Architecture is secondary to UI; maintenance of the UI experience is the MOST important standard
8. Any problems caused by #7 above should never be fixed by redesign, but instead should be prioritized and patched by response to User problems.
i could go on, but i think i've gotten the highlights, did i miss any???
Gee, can anybody figure out a s/w product(s) family that seems to be a living demo of (my phrase) "Design By Release Date, Redesign by User Complaints" School of coding????
i'll even agree with Joel that you should be very careful with "scratch" redesigns, and too many people would rather rewrite viable code than fix it....
BUT JEEZ, should you hold on to a payroll system written in FORTRAN69 (or LISP or ALGOL or FORTH...), just because it works, even if you have NO OTHER apps in FORTRAN and don't have one single FORTRAN programmer working for you????
.....
Re:Taking lessons from...Better Yet Check this one (Score:2)
3.) Actually, if there is one lesson we have learned in the past 20 years, it is that hardware vendors have an almost impossible time making people recompile. You need only witness the design of IBM's OS/400 platform, which places a thin layer of microcode between the processor and the OS, the problems Intel has had in generating support for Itanium, and the transition to Java/C# from recompiles on every platform. You can innovate, but you had better not break compatibility with the existing platform.
So, as the old adage goes, if it ain't broke, don't fix it. That FORTRAN may be crufty, but it never blue screens or segfaults.
--tkr