Open Source Programmers Stink At Error Handling
Mark Cappel writes: "LinuxWorld columnist Nick Petreley has a few choice words for the open source community in 'Open source programmers stink at error handling'. Do you think commercial software handles errors better?"
Open source BSOD library. (Score:2, Funny)
that would make our life more convenient and
our applications more commercial-like.
Re:Open source BSOD library. (Score:2)
In the case of some of us... (Score:4, Funny)
Who spend days at a time at work (read: Stallman) without showers, removing the last 3 words provides a better description :o)
No, its not limited to OSS (Score:4, Insightful)
Things like checking pointers to see if they are NULL before using them. Simple, basic things that could prevent errors.
Error handling doesn't just mean catching the error after it's already happened. It also means being proactive about it before it happens.
A lot of programmers do not do that.
Re:No, its not limited to OSS (Score:3, Insightful)
I have forced the developers that I work with to add hundreds of error case checks in the last year.
Re:No, its not limited to OSS (Score:2)
Re:No, its not limited to OSS (Score:2)
My company contracted them to implement a site metrics solution. . . the first inkling I had that something was wrong came as a result of the following:
SENIOR E.PIPHANY CONSULTANT: Hey, this server you've given us? Does it have any utilities on it to run periodic processes? You know, like every day at the same time, run a process?
[The consultant goes on to name two or three 3rd-party process-scheduling apps that I've never heard of. This doesn't bother me too much though, because I'm thinking:]
ME (out loud): Well, there's cron, right?
CONSULTANT: "cron"? What's "cron"? Does it schedule processes?
. . .
Two days later their senior DBA tried to tell me that I couldn't get RAID-5 out of a three-drive array...
Errors? (Score:2)
Re:Errors? (Score:2)
What are these "errors" you speak of? Open source has no errors...
I think that's what he meant:
Program has shut down, "error"? What error? Open source has no errors.
Re:Errors? (Score:2)
New /. speed record? (Score:2)
That's the error I'm getting. Could it possibly be slashdotted in only 3 minutes?
Too bad, I was hoping I could say something meaningful, or maybe even relevant...
It's not an error... (Score:2)
That all depends on... your selection of course! (Score:3, Interesting)
Two Words: Apache and Tomcat
I'm a professional who works with the closed source equivalents all the time: Netscape iPlanet server, IIS and WebLogic.
Now: before you flame - I like working with WebLogic, but it is no better than Tomcat in my opinion (as far as error reporting goes). And IIS is a piece of crap! Not to mention Netscape's overly complicated UI, which blasts every change you've ever made and is completely out of sync with the flat-file configs.
Need I mention that Tomcat's error logging is set up in an XML file that is easy to read, modify, and translate into a simple report for (IT) management?
When was the last time Windows gave you a nice error.log when it blue-screened, or how about IIS on a buffer overflow?
I'm sick of the bashing of the free stuff out there. Sure, the fact that anyone can release a college project as open source may mean that, statistically, there are more projects without good error reporting, but the real projects are pretty darn good.
Re:That all depends on... (slightly OT) (Score:3, Insightful)
And I would be careful about holding up Tomcat as an (open source) triumph. It's had some major bugs all through the 3.x timeframe, and its team includes at least a few daytime professional "closed source" programmers (there's no correlation between the two, by the way).
Commercial programmers are worse! (Score:2, Insightful)
My textbook example is pwd.
It takes no arguments and produces only one line of output. Despite this apparent simplicity, I've been able to get each and every pwd that ships with a commercial Unix to dump core (almost always by executing it in an exceedingly deep directory).
The GNU shellutils version of pwd, on the other hand, has never dumped core on me.
I will admit, the fact that it took two decades for a non-crashable version of pwd to become available doesn't bode well for the many other vastly more complicated programs out there in any environment. But it does speak very highly of the GNU utilities in general, and I haven't even begun to praise the thousands of folks who have worked on making these tools quite portable!
Blame this on Open Source Programmers only? (Score:3, Interesting)
Testing (Score:3, Insightful)
Explanation via Analogy (Score:3, Insightful)
A great sysadmin would cut out their own heart before operating without known good backups. A great sysadmin would chew their own arm off before putting something into production without testing it first in a development environment. A great sysadmin *always* has a backout plan.
And how does a lowly admin reach this amazing level of greatness, you ask?
Admins get paranoid after making hideous, terrible mistakes that immediately result in Bad Things Happening.
I have personally: killed the email server for 2 days...shut down distribution for the world's largest distributor of widgets (every Thursday for 3 weeks)...destroyed all connectivity (voice and data) to the world for 12 hours...hosed the upgrade on a 700GB Oracle database (and our backups were no good). And any semi-experienced administrator will have, at minimum, two stories that are at least this bad (like my friend who shut down trading at Fidelity for a day).
And for every one of these instances, I immediately felt the wrath of: my manager, my manager's manager, other people's managers, other people who were affected, stray people wandering by my cube who weren't affected...I also became a part of the "mythical sysadmin storybook"--"I once worked with this guy, and (you won't believe this) he..."
I submit the hypothesis that, generally, most developers are not subject to this type of immediate and extremely negative feedback for their mistakes. Therefore it takes a developer a long time to develop an aversion reflex that conditions them to do "the right thing -- error handling, code documentation" instead of "the easy, interesting, enjoyable and sexy thing -- making spiffy algorithms, writing tight code".
Drifting into another analogy, error handling is like code documentation. Why do most developers get good (and a little obsessive) about documenting code? Because they finally spent some years trying to maintain someone else's tight, sexy code that is virtually incomprehensible.
So, my point is, developers take a long time to viscerally learn the need for good error handling by repeatedly getting whacked on the head for lack of error handling. It's like evolution in action.
Comment removed (Score:5, Funny)
Re:I think I've got it down (Score:2)
try {
--code--
} catch(exception e) {
throw e;
}
Error handling... (Score:5, Interesting)
Ultimately, if the engineer (or team of engineers) is inexperienced, error handling will be weak and error recovery nearly non-existent. However, a more senior engineer will generally start from error handling on up, making sure the code is robust before diving too deeply into business logic. The time taken for unit testing plays an especially large role here. The more time spent trying to break the code (negative test cases), the more likely you will have a system that has been revised throughout development to have rock-solid error handling/reporting/recovery.
[McP]KAAOS
yes, and most programmers suck (Score:2, Insightful)
Re:yes, and most programmers suck (Score:2)
Too specific: Programmers Stink at Error Handling (Score:3, Insightful)
The last time I used Word a drive filled during a save operation and left me with just a mutilated copy of the original file. (I will not use it again.)
My HP PSC 750xi software informs me every morning that its controlling software was exploded and I should reboot the host computer. (I'll wait for the OS-X drivers. If they are still bad the PSC goes out the door.)
The most amazing part is that this state of affairs doesn't surprise me. If my refrigerator intermittently defrosted and melted icecream all over the kitchen I'd be ticked. If my car mysteriously dies at stop signs I get it fixed.
Programmers have managed to beat down everyone's expectations to the point where half-assed is pretty good.
The only way I see to fix it is for consumers to refuse to buy flawed products, or legislators to pass laws allowing redress for flawed products.
I don't think either is likely.
I now use OSS for my mission critical work and fix what needs it.
Maybe people expect too much from software. (Score:5, Insightful)
how is a programmer expected to deal with the CD being scratched? Does your car still work if the transmission is damaged or half the engine has been riddled with bullet holes?
Again, a very unexpected and unnatural scenario. How well do cars function when they run out of fuel?
But how well would your refrigerator react if you treated it shoddily, such as by leaving it outdoors intermittently or disconnecting and reconnecting the power several times a day?
Now, I'm not trying to excuse sloppy software development but the fact of the matter is that software is constantly expected to work perfectly under situations completely outside its specifications yet we don't expect this from other items or appliances that we use.
Re:Maybe people expect too much from software. (Score:2)
Programmers are people, and people make mistakes. Users are people, and people make mistakes. Most often I've found that problems arising from stable software come from people using the software in a way it shouldn't be used. Using a full disk and complaining because it saved a mangled version of your original is just asinine. You are saving OVER a file. It is just extra code and bloat to ensure that your drive still has enough space every time you do a file save operation. I could understand being upset if he was doing a Save As operation.. but get real, it's his fault - not the software's.
This mid-afternoon rant session has been brought to you by a slow day of engineering as my current project nears delivery.
Graceful degradation (Score:3, Insightful)
I agree with you that software in general is a lot more complex, and used in a lot more unexpected ways, than something like a car.
OTOH, there is such a thing called graceful degradation -- that is, if you push the limits of the software, it shouldn't just suddenly barf and die on you, but degrade gracefully. Too much code I've seen (both open and non-open source) assumes too much -- and dies badly when the assumptions fail.
It is possible, and not overly difficult, to design software such that it degrades gracefully. Sad to say, sloppy programming (and programmers), deadline pressure, and disinterest in handling error conditions dominate the world of software. Not many would put in the extra work to make a program degrade gracefully, because it doesn't have a very visible effect -- until things start to fail. And too many programmers have this "test only the cases that work" syndrome.
Re:Maybe people expect too much from software. (Score:2)
If you don't like checking the result of malloc, then write a function that does it for you. When you run out of memory, do your best to exit gracefully. It's not rocket science. It's just tedious, so people don't like it.
It's also hard to test, because some error conditions may be hard to reproduce. We may be able to take a hint from the hardware crowd, who have been using "design for testability" for some time now.
Re:Maybe people expect too much from software. (Score:2)
A computer should deal with a scratched CD by going "Uh, garbage data. Don't like that. Shut down system gracefully, don't try to find out the sound of heads on platters."
If the transmission is screwed up in your car, you don't expect it to crack the drivetrain.
EVERY piece of software should be checking for sufficient disk space during a WRITE operation. Only fools and deities do otherwise. You shouldn't blindly try to overwrite the old file with the new and hope it works unless you've got a damn good reason.
When your car runs out of fuel, you certainly don't expect it to ruin the engine. Give it fuel (or disk space, as it were), and it's happy again.
There's a difference between dying heroically and taking everyone with you. I don't mind if a program barfs on occasion because of slight variance in the phases of the moon. The software industry is still young and needs to learn more lessons from the engineering industry. I mind when it barfs and takes my data along with it.
Not TOO much (Score:3)
First, about your analogies...
If I wear out the door key to my car, the car should not burst into flames when I try to open the door.
If my car runs out of fuel, I expect that after rectifying that little problem (and bleeding the injectors) it will be just like new. I do not expect that it will ruin my tires.
And yes, I have kept my refrigerator outdoors. I kept it on the front porch for two months while the house was being renovated 10 years ago. It worked just fine. It is 30 years old now. (That's 15 PC generations to you young whippersnappers. Moore's law says the new fridges should be 1,000,000 times colder now.)
About the cubase incident...
Yes, the CD is scratched. I expect that I won't be able to re-authorize my copy of the software, but don't ruin ALL the data on my hard drive! (It's actually worse. It was my wife's laptop. You do NOT want to have to tell my wife that you just wiped out her laptop.)
About Word...
Destroying the on-disk copy of a document before successfully writing out the new copy is just plain stupid. Particularly on a Mac, where there is a special file system function to swap two files. You write the new copy under a fake name, swap it atomically (even over file servers) with the original file, then delete the fake-named file (which now contains the old data). No one gets hurt in error conditions, and no one can ever have bad luck timing and read a partially written file off the file server. Life is good.
The third case (the PSC) you don't mention, but it isn't really a case of graceful degradation. It's just an irritating bug. Honestly, I'd dump the device because of the irritation, but it actually feeds card stock out of its paper tray! A rare quality in a printer.
I suppose the more explicit point I should have made is that bad things are going to happen to software and it requires effort from the programmer to deal with it. Sometimes just a tiny bit of effort. Cubase performed so badly with a bad CD that I suspect they never tested it. They write about it in their documentation, but they probably didn't test it. The Word example is just careless programming which could have been trivially avoided if the programmers understood the platform's file system calls.
How about the cost? I estimate that it probably doubles the engineering effort to handle the exception cases to a degree that would cover the incidents I note above. In the calculus of software development, the benefits do not outweigh that cost.
What's the point... (Score:2, Interesting)
8)
On a serious note : I've written commercial and non-commercial code. Sometimes I'm obsessive about completeness, sometimes I'm pragmatic. No point in generalizing about OSS vs. commercial.
slashdot effect - slashdot mirror (Score:5, Informative)
Here is a select-n-middlemousebuttonclick(with my formatting):
Title: Open source programmers stink at error handling.
Outline: Commercial programmers stink at it too, but that's not the point. We should be better.
Summary: Why are we subjected to so many errors? Shouldn't open source be better at this than commercial software? Where are the obsessive-compulsive programmers? Plus, more reader PHP tips. (1,400 words)
Author: By Nicholas Petreley
Body: (LinuxWorld) -- Thanks to my very talented readers I've been able to start almost every recent column with a reader's PHP tip. I'm tempted to make it a regular feature, but with my luck the tips would stop rolling in the moment I made it official. So I want you to be aware that this week's tip is not part of any regular practice. It is purely coincidental that PHP tips appear in column after column. Now that I've jinx-proofed the column, I'll share the tip.
Reader Michael Anderson wrote in with an alternative to using arrays to pass database information to PHP functions. As you may recall from the column Even more stupid PHP tricks, you can retrieve the results of a query into an array and pass that array to a function this way:
<?PHP
$result = mysql_query("select name, address from customer where cid=1");
$CUST = mysql_fetch_array($result);
do_something($CUST);

function do_something($CUST) {
    echo $CUST["name"];
    echo $CUST["address"];
}
?>
Michael pointed out that you can also retrieve the data as an object and reference the fields as the object's properties. Here's the above example rewritten to use objects:
<?PHP
$result = mysql_query("select name, address from customer where cid=1");
$CUST = mysql_fetch_object($result);
do_something($CUST);

function do_something($CUST) {
    echo $CUST->name;
    echo $CUST->address;
}
?>
I can't help but agree with Michael that this is a preferable way to handle the data, but only because it feels more natural to me to point to an object property than to reference an element of an array using the string name or address. It's purely a personal preference, probably stemming from habits I learned using C++.
Subtitle: OCD programmers unite
Nothing could be a better segue into the topic I had planned for this week. I'm thinking about starting a group called OLUG, the Obsessive Linux User Group. Although I know enough about psychology to know I don't meet the qualifications of a person with full-fledged OCD (Obsessive-Compulsive Disorder), I confess that I went back and rewrote my PHP code to use objects instead of arrays even though there was no technical justification for doing so.
Certain things bring out the OCD in me. Warning messages, for example. It doesn't matter if my programs seem to work perfectly. If a compiler issues warnings when I compile my code, I feel compelled to fix the code to get rid of the warnings even if I know the code works fine. Likewise, if my program generates warnings or error messages at run time, I feel driven to look for the reasons and get rid of them.
Now I don't want you to get the wrong impression. My PHP and C++ code stand as testimony to the fact that my programming practices don't even come within light years of perfection. But just because I do not live up to the standards I am about to demand isn't going to stop me from demanding them. It's my right as a columnist. Those who can, do. Those who can't, write columns.
I'll be blunt. Open source programmers need to stop being so darned lazy about error handling. That obviously doesn't include all open source programmers. You know who you are.
If you want a demonstration of what I mean, start your favorite GUI-based open source applications from the command line of an X terminal instead of a menu or icon. In most cases this will cause the errors and warnings that the application generates to appear in the terminal window where you started it. (There are exceptions, depending on the application or the script that launches the application.)
Many of the applications I use on a daily basis generate anywhere from a few warnings or error messages to a few hundred. And I'm not just talking about the debug messages that programmers use to track what a program is doing. I mean warning messages about missing files, missing objects, null pointers, and worse.
These messages raise several questions. Doesn't anyone who works on these programs check for such things? Why do they go unfixed for so long? Are these problems something that should be of concern to users? Worse, what if these messages appear because of a problem with my installation or configuration, and not because the program hasn't been fully debugged? But even if it is my installation that is broken, shouldn't the application report the errors? Why do I have to start the application from a terminal window to see the messages?
Subtitle: Getting a handle on errors
At first I wondered if this was a problem that you would be more likely to find when developers use one graphical toolkit rather than another. But I see both good and bad error handling no matter which tools people use. For example, the GNOME/Gtk word processor AbiWord has been flawless lately. Not a single warning or error message appears in the console. It's possible that AbiWord simply isn't directing output to the console, but I'm guessing that it's simply a well-tested and well-behaved application.
On the other hand, GNOME itself has been a nightmare for me lately. At one point I got so frustrated that I deleted all the configuration files for all of GNOME and GTK applications in my home directory in disgust, determined never to use them again. When I regained my composure and restarted GNOME with the intent of finding the cause of the problems, the problems had already disappeared. Obviously one or more of my configuration files had been at fault. Which one, I may never know, because GNOME or some portion of it lacked the proper error handling that should have told me.
In this case I was lucky that the problems were so bad I lost my temper and deleted the configuration files. In most cases, the applications appear to function normally. Aside from being ignorant of any messages unless you start the application from a terminal, there's no way of knowing why the warnings exist, or if they are cause for concern. The warnings could be harmless, or they could mean the application will eventually crash, corrupt data, or worse.
Subtitle: Examples
Just so you know I'm not making this up, here are some samples of the console messages that appeared after just a couple of minutes of toying with various programs. By the way, did you know you can actually configure the Linux kernel from the KDE control panel? Bravo to whoever added this feature. Nevertheless, when I activate that portion of the control panel, I get the message:
QToolBar::QToolBar main window cannot be 0.
Is there supposed to be a toolbar that isn't displayed as a result? I may never know.
The e-mail client sylpheed generates this informative message after about a minute of use:
Sylpheed-CRITICAL **: file main.c: line 346 (get_queued_message_num): assertion `queue != NULL' failed.
The Ximian Evolution program generates tons of warnings, but most are repetitions. They begin with the following:
evolution-shell-WARNING **: Cannot activate Evolution component -- OAFIID:GNOME_Evolution_Calendar_ShellComponent
evolution-shell-WARNING **: e_folder_type_registry_get_icon_for_type() -- Unknown type `calendar'
evolution-shell-WARNING **: e_folder_type_registry_get_icon_for_type() -- Unknown type `tasks'
The KDE Aethera client generates even more warning messages than Evolution, but many of them are simply debug messages about what the program is doing. By the way, I finally figured out why I couldn't log in to my IMAP server with Aethera. The Aethera client couldn't deal with the asterisks in my password. I could log in after I changed my password, but I still can't see my mail. The program simply leaves the folder empty and says there's nothing to sync. Here are just a few of the countless warnings I get from Aethera, including the sync message.
Warning: ClientVFS::_fact_ref could not create object vfolderattribute:/Magellan/Mail/default.fattr
Reason(s): -- object does not exist on server
Warning: VFolder *_new() was called on an already registered path
clientvfs: warning: could not create folder [spath:imap_00141, type:imap]
RemoteMailFolder::sync() : Nothing to sync!
The spreadsheet Kspread reports these errors all the time, even though what I'm doing has nothing to do with dates or times:
QTime::setHMS Invalid time -1:-1:-1.000
QDate::setYMD: Invalid date -001/-1/-1
The e-mail client Balsa popped up these messages just moments after using it:
changing server settings for '' ((nil))
** WARNING **: Cannot find expected file "gnome-multipart-mixed.png" (spliced with "pixmaps") with no extra prefixes
The Gnumeric spreadsheet only reported that it couldn't find the help file, as shown below:
Bonobo-WARNING **: Could not open help topics file NULL for app gnumeric
Many of these problems could easily have been handled more intelligently. For example, Gnumeric could have asked for the correct path to the help file, perhaps adding an option so a user can decide not to install the help files and disable the message. Unless GTK and Bonobo are a lot more complicated than they should be, it should be easy to create a generic component for handling things like this and then use the component to handle all optional help files as a rule.
The only conclusion I can draw is that, like most commercial software developers, many open source programmers are just plain lazy about proper error handling. But we're supposed to be better than that, and it's time we started to live up to the reputation. I realize that most of these programs are works in progress. But good error handling is not something that should be left for last. It should be part of the development process. Although I may not practice it myself, I'm not the least bit ashamed to preach it.
Nick Petreley is a moron... (Score:3, Insightful)
Programming traits - just like preferences for pizza toppings, frequency in bathing and type of pr0n - vary from programmer to programmer. Some implement proper error handling, others couldn't care less. It doesn't matter whether they're working on an open or closed source project. If the open-source programmers all traded places with the closed-source programmers, you'd have the same ratios of proper vs. improper error handling (although the traffic from open-source-programmers.com to goatse.cx would probably spike).
Blanket statements (Score:3, Funny)
Wait for it...wait for it...ahhhhh!
Re:you didn't read the article (Score:5, Insightful)
The article does not say "open source doesn't handle errors as well as closed source". What the article does say is "like most commercial software developers, many open source programmers are just plain lazy about proper error handling. But we're supposed to be better than that...".
I don't see a problem with this statement. The fact is, most open-source software sucks donkey balls. Petreley is merely saying it's time to put your money where your mouth is -- if you want open source to be considered better than closed source software, it better stop being so danged flaky.
Agreed, death to pundits (Score:2)
He's the stereotypical technology pundit. He learns just enough about technology to have an uninformed opinion about it.
The worst thing is that we on the internet have truckloads of people like him. Every mailing list, newsgroup, web log, IRC channel, or any other group in which people are trying to get things done will have a crew of wankers spouting their opinions with no attempt to actually contribute anything useful.
What really burns me about pundits is that they're getting paid to do what a couple million monkeys on the internet do for free.
Take Petreley. One time, he wrote an article about how maverick programmers don't write good code. I guess I can believe that. Then he went on to say that all brilliant programmers are mavericks, and Microsoft etc all hire them so they'll write bad code and people will have to buy bug fixes. Um, right. He then finished off by claiming that he used to be an absolutely outstanding programmer and that he had to quit because he was so amazingly good that writing decent code wasn't fun for him.
He has, to the best of my knowledge, never actually contributed anything at all even remotely useful to Free Software, or computing in general. He's even worse than Fred Langa, the guy who helped invent ethernet in 1976, then spent the rest of his career punditing, developing more and more bizarre opinions as his practical knowledge became antiquated.
So here's a message to Petreley: Do something useful, anything. If all you have to contribute is your opinion, then go home. Free Software writers are mostly volunteers; we don't have to put up with your wanking. If you have a problem with a program, file a fucking bug report. Actually, if you're such an amazing programmer, SHOW US SOME CODE! I don't care how much Infoworld pays you; to us, your opinions are worthless. So do something useful or I'll have to dig out my cluestick and use it to bash you into a profession that benefits humanity in some conceivable way.
Re:Nick Petreley is a moron... (Score:2)
stop panicking! (Score:2, Interesting)
I am sure there is some semi-painful way to get around this but should I really have to? If you ask me, the kernel should not panic at this "error" and should recognize it, prompt you and try to solve it (probe the new hardware and load the correct module(s)). Maybe some distros are better than others (and I shouldn't be placing this "blame" on the kernel team).
That's interesting, but how? (Score:2)
Last gasp error handling. (Score:2)
Over the past few years I've used several OSS programs in pre-release versions, and the tendency I observed was for the programmers to provide "last gasp" file saves to keep you from losing work when the program crashed. For instance, I never lost a keystroke when using early versions of LyX.
I don't recall ever seeing this in a commercial product, though I haven't used any commercial products to speak of lately, so perhaps the state of the art has changed. I sure used to lose a lot of work under commercial software, though.
Nothing beats X-Box's error handling! (Score:2)
Commercial = Competition = Usability (Score:2, Insightful)
Exceptions are key (Score:3, Informative)
Exceptions are mandatory for good programming, period. If the language you are using doesn't support exceptions (C, Perl, etc), you are going to have problems. Exceptions make sure that if an error occurs, and you aren't aware of it, your program dies, and doesn't go on its merry way, causing a security hole/unstable software.
Perl's hack at exceptions using 'die' doesn't cut it; one important thing about implementing exceptions is that your base operations (e.g., opening files, and other system functions) need to raise exceptions when problems occur. If this doesn't happen, you're only going to struggle in vain to implement good, correct code.
Exceptions are a primary reason I've moved from Perl to Python. Python's exception model is standard and clean. Base operations throw exceptions when they encounter problems. And my hashes no longer auto-vivify on access, thank goodness. Auto-vivification on hash access is probably one of the principal causes of bad Perl code.
Re:Exceptions are key (Score:2)
To show that there's been an error, I'd much rather do this:
return undef;
than this:
raise Exception.Create('This did not work');
And to check for an error, I'd MUCH rather do this:
die unless defined(do_something);
than this:
try
  do_something
except
  on e: exception do
    exit;
end;
In fact, the programmers that I've seen use exceptions tend to be less careful than those that simply check result codes.
steve
Re:Exceptions are key (Score:2)
The problem with result codes is that you can't easily propagate the problem up to the level of scope that should be dealing with it. For example, imagine you have a GUI program. At some point, it needs to open "foo.txt", but fails. Since you're a good software engineer, you've well separated your GUI code from your logic code. The GUI needs to display an error message, but if you only check error codes, the only part that knows about the error that has happened is way down in the logic code, which has no idea how to tell the user. And propagating 'undef's all the way up through the code is uncool. Especially since return values should not be used to indicate errors; they should be used for return values.
With an exceptions model, you can let the logic code just propagate the error up to the GUI, who can then display a message to the user. It's a very clean, elegant system.
(Sorry for the lack of prettiness; Slashdot's input mechanism doesn't allow <pre> tags.)
This is not how you would handle it using exceptions; you would merely say "do_something". Period. If "do_something" threw an exception, and it wasn't caught, it propagates up and the program dies automatically.
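A minimal sketch of that flow in Python (the function names and filename are invented for illustration): the logic layer does no error handling at all, and the GUI layer is the one place that translates the exception into a user-visible message.

```python
def load_settings(path):
    # Logic code: no error handling here at all.  If the file is
    # missing, the OSError simply propagates upward.
    with open(path) as f:
        return f.read()

def gui_open_settings(path):
    # GUI code: the one layer that knows how to talk to the user.
    try:
        return load_settings(path)
    except OSError as e:
        return f"Could not open settings: {e}"  # stand-in for a dialog box

print(gui_open_settings("foo.txt"))
```

No error codes need to thread through the intermediate layers; the exception carries the context up by itself.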
The error handling challenge (Score:2)
You think checking return codes is the solution? Well, it is, but at a cost.
Exercise for /. readers: add error checks to the following C function. 'return' and exception-handling pseudocode allowed:
/* Here we do something with p1, p2, p3 */
int allocate_3(void) {
    int *p1, *p2, *p3;
    p1 = malloc(SOME_NUMBER * sizeof(int));
    p2 = malloc(SOME_NUMBER * sizeof(int));
    p3 = malloc(SOME_NUMBER * sizeof(int));
    free(p1);
    free(p2);
    free(p3);
    return 0;
}
Let the game begin...
Re:The error handling challenge (Score:2)
Re:The error handling challenge (Score:2)
So, suppose the first malloc succeeds, but the second one fails.
In that case, you have allocated p1, but then 'return -1'. That results in a memory leak, because you never freed p1.
There's one big point for garbage collection, btw. But, the same would happen with fopen()/fclose()
I posted this to show what the more common mistakes are. Yours is on the list (I've seen it a zillion times).
Re:The error handling response (Score:2)
The problem itself has a very linear structure, but the solution here has a lot of nesting. If I had more blocks, it would have even deeper nesting.
If the allocation were non-linear (for example, a tree or a graph) and failed in the middle, deallocation would be a real mess. You would have to exit some mix of loops/recursion in the middle and free everything allocated so far before exiting.
If you want a better solution, see my comment about BetterC [slashdot.org]. Or use Eiffel
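For contrast, exception-based languages can make even the non-linear case automatic. Below is a hedged Python sketch (not part of BetterC; the Block class is a made-up stand-in for malloc) using contextlib.ExitStack: a cleanup is registered the moment each resource is acquired, the cleanups run automatically if anything later fails, and pop_all() cancels them on success.

```python
from contextlib import ExitStack

allocated = []  # records every Block ever created, for inspection

class Block:
    # Stand-in for a malloc'd buffer; fail=True simulates malloc returning NULL.
    def __init__(self, name, fail=False):
        if fail:
            raise MemoryError(f"allocation of {name} failed")
        self.name, self.freed = name, False
        allocated.append(self)

    def free(self):
        self.freed = True

def allocate_3(fail_second=False):
    with ExitStack() as stack:
        blocks = []
        for name, fail in [("p1", False), ("p2", fail_second), ("p3", False)]:
            b = Block(name, fail)
            stack.callback(b.free)  # register cleanup the moment we own it
            blocks.append(b)
        stack.pop_all()             # everything succeeded: cancel the cleanups
        return blocks

try:
    allocate_3(fail_second=True)
except MemoryError:
    pass
print([b.freed for b in allocated])  # the partially-allocated p1 was freed
```

The structure stays linear no matter how the acquisitions branch, because each cleanup is attached where the resource is created rather than in a nest of if-blocks at the end.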
The problem with this: 1 return value. (Score:2)
This is one thing I like about Forth-style languages, where it's just as natural for a function to return multiple results as to receive multiple arguments, letting you do either:
A B / on_error{ log_error cleanup exit }else{ use_result } return
or
A B / on_error{ store_exception drop_result push_unhandled_exception_errcode }else{ use_result } return
or
A B / drop_error use_result return
Unlike with exceptions, the possibility of an error isn't hidden away somewhere; if you ignore it, or hand it down to reach exception handling code, you have to do so right there and then, explicitly at every step. Actually, that's a general plus: with a stack language, you have to explicitly dispose of everything, which makes it harder to ignore return values, and impossible to write programs without knowing whether a function returns anything ("What do you mean it can return an error code? I thought it was void!").
Re:Exceptions are a key (Score:2)
Well, you haven't seen Error.pm yet. It implements exceptions for Perl.
I'm not totally convinced that exceptions are necessary for good programming. A good programmer should know how to do error handling. It's nice to be able to call upon it when you need it but it should not be forced upon you, kind of like commenting your code.
Of course I love Perl and believe TMTOWTDI.
Re:Exceptions are a key (Score:2)
As I stated in my post, having high-level mechanisms for exceptions doesn't cut it. Your base operations must throw them, or else you've lost out on 50% of the reasons for having exceptions. Opening a non-existent file with open() won't raise an exception; this is a problem.
Exceptions are not necessary for good programming, but they are necessary for good software engineering.
Re:Exceptions are a key (Score:2)
No, it doesn't, because Fatal can only sanely be applied to core operations like open(), which don't deal with objects (another must for good software engineering). For example, when I write Perl, I use IO::File to open files; Fatal doesn't help there.
Also, Fatal is very crude in that it just checks for false values. Perhaps a function fails if it returns undef, but succeeds if it returns a defined scalar, like 0, which tests as false! Fatal will flag this as an error, incorrectly.
When you raise exceptions, you can associate a human-readable string with them, so your point is moot. You aren't just returning an error number; the exception is an object which, turned into a string, is meant for human consumption. At least Java and Python are capable of doing this.
But with exceptions you don't even have to remember to check whether something failed; it automatically dies! What could be less obscuring than code that isn't there?
Re:Exceptions are a key (Score:2)
I disagree with this. I think it is much easier to program badly without exceptions than with. Without exceptions, your code suddenly becomes a lot more ripe for corrupting data and causing security issues, without the user knowing there is a problem. With exceptions falling all the way back up the execution stack, it's immediately known there is a problem, and the program is halted there notifying the user, not causing hours more of run-time corrupting data.
THAT is your answer? (Score:2)
You mean like that Ariane rocket that blew up when its double-redundant computer system was halted because of an utterly irrelevant uncaught exception? Yeah, that's definitely a superior error-handling philosophy.
Aside from the conceptual problems of what are essentially COMEFROM statements with scope management, there's no reason to assume that halting the program is better than just allowing it to run.
Re:THAT is your answer? (Score:2)
I'm not familiar with the rocket you describe, but yes, it is a superior error-handling philosophy. Imagine if there was an unchecked error, and the rocket, instead of detonating, landed in civilian housing? That's precisely what not using exceptions allows for: programs that become destructive because of lack of error management.
That's like saying there's no reason to assume knowing about a bug is better than just allowing a program to go on its merry way. Uncaught bugs are the cause of 99% of the security holes out there. It's always better to know when there is a problem.
Re:THAT is your answer? (Score:3, Insightful)
Why would you assume the rocket was intentionally detonated by the computer? Its computers went down and it went completely out of control. It was only blown up after it broke apart because it happened to go into a spin. There is no upside to this computer failure.
You call blowing up a commercial satellite launch vehicle non-destructive? If this error had been ignored, the rocket would not have been affected by it; it was an utterly irrelevant mathematical overflow in a program that only did anything before launch.
This program became destructive because of the "error management." In particular, the error management philosophy that halting a suspicious system is always safer than allowing it to run.
The point you seem to have missed is that halting the program is often more destructive than ignoring the error. Data loss, control loss, vital services suspended, etc.
That's like saying there's no reason to assume knowing about a bug is better than just allowing a program to go on its merry way. Uncaught bugs are the cause of 99% of the security holes out there. It's always better to know when there is a problem.
I'm sure the European Space Agency found it worth every penny of the estimated half-billion dollars lost to find this otherwise irrelevant bug. After all, it's always better to know, whatever the cost of halting the system, right?
Exceptions don't save you much of the time. (Score:2)
Unless it's an uninitialized-memory error or a buffer overrun that overwrites some other program variables, in which case a C++ program will still keep going on its merry way without throwing an exception, causing difficult-to-duplicate and hard-to-trace bugs.
If it's possible to check for the error at all, then anything that you can implement with exceptions you can implement without exceptions (though I agree that exceptions are a _neater_ way of doing it).
If your program can't check for the error (as is common for memory errors without extensive and slow wrapping on memory accesses), then exceptions won't be triggered and you're still screwed.
[Aside: You can propagate error codes up between levels either by making error codes bit vectors and masking subcall errors on to the parent call's failure code, or by implementing your own error stack (if you anticipate using deep recursion). Messy, so exceptions are still _preferable_, but it can still be _done_ without exceptions. Almost as cleanly, if you wrap error-handling helper functions nicely.]
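As a toy illustration of the error-stack idea above (written in Python for brevity; every name here is invented), each layer pushes a (code, context) pair instead of raising, and the top level inspects the whole stack:

```python
error_stack = []  # grows as the failure propagates upward

def fail(code, context):
    # Record the error with its context and signal failure to the caller.
    error_stack.append((code, context))
    return None

def read_config(path):
    if not path.endswith(".cfg"):
        return fail("EBADNAME", f"read_config({path!r})")
    return {"path": path}

def start_app(path):
    cfg = read_config(path)
    if cfg is None:
        return fail("ENOCONF", "start_app")  # add this level's context
    return cfg

result = start_app("settings.txt")
print(result, error_stack)
```

Messy compared to exceptions, as the comment concedes, but it does carry context across call levels without any language support.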
Re:Exceptions are a key (Score:2)
Certainly, exception handling in C++ or Python is much more efficient and elegant.
Example:
#!/usr/bin/perl
eval { test(3) };
if ($@) {
    print "Whoops: $@\n";
}

sub test {
    my $bob = shift;
    if ($bob == 1) {
        print "Happy\n";
    } else {
        die("Failure testing \$bob");
    }
}
Re:Exceptions are a key (Score:2)
You do realize, hopefully, that the die syntax makes it very hard to selectively catch exceptions. If I have a subroutine that does some array manipulations, and opens a file, I might want to only catch the IOError (file opening error) at the level I'm on, and pass the ArrayIndexError on up. eval() can't handle that well.
CPAN has some modules that hack some exceptions, but it's all very, very unclean. Unclean and unreadable code can lead to just as many errors.
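For contrast, typed exceptions make selective catching straightforward. Here is a Python sketch (the functions are hypothetical): the caller handles only the file-open failure and deliberately lets the array error propagate.

```python
def process(path, items):
    items[0] = items[0].upper()  # may raise IndexError
    with open(path) as f:        # may raise OSError
        return f.read()

def caller(path, items):
    try:
        return process(path, items)
    except OSError:
        return "could not open file"  # handled at this level...
    # ...while an IndexError keeps propagating to whoever is above us

print(caller("missing.txt", ["a"]))
```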
No just Open Source (Score:2)
Well... error checking sucks in most languages (Score:5, Informative)
Most languages make error checking very hard. In particular, C and Perl, two of the most-used languages in OSS development, lack good mechanisms for sane error checking. I might explain more, but it's better explained in this document [usc.edu].
btw, the document is part of a library that allows nicer error checking in C, called BetterC [usc.edu]. (Yes, this is a plug, I've participated in the development).
It is modelled on Eiffel's "Design by Contract", a set of techniques complemented with language support to make error checking a lot easier and semi-automatic. "Design by Contract" has been described as "one of the most useful unused engineering tools".
There a no such things as errors! (Score:3, Funny)
Them thar's fightin' words (Score:2)
This ones easy. (Score:2)
#define ERR_LOCATION fprintf(stderr, "ERROR in file %s, line %d: ", __FILE__, __LINE__)
Then use it like so:
ERR_LOCATION;
fprintf(stderr, "foo returned %d.\n", ret);
I believe that's the correct code.
Re:This ones easy. (Score:2)
Re:This ones easy. (Score:2)
(Insert standard Douglas Adams quote about things that can't ever go wrong)
Re:This ones easy. (Score:2)
Re:This ones easy. (Score:2)
I don't normally use the "NDEBUG" flag, and when I looked at the man page I guess I assumed that "assert(x)" would become "x" instead of "" when NDEBUG was on. However I just tested it, and it does indeed throw away the whole statement. Not the way I'd do it, but I guess the standards committee had their reasons (or were all in a hurry to take off for a long weekend...).
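Python, incidentally, has the same trap: the -O flag removes assert statements wholesale, side effects and all, just as NDEBUG does for C's assert(). A quick sketch that demonstrates it by spawning two interpreters:

```python
import subprocess
import sys

# Without -O the assert fires and the child exits non-zero; with -O the
# entire statement is discarded, so the child exits cleanly.
normal = subprocess.run([sys.executable, "-c", "assert False, 'boom'"])
optimized = subprocess.run([sys.executable, "-O", "-c", "assert False, 'boom'"])
print(normal.returncode, optimized.returncode)
```

The moral carries across both languages: never put a side effect inside an assert.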
Commercial software does it differently (Score:2)
I don't think that a straight comparison of open source to commercial software, in the context of error handling, has any merit.
I'll try to illustrate with an example. I'm running IE 5.00.2920.00 on Windows 2000. I get a huge number of "Cannot find server or DNS error" pages from IE. You know, those are the stock HTML files that IE displays that say "The page cannot be displayed", and it has a whole boatload of gibberish on it about clicking the Refresh button, contacting your network administrator, checking URL spelling, etc etc etc.
Unless the host machine is truly unreachable, I can click "Refresh" and get the appropriate page almost instantly about 80% of the time. Does that make you smell a fish? It makes me smell a fish.
The fish that I smell is commercial software handling errors in such a way as to blame anything other than itself when it encounters an error. I'm sure this works on most Windows users, because they've never used anything else, and their desktops crash all the time. Why shouldn't web sites just arbitrarily refuse to give up a page now and then? But if I'm debugging a web server that I'm telnetted to from my SPARCStation, and IE on Win2K claims that the web server can't be found 12% of the time, yet finds it instantly on refresh, I begin to see a pattern.
If you write commercial software, the pattern is to include fairly complete error handling, but make the error handling blame something else. IE didn't choke; DNS or the remote server did, or you typed the URL wrong. Anything but admit that IE had the problem.
Open source programmers don't experience pressure from marketeers and PR people and "product managers" to appear blameless. Open source programs tell it like it is, up to the limits of the programmer's articulation. That's why it's useless trying to compare the two: commercial software handles errors in order to shift the blame. Open source software handles errors in order to provide debugging information.
Re:Commercial software does it differently (Score:2)
This isn't really about error handling. (Score:2)
Basically, his complaints boil down to, "bugs exist, causing error messages, why aren't all the ones that cause error messages fixed yet?"
Then he goes off on a confused tangent, apparently suggesting that "error handling" be added to work around any bugs. After all, if it can log the errors caused by bugs, it can respond to them in any way, up to and including fixing the problem (i.e. doing what the code should have done, except for the bug)! For example, if a system file is missing (meaning either a bug in the install, a bug in the program requesting something that isn't really a required system file, or an externally damaged system that can't be expected to work at all), just pop up a dialog to let the user search for it! Because of course the user should attempt to patch things up with his intimate knowledge of system internals instead of just seeing that there's a bug to report.
Hooooo boy....
I didn't see a single example of a genuine external error that wasn't handled properly, just bugs which should be fixed.
Looks OK to me. (Score:2)
Way to go, I say. Would rather have hugely detailed warnings any day.
Dave
Re:Looks OK to me. (Score:2)
In addition, most of the examples he gave were not programs crashing. I think the difference is that open-source software is generally more verbose in its error checking. Proprietary software generally gives you NO ERROR MESSAGES; it just crashes indiscriminately. At least OSS programs give me an explanation of what might have happened, and often directions on how to fix the problem. Windows, for example, has given me error messages like: "An unknown error has occurred in <unknown application>. The program will be terminated." I never get error messages like this in Linux. Error messages being verbose doesn't mean it is bad software; it means it is good software.
Error messages need to have error numbers (Score:3, Interesting)
What might be cool is a codified error-numbering scheme à la Oracle. I would love to have a KDE-2345 error, or a GNOME-1234 error, or KOffice-567, etc. That would make searches far more effective.
Follow the example of strerror() (Score:2)
Error messages need to have numbers associated with them. For instance, when I get ORA-1241 in Oracle, a quick search on groups.google.com will give me a lot of information about this error: why it occurred and what I can do about it.
C's strerror() uses another approach: a short symbolic name for each error ("no such file or directory" is ENOENT, etc.) that stays constant across localizations.
The situation is even worse for people who used localised versions of the software, as you don't have the English translation
Whether you get "Non ci è tale archivio o indice (ENOENT)" or "Es gibt keine solche Datei oder Verzeichnis (ENOENT)", you can still search on the ENOENT. (Translations by Babel Fish.)
Now if only the popular apps did this...
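Python carries the same symbolic names through its errno module, so the technique is readily available; a small sketch:

```python
import errno
import os

# The symbolic name survives any localization of the message text.
try:
    open("/no/such/file/anywhere")
except OSError as e:
    name = errno.errorcode[e.errno]  # e.g. "ENOENT"
    print(name, os.strerror(e.errno))
```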
Hrmm (Score:2, Insightful)
Now let me compare this to a judge I once met, who said that men have more tickets in general, but women always follow too close.
This is interesting, but if we further evaluate, one could conclude that women are just as bad (equally so), but perhaps people were lighter on them along the way. A police officer might have let her off, and so forth (this isn't to sound misogynist of course, but I know women who get let off all of the time).
Instead, following too close is an easy prelude to... an accident. After all, when your bumpers are crushed together, you're too close.
Now think of error handling. "Open Source Software handles errors poorly" is another way of saying that it too crashes a lot. Perhaps other people get caught for other things, but we only rag on open source when it crashes.
This isn't to say ALL open source software though... but let's be perfectly honest. Programming is a difficult profession that a lot of people think they can just pick up. How many people would volunteer to do surgery without med school because they read a book on the subject? How many people get offended when you flash some important programming credentials in front of them that they don't have?
The trick is sifting the wheat from the chaff. Sure, a 14-year-old with a little ambition can whip up a pretty impressive looking windowed program in X... but he doesn't have the sophistication of a well-educated programmer... generally. There are plenty of good programmers and bad programmers in open source. The key is to know what's good and what's bad. If you can't figure that out, then buy a distro made by people who do.
No kidding... (Score:2)
My Ass (Score:2)
Open source programmers may suck at handling errors, but commercial programmers suck much more.
Zen GET (Score:2, Funny)
...and in that moment, he became enlightened.
Depends upon the application (Score:2)
take a look at the standards (Score:2)
They've been declining in general for the past 10 years, and before that they sucked as well. I think the standard is really set by the hardware itself.
Typically drive errors can have symptoms of software running more slowly as the drive retries - or applications will simply appear to hang, or if it's an error reading code into memory, well, anything goes.
Network errors can go completely unknown until you haul out the crusty old hacker with a sniffer - oh gee, did you know that your card is dumping half its packets?
Oh - especially network problems - where the software at the user level 90% of the time just sits there and goes "Duh!" for simple things like pulling the cable out.
Error checking and handling, in general, SUCKS and it's the main reason why computers suck - why the software industry spends billions of dollars chasing problems during the development phase that they never really get to pin down, so the problem ends up going into shipping products.
I blame the lax standards on the platform, and the dumbing down of programming in general (the over-reliance on high-level languages that remove the programmer progressively further and further from the hardware their programs run on).
If PC's had better standards for this sort of thing at the hardware level - and if the vendors adhered to those standards, then the software people could write software that handles errors better, and it would bubble up to the user level as more reliability, and much simpler troubleshooting, probably tens of billions of dollars saved in productivity alone, and probably the PC industry would be 10 times the size it is today, because people would actually trust them for important tasks, rather than the next nifty home killer-app like pirating music. (not meant to be a troll against MP3 trading - meant to be a troll against the apparent purpose and direction of the PC industry in general).
It is not the programmers, it is the projects (Score:2)
Re:It is not the programmers, it is the projects (Score:2)
Commercial error handling (Score:2)
Well, we all know how bug-free Internet Expl...<This program has caused an illegal operation in module kernel.dll and will now be terminated>
Not just error handling--EVERYTHING! (Score:2)
Open Source as it stands today is great at bashing together a really "neat" program which gets the job done in a specific manner. Soon enough, lots of cool little features are added in, and before long you have a 'perpetual-beta application.'
Programming, however, requires some discipline which doesn't often get put towards OSS. Programs require good error handling (and error trapping, for that matter), usability (That means intuitive interfaces), and documentation. Oh yes, and freedom from bugs. However, these things are BORING to produce, compared to the original plan of bashing out a neat routine.
Ironically, the only way to achieve such things in a distributed and open development model, is to have a central administrative point. Without it, large projects are just impossible. Funny, eh?
[1]of course, so does commercial software, but in different ways)
Blame it on von Neumann (Score:3, Interesting)
This is not about open-source vs closed-source programs, nor for-fun vs for-money programmers. It's about computational models such as von Neumann machines that, at their deepest roots, assume there will be no errors. That chain-of-falling-dominos style of thinking so permeates conventional programming on conventional machines that it's almost surprising that any code has any error handling at all.
Of course it's possible to hand-pack error-handling code all around the main functional code in an application... and of course quality designers and programmers in and out of open source will do just that... but viewed honestly we must admit it's a huge drag having to do so, and typically fragile to boot, because the typical underlying computational and programming models provide no help with it. Error-handling code tends to be added on later to applications just as try/catch was added on later to C++.
Lest we think this sad state must be inevitable, let's recall that other computational models, like many neural network architectures for example, are inherently robust to low level noise and error. Then, that underlying assumption colors and shapes all the `programming' that gets built on top of it. We're to the point where trained neural networks, for all the limitations they currently have, can frequently do the right thing in the face of entirely novel and unanticipated combinations of inputs. Now that's error handling.
The saddest part is that von Neumann knew his namesake architecture was bogus in just this way, and expressed hope that future architectures would move toward more robust approaches. Fifty years later and pretty much the future's still waiting..
I got your fix RIGHT HERE! (Score:2, Troll)
I'll be blunt, too. I got your fix RIGHT HERE! I have whipped up some open source magic that uses a powerful error-finding heuristic in combination with a correction algorithm. It should fix all of these problems you have described.
----CUT HERE----
#!/bin/bash
if [ "$#" -lt "1" ]; then
    echo "Usage: $0 <program> [<args>]"
    exit 1
fi
"$@" 2>/dev/null
echo "All errors corrected!"
----CUT HERE----
You are not expected to understand how this works. Send me beer, we open source guys like that.
A core dump *is* an error message! (Score:2, Funny)
"A core dump is the best possible error message because it contains ALL the information you need to diagnose why the program had to stop running."
Mmmm'K
The user should not see errors unless they want to (Score:3, Insightful)
Also, I use error reporting to a logfile rather than alarming the user. Most applications should be able to survive the average error. Those applications should prompt the user for proper input - even to the point of placing the cursor in the proper field. Each field should be intelligent and be able to validate its own input data.
Those error logs I spoke of should be used by the programmer to debug his/her application - don't alarm the user ok?
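A minimal sketch of that log-don't-alarm approach in Python (field name, valid range, and logfile name are all made up): errors go to a logfile for the programmer, while the caller simply re-prompts the user.

```python
import logging

# Errors are recorded for the programmer, not flashed at the user.
logging.basicConfig(filename="app_errors.log", level=logging.ERROR,
                    format="%(asctime)s %(levelname)s %(message)s")

def validate_age(field_value):
    # An "intelligent field" validating its own input data.
    try:
        age = int(field_value)
        if not 0 <= age <= 130:
            raise ValueError("out of range")
        return age
    except ValueError as e:
        logging.error("bad age %r: %s", field_value, e)
        return None  # signal the caller to re-prompt and refocus the field

print(validate_age("abc"), validate_age("42"))
```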
Something to be said for baby-sitting mainframes (Score:3, Interesting)
The biggest problem is not whether your language has exceptions (good error-handling has been done for years without them) or whether programmers are lazy. It's a matter of making it a priority. In fact, laziness caused a lot of us old-timers to take a major interest in error-handling.
Picture the days before internet access, running mainframe systems, probably with overnight batch cycles.
Good error handling might mean that you don't get a phone call at 3:00 am.
If that phone call comes, good error messages might mean that you can diagnose the problem over the phone and walk the operator through recovery.
In either case, you don't have to drive down to the data center.
Sleep. Now there's a motivator.
Re:That all depends on your point of view (Score:3, Insightful)
And attitudes like that, ladies and gentlemen, are the reason why we're all going to be old and grey before Linux is accepted on the desktop.
Re:That all depends on your point of view (Score:2)
The more likely that I think someone other than me is going to use a program, the more likely I'm going to put work into error support. If I'm expecting really techie types to use it, I may be more oriented to super-terse responses.
For stuff that goes to the general public, I'm more likely to do nice error work, but even that may be cut short if I'm really short on time. Of course, there's the problem of systems that are originally intended for me but then get released to an unintended audience... These are the ones most likely to have what I consider inappropriate error handling for the audience. I think that this re-targeting of programs originally intended for internal consumption is also the source of sub-optimal error handling in some open source programs.
Some principles that I've learned for error reporting are (in roughly decreasing order of importance):
Now ideally, I'm going to achieve all of the above, but that's going to depend on how much time I have to put into the project in question. I guess that the listing I gave is the order in which I plan for error reporting depending on the 'techiness' of the expected user... (This may also include the techiness of my own expected mood when I'm going to be running the program.)
To be honest, I believe that if other users are going to be playing with my program, it's usually worth my while to go the full gamut for error response/ recovery. I know that if I don't take the time to make error recovery as easy as possible, I'm ultimately going to end up spending more than that amount of time responding to users who don't understand/like the errors that they get out of my program.
Short term investment, long term gain
--------
Microsoft has had a history of going to extremes with their error response. The response either lacks all but (possibly) the most basic error handling, which may not even achieve my first intent (e.g. BSOD), or goes to things like the paper clip that are so damned helpful that they're annoying. Part of the problem, I think, is that the Microsoft culture encourages it.
When program failures are cryptic and unpredictable, it encourages support calls that Microsoft actually gets paid for. The other reasonable response is to go to Microsoft for training -- once again paying them -- for an MCSE that spends lots of time on how to placate customer dissatisfaction with Microsoft's problems. In other words, Microsoft gets paid for bad programming practices.
This seems to change, however, when Marketing decrees that problems need to be handled better. As far as I can tell, marketing seems to drive Microsoft, so when they decree that things need to change, they will.
An early example of this was when Windows 95 came out and decided to check the filesystem if the system was brought down badly. This was something that Unix and Mac boxes already did, and Microsoft probably wanted to jump on the "stability bandwagon". The result was the annoying wait at the "we're about to clean the filesystem, you naughty boy" prompt.
I expect that the paper clip was similarly mandated by Marketing (though I sometimes think that it may have started out as some programmer's prank and been picked up by Marketing as a 'good idea'). That feature was so annoying that it was ultimately dropped -- I think because it broke the principle of 'reasonable response'.
Something that distracts the user isn't helpful. The constant (nervous) motion of the paper clip and its obtrusive location on top of the main screen are tricks learned by advertisers to get people's attention -- unfortunately, most of the time the user's attention is trying to focus on the document being created. Had the paper clip been an unobtrusive text box in the toolbar, people probably would have welcomed it.
In any case (having rambled), I think that Microsoft error responses are more oriented towards making money for Microsoft than making life easier for the user (a common subtext on slashdot).
Re:That all depends on your point of view (Score:2)
Well, an Amiga would give you a "You MUST replace volume < disklabel > in drive < device >!!!" if you ejected a disk while it was in use. It was a good reminder to the user that he had just done a Bad Thing, but the program could (usually) continue once it got the disk back. I don't think the application program even knew that this had happened; the read() call just blocked until the disk was put back.
If the OS isn't able to handle this sort of situation, the application program should get an EIO error on its read(). However this shouldn't translate into a segfault. Nothing should translate into a segfault - at worst an abort() if the program doesn't feel like recovering from the error.
Of course a "real OS" will just lock the CD-ROM or floppy drive [hardware permitting], thus preventing the user from ejecting a disk that's in use (unless the user has a paperclip, in which case he does deserve whatever he gets).
Re:Of Course (Score:2, Funny)
Re:Of Course (Score:2, Funny)
Re:What errors? (Score:3, Insightful)
Sure, an application error in a Unix derived system is much less likely to bring down the whole system. But that's no excuse for not dealing with error conditions correctly.
Also "errors" don't just occur from bad code. Running Linux gives you no protection against drive failures, network flakiness, or plain old user error.
Sure, if the error message is misleading you can look in the source to find out what actually caused the error. Heck, even if it SEGVs you can compile it with debugging symbols and let GDB tell you what line it's failing at.
However, the open source community will NEVER attract mainstream users with that kind of attitude. Furthermore, even hardcore geeks should have better things to do than fix up crud in supposedly release-quality code. Hey, it's one thing if I'm working on something clearly under development, but it's nice to be able to get stable stuff too.
That said, I don't find open source to be any worse than the commercial stuff I've worked with. With, say, Microsoft stuff, it is just much harder to distinguish bad error handling code from bad code even when no error conditions are encountered.
Re:How about Apple? (Score:2)
Actually the Sad Mac usually indicates hardware failure (failed POST). You'll notice a hex code underneath the icon; the hex code indicates the actual error. One of the reasons it doesn't give plain-English errors is that the Sad Mac code lives in ROM. Text strings would take more space (think back to 1984 when that was an issue). Also, the Mac hardware isn't supposed to be language-specific (notice that there are no text labels on the ports) - if English isn't your native language, you shouldn't have English error messages. On top of that, Apple originally intended the Mac not to be user-serviceable. If you got a Sad Mac, you were supposed to take it into an Apple-authorized repair center and have an Apple-certified technician (who would have a list of error codes) take a look at it.
Fortunately, Apple has changed their attitude, but the legacy Sad Mac remains. Personally I agree with you, it would be really helpful to have some idea of what the problem is without having to look up a number.
/. error (Score:2)
Re:Linus' changes to recent kernels (Score:2)
Nothing to handle then stupid.