
Open Source Programmers Stink At Error Handling

Mark Cappel writes: "LinuxWorld columnist Nick Petreley has a few choice words for the open source community in 'Open source programmers stink at error handling'. Do you think commercial software handles errors better?"
  • We really need this open source BSOD library
    that would make our life more convenient and
    our applications more commercial-like.

    • I always preferred the Amiga's "Guru Meditation" error messages. I just wish they had combined it with a Zen koan generator or quote of the day; it would have made figuring out what went wrong even more fun!
  • by brunes69 ( 86786 ) <slashdot.keirstead@org> on Thursday October 25, 2001 @06:03PM (#2480531) Homepage

    Who spend days at a time at work (read: Stallman) without showers, removing the last 3 words provides a better description :o)

  • by ArcadeNut ( 85398 ) on Thursday October 25, 2001 @06:03PM (#2480533) Homepage
    Commercial Developers are just as bad. If you ever see an "Access Violation" or "Unhandled Exception in blah...", chances are, the programmer didn't do proper error handling or checking for error conditions.

    Things like checking pointers to see if they are NULL before using them. Simple basic things that could prevent errors.

    Error handling doesn't just mean catching the error after it's already happened. It also means being proactive about it before it happens.

    A lot of programmers do not do that.

    • No, but frequently error cases are only added because good testers force the programmer to take care of all of the edge cases.

      I have forced the developers that I work with to add hundreds of error case checks in the last year. :)
      • Good for you, and no, that's not sarcasm. We need better testing regimes in commercial software development companies. Developers, you should ask for some metrics on code before you join a team; testing is crucial if you're going to ship a good product, and code reviews, test harnesses, etc. should all be in abundance. Sadly, what happens in a lot of cases is Marketing says deliver on day X, where day X is some random day they thought up rather than a "real deadline" ("real deadline" being a competitor about to trample you, etc.). Then you have to massage the code, work 18 hours a day, and skip testing just so everything fits in with when Marketing wants to launch their ad campaign (usually about a month after their Kenyan safari). I have personally experienced this, and it's not fun at all. If you want a professional-level product you need to plan, design, implement and TEST. The most recent high-profile case I can think of is Oracle's CRM offering, which sucks and doesn't have functionality that corporations were sold on. So now I know many people looking for alternatives to Oracle, not just for CRM but DBMS as well; give a dog a bad name and all that.
  • What are these "errors" you speak of? Open source has no errors...
  • The page cannot be displayed

    That's the error I'm getting. Could it possibly be slashdotted in only 3 minutes?

    Too bad, I was hoping I could say something meaningful, or maybe even relevant...

  • it's a feature.
  • by noahbagels ( 177540 ) on Thursday October 25, 2001 @06:06PM (#2480560)
    Why does it seem like there are as many people in the "community" criticizing open source as there are supporting it?

    Two Words: Apache and Tomcat

    I'm a professional who works with the closed source equivalents all the time: Netscape iPlanet server, IIS and WebLogic.

    Now: before you flame - I like working with WebLogic, but it is no better than Tomcat in my opinion (as far as error reporting goes). And IIS is a piece of crap! Not to mention Netscape's overly complicated UI that blasts every change you've ever made and is completely out of sync with the flat-file configs.

    Need I mention that Tomcat error logging is set up in an XML file that is easy to read, modify, and translate into a simple report for management (IT, that is)?


    When was the last time Windows gave you a nice error.log when it blue-screened, or how about IIS on a buffer overflow?

    I'm sick of bashing on the free stuff out there. Sure, the fact that I can release one of my college projects as open source may mean that statistically there are more projects without good error reporting, but the real projects are pretty darn good.
    • Why does it seem like there are as many people in the "community" criticizing open source as there are supporting it?
      Because the reality is that open source software is neither as good as, nor as bad as, the zealots on both sides claim. More than closed source development, open source development is subject to significant variability in the skills of its practitioners. There is some open source software out there that is complete crap, and some that is very good, and far more than either that is merely mediocre.

      And I would be careful about holding up Tomcat as an (open source) triumph. It's had some major bugs all through the 3.x timeframe, and its team includes at least a few daytime professional "closed source" programmers (there's no correlation between the two, by the way).

  • Error handling in a C/Unix environment is, by its nature, very difficult. But at least some open source tools have become very refined over the years and are quite good at it.


    My textbook example:

    • The pwd command.

      It takes no argument, and only produces one line of output. Despite this apparent simplicity, I've been able to get each and every pwd that ships with a commercial Unix to dump core (almost always by executing in an exceedingly deep directory.)


    The GNU shellutils version of pwd, on the other hand, has never dumped core on me.


    I will admit, the fact that it took two decades for a non-crashable version of pwd to become available doesn't bode well for the many other vastly more complicated programs out there in any environment. But it does speak very highly of the GNU utilities in general, and I haven't even begun to praise the thousands of folks who have worked on making these tools quite portable!

  • by ackthpt ( 218170 ) on Thursday October 25, 2001 @06:09PM (#2480577) Homepage Journal
    I've been coding for over 20 years and I've seen some beauties, and I'm sure others have as well. Like the guy who put about 500 lines of Java in one try-catch. I'd suggest they screen their contributors better. Use a carrot and very gentle stick approach, and be certain to encourage coders to think "what could happen here and how should I handle it?" whenever writing.
  • Testing (Score:3, Insightful)

    by Taral ( 16888 ) on Thursday October 25, 2001 @06:09PM (#2480578) Homepage
    The real problem, IMHO, is that nobody likes to do the intensive testing that is necessary to get a program to be truly robust. We do it here at IBM, and I promise you -- it's not something I would do if I weren't being paid to do it.
    • The main difference between a great systems administrator and a technically competent sysadmin is paranoia.

      A great sysadmin would cut out their own heart before operating without known good backups. A great sysadmin would chew their own arm off before putting something into production without testing it first in a development environment. A great sysadmin *always* has a backout plan.

      And how does a lowly admin reach this amazing level of greatness, you ask?

      Admins get paranoid after making hideous, terrible mistakes that immediately result in Bad Things Happening.

      I have personally: killed the email server for 2 days...shut down distribution for the world's largest distributor of widgets (every Thursday for 3 weeks)...destroyed all connectivity (voice and data) to the world for 12 hours...hosed the upgrade on a 700GB Oracle database (and our backups were no good). And any semi-experienced administrator will have, at minimum, two stories that are at least this bad (like my friend who shut down trading at Fidelity for a day).

      And for every one of these instances, I immediately felt the wrath of: my manager, my manager's manager, other people's managers, other people who were affected, stray people wandering by my cube who weren't affected... I also became a part of the "mythical sysadmin storybook"--"I once worked with this guy, and (you won't believe this) he..."

      I submit the hypothesis that: generally, most developers are not subject to this type of immediate and extremely negative form of feedback for their mistakes. Therefore it takes a developer a long time to develop an aversion reflex that conditions them to do "the right thing -- error handling, code documentation" instead of doing "the easy, interesting, enjoyable and sexy thing -- making spiffy algorithms, writing tight code".

      Drifting into another analogy, error handling is like code documentation. Why do most developers get good (and a little obsessive) about documenting code? Because they finally spent some years trying to maintain someone else's tight, sexy code that is virtually incomprehensible.

      So, my point is, developers take a long time to viscerally learn the need for good error handling by repeatedly getting whacked on the head for lack of error handling. It's like evolution in action.
  • by SanLouBlues ( 245548 ) on Thursday October 25, 2001 @06:12PM (#2480597) Journal
    As a professional programmer I adhere to a strict stylesheet which I think the Open Source community may appreciate a copy of:

    main( arguments ) {
        try {
            --code goes here--
        } catch( exception ) {
            printout "I'm sorry, to do that you need our $50k/year support plan.\nThank you!"
        }
    }

    No need to thank me.
  • Error handling... (Score:5, Interesting)

    by mcpkaaos ( 449561 ) on Thursday October 25, 2001 @06:14PM (#2480606)
    ...is only as solid as the engineer behind it (and the design behind him/her). A poor design often results in a flaky system, difficult to implement and nearly impossible to predict. That, in turn, can result in very thin error handling. Whether or not a product is commercial has nothing to do with it. The only argument for that could possibly be that in many cases, more careful attention (in the form of testing and code reviews) is taken when a product is a revenue generator (or anything that will affect the perception of the quality of a company's engineering ability).

    Ultimately, if the engineer (or team of engineers) is inexperienced, error handling will be weak and error recovery nearly nonexistent. However, a more senior engineer will generally start from error handling on up, making sure the code is robust before diving too deeply into business logic. The time taken for unit testing plays an especially large role here. The more time spent trying to break the code (negative test cases), the more likely you will have a system that has been revised throughout development to have rock-solid error handling/reporting/recovery.

    [McP]KAAOS
  • which is why Open Source is required. So we can all see exactly where the code stinks and we can fix it. Too bad legacy development models don't provide such advantages. Wouldn't it be cool if you could just buy quality software?
  • by victim ( 30647 ) on Thursday October 25, 2001 @06:21PM (#2480645)
    I was just re-re-reauthenticating my Cubase installation. The key CD is now scratched, which hangs the authenticator; that forced a quite ungraceful reboot and corrupted my hard drive. (Perhaps a $150 upgrade will help. I'll never know.)

    The last time I used Word a drive filled during a save operation and left me with just a mutilated copy of the original file. (I will not use it again.)

    My HP PSC 750xi software informs me every morning that its controlling software was exploded and I should reboot the host computer. (I'll wait for the OS-X drivers. If they are still bad the PSC goes out the door.)

    The most amazing part is that this state of affairs doesn't surprise me. If my refrigerator intermittently defrosted and melted ice cream all over the kitchen, I'd be ticked. If my car mysteriously dies at stop signs, I get it fixed.

    Programmers have managed to beat down everyone's expectations to the point where half-assed is pretty good.

    The only way I see to fix it is for consumers to refuse to buy flawed products, or legislators to pass laws allowing redress for flawed products.

    I don't think either is likely.

    I now use OSS for my mission critical work and fix what needs it.

    • by Carnage4Life ( 106069 ) on Thursday October 25, 2001 @07:04PM (#2480859) Homepage Journal
      One of my friends counters arguments about software being too sloppy with the point that there is practically no other field where a product is designed to be used in such a wide variety of ways and still expected to be robust. For instance, let's use a car as an analogy to your complaints.

      The key CD is now scratched, which hangs the authenticator; that forced a quite ungraceful reboot and corrupted my hard drive. (Perhaps a $150 upgrade will help. I'll never know.)

      How is a programmer expected to deal with the CD being scratched? Does your car still work if the transmission is damaged or half the engine has been riddled with bullet holes?

      The last time I used Word a drive filled during a save operation and left me with just a mutilated copy of the original file. (I will not use it again.)


      Again, a very unexpected and unnatural scenario. How well do cars function when they run out of fuel?

      The most amazing part is that this state of affairs doesn't surprise me. If my refrigerator intermittently defrosted and melted ice cream all over the kitchen, I'd be ticked. If my car mysteriously dies at stop signs, I get it fixed.


      But how well would your refrigerator react if you treated it shoddily, such as by leaving it outdoors intermittently or disconnecting and reconnecting the power several times a day?

      Now, I'm not trying to excuse sloppy software development but the fact of the matter is that software is constantly expected to work perfectly under situations completely outside its specifications yet we don't expect this from other items or appliances that we use.
      • You hit it straight on the head - I wish I had mod points because you would definitely go up there. I was about to post in response to that guy saying mostly the same things you said, except much less well-spoken.

        Programmers are people, people make mistakes. Users are people, people make mistakes. Most often I've found problems arising from stable software coming from people using the software in a way it shouldn't be used. Using a full disk and complaining because it saved a mangled version of your original is just asinine. You are saving OVER a file. It is just extra code and bloat to ensure that your drive still has enough space every time you do a file save operation. I could understand being upset if he was doing a Save As operation.. but get real, it's his fault - not the software's.

        This mid-afternoon rant session has been brought to you by a slow day of engineering as my current project nears delivery.
      • I agree with you that software in general is a lot more complex, and used in a lot more unexpected ways, than something like a car.

        OTOH, there is such a thing called graceful degradation -- that is, if you push the limits of the software, it shouldn't just suddenly barf and die on you, but degrade gracefully. Too much code I've seen (both open and non-open source) assumes too much -- and dies badly when the assumptions fail.

        It is possible and not overly difficult to design software such that it degrades gracefully. Sad to say, sloppy programming (programmers), deadline pressure, or disinterest in handling error conditions dominate the world of software. Not many would put in the extra work to make a program degrade gracefully, because it doesn't have a very visible effect -- until things start to fail. And too many programmers have this "test only the cases that work" syndrome.

      • One of my friends counters arguments about software being too sloppy with the point that there is practically no other field where a product is designed to be used on such a varying degree of ways and expected to still be robust.
        But there is no other field where it is so straightforward to be robust: just check for error codes. I think that cancels the flexibility issue, and so software shouldn't be any less reliable than any other device.
        The last time I used Word a drive filled during a save operation and left me with just a mutilated copy of the original file. (I will not use it again.)
        How well do cars function when they run out of fuel?
        I don't know about yours, but mine doesn't stop dead and leave the passengers mutilated.
        Now, I'm not trying to excuse sloppy software development...
        Yes you are.
        but the fact of the matter is that software is constantly expected to work perfectly under situations completely outside its specifications yet we don't expect this from other items or appliances that we use.
        No, but I think it's fair to expect that software can tell when it is beyond its specification and fail gracefully.

        If you don't like checking the result of malloc, then write a function that does it for you. When you run out of memory, do your best to exit gracefully. It's not rocket science. It's just tedious, so people don't like it.

        It's also hard to test, because some error conditions may be hard to reproduce. We may be able to take a hint from the hardware crowd, who have been using "design for testability" for some time now.

      • I think that you're trying to link together things that don't really go together in the real world.

        A computer should deal with a scratched CD by going "Uh, garbage data. Don't like that. Shut down system gracefully, don't try to find out the sound of heads on platters."

        If the transmission is screwed up in your car, you don't expect it to crack the drivetrain.

        EVERY piece of software should be checking for sufficient disk space during a WRITE operation. Only fools and deities do otherwise. You shouldn't blindly try to overwrite the old file with the new and hope it works unless you've got a damn good reason.

        When your car runs out of fuel, you certainly don't expect it to ruin the engine. Give it fuel (or disk space, as it were), and it's happy again.

        There's a difference between dying heroically and taking everyone with you. I don't mind if a program barfs on occasion because of slight variance in the phases of the moon. The software industry is still young and needs to learn more lessons from the engineering industry. I mind when it barfs and takes my data along with it.
      • ardax makes the point above, but is only scored 1, so...

        First, about your analogies...
        If I wear out the door key to my car, the car should not burst into flames when I try to open the door.

        If my car runs out of fuel, I expect that after rectifying that little problem (and bleeding the injectors) it will be just like new. I do not expect that it will ruin my tires.

        And yes, I have kept my refrigerator outdoors. I kept it on the front porch for two months while the house was being renovated 10 years ago. It worked just fine. It is 30 years old now (that's 15 PC generations to you young whippersnappers; Moore's law says the new fridges should be 1,000,000 times colder now :-) and I expect it to continue running indefinitely. About every 5 years I put a drop of oil on the fan shaft behind the freezer to keep it from squealing. Software has a ways to go.

        About the cubase incident...
        Yes, the CD is scratched. I expect that I won't be able to re-authorize my copy of the software, but don't ruin ALL the data on my hard drive! (It's actually worse. It was my wife's laptop. You do NOT want to have to tell my wife that you just wiped out her laptop.)

        About Word...
        Destroying the on-disk copy of a document before successfully writing out the new copy is just plain stupid. Particularly on a Mac, where there is a special file system function to swap two files. You write the new copy under a fake name, swap it atomically (even over file servers) with the original file, then delete the fake-named file (which now contains the old data). No one gets hurt in error conditions, and no one can ever have bad timing and read a partially written file off the file server. Life is good.

        The third case (the PSC) you don't mention, but it isn't really a case of graceful degradation. It's just an irritating bug. Honestly, I'd dump the device because of the irritation, but it actually feeds card stock out of its paper tray! A rare quality in a printer.

        I suppose the more explicit point I should have made is that bad things are going to happen to software and it requires effort from the programmer to deal with it. Sometimes just a tiny bit of effort. Cubase performed so badly with a bad CD that I suspect they never tested it. They write about it in their documentation, but they probably didn't test it. The Word example is just careless programming which could have been trivially avoided if the programmers understood the platform's file system calls.

        How about the cost? I estimate that it probably doubles the engineering effort to handle the exception cases to a degree that would cover the incidents I note above. In the calculus of software development, the benefits do not outweigh that cost.

  • What's the point... (Score:2, Interesting)

    by Coniine ( 524342 )
    of handling errors that should never happen? You just double the size of your code, cause schedules to be missed, make maintenance more difficult, and increase the probability of a grotesque coding error. I expected more macho stuff from the Slashdot audience, not namby-pamby whimpering! Sheesh! Welcome to the real world, get a thicker skin, man!

    8)

    On a serious note : I've written commercial and non-commercial code. Sometimes I'm obsessive about completeness, sometimes I'm pragmatic. No point in generalizing about OSS vs. commercial.
  • by TV-SET ( 84200 ) <leonid&mamchenkov,net> on Thursday October 25, 2001 @06:23PM (#2480658) Homepage Journal
    I guess if /. killed the site, it should mirror it :)
    Here is a select-n-middlemousebuttonclick(with my formatting):

    Title: Open source programmers stink at error handling.

    Outline: Commercial programmers stink at it too, but that's not the point. We should be better.

    Summary: Why are we subjected to so many errors? Shouldn't open source be better at this than commercial software? Where are the obsessive-compulsive programmers? Plus, more reader PHP tips. (1,400 words)

    Author: By Nicholas Petreley

    Body: (LinuxWorld) -- Thanks to my very talented readers I've been able to start almost every recent column with a reader's PHP tip. I'm tempted to make it a regular feature, but with my luck the tips would stop rolling in the moment I made it official. So I want you to be aware that this week's tip is not part of any regular practice. It is purely coincidental that PHP tips appear in column after column. Now that I've jinx-proofed the column, I'll share the tip.

    Reader Michael Anderson wrote in with an alternative to using arrays to pass database information to PHP functions. As you may recall from the column Even more stupid PHP tricks, you can retrieve the results of a query into an array and pass that array to a function this way:

    <?PHP
    $result = mysql_query("select name, address from customer where cid=1");
    $CUST = mysql_fetch_array($result);
    do_something($CUST);

    function do_something($CUST) {
        echo $CUST["name"];
        echo $CUST["address"];
    }
    ?>

    Michael pointed out that you can also retrieve the data as an object and reference the fields as the object's properties. Here's the above example rewritten to use objects:

    <?PHP
    $result = mysql_query("select name, address from customer where cid=1");
    $CUST = mysql_fetch_object($result);
    do_something($CUST);

    function do_something($CUST) {
        echo $CUST->name;
        echo $CUST->address;
    }
    ?>
    I can't help but agree with Michael that this is a preferable way to handle the data, but only because it feels more natural to me to point to an object property than to reference an element of an array using the string name or address. It's purely a personal preference, probably stemming from habits I learned using C++.

    Subtitle: OCD programmers unite

    Nothing could be a better segue into the topic I had planned for this week. I'm thinking about starting a group called OLUG, the Obsessive Linux User Group. Although I know enough about psychology to know I don't meet the qualifications of a person with full-fledged OCD (Obsessive-Compulsive Disorder), I confess that I went back and rewrote my PHP code to use objects instead of arrays even though there was no technical justification for doing so.

    Certain things bring out the OCD in me. Warning messages, for example. It doesn't matter if my programs seem to work perfectly. If a compiler issues warnings when I compile my code, I feel compelled to fix the code to get rid of the warnings even if I know the code works fine. Likewise, if my program generates warnings or error messages at run time, I feel driven to look for the reasons and get rid of them.

    Now I don't want you to get the wrong impression. My PHP and C++ code stand as testimony to the fact that my programming practices don't even come within light years of perfection. But just because I do not live up to the standards I am about to demand isn't going to stop me from demanding them. It's my right as a columnist. Those who can, do. Those who can't, write columns.

    I'll be blunt. Open source programmers need to stop being so darned lazy about error handling. That obviously doesn't include all open source programmers. You know who you are.

    If you want a demonstration of what I mean, start your favorite GUI-based open source applications from the command line of an X terminal instead of a menu or icon. In most cases this will cause the errors and warnings that the application generates to appear in the terminal window where you started it. (There are exceptions, depending on the application or the script that launches the application.)

    Many of the applications I use on a daily basis generate anywhere from a few warnings or error messages to a few hundred. And I'm not just talking about the debug messages that programmers use to track what a program is doing. I mean warning messages about missing files, missing objects, null pointers, and worse.

    These messages raise several questions. Doesn't anyone who works on these programs check for such things? Why do they go unfixed for so long? Are these problems something that should be of concern to users? Worse, what if these messages appear because of a problem with my installation or configuration, and not because the program hasn't been fully debugged? But even if it is my installation that is broken, shouldn't the application report the errors? Why do I have to start the application from a terminal window to see the messages?

    Subtitle: Getting a handle on errors

    At first I wondered if this was a problem that you would be more likely to find when developers use one graphical toolkit rather than another. But I see both good and bad error handling no matter which tools people use. For example, the GNOME/Gtk word processor AbiWord has been flawless lately. Not a single warning or error message appears in the console. It's possible that AbiWord simply isn't directing output to the console, but I'm guessing that it's simply a well-tested and well-behaved application.

    On the other hand, GNOME itself has been a nightmare for me lately. At one point I got so frustrated that I deleted all the configuration files for all of GNOME and GTK applications in my home directory in disgust, determined never to use them again. When I regained my composure and restarted GNOME with the intent of finding the cause of the problems, the problems had already disappeared. Obviously one or more of my configuration files had been at fault. Which one, I may never know, because GNOME or some portion of it lacked the proper error handling that should have told me.

    In this case I was lucky that the problems were so bad I lost my temper and deleted the configuration files. In most cases, the applications appear to function normally. Aside from being ignorant of any messages unless you start the application from a terminal, there's no way of knowing why the warnings exist, or if they are cause for concern. The warnings could be harmless, or they could mean the application will eventually crash, corrupt data, or worse.

    Subtitle: Examples

    Just so you know I'm not making this up, here are some samples of the console messages that appeared after just a couple of minutes of toying with various programs. By the way, did you know you can actually configure the Linux kernel from the KDE control panel? Bravo to whoever added this feature. Nevertheless, when I activate that portion of the control panel, I get the message:

    QToolBar::QToolBar main window cannot be 0.

    Is there supposed to be a toolbar that isn't displayed as a result? I may never know.

    The e-mail client sylpheed generates this informative message after about a minute of use:

    Sylpheed-CRITICAL **: file main.c: line 346 (get_queued_message_num): assertion `queue != NULL' failed.

    The Ximian Evolution program generates tons of warnings, but most are repetitions. They begin with the following:

    evolution-shell-WARNING **: Cannot activate Evolution component -- OAFIID:GNOME_Evolution_Calendar_ShellComponent
    evolution-shell-WARNING **: e_folder_type_registry_get_icon_for_type() -- Unknown type `calendar'
    evolution-shell-WARNING **: e_folder_type_registry_get_icon_for_type() -- Unknown type `tasks'

    The KDE Aethera client generates even more warning messages than Evolution, but many of them are simply debug messages about what the program is doing. By the way, I finally figured out why I couldn't login to my IMAP server with Aethera. The Aethera client couldn't deal with the asterisks in my password. I could log in after I changed my password, but I still can't see my mail. The program simply leaves the folder empty and says there's nothing to sync. Here are just a few of the countless warnings I get from Aethera, including the sync message.

    Warning: ClientVFS::_fact_ref could not create object vfolderattribute:/Magellan/Mail/default.fattr
    Reason(s): -- object does not exist on server
    Warning: VFolder *_new() was called on an already registered path
    clientvfs: warning: could not create folder [spath:imap_00141, type:imap]
    RemoteMailFolder::sync() : Nothing to sync!

    The spreadsheet Kspread reports these errors all the time, even though what I'm doing has nothing to do with dates or times:

    QTime::setHMS Invalid time -1:-1:-1.000
    QDate::setYMD: Invalid date -001/-1/-1

    The e-mail client Balsa popped up these messages just moments after using it:

    changing server settings for '' ((nil))
    ** WARNING **: Cannot find expected file "gnome-multipart-mixed.png" (spliced with "pixmaps") with no extra prefixes

    The Gnumeric spreadsheet only reported that it couldn't find the help file, as shown below:

    Bonobo-WARNING **: Could not open help topics file NULL for app gnumeric

    Many of these problems could easily have been handled more intelligently. For example, Gnumeric could have asked for the correct path to the help file, perhaps adding an option so a user can decide not to install the help files and disable the message. Unless GTK and Bonobo are a lot more complicated than they should be, it should be easy to create a generic component for handling things like this and then use the component to handle all optional help files as a rule.

    The only conclusion I can draw is that, like most commercial software developers, many open source programmers are just plain lazy about proper error handling. But we're supposed to be better than that, and it's time we started to live up to the reputation. I realize that most of these programs are works in progress. But good error handling is not something that should be left for last. It should be part of the development process. Although I may not practice it myself, I'm not the least bit ashamed to preach it.
  • by ryanwright ( 450832 ) on Thursday October 25, 2001 @06:23PM (#2480663)
    Nick Petreley is a moron. Intelligent people don't make blanket statements like "Open source programmers stink at error handling." Next thing you know, he'll be telling you "Closed source programmers use more descriptive variables." How the hell does he know?

    Programming traits - just like preferences for pizza toppings, frequency in bathing and type of pr0n - vary from programmer to programmer. Some implement proper error handling, others couldn't care less. It doesn't matter whether they're working on an open or closed source project. If the open-source programmers all traded places with the closed-source programmers, you'd have the same ratios of proper vs. improper error handling (although the traffic from open-source-programmers.com to goatse.cx would probably spike).
    • "Intelligent people don't make blanket statements..."

      Wait for it...wait for it...ahhhhh!

    • by egomaniac ( 105476 ) on Thursday October 25, 2001 @07:35PM (#2481024) Homepage
      News flash: Technology pundit seemingly insults open source, Slashdot up in arms. None of them actually read the article. Story at 11.

      The article does not say "open source doesn't handle errors as well as closed source". What the article does say is "like most commercial software developers, many open source programmers are just plain lazy about proper error handling. But we're supposed to be better than that...".

      I don't see a problem with this statement. The fact is, most open-source software sucks donkey balls. Petreley is merely saying it's time to put your money where your mouth is -- if you want open source to be considered better than closed source software, it better stop being so danged flaky.
    • LinuxWorld is having issues, so I can't read the article, but I remember Petreley from when I used to get InfoWorld magazine.

      He's the stereotypical technology pundit. He learns just enough about technology to have an uninformed opinion about it.
      The worst thing is that we on the internet have truckloads of people like him. Every mailing list, newsgroup, web log, IRC channel, or any other group in which people are trying to get things done will have a crew of wankers spouting their opinions with no attempt to actually contribute anything useful.

      What really burns me about pundits is that they're getting paid to do what a couple million monkeys on the internet do for free.

      Take Petreley. One time, he wrote an article about how maverick programmers don't write good code. I guess I can believe that. Then he went on to say that all brilliant programmers are mavericks, and Microsoft etc all hire them so they'll write bad code and people will have to buy bug fixes. Um, right. He then finished off by claiming that he used to be an absolutely outstanding programmer and that he had to quit because he was so amazingly good that writing decent code wasn't fun for him.

      He has, to the best of my knowledge, never actually contributed anything at all even remotely useful to Free Software, or computing in general. He's even worse than Fred Langa, the guy who helped invent ethernet in 1976, then spent the rest of his career punditing, developing more and more bizarre opinions as his practical knowledge became antiquated.

      So here's a message to Petreley: do something useful, anything. If all you have to contribute is your opinion, then go home. Free Software writers are mostly volunteers; we don't have to put up with your wanking. If you have a problem with a program, file a fucking bug report. Actually, if you're such an amazing programmer, SHOW US SOME CODE! I don't care how much InfoWorld pays you; to us, your opinions are worthless. So do something useful or I'll have to dig out my cluestick and use it to bash you into a profession that benefits humanity in some conceivable way.
  • stop panicking! (Score:2, Interesting)

    by DeadPrez ( 129998 )
    Out of all the open source software I use, my biggest complaint is with Linux and how freaking hard it is to swap a hard drive to a new machine. I can only imagine the insults that will be thrown my way, but with 98, or even NT/2000, nine times out of ten I will have no problem with this. Sure, it takes about 30 minutes of clicking "yes, install new hardware", but it works (usually). Under Linux: can't load root fs, goto panic. Grr, that bugs me like nobody's business.

    I am sure there is some semi-painful way to get around this but should I really have to? If you ask me, the kernel should not panic at this "error" and should recognize it, prompt you and try to solve it (probe the new hardware and load the correct module(s)). Maybe some distros are better than others (and I shouldn't be placing this "blame" on the kernel team).

  • Over the past few years I've used several OSS programs in pre-release versions, and the tendency I observed was for the programmers to provide "last gasp" file saves to keep you from losing work when the program crashed. For instance, I never lost a keystroke when using early versions of LyX.

    I don't recall ever seeing this in a commercial product, though I haven't used any commercial products to speak of lately, so perhaps the state of the art has changed. I sure used to lose a lot of work under commercial software, though.
  • See my .sig for linkage!
  • It seems a lot of open source programs do in fact have little error handling. Most open source programs seem to focus on functionality, rather than usability. It gets you from point A to point B, and doesn't give you much help in between. If the user is not a master programmer, any errors usually end up being cryptic and nonsensical. It seems that a lot of commercial software has decent error recovery, and prevents user error effectively. Now I know there are many exceptions, but I think it simply has to do with the fact that commercial programmers get paid to do their work, and competition forces companies to put out products that are competitively usable.
  • Exceptions are key (Score:3, Informative)

    by ftobin ( 48814 ) on Thursday October 25, 2001 @06:37PM (#2480746) Homepage

    Exceptions are mandatory for good programming, period. If the language you are using doesn't support exceptions (C, Perl, etc), you are going to have problems. Exceptions make sure that if an error occurs, and you aren't aware of it, your program dies, and doesn't go on its merry way, causing a security hole/unstable software.

    Perl's hack at exceptions using 'die' doesn't cut it; one important thing about implementing exceptions is that your base operations (e.g., opening files, and other system functions) need to raise exceptions when problems occur. If this doesn't happen, you're only going to struggle in vain to implement good, correct code.

    Exceptions are a primary reason I've moved from Perl to Python. Python's exception model is standard and clean. Base operations throw exceptions when they encounter problems. And my hashes no longer auto-vivify on access, thank goodness. Auto-vivification on hash access is probably one of the principal causes of bad Perl code.

    • Interesting. I've never run across an example of using exceptions that can't be done more cleanly, quickly, and efficiently by simply using (and checking) result codes.

      To show that there's been an error, I'd much rather do this:

      return undef;

      than this:

      raise Exception.Create('This didn''t work');

      And to check for an error, I'd MUCH rather do this:

      die unless defined(do_something);

      than this:

      try
        do_something;
      except
        on e: Exception do
          exit;
      end;

      In fact, the programmers that I've seen use exceptions tend to be less careful than those that simply check result codes.

      steve
      • Interesting. I've never run across an example of using exceptions that can't be done more cleanly, quickly, and efficiently by simply using (and checking) result codes.

        The problem with result codes is that you can't propagate the problem up to the level of scope that should be dealing with it. For example, imagine you have a GUI program. At some point, it needs to open "foo.txt", but fails. Since you're a good software engineer, you've well-separated your GUI code from logic code. The GUI needs to display an error message, but if you only check error calls, the only part that knows about the error that has happened is way down in the logic code, which has no idea how to tell the user. And propagating 'undef's all the way up through the code is uncool. Especially since return values should not be used to indicate errors; they should be used for return values.

        With an exceptions model, you can let the logic code just propagate the error up to the GUI, who can then display a message to the user. It's a very clean, elegant system.

        try do_something; except on e: Exception do exit; end;

        (Sorry for the lack of prettiness; Slashdot's input mechanism doesn't allow <pre> tags.)

        This is not how you would handle it using exceptions; you would merely say "do_something". Period. If "do_something" threw an exception, and it wasn't caught, it propagates up and the program dies automatically.

      • You think checking return codes is the solution? Well, it is, but at a cost.

        Exercise for /. readers: add error checks to the following C function. 'return' and exception-handling pseudocode allowed:


        int allocate_3(void){
        int *p1, *p2, *p3 ;

        p1 = malloc(SOME_NUMBER*sizeof(int)) ;
        p2 = malloc(SOME_NUMBER*sizeof(int)) ;
        p3 = malloc(SOME_NUMBER*sizeof(int)) ;

        /* Here we do something with p1, p2, p3 */

        free ( p1 ) ;
        free ( p2 ) ;
        free ( p3 ) ;
        return 0 ;
        }

        Let the game begin...
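For the record, one conventional answer to the exercise is the goto-cleanup idiom, sketched below. SOME_NUMBER is given a value here only so the sketch compiles; the original post leaves it unspecified.

```c
#include <stdlib.h>

#define SOME_NUMBER 1024   /* assumption: the original leaves this undefined */

int allocate_3(void)
{
    int *p1 = NULL, *p2 = NULL, *p3 = NULL;
    int rc = -1;           /* assume failure until everything succeeds */

    if ((p1 = malloc(SOME_NUMBER * sizeof(int))) == NULL)
        goto cleanup;
    if ((p2 = malloc(SOME_NUMBER * sizeof(int))) == NULL)
        goto cleanup;
    if ((p3 = malloc(SOME_NUMBER * sizeof(int))) == NULL)
        goto cleanup;

    /* Here we do something with p1, p2, p3 */
    rc = 0;

cleanup:
    /* free(NULL) is a no-op, so this is safe whichever malloc failed */
    free(p3);
    free(p2);
    free(p1);
    return rc;
}
```

Each check costs one line and the cleanup appears exactly once, which is roughly what a try/finally block buys you in languages with exceptions.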

      • One thing that really bugs me about most programming languages is that they only allow one return value in their most natural idiom. So you get these stupid hacks where some settings of the returned value mean errors and some are useful results, or you have to define a new named data structure just for the return value of this one function, or you end up having to mix output variables in with the inputs of a function.

        This is one thing I like about Forth-style languages, where it's just as natural for a function to return multiple results as to receive multiple arguments, letting you do either:
        A B / on_error{ log_error cleanup exit }else{ use_result } return
        or
        A B / on_error{ store_exception drop_result push_unhandled_exception_errcode }else{ use_result } return
        or
        A B / drop_error use_result return

        Unlike with exceptions, the possibility of an error isn't hidden away somewhere; if you ignore it, or hand it down to reach exception handling code, you have to do so right there and then, explicitly at every step. Actually, that's a general plus: with a stack language, you have to explicitly dispose of everything, which makes it harder to ignore return values, and impossible to write programs without knowing whether a function returns anything ("What do you mean it can return an error code? I thought it was void!").
    • Exceptions are mandatory for good programming, period. If the language you are using doesn't support exceptions (C, Perl, etc), you are going to have problems.

      Well, you haven't seen Error.pm yet. It implements exceptions for Perl.

      I'm not totally convinced that exceptions are necessary for good programming. A good programmer should know how to do error handling. It's nice to be able to call upon it when you need it but it should not be forced upon you, kind of like commenting your code.

      Of course I love Perl and believe TMTOWTDI.
      • Well, you haven't seen Error.pm yet. It implements exceptions for Perl.

        As I stated in my post, having high-level mechanisms for exceptions doesn't cut it. Your base operations must throw them, or else you've lost out on 50% of the reasons for having exceptions. Opening a non-existent file with open() won't raise an exception; this is a problem.

        I'm not totally convinced that exceptions are necessary for good programming

        Exceptions are not necessary for good programming, but they are necessary for good software engineering.

    • Exceptions make sure that if an error occurs, and you aren't aware of it, your program dies, and doesn't go on its merry way, causing a security hole/unstable software.

      You mean like that Ariane rocket that blew up when its double-redundant computer system was halted because of an utterly irrelevant uncaught exception? Yeah, that's definitely a superior error-handling philosophy.

      Aside from the conceptual problems of what are essentially COMEFROM statements with scope management, there's no reason to assume that halting the program is better than just allowing it to run.
      • You mean like that Ariane rocket that blew up when its double-redundant computer system was halted because of an utterly irrelevant uncaught exception? Yeah, that's definitely a superior error-handling philosophy.

        I'm not familiar with the rocket you describe, but yes, it is a superior error-handling philosophy. Imagine if there was an unchecked error, and the rocket, instead of detonating, landed in civilian housing? That's precisely what not using exceptions allows for: programs that become destructive because of lack of error management.

        Aside from the conceptual problems of what are essentially COMEFROM statements with scope management, there's no reason to assume that halting the program is better than just allowing it to run.

        That's like saying there's no reason to assume knowing about a bug is better than just allowing a program to go on its merry way. Uncaught bugs are the cause of 99% of the security holes out there. It's always better to know when there is a problem.

        • I'm not familiar with the rocket you describe, but yes, it is a superior error-handling philosophy. Imagine if there was an unchecked error, and the rocket, instead of detonating, landed in civilian housing?

          Why would you assume the rocket was intentionally detonated by the computer? Its computers went down and it went completely out of control. It was only blown up after it broke apart because it happened to go into a spin. There is no upside to this computer failure.

          You call blowing up a commercial satellite launch vehicle non-destructive? If this error had been ignored, the rocket would not have been affected by it; it was an utterly irrelevant mathematical value overflow error in a program that only did anything before launch.

          This program became destructive because of the "error management." In particular, the error management philosophy that halting a suspicious system is always safer than allowing it to run.

          The point you seem to have missed is that halting the program is often more destructive than ignoring the error. Data loss, control loss, vital services suspended, etc.

          That's like saying there's no reason to assume knowing about a bug is better than just allowing a program to go on its merry way. Uncaught bugs are the cause of 99% of the security holes out there. It's always better to know when there is a problem.

          I'm sure the European Space Agency found it worth every penny of the estimated half-billion dollars lost to find this otherwise irrelevant bug. After all, it's always better to know, whatever the cost of halting the system, right?
    • If the language you are using doesn't support exceptions (C, Perl, etc), you are going to have problems. Exceptions make sure that if an error occurs, and you aren't aware of it, your program dies, and doesn't go on its merry way, causing a security hole/unstable software.

      Unless it's an uninitialized-memory error or a buffer overrun that overwrites some other program variables, in which case a C++ program will still keep going on its merry way without throwing an exception, causing difficult-to-duplicate and hard-to-trace bugs.

      If it's possible to check for the error at all, then anything that you can implement with exceptions you can implement without exceptions (though I agree that exceptions are a _neater_ way of doing it).

      If your program can't check for the error (as is common for memory errors without extensive and slow wrapping on memory accesses), then exceptions won't be triggered and you're still screwed.

      [Aside: You can propagate error codes up between levels either by making error codes bit vectors and masking subcall errors on to the parent call's failure code, or by implementing your own error stack (if you anticipate using deep recursion). Messy, so exceptions are still _preferable_, but it can still be _done_ without exceptions. Almost as cleanly, if you wrap error-handling helper functions nicely.]
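A minimal sketch of the return-code propagation described above, with invented layer names and error codes: each level checks its callee and hands the code upward, so only the top level (a stand-in for the GUI) decides how to word the message for the user.

```c
#include <stdio.h>

/* Invented error codes and layer names, for illustration only. */
enum app_error { APP_OK = 0, APP_ERR_OPEN, APP_ERR_PARSE };

/* Lowest layer: knows exactly what went wrong, not how to tell the user. */
enum app_error load_file(const char *path)
{
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return APP_ERR_OPEN;
    /* ... parsing would go here, possibly returning APP_ERR_PARSE ... */
    fclose(fp);
    return APP_OK;
}

/* Middle layer: checks its callee and hands the code straight upward. */
enum app_error load_document(const char *path)
{
    enum app_error err = load_file(path);
    if (err != APP_OK)
        return err;
    return APP_OK;
}

/* Top layer (stand-in for the GUI): the only place that words messages. */
const char *describe_error(enum app_error err)
{
    switch (err) {
    case APP_OK:        return "ok";
    case APP_ERR_OPEN:  return "could not open file";
    case APP_ERR_PARSE: return "could not parse file";
    }
    return "unknown error";
}
```

This is the messier-but-workable path the aside describes; exceptions do the upward hand-off implicitly, while here every intermediate layer must write the check by hand.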
    • While this is not exactly on topic, exception-like behavior in Perl can be handled using the eval()/die()/$@ syntax.

      Certainly, exception handling in C++ or Python is much more efficient and elegant.

      Example:

      #!/usr/bin/perl
      eval { test(3) };
      if ($@) {
          print "Whoops: $@\n";
      }

      sub test {
          my $bob = shift;
          if ($bob == 1) {
              print "Happy\n";
          } else {
              die("Failure testing \$bob");
          }
      }

      • You do realize, hopefully, that the die syntax makes it very hard to selectively catch exceptions. If I have a subroutine that does some array manipulations, and opens a file, I might want to only catch the IOError (file opening error) at the level I'm on, and pass the ArrayIndexError on up. eval() can't handle that well.

        CPAN has some modules that hack some exceptions, but it's all very, very unclean. Unclean and unreadable code can lead to just as many errors.

  • When you start Outlook Express, it often displays the password entry box before it has finished drawing the screen. Enter your password before the redraw has finished and Lookout locks up. (Netscape 4.7x has a similar problem, too.)
  • by TrixX ( 187353 ) on Thursday October 25, 2001 @06:39PM (#2480753) Homepage Journal

    Most languages make error checking very hard. In particular, C and Perl, two of the most-used languages in OSS development, lack good mechanisms for sane error checking. I might explain more, but it is better explained in this document [usc.edu].

    btw, the document is part of a library that allows nicer error checking in C, called BetterC [usc.edu]. (Yes, this is a plug, I've participated in the development).

    It is modelled on Eiffel's "Design by contract", a set of techniques complemented with language support to make error checking a lot easier and semiautomatic. "Design by contract" has been described as "one of the most useful non-used engineering tools".
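Contract-style error handling in C is typically built on setjmp/longjmp to simulate raising and catching errors. The sketch below illustrates that underlying mechanism only; it is not BetterC's actual API, and the error code 42 is an arbitrary example.

```c
#include <setjmp.h>

/* NOT BetterC's actual API; just the setjmp/longjmp mechanism that
 * such libraries build their "raise an error in C" support on. */
static jmp_buf error_context;

static void risky_operation(int should_fail)
{
    if (should_fail)
        longjmp(error_context, 42);   /* "throw" error code 42 */
    /* normal work would happen here */
}

/* Returns 0 on success, or the "thrown" error code. */
int run_guarded(int should_fail)
{
    switch (setjmp(error_context)) {  /* 0 on entry, 42 after longjmp */
    case 0:
        risky_operation(should_fail); /* the "try" body */
        return 0;
    case 42:
        return 42;                    /* the "catch" branch */
    default:
        return -1;
    }
}
```

Note that setjmp is used as the whole controlling expression of the switch, which is one of the few contexts where the C standard allows it.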

  • by toupsie ( 88295 ) on Thursday October 25, 2001 @06:41PM (#2480767) Homepage
    The open source community should take the same stance as closed source corporations when it comes to bugs. They are not really bugs but undocumented features!
  • Yeah? My code might not handle errors well but your server doesn't handle a load well and at least my code will never get /.'ed.

  • Check your return values!! As simple as it sounds, so many people just don't do this. Everyone just assumes that everything will go ok. Check the return, then print out an error to stderr. To be more helpful use this define just before you print your error message to help find where the error was and debug it.
    #define ERR_LOCATION fprintf(stderr, "ERROR in File: %s Line: %d: ", __FILE__, __LINE__)

    Then use it like so:
    ERR_LOCATION;
    fprintf(stderr, "foo returned %d.\n", ret);

    I believe that's the correct code.
    • The difference, speaking from personal experience of one datapoint, is that the `commercial' world employs monkeys who say "oh yeah, On Linux gethostbyaddr_r returns -2 in this case" whereas in the free world, in the next release, libc6 (note, not "Linux") will return a different code, and in the real world, people will not be so stupid as to hard-code the numbers by hand when there are perfectly good symbolic Esomething constants to be used instead.
    • It's even easier than that - just use the built-in "assert()" macro. This is a good thing to wrap around any function call that "can't ever fail", just so that when it does fail, your program will terminate cleanly and tell you the location of the error.

      (Insert standard Douglas Adams quote about things that can't ever go wrong)
      • Ahhh... assert: the basis of all good pre- and post- conditions, and any sanity checks you need in between ;)
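A small sketch of assert() doing pre- and postcondition duty, as the posts above suggest. The function name is invented; note that asserts compile away when NDEBUG is defined, so genuine runtime failures (like malloc returning NULL) still need a real check.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Duplicate a string. The assert documents the caller's side of the
 * contract (s must not be NULL) and aborts with file/line in debug
 * builds; malloc failure is a genuine runtime error, so it gets a
 * real check that survives release builds. */
char *dup_string(const char *s)
{
    assert(s != NULL);              /* precondition */
    char *copy = malloc(strlen(s) + 1);
    if (copy == NULL)
        return NULL;
    strcpy(copy, s);
    assert(strcmp(copy, s) == 0);   /* postcondition sanity check */
    return copy;
}
```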
  • I don't think that a straight comparison of open source to commercial software, in the context of error handling, has any merit.

    I'll try to illustrate with an example. I'm running IE 5.00.2920.00 on Windows 2000. I get a huge number of "Cannot find server or DNS error" pages from IE. You know, those are the stock HTML files that IE displays that say "The page cannot be displayed", and it has a whole boatload of gibberish on it about clicking the Refresh button, contacting your network administrator, checking URL spelling, etc etc etc.

    Unless the host machine is truly unreachable, I can click "Refresh" and get the appropriate page almost instantly about 80% of the time. Does that make you smell a fish? It makes me smell a fish.

    The fish that I smell is commercial software handling errors in such a way as to blame anything other than itself when it encounters an error. I'm sure this works on most Windows users, because they've never used anything else, and their desktops crash all the time. Why shouldn't web sites just arbitrarily refuse to give up a page now and then? But if I'm debugging a web server that I'm telnetted to from my SPARCStation, and IE on Win2K claims that the web server can't be found 12% of the time, yet finds it instantly on refresh, I begin to see a pattern.

    If you write commercial software, the pattern is to include fairly complete error handling, but make the error handling blame something else. IE didn't choke, DNS or the remote server did, or you typed the URL wrong. Anything but admit that IE had the problem.

    Open source programmers don't experience pressure from marketeers and PR people and "product managers" to appear blameless. Open source programs tell it like it is, up to the limits of the programmer's articulation. That's why it's useless trying to compare the two: commercial software handles errors in order to shift the blame. Open source software handles errors in order to provide debugging information.

    • There is a well-documented case of how Microsoft's BSOD changed from pointing out that "Windows Has Become Unstable" to "msdcex.dll has caused a fault" between Windows 95 and Windows 98. The change was made so as to make it appear that Windows was not to blame.
  • This is about detected bugs which haven't been fixed yet.

    Basically, his complaints boil down to, "bugs exist, causing error messages, why aren't all the ones that cause error messages fixed yet?"

    Then he goes off on a confused tangent, apparently suggesting that "error handling" be added to work around any bugs. After all, if it can log the errors caused by bugs, it can respond to them in any way, up to and including fixing the problem (i.e. doing what the code should have done, except for the bug)! For example, if a system file is missing (meaning either a bug in the install, a bug in the program requesting something that isn't really a required system file, or an externally damaged system that can't be expected to work at all), just pop up a dialog to let the user search for it! Because of course the user should attempt to patch things up with his intimate knowledge of system internals instead of just seeing that there's a bug to report.

    Hooooo boy....

    I didn't see a single example of a genuine external error that wasn't handled properly, just bugs which should be fixed.
  • He chose some pretty bad examples of bad error handling - they all gave the module and direct cause of the error, and provided ample clues for the programmer in each case to go find out what went wrong. If we're looking for anything that open source programmers do that stinks, it's making GUI apps that pretend nothing's wrong.

    Way to go, I say. I'd rather have hugely detailed warnings any day.

    Dave
    • He chose some pretty bad examples of bad error handling

      In addition, most of the examples he gave were not programs crashing. I think the problem is that open-source software generally is more verbose in error checking. Proprietary software generally gives you NO ERROR MESSAGES; it just crashes indiscriminately. At least OSS programs give me an explanation of what might have happened, and often directions on how to fix the problem. Windows, for example, has given me error messages like: "An unknown error has occurred in <unknown application>. The program will be terminated." I never get error messages like this in Linux. The error messages being verbose doesn't mean it is bad software; it means it is good software.

  • by Khalid ( 31037 ) on Thursday October 25, 2001 @06:55PM (#2480825) Homepage
    Error messages need to have numbers associated with them. For instance, when I get ORA-1241 in Oracle, a quick search on groups.google.com will give me a lot of information about this error, why it occurred, and what I can do about it. Alas, there is no such thing in most open source software; you just have plain text, so the search is less effective: which search keywords are you going to choose? The situation is even worse for people who use localised versions of the software, as you don't have the English translation, so you can't search the English archive on groups.google.com, which accounts for 80% of the posts.

    What might be cool is a codified error numbering scheme a la Oracle. I would love to have error KDE-2345, or GNOME-1234, or Koffice-567, etc. That would make searches far more effective.
    • Error messages need to have numbers associated with them. For instance when I have ORA-1241 in oracle, a quick search in groups.google.com will give me a lot of informations about this error, and why it occured and what I can do about.

      C's errno uses another approach: a short symbolic name for each error ("no such file or directory" is ENOENT, etc.) that stays constant across localizations.

      The situation is even worse for people who used localised versions of the software, as you don't have the English translation

      Whether you get "Non ci è tale archivio o indice (ENOENT)" or "Es gibt keine solche Datei oder Verzeichnis (ENOENT)", you can still search on the ENOENT. (Translations by Babel Fish.)

      Now if only the popular apps did this...
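A sketch of the convention described above: errno's symbolic values stay stable even when strerror()'s message text is localized, so logging both gives users something searchable. The helper function name here is invented.

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: try to open a file read-only and, on failure,
 * report both the (possibly localized) text and the stable errno value. */
int report_open_failure(const char *path)
{
    FILE *fp = fopen(path, "r");
    if (fp == NULL) {
        int saved = errno;  /* save it before any other call can clobber it */
        fprintf(stderr, "%s: %s (errno %d)\n", path, strerror(saved), saved);
        return saved;       /* e.g. ENOENT for a missing file */
    }
    fclose(fp);
    return 0;
}
```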

  • Hrmm (Score:2, Insightful)

    by NitsujTPU ( 19263 )
    First let me say that I am a Linux user and an open source advocate.

    Now let me compare this to a judge I once met, who said that men have more tickets in general, but women always follow too close.

    This is interesting, but if we evaluate further, one could conclude that women are just as bad (equally so), but perhaps people were lighter on them along the way. A police officer might have let her off, and so forth (this isn't meant to sound misogynist, of course, but I know women who get let off all of the time).

    Instead, following too close is an easy prelude to... an accident. After all, when your bumpers are crushed together, you're too close.

    Now think of error handling. "Open Source software handles errors poorly" is another way of saying that it too crashes a lot. Perhaps other people get caught for other things, but we only rag on open source when it crashes.

    This isn't to say ALL open source software though.... but let's be perfectly honest. Programming is a difficult profession that a lot of people think they can just pick up. How many people would volunteer to do surgery without med school because they read a book on the subject? How many people get offended when you flash some important programming credentials in front of them that they don't have?

    The trick is sifting the wheat from the chaff. Sure, a 14-year-old with a little ambition can whip up a pretty impressive-looking windowed program in X... but he doesn't have the sophistication of a well-educated programmer... generally. There are plenty of good programmers and bad programmers in open source. The key is to know what's good and what's bad. If you can't figure that out, then buy a distro made by people who do.
  • Heh, the link to the site gives me a Proxy Error. ;-)
  • I've seen (and done) a lot of commercial and custom programming for business. I've dug through a lot of open source software. Open source software wins hands down every time. Open source programmers like to code. You will find that a lot of people who are programming professionally do so because the salaries are good. They're mediocre programmers at best and their code reflects that.

    Open source programmers may suck at handling errors, but commercial programmers suck much more.

  • Zen GET (Score:2, Funny)

    by isomeme ( 177414 )
    What is the sound of one LinuxWorld story illustrating its own point, grasshopper?
    Proxy Error
    The proxy server received an invalid response from an upstream server.
    The proxy server could not handle the request
    GET /site-stories/2001/1025.errorhandling.html.
    Reason: Could not connect to remote machine: Connection refused
    Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

    ...and in that moment, he became enlightened.

  • The "proper" sort of error handling varies wildly depending on the application type. For example:
    • A game should pretty much always abort if anything goes wrong. Missing or corrupted data files, etc. Diagnostic information will only be useful to programmers, so no need to make it terribly user-friendly - just tell them to reinstall the game.
    • A server application should give detailed messages which go to a logfile, pinning down exactly what it was trying to do, and what line it was on if parsing a file.
    • A productivity application can go easy on the diagnostic information, like a game, but being able to recover gracefully is far more important. At the very least, allowing the user to save their data before they restart the program.
    • A viewer-type program should be able to recover gracefully and just fill the area it can't parse or that is causing a problem with a blank space or something. Like a web browser that can't load a picture. If it's something like a file viewer that can't load a certain font, it should be able to fall back to a default font so that the document is still readable.
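    The viewer case above can be sketched in a few lines - this is an illustrative Python snippet (the function and font names are made up, not from any real viewer): try the requested resource, fall back to a default, and warn on stderr for the programmer's benefit rather than aborting on the user.

    ```python
    import sys

    DEFAULT_FONT = "default"

    def pick_font(requested, installed):
        """Viewer-style recovery: fall back rather than abort.

        The warning goes to stderr (or a logfile) for the programmer;
        the document stays readable for the user either way.
        """
        if requested in installed:
            return requested
        print("warning: font %r not found, using %r" % (requested, DEFAULT_FONT),
              file=sys.stderr)
        return DEFAULT_FONT

    print(pick_font("Garamond", {"Helvetica", DEFAULT_FONT}))  # prints "default"
    ```

    The same shape works for the productivity-app case too: the fallback branch is where you'd trigger an emergency save instead of a font substitution.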


  • Open Source sucks at error handling? Look at the standards in the PC industry.

    They've been declining in general for the past 10 years, and before that they sucked as well. I think the standard is really set by the hardware itself.

    Typically drive errors can have symptoms of software running more slowly as the drive retries - or applications will simply appear to hang, or if it's an error reading code into memory, well, anything goes.
    Network errors can go completely unknown until you haul out the crusty old hacker with a sniffer - oh gee, did you know that your card is dumping half its packets?
    Oh - especially network problems - where the software at the user level 90% of the time just sits there and goes "Duh!" for simple things like pulling the cable out.

    Error checking and handling, in general, SUCKS and it's the main reason why computers suck - why the software industry spends billions of dollars chasing problems during the development phase that they never really get to pin down, so the problem ends up going into shipping products.

    I blame the lax standards on the platform, and the dumbing down of programming in general (the over-reliance on high-level languages that remove the programmer progressively further and further from the hardware their programs run on).

    If PCs had better standards for this sort of thing at the hardware level - and if the vendors adhered to those standards - then the software people could write software that handles errors better. That would bubble up to the user level as more reliability, much simpler troubleshooting, and probably tens of billions of dollars saved in productivity alone. The PC industry would probably be 10 times the size it is today, because people would actually trust computers for important tasks, rather than the next nifty home killer-app like pirating music. (Not meant to be a troll against MP3 trading - meant to be a troll against the apparent purpose and direction of the PC industry in general.)
  • Open source programmers are basically the same people as commercial programmers, maybe by night or maybe as different jobs come and go. The difference is that most open source projects arise from a person's need, and it is natural to ease up on the effort once that need is filled, i.e. once the program is good enough for your personal use.
  • Commercial apps are better?

    Well, we all know how bug-free Internet Expl...<This program has caused an illegal operation in module kernel.dll and will now be terminated>

  • That's right, open source software sucks at nearly everything it does![1]

    Open Source as it stands today is great at bashing together a really "neat" program which gets the job done in a specific manner. Soon enough, lots of cool little features are added in, and before long you have a 'perpetual-beta application.'

    Programming, however, requires some discipline which doesn't often get put towards OSS. Programs require good error handling (and error trapping, for that matter), usability (That means intuitive interfaces), and documentation. Oh yes, and freedom from bugs. However, these things are BORING to produce, compared to the original plan of bashing out a neat routine.

    Ironically, the only way to achieve such things in a distributed and open development model, is to have a central administrative point. Without it, large projects are just impossible. Funny, eh?

    [1] Of course, so does commercial software, but in different ways.
  • by dha ( 23395 ) on Thursday October 25, 2001 @08:02PM (#2481124)
    Inexperienced and lazy programmers are usually poor at error-handling, and it's easy to lay the blame there -- but at a deep level that misses the point.

    This is not about open-source vs closed-source programs, nor for-fun vs for-money programmers. It's about computational models such as von Neumann machines that, at their deepest roots, assume there will be no errors. That chain-of-falling-dominos style of thinking so permeates conventional programming on conventional machines that it's almost surprising that any code has any error handling at all.

    Of course it's possible to hand-pack error-handling code all around the main functional code in an application... and of course quality designers and programmers in and out of open source will do just that... but viewed honestly we must admit it's a huge drag having to do so, and typically fragile to boot, because the typical underlying computational and programming models provide no help with it. Error-handling code tends to be added on to applications later, just as try/catch was added on to C++ later.

    Lest we think this sad state must be inevitable, let's recall that other computational models, like many neural network architectures for example, are inherently robust to low level noise and error. Then, that underlying assumption colors and shapes all the `programming' that gets built on top of it. We're to the point where trained neural networks, for all the limitations they currently have, can frequently do the right thing in the face of entirely novel and unanticipated combinations of inputs. Now that's error handling.

    The saddest part is that von Neumann knew his namesake architecture was bogus in just this way, and expressed hope that future architectures would move toward more robust approaches. Fifty years later and pretty much the future's still waiting..

  • I'll be blunt. Open source programmers need to stop being so darned lazy about error handling. That obviously doesn't include all open source programmers. You know who you are.

    If you want a demonstration of what I mean, start your favorite GUI-based open source applications from the command line of an X terminal instead of a menu or icon. In most cases this will cause the errors and warnings that the application generates to appear in the terminal window where you started it. (There are exceptions, depending on the application or the script that launches the application.)

    Many of the applications I use on a daily basis generate anywhere from a few warnings or error messages to a few hundred. And I'm not just talking about the debug messages that programmers use to track what a program is doing. I mean warning messages about missing files, missing objects, null pointers, and worse.

    I'll be blunt, too. I got your fix RIGHT HERE! I have whipped up some open source magic that uses a powerful error-finding heuristic in combination with a correction algorithm. It should fix all of these problems you have described.

    ----CUT HERE----

    #!/bin/bash

    if [ "$#" -lt "1" ]; then
        echo "Usage: $0 <program> {<args>}"
        exit 1
    fi

    "$@" 2>/dev/null
    echo "All errors corrected!"

    ----CUT HERE----

    You are not expected to understand how this works. Send me beer, we open source guys like that.

  • by Anonymous Coward
    A very smart guy from SGI once told me "A core
    dump is the best possible error message because
    it contains ALL the information you need to
    diagnose why the program had to stop running."

    Mmmm'K

    :-)
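  • Worth noting, though: on most Unix-like systems core dumps are disabled by default (ulimit -c 0), so the SGI guy's "best possible error message" never actually gets written. A rough sketch of how a process could opt itself in, using Python's standard resource module (Unix only):

    ```python
    import resource

    # Core files are commonly capped at size 0 by default; raise the
    # soft limit to whatever the hard limit permits, so that a crash
    # actually leaves a core dump behind for post-mortem debugging.
    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

    new_soft, new_hard = resource.getrlimit(resource.RLIMIT_CORE)
    print(new_soft == new_hard)
    ```

    A C program would do the same thing with getrlimit()/setrlimit() and RLIMIT_CORE; the point is that "let it dump core" is itself an error-handling decision somebody has to make.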
  • by Codifex Maximus ( 639 ) on Thursday October 25, 2001 @10:26PM (#2481663) Homepage
    The user should not see errors unless they want to. I agree with sending errors to STDOUT. If the user wants to see the errors then they can either start the app in an XTERM so they can see the errors or switch virt terms over to VT1 and see the STDOUT output.

    Also, I use error reporting to a logfile rather than alarming the user. Most applications should be able to survive the average error. Those applications should prompt the user for proper input - even to the point of placing the cursor in the proper field. Each field should be intelligent and be able to validate its own input data.

    Those error logs I spoke of should be used by the programmer to debug his/her application - don't alarm the user, OK?
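    A minimal sketch of that approach, assuming Python's standard logging module and a made-up field validator (the function name and logfile path are illustrative, not from any particular application):

    ```python
    import logging

    # Failures go to a logfile for the programmer, not a dialog for the user.
    logging.basicConfig(filename="app.log", level=logging.DEBUG,
                        format="%(asctime)s %(levelname)s %(message)s")

    def validate_age(text):
        """Field-level validation: log the bad input and return None so the
        caller can put the cursor back in the field and re-prompt."""
        try:
            age = int(text)
            if age < 0:
                raise ValueError("age must be non-negative")
            return age
        except ValueError as exc:
            logging.error("rejected input %r: %s", text, exc)
            return None

    print(validate_age("42"))   # a valid field passes through
    print(validate_age("old"))  # an invalid field is logged, not fatal
    ```

    The application survives the error, the user gets a focused re-prompt, and the logfile keeps the gory details for debugging.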

  • by dinotrac ( 18304 ) on Friday October 26, 2001 @01:50AM (#2482254) Journal
    I'm astonished at the poor error-handling in most software these days.

    The biggest problem is not whether your language has exceptions (good error-handling has been done for years without them) or whether programmers are lazy. It's a matter of making it a priority. In fact, laziness caused a lot of us old-timers to take a major interest in error-handling.

    Picture the days before internet access, running mainframe systems, probably with overnight batch cycles.

    Good error handling might mean that you don't get a phone call at 3:00 am.
    If that phone call comes, good error messages might mean that you can diagnose the problem over the phone and walk the operator through recovery.
    In either case, you don't have to drive down to the data center.

    Sleep. Now there's a motivator.
