The Scourge of Error Handling (536 comments)

CowboyRobot writes "Dr. Dobb's has an editorial on the problem of using return values and exceptions to handle errors. Quoting: 'But return values, even in the refined form found in Go, have a drawback that we've become so used to we tend to see past it: Code is cluttered with error-checking routines. Exceptions here provide greater readability: Within a single try block, I can see the various steps clearly, and skip over the various exception remedies in the catch statements. The error-handling clutter is in part moved to the end of the code thread. But even in exception-based languages there is still a lot of code that tests returned values to determine whether to carry on or go down some error-handling path. In this regard, I have long felt that language designers have been remarkably unimaginative. How can it be that after 60+ years of language development, errors are handled by only two comparatively verbose and crude options, return values or exceptions? I've long felt we needed a third option.'"
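The clutter the editorial describes is easy to see side by side. Here is a minimal Java sketch of the two styles; the step names and the `-1` error code are invented purely for illustration:

```java
// Hypothetical sketch contrasting the two error-handling styles
// the editorial discusses. The "parse" steps are made up.
public class ErrorStyles {

    // Return-value style: every call is followed by an explicit check,
    // and the error code (-1) clutters the happy path.
    static int parseReturnStyle(String s) {
        if (s == null) return -1;      // check 1
        if (s.isEmpty()) return -1;    // check 2
        return s.length();             // the actual work
    }

    // Exception style: the happy path reads straight through, and the
    // remedy is moved to the catch block at the end of the code thread.
    static int parseExceptionStyle(String s) {
        try {
            return s.trim().length();  // throws NullPointerException on null
        } catch (NullPointerException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseReturnStyle(null));      // -1
        System.out.println(parseExceptionStyle("abc"));  // 3
    }
}
```

Both versions still force the *caller* to test the result, which is the residual clutter the editorial complains about.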
This discussion has been archived. No new comments can be posted.

  • Re:The third option (Score:5, Interesting)

    by Yetihehe ( 971185 ) on Saturday December 08, 2012 @12:56PM (#42225723)

    That's the philosophy of Erlang: "let it crash". Apparently this leads to some of the most reliable systems.
    OP apparently hasn't heard about it, because this is the third way.

  • by joe_frisch ( 1366229 ) on Saturday December 08, 2012 @01:08PM (#42225829)

    Software that interacts with the real world needs some way to handle errors. We have a distributed control system (MATLAB / EPICS based) that runs our accelerator (SLAC). The code needs to deal with a hardware device that is broken and has returned a nonsensical value, or does not return anything. This needs to be dealt with in some way, whether by throwing an exception or by checking the return value from the routine that made the call. The error handling can be fairly complex: some devices are vital to operation and an error requires that the machine be stopped; others are at least partially redundant and you can continue to operate, though possibly with reduced capacity.

    BTW: personnel safety and hardware protection are handled separately.

  • Yes Monads! (Score:5, Interesting)

    by Weezul ( 52464 ) on Saturday December 08, 2012 @01:09PM (#42225837)

    Monads are fun for error handling. :)

    I dunno if they're exactly what the author would consider a third option, though. They can certainly present other options, like the Either monad, but that's not really any simpler.
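    For the curious, here is a rough sketch of the Either idea in Java. This Either class is illustrative only, not a standard library type; `safeDiv` is a made-up example:

```java
import java.util.function.Function;

// Minimal illustrative Either: Left carries an error, Right a value.
// A sketch of the monadic idea, not a standard Java type.
public class EitherDemo {
    static final class Either<L, R> {
        final L left; final R right; final boolean isRight;
        private Either(L l, R r, boolean ok) { left = l; right = r; isRight = ok; }
        static <L, R> Either<L, R> left(L l)  { return new Either<>(l, null, false); }
        static <L, R> Either<L, R> right(R r) { return new Either<>(null, r, true); }

        // flatMap chains computations; an error short-circuits the rest,
        // so the happy path carries no explicit checks.
        <R2> Either<L, R2> flatMap(Function<R, Either<L, R2>> f) {
            return isRight ? f.apply(right) : Either.left(left);
        }
    }

    static Either<String, Integer> safeDiv(int a, int b) {
        return b == 0 ? Either.left("division by zero") : Either.right(a / b);
    }

    public static void main(String[] args) {
        Either<String, Integer> ok  = safeDiv(10, 2).flatMap(x -> safeDiv(x, 1));
        Either<String, Integer> bad = safeDiv(10, 0).flatMap(x -> safeDiv(x, 1));
        System.out.println(ok.right);  // 5
        System.out.println(bad.left);  // division by zero
    }
}
```

    As the parent says, this isn't obviously simpler: the error checks don't disappear, they just move into flatMap.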

  • Re:The third option (Score:5, Interesting)

    by Nerdfest ( 867930 ) on Saturday December 08, 2012 @01:13PM (#42225869)

    I've gotten to prefer using runtime exceptions with a general policy of "Throw as early as possible, catch as late as possible". Only catch if you can do something about it. It works very well, and keeps the code very clean.

  • Re:The third option (Score:5, Interesting)

    by Capitaine ( 2026730 ) on Saturday December 08, 2012 @01:40PM (#42226093)
    My work involves writing aeronautical software specifications. In order to facilitate FAA/EASA certification of the system, we are required to stick to the KISS principle. Data validity checks are done almost everywhere, but we are asked to design the logic so that it does not need to use these data validity statuses. Degraded mode is handled through the use of failsafe values. The consumer of a piece of data does not need to know the status of the producer. The whole system is designed so that it works and is robust with minimal use of alternate logic.

    This principle works well with dataflow-oriented programs and might be adaptable to other domains.
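    The failsafe-value idea might look roughly like this in Java. The sensor, range limits and failsafe constant here are hypothetical, not from any actual avionics system:

```java
// Illustrative sketch of the failsafe-value idea: invalid readings are
// replaced by a safe default at the producer boundary, so consumers
// never branch on a validity status. All names and limits are made up.
public class Failsafe {
    static final double FAILSAFE_AIRSPEED = 0.0; // hypothetical safe default

    // Producer side: NaN or out-of-range readings become the failsafe value.
    static double airspeed(double rawSensorValue) {
        boolean valid = !Double.isNaN(rawSensorValue)
                && rawSensorValue >= 0.0 && rawSensorValue <= 400.0;
        return valid ? rawSensorValue : FAILSAFE_AIRSPEED;
    }

    // Consumer side: plain logic, no data-validity checks anywhere.
    static boolean stallWarning(double rawSensorValue) {
        double v = airspeed(rawSensorValue);
        return v > 0.0 && v < 60.0;
    }

    public static void main(String[] args) {
        System.out.println(stallWarning(45.0));        // true
        System.out.println(stallWarning(Double.NaN));  // false: failsafe kicks in
    }
}
```

    Note how the degraded mode is invisible to the consumer: the failsafe value is chosen so the downstream logic stays safe without any alternate path.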
  • Re:The third option (Score:5, Interesting)

    by SplashMyBandit ( 1543257 ) on Saturday December 08, 2012 @02:13PM (#42226349)

    In Java you probably wouldn't do as you say. You would 'chain' the exception, so that the original exception information is preserved even though you are transforming the exception type (e.g. from a checked exception thrown by a library to an unchecked exception you don't have to declare throws clauses for). The code becomes:

    try {
        // Do something here that may throw an exception (which is 'checked'),
        // e.g. throw new Exception();
    } catch (Throwable th) {
        throw new RuntimeException("A problem occurred when launching the SS-18 because the launch authorization code was invalid. The launch authorization code had a value of " + authCode, th);
    } finally {
        // Do any clean-up.
    }
    There are two important things to note in this contrived example:

    • The use of the chained exception. When the exception type is transformed by the creation of the new exception, we include the old exception in the constructor. That way the 'chain' of exceptions can be viewed and the original cause of the exception found. That will help you fix the issue.
    • A message that tries to describe the exact decision used to throw the exception and the values of any contributing variables or boundary values. It is critical that this information is recorded at the point of throwing, because in a massively multithreaded system with millions of transactions you can't reproduce the same conditions exactly in your debugger. The only information you have is what you put in your log, and you must include all relevant information in it. Otherwise you will not have enough data to diagnose the decision the program made to throw, and won't have enough information to fix the problem.

    Checked exceptions are valuable in Java. Those who are against them don't understand that they are very useful for certain classes of problems: systems that have to be reliable. The mistake the Java designers made was making the standard library throw checked exceptions rather than unchecked ones. If they had used unchecked exceptions everywhere (while still supporting checked exceptions for systems that need to force reliable operation under error conditions), then many of the gripes people have when encountering Java would be eliminated. Plus, programmer productivity would increase, because we wouldn't have to wrap and chain the checked exceptions produced by library calls all over the place. C# kinda gets it right in that its libraries don't have/use checked exceptions, but it lacks the option of using checked exceptions in critical systems. So neither Java nor C# has it perfect, IMHO.

    If you are a Java developer writing libraries intended for re-use by others, then you should ensure your library never throws a checked exception to the caller. Only libraries for critical systems should expose checked exceptions. Unless you are working on nuclear plant control, avionics, medical devices, weapons systems or interplanetary probes, your system probably doesn't need them.

    The way you structure, handle and report exceptions is mundane, but it is absolutely crucial for writing reliable and easily maintained software. Most programmers are sloppy about this, or consider it as unimportant as good documentation, but that is what makes them bad programmers (if you ever have to use or maintain their software).

    I hope this helps some developers out there understand how to use chained exceptions. The chaining *preserves information* about the cause of a failure. Adding sensible messages and program state is also about *preserving information* about the failure at the point of throw. Loss of information is what you are battling here: once you lose or throw away information, it takes a huge effort to reconstruct it later, so keep that in mind as you develop. The built-in NullPointerException is the worst example, providing zero additional information about what was null, which is a problem if you have multiple chained method calls on a single line. Don't write code like the Java code that raises NullPointerException.

  • by mdmkolbe ( 944892 ) on Saturday December 08, 2012 @02:49PM (#42226619)

    Haskell also comes to mind. Errors are so well handled in that language, you probably won't notice that they are so well handled. Because of the way things are structured, errors are rare (so no need to check them). When they are present, there are a number of techniques from "Maybe" types to "Error" monads to throwing "IO" monad errors. The "Maybe" type is particularly interesting as it ensures that the user will check the error, and provides convenient notations and combinators for doing that checking.
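    For Java programmers, java.util.Optional is a rough analogue of Haskell's Maybe. A small sketch (the parse helper is made up for illustration):

```java
import java.util.Optional;

// A rough Java analogue of Haskell's Maybe, using java.util.Optional.
// Optional.map / flatMap play the role of Maybe's combinators: the
// empty ("Nothing") case propagates without any explicit checks.
public class MaybeDemo {
    static Optional<Integer> parse(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        // Each step runs only if the previous one succeeded.
        int a = parse("41").map(n -> n + 1).orElse(-1);
        int b = parse("oops").map(n -> n + 1).orElse(-1);
        System.out.println(a);  // 42
        System.out.println(b);  // -1
    }
}
```

    As with Maybe, the type forces the caller to deal with the missing-value case (via map/orElse) rather than forgetting to check a return code.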

  • Re:Third option (Score:5, Interesting)

    by epine ( 68316 ) on Saturday December 08, 2012 @04:10PM (#42227349)

    Maybe - and admittedly this is just a guess from my fairly ignorant viewpoint - it's a very hard problem. How can it be that after 100+ years of industrial development, we're still heavily reliant on internal combustion engines to get us around?

    This is a good illustration of the wisdom in Thinking Fast and Slow: the people most likely to highlight what they don't know proceed to the most sensible conclusions.

    I haven't sat down in front of a keyboard to write code since 1985 without this issue foremost on my mind. In the majority of serious programs what the program does is a tiny minority of what you need to think about. Dijkstra wrote some chapters where he illustrates that some programs will actually write themselves if you adhere rigorously to what the program is allowed/not allowed to do at each step, and bear in mind that you need to progress on the variant.

    I do everything humanly possible when I write code to work within the model of tasks accomplished / not accomplished, rather than the domain of everything that could possibly go wrong (error codes). It's a lot harder when error codes unreliably signal transient / persistent error distinctions. I view programs as precondition graphs, where the edges are chunks of code that try to do something, typically a call into an API of some kind. Who cares if it reports an error? What you care about is: did it establish the precondition necessary to visit the edges departing the destination vertex in the dependency graph?

    Dijkstra also cautions about getting hot and bothered about order relations in terms of which edges execute in which order. In general, any edge departing a vertex with all input conditions satisfied is suitable to execute next. Because he was so interested in concurrency, he assumed that when multiple valid code blocks were viable to run that scheduling was non-deterministic. It can make a difference to program performance which blocks are executed in which order. We enter the domain of premature optimization much sooner than we suspect simply by writing our code with overdetermined linearity (few languages even have a non-deterministic "any of" control operator).

    I had a micro-controller application in which there were many variable references of the form a->b->c->d. This is hell to be strict about precondition testing.

    bool ok;
    ok = a != NULL && a->b != NULL && a->b->c != NULL;
    if (ok) {
        ok = fragile_api_call (a->b->c->d);
        if (!ok) sideband_error_info = ERR_YUCK;
    }
    if (ok) { // lather, rinse, repeat

    // ...

    }
    // end of graph, did we succeed or fail?
    if (!ok) {
        barf_out (sideband_error_info);
    }
    return ok;
    This is the fall-through, "do everything possible and not a thing more" programming idiom. It doesn't end up highly nested, because you flatten the dependency graph. Assignments to ok often occur at deeper nesting points than where they are later tested.

    This kind of code often looks horrible, but it's incredibly easy to analyze and debug. The preconditions and post-conditions at every step are local to the action. Invariants are restored at the first possible statement. For the following piece of code, the desired invariant is that if thingX is non-NULL, it points to a valid allocation block. In the entire scope of the program, this invariant is untrue only for the duration of the statement whose job is to restore the invariant.

    bool ok = true;

    void* thing1 = NULL;
    thing1 = malloc (SIZE_MATTERS);
    ok = ok && thing1 != NULL;

    void* thing2 = NULL;
    thing2 = malloc (SIZE_MATTERS + 6);
    ok = ok && thing2 != NULL;

    if (ok) {
        // more logic
    }

    if (thing1 != NULL) {
        free (thing1);
        thing1 = NULL; // restore the invariant immediately
    }
  • by VortexCortex ( 1117377 ) <VortexCortex.project-retrograde@com> on Saturday December 08, 2012 @05:24PM (#42227955)

    "A well written program doesn't NEED an error handler." Okay, tough guy... "The specified network name is no longer available". Explain how you avoid needing to handle that.

    Well, it's simple: Sane defaults. Try again with "localhost". What? "localhost" doesn't have anything listening on port 80? system( "apt-get install LAMP" ); It does now. Oh, Apache failed to install? Spawn a thread that opens a socket and listens on port 80. Can't spawn a thread? Cooperative Multitasking mode enabled (hint: function pointers for main loops). Can't listen on port 80? Virtualize a socket in memory, etc.

    "Error Handling"? Pffffbt, how about a Solution Handler? Hint: don't focus on the Problem, focus on the Solution. You'd know this already if you posted in HTML mode... It doesn't throw errors when displaying my malformed post. Quote it and see.
