
The Scourge of Error Handling

Posted by Soulskill
from the a-user-did-what? dept.
CowboyRobot writes "Dr. Dobb's has an editorial on the problem of using return values and exceptions to handle errors. Quoting: 'But return values, even in the refined form found in Go, have a drawback that we've become so used to we tend to see past it: Code is cluttered with error-checking routines. Exceptions here provide greater readability: Within a single try block, I can see the various steps clearly, and skip over the various exception remedies in the catch statements. The error-handling clutter is in part moved to the end of the code thread. But even in exception-based languages there is still a lot of code that tests returned values to determine whether to carry on or go down some error-handling path. In this regard, I have long felt that language designers have been remarkably unimaginative. How can it be that after 60+ years of language development, errors are handled by only two comparatively verbose and crude options, return values or exceptions? I've long felt we needed a third option.'"


  • by Anonymous Coward on Saturday December 08, 2012 @12:34PM (#42225567)

    Ignoring the error completely, data integrity or planned functioning be damned.

    • by MacDork (560499)

      That's still catch blocks:

try { throw new VeryImportantException(); } catch (VeryImportantException e) { throw new UnimportantException(); }

You'd never do this on purpose, but it's rather easy to accomplish in practice.

      • by MacDork (560499)
Also: try { throw new ImportantException(); } finally { throw new UnimportantException(); } The unimportant exception wins again.
        • Re:The third option (Score:5, Interesting)

          by SplashMyBandit (1543257) on Saturday December 08, 2012 @02:13PM (#42226349)

In Java you probably wouldn't do as you say. You would 'chain' the exception so that the original exception information is preserved even though you are transforming the exception type (e.g. from a checked exception thrown by a library to an unchecked exception you don't have to declare throws clauses for). The code becomes:

try {
    // Do something here that may throw a (checked) exception,
    // e.g. throw new Exception();
} catch (Throwable th) {
    throw new RuntimeException("A problem occurred when launching the SS-18 because the launch authorization code was invalid. The launch authorization code had a value of " + authCode, th);
} finally {
    // Do any clean-up.
}

There are two important things to note in this contrived example:

• The use of the chained exception. When the exception type is transformed by the creation of the new exception, we include the old exception in the constructor. That way the 'chain' of exceptions can be viewed and the original cause of the exception found. That will help you fix the issue.
• A message that tries to describe the exact decision used to throw the exception and the values of any contributing variables or boundary values. It is critical that this information is recorded at the point of throwing, because in a massively multithreaded system with millions of transactions you can't reproduce the same conditions exactly in your debugger. The only information you have is what you put in your log, and you must include all relevant information there. Otherwise you will not have enough data to diagnose the decision the program made to throw, and won't have enough information to fix the problem.

Checked exceptions are valuable in Java. Those who are against them don't understand that they are very useful for certain classes of problems: systems that have to be reliable. The mistake the Java designers made was that they made the libraries throw checked exceptions rather than unchecked ones. If they had used unchecked exceptions everywhere (while still supporting checked exceptions for systems that need to force reliable operation under error conditions), then many of the gripes people have when encountering Java would be eliminated. Plus, programmer productivity would increase because we wouldn't have to wrap and chain the checked exceptions produced by library calls all over the place. C# kinda gets it right in that its libraries don't have/use checked exceptions, but it lacks the option of using checked exceptions in critical systems. So neither Java nor C# has it perfect, IMHO.

If you are a Java developer writing libraries intended for re-use by others, then you should ensure your library never throws a checked exception to the caller. Only libraries for critical systems should expose checked exceptions. Unless you are working on nuclear plant control, avionics, medical devices, weapons systems or interplanetary probes, your system probably doesn't need to expose checked exceptions.

The way you structure, handle and report exceptions is mundane, but it is absolutely crucial for writing reliable and easily maintained software. Most programmers are sloppy about this, or consider it as unimportant as good documentation, but that is what makes them bad programmers (as you'll discover if you ever have to use or maintain their software).

I hope this helps some developers out there understand how to use chained exceptions. Chaining *preserves information* about the cause of a failure. Adding sensible messages and program state is likewise about *preserving information* about the failure at the point of throw. Loss of information is what you are battling here: once you lose or throw away information, it is a huge effort to reconstruct it later, so keep that in mind as you develop. The built-in NullPointerException is the worst offender, providing zero additional information about what was null, a real problem if you have multiple chained method calls on a single line. Don't write code like the Java code that raises NullPointerException.
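To make the chained-call problem concrete, here is a minimal sketch (the Car/Engine classes are hypothetical, invented purely for this example): a one-line `car.engine().serial()` raises an NPE that doesn't say which step was null, whereas explicit checks with messages preserve that information at the point of failure.

```java
import java.util.Objects;

public class NpeDiagnosis {
    // Hypothetical nested data classes, for illustration only.
    static class Engine { String serial() { return "SN-1"; } }
    static class Car {
        final Engine engine;
        Car(Engine e) { engine = e; }
        Engine engine() { return engine; }
    }

    static String serialOf(Car car) {
        // car.engine().serial() on one line throws an NPE that doesn't say
        // whether 'car' or its engine was null. Checking each step with a
        // message records that information where the failure happens.
        Objects.requireNonNull(car, "car was null");
        Engine e = Objects.requireNonNull(car.engine(), "engine was null for car " + car);
        return e.serial();
    }

    public static void main(String[] args) {
        System.out.println(serialOf(new Car(new Engine()))); // SN-1
        try {
            serialOf(new Car(null));
        } catch (NullPointerException ex) {
            System.out.println(ex.getMessage()); // names the exact null step
        }
    }
}
```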

      • by pla (258480) on Saturday December 08, 2012 @01:02PM (#42225779) Journal
        You'd never do this on purpose, but its rather easy to accomplish in practice.

        You have too much faith in humanity, friend!

        I hate hate hate the exception-handling model of dealing with errors, because in practice, I've seen very, very little code that actually handles the error. People either:
        1) Use far too coarse grained a "try" (as in, on the entire function), giving almost no possibility of knowing what actually happened or how to recover,
        2) Use the "catch" just to tell the user "golly, it broke, try again later" rather than accidentally revealing the ugly (but meaningful) exception text,
        3) Assume nothing in the "try" could actually fail and only do it to satisfy their company's code auditors, so the catch does... nothing, or
        4) (My "favorite") - copy the entire body of the "try" into the "catch" and blindly do it again!

When used correctly, exception handling doesn't make your code cleaner; it reduces to a slightly more verbose way of checking return values. You should, if you want any hope of really dealing with the error, wrap every call in its own try/catch. I have not ever seen that done (and honestly, I can't claim I do it as religiously as I should either - I tend to trust my own code (big mistake), and only do that for external calls).

        Then again, how do you handle the system volume suddenly vanishing out from under you? So, perhaps the coarse-grained "golly, it broke, try again later" folks have the right idea. ;)
        • Re:The third option (Score:5, Informative)

          by Anonymous Coward on Saturday December 08, 2012 @01:13PM (#42225875)

          Sounds to me like you actually hate abusive exception handling. Exceptions that are in relevant places and handle the errors in meaningful ways are good. I've seen lots of code that actually handles the error, but then I work with competent people (and yes, that is a lovely thing).

          Exception handling can make code cleaner. I'd much rather see a nice exception than return value checking as I can instantly see what kind of error is expected and what should happen if it occurs.

          Please don't dismiss a step forward from return value checking just because you're unfortunate enough to have never worked with anyone who uses it properly.

Please don't dismiss a step forward from return value checking just because you're unfortunate enough to have never worked with anyone who uses it properly.

+1. An exception mechanism is absolutely necessary to being able to construct sane error reporting and recovery in a nontrivial code base without unduly obfuscating the code. Whether this powerful tool is used or abused depends on the quality of the programmer, as nearly every aspect of software development does.

One issue with exception handling: you generally lose the ability to get a traceback at the point of throw. So it is often difficult to find out where an exception came from. This can fairly be...

An exception mechanism is absolutely necessary to being able to construct sane error reporting and recovery in a nontrivial code base, without unduly obfuscating the code

The one thing that exceptions offer that return values don't is that they allow the programmer to forget about the stack unwind and assume someone else will catch it higher up the chain.

Useful yes, "absolutely necessary" no. I can't think of a faster way to kill a non-trivial code base than to try to convert it from one error handling method to the other. The stack trace problem is one reason many commercial source trees avoid exceptions (particularly C/C++ code). There is nothing worse than finding a tree w...

        • by Simply Curious (1002051) on Saturday December 08, 2012 @01:16PM (#42225903)
I would say that it is much less verbose in the case where errors need to be propagated upward. This is exactly why not every function call has a try/catch around it. Suppose I am writing a function that accepts a filename, interprets the text in the file, and then returns some modified version of the text. With error codes, I would need to explicitly check that open_file has returned a valid file handle. I can't do anything without a valid file, so I then need to propagate the error upward. On the other hand, with exceptions, I could simply not catch that exception from open_file. I can't do anything to recover, so I should let the exception propagate upward to whoever called me, and then let them deal with it.
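A minimal sketch of that propagation idea (the `shout` function is hypothetical; `Files.readString` is the standard Java 11+ API): the function does no error checking at all and simply declares that any IOException passes through to whoever called it.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class Propagation {
    // No checks here: if the file can't be read, the IOException simply
    // propagates to the caller, since there is nothing useful we could
    // do about it at this level.
    static String shout(String filename) throws IOException {
        String text = Files.readString(Path.of(filename));
        return text.toUpperCase();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.writeString(tmp, "hello");
        System.out.println(shout(tmp.toString())); // HELLO
        // shout("no-such-file") would throw NoSuchFileException upward.
    }
}
```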
        • by mrvan (973822) on Saturday December 08, 2012 @01:30PM (#42226027)

I think you are focussing too much on Java-style compiler-forced error handling. To me, the essence of try/catch error handling is that you only catch errors if you can deal with them. If you can't (the majority of cases), let it escalate, all the way up to the user (or a log file) if needed. I think there are three sane ways of using a try/catch: (1) to actually deal with the error (this is by far the rarest), (2) mainly in loops of more-or-less independent actions: to log the error, reset state, and continue working, and (3) at the top level, to log the error and display something less meaningful but less scary to the end user.

I think it is a bad design decision to impose static checking on declared 'throws' statements, because that forces routines to catch stuff that they can't handle, or to declare a meaningless list of everything every called routine could ever throw. In essence, it re-couples the signalling and the handling that exceptions were supposed to decouple.

Another nicety of exceptions compared to return values is that the semantics of "something went wrong" are clear. This makes it possible, for example, to have a wrapper function that begins a transaction and commits or rolls it back depending on the outcome.
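Such a wrapper might look like this (a sketch with a hypothetical `Transaction` interface, not any real database API): because failure is signalled uniformly by an exception, one wrapper can decide commit-versus-rollback for any unit of work.

```java
public class TxWrapper {
    // Hypothetical minimal transaction interface, for illustration only.
    interface Transaction { void commit(); void rollback(); }

    interface Work { void run() throws Exception; }

    // Commit if the work completes, roll back if anything is thrown,
    // then rethrow so callers higher up can still react.
    static void inTransaction(Transaction tx, Work work) throws Exception {
        try {
            work.run();
            tx.commit();
        } catch (Exception e) {
            tx.rollback();
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        final boolean[] committed = {false}, rolledBack = {false};
        Transaction tx = new Transaction() {
            public void commit() { committed[0] = true; }
            public void rollback() { rolledBack[0] = true; }
        };
        inTransaction(tx, () -> { /* work succeeds */ });
        System.out.println("committed=" + committed[0] + " rolledBack=" + rolledBack[0]);
    }
}
```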

          • by Z00L00K (682162)

            And you can only catch errors if you know that they will appear.

One example: when developing in C# you never know which exceptions to expect unless you waste a lot of time reading the documentation for every class that you use, or resort to a general catch of Exception and try to guess what's best to do. In Java the number of "blind" exceptions thrown is limited to the runtime exceptions that you can get, which is bad enough, especially if some esoteric third-party developer throws their exceptions as...

            • Re: (Score:3, Insightful)

              by BitZtream (692029)

              You're fired.

              unless you waste a lot of time reading the documentation for every class that you use

              Again I say, you are fired. I would throw you out on your ass so quick it would make your head spin if you told me that as one of my employees. If you aren't reading the documentation you don't know how the method works and you don't need to be writing code.

That is the most idiotic argument I've ever heard. It's the definition of bad programming, ON PURPOSE no less.

by AmiMoJo (196126) * on Saturday December 08, 2012 @07:16PM (#42228827) Homepage

What a douchebag you are. In the real world there are deadlines and never enough people on hand, so you don't have time to read every bit of documentation for everything. That is perfectly acceptable as long as you are still capable of developing software that is robust and does what the customer wants.

                This is why we have testing. It is more cost effective to avoid getting bogged down in making something perfect and instead get it tested as you go, making improvements based on feedback. The only people who do it any other way are writing mission critical code that costs a fortune to develop.

                You know what? You're fired. Your products are all late, way over budget, the development team hates your anal retentive attitude, while your competitors left you in the dust.

        • by Sarten-X (1102295) on Saturday December 08, 2012 @01:46PM (#42226133) Homepage

          If you're actually seeing #4 in practice, your coworkers need a nice bit of physical re-education.

          Each situation you describe has perfectly valid circumstances:

          1) Using a "try" on the whole function is suitable on functions where a particular caught exception can only mean one thing. If you're catching a FileNotFound exception, it means the file's missing. It doesn't matter that the error happened while opening the file, or at the first read. The exceptional situation is that the file can't be found. Exactly which call had the exception doesn't matter beyond debugging (for which there's usually extra information in the exception, such as line numbers).

          2) Revealing ugly text isn't user-friendly. Rather, it shows that the programmer has no idea what's going on and is putting the burden of debugging on the user. Ideally, the exception handler will first take steps to remedy the situation on its own (config file not found? Use sane defaults and save them for next time!), then log the exception somewhere with only the meaningful parts (such as a module name, line number, and a selection of parameters). Nobody really needs to know the whole stack trace up to main().

          3) Sometimes, an empty catch routine is fine, and if it isn't fine a good code audit should notice this anyway. Some errors can be safely discarded, but the code should reflect that they are being willfully ignored, rather than just ignored out of ignorance.

          4) Despite my glib comment earlier, there are also cases where blindly retrying a step is the cleanest solution. One example I've seen recently is where a database connection would reveal a timed-out disconnection only upon actually executing statements. The straightforward solution was that if the first statement failed due to a timeout, the connection (which was now in an error state) would be checked again, reconnected, and the statement would be retried.
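The reconnect-and-retry flow in (4) might be sketched like this (the `Connection` and `TimeoutException` types here are hypothetical stand-ins, not a real driver API): retry exactly once, because a timeout on the first statement usually just means the idle connection was dropped.

```java
public class RetryOnTimeout {
    // Hypothetical types standing in for a real database driver.
    static class TimeoutException extends RuntimeException {}
    interface Connection { int execute(String sql); void reconnect(); }

    // A timed-out connection reveals itself only when a statement runs,
    // so reconnect and retry the statement once; a second timeout
    // propagates normally.
    static int executeWithRetry(Connection conn, String sql) {
        try {
            return conn.execute(sql);
        } catch (TimeoutException e) {
            conn.reconnect();
            return conn.execute(sql);
        }
    }

    public static void main(String[] args) {
        Connection flaky = new Connection() {
            boolean alive = false;
            public int execute(String sql) {
                if (!alive) throw new TimeoutException();
                return 42;
            }
            public void reconnect() { alive = true; }
        };
        System.out.println(executeWithRetry(flaky, "SELECT 1")); // 42
    }
}
```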

          You should, if you want any hope of really dealing with the error, wrap every call in its own try/catch. I have not ever seen that done...

          ...because it's a silly idea. Now you're just using exceptions as special return values. Exceptions are not supposed to mean "something went wrong here". They should mean "there's a situation that is so unexpected that I don't know how to handle". It's a different paradigm entirely. The idea is that rather than writing your program to anticipate every possible error (as the mathematicians so loved), the program should instead follow a more practical "hope for the best, plan for the worst" design. Rather than worrying about exactly which byte of a file couldn't be read, the program should just understand that something's wrong with the file, and its contents can't really be trusted.

          Then again, how do you handle the system volume suddenly vanishing out from under you? So, perhaps the coarse-grained "golly, it broke, try again later" folks have the right idea. ;)

          If your program is supposed to run on transient resources (like, for instance, a cluster that has a weak master controller running your program, and the bulk of its processors scheduled to run computation), this should be expected. Perhaps a "system vanished" exception can be raised to signal that in-process calculations should be restarted the next time the system appears, and that previous calculations should be saved in case everything else disappears, too.

          Or in other words, it broke and you should prepare to try again later. :)

          • Exceptions are not supposed to mean "something went wrong here". They should mean "there's a situation that is so unexpected that I don't know how to handle"

            Wait, but when you write an exception handler for it, then it's not only expected, but you know how to handle it, so it's not an exception anymore, so the program doesn't know how to handle it, which means...


            Exception: Out of Stack Space. System Halted.

        • by west (39918) on Saturday December 08, 2012 @03:27PM (#42226921)

          I have not ever seen that done

I have. The coder handled every possible exception intelligently, handled the possible exceptions in the exception handlers, handled the possible exceptions in the exception exception handlers, etc. It was phenomenal. His code could practically handle a CPU burning out at the same time as the primary disk being hit by lightning while the database had been accidentally converted into EBCDIC.

          Unfortunately, it was also completely unmaintainable. No human being, outside of the original programmer, could possibly grok all the conditions, sub-conditions, and contingencies. The code was also 3000 lines of error handling for about 25 lines of normal execution.

          It was my privilege to gaze upon the world's most complete error handling before I fulfilled my responsibility of burning it to the ground.

• What? Why? It should be:
  try {
      blahBlah = blah(); // throws exception
  } catch (Exception e) {
      // Do whatever War is good for.
  }

    • Re:The third option (Score:5, Interesting)

      by Yetihehe (971185) on Saturday December 08, 2012 @12:56PM (#42225723)

That's the philosophy of Erlang: "Let it crash". Apparently this leads to some of the most reliable systems.
Apparently the OP hasn't heard about it, because this is the third way.
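For the curious, a very rough Java analog of the supervisor idea (nothing like real Erlang/OTP, just a sketch of restart-on-crash): instead of handling errors inline, let the task die and have a supervising loop restart it.

```java
public class SupervisorSketch {
    // Restart the task whenever it crashes, up to a retry budget. In Erlang
    // the worker and the supervisor would be separate processes; here both
    // are collapsed into one loop, purely for illustration.
    static void supervise(Runnable task, int maxRestarts) {
        for (int attempt = 0; attempt <= maxRestarts; attempt++) {
            try {
                task.run();
                return; // task finished normally
            } catch (RuntimeException e) {
                System.out.println("task crashed (" + e.getMessage() + "), restarting");
            }
        }
        throw new RuntimeException("task kept crashing; giving up");
    }

    public static void main(String[] args) {
        final int[] calls = {0};
        supervise(() -> {
            if (++calls[0] < 3) throw new RuntimeException("boom " + calls[0]);
            System.out.println("succeeded on attempt " + calls[0]);
        }, 5);
    }
}
```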

      • Re:The third option (Score:5, Interesting)

        by Nerdfest (867930) on Saturday December 08, 2012 @01:13PM (#42225869)

        I've gotten to prefer using runtime exceptions with a general policy of "Throw as early as possible, catch as late as possible". Only catch if you can do something about it. It works very well, and keeps the code very clean.

      • That is only because most of telephony can get away with being stateless.

        • by Yetihehe (971185)

Not only telephony. Also, in Erlang you can make stateful servers; you just don't have shared state. Instead you send messages about data changes between processes. It's like many people talking and updating their knowledge of some situation.

    • by jythie (914043)
Over the years I have seen people try to push some kind of sci-fi-style 'graceful failure' where programs somehow survive errors without programmers actually needing to do anything, so in that model one doesn't need error handling. As one can imagine, such magical no-work systems have not panned out well in real life.
    • by PRMan (959735)

      Ignoring the error completely, data integrity or planned functioning be damned.

      This is the option chosen by most developers...

    • Re:The third option (Score:5, Interesting)

      by Capitaine (2026730) on Saturday December 08, 2012 @01:40PM (#42226093)
My work involves writing aeronautical software specifications. In order to facilitate FAA/EASA certification of the system, we are required to stick to the KISS principle. Data validity checks are done almost everywhere, but we are asked to design the logic so that it does not need to use these data validity statuses. Degraded mode is handled through the use of failsafe values. A consumer of data does not need to know the status of the producer. The whole system is designed so that it works and is robust with minimal use of alternate logic.

      This principle works well with dataflow oriented program and might be adaptable to other domains.
  • by Andy Prough (2730467) on Saturday December 08, 2012 @12:36PM (#42225581)
    I think MS already tried the blue screen.
  • by bunratty (545641) on Saturday December 08, 2012 @12:38PM (#42225591)
    BASIC has had it all along!
    • On Error Resume next (Score:5, Informative)

      by xaxa (988988) on Saturday December 08, 2012 @12:46PM (#42225661)

      Visual Basic had:
      On Error Resume Next

      I last typed that when I was about 13...

The documentation shows a couple of valid uses for it.

  • by Anonymous Coward on Saturday December 08, 2012 @12:42PM (#42225631)
    Exceptions should NOT be used for 'normal' errors. They should be used for events that are, well, exceptional. A healthy program should NEVER raise an exception, but may deal with a lot of error conditions.
    • by Smallpond (221300)

      Exceptions should NOT be used for 'normal' errors. They should be used for events that are, well, exceptional. A healthy program should NEVER raise an exception, but may deal with a lot of error conditions.

      [citation needed]

      What is the basis of this strange belief? The non-error case should be simple and readable, not cluttered with tests for seldom-occurring errors. Putting a try-catch block around it pulls the messy error code out of it and is simpler to read and debug than inline code. Never worry about performance of error-handling code.

      • by ruiner13 (527499) on Saturday December 08, 2012 @01:27PM (#42226007) Homepage
I believe the reason for not wanting to throw exceptions unless really needed is that exceptions (and their handling) are relatively expensive, resource-intensive operations. Most languages, when exceptions are thrown, do a lot of runtime stack analysis to, among other things, get a full stack trace. There are many write-ups on the interweb explaining how expensive it is in whatever language you happen to be using.

        In the case of the .net runtime, throwing an exception was > 1000x as expensive as using a return value, in processing time.
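A micro-benchmark sketch of that cost difference (illustrative only; absolute numbers vary wildly with JVM, warm-up, and stack depth, so treat the printed figures as a rough comparison rather than a measurement):

```java
public class ExceptionCost {
    // Signal failure with a sentinel return value...
    static int viaReturnCode(int x) {
        return (x < 0) ? -1 : x * 2;
    }

    // ...versus signalling it with an exception.
    static int viaException(int x) {
        if (x < 0) throw new IllegalArgumentException("negative input: " + x);
        return x * 2;
    }

    public static void main(String[] args) {
        final int iterations = 10_000;

        long t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            viaReturnCode(-1);
        }
        long returnCodeNanos = System.nanoTime() - t0;

        t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            try {
                viaException(-1);
            } catch (IllegalArgumentException e) {
                // Capturing the stack trace at throw time dominates this cost.
            }
        }
        long exceptionNanos = System.nanoTime() - t0;

        System.out.println("return-code path: " + returnCodeNanos + " ns");
        System.out.println("exception path:   " + exceptionNanos + " ns");
    }
}
```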
    • I think exceptions should be used at a reasonable point where the application cannot utilize the given parameters to continue.

      For instance, Null argument exceptions. Your code should be written to assume that the arguments presented to it fit within the functional parameters of the code. An Exception should be thrown if it's determined they are not.

Another: SQL connection exceptions and other Network Exceptions. Your code should be written to assume that these resources are available. If they are not...
      • You should never, ever catch a NullPointerException. It indicates a bug in your code, not some external error condition. If it happens, you fix your code.

        I mean, how exactly do you expect to recover from it even if you catch it?

        The same goes for all other exceptions that are actually contract violations - various range- and bound-checks etc.
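A typical fail-fast contract check of that kind might look like this (a sketch): the method throws immediately on a contract violation, with a message naming the offending values, and callers are expected to fix their code rather than catch it.

```java
public class Contracts {
    // Contract: index must lie in [0, length). A violation is a bug in the
    // caller, so we throw an unchecked exception and never expect a catch.
    static int checkIndex(int index, int length) {
        if (index < 0 || index >= length) {
            throw new IndexOutOfBoundsException(
                "index " + index + " out of bounds for length " + length);
        }
        return index;
    }

    public static void main(String[] args) {
        System.out.println(checkIndex(2, 5)); // 2
        // checkIndex(7, 5) would throw, with a message naming both values.
    }
}
```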

    • by Ziggitz (2637281) on Saturday December 08, 2012 @03:40PM (#42227033)

This. It's not difficult to write good defensive programs that check for nulls before performing operations and fairly consistently never raise an exception. However, most programs need to handle inputs from other applications that the program cannot guarantee are valid, and a lot of complex inputs cannot be verified by simple null checks. Additionally, any developer who writes code that touches the internet (i.e. most of us) has to cope with unreliable services, deployment engineers (or worse, the lack thereof) setting up applications with incorrect configurations, bad inputs, and network failures.

What a try/catch block should really be used for is a conscious decision point identifying where a valid program might meet an error condition, and dealing with the implications of that error. Maybe the error is missing crucial initialization parameters and all you can do is log an error, set a pretty error message for the user, and kill the program. Maybe you can flush the current parameters and try again with some defaults. Maybe you can still run but with impaired functionality. Maybe you are a secondary function and it's OK if you fail, but you need to let the user know.

      The author's arguments boil down to "try catch blocks make my code look ugly." There is no valid solution to error handling that doesn't involve developers proactively identifying and addressing unreliable operations. Any valid solution that isn't current exception handling is going to look a lot like it because error handling is not some boilerplate task that you can wave a magic wand at and make disappear.

  • Exceptions in C++ (Score:4, Insightful)

    by Anonymous Coward on Saturday December 08, 2012 @12:45PM (#42225651)

While I have seen good error handling schemes in many languages, so far I haven't seen anything as good as C++ exceptions combined with RAII. Exceptions alone aren't that great, but if you combine them with the way constructors/destructors work and compose in C++, it ends up working really well. A lot of languages with exceptions lack RAII. Java and C# have exceptions but don't have destructors (the closest language equivalents are much less useful than in C++), much less ones that compose.

The only real problem is that a lot of C++ code relies on return codes, no error handling at all, or poor use of exceptions and resource management. There are lots of C++ programmers who stumble on error handling code and haven't learned how to take advantage of the tools the language provides. Of course, error handling logic can be quite hard, even if the language helps out a lot.
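For comparison, Java's try-with-resources is the nearest analog it has to the RAII pattern described above (a sketch only; C++ destructors remain more general, since they run on every scope exit rather than only inside a try): `close()` runs on every exit path from the try block, including exceptional ones, before any catch clause.

```java
public class RaiiAnalog {
    // AutoCloseable plays the role of a C++ destructor here: close() runs
    // on every exit from the try block, exceptional or not.
    static class Resource implements AutoCloseable {
        final StringBuilder log;
        Resource(StringBuilder log) { this.log = log; log.append("open;"); }
        @Override public void close() { log.append("close;"); }
    }

    static String useResource(boolean fail) {
        StringBuilder log = new StringBuilder();
        try (Resource r = new Resource(log)) {
            if (fail) throw new RuntimeException("boom");
            log.append("work;");
        } catch (RuntimeException e) {
            log.append("caught;"); // close() has already run by this point
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(useResource(false)); // open;work;close;
        System.out.println(useResource(true));  // open;close;caught;
    }
}
```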

    STM is also a great way of doing error handling. Transactions (like used in databases) make error conditions much easier. But they cannot be limited to databases; transactions in the file system (Microsoft has this with NTFS) and transactions in memory data structures (STM) are very valuable.

    • by devent (1627873)

You have got to be kidding me. C++ has the worst exception implementation by far.

First, you can throw anything: a string, an int, an object, etc. That means you can't use a catch-all and do anything useful.
Second, you lose the stack trace, which means debugging is a pain.
Third, you can't throw an exception in the dtor, which means you can't use RAII. What if an exception occurs while you close a resource? For example, the disk is full; the socket is gone; memory is full; the disk is broken; the user removed th...

  • Exceptions (Score:3, Insightful)

    by vargad (1948686) on Saturday December 08, 2012 @12:49PM (#42225681)
Normally exceptions should be used in exceptional cases, not in normal control flow. Exceptions are usually quite expensive, especially in C++, compared to just returning an error code. Language APIs should be fast, but also convenient, so they had to make a trade-off.
    • by Hentes (2461350)

      Speed is another reason why current exception handling mechanisms are insufficient.

      • Re:Exceptions (Score:5, Insightful)

        by Anonymous Brave Guy (457657) on Saturday December 08, 2012 @02:15PM (#42226369)

        Speed is another reason why current exception handling mechanisms are insufficient.


        Whether I'm aborting due to an error or exiting early from an intricate recursive graph processing algorithm, I'm still only doing it once.

        On the other hand, adding extra conditions on every pass around a nested loop to check whether a flag is set to cause an early exit creates code you're going to run lots of times (but only actually helps once).

        And in any case, for reasons I explained in my first post to this subthread, exceptions can actually be faster than relying on things like flags and error codes in both exceptional and non-exceptional code paths, obviously depending on your language's implementation strategy.
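The nested-loop point can be sketched like this (both functions are hypothetical, written for this example): the exception version pays nothing per iteration, while the flag version re-tests its flag on every pass around both loops.

```java
public class EarlyExit {
    // An exception used purely for control flow; disabling the stack trace
    // in the constructor keeps the throw cheap.
    static class Found extends RuntimeException {
        final int row, col;
        Found(int row, int col) {
            super(null, null, false, false);
            this.row = row; this.col = col;
        }
    }

    static int[] findViaException(int[][] grid, int target) {
        try {
            for (int r = 0; r < grid.length; r++)
                for (int c = 0; c < grid[r].length; c++)
                    if (grid[r][c] == target) throw new Found(r, c);
        } catch (Found f) {
            return new int[]{f.row, f.col};
        }
        return null;
    }

    static int[] findViaFlag(int[][] grid, int target) {
        boolean found = false;                                  // extra state...
        int fr = -1, fc = -1;
        for (int r = 0; r < grid.length && !found; r++)         // ...checked on
            for (int c = 0; c < grid[r].length && !found; c++)  // every pass
                if (grid[r][c] == target) { found = true; fr = r; fc = c; }
        return found ? new int[]{fr, fc} : null;
    }

    public static void main(String[] args) {
        int[][] grid = {{1, 2}, {3, 4}};
        int[] a = findViaException(grid, 3), b = findViaFlag(grid, 3);
        System.out.println(a[0] + "," + a[1] + " == " + b[0] + "," + b[1]);
    }
}
```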

    • Normally exceptions should be used in exceptional cases, not in normal control flow.

      People keep saying that, but I've yet to find someone who can defend the position with a logical argument.

Fundamentally, you run some code to do a job. There are two ways it can finish early: either it succeeded, and we did all the work/figured out whatever information we were asked for, or it failed, and maybe we want to report this along with some related information. Either way, there is nothing useful left to do except hand control back to the higher level code that asked for the work to be done, along...

  • by erroneus (253617) on Saturday December 08, 2012 @12:51PM (#42225689) Homepage

    It is not clutter. It is necessary. Trash cans in the home might be considered clutter too I suppose. Some people artfully conceal them within cabinets and such, but in whatever form, they are both necessary and either take up space or get in the way or both.

    It is the reality we live in. If you want to code in a language that doesn't require error handling, you might look to one of those languages we use to teach 5 year olds how to program in.

    Good code does everything needed to manage and filter input, process data accurately and deliver the output faithfully and ensuring that it was delivered well. All of this requires error checking along the way. If you leave it to the language or the OS to handle errors, your running code looks unprofessional and is likely to abort and close for unknown causes.

    I think the short of this is that if anyone sees error checking as clutter or some sort of needless burden, they need to not code and to do something else... or just grow up.

  • by overshoot (39700) on Saturday December 08, 2012 @12:53PM (#42225703)

    How can it be that after 60+ years of language development, errors are handled by only two comparatively verbose and crude options, return values or exceptions?

Of course, it could be that this just means that your own language horizons are too narrow. Prolog and Icon come to mind.

    • by mdmkolbe (944892) on Saturday December 08, 2012 @02:49PM (#42226619)

      Haskell also comes to mind. Errors are so well handled in that language, you probably won't notice that they are so well handled. Because of the way things are structured, errors are rare (so no need to check them). When they are present, there are a number of techniques from "Maybe" types to "Error" monads to throwing "IO" monad errors. The "Maybe" type is particularly interesting as it ensures that the user will check the error, and provides convenient notations and combinators for doing that checking.
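      A rough C++ analogue of the "Maybe" idea is std::optional: the sketch below (the parse_digit name is mine, not from the comment) forces the caller to confront the missing-value case before using the result.

      ```cpp
      #include <cassert>
      #include <optional>
      #include <string>

      // Parse a single decimal digit; an empty optional signals failure.
      // The caller cannot use the result without first checking it.
      std::optional<int> parse_digit(const std::string& s) {
          if (s.size() == 1 && s[0] >= '0' && s[0] <= '9')
              return s[0] - '0';
          return std::nullopt;  // the "Nothing" case
      }

      int main() {
          assert(parse_digit("7").value_or(-1) == 7);
          assert(parse_digit("x").value_or(-1) == -1);  // error path, no exception
          return 0;
      }
      ```

      Unlike Haskell's Maybe, nothing in C++ stops you from calling `*opt` unchecked, so this is an approximation of the guarantee, not the same thing.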

  • ...write code without error. Or in other words:

    "Never test for an error condition you don't know how to handle." -- Steinbach's Guideline for Systems Programmers.

  • Exceptions here provide greater readability

    Nah, they don't.

    • by ruiner13 (527499)
      I'll take good comments explaining difficult to read code over using exceptions to make it "appear" more readable any day. Exceptions should only be used when the code has no way to recover from the error gracefully. They should not be used to improve readability. That's what comments are for. Now, I'm not saying every trivial line should be commented, but blocks of code that are complex should at least have the intent of the code commented, so no one later on changes its behavior unintentionally (yes,
  • Third option (Score:5, Insightful)

    by wonkey_monkey (2592601) on Saturday December 08, 2012 @12:57PM (#42225735) Homepage

    How can it be that after 60+ years of language development, errors are handled by only two comparatively verbose and crude options, return values or exceptions? I've long felt we needed a third option.

    Maybe - and admittedly this is just a guess from my fairly ignorant viewpoint - it's a very hard problem. How can it be that after 100+ years of industrial development, we're still heavily reliant on internal combustion engines to get us around? Why have we only got people as far as the moon in 60 years of space travel? Why, after x years, have we only achieved y?

    Because that's the way it is. Is there some reason we should have the third option by now?

    • Re:Third option (Score:5, Interesting)

      by epine (68316) on Saturday December 08, 2012 @04:10PM (#42227349)

      Maybe - and admittedly this is just a guess from my fairly ignorant viewpoint - it's a very hard problem. How can it be that after 100+ years of industrial development, we're still heavily reliant on internal combustion engines to get us around?

      This is a good illustration of the wisdom in Thinking, Fast and Slow: the people most likely to highlight what they don't know proceed to the most sensible conclusions.

      I haven't sat down in front of a keyboard to write code since 1985 without this issue foremost on my mind. In the majority of serious programs what the program does is a tiny minority of what you need to think about. Dijkstra wrote some chapters where he illustrates that some programs will actually write themselves if you adhere rigorously to what the program is allowed/not allowed to do at each step, and bear in mind that you need to progress on the variant.

      I do everything humanly possible when I write code to work within the model of tasks accomplished / not accomplished rather than the domain of everything that could possibly go wrong (error codes). It's a lot harder when error codes unreliably signal transient / persistent error distinctions. I view programs as precondition graphs, where the edges are some chunk of code that tries to do something, typically a call into an API of some kind. Who cares if it reports an error? What you care about is whether it established the precondition necessary to visit the edges departing the destination vertex in the dependency graph.

      Dijkstra also cautions about getting hot and bothered about order relations in terms of which edges execute in which order. In general, any edge departing a vertex with all input conditions satisfied is suitable to execute next. Because he was so interested in concurrency, he assumed that when multiple valid code blocks were viable to run that scheduling was non-deterministic. It can make a difference to program performance which blocks are executed in which order. We enter the domain of premature optimization much sooner than we suspect simply by writing our code with overdetermined linearity (few languages even have a non-deterministic "any of" control operator).

      I had a micro-controller application in which there were many variable references of the form a->b->c->d. This is hell to be strict about precondition testing.

      bool ok;
      ok = a != NULL && a->b != NULL && a->b->c != NULL;
      if (ok) {
          ok = fragile_api_call (a->b->c->d);
          if (!ok) sideband_error_info = ERR_YUCK;
      }
      if (ok) { // lather, rinse, repeat
          // ...
      }

      // end of graph, did we succeed or fail?
      if (!ok) barf_out (sideband_error_info);
      return ok;

      This is the fall-through do everything possible and not a thing more programming idiom. It doesn't end up highly nested, because you flatten the dependency graph. Assignments to ok often occur at deeper nesting points than where they are later tested.

      This kind of code often looks horrible, but it's incredibly easy to analyze and debug. The preconditions and post-conditions at every step are local to the action. Invariants are restored at the first possible statement. For the following piece of code, the desired invariant is that if thingX is non NULL, it points to a valid allocation block. In the entire scope of the program, this invariant is untrue only for the duration of the statement whose job is to restore the invariant.

      bool ok = true;

      void* thing1 = NULL;
      thing1 = malloc (SIZE_MATTERS);
      ok = ok && thing1 != NULL;

      void* thing2 = NULL;
      thing2 = malloc (SIZE_MATTERS+6);
      ok = ok && thing2 != NULL;

      if (ok) {
          // more logic
      }

      if (thing1 != NULL) {

  • by White Flame (1074973) on Saturday December 08, 2012 @12:58PM (#42225737)

    The author commends the use of multiple return values and a side-band error value that must be checked? Gee, multiple return values have been in Lisp forever, and maybe he's not aware of this little thing called "errno"?

    Error handling is very, very tedious by nature. There are bajillions of ways that a system can go screwy, and many of these have individualized responses that we want distinguished for it to behave intelligently in response. We expect computers to become "smarter", and that means reacting intelligently to these problematic/unexpected situations. That is a lot of behavioral information to imbue into the system, all hooked into precise locations or ranges for which that response is applicable. That information is hard to compress.
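    The errno side-band works like this in practice (a minimal sketch; the path is made up, and the exact errno value is platform behavior, ENOENT on POSIX systems):

    ```cpp
    #include <cassert>
    #include <cerrno>
    #include <cstdio>
    #include <cstring>

    int main() {
        errno = 0;
        FILE* f = std::fopen("/no/such/path/config", "r");
        if (f == NULL) {
            // The return value says only "it failed";
            // the errno side-band carries which error it was.
            assert(errno == ENOENT);
            std::printf("open failed: %s\n", std::strerror(errno));
        } else {
            std::fclose(f);
        }
        return 0;
    }
    ```

    The side-band has the weakness the parent implies: nothing forces the caller to look at errno, and the next library call may silently overwrite it.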

  • by ribuck (943217) on Saturday December 08, 2012 @12:58PM (#42225743) Homepage

    But even in exception-based languages there is still a lot of code that tests returned values to determine whether to carry on or go down some error-handling path

    The key to taming exceptions is to use them differently. Any exception that escapes a method means that the method has failed to meet its specification, and therefore you will need to clean up and abort at some level in the call chain. But you don't need to catch at every level (unless your language forces you to), nor should you need to do anything that relies on the "meaning" of the exception. Instead, you take a local action: close a file, roll back the database, prompt the user to save or abandon, etc, and either re-throw or not according to whether you have restored normality. There will only be a few places in your app where this type of cleanup is needed.

    If you're not doing it this way, you're using exceptions as a control structure, and that's never going to be clean.
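    In C++ terms, the parent's discipline is a catch-all that performs a local cleanup and then re-throws with a bare `throw;` — a sketch with a hypothetical rollback() helper standing in for "close a file, roll back the database":

    ```cpp
    #include <cassert>
    #include <stdexcept>
    #include <string>

    static bool rolled_back = false;

    void rollback() { rolled_back = true; }  // hypothetical local cleanup

    void do_work() {
        throw std::runtime_error("insert failed");  // some step fails
    }

    void transactional_work() {
        try {
            do_work();
        } catch (...) {
            rollback();  // local action: restore the invariant we own...
            throw;       // ...then re-throw; we never interpret the error
        }
    }

    int main() {
        try {
            transactional_work();
        } catch (const std::exception& e) {
            // Top level: the one place that reports and aborts the operation.
            assert(std::string(e.what()) == "insert failed");
        }
        assert(rolled_back);
        return 0;
    }
    ```

    Note that the intermediate catch block never looks at the exception's "meaning"; it only restores its own invariant and passes the failure up.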

  • Condition handling [] and Conditions []. Old school, but does the job without too much clutter!

  • Even if there was a shortcut for safely ignoring return values, I would (the company I work for would) still need to check and catch every return. Why? We have to log them all.

    If you don't want to deal with failed returns, I find that a scripting language is the best way to go. I write my glue functions to handle nulls gracefully and I am done.
  • This is the real third option.

    The so-called "error returns" from things like file opening are telling the program something very important about what's going on. The program's flow must be designed from the beginning to interpret and handle errors. This is in fact much of what a good program does.

    It doesn't matter whether we use exceptions or error codes to signal the errors as long as the program is designed to accurately interpret the errors that do occur. In some sense, exceptions may be easier to imp

  • But even in exception-based languages there is still a lot of code that tests returned values to determine whether to carry on or go down some error-handling path.

    The whole idea of exceptions is that you don't need to worry about checking return codes. If you're putting a lot of work into checking return codes for error conditions, then you're working with some ugly code that probably needs refactoring. One place I see this a lot is in Java wrappers around libraries ported from C (or used directly with JNI). Often, to make the documentation and example code line up perfectly, the wrapper returns invalid values for exceptional circumstances rather than just throwing e

  • by cstdenis (1118589) on Saturday December 08, 2012 @01:09PM (#42225841)

    Just don't bother checking for errors.

  • The biggest problem with exceptions is that they get thrown too far, turning them into comefroms (the opposite of a goto). And like gotos, they encourage spaghetti code. The best way to deal with them is to limit them by throwing exceptions only to their immediate callers. That way, all exceptions become part of the subroutine's interface. Remember, for a programmer, out of sight is out of mind. If it's not part of the interface, it will be forgotten. For those who are interested, you can read my blog [] for details and
  • If possible, design a component such that no errors will ever occur (except for hardware failure). An entire program can't be designed that way, obviously, but individual components can be. Collecting all resource allocation into a small, well defined set of locations will relieve the majority of other code from the need to handle errors. In such a design, the majority of functions will return "void".
  • The nice thing about functions (rather than just simple subroutines) is that you can chain them in a single line. E.g.
    a = geommean(factorial(b), zeta(c))
    The single return value mechanic really makes it easy to use for math-style expressions. But the single return value mechanic isn't adequate when the function is allowed to have errors. In a language like c, one might do something like
    f = factorial(b, &error);
    z = zeta(c, &error2);
    if (error==0 && error2==0) { a = geommean(f, z, &error3); } else
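    Completed, the out-parameter pattern might look like the sketch below (checked_sqrt and checked_log are hypothetical stand-ins for factorial/zeta, since the point is the plumbing, not the math):

    ```cpp
    #include <cassert>
    #include <cmath>

    // Each "function" grows an error out-parameter alongside its real result.
    double checked_sqrt(double x, int* error) {
        if (x < 0) { *error = 1; return 0.0; }
        *error = 0;
        return std::sqrt(x);
    }

    double checked_log(double x, int* error) {
        if (x <= 0) { *error = 1; return 0.0; }
        *error = 0;
        return std::log(x);
    }

    int main() {
        int e1, e2;
        double s = checked_sqrt(16.0, &e1);  // succeeds
        double l = checked_log(-1.0, &e2);   // fails
        double a;
        if (e1 == 0 && e2 == 0) {
            a = s + l;   // only combine when every step succeeded
        } else {
            a = NAN;     // the error path the snippet above elides
        }
        assert(e1 == 0 && e2 == 1);
        assert(std::isnan(a));
        return 0;
    }
    ```

    The one-line chained expression is gone: every call now needs a temporary and a check before its result can be used.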

  • by gman003 (1693318) on Saturday December 08, 2012 @01:20PM (#42225947)

    There are two ways to do error-handling: try{}catch{}, or if{}else{}. That's "using exceptions" and "using return values", under Dobb's naming.

    The difference in usage is simple: one handles errors immediately, thus cluttering the code with all the things that could go wrong, while the other separates error-handling out, pushing it to the end of a block (and away from the code that actually generates the error, which can complicate debugging).

    I can really think of no other way to do it. You can handle the error where it happens, or handle the error at the end. I tend to look on anyone whining about how hard error-handling is with suspicion - their suggestions (if they even have any) are almost always "the language/compiler/interpreter/processor/operating system should handle errors for me", and there are enough obvious flaws in that logic that I need not point them out.

    • by JesseMcDonald (536341) on Saturday December 08, 2012 @04:07PM (#42227313) Homepage

      There is at least one other method, which is available natively in Common Lisp. It's known as conditions, and involves registering a condition handler which, unlike an exception handler, runs in the context where the error occurs. The handler has access to zero or more dynamically-scoped restarts, which allow the computation to be resumed at well-defined points without unwinding the entire stack up to where the condition handler was established. The default condition handler is an interactive debugger, which allows the user to examine the state of the program and choose one of the available restarts.

      Beyond Exception Handling: Conditions and Restarts []

    • the language/compiler/interpreter/processor/operating system should handle errors for me

      Which they can't, unless they know what your intents are. Perhaps there's a way of declaring intents, I'm not sure.

      A subset of this might be something like invariant conditions / code contracts which makes sure that you don't have as many unexpected errors to handle. Static typing is another (and more rigid) approach.

      The author isn't wrong in the narrow sense that the programmer's tools should allow him to focus his er

  • by Greyfox (87712) on Saturday December 08, 2012 @01:26PM (#42226001) Homepage Journal
    You don't write an article like this unless you're actually going to suggest a different solution in it. Otherwise it just comes off as whiny and inexperienced. "Oh, if only we could not do that thing that everyone must do if they want robust code!" Reminds me of beginner CS students who don't want to make an extra header file or prototype functions. We're not doing magic here, and no amount of wishing for magic will make it happen. Work with some magic module (ActiveRecord, maybe) for a while and you'll quickly learn to hate magic, anyway. Discipline is required to write code that will stand the test of time. If I were wishing for something, it'd be that more programmers had the discipline to write good code consistently.
    • Re:Pfff (Score:4, Insightful)

      by larkost (79011) on Saturday December 08, 2012 @03:26PM (#42226913)

      I somewhat agree with you, but your examples are horrible: the near-requirement for header files and prototype functions in C stems from a language deficiency, not from something that "beginner CS students" don't get. They are correctly seeing the situation as non-optimal. Java and Python (in different ways) are examples of languages that handle these things with a multi-step parse. Note that I am not arguing against the option of having header files, since they clearly have a use in large projects (one that javadoc also serves). But the requirement to have function prototypes in order to have out-of-order functions is simply a language deficiency. The fact that people have been very successful while working around it for so long is a testament to them, not to the language's inherent merit.

  • I read the article (Score:4, Insightful)

    by iplayfast (166447) on Saturday December 08, 2012 @01:27PM (#42226003)

    And I must say that as the Editor in Chief he has a very simplistic view of the problem. If I understand, his view is that a global exception added at the compiler level would somehow solve all the problems. He gives the example of calling "open" without worrying about it failing. Of course he doesn't state how to handle the failure when it occurs. For example

    open(file1); // ok
    open(file2); // failure

    What happens to file1 in this case? How is the code cleaned up? There may be a case where you don't want to close all the open files in the function, but instead just create file2 if the open failed, for example.

    His complaint is that there are too many options available for error handling, and that they lead to cluttered code. As far as I can see, the alternative is not enough options available, code not always doing what you want, and having to fight the compiler to get what you want.

  • This is generally seen with asynchronous code, but it could apply anywhere.

    Consider: (javascript) XmlHttpRequest has a readystatechange callback. Most javascript libs wrap it up so you pass in two callbacks, one for success and one for failure/error.


    jQuery.ajax(url, {
        success: function(data, textStatus, jqXHR) { ... },
        error: function(jqXHR, textStatus, errorThrown) { ... }
    });

    No return, no exception, the programmer decides how to handle it.

  • by cheebie (459397)

    We need the computer fairies to handle our errors, that way the beauty of our code will not be marred by mundane things like error checking.

    Seriously, error checking is part of the process. It's not the fun part of the process but it's a necessary part. Return values and exceptions work just fine as long as you get off your high horse and realize that your code will not be hung in the Louvre. Working is more important than pretty.

  • by Jmc23 (2353706) on Saturday December 08, 2012 @01:36PM (#42226063)
    One day, the majority of programmers will be able to grok Lisp and all these silly problems disappear.

    Problems don't exist in reality, they exist in points of view.

  • by msobkow (48369) on Saturday December 08, 2012 @01:47PM (#42226139) Homepage Journal

    Quality code will always be "cluttered" with data validation code, result verification, and a host of other details.

    The simple fact is that computers are stupid. They have to be explicitly told what to do in every conceivable situation the code could encounter at runtime, or else the code will crash and the user will complain about it being "unusable".

    I notice that despite the article author's bitching about the situation, they had not one suggestion as to what to do instead. It's easy to bitch about life, but a lot harder to suck it up and deal.

    If exception handling and return-value checking code are "too hard" for someone to understand, they need to get the hell out of the programming industry and leave it to professionals who actually find it fun and challenging to deal with all the details. Not everyone has the mindset of a true programmer.

  • by TheRealHocusLocus (2319802) on Saturday December 08, 2012 @01:55PM (#42226201)

    All coding should proceed as if every possible exceptional condition (device not ready, cache fail, controller failure, cat dials 911 on speakerphone) is the primary and intended purpose of the Project. Hash collisions not merely covered as a contingency but pursued with vigor in the main line to the Nth degree, where N indicates the infinitesimal possibility of multiple simultaneous hash collisions that would be the likely result of a vengeful god constructing the universe such as to produce a life of continuous and foul exceptions.

    When gathered at the water cooler, coders would discuss triumphs in their particular areas of malfunction, and when they corroborate as a group it is to merge their respective threaded exceptions into a parallel paroxysm of failure, branching with virtual threads and physical coring such that the greatest possible number of malevolent conditions are met and coded for, simultaneously. Proceeding steadily towards the grail of the Grandest Failure.

    The Grandest Failure being the stuff of mere legend, yet it is what drives us. It represents that supreme and sublime moment where everything that can go wrong has gone wrong and the very fundament reeks of wrongness.

    Buffers are not starved as an exception, they are starved by design! Disk controllers are never ready. Communications packets never arrive in sequence, or so we assume because there are no markers to check, when they do arrive they are garbled beyond repair. Reconstruction occurs as a matter of course! Streams are unsynchronized by nature, incompatible by rote, unresolvable.

    Off the corridor in a dusty hallway a small team of pariahs is assembled to perform the dirtiest and most detestable task of all: to handle the exceptions and branches thrown by the main line, conditional branches sketched out briefly (whose existence is known but not mentioned in polite conversation) are pursued in secret. This is necessary work but unrewarding as it leads away from the noble purpose of Grandest Failure, towards useful work. Such stuff as consolidation, transaction handling and data ordering, forgive me for uttering, Chaos be Praised!

    For the goal is to produce a System that boldly and efficiently proceeds down the pathways of most numerous and most simultaneous failure, where the actual success of anything triggers the exceptions and is cast off to the side.

    If robustness of design becomes human sentiment, it could be said that the System confidently strides forward boldly embracing every error condition and is shocked -- horrified -- every time something goes 'right'. As life's own experience is our guide, it is seldom disappointed.

    The output of useful work in such a System is the source of great embarrassment and discomfort, a necessary evil.

    That is the principle behind the control systems of the Improbability Drive. It is the driving principle of the quantum flux, Brownian motion and wave/particle paradox.

    All of this Order and Progress (blaspheme!) is but a side road off a side road ad infinitum. The main path leads to Chaos. Follow that path and revel in it. There is no honor in coding for success, any idiot could do that.

    Down deep people know this is the Way. That is why when coders meet in dim conference rooms and the slideshow laptop suddenly projects a Blue Screen of Death for all to see, there is an eruption of thunderous applause, as if one had dropped a tray of food in a crowded cafeteria. Deep down we know failure is the noble path, and success the exception.

  • by Mal-2 (675116) on Saturday December 08, 2012 @02:35PM (#42226517) Homepage Journal

    When Chuck Norris throws an exception, it is always fatal.

  • by rew (6140) <> on Sunday December 09, 2012 @08:30AM (#42232717) Homepage

    The fundamental problem is that sometimes an error is an error to the calling program, but sometimes it is not.

    For example, when you issue: open "$HOME/.myconfig", the inability to find the file does not mean there is an error. Just that the optional config file is not there. But when you try to open the source file for an operation, the open-error really IS an error.

    This duality happens at most levels. A library wrapping "open" will have the same problem. Does the caller consider this a fatal error or not?

    Similarly, sometimes errors should result in telling the user and then quitting. But for a gui application it's better to show a graphical message and continue, even if the error is more or less "fatal".....
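    That duality can be made explicit at the call site. In the sketch below (the wrapper names are hypothetical), the same fopen failure is a normal outcome for one wrapper and an exception for the other:

    ```cpp
    #include <cassert>
    #include <cstdio>
    #include <optional>
    #include <stdexcept>
    #include <string>

    // Optional file: absence is a normal outcome, reported as an empty optional.
    std::optional<FILE*> open_optional(const std::string& path) {
        FILE* f = std::fopen(path.c_str(), "r");
        if (f == nullptr) return std::nullopt;  // "not there" is not an error here
        return f;
    }

    // Required file: absence is a real error, reported as an exception.
    FILE* open_required(const std::string& path) {
        FILE* f = std::fopen(path.c_str(), "r");
        if (f == nullptr) throw std::runtime_error("cannot open " + path);
        return f;
    }

    int main() {
        // Missing config: carry on with defaults.
        auto cfg = open_optional("/no/such/.myconfig");
        assert(!cfg.has_value());

        // Missing source file: this really is fatal to the operation.
        bool threw = false;
        try { open_required("/no/such/source.txt"); }
        catch (const std::exception&) { threw = true; }
        assert(threw);
        return 0;
    }
    ```

    The library doesn't decide whether the failure is fatal; the caller picks the wrapper whose policy matches its situation, which is exactly the duality the parent describes.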

The clearest way into the Universe is through a forest wilderness. -- John Muir