New & Revolutionary Debugging Techniques? 351

An anonymous reader writes "It seems that people are still using print statements to debug programs (Brian Kernighan does!). Besides the ol' traditional debugger, do you know of any new debugger that has a revolutionary way to help us inspect the data? (Don't answer with ddd, or any other debugger that just has a fancy data display; I mean a genuinely new, revolutionary approach.) I have only found one answer so far: Relative Debugging seems quite neat and cool."
  • Exceptions (Score:3, Interesting)

    by AKAImBatman ( 238306 ) <akaimbatman@gmaiBLUEl.com minus berry> on Sunday May 02, 2004 @09:28AM (#9033500) Homepage Journal
    Java Exceptions *were* a revolution in debugging. Java stack traces tell you the exact line number something went wrong, and the path taken to get there. More often than not, that's plenty of information to track down the bug and fix it. No need to load a debugger.

    • Re:Exceptions (Score:3, Insightful)

      by Anonymous Coward
Sometimes something goes wrong WITHOUT causing an exception. Those are the bugs that are hard to find.
      • Often something goes wrong with no runtime error. Those bugs are often really, really difficult to find.
    • Re:Exceptions (Score:3, Insightful)

      by bdash ( 598142 )

      Java Exceptions *were* a revolution in debugging.

      Because everyone knows that Java invented exception handling...

      • Re:Exceptions (Score:4, Informative)

        by Delirium Tremens ( 214596 ) on Sunday May 02, 2004 @10:04AM (#9033642) Journal
        Java invented the dynamic analysis and handling of stack traces [sun.com], not just exceptions.
If you are into dynamic analysis and recovery from exceptions -- that is, self-healing software -- it is a very powerful tool.
        • Re:Exceptions (Score:4, Informative)

          by bdash ( 598142 ) <slashdot@org.bdash@net@nz> on Sunday May 02, 2004 @10:06AM (#9033652) Homepage

          Java invented the dynamic analysis and handling of stack traces, not just exceptions.

          Where is your evidence that Java "invented" this? I have seen several other languages that are at least as old as Java that contain this feature, so some facts wouldn't go astray...

          • Re:Exceptions (Score:3, Interesting)

            by bigjocker ( 113512 ) *
            Care to name one of them? That is, one programming language, invented prior to Java, that includes detailed exception handling and execution stack traces that show source file name and line numbers (this implies that the language must have native multithread capabilities).
            • Re:Exceptions (Score:3, Informative)

              by bdash ( 598142 )
              Care to name one of them?
              See this post [slashdot.org] for a concrete example of such a language. It would be nice to see some evidence of a) when Java grew these features, and b) that it was the first language to have such features.

              this implies that the language must have native multithread capabilities
              Huh? What does threading have to do with exception handling? The two are almost completely unrelated, and the presence of one feature in a language in no way requires nor implies the presence of the other.
              • Re:Exceptions (Score:3, Informative)

                What does threading have to do with exception handling? The two are almost completely unrelated, and the presence of one feature in a language in no way requires nor implies the presence of the other.

I'd like to reinforce this statement. The only reason you would need multithreading is if you set up a watchdog timer to anticipate an infinite/semi-infinite loop state. Exceptions are almost exactly like interrupt vectors. You set up a handler, it gets stored in a table, and if needed, it's called. In fact,

            • Re:Exceptions (Score:5, Informative)

              by Paul Fernhout ( 109597 ) on Sunday May 02, 2004 @12:45PM (#9034542) Homepage
              Lisp and Smalltalk possibly in the 70s & certainly in the 80s (when I used ZetaLisp on a Symbolics and Smalltalk on various hardware -- Mac, TI, etc.).

How much has been forgotten. Time and time again I hear people claiming Java invented something when it was just the place they first saw it compared to programming in C or TurboPascal or whatever. Java does have some ideas it popularized -- but they are things like interfaces. Much of its class design like for Swing was taken from ParcPlace Smalltalk's VisualWorks. Hotspot profiling came from Smalltalk. MVC came from Smalltalk. etc. etc. Between Forth, Smalltalk, and Lisp (and a few other languages and libraries) most of the innovations people see now were invented a long time ago. VMs came from Smalltalk and IBM mainframes (first) and Pascal and Forth. Another example -- XML is a stupid version of Lisp s-expressions. And so it goes...

              • Welcome to the 70s! (Score:3, Informative)

                by voodoo1man ( 594237 )
Lisp and Smalltalk certainly had these capabilities in the 70s. Some of the Smalltalk stuff is described in a book I highly recommend, G. Krasner's (ed.) Smalltalk-80: Bits of History, Words of Advice. Interlisp had advanced stack-handling facilities for its spaghetti-stack VM, and hooks into all the error handling facilities, dating back to the late 60s, when it was known as BBN-Lisp. Of course, development was also entirely structure-oriented, so instead of line numbers with your stack trace, you'd get t
I fail to see how stack traces are self-healing software. The program still crashes; self-healing implies that it would somehow get around the error and continue execution at some point.
        • Re:Exceptions (Score:3, Informative)

          by smallpaul ( 65919 )

          Java invented the dynamic analysis and handling of stack traces, not just exceptions.

          Python has the same feature and Python is older than Java. It would take some effort to prove that Python had introspection of stack traces before Java did, but it seems quite likely to me. And it seems even more likely that some variant of Lisp had it long before Python.

        • Re:Exceptions (Score:3, Informative)

          by smallpaul ( 65919 )
          I'm looking at the source code for Python 1.1. It has a function called "extract_tb" which generates a list that you can manipulate and handle from a traceback. According to the changelog, that feature was added in 1994, the year before Java was released. I would bet money that the feature is much, much older than Python.
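For readers who haven't seen it, the feature being described survives in modern Python's standard library. A minimal sketch (contemporary API, not the Python 1.1 spelling; the failing function is invented for illustration):

```python
import sys
import traceback

def failing():
    return 1 / 0  # deliberate bug to produce a traceback

def trace_function_names():
    """Catch an exception and list the call path recorded in its traceback."""
    try:
        failing()
    except ZeroDivisionError:
        tb = sys.exc_info()[2]
        # extract_tb turns the raw traceback into plain, inspectable records
        return [frame.name for frame in traceback.extract_tb(tb)]

print(trace_function_names())
```

Each frame also carries the filename, line number, and source text, which is exactly the information a stack trace prints.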
    • Re: Old news (Score:5, Interesting)

      by David Byers ( 50631 ) on Sunday May 02, 2004 @09:43AM (#9033571)
Back in 1987 the FORTRAN compiler I used generated code that printed the source-code location of all failures, and that feature was old news even then.

Besides, nontrivial bugs don't result in stack traces or crashes. They result in infrequent, hard-to-spot anomalies in the output. No amount of Java stack traces will help you find them.
      • Re: Old news (Score:3, Interesting)

        by mindriot ( 96208 )

Besides, nontrivial bugs don't result in stack traces or crashes. They result in infrequent, hard-to-spot anomalies in the output. No amount of Java stack traces will help you find them.

        I second that. I currently have a piece of software that runs as a daemon. It silently crashes about once a week. Tell me a way of debugging it that doesn't take months, and I'll be happy. But until then, I'll have to add debugging statements and triple-check each line of code, run it again and wait another week or so.

    • living under a rock? (Score:3, Informative)

      by hak1du ( 761835 )
      Java Exceptions *were* a revolution in debugging.

      Only if you have been living under a rock. Most languages and compilers other than C and C++ have been doing that forever. Even C and C++ allowed you to get a complete backtrace and inspect the complete program state from a core file (software bloat has made more and more people turn off that feature, however).

    • Perhaps but making your program rely on catching execeptions to handle bugs isn't very good. And adding a bunch of try and catches when you experience a bug to try and find it really does take a long time.
    • by eyeye ( 653962 )
      Line numbers? What next.. fire?
    • Python had this feature in around 1992, long before Java was called Java or was public.
    • Re:Exceptions (Score:4, Interesting)

      by michael_cain ( 66650 ) on Sunday May 02, 2004 @11:58AM (#9034230) Journal
      Java stack traces tell you the exact line number something went wrong, and the path taken to get there.

      Just as a historical note, the APL system that I used in 1975 provided this capability. When an exception occurred, the interpreter halted program execution, identified the problem and the source line, and provided access to the stack info on how (functions and line numbers) you had gotten there. You also had the ability to examine any variable that was currently in scope, and could change values and resume execution. Given the cryptic nature of the language, you needed all the debugging help you could get. Still, for certain types of numerical problems, you could get a lot of effective code written in a very short period of time.

  • Avoid debugging (Score:5, Insightful)

    by heironymouscoward ( 683461 ) <heironymouscowar ... m ['oo.' in gap]> on Sunday May 02, 2004 @09:30AM (#9033508) Journal
    You get what's called 'glassnose syndrome' too easily.

    Instead concentrate on building software in many small incremental steps so that problems are caught quickly, and on separation of design so that dependencies are rare.

    If you can't find a problem, leave it and do something else.

    Otherwise, print statements, yes, that's about the right level to debug at.
    • Which is nice... (Score:5, Interesting)

      by Kjella ( 173770 ) on Sunday May 02, 2004 @09:56AM (#9033619) Homepage
...unless you happen to have two pieces of software that each function excellently alone, yet die a horrible death together, for reasons unknown.

      I've got this thing with OpenSSL, Qt and my code right now. On a one-time run, it works fine. When I put it into a program and try to loop it, it crashes on some mysterious and horrible error, sometimes on 2nd pass, sometimes 3rd, 4th pass or more.

All I'm getting from traceback logs are some weird memory allocation errors in different places, e.g. in Qt code that *never* crashes if I replace the OpenSSL code with dummy code, though the two have nothing to do with each other. Or in OpenSSL itself, which I hardly think is their fault; if it was that buggy no one would use it. And only when this is put together and looped. Taken apart, each works perfectly.

      Kjella
      • Re:Which is nice... (Score:3, Interesting)

        by iabervon ( 1971 )
        Try running it through valgrind. This has the advantage that you'll get errors on many kinds of bad accesses when they occur, rather than just when the program relies on something that got messed up. Note, however, that OpenSSL as distributed xors uninitialized memory into the random pool, which means that valgrind (not knowing that the outcome is intended to be unpredictable) complains about every use of a random number. You can stop this by defining "PURIFY" when you build OpenSSL, or removing the line th
My programs don't have a lot of bugs when they are finished. I think the reason is that I physically cannot code for more than 15 minutes without compiling and running. Obviously, my programs do have bugs that turn up when they're "done". But I catch a helluva lot before that time because I test each little bit of functionality as I build it.
    • Re:Avoid debugging (Score:5, Insightful)

      by mattgreen ( 701203 ) on Sunday May 02, 2004 @10:19AM (#9033705)
      Ah, nothing like claiming that your way of approaching something is the only way. A debugger is just a tool. Like any other tool it can be bad if it is misused, and it isn't appropriate for every situation. I find a debugger invaluable for jumping into someone else's code and seeing exactly what is happening step-by-step. Debuggers can be great if you suspect buffer overflows and don't have access to more sophisticated tools that would detect it for you. Just yesterday I used a debugger to modify values in real-time to test code coverage.

Inserting printf statements into the code is probably not logging - usually if you are debugging they are destined for removal anyway. I use a logging system that shows the asynchronous, high-level overview of events being dispatched and then can use the debugger to zero in on the problem very quickly without recompilation. In addition if a test machine screws up I can remotely debug it.

      If you want to throw out debugging because Linus isn't a fan of it, be my guest. But I'm not a fan of wasting time, and injecting print statements into the code plus recompiling is a waste of time and ultimately accomplishes close to the same thing as debugging. Any decent IDE will let you slap a breakpoint down and execute to that point quickly. But I assume someone will come along and tell me that IDEs are for the weak as well.
      • Re:Avoid debugging (Score:4, Insightful)

        by jaoswald ( 63789 ) on Sunday May 02, 2004 @10:30AM (#9033767) Homepage
        The main advantage of printfs over IDE/interactive debugging is that you can collect a lot of data in one burst, then look at the text output as a whole.

The tricky part about IDE/interactive debugging is understanding the behavior of loops, for instance. Sure you can put a breakpoint in the loop, and check things every time, but you quickly find out that the first 99 times are fine, and somewhere after 100 you get into trouble, but you don't quite know where, because after the loop, everything is total chaos. So you have to switch gears; put in some watch condition that still traps too often (because if you knew exactly what to watch for, you would know what the bug was, and would just fix it), and hope that things went wrong, but left enough evidence, when it traps.

        Whereas print statements let you combine the best of both worlds: expose the data you care about (what you would examine at breakpoints), but the ability to scan through the text result to find the particular conditions that cause the problem (what you could potentially get from watch conditions).
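The workflow described above can be sketched in a few lines of Python (the buggy loop and its threshold are invented for illustration): log the state you would otherwise inspect at a breakpoint, then scan the whole run at once for the first iteration that goes wrong.

```python
def buggy_accumulate(values):
    """Deliberately buggy: silently clamps the running total past 100."""
    total = 0
    log = []
    for i, v in enumerate(values):
        total += v
        if total > 100:   # the bug: silent clamping instead of carrying on
            total = 0
        # the "print statement": record what you would examine at a breakpoint
        log.append((i, v, total))
    return total, log

total, log = buggy_accumulate([10] * 15)
# scan the collected output as a whole for the first suspicious state
first_bad = next(i for i, v, t in log if t == 0)
print("went wrong at iteration", first_bad)
```

Single-stepping through 15 iterations would find the same thing, but scanning the log makes the anomaly visible in one glance.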
    • by Alan Cox ( 27532 ) on Sunday May 02, 2004 @10:51AM (#9033883) Homepage
It's all very well talking about elegance and planning in advance until you try to deal with hardware. No amount of zen contemplation of your code is going to tell you what a debugger does about how the hardware and its documentation relate.

      The neatest debugging tricks I've seen so far are those logging all inputs and returns from the OS level. Since you can replay them you can rerun the app to an earlier point and investigate - in effect you can run it backwards from a bug to see how it got there.

  • Valgrind (Score:5, Interesting)

    by brejc8 ( 223089 ) * on Sunday May 02, 2004 @09:31AM (#9033517) Homepage Journal
Oh I do love it. My boss had 100% faith in his code, claiming that he tests it so much it can't have any bugs. Running it through valgrind showed pages' worth of bugs which were only accidentally non-fatal.
    • Re:Valgrind (Score:4, Informative)

      by Daniel ( 1678 ) <dburrows@nospAm.debian.org> on Sunday May 02, 2004 @01:00PM (#9034618)
      AOL.

      Valgrind is possibly the most useful debugging tool I've found lately. It's especially great for tracking down slippery memory bugs -- you know, the type that are virtually impossible to find using most debugging tools.

      For people who haven't used it, what it basically does is recompile your program to target a simulated x86 CPU. It can detect branches that depend on uninitialized values, writes through a freed pointer, and a whole slew of other nasties that are difficult or impossible to detect with other tools.

      Daniel
      • Re:Valgrind (Score:5, Informative)

        by Soul-Burn666 ( 574119 ) on Sunday May 02, 2004 @01:18PM (#9034722) Journal
It wasn't clear from your post, but it is important to note that you do not need to recompile your code to get it to work. It wraps already-compiled executables. Though it would be smart to compile with -g so that it tells you which lines the errors happened on and such.
  • More of the same (Score:5, Interesting)

    by poptones ( 653660 ) on Sunday May 02, 2004 @09:31AM (#9033518) Journal
    How is this "revolutionary?" All you are doing is generating a bunch of test vectors and feeding them to a machine instead of comparing them yourself. You could let programs "create" the test vectors if, as it says, there is another program of similar function to generate the vectors, but what if there isn't one?

    All you are doing is replacing human eyes with a computer at the first "filter" process. Instead of having to compare a bunch of values and look for the errors, let the machine point them out to you - grep anyone?

I see nothing revolutionary about this. You still have the DUT making "assertions" - duuuuh can you say "print?"

  • by YetAnotherName ( 168064 ) on Sunday May 02, 2004 @09:31AM (#9033519) Homepage
    I haven't used a debugger in years; print statements are the only debugging tool I need.

But bear in mind that almost all of my work these days is in environments where the bugs that traditional debuggers help you find are pretty much impossible to make in the first place (Python, Java, etc.). Instead of tracing data structures through bits of memory and navigating stack frames, you just focus on the application itself. It's kind of refreshing.
    • by Gorobei ( 127755 ) on Sunday May 02, 2004 @10:00AM (#9033634)
      Print statements are a great tool, especially on large pieces of software maintained/enhanced by many people. Once you've debugged your problem, you just #ifdef out the prints, and check the code back into version control.

      When the next poor programmer comes along, trying to fix/find a bug in that code, he a) can #ifdef the prints back on and quickly get debugging output about the important events taking place in his run, and b) read the code and see where the hairy bits are, because they tend to be the sections most heavily littered with debugging print calls.

      Fancy debugger IDEs just don't support this preservation of institutional knowledge.
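In languages without a preprocessor, the same idea is usually done with a module-level flag instead of #ifdef; a minimal Python sketch (function names invented):

```python
DEBUG = False  # flip to True to re-enable the old debugging output

def debug(msg):
    """The moral equivalent of an #ifdef'd printf."""
    if DEBUG:
        print("DEBUG:", msg)

def transfer(balance, amount):
    debug(f"transfer: balance={balance}, amount={amount}")  # marks a hairy section
    if amount > balance:
        raise ValueError("insufficient funds")
    debug("transfer: balance check passed")
    return balance - amount

print(transfer(100, 30))
```

The calls stay in the checked-in code, so the next maintainer can flip the flag and immediately see both the trace and where the tricky sections are.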
      • by Anonymous Coward
        Yeah, it's always tempting to leave your debug printfs in and ifdef them out but I find that has two problems:

1. When reading the code for logic, the print statements can be distracting and take up valuable vertical screen real estate. An algorithm without printfs can usually fit on a single screen. With printfs it may spill over two pages. That can make debugging harder if you need to understand what you're looking at at a conceptual level.

        2. Almost invariably I find that a previous person's printfs are a
      • I agree with your comments 100% but try Log4j/Log4perl/Log4c. It allows you to turn debug info on/off through the config file. Saves you that extra claim


      • debug_print(int debug_level, str debug_msg)

        Somewhere, you have a list of what the various debug levels are. It's useful to do something like
        0 = off
        1 = entering major functions
        2 = less major functions
        3 = specific breakpoints
        4 = loop variables

The debug print checks a constant global variable or, if more work is required, gets set by a command line. This means you don't need to remove the statements for the final compile.
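A runnable Python version of the scheme sketched above (level names and the configured threshold are illustrative):

```python
DEBUG_LEVEL = 2  # 0 = off; in real use this would come from config or the command line

LEVEL_NAMES = {1: "major", 2: "minor", 3: "breakpoint", 4: "loop"}
captured = []  # stands in for stdout so the behavior is easy to check

def debug_print(level, msg):
    """Emit msg only when it is at or below the configured verbosity."""
    if 0 < level <= DEBUG_LEVEL:
        captured.append(f"[{LEVEL_NAMES[level]}] {msg}")

debug_print(1, "entering main()")
debug_print(4, "i = 17")  # suppressed: too verbose for level 2
print("\n".join(captured))
```

This is essentially what logging frameworks like Log4j formalize: the statements stay in the shipped code, and verbosity is a runtime setting.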
      • If you're working in Java, you should add assertions to your toolkit. You don't just preserve the institutional knowledge about where the code gets hairy, but you preserve institutional knowledge about what the internal state of the application is supposed to be.

        What's especially potent is that you don't actually need to comment the assertions out : leave them in your code, so that downstream users can activate them and confirm that there's nothing broken in your code. There's no performance hit when they
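The same pattern exists outside Java: Python's plain `assert` statements document invariants in place and are stripped when the interpreter runs with `-O`. A small sketch (the function and its invariants are invented):

```python
def normalize(weights):
    """Scale weights so they sum to 1; the asserts record internal invariants."""
    assert weights, "normalize() requires a non-empty list"
    total = sum(weights)
    assert total > 0, "weights must have a positive sum"
    result = [w / total for w in weights]
    # postcondition: state what the output is supposed to look like
    assert abs(sum(result) - 1.0) < 1e-9
    return result

print(normalize([1, 2, 1]))
```

A reader who hits a bug near this function learns not just where the code is hairy, but what state it was supposed to maintain.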

    • by Anonymous Coward
      ...is that they don't help in time-dependent situations.
For example, a program in C that uses lots of signals and semaphores could perform differently when print statements are added. This is because print statements take a (relatively) long time to execute. Print statements can affect the bug they're supposed to be monitoring.
      I had a situation very much like this. One process would fork and exec another, and they would send signals to each other to communicate. But there were a few small bugs that caused one
    • I use humorous print statements e.g. in a loop i will put in

      echo 'bork! ';

      ....ahh, good times.
  • by DeadSea ( 69598 ) * on Sunday May 02, 2004 @09:32AM (#9033521) Homepage Journal
    To use relative debugging, you need a reference implementation that is correct. The only time I have that is when I'm extending an existing implementation with new functionality. In that case I use unit tests with assertions that compare the new and the old implementations.

    I suppose this would be useful if you were writing something in a new programming language. You could port your code and run the relative debugger to make sure that both implementations acted the same. In such a situation, that would be great, but such a situation isn't the common case for me.
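The comparison described above can be sketched in a few lines (the reference and "new" implementations are invented for illustration): run both over a range of inputs and report any point where they diverge.

```python
def sum_squares_reference(n):
    """Trusted but slow reference implementation."""
    return sum(i * i for i in range(n + 1))

def sum_squares_fast(n):
    """New closed-form implementation under test."""
    return n * (n + 1) * (2 * n + 1) // 6

# relative debugging in miniature: compare the two implementations
# across many inputs and collect every input where they disagree
divergences = [n for n in range(200)
               if sum_squares_fast(n) != sum_squares_reference(n)]
print("divergences:", divergences)
```

A relative debugger automates the same idea at the level of internal program state rather than final outputs, but the principle is identical.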

  • Good idea.. (Score:2, Insightful)

    by sw155kn1f3 ( 600118 )
Good idea, but doesn't unit testing + standard assertions do the same thing in a more automatic way?

    You feed some data to functions, you expect some sane pre-calculated output from them. Simple yet powerful.

    And more important it's automatic. So you can integrate it into build process.
  • AppSight Debugger (Score:2, Informative)

    by dominick ( 550229 )
I am attending a college pursuing my Software Engineering degree, and a company called Mutek showed us via weblink a new product for tracking software issues. They called it AppSight. It could tell you exactly at which point your program failed. It even showed all the .DLLs your program called, COM objects that were created and even system calls made by the App. Mutek was bought out I believe and is now called Identify Software. You can see more about their technology at: http://www.identify.com/ - Dominick
  • by Halo1 ( 136547 ) on Sunday May 02, 2004 @09:33AM (#9033529)
    Debugging backwards in time. See the Omniscient Debugger [lambdacs.com] for an implementation in Java. Instead of re-executing the program a thousand times, each time setting breakpoints and watchpoints in different places to get nearer to the root cause of the problem, this debugger completely records all key events and lets you view the complete program state at any point in time.
    • by hak1du ( 761835 ) on Sunday May 02, 2004 @09:54AM (#9033613) Journal
      That sounds cool, but it isn't all that useful in practice. Debuggers that support stepping backwards usually end up keeping a lot of state around, which limits them to fairly small, well-defined problems or modules. But the problems where an experienced programmers need a debugger are just the opposite: they involve lots of code and large amounts of data.

      Usually, it's best to avoid going back and forth through the code altogether; insert assertions and see which ones fail.
      • by Halo1 ( 136547 ) on Sunday May 02, 2004 @10:07AM (#9033656)
        Well, he has used the Omniscient Debugger to debug itself. In a paper published about it at the AADEBUG03 conference, the author writes
        In actual experience with the ODB, neither CPU overhead nor memory requirements have proven to be a major stumbling block. Debugging the debugger with itself is not a problem on a 110 MHz SS4 with 128 MB. On a 700 MHz iBook it's a pleasure. All bugs encountered while developing the ODB fit easily into the 500k event limit of the small machine.
I also disagree with your assertion that all situations where experienced programmers need a debugger involve lots of code and large amounts of data. The former is most of the time true, but the latter isn't necessarily.
  • Hey, nice ad! (Score:5, Insightful)

    by EnglishTim ( 9662 ) on Sunday May 02, 2004 @09:35AM (#9033536)
    I can't escape the suspicion that the anonymous poster is actually in some way connected to Guardsoft, but let's leave that for now...

I think it's a good idea, but I do wonder how many situations you'll be in where you already have an existing program that does everything you want to test against.

Having said that, I can see how this would help with regression testing - making sure that you've not introduced any new bugs when fixing old ones. But I wonder how much it gives you above a general testing framework anyway...
    • Re:Hey, nice ad! (Score:5, Insightful)

      by fishdan ( 569872 ) * on Sunday May 02, 2004 @10:04AM (#9033643) Homepage Journal
      I agree with you. I would have rather seen it posted without a reference to guardsoft and have someone mention it. I'm all for advertising on /. -- just not in the form of news.

      The fundamental issue here is that people are ALWAYS looking for a way to avoid having to write unit tests. I'm happy with a combination of Intellij and print statements. So far I've never had a situation where I though "the debugger isn't giving me enough information."

I think that one of the reasons I'm happy with the debugging options available to me is that I write my code so that it can be easily followed in the debugger. That means splitting my declarations and assignments, and other such things that make my code a bit more verbose, but eminently more readable. Lord knows as a child, I loved those complicated boolean switches, and cramming as much logic into one line of code as possible. Now that my code is maintained by more people than me, I'm tired of people having to ask me "what does this do." I used to get angry at them, but now I get angry at myself when that happens. We don't just write code for the users, we write it for our peers. Write code that your sibling developers will be able to follow in a debugger. I know some code is hard to follow, even with a debugger, so I write all my conditions as clearly as possible, name my methods and variables as clearly as I can, and refactor reusable code into well-named "submethods", so that we can solve problems in modules.

This is because I want my code to last beyond my employment. Therefore it has to be maintainable by someone other than me. The real test of your code is: can someone ELSE debug it, using whatever the heck tools they want. A fancy debugger is a fine thing, but someday someone is going to have to debug your code with inadequate tools. My rule of thumb is "Code as if your life depended on someone else being able to fix it"

  • by cpu_fusion ( 705735 ) on Sunday May 02, 2004 @09:36AM (#9033545)
    I've found that aspect-oriented programming [aosd.net] using tools like AspectJ [aspectj.org] (for Java) can be a big help. There are aspect-oriented programming tools for many other languages.

Basically, you can define an aspect to capture points in your program that are of particular note, and then do debug handling at those points. Aspect-oriented programming allows you to break out that debug-handling logic into separate modules, keeping your main source code nice and clean.

    Aspect-oriented programming (AOP) has a lot of other uses too. I think in 5 years or so talking about AOP will be as commonplace as talking about OOP. They are orthogonal concepts.

    Cheers, Me

  • I have some relatives who demonstrate numerous error conditions.
  • Revolutionary? NO. (Score:4, Interesting)

    by News for nerds ( 448130 ) on Sunday May 02, 2004 @09:39AM (#9033556) Homepage
It looks like no more than a fancy variation on the good old 'assert' macro, or an antecedent [guardsoft.com] of unit testing. Why did this anonymous submitter find it 'revolutionary'? What does it have over current debuggers, which can be attached to a working process or can analyze a post-mortem dump?
  • Old methods best. (Score:4, Insightful)

    by hegemon17 ( 702622 ) on Sunday May 02, 2004 @09:39AM (#9033557)
    "Relative debugging" seems to be what people have always been doing. Dump some state and comapre it to an expected state. Most frameworks for regression tests do something like that.

    The best debugging method is to have a fast build environment so that you can add one printf, rebuild, reproduce the bug, move the printf to an even better place, rebuild and reproduce, etc. The more you rely on your tools to do the work for you, the less you understand the code and the less you understand the code, the more bugs you will make in the future.

    There are no shortcuts to good code.
  • by Rosco P. Coltrane ( 209368 ) on Sunday May 02, 2004 @09:43AM (#9033575)
    do you know any new debugger that has a revolutionary way to help us inspect the data?

I'm not sure what the question is here. Any debugger will allow you to watch data. If your program is special enough that you can't use a standard debugger, you probably need to write a test suite to go with it (and well, for any reasonably sized project, you should anyway).

That's to help you find "surface" bugs, i.e. to catch things like misaligned words, wrong data types, buffer overflows, etc.

    For deep structural problems, like when you try to code something and you have no clue how to go about it, and the end result is just not good and never going to be, the cure is usually a total rewrite, so debuggers won't help you there. That's a problem due to bad architecture of the code.

So, I'm not sure anything else is required. FYI, when I code, I believe I have enough experience to architect and code something relatively clean the first time; then, because I've done it for many years, I sort of "instinctively" expect to find certain types of bugs. And usually, I can fix them without debugging because they jump out at me. When they don't (and I can do it), I pull out the old "print to stdout" debugging (or LED wiggling, sound generating... on headless embedded boards), and that's usually enough to catch 99% of whatever bugs remained. Normal debugging techniques using debuggers, or the test suite I made for that particular piece of code, take care of the rest. My guess is, if you need any more than that, it's probably that you lack experience.
    • Embedded board doing solenoid control. Too lazy to read through the RS-232 output, so I programmed the controller to change the solenoid PWM to a frequency/drive that made them vibrate at the resonant frequency of the structure they were mounted to when a certain issue was encountered.

      I could be all the way across the room, and suddenly there would be this nice clear tone, as my solenoids 'sang' to alert me of trouble.

  • old technique... (Score:4, Insightful)

    by hak1du ( 761835 ) on Sunday May 02, 2004 @09:47AM (#9033589) Journal
    Comparing the "state" of multiple implementations or versions of code is an old technique. You don't need a special debugger for it--you can use a regular debugger and a tiny bit of glue code. Alternatively, you can insert the debugging code using aspects (aspectj.org).

    However, like many programming techniques, most real world programmers won't know about them unless they can shell out $1000 for a tool; reading a paper or book just would be too much intellectual challenge, right?

    This news item seems to be a thinly veiled attempt to drum up business for that company.
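    As a sketch of that "regular debugger plus a tiny bit of glue code" approach (all function names here are made up for illustration), the trusted reference implementation can simply run alongside the optimized one, with an assertion at each checkpoint:

    ```c
    /* Relative debugging with plain glue code: run a simple, trusted
       implementation next to the "optimized" one and assert agreement.
       sum_ref/sum_fast/sum_checked are illustrative names. */
    #include <assert.h>
    #include <stddef.h>

    static long sum_ref(const int *a, size_t n)   /* simple, trusted */
    {
        long s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    static long sum_fast(const int *a, size_t n)  /* version under test */
    {
        long s = 0;
        size_t i = 0;
        for (; i + 2 <= n; i += 2)                /* unrolled by two */
            s += a[i] + a[i + 1];
        for (; i < n; i++)
            s += a[i];
        return s;
    }

    /* The comparison glue: both versions run, their results are compared. */
    static long sum_checked(const int *a, size_t n)
    {
        long fast = sum_fast(a, n);
        assert(fast == sum_ref(a, n));            /* divergence => bug is here */
        return fast;
    }
    ```

    The assertion pinpoints the first checkpoint where the two implementations diverge, which is the core of the relative-debugging idea without any special tooling.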
  • by Novus ( 182265 ) on Sunday May 02, 2004 @09:49AM (#9033596)
    On the subject of software debugging techniques, I'd like to point out visual testing [cs.hut.fi], which (basically) allows you to try out method calls and fiddle with variables and examine the results (including execution history) graphically. MVT [cs.hut.fi] is a prototype visual testing tool for Java.
  • nothing new (Score:4, Informative)

    by hak1du ( 761835 ) on Sunday May 02, 2004 @09:50AM (#9033603) Journal
    There has been almost nothing new in programming environments or debuggers over the last 10-20 years.

    Almost all the features you see in Visual C++, Visual Studio .NET, Eclipse, NetBeans, etc. have been around in IDEs since the 1980s. Debuggers have allowed you to step forwards and backwards, see the source code, examine data structures graphically, and modify the running source code for about as long.

    If anything, current commercial IDEs and debuggers still haven't caught up to the state of the art.
  • by N8F8 ( 4562 ) on Sunday May 02, 2004 @09:58AM (#9033626)
    The technique I've found most effective is to run many simultaneous debugging sessions in parallel. My debugger of preference is a semi-autonomous intelligent agent that seeks out defects in a random fashion. I call this type of agent a "user".
  • For systems that don't have a console, and even for those that do, a parallel port connected to a set of LEDs can be very useful. You can run the system at full speed and monitor important events on the LEDs.
  • A better solution (Score:3, Informative)

    by Morgahastu ( 522162 ) <bshel@WEEZERroge ... fave bands name> on Sunday May 02, 2004 @10:01AM (#9033635) Journal
    A better solution is to make your program generate a log of everything that happens: when an object is created, when a database connection is made, etc.

    And when you launch the program in debug mode, everything is printed to a log file; when it crashes or a bug occurs, you can just halt everything (if it hasn't crashed) and look at the log to see what it was doing.

    Different levels of logging could be used. Say level 1 with the most basic logging (database connections, disk access, network access, etc), level 2 includes all level 1 plus network traffic, level 3 has all object creations, etc.

    ex: logEvent(3,"DBO_Connection create");
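    A minimal sketch of such a leveled logEvent() in C might look like this (the global level variable and the int return value are illustrative choices, not part of any particular library):

    ```c
    /* Leveled logging sketch: a message is emitted only when its level
       does not exceed the current debug level. */
    #include <assert.h>
    #include <stdarg.h>
    #include <stdio.h>

    static int g_debug_level = 1;  /* e.g. set from a command-line flag */

    /* Returns 1 if the message was emitted, 0 if it was suppressed. */
    int log_event(int level, const char *fmt, ...)
    {
        if (level > g_debug_level)
            return 0;                  /* suppressed at this debug level */
        va_list ap;
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);     /* format like printf */
        va_end(ap);
        fputc('\n', stderr);
        return 1;
    }
    ```

    With the level at 1, a call like log_event(3, "DBO_Connection create") is suppressed; raising the level to 3 lets it through.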
  • Some ideas (Score:5, Informative)

    by AeiwiMaster ( 20560 ) on Sunday May 02, 2004 @10:05AM (#9033647)
    They might not be revolutionary, but there are a few ideas
    which can be used to reduce the number of bugs in a program.

    1) 100% unit test coverage of your programs. [all-technology.com]
    2) Statistical Debugging [berkeley.edu]

    3) Valgrind [kde.org]

    4) The D programming Language [digitalmars.com]
    with built-in support for unit testing, contracts and class invariants.
  • Data logging (Score:4, Informative)

    by mrm677 ( 456727 ) on Sunday May 02, 2004 @10:06AM (#9033654)
    Don't trivialize the data logging approach to debugging.

    In complex, multi-threaded systems where you are debugging timing events more often than programmer logic, data logging (aka print statements) is probably the only technique that works.

    In fact, one of the first things we implement in embedded systems is a data logger that can spit out your print statements over RS-232. Yes, we can single-step through code using in-circuit emulators and JTAG interfaces, but I have found this rarely useful.
    • Re:Data logging (Score:5, Insightful)

      by Tim Browse ( 9263 ) on Sunday May 02, 2004 @10:28AM (#9033752)
      Of course, as many people who debug multi-threaded programs have found, using print routines to output logs can make the bug 'go away', because quite often CRT functions like printf() etc are mutex'd, which serialises code execution, and thus alters the timing, and voila, race condition begone!

      I know it's happened to me :-S
      • Re:Data logging (Score:5, Informative)

        by mrm677 ( 456727 ) on Sunday May 02, 2004 @12:00PM (#9034235)
        Of course, as many people who debug multi-threaded programs have found, using print routines to output logs can make the bug 'go away', because quite often CRT functions like printf() etc are mutex'd, which serialises code execution, and thus alters the timing, and voila, race condition begone!

        Of course. A good data-logger design does not call expensive output routines in the timing sensitive threads. The routines should be low-cost and append information to some kind of shared memory block such that low-priority threads occasionally format and spit them out to your output device.
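    One sketch of that design in C: the hot path only copies a small fixed-size record into a preallocated ring buffer, and a low-priority drain routine formats and prints later. This is a single-producer illustration only; a real multi-threaded version would need atomics or per-thread buffers, and all the names here are made up:

    ```c
    /* Minimal ring-buffer event log: no printf, no mutex on the hot path. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LOG_SLOTS 1024              /* power of two for cheap wraparound */

    struct log_rec { uint32_t event_id; uint32_t arg; };

    static struct log_rec g_ring[LOG_SLOTS];
    static volatile uint32_t g_head;    /* written by the producer */
    static uint32_t g_tail;             /* read by the drain routine */

    /* Hot path: just copy a record and bump the index. */
    static void log_fast(uint32_t event_id, uint32_t arg)
    {
        uint32_t i = g_head & (LOG_SLOTS - 1);
        g_ring[i].event_id = event_id;
        g_ring[i].arg = arg;
        g_head++;                       /* publish the record */
    }

    /* Cold path: a low-priority thread formats and prints the backlog.
       Returns the number of records drained. */
    static int log_drain(FILE *out)
    {
        int n = 0;
        while (g_tail != g_head) {
            struct log_rec *r = &g_ring[g_tail & (LOG_SLOTS - 1)];
            fprintf(out, "event=%u arg=%u\n", r->event_id, r->arg);
            g_tail++;
            n++;
        }
        return n;
    }
    ```

    Because the timing-sensitive code never touches I/O or locks, the act of logging is far less likely to make a race condition vanish.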
  • by Anonymovs Coward ( 724746 ) on Sunday May 02, 2004 @10:08AM (#9033660)
    in a lot of higher-level languages, e.g. functional languages like Lisp, Haskell and OCaml. But not only debugging: in these languages you tend to write code that doesn't have bugs in the first place. No need for mallocs, no buffer overflows, no memory leaks. And if you're careful to write in a functional style, no "side-effect" bugs (variables that change value when you weren't expecting them to). For a language that started out in the 1950s, it's amazing how far ahead it was and still is as a development environment. This paper [mit.edu] is a fascinating read, especially the section on Worse is better [mit.edu] that describes why Unix/C won. And there are other languages like the ML family and Haskell. OCaml [inria.fr] (Objective Caml, a descendant of ML) is as concise and elegant as Python, but produces native-code binaries quite competitive in speed with C, and occasionally faster [bagley.org]. I'm wondering why anyone uses C-like languages anymore.
    • We use C-like languages to make things go very, very fast, immediately. Sometimes a high level language _could_ deliver this if we were willing to wait for the hardware architecture to be re-designed in its favour (which we're not), and sometimes it's just not possible because the C-like language lets you do things which cannot be proven by the machine to be safe, yet nevertheless are correct. Even scary old gets() can be quite harmless, under certain carefully controlled circumstances.

      Now, some people use
    • C (and especially C++) are sufficiently good languages in the hands of those who know how to program cleanly (for example, they know why returning a pointer to a automatic variable is bad in C, and why you need to define copy constructors, or make the destructor virtual, for certain classes in C++) --- just look at the many well-written projects in C, you rarely hear the core developers screaming that the language is painful to use. A good compiler helps for giving warnings about certain constructs, but s
  • RetroVue (Score:4, Interesting)

    by Stu Charlton ( 1311 ) on Sunday May 02, 2004 @10:10AM (#9033664) Homepage
    This is going to sound like a plug, but I have nothing to do with this company or product - I just thought it was really cool.

    When I was wandering through JavaOne last year, I ran across this booth by VisiComp, Inc. who sells this debugger called RetroVue [visicomp.com]. I think it's an interesting attempt at bridging the gap between live-breakpoint debugging and logging.

    The main issue with debugging vs. logging is that logging provides you with a history of operations that allows you to determine the execution order and state of variables at various times of the execution, something that debuggers don't actually help you with.

    RetroVue seems to instrument your Java bytecode to generate a journal file. This journal file is quite similar to a core file extended over time, by recording all operations that occurred in the program over time: every method call, every variable assignment, exception thrown, and context switch. RetroVue then allows you to play back the execution of the application.

    It includes a timeline view to jump around the various execution points of the program, as well as an ongoing call-list to show the call sequence that has occurred. It also notes every context switch that the VM makes, and detects deadlocks, thus making it a great tool for multi-threaded application debugging. You can adjust the speed of the playback if you would like to watch things unfold in front of you, or you can pause it at any time and step through the various operations. Want to find out when that variable was last assigned to? Just click a button. Want to find out when that method is called? Same.

    It's not free/cheap, but it seems quite useful.
  • Unit testing (Score:3, Insightful)

    by Tomah4wk ( 553503 ) <tb100@NOSpAM.doc.ic.ac.uk> on Sunday May 02, 2004 @10:15AM (#9033690) Homepage
    It seems to me that a lot more effort is being put into creating good unit tests to identify and prevent bugs, rather than debugging running applications. With an automated testing framework you can seriously reduce the amount of time spent on manual debugging and fixing as the bugs get identified as early as compile time, rather than run time.
  • by crmartin ( 98227 ) on Sunday May 02, 2004 @10:16AM (#9033697)
    thinking is better.
  • ...variables are compared...with variables in another reference program that is known to be correct.

    So, this isn't for developing or implementing a new algorithm.

    However, it might be a step closer to fully automating the re-implementation of existing ones ...which is inherently a rote task to begin with.

  • The best debugging (Score:3, Informative)

    by Decaff ( 42676 ) on Sunday May 02, 2004 @10:19AM (#9033707)
    The best debugging system I have ever used is in Smalltalk. It's possible to stop code at any time, and then data can be inspected and altered, new classes coded and methods re-compiled without interrupting execution. When changes have been made, code can be re-started or resumed.

    Features like exception handling with full stack trace in Java are great, but nothing beats the Smalltalk system of suspending execution and keeping the application 'alive', so it can be modified, inspected and resumed, when an error occurs.
    • Features like exception handling with full stack trace in Java are great, but nothing beats the Smalltalk system of suspending execution and keeping the application 'alive', so it can be modified, inspected and resumed, when an error occurs.

      GDB does this. gdb --pid=[running process].

    • Eclipse (and I guess other IDEs do as well) supports hot-code swap with JDK1.4. Never used it much myself, though...
  • Memory leaks and such are easily tracked with valgrind, although for basic logic errors you want to use the printf() and gdb methods.

    Valgrind is at http://valgrind.kde.org and requires that you turn off all PaX protections for the binary you wish to debug.
  • Rewind (Score:3, Interesting)

    by jarran ( 91204 ) on Sunday May 02, 2004 @10:28AM (#9033756)
    O'Caml [ocaml.org] has a replay debugger. You can run your program in the debugger until it crashes, then step backwards through the code to see what was happening before it crashed.

    Very handy, IMHO, although the O'Caml debugger sucks in other ways. (E.g. no watch conditions.)
  • *Bzzzt* (Score:4, Insightful)

    by bmac ( 51623 ) on Sunday May 02, 2004 @10:39AM (#9033813) Journal
    Nope, looks like marketroid hype to me. Answer me this: what is the point of comparing two separate identical runs of a program, except in the case of testing platform equivalence, in which case the output of a test set can simply be diff'd.

    The key to their idea is that "the user first formulates a set of assertions about key data structures", which equals traditional techniques. The reason such traditional techniques have failed and continue to fail is that those assertions are always an order of magnitude simpler than the code itself. These people forget that a program *is* a set of assumptions. Dumbing it down to "x must be > y" doesn't help with the complex flow of information.

    Peace & Blessings,
    bmac
  • don't debug (Score:5, Insightful)

    by mkcmkc ( 197982 ) * on Sunday May 02, 2004 @10:49AM (#9033875)
    • The best programmer I've met once told me that once you've dropped into the debugger, you've lost, which over time I've found to be quite true. The best debugging practice is to learn how not to use a debugger. (e.g., Are you using threads when they're not absolutely required? Say hello to debugging hell...)
    • When you must debug, print statements cover 97% of the cases perfectly. They allow you to formulate a hypothesis and test it experimentally as efficiently as possible.
    • Differential debugging is a nifty idea, but most of the time it'd be better to just use it with your print statements as above (e.g., print to logs and then diff them). For the one time per year (or five or ten years?) that having a true differential debugger might pay off, it's probably a loss anyway because of the cost and learning curve of the tool. (I thought about adding this to SUBTERFUGUE, but realized that no one would likely ever productively use this feature.)
    • If you need another reason to avoid this tool in particular, these guys have a (software) patent on it. Blech!
    --Mike
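    The "print to logs and then diff them" variant reduces to finding the first line where two runs diverge. A minimal C sketch of that comparison, assuming the logs have already been loaded as arrays of lines:

    ```c
    /* Find the first log line where two runs diverge.
       Returns the index of the first differing line, or -1 if the logs match. */
    #include <assert.h>
    #include <stddef.h>
    #include <string.h>

    static int first_divergence(const char *a[], size_t na,
                                const char *b[], size_t nb)
    {
        size_t n = na < nb ? na : nb;
        for (size_t i = 0; i < n; i++)
            if (strcmp(a[i], b[i]) != 0)
                return (int)i;           /* first differing line */
        /* Same prefix: if one log is longer, they diverge where it ends. */
        return na == nb ? -1 : (int)n;
    }
    ```

    In practice plain diff(1) on the two log files does the same job; the point is that "differential debugging" needs nothing more than this.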
    • Re:don't debug (Score:3, Informative)

      by S3D ( 745318 )
      It's not always possible not to debug. Ever tried programming multi-agent systems? The behavior of the system is unpredictable (heh, emergence), and it's quite often just plain impossible to tell why the system behaves this or that way without a debugger. Which rule or combination of rules exactly is causing that specific situation... Print statements don't help much if you have 1000 agents and each has a lot of data. Only good old breakpoints help :)
  • by Ricdude ( 4163 ) on Sunday May 02, 2004 @11:42AM (#9034139) Homepage
    If I had a working program in the first place (to compare my buggy program with), I wouldn't need the debugger.

    Seriously, though. I've worked as a programmer for the last 15 years. Mostly, I've been fixing other people's bugs. Here's what I like to see in code that I need to fix (and generally don't see):

    1) Consistency in formatting, style, variable names, design - I don't care what style you use as long as it's consistent. I prefer my own form of Hungarian Notation, where a variable's prefix indicates its scope (global, static, etc), as well as the type. If any of that information changes, I should darn well follow through to make sure I've fixed everything that depends on them. Bring on strong type checking!

    2) No spaghetti code. Give me this:

    if ( something_bad ) {
        return failure;
    }
    good_stuff();
    return good;

    instead of this:

    if ( ! something_bad ) {
        good_stuff();
        return good;
    }
    return failure;

    It doesn't look like it matters much yet, but try adding eight more error checks to both, and see which you can track better. The "early bailout on error" model clearly surpasses the "endless nesting" model.

    3) Use of descriptive variable and procedure names. Source code is not meant to be understood by the computer. This is why we have compilers, and interpreters. Source code is meant to be understood by humans. Write your code for humans, and you'll be surprised at how much faster you can grind through code. You'll only write the code once, but when you have to debug it, you'll spend eternity sifting through line after line, wondering what the hell you meant by that overused "temp" variable (temporary value? temperature? celsius? kelvin?). If you had only taken the time to spell out, "surface_temperature_C", you'd know for sure. Vowels are good for you.

    4) Comment! Not every line. Not an impossible-to-maintain function header comment with dates and initials of everyone who's edited it. Don't fall for or rely on that "self-documenting" code nonsense. Just one comment line every three to ten code lines. That's all. Give me an overview of what's supposed to happen in each logical block of code. Tell me what the if conditions are checking for. A good rule of thumb is to sketch out your functions in comments first, then fill in the blanks.

    That's all I can come up with off the top of my head, but there are certainly more...

    NOTE: for the pedants who think they noticed an apparent conflict between my hungarian notation style and the "surface_temperature_C" variable: since there is no scope or type prefix on the variable, it's a local variable, and I can change it at will, knowing that it will not affect any code outside the function at hand. If it had been "m_fSurfaceTemperature_C", then I'd know it could have repercussions affecting the state of the current object. If it were "g_fSurfaceTemperature_F", then I'd know I could hose my whole program with an invalid value. And should have converted from Celsius to Fahrenheit before doing so...

  • by CondeZer0 ( 158969 ) on Sunday May 02, 2004 @12:05PM (#9034264) Homepage
    25 years later I still agree with Kernighan:

    The most effective debugging tool is still careful thought, coupled with
    judiciously placed print statements.

    -- Brian W. Kernighan, in the paper Unix for Beginners (1979)

    But I think the key to debugging is not the technique used for debugging, but how one wrote the code in the first place, here again God Kernighan hits the nail in the head:

    Debugging is twice as hard as writing the code in the first place. Therefore,
    if you write the code as cleverly as possible, you are, by definition, not
    smart enough to debug it.

    -- Brian W. Kernighan

    Once again, at the time of debugging, simplicity shows its superiority to the complexity that seems to be so much in fashion these days. That is why I still prefer C to C++; rc [bell-labs.com] to bash; AWK/sed to Perl; Plan 9 [bell-labs.com] to Linux; Limbo [vitanuova.com] to Java; 9p [bell-labs.com] to NFS, ...

    This is the forgotten key to software design:

    ...there are two ways of constructing a software design: One way is to make it
    so simple that there are obviously no deficiencies and the other way is to make
    it so complicated that there are no obvious deficiencies.

    -- C.A.R. Hoare, The 1980 ACM Turing Award Lecture

    Or put in another way:

    The cheapest, fastest, and most reliable components are those that aren't there.

    -- Gordon Bell

    Back in the topic of debugging, aside from the sacred printf, the Plan 9 [bell-labs.com] debugger acid [bell-labs.com] is often helpful, and now you can even use it on Linux/BSD!

    Plan 9 on Unix [swtch.com]

    Also the chapter on debugging in The Practice of Programming [bell-labs.com] by Brian W. Kernighan and Rob Pike is very good.

    Always remember:
    • Simplicity
    • Clarity
    • Generality
  • by elan ( 171883 ) * on Sunday May 02, 2004 @01:10PM (#9034674)
    The code that calculated all the spreadsheet dependencies and what cells needed to be recomputed was pretty complicated, as you might imagine.

    So they had the super-optimized version running in parallel with the dumb, calculate-every-cell-every-time engine, and then they'd compare the results.

    In certain cases, like this one, the technique is useful, but it's neither revolutionary nor new.

    -elan
  • UPS is worth a look (Score:4, Interesting)

    by jcupitt65 ( 68879 ) on Sunday May 02, 2004 @01:17PM (#9034715)

    No one has mentioned UPS [demon.co.uk] yet. I'm not sure you could really call it revolutionary, but it does have a few interesting features:

    • Not based on gdb, amazingly. Does C/C++/FORTRAN on linux, freebsd and solaris.
    • As your program runs you see an animated view of the machine at roughly source-code level (ie. text style, not ddd-like graphical).
    • It includes a C interpreter ... you can write scraps of C and attach them to parts of your program as it runs. You can use the C interpreter to write watches and conditional breakpoints etc.
    • It's based on the Athena widget set so it now looks incredibly ugly. OTOH, it also makes it very quick.

    Like other people here I debug mostly with printfs() logged to a file for easy searching, supplemented with valgrind, memprof and occasionally UPS. They are all tools and you need to try to pick the right one for the sort of bug you think you're facing.

  • by oliverthered ( 187439 ) <{moc.liamtoh} {ta} {derehtrevilo}> on Sunday May 02, 2004 @02:31PM (#9035161) Journal
    JBuilder tells me every syntax error in my code in real time. I guess that's debugging.

    It also has good refactoring support, so no need to debug my poor hand refactoring. I guess that's kinda debugging.

    And it's very good at displaying my code in a way that allows me to find any bugs before running it, getters, setters, things I may have wanted to overload, UML diagrams etc... So I guess that's debugging.

    Debugging without even having to run the application, and wizards to perform all the monkey work so you don't get bugs in the first place, and integrated JUnit testing.

    I think Eclipse has similar support.

    I'm not a very experienced Java programmer, but my productivity is more than 4 times that of a friend who's been programming in Java for more than 6 years. I do very little runtime debugging because my code is by and large bug-free thanks to the design-time and code-time debugging in the IDE.

    Go download the jbuilder [borland.com] trial or Eclipse with some sister project [eclipse.org] plugins (Eclipse is a bit of a pain to use because it's still quite a recent product)
  • I have found... (Score:4, Informative)

    by dutky ( 20510 ) on Sunday May 02, 2004 @02:36PM (#9035183) Homepage Journal
    it is easiest if you just leave the bugs out in the first place.

    Failing that, as most of us do, the next best practice is to program defensively: anticipate where problems might occur in your code and include assertion checking and logging (yes, print statements) to illuminate those problem spots. Generally, I include debugging flags on the command line that allow me to control the level of assertion checking and logging (0=no logging, except for errors (the default), 1=log all branches, 2=log branches and variable values, 4=log everything).

    This defensive debugging strategy works quite well. First, it forces the programmer to think harder about both the algorithms they are using, and their implementation. I catch about a quarter of my programming errors just in the process of adding assertions. Second, the program will tend to abort as soon as a problem is detected, rather than running on for a couple billion instructions, dumping crap into the output file or database and then either aborting mysteriously on some marginally related condition, or, worse, completing without any reported errors! Finally, when errors are detected, the debugging can usually be done simply by inspecting the source and following actual execution from the log file.

    All debugging comes down to one, fairly simple, idea: show me the program status at crucial points in the flow of control (generally at every branch and return). A few other tools are of some use under special circumstances: Purify [ibm.com], Electric Fence [perens.com] or Valgrind [kde.org] for detecting problems with dynamically allocated memory, or something like ddd [gnu.org] for examining linked structures (though I prefer to just write a validation function for my data structures, see my AVL-tree [bellatlantic.net] code for an example). Defensive programming works because it answers the important question that usually forces you into using the debugger: what the hell just happened?!? Defensive programming gives you a way to examine program states without invoking an outside tool.

    The only class of bugs that doesn't succumb well to this approach is race conditions. Unfortunately, anything that changes the timing of the program (such as stepping instruction-by-instruction in a debugger, or writing log messages out to a disk file) will change the behavior of the race condition. I'd be really interested in tools or techniques that could address this class of bugs.
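    A rough sketch of such level-gated assertions and logging in C (the macro names, the level scheme, and safe_div are made up for illustration):

    ```c
    /* Defensive-debugging sketch: one runtime level gates both logging
       and assertion checking, as described above. */
    #include <assert.h>
    #include <stdio.h>

    static int g_dbg = 0;   /* 0=errors only, 1=branches, 2=+values, 4=everything */

    /* Log only when the debug level is high enough. */
    #define DBG_LOG(level, ...) \
        do { if (g_dbg >= (level)) fprintf(stderr, __VA_ARGS__); } while (0)

    /* Check only when the debug level is high enough, so shipping builds
       can run with the checks off. */
    #define DBG_CHECK(level, cond) \
        do { if (g_dbg >= (level)) assert(cond); } while (0)

    static int safe_div(int num, int den)
    {
        DBG_CHECK(1, den != 0);                      /* abort near the bug */
        DBG_LOG(2, "safe_div: num=%d den=%d\n", num, den);
        return den ? num / den : 0;                  /* defensive fallback */
    }
    ```

    The payoff is the one described above: the program aborts at the point where an assumption first breaks, instead of billions of instructions later.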

  • by Tomster ( 5075 ) on Sunday May 02, 2004 @09:47PM (#9037571) Homepage Journal
    Those of you who have written distributed applications/code know what a bitch it can be to debug something when multiple processes are involved.

    Those of you who have written multi-threaded applications know what a bitch it can be to debug something when multiple threads are involved.

    Those of you who have written timing-sensitive code know what a bitch it can be to debug something that is timing-related.

    Now, put all three of those into a pot and stir it around. That's what I and a co-worker have been working on the past four days.

    We sent four or five debug versions of the code to the customer for them to run in their production test environment over the past several days with various information printed to the console. With the dials turned way up, the problem usually manifested after a few hours (as opposed to a day or more, when operating under normal conditions). Each time, we'd get back a multi-megabyte log file which we would pore over to see if we had found the root cause of the problem. (Yes, grep was our dear, dear companion -- we're taking it out for drinks as soon as we've verified the problem has been fixed.)

    The problem was caused by a specific set of conditions -- the right things happening at the right time, in the right sequence, with a particular timing. To "trap" those conditions would require running both the client and server under a tracing debugger that recorded the time and "event" (e.g. method call, assignment, exception) of everything the system did and then allowed complex queries on the data produced. E.g. "How many times per minute was update() called prior to isDead() returning true, on this instance?"

    The data could perhaps be recorded using AOP. Next time we run into a scenario like this, it might be worthwhile to break out AspectJ or AspectWorkz. But analysing it will be tricky.
