
New & Revolutionary Debugging Techniques?

An anonymous reader writes "It seems that people are still using print statements to debug programs (Brian Kernighan does!). Besides the ol' traditional debugger, do you know of any new debugger that offers a revolutionary way to help us inspect the data? (Don't answer with ddd, or any other debugger that merely has fancy data display; I mean a genuinely new approach.) I have only found one answer: Relative Debugging seems quite neat and cool."
  • Re:Exceptions (Score:3, Insightful)

    by Anonymous Coward on Sunday May 02, 2004 @10:29AM (#9033505)
    Sometimes something goes wrong WITHOUT causing an exception. Those are the bugs that are hard to find.
  • You get what's called 'glassnose syndrome' too easily.

    Instead, concentrate on building software in many small incremental steps so that problems are caught quickly, and on separating concerns in the design so that dependencies are rare.

    If you can't find a problem, leave it and do something else.

    Otherwise, print statements, yes, that's about the right level to debug at.
  • Good idea.. (Score:2, Insightful)

    by sw155kn1f3 ( 600118 ) on Sunday May 02, 2004 @10:33AM (#9033525)
    Good idea, but don't unit testing + standard assertions do the same thing, just in a more automatic way?

    You feed some data to functions, you expect some sane pre-calculated output from them. Simple yet powerful.

    And more importantly, it's automatic. So you can integrate it into the build process.
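
    A minimal C sketch of that idea (the function under test is a hypothetical stand-in):

        #include <assert.h>

        /* Function under test (stand-in for real code). */
        static int add(int a, int b) { return a + b; }

        int main(void)
        {
            /* Feed known data, expect pre-calculated output. */
            assert(add(2, 2) == 4);
            assert(add(-1, 1) == 0);
            return 0; /* exit status 0 means the tests passed */
        }

    Run it as a build step; a non-zero exit (or the assert message) fails the build.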
  • Re:Exceptions (Score:3, Insightful)

    by bdash ( 598142 ) <slashdot DOT org AT bdash DOT net DOT nz> on Sunday May 02, 2004 @10:33AM (#9033527) Homepage

    > Java Exceptions *were* a revolution in debugging.

    Because everyone knows that Java invented exception handling...

  • Hey, nice ad! (Score:5, Insightful)

    by EnglishTim ( 9662 ) on Sunday May 02, 2004 @10:35AM (#9033536)
    I can't escape the suspicion that the anonymous poster is actually in some way connected to Guardsoft, but let's leave that for now...

    I think it's a good idea, but I do wonder how many situations you'll be in where you already have an existing program that does everything you want to test against.

    Having said that, I can see how this would help with regression testing - making sure that you've not introduced any new bugs when fixing old ones. But I wonder how much it gives you above a general testing framework anyway...
  • Old methods best. (Score:4, Insightful)

    by hegemon17 ( 702622 ) on Sunday May 02, 2004 @10:39AM (#9033557)
    "Relative debugging" seems to be what people have always been doing. Dump some state and comapre it to an expected state. Most frameworks for regression tests do something like that.

    The best debugging method is to have a fast build environment so that you can add one printf, rebuild, reproduce the bug, move the printf to an even better place, rebuild and reproduce, etc. The more you rely on your tools to do the work for you, the less you understand the code; and the less you understand the code, the more bugs you will write in the future.

    There are no shortcuts to good code.
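
    A minimal C sketch of the "dump some state and compare" idea (the state variables are hypothetical):

        #include <stdio.h>

        /* Write key state to a log once per step; diff the log from a
           known-good run against the log from a buggy run to find the
           first step where they diverge. */
        static void dump_state(FILE *log, int step, double x, double y)
        {
            fprintf(log, "step=%d x=%.17g y=%.17g\n", step, x, y);
        }

    Then something like "diff good.log bad.log | head" points straight at the first divergent step.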
  • old technique... (Score:4, Insightful)

    by hak1du ( 761835 ) on Sunday May 02, 2004 @10:47AM (#9033589) Journal
    Comparing the "state" of multiple implementations or versions of code is an old technique. You don't need a special debugger for it--you can use a regular debugger and a tiny bit of glue code. Alternatively, you can insert the debugging code using aspects (aspectj.org).

    However, like many programming techniques, most real-world programmers won't know about them unless they can shell out $1000 for a tool; reading a paper or book would just be too much of an intellectual challenge, right?

    This news item seems to be a thinly veiled attempt to drum up business for that company.
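
    A minimal sketch of that glue code in C (the two transform functions are assumed to exist elsewhere):

        #include <math.h>
        #include <stdio.h>

        double old_transform(double x); /* trusted reference version */
        double new_transform(double x); /* suspect rewrite */

        /* Run both implementations on the same inputs and report the
           first input where they disagree beyond a tolerance. */
        static int first_divergence(const double *in, int n, double tol)
        {
            for (int i = 0; i < n; i++) {
                double a = old_transform(in[i]);
                double b = new_transform(in[i]);
                if (fabs(a - b) > tol) {
                    fprintf(stderr, "diverge at %d: old=%g new=%g\n", i, a, b);
                    return i;
                }
            }
            return -1; /* the implementations agree everywhere */
        }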
  • by hak1du ( 761835 ) on Sunday May 02, 2004 @10:54AM (#9033613) Journal
    That sounds cool, but it isn't all that useful in practice. Debuggers that support stepping backwards usually end up keeping a lot of state around, which limits them to fairly small, well-defined problems or modules. But the problems where an experienced programmer needs a debugger are just the opposite: they involve lots of code and large amounts of data.

    Usually, it's best to avoid going back and forth through the code altogether; insert assertions and see which ones fail.
  • by Gorobei ( 127755 ) on Sunday May 02, 2004 @11:00AM (#9033634)
    Print statements are a great tool, especially on large pieces of software maintained/enhanced by many people. Once you've debugged your problem, you just #ifdef out the prints, and check the code back into version control.

    When the next poor programmer comes along, trying to fix/find a bug in that code, he can a) #ifdef the prints back on and quickly get debugging output about the important events taking place in his run, and b) read the code and see where the hairy bits are, because they tend to be the sections most heavily littered with debugging print calls.

    Fancy debugger IDEs just don't support this preservation of institutional knowledge.
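
    A minimal C sketch of the practice described above (the DPRINT name and DEBUG_PARSER flag are made up for illustration):

        #include <stdio.h>

        /* Debug prints stay in the source, compiled out by default.
           The next maintainer rebuilds with -DDEBUG_PARSER to get the
           same diagnostic output without rewriting it from scratch. */
        #ifdef DEBUG_PARSER
        #define DPRINT(...) fprintf(stderr, __VA_ARGS__)
        #else
        #define DPRINT(...) ((void)0)
        #endif

        void parse_record(const char *line)
        {
            DPRINT("parse_record: line=%s\n", line);
            /* ... the hairy parsing logic ... */
        }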
  • Re:Hey, nice ad! (Score:5, Insightful)

    by fishdan ( 569872 ) * on Sunday May 02, 2004 @11:04AM (#9033643) Homepage Journal
    I agree with you. I would have rather seen it posted without a reference to guardsoft and have someone mention it. I'm all for advertising on /. -- just not in the form of news.

    The fundamental issue here is that people are ALWAYS looking for a way to avoid having to write unit tests. I'm happy with a combination of IntelliJ and print statements. So far I've never had a situation where I thought "the debugger isn't giving me enough information."

    I think that one of the reasons I'm happy with the debugging options available to me is that I write my code so that it can be easily followed in the debugger. That means splitting my declarations and assignments, and other such things that make my code a bit more verbose, but eminently more readable. Lord knows as a child, I loved those complicated boolean switches, and cramming as much as possible into one line of code. Now that my code is maintained by more people than me, I'm tired of people having to ask me "what does this do?" I used to get angry at them, but now I get angry at myself when that happens. We don't just write code for the users, we write it for our peers. Write code that your sibling developers will be able to follow in a debugger. I know some code is hard to follow, even with a debugger, so I write all my conditions as clearly as possible, name my methods and variables as clearly as I can, and refactor reusable code into well-named "submethods", so that problems can be solved in small, well-understood modules.

    This is because I want my code to last beyond my employment. Therefore it has to be maintainable by someone other than me. The real test of your code is: can someone ELSE debug it, using whatever the heck tools they want? A fancy debugger is a fine thing, but someday someone is going to have to debug your code with inadequate tools. My rule of thumb is "Code as if your life depended on someone else being able to fix it."

  • Unit testing (Score:3, Insightful)

    by Tomah4wk ( 553503 ) <tb100@NOsPAm.doc.ic.ac.uk> on Sunday May 02, 2004 @11:15AM (#9033690) Homepage
    It seems to me that a lot more effort is being put into creating good unit tests to identify and prevent bugs, rather than debugging running applications. With an automated testing framework you can seriously reduce the amount of time spent on manual debugging and fixing, as the bugs get identified as early as build time, rather than at run time.
  • Re:Avoid debugging (Score:5, Insightful)

    by mattgreen ( 701203 ) on Sunday May 02, 2004 @11:19AM (#9033705)
    Ah, nothing like claiming that your way of approaching something is the only way. A debugger is just a tool. Like any other tool it can be bad if it is misused, and it isn't appropriate for every situation. I find a debugger invaluable for jumping into someone else's code and seeing exactly what is happening step-by-step. Debuggers can be great if you suspect buffer overflows and don't have access to more sophisticated tools that would detect it for you. Just yesterday I used a debugger to modify values in real-time to test code coverage.

    Inserting printf statements into the code is probably not logging - usually if you are debugging they are destined for removal anyway. I use a logging system that shows the asynchronous, high-level overview of events being dispatched, and then I can use the debugger to zero in on the problem very quickly without recompilation. In addition, if a test machine screws up I can remotely debug it.

    If you want to throw out debugging because Linus isn't a fan of it, be my guest. But I'm not a fan of wasting time, and injecting print statements into the code plus recompiling is a waste of time and ultimately accomplishes close to the same thing as debugging. Any decent IDE will let you slap a breakpoint down and execute to that point quickly. But I assume someone will come along and tell me that IDEs are for the weak as well.
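
    A minimal C sketch of a logger whose verbosity is set at run time, so no recompilation is needed (the LOG_LEVEL variable name and LOG macro are hypothetical):

        #include <stdio.h>
        #include <stdlib.h>

        /* Read the desired verbosity from the environment once. */
        static int log_level(void)
        {
            static int level = -1;
            if (level < 0) {
                const char *s = getenv("LOG_LEVEL");
                level = s ? atoi(s) : 1; /* default: important events only */
            }
            return level;
        }

        #define LOG(lvl, ...) \
            do { if ((lvl) <= log_level()) fprintf(stderr, __VA_ARGS__); } while (0)

    Start the program with LOG_LEVEL=3 to zero in on a problem, without touching the build.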
  • by Anonymous Coward on Sunday May 02, 2004 @11:19AM (#9033710)
    Yeah, it's always tempting to leave your debug printfs in and ifdef them out but I find that has two problems:

    1. When reading the code for logic, the print statements can be distracting and take up valuable vertical screen real estate. An algorithm without printfs can usually fit on a single screen. With printfs it may spill over two pages. That can make debugging harder if you need to understand what you're looking at on a conceptual level.

    2. Almost invariably I find that a previous person's printfs are almost totally useless because that person was usually debugging a different problem. Thus, enabling old printfs will dump out a whole bunch of information I couldn't possibly care about.

    Thus, I make it a point to clear out any debug printfs I was using before I check in my code.
  • by Anonymous Coward on Sunday May 02, 2004 @11:28AM (#9033750)
    ...is that they don't help in time-dependent situations.
    For example, a program in C that uses lots of signals and semaphores could perform differently when print statements are added. This is because print statements take a (relatively) long time to execute. Print statements can affect the bug they're supposed to be monitoring.
    I had a situation very much like this. One process would fork and exec another, and they would send signals to each other to communicate. But there were a few small bugs that caused one of the processes to occasionally miss a signal. When I added the print statements, it slowed the process down enough that it caught the signal. The only way I was able to successfully debug it was with a line-by-line trace with pencil and paper. I don't know if ddd would have helped (but I didn't know about it at the time).
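
    One low-perturbation alternative is an in-memory event trace: recording an event is a single array store, so it barely changes the timing the way printf does. A minimal C sketch (the event codes are hypothetical, and a hardened version would need more care when both main code and handlers record events):

        #include <signal.h>
        #include <stdio.h>

        #define TRACE_MAX 4096
        static volatile sig_atomic_t trace_buf[TRACE_MAX];
        static volatile int trace_len = 0;

        /* Cheap enough to call from a signal handler. */
        static void trace(int event)
        {
            int i = trace_len;
            if (i < TRACE_MAX) {
                trace_buf[i] = event;
                trace_len = i + 1;
            }
        }

        static void on_sigusr1(int sig)
        {
            (void)sig;
            trace(1); /* event 1 = "received SIGUSR1" */
        }

        /* Call after the run finishes, never from a handler. */
        static void trace_dump(void)
        {
            for (int i = 0; i < trace_len; i++)
                fprintf(stderr, "event %d\n", (int)trace_buf[i]);
        }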
  • Re:Data logging (Score:5, Insightful)

    by Tim Browse ( 9263 ) on Sunday May 02, 2004 @11:28AM (#9033752)
    Of course, as many people who debug multi-threaded programs have found, using print routines to output logs can make the bug 'go away', because quite often CRT functions like printf() etc are mutex'd, which serialises code execution, and thus alters the timing, and voila, race condition begone!

    I know it's happened to me :-S
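
    A tiny C demonstration of that effect (the iteration counts are arbitrary):

        #include <pthread.h>
        #include <stdio.h>

        static long counter = 0;

        static void *worker(void *arg)
        {
            (void)arg;
            for (int i = 0; i < 1000000; i++)
                counter++; /* data race: unsynchronised load/add/store */
            return NULL;
        }

        int main(void)
        {
            pthread_t a, b;
            pthread_create(&a, NULL, worker, NULL);
            pthread_create(&b, NULL, worker, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            /* Typically prints less than 2000000 because increments are
               lost. Add a printf() inside the loop and the lock buried in
               stdio often serialises the threads and hides the race. */
            printf("counter = %ld\n", counter);
            return 0;
        }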
  • by Anonymous Coward on Sunday May 02, 2004 @11:29AM (#9033760)
    I'm guessing it's because writing a device driver in LISP is not anybody's idea of a good time?

    Seriously though, not all bugs in a program have to do with allocation or buffer overflows. Besides, there are tools to help people with these problems. Most interesting bugs are beyond these types of distractions and can happen regardless of the language, or are you asserting that functional language programs always work perfectly the first time you run them?
  • Re:Avoid debugging (Score:4, Insightful)

    by jaoswald ( 63789 ) on Sunday May 02, 2004 @11:30AM (#9033767) Homepage
    The main advantage of printfs over IDE/interactive debugging is that you can collect a lot of data in one burst, then look at the text output as a whole.

    The tricky part about IDE/interactive debugging is understanding the behavior of loops, for instance. Sure, you can put a breakpoint in the loop and check things every time, but you quickly find out that the first 99 times are fine, and somewhere after 100 you get into trouble, but you don't quite know where, because after the loop, everything is total chaos. So you have to switch gears: put in some watch condition that still traps too often (because if you knew exactly what to watch for, you would know what the bug was, and would just fix it), and hope that things went wrong, but left enough evidence, when it traps.

    Whereas print statements let you combine the best of both worlds: expose the data you care about (what you would examine at breakpoints), with the ability to scan through the text result to find the particular conditions that cause the problem (what you could potentially get from watch conditions).
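
    A minimal C sketch of that combination (the loop bounds, step function, and failure window are hypothetical):

        #include <stdio.h>

        void run_loop(double *state, int n, void (*step)(int))
        {
            for (int i = 0; i < n; i++) {
                step(i);
                /* Print what a breakpoint would show, but only in the
                   window where trouble starts; scan the output later. */
                if (i >= 95 && i <= 110)
                    fprintf(stderr, "i=%d state=%g\n", i, state[i]);
            }
        }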
  • Better emphasis (Score:3, Insightful)

    by Futurepower(R) ( 558542 ) on Sunday May 02, 2004 @11:34AM (#9033789) Homepage
    Often something goes wrong with no runtime error. Those bugs are often really, really difficult to find.
  • *Bzzzt* (Score:4, Insightful)

    by bmac ( 51623 ) on Sunday May 02, 2004 @11:39AM (#9033813) Journal
    Nope, looks like marketroid hype to me. Answer me this: what is the point of comparing two separate runs of identical programs, except in the case of testing platform equivalence, in which case the output of a test set can simply be diff'd?

    The key to their idea is that "the user first formulates a set of assertions about key data structures", which equals traditional techniques. The reason such traditional techniques have failed and continue to fail is that those assertions are always an order of magnitude simpler than the code itself. These people forget that a program *is* a set of assumptions. Dumbing it down to "x must be > y" doesn't help with the complex flow of information.

    Peace & Blessings,
    bmac
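
    For contrast with the "x must be > y" level of assertion, a minimal C sketch of a whole-structure invariant check (the list type and its properties are hypothetical):

        #include <assert.h>
        #include <stddef.h>

        struct node { int value; struct node *next; };

        /* Check structural invariants of a sorted list: bounded length
           (which also catches cycles) and non-decreasing values. */
        static void assert_list_ok(const struct node *head, int max_len)
        {
            int n = 0;
            for (const struct node *p = head; p != NULL; p = p->next) {
                assert(++n <= max_len);
                if (p->next != NULL)
                    assert(p->value <= p->next->value);
            }
        }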
  • don't debug (Score:5, Insightful)

    by mkcmkc ( 197982 ) * on Sunday May 02, 2004 @11:49AM (#9033875)
    • The best programmer I've met told me that once you've dropped into the debugger, you've lost, which over time I've found to be quite true. The best debugging practice is to learn how not to use a debugger. (e.g., Are you using threads when they're not absolutely required? Say hello to debugging hell...)
    • When you must debug, print statements cover 97% of the cases perfectly. They allow you to formulate a hypothesis and test it experimentally as efficiently as possible.
    • Differential debugging is a nifty idea, but most of the time it'd be better to just use it with your print statements as above (e.g., print to logs and then diff them). For the one time per year (or five or ten years?) that having a true differential debugger might pay off, it's probably a loss anyway because of the cost and learning curve of the tool. (I thought about adding this to SUBTERFUGUE, but realized that no one would likely ever productively use this feature.)
    • If you need another reason to avoid this tool in particular, these guys have a (software) patent on it. Blech!
    --Mike
  • Re:Avoid debugging (Score:4, Insightful)

    by jaoswald ( 63789 ) on Sunday May 02, 2004 @11:59AM (#9033928) Homepage
    Hey, I'm certainly not going to call debuggers useless; I use them, albeit usually in a crude way, to do stepping where I need to check hardware state. But I must say that I have rarely felt comfortable using an IDE debugger to find a logic error; the old "insert printfs and read" approach, or interactive use of a Lisp REPL or test harness, has always felt more comfortable.

    Another advantage of printfs or adding code to the app is that most languages are more powerful than the UIs of interactive debuggers; even the best inspectors make it hard to filter out large arrays to find the problem in element 1085. But I can add a little helper function to scan through the array in code and find exactly what I want (a sketch of such a helper follows this comment). In Lisp, the full language is available in the debugger, so debugging and coding are hard to distinguish.
    Sometimes the tools you write to make the code are the same tools that are useful for debugging, whether you plan it that way or not.

    Different strokes for different folks---we're all fighting the same enemy: the bugs.

    There is something weird about debugging, however, which I can't quite put my finger on. Powerful language features have a return on investment which has a longer time to compound. You can attack bigger problems by understanding the language better, so spending time to understand the language pays off.

    Powerful debugger features don't really have time to compound. Sure, they may save you 50% of your time tracking down a particular bug, but only if you recognize that the bug you have is solved with that tool. If you get a lot of practice using that tool, however, you'll tend to stop making the kind of specific mistake that makes the tool valuable.

    Before you know how to use a language feature, you can write toy examples until you can feel comfortable. It's hard to practice with a debugger; how do you make toy mistakes---make a mistake deliberately and forget what mistake you made?
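
    A minimal C sketch of the "little helper function" idea mentioned above (the array and its expected property are hypothetical):

        /* Let the program scan the array instead of paging through an
           inspector: return the index of the first offending element. */
        static int first_bad_index(const double *a, int n)
        {
            for (int i = 0; i < n; i++)
                if (!(a[i] >= 0.0)) /* also catches NaN */
                    return i;
            return -1; /* nothing wrong found */
        }

    Call it from a printf or from the debugger's expression evaluator, and it hands you "1085" directly.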
  • by Xyrus ( 755017 ) on Sunday May 02, 2004 @12:06PM (#9033964) Journal
    "I haven't used a debugger in years; print statements are the only debugging tool I need."

    You shouldn't limit your options. For instance, I can guarantee that I can find and fix a bug with a debugger faster than using printlines when dealing with cross-language projects. Like using Java's JNI to talk with native DLLs, for example. The error could be somewhere on the Java side or the C++ side. It could be the object on the C++ side, the converter to a Java object, the way the object is being used on the Java side, etc., etc. When you need to see a bunch of data simultaneously, a debugger is the way to go.

    Need I mention assembler? Much easier to look at the registers and stack in a debugger. :)

    I'm not saying that printlines are bad, because I use them frequently for quick to moderate debugging. But when I need to examine large objects, functions, etc. I bring out the debugger.

    ~X~

  • by Ricdude ( 4163 ) on Sunday May 02, 2004 @12:42PM (#9034139) Homepage
    If I had a working program in the first place (to compare my buggy program with), I wouldn't need the debugger.

    Seriously, though. I've worked as a programmer for the last 15 years. Mostly, I've been fixing other people's bugs. Here's what I like to see in code that I need to fix (and generally don't see):

    1) Consistency in formatting, style, variable names, design - I don't care what style you use as long as it's consistent. I prefer my own form of Hungarian Notation, where a variable's prefix indicates its scope (global, static, etc), as well as the type. If any of that information changes, I should darn well follow through to make sure I've fixed everything that depends on them. Bring on strong type checking!

    2) No spaghetti code. Give me this:

        if ( something_bad ) {
            return failure;
        }
        good_stuff();
        return good;

    instead of this:

        if ( ! something_bad ) {
            good_stuff();
            return good;
        }
        return failure;

    It doesn't look like it matters much yet, but try adding eight more error checks to both, and see which you can track better. The "early bailout on error" model clearly surpasses the "endless nesting" model.

    3) Use of descriptive variable and procedure names. Source code is not meant to be understood by the computer. This is why we have compilers and interpreters. Source code is meant to be understood by humans. Write your code for humans, and you'll be surprised at how much faster you can grind through code. You'll only write the code once, but when you have to debug it, you'll spend eternity sifting through line after line, wondering what the hell you meant by that overused "temp" variable (temporary value? temperature? Celsius? Kelvin?). If you had only taken the time to spell out "surface_temperature_C", you'd know for sure. Vowels are good for you.

    4) Comment! Not every line. Not an impossible-to-maintain function header comment with dates and initials of everyone who's edited it. Don't fall for or rely on that "self-documenting" code nonsense. Just one comment line every three to ten code lines. That's all. Give me an overview of what's supposed to happen in each logical block of code. Tell me what the if conditions are checking for. A good rule of thumb is to sketch out your functions in comments first, then fill in the blanks (see the sketch at the end of this comment).

    That's all I can come up with off the top of my head, but there are certainly more...

    NOTE: for the pedants who think they noticed an apparent conflict between my hungarian notation style and the "surface_temperature_C" variable: since there is no scope or type prefix on the variable, it's a local variable, and I can change it at will, knowing that it will not affect any code outside the function at hand. If it had been "m_fSurfaceTemperature_C", then I'd know it could have repercussions affecting the state of the current object. If it were "g_fSurfaceTemperature_F", then I'd know I could hose my whole program with an invalid value. And should have converted from Celsius to Fahrenheit before doing so...
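
    A minimal C illustration of the "sketch your functions in comments first" rule from point 4 (the function and its empty-input policy are hypothetical):

        /* Average a set of temperature samples in Celsius. */
        static double average_temperature_C(const double *samples, int count)
        {
            /* Nothing to average: report zero rather than divide by it. */
            if (count <= 0)
                return 0.0;

            /* Sum every sample... */
            double total = 0.0;
            for (int i = 0; i < count; i++)
                total += samples[i];

            /* ...and divide by the count to get the mean. */
            return total / count;
        }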

  • by computational super ( 740265 ) on Sunday May 02, 2004 @01:00PM (#9034236)

    I have to agree... this sounds about as revolutionary as JUnit. Yes, if you follow the paradigm correctly, you'll produce 100% bug-free code, but it would take so long to follow the paradigm correctly that you'd never get anything done. Not to say that it doesn't look like it might be useful, but I think they're being disingenuous about the amount of work that's going to go into using it.

  • by tialaramex ( 61643 ) on Sunday May 02, 2004 @07:15PM (#9036505) Homepage
    We use C-like languages to make things go very, very fast, immediately. Sometimes a high level language _could_ deliver this if we were willing to wait for the hardware architecture to be re-designed in its favour (which we're not), and sometimes it's just not possible because the C-like language lets you do things which cannot be proven by the machine to be safe, yet nevertheless are correct. Even scary old gets() can be quite harmless, under certain carefully controlled circumstances.

    Now, some people use C (or even more stupidly, C++) to make petty database GUIs, or configure their font preferences, and here I agree that it's worthless, and a modern high-level language would be more appropriate. But note that even when Red Hat use Python throughout their management applets, they still crash, just with a run-time error (a list is too short, or a string is found when a number was expected) instead of SIGSEGV. Some of those mistakes might even have security implications... Programmers make mistakes in any and every language.
  • by Zooks! ( 56613 ) on Sunday May 02, 2004 @10:14PM (#9037428)
    > Use an editor with folding capability.

    For one thing, my editor doesn't support this, and not all editors do. For another, how distracting it is depends on the editor's folding implementation.

    > Redirect stdout/stderr to a file. Besides, this sounds like a straw man. There's nothing stopping you from having differently detailed level of debugging output.

    So now you want me to sit down and create sed/awk/grep/perl scripts to filter out stuff I don't want. No thanks. I'd rather just put in the print statements I actually need rather than waste time filtering out useless printfs I didn't want in the first place.

    Again, I've found most printfs to be useless except to the person debugging the particular problem. A reusable debug printf is a RARE thing. Why bother preserving them?

    > Fine, but I hope you document how you tested/debugged the algorithm so another developer can recreate whatever it was that you did.

    All I can say is: DUH. First off, I never said to delete comments, and as far as detailing how the file/problem was debugged, that's what the checkin log is for.
  • C (and especially C++) are sufficiently good languages in the hands of those who know how to program cleanly (for example, they know why returning a pointer to an automatic variable is bad in C, and why you need to define copy constructors, or make the destructor virtual, for certain classes in C++; the first gotcha is sketched at the end of this comment) --- just look at the many well-written projects in C; you rarely hear the core developers screaming that the language is painful to use. A good compiler helps by giving warnings about certain constructs, but some of the more subtle types are very hard for a compiler to detect.

    In high-level languages, you usually don't have memory-allocation or buffer-overflow problems, but quite often there are other traps. In Perl, numerous gotchas are mentioned in the manual. In Python, inexperienced developers often make shallow copies of lists when deep copies are needed. In Lisp, beginners often accidentally modify quoted lists in program sources, and they may write macros that capture variables. In Haskell, hastily-written programs may leak memory because of incorrect handling of laziness. I can't quickly think of an OCaml example, but at least it is easy to get hard-to-find typing errors at compile time if you are not careful... As for Java, I bet lots of beginners write applets that lock up randomly because they are not well aware of AWT/Swing threading issues.

    All these, like memory problems in C/C++, are avoidable if the gotchas of the language are well taught and learned --- and indeed they are mentioned in most books about the language. However, if people happen to forget one of these, it will lead to very hard-to-find bugs. So in this respect, you need self-discipline when programming with present-day languages, even high-level ones.

    A problem with functional languages is that they are quite hard to learn (which also makes them interesting if you like computer science). One has to read quite a number of CS papers to use Haskell well (otherwise you will see cryptic type errors when you try to do anything advanced, or when you do anything wrong). C is much easier in this respect, and even C++/Perl aren't that hard --- they are just complex.
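
    A minimal C illustration of the dangling-pointer gotcha mentioned at the top of this comment (the function names are made up):

        #include <stdio.h>

        /* BUG: buf is an automatic variable, so it ceases to exist when
           the function returns; the caller gets a dangling pointer. */
        const char *bad_greeting(void)
        {
            char buf[32];
            snprintf(buf, sizeof buf, "hello");
            return buf; /* undefined behaviour to use this afterwards */
        }

        /* One fix: have the caller own the storage. */
        void good_greeting(char *out, size_t outlen)
        {
            snprintf(out, outlen, "hello");
        }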

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...