New & Revolutionary Debugging Techniques? 351
An anonymous reader writes "It seems that people are still using print statements to debug programs (Brian Kernighan does!).
Besides the ol' traditional debugger, do you know of any new debugger that has a revolutionary way to help us inspect the data? (Don't answer with ddd, or any other debugger that just got fancy data display; I mean a genuinely new, revolutionary way.) I have only found one answer.
It seems that Relative Debugging is quite neat and cool."
Exceptions (Score:3, Interesting)
Valgrind (Score:5, Interesting)
More of the same (Score:5, Interesting)
All you are doing is replacing human eyes with a computer at the first "filter" process. Instead of having to compare a bunch of values and look for the errors, let the machine point them out to you - grep anyone?
I see nothing revolutionary about this. You still have the DUT making "assertions" - duuuuh, can you say "print"?
Print statements work fine for me, too (Score:5, Interesting)
But bear in mind that almost all of my work these days is in environments where the bugs that traditional debuggers help you find are pretty much impossible to make in the first place (Python, Java, etc.). Instead of tracing data structures through bits of memory and navigating stack frames, you just focus on the application itself. It's kind of refreshing.
Nothing I've done needs relative debugging (Score:5, Interesting)
I suppose this would be useful if you were writing something in a new programming language. You could port your code and run the relative debugger to make sure that both implementations acted the same. In such a situation, that would be great, but such a situation isn't the common case for me.
Revolutionary? NO. (Score:4, Interesting)
Re: Old news (Score:5, Interesting)
Besides, nontrivial bugs don't result in stack traces or crashes. They result in infrequent, hard-to-spot anomalies in the output. No amount of Java stack traces will help you find them.
Re:Good idea.. (Score:1, Interesting)
Part of me thinks that the appeal of debuggers versus unit testing is that a debugger is something mysterious to use and therefore feeds the ego, versus unit tests which are simple to write.
'Course, the end goal for a programmer is to deliver a workable system, and I think unit tests are the way to get there.
Revolutionary way of debugging what? (Score:5, Interesting)
I'm not sure what the question is here. Any debugger will allow you to watch data. If your program is special enough that you can't use a standard debugger, you probably need to write a test suite to go with it (and well, for any reasonably sized project, you should anyway).
That's to help you find "surface" bugs, i.e. to catch things like misaligned words, wrong data types, buffer overflows.
For deep structural problems, like when you try to code something and you have no clue how to go about it, and the end result is just not good and never going to be, the cure is usually a total rewrite, so debuggers won't help you there. That's a problem due to bad architecture of the code.
So, I'm not sure anything else is required. FYI, when I code, I believe I have enough experience to architect and code something relatively clean the first time. Then, because I've done it for many years, I sort of "instinctively" expect to find a certain amount of this or that type of bug. And usually, I can fix them without debugging because they jump out at me. When they don't (and I can do it), I pull out the old "print to stdout" debugging (or LED wiggling, sound generating...)
Which is nice... (Score:5, Interesting)
I've got this thing with OpenSSL, Qt and my code right now. On a one-time run, it works fine. When I put it into a program and try to loop it, it crashes on some mysterious and horrible error, sometimes on 2nd pass, sometimes 3rd, 4th pass or more.
All I'm getting from traceback logs are some weird memory allocation errors in different places, e.g. in Qt code that *never* crashes if I replace the OpenSSL code with dummy code, even though the two have nothing to do with each other. Or in OpenSSL itself, which I hardly think is their fault; if it were that buggy, no one would use it. And only when this is put together and looped. Taken apart, each works perfectly.
Kjella
Re:cool, but not useful (Score:5, Interesting)
RetroVue (Score:4, Interesting)
When I was wandering through JavaOne last year, I ran across this booth by VisiComp, Inc. who sells this debugger called RetroVue [visicomp.com]. I think it's an interesting attempt at bridging the gap between live-breakpoint debugging and logging.
The main issue with debugging vs. logging is that logging provides you with a history of operations that allows you to determine the execution order and state of variables at various times of the execution, something that debuggers don't actually help you with.
RetroVue seems to instrument your Java bytecode to generate a journal file. This journal file is quite similar to a core file extended over time, by recording all operations that occurred in the program over time: every method call, every variable assignment, exception thrown, and context switch. RetroVue then allows you to play back the execution of the application.
It includes a timeline view to jump around the various execution points of the program, as well as an ongoing call-list to show the call sequence that has occurred. It also notes every context switch that the VM makes, and detects deadlocks, thus making it a great tool for multi-threaded application debugging. You can adjust the speed of the playback if you would like to watch things unfold in front of you, or you can pause it at any time and step through the various operations. Want to find out when that variable was last assigned to? Just click a button. Want to find out when that method is called? Same.
It's not free/cheap, but it seems quite useful.
Print statements good, debuggers good (Score:4, Interesting)
Rewind (Score:3, Interesting)
Very handy, IMHO, although the O'Caml debugger sucks in other ways. (E.g. no watch conditions.)
Firmware! (Score:2, Interesting)
I ran into this when putting together a system with two micros, each with a 16MHz processor and 32K of onboard memory. I couldn't afford their damn emulator, so I had to think of other good ways to debug. The micros had to communicate regularly, and using printf() caused the program to delay at critical points, causing it to hang even though nothing else was done to the code. Even putting in a trap-check was still too big of a delay.

So I started using bitwise operators on 4 bits of memory. Each time I executed a bit of code that I thought would give problems, I inserted an assembly command into the C code that shifted in 4 bits with a code corresponding to the location, and set the read line high. Then I put those 4 lines to a logic analyzer, triggered on the read line, and read the bits as the program ran. If (when) it hung, I would know where. This had the advantage of being much faster to execute than a printf(), so I could put it anywhere, even in the middle of getting a packet.
Debugging firmware is a whole new bag, to be sure.
Fix & Continue - Apple's ProjectBuilder (Score:2, Interesting)
You run your program in the debugger. Let's say you try to click some button, and it doesn't do what you expected it to do. With the program still running, you change the handler code, click Fix in the debugger, and try your button again. Voila, problem solved! No need to recompile and then try to get back to the state you were in when stuff didn't work.
Those of you who haven't used languages like LISP or Smalltalk (or Forth?) wouldn't believe how convenient it is to be able to change code run-time! I often add statements like
if(this) printf(that);
on the fly. Conditional breakpoints almost lose their relevancy when you can just add whatever conditional you like in the code, and put a breakpoint there. =)
This works with many changes to the code, as long as the program counter isn't in the code block you're editing (in which case you get a warning and can retry later). It works with C/C++ and Objective-C just the same. Some things are not accepted, like adding member fields to classes, or changing the number of local variables in functions that are on the stack.
Re:Exceptions (Score:3, Interesting)
Re: Old news (Score:3, Interesting)
I second that. I currently have a piece of software that runs as a daemon. It silently crashes about once a week. Tell me a way of debugging it that doesn't take months, and I'll be happy. But until then, I'll have to add debugging statements and triple-check each line of code, run it again and wait another week or so. Right now I can only very vaguely tell where the crash occurs - but not what causes it. Not fun.
Re:Aspect-Oriented Programming can help (Score:3, Interesting)
Best debug technique ever (Score:3, Interesting)
I could be all the way across the room, and suddenly there would be this nice clear tone, as my solenoids 'sang' to alert me of trouble.
Re:Exceptions (Score:4, Interesting)
Just as a historical note, the APL system that I used in 1975 provided this capability. When an exception occurred, the interpreter halted program execution, identified the problem and the source line, and provided access to the stack info on how (functions and line numbers) you had gotten there. You also had the ability to examine any variable that was currently in scope, and could change values and resume execution. Given the cryptic nature of the language, you needed all the debugging help you could get. Still, for certain types of numerical problems, you could get a lot of effective code written in a very short period of time.
UPS is worth a look (Score:4, Interesting)
No one has mentioned UPS [demon.co.uk] yet. I'm not sure you could really call it revolutionary, but it does have a few interesting features:
Like other people here, I debug mostly with printf()s logged to a file for easy searching, supplemented with valgrind, memprof and occasionally UPS. They are all tools, and you need to pick the right one for the sort of bug you think you're facing.
Re:Which is nice... (Score:3, Interesting)
Revolutionary? Certainly not! (Score:1, Interesting)
Often, chip architects create reference models in a high level language, typically C/C++, while the logic designers create an implementation of the same functionality using a Hardware Description Language (HDL), typically Verilog or VHDL.
Chip verification engineers then run simulations in which data is fed into both models, and data values (internal values and/or output values) are compared at key points in the two models as the simulation progresses. The simulation is usually set up to automatically complain (WARNING/ERROR) when the two models disagree, and may or may not halt, based on the severity of the discrepancy.
It is also typical to have the HDL model dump value change events into a file whose contents can be viewed graphically in a GUI which shows the value changes with respect to time (similar to what one might see using a digital logic analyzer on real hardware).
Also, if I recall correctly, the Space Shuttles use triple redundancy with "voting", so that as long as at least two CPUs agree on output results, they are viewed as correct, and the odd man out is ignored.
Re:Unit testing (Score:1, Interesting)
Since I've gotten into test-driven development, my bug rate has dropped nearly to zero. Let me be more specific: the code that I write test-first does not fail unpredictably on its own. Sure there are bugs in the GUI, in integration of modules, in understanding specifications... but those bugs were already there before, TDD basically eliminates one class of bugs.
And when I get a bug report related to one of the above, I figure out how to write a test for it (GUIs are the toughest, but I've got WWW::Mechanize and HttpUnit for web GUIs).
I haven't used a debugger OR a debugging printf statement in the last couple of years.
If you haven't tried test-driven development, DO IT NOW! Your code becomes MUCH simpler and well-factored, and you get nearly 100% test coverage, and nearly 0% buggy code. Of course you have to be a decent programmer, and you have to know when to test edge cases and so forth, but it helps me tremendously.
Tarantula -- Visual bug localizing (Score:2, Interesting)
Tarantula Web Site [gatech.edu]
The intuition of the approach is simple (this is our hypothesis): statements that are executed primarily by failed test cases are more suspicious of being faulty than those that are primarily executed by passed test cases.
So, we take the statements executed by each test case and its pass/fail status and the source code for the program under test as input. Statements that are executed primarily by passed test cases are colored green to denote safety; statements that are executed primarily by failed test cases are colored red to denote danger; and statements that are executed by both passed and failed are colored in a yellowish hue to denote caution.
Example screenshot [atl.ga.us]
We use a visualization for the code called SeeSoft that represents each line of code by a line of pixels, where the length of the line of pixels is proportional to the length of the source line. This gives a miniature view of the code -- much like if you were to print out all of the code, post it on a wall, and walk away from it. This allows the developer to see the colors of many lines of code simultaneously.
We have since extended the visualization to include an even higher-level abstraction than the SeeSoft view. This view uses TreeMaps and allows the simultaneous display of the colors of about 2 million lines of code.
Another example screenshot with the TreeMap visualization [atl.ga.us]
So far, our experiments show that for programs with a single bug showing up in the test suite, this method successfully illuminates the fault about 90% of the time.
Here are some papers about this work.
Paper 1 [gatech.edu]
Paper 2 [gatech.edu]
Re:Better emphasis (Score:3, Interesting)
This class of bugs is significantly less painful than the little-known 'schroedinbug', which is reported by testers and/or users, and cannot be reproduced in the presence of a qualified observer or logged in any usable way.
The question is whether there is a 'Planck constant' associated with debugging any sufficiently complex algorithmic function that constrains the ability of the coder to localize the bug while specifying its effects.
A question for the theorists among us, no doubt.
Interestingly, several theoretical frameworks do exist to describe the heisenbug and schroedinbug, but unfortunately none have yet been verified through experiment, and it is not clear that the tools currently exist to allow us to do so.
I prefer a debug print func (Score:3, Interesting)
Somewhere, you have a list of what the various debug levels are. It's useful to do something like
0 = off
1 = entering major functions
2 = less major functions
3 = specific breakpoints
4 = loop variables
The debug print checks a constant global variable, or if more work is required, gets set by a command line. This means you don't need to remove the statements for the final compile.
Re:Print statements work fine for me, too (Score:3, Interesting)
What's especially potent is that you don't actually need to comment the assertions out: leave them in your code, so that downstream users can activate them and confirm that there's nothing broken in your code. There's no performance hit when they don't ask for them, and if you're writing code for others to incorporate into their apps, you've just given them a powerful support tool.
This is why configuration driven logging (like log4j) is such a leap over commented in/out println() statements: you can use them outside of your own development environment. Your customers, be they other developers or end users, can get an insight into how things are failing, either to diagnose problems for themselves, or to generate a report for you, so that you can find out what's going wrong for them.
Re:nothing new (Score:3, Interesting)
Sorry, you're right, taken literally, what I said doesn't make much sense.
What I meant was that the currently big and commercial platforms (Windows, Macintosh, Java) are still behind what was already available commercially 10-20 years ago, although it wasn't very successful back then.
The reason it's successful now and wasn't back then is that programmers in industry didn't yet understand they needed it. Programmers in industry have slowly been learning: object-oriented programming, garbage collection, dynamic loading, reflection, etc. The development of popular platforms simply reflects the state of education and understanding of the great mass of programmers. But the state of the art represents what is actually available, if only you know that you need it.
Linux magazine (Score:2, Interesting)
Its main focus is the kernel, but it should be easy enough to adapt to other programs. It's not a debugger in the true sense of the word, but it will detect a lot of bugs for you that you might otherwise have to hunt down with a debugger.