Hindsight: Reversible Computing 178
One of the more interesting tech pieces that came out this week has been Hindsight [PDF]. Hindsight is made by Virtutech and is billed as "the first complete, general-purpose tool for reverse execution and debugging of arbitrary electronic systems." The demos were received extremely well and it just looks cool.
Sounds good - but expensive. (Score:5, Informative)
Hindsight will go into beta sites in May, with production slated for July. Incremental cost over Simics is around $5,000 per seat, but Hindsight won't target single seats. A typical engagement, including Simics, Hindsight and some initial model development, is estimated at $200,000 to $300,000 for a software development group with 10 to 20 seats.
That's just nutty... (Score:4, Informative)
That would be quite nice... It almost seems like a jog/shuttle control for programmers... Rewind, play, slow motion and so on... I know they said it's the first complete one, but is there anything else out there like this?
Not necessarily (Score:5, Informative)
Mirror (Score:5, Informative)
Never underestimate the Slashdot Effect!
Virtualization layer for checkpointing and steppin (Score:5, Informative)
The Omniscient Debugger? (Score:3, Informative)
http://www.lambdacs.com/debugger/debugger.html/ [lambdacs.com]
Seems like this has been done before, at least for java apps...
Re:But what about external events (Score:5, Informative)
Bash VB all you want.... (Score:1, Informative)
Coupled with fix-and-continue, you get not only a more productive development environment, but an environment where you can press a prototype into limited production use long before it's ready. Everyone in their right mind knows that's a bad idea; even so, it becomes necessary once in a while. Getting that prototype into production as soon as possible can be the difference between the company surviving or failing. I know because I've been there. These two features saved a company I used to work for. Thank you, Microsoft!
Reversible Computing != Reversible Execution (Score:5, Informative)
Anyway, the headline is misleading.
Re:Too good to be true... (Score:1, Informative)
For example, if you're at point t=10, your previous checkpoint is at t=0, and you want to go back to t=9, their system first goes back to t=0 and then re-executes the code until t=9.
The catch is that you have to log everything non-reversible (I/O, interrupts, syscalls, etc.) and substitute the logged values when re-executing, so the replay is deterministic.
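A minimal sketch of that scheme in Python, assuming a deterministic step function and treating external inputs as the only non-reversible events (all class and function names here are illustrative, not from Virtutech's product):

```python
import copy

class ReplayDebugger:
    """Sketch: 'reverse' execution via one checkpoint plus deterministic replay."""

    def __init__(self, initial_state, step_fn):
        self.step_fn = step_fn                          # (state, input) -> state, deterministic
        self.checkpoint = copy.deepcopy(initial_state)  # snapshot of the state at t=0
        self.io_log = []                                # logged non-reversible inputs
        self.state = initial_state
        self.t = 0

    def step(self, external_input=None):
        # Log every non-deterministic input so replay reproduces it exactly.
        self.io_log.append(external_input)
        self.state = self.step_fn(self.state, external_input)
        self.t += 1

    def goto(self, target_t):
        # "Go back" to target_t by restarting from the t=0 checkpoint
        # and replaying forward, feeding in the logged inputs.
        assert 0 <= target_t <= self.t
        state = copy.deepcopy(self.checkpoint)
        for i in range(target_t):
            state = self.step_fn(state, self.io_log[i])
        return state
```

So stepping "back" from t=10 to t=9 is really a restart at t=0 followed by nine replayed forward steps.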
Re:Too good to be true... (Score:4, Informative)
It also says in TFA that it doesn't actually calculate the reverse steps, so it doesn't matter whether reversing a step is mathematically impossible.
What it does do is take complete snapshots every (for example) 100 steps. To move "backwards" a step, it returns to the previous snapshot (a known state) and executes forward 99 steps.
Then it returns to the same snapshot and goes forward 98 steps. And so on. So from your perspective, you see the 99th step, then the 98th, the 97th, and on down. It only LOOKS like it's running backwards.
This would even work for the game of life.
So the performance tradeoff would be this:
More frequent snapshots make forward execution slower, because more time is spent saving state at regular intervals; but "reverse" execution gets faster, because fewer steps have to be replayed from the previous snapshot.
Re:Reverse Execution of Code? Haha! Oh wait... (Score:3, Informative)
Re:That's just nutty... (Score:3, Informative)
Re:reversible computing == low energy computing (Score:3, Informative)
I think you've got it backwards.
Traditional computers generate entropy because they destroy information, and creating entropy necessarily dissipates heat. With reversible computing there is no entropy increase, which in theory means less heat produced and lower energy consumption.
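The heat cost of destroying information is quantified by Landauer's principle: erasing one bit dissipates at least kT ln 2 of heat. A quick computation of that floor at room temperature (the temperature choice of 300 K is an assumption for illustration):

```python
import math

# Landauer's principle: erasing one bit of information dissipates
# at least k_B * T * ln(2) joules of heat.
K_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # assumed room temperature, kelvin

landauer_limit = K_B * T * math.log(2)   # joules per erased bit
print(f"Minimum heat per erased bit at {T:.0f} K: {landauer_limit:.3e} J")
```

That works out to roughly 2.9e-21 J per erased bit; reversible logic avoids this cost precisely because no bits are erased.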
Damn misleading articles. (Score:2, Informative)
It's an interesting field that's going to take off as Moore's Law slows down due to wasted heat. A good starting page with links for the interested is here [ufl.edu].
RTFA (Score:2, Informative)
Mod parent up (Score:4, Informative)
This has real potential. Beta versions of programs should run with this installed, so the core dump can be stepped backwards to the trouble spot. This could make Linux software significantly more reliable.
Re:That's just nutty... (Score:3, Informative)
Yes, in the museum.
The debugger that came with BS3 on the TR440 [vaxman.de] had an option that let you step back a defined number of steps (small, for lack of space to save state) if you set the appropriate switch when compiling. Very cool feature, 30 years ago!
CC.
Re:Not by a decade. (Score:3, Informative)
First, it was kind of silly to name the program the same as my user name, but I never found a better name for it than "my trace-and-replay debugger".
My original plan was to write this for Solaris and sell it, hence the insistence on tracing and replaying without modifying the target program or the operating system, and that's why the replay controller messes with gdb's mind, so that it can work with a stock gdb rather than needing gdb extensions.
I developed a Linux version first because it was so cheap and simple to throw Slackware on my computer. And, well, it turns out that I'll probably never need a Solaris version, because Linux sure has become big enough and rewarding enough for me.
Versions 0.1, 0.2, and 0.3 took 15 months of full-time work, living on my savings and writing code on a little Linux box.
After version 0.3, several bad things happened:
Technical butterfly-chasing: the tracer needs to know about every ioctl call the target program might make, and Linux was adding and changing ioctls faster than I could check the patch diffs to keep up. (That's how I came to write the kernel change summaries for a while.) The obvious solution is to trace the 20 or so common ioctls precisely and throw all the rest into a big worst-case box that says "this ioctl might touch all of memory". One of the problems of working alone: there's nobody else around to notice the obvious solution.
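The "whitelist plus conservative fallback" idea can be sketched in a few lines. The request numbers and effect descriptions below are illustrative (loosely modeled on Linux's TCGETS/TCSETS), not the actual tracer's tables:

```python
# Hypothetical table: the handful of common ioctls whose memory
# effects the tracer models precisely.
KNOWN_IOCTLS = {
    0x5401: "reads a termios struct from the kernel",  # illustrative entry
    0x5402: "writes a termios struct to the kernel",   # illustrative entry
}

def memory_effect(ioctl_request):
    # Any ioctl not in the table gets the worst-case assumption,
    # forcing the tracer to conservatively log the whole address space.
    return KNOWN_IOCTLS.get(ioctl_request, "might touch all of memory")
```

The fallback trades replay efficiency for correctness: unknown ioctls never corrupt the trace, they just bloat it.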
Moving up the management chain: anybody who runs a one-person operation knows about this. It's one thing to write a proof-of-concept program. It's another thing to push it out, start a community, market it, manage all the communication with users and co-developers. I failed at that.
Fade-out: after the proof-of-concept worked, I noticed I'd spent 15 months full time, and I did make the milestone of seeing test programs run. I lost some interest and went and did something else.
Version 0.3 still has one good use: to help defend against anybody else that files a patent for technology like this. I released in November 1995.
Some responses:
Zogger and Animats: that's exactly the use case. The user in the field runs the tracer, then mails a big log back to the developer. This gets very useful when the user has resources the developer doesn't, as with a network server. I don't know much about DTrace, but I think DTrace is just more comprehensive kernel reporting, not fundamentally "video-taping the user process".
Mebane, I think the answer is: in 1995, I sucked at explaining things to people. Specifically, back then, I was into the "macho flash" school of communication: "this debugger is the best thing since breakpoint debugging, it will solve problems you didn't even think could be solved, etc". I should have just done a very simple demo walkthrough of printf("%d\n", gettimeofday()).
Auxon and Skubeedooo, yeah, it was a lot about marketing.
Jeff Mahoney: I agree, Hindsight looks much more powerful. But Hindsight is also more resource-intensive: they have to simulate a whole CPU.
MenTaLguY: I would be happy to chat with anybody who wants to do a revival. The mec@shout.net contact address still works.
And the whatever-happened-to-mec line: I worked for Cygnus/Red Hat for several years on gdb. My current job is with Google.