
Hindsight: Reversible Computing 178

One of the more interesting tech pieces that came out this week has been Hindsight [PDF]. Hindsight is made by Virtutech and is billed as "the first complete, general-purpose tool for reverse execution and debugging of arbitrary electronic systems." The demos were received extremely well and it just looks cool.
This discussion has been archived. No new comments can be posted.

  • by bigtallmofo ( 695287 ) on Friday March 11, 2005 @09:39AM (#11909317)
    From reading about this earlier [eetimes.com], it is a very exciting technology for embedded systems. It does seem a bit expensive though:

    Hindsight will go into beta sites in May, with production slated for July. Incremental cost over Simics is around $5,000 per seat, but Hindsight won't target single seats. A typical engagement, including Simics, Hindsight and some initial model development, is estimated at $200,000 to $300,000 for a software development group with 10 to 20 seats.
  • That's just nutty... (Score:4, Informative)

    by The Desert Palooka ( 311888 ) on Friday March 11, 2005 @09:39AM (#11909318)
    With Simics Hindsight it is now possible to step back just before the error and then run forward again, providing another opportunity to reproduce the error and look more closely at what occurs in detail, without having to re-launch the program. Simics Hindsight can even unboot an operating system, running the code backwards until it reaches the initial hardware launch instruction after a hardware reset.

    That would be quite nice... It almost seems like a shuttle head or what not for programmers... Rewind, play, slow motion and so on... I know they said it's the first complete one, but is there anything else out there like this?
  • Not necessarily (Score:5, Informative)

    by spookymonster ( 238226 ) on Friday March 11, 2005 @09:44AM (#11909350)
    From their website, you can get a free academic version of the software as well. At least, that's what the site says (I didn't register to download it, so I can't confirm).
  • Mirror (Score:5, Informative)

    by tabkey12 ( 851759 ) on Friday March 11, 2005 @09:45AM (#11909353) Homepage
    Mirror of the PDF [mirrordot.org]

    Never underestimate the Slashdot Effect!

  • by jaxdahl ( 227487 ) on Friday March 11, 2005 @09:45AM (#11909356)
    This seems to create a virtualization layer where checkpoints are saved periodically, then instructions are single stepped through. So to step back, it goes to the first checkpoint before the instruction you want to step back to, then it single steps up to that point. This would aid in kernel-level debugging where data structures might be overwritten from almost anywhere in the computer that can access the kernel space -- no need to set a watchpoint then reboot and wait for the next error to occur.
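The checkpoint-and-replay scheme described above can be sketched in a few lines. This is a toy model with made-up names; the real tool snapshots an entire simulated machine, not a Python value:

```python
# Toy model of "reverse stepping" via periodic checkpoints plus
# deterministic forward replay. State here is just an integer; a real
# system would snapshot all of memory and device state.

CHECKPOINT_INTERVAL = 100

def step(state):
    # Any deterministic state-transition function works here.
    return (state * 31 + 7) % 1_000_003

def run(initial_state, n_steps):
    """Run forward, saving a checkpoint every CHECKPOINT_INTERVAL steps."""
    checkpoints = {0: initial_state}
    state = initial_state
    for t in range(1, n_steps + 1):
        state = step(state)
        if t % CHECKPOINT_INTERVAL == 0:
            checkpoints[t] = state
    return state, checkpoints

def state_at(t, checkpoints):
    """'Reverse step': restore the nearest earlier checkpoint, replay forward."""
    base = max(ct for ct in checkpoints if ct <= t)
    state = checkpoints[base]
    for _ in range(t - base):
        state = step(state)
    return state

final, cps = run(42, 1000)
# "Stepping back" from t=1000 to t=999 is really a replay from t=900.
assert step(state_at(999, cps)) == final
```

The point is that nothing ever executes backwards: every observed earlier state is recomputed by running forward from a saved snapshot.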
  • by Anonymous Coward on Friday March 11, 2005 @09:51AM (#11909402)

    http://www.lambdacs.com/debugger/debugger.html/ [lambdacs.com]

    Seems like this has been done before, at least for java apps...
  • by tesmako ( 602075 ) on Friday March 11, 2005 @09:54AM (#11909428) Homepage
Since it is based on the whole-system simulator Simics -- yes, it does assume that the app runs in isolation, since all external stuff is just Simics simulations.
  • by Anonymous Coward on Friday March 11, 2005 @09:56AM (#11909452)
Bash VB all you want, but it's had a (more limited) version of this feature for years. It's a gigantic help when debugging. In my experience the error occurs, or is detectable, within a few lines of the crash/exception. So you don't necessarily need to back up the entire call stack, just enough to see what's broken immediately before the crash/exception occurs.

    Coupled with fix-and-continue, you have not only a more productive development environment, but an environment where you can press a prototype into limited production use long before it's ready. Everyone in their right mind thinks that's a terrible idea; even so, it becomes necessary once in a while. Getting that prototype into production as soon as possible can be the difference between the company surviving or failing. I know because I've been there. These two features saved a company I used to work for. Thank you, Microsoft!

  • by Anonymous Coward on Friday March 11, 2005 @10:01AM (#11909494)
    Reversible computing is a way of computing without (permanently) consuming energy. Look it up if you're not familiar, because it's pretty interesting.

    Anyway, the headline is misleading.
  • by Anonymous Coward on Friday March 11, 2005 @10:27AM (#11909707)
    They are not doing "reverse steps". They go back to a previous checkpoint, and re-execute the code (forward) until they reach the desired point.

    For example, if you're at point t=10, your previous checkpoint is at t=0, and you want to go back to t=9, their system first goes back to t=0 and then re-executes the code until t=9.

    The thing is that you have to log everything non-reversible (I/O, interrupts, syscalls, etc.) and use the logged values when re-executing.
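The logging requirement in the parent can be illustrated with a toy record/replay pair (hypothetical names; a real tracer logs syscall results, interrupts, and DMA at the machine level). During recording, every non-deterministic input is appended to a log; during replay, the logged values are fed back in so forward re-execution follows the identical path:

```python
import random

def record(n_inputs):
    """Recording run: capture each non-deterministic input (random numbers
    standing in for I/O results) and the states they produce."""
    log, states, state = [], [], 0
    for _ in range(n_inputs):
        value = random.randrange(1000)   # non-reversible external input
        log.append(value)
        state = state * 2 + value
        states.append(state)
    return log, states

def replay(log):
    """Replay run: consume logged inputs instead of touching the real
    source, so re-execution is fully deterministic."""
    states, state = [], 0
    for value in log:
        state = state * 2 + value
        states.append(state)
    return states

log, original = record(10)
assert replay(log) == original  # replay reproduces the exact same run
```

Without the log, re-executing from a checkpoint could take a different path than the run being debugged, which would defeat the whole exercise.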
  • by CausticPuppy ( 82139 ) on Friday March 11, 2005 @10:33AM (#11909747)
    There are many types of calculations out there (think The Game of Life or other CAs) that by their nature cannot be reversed, so all of those states would have to be stored or it would be mathematically impossible to calculate the reverse steps.

    It also says in TFA that it doesn't actually calculate the reverse steps, so it doesn't matter if it's mathematically impossible.

    What it does do is take complete snapshots every (for example) 100 steps. To move "backwards" a step, it returns to the previous checkpoint (a known state) and goes forward 99 steps.
    Then it returns to the same checkpoint and goes forward 98 steps, and so on. So from your perspective, you see the 99th step, then the 98th, the 97th, and on down. It only LOOKS like it's running backwards.

    This would even work for the game of life.

    So the performance tradeoff would be this:
    More frequent checkpoints make forward execution slower, because more time is spent saving snapshots at regular intervals, but "reverse" execution faster, because fewer steps have to be replayed from the previous checkpoint.
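The tradeoff in the parent can be made concrete with back-of-the-envelope arithmetic. The cost numbers below are made up purely for illustration:

```python
def costs(interval, snapshot_cost=50.0, step_cost=1.0, run_length=10_000):
    """Rough cost model for one choice of checkpoint interval.

    forward: total forward-execution cost including snapshot overhead
    reverse: average cost of one "reverse step" (replay from last checkpoint)
    """
    snapshots = run_length // interval
    forward = run_length * step_cost + snapshots * snapshot_cost
    reverse = (interval / 2) * step_cost  # replay half an interval on average
    return forward, reverse

# Denser checkpoints: slower forward run, but cheaper reverse steps.
f_dense, r_dense = costs(interval=10)
f_sparse, r_sparse = costs(interval=1000)
assert f_dense > f_sparse and r_dense < r_sparse
```

Picking the interval is thus a classic space/time tuning knob: a debugger that expects heavy backwards stepping wants dense checkpoints, one that rarely rewinds wants sparse ones.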
  • by HidingMyName ( 669183 ) on Friday March 11, 2005 @10:38AM (#11909793)
    Reverse execution is possible at the source level, but it requires generation of extra data structures to handle operations that don't correspond to invertible functions. This approach has been applied with some success to high performance simulations to give a "lightweight rollback", by Peters and Carothers in An Algorithm for Fully Reversible Optimistic Parallel Simulation [rpi.edu].
  • by zootm ( 850416 ) on Friday March 11, 2005 @11:04AM (#11910055)
    There's an academically interesting (I'm assured :)) Java system similar to this called Bdbj [ed.ac.uk]. I'm not sure if it's useful in a real context, but I assume it is to some degree.
  • by TeknoHog ( 164938 ) on Friday March 11, 2005 @11:20AM (#11910214) Homepage Journal
    The term "reversible computing" has also been used for a type of circuit that does not consume energy, other than entropy, for computation.

    I think you've got it the wrong way around.

    Traditional computers generate entropy because of the information destroyed. Entropy created is necessarily associated with heat. With reversible computing there is no entropy increase, which in theory means less heat produced and less energy consumption.

  • by PxM ( 855264 ) on Friday March 11, 2005 @11:30AM (#11910353)
    I was getting excited since I thought they had actually created a practical reversible computing hardware system. The idea behind true reversible computing is that information flow in computation is linked to the energy lost as heat during computing. Von Neumann showed that there is a hard limit on the amount of energy needed every time a bit of information is lost, dependent on Boltzmann's constant and the temperature of the system. The ultimate goal is to have a computer that looks a lot like particle physics, where the rules are completely time-symmetric: if I reverse the flow of time, the laws of physics will still run properly and allow me to reconstruct all the previous states from the present one. While quantum physics is reversible in principle (this is sometimes called the "conservation of information" law), you can't do the same with most binary operations, since all the common ones except NOT take in 2 bits and output 1 bit. Thus, it is impossible to run the system in reverse and reconstruct those two bits from that one bit. This has the adverse effect of wasting energy as heat into the environment.

    It's an interesting field that's going to take off as Moore's Law slows down due to wasted heat. A good starting page with links for the interested is here [ufl.edu].
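For reference, the hard limit the parent mentions (usually credited to Landauer, building on von Neumann's observation) is E = kT ln 2 per erased bit. A quick calculation at room temperature:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # room temperature, K

# Minimum energy dissipated to the environment when one bit is erased.
landauer_limit = k_B * T * math.log(2)
print(f"{landauer_limit:.3e} J per erased bit")   # ~2.871e-21 J
```

Tiny per bit, but a chip erasing bits at tens of GHz across billions of gates runs into it eventually, which is the parent's point about Moore's Law and heat.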

  • RTFA (Score:2, Informative)

    by p3d0 ( 42270 ) on Friday March 11, 2005 @12:25PM (#11910953)
    There are many types of calculations out there (think The Game of Life or other CAs) that by their nature cannot be reversed, so all of those states would have to be stored or it would be mathematically impossible to calculate the reverse steps.
    They take periodic system checkpoints and then work forward to the instruction preceding the one you started from. There's no reason the Game of Life wouldn't be amenable to this.
  • Mod parent up (Score:4, Informative)

    by Animats ( 122034 ) on Friday March 11, 2005 @12:32PM (#11911034) Homepage
    That's impressive technology. And it needs to be better known. Reverse-stepping has been available for gdb under Linux since at least 1999, and almost nobody knows about it. So please, mod the parent up.

    This has real potential. Beta versions of programs should run with this installed, so the core dump can be stepped backwards to the trouble spot. This could make Linux software significantly more reliable.

  • by foobsr ( 693224 ) on Friday March 11, 2005 @01:32PM (#11911762) Homepage Journal
    but is there anything else out there like this?

    Yes, in the museum.

    The debugger that came with BS3 on the TR440 [vaxman.de] had an option that let you step back a defined number of steps (small, due to the limited space for saving state) if you set the appropriate switch when compiling. Very cool feature, 30 years ago!

    CC.
  • Re:Not by a decade. (Score:3, Informative)

    by mec ( 14700 ) <mec@shout.net> on Sunday March 13, 2005 @05:02AM (#11924918) Journal
    Wow, what to say?!

    First, it was kind of silly to name the program the same as my user name, but I never found a better name for it than "my trace-and-replay debugger".

    My original plan was to write this for Solaris and sell it, hence the insistence on tracing and replaying without modifying the target program or the operating system, and that's why the replay controller messes with gdb's mind, so that it can work with a stock gdb rather than needing gdb extensions.

    I developed a Linux version first because it was so cheap and simple to throw Slackware on my computer. And, well, it turns out that I'll probably never need a Solaris version, because Linux sure has become big enough and rewarding enough for me.

    Versions 0.1, 0.2, and 0.3 took 15 months of full-time work, living on my savings and writing code on a little Linux box.

    After version 0.3, several bad things happened:

    Technical butterfly-chasing: the tracer needs to know about all possible ioctl calls that the target program makes, and Linux was adding and changing ioctls faster than I could check the patch diffs to update them. (That's how I came to write the kernel change summaries for a while.) The obvious solution is to trace about 20 common ioctls and throw all the rest into a big worst-case box that says "this ioctl might touch all of memory". One of the problems of working alone: nobody else around to notice the obvious solution.

    Moving up the management chain: anybody who runs a one-person operation knows about this. It's one thing to write a proof-of-concept program. It's another thing to push it out, start a community, market it, manage all the communication with users and co-developers. I failed at that.

    Fade-out: after the proof-of-concept worked, I noticed I'd spent 15 months full time, and I did make the milestone of seeing test programs run. I lost some interest and went and did something else.

    Version 0.3 still has one good use: to help defend against anybody else that files a patent for technology like this. I released in November 1995.

    Some responses:

    Zogger and Animats, that's exactly the use case, the user in the field runs the tracer, then mails a big log back to the developer. This gets very useful when the user has unique resources that the developer doesn't, for a program like a network server. I don't know much about dtrace, but I think dtrace is just more comprehensive kernel reporting information, not fundamentally "video-taping the user process".

    Mebane, I think the answer is: in 1995, I sucked at explaining things to people. Specifically, back then, I was into the "macho flash" school of communication: "this debugger is the best thing since breakpoint debugging, it will solve problems you didn't even think could be solved, etc". I should have just done a very simple demo walkthrough of printf("%d\n", gettimeofday()).

    Auxon and Skubeedooo, yeah, it was a lot about marketing.

    Jeff Mahoney: I agree, Hindsight looks much more powerful. But Hindsight is also more resource-intensive: they have to simulate a whole CPU.

    MenTaLguY: I would be happy to chat with anybody who wants to do a revival. The mec@shout.net contact address still works.

    And the whatever-happened-to-mec line: I worked for Cygnus/Red Hat for several years on gdb. My current job is with Google.
