Hindsight: Reversible Computing

One of the more interesting tech pieces that came out this week has been Hindsight [PDF]. Hindsight is made by Virtutech and is billed as "the first complete, general-purpose tool for reverse execution and debugging of arbitrary electronic systems." The demos were received extremely well and it just looks cool.
  • by bigtallmofo ( 695287 ) on Friday March 11, 2005 @08:39AM (#11909317)
    From reading about this earlier [eetimes.com], it is a very exciting technology for embedded systems. It does seem a bit expensive though:

    Hindsight will go into beta sites in May, with production slated for July. Incremental cost over Simics is around $5,000 per seat, but Hindsight won't target single seats. A typical engagement, including Simics, Hindsight and some initial model development, is estimated at $200,000 to $300,000 for a software development group with 10 to 20 seats.
  • That's just nutty... (Score:4, Informative)

    by The Desert Palooka ( 311888 ) on Friday March 11, 2005 @08:39AM (#11909318)
    With Simics Hindsight it is now possible to step back just before the error and then run forward again, providing another opportunity to reproduce the error and look more closely at what occurs in detail, without having to re-launch the program. Simics Hindsight can even unboot an operating system, running the code backwards until it reaches the initial hardware launch instruction after a hardware reset.

    That would be quite nice... It almost seems like a shuttle head or what not for programmers... Rewind, play, slow motion and so on... I know they said it's the first complete one, but is there anything else out there like this?
    • Take a look at
      http://www.lambdacs.com/debugger/debugger.html [lambdacs.com] I'm sure a Hindsight sales person would (correctly) say this isn't a complete solution, but its the closest thing I've seen before this article.
    • It's just reminded me of a 6502 debugger I wrote over 20 years ago that could step back as well as forwards. It was a bit slow because it actually did store the state of anything that changed during the execution of any instruction. It also didn't handle changes to hardware state or calls to OS functions that had side effects. Oh, and it needed tons of memory :)

      In spite of all those things it was still extremely useful. As they say though, the devil's in the detail. Getting it to work in a general purpose
    • I had to code a visual debugger for a small embedded system some 5+ years back. The debugging data was fetched from the system via some I/O, so I did what seemed obvious to me: logged all the data (relatively small amounts of it anyway), and gave the debugger ability to go back in the log. Of course, when not at current PC, the information was view-only, but the feature was awesome. No more of those "oops, one step too many, start over, dang!" Although the debugger itself was quite low-end, I still miss tha
    • The only way to "back up" execution is to save your state as you go. In Computational Theory, this amounts to the fact that, given a particular UTM in a particular state (state, position on the tape, values on the tape) an infinite number of UTMs exist which, at some point in their execution, arrive at a state equal to that particular state. (we leave the proof to the reader)

      Thus the only way to "back up" computation is to know the past of that machine, i.e. a state log of the execution of the program.

      B
      • by VAXman ( 96870 ) on Friday March 11, 2005 @10:31AM (#11910366)
        The only way to "back up" execution is to save your state as you go.

        At first I wasn't sure that your statement was true, but after thinking about it for 30 seconds or so, I realized it definitely was. Every instruction produces a deterministic calculation and can be reversed, right? If we have "ADD EAX, EBX", and know the current values of EAX and EBX, going backwards is easy, right?

        Well, one really difficult case is jumps. How do you know what the previously executed instruction was? On X86 this would be pretty difficult since the encodings are non-regular, but even on an ISA with regular encodings it would be non-trivial, because it would be hard to figure out whether you got to the instruction via a jump (which could be from anywhere in memory) or from the previous instruction.

        Add things such as Self-Modifying Code, and you have a real headache. Yes, you definitely need to track state as you go, though I'm not sure you'd need to save anything more than just the Instruction Pointer (which X86 does have a mechanism for). If you know what instructions were executed, it should be pretty easy to backtrack in time. I think.
        • by matman ( 71405 )
          Not only that, just try to undo a "i += rand()" type of statement... or user input... or a network call. Most network protocols do not support "forget the last three statements and roll back in state". :)
        • by Hynee ( 774168 )

          The only way to "back up" execution is to save your state as you go.

          At first I wasn't sure that your statement was true, but after thinking about it for 30 seconds or so, I realized it definitely was. Every instruction produces a deterministic calculation and can be reversed, right? If we have "ADD EAX, EBX", and know the current values of EAX and EBX, going backwards is easy, right?

          Try this UTM program:

          Set A and B to the values 1 and 2
          Add values of A and B and put them in C
          Put value x1 and y1 int

        • by pVoid ( 607584 )
          Every instruction produces a deterministic calculation and can be reversed, right?

          Wrong.

          mov eax, 0        ; whatever was in eax before is gone
          mov eax, [eax]    ; overwritten again, this time with a loaded value
          xor eax, eax      ; zeroes eax no matter what it held
          jmp eax           ; nothing records where you jumped from
          imul eax, 0       ; multiplying by zero erases the old value
          ...
          Basically any code that moves data is clobbering some other data, and since moves, loads and stores make up the bulk of what both RISC and CISC processors execute, that covers pretty much a majority of cases.
        • There are instances where knowing the instructions will not help you back track, such as certain encryption algorithms.

          You're also not taking into account all the data and states a piece of software may have traveled through, which is not as simple as just back-tracking through the execution.

          There are also other fun little things like interrupts, threads, processes, etc. It can get quite complicated very quickly.

          In some instances you'd be able to get away with an IP back-trace, but with the complexity of mo
    • by zootm ( 850416 )
      There's an academically interesting (I'm assured :)) Java system similar to this called Bdbj [ed.ac.uk]. I'm not sure if it's useful in a real context, but I assume it is to some degree.
    • by foobsr ( 693224 )
      but is there anything else out there like this?

      Yes, in the museum.

      The debugger that came with BS3 on the TR440 [vaxman.de] had an option that enabled you to step back a defined (small due to lack of space for saving) number of steps if you set the appropriate switch when compiling. Very cool feature - 30 years ago !

      CC.
  • UI (Score:5, Interesting)

    by GigsVT ( 208848 ) on Friday March 11, 2005 @08:43AM (#11909347) Journal
    They say the way they accomplish this is running the program in some sort of sandbox and taking checkpoints every so often and then when you step back, it actually runs forward from the closest checkpoint and stops one instruction short.

    My question is how UI interactions are handled. If the execution between the checkpoint and current-1 instruction includes a UI interaction, it might be very confusing to the programmer to know what or how many UI interactions need to be carried out to accomplish the backstep.
    • Re:UI (Score:1, Interesting)

      by lazer_nm ( 593581 )
      Well, I don't know; I'm an embedded programmer, and as the flyer says, finding out what happened when is a crucial part of our life. I always use a Lauterbach debugger [lauterbach.com] with ETM (Embedded Trace Macrocell) to do my debugging, and of course I can reset whatever I want and step back as far as I need to, without the "reverse gear"
    • A big honking emulator with state saving in a memory buffer. I'm sure they've already accounted for recursive loops and interactions; it's just that they've improved the ability to load/save states.

      Don't the video game emulators already do this?
      • Don't the video game emulators already do this?

        Yeah. To create a movie they save the state and log the following input, so the game can be replayed exactly. ZSNES also has a rewind key, but I haven't messed with it.
    • Re:UI (Score:3, Interesting)

      by TuringTest ( 533084 )
      Furthermore, this won't work for finding bugs in concurrent programs due to race conditions or parallel threads corrupting a shared resource.

      Those bugs might be caught if the environment recorded instructions one by one, but as it is you may find a bug in your execution, roll back to the checkpoint, and find that the bug is gone in the replay. Hey, that would be funny if it happened on a TV football game...
    • Re:UI (Score:2, Interesting)

      by DanShearer ( 7067 ) *
      The beauty of full systems simulation is that you are simulating the full system :-) So UI interactions also take place in the simulated world.

      The trick is to have a simulator fast enough so that you can do UI interactions, because the user isn't in the simulated world. As it happens Simics is fast enough and this is exactly how it works. I'm on the Simics product team, and one way we have of proving the point is to run operating systems and their applications backwards for which we cannot have the source
  • Mirror (Score:5, Informative)

    by tabkey12 ( 851759 ) on Friday March 11, 2005 @08:45AM (#11909353) Homepage
    Mirror of the PDF [mirrordot.org]

    Never underestimate the Slashdot Effect!

  • by jaxdahl ( 227487 ) on Friday March 11, 2005 @08:45AM (#11909356)
    This seems to create a virtualization layer where checkpoints are saved periodically, then instructions are single stepped through. So to step back, it goes to the first checkpoint before the instruction you want to step back to, then it single steps up to that point. This would aid in kernel-level debugging where data structures might be overwritten from almost anywhere in the computer that can access the kernel space -- no need to set a watchpoint then reboot and wait for the next error to occur.
    • by goombah99 ( 560566 ) on Friday March 11, 2005 @10:15AM (#11910154)
      The state of a computer is not just the state of the memory; it includes the hard disk as well. To give one tiny example: virtual memory. To give a better example, if a program overwrites a file, you have to checkpoint back over that too. To give an even better example, if you were debugging a disk defragmenting program, every bit on the disk could move.
  • Yeah, I can see some technical hurdles here ... like storing all old variable/register contents, jump addresses, etc.

    How in the world did they pull this off?
    • If you read the article carefully, it does actually say. Basically they've optimised the printf() and scanf() functions, from the standard C libraries, to a very high degree. Using these optimised functions allows them to literally run the processor backwards, with a little help from Euler Integration to approximate the execution path. It's very clever indeed.
    • It's in the article, the section labelled "how it works".

      Assuming most slashdotters are lazy f*cks, a condensed explanation: it takes a snapshot every couple of seconds, then when you want to go backwards it moves all the way to the previous snapshot, then runs forwards ignoring sleep() to appear instant.

  • by bangzilla ( 534214 ) on Friday March 11, 2005 @08:47AM (#11909368) Journal
    It's all very well to be able to run code backwards/forwards/slo-mo/etc, but how to handle non deterministic external events coming in from the network? Does this tool presume that all applications to which it will be applied live in isolation?
    • by tesmako ( 602075 ) on Friday March 11, 2005 @08:54AM (#11909428) Homepage
      Since it is based on the whole-system simulator Simics -- Yes, it does assume that the app runs in isolation, since all external stuff is just simics simulations.
      • Then the example in TFA is pretty bad.
        It's exactly an example of an external packet containing a wrong checksum.

        If the system is in isolation, you would have to come up with the idea of sending a malformed packet yourself instead of just letting it run until it crashed. That doesn't seem a very likely thing to try.
    • by LiquidCoooled ( 634315 ) on Friday March 11, 2005 @08:55AM (#11909439) Homepage Journal
      No, you got it all wrong.

      This product is a cleverly disguised time machine.
      You can actually roll back and reverse to see the initial "First post!" remark, and undo the Slashdot effect.

      If you look closely, you can also see the cognitive response from Hemos as he clicked Accept on this submission.
    • All it has to do is log all non-deterministic external input during the initial run, and re-play the results from the log during the replay run.

      So, if it does a virtual DMA to get the network packet the first time, copy the data from that DMA into a log. When you're re-executing, copy the DMA from the log instead of from the NIC.

      It's similar to ReVirt (my project), which was slashdotted here [slashdot.org].

      The downside, of course, is that you need to log *all* network data; so if you actually need that 1Gb ethernet, you

  • by Leadhyena ( 808566 ) <nathaniel DOT de ... T purdue DOT edu> on Friday March 11, 2005 @08:50AM (#11909395) Journal
    I can see how this software can come in real handy, but it won't work in every situation. It states in TFA that Hindsight doesn't take the naive approach of recording every instruction, but rather takes snapshots and tries to fill in the gaps. There are many types of calculations out there (think The Game of Life or other CAs) that by their nature cannot be reversed, so all of those states would have to be stored or it would be mathematically impossible to calculate the reverse steps.

    Therefore, I can't see their approach being foolproof, and the over-obvious advertisement (this is what normal debugging toolbars look like, but they don't have a nifty step-one-back feature) seems too bright to be without caveat. At $5,000 a seat I'd say buyer beware.

    • Except that it forward-fills the gaps, and that is easy.

      It stores a snapshot every now and then, and when you want to go back it actually goes forward from the first snapshot before the time index you want.

      If you make snapshots 10 times a second and can forward-fill in a tenth of a second, the programmer will not notice this. (And if he does, make more snapshots)

      HTH
      --Blerik
    • by Anonymous Coward
      They are not doing "reverse steps". They go back to a previous checkpoint and re-execute the code (forward) until they reach the desired point.

      For example, if you're at point t=10, your previous checkpoint is at t=0, and you want to go back to t=9, their system first goes back to t=0 and then re-executes the code until t=9.

      The thing is that you have to log everything non-reversible (I/O, interrupts, syscalls, etc.) and use the logged values when re-executing.
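      A minimal sketch of that record/replay idea in C (read_device() and the io.log file are hypothetical stand-ins for real device I/O, not anything Simics or ReVirt actually expose):

      #include <stdio.h>
      #include <stdlib.h>

      enum mode { RECORD, REPLAY };

      static enum mode run_mode = RECORD;   /* flip to REPLAY for the second run */
      static FILE *logf;

      static unsigned read_device_raw(void) {   /* pretend hardware access */
          return (unsigned)rand();
      }

      /* Every nondeterministic input goes through here: logged on the first
         run, fed back from the log when re-executing. */
      unsigned read_device(void) {
          unsigned v;
          if (run_mode == RECORD) {
              v = read_device_raw();
              fwrite(&v, sizeof v, 1, logf);      /* remember what we saw */
          } else if (fread(&v, sizeof v, 1, logf) != 1) {
              abort();                            /* log exhausted */
          }
          return v;                               /* identical on both runs */
      }

      int main(void) {
          logf = fopen("io.log", run_mode == RECORD ? "wb" : "rb");
          if (!logf) return 1;
          for (int i = 0; i < 4; i++)
              printf("%u\n", read_device());      /* same output when replayed */
          fclose(logf);
          return 0;
      }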
    • by CausticPuppy ( 82139 ) on Friday March 11, 2005 @09:33AM (#11909747)
      There are many types of calculations out there (think The Game of Life or other CAs) that by their nature cannot be reversed, so all of those states would have to be stored or it would be mathematically impossible to calculate the reverse steps.

      It also says in TFA that it doesn't actually calculate the reverse steps, so it doesn't matter if it's mathematically impossible.

      What it does do is take complete snapshots every (for example) 100 steps. In order to move "backwards" a step, it returns to the previous snapshot (a known state) and goes forward 99 steps.
      Then it returns to the same snapshot and goes forward 98 steps. And so on. So from your perspective, you see the 99th step, 98th, 97th, and on down. It only LOOKS like it's running backwards.

      This would even work for the game of life.

      So the performance tradeoff would be this:
      More frequent snapshots make forward execution slower, because more time is spent saving state at regular intervals, but "reverse" execution becomes faster because fewer steps have to be replayed from the previous snapshot.
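      In C, the whole scheme boils down to something like this sketch (State, step(), and SNAP_EVERY are invented stand-ins, not anything from Simics; the "machine" is a toy register file):

      #define SNAP_EVERY 100UL             /* snapshot interval, in steps */
      #define MAX_SNAPS  1024

      typedef struct { unsigned long pc; long reg[8]; } State;

      static State snaps[MAX_SNAPS];       /* snaps[i] = state after i*SNAP_EVERY steps */

      static void step(State *s) {         /* one deterministic instruction */
          s->reg[0] += s->reg[1];
          s->pc++;
      }

      /* Run forward from step 0, saving a snapshot every SNAP_EVERY steps. */
      void run_to(State *s, unsigned long target) {
          for (unsigned long t = 0; t < target; t++) {
              if (t % SNAP_EVERY == 0)
                  snaps[t / SNAP_EVERY] = *s;
              step(s);
          }
      }

      /* "Step back" to step cur-1: reload the nearest earlier snapshot and
         replay forward; nothing ever executes in reverse. */
      void reverse_step(State *s, unsigned long cur) {
          if (cur == 0)
              return;
          unsigned long target = cur - 1;
          unsigned long base   = (target / SNAP_EVERY) * SNAP_EVERY;
          *s = snaps[base / SNAP_EVERY];
          for (unsigned long t = base; t < target; t++)
              step(s);
      }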
      • Even more efficient, when "running backwards", go back to the checkpoint and re-execute everything while logging each instruction (log an instruction: record the previous value of anything this instruction overwrites, including the program counter if not the next instruction). Then you can do a direct "go backwards" mode. You still need to log each memory location that gets overwritten between snapshots (and use those values when re-executing).

        Instead of snapshots, you could record state at the beginning
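        A bare-bones version of that undo log in C (write_mem(), undo_entry and the sizes are made up for illustration; a real one would record register and program-counter changes the same way, and would not use a fixed-size log):

        #include <stdint.h>

        #define LOG_MAX 4096

        typedef struct { uint32_t addr; uint32_t old; } undo_entry;

        static undo_entry undo[LOG_MAX];    /* no overflow handling in this sketch */
        static int undo_top;

        static uint32_t mem[1 << 16];       /* toy word-addressed memory */

        /* Every store goes through here so the clobbered value is remembered. */
        void write_mem(uint32_t addr, uint32_t val) {
            undo[undo_top].addr = addr;
            undo[undo_top].old  = mem[addr];
            undo_top++;
            mem[addr] = val;
        }

        /* Undo the most recent store: a genuine single step backwards for memory. */
        void step_back(void) {
            if (undo_top > 0) {
                undo_top--;
                mem[undo[undo_top].addr] = undo[undo_top].old;
            }
        }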

    • RTFA (Score:2, Informative)

      by p3d0 ( 42270 )
      There are many types of calculations out there (think The Game of Life or other CAs) that by their nature cannot be reversed, so all of those states would have to be stored or it would be mathematically impossible to calculate the reverse steps.
      They take periodic system checkpoints and then work forward to the instruction preceding the one you started from. There's no reason the Game of Life wouldn't be amenable to this.
  • Wow ! (Score:2, Funny)

    Now when I'm playing online, when I die I can rewind.

    I can hear it now, "Godlike!"
  • by Anonymous Coward on Friday March 11, 2005 @08:51AM (#11909402)

    http://www.lambdacs.com/debugger/debugger.html/ [lambdacs.com]

    Seems like this has been done before, at least for java apps...
  • BSOL? (Score:3, Funny)

    by bunyip ( 17018 ) on Friday March 11, 2005 @08:53AM (#11909417)
    So, would reversible computing let me have a Blue Screen Of Life?

    That would be so cool...

    Alan.
    • So, would reversible computing let me have a Blue Screen Of Life?

      Since it would be occurring backwards, wouldn't it be red?
  • by selectspec ( 74651 ) on Friday March 11, 2005 @08:53AM (#11909422)
    Hindsight is a service within their platform emulator. While it sounds nifty, and I'm all for it... emulators never behave the same as the real platform, especially in embedded environments. The timing of peripherals is never the same on the emulator as on the platform. The result is that lots of time is spent debugging the emulator environment, which bears little fruit for the platform environment.

    What would be far more useful, would be to write tools that took advantage of many of the onboard hardware debugging capabilities of some of the common embedded chip architectures.
    • That's true, but often the real equipment is the first few off the production line, and hence is quite expensive and in limited supply, and even buggy, whereas the emulators can be duplicated as many times as you want.

      Also, you have better control of what goes on in an emulation, and that can help you find mysterious bugs that are very opaque on the real hardware.

      Finally, there's certain situations where emulators are probably the only way to go- for example Space Shuttle software validation is done on

        • I agree that life would suck without emulators. However, despite my support for the Hindsight vendor, I don't believe they "will change debugging as we know it", as they claim on their website.

          But you are absolutely right. In my own experience with embedded development, emulators were often all you had until the eval boards showed up, and those were usually hogged by the bootstrap teams, who couldn't care less about emulation and were only interested in real hardware. Same deal with the prototype boards.

        Tha
        • I actually prefer to run about half my test cases on an emulator. Indeed I don't believe that good testing can be performed without isolating from the real hardware (but there are exceptions, clearly if the hardware is dirt cheap or something, and the debugging hooks are excellent...)
  • by Anonymous Coward on Friday March 11, 2005 @08:54AM (#11909432)
    just invert the micro clock signal so everything runs backwards :)
  • Not by a decade. (Score:5, Interesting)

    by Murmer ( 96505 ) on Friday March 11, 2005 @08:56AM (#11909449) Homepage
    This technology has existed, in GPL form [shout.net], for ten years. It's just had exactly zero uptake.

    I read this usenet post [google.ca] every now and then when I'm trying to fix something, and it makes me want to cry every time I do.

    • Mod parent up (Score:4, Informative)

      by Animats ( 122034 ) on Friday March 11, 2005 @11:32AM (#11911034) Homepage
      That's impressive technology. And it needs to be better known. Reverse-stepping has been available for gdb under Linux since at least 1999, and nobody knows it. So please, mod the parent up.

      This has real potential. Beta versions of programs should run with this installed, so the core dump can be stepped backwards to the trouble spot. This could make Linux software significantly more reliable.

    • Just... wow. Ten years? It could've been the most awesome debugging tool ever, and nobody wanted it? Why?
    • The links you posted aren't anywhere close to the level of sophistication that this software appears to be. While being able to use ptrace to intercept system calls in this manner is a cool hack, I don't think it's on the same level. The idea of reverse execution doesn't appear to be a new one, but the usability of this package appears to be far beyond that offered by mec.

      Maybe I'm misreading the information, but the impression I got is that it has the capability to setup a virtual machine so that you can
    • Seems to me it at least deserves an attempt at revival. I may email the developer tonight if I have time and see if he'd be interested in participating (in an advisory capacity at least).

      I don't have time to do it all myself, but I can at least try to start the ball rolling...
    • Re:Not by a decade. (Score:3, Informative)

      by mec ( 14700 )
      Wow, what to say?!

      First, it was kind of silly to name the program the same as my user name, but I never found a better name for it than "my trace-and-replay debugger".

      My original plan was to write this for Solaris and sell it, hence the insistence on tracing and replaying without modifying the target program or the operating system, and that's why the replay controller messes with gdb's mind, so that it can work with a stock gdb rather than needing gdb extensions.

      I developed a Linux version first because
  • by Anonymous Coward
    Bash VB all you want, but it's had a (more limited) version of this feature for years. It's a gigantic help when debugging. In my experience the error occurs, or is detectable within a few lines of the crash/exception. So you don't necessarily need to back up the entire call stack, just enough to see what's broken immediately before the crash/exception occurs.

    Coupled with fix and continue, you have not only a more productive development environment, but an environment where you can press a prototype into

  • We've seen a few April Fools' jokes claiming to be able to run code backwards. This is impossible, at the lowest level. For example, take the logical OR, C = A + B (top row is the value of B, first column the value of A):

    A\B | 0 | 1
    ----+---+---
      0 | 0 | 1
      1 | 1 | 1

    We know the result, C. How do we know whether A, B, or both were 1? We lost information (2 bits of info became 1) and cannot get it back. So at first I dismissed any ridiculous claims of reverse execution. But we aren't the 1st of April...
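    Spelled out in C, the folding is easy to count: OR maps four input pairs onto only two outputs, so the output alone can never tell you which inputs produced it (a throwaway illustration, nothing more):

    #include <stdio.h>

    int main(void) {
        int preimages[2] = {0, 0};
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                preimages[a | b]++;             /* count input pairs per output */
        printf("output 0: %d input pair\n",  preimages[0]);   /* 1 */
        printf("output 1: %d input pairs\n", preimages[1]);   /* 3 */
        return 0;
    }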

  • by Anonymous Coward on Friday March 11, 2005 @09:01AM (#11909494)
    Reversible computing is a way of computing without (permanently) consuming energy. Look it up if you're not familiar, because it's pretty interesting.

    Anyway, the headline is misleading.
    • You mean that if all the gamers would simply unplay their games after they get fragged it would solve the energy crisis?
    • by PxM ( 855264 )
      I was getting excited since I thought they had actually created a practical reversible computing hardware system. The idea behind true reversible computing is that information flow in computation is linked to the energy lost as heat during computing. Von Neumann showed that there was a hard limit on the amount of energy needed every time a bit of information is lost, dependent on Boltzmann's constant and the temperature of the system. The ultimate goal is to have a computer that looks a lot like particle physics
      • ...you can't do the same with most binary operations since all the common ones except NOT...

        I'm not trying to be an ass here, but isn't that why they call NOT a unary operation, because of one operand?
      • I too was disappointed with the headline choice.

        Actually, you can solve the 2->1 folding problem quite easily. You just need to find gates that output as much information as they use. For example, an xor gate produces a single output, but if you invent the gate 'a,b' -> 'a, a^b' (xor*) then it is fully reversible and it is its own inverse.

        The trick is to make the electronics themselves fully reversible, rather than emulating xor* using a standard xor and a wire. I don't know how this is done. My
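        For what it's worth, the arithmetic of that gate is easy to check in plain C, even though building the reversible electronics is the hard part (xor_gate() here just illustrates the mapping, it is not a circuit):

        #include <assert.h>

        /* (a, b) -> (a, a ^ b): as many outputs as inputs, so nothing is lost. */
        static void xor_gate(int *a, int *b) {
            *b = *a ^ *b;       /* a passes through untouched */
        }

        int main(void) {
            for (int a = 0; a <= 1; a++)
                for (int b = 0; b <= 1; b++) {
                    int x = a, y = b;
                    xor_gate(&x, &y);   /* forward */
                    xor_gate(&x, &y);   /* the gate is its own inverse */
                    assert(x == a && y == b);
                }
            return 0;
        }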
  • OCaml anyone? (Score:4, Interesting)

    by fab13n ( 680873 ) on Friday March 11, 2005 @09:02AM (#11909498)
    OCaml [inria.fr] has been offering timestamps and backward debugging for years, in addition to being a great programming language (the backward debugger's implementation is based on Unix forking and copy-on-write, so running it on Windows requires Cygwin). Simply compile your stuff to byte-code rather than with the native optimizing compiler, run the debugger, and use backstep/backward just as you would use step/forward. Breakpoints block execution in both directions.

    And what about GUIs and other side effects? Debugging a program in which such side effects are deeply interleaved with the algorithmics can indeed be tricky, although smart timestamping from the debugger will reduce glitches. But if you don't know better than to randomly mix algorithm and front-end in the first place, then you'd better fix the programmer rather than the program...

  • I can see from the screenshots that they provide the debugger as an eclipse plugin.

    Does that include a C programming environment for eclipse or do they use the CPE project? I didn't know that was at a usable state yet.

    I use eclipse for java and it is excellent.
    • I've been using the C++ Eclipse plugin for a while now. It's usable, but I don't know how the build system works, because we wrote our own makefiles (I'm sure there's some way of telling it to use those makefiles, but I just haven't taken the time to find it). Autocompletion works pretty well most of the time - I'm using it mostly for the CVS integration!
  • This is *very* intaresting^g^n^i^t^s^e^r^aersting technology.

    Seriously, if this really works, the $5K/seat could well be worth it. Now, if they could convince Intel/AMD/IBM to somehow provide advanced support in the hardware...

    -k

    • Nobody is gonna pay $5k a seat for this. Companies are too cheap to even consider it. The vast majority of bugs just don't require this level of debugging sophistication to solve in short order. The few that do are rare enough to cancel out the benefit of this software. However, some people truly suck at debugging, in which case they can probably justify the $5k.

      If this company was smart, they'd sell it for $500 and sell 100 times more copies.
  • I know this won't be popular, but the lowly Visual Basic has had this feature forever. Of course, this will be very useful for real (compiled) languages and has to be better than the piece of crap "Edit and Continue".
  • by eddy ( 18759 ) on Friday March 11, 2005 @09:07AM (#11909532) Homepage Journal

    ReVirt [umich.edu]:

    The ability to replay the execution of a virtual machine is useful in many ways besides intrusion analysis. For example, it enables one to replay and debug any portion of a prior execution. We have built an extension to gdb that uses virtual-machine replay to provide the illusion of time travel. In particular, we provide the ability to do reverse debugging, through commands such as reverse watchpoint and reverse breakpoint. See our paper in USENIX 2005 for details.

  • by same_old_story ( 833424 ) on Friday March 11, 2005 @09:09AM (#11909550)
    John McCarthy has been talking about giving programming languages the notion of time for quite some time (no pun intended).

    In this paper, he proposes the Elephant language [stanford.edu] that can refer to the past in computer programs.

    Pretty cool stuff!

  • I can back up easily enough with a call stack. I can see some situations where this approach might be better than a simple stack, but those instances would seem to be few and far between.
  • Is This Really New? (Score:3, Interesting)

    by Ginnungagap42 ( 817075 ) on Friday March 11, 2005 @09:14AM (#11909586)
    I remember that several of the older compilers like Borland's Turbo Pascal, Turbo C and Microsoft C and MASM could run reverse execution through the debugger. They also had the "animate" feature that let you step through the code automatically, but slowly so you could watch each line of code as it was executed. I remember setting my PC up with two video cards: a monochrome Hercules card and an EGA card. A lot of the compilers from those days supported multiple graphics card output - the code would appear on the monochrome monitor and the running executable would appear on the color monitor.

    Being able to trace backwards was extraordinarily useful, and it's one thing I miss in modern compilers. I always assumed that this capability was taken out with the advent of event-driven (GUI) programming. That's when a lot of this kind of functionality seemed to disappear.
  • by Anonymous Coward
    A while ago there was a jwz post about being able to run a debugger backwards [slashdot.org] (for some reason I thought this needed kernel patches). If the checkpointing happened at "machine" level this would be possible regardless of the language...

    (Actually someone has already posted [slashdot.org] the links jwz was talking about)
  • Hmm... (Score:2, Funny)

    by thed00d ( 822393 )
    Interesting, but will it work on a dead badger [strangehorizons.com] running GNU/Linux? 'Cause that's where I do all my development work.
  • The term "reversible computing" has also been used for a type of circuit that does not consume energy, other than entropy, for computation. The trick is to run a computation in parallel that goes in the opposite direction. Theoretically, this would mean really long-lived laptops and space probes, but I haven't heard of anyone testing this on more than a few gates.
    • The term "reversible computing" has also been used for a type of circuit that does not consume energy, other than entropy, for computation.

      I think you got it just the wrong way.

      Traditional computers generate entropy because of the information destroyed. Entropy created is necessarily associated with heat. With reversible computing there is no entropy increase, which in theory means less heat produced and less energy consumption.
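      For a sense of scale, that minimum heat per erased bit is k_B * T * ln 2 (the Landauer limit); a quick check in C, assuming room temperature of 300 K:

      #include <math.h>
      #include <stdio.h>

      int main(void) {
          const double k_B = 1.380649e-23;    /* Boltzmann constant, J/K */
          const double T   = 300.0;           /* assumed temperature, K */
          printf("%.2e J per erased bit\n", k_B * T * log(2.0));   /* ~2.9e-21 J */
          return 0;
      }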

  • Although the summary cleared things up.

    At first glance, I thought I'd eventually take an "upgrade" path to a VIC-20 via "Reversible Computing"
  • Come from? (Score:4, Funny)

    by AJWM ( 19027 ) on Friday March 11, 2005 @10:59AM (#11910663) Homepage
    Reverse execution? Are we finally going to see an implementation of the COME FROM [fortran.com] statement?

    (See also the entry in the jargon file [catb.org].)
  • This is especially significant when you consider 50% of a software engineer's time is spent debugging software.

    They assume that programmers... DEBUG! Hah!
  • Since this appears to be a sandbox tool, what's the problem? The only real problem I can see, and that people bring up (since everything else is deterministic in a closed environment), would be random or pseudo-random number generation. Could it not (or does it) simply save the results of system clock queries? Since the system clock is used to seed most random number generators, saving the return values and feeding them back could eliminate the problem. Clearly in encryption intensive programs this would
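    A toy version of that clock-seeding idea in C (the seed.log file and the record/replay switch are made up for illustration; a real tool would hide all of this):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(int argc, char **argv) {
        unsigned seed;
        FILE *f;
        int replay = (argc > 1);                         /* any argument = replay run */

        if (replay && (f = fopen("seed.log", "r")) != NULL) {
            if (fscanf(f, "%u", &seed) != 1) seed = 0;   /* reuse the recorded seed */
            fclose(f);
        } else {
            seed = (unsigned)time(NULL);                 /* the nondeterministic input */
            if ((f = fopen("seed.log", "w")) != NULL) {
                fprintf(f, "%u\n", seed);
                fclose(f);
            }
        }
        srand(seed);
        for (int i = 0; i < 3; i++)
            printf("%d\n", rand());                      /* identical on record and replay */
        return 0;
    }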
  • Sorry to be the naysayer this time, but I really don't see how this is so utterly impressive. For starters, all it takes is a little logical thinking to backtrack when you're debugging: every (or, nearly every) operation you can do on a computer can be thought of as having an input state, an operation on that input, and an output state. Backtracking, therefore, is as simple as coming up with the reverse of whatever the operation was, giving you the input state once more.

    And on top of that, most IDEs ha

    • every (or, nearly every) operation you can do on a computer can be thought of as having an input state, an operation on that input, and an output state. Backtracking, therefore, is as simple as coming up with the reverse of whatever the operation was, giving you the input state once more.

      after execution of A=B the state is A is 1, B is 1... tell me what A was before execution. Clobbering of data is not reversible and is very common (as in every time there's an assignment operator used). Even things like
  • It is possible to replay the execution of programs that communicate with the outside world, rather than just in an isolated virtual machine: you have to log nondeterministic events. See http://www.erights.org/elang/concurrency/determinism/overview.html [erights.org].

    The first language I know of that supported replay is the Abundance database language [mindprod.com], back in 1986. Also see http://c2.com/cgi/wiki?ReversibleProgrammingLanguage [c2.com].

"The great question... which I have not been able to answer... is, `What does woman want?'" -- Sigmund Freud

Working...