
A Diagnosis of Self-Healing Systems

gManZboy writes "We've been hearing about self-healing systems for a while, but, as usual, so far it's more hype than reality. Well, it looks like Mike Shapiro (from Sun's Solaris Kernel group) has been doing a little actual work in this direction. His prognosis is that there's a long way to go before we get fully self-healing systems. In this article he talks a bit about what he's done, points out some approaches that are alternatives to his own, and covers what's left to do."
This discussion has been archived. No new comments can be posted.
  • Your operating system provides threads as a programming primitive that permits applications to scale transparently and perform better as multiple processors, multiple cores per die, or more hardware threads per core are added. Your operating system also provides virtual memory as a programming abstraction that allows applications to scale transparently with available physical memory resources. Now we need our operating systems to provide the new abstractions that will enable self-healing activities or graceful degradation in service without requiring developers to rewrite applications or administrators to purchase expensive hardware that tries to work around the operating system instead of with it.

    Neither the applications nor the OS should depend on the other providing any failover or self-healing services; each should be prepared to go it alone if necessary (as it might be the failover system). Services that crash should restart themselves, and so on. This part is handled pretty well by most enterprise-grade server software. It's the operating systems we're waiting on to play catch-up.

    And I'm still waiting to see any box that can replace its own power supply after someone flips the 115/230 switch. Once we get that, then we'll have truly self-healing systems. And all you BOFH's out there might be looking for a new career...
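The restart-on-crash behavior mentioned above is easy to sketch. Below is a minimal, hypothetical supervisor loop; the command, restart limit, and backoff policy are all illustrative choices, not any particular product's behavior:

```python
import subprocess
import time

def supervise(cmd, max_restarts=5, backoff=2.0):
    """Restart a service whenever it exits abnormally, with linear backoff.

    Returns True on a clean exit, False once the restart budget is spent
    (at which point a human should be paged rather than looping forever).
    """
    restarts = 0
    while restarts <= max_restarts:
        proc = subprocess.Popen(cmd)
        proc.wait()
        if proc.returncode == 0:
            return True                    # clean exit: nothing to heal
        restarts += 1
        time.sleep(backoff * restarts)     # wait longer after each failure
    return False                           # give up and escalate
```

Real supervisors (init, daemontools, and the like) add rate limiting and logging on top of this same core loop.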

    • by grahamsz ( 150076 ) on Tuesday December 21, 2004 @07:49PM (#11154052) Homepage Journal
      Plenty of Sun's boxes have redundant power supplies.

      If something goes wrong with one, the system should detect either too little or too much DC voltage or current coming from it, and switch to its backup.

      Your suggestion doesn't make much sense. Should Mozilla know what to do if a USB mouse fails or is removed unexpectedly? Of course not; the Mozilla developers expect that this will be taken care of.

      Likewise, when a correctable memory or disk error occurs, the memory controller or disk firmware should deal with it and the application should be none the wiser.

      • Plenty of Sun's boxes have redundant power supplies.

        Click here to ruin the joke. [ntk.net]

        Should Mozilla know what to do if a USB mouse fails or is removed unexpectedly? Of course not, the Mozilla developers expect that this will be taken care of.

        Of course not... the point is not that each layer (peripheral, BIOS, kernel, application) can handle errors in all other layers. The point is that Mozilla should be designed to be able to recover from crashes without help from the kernel, BIOS, or anything else. Likewise, if a USB mouse somehow gets "confused" (protocol-wise) it should take the initi

        Your suggestion doesn't make much sense. Should Mozilla know what to do if a USB mouse fails or is removed unexpectedly? Of course not, the Mozilla developers expect that this will be taken care of.

        No, but Mozilla could be written to survive a memory access failure. It could be written so that it does not assume that drives and RAM are infallible.

        Likewise when a correctable memory or disk error occurs... The memory controller or disk firmware should deal with it and the application should be none-the

    • .... my thoughts have always been that the computer is a neuron, so we designed our system (server farm) so that neurons are backed up, and if one fails we simply restore it to a fresh server (fully automated -- it simply follows the logic path we humans follow). We use DCH, or Distributed Cheap Hardware -- aka we go to a local computer store/online and buy the biggest bang-for-buck stuff (Linux only) we can find. Average server lifetime is about 2.5 years, or about $10 a month. Our only missing part is an automati
  • Had this 3 years ago (Score:5, Interesting)

    by shoppa ( 464619 ) on Tuesday December 21, 2004 @07:45PM (#11154012)
    According to a documentary movie from 3 years ago, we already had this. HAL 9000 sent an astronaut out to help repair the antenna azimuth control board.

    Which turned out not to be faulty... hmmm...

    Some IBM mainframes are already at this level of self-diagnosis. Where I work, IBM repairmen show up with spare drives for the RAID array when they fail and the array phones IBM to report the fault. We don't know that a drive failed until the field service tech shows up!

    • Is a little bit different from self-healing, but they are in the same vein.

      I believe Sun are working on systems that will attempt to spot failure trends, so they can proactively identify other customers who may run into similar problems and then either have the system fix itself or send someone out to deal with it.

      The other mindset I've seen with RAID disks is: why bother replacing them? Disks are getting to the point that it's probably cheaper just to leave the dead one in there and power up a spare than
      • Actually, decent RAID systems have hot spares installed. When a disk dies, the hot spare takes its place -- you then replace the dead drive so the replacement becomes a new hot spare, not to get the array back to its original state.
        You can't just "not replace dead drives" unless you have something like 400 SCSI controllers on your machine, providing an insane number of hot spares for future failures.
    • by jomas1 ( 696853 ) on Tuesday December 21, 2004 @08:01PM (#11154168) Homepage
      Some IBM mainframes are already at this level of self-diagnosis. Where I work, IBM repairmen show up with spare drives for the RAID array when they fail and the array phones IBM to report the fault. We don't know that a drive failed until the field service tech shows up!

      Interesting. Where I work this happens too except instead of IBM techs we get sent techs who work for the city and instead of finding out that they were sent for some good reason, 90% of the time it turns out that the techs were sent for no reason. The techs usually don't even know that a machine called in a service request and waste a lot of time asking me why they were called.

      If the future holds more of this I hope I die soon.
    • Well, this seems like where computing services are heading, as IBM is doing extensive research on Self-Configuring, Self-Healing, Self-Optimizing, and Self-Protecting computing systems, called 'Autonomic Computing'

      Check out: Autonomic Computing [ibm.com]

    • HAL 9000 sent an astronaout out to help repair the antenna azimuth control board.

      Unfortunately, the astronaut (one 'Dave') wasn't able to comply, because HAL refused to open the pod-bay door.

    • I did R & D for an elevator factory 12 years ago, and back then we made a box that called home when something went wrong. The system scanned some critical points of the circuit and, if the readings were not in the expected pattern, an external modem was used to call the maintenance and send a full report of the readings, indicating the cause of the failure.

      For example, a broken door sensor could make the door fail to slow down when closing, and the only symptom would be the louder sound of the door sla

  • I use a little method I like to call the crontab coupled with shell scripts.
  • It looks like the T1000 won't be appearing any time soon: at least not until Skynet comes online.
  • TiVo (Score:3, Insightful)

    by Radak ( 126696 ) on Tuesday December 21, 2004 @07:51PM (#11154071) Journal
    TiVo has had self-healing Linux systems out there for five years now. There are virtually no complaints of TiVo software failure (hard drives certainly go bad from time to time, but very rarely does the OS get itself into a state it can't fix), so the notion that self-healing systems are still years off is silly. They may not be extremely advanced yet, but they're certainly out there.
    • Yes, self healing and reactive systems exist for specific implementations, but not as a whole. If the kernel or some low level drivers in a TiVo went bad somehow, I doubt the TiVo could repair that. I doubt we will see self-healing desktop boxes or general purpose servers for at least a few more years. I think that is what the article is trying to drive home.
    • Not really (Score:3, Insightful)

      by grahamsz ( 150076 )
      It's very easy to make a system self-healing when you are running in a completely controlled environment.

      Indeed, my TiVo very rarely crashes and always recovers, but the same is also true of every embedded system I've used -- be it a cellphone, weather station or alarm system.

      Now if I screw around modding my TiVo then it's entirely possible to crash it, and it doesn't recover very well from that...
      • I had a cellphone once that would crash regularly. Some crappy samsung thing, I think. Drove me batty.
      • Exactly. People don't go installing software on their TiVos weekly, or surfing the net with IE, or opening e-mail attachments with malicious code designed for that OS, etc. It's almost outside the realm of comparables.
    • That's a nice accomplishment, but it's not the same thing. TiVo has complete control over the software they run on their boxes, and I'm sure they test it quite carefully before shipping. This isn't unique to TiVo; you could say the same thing about the software that runs in your cell phone, DVD player, etc.

      These people are in a different domain; they don't know what apps their system will run, or what mistakes the sysadmin will make, or what worms someone will write next month -- they're preparing a re

    • There's a big, big difference between designing a system that can do one thing well, and a system that can do many things well. Worst case with TiVo, it restarts itself and you wonder why a few minutes of your program didn't record. Worst case with Solaris, your system restarts and everyone wonders where eBay went :-0
    • Must be nice...my DishPVR 501 crashes all the time. Normally if you're careful you can avoid it, but woe to the user that tries to change timer settings while it's recording something, or elects to stop watching something as it's being recorded and goes to the PVR menu. I can crash it about 30% of the time doing either of these common activities. Stupid thing takes almost 4 minutes to reboot, too, so you miss a big chunk of whatever you were trying to watch, plus it commonly crashes again because it gets co
  • Networks and servers, they tell me, can self-defend, self-diagnose, self-heal, and even have enough computing power left over from all this introspection to perform their owner-assigned tasks.

    After repeated viewing of those Thinkpad commercials where the techs tell the hysterical PHB to press the blue button on startup and thereby enable IBM to magically resurrect his hard drive, I summoned up the courage to try it. (The curly haired guy in those ads is also in one of my favorite commercials ("Please stay

    • Apples do have a magic button to access the HDD if the OS cannot boot.
  • if (Score:3, Insightful)

    by Kanasta ( 70274 ) on Tuesday December 21, 2004 @07:58PM (#11154143)
    if self healing = ms office keeps putting another icon in my start menu whenever I start word, then I don't want self healing.

    How many times do I have to move their icons to a submenu before they realise I don't want my root menu cluttered up with crap?
    • if self healing = ms office keeps putting another icon in my start menu whenever I start word

      It's much better than that. Self-healing means that disks in a RAID array can detect corrupted blocks of data using checksums and correct them from good mirrors on-the-fly. With multiple mirrors with checksums proving whether there is a problem or not, corrupt data files should be a thing of the past (on systems with RAID). It seems failing drives would be detected sooner, also.
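The checksum-and-mirror repair described in this comment can be sketched in a few lines. The dict-based "mirrors" and SHA-256 digests below are stand-ins for real disks and real RAID metadata, not any particular implementation:

```python
import hashlib

def read_with_repair(mirrors, block_no, checksums):
    """Return a good copy of a block, repairing corrupt mirrors in place.

    mirrors:   list of dicts mapping block_no -> bytes (stand-ins for disks)
    checksums: known-good sha256 hex digest per block_no
    """
    good = None
    bad = []
    for m in mirrors:
        data = m[block_no]
        if hashlib.sha256(data).hexdigest() == checksums[block_no]:
            good = data
        else:
            bad.append(m)
    if good is None:
        raise IOError("no mirror holds a valid copy of block %d" % block_no)
    for m in bad:                 # the self-healing step: rewrite bad copies
        m[block_no] = good
    return good
```

The key property is that the read path itself repairs the damage, so corruption never survives long enough to be noticed by an application.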

    • How many times do I have to move their icons to a submenu before they realise I don't want my root menu cluttered up with crap?

      However many it takes you to realize the problem is user error and upgrade to an OS that works well.

      If you compare computers to biological systems (something people should spend more time doing; biological systems are usually more robust), then self-healing is something like the concept of "radiant health." Before you worry about that, first you have to reach a state of healt

  • While a self-healing system sounds nifty, today's systems aren't even good enough to be healed manually.

    Uninstalling applications is often not handled by the OS and has to be done by the application itself, resulting in incomplete uninstallations, config files and registry entries that haven't been properly cleaned up, and whatever else.

    Files aren't versioned, so every change to a file simply erases the former content forever -- not so good if the former content might have been important.

    Undelete? Nope, we don't have that either. We have this hack of a Trashcan, but that won't help you much if some program deleted the file.

    Checking the integrity of an installed piece of software isn't possible either. Sure, there are third-party solutions, but again, that should be something the OS provides by default.

    Well, there are millions more reasons why today's systems suck and why it is often easier to simply reinstall from scratch than to try to actually fix the mess. And yep, that is true for Linux, Windows and MacOS alike -- for some more than for the others, but that's it.

    • Mostly, that's because Windows is a piece of shit.

    • Files arn't versioned

      Undelete?

      Check of integritiy of an installed piece of software

      During the desktop's formative years, the raw drive space needed to actually implement these kinds of things just wasn't available. This is why things like file versioning (popular on large systems like VMS, where the universities/companies running it had the money for the storage requirements) and permanent storage of "unwanted" files just didn't appear.

      The third problem is a bit tougher without some extra metadata
      • Something must be stored somewhere so that the system can identify a modified binary.

        Well, the system ultimately knows when something changes, since it is the one that changes it. You are right that one needs some metadata; that metadata in most cases already comes with the packages (deb/rpm) one installs, there just isn't a standard way to automatically check against it. However, this problem can be solved completely in userspace with a cron job -- it would just be nice to have a standard way to do it.
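A userspace integrity check of the kind described -- comparing installed files against digests recorded at install time -- could be sketched like this. The manifest format and the `read_file` abstraction are illustrative; this is not how deb/rpm actually store or verify their metadata:

```python
import hashlib

def verify_manifest(manifest, read_file):
    """Compare installed files against recorded digests; report drift.

    manifest:  {path: sha256_hex}, as a package database might record it
    read_file: callable returning a file's bytes, or None if it is gone
               (abstracted so the same check works over any file store)
    Returns (modified, missing) path lists.
    """
    modified, missing = [], []
    for path, digest in sorted(manifest.items()):
        data = read_file(path)
        if data is None:
            missing.append(path)
        elif hashlib.sha256(data).hexdigest() != digest:
            modified.append(path)
    return modified, missing
```

Run from cron, the output of such a check is exactly the trigger a self-healing system would need to reinstall or quarantine a drifted package.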
      • "and another entirely to define what this is."

        That would be "Survive, damn you! Survive!"

        There is this internal conflict we must have, where on the one hand we want our technology to have a survival instinct; so that it is motivated to look after itself while we are not.

        A bit like a human baby figuring out that sometimes mummy is not looking this way and it has to get out of the way of the reversing SUV by itself.

        On the other hand, the prospect of computers that have a survival instinct is (or bloody
  • If they break they can be fixed or replaced. I want self-healing (preferably Wolverine-type) for me.
  • Reset button (Score:2, Interesting)

    by mboverload ( 657893 )
    I don't know why Windows doesn't just have a reset button for all the settings, to return it to its original condition. It's a bitch to reinstall it twice a year, you know.
  • Just out of curiosity, can anyone define what the precise difference is between "Fault Tolerance" and "Self Healing"?

    Explain to me how any of the failure responses I see discussed in the article or in these discussions qualifies as "healing". Almost all fault-tolerant systems isolate failing components or programs from the rest of the system (killing rogue processes counts as isolation). Quarantine is not an attempt to heal; it is an attempt to tolerate. Are there actually any non-quarantine "self healin
    • I think that when most people talk about self-healing, they mean fault tolerant. An example is the Tandem systems mentioned a little below this. Yet I also think that self-healing and fault tolerance are a bit the same.

      However, if you want to understand what self-healing really means (and does not mean), consider that our DNA is self-healing.

      Now, I do not claim to understand the mechanisms whereby DNA is self-healing. I am aware that there is a recent article that points out how the DNA breaks get
    • Fault Tolerance implies the ability to not just detect the fault (i.e. a failed CPU), but to keep the processes running as if nothing happened. This is possible with Stratus and Tandem boxes. It is generally not possible with common x86/Power/SPARC boxes (unless you put a lot of software on top of two boxes to make them look like one big virtual system).

      "Self Healing", in this context, is the systems ability to detect a fault (hardware or software), deal with it (restart a process, isolate hardware, etc) and
    • Memwatch will repair its data structures if they're trashed. http://en.wikipedia.org/wiki/Memwatch
  • by Animats ( 122034 ) on Tuesday December 21, 2004 @08:18PM (#11154308) Homepage
    There are operating systems for which "self-healing" is quite feasible, but UNIX is all wrong for it.

    The most successful example is Tandem. For decades, systems that have to keep running have run on Tandem's operating system. For an overview of how they did it, see the 1985 paper Why Computers Stop and What Can Be Done About It. [hp.com]

    The basic concepts are:

    • All the permanent state is in a database with proper atomic restart and recovery mechanisms.
    • Flat "files" are implemented on top of the database, not the other way round.
    • When applications fail, they are usually restarted completely, with any in-process transactions being backed out.
    • Applications with long-running state are tracked by a watching program on another machine which periodically receives state updates from the first program. If the first program fails, the watching program restarts it from a previous good state.

    Every time you use an ATM or trade a stock, somewhere a Tandem cluster was involved.

    Tandem's problem was that they had rather expensive proprietary hardware. You also needed extra hardware to allow for fail-operational systems. But it all really does work. HP still sells Tandem, but since Carly, it's being neglected, like most other high technology at HP.
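The checkpoint/watchdog pattern described above -- a watching program restarting a failed one from its last good state -- can be modeled in a toy form. This is an illustration of the general idea only, not of Tandem's actual mechanism:

```python
import copy

class ProcessPair:
    """Toy model of Tandem-style process pairs.

    The primary periodically checkpoints its state to a backup; if the
    primary dies, the backup takes over from the last known-good state,
    and any work done since that checkpoint (in-flight transactions)
    is effectively backed out.
    """

    def __init__(self, initial_state):
        self.primary_state = initial_state
        self.backup_state = copy.deepcopy(initial_state)  # last checkpoint

    def checkpoint(self):
        """Primary pushes a state update to the watcher."""
        self.backup_state = copy.deepcopy(self.primary_state)

    def primary_failed(self):
        """Watcher restarts the primary from the last checkpoint."""
        self.primary_state = copy.deepcopy(self.backup_state)
        return self.primary_state
```

The essential design choice is that recovery needs no diagnosis at all: the watcher does not care *why* the primary died, only that it can resume from a consistent state.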

    • by rlp ( 11898 ) on Tuesday December 21, 2004 @08:33PM (#11154438)
      Tandem had an FT Unix division in Austin. One of the teams I managed was responsible for an embedded expert system that monitored faults in the redundant components of the system. Every component was replicated. Each logical CPU actually consisted of four processors -- two pairs running in lock-step. If one CPU in a pair disagreed with its counterpart, the pair would be taken out of service. The expert system monitored transient faults, would "predict" that a component was going to fail, and could take it out of service. The system had a modem that would "phone home" in the event of a component failure, and a service tech would be dispatched with a part -- often before the customer knew there was a problem.

      The machines used MIPS processors (supporting SMP) and ran a Tandem variant of System V UNIX. Combine this with a decent transactional database, and application software capable of check-pointing itself, and you have a very robust system. Albeit a very expensive one.

      Tandem was bought out by Compaq, and then by HP. When I left, Tandem had quite a few interesting ideas they were working on, but near as I can tell, they never saw the light of day.
  • the health of the rest of the system is monitored, but what are you gonna do if it comes to the wrong conclusions?
  • Self-healing would seem to be a critical step toward a self-aware artificial intelligence. Self-healing requires an ability for introspection that is sufficient to identify and correct corrupted internal states. Code that is able to introspect its own behavior and internal structure could lead it to interesting outcomes if tied to a learning algorithm (even a simple hill-climbing algorithm).

    It is then a small step to go from simple feedback self-healing mechanisms to feed-forward control mechanisms. F
  • From TFA: One approach is simply to make an individual system the unit of recovery; if anything fails, either restart the whole thing or fail-over to another system providing redundancy. Unfortunately, with the increasing physical resources available to each system, this approach is inherently wasteful: Why restart a whole system if you can disable a particular processor core, restart an individual application, or refrain from using a bit of your spacious memory or a particular I/O path until a repair is tr
  • by skinfitz ( 564041 ) on Tuesday December 21, 2004 @08:49PM (#11154573) Journal
    I have it so that if one of our firewalls detects an attempt to access gator.com it enrols the machine into an active directory system group which the SMS server queries to automatically de-spyware it with SpyBot.

    I'd call that a self healing system. I'm a network admin though so my perception of these things tends to be on a larger scale.
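A trigger of this general shape -- scan traffic logs for a known-bad destination and queue the offending host for automated cleanup -- can be sketched as follows. The `host -> dest` log format is purely hypothetical, and this stands in for the firewall/AD/SMS pipeline described above, not any real product's API:

```python
def hosts_to_remediate(log_lines, signature="gator.com"):
    """Scan firewall-style log lines for hosts contacting a bad domain.

    log_lines: iterable of strings in a hypothetical "host -> dest" format
    Returns the sorted, de-duplicated list of hosts to queue for cleanup.
    """
    flagged = set()
    for line in log_lines:
        host, sep, dest = line.partition(" -> ")
        if sep and signature in dest:
            flagged.add(host)
    return sorted(flagged)
```

The interesting part of the admin's setup is the closed loop: detection feeds directly into remediation with no human in between.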
    • Comment removed based on user account deletion
    • That's like curing the symptoms and not the cause.
      Your systems shouldn't have gotten infected with spyware in the first place, and the fact that they did shows you have bigger problems. What if they get infected with something more malicious than gator? Or how about something that's not detected by the spyware removal tools?
      • I agree completely - we do not allow admin or Power User rights on our systems, and typically if a machine has gator on it, it usually has other problems too. In fact I'll guarantee that if any machine has gator on it, it usually has LOTS of other problems.

        Tracking the symptoms like this alerts me to these problems -- running SpyBot on a machine never hurts, and I'll do other things too, like have a script email me the list of administrators on the machine and perhaps change the password.

        As for more malici
  • by Etcetera ( 14711 ) * on Tuesday December 21, 2004 @08:52PM (#11154597) Homepage

    HAL: I've just picked up a fault in the AE35 unit. It's going to go 100% failure in 72 hours.

    This is really something that, IMHO, calls for more interaction between the best of the futurists, science-fiction writers, and coders, and other complexity thinkers.

    In order for any system to have an understanding of and proper diagnosis of its own operation, it needs to be able to conceptualize its relationship to other systems around it. Am I important? What functions do I provide? What level of error is proper to report to my administrator? Do I have a history of hardware problems? Has chip 2341 on motherboard 12 been acting up intermittently? If so, is it getting worse or better? How have I been doing over the last few days? Is there a new virus going around that is similar to something I've had before?

    What good is a self-diagnosing system without a memory of its prior actions?

    All of these questions imply some sort of context that will require the system to use symbols to represent "things" in the "world" around it. Clearly, the largest (though perhaps not qualitatively different) symbol will be a "self" symbol.

    From there, all you have to do is follow Hofstadter [dmoz.org]'s path and you'll arrive at a system with emergent self-awareness or consciousness.

    The end result of this will be something a) very complex and b) designed/grown by itself. You'll have either the computer from the U.S.S. Enterprise [epguides.info] or H.A.L. [underview.com]

    Side question: What is CYC doing these days? [cyc.com]
  • For swarmstreaming, we use the Tree Hash EXchange format (THEX) [open-content.net] to provide cryptographic integrity verification down to a single 1KB resolution so we can automatically repair the corruption.
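Segment-level hashing of this sort can be illustrated with a flat list of leaf hashes. Note the simplifications: real THEX builds a Tiger tree over the leaves so that a single root digest suffices, whereas this sketch uses plain SHA-256 and compares the leaf list directly:

```python
import hashlib

def leaf_hashes(data, segment=1024):
    """Hash fixed-size segments so corruption can be localized."""
    return [hashlib.sha256(data[i:i + segment]).hexdigest()
            for i in range(0, len(data), segment)]

def find_corrupt_segments(data, known_good, segment=1024):
    """Return indices of segments whose hash disagrees with the
    known-good list; only those segments need re-fetching to repair."""
    ours = leaf_hashes(data, segment)
    return [i for i, (a, b) in enumerate(zip(ours, known_good)) if a != b]
```

Repair then means re-downloading only the flagged 1KB segments rather than the whole file.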
  • by Doc Ruby ( 173196 ) on Tuesday December 21, 2004 @09:13PM (#11154737) Homepage Journal
    How about just systems that fail *verbosely*, so admins can quickly diagnose them? Once the patient can complain properly, we can get to work replacing the admin doctors with "self-healing" metasystems that use those diagnostics. It will be a lot easier just mimicking the best admins' best practices by automating them, than all this screwing around trying to compile marketsprach like "self-healing" without understanding how it even works in nature.
    • I can second that. If just the error message is good/detailed enough, it cuts the time going thru obscure log files etc...
      but i doubt many developers think from a user's point of view...
    • That is more or less how things have evolved here at The Internet Archive [archive.org].

      Unixes and the services that run on them can be configured to be very verbose in their errors and warnings, and error messages can be used as triggers to check various logs and system states for additional information, but in a nontrivial cluster there are major problems with humans trying to digest this flow of information and make sense of it all.

      Better tools help, but beyond a point it just makes sense to try and make the tools

      • You have taken the diagnostics tools to the next level at TIA, keeping the human in the loop, but expanding both the complexity of the diagnostic data, and the tools to process that data. So both the senses and manipulators of the human are bionic :). Maybe that's why TIA is so cool, and reliable. Thanks a lot - with your content, you are keeping the computers as tools in the service of human intercommunication, instead of descending into communications primarily with the machine, as they appear to prefer a
  • It's a long way (Score:4, Interesting)

    by jd ( 1658 ) <imipak@ y a hoo.com> on Tuesday December 21, 2004 @09:17PM (#11154754) Homepage Journal
    ...from what we have now to the Liberator (DSV-2) from Blake's 7, the Ultimate in self-repairing systems. At the moment, most "self-repair" is in the form of software error-correction and bypassing faulty hardware. (The "badmem" patches for Linux do this, for example.)


    The former could be considered self-repair, but it is limited as you don't have to have much in the way of an error to totally swamp most error-correction codes.


    The second form isn't really self-repair as much as it is damage control. This is just as important as self-repair, as you can't do much repair work if your software can't run.


    On the whole, "normal" systems don't need any kind of self-repair beyond the basic error-correction codes. Instead, you are likely better off having a "hot fail-over" system -- two systems running in parallel with the same data, only one of them kept "silent". Both take input from the same source(s), and so should have identical states at all times, with no synchronization required.


    If the "active" one fails, just "unsilence" the other one and restore the first one's state. If the "silent" one fails, all you do is copy the state over.


    However, computers are deterministic. Two identical machines, performing identical operations, will always produce identical results. Therefore, in order to have a meaningful hot fail-over of the kind described, the two can't be identical. They have to be different enough to not fail under identical conditions, but be similar enough that you can trivially switch the output from one to the other without anybody noticing.


    eg: The use of a Linux box on an AMD running Roxen, and an OpenBSD box on an Intel running Apache, would be pretty much guaranteed not to have common points of failure. If you used a keepalive daemon for each box to monitor the other's health, you could easily ensure that only one box was "talking" at a time, even if both were receiving.


    The added complexity is minimal, which is always good for reliability, and the result is as good or better than any existing software self-repair method out there.


    Now, you can't always use such solutions. Anything designed to work in space, these days, uses a combination of the above techniques to extend the lifetime of the computer. By dynamically monitoring the health of the components, re-routing data flow as needed, and repairing data/code stored in transistors that have become damaged, you ensure the system will keep functioning.


    Transistors get destroyed by radiation quite easily. If you didn't have some kind of self-repair/damage-control, you'd either be using chips with transistors which may or may not work, or you'd have to scrub the entire chip after a single transistor went.
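The keepalive arrangement described above -- a silent standby that starts "talking" only when the active box misses heartbeats -- reduces to a small state machine. The timeout value and method names here are illustrative:

```python
import time

class Failover:
    """Standby side of an active/standby pair.

    The active box sends heartbeats; if too long passes without one,
    the standby unsilences itself and takes over the output role.
    """

    def __init__(self, timeout=3.0):
        self.timeout = timeout
        self.last_beat = time.monotonic()
        self.active = False            # standby starts silent

    def heartbeat(self):
        """Called whenever the active box checks in."""
        self.last_beat = time.monotonic()

    def check(self):
        """Periodic poll: take over if heartbeats have stopped."""
        if time.monotonic() - self.last_beat > self.timeout:
            self.active = True
        return self.active
```

As the comment notes, the value of the scheme comes from the two boxes being deliberately different (different OS, hardware, server software), so that one fault is unlikely to take out both.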

    • However, computers are deterministic. Two identical machines, performing identical operations, will always produce identical results.

      Fuck no! Computers have souls, man. That is because they are so complicated that the deterministic model no longer holds; there's a non-deterministic layer that gives the machines their personal features.

  • I thought I saw an article about that earlier, but on second glance it turned out to be about self-heating coffee. Yawn.

    Now, as for self-heating systems...
  • First of all, it has the phrase that pays: "graceful degradation"

    Next, it talks about verbose and useful errors, so that a techie can make intelligent decisions about terminating a process, restarting it, altering a file, or some other fix. Presumably, once a tech marks a problem "successfully fixed" by a certain set of actions enough times, the system will try that series of actions before throwing an error message.

    What will be nice is when the system recognizes what it is it's doing, so it'll have
  • worst case? (Score:1, Insightful)

    by bird603568 ( 808629 )
    This sounds good until somebody makes a worm that uses an exploit that (for the sake of argument, say there is one) was or will be found. The worm tricks the server into thinking it is severely messed up, so it orders a boatload of parts, or shuts down, or both. The tech shows up and it's just a worm. Now you have these parts and have to pay up. Also, the server shut down, so now it's lost time. Did I mention it's a worm, so it spreads? That's just the worst case; it could be great, unless you fix broken server
  • That makes it sound like people want computers to be able to mechanically fix themselves when they break.

    Wouldn't a "self-healing" system just be good at a) reporting what hardware is actually broken on the machine, b) automating well-defined responses to well-defined problems, and c) building parallel, fault-tolerant hardware at all levels of the system?

    As far as I know, even the best AI research hasn't come up with software that can diagnose and fix unknown, first time, bizarre problems. Ultimately, it a
  • by JRHelgeson ( 576325 ) on Tuesday December 21, 2004 @11:11PM (#11155479) Homepage Journal
    The space shuttle, as old as it is, has an absolutely incredible computer system that is self healing.

    The Shuttle has many thousands of sensors and backup sensors. Each sensor feeds into one of many computer systems. These computer systems talk to each other more as a committee than by just passing data amongst themselves. If a computer discovers a fault, another computer will see that fault as well; it will combine data gathered from other computer systems throughout the shuttle, and each computer system will literally cast a vote on what the best solution should be for the particular fault discovered.

    If one computer system suffers a partial or complete failure, the remaining systems will work around the failed system.

    This computer system has managed to keep our astronauts alive for every mission except the two that suffered catastrophic mechanical failures. In the second of those (Columbia), the computers kept the craft flying until it broke apart completely.

    I say not bad for a system designed over 20 years ago!
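The committee-style fault handling described in this comment amounts to majority voting over redundant readings. Here is a simplified sketch of that one idea; the real Shuttle redundancy-management logic is, of course, far more involved:

```python
from collections import Counter

def vote(readings):
    """Majority vote over redundant sensor/computer outputs.

    Returns the agreed value plus the indices of dissenting units,
    so the faulty units can be worked around. Raises if no strict
    majority exists (the fault cannot be masked by voting alone).
    """
    tally = Counter(readings)
    winner, count = tally.most_common(1)[0]
    if count <= len(readings) // 2:
        raise ValueError("no majority -- cannot mask the fault")
    dissenters = [i for i, r in enumerate(readings) if r != winner]
    return winner, dissenters
```

Voting masks a fault without diagnosing it; identifying *which* unit dissented is what lets the system then route around the failed component.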
  • Well, I've seen some nice systems. When I see some nice fully systems and some quality fully self- systems, I'll be ready for the advent of fully self-healing systems. I expect we will get there one step at a time.

  • too late (Score:3, Funny)

    by Anne Thwacks ( 531696 ) on Wednesday December 22, 2004 @05:13AM (#11156901)
    But I read in 1958 that we would have self healing systems "within a decade" - surely we must have had them for over 30 years!
  • Having thought this thing through, I guess it's all about different levels of (inter-)dependability. One program relying on the other, etc.
    I guess if you work this out down to a low enough level (this includes the hardware), you can actually make the system heal itself.
    You could probably start at the root of the whole system -- power -- and build your way up from there in a sort of tree fashion. However, other environmental issues for your system could exist that make a power failure seem like Christmas.
    It could
  • by supersnail ( 106701 ) on Wednesday December 22, 2004 @06:19AM (#11157091)
    .... given away the tshirts.

    The current zSeries machines come with 16 CPUs plus L2 and L1 cache packaged together on a board.
    But only 12 CPUs are used.

    Each "CPU" is actually two CPUs and a comparator. When the two CPUs come up with different answers, that CPU is shut down and processing is taken over by one of the four free CPUs on the board.

    You will never know it happened until you run one of the maintenance utilities.

    In the way of IBM, this technology will probably appear on top-end pSeries (AIX/Linux) and iSeries boxes in a couple of years.
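The pair-and-compare scheme described above can be modeled in software. The tuple-of-callables representation of a "CPU pair" is purely illustrative; in the real hardware the comparison happens in lockstep on every cycle:

```python
def pair_and_compare(pair, x, spares):
    """Run a computation on both halves of a 'CPU pair' and compare.

    pair:   two implementations that should agree on every input
    spares: further pairs to fail over to on disagreement
    Returns (result, failed_over). Disagreement retires the pair and
    hands the work to a spare, invisibly to the caller.
    """
    a, b = pair[0](x), pair[1](x)
    if a == b:
        return a, False
    for s0, s1 in spares:              # retire faulty pair, try spares
        a, b = s0(x), s1(x)
        if a == b:
            return a, True
    raise RuntimeError("lockstep mismatch and no spare pair agrees")
```

The design trades half the raw compute (every result is produced twice) for the ability to detect a silently wrong answer, which a simple crash detector can never see.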

  • IBM is basing its future application self-healing abilities on a whole branch of research it has been investing in for years, called Autonomic Computing [ibm.com]

    It's not all pie in the sky either - they've already released preliminary Autonomic Computing Toolkits as part of their Emerging Technologies Toolkit [ibm.com]. Start by looking at the Logging and Trace components, and then maybe look at the Solution Install pieces - they underpin the whole framework.

    It will take a generation, or two (10-15 yea
