A Diagnosis of Self-Healing Systems

gManZboy writes "We've been hearing about self-healing systems for a while, but, as usual, so far it's more hype than reality. Well, it looks like Mike Shapiro (from Sun's Solaris Kernel group) has been doing a little actual work in this direction. His prognosis is that there's a long way to go before we get fully self-healing systems. In this article he talks a bit about what he's done, points out some alternatives to his approach, and describes what's left to do."
  • Had this 3 years ago (Score:5, Interesting)

    by shoppa ( 464619 ) on Tuesday December 21, 2004 @07:45PM (#11154012)
    According to a documentary movie from 3 years ago, we already had this. HAL 9000 sent an astronaut out to help repair the antenna azimuth control board.

    Which turned out not to be faulty... hmmm...

    Some IBM mainframes are already at this level of self-diagnosis. Where I work, when a drive in the RAID array fails, the array phones IBM to report the fault and a repairman shows up with a spare. We don't know that a drive failed until the field service tech arrives!

  • by jomas1 ( 696853 ) on Tuesday December 21, 2004 @08:01PM (#11154168) Homepage
    Some IBM mainframes are already at this level of self-diagnosis. Where I work, when a drive in the RAID array fails, the array phones IBM to report the fault and a repairman shows up with a spare. We don't know that a drive failed until the field service tech arrives!

    Interesting. Where I work this happens too, except instead of IBM techs we get techs who work for the city, and 90% of the time it turns out they were sent for no reason. The techs usually don't even know that a machine called in a service request, and they waste a lot of time asking me why they were called.

    If the future holds more of this I hope I die soon.
  • Reset button (Score:2, Interesting)

    by mboverload ( 657893 ) on Tuesday December 21, 2004 @08:08PM (#11154229) Journal
    I don't know why Windows doesn't just have a reset button to return all the settings to their original condition. It's a bitch to reinstall it twice a year, you know.
  • by Animats ( 122034 ) on Tuesday December 21, 2004 @08:18PM (#11154308) Homepage
    There are operating systems for which "self-healing" is quite feasible, but UNIX is all wrong for it.

    The most successful example is Tandem. For decades, systems that have to keep running have run on Tandem's operating system. For an overview of how they did it, see the 1985 paper Why Computers Stop and What Can Be Done About It. [hp.com]

    The basic concepts are:

    • All the permanent state is in a database with proper atomic restart and recovery mechanisms.
    • Flat "files" are implemented on top of the database, not the other way round.
    • When applications fail, they are usually restarted completely, with any in-process transactions being backed out.
    • Applications with long-running state are tracked by a watching program on another machine, which periodically receives state updates from the primary. If the primary fails, the watcher restarts it from the last good state (sketched below).
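    A minimal sketch of that last pattern (toy names, not Tandem's actual API): the worker checkpoints its state to a watcher after each consistent step, and the watcher hands back the last good checkpoint to restart from after a crash.

```python
import pickle

class Watcher:
    """Toy stand-in for the watching program on the second machine."""

    def __init__(self, initial_state):
        self.last_good = pickle.dumps(initial_state)

    def checkpoint(self, state):
        # Only states the worker declared consistent are kept.
        self.last_good = pickle.dumps(state)

    def restore(self):
        return pickle.loads(self.last_good)

def worker(items, watcher, crash_after=None):
    state = watcher.restore()              # resume from last good state
    while state["done"] < len(items):
        if crash_after is not None and state["done"] == crash_after:
            raise RuntimeError("simulated crash")
        state["total"] += items[state["done"]]
        state["done"] += 1
        watcher.checkpoint(state)          # periodic state update

watcher = Watcher({"total": 0, "done": 0})
items = list(range(10))
try:
    worker(items, watcher, crash_after=4)  # primary dies partway through
except RuntimeError:
    pass
worker(items, watcher)                     # restarted; no completed work lost
print(watcher.restore())                   # {'total': 45, 'done': 10}
```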

    Every time you use an ATM or trade a stock, somewhere a Tandem cluster is involved.

    Tandem's problem was that they had rather expensive proprietary hardware. You also needed extra hardware to allow for fail-operational systems. But it all really does work. HP still sells Tandem, but since Carly, it's being neglected, like most other high technology at HP.

  • by rlp ( 11898 ) on Tuesday December 21, 2004 @08:33PM (#11154438)
    Tandem had a FT Unix division in Austin. One of the teams I managed was responsible for an embedded expert system that monitored faults in the system's redundant components. Every component was replicated. Each logical CPU actually consisted of four processors: two pairs running in lock-step. If one CPU in a pair disagreed with its counterpart, the pair would be taken out of service. The expert system monitored transient faults, could "predict" that a component was going to fail, and could take it out of service preemptively. The system had a modem that would "phone home" in the event of a component failure, and a service tech would be dispatched with a part, often before the customer knew there was a problem.
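    A toy sketch of those two ideas together (invented fault rate and threshold, nothing like the real expert system): compare a lock-stepped pair's outputs, count transient disagreements, and predict a hard failure once they accumulate.

```python
import random

random.seed(42)
THRESHOLD = 3        # invented: transient faults tolerated before action

def cpu_a(x):
    return x * x

def cpu_b(x):
    # Same computation, but with a rare simulated transient bit-flip.
    r = x * x
    return r ^ 1 if random.random() < 0.02 else r

transients = 0
for x in range(10_000):
    if cpu_a(x) != cpu_b(x):               # lock-step comparison
        transients += 1
        print(f"transient disagreement #{transients} at input {x}")
        if transients >= THRESHOLD:
            print("predicting hard failure: pair out of service, phoning home")
            break
```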

    The machines used MIPS processors (supporting SMP) and ran a Tandem variant of System V UNIX. Combine this with a decent transactional database, and application software capable of check-pointing itself, and you have a very robust system. Albeit a very expensive one.

    Tandem was bought out by Compaq, and then Compaq by HP. When I left, Tandem had quite a few interesting ideas they were working on, but as near as I can tell, they never saw the light of day.
  • by skinfitz ( 564041 ) on Tuesday December 21, 2004 @08:49PM (#11154573) Journal
    I have it so that if one of our firewalls detects an attempt to access gator.com, it enrols the machine in an Active Directory group, which the SMS server queries to automatically de-spyware it with SpyBot.

    I'd call that a self-healing system. I'm a network admin, though, so my perception of these things tends to be on a larger scale.
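    For illustration, here is roughly the shape of that event-to-remediation loop, with the firewall, the AD group, and the SMS push all reduced to hypothetical stand-ins:

```python
BAD_DOMAINS = {"gator.com"}    # destinations that flag a machine as infected
quarantine_group = set()       # stand-in for the Active Directory group

def on_firewall_hit(host, domain):
    # Firewall log handler: flag any machine talking to a known-bad domain.
    if domain in BAD_DOMAINS:
        quarantine_group.add(host)

def remediation_pass():
    # Stand-in for the SMS server querying the group and pushing SpyBot.
    for host in sorted(quarantine_group):
        print(f"pushing spyware cleanup to {host}")
    quarantine_group.clear()

on_firewall_hit("pc-042", "gator.com")
on_firewall_hit("pc-099", "example.com")
remediation_pass()             # pushing spyware cleanup to pc-042
```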
  • by Etcetera ( 14711 ) * on Tuesday December 21, 2004 @08:52PM (#11154597) Homepage

    HAL: I've just picked up a fault in the AE35 unit. It's going to go 100% failure in 72 hours.

    This is really something that, IMHO, calls for more interaction among the best of the futurists, science-fiction writers, coders, and other complexity thinkers.

    In order for any system to have an understanding of and proper diagnosis of its own operation, it needs to be able to conceptualize its relationship to other systems around it. Am I important? What functions do I provide? What level of error is proper to report to my administrator? Do I have a history of hardware problems? Has chip 2341 on motherboard 12 been acting up intermittently? If so, is it getting worse or better? How have I been doing over the last few days? Is there a new virus going around that is similar to something I've had before?

    What good is a self-diagnosing system without a memory of its prior actions?

    All of these questions imply some sort of context that will require the system to use symbols to represent "things" in the "world" around it. Clearly, the largest (though perhaps not qualitatively different) symbol will be a "self" symbol.

    From there, all you have to do is follow Hofstadter [dmoz.org]'s path and you'll arrive at a system with emergent self-awareness or consciousness.

    The end result of this will be something a) very complex and b) designed/grown by itself. You'll have either the computer from the U.S.S. Enterprise [epguides.info] or H.A.L. [underview.com]

    Side question: What is CYC doing these days? [cyc.com]
  • It's a long way (Score:4, Interesting)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Tuesday December 21, 2004 @09:17PM (#11154754) Homepage Journal
    ...from what we have now to the Liberator (DSV-2) from Blake's 7, the ultimate in self-repairing systems. At the moment, most "self-repair" takes the form of software error-correction and bypassing faulty hardware. (The "badmem" patches for Linux do the latter, for example.)


    The former could be considered self-repair, but it is limited: it doesn't take much of an error to totally swamp most error-correction codes.


    The second form isn't really self-repair so much as damage control. That is just as important, though, since you can't do much repair work if your software can't run.


    On the whole, "normal" systems don't need any kind of self-repair beyond basic error-correction codes. Instead, you are likely better off with a "hot fail-over" system: two systems running in parallel on the same data, with one of them kept "silent". Both take input from the same source(s), and so should have identical states at all times, with no synchronization required.


    If the "active" one fails, just "unsilence" the other one and restore the first one's state. If the "silent" one fails, all you do is copy the state over.


    However, computers are deterministic. Two identical machines, performing identical operations, will always produce identical results, and that includes identical failures. Therefore, in order to have a meaningful hot fail-over of the kind described, the two machines can't be identical. They have to be different enough not to fail under identical conditions, yet similar enough that you can trivially switch the output from one to the other without anybody noticing.


    E.g.: a Linux box on AMD running Roxen and an OpenBSD box on Intel running Apache would be pretty much guaranteed not to have common points of failure. If you used a keepalive daemon on each box to monitor the other's health, you could easily ensure that only one box was "talking" at a time, even if both were receiving.
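    A minimal sketch of that keepalive arrangement (the takeover action is left abstract; a real setup would move a virtual IP or reconfigure a switch, and a real daemon would loop rather than be poked by hand):

```python
import time

TIMEOUT = 3.0                  # seconds of silence before failover

class Box:
    def __init__(self, name, talking):
        self.name = name
        self.talking = talking             # only one box "talks" at a time
        self.last_beat = time.monotonic()

    def heartbeat(self):
        self.last_beat = time.monotonic()

def check(active, standby, now=None):
    # Each box's keepalive daemon would run this against its peer.
    now = time.monotonic() if now is None else now
    if active.talking and now - active.last_beat > TIMEOUT:
        active.talking, standby.talking = False, True
        print(f"{active.name} went silent; unsilencing {standby.name}")

primary = Box("linux-amd-roxen", talking=True)
backup = Box("openbsd-intel-apache", talking=False)
check(primary, backup, now=time.monotonic() + 5)   # simulate 5s of silence
```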


    The added complexity is minimal, which is always good for reliability, and the result is as good as or better than any existing software self-repair method out there.


    Now, you can't always use such solutions. Anything designed to work in space these days uses a combination of the above techniques to extend the lifetime of the computer. By dynamically monitoring the health of the components, re-routing data flow as needed, and repairing data/code stored in transistors that have become damaged, you ensure the system will keep functioning.


    Transistors are destroyed by radiation quite easily. If you didn't have some kind of self-repair/damage-control, you'd either be using chips with transistors that may or may not work, or you'd have to scrub the entire chip after a single transistor went.
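    To make the repair half of that concrete, here is a toy majority-vote scrub over triplicated storage. Real rad-hard designs use ECC codes and redundant logic rather than this simplified per-word triplication, but the principle is the same: correct single upsets before a second one makes the word unrecoverable.

```python
def scrub(memory):
    # Periodic pass: majority-vote each word's three copies and rewrite
    # them, clearing any single upset before a second one can join it.
    for addr, (a, b, c) in enumerate(memory):
        vote = (a & b) | (b & c) | (a & c)   # bitwise majority
        memory[addr] = [vote, vote, vote]

memory = [[0b1010, 0b1010, 0b1010]]          # one word, stored in triplicate
memory[0][1] ^= 0b0100                       # radiation flips one bit
scrub(memory)
print(memory[0])                             # [10, 10, 10] -- repaired
```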

  • by rednaxel ( 532554 ) on Tuesday December 21, 2004 @11:44PM (#11155671) Homepage Journal
    I did R&D for an elevator factory 12 years ago, and back then we made a box that called home when something went wrong. The system scanned some critical points of the circuit, and if the readings didn't match the expected pattern, an external modem called the maintenance center and sent a full report of the readings, indicating the cause of the failure.
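    Something like this, with the monitored points, their ranges, and the readings all invented for illustration:

```python
# Expected operating ranges per monitored point (all values invented).
EXPECTED = {
    "door_closing_speed": (0.2, 0.6),   # m/s
    "motor_current":      (2.0, 8.0),   # A
    "brake_voltage":      (110, 130),   # V
}

def scan(readings):
    faults = {point: value for point, value in readings.items()
              if not EXPECTED[point][0] <= value <= EXPECTED[point][1]}
    if faults:
        dial_home(faults, readings)

def dial_home(faults, readings):
    # Stand-in for the external modem call to the service center.
    print("FAULT:", faults)
    print("full readings:", readings)

# A worn door sensor lets the door close too fast -- caught before
# the slamming damages anything else.
scan({"door_closing_speed": 1.4, "motor_current": 5.1, "brake_voltage": 121})
```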

    For example, a broken door sensor could make the door fail to slow down when closing, and the only symptom would be the louder sound of the door slamming. Within a few days, though, other parts would be damaged, increasing the cost of the repair and eventually putting the elevator out of service.

    The tech could get into the building before the elevator stopped working. According to the marketing guys, it gave us an image of excellence in hardware and service.

    All this was written in 80C51 assembly in less than 16 KB. The PC code for the field service center was written in C and featured a nice EGA graphic (640x350 in 4 pages) of the electric circuit. In real-time mode (when the center called the elevator), the graphic showed the relays, switches, buttons, etc., all animated. We could even tell how many people entered the elevator by the number of times the door sensor was activated, or which buttons were pushed. Cool!

  • by Bert64 ( 520050 ) <bert AT slashdot DOT firenzee DOT com> on Wednesday December 22, 2004 @10:34AM (#11158054) Homepage
    That's curing the symptoms, not the cause.
    Your systems shouldn't have gotten infected with spyware in the first place, and the fact that they did shows you have bigger problems. What if they get infected with something more malicious than Gator? Or something the spyware removal tools don't detect?
  • by skinfitz ( 564041 ) on Wednesday December 22, 2004 @02:58PM (#11161102) Journal
    I agree completely: we do not allow admin or Power User rights on our systems. In fact, I'll guarantee that if a machine has Gator on it, it has LOTS of other problems too.

    Tracking the symptoms like this alerts me to those problems. Running SpyBot on a machine never hurts, and I'll do other things too, like have a script email me the list of administrators on the machine and perhaps change the password.

    As for more malicious threats, I have used the same technique with Snort sensors around the network logging to a database. Another script queries the database and takes the appropriate action du jour; for example, during Nimda I had scripts that would scan the database and clean infected machines.
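    The shape of that loop, with an invented schema and cleanup actions standing in for the real ones:

```python
import sqlite3

# Invented schema: Snort sensors insert one row per alert.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE alerts (host TEXT, signature TEXT, handled INTEGER)")
db.execute("INSERT INTO alerts VALUES ('10.0.0.7', 'NIMDA', 0)")
db.execute("INSERT INTO alerts VALUES ('10.0.0.9', 'PORTSCAN', 0)")

# Action du jour per signature; unknown signatures are just logged.
ACTIONS = {
    "NIMDA": lambda host: print(f"running Nimda cleanup on {host}"),
}

for host, sig in db.execute(
        "SELECT host, signature FROM alerts WHERE handled = 0"):
    ACTIONS.get(sig, lambda h: print(f"no action for {h}"))(host)
db.execute("UPDATE alerts SET handled = 1")
```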

    Always worth putting in the extra time to automate these things, as you end up with a solution for the future and can sit back and admire your work.

    As for curing the symptoms and not the cause: automating the symptoms frees up my time to tackle the cause. If I ran around manually cleaning up systems, I'd get nowhere.

"No matter where you go, there you are..." -- Buckaroo Banzai

Working...