Topics: Programming, Space, IT, Technology

Writing Code for Spacecraft (204 comments)

CowboyRobot writes "In an article subtitled 'And you think *your* operating system needs to be reliable,' Queue has an interview with the developer of the OS that runs on the Mars Rovers. Mike Deliman, chief engineer of operating systems at Wind River Systems, offers quotes like, 'Writing the code for spacecraft is no harder than for any other realtime life- or mission-critical application. The thing that is hard is debugging a problem from another planet,' and, 'The operating system and kernel fit in less than 2 megabytes; the rest of the code, plus data space, eventually exceeded 30 megabytes.'"
This discussion has been archived. No new comments can be posted.

Writing Code for Spacecraft

Comments Filter:
  • hmm... (Score:2, Interesting)

    by opqdonut ( 768567 )
    I wonder if they will release the source. It could be an interesting read.
    • Re:hmm... (Score:2, Informative)

      by Anonymous Coward
      Wind River's ROTS (real time operating system) is painful to work with. Their debugging environment is a nightmare, and the cost of development and deployment is almost 3x that of an embedded Linux. My little company just finished a trade study of the various ROTS kernels available, and yes, theirs might be more reliable, but at a huge cost. Furthermore, performance-wise it just isn't up to snuff vs., say, MercuryOS on a single CPU, let alone a multi-CPU system.
      As to releasing their source code? From Wind Ri
      • Re:hmm... (Score:2, Funny)

        by grub ( 11606 )

        and the cost of development and deployment is almost 3x that of an embedded Linux

        When a spacecraft millions of kilometers from Earth packs it in I'm sure a project leader at NASA would be happy they saved 2/3 of the price on a relatively small ticket item.
      • Re:hmm... (Score:4, Insightful)

        by Richthofen80 ( 412488 ) on Saturday November 20, 2004 @03:23PM (#10875837) Homepage
        theirs might be more reliable, but at a huge cost.

        Probably not as big a cost as losing a Mars rover because your OS wasn't reliable enough.
        • if you look back, they almost did lose a rover to the _reliability_ of Wind River. It was something about having too many files in a directory or some such silliness; it took the rover offline and they almost lost it. Hmmmmmmm.
          • Actually, that was a priority inversion issue, which can happen on any operating system when the code isn't designed properly. It was the flexibility of VxWorks when running a debug build that allowed them to patch the software and fix the problem.
      • Don't you mean ROTS: "real operating time system"?
      • Re:hmm... (Score:4, Interesting)

        by The Vulture ( 248871 ) on Saturday November 20, 2004 @05:12PM (#10876485) Homepage
        Yes, and seeing as I'm currently working with embedded Linux, I can honestly say that it's a pain. (Note: I must preface this by saying that I am using Linux 2.4.18 for MIPS and my company is not using any sort of real-time extensions, just the bare 2.4.18 tree).

        You get what you pay for... I've used VxWorks for a few years now, and while it does have its share of problems, and while they are sometimes difficult to deal with, it is a great platform for development. You get much better control of the system than with Linux (the main problem with using Linux in an embedded environment is the user/kernel split; VxWorks solves it neatly by getting rid of it: everything runs in kernel space). This works out very nicely for the MIPS processors I deal with most of the time. Threading (or tasks, as VxWorks calls them) is much better than in Linux - you can at least somewhat guarantee when your tasks run, unlike with the default Linux scheduler.

        I am very interested in trying QNX out, to see how it compares to vxWorks, one of these days.

        -- Joe
        Firstly, Linux isn't an ROTS (sic). It's not deterministic, and using pthreads, say, for thread control and IPC leaves much to be desired when compared to VxWorks. For example, in pthreads you can't try to take a semaphore and have it return either when it can be taken or when a specified timeout expires. You have to sleep and poll.

        Secondly, yes, Tornado is rather dated, but I don't believe the debugging environment is any worse than, say, ddd and gdbserver. In fact I generally find it quicker to develop in To
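For what it's worth, the claim above about timed waits is dated: later POSIX revisions added sem_timedwait() and pthread_mutex_timedlock(). The block-until-free-or-timeout pattern is easy to sketch with Python's threading module (used here purely as an illustration; this is not VxWorks or pthreads code):

```python
import threading

lock = threading.Lock()

def try_with_timeout(timeout_s):
    # Block until the lock is free or the timeout expires -- the
    # semTake(sem, ticks) pattern, with no sleep-and-poll loop.
    if lock.acquire(timeout=timeout_s):
        try:
            return "got lock"
        finally:
            lock.release()
    return "timed out"

print(try_with_timeout(0.1))    # lock is free here

lock.acquire()                  # now hold the lock in the main thread
result = []
t = threading.Thread(target=lambda: result.append(
    "got lock" if lock.acquire(timeout=0.1) else "timed out"))
t.start(); t.join()
lock.release()
print(result[0])                # the second taker times out
```

The point is that the waiter blocks until either the lock is released or the deadline passes; no user-level polling is needed.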
    • Re:hmm... (Score:5, Funny)

      by Infinityis ( 807294 ) on Saturday November 20, 2004 @03:08PM (#10875745) Homepage
      Not gonna happen, for one big reason. I could just see the Slashdot headline:

      Mars Rover HaX0r3d and OS replaced with Linux.

      Shortly thereafter, Micro$oft claims that they can enforce patent infringement on Mars...
      • Re:hmm... (Score:3, Funny)

        by Tablizer ( 95088 )
        [public source code] Not gonna happen, for one big reason. I could just see the Slashdot headline: Mars Rover HaX0r3d and OS replaced with Linux.

        More likely: "Mars Rover Draws Goatse In Sand"
  • hard to imagine.. (Score:2, Interesting)

    All software has bugs. What happens when, halfway through the trip, they need an update? Who installs it remotely? And I guess having a sysop reboot is out of the question...

    CBB
    • Re:hard to imagine.. (Score:4, Informative)

      by brilinux ( 255400 ) on Saturday November 20, 2004 @02:43PM (#10875612) Journal
      Actually, if I remember correctly, there was a problem with one of the rovers, and they had to re-flash it from millions of KM away. I am not sure whether they had a backup copy of the OS on the rover that would facilitate the re-flashing, or whether there was some patch that was transmitted, but I remember them talking about it on the news.
      • Re:hard to imagine.. (Score:3, Interesting)

        by Cylix ( 55374 ) *
        They had a section of the flash memory go bad... so they patched a work around for those sectors if I remember correctly.
      • Re:hard to imagine.. (Score:5, Interesting)

        by Vardamir ( 266484 ) on Saturday November 20, 2004 @02:58PM (#10875692)
        Yes, here is an email my OS prof sent our class on the subject:

        Subject: What really happened on Mars Rover Pathfinder

        The Mars Pathfinder mission was widely proclaimed as "flawless" in the early
        days after its July 4th, 1997 landing on the Martian surface. Successes
        included its unconventional "landing" -- bouncing onto the Martian surface
        surrounded by airbags, deploying the Sojourner rover, and gathering and
        transmitting voluminous data back to Earth, including the panoramic pictures
        that were such a hit on the Web. But a few days into the mission, not long
        after Pathfinder started gathering meteorological data, the spacecraft began
        experiencing total system resets, each resulting in losses of data. The
        press reported these failures in terms such as "software glitches" and "the
        computer was trying to do too many things at once".

        This week at the IEEE Real-Time Systems Symposium I heard a fascinating
        keynote address by David Wilner, Chief Technical Officer of Wind River
        Systems. Wind River makes VxWorks, the real-time embedded systems kernel
        that was used in the Mars Pathfinder mission. In his talk, he explained in
        detail the actual software problems that caused the total system resets of
        the Pathfinder spacecraft, how they were diagnosed, and how they were
        solved. I wanted to share his story with each of you.

        VxWorks provides preemptive priority scheduling of threads. Tasks on the
        Pathfinder spacecraft were executed as threads with priorities that were
        assigned in the usual manner reflecting the relative urgency of these tasks.

        Pathfinder contained an "information bus", which you can think of as a
        shared memory area used for passing information between different components
        of the spacecraft. A bus management task ran frequently with high priority
        to move certain kinds of data in and out of the information bus. Access to
        the bus was synchronized with mutual exclusion locks (mutexes).

        The meteorological data gathering task ran as an infrequent, low priority
        thread, and used the information bus to publish its data. When publishing
        its data, it would acquire a mutex, do writes to the bus, and release the
        mutex. If an interrupt caused the information bus thread to be scheduled
        while this mutex was held, and if the information bus thread then attempted
        to acquire this same mutex in order to retrieve published data, this would
        cause it to block on the mutex, waiting until the meteorological thread
        released the mutex before it could continue. The spacecraft also contained
        a communications task that ran with medium priority.

        Most of the time this combination worked fine. However, very infrequently
        it was possible for an interrupt to occur that caused the (medium priority)
        communications task to be scheduled during the short interval while the
        (high priority) information bus thread was blocked waiting for the (low
        priority) meteorological data thread. In this case, the long-running
        communications task, having higher priority than the meteorological task,
        would prevent it from running, consequently preventing the blocked
        information bus task from running. After some time had passed, a watchdog
        timer would go off, notice that the data bus task had not been executed for
        some time, conclude that something had gone drastically wrong, and initiate
        a total system reset.

        This scenario is a classic case of priority inversion.

        HOW WAS THIS DEBUGGED?

        VxWorks can be run in a mode where it records a total trace of all
        interesting system events, including context switches, uses of
        synchronization objects, and interrupts. After the failure, JPL engineers
        spent hours and hours running the system on the exact spacecraft replica in
        their lab with tracing turned on, attempting to replicate the precise
        conditions under which they believed that the reset occurred. Early in the
        morning, after all but one engineer had gone
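The scheduling failure the email describes can be reproduced with a toy strict-priority scheduler. Everything below (task names, priorities, tick counts, the watchdog limit) is invented for illustration; it is not JPL or VxWorks code:

```python
# 'bus' (high priority) is blocked on a mutex held by 'met' (low
# priority); 'comm' (medium priority) keeps the CPU away from 'met',
# so the mutex is never released -- classic priority inversion.

WATCHDOG_LIMIT = 5          # ticks 'bus' may go unserved before a reset

def run(comm_active=True):
    prio = {"bus": 10, "comm": 5, "met": 1}
    blocked = {"bus"}       # bus waits on the mutex held by met
    met_work_left = 3       # ticks met needs before releasing the mutex
    bus_starved = 0
    for tick in range(20):
        runnable = [t for t in prio
                    if t not in blocked and (t != "comm" or comm_active)]
        current = max(runnable, key=lambda t: prio[t])   # strict priority
        if current == "met":
            met_work_left -= 1
            if met_work_left == 0:
                blocked.discard("bus")   # mutex released: bus unblocked
        if current == "bus":
            return f"bus task ran at tick {tick}"
        bus_starved += 1
        if bus_starved > WATCHDOG_LIMIT:
            return f"watchdog reset at tick {tick}"
    return "no reset"

print(run(comm_active=True))    # medium-priority task causes the reset
print(run(comm_active=False))   # without it, the bus task runs fine
```

With the medium-priority task out of the picture, 'met' finishes its few ticks of work, releases the mutex, and 'bus' runs long before the watchdog fires; that narrow window is exactly what the interrupt had to hit on Pathfinder.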
        • by hey ( 83763 ) on Saturday November 20, 2004 @03:52PM (#10875984) Journal
          In my experience mutexes, semaphores, etc. always cause trouble. There is nearly always another way to write things.

          And you'll never ever see me coding an infinite wait on a mutex. That's just asking for trouble.

          Bad: in Windows, FindNextChangeNotification() requires those IPC operations and it always gives me grief.

          Good: the Linux File Alteration Monitor (FAM). Lets you open and read a pipe of actions. Nice!
          • by Anonymous Coward
            In my experience, it is engineers who do not understand how to correctly employ mutexes and semaphores that always cause trouble.
        • Re:hard to imagine.. (Score:5, Interesting)

          by AaronW ( 33736 ) on Saturday November 20, 2004 @04:28PM (#10876217) Homepage
          As someone who's worked with VxWorks for the last several years I'm surprised they didn't turn on priority inheritance to begin with for the semaphore. As a rule, we usually turn on priority inheritance for our mutex semaphores.

          Other problems in the Mars Pathfinder were related to using the VxWorks filesystem. VxWorks basically only supports FAT on top of flash. For flash, FAT is a poor choice since some areas of the disk like the root directory and FAT tables will quickly wear out. Also, I don't think VxWorks has much support for working around bad sections of flash.

          As far as VxWorks memory allocation support goes, in an ideal world one would statically allocate all memory, but oftentimes things are not ideal. In the product I work on, we have to have dynamic memory allocation: depending on how the product is being used at the time, different data structures are required, with no way of knowing beforehand how many of a particular type are needed, and this changes dynamically. For a simple device it's easy to statically allocate everything, or you may simply have enough memory that you can.

          In our case we statically allocate memory where we can; in many cases, however, we cannot. For example, I have to maintain a data structure keeping track of all of the network gateways connected to an output interface. We can have many thousands of gateways and thousands of output interfaces. There could be anything between one and thousands of gateways on an interface. In this case, I use static arrays for information on each gateway and each output interface, but must use dynamic data structures to list all the gateways connected to an output interface. It would be prohibitive to allocate storage for 30,000 gateways on every one of 30,000 interfaces! I also can't use a linked list of gateways per interface since it doesn't scale, a linked list having access time O(n).

          Also, we use third party libraries that perform dynamic memory allocation and it would be prohibitive to change that.

          By replacing Wind River's malloc code with Doug Lea's code we eliminated fragmentation problems and saw our startup time drop from 50 minutes to 3 minutes. Doug Lea's malloc code is the basis of malloc in glibc and is very efficient. We also added support for tracing heap memory allocations to keep track of which task allocated a block and where it was allocated. This alone helped tremendously in tracking down a number of memory leaks, since we can just walk the heap and see exactly where all the memory is being allocated. This is a sorely missing feature in VxWorks.

          The lack of memory protection is another major problem for complex tasks. We have a bug where random memory locations get corrupted, and we've spent weeks trying to track down the cause without any luck.

          Needless to say, all new projects where I work will not run on VxWorks. All of the chip vendors we're looking at are either dropping support for it or have already dropped it and are focusing on Linux.

          BTW, priority inheritance is one feature I would *REALLY* love to see added to Linux. The company I'm working for is looking at writing our next-generation platform on top of an embedded Linux. We have not yet decided which one to use, but want something 2.6-based.

          With priority inheritance, if a mutex is held by a low-priority task and a high-priority task tries to grab it, the low-priority task is automatically boosted to the priority of the highest-priority task that has attempted to acquire the semaphore. When the semaphore is released, the low-priority task's original priority is restored.

          Some other nice features are interrupt scheduling and better priority based message passing support (which may already be present, I'm still looking into this).

          Finally, one very useful feature would be the ability to guarantee a real-time thread a certain percentage of the CPU, with the option of placing a hard limit if it tries to exceed that, or temporarily lowering its priority to non-realtime so as to not starve no
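To make the priority-inheritance mechanism described above concrete, here is a toy strict-priority scheduler with an optional boost for the mutex holder. All task names and numbers are invented for illustration; this is not VxWorks code:

```python
# 'met' (low priority) holds a mutex that 'bus' (high priority) wants;
# 'comm' (medium priority) competes for the CPU. With inheritance on,
# 'met' runs at 'bus' priority until it releases the mutex.

def run(inheritance):
    prio = {"bus": 10, "comm": 5, "met": 1}
    holder, waiter = "met", "bus"
    met_work_left = 3          # ticks met needs before releasing the mutex
    for tick in range(20):
        eff = dict(prio)
        if inheritance and holder == "met":
            eff["met"] = prio["bus"]          # boost the mutex holder
        runnable = [t for t in prio if t != waiter or holder is None]
        current = max(runnable, key=lambda t: eff[t])
        if current == "met":
            met_work_left -= 1
            if met_work_left == 0:
                holder = None                 # mutex released: bus unblocked
        if current == "bus":
            return f"bus task ran at tick {tick}"
    return "bus task starved (watchdog would fire)"

print(run(inheritance=False))   # comm starves met -> bus never runs
print(run(inheritance=True))    # met finishes quickly -> bus runs
```

Without the boost, the medium-priority task wins every tick and the high-priority task never runs; with it, the holder finishes, releases the mutex, and the high-priority task proceeds.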
        • Re:hard to imagine.. (Score:4, Interesting)

          by GileadGreene ( 539584 ) on Saturday November 20, 2004 @06:13PM (#10876848) Homepage
          You are confusing Mars Pathfinder (a 1997 mission, which suffered a priority inversion problem) with Mars Exploration Rover (a 2003-2004 mission which suffered a file allocation issue in Flash memory, and the subject of TFA). Although both used VxWorks.
    • I guess having a sysop reboot is out of the question...

      Oh shit, i forgot to rerun 'lilo' before rebooting!
      • That's exactly what I was thinking! When I reboot my server, if I screw anything up with Lilo (or if the kernel doesn't boot) I have to haul the old monitor out of the garage and hook it up just so I can choose my old kernel to get back in.

        CB
    • all software has bugs,
      But not all software has significant bugs. What matters most is how much effort you are willing to put into testing, proving the code correct, etc...
  • "The operating system and kernel fit in less than 2 megabytes; the rest of the code, plus data space, eventually exceeded 30 megabytes." This should be used as an example of efficient coding.
    • Re:Efficiency (Score:3, Interesting)

      by Omicron32 ( 646469 )
      That's all well and good, but don't forget that this kernel only has to interface with one set of hardware.

      The Linux kernel has to know about hundreds of thousands of different devices, which is why it's so big.
      • Re:Efficiency (Score:3, Interesting)

        I can get Linux on a 1.44MB floppy and run a system from it. 2 megs ain't that hard.
      • Re:Efficiency (Score:3, Insightful)

        by CarlDenny ( 415322 )
        Just to clarify, VxWorks runs on a hell of a lot of hardware, dozens of CPUs across all the major families, thousands of device drivers.

        Now, any particular instance of the kernel gets compiled for a specific processor and only includes the drivers it needs, which does save on some space. But a lot of that extra space comes from things like a dynamic linker/loader, graphics packages, local shells (usually in multiple flavors), and a host of other applications that are "standard."

        The thing that saves *that
    • Re:Efficiency (Score:5, Interesting)

      by Armchair Dissident ( 557503 ) * on Saturday November 20, 2004 @03:06PM (#10875740)
      I used to write embedded applications using OS-9 (NOT MacOS 9) on 68000-based systems as a sub-contractor for Nuclear Electric (the nuclear power stations company in the UK before it became BNFL). Our development system - complete with OS/kernel and compilers - had only about a meg of memory; the final embedded systems often had only 512K if we were lucky.

      Okay, so this was some 14 years ago - but it was doing a lot of work. 2 megabytes is a lot of memory! There's a phenomenal amount of code and data that can be stored in 2 meg. Maybe it's good by current standards, but - personally - I would suggest that current standards are a bad place to start from.
      • Okay, so this was some 14 years ago - but it was doing a lot of work. 2 megabytes is a lot of memory! There's a phenomenal amount of code and data that can be stored in 2 meg.

        Agreed. Take the FCS MK 98/2, which controls the Navy's Trident II missiles and performs the prelaunch guidance calculations. It takes about 20 mins to calculate a launch package (24 missiles x 8 warheads ea) from a standing start, and controls the launch sequence in real time. (Including assembling a complex data preload for the

    • Re:Efficiency (Score:5, Informative)

      by Brett Buck ( 811747 ) on Saturday November 20, 2004 @03:55PM (#10876011)
      > "The operating system and kernel fit in less than 2
      > megabytes; the rest of the code, plus data space,
      > eventually exceeded 30 megabytes." This should be used as
      > the example for efficient coding

      You've GOT to be kidding, right? 2 meg of OS code? That's ULTRABLOAT compared to most spacecraft. In fact, for the vast majority of the space age, that would have exceeded the resources of the computer by several orders of magnitude.

      I've done this kind of programming for a living (for 10 years, then moved up to controls design), but the last system I programmed for has 372K of memory, total. That includes data, code, OS, everything. Runs at 432 KIPS. And it performs what is probably one of the most complex in-flight autonomous control operations ever.

      Most are even more restrictive. For example: 8K of PROM, 1K of volatile memory, and 28 WORDS of non-volatile memory. This is more than adequate for most applications, if you do it right.

      Many spacecraft OS's are more akin to this:

      hardware interrupt
      external electronics power up processor.
      external electronics set PC = 80hex
      run
      {execute all the code}
      halt
      power down

      Once every 1/4 second, for 15 years.

      The project I am currently working on uses VxWorks (and so we were quite interested in the Mars Rover problem) and it's so bloated with unnecessary features it's absurd. This is not a Windows box, it's a spacecraft processor.

      I can't argue with the 30 meg of data space. Using the memory as a data recorder would be quite useful and a good picture takes a lot of space. But it's alarming to me that you could figure out how to waste maybe 4-5 meg on code. If you started with a bare home-brew OS, I would guess (and I get paid for this sort of guess) that you could do the entire flight code in 512K, with maybe 8k of data space, excluding the science data.

      Only recently have space-qualified rad-hard processors with this kind of capability become available. Until then, if you said you needed 2 meg for the OS alone, you would have gotten fired on the spot and referred to mental health professionals. The availability of these processors enabled high-level languages with tremendous overhead (like C++) to be used. And this was only done for employee-retention purposes during the bubble. For years it was done at the assembler or even machine level. It's still not at all uncommon to do, and we've done MANY flight code patches with only a processor handbook, an engineering paper pad, and by setting individual bits one by one.

      Brett
      • Yup. I did SW testing on the ENVISAT-1 satellite, and we had a 16-bit CPU with 64k of RAM for use (including code, data, heap and patch store). Code was in Ada (very tight code for a language with lots of features), and the OS was a custom operating system called "ASTRES", which took 1.5 kbytes (you could compile out all the features you didn't need). Did co-operative multi-tasking and memory management, which was pretty impressive...

        Note: This was in 1997... I believe that 32-bit space-hardened CPUs d
  • by boingyzain ( 739759 ) on Saturday November 20, 2004 @02:45PM (#10875617)
    while (1 == 1) { Dig(); Picture(); }
  • George Neville-Neil (Score:5, Informative)

    by cpghost ( 719344 ) on Saturday November 20, 2004 @02:45PM (#10875618) Homepage

    The interviewer George Neville-Neil co-authored "The Design and Implementation of the FreeBSD Operating System" with Marshall Kirk McKusick.

  • by Anonymous Coward on Saturday November 20, 2004 @02:49PM (#10875644)
    Should have just used WinCE, with a few of the productivity apps cut out. Adding a copy of pocket Auto-route, with some Martian JPEGS would have helped navigation as well.
  • Carmack (Score:4, Interesting)

    by mfh ( 56 ) on Saturday November 20, 2004 @02:49PM (#10875648) Homepage Journal
    I would like to think that this article embodies the reasons that John Carmack got into space program development to begin with.

    In the beginning he got into 3d game applications for a similar reason. The cutting edge is always the very outer area of human development, and Carmack makes a good example of a programmer who has taken aim at the edge of what is known to programmers. Maybe Mr. Carmack would care to comment?

    Much like how Id Software develops engines, spacecraft programming is new and innovative; the difference is that spacecraft systems have no room for error.
  • Wait a minute? (Score:4, Insightful)

    by Billly Gates ( 198444 ) on Saturday November 20, 2004 @02:52PM (#10875669) Journal
    Wasn't the OS aboard the rover loaded with problems? Go read the past news from last February here on Slashdot.

    VxWorks does not even offer memory protection, and the RAM can get fragmented. Not to sound trollish, but I would pick something like QNX or NetBSD for any critical app or embedded device.

    It's amazing that the engineers fixed it and got it to work reliably, but a more mission-critical operating system would have been a better choice.

    • Re:Wait a minute? (Score:3, Interesting)

      by cpghost ( 719344 )

      I would pick something like Qnx or NetBSD for any critical app

      Okay, let's turn NetBSD into a real-time OS. Add some "hardening" features like watchdogs etc. Hmm... what should we call it? Perhaps: SpaceBSD?

    • VxWorks does not even offer memory protection, and the RAM can get fragmented. Not to sound trollish, but I would pick something like QNX or NetBSD for any critical app or embedded device.

      I think QNX is a valid alternative. But is NetBSD hard-real-time?
      • NetBSD is not hard real-time. But most applications don't need true real-time behavior. I use it at work for a couple of projects and find it more satisfactory than VxWorks, Linux, or (god forbid) Windows XP Embedded.

        Also, dynamic memory allocation makes for... "interesting" testing "opportunities". That's not to say I've never done it, only that I sort of wish I hadn't

        • NetBSD is not hard real-time. But most applications don't need true real-time behavior.

          Um, yeah, but we're talking about spacecraft here. I think that qualifies as an application that needs true Real Time behavior.
    • Re:Wait a minute? (Score:5, Insightful)

      by neonstz ( 79215 ) * on Saturday November 20, 2004 @03:20PM (#10875822) Homepage
      VxWorks does not even offer memory protection, and the RAM can get fragmented.

      Dynamically allocating memory is usually a big no-no in real time systems.
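The usual alternative in real-time code is to reserve all storage up front and hand out blocks from a fixed pool, so allocation can neither fragment nor fail unpredictably. A minimal sketch, illustrative only (a real RTOS pool, e.g. a VxWorks memory partition, hands out fixed-size blocks in C):

```python
# A fixed-pool allocator: all storage is reserved once at startup, so
# allocation cannot fragment the heap, and exhaustion is a deterministic,
# testable condition rather than a surprise out-of-memory mid-mission.

class FixedPool:
    def __init__(self, nblocks):
        self.free = list(range(nblocks))   # indices of free blocks
        self.storage = [None] * nblocks    # the preallocated blocks

    def alloc(self):
        if not self.free:
            return None                    # predictable failure mode
        return self.free.pop()

    def release(self, idx):
        self.storage[idx] = None
        self.free.append(idx)

pool = FixedPool(2)
a = pool.alloc()
b = pool.alloc()
print(pool.alloc())      # pool exhausted: returns None, never fragments
pool.release(a)
print(pool.alloc())      # the released block is reusable immediately
```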

    • Re:Wait a minute? (Score:4, Insightful)

      by RAMMS+EIN ( 578166 ) on Saturday November 20, 2004 @03:41PM (#10875916) Homepage Journal
      ``VxWorks does not even offer memory protection and the RAM can get fragmented.''

      Why would you even want memory protection in a system like this? Memory protection is great to prevent crappy apps on your PC from doing too much damage, but in a system like the Rover it's pure overhead.

      As for RAM getting fragmented, it all depends on how you program it. Often you don't even need memory allocation, so you won't have any problem with fragmentation.
      • Hell yes! (Score:5, Insightful)

        by devphil ( 51341 ) on Saturday November 20, 2004 @05:38PM (#10876651) Homepage


        Why would you even want memory protection in a system like this? Memory protection is great to prevent crappy apps on your PC from doing too much damage, but in a system like the Rover it's pure overhead.

        Exactly!

        The problem is that most /.ers are used to thinking of an OS as something that needs to run any arbitrary program under any arbitrary conditions and survive any arbitrary crash in those programs.

        For a Rover, none of those are true. They know exactly what code is going to be run. They know exactly where it's going to sit in memory. And they test it. (This is the part that /.ers can't quite understand.) They test these programs far more rigorously than any bog-standard x86 Linux OSS program ever gets tested. Those programs have their problems, but they will be mistakes in logic (metric/imperial conversions, or thread priority inversions), not segfaults because of derefing a null pointer.

        I wonder how many undergrad CS degree programs still teach correctness proofs? Not "yeah, I ran it lots of times and it didn't crash," but "I ran it 100,000 times with 100,000 different inputs, all random, and it didn't crash, but while it was running I also sat down and mathematically proved the code is correct."

        Embedded programming is just plain different from "normal" programming. It's usually a mistake to try to generalize from one to the other.

        (All that said, the next version of VxWorks is advertised to optionally support a "traditional Unix" process model, and I think protected memory boundaries are one of the features. In case your embedded app needs to run arbitrary third-party software which probably doesn't get stress-tested at JPL :-), you can turn all that stuff on and live with the overhead.)

        • Not "yeah, I ran it lots of times and it didn't crash," but "I ran it 100,000 times with 100,000 different inputs, all random, and it didn't crash, but while it was running I also sat down and mathematically proved the code is correct."

          In our work we have to use a component supplied by what is essentially a parent company. One high level developer/manager is very proud of the fact that he runs tests with random input. The component often still has serious, basic problems when we get it. I'm not convi


          • I mostly agree with you, but was trying to make a rhetorical point. :-) As you say, proper testing does more than just spew bits at the input pipe.

  • #include <stdio.h>
    int main() {
        printf("Hello World!\n");
        return 0;
    }

    marsrover.c: 3: You are no longer on the planet Earth.
  • by EqualSlash ( 690076 ) on Saturday November 20, 2004 @03:05PM (#10875731)

    Remember, some time ago Spirit was continuously rebooting due to a flash memory problem. The use of the FAT file system in the embedded system was partly responsible for the mess.

    The problem, Denise said, was in the file system the rover used. In DOS, a directory structure is actually stored as a file. As that directory tree grows, the directory file grows, as well. The Achilles' heel, Denise said, was that deleting files from the directory tree does not reduce the size of the directory file. Instead, deleted files are represented within the directory by special characters, which tell the OS that the files can be replaced with new data.

    By itself, the cancerous file might not have been an issue. Combined with a "feature" of a third-party piece of software used by the onboard Wind River embedded OS, however, the glitch proved nearly fatal.

    According to Denise, the Spirit rover contains 256 Mbytes of flash memory, a nonvolatile memory that can be written and rewritten thousands of times. The rover also contains 128 Mbytes of DRAM, 96 Mbytes of which are used for data, such as buffering image files in preparation for transmitting them to Earth. The other 32 Mbytes are used for code storage. An additional 11 Mbytes of EEPROM memory are used for additional program code storage.

    The undisclosed software vendor required that data stored in flash memory be mirrored in RAM. Since the rover's flash memory was twice the size of the system RAM, a crash was almost inevitable, Denise said.

    Moving an actuator, for example, generates a large number of tiny data files. After the rover rebooted, the OS's heap memory would be a hair's breadth away from a crash, as the system RAM would be nearly full, Denise said. Adding another data file would generate a memory allocation command to a nonexistent memory address, prompting a fatal error.

    Source: DOS Glitch Nearly Killed Mars Rover [extremetech.com]

    BTW, there is another interview with Mike Deliman [pcworld.com] I read some time ago in PCWorld.
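The directory-file behavior described in the quoted article is easy to model: deleting a FAT file marks its directory entry (0xE5 in real FAT) rather than shrinking the directory, so a burst of short-lived files leaves the directory permanently large. A toy sketch, not real FAT code:

```python
# A toy model of FAT directory growth: delete() marks an entry as
# deleted instead of removing it, so the directory file never shrinks.

DELETED = object()   # stand-in for FAT's 0xE5 deleted-entry marker

class FatDirectory:
    def __init__(self):
        self.entries = []

    def create(self, name):
        # reuse a deleted slot if one exists, else grow the directory
        for i, e in enumerate(self.entries):
            if e is DELETED:
                self.entries[i] = name
                return
        self.entries.append(name)

    def delete(self, name):
        self.entries[self.entries.index(name)] = DELETED

    def size(self):
        return len(self.entries)   # never shrinks

d = FatDirectory()
for i in range(100):
    d.create(f"data{i}.dat")
for i in range(100):
    d.delete(f"data{i}.dat")
print(d.size())   # all 100 files deleted, yet 100 entries remain
```

New files do reuse deleted slots, so the directory stops growing only once creations stay below the high-water mark already reached; with the directory mirrored into limited RAM, that high-water mark is what nearly killed Spirit.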

  • by Dominic_Mazzoni ( 125164 ) on Saturday November 20, 2004 @03:08PM (#10875748) Homepage
    For those who are wondering, JPL is very aware of the shortcomings of VxWorks and has seriously considered other alternatives for every mission. Keep in mind that the choice of OS has to be made years before launch, so at the time the OS for the 2004 Mars Rovers was decided on, many options that are possibilities today were not contenders. Also keep in mind that in spite of many shortcomings, VxWorks is a known quantity. JPL has been working with it for years and had a lot of in-house expertise with it.

    There are a few groups at JPL that have been actively experimenting with other options, including RTLinux and a few different variants of hard-real-time Java (basically Java with explicit memory management and no garbage collection).
    • I recently started my first project with VxWorks about 30 days ago. I can honestly say that I'm not the least bit impressed.
      • Yeah, when I was doing RT stuff at my former employer we made a pretty unanimous decision to not even get close to WR's stuff. Not that it couldn't do the job. They had some funky licensing thing that interfered with how we wanted to use the code. We ended up looking at a linux variant that had some tweaks to the tasking algorithm that fit perfectly with what we wanted. I think we ended up actually going in-house because one of the engineers we had programmed some code for an earlier project and we foun
  • by adeyadey ( 678765 ) on Saturday November 20, 2004 @03:13PM (#10875778) Journal
    you are in a red rocky landscape..

    GO NORTH..

    you are in a red rocky landscape..

    DIG.

    ok. you see some red sand.
    it is getting dark.

    GO NORTH..

    you were eaten by a grue.
  • by nil5 ( 538942 ) on Saturday November 20, 2004 @03:25PM (#10875847) Homepage
    I worked on a satellite mission where we had some trouble. Due to an error the satellite wound up pointing 16 degrees away from the sun in a higher-than-expected orbit of 443 miles (714 kilometers) above Earth.

    The misalignment meant the spacecraft was unable to look directly at the sun's center to record the amount of radiation streaming toward Earth. To accurately measure sunlight, the darn thing needed to be pointed to within a quarter of a degree of dead center.

    It took about four and a half months to fix that problem, due to uplink difficulties. Ground controllers first had to slow the spacecraft's spin in order to transmit a series of software "patches" and then gradually speed it up to see how well the commands worked.

    Then things were fixed.

    Moral of the story: it is a tough job indeed!
  • Marketing crap (Score:4, Insightful)

    by jeffmock ( 188913 ) on Saturday November 20, 2004 @03:54PM (#10875994)
    Okay, I've got to call foul on this WindRiver marketing ploy. They're trading on the last days of being able to get away with saying that something mystical and special and super-high quality is going on behind the walls of trade secret and proprietary software.

    I used VxWorks on a reasonably large project several years ago. It's a fine piece of work, but nothing special; it's nowhere close to the quality of a recent Linux kernel.

    About half-way through our project we developed a need for a local filesystem on our box. We bought a FAT filesystem add-on from Wind River that was of annoyingly poor quality: lots of bizarre little problems, memory leaks, and of course no source to look at. In the end we didn't use it; we put together our own filesystem from freely available sources.

    When I read the articles about vxworks filesystem problems nearly borking the entire Mars rover mission I laughed and laughed. I'm sure that it was the same crappy code (although I don't really know for sure).

    For me it's a case study in why you shouldn't use closed source software: you can't evaluate the quality of the code on the other side of the trade-secret barrier, and you wind up trusting things like glossy brochures.

    jeff
    • Re:Marketing crap (Score:2, Interesting)

      by Anonymous Coward
      Well said! And ditto.

      I do embedded software for a living as well, and run like heck away from any project involving WindRiver.

      WindRiver is great for those people who don't know what they are doing in the embedded space. And it's useful as a red flag for telling one as such.

      But for people who actually know what they are doing, and who actually do understand OS's, Linux solutions are a far better choice. The time-to-market is absolutely unbeatable; as well as all the choices that one has in order to get a
    • Ditto (Score:3, Insightful)

      by wowbagger ( 69688 )
      I as well have had the misfortune to pick WindRiver as the core OS for my project, and have had no end of problems.

      Part of the problem in my case was that VxWorks is for smaller embedded systems, which my project is NOT. I need fast disk storage, I need graphics, I need networking, I need things that VxWorks just doesn't provide very well.

      Were I able to change one decision about the design of my project, I would have gone with Linux instead.

      WRS *used* to have something to offer, in that they provided a r
      • Huh? (Score:4, Insightful)

        by devphil ( 51341 ) on Sunday November 21, 2004 @02:54AM (#10879426) Homepage


        I need fast disk storage, I need graphics, I need networking, I need things that VxWorks just doesn't provide very well.

        "...and even though I chose the wrong tool for the job, it's still the tool's fault for not doing everything I need."

        • Re:Huh? (Score:4, Informative)

          by wowbagger ( 69688 ) on Sunday November 21, 2004 @10:23AM (#10880514) Homepage Journal
          It's called:

          "WindRiver portrayed their tool as being able to do those things, thus I made the wrong decision based upon the false claims of the manufacturer."

          You see, WRS would have you believe that VxWorks has a reasonable disk subsystem, even though it has no option to use DMA for data transfers, a fact they conveniently don't make available.

          WRS had a port of XFree available for VxWorks. However, they did not release the source for it, and they stopped supporting it, and thus it fell behind in support for the video chips now in use. Of course, they did not inform developers of their impending decision to drop support until it was too late.

          WRS has a TCP/IP stack. However, they did NOT have support for DHCP, nor DNS, and on certain platforms their stack has gross errors (e.g. packets being shifted by one byte so that when they reach the application they are corrupted).

          WRS claims to have board support packages so that you don't have to develop them. They don't mention that they don't support half the hardware on most boards (e.g. they don't enable the cache on XScale processors, halving the speed of the processor).

          WRS claimed they would support development under Linux as a host OS "within a couple of months" - that was back in 1998. They started supporting development under Linux this year - and then not very well.

          Yes, I chose the wrong tool for the job - because WRS did not correctly represent their tool's capabilities, and there was no other way to evaluate them.
  • by relaxrelax ( 820738 ) on Saturday November 20, 2004 @03:56PM (#10876018)

    If that was open source, there are so many space nerds who are programmers that flaws of that magnitude would never get by the army of testers.

    Many would help out simply because hey, it's the *space program* and that's good enough for them. Others would want their names listed next to some obscure bug fix on a NASA site; it's good for the ego, or for your CV.

    Simply put, even a binary distribution of that code would allow unlimited free testing for crashes. Why wouldn't NASA do it?

    Because there are still people in Washington who think code mysteriously gets damaged by being public - even if such code isn't modifiable by the public who reads it.

    This is evidence of advanced cluelessness in Washington, and maybe independent anti-free-source advocates (spelled M-i-c-r-o-s-o-f-t) are the cause.

    But I've learned not to bash. Never explain by Microsoft malice what could be explained by stupidity. Such as using DOS on a space thing...
    • Or perhaps because NASA doesn't own the code -WindRiver does.
    • Uhhh... and exactly how are you going to allow people to test "spaceware"? Last I checked, nobody owns their own satellite system. You just don't dump some satellite code onto your PC and "test" it.

      Open Source is great and all, but it's hardly the answer to everything.

      • It's probably worth adding that from the TFA it seems that VxWorks code is shared between different spacecraft programs. So the folks who can test on real satellites are sharing their patches, fixes, and features.
      • by johannesg ( 664142 ) on Saturday November 20, 2004 @08:01PM (#10877511)
        You just don't dump some satellite code onto your PC and "test" it.

        Sure you can. We [terma.com] make that kind of software. The reason you won't ever see it as open source is because the various instruments on the spacecraft are covered by confidentiality agreements (or worse, in case of military hardware). And as hardware goes it is typically rather obscure stuff, requiring significant domain knowledge as well to emulate correctly.

        Another issue is that these systems are rather CPU-intensive - we have a 16-CPU box for the spacecraft instruments plus a dedicated PC to emulate the flight computer itself. But you could run it on simpler hardware if you are willing to run at less than realtime speed.

        Interestingly, the closest we ever get to seeing the actual flight software is binary images of it. While that is a lot closer than most slashdotters are likely to get, it is still far removed from being able to do something useful with it.

        Of course the other good reason why this isn't going to be open source is because of price. For details you should really contact a salesperson, but let me give you a clue: (raises little finger to mouth) "Mwuhahaha!" ;-)

        • Your products sound like cool stuff.

          It just adds to my point that Open Source of space software just isn't really viable. Your 1 MILLION DOLLARS! ;-) software package isn't going to be readily available to your average OSS hacker.

          I suppose I should rephrase my statement... You just don't dump some satellite code onto your average OSS hacker's PC and "test" it.
    • by ragnar ( 3268 )
      I agree about opening the source, but for entirely different reasons. It would be an ideal teaching aid in a real time CS course or for enthusiasts. Although it might be possible to contribute bug fixes, I wouldn't count on it. From what I've read and seen of open source projects, they tend to gather contributors for features much more readily than for bug fixes, especially the variety that are very hard to reproduce or require formal proof along with the fix.
      • There are several open source RTOS's out there, if you want to provide some kind of educational aid. Off the top of my head, I can think of RTEMS, eCos, and RT-Linux. I've seen several real-time courses use the MicroC OS, but I don't recall if it is open source or not. The odds of WindRiver (NASA doesn't own the code) open sourcing VxWorks are pretty minimal I would imagine.
      • "Hi, I just wanted to let you know that last night I checked in a patch for the space shuttle that will let it make an extra loop around the moon to drop off some supplies for a buddy of mine who is stuck there for a few weeks. Hope you don't mind!"
        • "Hi, I just wanted to let you know that last night I checked in a patch for the space shuttle that will let it make an extra loop around the moon to drop off some supplies for a buddy of mine who is stuck there for a few weeks. Hope you don't mind!"
          You mean this guy [dangertheater.com]?
    • If that was open source, there are so many space nerds who are programmers that flaws of that magnitude would never get by the army of testers.

      Almost certainly not, as none of that army of geeks would have the specialized hardware that the Rovers use.

      Many would help out simply because hey it's the *space program* and that's good enough for them.

      Few would accomplish anything, as few would bother to study, and learn, and analyze the structure of the program.

  • by voodoo1man ( 594237 ) on Saturday November 20, 2004 @04:09PM (#10876098)
    In 1998-2001, the JPL successfully flew the Deep Space 1 [nasa.gov] spacecraft. One of the systems on board was the Remote Agent [nasa.gov], a fully autonomous spacecraft control and guidance system. The software was written entirely in Common Lisp, and parts were verified in SPIN [spinroot.com] (there is an interesting paper [psu.edu] written on the verification process, along with an informal account [google.com] by one of the designers), which yielded the detection of several unforeseen race conditions. The parts that were not verified were thought to be thread-safe, but unfortunately this proved mistaken as a race condition occurred in-flight. With the help of the Read-Eval-Print Loop and other Lisp debugging facilities, the bug was tracked down and fixed in less than a day, and Remote Agent went on to win NASA's Software of the Year Award.

    Perhaps not surprisingly for anyone who has heard about the management at NASA, C++ was selected for the successors to the Remote Agent on the grounds that it is supposed to be more reliable (this despite the fact that the Remote Agent was originally to be developed in C++, an effort that was abandoned after a year of failure). This caused more than a few people to be upset [google.com] (including a very personal account [flownet.com] by one of the aforementioned designers). Clearly the debugging facilities of Common Lisp are far superior to static systems like C++, something which is very useful in diagnosing unexpected error conditions in spacecraft software (read the first question on p. 3 of the interview to see what pains the JPL staff went through to adapt similar, ad-hoc methods to VxWorks). It's also clear from this interview (question: "How is application programming done for a spacecraft?" Answer: "Much the same as for anything else: software requirements are written, with specifications and test plans, then the software is written and tested, problems are fixed, and eventually it's sent off to do its job.") that NASA has in no way tried to adapt formal verification methods for its software, preferring instead to rely on the "tried and true" (at failing, maybe) poke-and-test development "methods."

    Clearly, formal verification to eliminate bugs before critical software is deployed, combined with deployment on a system with advanced debugging facilities, is a win for spacecraft software, and should be adopted as the standard model of development. Unfortunately, as in many other software development enterprises, inertia keeps outdated, inadequate systems going despite a strong record of failure.
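    The kind of race condition SPIN hunts for can be demonstrated with a toy exhaustive interleaving checker. This is a hypothetical sketch of the idea, not SPIN itself: two processes each do an unsynchronized read-modify-write of a shared counter, and enumerating every interleaving exposes the lost-update state that ordinary testing might never hit.

```python
# Enumerate every interleaving of two read-modify-write processes on a
# shared counter and collect the reachable final values. A toy version
# of the exhaustive state exploration a model checker like SPIN does.
def interleavings(a, b):
    if not a:
        yield b
    elif not b:
        yield a
    else:
        for rest in interleavings(a[1:], b):
            yield [a[0]] + rest
        for rest in interleavings(a, b[1:]):
            yield [b[0]] + rest

# Each process: read x into a local, then write local + 1 back.
P1 = [("read", 1), ("write", 1)]
P2 = [("read", 2), ("write", 2)]

finals = set()
for schedule in interleavings(P1, P2):
    x, local = 0, {}
    for op, pid in schedule:
        if op == "read":
            local[pid] = x
        else:
            x = local[pid] + 1
    finals.add(x)

# The intended result is 2, but the lost-update interleaving
# (both reads before either write) yields 1.
assert finals == {1, 2}
```

    A single test run will almost always see the benign schedules; only exhaustive exploration guarantees the bad one is found, which is exactly the argument for verifying flight software formally.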

    • by GileadGreene ( 539584 ) on Saturday November 20, 2004 @04:58PM (#10876399) Homepage
      NASA has had an active formal methods/formal verification program for a number of years, located at NASA Langley [nasa.gov]. They mostly do research, but have worked on a few practical applications, mostly in the shuttle program. Additionally, JPL recently (2003) set up the JPL Laboratory for Reliable Software [nasa.gov], which is chartered to look into formal verification among other things. The lead technologist in the LaRS is none other than Gerard Holzmann [spinroot.com], the man behind SPIN.

      Having said all of that, I'll agree that formal verification at NASA is in its infancy, and is facing an uphill battle for acceptance (witness how long the Langley group has been trying to push formal methods). It'll be interesting to see what happens with JPL's LaRS.

  • Why, in the 21st century, is it necessary to fit something like the Mars rover code in 2MB of memory? If something like a Gameboy Advance or a PDA can hold 64 MB to a couple of gigs, what is holding NASA back, with their gigantic budget and all?

    I can't imagine it would be the cost of the memory... I mean I know it costs much much more to make chips to a very strict specification, but if you are already producing so few units, isn't your cost of production going to be extraordinarily high whether you are making 64KB
    • Re:Out of curiousity (Score:5, Informative)

      by The Vulture ( 248871 ) on Saturday November 20, 2004 @05:20PM (#10876531) Homepage
      The problem is that technology moves too quickly for it to get "NASA certified". When you send something up in space where making changes to it will be difficult, you need something that is known to be robust and reliable, that has several years of testing.

      Last I read (maybe a year ago?), NASA still used 386 and 486 chips because they didn't generate a lot of heat (compared to today's machines) and could be made to withstand higher than normal forces (through extra padding on the device, I imagine). They were more resilient to the issues you might see in space than newer processors.

      Simply put, if they put the latest CPU with tons of RAM in there, and it fails, how are they going to fix it?

      -- Joe

      • ...the memory inside the Gameboy Advance and whatnot isn't radiation-hardened.

        The grandparent poster needs to RTFA, and note what had to be done to protect circuits from Marvin the Martian's cosmic rays. The chips get physically bigger (sometimes a lot bigger), and that builds up quickly.

    • Re:Out of curiousity (Score:4, Informative)

      by arnasobr ( 134440 ) on Saturday November 20, 2004 @06:26PM (#10876928)
      Feature size. The smaller the feature (think gate level), the higher the chance it will be ruined by random radiation exposure. And that's the one-sentence summary of the "Radiation Effects on Microelectronics" class I took about 7 years ago.

      Smaller memory capacity for a given surface area implies larger feature size.

      By the way, the class I took was 1-on-1 with Prof. Stephen McGuire at Cornell. Extremely cool guy.
    • by grozzie2 ( 698656 ) on Sunday November 21, 2004 @06:45AM (#10879982)
      This just illustrates why /. folks are typically not actually involved in spacecraft design and deployment. If you were, you would know the real reason for this, and wouldn't ask the question (which is not a dumb question btw).

      In the real world, once you get up in the vicinity of the Van Allen belt, you get into hard radiation. If you use typical modern high density chips, with 0.15 micron die spacing, a single particle will short/damage half a dozen traces on the chip in a single impact. If you use really old stuff, with 5 micron die spacing (and higher), a particle will be too small to get multiple traces in a single impact. You may still get a single bit flip, but ECC will catch that, and you can deal with it. In the former case of a high density die, the failure would end up being catastrophic when a particle impacts the chip. There are practical limits to the size of die that can be mounted on a carrier, and the trace density defines the capacity of that die. Yes, it's possible to cram 32 meg of RAM into that space, but it won't last more than a few minutes in a hard radiation environment. Take that same silicon wafer, using 5 micron traces, and it'll last years exposed to the same environment, but it'll only have 1 meg of usable RAM locations due to the decrease in density. You can't just throw more of them on, because then power consumption becomes the issue: in overly simplified terms, the chip is going to use power relative to its surface area, no matter whether it has 1 or 32 meg of addressable locations in that area. Clock frequency is the other major contributor to power consumption, hence it's not uncommon at all to see space hardware measured in kHz rather than MHz and GHz like most folks are used to, and there are damn good reasons to leave it that way.

      An all-up spacecraft platform has hard limits on physical size (constrained by the physical limits of the launcher), and hard limits on total mass, determined by the launch vehicle's capability to the final trajectory required. The final design will budget a portion of its mass allowance to power generation, and that power is in turn budgeted to various systems. The folks doing the controllers will have a hard limit on power consumption, another on volume, and a third on mass. Working within those limits, they have to design and deploy a system that is expected to have 99.999999% reliability, operating in conditions more extreme than it's possible to actually simulate on Earth.

      It's a shame, but there is one thing they don't seem to teach in computer science courses anymore: out here in the real world, reality gets in the way of all the theory. Moore's law may well say chips will get faster, and density higher, as time goes on, but it becomes irrelevant when other limiting factors get in the way. Until gamma particles start to shrink, or we come up with an effective way of making sure they don't hit the electronics, 10-year-old and older stuff is going to remain 'state of the art' for use in space. Die density and ability to shield are hard limitations; you can't get past them, and you won't see more modern equipment going into the reaches of space till those limitations are overcome. That's not likely to happen in the foreseeable future; the research in that area is all 'nuclear research' and that's all out of vogue these days. It's gonna take a couple more generations, or a severely critical power shortage, to change that.
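      The "ECC will catch a single bit flip" point is easy to demonstrate with a textbook Hamming(7,4) code (a sketch only; real rad-hard memories use wider SECDED codes, but the principle is the same):

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits. Any single
# flipped bit in the 7-bit codeword is located by the syndrome and
# corrected. Bit positions follow the classic 1..7 layout with parity
# bits at positions 1, 2, and 4.
def encode(d):                       # d = [d0, d1, d2, d3]
    p1 = d[0] ^ d[1] ^ d[3]          # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]          # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error pos
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

# Every single-bit upset in every 4-bit word is corrected.
for word in range(16):
    data = [(word >> i) & 1 for i in range(4)]
    code = encode(data)
    for hit in range(7):             # simulate one particle strike
        damaged = list(code)
        damaged[hit] ^= 1
        assert decode(damaged) == data
```

      The trade-off the parent describes is visible here too: 3 extra bits per 4 data bits is the price of surviving the strike.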

  • Spacecraft (Score:4, Funny)

    by sheetsda ( 230887 ) <doug@sheets.gmail@com> on Saturday November 20, 2004 @04:56PM (#10876384)
    Writing Code for Spacecraft

    My first thought was "Spacecraft? is that a new Starcraft clone I hadn't heard about?". It was then I realized I've been hanging out on the Game Programming Wiki [gpwiki.org] too much lately.
  • I'll leave out the names to protect the guilty.

    About five years ago, I worked for a major test equipment manufacturer who was contracted to deliver a test system for POTS lines (which could eventually do ADSL prequalification) to a national telco in a major European country. The idea was to test every POTS line in the system (millions of them) every night to detect early signs of degradation so repair crews could be dispatched before dialtone was completely lost.

    As you can imagine, this involved a distributed system of test heads in each central office, networked back to a central command and control site. The system worked well, but had one flaw: downloading new firmware to the test heads was fraught with problems, and often led to the test head "locking up", even though a backup copy of firmware was always present, along with a hardware watchdog timer (though it was possible to lock out the watchdog interrupt, particularly when reprogramming flash, so it was a less than perfect watchdog). In these situations, one had to dispatch a "truck roll" to the affected central office, and replace EPROMs by hand.

    Needless to say, the customer was pissed. More worrying was that even if we fixed the software download problem (which we were unable to reproduce in the lab), we'd still be paying for truck rolls all over the country. This was a not insignificant amount of money.

    Management frittered away time, instead of authorizing a root cause analysis, by requesting tweaks to TCP/IP operating parameters, and testing to see if the problem was getting better or worse. This did not prove illuminating, time was wasted, and the customer was getting royally angry.

    Finally, a small team of us was permitted to undertake a root cause analysis to find and fix the problem: the engineer responsible for the embedded flash file system, the telecom engineer on the control side, and me, responsible for the embedded O/S and TCP/IP stack (inherited from the supplier of the embedded O/S). We wanted a month. We got two weeks. Remember, deploying experimental software to live COs requires so many layers of approval it isn't funny, and we were worried that would be our biggest bottleneck.

    Finally, the controller telecom engineer was able to reproduce the problem, by attempting to download software from our controllers to deployed equipment in a single central office (getting permission was a feat in itself -- while there was little danger of affecting telephone service, this was a live CO).

    The problem was clear: the data network was slow (9600 b/s over an X.25 PVC, carrying PPP-encapsulated TCP/IP), resulting in the use of large MTUs to minimize packetizing overhead (latency wasn't an issue - throughput was). Because of the way the controller's TCP/IP stack worked, it misestimated the packet/ack round-trip time: it used a one-byte payload for the first packet, and full MTUs after that. The resulting packet ACK timeouts and retransmissions exposed an inconsistency between the controller and embedded TCP/IP stacks that caused the embedded system to lock up.
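    The misestimation is easy to reproduce arithmetically (all link numbers below are illustrative, not the actual product's values): on a 9600 b/s link, serialization delay dominates the round trip, so a retransmission timeout seeded from a one-byte probe is far too short for a full-MTU segment.

```python
# Sketch of why seeding the RTT estimator with a tiny packet breaks on
# a slow serial link. The RTO follows RFC 6298's first-sample rule
# (SRTT = R, RTTVAR = R/2, RTO = SRTT + 4*RTTVAR), with the usual
# 1-second floor omitted so the effect stays visible.
LINK_BPS = 9600
PROP_DELAY = 0.05                    # assumed propagation + ack time (s)

def rtt(payload_bytes, headers=40):  # 40 = typical TCP/IP header bytes
    serialization = (payload_bytes + headers) * 8 / LINK_BPS
    return serialization + PROP_DELAY

# First sample: the one-byte probe packet.
r = rtt(1)                           # ~0.084 s
srtt, rttvar = r, r / 2
rto = srtt + 4 * rttvar              # ~0.25 s

# Next segment: a full 1460-byte MTU payload.
full = rtt(1460)                     # ~1.3 s just to serialize + ack
assert full > rto                    # spurious timeout -> retransmission
```

    Each spurious timeout triggers a retransmission of a packet that was never lost, which is the storm that tripped the inconsistency between the two stacks.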

    Great. Now, how to fix it?

    The fix wasn't a big deal (I implemented a fix in the embedded TCP/IP code since we didn't have source to the controller TCP/IP stack), but deploying it was: remember we couldn't download the code successfully, and we didn't want to pay for a truck roll.

    At this point, I proposed something daring: download a small patch, in as few packets as possible (we could send three full MTUs safely), which would patch the existing code in place - just enough to reliably download a complete replacement.

    The thought of "self-modifying code" freaked management out to no end: it went against every rule in the book. But all three of us stood our ground: the only other alternative was a truck roll to each central office in the country. Reluctantly, we were allowed to proceed with that fix.

    At this point, we had about ten days left. I had managed to get approval to pipeline the dev and tes

    • At least they gave you a favorable writeup. They could have put something ambiguous, like, "Was involved with the software update problem." :)
      • I've had asshole bosses too: I remember one idiot who asked if I could develop communication system over serial and "ethernet" hardware.

        No brainer: TCP/IP (already supported) over Ethernet or PPP (serial). I quoted 2-3 weeks to implement the application layer stuff over that.

        Idiot insisted that "for an embedded system", TCP/IP was "too fat" of a footprint: replace it with a home grown solution: we "only" had 128 MB RAM, "after all".

        My protests that anything I could do in three weeks would be unlikely

    • You cite two /. articles in the "Publications" section of your resume. What kind of response has this received in interviews?
    • Are you telling me that the company you work(ed) for was partly responsible for <shouting>ONE OF THE MOST ANNOYING THINGS I EVER SUFFERED FROM</shouting> ?

      Some years ago, I started being woken up haphazardly by the phone ringing. The day of the month was random, the day of the week was random, the time of the night was random between 2 and 5 AM, but it sure freaked me, and my wife, out.

      Calls to the telco had no effect. They tested (or at least pretended to) the line and said: "Oh no Sir, everyt
  • by peter303 ( 12292 ) on Sunday November 21, 2004 @02:47PM (#10881804)
    It was around the first or second month of operation this year that Spirit was unusable for a couple of weeks due to an OS failure. The symptom was that Spirit tried to reboot itself about 20 times in a row - a default practice if something drastic happens. It was traced (according to the rumor mill) to flash memory overflow. Supposedly the VxWorks file management system improperly updated its flash memory free-inode list, so the memory appeared to run out of space.

    The nice thing about software is that JPL was able to upload a patch and get both rovers working properly again. They reconfigured the Galileo mission to bypass the broken high-gain antenna and use the hundred-times-slower low-gain antenna with software patches, and achieved most of the mission objectives.
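    The rumored free-list bug can be sketched as an allocator whose delete path forgets to return blocks (a purely hypothetical model of the reported symptom, not VxWorks code):

```python
# Toy flash allocator with the rumored bug: deleting a file never puts
# its block back on the free list, so the device "fills up" even though
# the data is gone. Entirely hypothetical; block counts illustrative.
class BuggyFlashFs:
    def __init__(self, blocks=256):
        self.free = list(range(blocks))
        self.files = {}

    def write(self, name):
        if not self.free:
            raise MemoryError("flash appears full")
        self.files[name] = self.free.pop()

    def delete(self, name):
        self.files.pop(name)    # BUG: block never rejoins self.free

fs = BuggyFlashFs(blocks=256)
for day in range(256):          # steady churn of small data files
    fs.write(f"sol{day}.dat")
    fs.delete(f"sol{day}.dat")

# No files exist, yet the next write fails -- cue the reboot loop.
try:
    fs.write("one_more.dat")
    failed = False
except MemoryError:
    failed = True
assert failed and not fs.files
```

    If the flight bug was anything like this, a patch to the free-list bookkeeping (plus a one-time cleanup) is exactly the kind of fix that could be uploaded after the fact.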
