Realtime OS Jaluna

rkgmd writes "Jaluna-1, a software component suite based on the respected Chorus realtime OS, is now available in open-source (MPL-derived license) form. Jaluna, the company behind this, is a spin-off from Sun created to promote and develop Chorus, and consists of many developers from the original Chorus team before it was acquired by Sun. Chorus developed one of the earliest successful microkernel-based RTOSes (it could even run a parallel, distributed Unix in real time on Inkos transputers in 1992). Lots of good research papers here, and a link to the original newsgroup announcement."
This discussion has been archived. No new comments can be posted.

  • QNX (Score:3, Interesting)

    by reitoei1971 ( 583076 ) <reitoei@ g m x.net> on Friday November 01, 2002 @09:21AM (#4577793)
    How does this compare with QNX?
    • Re:QNX (Score:5, Informative)

      by wfmcwalter ( 124904 ) on Friday November 01, 2002 @10:36AM (#4578278) Homepage
      At least as far as the kernels are concerned, ChorusOS and QNX Neutrino are quite similar - both are realtime, microkernel-based, protected-mode embeddable OSes, available for a number of microprocessors and embedded boards. Most of this (with the exception of the microkernel part) is also true for Wind River's VxWorks AE RTOS.

      VxWorks' and QNX's advantage over ChorusOS was a combination of wider BSP support and very mature toolchains. ChorusOS's big advantage was that it was specifically targeted at distributed applications - this is an issue for applications that combine real-time performance with data-centre-like reliability (particularly telco).

      • Re:QNX (Score:2, Interesting)

        by putzin ( 99318 )

        That's true. Although I'm not a big QNX guy, I did work on developing ChorusOS 4.0 about three years ago at Sun's Grenoble facility. I can say that Chorus had a much better memory management model than AE does, and it was very stable on PPC embedded platforms. It also did C++ more natively than other RTOS offerings.

        I know, C++ in the embedded world? C++ in an OS? Well, when done with some forethought and a brain, it's not an altogether bad idea.

        I also think its really big failing was that Sun pretty much never put ANY effort into promoting or pursuing this outside of some half-hearted attempts to get it into the auto and cellular infrastructure industries. They pretty much let it die a slow, painful death. Couple that with the less-than-warm relationship between Sun's team and the original Chorus guys, and the requirements its few customers had, and you have a recipe for failure. It's good to see it came back. Time to dig out the blades at work and see if I can get it running again. WooHoo, nerd work!

  • by carl67lp ( 465321 ) on Friday November 01, 2002 @09:22AM (#4577802) Journal
    But it's really too bad that my university doesn't teach this stuff.

    It's something I realized the other day: we have so many advances in the field of computer science, such as Jaluna, and yet our centers of learning don't touch them. In fact, Java isn't even a core requirement in my plan of work!

    When is it that we'll finally be able to have a good environment for learning all of these spectacular technologies?
    • Java is somewhat functional and supposedly portable (until M$ and everybody else added to it), but it's still pretty much C-based. I would hope your school teaches C as a requirement (and down with Cobol - leave that to the grade-school kiddies), but in reality schools can't keep up; it's an 18-month cycle for the latest and greatest toys. I would like to find one recent grad who knows C down pat and can code off the top of their head. Knowing C well lets you pick up Perl pretty quickly (like a week), Java can be picked up at the compiler, and C++ is a little different - it has way too much OOP garbage (OO is nice for large projects, but most of the world doesn't need it). But college shouldn't teach you the language du jour; your internship should do that (you are planning on interning, aren't you?)

      Now for the more esoteric things: yes, Jaluna and all the RTOS things can be great, but they have a relatively small market, mostly embedded systems. And again, unless you need timing accurate to the ns range you can pretty much ignore it; C will let you program for it, you will be happy, and it will work.
    • by MosesJones ( 55544 ) on Friday November 01, 2002 @09:39AM (#4577917) Homepage
      I know it sounds harsh, but the reality is that most courses worry about Java, C++, or Jaluna, and people become concerned about the technologies rather than the theories.

      The person who knows Knuth will be able to code in any language; the person who doesn't is limited in what they can do. Did your course teach you how to dope a transistor, build an op-amp? An AND gate? A compiler for a processor you design? An OS for that processor?

      And did it do all of these by starting with theory, or was the first lesson "Print hello world"?

      The problem with practical courses is that they teach people to be the bricklayers of the software engineering world. The theory courses teach you to be the engineer, and how to apply theory to practice.

      It isn't about being taught "cool" technologies; it's about being taught the theory behind them. An RTOS is great in that it teaches you about thread death, deadlock, livelock, I/O blocking, and race conditions in a very immediate environment, so when you build a bigger system you automatically avoid those issues, because you understand the right way to work.

      Some universities do teach the cool theory stuff, but most people don't choose to do that, as it's harder. It also makes you less marketable in the first year after graduation because you don't have the buzzwords... 12 months on, however, you'll be roasting everyone.
      • I agree that without a solid theoretical foundation it is difficult to grasp what is actually going on "behind the scenes."

        But my largest point of contention with my university is that all of the courses above the C++ programming level are theory - no hands-on practice anymore, unless you take electives (like Java, or XML, or advanced Web design - and only here do you learn a modicum of Perl). It's to the point now where my resume reflects the fact that I taught myself Linux, Windows 2000/XP (and server derivatives), Perl, PHP, HTML, and more.

        Universities are supposed to keep pace--not have the attitude of "let's worry about all that new-fangled stuff later." If it means refreshing their curriculum every two years, then so be it.

        Of course, I wish I knew then what I know now--I wouldn't have chosen this university at all.
        • Don't agree... (Score:4, Insightful)

          by MosesJones ( 55544 ) on Friday November 01, 2002 @09:55AM (#4577993) Homepage
          I left uni with one programming language (Ada) (okay, and LISP, M68k, Prolog, and other really useful languages!) and one OS (AIX), but I understood how everything worked.

          However, the answer to the question "do you know X?" is always "yes"; the advantage of theory is that it makes the lie true. How long to learn a new language? If you understand the theory, then the only thing that matters is syntax - two days? Three more days to learn the libraries?

          Your resume should say that at university you learnt the following, not "I taught myself", because employers will look for the former wording, not the latter.

          Jesus, though - "Advanced Web Design" where you do Perl. What has the planet come to? Sorry to sound like an old fart, but "Advanced Web Design" doesn't sound like something in a degree; it sounds like a Dummies book. XML as a course? It's a bloody markup language - what is there to learn? XSLT?!

          Learning extra languages or technologies is simple if you just understand the principles. Then you can claim to have known them for years, even though it was only last week that you found out this interview required it. As long as you understand the theory, everything else makes sense... except VB.
          • While I fully agree with you that principles and theory are important, practice and looking into the details of a project or a technology are important as well.

            From that standpoint taking a course in XML can make sense - especially if it is a practice course where you can learn how to apply the technology to real-world problems and deal with all the little details that make up the real world.

          • >"Advanced Web Design" doesn't sound like something in a degree, it sounds like a Dummies book.

            Aye, there's the rub.

            MIT cranks out PhDs that can't do what a dummy can do.

          • :0) (Score:2, Informative)

            by melted ( 227442 )
            You can learn C++ for years and then open a good book about it and understand that you still don't know SHIT. Don't agree? Buy yourself the following books:
            Effective C++
            More Effective C++
            Exceptional C++

            I read these books to humble myself - and other people, when I pull something that I read in them out of my head and even the "experts" start talking crap.

            The more you know, the better you understand how much more you don't know.
        • It sounds like you're right, that you should have chosen a different university. Different people look to get different things out of an education.

          I have trouble imagining a university offering courses called "Java" or "XML". But if these are the types of courses you'd be interested in, then it sounds like a good community college would have been the way to go.

          If, on the other hand, you like the stuff you're learning in the theory courses but just want a chance to try some of it out in programming assignments, then it sounds like you've gotten screwed over pretty badly by your university. In my experience a good portion of courses at reputable universities also include programming assignments for hands-on experience. For example, a course in algorithm design and analysis or in operating systems would include assignments where you implement those algorithms or implement process control.

          So it sounds like whichever of these things you're looking for, you're at the wrong university.

          Of course, it is possible that when you say "theory courses" you do in fact mean real hard-core abstract theory, and that this university is trying to produce all theoretical computer scientists. In that case the university would not be crappy, as I claimed, but would just be different and unique. Although it would definitely not be offering what you sound like you're looking for in that case, either. However, I sincerely doubt that a university with such a daring program would offer electives called "Java" and "XML", so I don't think we have to consider this case.

          • At the college I went to they had a good balance of both theory and application of that theory in a language (be it Java, C++, Assembler, etc).

            Almost every course was taught in a different language, which meant that unless you really learned the theory, you were pretty much screwed. In my time there I had to program in Java, Scheme, Lisp, VB, Perl, C, C++, x86 assembler, and machine code.

            The thing I liked most about it, however, was that all the first-year courses were taught in Scheme. Why Scheme? Because nobody, and I mean nobody, taking intro courses knows it. This allowed us to learn the theory without past programming experience getting in the way.
        • Over at the U of S computer science [usask.ca] department, you learn (if you don't already know them) Java, Eiffel, C, C++, Prolog, MIPS Assembler, OS design, UNIX systems programming, etc. Take a skim through the class descriptions [usask.ca].

          Considering we're such a podunk town, I find it hard to believe you can't find something as good or better than it.
        • Universities are supposed to keep pace--not have the attitude of "let's worry about all that new-fangled stuff later." If it means refreshing their curriculum every two years, then so be it.

          Why? I'm a senior at the University of North Dakota. I keep hearing similar complaints from a number of people. The only really fundamental programming change in the last 25 years has been the introduction of objects. Stacks, queues, lists, trees, networking fundamentals, storage and database fundamentals - the specific technology changes, but the fundamental computer science principles remain the same. Why should you get an education in whatever the technology of the moment is when it's probably going to be dead in five years anyway? A better education in the math and engineering principles behind these (and whatever the new technology is, and whatever the old technology is) will serve you far better in the long run.

    • by BoomerSooner ( 308737 ) on Friday November 01, 2002 @09:42AM (#4577930) Homepage Journal
      College is to give you a foundation of understanding so no matter where the technology goes you will have the ability to learn it due to your broad base.

      If you want tech training go to DeVry/University of Phoenix (what a crock name). This is why a degree is worth more than certifications.

      Plus, no one is stopping you from learning about RTOSes. I'm going through the Minix OS design book by Tanenbaum right now. You can either be spoon-fed like most university students, or you can get up off your ass, show some motivation, and learn it on your own. This is why I never have a problem finding a job while others can't get an interview. People want to hire motivated workers, not someone who'll just toe the line.
      • College is by its very nature a preparation for the working world...that's why job postings have "degree required" in their list of requirements.

        How am I supposed to learn Java in a structured way that will impress my future employer? What looks better, or sounds better? "I learned Java through a two-semester lab-enhanced course!" Or, "I learned Java by reading 'Sams Teach Yourself Java in 21 Days'"?
    • I'm a student at the University of Waterloo. I recently took their real-time course. You would apparently be dismayed to learn that even after taking that course, and spending an average of about 35 hours a week outside of class working on it, I don't know a thing specifically about Jaluna or any other real-life real-time OS.

      Instead of teaching us how to code for Jaluna they taught us enough about the inner workings of RTOS's that we could write a limited one on our own. In fact, we did - hence the incredibly large (yet rewarding) workload.
  • Cool (Score:3, Informative)

    by captainclever ( 568610 ) <rj@NoSPAm.audioscrobbler.com> on Friday November 01, 2002 @09:22AM (#4577803) Homepage
    Another notch for open source; sounds like a useful thing to have lying around for all your real-time OS needs. "Jaluna-1 supports POSIX Real-Time standard applications, and includes state of the art tools for developing, deploying, configuring, and managing embedded systems. Jaluna-1 is being offered as open source, royalty-free software. Jaluna complements its open source software offering with technology and services enabling customers to easily migrate from proprietary Real-Time Operating System (RTOS) based projects to royalty-free Jaluna-1"
  • Inkos? (Score:3, Insightful)

    by 91degrees ( 207121 ) on Friday November 01, 2002 @09:27AM (#4577833) Journal
    Who are they?

    Don't you mean Inmos?
    • Re:Inkos? (Score:3, Informative)

      by PD ( 9577 )
      Whoever marked the parent as a troll is obviously ignorant, and I hope it gets fixed in meta.

      91degrees is correct. Inmos was a British company that released the transputer around 1985; it was specifically designed to be used in a network of interconnected processors. These chips were 32-bit and were programmed in the Occam programming language. Data transfer between nearest neighbors was over a 10-20 megabit serial connection. Each processor had 2K of memory onboard, and the entire transputer array was meant to be controlled from another computer. Typically that meant a transputer array was implemented as a daughter card that fit into a computer such as a PC.

      Don't mod me up, mod the parent post up.
      • Atari also had one that was based on the Transputer from Inmos (4 CPUs), which I don't think was merely an 'add-in' but was designed from the ground up using it as the main CPU.

        But since I never saw one in the flesh, I could be wrong.
        • A company called nCube used to make a line of hyper-cube configuration transputer machines. Ten years ago when I was doing physics at university, there were some in the lab. I think they had specialised C and FORTRAN compilers for them as well as Occam.
        • The best (maddest) brochure that came across my desk at the time was for a German supercomputer where each Transputer card was mounted and glued between two plates of water-cooled aluminium, and the plates were sandwiched together to form a tower.

          It would compute the weather 3 days hence, and power clean your car at the same time.

          Now that's heavy computing power...

    • And they had a real nice development kit too - I should know ;-)
    • Anyone care to read/review/comment:

      http://www.wikipedia.org/wiki/INMOS_Transputer

      BTW turgid, the nCube was not Transputer based, they used their own custom CPU.
    • Re:Inkos? (Score:1, Informative)

      by Anonymous Coward
      Yes, the poster means Inmos. I still have my copy of the "Communicating Process Architecture" and "Transputer Instruction Set." As you'll remember, the Transputer was the ultimate RISC machine - 16 instructions, including one that allows you to 'extend' the instruction set. I first saw them F2F in 1989 at the 3rd Int'l Conference on Hypercubes and Concurrent Computers, which was held in Pasadena. Welch and other Inmos luminaries were there. Each transputer had 4 communication channels, which mapped nicely to the channel construct in Occam, which was the language of choice for them in those days. Occam folks were big into 'proving' the software correct. As I recall, the floating-point unit for the later-model Transputers was written as an Occam program, proved correct, and then translated into hardware that matched said program. (T4xx? T8xx?)
      There were standalone Transputer boxes, but there were also several folks demonstrating plug-in cards that were meant to go into a standard PC. I recall that one vendor had a 4-CPU card.

      Later, there was a C compiler for the processor, since so many folks rebelled against Occam.
  • Open Source? (Score:3, Informative)

    by Hayzeus ( 596826 ) on Friday November 01, 2002 @09:28AM (#4577844) Homepage
    It should probably be mentioned that LOTS of commercial RTOSes provide source. For a lot of applications this is pretty much a requirement. The real distinction here is the royalty-free license, although RT-Linux (which I know almost nothing about) obviously doesn't require royalties.
  • by TrollBridge ( 550878 ) on Friday November 01, 2002 @09:43AM (#4577940) Homepage Journal
    ...as opposed to TimeDelay OS?
    • Windows 3.1 and timeslicing ;)
      • Re:Realtime OS... (Score:4, Informative)

        by mrm677 ( 456727 ) on Friday November 01, 2002 @10:47AM (#4578347)
        Actually Windows 3.1 is closer to an RTOS than you think. In most RTOS's, a task can starve any other task running at the same priority (or lower). Same as the cooperative multitasking model in Windows 3.1.
        • Actually Windows 3.1 is closer to an RTOS than you think. In most RTOS's, a task can starve any other task running at the same priority (or lower). Same as the cooperative multitasking model in Windows 3.1.

          Except for the fact that under an RTOS, a task with a higher priority gets to run when it needs to. Not so in Win 3.1.

          I work on an RTOS every day, and in the past I've actually worked for M$ (shoot me) and did a bug fix on Win 3.1 (yes, I've seen the source; no, I didn't do anything major). There is nothing close about Win 3.1 and an RTOS.
          • I've done cooperative multitasking and I've done preemptive multitasking, and maybe I'm an EE who didn't study all that fancy computer science theory, but I have found cooperative so much easier to work with. And to get any real performance out of preemptive, your threads need to block (wait on signals) in order to pass control to other threads rather than wait for the preemption to come along - so you are programming in a cooperative fashion after all.

            A popular book on the Space Shuttle talked about the flight control software: Rockwell (those hard-headed EEs) wanted to use a simple round-robin scheduler (pretty much what cooperative does - you are dependent on each task not being a hog), while IBM (which did the primary - Rockwell got the backup) went with their fancy-schmancy preemptive system, which I believe was blamed for scrubbing a Shuttle launch early in the program. You know, Keep It Simple and Stupid - for some applications the dumb way is simple, reliable, high-performance, and cost-effective.
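
            Just to illustrate what that round-robin/cooperative style boils down to, here's a rough sketch in plain C (invented task names, not real RTOS or Shuttle code): each task is a function that does a short slice of work and returns, and the whole loop is only as responsive as its greediest task.

            #include <stdio.h>
            #include <stddef.h>

            /* Each "task" does one short slice of work and returns promptly;
             * the loop depends on every task being well behaved. */
            static void read_sensors(void)   { puts("read sensors"); }
            static void update_control(void) { puts("update control law"); }
            static void log_telemetry(void)  { puts("log telemetry"); }

            static void (*const tasks[])(void) = {
                read_sensors, update_control, log_telemetry
            };

            int main(void)
            {
                for (int cycle = 0; cycle < 3; cycle++)       /* forever, in a real system */
                    for (size_t i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
                        tasks[i]();                           /* no preemption: a hog here
                                                                 stalls everything else    */
                return 0;
            }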

            • while IBM (which did the primary - Rockwell got the backup) went with their fancy-schmancy preemptive system, which I believe was blamed for scrubbing a Shuttle launch early in the program.

              Ahem, that's not quite how it happened...

              The cause of the scrub you refer to was that all the processors read a single hardware clock during start-up. Usually they all read the same value, but since hardware restrictions required the processors to read the clock one at a time, it could happen that the clock ticked between reads. This happened very seldom; in fact, later investigation revealed that it had happened only once during testing.
              The actual failure occurred much later, when the flight-control computers compared their internal clocks and found out that they were different.

              At the time everybody was completely baffled, and it took several months of investigation to find out what had really happened. But as you can see from this description, the complexity of the RTOS was not the culprit - this time :-)

              BTW, I read this in an ACM publication. I've tried to find a link but haven't succeeded so far.

        • In most RTOS's, a task can starve any other task running at the same priority (or lower).

          Some RTOSes decay the priority of the running task over time, so that it eventually allows other tasks of originally equal priorities (now higher priorities due to the decay) to run. Other strategies are available, such as a round robin scheduler among tasks of equal priority.

          Same as the cooperative multitasking model in Windows 3.1.

          Nonsense. A higher priority task that unblocks will get the CPU right back in an RTOS. In a cooperative multitasking OS it must wait for the current task to block, by definition.
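
          To make the distinction concrete, here's a minimal sketch (invented code, not any particular RTOS's API) of fixed-priority scheduling with round-robin within a level: a task that becomes ready at a higher priority is always picked first, while tasks at the same or lower priority can indeed be starved for as long as higher-priority work keeps arriving.

          #include <stddef.h>

          #define MAX_PRIO 8                     /* 0 is the highest priority */

          struct task {
              struct task *next;                 /* next ready task at this priority */
              int          prio;
          };

          static struct task *ready[MAX_PRIO];   /* one FIFO list per priority level */

          /* called when a task unblocks (e.g. from an interrupt handler) */
          void make_ready(struct task *t)
          {
              struct task **p = &ready[t->prio];
              t->next = NULL;
              while (*p)
                  p = &(*p)->next;
              *p = t;                            /* append: round-robin within a level */
          }

          /* pick the next task to run: highest priority wins, FIFO among equals;
           * anything at a lower level simply waits, i.e. can be starved */
          struct task *schedule(void)
          {
              for (int prio = 0; prio < MAX_PRIO; prio++)
                  if (ready[prio]) {
                      struct task *t = ready[prio];
                      ready[prio] = t->next;
                      return t;
                  }
              return NULL;                       /* nothing runnable: idle */
          }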

        • Re:Realtime OS... (Score:4, Informative)

          by j3110 ( 193209 ) <samterrell.gmail@com> on Friday November 01, 2002 @05:41PM (#4581399) Homepage
          I don't know what you are talking about. How could one starve another in an RTOS? Hard deadlines are set. Most of the time, the process with the closest deadline is selected (some systems have time estimates and do other optimizations). If a task doesn't complete by its deadline, it can get preempted, because completing the task was just not possible: in effect, you overloaded the system, therefore it is unresponsive. AFAIK, any system that is that overloaded will be unresponsive and concentrate on the higher-priority tasks. You make this out to be a bad thing (or at least it looks that way to a casual observer), when in actuality it's a great thing. If delivering the next frame to the GPU is more important to you than compiling the kernel, the kernel build will get starved. In Linux and other non-RTOSes, you will run out of time slices because they are being "fair".

          This reminds me of the whole VM issue. If you don't have enough memory to complete a job, no VM is gonna help you. Likewise, if you don't have enough CPU to complete the job, no scheduler is going to help. Where the new VM and RTOSes help is when you are playing your FPS game: you can schedule regular intervals to fill the audio buffer and calculate the next frame as well as do physics calculations. If you don't have enough CPU to do them all, pick the ones that matter most. Linux can pick them in the wrong order and miss a more important calculation getting done on time. No one has actually tested whether Linux can do more overall because of this, but most of us have a select few tasks actively interacting with us that we would really like not to be interrupted. An RTOS can guarantee this at the expense of other processes, but that's a good thing. In Win 3.1, on the other hand, one unimportant process can simply not relinquish the CPU.
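
          For what it's worth, here's a bare-bones sketch of the deadline-driven selection I'm describing (a toy in C, not Jaluna's or anyone else's scheduler): pick whichever ready task has the nearest deadline, and accept that when the system is overloaded, the farthest deadlines are the ones that slip.

          #include <stddef.h>

          struct rt_task {
              struct rt_task *next;
              unsigned long   deadline;   /* absolute tick by which it must finish */
          };

          /* Earliest-deadline-first: scan the ready list and return the task
           * that must finish soonest.  When demand exceeds the CPU, the tasks
           * with the farthest deadlines are the ones that get "starved" --
           * which is exactly the behaviour you want. */
          struct rt_task *edf_pick(struct rt_task *ready_list)
          {
              struct rt_task *best = ready_list;
              for (struct rt_task *t = ready_list; t != NULL; t = t->next)
                  if (t->deadline < best->deadline)
                      best = t;
              return best;                /* NULL if nothing is ready */
          }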
        • There are several key features of an RTOS which Windows 3.1 does not have.

          One is pre-emptability. If an event happens which would cause a higher-priority task to run, it must run within a bounded amount of time.

          Another related property is low interrupt delivery latency. If a high-priority interrupt occurs, even if a lower-priority interrupt handler is already running, it should be delivered within a bounded amount of time. This is guaranteed by arranging that nobody disables interrupts for an unbounded amount of time. Indeed, under QNX, interrupt handlers are run with interrupts initially enabled.
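
          A toy model of those two properties (all names invented, nothing OS-specific): the event handler does the bare minimum, marks the urgent work ready, and requests a switch on the way out, so the latency from event to urgent task is bounded by this short path rather than by whatever happened to be running.

          #include <stdio.h>

          static int urgent_ready = 0;      /* set by the "interrupt"           */
          static int need_resched = 0;      /* ask for a switch on return       */

          static void fake_irq(void)        /* stands in for a hardware event   */
          {
              urgent_ready = 1;
              need_resched = 1;
          }

          int main(void)
          {
              for (int i = 0; i < 5; i++) {           /* low-priority background loop */
                  if (i == 2)
                      fake_irq();                     /* event arrives mid-loop       */
                  if (need_resched && urgent_ready) {
                      puts("preempt: urgent task runs within a bounded delay");
                      urgent_ready = need_resched = 0;
                  } else {
                      puts("background work");
                  }
              }
              return 0;
          }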

  • What is this, the name of a new Gundam?
  • Wonder if it's running Jaluna OS?
  • It's time I ask this question:)

    What is a microkernel? And what are its advantages?

    Linux is not a microkernel, but Darwin is?

    Is this just a buzzword?
    • The basic difference is that the kernel in a microkernel system is just that, micro. It includes functionality only for the concepts of tasks, the memory they use, the CPU time they get, and the messages they pass to each other.

      This is rather different than, say, Linux. Here the kernel includes all sorts of things, like networking stacks, device drivers, file systems, etc. These are basic parts of the kernel itself, along with everything in the microkernel too.

      So how does one use a microkernel if it doesn't have all this (required) stuff? Basically all of these things are compiled into modules known as "servers", and run as separate tasks -- just like any other program. So if your web browser needs to send a request to a web server, it does so by (essentially) saying "hey microkernel, could you tell the networking program to send this to this address? Thanks!". In a traditional kernel your web browser would do a local "method call".

      This might sound picayune, and for the end user it often is. However, in theory at least, microkernels offer the ability to help development. That's because you can load up or ditch any of these little "servers" without any effect on anything else - that's why they're so useful for RTOSes, because you can ditch what you don't need - and even do so while the computer continues working. No reboots to fix a bug in your ethernet card driver...
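
      If it helps, here's a toy sketch of that "hey microkernel, could you tell the networking program..." exchange. The message layout, port number, and ipc_send() call are all invented for illustration - real microkernels (Chorus, QNX, Mach) each have their own IPC primitives - but the shape is the same: the client builds a message and the kernel hands it to the server task.

      #include <stdint.h>
      #include <string.h>
      #include <stdio.h>

      enum { MSG_NET_SEND = 1 };

      struct msg {                          /* invented message format */
          int      type;
          uint32_t dest_addr;
          size_t   len;
          char     payload[256];
      };

      #define NET_SERVER_PORT 7             /* made-up well-known port */

      /* stand-in for the kernel's IPC entry point: a real microkernel would
       * copy or map the message into the server's queue and wake it up */
      static int ipc_send(int server_port, const struct msg *m)
      {
          printf("port %d receives type %d, %zu bytes\n",
                 server_port, m->type, m->len);
          return 0;
      }

      /* what a "send this request" call from an application could look like:
       * the networking code is just another task on the other end of the port */
      int net_send(uint32_t addr, const char *req)
      {
          struct msg m = { .type = MSG_NET_SEND, .dest_addr = addr };
          m.len = strlen(req);
          if (m.len > sizeof m.payload)
              m.len = sizeof m.payload;
          memcpy(m.payload, req, m.len);
          return ipc_send(NET_SERVER_PORT, &m);
      }

      int main(void)
      {
          return net_send(0x0A000001, "GET / HTTP/1.0\r\n\r\n");
      }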
    • What is a microkernel? And what are its advantages?

      Glad you asked.

      (Notice that part of this post isn't a direct answer to your question, although it may prove illustrative. But thanks for giving me an opportunity to rant on this ;-)

      In Linux, the kernel does things like memory management, scheduling, writing to filesystems, to the network, to your sound card, to your video card... etc, etc. That's an awful lot to do. That's why Linux is such a big program.

      Now for the sake of introduction, let's compare Linux to Mozilla. Mozilla lets you read and compose mail, browse the Web and compose Web sites. All in one application. As a result, you get a huge application, which takes quite some time to load. And if an error occurs in the mail component, you can say bye-bye to your 34 open Web browser windows as well -- it's one big program, so if one component crashes, the others go with it. Also, if you want to change anything in the code, you're required to recompile Mozilla completely just for that.

      So maybe Mozilla would improve by getting split up into several components? Who knows. The fact is that a kernel (like Linux) might well improve by doing just that.

      Imagine booting up a system which is still quite buggy, so your network card locks up every now and then. I actually have this with Linux on my Macintosh, and it can cause the complete system to lock up. In a Microkernel, the network card driver is a separate program, called a "server", running in "user space" (vs. "kernel space"), so it's merely an application like any other. If you have trouble with your network card, you could in theory simply restart the server.

      So the advantage of this modular design is that your core is very small, and people can easily add/delete/modify its drivers. In fact, the main advantage of a Microkernel architecture is that it's a theoretically clean design. It looks good.

      However, with even hardware drivers running in the very protected "user space", there must still be some way for the drivers to communicate with the hardware (and with each other). It is up to the Microkernel to mediate this. And here is the real problem: if you just write up a design for Microkernel/server communication that looks good on paper, chances are that it turns out to be really slow when you implement it, because a lot of data needs to be copied when it is transferred from a server to the kernel, and from there to user programs, etc.

      The design of most older Microkernel interfaces was so high-level that there was no easy way to get around this problem. Newer Microkernel designs, however, recognize this problem at its core, and adapt their design, again at the core, to eliminate it. (The silly thing is that by this design, the newer Microkernels are usually even smaller than the older ones, if I am right.)
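
      To put the copying problem in concrete terms (names invented, grossly simplified): with the client, the kernel, and the server each in their own address space, a naive message send touches the data at least twice - and the reply doubles it again. The newer designs avoid this by mapping pages or handing buffers off instead of copying.

      #include <string.h>
      #include <stddef.h>

      struct kbuf { char data[4096]; size_t len; };     /* kernel-side staging buffer */

      /* naive path: client buffer -> kernel buffer -> server buffer */
      void deliver(const char *client_buf, size_t n,
                   struct kbuf *kernel_buf, char *server_buf)
      {
          if (n > sizeof kernel_buf->data)
              n = sizeof kernel_buf->data;
          memcpy(kernel_buf->data, client_buf, n);      /* copy #1: client -> kernel */
          kernel_buf->len = n;
          memcpy(server_buf, kernel_buf->data, n);      /* copy #2: kernel -> server */
      }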

      Darwin is based on Mach, which is a first-generation microkernel. It uses a communication system called ports, I believe not entirely unlike TCP/IP ports. All in all too clever for its own good, anyway: the Microkernel is responsible for caching messages (IIRC), which requires a lot of memory/CPU resources.

      Darwin (as well as MkLinux and the older NeXT/OpenStep systems), by the way, screws up Microkernel design at its core by making just one big user-space program with all drivers in it. These systems, politely referred to as single-server systems, only give the OS developers the advantage of not having to write the core of the OS (Mach) themselves. (Mach has nice thread support, I believe, so this might have been a good argument for choosing Mach.) There are no further immediate advantages for users or developers.

      And there is of course the disadvantage of using a first-generation, slower, Microkernel as part of your design. (As you might have noticed, all of the above-mentioned Mach-based single-server systems have a connection with Apple; I guess they Think Different, or something ;-)

      Multi-server Microkernels, however, allow developers and (root) users to plug stuff in and out of their systems at runtime without having to change the kernel itself, and without having to fear crashes of drivers. The GNU HURD is a good example of a multi-server microkernel, but the HURD goes beyond everything named here by redesigning the system so that any (normal) user can easily adapt the Microkernel to their own needs. The advantages are somewhat beyond our current idea of computing: for example, imagine a normal user application installing a different algorithm for freeing up unused memory, because it is more efficient for this type of application. Maybe you never ever want to do this, but the HURD is designed to be that flexible.

      The HURD is currently based on Mach, but there's a transition being made to L4 (the HURD is a quite abstract layer on top of a Microkernel). L4 is a highly esteemed second-generation Microkernel, which takes away the speed disadvantages of Mach.

      Anyway, now for the real rant:

      Any time a Slashdot story is posted on the topic of the HURD, idiots flock together like pigeons to say Stallman is an idiot and the HURD sucks because Microkernels are slow. While the real truth is that RMS doesn't seem to care about the HURD as a GNU project at all these days (he's currently developing emacs, I've heard - could you imagine a sillier way to waste your precious time, being referred to as the last real hacker on earth and all?), there is nothing slow about L4, and nobody ever says bad things about QNX (a cool Microkernel-based OS) here anyway.

      Slashdot readers only seem to recognize the advantages of Microkernel design when not on the topic of the HURD, while the HURD design has advantages over most other Microkernels. This is plain silly.

      Well, there's my rant. Bye now.
      • Geezuz :)

        All in all, it seems to me that a good analogy would be to compare a microkernel vs. a monolithic (Linux) kernel to linear vs. object-oriented programming.

        On the one hand, linear does the job... and is very fast. On the other, you have a highly modular, conceptually prettier design.

        OK, so it's not an exact analogy :) But couldn't you compare loading and unloading kernel modules on the fly to a microkernel-like feature of modifying the core OS without having to reboot?

        (I'm aware that Linux kernel modules still don't run in "user space".)

        ==Me
        P.S. Darwin doesn't run so speedily, and I won't even get into how hugely resource-hungry Aqua is (which is not the kernel's fault).
  • I've been wondering this for some time now. I remember using QNX (back when it was QnX from Quantum!) on the ICON computers round about '84-'85. These were 268's with big screens, graphics, everything. The thing I remember most was that they were FAST, snappy, you know - everything this 2k box isn't.

    So is there any downside to an RTOS? I know about the slots and scheduling yadda yadda, but these strike me as theoretical problems and not necessarily real-world ones.

    Can someone clue me in here? Why don't we all use QNX for everything?
    • RTOSes are slow, because they trade off speed for determinism. Also, most RTOSes are optimized for resource-starved embedded systems, so they can't fit or just don't care about many features that benefit desktops and servers. For example, QNX tends to have poor I/O performance because it only supports primitive filesystems and it lacks a unified page cache.
      • File I/O, you mean.
        RTOSes are at least as good at direct device I/O as other types of OS.

        > RTOSes are slow, because they trade off speed for determinism.
        This is in general _wrong_. Please explain, for instance, how context switching would be slower on an RTOS (compared to Unix, for instance), when it in general keeps less context info and usually does not even need user/kernel mode transitions. If you want to make a real comparison, take a Unix/Linux system, disable all swap, and restrict the buffer cache to a small percentage of your RAM. Then compare it to QNX or VxWorks (or, to get a better picture, LynxOS).
        QNX is "slower" at doing desktop/server stuff because it was not designed to do those things - which is why swapping is clunky and file access is slow.
  • The microkernel in UNICOS/mk that runs on Cray T3Es is Chorus. It seems to do very well in that job.
