
The Great Microkernel Debate Continues (405 comments)

ficken writes "The great conversation about micro vs. monolithic kernel is still alive and well. Andy Tanenbaum weighs in with another article about the virtues of microkernels. From the article: 'Over the years there have been endless postings on forums such as Slashdot about how microkernels are slow, how microkernels are hard to program, how they aren't in use commercially, and a lot of other nonsense. Virtually all of these postings have come from people who don't have a clue what a microkernel is or what one can do. I think it would raise the level of discussion if people making such postings would first try a microkernel-based operating system and then make postings like "I tried an OS based on a microkernel and I observed X, Y, and Z first hand." Has a lot more credibility.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • ...as flamebait.

    "We've not argued about this for a while. Let's have a shouting match...
  • crickets (Score:5, Funny)

    by 192939495969798999 ( 58312 ) <[info] [at] [devinmoore.com]> on Thursday January 31, 2008 @12:25PM (#22247632) Homepage Journal
    And now, cue thousands of replies from people who have personally created microkernels and have sensible observations to make on their validity as a base for an OS...

    (crickets)
    • Re:crickets (Score:4, Funny)

      by trolltalk.com ( 1108067 ) on Thursday January 31, 2008 @12:29PM (#22247700) Homepage Journal

      And now, cue thousands of replies from people who have personally created microkernels and have sensible observations to make on their validity as a base for an OS...
      ... Linus Torvalds ...
    • Re:crickets (Score:5, Funny)

      by Dancindan84 ( 1056246 ) on Thursday January 31, 2008 @12:42PM (#22247904)
      Yeah. Slashdotters are being asked to:
      1) RTFA
      2) Have first hand knowledge of the subject
      3) Make a reasoned, non-biased post/article on the subject

      Talk about a dead end.
    • by Tribbin ( 565963 )
      Anyway, for most people the question is whether existing product A or existing product B is better for the job it must perform.
    • Re: (Score:3, Interesting)

      by OrangeTide ( 124937 )
      Why would I create a microkernel when they so obviously suck?

      It's not hard to write a monolithic kernel that screams faster than something like Mach, but not all microkernels are like Mach. Even commercial microkernels like QNX have a lot of overhead for certain applications (like filesystem I/O).

      It is possible to have a fast microkernel if you completely discard the original concept of a microkernel and start over with a fresh design. L4 is quite fast for example, even if the whole Clan thing is a bit
    • Re: (Score:3, Informative)

      by savuporo ( 658486 )
      There are quite a few: FreeRTOS, eCos, RTEMS, AvrX, etc., some of them commercially quite successful. The proprietary counterparts like Nucleus are definitely very successful and in wide use.
      What, you have never heard of them? Well, there are other widely used computing platforms besides personal computers.
  • by Midnight Thunder ( 17205 ) on Thursday January 31, 2008 @12:26PM (#22247662) Homepage Journal
    One of the main issues with microkernels is performance, but the monolithic trade-off is reduced stability when you have a bad driver, since a monolithic kernel has no notion of memory protection for drivers.

    The way I see it, given the current performance of systems, a fast but slightly less stable kernel counts for a lot; in the future, when the overhead imposed by a microkernel is deemed insignificant, we will see them become the norm. In many ways this is much like when we were all using SCSI CD burners because the processor couldn't keep up, whereas now we all use IDE CD burners because CPUs can more than handle the task.

    • Microkernels are the future and always will be. If anything, you might see some more driver code moved into userspace in existing popular kernels, but as a per-driver design choice rather than some surprise explosion in market share by Hurd.

      Next up, an enthralling debate about RISC vs CISC.

      • > Microkernels are the future and always will be.

        Is that another way of saying they're vaporware? Just like Duke Nukem will always be released "in the future"...
        • Depends. Ever hear of FUSE [sourceforge.net]? It's been showing up in quite a few distros for the capabilities it buys by running outside of kernel space. It's become so important, that it has been ported to BSD, Solaris, and Mac OS X.

          What does it do? Why, it's a monolithic driver that provides an interface to support userspace filesystem drivers. i.e. A microkernel in practice, if not in definition. Ergo, the grandparent's point about a slow migration.
          • Re: (Score:3, Informative)

            by julesh ( 229690 )
            Depends. Ever hear of FUSE? It's been showing up in quite a few distros for the capabilities it buys by running outside of kernel space. It's become so important, that it has been ported to BSD, Solaris, and Mac OS X.

            What does it do? Why, it's a monolithic driver that provides an interface to support userspace filesystem drivers. i.e. A microkernel in practice, if not in definition. Ergo, the grandparent's point about a slow migration.


            Unfortunately, the problem with FUSE is that it's painfully slow. And ye
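For the curious: the kernel half of FUSE just forwards VFS calls to an ordinary process, and the userspace half is a plain C program. A minimal read-only filesystem against the FUSE 2.x high-level API looks roughly like this (a sketch modeled on the canonical hello example; the file name and its contents are invented):

```c
/* hellofs.c - minimal userspace filesystem sketch (FUSE 2.x API) */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>

static const char *hello_path = "/hello";
static const char *hello_str  = "Hello from userspace!\n";

static int hello_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;   /* the root directory */
        st->st_nlink = 2;
    } else if (strcmp(path, hello_path) == 0) {
        st->st_mode = S_IFREG | 0444;   /* one read-only file */
        st->st_nlink = 1;
        st->st_size = strlen(hello_str);
    } else {
        return -ENOENT;
    }
    return 0;
}

static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                         off_t off, struct fuse_file_info *fi)
{
    (void)off; (void)fi;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);
    filler(buf, hello_path + 1, NULL, 0);   /* "hello" */
    return 0;
}

static int hello_read(const char *path, char *buf, size_t size, off_t off,
                      struct fuse_file_info *fi)
{
    (void)fi;
    size_t len = strlen(hello_str);
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    if ((size_t)off >= len)
        return 0;
    if (off + size > len)
        size = len - off;
    memcpy(buf, hello_str + off, size);     /* served entirely from userspace */
    return size;
}

static struct fuse_operations hello_ops = {
    .getattr = hello_getattr,
    .readdir = hello_readdir,
    .read    = hello_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &hello_ops, NULL);
}
```

Built with `gcc hellofs.c -o hellofs $(pkg-config fuse --cflags --libs)` and run as `./hellofs /some/mountpoint`, every read on the mounted file is handled by this ordinary process, which is exactly the "userspace filesystem driver" point made above.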
    • by jadavis ( 473492 )
      One of the main issues with microkernels is that of performance,

      Do you have information that a microkernel is inherently slower than a monolithic kernel?
    • by peragrin ( 659227 ) on Thursday January 31, 2008 @01:09PM (#22248302)
      While you are quite correct, the question becomes: why should the CPU handle those instructions? It is like USB 2.0 versus FireWire 400. FireWire, while "slower" in burst rate, has a higher sustained rate precisely because it offloads some of the work.

      SCSI and FireWire are examples of good tech working for you. The CPU should hand instructions to devices smart enough to work on their own, leaving more cycles available to do things that actually matter.
  • by davidwr ( 791652 ) on Thursday January 31, 2008 @12:27PM (#22247678) Homepage Journal
    Like many other "this vs. that" wars, neither micro- nor monolithic-kernel architectures are best for all tasks.

    In many cases the difference for the end-user is small enough that it's not worth doing things "the best way" if the tools and talent available lean the other way.

    We didn't go for VHS over Beta because it had better quality video; we went for it because of marketplace and other factors.

    We didn't go with a monolithic Linux over the once-Apple-sponsored MkLinux because it was inherently better for every possible task under the sun; we went with it because it was better for some tasks, good enough for others, and had more support from interested parties, i.e. marketplace factors.
    • Be fair here; MkLinux was really an educational project for Apple's core developers to play around with Mach before starting the heavy lifting on OS X. Porting an existing OS lets you play with things and get them wrong in a sandbox so you can see why you might want to make certain design decisions when coding one from scratch.

      I used MkLinux, and it was at the time the only way to run Linux on Mac hardware. It didn't stick around for long; once the Apple sponsored developers had played with it long enough,
    • by jadavis ( 473492 ) on Thursday January 31, 2008 @01:13PM (#22248354)
      Like many other "this vs. that" wars, neither micro- nor monolithic-kernel architectures are best for all tasks.

      Like many other "this vs. that" wars, people will use arguments like yours as a cop-out to avoid any serious analysis of the design tradeoffs and the implications of those tradeoffs.

      It is quite hollow to say that something is not the "best for all tasks," without some analysis as to when it is the best option, or which option has the most promise in the long term (such that it might be a good field of research).

    • Re: (Score:2, Insightful)

      by Anonymous Coward
      So what? New people are always poking their head in places like Slashdot. Plus the next generation of kids coming into the field.

      There has never been a clear winner in this particular debate so there is nothing wrong with getting a fresh take on things. Maybe something has changed because somebody had a great idea.

      Is/was BeOS using a microkernel? QNX is probably one of the oldest microkernels and they're still around.

      Microkernels are really popular in the small device market while monolithic kernels dom
  • From TFA:

    ... if people making such postings would first try a microkernel-based operating system and then make postings like "I tried an OS based on a microkernel and I observed X, Y, and Z first hand." Has a lot more credibility.

    The easiest way to try one is to go download MINIX 3 and try it.
    (emphasis mine)
    • Re: (Score:3, Interesting)

      I'm going to disagree with you. This debate's been raging for years, and he's calling for people to try the most available one on the market right now. He's not going to get money from it, all he'll get is maybe some development done for it. Since he's a professor and could just as easily make the development an assignment or extra credit, it's much more likely that he's trying to inject some experience into the debate instead of turning it into some nerdy gladiatorial fight between him and Linus.

      The rea
  • by fishwallop ( 792972 ) on Thursday January 31, 2008 @12:30PM (#22247716)
    I much doubt whether the average user cares (never mind understands) if the kernel is monolithic, microkernel, or heated corn -- and for what we average users tend to use our computers for (i.e. running our office apps, surfing the Interweb, listening to music, occasionally watching video or fixing red-eye in pictures of our children) it's not going to be the kernel that dictates user experience or perception of "slow". You DB admins, SETI enthusiasts and Google administrators may care more. (I turned in my geek card long ago.)
    • They don't. Fundamentally microkernelism is a software engineering issue, and you'll notice the actual SEs taking about the same view on this (hint: they don't agree with Linus - watch the reliability and bloat of Linux over the years as an example). But me-toos want to join in to support their crew. Sort of like people in your entourage talking smack and starting crap with your competitor's entourage. Of course, only you and your competitor have any actual talent or understanding of the issues at hand
  • hmmmm... (Score:5, Funny)

    by William Robinson ( 875390 ) on Thursday January 31, 2008 @12:30PM (#22247722)
    Over the years there have been endless postings on forums such as Slashdot about how microkernels are slow, how microkernels are hard to program, how they aren't in use commercially, and a lot of other nonsense. Virtually all of these postings have come from people who don't have a clue what a microkernel is

    Hmmm....he must be new here ;)

  • Old News (Score:4, Funny)

    by j_kenpo ( 571930 ) on Thursday January 31, 2008 @12:31PM (#22247732)
    That's great.... the article is dated May 12, 2006. Is it still "news" if it's "old news"?
  • Open Source monolithic kernels are best, allowing any user to build their own custom kernel and to choose whether device drivers and other support are built as modules or into the kernel itself...

    Personally I prefer to build as much as possible as modules, with the exception of filesystem support for / (ext3), which I prefer to build into the kernel itself, thus making an initrd unnecessary...

    the Linux kernel is one of the finest pieces of software to ever be built since the beginning of
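For readers who have never built one, a loadable module is just a small piece of C compiled against the kernel headers. A minimal sketch (the module name and messages are invented for illustration):

```c
/* hello.c - minimal loadable kernel module (illustrative sketch only) */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");

static int __init hello_init(void)
{
	printk(KERN_INFO "hello: module loaded\n");
	return 0;
}

static void __exit hello_exit(void)
{
	printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```

Built with a tiny kbuild makefile (obj-m += hello.o), it is loaded with insmod and removed with rmmod; the same source built with obj-y simply ends up inside the kernel image. Either way it runs in kernel space, which is exactly the property Tanenbaum objects to.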
  • "I tried an OS based on a microkernel and I observed it was decades out of date first hand."

    And that was for a COBOL programming class in college 10 years ago, while Linux was just starting to ramp up and kick ass ;)
  • by Nooface ( 526234 ) on Thursday January 31, 2008 @12:39PM (#22247862) Homepage
    The rise of virtualization proves the validity of the microkernel concept, with the hypervisor now taking the place of the original "kernel" (note the similarity in block diagrams: microkernel [cisco.com] vs. hypervisor [com.com] designs). Virtual machines are now used instead of the function-specific modules of the original microkernel designs, with specialized VMs performing I/O and hosting virtual appliances [wikipedia.org] that contain just enough user-level code to support a particular application.
    • Indeed... and have you noticed how pretty much all the virtualization guests and hosts are not microkernels? In fact, virtualization makes it even more difficult for true microkernels to rise, since one of their advantages (isolation) can be obtained through virtualization.
    • Re: (Score:3, Informative)

      by nxtw ( 866177 )
      Well, let's consider a few virtualization platforms.

      VMware ESX Server's "vmkernel" is supposed to be a custom microkernel that happens to use drivers from Linux (all device drivers run inside the vmkernel). Guest OSes (including the Linux-based service console used to manage the server) run on top of the vmkernel and access all hardware through it.

      The Xen hypervisor does less than VMware's vmkernel; it loads a "dom0" guest that manages the system and performs I/O. With few exceptions, this guest is the onl
  • From the article:

    Recently, my Ph.D. student Jorrit Herder, my colleague Herbert Bos, and I wrote a paper entitled Can We Make Operating Systems Reliable and Secure? and submitted it to IEEE Computer magazine, the flagship publication of the IEEE Computer Society. It was accepted and published in the May 2006 issue. (Emphasis mine)


    So, "recently" an article was published in IEEE's May 2006 issue. Looks like this is nothing new.
  • We should just convert all our OSes to run using a magical unicorn kernel. I've seen about the same number of microkernel OSes and magical unicorns, so switching to the unicorn system should be just as easy as switching to a microkernel, and it gives many additional advantages, such as immortality and a horn that can cure all wounds instantly.
  • They both work.
  • The best software is the software that, given a reasonable choice, folks choose both to write and to use. Microkernels are not a new idea, yet few folks have chosen to write them and few have chosen to use the ones that have been written. That speaks for itself.

    Besides, what does Andy think, that we're all going to say, "Wow, you're dead on, lets rewrite Linux from scratch with a microkernel?" Linux works. Unless we reach a point where it substantially doesn't (like Windows) there's no value t
    • The best software is the software that, given a reasonable choice, folks choose both to write and to use.

      So, Windows, then?
  • I'm interested in experiences /. readers have had with The Hurd [gnu.org]. Have you installed or run this system? What did you think?
  • Design Philosophy (Score:5, Interesting)

    by Darkseer ( 63318 ) on Thursday January 31, 2008 @12:54PM (#22248066) Homepage
    I did my senior project in college on this in 1998... At that time I was looking at something from MIT called the exokernel and comparing it to some 2.4 version of the Linux kernel. Back in 1998 the problem was mainly that nobody had invested in that particular microkernel technology, unlike, say, Mach, because it was a research project. In my conclusion it was clear I could not do a meaningful comparison of complex applications on both OSes due to its lack of maturity. But one thing was clear: the design philosophy behind the microkernel allowed a much more flexible way to interact with the hardware.

    The time it would take to design and implement the equivalent of a driver was smaller. In the end it puts more flexibility into the hands of the application designer, with the kernel taking care of just the bare minimum. The initial work at the time reported a 10x improvement in performance, since you could customize so much of how the hardware resources were being used. This of course comes at a price: in addition to developing the application, you need to develop the drivers it uses, possibly increasing the time to write anything significant.

    But in the end, flexibility was key, and you can see some of the microkernel design philosophies start to seep into the Linux kernel. Take a look at kernel modules, for example. The code is already being abstracted out; now if only it were designed to run in userspace.

    My thoughts are that in the end the microkernel will win due to the fact that I can engineer a more complex OS that is cheaper to change, not because it is faster. This is the compromise that was made with compilers vs. machine-language programming. In the end I think Tanenbaum will win, Linux will become a microkernel out of necessity, and Linus, as it turns out, would have gotten a good grade from Dr. Tanenbaum. He just will have handed in his final project 40 years late by the time it happens.
     
    • Re:Design Philosophy (Score:5, Interesting)

      by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Thursday January 31, 2008 @02:44PM (#22249682) Homepage Journal
      The problems with traditional microkernels lie in the heaviness of the module-to-module communication and in the number of context switches. An exokernel is pretty much entirely in one context, and exopc seemed to have very efficient communication, so that design looked extremely good. (Although a fair comparison isn't possible, a crude one would be to compare exopc + the Cheetah web server with Linux + Tux, both serving static content. See how well they scale when stress-tested.)

      Exokernels aren't the only microkernels of interest, though. There have been efforts to produce mobile nanokernels, on the theory that drivers are generally smaller than data, so in a cluster, moving the code to the data should be more efficient on resources. The opposite extreme has been to produce kernels that span multiple systems, producing a single virtual machine. Here, kernelspace and userspace are segmented and the latency between machines is simply another sort of context switch delay, yet the overall performance is greater than a loosely-coupled cluster could ever produce.

      Microkernels have a lot of potential, a lot of problems have been solved, and there are still problems that need to be solved better. E.g.: if a driver crashes, there needs to be a transaction log that permits the hardware to be returned to a valid state if at all possible, or rebooted and then rolled into the last valid state. This isn't just a software problem, it's a hardware problem as well. Being able to safely reboot individual components on a motherboard in total isolation requires more than fancy coding and software switches. You need a lot more smoothing circuits and capacitors to ensure that a reboot has electrically no impact - not so much as a flicker - on anything else.

      Where microkernels would truly be "at home" would be in machines that support processor-in-memory architecture. Absurdly common function calls, instead of going to the CPU, having instructions and data fetched, and then being executed, along a long path from the OS' entry point to some outer segment of code, can be embedded in the RAM itself. Zero overhead, or damn near. It violates the principle of single-entry, single-exit, but if you don't need such a design, then why waste the cycles to support it?

  • Dr. Tanenbaum may well be correct that from theoretical considerations a microkernel is superior. But AFAIK, after 15+ years of maintaining that, he and his supporters still do not have a useful exemplar.

    I do not doubt they've tried. The interesting information is why it hasn't worked. Unfortunately, people seldom publicise failures of ideas they advocate.

    One very obvious impediment is the existence of privileged instructions. For example, on x86 the HLT instruction (used to trigger power savings) is pr

    • I strongly suspect a microkernel will suffer either in security or in additional ring transition/TLB costs if Ring 1 or Ring 2 are used. With modern fast hardware, this might be less noticeable.

      In future multicore systems with many, many cores, you'll be able to run a process (= microkernel daemon) on every core - we'll have true multitasking, and context switching will not be needed. Not that this is going to make microkernels happen, but it does make them more feasible.
  • ... "ficken" in German means to bang, to bonk, to frig, to fuck, to hump, to screw or to shag!
  • C'mon Andy... Give it up. You're not going to sway anyone with your arguments, any more than you swayed the public to run "free GNU on their 200 MIPS, 64M SPARCstation-5". A lot of the stuff Andy stated in the "Tanenbaum-Torvalds" debate turned out to be just plain wrong.

    - He asserted that the x86 architecture was doomed to extinction. Yet the majority of the -planet- uses an x86 machine of some sort as of 2008.

    - He alluded to the Linux kernel being hard to port because of its ties to the x86 architecture, citin
    • Re: (Score:3, Informative)

      by turgid ( 580780 )

      He asserted that the x86 architecture was doomed to extinction. Yet the majority of the -planet- uses an x86 machine of some sort as of 2008.

      *sigh* That old chestnut.

      Every x86 processor for the last decade, whether from Intel, AMD or VIA, is a superscalar, out-of-order, register-rich RISC internally, with a layer that decodes x86 opcodes and translates them into the native RISC code. The Transmeta chips were RISC/VLIW internally and could emulate any instruction set by loading the translation code at power-u

  • This shouldn't even be a Slashdot discussion. Why? 99% of Slashdotters don't actually have a clue on the subject. 90% of those who think they know what they're talking about only do so because someone else told them something totally biased and they took it as fact.

    My opinion on the subject? I don't have a f'in clue, but I will follow the mantra: use the right tool for the job. I'm sure it fits in the OS kernel world just as it fits everywhere else.
  • It seems to me that at the center of many "system X vs. system Y" debates lies the fact that binary incompatibility between programs written for different systems or hardware continues to exist, in spite of the fact that virtualization has shown us the way out. Virtualization can occur at many levels, including the programming level with languages like Java or C# and the hardware level with virtualization software such as VMware. Now obviously there will need to be some part of the system at some l
  • by Peaker ( 72084 ) <gnupeaker@nOSPAM.yahoo.com> on Thursday January 31, 2008 @01:21PM (#22248486) Homepage
    Kernels provide:
    1. Hardware abstraction
    2. Resource management

    They do this using:
    1. Superior (privileged) access to the hardware
    2. Address space separation for security/reliability purposes


    I believe the use of superior hardware access and address space separation should die out in favor of an alternative: runtime-level protection.

    As more and more systems move to bytecode-running virtual machines, and as JITs and hardware improve, it is becoming clearer that in the future "static native code" (C/C++ executables and such) will die out to make room for JIT'd native code (Java/.NET).
    I believe this will happen because a JIT can and will optimize better than a static compiler that runs completely before execution. Such languages are also easier to develop with.

    Once such runtimes are used, some aspects of reliability/safety are guaranteed (memory overruns cannot occur; references to inaccessible objects cannot be synthesized). By relying on these measures for security as well, we can eliminate both the need for elevated kernel access and address space/context switches. This is desirable for several reasons:
    1. Simplicity: the lack of address space separation (processes) is simpler.
    2. Uniformity of communication: objects can use one another without regard to "process boundaries", as long as a reference to the object exists.
    3. Performance: while "safe" languages are considered lower-performance today (and will have better JIT'd performance in the future), they can eliminate the context and address space switches that have considerable costs in current systems.


    Once relying on the runtime for security and reliability, a "kernel" becomes nothing more than a thread scheduler and a hardware abstraction object library.

    I believe this is the correct design for future systems, and it is my answer to the micro vs. monolithic question: neither!
  • by diegocgteleline.es ( 653730 ) on Thursday January 31, 2008 @01:40PM (#22248714)
    This is the opinion of the Plan 9 authors [cambridge.ma.us] WRT microkernels and other things:

    The implementers of Plan 9 are baffled by Andy Tanenbaum's recent posting. We suspect we are not representative of the mainline view, but we disagree at some level with most of the "GENERALLY ACCEPTED" truths Andy claims. Point by point:

          - The client-server paradigm is a good one
            Too vague to be a statement. "Good" is undefined.
          - Microkernels are the way to go
            False unless your only goal is to get papers published. Plan 9's kernel is a fraction of the size of any microkernel we know and offers more functionality and comparable or often better performance.
          - UNIX can be successfully run as an application program
            `Run' perhaps, `successfully' no. Name a product that succeeds by running UNIX as an application.
          - RPC is a good idea to base your system on
            Depends on what you mean by RPC. If you predefine the complete set of RPC's, then yes. If you make RPC a paradigm and expect every application to build its own (c.f. stub compilers), you lose all the discipline you need to make the system comprehensive.
          - Atomic group communication (broadcast) is highly useful
            Perhaps. We've never used it or felt the need for it.
          - Caching at the file server is definitely worth doing
            True, but caching anywhere is worthwhile. This statement is like saying 'good algorithms are worth using.'
          - File server replication is an idea whose time has come
            Perhaps. Simple hardware solutions like disk mirroring solve a lot of the reliability problems much more easily. Also, at least in a stable world, keeping your file server up is a better way to solve the problem.
          - Message passing is too primitive for application programmers to use
            False.
          - Synchronous (blocking) communication is easier to use than asynchronous
            They solve different problems. It's pointless to make the distinction based on ease of use. Make the distinction based on which you need.
          - New languages are needed for writing distributed/parallel applications
            `Needed', no. `Helpful', perhaps. The jury's still out.
          - Distributed shared memory in one form or another is a convenient model
            Convenient for whom? This one baffles us: distributed shared memory is a lousy model for building systems, yet everyone seems to be doing it. (Try to find a PhD this year on a different topic.)

    How about the "CONTROVERSIAL" points? We should weigh in there, too:

          - Client caching is a good idea in a system where there are many more nodes than users, and users do not have a "home" machine (e.g., hypercubes)
            What?
          - Atomic transactions are worth the overhead
            Worth the overhead to whom?
          - Causal ordering for group communication is good enough
            We don't use group communication, so we don't know.
          - Threads should be managed by the kernel, not in user space
            Better: have a decent process model and avoid this process/thread dichotomy.

    Rob Pike
    Dave Presotto
    Ken Thompson
    Phil Winterbottom
  • by master_p ( 608214 ) on Thursday January 31, 2008 @01:53PM (#22248888)
    The only reason this debate is going on is because CPUs do not have the concept of modules. If they did, then each module would not be able to crash the rest of the modules.

    If you wonder how to do modules without sacrificing the flat address space, it's quite easy: in most CPU designs, each page descriptor has a user/supervisor bit which defines whether the contents of a page are accessible from less privileged code. Instead of this bit, CPUs must use the target address to look up module information from another table. In other words, the CPU must maintain a map of addresses to modules, and use this map to enforce access control.

    This design is not as slow as it initially might seem. Modern CPUs are very fast, and they already contain many such maps: the Translation Lookaside Buffer, the Global Descriptor Table cache, the Local Descriptor Table cache, Victim Caches, Trace Caches, you name it.
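As a purely illustrative software model of that idea (all names, addresses, and ownership rules below are invented), the lookup being proposed amounts to something like this, only imagined in silicon next to the TLB rather than in C:

```c
/* Toy model of an address-to-module access check. Hypothetical throughout. */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef uint16_t module_id;

struct region {                  /* one contiguous range owned by a module */
    uintptr_t start, end;
    module_id owner;
};

/* The "module map": in the proposed design this would live in a CPU-side
 * cache, like the TLB or descriptor-table caches, not in software. */
static const struct region module_map[] = {
    { 0x00100000, 0x001fffff, 1 },   /* scheduler   */
    { 0x00200000, 0x002fffff, 2 },   /* disk driver */
    { 0x00300000, 0x003fffff, 3 },   /* net driver  */
};

static bool access_allowed(module_id caller, uintptr_t target)
{
    for (size_t i = 0; i < sizeof module_map / sizeof module_map[0]; i++) {
        if (target >= module_map[i].start && target <= module_map[i].end)
            return module_map[i].owner == caller;  /* only the owner may touch it */
    }
    return false;                                  /* unmapped: fault */
}

int main(void)
{
    printf("driver 2 -> own page:  %d\n", access_allowed(2, 0x00200400));  /* 1 */
    printf("driver 2 -> scheduler: %d\n", access_allowed(2, 0x00100400));  /* 0 */
    return 0;
}
```

The point of the comment is that modules would then get isolation from each other without separate address spaces, so a stray write from one driver faults instead of corrupting another.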

  • by Animats ( 122034 ) on Thursday January 31, 2008 @02:26PM (#22249396) Homepage

    As someone who's done operating system internals work and has written extensively for QNX, I should comment.

    Down at the bottom, microkernels are about interprocess communication. The key problem is getting interprocess communication right. Botch that, from a performance or functionality standpoint, and your system will be terrible. In a world where most long-running programs now have interprocess communication, it's amazing that most operating systems still do it so badly.

    For interprocess communication, the application usually needs a subroutine call, and the operating system usually gives it read and write. Pipes, sockets, and System V IPC are all queues. So clunky subroutine call systems are built on top of them. Many different clunky subroutine call systems: SOAP, JSON, XMLHttpRequest, CORBA, OpenRPC, MySQL protocol, etc. Plus all Microsoft's stuff, from OLE onward. All of this is a workaround for the mess at the bottom. The performance penalty of those kludges dwarfs that of microkernel-based interprocess communication.

    I've recently been writing a web app that involves many long-running processes on a server, and I wish I had QNX messaging. I'm currently using Python, pickle, and pipes, and it is not fun. Most notably, handling all the error cases is much harder than under QNX.

    Driver overhead for drivers in user-space isn't that bad. I wrote a FireWire camera driver for QNX, and when sending 640 x 480 x 24 bits x 30 FPS, it used about 3% of a Pentium III, with the uncompressed data going through QNX messaging with one frame per message. So quit worrying about copying cost.

    The big problem with microkernels is that the base design is very tough. Mach is generally considered to have been botched (starting from BSD was a mistake). There have been very few good examples anyone could look at. Now that QNX source is open, developers can see how it's done. (The other big success, IBM's VM, is still proprietary.)

    Incidentally, there's another key feature a microkernel needs that isn't mentioned much - the ability to load user-space applications and shared libraries during the boot process. This removes the temptation to put stuff in the kernel because it's needed during boot. For example, in QNX, there are no display drivers in the kernel, not even a text mode driver. A driver is usually in the boot image, but it runs in user space. Also, program loading ("exec") is a subroutine in a shared object, not part of the kernel. Networking, disk drivers, and such are all user-level applications but are usually part of the boot image.

    Incidentally, the new head of Palm's OS development team comes from QNX, and I think we'll be seeing a more microkernel-oriented system from that direction.
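The clunky "subroutine call built on top of read and write" that the parent describes looks roughly like this in plain POSIX (a minimal sketch; the message layout and the add-one "service" are invented):

```c
/* Synchronous request/reply layered over a socketpair: a hand-rolled
 * version of the call/receive/reply shape that QNX messaging provides. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

struct msg { int op; int arg; int result; };

/* Client side: write a request, then block until the reply arrives,
 * which is what makes it feel like a subroutine call. */
static int remote_call(int fd, int op, int arg)
{
    struct msg m = { .op = op, .arg = arg };
    if (write(fd, &m, sizeof m) != sizeof m) exit(1);
    if (read(fd, &m, sizeof m) != sizeof m) exit(1);
    return m.result;
}

/* Server side: receive, handle, reply - then wait for the next request. */
static void serve(int fd)
{
    struct msg m;
    while (read(fd, &m, sizeof m) == sizeof m) {
        m.result = m.arg + 1;                 /* the "service" */
        if (write(fd, &m, sizeof m) != sizeof m) break;
    }
}

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) { perror("socketpair"); return 1; }

    if (fork() == 0) {                        /* child acts as the server process */
        close(sv[0]);
        serve(sv[1]);
        _exit(0);
    }
    close(sv[1]);
    printf("reply: %d\n", remote_call(sv[0], 1, 41));   /* prints 42 */
    close(sv[0]);                             /* server's read() returns 0, loop ends */
    wait(NULL);
    return 0;
}
```

QNX's native MsgSend/MsgReceive/MsgReply calls provide this send-blocked, reply-driven pattern directly in the kernel, without the hand-rolled framing and error handling, which is what the parent misses in the Python-and-pipes setup.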

  • by logicnazi ( 169418 ) <gerdesNO@SPAMinvariant.org> on Thursday January 31, 2008 @03:08PM (#22250062) Homepage
    This debate could use a lot more clarity about what is actually being debated. The truth is there are two separate design strategies that generally go under the term microkernel.

    1) The conceptual/syntactic division of the OS code into separate 'servers' interacting through some message passing paradigm. Note that a clever build system could easily smoosh these servers together and optimize away the message passing into local function calls.

    2) The division of the compiled code into separate processes and the running of many integral parts of the OS as user processes.

    Note that doing 1 and not 2 is a genuine option. If the analogy is really with object-oriented programming, then one can do what one does with OOP: program in terms of the abstraction but emit code that avoids inefficiencies. While sysenter/sysexit optimizations for L4-based microkernels (and probably others) have made IPC much cheaper on current hardware, there is still a cost for switching in and out of kernel mode. Thus it can make a good deal of sense to just shove all the logical modules into ring 0.

    --------

    This brings us to the other point that needs clarification. What is it that we want to achieve? If we want to build an OS for an ATM, an embedded device or an electric power controller, I think there is a much stronger case to be made for microkernels in sense #2. However, in a desktop system it really doesn't matter so much whether the OS can recover from a crash that will leave the applications in an unstable state. If the disk module crashes, taking its buffers with it, you don't want your applications to simply continue blithely along, so you may as well reboot.

    But this is only a question of degree. There is no "microkernels wrong, macrokernel yes" answer or vice versa. It's just that each OS has a different ranking of priorities and should implement isolation of kernel 'servers' to a different degree.

    ----

    The exact same thing can be said when it comes to microkernel-style development (i.e. #1). Both Linus and Tanenbaum have a point. Just like OO programming, insisting on the abstraction of message-passing servers can sometimes serve to improve code quality, but also like OOP, sticking religiously to the paradigm can sometimes make things less efficient or even more confusing. Also, if you have enough developers and testers (like Linux does), you might want to sacrifice the prettiness of the abstraction for performance and count on people catching the errors.

    However, what baffles me is why Tanenbaum seems to think you can't have the advantages of 1 without really having a microkernel. This is just a matter of code organization. If I want to insist that my disk system only talks to other components via a messaging API I can just do so in my code. I could even mostly do this and only break the abstraction when shared data makes a big difference.

    Ultimately though it's like arguing about OOP vs. functional or dynamic vs. static. Yup, they both have some advantages and disadvantages.
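Option 1 without option 2, as described above, is mostly a matter of discipline in the source tree: give each subsystem a single message-style entry point even though everything still links into one image. A hypothetical sketch (all names invented):

```c
/* "Microkernel style" in source organization only: the disk subsystem
 * exposes one message entry point, but the call never leaves the process. */
#include <stdio.h>
#include <string.h>

enum disk_op { DISK_READ, DISK_WRITE };

struct disk_msg {
    enum disk_op op;
    unsigned     block;
    char         data[512];
    int          status;
};

/* The only way other components may talk to the disk subsystem. */
static void disk_server_handle(struct disk_msg *m)
{
    switch (m->op) {
    case DISK_READ:
        memset(m->data, 0, sizeof m->data);   /* pretend read */
        m->status = 0;
        break;
    case DISK_WRITE:
        m->status = 0;                        /* pretend write */
        break;
    default:
        m->status = -1;
    }
}

int main(void)
{
    struct disk_msg m = { .op = DISK_READ, .block = 7 };
    disk_server_handle(&m);                   /* "message pass" = plain function call */
    printf("read block %u, status %d\n", m.block, m.status);
    return 0;
}
```

A build could just as easily route disk_server_handle() through real IPC to a separate process without touching the callers, which is essentially the "clever build system" point made above.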

  • by anwyn ( 266338 ) on Thursday January 31, 2008 @05:16PM (#22252536)
    In 2006 I gave a presentation on the Tanenbaum-Torvalds microkernel vs. monolithic kernel debate [io.com] to the Austin Linux Group [austinlug.org].

    Basically, the microkernel is a horrible example of bondage and discipline [catb.org] programming. In order to solve the low-level problem of stray memory references, the professors from academia have come up with a low-level solution: using the memory management unit (MMU) to prevent these errors. Unfortunately, this "solution" does high-level collateral damage. By breaking the OS into a lot of little pieces, the u-kernels introduce inefficiency. By putting constraints on how OSes are designed, u-kernels make design, coding, and debugging more difficult. All of this to do checking that, at least in theory, could have been done at design, compile, or link time.

    This error is basically caused by wishful thinking. The u-kernel advocates wish that operating system design were less difficult. To quote Torvalds:

    So that 'microkernels are wonderful' mantra really comes from that desperate wish that the world should be simpler than it really is. It's why microkernels have obviously been very popular in academia, where you often basically cannot afford to put a commercial-quality big development team on the issue, so you absolutely require that the problem is simpler.

    So reality has nothing to do with microkernels. Exactly the reverse. The whole point of microkernels is to try to escape the reality that OS design and implementation is hard, and a lot of work. It's an appealing notion.

    Criticism of microkernels is said to be almost unknown in the academic world, where it might be a career limiting move (CLM).

    In 1992, Tanenbaum said "LINUX is obsolete", "it is now all over but the shoutin'", and "microkernels have won". It is now 2008, and the microkernel advocates still have nothing that can compete with Linux in its own problem space. It is time for the microkernel advocates to stop shouting.
