The Great Microkernel Debate Continues 405

ficken writes "The great conversation about micro vs. monolithic kernels is still alive and well. Andy Tanenbaum weighs in with another article about the virtues of microkernels. From the article: 'Over the years there have been endless postings on forums such as Slashdot about how microkernels are slow, how microkernels are hard to program, how they aren't in use commercially, and a lot of other nonsense. Virtually all of these postings have come from people who don't have a clue what a microkernel is or what one can do. I think it would raise the level of discussion if people making such postings would first try a microkernel-based operating system and then make postings like "I tried an OS based on a microkernel and I observed X, Y, and Z first hand." Has a lot more credibility.'"
  • by vtscott ( 1089271 ) on Thursday January 31, 2008 @12:30PM (#22247724)
    And the date at the bottom of the article is "12 May 2006". The same article has been linked from slashdot before too. [slashdot.org] We really haven't argued about this for a while...
  • Re:Which one? (Score:5, Informative)

    by Anonymous Coward on Thursday January 31, 2008 @12:35PM (#22247796)
    QNX. www.qnx.com. Best OS ever. Very long support life (the last QNX 4.x patch was issued 17 years after it was released). It is now free for non-commercial use, with source.
  • Re:Which one? (Score:2, Informative)

    by blackchiney ( 556583 ) on Thursday January 31, 2008 @01:09PM (#22248290)
    You are partially correct. The Mach kernel was one of the first microkernel implementations. OS X is derived from it, as was the MkLinux experiment of the late 90s. But just as CISC and RISC processors have been on a collision course, monolithic kernels and microkernels have each picked up the best features of the other. OS X is based on the XNU kernel, a Mach/monolithic hybrid. The performance of pure microkernels was just never up to the task.
  • by peragrin ( 659227 ) on Thursday January 31, 2008 @01:09PM (#22248302)
    While you are quite correct, the question becomes: why should the CPU handle those instructions? It's like USB 2.0 versus FireWire 400: FireWire, despite its "slower" burst rate, sustains a higher steady rate precisely because it offloads some of the work.

    SCSI and FireWire are examples of good tech working for you. The CPU should issue instructions to devices smart enough to work on their own, leaving more cycles available for things that actually matter.
  • by trolltalk.com ( 1108067 ) on Thursday January 31, 2008 @01:13PM (#22248358) Homepage Journal
    Geez, nobody gets the joke?

    If you read the article, Tanenbaum reminds everyone of how Microsoft paid Ken Brown to write a book accusing Linus of stealing the Minix microkernel. FTFA:

    In the unlikely event that anyone missed it, a couple of years ago Microsoft paid a guy named Ken Brown to write a book saying Linus stole Linux from my MINIX 1 system. I refuted that accusation pretty strongly to clear Linus' good name. I may not entirely agree with the Linux design, but Linux is his baby, not my baby, and I was pretty unhappy when Brown said he plagiarized it from me.
  • Re:Which one? (Score:5, Informative)

    by e4g4 ( 533831 ) on Thursday January 31, 2008 @01:16PM (#22248422)
    OS X, strictly speaking, is a hybrid kernel [wikipedia.org]. Essentially, NeXT mashed together Carnegie Mellon's Mach microkernel with BSD (a monolithic kernel), yielding the overwhelmingly originally named XNU kernel ("X is Not Unix"). So in short: yes, OS X is a microkernel-based OS, but it is just as much a monolithic-kernel-based OS.
  • Re:Which one? (Score:3, Informative)

    by jeremyp ( 130771 ) on Thursday January 31, 2008 @01:25PM (#22248544) Homepage Journal
    Actually, there is no microkernel in OS X. Everything in the operating system runs in the same kernel address space. Although individual subsystems within OS X can talk to each other (and to user space) with Mach messages, they can also see each other's memory. Also, the VFS subsystem and the network stack were lifted directly from BSD (although as of 10.4 they are much modified). For anyone who hasn't seen Mach messaging first hand, there's a minimal sketch below.
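    A minimal user-space sketch of Mach message passing: one task allocates a port and sends itself a message. Names are invented and error handling is omitted; this shows the IPC primitive, not how XNU subsystems are actually wired up.

        #include <mach/mach.h>
        #include <stdio.h>

        typedef struct {
            mach_msg_header_t header;
            int payload;
        } demo_msg_t;

        int main(void)
        {
            mach_port_t port;
            /* Allocate a receive right in our own task. */
            mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &port);

            demo_msg_t tx = {0};
            tx.header.msgh_bits        = MACH_MSGH_BITS(MACH_MSG_TYPE_MAKE_SEND, 0);
            tx.header.msgh_size        = sizeof(tx);
            tx.header.msgh_remote_port = port;           /* destination       */
            tx.header.msgh_local_port  = MACH_PORT_NULL; /* no reply expected */
            tx.payload = 42;
            mach_msg(&tx.header, MACH_SEND_MSG, sizeof(tx), 0,
                     MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);

            /* Receive buffer must leave room for the kernel-appended trailer. */
            struct { demo_msg_t msg; mach_msg_trailer_t trailer; } rx;
            mach_msg(&rx.msg.header, MACH_RCV_MSG, 0, sizeof(rx),
                     port, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
            printf("got payload %d via Mach IPC\n", rx.msg.payload);
            return 0;
        }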

  • by gsnedders ( 928327 ) on Thursday January 31, 2008 @01:39PM (#22248712) Homepage

    - He alluded to the Linux kernel being hard to port because of its ties to the x86 architecture, citing how Minix was ported to x86, 680x0, and SPARC. Yet there's hardly a piece of silicon worthy of driving a terminal that Linux hasn't been ported to

    IIRC that comment dates back to when Linux was strongly tied to x86 -- sure, since 2.0 it's been modular enough to make porting easier, but back in the 1.x days that sure as hell wasn't the case. Porting it to 68k (the first port of Linux) took a huge effort and meant rewriting large parts.
  • by master_p ( 608214 ) on Thursday January 31, 2008 @01:53PM (#22248888)
    The only reason this debate is going on is because CPUs do not have the concept of modules. If they did, then no module would be able to crash the rest of the modules.

    If you wonder how to do modules without sacrificing the flat address space, it's quite easy: in most CPU designs, each page descriptor has a user/supervisor bit which defines whether the page's contents are accessible from user mode. Instead of this single bit, CPUs would use the target address to look up module information in another table. In other words, the CPU would maintain a map of addresses to modules, and use this map to enforce access control (toy sketch below).

    This design is not as slow as it initially might seem. Modern CPUs are very fast, and they already contain many such maps: the Translation Lookaside Buffer, the Global Descriptor Table cache, the Local Descriptor Table cache, victim caches, trace caches, you name it.
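    A toy software simulation of that idea (all names invented; real hardware would do this in parallel with the TLB lookup):

        #include <stdint.h>
        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        /* The hypothetical map of address ranges to module IDs. */
        typedef struct { uintptr_t start, end; int module_id; } module_range_t;

        static const module_range_t module_map[] = {
            { 0x1000, 0x1fff, 0 },   /* module 0: e.g. the scheduler   */
            { 0x2000, 0x2fff, 1 },   /* module 1: e.g. a device driver */
        };

        static int module_of(uintptr_t addr)
        {
            for (size_t i = 0; i < sizeof(module_map) / sizeof(module_map[0]); i++)
                if (addr >= module_map[i].start && addr <= module_map[i].end)
                    return module_map[i].module_id;
            return -1;
        }

        /* The check the CPU would perform on every load/store:
           deny any access that crosses a module boundary. */
        static bool access_ok(uintptr_t pc, uintptr_t target)
        {
            return module_of(pc) == module_of(target);
        }

        int main(void)
        {
            printf("intra-module: %d\n", access_ok(0x1100, 0x1200)); /* 1: allowed */
            printf("cross-module: %d\n", access_ok(0x1100, 0x2200)); /* 0: fault   */
            return 0;
        }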

  • Re:Which one? (Score:3, Informative)

    by diegocgteleline.es ( 653730 ) on Thursday January 31, 2008 @01:59PM (#22248984)
    Calling Xen a microkernel is wrong. Yes, it provides "isolation", and it has that in common with microkernels, but isolation is not something that was invented by microkernels, so I don't see why we should call anything that provides isolation a "microkernel". AFAIK Xen doesn't provide a way to create subprocesses to run functionality on it, and it doesn't provide a real "message passing system"... I'd call it "virtual hardware", but not a "microkernel".
  • by AKAImBatman ( 238306 ) <akaimbatman AT gmail DOT com> on Thursday January 31, 2008 @02:04PM (#22249068) Homepage Journal
    Depends. Ever hear of FUSE [sourceforge.net]? It's been showing up in quite a few distros for the capabilities it buys by running outside of kernel space. It's become so important that it has been ported to BSD, Solaris, and Mac OS X.

    What does it do? Why, it's a monolithic driver that provides an interface to support userspace filesystem drivers, i.e. a microkernel in practice, if not in definition. Ergo the grandparent's point about a slow migration. A minimal sketch of such a driver is below.
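    A userspace filesystem driver really is just a handful of callbacks. Here's a minimal sketch along the lines of the classic libfuse "hello" example (FUSE 2.x API):

        #define FUSE_USE_VERSION 26
        #include <fuse.h>
        #include <string.h>
        #include <errno.h>
        #include <sys/stat.h>

        static const char *hello_path = "/hello";
        static const char *hello_str  = "Hello from userspace!\n";

        /* Report a root directory containing one read-only file. */
        static int hello_getattr(const char *path, struct stat *st)
        {
            memset(st, 0, sizeof(*st));
            if (strcmp(path, "/") == 0) {
                st->st_mode = S_IFDIR | 0755;
                st->st_nlink = 2;
            } else if (strcmp(path, hello_path) == 0) {
                st->st_mode = S_IFREG | 0444;
                st->st_nlink = 1;
                st->st_size = strlen(hello_str);
            } else {
                return -ENOENT;
            }
            return 0;
        }

        static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                                 off_t off, struct fuse_file_info *fi)
        {
            if (strcmp(path, "/") != 0)
                return -ENOENT;
            fill(buf, ".", NULL, 0);
            fill(buf, "..", NULL, 0);
            fill(buf, hello_path + 1, NULL, 0);  /* strip the leading '/' */
            return 0;
        }

        static int hello_read(const char *path, char *buf, size_t size, off_t off,
                              struct fuse_file_info *fi)
        {
            size_t len = strlen(hello_str);
            if (strcmp(path, hello_path) != 0)
                return -ENOENT;
            if ((size_t)off >= len)
                return 0;
            if (off + size > len)
                size = len - off;
            memcpy(buf, hello_str + off, size);
            return size;
        }

        static struct fuse_operations hello_oper = {
            .getattr = hello_getattr,
            .readdir = hello_readdir,
            .read    = hello_read,
        };

        int main(int argc, char *argv[])
        {
            /* fuse_main mounts the filesystem and loops, dispatching
               kernel requests to the callbacks above -- all in userspace. */
            return fuse_main(argc, argv, &hello_oper, NULL);
        }

    Compile with gcc hello.c `pkg-config fuse --cflags --libs`, mount with ./hello /some/mountpoint, and every read of /hello is served by this ordinary user process, with the in-kernel FUSE driver just shuttling requests back and forth.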
  • Re:Which one? (Score:1, Informative)

    by Anonymous Coward on Thursday January 31, 2008 @02:04PM (#22249080)

    You can download it yourself here:

    http://memo5.dyndns.org/Sites/Lisa/softs/othersofts.html [dyndns.org]

  • by DarkOx ( 621550 ) on Thursday January 31, 2008 @02:20PM (#22249302) Journal
    umm, wrong....

    Loadable modules have nothing to do with micro/monolithic design. In a microkernel, those modules would have their own memory space, their own PID-like identifier assigned by the monitor, and would do all their communication with the rest of the kernel via some IPC mechanism.

    When you load a module in Linux, it lives in the same memory space as the rest of the kernel and can freely exchange data with the rest of the kernel by writing directly to shared data structures. Modules don't give you a less monolithic kernel; what they do is allow you to determine what the monolith looks like at run time instead of compile time. The skeleton below makes the point concrete.
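    A minimal Linux loadable module skeleton (standard module boilerplate): the init code runs in the kernel's own address space, with full access to kernel data structures and no IPC in sight.

        #include <linux/module.h>
        #include <linux/init.h>
        #include <linux/kernel.h>

        /* Runs inside the kernel address space when insmod loads us. */
        static int __init hello_init(void)
        {
            printk(KERN_INFO "hello: loaded into kernel address space\n");
            return 0;
        }

        /* Runs when rmmod unloads us. */
        static void __exit hello_exit(void)
        {
            printk(KERN_INFO "hello: unloaded\n");
        }

        module_init(hello_init);
        module_exit(hello_exit);
        MODULE_LICENSE("GPL");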
  • by nxtw ( 866177 ) on Thursday January 31, 2008 @02:27PM (#22249398)
    Well, let's consider a few virtualization platforms.

    VMware ESX Server's "vmkernel" is supposed to be a custom microkernel that happens to use drivers from Linux (all device drivers run inside the vmkernel). Guest OSes (including the Linux-based service console used to manage the server) run on top of the vmkernel and access all hardware through it.

    The Xen hypervisor does less than VMware's vmkernel; it loads a "dom0" guest that manages the system and performs I/O. With few exceptions, this guest is the only guest that directly interfaces with hardware. The hypervisor manages memory, schedules the CPU, and manages communication between guests.

    Microsoft's Hyper-V appears to operate in a similar fashion to Xen.

    In the case of Xen and Hyper-V, it's still different than a microkernel; the guest instance that is performing I/O is still a monolithic kernel - usually Linux with Xen and currently Windows 2008 with Hyper-V.

    In all three systems, you've got one special guest that handles important system functions and one kernel handling I/O (be it a guest as in Xen/Hyper-V or be it the vmkernel in VMware). There's no "filesystem" process/VM, no "network driver" process/VM, etc.
  • Design goals (Score:4, Informative)

    by Tony ( 765 ) on Thursday January 31, 2008 @02:46PM (#22249722) Journal
    He mentions them because they meet his design goal: they are highly-reliable operating systems used in mission-critical applications. (Here, "mission" might be, "Bombing the fuck out of people.") He is building his case that it's easier to design a bullet-proof OS using a microkernel, as opposed to a monolithic kernel.

    And he's right. If your goal is reliability and security, a microkernel is a better design. Both goals rely on limiting the amount of time (and the amount of code) spent in kernel space. "Process isolation" is the mantra.

    NeXTStep was a hybrid kernel. It was *almost* a microkernel (based on Mach). And it was *highly* usable. It had the most usable UI in the industry, and still does in its current reincarnation as OS X.

    I think microkernels still have legs.
  • Re:Slashvertisement (Score:3, Informative)

    by cromar ( 1103585 ) on Thursday January 31, 2008 @02:51PM (#22249782)
    The userland, mostly. FTFA:

    MINIX 3 is [the MINIX kernel] plus a start at building a highly reliable, self-healing, bloat-free operating system...
    It can run X, etc. As it is mainly for educational purposes, it's more of a proof of concept than an attempt to create a user friendly OS. The article is actually pretty interesting...
  • by julesh ( 229690 ) on Thursday January 31, 2008 @03:34PM (#22250480)
    Depends. Ever hear of FUSE? It's been showing up in quite a few distros for the capabilities it buys by running outside of kernel space. It's become so important that it has been ported to BSD, Solaris, and Mac OS X.

    What does it do? Why, it's a monolithic driver that provides an interface to support userspace filesystem drivers, i.e. a microkernel in practice, if not in definition. Ergo the grandparent's point about a slow migration.


    Unfortunately, the problem with FUSE is that it's painfully slow. And yes, I do know what I'm talking about, having written drivers for it myself.
  • Re:QNX Rules (Score:3, Informative)

    by julesh ( 229690 ) on Thursday January 31, 2008 @03:44PM (#22250670)
    BTW, why the hell would it be the compiler's job to handle IPC? Shouldn't it be the job of whatever is behind the OS API?

    The compiler doesn't really handle IPC: what happens is that the compiler (or rather the loader) verifies that the programs are type- and memory-safe before allowing them to run. Then they are all loaded into a single memory space, so IPC becomes trivial (toy sketch below). It's a neat concept, although not the first time it has been implemented (see an OS called 'JX').
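    A toy illustration (all names invented) of why IPC becomes trivial once everything shares one verified address space: sending a message is just a pointer handoff, with no copy and no trap into the kernel, because the loader's safety checks already guarantee the receiver can trust what it gets.

        #include <stdio.h>

        typedef struct {
            int         sender;
            const char *body;
        } message_t;

        static message_t *mailbox;  /* one-slot mailbox shared by all "processes" */

        static void ipc_send(message_t *m)  { mailbox = m; }     /* one pointer store */
        static message_t *ipc_receive(void) { return mailbox; }  /* zero-copy read    */

        int main(void)
        {
            message_t m = { 1, "hello across a trivial IPC boundary" };
            ipc_send(&m);
            message_t *got = ipc_receive();  /* receiver trusts the verified type */
            printf("from %d: %s\n", got->sender, got->body);
            return 0;
        }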
  • by turgid ( 580780 ) on Thursday January 31, 2008 @04:05PM (#22251094) Journal

    He asserted that the x86 architecture was doomed to extinction. Yet the majority of the -planet- uses an x86 machine of some sort as of 2008.

    *sigh* That old chestnut.

    Every x86 processor of the last decade, whether from Intel, AMD or VIA, is internally a superscalar, out-of-order, register-rich RISC with a layer that decodes x86 opcodes and translates them into the native RISC operations. The Transmeta chips were RISC/VLIW internally and could emulate any instruction set by loading the translation code at power-up.

  • by Verte ( 1053342 ) on Friday February 01, 2008 @12:32AM (#22257700)

    why not take a leadership role in GNU/Hurd and get that project going again?
    The Hurd is going along swimmingly, thank you. They don't have vast Hurds of Unix-replacing developers, but they do have people who enjoy what they do and know it inside out.

    I doubt Andy would be so interested in the Hurd, he is very much the message-passing fan. He also doesn't like the GPL.
  • Re:crickets (Score:3, Informative)

    by savuporo ( 658486 ) on Friday February 01, 2008 @04:28AM (#22258782)
    There are quite a few: FreeRTOS, eCos, RTEMS, AvrX, etc., some of them commercially quite successful. Proprietary counterparts like Nucleus are definitely very successful and in wide use.
    What, you have never heard of them? Well, there are other widely used computing platforms besides personal computers.
