
The Great Microkernel Debate Continues 405

Posted by Zonk
from the tiny-issues dept.
ficken writes "The great conversation about microkernels vs. monolithic kernels is still alive and well. Andy Tanenbaum weighs in with another article about the virtues of microkernels. From the article: 'Over the years there have been endless postings on forums such as Slashdot about how microkernels are slow, how microkernels are hard to program, how they aren't in use commercially, and a lot of other nonsense. Virtually all of these postings have come from people who don't have a clue what a microkernel is or what one can do. I think it would raise the level of discussion if people making such postings would first try a microkernel-based operating system and then make postings like "I tried an OS based on a microkernel and I observed X, Y, and Z first hand." Has a lot more credibility.'"
  • QNX Rules (Score:1, Interesting)

    by Anonymous Coward on Thursday January 31, 2008 @12:31PM (#22247736)
Great OS, millions in use, fast, small - Microsoft should have bought them!

    www.qnx.com

  • by Nooface (526234) on Thursday January 31, 2008 @12:39PM (#22247862) Homepage
    The rise of virtualization proves the validity of the microkernel concept, whereby the hypervisor now takes the place of the original "kernel" (note the similarity in block diagrams: microkernel [cisco.com] vs. hypervisor [com.com] designs). Virtual machines are now used instead of function-specific modules in the original microkernel designs, with specialized VMs for performing I/O and to host virtual appliances [wikipedia.org] with just enough user-level code needed to support a particular application.
  • Re:Which one? (Score:5, Interesting)

    by mrsbrisby (60242) on Thursday January 31, 2008 @12:49PM (#22247996) Homepage
NT (like OS X) has a microkernel, but the operating system isn't just the microkernel. Most of OS X (for example) actually runs on UNIX, which runs as a single application on the microkernel. NT also has an enormous number of kernel entry points [zdnet.com], which means that it too is a monolithic-kernel-based system that happens to run on a microkernel.

A real microkernel-based system will have a lot of its userland facilities designed to take advantage of message passing, and will probably look more like HURD or Squeak than like NT or NeXT. QNX [qnx.com] and VxWorks [windriver.com] are the only successful microkernel-based systems that I'm aware of, and frankly both of them are losing big to Linux, so we might have to say they *were* the only successful systems in the future...
  • Design Philosophy (Score:5, Interesting)

    by Darkseer (63318) on Thursday January 31, 2008 @12:54PM (#22248066) Homepage
I did my senior project in college on this in 1998... At that time I was looking at something from MIT called the exokernel and comparing it to some 2.4 version of the Linux kernel. Back in 1998 the problem was mainly that nobody had invested in that particular microkernel technology, unlike say Mach, because it was a research project. In my conclusion, it was clear I could not do a meaningful comparison of complex applications on both OSes due to the exokernel's lack of maturity. But one thing was clear: the design philosophy behind the microkernel allowed a much more flexible way to interact with the hardware.

The time it would take to design and implement the equivalent of a driver was smaller. In the end it puts more flexibility into the hands of the application designer, with the kernel taking care of just the bare minimum. The initial work at the time reported a 10x improvement in performance, since you could customize so much of how the hardware resources were being used. This of course comes at a price: in addition to developing the application, you need to develop the drivers it uses, possibly increasing the time to write anything significant.

But in the end, flexibility was key, and you can see some of the microkernel design philosophies start to seep into the Linux kernel. Take kernel modules, for example. The code is already being abstracted out; now if only it were designed to run effectively in userspace.

My thoughts are that in the end the microkernel will win due to the fact that I can engineer a more complex OS that is cheaper to change, not because it is faster. This is the compromise that was made with compilers vs. machine-language programming. In the end I think Tanenbaum will win, Linux will become a microkernel out of necessity, and Linus, as it turns out, would have gotten a good grade from Dr. Tanenbaum. He just would have handed his final project in 40 years late by the time it happens.
     
  • by N1ck N4m3 (1230664) on Thursday January 31, 2008 @12:57PM (#22248128)
    ... "ficken" in german means to bang, to bonk, to frig, to fuck, to hump, to screw or to shag!
  • Re:Which one? (Score:3, Interesting)

    by TheRaven64 (641858) on Thursday January 31, 2008 @01:11PM (#22248330) Journal
    The most popular microkernel these days is Xen. It actually has more system calls than classic UNIX, but device drivers and filesystems are all run outside of ring 0 and most of the system calls are direct equivalents to privileged instructions. It implements shared memory for IPC and recommends a lockless ring buffer for message passing.
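The lockless ring described here can be sketched as a single-producer/single-consumer queue over a shared buffer. This is an illustrative sketch with made-up names, in the spirit of the rings Xen's split drivers use, not Xen's actual ring macros:

```c
#include <stdint.h>

/* Minimal single-producer/single-consumer lockless ring. One side only
 * ever writes prod, the other only ever writes cons, so no lock is
 * needed; a memory barrier orders the slot write before the index
 * update. Hypothetical names, not Xen's ring.h. */
#define RING_SIZE 8u             /* must be a power of two */

struct ring {
    uint32_t prod;               /* written only by the producer */
    uint32_t cons;               /* written only by the consumer */
    uint64_t slots[RING_SIZE];
};

/* Producer side: returns 0 on success, -1 if the ring is full. */
static int ring_push(struct ring *r, uint64_t msg) {
    if (r->prod - r->cons == RING_SIZE)
        return -1;
    r->slots[r->prod % RING_SIZE] = msg;
    __sync_synchronize();        /* publish the slot before the index */
    r->prod++;
    return 0;
}

/* Consumer side: returns 0 on success, -1 if the ring is empty. */
static int ring_pop(struct ring *r, uint64_t *msg) {
    if (r->cons == r->prod)
        return -1;
    *msg = r->slots[r->cons % RING_SIZE];
    __sync_synchronize();        /* read the slot before releasing it */
    r->cons++;
    return 0;
}
```

In a real split-driver setup the `struct ring` would live in a page shared between the two domains, which is what makes the IPC cost a couple of memory operations rather than a system call.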
  • Re:Which one? (Score:2, Interesting)

    by TheCoelacanth (1069408) on Thursday January 31, 2008 @01:11PM (#22248336)
    It is very misleading to call OS X microkernel based. It does run on a microkernel, but all of the normal drivers run in kernel mode so it is not a true microkernel.
  • Re:Which one? (Score:5, Interesting)

    by LWATCDR (28044) on Thursday January 31, 2008 @01:14PM (#22248384) Homepage Journal
QNX is relevant. Is it popular on the desktop? Not really, but then you could ask: is BSD relevant? Is it popular on the desktop compared to Windows, MacOS/X, or even Linux?
Is Linux relevant on the desktop? If you don't count dual-boot machines, how many Linux desktops are out there?

    "Although microkernel OSs may be "nicer" from a design point of view, on the practical side the monolithical ones are serving us very well."
I have heard that argument before, except it was about Unix. MS-DOS was so much faster and used less RAM and drive space than Unix did.
    To just dismiss microkernels because monolithic kernels are good enough is silly.

Actually, Linux is starting to take some ideas from microkernels. FUSE is a microkernel idea. Moving more device drivers into userspace is also a very good idea. It means that security issues with a driver are less likely to root the OS or take the OS out with a crash.
Stability and security are important, aren't they?

But back to your comment: yes, QNX is relevant. It is relevant because it proves that you can have a small, fast, and stable microkernel OS.

  • by Peaker (72084) <`moc.oohay' `ta' `rekaepung'> on Thursday January 31, 2008 @01:21PM (#22248486) Homepage
Kernels provide:
1. Hardware abstraction
2. Resource management

They do this using:
1. Privileged access to the hardware
2. Address-space separation for security/reliability purposes


I believe that privileged hardware access and address-space separation should die out in favor of an alternative: runtime-level protection.

As more and more systems move to being based on bytecode-running virtual machines, and as JITs and hardware improve, it is becoming clearer that in the future "static native code" (C/C++ executables and such) will die out to make room for JIT'd native code (Java/.NET).
I believe this will happen because a JIT can and will optimize better than a static compiler running completely before execution. Such languages are also easier to develop with.

Once such runtimes are used, some aspects of reliability/safety are guaranteed (memory overruns cannot occur; references to inaccessible objects cannot be synthesized). By relying on these measures for security as well, we can eliminate both the need for elevated kernel access and address-space/context switches. This is desirable for several reasons:
    1. Simplicity. Lack of address space separations (processes) is simpler.
    2. Uniformity of communication: Objects can use one another without regard of "process boundaries", as long as a reference to the object exists.
    3. Performance: while "safe" languages are considered slower today (and will have better JIT'd performance in the future), they can eliminate context and address-space switches, which have considerable costs in current systems.


Once you rely on the runtime for security and reliability, a "kernel" becomes nothing more than a thread scheduler and a hardware-abstraction object library.

I believe this is the correct design for future systems, and it is my answer to the micro vs. monolithic question: neither!
  • Re:Which one? (Score:3, Interesting)

    by macs4all (973270) on Thursday January 31, 2008 @01:24PM (#22248534)
So perhaps, as with launchd, Apple (who really does have decades of *NIX experience; think A/UX) has actually come up with a third, and obviously viable, option: XNU.

    Therefore, a new question suggests itself: Do we really have to have a three-way debate? Micro v. Mono v. XNU (hybrid)???
  • Re:Which one? (Score:5, Interesting)

    by diegocgteleline.es (653730) on Thursday January 31, 2008 @01:32PM (#22248620)
I hate it when people say "NT/OS X are microkernels but don't work like microkernels", or call them "hybrid kernels".

Either you're a microkernel or you're not. Either you run filesystems and network stacks in separate, isolated processes and address spaces, or you don't. NT and OS X don't run any of that as a separate process, which was the whole point of having a microkernel. They run it in the same process space as everything else, just like Linux, Solaris, and Windows 9x. In other words, they aren't microkernels.

Yes, they have source-level design abstractions inherited from microkernels to make the design more modular. So do Linux, Solaris, and any other decent monolithic kernel, even if they didn't inherit them from microkernels. Microkernel people wasted years saying that a microkernel was needed to achieve "modularity", when the fact is that modularity in software design is not something you achieve only by running things in different process spaces. After 20 years they haven't realized that many parts of Linux or Solaris are more modular than their equivalents in Minix or HURD.
  • Re:Slashvertisement (Score:3, Interesting)

    by moderatorrater (1095745) on Thursday January 31, 2008 @01:33PM (#22248642)
    I'm going to disagree with you. This debate's been raging for years, and he's calling for people to try the most available one on the market right now. He's not going to get money from it, all he'll get is maybe some development done for it. Since he's a professor and could just as easily make the development an assignment or extra credit, it's much more likely that he's trying to inject some experience into the debate instead of turning it into some nerdy gladiatorial fight between him and Linus.

    The real accusation you should be making is that he's a coward and a chicken because he's really only scared of Linus bringing his wife to the match.
  • by diegocgteleline.es (653730) on Thursday January 31, 2008 @01:40PM (#22248714)
This is the opinion of the Plan 9 authors [cambridge.ma.us] WRT microkernels and other things:

    The implementers of Plan 9 are baffled by Andy Tanenbaum's recent posting. We suspect we are not representative of the mainline view, but we disagree at some level with most of the "GENERALLY ACCEPTED" truths Andy claims. Point by point:

          - The client-server paradigm is a good one
            Too vague to be a statement. "Good" is undefined.
          - Microkernels are the way to go
            False unless your only goal is to get papers published. Plan 9's kernel is a fraction of the size of any microkernel we know and offers more functionality and comparable or often better performance.
          - UNIX can be successfully run as an application program
            `Run' perhaps, `successfully' no. Name a product that succeeds by running UNIX as an application.
          - RPC is a good idea to base your system on
        Depends on what you mean by RPC. If you predefine the complete set of RPC's, then yes. If you make RPC a paradigm and expect every application to build its own (c.f. stub compilers), you lose all the discipline you need to make the system comprehensive.
          - Atomic group communication (broadcast) is highly useful
            Perhaps. We've never used it or felt the need for it.
          - Caching at the file server is definitely worth doing
            True, but caching anywhere is worthwhile. This statement is like saying 'good algorithms are worth using.'
          - File server replication is an idea whose time has come
            Perhaps. Simple hardware solutions like disk mirroring solve a lot of the reliability problems much more easily. Also, at least in a stable world, keeping your file server up is a better way to solve the problem.
          - Message passing is too primitive for application programmers to use
            False.
          - Synchronous (blocking) communication is easier to use than asynchronous
            They solve different problems. It's pointless to make the distinction based on ease of use. Make the distinction based on which you need.
          - New languages are needed for writing distributed/parallel applications
            `Needed', no. `Helpful', perhaps. The jury's still out.
          - Distributed shared memory in one form or another is a convenient model
            Convenient for whom? This one baffles us: distributed shared memory is a lousy model for building systems, yet everyone seems to be doing it. (Try to find a PhD this year on a different topic.)

    How about the "CONTROVERSIAL" points? We should weigh in there, too:

          - Client caching is a good idea in a system where there are many more nodes than users, and users do not have a "home" machine (e.g., hypercubes)
            What?
          - Atomic transactions are worth the overhead
            Worth the overhead to whom?
          - Causal ordering for group communication is good enough
            We don't use group communication, so we don't know.
          - Threads should be managed by the kernel, not in user space
            Better: have a decent process model and avoid this process/thread dichotomy.

    Rob Pike
    Dave Presotto
    Ken Thompson
    Phil Winterbottom
  • Re:crickets (Score:3, Interesting)

    by OrangeTide (124937) on Thursday January 31, 2008 @01:47PM (#22248806) Homepage Journal
    Why would I create a microkernel when they so obviously suck?

It's not hard to write a monolithic kernel that screams faster than something like Mach. But not all microkernels are like Mach, and even commercial microkernels like QNX have a lot of overhead for certain applications (like filesystem I/O).

    It is possible to have a fast microkernel if you completely discard the original concept of a microkernel and start over with a fresh design. L4 is quite fast for example, even if the whole Clan thing is a bit weird to me.

Oh, and you don't get to count something as a microkernel if it is a monolithic kernel running the important bits and a microkernel running the less performance-sensitive bits (Mac OS X, Windows NT, etc.).
  • Re:Which one? (Score:3, Interesting)

    by Iron Condor (964856) on Thursday January 31, 2008 @02:18PM (#22249282)

    But yeah, moving stuff out of the kernel is the way forward in terms of security, and that's pretty much the definition of a microkernel architecture.

I'm gonna get tarred and feathered for this, but... this is of course exactly what Vista is doing: "Hey, wouldn't it be better if we stopped letting any odd piece of software talk directly to the hardware in kernel mode? If kernel mode was reserved for... y'know... the kernel?" So instead they exposed a (perfectly reasonable) API. And the only cost is that you need a new device driver for any old hardware. Like that 20-year-old joystick.

    Oh, it also makes it a lot harder to get around OS restrictions on illegal content access. No "directly-talking-to-the-CD-drive" any more. Which means in /.-land "built-in DRM". Which it isn't, of course, but could certainly be seen that way.

    Is it "better" for stuff to get moved out of the kernel? Well, "better" for whom?

  • by 0xABADC0DA (867955) on Thursday January 31, 2008 @02:21PM (#22249338)
We don't need a microkernel written in any language, especially not C. What we need is a kernel where everything is protected by being typesafe (a 'safe' kernel): a kernel written in Java (jxos), .NET (Singularity), Limbo (Inferno), or maybe D. People forget that the original purpose of the "memory management unit" was swap on mainframes, not process protection. And anybody who has looked at the mess that is fork, mmap, etc. dealing with memory protection in a monolithic system should know it's not good at process protection. It's absurd how much complexity and overhead is caused by this.

    A 'safe' kernel sounds slow, because it is probably interpreting bytecodes and has garbage collection. But you get many performance advantages also:

1) the idle thread can actually do something: making programs take less room (compacting GC), offloading some of the work of free(), and optimizing code. So programs respond faster when you switch back to them.

2) lack of data copying. Current systems often copy a *lot* of data in calls to read(2), write(2), and friends, and attempts to reduce this with calls like sendfile or with page sharing are very complicated and have a lot of overhead. With a 'safe' kernel you can just hand out a read-only view, or use any number of other very simple methods where no copying takes place.

    3) mmu can be used to optimize garbage collection. Only pages written to since the last collection need to be checked for references to new objects, which can improve performance drastically if the instructions inserted to implement a software 'memory barrier' can be removed. It can also help run a gc in parallel since it can easily know if the objects it is looking at have changed during the collection.

    4) can eliminate all TLB flushes and stalls from swapping page tables

5) a much faster context switch means programs can have smaller time slices, so responsiveness is improved: less latency in audio (and everything else) without special hacks like magic 'realtime' processes.

    6) can run on all hardware, even when lacking memory protection

7) hardware access is safer than in a micro or monolithic kernel, and drivers are easier to write

    ... and so on.
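Point 2 above, copying versus handing out a view, can be sketched in plain C. The names and backing buffer are hypothetical, and `const` here is only a compile-time stand-in for the runtime-enforced safety the comment relies on:

```c
#include <string.h>

/* A stand-in for data the kernel holds (e.g. a page-cache buffer). */
static char file_data[] = "hello, world";

/* Copying interface, read(2) style: the caller supplies a buffer and
 * every byte is duplicated into it. */
size_t read_copy(char *dst, size_t n) {
    size_t len = sizeof file_data < n ? sizeof file_data : n;
    memcpy(dst, file_data, len);
    return len;
}

/* View interface: the caller gets a read-only reference into the
 * kernel's own buffer, so no bytes move. A typesafe runtime could
 * guarantee the caller never writes through or outlives the view. */
const char *read_view(size_t *len) {
    *len = sizeof file_data;
    return file_data;
}
```

The copy costs O(length) per call; the view costs O(1), which is the performance point being made.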
  • by Animats (122034) on Thursday January 31, 2008 @02:26PM (#22249396) Homepage

    As someone who's done operating system internals work and has written extensively for QNX, I should comment.

    Down at the bottom, microkernels are about interprocess communication. The key problem is getting interprocess communication right. Botch that, from a performance or functionality standpoint, and your system will be terrible. In a world where most long-running programs now have interprocess communication, it's amazing that most operating systems still do it so badly.

    For interprocess communication, the application usually needs a subroutine call, and the operating system usually gives it read and write. Pipes, sockets, and System V IPC are all queues. So clunky subroutine call systems are built on top of them. Many different clunky subroutine call systems: SOAP, JSON, XMLHttpRequest, CORBA, OpenRPC, MySQL protocol, etc. Plus all Microsoft's stuff, from OLE onward. All of this is a workaround for the mess at the bottom. The performance penalty of those kludges dwarfs that of microkernel-based interprocess communication.
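The mismatch described here, a subroutine call built by hand on top of byte-stream queues, can be sketched with two POSIX pipes. QNX collapses this whole dance into a single MsgSend/MsgReceive/MsgReply exchange; this sketch and its names are hypothetical, not QNX code:

```c
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Client side: write the request down one queue, then block on the
 * other queue for the reply. Framing, pairing of request to reply,
 * and error handling are all on us. */
static int rpc_call(int to_server, int from_server, int arg) {
    int reply = -1;
    if (write(to_server, &arg, sizeof arg) != (ssize_t)sizeof arg)
        return -1;
    if (read(from_server, &reply, sizeof reply) != (ssize_t)sizeof reply)
        return -1;
    return reply;
}

/* Fork a one-shot "server" that adds 1 to the request and replies. */
int rpc_demo(void) {
    int req[2], rep[2], r;
    if (pipe(req) != 0 || pipe(rep) != 0)
        return -1;
    if (fork() == 0) {
        int arg = 0;
        if (read(req[0], &arg, sizeof arg) == (ssize_t)sizeof arg) {
            arg += 1;
            write(rep[1], &arg, sizeof arg);
        }
        _exit(0);
    }
    r = rpc_call(req[1], rep[0], 41);
    wait(NULL);
    return r;
}
```

Every higher-level protocol listed below is, in essence, this pattern with more elaborate framing layered on top.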

    I've recently been writing a web app that involves many long-running processes on a server, and I wish I had QNX messaging. I'm currently using Python, pickle, and pipes, and it is not fun. Most notably, handling all the error cases is much harder than under QNX.

    Driver overhead for drivers in user-space isn't that bad. I wrote a FireWire camera driver for QNX, and when sending 640 x 480 x 24 bits x 30 FPS, it used about 3% of a Pentium III, with the uncompressed data going through QNX messaging with one frame per message. So quit worrying about copying cost.

    The big problem with microkernels is that the base design is very tough. Mach is generally considered to have been botched (starting from BSD was a mistake). There have been very few good examples anyone could look at. Now that QNX source is open, developers can see how it's done. (The other big success, IBM's VM, is still proprietary.)

    Incidentally, there's another key feature a microkernel needs that isn't mentioned much - the ability to load user-space applications and shared libraries during the boot process. This removes the temptation to put stuff in the kernel because it's needed during boot. For example, in QNX, there are no display drivers in the kernel, not even a text mode driver. A driver is usually in the boot image, but it runs in user space. Also, program loading ("exec") is a subroutine in a shared object, not part of the kernel. Networking, disk drivers, and such are all user-level applications but are usually part of the boot image.

    Incidentally, the new head of Palm's OS development team comes from QNX, and I think we'll be seeing a more microkernel-oriented system from that direction.

  • by BrendaEM (871664) on Thursday January 31, 2008 @02:34PM (#22249524) Homepage
Back in the 1980s, I had a Color Computer 3, which was a pretty anemic machine, but it ran Microware's OS9 Level II. The Color Computer had a 6809b processor, which could only natively map 64K into its address space. Additional hardware allowed OS9 to map any eight 8K blocks into the processor's address space. Of that 64K, the entire kernel was eight kilobytes. The OS was a real-time, multi-user, windowing operating system.

My old system had 3-1/2 inch 720K disks. The whole operating system fit on one disk. Adding another disk gave you a primitive graphical file manager.

    Don't believe me? Here we go!

    VCC Emulator:
    http://vcc6809.bravehost.com/index.html [bravehost.com]

    You need OS9 Level II Disk Images:
    http://vcc6809.bravehost.com/disks/os9l2.zip [bravehost.com]

    Some Quicky Instructions:
The emulator emulates an expansion-slot thing called a Multipak, into which you drop the "FD-502 floppy controller", in which you can mount the (360K) disk images, as seen above. From there you can boot by typing: DOS

    You can load/unload commands at will, and load a bunch of merged ones with:
    load utilpak1

There is a manual here. Check out the technical section; the whole OS is a re-entrant tree!
    http://www.clubltdstudios.com/coco/downunder/OS9/OS9_Level_2.zip [clubltdstudios.com]

Be careful with the commands deldir (rmdir), dsave (xcopy), os9gen, and cobbler... and format too. If you have external floppies, the emulator can format them, if so mounted!

A little cramped for virtual storage? You can install a virtual hard disk controller into the Multipak and mount this virtual disk image on the virtual controller.
    http://vcc6809.bravehost.com/bin/nitros9.zip [bravehost.com]

To boot from the virtual hard disk, change the FD-502 disk controller settings to RGBDOS, mount the HDD controller in the Multipak, and type: DOS253

    But ick, a small 32 column screen. You can fix it by:

    wcreate /w1 -s=2 0 0 80 24 00 02 02
    shell i=/w1&

No change? Press [Home]; you just opened another virtual terminal and forked a shell to it. You can press [F11] for fullscreen, [F10] to kill the status line.

    There's more disks here:
    http://www.clubltdstudios.com/coco/downunder/OS9/ [clubltdstudios.com]

On the OS9 disks you can find Basic09 and its runtime, RunB. For its day, Basic09 was arguably the best compiled BASIC offered anywhere. Basic09/RunB/OS9 allowed DLL-style BASIC programming in the 1980s. Today you would find its error handling lacking, as it actually required a line number, and C programmers would miss case/switch statements.

    The asm source code is out there for both OS9 and NitrOS9, which is OS9 modded for the Hitachi 6309.

    Enjoy : )

At times I do wonder why the Linux kernel has to be recompiled for hardware changes. Kernel modules are a step in the right direction, but why is everyone still loading Nvidia TNT support? The kernel should be the kernel, and that's it; whatever hardware you have should be abstracted, and at least separable. Linux doesn't have commands like cobbler and os9gen to build a bootstrap from compiled modules. While kernel modules are a good idea, why aren't they used for all devices? Flash drives are still being mounted as SCSI devices? Because the kernel isn't modular, and that makes it harder to swap out device support for the end user.

  • by mikehoskins (177074) on Thursday January 31, 2008 @02:43PM (#22249670)
    I know this article is old, but can we agree to this?

First, a couple of background questions... Andy, you believe wholeheartedly in microkernels, right? Do you believe in them more than in Minix itself, or is this merely a shameless plug for your product, Minix?

Based on those two responses, here is my proposal... Assuming you believe in microkernels more than in Minix, why not take a leadership role in GNU/Hurd and get that project going again? http://www.gnu.org/software/hurd/hurd.html [gnu.org]

    Perhaps, you can get assistance from the Xen people, too. http://www.xensource.com/ [xensource.com]

    That's my modest proposal....
  • Re:Design Philosophy (Score:5, Interesting)

    by jd (1658) <imipak@noSPam.yahoo.com> on Thursday January 31, 2008 @02:44PM (#22249682) Homepage Journal
The problems with traditional microkernels lie in the heaviness of the module-to-module communication and in the number of context switches. An exokernel is pretty much entirely in one context, and exopc seemed to have very efficient communication, so that design looked extremely good. (Although a fair comparison isn't possible, a crude one would be to compare exopc + the Cheetah web server with Linux + Tux, both serving static content. See how well they scale when stress-tested.)

    Exokernels aren't the only microkernels of interest, though. There have been efforts to produce mobile nanokernels, on the theory that drivers are generally smaller than data, so in a cluster, moving the code to the data should be more efficient on resources. The opposite extreme has been to produce kernels that span multiple systems, producing a single virtual machine. Here, kernelspace and userspace are segmented and the latency between machines is simply another sort of context switch delay, yet the overall performance is greater than a loosely-coupled cluster could ever produce.

Microkernels have a lot of potential, a lot of problems have been solved, and there are still problems that need to be solved better. E.g.: if a driver crashes, there needs to be a transaction log that permits the hardware to be returned to a valid state if at all possible, or rebooted and then rolled into the last valid state. This isn't just a software problem, it's a hardware problem as well. Being able to safely reboot individual components on a motherboard in total isolation requires more than fancy coding and software switches. You need a lot more smoothing circuits and capacitors to ensure that a reboot has electrically no impact - not so much as a flicker - on anything else.

    Where microkernels would truly be "at home" would be in machines that support processor-in-memory architecture. Absurdly common function calls, instead of going to the CPU, having instructions and data fetched, and then being executed, along a long path from the OS' entry point to some outer segment of code, can be embedded in the RAM itself. Zero overhead, or damn near. It violates the principle of single-entry, single-exit, but if you don't need such a design, then why waste the cycles to support it?

  • by logicnazi (169418) <logicnazi@gm a i l . com> on Thursday January 31, 2008 @03:08PM (#22250062) Homepage
    This debate could use a lot more clarity about what is actually being debated. The truth is there are two separate design strategies that generally go under the term microkernel.

    1) The conceptual/syntactic division of the OS code into separate 'servers' interacting through some message passing paradigm. Note that a clever build system could easily smoosh these servers together and optimize away the message passing into local function calls.

2) The division of the compiled code into separate processes and the running of many integral parts of the OS as user processes.

Note that doing 1 and not 2 is a genuine option. If the analogy is really with object-oriented programming, then one can do what one does with OOP: program in terms of the abstraction but emit code that avoids the inefficiencies. While sysenter/sysexit optimizations for L4-based microkernels (and probably others) have made IPC much cheaper on current hardware, there is still a cost for switching in and out of kernel mode. Thus it can make a good deal of sense to just shove all the logical modules into ring 0.
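Sense #1 without sense #2 can be sketched as follows: the 'server' is reachable only through a message-style interface, but it is linked into the same address space, so the "message send" is just an indirect function call. All names here are hypothetical:

```c
/* A "message" to the disk server: the only way to talk to it. */
struct disk_msg { int op; long block; void *buf; };

/* The server exposes a single message-handling entry point; callers
 * never touch its internals directly. */
struct disk_server {
    int (*handle)(struct disk_msg *m);
};

/* A real server would dispatch on m->op; stubbed: op 0 = read, ok. */
static int disk_handle(struct disk_msg *m) {
    return m->op == 0 ? 0 : -1;
}

static struct disk_server disk = { disk_handle };

/* Client stub: "sends a message", but since everything lives in one
 * address space the send collapses to an indirect call - no context
 * switch, no copy, yet the source keeps the server abstraction. */
int disk_read(long block, void *buf) {
    struct disk_msg m = { 0, block, buf };
    return disk.handle(&m);
}
```

Promoting this to sense #2 would mean moving `disk_handle` into its own process and turning `disk.handle(&m)` into real IPC, without changing any caller.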

    --------

This brings us to the other point that needs clarification: what is it that we want to achieve? If we want to build an OS for an ATM, an embedded device, or an electric power controller, I think there is a much stronger case to be made for microkernels in sense #2. However, in a desktop system it really doesn't matter so much whether the OS can recover from a crash that will leave the applications in an unstable state. If the disk module crashes, taking its buffers with it, you don't want your applications to simply continue blithely along, so you may as well reboot.

But this is only a question of degree. There is no "microkernels wrong, monolithic kernels right" answer, or vice versa. It's just that each OS has a different ranking of priorities and should implement isolation of kernel 'servers' to a different degree.

    ----

The exact same can be said when it comes to microkernel-style development (i.e., #1). Both Linus and Tanenbaum have a point. Just as in OO programming, insisting on the abstraction of message-passing servers can sometimes improve code quality, but, also as with OOP, sticking religiously to the paradigm can make things less efficient or even more confusing. And if you have enough developers and testers (as Linux does), you might want to sacrifice the prettiness of the abstraction for performance and count on people catching the errors.

    However, what baffles me is why Tanenbaum seems to think you can't have the advantages of 1 without really having a microkernel. This is just a matter of code organization. If I want to insist that my disk system only talks to other components via a messaging API I can just do so in my code. I could even mostly do this and only break the abstraction when shared data makes a big difference.

    Ultimately though it's like arguing about OOP vs. functional or dynamic vs. static. Yup, they both have some advantages and disadvantages.

  • Re:Which one? (Score:4, Interesting)

    by logicnazi (169418) <logicnazi@gm a i l . com> on Thursday January 31, 2008 @03:16PM (#22250192) Homepage
    What's your problem? I mean, saying something is a hybrid kernel communicates what it is. No one who has a clue thinks it means they are split into separate processes or anything.

    In fact my big pet peeve is that the microkernel people don't distinguish between source-level abstractions and process separation. I mean, Tanenbaum's arguments here pretend that the better abstractions of message passing and no shared data structures are an argument for microkernels (in the sense of true process isolation), but they are only really an argument for certain abstractions in the source.

    Anyway, all kernels use some source abstractions, but presumably the reason to call some kernels 'hybrid' is that their abstractions are more robust and more thoroughly resemble the abstractions you would use in a microkernel. If you don't like the word, tell us how we should describe microkernel code that someone has stripped the process isolation from.
  • Re:crickets (Score:2, Interesting)

    by emilper (826945) on Thursday January 31, 2008 @03:31PM (#22250418)
    Compared to other supposedly more popular OSes, Linux is a pico-kernel (yes, I know size is not what "microkernel" is about). Anyway, with user-space drivers and FUSE, doesn't the border between microkernels and monolithic kernels become a little blurred?
  • by NovaX (37364) on Thursday January 31, 2008 @04:16PM (#22251292)
    GNU/HURD isn't Linux. It is an utter failure of an OS kernel and was always more about hype by the FSF - they really didn't put much effort into it. Perhaps because Stallman couldn't write it in ELisp. ;)

    Remember that Minix-3 was a fairly recent update of v2.0, which was completed in the late 80s. Minix is still a joy to work with as a programmer, but well past its time for being used as a standard OS. It's perfect for classrooms and learning kernel programming. You'd probably enjoy programming against it more than against Linux, for example. It was never intended to be extended and used in the real world, and effort was made to ensure it wasn't. The complexity needed to make it fully featured would destroy the simplicity that makes it ideal for education and student projects.
  • by anwyn (266338) on Thursday January 31, 2008 @05:16PM (#22252536)
    I have made a presentation on the Tanenbaum-Torvalds microkernel vs monolithic kernel Debate [io.com] in 2006 to the Austin Linux Group [austinlug.org].

    Basically, the microkernel is a horrible example of bondage-and-discipline [catb.org] programming. In order to solve the low-level problem of stray memory references, the professors from academia have come up with a low-level solution: using the Memory Management Unit (MMU) to prevent these errors. Unfortunately, this "solution" does high-level collateral damage. By breaking the OS into a lot of little pieces, the u-kernels introduce inefficiency. By putting constraints on how OSes are designed, u-kernels make design, coding, and debugging more difficult. All of this to do checking that, at least in theory, could have been done at design, compile, or link time.

    This error is basically caused by wishful thinking. The u-kernel advocates wish that operating systems design were less difficult. To quote Torvalds:

    So that 'microkernels are wonderful' mantra really comes from that desperate wish that the world should be simpler than it really is. It's why microkernels have obviously been very popular in academia, where you often basically cannot afford to put a commercial-quality big development team on the issue, so you absolutely require that the problem is simpler.

    So reality has nothing to do with microkernels. Exactly the reverse. The whole point of microkernels is to try to escape the reality that OS design and implementation is hard, and a lot of work. It's an appealing notion.

    Criticism of microkernels is said to be almost unknown in the academic world, where it might be a career limiting move (CLM).

    In 1992, Tanenbaum said "LINUX is obsolete" and "it is now all over but the shoutin'" and "microkernels have won". It is now 2008, and the microkernel advocates still have nothing that can compete with Linux in its own problem space. It is time for microkernel advocates to stop shouting.

  • The most fascinating thing about QNX is the message passing /thread priority / context switching rules.

    As far as I can make out they are...

    1. A thread can only send messages to higher-priority threads, not to lower-priority ones.
    2. Whereupon a context switch immediately occurs and the higher-priority thread handles the message.
    3. Higher-priority threads can only send structure-free signals ("Hey, look at me") to lower-priority threads.
    Sounds weird and restrictive, but I bet it creates a far cleaner architecture.
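    Read literally, those three rules are easy to simulate. Here is a toy Python sketch (hypothetical names, based on the rules as stated above, not on the actual QNX API): a Send() to a higher-priority thread hands control over synchronously, while a higher-priority thread may only pulse a lower-priority one:

```python
# Toy simulation of the priority rules described above (hypothetical,
# not the real QNX API): a thread may send() only to a higher-priority
# thread, which then handles the message immediately (the "context
# switch"), while higher-priority threads may only post structure-free
# pulses down to lower-priority ones.

class Thread:
    def __init__(self, name, priority, handler=None):
        self.name = name
        self.priority = priority
        self.handler = handler    # invoked synchronously on send()
        self.pulses = 0           # count of "hey, look at me" signals

    def send(self, target, msg):
        # Rule 1: messages only flow upward in priority.
        if target.priority <= self.priority:
            raise ValueError("can only send() to a higher-priority thread")
        # Rule 2: the receiver runs immediately, like a context switch.
        return target.handler(msg)

    def pulse(self, target):
        # Rule 3: downward communication is a structure-free signal.
        if target.priority >= self.priority:
            raise ValueError("pulse() is only for lower-priority threads")
        target.pulses += 1

driver = Thread("disk-driver", priority=20,
                handler=lambda msg: {"status": "ok", "echo": msg})
app = Thread("app", priority=10)

reply = app.send(driver, "read block 7")   # upward: allowed, synchronous
driver.pulse(app)                          # downward: signal only
print(reply["status"], app.pulses)  # ok 1
```

    The clean-architecture intuition drops out of rule 1: the dependency graph between threads is forced to be acyclic along the priority order, so a high-priority server can never block waiting on a low-priority client.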
  • Re:crickets (Score:2, Interesting)

    by ^switch (65845) on Thursday January 31, 2008 @08:00PM (#22255358)

    It is possible to have a fast microkernel if you completely discard the original concept of a microkernel and start over with a fresh design. L4 is quite fast for example, even if the whole Clan thing is a bit weird to me.
    Most implementations of L4 no longer use the Clans & Chiefs IPC model, which I believe you are referring to. In fact, I haven't seen an API with this IPC model for almost 10 years. L4 is being actively developed and researched by many groups -- http://www.ok-labs.com/ [ok-labs.com], http://www.ertos.nicta.com.au/ [nicta.com.au], http://l4ka.org/ [l4ka.org] -- and, as such, is a good microkernel to take a look at if you're interested in seeing what a modern microkernel does.
  • by Anonymuous Coward (1185377) on Thursday January 31, 2008 @11:49PM (#22257440)
    There was, however, a version of Linux hosted as a server on Mach (MkLinux) -- the Mach kernel being considered a 'hardware' platform of sorts as seen from the Linux kernel perspective.

    And Linux was similarly ported to the L4 microkernel.

    I don't know how Debian/Hurd manages to get all that array of kinky Linuxish apps working on Hurd, but the idea of actually running the whole of Linux as a Hurd server shouldn't be that weird.
