Tanenbaum-Torvalds Microkernel Debate Continues

twasserman writes "Andy Tanenbaum's recent article in the May 2006 issue of IEEE Computer restarted the longstanding Slashdot discussion about microkernels. He has posted a message on his website that responds to the various comments, describes numerous microkernel operating systems, including Minix3, and addresses his goal of building highly reliable, self-healing operating systems."

  • by AKAImBatman ( 238306 ) * <[moc.liamg] [ta] [namtabmiaka]> on Monday May 15, 2006 @12:27PM (#15335793) Homepage Journal
    Since I know that this story is going to turn into flame-fest central, I'm going to try to head things off by interjecting an intelligent conversation about some issues that are on my mind at the moment.

    First and foremost, does anyone have a torrent of Minix3? Tanenbaum is a bit worried [google.com] about getting slashdotted. If you've got one seeded, please share.

    Now, with that out of the way: I don't know if anyone else has tried it yet, but Minix3 is kind of neat. It's a complete OS that implements the microkernel concepts he's been expounding for years now. The upsides are that it supports POSIX standards (mostly), can run X-Windows, and is a useful development platform. Everything is very open, and still simple enough to trudge through without getting confused by the myriad of "gotchas" most OS codebases contain. Unfortunately, it's still a long way from a usable OS.

    The biggest issue is that the system lacks proper memory management. It currently uses static data segments that have to be predefined before the program is run. If the program goes over its data segment, it starts failing on mallocs. The result is that you often have to massively increase the data segment just to handle peak usage. Right now I have BASH running with a segment size of about 80 megs just so I can run configure scripts. That means that every instance of BASH takes up that much memory! There's apparently a virtual memory system in progress to solve this issue, so this is (thankfully) a temporary problem.

    The other big issue is a lack of threading support. I'm trying to compile GNU PThreads [gnu.org] to cover over this deficiency, but it's been a slow process. (It keeps failing on the mctx stack configuration. I wish I understood what that was so I wouldn't have to blindly try different settings.)

    On the other hand, the usermode servers do work as advertised. For example, the network stack occasionally crashes under VMWare. (I'm guessing it's the same memory problems I mentioned earlier.) Simply killing and restarting dhcpd actually does get the system back up and running. It's kind of neat, even though it does take some getting used to.

    All in all, I think it's a really cool project that could go places. The key thing is that it needs attention from programmers with both the desire and time to help. Tossing lame criticisms won't help the project reach that goal. So if you're looking to help out a cool operating system that's focused on stability, security, and ease of development, come check out Minix for a bit. The worst that could happen is that you'll decide that it isn't worth investing the time and energy. And who knows? With some work, Minix might turn out to be a good alternative to QNX. :-)
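
    For reference, the GNU Pth API the comment above is trying to port is quite small; a minimal usage sketch looks roughly like the following. This assumes Pth eventually builds and installs on Minix3, which is exactly the open question above, and the compile line is a guess:

        /* Minimal GNU Pth sketch -- a guess at what a first smoke test would
         * look like once Pth builds on Minix3. Compile with something like:
         *   cc pth_demo.c -lpth
         */
        #include <stdio.h>
        #include <pth.h>

        static void *worker(void *arg)
        {
            printf("hello from thread %s\n", (const char *)arg);
            return NULL;
        }

        int main(void)
        {
            if (!pth_init()) {                      /* start the Pth scheduler */
                fprintf(stderr, "pth_init failed\n");
                return 1;
            }
            pth_t tid = pth_spawn(PTH_ATTR_DEFAULT, worker, "one");
            pth_join(tid, NULL);                    /* wait for the thread */
            pth_kill();                             /* shut the scheduler down */
            return 0;
        }
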
    • Page based sockets? (Score:5, Interesting)

      by goombah99 ( 560566 ) on Monday May 15, 2006 @12:41PM (#15335905)
      It seems to me the whole issue boils down to memory isolation. If you always have to pass messages to communicate, you have good isolation but costly synchronization of data/state, and hence potential performance hits. And vice versa: Linux is prone to instability and security breaches from every non-isolated portion of it.

      As I understand it, as a novice, the only way to communicate or synchronize data is via copies of data passed via something analogous to a socket. A socket is a serial interface. If you think about this for a moment, you realize it could be thought of as one byte of shared memory. Thus a copy operation is in effect the iteration of this one byte over the data to share. At any one moment you can only synchronize that one byte.

      But this suggests its own solution. Why not share pages of memory in parallel between processes? This is short of full access to all of the state of another process, but it would allow locking and synchronization of entire system states and the rapid passing of data without copies.

      Then it would seem like the isolation of microkernels would be fully gained without the complications that arise in multiprocessing, or compartmentalization.

      Or is there a bigger picture I'm missing?
      • As I read this, it seems quite analogous to objects in C++ (or any other OOL). All kernel interfaces could publish the data they want to have public, and hide the data that is private to the implementation of the feature.

        I would suggest that this will eventually make its way into kernel systems (just like any other good idea that has come from the programming language fields).
      • by nuzak ( 959558 ) on Monday May 15, 2006 @01:02PM (#15336091) Journal
        > But this suggests its own solution. Why not share pages of memory in parallel between processes?

        This is precisely what shared memory is, and it's used all over the place, in Unix and Windows both. When using it, you are of course back to shared data structures and all of the synchronization nastiness, but a) sometimes it's worth paying the complexity price, and b) sometimes it doesn't actually matter if concurrent access corrupts the data if something else is going to correct it (think packet collisions).

        Still, if you have two processes that both legitimately need to read and write the same data, you probably need three processes. The communication overhead with the third process is usually pretty negligible.

        There are even more exotic concurrency mechanisms that don't require copying or even explicit synchronization, but they're usually functional in nature, and incompatible with the side-effectful state machines of most OSes and applications in existence today.
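
        To make the "shared pages as a wide channel" idea above concrete, here is a minimal POSIX shared-memory sketch; the segment name /mk_demo and the one-page size are made up for illustration, and on Linux this traditionally links with -lrt:

            /* Sketch: two cooperating processes can open "/mk_demo" and see
             * the same page of memory -- the whole page is the channel, with
             * no per-byte copying through a socket. Nothing here orders reads
             * against writes; that is the synchronization cost noted above.
             */
            #include <fcntl.h>
            #include <stdio.h>
            #include <string.h>
            #include <sys/mman.h>
            #include <unistd.h>

            int main(void)
            {
                int fd = shm_open("/mk_demo", O_CREAT | O_RDWR, 0600);
                if (fd < 0) { perror("shm_open"); return 1; }
                if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

                char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
                if (page == MAP_FAILED) { perror("mmap"); return 1; }

                strcpy(page, "hello from the writer");  /* visible to the reader */

                munmap(page, 4096);
                close(fd);
                /* shm_unlink("/mk_demo") would remove the object when finished. */
                return 0;
            }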

        • Perhaps I'm mistaken, but isn't shared memory essentially available to the entire macro-kernel and all its processes? Something more fine-grained like a page-based socket would let two processes agree to communicate. They would be sending messages to each other over a very wide channel: the entire page, not some serial socket.

          Some other process could not butt in on this channel, however, since it's not registered to that socket.

          Or is that how shared memory works?

          Tanenbaum's point is that he can have a re-i
          • by nuzak ( 959558 )
            > Perhaps I'm mistaken, but isn't shared memory essentially available to the entire macro-kernel and all its processes?

            The kernel is the arbiter of shared memory, sure, because that's how it works: by futzing with the VM mappings of the processes using it. It's not available to every process in the system, though; a process still has to ask the kernel for access.

            But "communication" over shared memory is exactly how it works -- the size of the channel is the size of the entire shm segment. You write as much data a
            • by jelle ( 14827 )
              "99% of the time when using shm, you use a semaphore, another SysV IPC primitive."

              This is my opinion, but I had to say it: I personally don't like SysV. There are various ways to synchronize, and each method has advantages and disadvantages, but SysV is at the bottom of the pack if you ask me.

              process-shared pthread mutexes and conditions are much faster than SysV, because they usually don't make a system call. A disadvantage of SysV IPC that process-shared pthread mutexes have too is demonstrated by the
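
              A minimal sketch of what the comment above describes: a process-shared pthread mutex placed in memory that two processes share (here an anonymous MAP_SHARED mapping inherited across fork(), which assumes Linux/BSD-style MAP_ANONYMOUS). The uncontended lock/unlock path normally stays in user space, unlike a SysV semaphore operation:

                /* Process-shared pthread mutex in shared memory.
                 * Compile with: cc pshared.c -lpthread
                 */
                #include <pthread.h>
                #include <stdio.h>
                #include <sys/mman.h>
                #include <sys/wait.h>
                #include <unistd.h>

                struct shared { pthread_mutex_t lock; int counter; };

                int main(void)
                {
                    struct shared *s = mmap(NULL, sizeof *s,
                                            PROT_READ | PROT_WRITE,
                                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
                    if (s == MAP_FAILED) { perror("mmap"); return 1; }

                    pthread_mutexattr_t attr;
                    pthread_mutexattr_init(&attr);
                    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
                    pthread_mutex_init(&s->lock, &attr);

                    if (fork() == 0) {              /* child increments too */
                        pthread_mutex_lock(&s->lock);
                        s->counter++;
                        pthread_mutex_unlock(&s->lock);
                        _exit(0);
                    }
                    pthread_mutex_lock(&s->lock);
                    s->counter++;
                    pthread_mutex_unlock(&s->lock);

                    wait(NULL);
                    printf("counter = %d\n", s->counter);   /* prints 2 */
                    return 0;
                }
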
      • If it's shared memory that I pass to the kernel, and I have an application that doesn't parallelize well, then I'm sending pages off to the kernel quite often. That means the kernel uses a lot of memory.

        There are two options: the kernel could combine pages, either physically or logically; or the kernel could leave that page open for writing by the application in question.

        The easiest way would be for each application to have a page or set of pages for sending messages to the kernel; the application would vie
      • I think it's because typically the "message" which is meaningful to most applications is of larger granularity than a single byte, so it makes sense to instead synchronize around that message. Also, you want to ensure that the semantics of your API and your synchronization match. It makes no sense to preserve integrity without preserving semantics. The best way to do that is to either explicitly make a copy, or to "lease" the structure until such time as you are notified that all necessary work has been d
      • The Linux v Tanenbaum debate reminds me of the debates I hear at work between those who understand that software is a commercial kludge between conflicting/changing requirements, the limitations of time and abilities of support engineers etc., and those who want to do everything "right", often blowing development budgets, and producing unusable, over-optimised, hard-to-maintain code. I exaggerate the distinction for effect, of course. Linux has built a system, it works and it's used everywhere. Microkernel
        • Linux has built a system, it works and it's used everywhere. Microkernels are all niche [...]

          One of Tanenbaum's central points is that Linux is not used everywhere. In particular, it's not used anywhere that hard-real-timeness, seriously paranoid robustness (e.g. in those applications where a hardware failure should not result in a reboot) etc are important.

          The word "niche" is, much like "legacy", often used in places where a more overt dismissal would rightly be seen as unfair. The fact that Linux c

    • From what I've seen of this "debate", it's all about what each group believes is (are) the most important aspect(s) of the kernel.

      Oblig auto analogy:
      If hauling cargo is your primary objective, then you'll probably view motorcycles as badly designed while seeing vans and trucks as "better".

      Only time (and code) will show which approach will result in all of the benefits of the other approach without any unacceptable deficiencies.
      • by JPribe ( 946570 ) <jpribe&pribe,net> on Monday May 15, 2006 @12:58PM (#15336046) Homepage
        You're on to something...you are very close to the cache. Why are we "debating" this when the answer seems very clear once one takes a step back: they (the kernels) can exist in harmony, each in its own place. Tanenbaum makes a decent showing of examples about where and why micros are used. This isn't a "which is better" argument. This should be a "where is one better utilized than the other in situation X" debate. That flamewar I could tolerate. The bottom line is that neither will replace the other, at least not in a timely enough manner that it is worth wasting time over now.
    • by dr_dank ( 472072 ) on Monday May 15, 2006 @12:53PM (#15336009) Homepage Journal
      Since I know that this story is going to turn into flame-fest central

      Damn right, this'll be better than the less filling/tastes great argument.
      • by rcamans ( 252182 ) on Monday May 15, 2006 @01:02PM (#15336086)
        Whoa, there, good buddy. Actually I have seen some pretty entertaining videos of less filling / tastes great cat fights on the internet lately. Now, if someone wants to post videos of supermodels catfighting over microkernel / Linus, I would then get pretty excited over the whole debate.
        Wait a minute, too much information here...

    • Try doing what I do with Minix3: run it in VMWare, allocate it 4GB of RAM, and let VMWare do your virtual memory management.

      (Yes, I know it's an ugly hack. But it means I don't worry about giving Bash 120mb, and cc some enormous number...)
  • by robla ( 4860 ) * on Monday May 15, 2006 @12:27PM (#15335800) Homepage Journal

    Tanenbaum wrote (in TFA): The average user does not care about even more features or squeezing the last drop of performance out of the hardware, but cares a lot about having the computer work flawlessly 100% of the time and never crashing. Ask your grandma.

    Interesting. My mom recently bought a computer for my grandma. Grandma doesn't have a problem with the computer crashing at all. Her secret? She never turns it on.

  • by JPribe ( 946570 ) <jpribe&pribe,net> on Monday May 15, 2006 @12:30PM (#15335820) Homepage
    When did we collectively forget that everything has its place...I doubt I'll ever see anything but a monolithic kernel on my desktops. No different than any given OS having its place. Windows and Ubuntu (until something better) will live on my desktops, not on my server. Why can't we just all get along?
    • by Tiro ( 19535 ) on Monday May 15, 2006 @01:40PM (#15336442) Journal
      I doubt I'll ever see anything but a monolithic kernel on my desktops.
      Do you realize that Mac OS X doesn't have a monolithic kernel?
    • by monkeyGrease ( 806424 ) on Monday May 15, 2006 @03:21PM (#15337376)
      > Why can't we just all get along?

      Have you read the article? Tanenbaum basically starts out by saying this is not a 'fight', but a technical discussion. Communication and debate are an important part of research and development. That's what is being attempted here, at least at face value by Tanenbaum. There may be antagonism behind the scenes, or bias in presentation, but that is just human. The primary intent is to advance the state of the art, not to fight.

      All this 'what's the point' or 'we have this now' type of talk really bugs me. Everything can always be improved, or at least that is the attitude I'd like to stick with.

      > When did we collectively forget that everything has its place

      Another key component of research and development is to question everything. Not throw everything away and always start over, but at least question it. Just because monolithic kernels rule the desktop now does not prove that monolithic kernels are inherently the best desktop solution.

      In effect it is sometimes good to not even recognize a notion of 'everything has its place'.
  • If the Security-Enhanced Linux (SELinux) policy model and BSD's nearly error-free coding were merged into Minix3, along with POSIX capability support, then we'd have a good start.

    I'd be glad to give up all my Windows platforms for one über-secured OS.
  • by Anonymous Coward on Monday May 15, 2006 @12:44PM (#15335929)
    I'd like to point out that Minix is already FAR FAR *FAR* ahead of Linux in the version numbering war. Minix recently moved to version 3
    And Linux seems to be stuck on version 2.6

    And v3.12 (I think, I'm going from memory here) will finally support the X windowing system

    Oh...maybe I should have left out that last sentence...kinda kills my argument
    • Minix recently moved to version 3
      And Linux seems to be stuck on version 2.6


      HAH! Windows was on version 3 SO MANY YEARS AGO. Eat your heart out, Linux!
    • And v3.12 (I think, I'm going from memory here) will finally support the X windowing system

      That's odd. I could have sworn that I was just using an X-Terminal on it a few minutes ago.

      Oh wait. I was using an X-Terminal. How in the world did that happen? </mock-sarcasm>

      To be fair, getting X-Windows running is a recent development. On the other hand, the entire Minix3 codebase is a recent development. (Only a half-year old.) They're moving at a pretty good clip for a brand-new OS. :-)
  • by Anonymous Coward on Monday May 15, 2006 @12:44PM (#15335935)
    "TVs don't have reset buttons. Stereos don't have reset buttons. Cars don't have reset buttons."

    They may not be labeled "reset" but they *do* have them. And, no offense, but I like having a reset button.
  • Whatever... (Score:3, Interesting)

    by MoxFulder ( 159829 ) on Monday May 15, 2006 @12:49PM (#15335977) Homepage
    Linux is very reliable for me, even on newer hardware with a bleeding edge kernel. Why should I care whether it has a microkernel or monolithic kernel? Everything I deal with is user space. If it runs GNOME, is POSIX-like, and supports some kind of automatic package management, I'll be happy as a clam.

    Will hardware drivers be developed faster and more reliably with a microkernel? That seems to be the biggest hurdle in reliable OS development these days... Anyone have a good answer for that? I honestly don't know.
    • Why should I care whether it has a microkernel or monolithic kernel?

      Because with a microkernel, we could have proprietary drivers *cough* ATI, nVidia *cough* without having to worry about the driver messing up the system.
    • Re:Whatever... (Score:3, Interesting)

      by einhverfr ( 238914 )
      Linux as a kernel is so reliable for me that the only times I have to use the reset button are when hardware malfunctions (usually something that a microkernel can't help with, like RAM, CPU, or the video card, though in the latter case I tend to just leave the computer running and ssh in from elsewhere...).

      I noticed two things about Tanenbaum's piece though. Essentially all of the microkernels he listed were either used in dedicated (including embedded) systems or were not true microkernels by his own ad
  • Minix 3 screenshots (Score:5, Informative)

    by mustafap ( 452510 ) on Monday May 15, 2006 @12:53PM (#15336002) Homepage

    I almost died of boredom looking for them. Here's the link, for the lazy:

    http://www.minix3.org/doc/screenies.html [minix3.org]
  • is if we can make a functional distro (i.e. Ubuntu) on top of Minix 3. Is it possible? What must be changed?
    • Yes it is, and I think it is a very good idea.

      Minix will need some more features though, my guess is paging and threading are the major sticking points. Probably more system calls too but VM and threading are more work.

      Being able to 'leverage' the enormous amount of existing software once Minix matures a bit would let Minix 'leapfrog' its 'competition'.

      Disclaimer: I am involved with the Minix project.

  • by Anonymous Coward on Monday May 15, 2006 @01:13PM (#15336196)
    Hello everybody out there using minix -

    I'm doing a (free) operating system (just a hobby, won't be big and
    professional like gnu) for 386(486) AT clones. This has been brewing
    since april, and is starting to get ready. I'd like any feedback on
    things people like/dislike in minix, as my OS resembles it somewhat
    (same physical layout of the file-system (due to practical reasons)
    among other things).

    I've currently ported bash and gcc, and things seem to work.
    This implies that I'll get something practical within a few months, and
    I'd like to know what features most people would want. Any suggestions
    are welcome, but I won't promise I'll implement them :-)
  • by AcidPenguin9873 ( 911493 ) on Monday May 15, 2006 @01:14PM (#15336209)
    "dont forget that Linux became only possible because 20 years of OS research was carefully studied, analyzed, discussed and thrown away."

    http://www.ussg.iu.edu/hypermail/linux/kernel/9906.0/0746.html [iu.edu]

    He is, of course, referring to all the research in the '80s and '90s on microkernels and IPC-based operating systems.

  • by dpbsmith ( 263124 ) on Monday May 15, 2006 @01:15PM (#15336217) Homepage
    I have never experienced the "stalling" problem that affected a very small number of 2004 and 2005 Priuses last year. (OK, hubris correction, make that "not yet..." although my car's VIN is outside the range of VINs supposedly affected).

    It was apparently due to a firmware bug.

    In any case, when it happened, according to personal reports in Prius forums from owners to whom it happened, the result was loss of internal-combustion-engine power, meaning they had about a mile of electric-powered travel to get to a safe stopping location. At that point, if you reset the computer by cycling the "power" button three times, most of the warning lights would go off, and the car would be fine again. Of course many to whom this happened didn't know the three-push trick... and those to whom it did happen usually elected to drive to the nearest Toyota dealer for a "TSB" ("technical service bulletin" = firmware patch).

    These days, conventional-technology cars have a lot of firmware in them, and I'll bet they have a "reset" function available, even if it's not on the dashboard and visible to the driver.
    • ... they are an exception to the "normal" car he was referring to.

      And even if you lumped them in with cars, so, you have what, a few hundred Priuses that have reset buttons, among the hundreds of millions of cars. And every computer in existence still has a reset button, and at some point in time that reset button has been exercised.
  • by StevenMaurer ( 115071 ) on Monday May 15, 2006 @01:20PM (#15336264) Homepage
    ...so I can't spend a lot of time discussing this, but I always thought that the main benefit of micro-kernels is completely wasted unless you actually have utilities that can work in partially-functioning environments. What good is it to be able to continue to run a kernel even with your SCSI drive disabled, if all your software to fix the problem is on the SCSI drive?

    Now in theory I could see a high-availability microkernel being a good, less expensive alternative to a classic mainframe environment, especially if you had a well-written auto-healing system built in as a default. But that would require a lot of work outside the kernel that just isn't being done right now. And until it is, micro-kernels don't have anything more to offer than monolithic kernels.

    To put it in API terms - it doesn't matter very much whether your library correctly returns an error code for every possible circumstance, when most user level code doesn't bother to check it (or just exits immediately on even addressable errors).

    • It's a chicken-and-egg situation. Until the underlying mechanisms needed for self-healing are there, we won't get self-healing systems. Until the user-space code for self-healing is there, nobody thinks it's worthwhile to support self-healing mechanisms. Thankfully a few folks realize that if they build it, people will come.

      Also, your API metaphor is a little bad. While you're right about the end result, saying that this invalidates the utility of the API is wrong imho. The advantage of having the API remains, be
  • by nonmaskable ( 452595 ) on Monday May 15, 2006 @01:27PM (#15336328)
    Tanenbaum as always makes a good conceptual case for his perspective, but as time has gone by his examples increasingly prove Linus' point.

    Except for QNX, the software he cites is either vaporware (Coyotos, HURD), esoteric research toys (L4Linux, Singularity), or brutally violates the microkernel concept (MacOSX, Symbian).

    Even his best example, QNX, is a very niche product and hard to compare to something like Linux.

    • by Miniluv ( 165290 ) on Monday May 15, 2006 @01:46PM (#15336498) Homepage
      Uhm, I'm pretty sure niche doesn't mean exceptionally widely deployed.

      QNX is everywhere, you just don't realize it. ATMs run it, lots of medical equipment runs it, lots of other embedded apps that you don't even think of run it.

      The examples Andy cites prove that in fact the microkernel concept has won in every single field where stability has gone beyond being something people wanted to something they demand. As soon as the general public realizes computers don't HAVE to crash, they'll win there too.
  • A CPU like Kernel (Score:3, Interesting)

    by Twillerror ( 536681 ) on Monday May 15, 2006 @01:33PM (#15336370) Homepage Journal
    All of these ideas are old, and while high-performing, they don't address the largest issue of all: cross-kernel compatibility.

    Sure you can recompile and all that jazz, but I'd love to see a day where an app could run on any number of kernels out there. This creates real competition.

    What I'd like to see is a kernel more like a CPU. Instead of linking your kernel calls, you place them as if you were placing an assembly call. Then we can have many companies and open-source organizations writing versions of it.

    As we move towards multi-core CPUs this could really lead to performance gains, where one or more of many cores could be dedicated to kernel operations, listening for requests and taking care of them. No context switches needed, no privilege mode switching.

    Drivers and everything else run outside of kernel mode and use low level microcode to execute the code.

    The best part, I think, is that you could make it backward compatible as we rewrite. A layer could handle old kernel calls and change them to the microcodes.

    As we define everything more and more, we might even be able to design CPUs that can handle it better.

  • by David Off ( 101038 ) on Monday May 15, 2006 @01:43PM (#15336470) Homepage
    > Virtually all of these postings have come from people who don't have a clue what a microkernel is or what one can do.

    Okay, I spent 2 years working as an engineer in the OSF's Research Institute developing Mach 3.0 from 1991. Let me answer Linus's question in a simple fashion. What Mach 3.0 bought you over Mach 2.5 or Mach 2.0 was a 12% performance hit, as every call to the OS had to make a User Space -> Kernel -> User Space hit. This was true on x86, Moto and any other processor architecture available to us at the time. Not one of our customers found this an acceptable price to pay, and I very much doubt they would today. One of the reasons Microsoft moved a lot of functionality into the kernel between NT 3.5 and NT 4.0 was performance (NT being, at its origins, a uK-based OS).

    What of the advantages?

    Is porting easier? No, not really; the machine-dependent code in Mach 2.5 and Mach 3.0 was already well abstracted.

    You could run two OS personalities at once, for example you could have an Apple OS and Unix running at the same time. But why would any real world clients want to do this?

    Problems in the OS personality wouldn't bring down the uKernel - but they might stop you doing any useful work while you reboot the OS personality.

    Other things like distributed operating systems (and associated fault tolerance) were perhaps aided by the uK design, and this is a path that, in my humble opinion, the OSF should have pursued with greater zeal than they did. Back in 1991 we had a Mach 3.0 based system that ran a uK across an array of x86 nodes but had different parts of the OS - say IO or memory management - running on different nodes. From a user standpoint all the machines (in reality bog-standard 386 machines linked by FDDI) looked like a single computer running a Unix-like OS.

    I remember discussing Linux with my colleagues back in 1993; some were impressed and thought the nascent OS model was very powerful, others dismissed it as a toy with no real future. I suspect Tanenbaum was also amongst the pooh-poohers and has become pretty annoyed about how things have turned out.
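
    The 12% figure above comes from the extra user-space/kernel crossings on every OS call. A crude way to get a feel for the cost of a single crossing is to time a trivial system call in a tight loop; the sketch below is only illustrative and is not how the OSF measurements were made (and note that some C libraries cache getpid(), which would defeat it):

        /* Rough sketch: estimate the round-trip cost of a trivial system call. */
        #include <stdio.h>
        #include <sys/time.h>
        #include <unistd.h>

        int main(void)
        {
            const long iters = 1000000;
            struct timeval t0, t1;

            gettimeofday(&t0, NULL);
            for (long i = 0; i < iters; i++)
                (void)getpid();          /* ideally one kernel entry/exit per call */
            gettimeofday(&t1, NULL);

            double usec = (t1.tv_sec - t0.tv_sec) * 1e6
                        + (t1.tv_usec - t0.tv_usec);
            printf("%.3f microseconds per call\n", usec / iters);
            return 0;
        }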

  • Oh Tanenbaum, oh Tanenbaum, wie grün sind deine Blätter
    Du grünst nicht nur zur Sommerszeit, nein auch im Winter, wenn es schneit.
    Oh Tanenbaum, oh Tanenbaum, wie grün sind deine Blätter

    For the uninformed: Tannenbaum (with double n) is the German word for fir (a conifer), or by extension Christmas tree. The verse above is the first verse of a famous German Christmas carol; roughly: "O fir tree, o fir tree, how green are your leaves; you are green not only in summertime, but also in winter, when it snows." :-)
  • 'Way back when we read the first rev of this discussion, Tanenbaum made good points. At the same time, Linus was able to make his little monolithic kernel project jump through the hoops he wanted it to.

    Years later, Tanenbaum still makes valid observations, Linus and others continue to make a rather larger project jump through the hoops, and that's fine. The results of academic research may or may not get traction outside of a university, but without the research, there wouldn't be alternatives to contemplate. If I've gathered nothing else about Linus' personality from his writings over the years, it's that he seems to be practical, not particularly hung up on architectural (or licensing) theories... unlike me.

    At some point, if his current architecture just isn't doing it for him any more, he might morph into Tanenbaum's 'A' student. It won't be because a microkernel was always right, but that it was right now.

  • by Animats ( 122034 ) on Monday May 15, 2006 @03:06PM (#15337189) Homepage
    The real truth about microkernels is about like this:

    • Getting the architecture of a microkernel right is really hard. There are some very subtle issues in how interprocess communication interacts with scheduling. If these are botched, performance under load will be terrible. QNX got the performance part right. Mach got it wrong. Early Minix didn't address this issue. See this article in Wikipedia [wikipedia.org]. Other big issues include the mechanism by which one process finds another, and how mutually mistrustful processes interact. If you botch the basic design decisions, your microkernel will suck. Guaranteed.
    • Most academic attempts at microkernels have been duds. One can argue over why, but it's the commercial ones, like QNX, VM, and KeyKos that work well, while the academic ones, like Mach, EROS, and the Hurd have been disappointing.
    • Security models really matter. And they're hard. Multics got this right. KeyKos got this right. QNX is no better than UNIX in this area. Designers must work through "A can't do X, but A can trick B into doing X" issues.
    • Trying to turn a monolithic kernel into a microkernel doesn't work well. Mach, which started life as BSD UNIX, ran into this problem, which is why MacOS X isn't based on the microkernel version of Mach.
    • Drivers in user space have real advantages. Not only is the protection and restartability better, but because they have access to all the regular user program facilities, drivers for more modern devices are much easier. Things like Firewire and USB device discovery and hot-plugging reconfiguration are far easier at the user level, where you have threads, can block, and can call other programs. The old "top half and bottom half" driver approach doesn't generalize well to today's more dynamic configurations. Monolithic kernels have had to add kernel threads and dynamic loading of modules to handle all this, resulting in kernel bloat. Of course, a big advantage of less-privileged drivers is blame management - you can tell whether the OS or the driver is at fault.
    • Startup requires more attention. A microkernel often doesn't contain the drivers needed to get itself started. So the startup and booting process is more complex. QNX has a boot loader which loads the kernel and any desired set of programs as part of the boot image. This gets the console driver and disk driver in at startup, without having to make them part of the kernel.
    • The performance penalty is real, but not that big. There's a performance penalty associated with the extra interprocess communication. It's usually not that big, but there are areas where it's a problem. If it takes one interprocess call for each graphics operation, for example, performance will be terrible. NT 3.51 had a nice solution to this problem, designed by Dave Cutler. (NT 4 and later have a more monolithic kernel, but that had to do more with making NT bug-compatible with Windows 95 than with performance problems.)
    • I/O channels would help. IBM mainframe channels, which have an MMU between the peripheral and main memory, are better suited to a microkernel architecture than the "bus" model the microcomputer world uses. In the mainframe world, the kernel can put a program in direct communication with the hardware without giving it the ability to write all over memory. So there's little penalty for having drivers in user space. Which is why VM for IBM mainframes has been doing this for decades.
    • If you get it right, the kernel doesn't change much over time. This is the big win, and why microkernels become more stable over time. In the QNX world, USB and Firewire support were added with no changes to the kernel. (I wrote a FireWire camera driver for QNX, so I'm sure of this.) The IBM VM kernel has changed little in decades.

    So that's what you need to know about microkernels.
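
    Several of the points above (user-space drivers, restartability, blame management) come down to the driver being an ordinary supervised process reached over an IPC channel. A toy sketch of that pattern follows; the one-byte request/reply "protocol" is made up purely for illustration, and a real system (QNX, or Minix3's reincarnation server) is far more involved:

        /* Toy "driver as a restartable user-space server": a supervisor forks a
         * worker, talks to it over a socketpair, and restarts it if it dies.
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/socket.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        static void run_driver(int fd)              /* the "driver" process */
        {
            char req;
            while (read(fd, &req, 1) == 1) {        /* receive a request */
                char reply = req + 1;               /* "handle" it */
                if (write(fd, &reply, 1) != 1)
                    break;
            }
            _exit(0);
        }

        static pid_t spawn_driver(int *chan)
        {
            int sv[2];
            if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
                perror("socketpair");
                exit(1);
            }
            pid_t pid = fork();
            if (pid == 0) { close(sv[0]); run_driver(sv[1]); }
            close(sv[1]);
            *chan = sv[0];
            return pid;
        }

        int main(void)
        {
            int chan;
            pid_t drv = spawn_driver(&chan);

            for (int i = 0; i < 5; i++) {
                char req = 'a', reply;
                if (write(chan, &req, 1) != 1 || read(chan, &reply, 1) != 1) {
                    /* Driver died or closed the channel: reap and restart it. */
                    waitpid(drv, NULL, 0);
                    close(chan);
                    drv = spawn_driver(&chan);
                    continue;
                }
                printf("got reply '%c'\n", reply);
            }

            close(chan);                 /* driver sees EOF and exits cleanly */
            waitpid(drv, NULL, 0);
            return 0;
        }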

    • I/O channels would help. IBM mainframe channels, which have an MMU between the peripheral and main memory...

      I've heard from a friend at Intel that their new chipsets which fully support TCPA have this feature. So maybe trusted computing isn't just about copy prevention.

  • by microbee ( 682094 ) on Monday May 15, 2006 @03:14PM (#15337289)
    Before we get into arguments or understanding arguments, the two most important things to note:
    - AST is a professor. His interest is in doing research and building the best systems for the *future* that he believes in.
    - Linus is an engineer. His interest is building a system that works best *today*.

    We simply need both. Without the pioneering work done before in other OSes (including the failures), Linux wouldn't be like it is today. The greatest reason for its success is not that it's doing something cool, but that it's doing things that are proven to work.

    So who is right? I'd say both. Linus has said this in 1992: "Linux wins heavily on points of being available now."

    Linus admits microkernels are "cooler", but he didn't (and doesn't) believe in them *today* because none of the available microkernels could compete with Linux as a *general purpose* OS. It's funny how AST listed "Hurd" as one of the microkernels - it totally defeats his own arguments. The fact is Hurd is still not available today despite being started before Linux.

    Many people talk about QNX. Sure, in many cases (especially mission-critical, RTOS, where reliability is so much more important than performance and usability) microkernels are better, but we really shouldn't compare a general-purpose OS with a real-time or special-purpose OS.

    So we go back to the old way: code talks. So far microkernel proponents keep saying "it's possible to make a microkernel fast, etc." but the fact is they have never had an OS that could replace Linux and the other popular OSes that everybody could run on their desktop with enough functionality. There are two possible reasons:
    1. Lack of developers. But why? Do people tend to contribute to Linux because Linus is more handsome (than Richard Stallman, that is)? There have got to be some reasons behind it other than opportunity, right?
    2. Monolithic kernels are actually more engineerable than microkernels, at least for today.

    Maybe 2 is actually the real reason?

    Think about it.
  • by jjohnson ( 62583 ) on Monday May 15, 2006 @04:38PM (#15338112) Homepage
    Linux as a kernel is sufficiently mature that the problems Linus is spending time on are management and scalability problems--organizing the large-scale kernel hacking effort and dealing with massively parallel processing.

    I'd like to see Linus say "I've done a monolithic kernel and proven its success. Now I'm going to build a performant microkernel and see what all the fuss is about." He could hand over Linux kernel development to the senior crew that's already taking care of the major modules, and try something else.

    Essentially, it would be cool for someone like Linus, with his incredibly strong practical engineering bent, to do again what he did with Linux: semi-clean-sheet a new kernel that meets his performance requirements, but is designed around different strategies for achieving what every OS tries to achieve.

    I bet that, in two or three years, he would recant his earlier dismissal of microkernels and say that there's actually some interesting stuff there, and along the way solve some of the perennial complaints that slashdotters always bring up whenever microkernels are mentioned. In his heart of hearts, I'm sure Linus has some legacy issues with the current kernel design that he'd love to jettison, but can't without massively re-organizing the existing architecture, in which too many interested parties are already involved.

    And he could put Stallman and the HURD boys to shame *again*, which is a twofer :)
  • by jonniesmokes ( 323978 ) on Monday May 15, 2006 @05:03PM (#15338380)
    More important than micro/macro to me would be the ability to keep the system running and edit the system. I used to do that with Scheme back in my college days. It made me realize how something like the telephone system could keep running 24/7 and never go down. These days with MS Windows I gotta reboot every 30 days, and the same with these fscking Linux kernel updates. What if I don't ever want to reboot? I think a microkernel/interpreter would let you modify the running system a lot more easily. You could even make incremental changes and then check to make sure they work - preserving the old code so a rollback would be simple.

    The point that Andy makes which I agree on, is that computer software is still in its infancy. The part I disagree with is that it'll change by him stating the obvious.
  • Mirror set up (Score:3, Informative)

    by MrPerfekt ( 414248 ) on Monday May 15, 2006 @06:01PM (#15338722) Homepage Journal
    I placed the IDE files on our mirrors server here at Easynews...

    http://mirrors.easynews.com/minix3 [easynews.com]
  • by POds ( 241854 ) on Tuesday May 16, 2006 @06:42AM (#15341189) Homepage Journal
    Has anyone thought of the fact that this very conversation may go down in the history of computer science? In 30 years, more or less, lecturers will be telling their students about this argument! We're witnessing a more interesting slice of history than our normal mundane daily lives :)
  • by ajs318 ( 655362 ) <.sd_resp2. .at. .earthshod.co.uk.> on Tuesday May 16, 2006 @07:51AM (#15341491)
    There is a simple reason why microkernels do not work in practice: the abstraction layer is in the wrong place.

    <simplification>A hardware driver doing output has to take raw bytes from a process, which is treating the device as though it were an ideal device; and pass them, usually together with a lot more information, to the actual device. A driver doing input has to supply instructions to and read raw data from the device, distil down the data and output it as though it came from an ideal device.</simplification>

    In general, the data pathway between the driver and the process {which we'll call the software-side} is less heavily used than the data pathway between the driver and the device {which we'll call the hardware-side}.

    <simplification>In a conventional monolithic kernel {classic BSD}, a hybrid kernel {Windows NT} or a modular kernel {Linux or Netware}, device drivers exist entirely in kernel space. The device driver process communicates with the userland process which wants to talk to the device and with the device itself. All the required munging is done within the kernel process. In a microkernel architecture, device drivers exist mainly in user space {though there is necessarily a kernel component, since userland processes are not allowed to talk to devices directly}. The device driver process communicates with the ordinary userland process which wants to talk to the device, and a much simpler kernel space process which just puts raw data and commands, fed to it by the user space driver, on the appropriate bus.</simplification>

    Ignore for a moment the fact that under a microkernel, some process pretending to be a user space device driver could effectively access hardware almost directly, as though it were a kernel space process. What's more relevant is that in a microkernel architecture, the heavily-used hardware-side path crosses the boundary between user space and kernel space.

    And it gets worse.

    <simplification>In a modular kernel, a device driver module has to be loaded the first time some process wants to talk to the device. {Anyone remember the way Betamax VCRs used to leave the tape in the cassette till the first time the user pressed PLAY? Forget the analogy then} which obviously takes some time. The software-side communications channel is established, which takes some time. Then communication takes place. The driver stays loaded until the user wants it removed. Then the communication channel is filled in and the memory used by the module is freed, which obviously takes some time.

    In a microkernel architecture, a user space device driver has to be loaded every time some process wants to talk to the device. The software and hardware side communications channels have to be established, which take some time. Then communication begins in earnest. When that particular process has finished with the device, both channels are filled, and the memory used by the driver is freed; which takes time. Between this hardware access and the next, another process may have taken over the space freed up by the driver, which means that reloading the user space driver will take time.</simplification>

    It makes good practical sense to put fences in the place where the smallest amount of data passes through them, because the overheads involved in talking over a fence do add up. That, however, may not necessarily be the most "beautiful" arrangement, if your idea of beauty is to keep as little as possible on one side the fence. It also makes sense for device drivers which are going to be used several times to stay in memory, not be continuously loaded and unloaded. {Admittedly, that's really a memory management issue, but no known memory manager can predict the future.}

    Ultimately it's just a question of high heels vs. hiking boots.
