Microkernel: The Comeback? 722

bariswheel writes "In a paper co-authored by the Microkernel Maestro Andrew Tanenbaum, the fragility of modern kernels is addressed: "Current operating systems have two characteristics that make them unreliable and insecure: They are huge and they have very poor fault isolation. The Linux kernel has more than 2.5 million lines of code; the Windows XP kernel is more than twice as large." Consider this analogy: "Modern ships have multiple compartments within the hull; if one compartment springs a leak, only that one is flooded, not the entire hull. Current operating systems are like ships before compartmentalization was invented: Every leak can sink the ship." Clearly one argument here is that security and reliability have surpassed performance in terms of priorities. Let's see if our good friend Linus chimes in here; hopefully we'll have ourselves another friendly conversation."
  • Eh hem. (Score:4, Insightful)

    by suso ( 153703 ) * on Monday May 08, 2006 @09:09AM (#15284896) Journal
    Current operating systems are like ships before compartmentalization was invented

    Isn't SELinux kinda like compartmentalization of the OS?
    • Re:Eh hem. (Score:3, Informative)

      by Anonymous Coward
      SELinux provides security models to compartmentalize your programs and applications and such. This is a completely different beast than compartmentalizing the individual parts of the kernel itself. Loadable modules were kind of a primitive step in the direction of a microkernel, but still a long way off from a technical standpoint.
    • Re:Eh hem. (Score:5, Funny)

      by Ohreally_factor ( 593551 ) on Monday May 08, 2006 @09:45AM (#15285155) Journal
      Ship analogies are confusing and a tool of the devil.

      Could someone put this into an easy-to-understand car analogy, like the good Lord intended?
    • Re:Eh hem. (Score:4, Informative)

      by Kjella ( 173770 ) on Monday May 08, 2006 @10:44AM (#15285580) Homepage
      Isn't SELinux kinda like compartmentalization of the OS?

      No, it's compartmentalization of the applications. Besides, the analogy is really bad, because a ship with a blown compartment is still quite useful. A computer with a blown network driver will, for example, break any network connections in progress; in other words, a massive failure. What about a hard disk controller which crashes while data is being written? Drivers should not crash, period. Trying to build a system that can survive driver failure will just lead to kernel bloat from recovery code.
  • by ZombieRoboNinja ( 905329 ) on Monday May 08, 2006 @09:10AM (#15284902)
    ...I got nothing.
  • by Random Destruction ( 866027 ) on Monday May 08, 2006 @09:12AM (#15284913)
    So this microkernel is the unsinkable kernel?
    FULL SPEED AHEAD!
    • May I point out that the Titanic was divided into compartments, but only in the vertical direction. There were no horizontal compartments, so once she sustained damage to critical_number_of_compartments + 1, trimming of the ship plus water ingress allowed the remaining compartments to be progressively filled.
      • It should also be noted that for some insane reason, the Titanic crew didn't counterflood. If they had, they might have been able to significantly slow the sinking, and almost certainly the frame of the ship would have remained intact on the water, rather than the stern rising out of the water and then the entire ship snapping.
    • by this great guy ( 922511 ) on Monday May 08, 2006 @12:50PM (#15286742)
      Andrew should name such an unsinkable kernel TiTanenbaum.
  • How hard... (Score:3, Interesting)

    by JonJ ( 907502 ) <jon.jahren@gmail.com> on Monday May 08, 2006 @09:12AM (#15284915)
    Would it be to convert Linux to a microkernel? And Apple is using Mach and BSD as the basis of its XNU kernel; are they planning to make it a true microkernel? AFAIK it does some things in kernel space that make it not a microkernel.
    • by cp.tar ( 871488 )

      Well, I hear that GNU/HURD is in the making...

    • A false dichotomy (Score:5, Insightful)

      by The Conductor ( 758639 ) on Monday May 08, 2006 @09:34AM (#15285081)
      I seem to find this microkernel vs. monolithic argument a bit of a false dichotomy. Microkernels are just at one end of a modularity vs. $other_goal trade-off. There are a thousand steps in-between. So we see implementations (like the Amiga for example) that are almost microkernels, at which the purists shout objections (the Amiga permits interrupt handlers that bypass the OS-supplied services, for example). We also see utter kludges (Windows for example) improve their modularity as backwards compatibility and monopolizing marketing tactics permit (not much, but you have to say things have improved since Win3.1).

      When viewed as a Platonic Ideal, a microkernel architecture is a useful way to think about an OS, but most real-world applications will have to make compromises for compatibility, performance, quirky hardware, schedule, marketing glitz, and so on. That's just the way it is.

      In other words, I'd rather have a microkernel than a monolithic kernel, but I would rather have a monolithic kernel that does what I need (runs my software, runs on my hardware, runs fast) than a microkernel that sits in a lab. It is more realistic to ask for a kernel that is more microkernel-like, but still does what I need.

    • Re:How hard... (Score:3, Informative)

      by mmkkbb ( 816035 )
      Linux can be run on top of a Mach microkernel. Apple supported an effort to do this called mkLinux [mklinux.org] which ran on both Intel and PowerPC hardware.
      • Re:How hard... (Score:3, Informative)

        by samkass ( 174571 )
        Years before mkLinux, CMU experimented with some software called MacMach. It ran a "real" BSD kernel over Mach, and MacOS 6.0.7 on top of that. (By "real" I mean that you had to have one of those uber-expensive old BSD licenses to look at the code.) You could even "ps" and get MacOS processes in the list, although they weren't full peers of a unix process. I believe the most recent machine MacMach could run on was a Mac ][ci.

        Also, in the early 90's Tenon Intersystems had a MacOS running on Mach that had
    • Re:How hard... (Score:5, Interesting)

      by iabervon ( 1971 ) on Monday May 08, 2006 @10:33AM (#15285498) Homepage Journal
      This is actually sort of happening. Recent work has increased the number of features that can be provided in userspace. Of course, this is done very differently from how a traditional microkernel does it; the kernel is providing virtual features, which can be implemented in user space. For example, the kernel has the "virtual file system", which handles all of the system calls, to the point where a call to the actual filesystem is needed (if the cache, which the VFS handles, is not sufficient). The actual calls may be made to userspace, which is a bit slow, but it doesn't matter, because it's going to wait for disk access anyway.

      The current state is that Linux is essentially coming around to a microkernel view, but not the classic microkernel approach. And the new idea is not one that could easily grow out of a classic microkernel, but one that grows naturally out of having a macrokernel but wanting to push bug-prone code out of it.
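
      For illustration, a minimal sketch of the userspace-filesystem idea using the FUSE 2.x API: the kernel's VFS accepts the system calls and forwards the operations it can't satisfy from cache to this ordinary process. The "hello" file and its contents are made up, directory listing is omitted, and error handling is trimmed; this is a sketch, not a production filesystem.

      /* hellofs.c: a trivial read-only FUSE filesystem served from userspace.
       * Build: gcc hellofs.c -o hellofs `pkg-config fuse --cflags --libs`
       * Mount: ./hellofs /some/mountpoint ; then: cat /some/mountpoint/hello */
      #define FUSE_USE_VERSION 26
      #include <fuse.h>
      #include <errno.h>
      #include <string.h>
      #include <sys/stat.h>

      static const char *hello_str  = "Hello from userspace\n";
      static const char *hello_path = "/hello";

      /* stat() requests are forwarded here by the kernel's VFS. */
      static int hello_getattr(const char *path, struct stat *st)
      {
          memset(st, 0, sizeof(*st));
          if (strcmp(path, "/") == 0) {
              st->st_mode  = S_IFDIR | 0755;
              st->st_nlink = 2;
              return 0;
          }
          if (strcmp(path, hello_path) == 0) {
              st->st_mode  = S_IFREG | 0444;
              st->st_nlink = 1;
              st->st_size  = (off_t)strlen(hello_str);
              return 0;
          }
          return -ENOENT;
      }

      /* read() requests end up here; a crash kills this process, not the kernel. */
      static int hello_read(const char *path, char *buf, size_t size, off_t off,
                            struct fuse_file_info *fi)
      {
          size_t len = strlen(hello_str);
          (void)fi;
          if (strcmp(path, hello_path) != 0)
              return -ENOENT;
          if ((size_t)off >= len)
              return 0;
          if (size > len - (size_t)off)
              size = len - (size_t)off;
          memcpy(buf, hello_str + off, size);
          return (int)size;
      }

      static struct fuse_operations hello_ops = {
          .getattr = hello_getattr,
          .read    = hello_read,
      };

      int main(int argc, char *argv[])
      {
          return fuse_main(argc, argv, &hello_ops, NULL);
      }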
      • Re:How hard... (Score:3, Insightful)

        by rg3 ( 858575 )
        Another example of this approach is libusb. Instead of providing drivers for USB devices inside the kernel, you can do that with libusb. It gives you an interface to the USB system. Many scanner and printer drivers use it, and the drivers are included in the CUPS or SANE packages.
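
        As a rough sketch of what such a userspace "driver" looks like with the libusb-1.0 API (the vendor/product IDs and the 0x81 bulk IN endpoint below are placeholders, not a real device):

        /* usbpeek.c: read a few bytes from a USB device entirely in userspace.
         * Build: gcc usbpeek.c -lusb-1.0 */
        #include <libusb-1.0/libusb.h>
        #include <stdio.h>

        int main(void)
        {
            libusb_device_handle *dev;
            unsigned char buf[64];
            int got = 0;

            if (libusb_init(NULL) != 0)
                return 1;

            /* Open the device by vendor/product ID; no kernel module involved. */
            dev = libusb_open_device_with_vid_pid(NULL, 0x1234, 0x5678);
            if (dev != NULL) {
                libusb_claim_interface(dev, 0);
                if (libusb_bulk_transfer(dev, 0x81, buf, sizeof(buf), &got, 1000) == 0)
                    printf("read %d bytes without touching a kernel driver\n", got);
                libusb_release_interface(dev, 0);
                libusb_close(dev);
            }
            libusb_exit(NULL);
            return 0;
        }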
  • Or... (Score:5, Funny)

    by Mr. Underbridge ( 666784 ) on Monday May 08, 2006 @09:13AM (#15284923)
    You could just have a small monolithic kernel, and do as much as possible in userland.

    Best of both worlds, no? Wow, I wish someone would make such an operating system...

    • Re:Or... (Score:3, Interesting)

      by Zarhan ( 415465 )
      Considering how much stuff has recently been moved to userland in Linux (udev, hotplug, hal, FUSE (filesystems), etc) I think we're heading in that direction. SELinux is also something that could be considered "compartmentalized".
  • NT4 (Score:3, Interesting)

    by truthsearch ( 249536 ) on Monday May 08, 2006 @09:14AM (#15284925) Homepage Journal
    NT4 had a microkernel whose sole purpose was object brokering. What I think we're missing today is a truly compartmentalized microkernel. The NT4 kernel handled all messages between kernel objects, but all it did was pass them along. One object running in kernel space could still bring down the rest. I assume that's still the basis of the XP kernel today.

    I haven't looked at GNU/Hurd but I have yet to see a "proper" non-academic microkernel which lets one part fail while the rest remain.
    • Re:NT4 (Score:4, Interesting)

      by segedunum ( 883035 ) on Monday May 08, 2006 @09:19AM (#15284964)
      NT4 had a microkernel whose sole purpose was object brokering.

      Well, I wouldn't call NT's kernel a microkernel in any way for the very reason that it was not truly compartmentalised and the house could still be brought very much down - quadruply so in the case of NT 4. You could call it a hybrid, but that's like saying someone is a little bit pregnant. You either are or you're not.
      • Re:NT4 (Score:3, Informative)

        by Xiaran ( 836924 )
        Indeed. Microsoft have also been forced time and time again to make compromises to get better performance. An example of this is known to people who write file system filter drivers for NT. Basically, they seemingly couldn't make a (reasonably nice) object model run fast enough in some circumstances. So now there are effectively *two* interfaces for an NT file system: one that uses the regular IRP-passing semantics and another doing direct calls into your driver to speed things up.

        This, as peop
    • QNX ! (Score:5, Informative)

      by alexhs ( 877055 ) on Monday May 08, 2006 @09:55AM (#15285232) Homepage Journal
      I have yet to see a "proper" non-academic microkernel which lets one part fail while the rest remain.

      QNX [qnx.com], but it isn't open source.

      VxWorks [windriver.com] and a few other would also fit.
      • Re:QNX ! (Score:4, Insightful)

        by Kristoph ( 242780 ) on Monday May 08, 2006 @02:42PM (#15287697)
        The QNX Neutrino kernel is a very good microkernel implementation (albeit not as purist as, say, the Ka microkernel line), but the fact that it is not open makes it unusable.

        The scheduler, for example, is real-time only, so its suitability for non-real-time applications is questionable at best. A simple problem to address in the open source world but, apparently, "not a high priority" for the manufacturer of this fine technology.

        -rant-

        I fail to understand the point of closed source kernel implementations. The kernel is now a commodity.

        -/rant-

        ]{
  • Trusted Computing (Score:3, Interesting)

    by SavedLinuXgeeK ( 769306 ) on Monday May 08, 2006 @09:14AM (#15284931) Homepage
    Isn't this similar, in idea, to the Trusted Computing movement? It doesn't compartmentalize, but it does ensure integrity at all levels, so if one area is compromised, nothing else is given the ability to run. That might be a better move than the idea of compartmentalizing the kernel, as too many parts are interconnected. If my memory handler fails, or if my disk can't read, I have a serious problem that sinks the ship, no matter what you do.
    • Trusted computing merely checks that the code hasn't changed since it was shipped. This verifies that no new bugs have been added and that no old bugs have been fixed.

  • by maynard ( 3337 ) on Monday May 08, 2006 @09:14AM (#15284934) Journal
    didn't save the Titanic [wikipedia.org]. Every microkernel system I've seen has been terribly slow due to message passing overhead. While it may make marginal sense from a security standpoint to isolate drivers into userland processes, the upshot is that if a critical driver goes *poof!* the system still goes down.

    Solution: better code management and testing.
    • by LurkerXXX ( 667952 ) on Monday May 08, 2006 @09:23AM (#15284994)
      BeOS didn't seem slow to me. No matter what I threw at it.
    • The Titanic wasn't actually _properly_ compartmentalised, as each compartment leaked at the top (unlike a number of properly compartmentalised ships built around the same time, which would have survived the iceberg).

    • didn't save the Titanic.

      It actually took breaching something like half the compartments to sink her. If the iceberg had hit just one less compartment, she would have stayed afloat. In contrast, one hole in a non-compartmentalized ship can sink it.

      That is no different from an OS. In just about any monolithic OS, one bug is enough to sink it.

    • by gEvil (beta) ( 945888 ) on Monday May 08, 2006 @10:03AM (#15285279)
      So wait a second. In your analogy, which part of Linux plays the Leonardo DiCaprio role? (I'm curious to know which part of Linux I should take out back and kick repeatedly.)
  • The thing is... (Score:5, Interesting)

    by gowen ( 141411 ) <gwowen@gmail.com> on Monday May 08, 2006 @09:15AM (#15284940) Homepage Journal
    Container ships don't have to move cargo from one part of the ship to another, on a regular basis. You load it up, sail off, and then unload at the other end of the journey. If the stuff in the bow had to be transported to the stern every twelve hours, you'd probably find fewer enormous steel bulkheads between them, and more wide doors.
    • Re:The thing is... (Score:5, Insightful)

      by crawling_chaos ( 23007 ) on Monday May 08, 2006 @09:44AM (#15285145) Homepage
      Compartmentalization had very little to do with the advent of the container ship. Titanic was partially compartmented, but the compartments didn't run above the waterline, so that the breach of several bow compartments led to overtopping of the remainder and the eventual loss of the ship. Lusitania and Mauretania were built with full compartments and even one longitudinal bulkhead because the Royal Navy funded them in part for use as auxiliary troopships. Both would have survived the iceberg collision, which really does make one wonder what was in Lusitania's holds when those torpedoes hit her.

      Compartments do interfere with efficient operation, which is why Titanic's designers only went halfway. Full watertight bulkheads and a longitudinal one would have screwed up the vistas of the great dining rooms and first class cabins. It would also have made communication between parts of the ship more difficult, as watertight bulkheads tend to have a limited number of doors.

      The analogy is actually quite apt: more watertight security leads to decreased usability, but a hybrid system (Titanic's) can only delay the inevitable, not prevent it, and nothing really helps when someone is lobbing high explosives at you by surprise.

  • Theory Vs. Practice (Score:4, Interesting)

    by mikeisme77 ( 938209 ) on Monday May 08, 2006 @09:18AM (#15284958) Homepage Journal
    This sounds great in theory, but in reality it would be impractical. 2.5 million lines of code handling all of the necessary things the Linux Kernel handles really isn't that bad. Adding compartmentalization into the mix will only make it more complicated and make it more likely for a hole to spring somewhere in the "hull"--maybe only one compartment will be flooded then, but the hole may be harder to patch. However, I wouldn't rule compartmentalization out completely, but it should be understood that doing so will increase the complexity/size and not necessarily lower the size/complexity. And isn't Windows XP or Vista like 30 million lines of code (or more)? That's a LOT more than double the size of the Linux kernel...
    • by Shazow ( 263582 ) <(andrey.petrov) (at) (shazow.net)> on Monday May 08, 2006 @09:24AM (#15284998) Homepage
      wouldn't rule compartmentalization out completely, but it should be understood that doing so will increase the complexity/size and not necessarily lower the size/complexity.

      Just to clear things up, my understanding is that Tanenbaum is advocating moving the complexity out of kernel space to user space (such as drivers). So you wouldn't be lowering the size/complexity of the kernel altogether, you'd just be moving huge portions of it to a place where it can't do as much damage to the system. Then the kernel just becomes one big manager which tells the OS what it's allowed to do and how.

      - shazow
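
      To make that concrete, here is a schematic sketch of what a userspace driver looks like in such a design: an ordinary process that loops on IPC and never touches kernel memory. The message layout and the ipc_receive/ipc_send primitives are hypothetical stand-ins (stubbed out so the sketch compiles), not the actual MINIX interface:

      #include <stdint.h>
      #include <stdio.h>

      enum { DEV_READ = 1, DEV_WRITE = 2, DEV_REPLY = 3, ANY = -1 };

      struct msg {                    /* hypothetical fixed-size IPC message */
          int      source;            /* endpoint that sent the request */
          int      type;              /* DEV_READ, DEV_WRITE, ... */
          uint64_t offset;
          uint32_t count;
          int      status;
      };

      /* Stand-ins for the kernel's IPC traps; in a real microkernel these block
       * and copy a message, and are the driver's only way to reach the system. */
      static int ipc_receive(int from, struct msg *m)
      {
          (void)from;
          m->source = 1; m->type = DEV_READ; m->offset = 0; m->count = 512;
          return 0;
      }
      static int ipc_send(int to, const struct msg *m)
      {
          printf("reply to %d: type=%d status=%d\n", to, m->type, m->status);
          return 0;
      }

      static int do_read(struct msg *m)  { return (int)m->count; }  /* touch hardware here */
      static int do_write(struct msg *m) { return (int)m->count; }

      int main(void)
      {
          struct msg m;

          for (int i = 0; i < 3; i++) {      /* a real driver loops forever */
              ipc_receive(ANY, &m);          /* block until a request arrives */

              switch (m.type) {
              case DEV_READ:  m.status = do_read(&m);  break;
              case DEV_WRITE: m.status = do_write(&m); break;
              default:        m.status = -1;           break;
              }

              m.type = DEV_REPLY;
              ipc_send(m.source, &m);        /* if this process crashes, only it
                                                dies; the kernel keeps running */
          }
          return 0;
      }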
      • by mikeisme77 ( 938209 ) on Monday May 08, 2006 @09:32AM (#15285059) Homepage Journal
        But then you'd have issues with performance and such. The reason the current items are in the kernel to begin with has to do with the need for them to communicate easily with one another and their need to have system-override access to all resources. It does make his claim more valid, but it's still not a good idea in practice (unless your primary focus for an OS is security rather than performance). I also still think that this method would make the various "kernel" components harder to manage/patch -- I put kernel in quotes because the parts that would be moved to userland would still be part of the kernel to me (even if not physically).
    • by zhiwenchong ( 155773 ) on Monday May 08, 2006 @09:43AM (#15285139)
      In theory, there is no difference between theory and practice. But, in practice, there is.

      - Jan L.A. van de Snepscheut

      Sorry, couldn't resist. ;-)
  • Most drivers don't need to run in kernel mode (read: any USB device driver)... or at least they don't need to run in response to system calls.
    The hardware-manipulating part of the kernel should stick to providing higher-level APIs for most bus and system protocols and provide async I/O for kernel and user space. If most kernel-mode drivers that power your typical /dev/dsp and /dev/input/mouse and such could be rewritten as kernel threads that dispatch requests to and from other kernel threads servicing physical hardware in the system, you could provide fault isolation and state reconstruction in the face of crashes without incurring much overhead. Plus, user processes could also drive these interfaces directly, so user-space programs could talk to hardware without needing to load dangerous, untrusted kernel modules (esp. from closed-source hardware vendors).

    Or am I just crazy?

    Yeah, but microkernels seem like taking to an extreme something that can be accomplished by other means.
  • by Hacksaw ( 3678 ) on Monday May 08, 2006 @09:21AM (#15284973) Homepage Journal
    I won't claim that Professor T is wrong, but the proof is in the pudding. If he could produce a kernel set up with all the bells and whistles of Linux, which is the same speed and demonstrably more secure, I'd use it.

    But most design is about trade-offs, and it seems like the trade-off with microkernels is compartmentalization vs. speed. Frankly, most people would rather have speed, unless the security situation is just untenable. So far it's been acceptable to a lot of people using Linux.

    Notably, if security is of higher import than speed, people don't reach for micro-kernels, they reach for things like OpenBSD, itself a monolithic kernel.

    • by WindBourne ( 631190 ) on Monday May 08, 2006 @09:37AM (#15285092) Journal
      OpenBSD's security strength has NOTHING to do with the kernel. It has to do with the fact that multiple trained eyes are looking over the code. The other thing that you will note is that they do not include new code in it. It is almost all older code that has been proven on other systems (read: NetBSD, Apple, Linux, etc.). IOW, by being back several revs, they are gaining the advantage of everybody else as well as their own.
      • Wrong. OpenBSD's strength comes partially from their testing and code review policy, but ALSO from design decisions (like kernel memory management).

        Certain types of security flaws are much harder to exploit when the OS addresses memory in unpredictable ways.

        Other design principles, which encourage access log review, aid to the security of the system without having anything to do with code review.
    • Actually, it's been shown over and over that microkernel designs don't HAVE to be slow. Read Liedtke's 1993 paper on IPC in L3 as one example.

      The problem is the hardware is optimized for something else now. Also, modern programmers that only know Java can't code ASM and understand the hardware worth a damn. I should know, I have to try and teach them.

      And yes, all people care about is speed, because you cannot benchmark security, and benchmarks are all marketing people understand, and gamers need som
  • Hindsight is 20/20 (Score:4, Insightful)

    by youknowmewell ( 754551 ) on Monday May 08, 2006 @09:32AM (#15285061)
    From the link to the Linus vs. Tanenbaum argument:

    "The limitations of MINIX relate at least partly to my being a professor: An explicit design goal was to make it run on cheap hardware so students could afford it. In particular, for years it ran on a regular 4.77 MHZ PC with no hard disk. You could do everything here including modify and recompile the system. Just for the record, as of about 1 year ago, there were two versions, one for the PC (360K diskettes) and one for the 286/386 (1.2M). The PC version was outselling the 286/386 version by 2 to 1. I don't have figures, but my guess is that the fraction of the 60 million existing PCs that are 386/486 machines as opposed to 8088/286/680x0 etc is small. Among students it is even smaller. Making software free, but only for folks with enough money to buy first class hardware is an interesting concept. Of course 5 years from now that will be different, but 5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5."
  • by youknowmewell ( 754551 ) on Monday May 08, 2006 @09:39AM (#15285103)
    The arguments for using monolithic kernels over microkernels are the same sort of argument as for using C/C++ over languages like Lisp, Java, Python, Ruby, etc. I think maybe we're at a point where microkernels are now practical, just as those high-level languages are. I'm no kernel designer, but it seems reasonable that a monolithic kernel could be refactored into a microkernel.
  • by devangm ( 869429 ) on Monday May 08, 2006 @09:44AM (#15285151) Homepage
    That friendly conversation is hilarious. "Linus: ...linux still beats the pants of minix in almost all areas"

    "Andy: ...I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design :-)"

    The most interesting part: "Linus: The very /idea/ of an operating system is to use the hardware features, and hide them behind a layer of high-level calls. That is exactly what linux does: it just uses a bigger subset of the 386 features than other kernels seem to do. Of course this makes the kernel proper unportable, but it also makes for a /much/ simpler design. An acceptable trade-off, and one that made linux possible in the first place."
  • by joshv ( 13017 ) on Monday May 08, 2006 @09:54AM (#15285221)
    I never really understood why buggy drivers constantly restarting is a desirable state. Say what you will about the monolithic kernel, but the fact that one bad driver can crash the whole works tends to make people work much harder to create solid drivers that don't crash.

    In Andrew Tanenbaum's world, a driver developer can write a driver, and not even realize the thing is being restarted every 5 minutes because of some bug. This sort of thing could even get into a shipping product, with who knows what security and performance implications.
    • Certainly there would be a logging facility to capture that sort of event. Yeah, it might not blow up the machine, but a bouncing driver *should* make a lot of noise.
    • Restarting drivers (Score:5, Informative)

      by Sits ( 117492 ) on Monday May 08, 2006 @12:37PM (#15286613) Homepage Journal
      I'm going to weigh in over here mainly because my quiet slumber in the minix newsgroup has been disturbed by a call to arms from ast [google.co.uk] to correct some of the FUD here on Slashdot.

      Drivers have measurably more bugs in them than other parts of the kernel. This has been shown by many studies (see the third reference in the article). It can also be shown empirically: modern versions of Windows are often fine until a buggy driver gets onto them and destabilises things. Drivers are so bad that XP even warns you about drivers that haven't been through checks. Saying people should be careful just doesn't cut it, and is akin to saying people were more careful in the days of multitasking without protected memory. Maybe they were, but some program errors slipped through anyway, bringing down the whole OS when I used AmigaOS (or Windows 95). These days, if my web browser misbehaves at least it doesn't take my word processor with it; losing the web browser is pain enough.

      In all probability you would know that a driver had to be restarted, because there's a good chance its previous state had to be wiped away. However, a driver that can be safely restarted is better than a driver that locks up everything that touches it (ever had an unkillable process stuck in the D state? That's probably due to a driver getting stuck). You might even be able to do a safe shutdown and lose less work. From a debugging point of view I prefer not having to reboot the machine to restart the entire kernel when a driver goes south; it makes inspection of the problem easier.

      (Just to prove that I do use Minix though I shall say that killing the network driver results in a kernel panic which is a bit of a shame. Apparently the state is too complex to recover from but perhaps this will be corrected in the future).

      At the end of the day it would be better if people didn't make mistakes but since they do it is wise to take steps to mitigate the damage.
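
      For what it's worth, the restart mechanism itself is conceptually tiny. A sketch of a supervisor (a "reincarnation server" in spirit) that respawns a userspace driver whenever it dies and makes noise in the log each time; the ./net_driver path is a placeholder, and a real one would also re-establish the driver's state:

      /* respawn.c: keep a userspace driver alive and log every restart. */
      #include <stdio.h>
      #include <sys/types.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void)
      {
          for (;;) {
              pid_t pid = fork();
              if (pid < 0)
                  return 1;
              if (pid == 0) {
                  execl("./net_driver", "net_driver", (char *)NULL);
                  _exit(127);                      /* exec failed */
              }

              int status;
              waitpid(pid, &status, 0);            /* block until the driver dies */
              fprintf(stderr, "driver %d exited (status 0x%x), restarting\n",
                      (int)pid, status);
              sleep(1);                            /* back off so a crash loop is obvious */
          }
      }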
      • Drivers are so bad that XP even warns you about drivers that haven't been through checks.

        However, the driver certification program is to some extent a waste of time anyway:
        • When MS sign the driver they cannot test all execution paths; there are known cases where driver manufacturers have put the driver into a safe (read: slow) mode for the certification and then switched to a completely different (fast) execution path in real life. This makes the driver no more stable than an uncertified one.
        • Many dr
  • by Junks Jerzey ( 54586 ) on Monday May 08, 2006 @10:06AM (#15285303)
    Lots of big ideas in programming get pooh-poohed for being too resource intensive (a.k.a. big and slow), but eventually we look back and think how silly we were to be worried about such details, and that of course we should go with the cleaner, more reliable option. Some examples:

    zbuffering - go back to any book from the 1970s, and it sounds like a pipe dream (more memory needed for a high-res zbuffer than in entire computer systems of the time)

    Lisp, Prolog, and other high-level languages on home computers - these are fast and safe options, but were comically bloated on typical hardware of 20 years ago.

    Operating systems not written in assembly language - lots of people never expected to see the day.

  • by Inoshiro ( 71693 ) on Monday May 08, 2006 @10:28AM (#15285467) Homepage
    Slashdot may be news for nerds, but it has a serious drawback when it comes to things such as this. The drawback is that what is accepted as "fact" by most people is never questioned.

    "Fact": Micorkernel systems perform poorly due to message passing overhead.

    Fact: Mach performs poorly due to message passing overhead. L3, L4, hybridized kernels (NT executive, XNU), K42, etc, do not.

    "Fact": Micorkernel systems perform poorly in general.

    Fact: OpenBSD (monolithic kernel) performs worse than MacOS X (microkernel) on comparable hardware! Go download lmbench and do some testing of the VFS layer.

    Within the size of L1 cache, your speed is determined by how quickly your cache will fill. Within L2, it's how efficient your algorithm is (do you invalidate too many cache lines?) -- smaller sections of kernel code are a win here, as much as good algorithms are a win here. Outside of L2 (anything over 512k on my Athlon64), throughput of common operations is limited by how fast the RAM is -- not IPC throughput. Most microkernel overhead is a constant value -- if your Linux kernel is O(n) or O(1), then it's possible to tune the microkernel to be O(n+k) or O(1+k) for the equivalent operations. The faster your hardware, the smaller the cost of k, since it's a constant value. L4Linux was 4-5% slower than "pure" Linux in 1997 (See L4Linux site for the PDF of the paper [l4linux.org]).

    But none of this is something the average slashdotter will do. No, I see lots of comments such as "microkernels suck!" already at +4 and +5. Just because Mach set back microkernel research by about 20 years doesn't mean that all microkernels suck.
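
    If you want to put a rough number on that constant k yourself, here is a crude sketch that times a round trip between two processes over pipes. It is only a stand-in for message-passing cost (tuned microkernel IPC paths are engineered to be much cheaper than generic pipes), but it shows that the overhead is a fixed per-message cost, not something that scales with the work being done:

    /* pingpong.c: time a userspace-to-userspace round trip over pipes.
     * Build: gcc -O2 pingpong.c */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define ROUNDS 100000

    int main(void)
    {
        int ptc[2], ctp[2];               /* parent->child, child->parent */
        char byte = 'x';
        struct timespec t0, t1;

        if (pipe(ptc) != 0 || pipe(ctp) != 0)
            return 1;

        if (fork() == 0) {                /* child: echo every byte back */
            for (int i = 0; i < ROUNDS; i++) {
                read(ptc[0], &byte, 1);
                write(ctp[1], &byte, 1);
            }
            _exit(0);
        }

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ROUNDS; i++) {
            write(ptc[1], &byte, 1);      /* "send" the request */
            read(ctp[0], &byte, 1);       /* wait for the "reply" */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.0f ns per round trip\n", ns / ROUNDS);
        return 0;
    }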
    • by galvanash ( 631838 ) on Monday May 08, 2006 @02:22PM (#15287546)

      Do you actually want people to take you seriously when you post utter shit like this?

      Fact: Mach performs poorly due to message passing overhead. L3, L4, hybridized kernels (NT executive, XNU), K42, etc, do not.

      That is a veiled lie. Mach performed very poorly mostly because of message _validation_, not message passing (although it was pretty slow at that too). I.e., it spent a lot of cycles making sure messages were correct. L3/L4 and K42 simply don't do any validation; they leave it up to the user code. In other words, once you put back in userland the validation that Mach had in kernel space, things are a bit more even. And for the love of god, NT is NOT a microkernel. It never was a microkernel. And stop using the term "hybrid"; all "hybrid" means is that the marketing dept. wanted people to think it was a microkernel...

      Now I will throw a few "facts" at you. It is possible, with a lot of clever trickery, to simulate message passing using zero-copy shared memory (this is what L3/L4/K42/QNX/etc. do... any microkernel wanting to do message passing quickly does this). And if done correctly it CAN perform in the same league as monolithic code for many things where the paradigm is a good fit. But there are ALWAYS situations where it is going to be desirable for separate parts of an OS to directly touch the same memory in a cooperative manner, and when this is the case a microkernel just gets in your damn way...

      Fact: OpenBSD (monolithic kernel) performs worse than MacOS X (microkernel) on comparable hardware! Go download lmbench and do some testing of the VFS layer.

      Ok... Two things. OpenBSD is pretty much the slowest of all BSD derivatives (which is fine, those guys are more concerned with other aspects of the system, and its users are as well), so using it in this comparison shows an obvious bias on your part... Secondly, and please listen very closely because this bullshit needs to stop already, !!OSX IS NOT A MICROKERNEL!! It is a monolithic kernel. Yes, it is based on Mach, just like mkLinux was (which also was not a microkernel). Let's get something straight here: being based on Mach doesn't make your kernel a microkernel, it just makes it slow. If you compile away the message passing and implement your drivers in kernel space, then you DO NOT have a microkernel anymore.

      So what you actually said in your post could be re-written like this:

      Fact: OSX is sooooo slow that the only thing it is faster than is OpenBSD. And you can't even blame its slowness on it being a microkernel. How pathetic... Wow, that says it all in my book :)

      And no, you don't have to believe me... Please read this [usenix.org] before bothering to reply.
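
      To illustrate the zero-copy point above, here is a bare-bones sketch of two processes sharing one mapping and handing over a "message" without any payload copy through the kernel. It is single producer/consumer, busy-waiting, with no error handling; a real system would block instead of spinning and use proper synchronization primitives:

      /* zerocopy.c: pass a "message" through shared memory with no copy.
       * Build: gcc zerocopy.c -lrt */
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <unistd.h>

      struct channel {
          volatile int full;              /* 0 = empty, 1 = message present */
          char payload[4096];
      };

      int main(void)
      {
          int fd = shm_open("/zerocopy_demo", O_CREAT | O_RDWR, 0600);
          ftruncate(fd, sizeof(struct channel));
          struct channel *ch = mmap(NULL, sizeof(*ch), PROT_READ | PROT_WRITE,
                                    MAP_SHARED, fd, 0);

          if (fork() == 0) {              /* "driver" side: produce a message */
              strcpy(ch->payload, "block 42 contents");
              __sync_synchronize();       /* publish the payload before the flag */
              ch->full = 1;
              _exit(0);
          }

          while (ch->full == 0)           /* "client" side: consume it */
              ;                           /* real code would block, not spin */
          printf("consumer read: %s\n", ch->payload);

          munmap(ch, sizeof(*ch));
          shm_unlink("/zerocopy_demo");
          return 0;
      }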

      • "Fact: OSX is sooooo slow that the only thing it is faster than is OpenBSD. And you cant even blame its slowness on it being a microkernel. How pathetic... Wow, that says it all in my book :)"

        Actually, OS X was within a few percentage points of Linux on all hardware tested; actually outperforming it on memory throughput on PowerPC and some other tests. It's also faster than NT.

        "But there are ALWAYS situations where it is going to be desirable for seperate parts of an OS to directly touch the same memory in
  • driver banishment (Score:4, Interesting)

    by bperkins ( 12056 ) on Monday May 08, 2006 @10:35AM (#15285517) Homepage Journal
    What I'd like to see is a compromise.

    There are quite a few drivers out there to support weird hardware (like webcams and such) that are just not fully stable. It would be nice to be able to choose whether a driver runs in kernel mode, at full speed, or in a sort of DMZ with reduced performance. This could also make it easier to reverse engineer non-GPL kernel drivers, as well as facilitate driver development.

  • I read their "what's new" [gnu.org] and they're participating in Google's Summer of Code.


    27 April 2006

            The GNU Hurd project will participate in this year's Google Summer of Code, under the aegis of the GNU project.

            The following is a list of items you might want to work on. If you want to modify or extend these tasks or have your own ideas what to work on, please feel invited to contact us on the bug-hurd mailing list or the #hurd IRC channel.

                    * Make GNU Mach use more up to date device drivers.
                    * Work on GNU Mach's IPC / VM system.
                    * Design and implement a sound system.
                    * Transition the Hurd libraries and servers from cthreads to pthreads.
                    * Find and implement a reasonable way to make the Hurd servers use syslog.
                    * Design and implement libchannel, a library for streams.
                    * Rewrite pfinet, our interface to the IPv4 world.
                    * Implement and make the Hurd properly use extended attributes.
                    * Design / implement / enhance support for the...
                                o Andrew File System (AFS);
                                o NFS client and NFSd;
                                o EXT3 file system;
                                o Logical Volume Manager (LVM).

            Please see the page GNU guidelines for Summer of Code projects about how to make an application, and the Summer of Code project ideas list for a list of tasks for various GNU projects and information about how to submit your own ideas for tasks.
  • Has anyone tried? (Score:3, Insightful)

    by Spazmania ( 174582 ) on Monday May 08, 2006 @10:53AM (#15285666) Homepage
    Why are TV sets, DVD recorders, MP3 players, cell phones, and other software-laden electronic devices reliable and secure but computers are not?

    Well, the nice thing about software in ROM is that you can't write to it. If you can't inject your own code, and unplugging and replugging the device does a full reset back to the factory code, then there is a very limited amount of damage a hacker can do.

    Then too, sets capable of receiving a sophisticated digital signal (HDTV) have only recently come into widespread use. To what extent has anyone even tried to gain control of a TV set's computer by sending malformed data?

  • by jthill ( 303417 ) on Monday May 08, 2006 @10:53AM (#15285670)
    Microkernels are just one way to compartmentalize. Compartmentalization is good, yadda yadda momncherrypie yadda. We've known this for what, 20 years? 30? 40? Nobody suspects it's a fad anymore. The kinds of faults VM isolation guards against aren't the kinds of faults that worry people so much today. Panics and bluescreens aren't solved, but they're down in the background noise. Experience and diligence and increasingly good tools have been enough to put them there and will remain enough to keep them there, because the tools are getting better by leaps and bounds.

    "In the 1980s, performance counted for everything, and reliability and security were not yet on the radar" is remarkable. Not on whose radar? MVS wasn't and z/OS isn't a microkernel either, and the NCSC didn't give out B1 ratings lightly.

    One thing I found interesting is the notion of running a parallel virtual machine solely to sandbox drivers you don't trust.

  • Minix (Score:3, Informative)

    by Espectr0 ( 577637 ) on Monday May 08, 2006 @11:22AM (#15285917) Journal
    I have a friend who is getting his doctorate in Amsterdam, and he has Tanenbaum next door.

    Guess what he told me. A revamped version of minix is coming.
