The Great Microkernel Debate Continues 405
ficken writes "The great conversation about micro vs. monolithic kernel is still alive and well. Andy Tanenbaum weighs in with another article about the virtues of microkernels. From the article: 'Over the years there have been endless postings on forums such as Slashdot about how microkernels are slow, how microkernels are hard to program, how they aren't in use commercially, and a lot of other nonsense. Virtually all of these postings have come from people who don't have a clue what a microkernel is or what one can do. I think it would raise the level of discussion if people making such postings would first try a microkernel-based operating system and then make postings like "I tried an OS based on a microkernel and I observed X, Y, and Z first hand." Has a lot more credibility.'"
Re:Tag this article... (Score:5, Informative)
Re:Which one? (Score:5, Informative)
Re:Which one? (Score:2, Informative)
Re:Microkernels are the future (Score:4, Informative)
SCSI, firewire are examples of good tech working for you. The CPU should output instructions to devices smart enough to be able to work on their own. Leaving more cycles available to do things that actually matter.
Linux microkernel ... (Score:5, Informative)
If you read the article, Tanenbaum reminds everyone of how Microsoft paid Ken Brown to write a book accusing Linus of stealing the Minix microkernel. FTFA:
Re:Which one? (Score:5, Informative)
Re:Which one? (Score:3, Informative)
Re:Is he really still talking about this??? (Score:2, Informative)
Fix the CPU and stop this silly debate (Score:4, Informative)
If you wonder how to do modules without sacrificing the flat address space, it's quite easy: in most CPU designs, each page descriptor has a user/supervisor bit which defines whether the contents of a page are accessible from user mode. Instead of this single bit, the CPU could use the target address to look up module information from another table. In other words, the CPU would maintain a map from addresses to modules, and use this map to enforce access control.
This design is not as slow as it initially might seem. Modern CPUs are very fast, and they already contain many such maps: the Translation Lookaside Buffer, the Global Descriptor Table cache, the Local Descriptor Table cache, Victim Caches, Trace Caches, you name it.
Comment removed (Score:3, Informative)
Re:Which one? (Score:3, Informative)
Re:Microkernels are the future (Score:5, Informative)
What does it do? Why, it's a monolithic driver that provides an interface to support userspace filesystem drivers: a microkernel in practice, if not in definition. Ergo, the grandparent's point about a slow migration.
Re:Which one? (Score:1, Informative)
You can download it yourself here:
http://memo5.dyndns.org/Sites/Lisa/softs/othersofts.html [dyndns.org]
Re:Open Source monolithic kernels (Score:3, Informative)
Loadable modules have nothing to do with micro/monolithic design. In a microkernel, those modules would have their own memory space, their own PID-like identifier assigned by the monitor, and would do all their communication with the rest of the kernel via some IPC mechanism.
When you load a module in Linux, it lives in the same memory space as the rest of the kernel and can freely exchange data with it by writing directly to shared data structures. Modules don't give you a less monolithic kernel; what they do is let you determine what the monolith looks like at run time instead of compile time.
Re:Rise of virtualization = return of microkernel (Score:3, Informative)
VMware ESX Server's "vmkernel" is supposed to be a custom microkernel that happens to use drivers from Linux (all device drivers run inside the vmkernel). Guest OSes (including the Linux-based service console used to manage the server) run on top of the vmkernel and access all hardware through it.
The Xen hypervisor does less than VMware's vmkernel; it loads a "dom0" guest that manages the system and performs I/O. With few exceptions, this guest is the only guest that directly interfaces with hardware. The hypervisor manages memory, schedules the CPU, and manages communication between guests.
Microsoft's Hyper-V appears to operate in a similar fashion to Xen.
In the case of Xen and Hyper-V, it's still different from a microkernel; the guest instance that is performing I/O is still a monolithic kernel - usually Linux with Xen and currently Windows 2008 with Hyper-V.
In all three systems, you've got one special guest that handles important system functions and one kernel handling I/O (be it a guest as in Xen/Hyper-V or be it the vmkernel in VMware). There's no "filesystem" process/VM, no "network driver" process/VM, etc.
Design goals (Score:4, Informative)
And he's right. If your goal is reliability and security, a microkernel is a better design. Both goals rely on limiting the amount of time (and the amount of code) spent in kernel space. "Process isolation" is the mantra.
NeXTStep was a hybrid kernel. It was *almost* a microkernel (based on Mach). And it was *highly* usable. It had the most usable UI in the industry, and still does in its current incarnation as OS X.
I think microkernels still have legs.
Re:Slashvertisement (Score:3, Informative)
Re:Microkernels are the future (Score:3, Informative)
What does it do? Why, it's a monolithic driver that provides an interface to support userspace filesystem drivers: a microkernel in practice, if not in definition. Ergo, the grandparent's point about a slow migration.
Unfortunately, the problem with FUSE is that it's painfully slow. And yes, I do know what I'm talking about, having written drivers for it myself.
Re:QNX Rules (Score:3, Informative)
The compiler doesn't really handle IPC: what happens is that the compiler (or rather the loader) verifies that the programs are type- and memory-safe before allowing them to run. Then they are all loaded into a single memory space so that IPC is trivial. It's a neat concept, although not the first time it has been implemented (see an OS called 'JX').
Re:Is he really still talking about this??? (Score:3, Informative)
He asserted that the x86 architecture was doomed to extinction. Yet the majority of the -planet- uses an x86 machine of some sort as of 2008.
*sigh* That old chestnut.
Every x86 processor for the last decade, whether from Intel, AMD, or VIA, is a superscalar, out-of-order, register-rich RISC core internally, with a layer that decodes x86 opcodes and translates them into the native RISC code. The Transmeta chips were RISC/VLIW internally and could emulate any instruction set by loading the translation code at power-up.
Re:A modest proposal for Tanenbaum (Score:3, Informative)
I doubt Andy would be so interested in the Hurd, he is very much the message-passing fan. He also doesn't like the GPL.
Re:crickets (Score:3, Informative)
What, you have never heard of them? Well, there are other widely used computing platforms besides personal computers.