The Great Microkernel Debate Continues 405
ficken writes "The great conversation about micro vs. monolithic kernel is still alive and well. Andy Tanenbaum weighs in with another article about the virtues of microkernels. From the article: 'Over the years there have been endless postings on forums such as Slashdot about how microkernels are slow, how microkernels are hard to program, how they aren't in use commercially, and a lot of other nonsense. Virtually all of these postings have come from people who don't have a clue what a microkernel is or what one can do. I think it would raise the level of discussion if people making such postings would first try a microkernel-based operating system and then make postings like "I tried an OS based on a microkernel and I observed X, Y, and Z first hand." Has a lot more credibility.'"
Tag this article... (Score:2, Insightful)
...as flamebait.
"We've not argued about this for a while. Let's have a shouting match...Re:Tag this article... (Score:5, Informative)
Re:Tag this article... (Score:5, Insightful)
Not two, but three birds with one stone
Re:Tag this article... (Score:5, Funny)
Like jump-rope (Score:5, Funny)
(With pardons to Eddie Izzard.)
Re:Tag this article... (Score:5, Funny)
Can he stick the landing? (Score:5, Funny)
Still, two flame wars in one sentence is nothing to scoff at, which is why the artistic score will be high. However, the judges really wanted to see some sort of garbage collection vs. malloc/free or even an Intel/AMD mention. That could cost him the gold.
Let's see what the rest of the competitors have to offer.
crickets (Score:5, Funny)
(crickets)
Re:crickets (Score:4, Funny)
Re: (Score:3, Insightful)
Re:crickets (Score:5, Funny)
Linux microkernel ... (Score:5, Informative)
If you read the article, Tanenbaum reminds everyone of how Microsoft paid Ken Brown to write a book accusing Linus of stealing the Minix microkernel. FTFA:
Whoosh! (Score:2)
O <- your head
Re:crickets (Score:5, Funny)
1) RTFA
2) Have first-hand knowledge of the subject
3) Make a reasoned, non-biased post/article on the subject
Talk about a dead end.
Re:crickets (Score:5, Insightful)
Re:crickets (Score:4, Insightful)
History is full of people who didn't know how to do something but did it anyway. Knowing the broad principles is a good starting point.
I am one of those people, so I know.
Re: (Score:2)
Re: (Score:3, Interesting)
It's not hard to write a monolithic kernel that screams faster than something like Mach, but not all microkernels are like Mach. But even commercial microkernels like QNX have a lot of overhead for certain applications (like filesystem I/O).
It is possible to have a fast microkernel if you completely discard the original concept of a microkernel and start over with a fresh design. L4 is quite fast for example, even if the whole Clan thing is a bit
Re: (Score:3, Informative)
What, you have never heard of them? Well, there are other widely used computing platforms besides personal computers.
Microkernels are the future (Score:4, Insightful)
The way I see it, given the current performance of systems, getting a fast but slightly less stable kernel counts for a lot; but in the future, when the overhead imposed by a microkernel is deemed insignificant, we will see them become the norm. In many ways this is much like when we were all using SCSI CD burners because the processor couldn't keep up, but now we are all using IDE CD burners because CPUs can more than handle the task.
Re: (Score:2)
Next up, an enthralling debate about RISC vs CISC.
Re: (Score:2)
Is that another way of saying they're vaporware? Just like Duke Nukem will always be released "in the future"...
Re:Microkernels are the future (Score:5, Informative)
What does it do? Why, it's a monolithic driver that provides an interface to support userspace filesystem drivers; i.e., a microkernel in practice, if not in definition. Ergo, the grandparent's point about a slow migration.
Re: (Score:3, Informative)
What does it do? Why, it's a monolithic driver that provides an interface to support userspace filesystem drivers; i.e., a microkernel in practice, if not in definition. Ergo, the grandparent's point about a slow migration.
Unfortunately, the problem with FUSE is that it's painfully slow. And ye
Re: (Score:2)
Do you have information that a microkernel is inherently slower than a monolithic kernel?
Re:Microkernels are the future (Score:4, Informative)
SCSI and FireWire are examples of good tech working for you. The CPU should hand off instructions to devices smart enough to work on their own, leaving more cycles available for things that actually matter.
Need a safe kernel, not micro (Score:5, Interesting)
A 'safe' kernel sounds slow, because it is probably interpreting bytecodes and has garbage collection. But you also get many performance advantages:
1) idle thread can actually do something, by making programs take less room (compacting gc), offloading some of the work of free(), and optimizing code. So programs respond faster when you switch back to them.
2) lack of data copying. Current systems often copy a *lot* of data from calls to read(2), write(2) and friends, and attempts to reduce this with calls like sendfile or page sharing are very complicated and have a lot of overhead. With a 'safe' kernel you can just give a read-only view, or use any number of other very simple methods where no copying takes place.
3) the MMU can be used to optimize garbage collection (see the sketch after this list). Only pages written to since the last collection need to be checked for references to new objects, which can improve performance drastically if the instructions inserted to implement a software 'memory barrier' can be removed. It can also help run a GC in parallel, since it can easily know whether the objects it is looking at have changed during the collection.
4) can eliminate all TLB flushes and stalls from swapping page tables
5) much faster context switches mean programs can have smaller time slices, so responsiveness is improved, meaning less latency in audio (and everything else) without special hacks like magic 'realtime' processes.
6) can run on all hardware, even when lacking memory protection
7) hardware access is safer than in a micro or monolithic kernel, and drivers are easier to write
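To make point 3 concrete, here is a minimal sketch of an MMU-assisted write barrier using plain POSIX mprotect()/SIGSEGV. The heap layout and names are invented for illustration, and real collectors are more careful (mprotect() is not formally async-signal-safe, for one), but the mechanism is this:

    /* Sketch: use page protection to find pages written since the last
     * GC cycle, instead of compiling a software barrier into every store. */
    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define HEAP_PAGES 16

    static char *heap;
    static long  pagesz;
    static int   dirty[HEAP_PAGES];   /* pages touched since last GC */

    static void on_write_fault(int sig, siginfo_t *si, void *ctx)
    {
        (void)sig; (void)ctx;
        long page = ((char *)si->si_addr - heap) / pagesz;
        if (page < 0 || page >= HEAP_PAGES)
            abort();                      /* a genuine wild pointer */
        dirty[page] = 1;                  /* remember it for the GC */
        /* unprotect so the faulting store restarts and succeeds */
        mprotect(heap + page * pagesz, pagesz, PROT_READ | PROT_WRITE);
    }

    static void gc_begin_cycle(void)      /* re-arm the barrier */
    {
        memset(dirty, 0, sizeof dirty);
        mprotect(heap, HEAP_PAGES * pagesz, PROT_READ);
    }

    int main(void)
    {
        pagesz = sysconf(_SC_PAGESIZE);
        heap = mmap(NULL, HEAP_PAGES * pagesz, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct sigaction sa = {0};
        sa.sa_sigaction = on_write_fault;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        gc_begin_cycle();
        heap[3 * pagesz] = 42;       /* one fault: page 3 marked dirty */
        heap[3 * pagesz + 1] = 43;   /* no fault: page already writable */

        for (int i = 0; i < HEAP_PAGES; i++)
            if (dirty[i]) printf("page %d needs rescanning\n", i);
        return 0;
    }

Each heap page takes exactly one fault per GC cycle; after that, stores run at full speed, which is the advantage over instrumenting every store with barrier instructions.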
Re: (Score:3, Insightful)
What you want is a kernel which only runs code that comes with a proof that it doesn't do anything bad (e.g., overwrite another process's memory). This could be in terms of type safety, or it could be some other form of analysis. This doesn't mean it even has to be interpreted (though I suspect most code would be)
Both have their place (Score:3, Insightful)
In many cases the difference for the end-user is small enough that it's not worth doing things "the best way" if the tools and talent available lean the other way.
We didn't go for VHS over Beta because it had better quality video, we went for it because of marketplace and other factors.
We didn't go with a monolithic Linux over the once-Apple-sponsored MkLinux because it was inherently better for every possible task under the sun; we went with it because it was better for some tasks and good enough for others, and it had more support from interested parties, i.e. marketplace factors.
Re: (Score:2)
I used MkLinux, and it was at the time the only way to run Linux on Mac hardware. It didn't stick around for long; once the Apple sponsored developers had played with it long enough,
Re:Both have their place (Score:5, Insightful)
Like many other "this vs. that" wars, people will use arguments like yours as a cop-out to avoid any serious analysis of the design tradeoffs and the implications of those tradeoffs.
It is quite hollow to say that something is not the "best for all tasks," without some analysis as to when it is the best option, or which option has the most promise in the long term (such that it might be a good field of research).
And another debate goes on. (Score:5, Funny)
Re: (Score:2, Insightful)
There has never been a clear winner in this particular debate so there is nothing wrong with getting a fresh take on things. Maybe something has changed because somebody had a great idea.
Is/was BeOS using a microkernel? QNX is probably one of the oldest microkernels and they're still around.
Microkernels are really popular in the small device market while monolithic kernels dom
Slashvertisement (Score:2)
The easiest way to try one is to go download MINIX 3 and try it.
Re: (Score:3, Interesting)
The rea
Re: (Score:3, Informative)
Who cares about the kernel? (Score:3, Insightful)
Re: (Score:2)
hmmmm... (Score:5, Funny)
Hmmm....he must be new here ;)
Old News (Score:4, Funny)
Re:Old News (Score:5, Funny)
Problem uploading (Score:2)
At least that's my theory.
Open Source monolithic kernels (Score:2)
Personally I prefer to build as much as possible as modules, with the exception of filesystem support for / (ext3), which I prefer to build into the kernel itself, thus making an initrd unnecessary...
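For reference, that preference in .config terms (a sketch; CONFIG_USB_STORAGE is just an arbitrary example of a driver left as a module):

    # root filesystem driver compiled into the image, so no initrd is needed
    CONFIG_EXT3_FS=y
    # non-critical drivers left as loadable modules, e.g.
    CONFIG_USB_STORAGE=m
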
the Linux kernel is one of the finest pieces of software to ever be built since the beginning of
Re: (Score:3, Informative)
Loadable modules have nothing to do with micro/monolithic design. In a microkernel, those modules would have their own memory space and their own pid-like identifier assigned by the monitor, and would do all their communication with the rest of the kernel via some IPC mechanism.
When you load a module in Linux, it lives in the same memory space as the rest of the kernel and can freely exchange data with the rest of the kernel by writing directly to shared data structures. Modules don't give you a le
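The standard hello-world module boilerplate makes this concrete; once insmod'd, this code runs in kernel context, in the same address space as the scheduler, the VFS, and every other module, with no IPC boundary in sight:

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    static int __init hello_init(void)
    {
        /* executes in kernel context; any kernel pointer is reachable */
        printk(KERN_INFO "hello: loaded into the kernel address space\n");
        return 0;   /* nonzero would abort the load */
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");
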
VAX VMS (Score:2)
And that was for a COBOL programming class in college 10 years ago, while Linux was just starting to ramp up and kick ass ;)
Rise of virtualization = return of microkernel (Score:5, Interesting)
Re: (Score:2)
Re: (Score:3, Informative)
VMware ESX Server's "vmkernel" is supposed to be a custom microkernel that happens to use drivers from Linux (all device drivers run inside the vmkernel). Guest OSes (including the Linux-based service console used to manage the server) run on top of the vmkernel and access all hardware through it.
The Xen hypervisor does less than VMware's vmkernel; it loads a "dom0" guest that manages the system and performs I/O. With few exceptions, this guest is the onl
A little old... (Score:2)
So, "recently" an article was published in IEEE's May 2006 issue. Looks like this is nothing new.
Magical Unicorn kernel (Score:2)
Who cares... (Score:2)
Software Darwinism (Score:2, Troll)
Besides, what does Andy think, that we're all going to say, "Wow, you're dead on, lets rewrite Linux from scratch with a microkernel?" Linux works. Unless we reach a point where it substantially doesn't (like Windows) there's no value t
Re: (Score:2)
So, Windows, then?
Have you tried The GNU Hurd? (Score:2)
Design Philosophy (Score:5, Interesting)
The time it would take to design and implement what would be the equivalent of a driver was smaller. In the end it puts more flexibility into the hands of the application designer, with the kernel taking care of just the bare minimum. The initial work at the time reported a 10x improvement in performance, since you could customize so much of how the hardware resources were being used. This of course comes at a price: in addition to developing the application, you need to develop the drivers it uses, possibly increasing the time to write anything significant.
But in the end, flexibility was key, and you can see some of the microkernel design philosophies start to seep into the Linux kernel. Take a look at kernel modules, for example. The code is already being abstracted out; now if only it were actually designed to run in userspace.
My thoughts are that in the end the microkernel will win due to the fact that I can engineer a more complex OS that is cheaper to change, not because it is faster. This is the same compromise that was made with compilers vs. machine-language programming. In the end I think Tanenbaum will win: Linux will become a microkernel out of necessity, and Linus, as it turns out, would have gotten a good grade from Dr. Tanenbaum. He just would have handed his final project in 40 years late by the time it happens.
Re:Design Philosophy (Score:5, Interesting)
Exokernels aren't the only microkernels of interest, though. There have been efforts to produce mobile nanokernels, on the theory that drivers are generally smaller than data, so in a cluster, moving the code to the data should be more efficient on resources. The opposite extreme has been to produce kernels that span multiple systems, producing a single virtual machine. Here, kernelspace and userspace are segmented and the latency between machines is simply another sort of context switch delay, yet the overall performance is greater than a loosely-coupled cluster could ever produce.
Microkernels have a lot of potential; a lot of problems have been solved, but there are still problems that need to be solved better. E.g.: if a driver crashes, there needs to be a transaction log that permits the hardware to be returned to a valid state if at all possible, or rebooted and then rolled into the last valid state. This isn't just a software problem, it's a hardware problem as well. Being able to safely reboot individual components on a motherboard in total isolation requires more than fancy coding and software switches. You need a lot more smoothing circuits and capacitors to ensure that a reboot has electrically no impact (not so much as a flicker) on anything else.
Where microkernels would truly be "at home" would be in machines that support processor-in-memory architecture. Absurdly common function calls, instead of going to the CPU, having instructions and data fetched, and then being executed, along a long path from the OS' entry point to some outer segment of code, can be embedded in the RAM itself. Zero overhead, or damn near. It violates the principle of single-entry, single-exit, but if you don't need such a design, then why waste the cycles to support it?
Theory vs practice ... science vs engineering (Score:2, Insightful)
I do not doubt they've tried. The interesting information is why it hasn't worked. Unfortunately, people seldom publicise failures of ideas they advocate.
One very obvious impediment is the existence of privileged instructions. For example, on x86 the HLT instruction (used to trigger power savings) is pr
Re: (Score:2)
In future multicore systems with many, many cores, you'll be able to run a process (= microkernel daemon) on every core: we'll have true multitasking, and context switching will not be needed. Not that this alone is going to make microkernels happen, but it makes them more feasible.
The more you know ... (Score:2, Interesting)
Is he really still talking about this??? (Score:2, Insightful)
- He asserted that x86 architecture was doomed to extinction. Yet the majority of the -planet- uses an x86 machine of some sort as of 2008.
- He alluded to the Linux kernel being hard to port because of its ties to x86 architecture, citin
Re: (Score:3, Informative)
He asserted that x86 architecture was doomed to extinction. Yet the majority of the -planet- uses an x86 machine of some sort as of 2008.
*sigh* That old chestnut.
Every x86 processor of the last decade, whether from Intel, AMD, or VIA, is a superscalar, out-of-order, register-rich RISC internally, with a layer that decodes x86 opcodes and translates them into the native RISC code. The Transmeta chips were RISC/VLIW internally and could emulate any instruction set by loading the translation code at power-u
I think I'm right damnit, so STFU! (Score:2)
My opinion on the subject: I don't have a f'in clue, but I'll follow the mantra: use the right tool for the job. I'm sure it fits in the OS kernel world just as it fits everywhere else.
Virtualization Resolves These Issues (Score:2)
The death of the kernel? (Score:3, Interesting)
They do this using:
I believe the use of privileged hardware access and address-space separation should die out in favor of an alternative: runtime-level protection.
As more and more systems move to being based on bytecode-running virtual machines, and as JITs and hardware improve, it is becoming clearer that in the future, "static native code" (C/C++ executables and such) will die out to make room for JIT'd native code (Java/.NET).
I believe that this will happen because a JIT can and will optimize better than a static compiler that runs completely before execution. Such languages are also easier to develop with.
Once such runtimes are used, some aspects of reliability/safety are guaranteed (memory overruns cannot occur; references to inaccessible objects cannot be synthesized). By relying on these measures for security as well, we can eliminate both the need for elevated kernel access and address space/context switches. This is desirable for several reasons:
Once relying on the runtime for security and reliability, a "kernel" becomes nothing more than a thread scheduler and a hardware abstraction object library.
I believe this is the correct design for future systems, and it is my answer to the micro vs. monolithic question: neither!
Plan 9 authors: "Tanenbaum hasn't learned anything (Score:5, Interesting)
- The client-server paradigm is a good one
Too vague to be a statement. "Good" is undefined.
- Microkernels are the way to go
False unless your only goal is to get papers published. Plan 9's kernel is a fraction of the size of any microkernel we know and offers more functionality and comparable or often better performance.
- UNIX can be successfully run as an application program
`Run' perhaps, `successfully' no. Name a product that succeeds by running UNIX as an application.
- RPC is a good idea to base your system on
Depends on what you mean by RPC. If you predefine the complete set of RPCs, then yes. If you make RPC a paradigm and expect every application to build its own (cf. stub compilers), you lose all the discipline you need to make the system comprehensible.
- Atomic group communication (broadcast) is highly useful
Perhaps. We've never used it or felt the need for it.
- Caching at the file server is definitely worth doing
True, but caching anywhere is worthwhile. This statement is like saying 'good algorithms are worth using.'
- File server replication is an idea whose time has come
Perhaps. Simple hardware solutions like disk mirroring solve a lot of the reliability problems much more easily. Also, at least in a stable world, keeping your file server up is a better way to solve the problem.
- Message passing is too primitive for application programmers to use
False.
- Synchronous (blocking) communication is easier to use than asynchronous
They solve different problems. It's pointless to make the distinction based on ease of use. Make the distinction based on which you need.
- New languages are needed for writing distributed/parallel applications
`Needed', no. `Helpful', perhaps. The jury's still out.
- Distributed shared memory in one form or another is a convenient model
Convenient for whom? This one baffles us: distributed shared memory is a lousy model for building systems, yet everyone seems to be doing it. (Try to find a PhD this year on a different topic.)
How about the "CONTROVERSIAL" points? We should weigh in there, too:
- Client caching is a good idea in a system where there are many more nodes than users, and users do not have a "home" machine (e.g., hypercubes)
What?
- Atomic transactions are worth the overhead
Worth the overhead to whom?
- Causal ordering for group communication is good enough
We don't use group communication, so we don't know.
- Threads should be managed by the kernel, not in user space
Better: have a decent process model and avoid this process/thread dichotomy.
Rob Pike
Dave Presotto
Ken Thompson
Phil Winterbottom
Fix the CPU and stop this silly debate (Score:4, Informative)
If you wonder how to do modules without sacrificing the flat address space, it's quite easy: in most CPU designs, each page descriptor has a user/supervisor bit which defines whether the contents of a page are accessible from user mode. Instead of this bit, CPUs would use the target address to look up module information from another table. In other words, the CPU would maintain a map of addresses to modules, and use this map to enforce access control.
This design is not as slow as it initially might seem. Modern CPUs are very fast, and they already contain many such maps: the Translation Lookaside Buffer, the Global Descriptor Table cache, the Local Descriptor Table cache, Victim Caches, Trace Caches, you name it.
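In C pseudo-hardware, the proposed lookup would behave roughly like this (entirely hypothetical; no shipping CPU has such a table):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* one entry per protected region; in hardware this table would be
     * cached the same way the TLB caches page descriptors */
    typedef struct {
        uintptr_t base, limit;   /* address range */
        int       owner;         /* module allowed to touch it */
    } range_entry;

    static const range_entry module_map[] = {
        { 0x10000, 0x1ffff, 1 },   /* module 1: say, the scheduler */
        { 0x20000, 0x2ffff, 2 },   /* module 2: say, a disk driver */
    };

    /* the check the CPU would perform on every access, in parallel
     * with the normal TLB lookup */
    static bool access_allowed(int current_module, uintptr_t target)
    {
        for (size_t i = 0; i < sizeof module_map / sizeof *module_map; i++)
            if (target >= module_map[i].base && target <= module_map[i].limit)
                return module_map[i].owner == current_module;
        return false;              /* unmapped: fault */
    }

    int main(void)
    {
        printf("%d\n", access_allowed(2, 0x10004));  /* 0: driver poking scheduler data */
        printf("%d\n", access_allowed(1, 0x10004));  /* 1: owner access */
        return 0;
    }

Everything stays in one flat address space, so "switching modules" costs no TLB flush at all; a store is simply refused if the map says the current module doesn't own the target.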
Comment removed (Score:3, Informative)
Notes on interprocess communication (Score:5, Interesting)
As someone who's done operating system internals work and has written extensively for QNX, I should comment.
Down at the bottom, microkernels are about interprocess communication. The key problem is getting interprocess communication right. Botch that, from a performance or functionality standpoint, and your system will be terrible. In a world where most long-running programs now have interprocess communication, it's amazing that most operating systems still do it so badly.
For interprocess communication, the application usually needs a subroutine call, and the operating system usually gives it read and write. Pipes, sockets, and System V IPC are all queues. So clunky subroutine call systems are built on top of them. Many different clunky subroutine call systems: SOAP, JSON, XMLHttpRequest, CORBA, OpenRPC, MySQL protocol, etc. Plus all Microsoft's stuff, from OLE onward. All of this is a workaround for the mess at the bottom. The performance penalty of those kludges dwarfs that of microkernel-based interprocess communication.
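For the curious, here is roughly the subroutine-call shape of QNX Neutrino messaging (a sketch from memory of the API; treat the signatures as approximate and error handling as elided):

    #include <sys/neutrino.h>

    struct request { int op; int arg; };
    struct reply   { int status; int result; };

    /* client side: MsgSend() blocks until the server replies, and the
     * reply lands directly in our buffer; it behaves like a remote
     * subroutine call, not like draining a queue */
    int call_server(int coid, int op, int arg, int *result)
    {
        struct request req = { op, arg };
        struct reply   rep;

        if (MsgSend(coid, &req, sizeof req, &rep, sizeof rep) == -1)
            return -1;
        *result = rep.result;
        return rep.status;
    }

    /* server side: receive one request, do the work, unblock the client */
    void serve_one(int chid)
    {
        struct request req;
        struct reply   rep;

        int rcvid = MsgReceive(chid, &req, sizeof req, NULL);
        rep.status = 0;
        rep.result = req.arg + 1;              /* the "work" */
        MsgReply(rcvid, 0, &rep, sizeof rep);
    }

Error handling collapses to one place: as I recall, if the server dies while you are blocked, MsgSend() simply returns an error, which is exactly the part that gets messy with pipes.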
I've recently been writing a web app that involves many long-running processes on a server, and I wish I had QNX messaging. I'm currently using Python, pickle, and pipes, and it is not fun. Most notably, handling all the error cases is much harder than under QNX.
Driver overhead for drivers in user-space isn't that bad. I wrote a FireWire camera driver for QNX, and when sending 640 x 480 x 24 bits x 30 FPS (roughly 28 MB/s), it used about 3% of a Pentium III, with the uncompressed data going through QNX messaging at one frame per message. So quit worrying about copying cost.
The big problem with microkernels is that the base design is very tough. Mach is generally considered to have been botched (starting from BSD was a mistake). There have been very few good examples anyone could look at. Now that QNX source is open, developers can see how it's done. (The other big success, IBM's VM, is still proprietary.)
Incidentally, there's another key feature a microkernel needs that isn't mentioned much - the ability to load user-space applications and shared libraries during the boot process. This removes the temptation to put stuff in the kernel because it's needed during boot. For example, in QNX, there are no display drivers in the kernel, not even a text mode driver. A driver is usually in the boot image, but it runs in user space. Also, program loading ("exec") is a subroutine in a shared object, not part of the kernel. Networking, disk drivers, and such are all user-level applications but are usually part of the boot image.
Incidentally, the new head of Palm's OS development team comes from QNX, and I think we'll be seeing a more microkernel-oriented system from that direction.
Debate Needs More Clarity (Score:3, Interesting)
1) The conceptual/syntactic division of the OS code into separate 'servers' interacting through some message passing paradigm. Note that a clever build system could easily smoosh these servers together and optimize away the message passing into local function calls.
2) The division of the compiled code into separate processes, and the running of many integral parts of the OS as user processes.
Note that doing 1 and not 2 is a genuine option. If the analogy is really with object-oriented programming, then one can do what one does with OOP: program in terms of the abstraction but emit code that avoids inefficiencies. While sysenter/sysexit optimizations for L4-based microkernels (and probably others) have made IPC much cheaper on current hardware, there is still a cost for switching in and out of kernel mode. Thus it can make a good deal of sense to just shove all the logical modules into ring 0.
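A sketch of "doing 1 and not 2" in code (all names invented; the point is only that the same server source can be bound either way at build time):

    struct disk_msg { int op; long block; char buf[512]; };

    /* the disk "server", written as if it receives messages */
    static int disk_server_handle(struct disk_msg *m)
    {
        m->buf[0] = 0;   /* pretend to read m->block into m->buf */
        return 0;
    }

    #ifdef REAL_MICROKERNEL
    /* sense #2: marshal through real IPC to an isolated process
     * (ipc_send_recv and DISK_SERVER_PORT are hypothetical) */
    int disk_call(struct disk_msg *m)
    {
        return ipc_send_recv(DISK_SERVER_PORT, m, sizeof *m);
    }
    #else
    /* sense #1 only: same interface, "smooshed" into a direct call
     * in ring 0; the message passing optimizes away entirely */
    int disk_call(struct disk_msg *m)
    {
        return disk_server_handle(m);
    }
    #endif
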
--------
This brings us to the other point that needs clarification: what is it that we want to achieve? If we want to build an OS for an ATM, an embedded device, or an electric power controller, I think there is a much stronger case to be made for microkernels in sense #2. However, in a desktop system it really doesn't matter so much whether the OS can recover from a crash that will leave the applications in an unstable state. If the disk module crashes, taking its buffers with it, you don't want your applications to simply continue blithely along, so you may as well reboot.
But this is only a question of degree. There is no "microkernels wrong, macrokernels yes" answer or vice versa. It's just that each OS has a different ranking of priorities and should implement isolation of kernel 'servers' to a different degree.
----
The exact same can be said when it comes to dealing with microkernel-style development (i.e. #1). Both Linus and Tanenbaum have a point. Just as in OO programming, insisting on the abstraction of message-passing servers can sometimes serve to improve code quality, but, also as in OOP, sticking religiously to the paradigm can sometimes make things less efficient or even more confusing. Also, if you have enough developers and testers (like Linux does) you might want to sacrifice the prettiness of the abstraction for performance and count on people catching the errors.
However, what baffles me is why Tanenbaum seems to think you can't have the advantages of 1 without really having a microkernel. This is just a matter of code organization. If I want to insist that my disk system only talks to other components via a messaging API I can just do so in my code. I could even mostly do this and only break the abstraction when shared data makes a big difference.
Ultimately though it's like arguing about OOP vs. functional or dynamic vs. static. Yup, they both have some advantages and disadvantages.
I gave a presentation on the Microkernel Debate. (Score:3, Interesting)
Basically, the microkernel is a horrible example of bondage and discipline [catb.org] programming. In order to solve the low-level problem of stray memory references, the professors from academia have come up with a low-level solution: using the Memory Management Unit (MMU) to prevent these errors. Unfortunately, this "solution" does high-level collateral damage. By breaking the OS into a lot of little pieces, the u-kernels introduce inefficiency. By putting constraints on how OSes are designed, u-kernels make design, coding, and debugging more difficult. All of this to do checking that, at least in theory, could have been done at design, compile, or link time.
This error is basically caused by wishful thinking. The u-kernel advocates wish that operating systems design were less difficult. To quote Torvalds:
Criticism of microkernels is said to be almost unknown in the academic world, where it might be a career limiting move (CLM). In 1992, Tanenbaum said "LINUX is obsolete" and "it is now all over but the shoutin'" and "microkernels have won". It is now 2008, and the microkernel advocates still have nothing that can compete with LINUX in its own problem space. It is time for microkernel advocates to stop shouting.
Re:Which one? (Score:5, Informative)
Design goals (Score:4, Informative)
And he's right. If your goal is reliability and security, a microkernel is a better design. Both goals rely on limiting the amount of time (and the amount of code) spent in kernel space. "Process isolation" is the mantra.
NeXTSTEP used a hybrid kernel. It was *almost* a microkernel (based on Mach). And it was *highly* usable. It had the most usable UI in the industry, and still does in its current reincarnation as OS X.
I think microkernels still have legs.
Re: (Score:3, Insightful)
Which brings us to the question of just why those AREN'T goals? Wait, I know. It's so we can get 5 more FPS out of Unreal, right?
But seriously, even in a transaction-processing server environment, isn't it worth giving up some performance for a system that can't be crashed and can't be hacked?
Re: (Score:2)
Re: (Score:3, Insightful)
But is QNX relevant? I mean, outside the world of embedded devices?
Even agreeing that it is a nice OS, what is its market share as a desktop or server platform? Certainly less than 1/1000th of what Linux or even BSD have.
Although microkernel OSs may be "nicer" from a design point of view, on the practical side the monolithic ones are serving us very well.
Re:Which one? (Score:5, Interesting)
Is Linux relevant on the desktop? If you don't count dual-boot machines, how many Linux desktops are out there?
"Although microkernel OSs may be "nicer" from a design point of view, on the practical side the monolithical ones are serving us very well."
I have heard that argument before except it was about Unix. MS-DOS was so much faster and used less ram and drive space than Unix did.
To just dismiss microkernels because monolithic kernels are good enough is silly.
Actually, Linux is starting to take some ideas from microkernels. FUSE is a microkernel idea. Moving more device drivers into userspace is also a very good idea. It means that security issues with a driver are less likely to root the OS or take out the OS with a crash.
Stability and security are important, aren't they?
But back to your comment: yes, QNX is relevant. It is relevant because it proves that you can have a small, fast, and stable microkernel OS.
Re: (Score:3, Interesting)
But yeah, moving stuff out of the kernel is the way forward in terms of security, and that's pretty much the definition of a microkernel architecture.
I'm gonna get tarred and feathered for this but... this is of course exactly what Vista is doing: "Hey, wouldn't it be better if we stopped letting any odd piece of software talk directly to the hardware in kernel mode? If kernel mode was reserved to... y'know... the kernel?" So instead they exposed a (perfectly reasonable) API. And the only cost is that you need a new device driver for any old hardware. Like that 20-year-old joystick.
Oh, it also makes it a lot harder to get around OS restri
QNX message passing == Cleaner Architected Prog? (Score:3, Interesting)
As far as I can make out they are...
Sounds weird and restrictive, but I bet it creates a far cleaner architect
Re: (Score:3, Funny)
That's just a wild stab in the dark, though.
Re: (Score:3, Insightful)
Imagine if the people who developed and adopted Linux had said "It doesn't have 10% of the features that my current O/S has. Why should I bother with it?"
Re: (Score:2, Informative)
Re: (Score:2, Interesting)
Re:Which one? (Score:5, Informative)
Re: (Score:3, Interesting)
Therefore, a new question suggests itself: Do we really have to have a three-way debate? Micro v. Mono v. XNU (hybrid)???
Re: (Score:3, Insightful)
I don't see the point in buying QNX. They already have Singularity [microsoft.com], which seems very interesting to me. Now, I don't know much about microkernels, but the idea looks nice: let the compiler handle all the nasty IPC stuff at compile time to lower the performance penalty that comes from process context switches and such.
Re: (Score:2)
There's almost nothing a compiler can do about IPC. Since it involves a context switch and some kernel work, it's entirely dependent on the OS and hardware.
Re: (Score:3, Informative)
The compiler doesn't really handle IPC: what happens is that the compiler (or rather the loader) verifies that the programs are type- and memory-safe before allowing them to run. Then they are all loaded into a single memory space so that IPC is trivial. It's a neat concept, although not the first time it has been implemented (see an OS called 'JX').
Re:Which one? (Score:5, Interesting)
A real microkernel-based system will have a lot of the userland facilities designed to take advantage of message passing, and will probably look more like HURD or Squeak than like NT or NeXT. QNX [qnx.com] and VxWorks [windriver.com] are the only successful microkernel-based systems that I'm aware of, and frankly both of them are losing big to Linux, so we might have to say they *were* the only successful systems in the future...
Re: (Score:3, Interesting)
Re: (Score:3, Informative)
Re: (Score:3, Informative)
Re:Which one? (Score:5, Interesting)
Either you're a microkernel or you're not. Either you run filesystems and network stacks in separate, isolated processes and address spaces, or you don't. NT and OS X don't run any of that as a separate process, which was the whole point of having a microkernel. They run it in the same process space as everything else, just like Linux, Solaris, and Windows 9x. In other words, they aren't microkernels.
Yes, they have source-level design abstractions inherited from microkernels to make the design more modular. So do Linux, Solaris, and any other decent monolithic kernel, even if they didn't inherit them from microkernels. Microkernel people wasted years saying that a microkernel was needed to achieve "modularity", when the fact is that "modularity" in the design of software is not something that you can achieve only by running things in different process spaces. After 20 years they haven't realized that many parts of Linux or Solaris are more modular than their equivalents in Minix or HURD.
Re:Which one? (Score:4, Interesting)
In fact, my big pet peeve is that the microkernel people don't distinguish between source-level abstractions and process separation. I mean, Tanenbaum's arguments here pretend that the better abstractions of message passing and no shared data structures are an argument for microkernels (in the sense of true process isolation), but they are only really an argument for certain abstractions in the source.
Anyway, all kernels use some source abstractions, but presumably the reason to call some kernels 'hybrid' is that their abstractions are more robust and more thoroughly resemble the abstractions you would use in a microkernel. If you don't like the word, tell us how we should describe microkernel code that someone stripped the process isolation from.
Re: (Score:2)
Nope, Windows NT hasn't been a microkernel-based OS since 1996. Version 3 (3.1 according to the article, but I think it applies to the entire 3.x series) was microkernelish, but version 4.0 removed the microkernel aspects. This change was made to improve performance, particularly for graphics, by allowing drivers direct access to the hardware, but it buggered up the stability no end.
A modest proposal for Tanenbaum (Score:3, Interesting)
First, a couple of background questions... Andy, you believe wholeheartedly in microkernels, right? Do you believe in them more than Minix, or is this merely a shameless plug for your product, Minix?
Based on those two responses, here is my proposal: assuming you believe in microkernels more than Minix, why not take a leadership role in GNU/Hurd and get that project going again? http://www.gnu.org/software/hurd/hurd.html [gnu.org]
Perhaps, you can get assista
Re:A modest proposal for Tanenbaum (Score:5, Insightful)
His real interest is in building highly reliable, self healing operating systems. The research he has been involved in happens to demonstrate that microkernels are a good candidate towards achieving that goal, certainly better than a monolithic kernel anyway. He doesn't believe in microkernels per se, but simply as a tool that will help him achieve what is a higher goal - a highly reliable, self healing operating system. Imagine not having to reboot your computer, even when running the worst written applications or device drivers.
Re: (Score:3, Informative)
I doubt Andy would be so interested in the Hurd, he is very much the message-passing fan. He also doesn't like the GPL.
Re: (Score:3, Interesting)
Remember that Minix 3 was a fairly recent update of v2.0, which was completed in the late 80s. Minix is still a joy to work with as a programmer, but well past its time for being used as a standard OS. It's perfect for classrooms and learning kernel programming. You'd probably enjoy programming against it th
Re: (Score:3, Insightful)
Linux is a kernel, which typically has GNU userland stuff running on it. HURD is a kernel, albeit a rather strange one. Like Linux, it is designed to run with a GNU userland and any applications which run on modern POSIX OSs like BSD, Linux, and Solaris.
When people talk about running Debian on it, they mean the userspace utilities and applications from Debian, which form a good base for any experimental POSIX OS. It would not be "Debian GNU/Linux", because it would not have any Linux
Still wrong. (Score:5, Insightful)
"It may be a fine instructional OS. Great! That's awesome. I applaud it and have no qualms promoting it in that realm. Beautiful."
Not really. Minix 3 isn't trying to be a microkernel version of Linux; it is trying to be a more secure and reliable POSIX operating system. It uses a microkernel design to achieve things like self-healing and security. Adding those features to Linux would require a complete rewrite of Linux.
"If my info on GNU/Hurd was invalid, then I stand corrected. I assumed that Hurd was the microkernel with Linux (usually Debian) on top. I should have been clearer about that."
Your info is wrong: no, it doesn't run Linux on top, and no, you can't be clearer, because that statement is totally wrong.
"It's conceptually similar, in many ways, to Xen's hypervisor."
No, it isn't. Xen is a hypervisor; it isn't a microkernel. You could host Hurd and/or Minix 3 on Xen, but you can't host Xen on Minix 3 or Hurd. You don't take an OS and just run it as a service under a microkernel, and a hypervisor by itself doesn't run applications the way a microkernel OS does. The only way they are similar is that they are small, compact bits of code that provide some type of abstraction of the underlying hardware.
"In both cases, Linux isn't the only OS to be hostable on Hurd."
You don't host any OS on Hurd. You can create servers that offer the same services as a specific OS, much like Wine does under Linux.
"On the other hand, Tanenbaum isn't making apples to apples comparisons, otherwise why not take Vista to task, at the same time? Linux is nothing like Minix, so why compare the two in this way? Why not go after Solaris and others, as well?"
Did you read the article? He wasn't comparing Linux to Minix 3 at all. He didn't go after anyone.
And yes he was critical of all current Operating Systems.
"Better yet, make a super cool microkernel for Linux, support Xen-style hypervisors, or something. In other words, don't just complain, do something useful, to help out."
Um... gee, let's see: he is working on a POSIX operating system with the goals of making it secure and self-healing... Yeah, that is so useless. Just for kicks, what OS or significant piece of FOSS have you written? How have you helped?
So you have written a lot with NO UNDERSTANDING of what you are talking about.
I want some new options for moderation just for posts like this. +5 Ignorant +5 Arrogant!
Before you start telling Tanenbaum what he should do to be useful, you need to learn the difference between a hypervisor and a microkernel, the difference between Hurd and Linux, and the difference between someone with an actual education in Computer Science and yourself. Might I suggest you pick up Tanenbaum's textbook? It was of great help to Linus.