Tanenbaum-Torvalds Microkernel Debate Continues
twasserman writes "Andy Tanenbaum's recent article in the May 2006 issue of IEEE Computer restarted the longstanding Slashdot discussion about microkernels. He has posted a message on his website that responds to the various comments, describes numerous microkernel operating systems, including Minix3, and addresses his goal of building highly reliable, self-healing operating systems."
To Interject for a moment (Score:5, Informative)
First and foremost, does anyone have a torrent of Minix3? Tanenbaum is a bit worried [google.com] about getting slashdotted. If you've got one seeded, please share.
Now with that out of the way: I don't know if anyone else has tried it yet, but Minix3 is kind of neat. It's a complete OS implementing the microkernel concepts he's been expounding for years now. The upsides are that it supports the POSIX standards (mostly), can run X-Windows, and is a useful development platform. Everything is very open, and still simple enough to trudge through without getting confused by the myriad "gotchas" most OS code-bases contain. Unfortunately, it's still a long way from a usable OS.
The biggest issue is that the system is lacking proper memory management. It currently uses static data segments which have to be predefined before the program is run. If the program goes over its data segment, it will start failing on mallocs. The result is that you often have to massively increase the data segment just to handle the peak usage. Right now I have BASH running with a segment size of about 80 megs just so I can run configure scripts. That means that every instance of BASH is taking up that much memory! There's apparently a Virtual Memory system in progress to help solve this issue, so this is (thankfully) a temporary problem.
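To make the failure mode concrete, here's a trivial sketch (plain C, nothing MINIX-specific assumed, just a fixed-size data segment): allocate until the segment runs out and report where it died. This is essentially what configure scripts trip over, except they never check the return value.

    /* Sketch: on a system with a fixed data segment (MINIX 3 before
     * its VM work lands), malloc() returns NULL once the segment is
     * full, so every allocation has to be checked. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t mb = 0;
        for (;;) {
            char *p = malloc(1024 * 1024);   /* 1 MB at a time */
            if (p == NULL) {                 /* segment exhausted */
                fprintf(stderr, "malloc failed after %zu MB\n", mb);
                return 1;
            }
            memset(p, 1, 1024 * 1024);       /* actually touch the pages */
            mb++;
        }
    }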
The other big issue is a lack of threading support. I'm trying to compile GNU Pth (Portable Threads) [gnu.org] to cover for this deficiency, but it's been a slow process. (It keeps failing on the mctx stack configuration. I wish I understood what that was so I wouldn't have to blindly try different settings.)
On the other hand, the usermode servers do work as advertised. For example, the network stack occasionally crashes under VMWare. (I'm guessing it's the same memory problems I mentioned earlier.) Simply killing and restarting dhcpd actually does get the system back up and running. It's kind of neat, even though it does take some getting used to.
All in all, I think it's a really cool project that could go places. The key thing is that it needs attention from programmers with both the desire and time to help. Tossing lame criticisms won't help the project reach that goal. So if you're looking to help out a cool operating system that's focused on stability, security, and ease of development, come check out Minix for a bit. The worst that could happen is that you'll decide that it isn't worth investing the time and energy. And who knows? With some work, Minix might turn out to be a good alternative to QNX.
Page based sockets? (Score:5, Interesting)
As I understand it, as a novice, the only way to communicate or synchronize data is via copies of data passed via something analogous to a socket. A socket is a serial interface. If you think about this for a moment, you realize it could be thought of as one byte of shared memory: a copy operation is, in effect, the iteration of this one byte over the data to share. At any one moment you can only synchronize that one byte.
But this suggests its own solution. Why not share pages of memory in parallel between processes? This falls short of full access to all of the state of another process, but it would allow locking and synchronization across entire system states, and the rapid passing of data without copies.
Then it would seem the isolation of microkernels could be fully gained without the complications that arise in multiprocessing or compartmentalization.
Or is there a bigger picture I'm missing?
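To make the "one byte of shared memory" picture concrete, here's a rough sketch of the copy semantics using standard socketpair(2) (not MINIX-specific): the receiver only ever gets a kernel-made copy, so mutating the sender's buffer after the write changes nothing on the other side.

    /* Sketch: data sent through a socket is copied through the kernel;
     * the reader never sees the writer's memory, only a snapshot. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sv[2];
        char in[32], out[] = "hello";

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
            return 1;

        write(sv[0], out, sizeof out);  /* kernel copies 'out' */
        out[0] = 'H';                   /* mutate the original... */
        read(sv[1], in, sizeof in);     /* ...the copy is unaffected */
        printf("received: %s\n", in);   /* prints "hello" */
        return 0;
    }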
Re:Page based sockets? (Score:2, Interesting)
I would suggest that this will eventually make its way into kernel systems (just like any other good idea that has come from the programming language fields).
Re:Page based sockets? (Score:4, Interesting)
This is precisely what shared memory is, and it's used all over the place, in Unix and Windows both. When using it, you are of course back to shared data structures and all of the synchronization nastiness, but a) sometimes it's worth paying the complexity price, and b) sometimes it doesn't actually matter if concurrent access corrupts the data if something else is going to correct it (think packet collisions).
Still, if you have two processes that both legitimately need to read and write the same data, you probably need three processes. The communication overhead with the third process is usually pretty negligible.
There are even more exotic concurrency mechanisms that don't require copying or even explicit synchronization, but they're usually functional in nature, and incompatible with the side-effectful state machines of most OSes and applications in existence today.
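For the grandparent, a minimal sketch of what that page sharing looks like in POSIX terms (shm_open/mmap; whether current Minix supports these calls is another question). Both processes see writes immediately, with no copy and, crucially, with no serialization unless you add it yourself:

    /* Sketch: map the same page into parent and child. Writes are
     * visible to both without copying. May need -lrt to link on
     * some systems. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = shm_open("/demo_page", O_CREAT | O_RDWR, 0600);
        if (fd < 0) return 1;
        ftruncate(fd, 4096);                       /* one page */

        char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (page == MAP_FAILED) return 1;

        if (fork() == 0) {                         /* child writes */
            strcpy(page, "written by child");
            _exit(0);
        }
        wait(NULL);
        printf("parent sees: %s\n", page);         /* same page, no copy */

        shm_unlink("/demo_page");
        return 0;
    }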
Re:Page based sockets? (Score:3, Interesting)
Some other process couldn't butt in on this channel, however, since it's not registered to that socket.
Or is that how shared memory works?
Tanenbaum's point is that he can have a re-i
Re:Page based sockets? (Score:3, Informative)
The kernel is the arbiter of shared memory, sure, because that's how it works: by futzing with the VM mappings of the processes using it. It's not available to every process in the system, though; a process still has to ask the kernel for access.
But "communication" over shared memory is exactly how it works -- the size of the channel is the size of the entire shm segment. You write as much data a
Re:Page based sockets? (Score:3, Informative)
This is my opinion, but I had to say it: I personally don't like SysV. There are various ways to synchronize, and each method has advantages and disadvantages, but SysV is at the bottom of the pack if you ask me.
Process-shared pthread mutexes and conditions are much faster than SysV, because they usually don't make a system call. A disadvantage of SysV IPC that process-shared pthread mutexes share is demonstrated by the
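For anyone who hasn't seen the pshared setup, a minimal sketch (POSIX; note MAP_ANONYMOUS is near-universal but technically not POSIX). The mutex lives in memory both processes can see, so the uncontended lock/unlock path is a userspace atomic with no system call:

    /* Sketch: a PTHREAD_PROCESS_SHARED mutex placed in shared memory.
     * Uncontended lock/unlock typically stays in userspace, which is
     * the speed win over SysV semaphores. Link with -lpthread. */
    #include <pthread.h>
    #include <sys/mman.h>

    int main(void)
    {
        pthread_mutex_t *m = mmap(NULL, sizeof *m,
                                  PROT_READ | PROT_WRITE,
                                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (m == MAP_FAILED) return 1;

        pthread_mutexattr_t a;
        pthread_mutexattr_init(&a);
        pthread_mutexattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(m, &a);

        /* a fork()ed child inheriting this mapping can lock it too */
        pthread_mutex_lock(m);      /* no kernel entry when uncontended */
        /* ... touch the shared data ... */
        pthread_mutex_unlock(m);

        pthread_mutex_destroy(m);
        munmap(m, sizeof *m);
        return 0;
    }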
Race conditions? (Score:2)
There are two options: the kernel could combine pages, either physically or logically; or the kernel could leave that page open for writing by the application in question.
The easiest way would be for each application to have a page or set of pages for sending messages to the kernel; the application would vie
One of Tanenbaum's central points is that Linux is not used everywhere. In particular, it's not used anywhere that hard real-time behavior, seriously paranoid robustness (e.g. in those applications where a hardware failure should not result in a reboot), etc. are important.
The word "niche" is, much like "legacy", often used in places where a more overt dismissal would rightly be seen as unfair. The fact that Linux c
I don't think it's about works vs not works. (Score:3, Insightful)
Oblig auto analogy:
If hauling cargo is your primary objective, then you'll probably view motorcycles as badly designed while seeing vans and trucks as "better".
Only time (and code) will show which approach will result in all of the benefits of the other approach without any unacceptable deficiencies.
Re:To Interject for a moment (Score:4, Funny)
Damn right, this'll be better than the less filling/tastes great argument.
Re:To Interject for a moment (Score:5, Funny)
Wait a minute, too much information here...
Re:To Interject for a moment (Score:3, Interesting)
(Yes, I know it's an ugly hack. But it means I don't worry about giving Bash 120mb, and cc some enormous number...)
Re:To Interject for a moment (Score:3, Informative)
Pardon me? Sir? Sir? You seem to have diarrhea of the mouth and constipation of the brain.
Minix 1 & 2 codebases are indeed older than Linux. And they could have *been* Linux, except that Tanenbaum was focused on teaching. As a result, he rejected the requests to add features, thus leading to the development of Linux.
However, he has apparently decided that it's time to start a microkernel p
Re:To Interject for a moment (Score:5, Insightful)
There were a couple of good replies in there, but they all got drowned out in the noise. Soooo, I think it's a better idea to focus on how Minix might be made a viable OS rather than arguing the same nonsense all over again. As several of the posters here have already proven, they're not reading Tanenbaum's arguments anyway. So why should we expect this time to be any different from the last?
Re:To Interject for a moment (Score:4, Insightful)
Hey, Linus did win this. He was right and NOTHING has changed in the last ten years!
Computers are not that much faster than they were back then, and the need for security is no different than it was then!
Yes I am so kidding. Linus won this because at the time his goal was to get out a Unix clone that ran on the 386 as quickly as possible. Doctor Tanenbaum on the other hand was interested in a Unix clone that would run on cheap hardware and that made a very good learning tool. For his goal Minix was the better system.
Now we live in a world of gigs. It is common to have many gigs of hard drive space, at least a gigabyte of RAM, and multi-gigahertz multi-core CPUs. Not to mention that even the cheap built-in graphics chipsets would blow the doors off of any video card you could get in 1995.
For all but the biggest FPS gaming freaks, our computers are fast enough. What we want now is reliability, security, and ease of use. I use Linux every day. I depend on Linux. What I will not do is give up hope on something better than what we are using today. New ideas should be explored.
I am also a little bit disappointed at how little respect Doctor Tanenbaum has gotten on Slashdot. Linus compiled the first versions of Linux using GCC running under Minix. I am pretty sure that Linus read Doctor Tanenbaum's book and probably learned a lot about how to write an OS from it. When it comes to computer science, Tanenbaum's name is right up there with Wirth and Knuth. Of course, the odds that any of the people who use STFU in a post have ever read Knuth, Wirth, or Tanenbaum are probably not worth measuring.
Even if you are not convinced that Tanenbaum's methods are correct, his goals of a super reliable, self-healing, and secure OS are correct.
Re:This debate will never be over... (Score:4, Insightful)
you mean like Tanenbaum [minix3.org] (slashdotted, try later) did?
FTFA [cs.vu.nl]:
So **PLEASE** no more comments like "If Tanenbaum thinks microkernels are so great, why doesn't he write an OS based on one?" He did.
i don't really know what you mean by proof of concept
again, FTFA:
It is definitely not as complete or mature as Linux or BSD yet, but it clearly demonstrates that implementing a reliable, self-healing, multiserver UNIX clone in user space based on a small, easy-to-understand microkernel is doable. Don't confuse lack of maturity (we've only been at it for a bit over a year with three people) with issues relating to microkernels.
i know this is slashdot, and RTFA is some kind of mortal sin, but please at least try.
Grandma's computer never crashes (Score:4, Funny)
Tanenbaum wrote (in TFA): The average user does not care about even more features or squeezing the last drop of performance out of the hardware, but cares a lot about having the computer work flawlessly 100% of the time and never crashing. Ask your grandma.
Interesting. My mom recently bought a computer for my grandma. Grandma doesn't have a problem with the computer crashing at all. Her secret? She never turns it on.
Re:So when did we forget... (Score:5, Insightful)
Have you read the article? Tanenbaum basically starts out by saying this is not a 'fight', but a technical discussion. Communication and debate are an important part of research and development. That's what is being attempted here, at least at face value by Tanenbaum. There may be antagonism behind the scenes, or bias in presentation, but that is just human. The primary intent is to advance the state of the art, not to fight.
All this 'what's the point' or 'we have this now' type of talk really bugs me. Everything can always be improved, or at least that is the attitude I'd like to stick with.
> When did we collectively forget that everything has its place
Another key component of research and development is to question everything. Not throw everything away and always start over, but to at least question it. Just because monolithic kernels rule the desktop now doesn't prove that monolithic kernels are inherently the best desktop solution.
In effect it is sometimes good to not even recognize a notion of 'everything has its place'.
SE Linux (Score:2)
I'd be glad to give up my whole Windows platform for one Über-secured OS.
Minix is already on version 3 (Score:5, Funny)
And Linux seems to be stuck on version 2.6
And v3.12 (I think, I'm going from memory here) will finally support the X windowing system
Oh...maybe I should have left out that last sentence...kinda kills my argument
Re:Minix is already on version 3 (Score:2)
And Linux seems to be stuck on version 2.6
HAH! Windows was on version 3 SO MANY YEARS AGO. Eat your heart out, Linux!
Re:Minix is already on version 3 (Score:3, Informative)
That's odd. I could have sworn that I was just using an X-Terminal on it a few minutes ago.
Oh wait. I was using an X-Terminal. How in the world did that happen? </mock-sarcasm>
To be fair, getting X-Windows running is a recent development. On the other hand, the entire Minix3 codebase is a recent development. (Only a half-year old.) They're moving at a pretty good clip for a brand-new OS.
Re:Minix is already on version 3 (Score:3, Informative)
I'm thinking that's a ways down the road. If Minix could at least be viable for embedding into smaller, pre-configured devices, it could garner a lot more support in the device-driver arena.
And it won't even get as far as BSD unless it has a BSD-like license.
Sorry? Minix3 is distributed under the BSD license [minix3.org].
Any word on Xen compatibility?
Apparently it's up and running [google.com].
Anti-reset button fanatic (Score:4, Insightful)
They may not be labeled "reset" but they *do* have them. And, no offense, but I like having a reset button.
Whatever... (Score:3, Interesting)
Will hardware drivers be developed faster and more reliably with a microkernel? That seems to be the biggest hurdle in reliable OS development these days... Anyone have a good answer for that? I honestly don't know.
Re:Whatever... (Score:2)
Because with a microkernel, we could have proprietary drivers *cough* ATI, nVidia *cough* without having to worry about the driver messing up the system.
Re:Whatever... (Score:3, Interesting)
I noticed two things about Tanenbaum's piece though. Essentially all of the microkernels he listed were either used in dedicated (including embedded) systems or were not true microkernels by his own ad
Minix 3 screenshots (Score:5, Informative)
I almost died of boredom looking for them. Here's the link, for the lazy:
http://www.minix3.org/doc/screenies.html [minix3.org]
Re:All I want to know... (Score:3, Informative)
Minix will need some more features though; my guess is paging and threading are the major sticking points. Probably more system calls too, but VM and threading are more work.
Being able to 'leverage' the enormous existing amount of software once Minix matures a bit would let Minix 'leapfrog' its 'competition'.
Disclaimer: I am involved with the Minix project.
hey everybody (Score:5, Funny)
I'm doing a (free) operating system (just a hobby, won't be big and
professional like gnu) for 386(486) AT clones. This has been brewing
since april, and is starting to get ready. I'd like any feedback on
things people like/dislike in minix, as my OS resembles it somewhat
(same physical layout of the file-system (due to practical reasons)
among other things).
I've currently ported bash and gcc, and things seem to work.
This implies that I'll get something practical within a few months, and
I'd like to know what features most people would want. Any suggestions
are welcome, but I won't promise I'll implement them
Obligatory Ingo Molnar quote (Score:4, Funny)
http://www.ussg.iu.edu/hypermail/linux/kernel/9906.0/0746.html [iu.edu]
He is, of course, referring to all the research in the '80s and '90s on microkernels and IPC-based operating systems.
"Cars don't have reset buttons." My Prius does... (Score:5, Interesting)
It was apparently due to a firmware bug.
In any case, when it happened, according to personal reports in Prius forums from owners to whom it happened, the result was loss of internal-combustion-engine power, meaning they had about a mile of electric-powered travel to get to a safe stopping location. At that point, if you reset the computer by cycling the "power" button three times, most of the warning lights would go off, and the car would be fine again. Of course, many to whom this happened didn't know the three-push trick... and those to whom it did happen usually elected to drive to the nearest Toyota dealer for a "TSB" ("technical service bulletin" = firmware patch).
These days, conventional-technology cars have a lot of firmware in them, and I'll bet they have a "reset" function available, even if it's not on the dashboard and visible to the driver.
Hybrids are a first generation device... (Score:3, Informative)
And even if you lumped them in with cars, you have what, a few hundred Priuses that have reset buttons, among the hundreds of millions of cars? And every computer in existence still has a reset button, and at some point in time that reset button has been exercised.
I should be working... (Score:5, Interesting)
Now in theory I could see a high-availability microkernel being a good, less expensive alternative to a classic mainframe environment, especially if you had a well-written auto-healing system built in as a default. But that would require a lot of work outside the kernel that just isn't being done right now. And until it is, microkernels don't have anything more to offer than monolithic kernels.
To put it in API terms - it doesn't matter very much whether your library correctly returns an error code for every possible circumstance, when most user level code doesn't bother to check it (or just exits immediately on even addressable errors).
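In code form, the point looks something like this toy C (lib_write_config is a hypothetical library function, invented for illustration):

    /* Toy illustration: the library reports every failure; whether
     * that buys any reliability depends entirely on the caller. */
    #include <stdio.h>

    /* hypothetical library call: 0 on success, -1 on failure */
    static int lib_write_config(const char *path)
    {
        FILE *f = fopen(path, "w");
        if (f == NULL)
            return -1;                  /* error faithfully reported */
        fputs("key=value\n", f);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        lib_write_config("/no/such/dir/conf");  /* typical caller: return
                                                   value silently dropped */
        if (lib_write_config("/tmp/conf") != 0) /* careful caller */
            perror("lib_write_config");
        return 0;
    }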
Re:I should be working... (Score:3, Insightful)
Also, your API metaphor is a little bad. While you're right about the end result, saying that this invalidates the utility of the API is wrong imho. The advantage of having the API remains, be
Examples prove Linus' point (Score:3, Interesting)
Except for QNX, the systems he cites are either vaporware (Coyotos, HURD), esoteric research toys (L4Linux, Singularity), or brutally violate the microkernel concept (MacOSX, Symbian).
Even his best example, QNX, is a very niche product and hard to compare to something like Linux.
Re:Examples prove Linus' point (Score:4, Insightful)
QNX is everywhere, you just don't realize it. ATMs run it, lots of medical equipment runs it, lots of other embedded apps that you don't even think of run it.
The examples Andy cites prove that in fact the microkernel concept has won in every single field where stability has gone beyond being something people wanted to something they demand. As soon as the general public realizes computers don't HAVE to crash, they'll win there too.
Re:Examples prove Linus' point (Score:3, Insightful)
While I see your point, and agree to an extent, it's a poor metaphor (windscreen glass is a pretty niche application of glass, wouldn't you say?).
My point was to refute the implied "QNX isn't anywhere important" statement rather than the exact meaning of niche.
"But MY computer never crashes (Linux); so what else has it to offer? Security? Got that too."
That's wonderful, and my data center ful
A CPU like Kernel (Score:3, Interesting)
Sure you can recompile and all that jazz, but I'd love to see a day where an app could run on any number of kernels out there. This creates real competition.
What I'd like to see is a kernel more like a CPU. Instead of linking your kernel calls, you place them as if you were placing an assembly call. Then we can have many companies and open source organizations writing versions of it.
As we move towards multi-core CPUs this could really lead to performance gains, where one or more of many cores could be dedicated to kernel operations, listening for requests and taking care of them. No context switches needed, no privilege-mode switching.
Drivers and everything else run outside of kernel mode and use low level microcode to execute the code.
The best part, I think, is that you could make it backward compatible as we rewrite: a layer could handle old kernel calls and translate them into the micro-codes.
As we define everything more and more, we might even be able to design CPUs that handle it better.
Performance Hit of uKs unacceptable for most users (Score:3, Informative)
Okay, I spent two years as an engineer in the OSF's Research Institute developing Mach 3.0, starting in 1991. Let me answer Linus's question in a simple fashion. What Mach 3.0 bought you over Mach 2.5 or Mach 2.0 was a 12% performance hit, as every call to the OS had to make a User Space -> Kernel -> User Space transition. This was true on x86, Moto, and any other processor architecture available to us at the time. Not one of our customers found this an acceptable price to pay, and I very much doubt they would today. One of the reasons Microsoft moved a lot of functionality into the kernel between NT 3.5 and NT 4.0 was performance (NT being, at its origins, a uK-based OS).
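You can still get a feel for the cost being discussed with a crude measurement, something like this (numbers are entirely machine- and kernel-specific; this shows the shape of the experiment, not Mach's actual 12%):

    /* Crude sketch: time N cheap system calls to estimate the
     * user -> kernel -> user round-trip cost per call. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        enum { N = 1000000 };
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++)
            getppid();              /* minimal real system call */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per syscall\n", ns / N);
        return 0;
    }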
What of the advantages?
Is porting easier? No, not really; the machine-dependent code in Mach 2.5 and Mach 3.0 was already well abstracted.
You could run two OS personalities at once; for example, you could have an Apple OS and Unix running at the same time. But why would any real-world clients want to do this?
Problems in the OS personality wouldn't bring down the uKernel - but they might stop you doing any useful work while you reboot the OS personality.
Other things like distributed operating systems (and associated fault tolerance) were perhaps aided by the uK design and this is a path that, in my humble opinion, the OSF should have pursued with greater zeal than they did. Back in 1991 we had a Mach 3.0 based system that ran a uK across an array of x86 nodes but had different parts of the OS - say IO or memory management running on different nodes. From a user standpoint all the machines (in reality bog standard 386 machines linked by FDDI) looked like a single computer running a Unix like OS.
I remember discussing Linux with my colleagues back in 1993; some were impressed and thought the nascent OS model was very powerful, others dismissed it as a toy with no real future. I suspect Tanenbaum was also amongst the poo-pooers and has become pretty annoyed about how things have turned out.
Oh Tanenbaum, oh Tanenbaum, wie grün sind ... (Score:3, Funny)
Du grünst nicht nur zur Sommerszeit, nein auch im Winter, wenn es schneit.
Oh Tanenbaum, oh Tanenbaum, wie grün sind deine Blätter
For the uninformed: Tannenbaum (with double n) is the German word for fir (conifer), or a synonym for Christmas tree. The verse above is from a famous German Christmas carol ("You are green not only in summertime, no, also in winter, when it snows / O fir tree, o fir tree, how green are your leaves").
Linux Will Stay "Mono" 'Til Linus Can't Take It (Score:3, Interesting)
Years later, Tanenbaum still makes valid observations, Linus and others continue to make a rather larger project jump through the hoops, and that's fine. The results of academic research may or may not get traction outside of a university, but without the research, there wouldn't be alternatives to contemplate. If I've gathered nothing else about Linus' personality from his writings over the years, it's that he seems to be practical, not particularly hung up on architectural (or licensing) theories... unlike me.
At some point, if his current architecture just isn't doing it for him any more, he might morph into Tanenbaum's 'A' student. It won't be because a microkernel was always right, but that it was right now.
The truth about microkernels (Score:5, Informative)
So that's what you need to know about microkernels.
Re:The truth about microkernels (Score:3, Interesting)
I/O channels would help, like IBM mainframe channels, which have an MMU between the peripheral and main memory...
I've heard from a friend at Intel that their new chipsets which fully support TCPA have this feature. So maybe trusted computing isn't just about copy prevention.
Still, on availability and usability (Score:4, Insightful)
- AST is a professor. His interest is in doing research and building the best systems for the *future* that he believes in.
- Linus is an engineer. His interest is building a system that works best *today*.
We simply need both. Without the pioneering work done before in other OSes (including the failures), Linux wouldn't be what it is today. The greatest reason for its success is not that it does something cool, but that it does things that are proven to work.
So who is right? I'd say both. Linus has said this in 1992: "Linux wins heavily on points of being available now."
Linus admits microkernels are "cooler", but he didn't (and doesn't) believe in them *today* because none of the available microkernels could compete with Linux as a *general purpose* OS. It's funny how AST listed "Hurd" as one of the microkernels - it totally defeats his own argument. The fact is Hurd is still not available today despite being started before Linux.
Many people talk about QNX. Sure, in many cases (especially mission-critical RTOS work, where reliability is so much more important than performance and usability) microkernels are better, but we really shouldn't compare a general-purpose OS with a real-time or special-purpose OS.
So we go back to the old way: code talks. So far microkernel proponents keep saying "it's possible to make a microkernel fast, etc." but the fact is they have never had an OS that could replace Linux and the other popular OSes that everybody could run on their desktop with enough functionality. There are two possible reasons:
1. Lack of developers. But why? Do people tend to contribute to Linux because Linus is more handsome (than Richard Stallman, that is)? There have gotta be some reasons behind it other than opportunity, right?
2. Monolithic kernels are actually more engineerable than microkernels, at least for today.
Maybe 2 is actually the real reason?
Think about it.
What I'd Like To See (Score:4, Funny)
I'd like to see Linus say "I've done a monolithic kernel and proven its success. Now I'm going to build a performant microkernel and see what all the fuss is about." He could hand over Linux kernel development to the senior crew that's already taking care of the major modules, and try something else.
Essentially, it would be cool for someone like Linus, with his incredibly strong practical engineering bent, to do again what he did with Linux: semi-clean-sheet a new kernel that meets his performance requirements, but is designed around different strategies for achieving what every OS tries to achieve.
I bet that, in two or three years, he would recant his earlier dismissal of microkernels and say that there's actually some interesting stuff there, and along the way solve some of the perennial complaints that slashdotters always bring up whenever microkernels are mentioned. In his heart of hearts, I'm sure Linus has some legacy issues with the current kernel design that he'd love to jettison, but can't without massively re-organizing the existing architecture, in which too many interested parties are already involved.
And he could put Stallman and the HURD boys to shame *again*, which is a twofer
Micro/Macro - how about run time modifiable (Score:3, Interesting)
The point Andy makes which I agree with is that computer software is still in its infancy. The part I disagree with is that it'll change by his stating the obvious.
Mirror set up (Score:3, Informative)
http://mirrors.easynews.com/minix3 [easynews.com]
Why Microkernels Do Not Work (Score:3, Insightful)
<simplification>A hardware driver doing output has to take raw bytes from a process, which is treating the device as though it were an ideal device; and pass them, usually together with a lot more information, to the actual device. A driver doing input has to supply instructions to and read raw data from the device, distil down the data and output it as though it came from an ideal device.</simplification>
In general, the data pathway between the driver and the process {which we'll call the software-side} is less heavily used than the data pathway between the driver and the device {which we'll call the hardware-side}.
<simplification>In a conventional monolithic kernel {classic BSD}, a hybrid kernel {Windows NT} or a modular kernel {Linux or Netware}, device drivers exist entirely in kernel space. The device driver process communicates with the userland process which wants to talk to the device and with the device itself. All the required munging is done within the kernel process. In a microkernel architecture, device drivers exist mainly in user space {though there is necessarily a kernel component, since userland processes are not allowed to talk to devices directly}. The device driver process communicates with the ordinary userland process which wants to talk to the device, and a much simpler kernel space process which just puts raw data and commands, fed to it by the user space driver, on the appropriate bus.</simplification>
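As a sketch, a user-space driver in such a system is basically an event loop around the IPC primitives. Everything named here (msg_receive, msg_reply, dev_in, dev_out) is a made-up stand-in for whatever a particular microkernel actually provides; only the structure is the point:

    /* Hypothetical user-space driver loop. The IPC and port-access
     * calls are invented stand-ins, not any real kernel's API. */
    enum { OP_READ, OP_WRITE };

    struct message {
        int  sender;            /* requesting process             */
        int  op;                /* OP_READ or OP_WRITE            */
        int  len;
        char buf[512];          /* payload, copied by the kernel  */
    };

    /* provided (hypothetically) by the microkernel */
    extern void msg_receive(struct message *m);
    extern void msg_reply(int to, const struct message *m);
    extern int  dev_in(char *buf, int max);      /* hardware side */
    extern void dev_out(const char *buf, int n); /* hardware side */

    void driver_main(void)
    {
        struct message m;
        for (;;) {
            msg_receive(&m);               /* software side: request  */
            if (m.op == OP_READ)
                m.len = dev_in(m.buf, (int)sizeof m.buf);
            else
                dev_out(m.buf, m.len);
            msg_reply(m.sender, &m);       /* software side: response */
        }
    }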
Ignore for a moment the fact that under a microkernel, some process pretending to be a user space device driver could effectively access hardware almost directly, as though it were a kernel space process. What's more relevant is that in a microkernel architecture, the heavily-used hardware-side path crosses the boundary between user space and kernel space.
And it gets worse.
<simplification>In a modular kernel, a device driver module has to be loaded the first time some process wants to talk to the device {anyone remember the way Betamax VCRs used to leave the tape in the cassette till the first time the user pressed PLAY? Forget the analogy then}, which obviously takes some time. The software-side communications channel is established, which takes some time. Then communication takes place. The driver stays loaded until the user wants it removed. Then the communication channel is torn down and the memory used by the module is freed, which obviously takes some time.
In a microkernel architecture, a user space device driver has to be loaded every time some process wants to talk to the device. The software- and hardware-side communications channels have to be established, which takes some time. Then communication begins in earnest. When that particular process has finished with the device, both channels are torn down, and the memory used by the driver is freed, which takes time. Between this hardware access and the next, another process may have taken over the space freed up by the driver, which means that reloading the user space driver will take time.</simplification>
It makes good practical sense to put fences in the places where the smallest amount of data passes through them, because the overheads involved in talking over a fence do add up. That, however, may not necessarily be the most "beautiful" arrangement, if your idea of beauty is to keep as little as possible on one side of the fence. It also makes sense for device drivers which are going to be used several times to stay in memory, not be continuously loaded and unloaded. {Admittedly, that's really a memory management issue, but no known memory manager can predict the future.}
Ultimately it's just a question of high heels vs. hiking boots.
Re:Andy Tanenbaum ? (Score:3, Informative)
Read Tanenbaum's Wikipedia bio [wikipedia.org].
Re:Andy Tanenbaum ? (Score:2)
Tanenbaum had a doctorate before Linus was potty trained.
Re:Andy Tanenbaum ? (Score:3, Interesting)
Linus has written the Linux kernel used in millions of computers, ranging from PCs to mainframes.
Tanenbaum still has Minix and a doctorate.
Education means nothing if you do nothing with it. Linus has applied his education very well and progressed well beyond anything Tanenbaum has accomplished, with or without a doctorate...
Re:Andy Tanenbaum ? (Score:2)
And Tanenbaum now comes up with a kernel on which programs have problems doing malloc [slashdot.org] for dynamic memory allocation???
Goes to show who moves like a glacier and who swims around its icebergs in circles... like a penguin.
Re:Andy Tanenbaum ? (Score:2)
Perhaps so, but which one of them is wearing diapers NOW?
Go check the article out. (Score:5, Informative)
If you have a computer science degree you have probably used at least one if not more of his textbooks. He's one of the more prominent computer science researchers of the last couple decades.
Re:Andy Tanenbaum ? (Score:5, Insightful)
Really? And what exactly do you base this on? According to the article, which it's clear that you did not read, Tanenbaum simply had a recent article printed in IEEE Computer and someone on Slashdot posted a link to it, which caused Linus to weigh in with his 2 cents about something that was never directed at him. It sounds more to me like Linus is obsessed with proving that macrokernels are the only way to go. Why does he even care? It's not like Minix is a threat to Linux. If he believes so strongly that microkernels are wrong, he should just let Tanenbaum and company waste their time on them instead of endlessly arguing the same points he made years ago.
Re:Andy Tanenbaum ? (Score:2)
Minix was an Mi
Re:Andy Tanenbaum ? (Score:3, Informative)
No, actually he created two that I know of. Well, technically three since MINIX 3 is probably sufficiently different from MINIX 1 to be thought of as a different kernel. Amoeba was another microkernel-based OS designed to run on distributed systems, presenting an entire cluster as a single machine.
MINIX 1 was a teaching tool. MINIX 3 is a real OS, although still very young (less than two years old), but doing very well. Amoeba is so far ahead of Linux conceptuall
Re:Still Debating (Score:5, Insightful)
I am NOT implying that uKernels are better; I am playing devil's advocate.
Not everything that "wins" is the best... Look at Windows
Re:Still Debating (Score:2, Insightful)
Personally, I don't care either way in the micro/macro kernel debate. As long as we have people still interested in both, it's a win-win situation for us computer enthusiasts.
Debate on. Microkernels will win in the end. (Score:2, Interesting)
In the '00s, when we're flush with CPU cycles, the debate is just different. Less code running at ring 0 means less code that can cause a kernel panic, blue screen, or whatever they call it in OSX.
A significant part of the market is OK running Java. The comparatively small performance cost and high stability payoff of a mic
Re:Still Debating (Score:3, Interesting)
* QNX Neutrino. This is the most successful microkernel ever. It deserves all the praise it gets. Yet it is still a niche product.
* Hurd. After twenty years we're still waiting for a halfway stable release. Hurd development is almost an argument *for* monolithic kernels!
*Minix. This is still an educational kernel. A teaching tool. It remains unsuitable for "r
Re:Still Debating (Score:5, Interesting)
*Minix. This is still an educational kernel. A teaching tool. It remains unsuitable for "real world" use.
Actually, it's the start of a full-up microkernel operating system. This isn't your grand-pappy's Minix; it's a brand new code base under the BSD license, intended to be developed out into a complete system. It's still taking baby steps at the moment, but it's coming along quite nicely.
* NT. This is NOT a microkernel!
NT is a hybrid. It has microkernel facilities that are constantly being used for something different in each version. Early versions of NT were apparently full microkernels, but this was changed for performance.
* QNX Neutrino. This is the most successful microkernel ever. It deserves all the praise it gets. Yet it is still a niche product.
I would hardly call QNX a "niche" product. Running on everything from your car engine to Kiosk PCs (yes, that stupid iOpener ran it too), it's an extremely powerful and versatile operating system. Its Microkernel architecture even gives it the ability to be heavily customized for the needs of the application. Don't need networking? So don't run the server! Need a GUI? Just add the Graphics server to the startup.
Microkernels haven't failed. However, you may notice that nearly all the popular operating systems we use today were developed back in the late '80s and early '90s. The real problem is that there hasn't been a need to develop new OSes until now. Now that security and stability are more pressing issues than performance, we can go back to the drawing board and start designing new OSes to meet our needs for the next decade and a half.
Re:Still Debating (Score:4, Informative)
No-no-no-no-NO! I swear this kills me... Why does this myth continue to propagate? The ONLY thing about NT that was EVER uKernel-ish was that it did a lot of IPC (message passing) and that it implemented "personalities" (but it did so in a most decidedly non-microkernel way). Both of these traits were commonly associated with microkernels at the time, but regardless, the things that ACTUALLY make a kernel a microkernel never existed in NT... EVER...
Re:Still Debating (Score:3, Informative)
4. Network stacks, at least up to the transport layer, are implemented in kernel space.
Celebrity death-coding (Score:2)
Torvalds vs. Tanenbaum
Under what conditions? (Score:2)
How? Those who value performance over security will prefer the monolithic kernel approach; those who value security over performance will go with the microkernel. In a nutshell, how slow is too slow?
Re:Under what conditions? (Score:2)
For reliability, effectiveness and efficiency.
Maybe one can even define a way to measure these features, so the winner would be declared by measurement!
The Question Is (Score:5, Insightful)
If you were given the choice between rebooting your machine every 3 months or so for updates/driver installs, or never rebooting your machine but taking a 3-5% performance hit (I think this is what the most efficient uKernels waste on address space switches), which would you choose?
I know my answer. For embedded systems/media center type stuff I don't care about the 3-5% performance hit. I don't ever want to screw with them.
For my computer I don't care about rebooting every 3 months or so. I want that extra little bit of speed.
Re:I'm not sure where this is going? (Score:5, Insightful)
"For small embedded environments where speed or device support isn't a main concern. Micro-kernels will excel for their stability but take a look around and that's not reality or what we have today. We have lots of different hardware, lots of different interfaces and to manage that all via objects it'll just be extremely large."
And none of that has anything to do with monolithic versus microkernel, except perhaps tangentially. Microkernels do not ask each device driver to be a server all its own with zero code reuse, they use generic servers to wrap drivers for specific hardware while still isolating them from kernel space. This means there's no functional difference to the driver programmer from a monolithic to a microkernel architecture, either way you look at the driver interface and write the necessary code.
"If you think the linux kernel is big the relevant code for this would be numerous times larger. It just pushes the code from the kernel into userspace and you will definitely need more code to manage and access data structure"
Why do you suddenly need more code to do the same thing? Andy's point is that when you stop sharing data structures, and instead start passing messages from one discrete server to another through well-defined interfaces, you reduce the amount of complexity (and therefore code) involved in protecting the coherency of those data structures. You will end up with more interfaces, but that's not necessarily a bad thing. I'd gladly trade all of the critical-section protection logic for some nice interface logic, especially since making the latter work reliably is a hell of a lot easier to do, and it gives each subsystem the freedom to rework their internals without requiring me to lift a finger.
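Concretely, the trade looks something like this sketch (the types, ipc_call, and the FS_* names are illustrative, not any real kernel's interface): the server owns the data structure outright and serializes access by handling one message at a time, so the caller has no locks to take.

    /* Sketch: replacing a shared, locked structure with a request/
     * response message to the server that owns it. */
    #include <stddef.h>
    #include <string.h>

    struct fs_request  { int op, inode, offset, len; };
    struct fs_response { int status; char data[512]; };

    enum { FS_SERVER = 3, FS_READ = 1 };

    /* hypothetical kernel IPC primitive */
    extern int ipc_call(int server, const void *req, size_t reqlen,
                        void *resp, size_t resplen);

    int read_block(int inode, int offset, char out[512])
    {
        struct fs_request  rq = { FS_READ, inode, offset, 512 };
        struct fs_response rs;

        /* no shared inode table, no lock ordering to get wrong */
        if (ipc_call(FS_SERVER, &rq, sizeof rq, &rs, sizeof rs) < 0)
            return -1;
        memcpy(out, rs.data, sizeof rs.data);
        return rs.status;
    }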
"If you can isolate your facets and only plan on supporting X number of devices/platforms/chipsets/etc and don't expect any blazing performance. Microkernels are great. Beyond that? With the rate that technology moves, it just becomes a management nightmare."
There's still no credible evidence to suggest that microkernel performance is that horrible, especially with modern clock speeds. Aside from gaming and large scientific compute clusters, very little being done today on a computer uses any significant measure of its speed. We've already covered how you're totally off base on device support (i.e. it's orthogonal to the entire debate), and you throw "management nightmare" out there without bothering to define it, let alone defend it.
Large unix systems are already complex as hell to manage. A lot of that complexity is "hidden" in the kernel, which while fine for desktop users is a big pain for system administrators, and would be exposed for manageability in a microkernel setup.
As for OS X and its performance, it's not horribly slow, especially considering that your complaint almost certainly centers around PPC performance, not x86, where it was hampered by lower clock speeds that were not counterbalanced by better IPC in any significant fashion. OS X's memory hunger has little to do with the kernel and lots to do with its operating environment, and all of the gee-whiz graphical functionality that OS X brings along with it.
Ultimately, though, OS X performance is a success story, because on a 700MHz G3 with 256MB of RAM it's actually usable. Have you tried running Windows XP on a similar setup? Tried turning all of the eye candy on? Bet you didn't like the way it performed either.
Re:There is on "one true solution" (Score:4, Insightful)
More to the point, ``because it's faster'' has been the bane of Unix. To see that in stark relief, look at the shambles of NFS being in the kernel. Rather than fix the generic problems of providing a user-space nfsd, we saw a race into the kernel for a cheap my-code-only win, plus the horror of system calls that never return. Look at the vogue for in-kernel windowing systems (Suntools, for example) although X mercifully killed that off. Repeatedly we've seen massively complex and invasive kernel subsystems produced, when a generic solution to the problems that going into the kernel allegedly solves would have benefitted everyone for longer.
You've got a problem. You decide to solve it with a kernel extension. Now you've got two problems.
ian
Re:Plug central (Score:5, Insightful)
It's apparent from this thread that one needs no expertise whatsoever to talk about operating system kernel design, so running MINIX should if anything overqualify you.
Re:OOP (Score:4, Funny)
Pizza
Let's see... (Score:3, Insightful)
On the monolithic kernel side we have
I think, after you allow for the 20 year head start, microkernels aren't doing that badly.
Re:Let's see... (Score:3, Interesting)
And Minix 3 doesn't count for real world use. It may be a goo