Microkernel: The Comeback?
bariswheel writes "In a paper co-authored by the Microkernel Maestro Andrew Tanenbaum, the fragility of modern kernels is addressed: "Current operating systems have two characteristics that make them unreliable and insecure: They are huge and they have very poor fault isolation. The Linux kernel has more than 2.5 million lines of code; the Windows XP kernel is more than twice as large." Consider this analogy: "Modern ships have multiple compartments within the hull; if one compartment springs a leak, only that one is flooded, not the entire hull. Current operating systems are like ships before compartmentalization was invented: Every leak can sink the ship." Clearly one argument here is that security and reliability have surpassed performance in terms of priorities. Let's see if our good friend Linus chimes in here; hopefully we'll have ourselves another friendly conversation."
Eh hem. (Score:4, Insightful)
Isn't SELinux kinda like compartmentalization of the OS?
Re:Eh hem. (Score:3, Informative)
Re:Eh hem. (Score:5, Funny)
Could someone put this into an easy-to-understand car analogy, like the good Lord intended?
Re:Eh hem. (Score:4, Informative)
No, it's compartmentalization of the applications. Besides, the analogy is really bad because a ship with a blown compartment is still quite useful. A computer with a blown network driver will, for example, break any network connections going on; in other words, a massive failure. What about a hard disk controller which crashes while data is being written? Drivers should not crash, period. Trying to make a system that could survive driver failure will just lead to kernel bloat with recovery code.
O Tanenbaum... (Score:3, Funny)
Re:O Tanenbaum... (Score:5, Funny)
O Tanenbaum, O Tanenbaum
Your microk3rn3l rul3z!
O Tanenbaum, O Tanenbaum
Those m0n0lithic foolz!
They build a kernel all-in-one,
Where all the bugs can have free run.
O Tanenbaum, O Tanenbaum
Those Linux guys just drool.
The unsinkable Kernel (Score:5, Funny)
FULL SPEED AHEAD!
Re:The unsinkable Kernel (Score:3, Informative)
Re:The unsinkable Kernel (Score:3, Interesting)
Re:The unsinkable Kernel-Just add water. (Score:3, Interesting)
Engineering estimates are that it might have added 3-5 hours onto the Titanic's lifespan, enough to save
Re:The unsinkable Kernel (Score:4, Funny)
How hard... (Score:3, Interesting)
Re:How hard... (Score:3, Funny)
Well, I hear that GNU/HURD is in the making...
A false dichotomy (Score:5, Insightful)
When viewed as a Platonic Ideal, a microkernel architecture is a useful way to think about an OS, but most real-world applications will have to make compromises for compatibility, performance, quirky hardware, schedule, marketing glitz, and so on. That's just the way it is.
In other words, I'd rather have a microkernel than a monolithic kernel, but I would rather have a monolithic kernel that does what I need (runs my software, runs on my hardware, runs fast) than a microkernel that sits in a lab. It is more realistic to ask for a kernel that is more microkernel-like, but still does what I need.
Re:How hard... (Score:3, Informative)
Re:How hard... (Score:3, Informative)
Also, in the early 90's Tenon Intersystems had a MacOS running on Mach that had
Re:How hard... (Score:5, Interesting)
The current state is that Linux is essentially coming around to a microkernel view, but not the classic microkernel approach. And the new idea is not one that could easily grow out of a classic microkernel, but one that grows naturally out of having a macrokernel but wanting to push bug-prone code out of it.
Re:How hard... (Score:3, Insightful)
Mklinux (Score:3, Informative)
Re:How hard... (Score:3, Insightful)
Or... (Score:5, Funny)
Best of both worlds, no? Wow, I wish someone would make such an operating system...
Re:Or... (Score:3, Interesting)
Re:We can dream. (Score:3, Funny)
NT4 (Score:3, Interesting)
I haven't looked at GNU/Hurd, but I have yet to see a "proper" non-academic microkernel which lets one part fail while the rest remains.
Re:NT4 (Score:4, Interesting)
Well, I wouldn't call NT's kernel a microkernel in any way for the very reason that it was not truly compartmentalised and the house could still be brought very much down - quadruply so in the case of NT 4. You could call it a hybrid, but that's like saying someone is a little bit pregnant. You either are or you're not.
Re:NT4 (Score:3, Informative)
This, as peop
QNX ! (Score:5, Informative)
QNX [qnx.com], but it isn't open source.
VxWorks [windriver.com] and a few others would also fit.
Re:QNX ! (Score:4, Insightful)
The scheduler, for example, is real-time only, so for non-real-time applications it is questionable at best. A simple problem to address in the open source world but, apparently, "not a high priority" for the manufacturer of this fine technology.
-rant-
I fail to understand the point of closed source kernel implementations. The kernel is now a commodity.
-/rant-
]{
Trusted Computing (Score:3, Interesting)
Re:Trusted Computing (Score:3, Insightful)
Trusted computing merely checks that the code hasn't changed since it was shipped. This verifies that no new bugs have been added and that no old bugs have been fixed.
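At bottom that check is just a hash comparison. A minimal sketch, assuming OpenSSL's SHA-256 (the function name and the idea of a digest recorded at ship time are mine for illustration, not any real TC interface; real trusted computing anchors the measurement in hardware like a TPM rather than in code like this):

    /* Integrity check in miniature: hash the binary, compare to the
     * digest recorded when it shipped. This proves the bits are the
     * shipped bits, bugs and all, which is exactly the point above. */
    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    int matches_shipped_digest(const char *path,
                               const unsigned char expected[SHA256_DIGEST_LENGTH]) {
        FILE *f = fopen(path, "rb");
        if (!f) return 0;

        SHA256_CTX ctx;
        SHA256_Init(&ctx);
        unsigned char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            SHA256_Update(&ctx, buf, n);
        fclose(f);

        unsigned char digest[SHA256_DIGEST_LENGTH];
        SHA256_Final(digest, &ctx);
        return memcmp(digest, expected, sizeof digest) == 0;
    }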
multicompartment isolation (Score:3, Insightful)
Solution: better code management and testing.
Re:multicompartment isolation (Score:5, Insightful)
Re:multicompartment isolation (Score:3, Informative)
Re:multicompartment isolation (Score:3, Interesting)
Re:multicompartment isolation (Score:3, Insightful)
It actually took hitting something like half the compartments to sink her. If it had hit just one fewer compartment, she would have stayed afloat. In contrast, one hole in a non-compartmentalized ship can sink it.
That is no different than an OS. In just about any monolithic OS, one bug is enough to sink them.
Re:multicompartment isolation (Score:5, Funny)
Re:multicompartment isolation (Score:3, Informative)
I've coded under QNX a lot and would strongly disagree with your view on the message passing overhead. From this QNX [wikipedia.org] page:
QNX interprocess communication consists of sending a message from one process to another and waiting for a reply. This is a single operation, called MsgSend. The message is copied, by the k
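For the curious, the send/receive/reply cycle looks roughly like this (a sketch from memory of the Neutrino API; treat the setup details as approximate rather than gospel):

    /* QNX-style synchronous message passing, sketched.
     * Server: create a channel, block in MsgReceive, answer with MsgReply.
     * Client: one MsgSend is the whole round trip: send, wait for reply. */
    #include <sys/neutrino.h>

    struct req { int op; };
    struct rep { int status; };

    void serve(void) {
        int chid = ChannelCreate(0);
        for (;;) {
            struct req r;
            int rcvid = MsgReceive(chid, &r, sizeof r, NULL); /* blocks */
            struct rep a = { r.op + 1 };                      /* do the work */
            MsgReply(rcvid, 0, &a, sizeof a);                 /* unblocks client */
        }
    }

    int call(int server_pid, int chid) {
        int coid = ConnectAttach(0, server_pid, chid, _NTO_SIDE_CHANNEL, 0);
        struct req r = { 41 };
        struct rep a;
        MsgSend(coid, &r, sizeof r, &a, sizeof a);
        return a.status;
    }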
The thing is... (Score:5, Interesting)
Re:The thing is... (Score:5, Insightful)
Compartments do interfere with efficient operation, which is why Titanic's designers only went halfway. Full watertight bulkheads and a longitudinal one would have screwed up the vistas of the great dining rooms and first class cabins. It would also have made communication between parts of the ship more difficult, as watertight bulkheads tend to have a limited number of doors.
The analogy is actually quite apt: more watertight security leads to decreased usability, but a hybrid system (Titanic's) can only delay the inevitable, not prevent it, and nothing really helps when someone is lobbing high explosives at you by surprise.
Analogies are like... something. (Score:4, Insightful)
When it comes to security, imagine aliens trying to take over your ship...
This has got to be the best juxtaposition of two sentences ever found on Slashdot.
Theory Vs. Practice (Score:4, Interesting)
Re:Theory Vs. Practice (Score:4, Informative)
Just to clear things up, my understanding is that Tanenbaum is advocating moving the complexity out of kernel space into user space (drivers, for instance). So you wouldn't be lowering the size/complexity of the system altogether; you'd just be moving huge portions of it to a place where it can't do as much damage. Then the kernel just becomes one big manager which tells the rest of the OS what it's allowed to do and how.
- shazow
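To make that concrete: under such a design a driver is just an ordinary process in a request/reply loop, and the manager decides what it may ever touch. A sketch with an invented request protocol (the opcodes and the fd conventions are illustrative only):

    /* A user-space OS server is just a process in a request/reply loop.
     * It is handed its resources by the manager, and it can be killed
     * and restarted without taking the kernel down with it. */
    #include <unistd.h>

    struct request { int op; int arg; };
    struct reply   { int status; int value; };

    int main(void) {
        struct request rq;
        /* Requests arrive from the manager on fd 0; replies go out fd 1. */
        while (read(0, &rq, sizeof rq) == (ssize_t)sizeof rq) {
            struct reply rp = { 0, 0 };
            switch (rq.op) {
            case 1:  rp.value = rq.arg * 2; break; /* stand-in for real work */
            default: rp.status = -1;        break; /* unknown op: refuse it */
            }
            write(1, &rp, sizeof rp);
        }
        return 0; /* and if it crashes instead, only this process dies */
    }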
Re:Theory Vs. Practice (Score:4, Interesting)
Re:Theory Vs. Practice (Score:5, Funny)
- Jan L.A. van de Snepscheut
Sorry, couldn't resist.
A compromise needs to be made. (Score:5, Interesting)
The hardware-manipulating parts of the kernel should stick to providing higher-level APIs for most bus and system protocols and providing async I/O for kernel and user space. If most kernel mode drivers that power your typical
Or am I just crazy?
Yeah, but microkernels seem like taking things to an extreme that can be accomplished by other means.
Re:A compromise needs to be made. (Score:4, Interesting)
It doesn't necessarily make it less crash prone. But it does make it instrumentable if it proves to be unstable (you could easily trace, debug, intercept, or otherwise validate the requests the blob made if so needed).
Furthermore, the kernel mode portion would merely be relaying commands to trusted memory-mapped regions and IO space requested by the process initially (limited by configuration files, perhaps). Most kernel crashes are caused by errors (pointer mistakes, buffer overflows, race conditions, etc.) in the complex driver code which "trap" the system in kernel space. The user space portion would likely instead SIG11 and die... if it left the hardware in a weird state, that could be fixed by simply restarting the driver program, which would, at its outset, send RESET-type commands to the device, putting it in a known state.
The largest problem I see is that it isn't possible to easily recast a userspace driver program into a device node without a mechanism like FUSE. It only works if the hardware target in question is nearly always accessed behind a userspace library (OpenGL, libalsa/libjack/OpenAL, libusb).
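The restart half of that is genuinely easy. A minimal supervisor sketch (the driver binary name is hypothetical, and the driver itself is assumed to reset its device at startup as described above):

    /* Relaunch a user-space driver whenever it dies. The driver is
     * expected to send its device RESET commands at startup, putting
     * the hardware back into a known state. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        for (;;) {
            pid_t pid = fork();
            if (pid == 0) {
                execl("./webcam-driver", "webcam-driver", (char *)NULL);
                _exit(127); /* exec failed */
            }
            int status;
            waitpid(pid, &status, 0);
            if (WIFSIGNALED(status))
                fprintf(stderr, "driver died on signal %d, restarting\n",
                        WTERMSIG(status));
            sleep(1); /* don't spin if it crashes instantly */
        }
    }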
Proof is in the pudding (Score:5, Interesting)
But most design is about tradeoffs, and it seems like the tradeoff with microkernels is compartmentalism vs. speed. Frankly, most people would rather have speed, unless the security situation is just untenable. So far it's been acceptable to a lot of people using Linux.
Notably, if security is of higher import than speed, people don't reach for micro-kernels, they reach for things like OpenBSD, itself a monolithic kernel.
Re:Proof is in the pudding (Score:4, Insightful)
Re:Proof is in the pudding (Score:3, Insightful)
Certain types of security flaws are much harder to exploit when the OS addresses memory in unpredictable ways.
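Easy to see for yourself with a tiny demo (assuming a system with address-space layout randomization enabled; build it position-independent if you want the code address to move too). Run it twice and compare the output:

    /* Under ASLR these addresses change from run to run, which is what
     * makes exploits that hardcode addresses unreliable. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int local = 0;
        void *heap = malloc(16);
        printf("stack %p  heap %p  code %p\n",
               (void *)&local, heap, (void *)main);
        free(heap);
        return 0;
    }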
Other design principles, which encourage access log review, aid the security of the system without having anything to do with code review.
Re:Proof is in the pudding (Score:3, Insightful)
The problem is the hardware is optimized for something else now. Also, modern programmers who only know Java can't code ASM or understand the hardware worth a damn. I should know, I have to try and teach them.
And yes, all people care about is speed, because you cannot benchmark security, and benchmarks are all marketing people understand, and gamers need som
Re:Proof is in the pudding (Score:4, Funny)
Hindsight is 20/20 (Score:4, Insightful)
"The limitations of MINIX relate at least partly to my being a professor: An explicit design goal was to make it run on cheap hardware so students could afford it. In particular, for years it ran on a regular 4.77 MHZ PC with no hard disk. You could do everything here including modify and recompile the system. Just for the record, as of about 1 year ago, there were two versions, one for the PC (360K diskettes) and one for the 286/386 (1.2M). The PC version was outselling the 286/386 version by 2 to 1. I don't have figures, but my guess is that the fraction of the 60 million existing PCs that are 386/486 machines as opposed to 8088/286/680x0 etc is small. Among students it is even smaller. Making software free, but only for folks with enough money to buy first class hardware is an interesting concept. Of course 5 years from now that will be different, but 5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5."
Re:Hindsight is 20/20 (Score:3, Funny)
Oops!
I misspelled "Duo Core Intel Mac"!
Interesting correlation (Score:3, Interesting)
friendly conversation (Score:3, Insightful)
"Andy:
The most interesting part: "Linus: The very
This has always bugged me about this argument (Score:4, Insightful)
In Andrew Tanenbaum's world, a driver developer can write a driver, and not even realize the thing is being restarted every 5 minutes because of some bug. This sort of thing could even get into a shipping product, with who knows what security and performance implications.
Re:This has always bugged me about this argument (Score:3, Insightful)
Restarting drivers (Score:5, Informative)
Drivers have measurably more bugs in them than other parts of the kernel. This has been shown by many studies (see the third reference in the article). This can also be shown empirically: modern versions of Windows are often fine until a buggy driver gets on to them and destabilises things. Drivers are so bad that XP even warns you about drivers that haven't been through checks. Saying people should be careful just doesn't cut it and is akin to saying people were more careful in the days of multitasking without protected memory. Maybe they were, but some program errors slipped through anyway, bringing down the whole OS when I used AmigaOS (or Windows 95). These days, if my web browser misbehaves at least it doesn't take my word processor with it; losing the web browser is pain enough.
In all probability you would know that a driver had to be restarted, because there's a good chance its previous state had to be wiped away. However, a driver that can be safely restarted is better than a driver that locks up everything that touches it (ever had an unkillable process stuck in the D state? That's probably due to a driver getting stuck). You might even be able to do a safe shutdown and lose less work. From a debugging point of view I prefer not having to reboot the machine to restart the entire kernel when a driver goes south - it makes inspection of the problem easier.
(Just to prove that I do use Minix, though, I shall say that killing the network driver results in a kernel panic, which is a bit of a shame. Apparently the state is too complex to recover from, but perhaps this will be corrected in the future.)
At the end of the day it would be better if people didn't make mistakes but since they do it is wise to take steps to mitigate the damage.
Re:Restarting drivers (Score:3, Informative)
However, the driver certification program is to some extent a waste of time anyway:
Less a comeback than hardware has caught up (Score:4, Interesting)
zbuffering - go back to any book from the 1970s, and it sounds like a pipe dream (more memory needed for a high-res zbuffer than in entire computer systems of the time)
Lisp, Prolog, and other high-level languages on home computers - these are fast and safe options, but were comically bloated on typical hardware of 20 years ago.
Operating systems not written in assembly language - lots of people never expected to see the day.
Cue the peanut gallery. (Score:5, Interesting)
"Fact": Micorkernel systems perform poorly due to message passing overhead.
Fact: Mach performs poorly due to message passing overhead. L3, L4, hybridized kernels (NT executive, XNU), K42, etc, do not.
"Fact": Micorkernel systems perform poorly in general.
Fact: OpenBSD (monolithic kernel) performs worse than MacOS X (microkernel) on comparable hardware! Go download lmbench and do some testing of the VFS layer.
Within the size of L1 cache, your speed is determined by how quickly your cache will fill. Within L2, it's how efficient your algorithm is (do you invalidate too many cache lines?) -- smaller sections of kernel code are a win here, as much as good algorithms are a win here. Outside of L2 (anything over 512k on my Athlon64), throughput of common operations is limited by how fast the RAM is -- not IPC throughput. Most microkernel overhead is a constant value -- if your Linux kernel is O(n) or O(1), then it's possible to tune the microkernel to be O(n+k) or O(1+k) for the equivalent operations. The faster your hardware, the smaller this value of k, since it's a constant value. L4Linux was 4-5% slower than "pure" Linux in 1997 (see the L4Linux site for the PDF of the paper [l4linux.org]).
But none of this is something the average slashdotter will do. No, I see lots of comments such as "microkernels suck!" already at +4 and +5. Just because Mach set back microkernel research by about 20 years doesn't mean that all microkernels suck.
Re:Cue the peanut gallery. (Score:5, Informative)
Do you actually want people to take you seriously when you post utter shit like this?
That is a veiled lie. Mach performed very poorly mostly because of message _validation_, not message passing (although it was pretty slow at that too). I.e., it spent a lot of cycles making sure messages were correct. L3/L4 and K42 simply don't do any validation; they leave it up to the user code. In other words, once you put back in userland the validation that Mach had in kernel space, things are a bit more even. And for the love of god, NT is NOT a microkernel. It never was a microkernel. And stop using the term "hybrid"; all "hybrid" means is that the marketing dept. wanted people to think it was a microkernel...
Now I will throw a few "facts" at you. It is possible, with a lot of clever trickery, to simulate message passing using zero-copy shared memory (this is what L3/L4/K42/QNX/etc. do... any microkernel wanting to do message passing quickly has to). And if done correctly it CAN perform in the same league as monolithic code for many things where the paradigm is a good fit. But there are ALWAYS situations where it is going to be desirable for separate parts of an OS to directly touch the same memory in a cooperative manner, and when this is the case a microkernel just gets in your damn way...
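The shape of that trickery, stripped to the bone: map one region into both address spaces and hand over offsets, never bytes. A toy sketch (real systems replace the busy-wait flag with futexes or kernel-assisted wakeups, and add access control):

    /* "Message passing" through shared memory: the payload is written
     * once and read in place; the kernel never copies it. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct mailbox {
        volatile int ready;   /* toy synchronization; real code uses futexes */
        char payload[4096];
    };

    int main(void) {
        struct mailbox *m = mmap(NULL, sizeof *m, PROT_READ | PROT_WRITE,
                                 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (fork() == 0) {    /* "sender" */
            strcpy(m->payload, "hello from another address space");
            m->ready = 1;     /* publish; no copy ever happened */
            _exit(0);
        }
        while (!m->ready)     /* "receiver" spins (toy only) */
            ;
        printf("%s\n", m->payload);
        wait(NULL);
        return 0;
    }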
Ok... Two things. OpenBSD is pretty much the slowest of all BSD derivatives (which is fine; those guys are more concerned with other aspects of the system, and its users are as well), so using it in this comparison shows an obvious bias on your part... Secondly, and please listen very closely because this bullshit needs to stop already: !!OSX IS NOT A MICROKERNEL!! It is a monolithic kernel. Yes, it is based on Mach, just like mkLinux was (which also was not a microkernel). Let's get something straight here: being based on Mach doesn't make your kernel a microkernel, it just makes it slow. If you compile away the message passing and implement your drivers in kernel space, then you DO NOT have a microkernel anymore.
So what you actually said in your post could be re-written like this:
Fact: OSX is sooooo slow that the only thing it is faster than is OpenBSD. And you can't even blame its slowness on it being a microkernel. How pathetic... Wow, that says it all in my book :)
And no, you don't have to believe me... Please read this [usenix.org] before bothering to reply.
Cue the peanut gallery redux. (Score:3, Interesting)
Actually, OS X was within a few percentage points of Linux on all hardware tested, even outperforming it on memory throughput on PowerPC and in some other tests. It's also faster than NT.
"But there are ALWAYS situations where it is going to be desirable for seperate parts of an OS to directly touch the same memory in
driver banishment (Score:4, Interesting)
There are quite a few drivers out there to support weird hardware (like webcams and such) that are just not fully stable. It would be nice to be able to choose whether a driver runs in kernel mode, at full speed, or in a sort of DMZ with reduced performance. This could also make it easier to reverse engineer non-GPL kernel drivers, as well as facilitate driver development.
Hurd in Google's summer-of-code (Score:4, Informative)
Has anyone tried? (Score:3, Insightful)
Well, the nice thing about software in ROM is that you can't write to it. If you can't inject your own code, and unplugging and replugging the device does a full reset back to the factory code, then there is a very limited amount of damage a hacker can do.
Then too, sets capable of receiving a sophisticated digital signal (HDTV) have only recently come into widespread use. To what extent has anyone even tried to gain control of a TV set's computer by sending malformed data?
What, like VM boundaries are the only way? (Score:5, Insightful)
"In the 1980s, performance counted for everything, and reliability and security were not yet on the radar" is remarkable. Not on whose radar? MVS wasn't and z/OS isn't a microkernel either, and the NCSC didn't give out B1 ratings lightly.
One thing I found interesting is the notion of running a parallel virtual machine solely to sandbox drivers you don't trust.
Minix (Score:3, Informative)
Guess what he told me: a revamped version of Minix is coming.
Re:Feh. (Score:5, Interesting)
WRONG.
Tanenbaum's research is correct, in that a microkernel architecture is more secure, easier to maintain, and just all around better. The problem is that early microkernel architectures killed the concept back when most of the OSes we use today were being developed.
What was the key problem with these kernels? Performance. Mach (one of the more popular research OSes) incurred a huge cost in message passing, as every message was checked for validity as it was sent. This wouldn't have been *so* bad, but it ended up worse because of a variety of flaws in the Mach implementation. There was some attempt to address this in Mach 3, but the project eventually tapered off. Oddly, NeXT (and later Apple) picked up the Mach kernel and used it in their products. Performance was fixed partly through a series of hacks, and partly through raw horsepower.
Beyond that, you might want to read the rest of TFA. Tanenbaum goes over several other concepts that are hot at the moment, including virtual machines, virtualization, and driver protection.
Re:Feh. (Score:3, Informative)
O RLY [anandtech.com]
Re:Feh. (Score:5, Informative)
There were several flaws in their tests:
1. They used the GCC 3.x compiler instead of the GCC 4.x compiler shipping with Tiger, because the Linux distros they were comparing against had not yet updated to GCC 4.x.
2. They did not include the OS X specific patches to alter the threading mechanism. This caused a significant performance hit, as MySQL was written for the Linux threading model rather than a Mach or more generic model.
3. Binary builds with OS X specific patches were available for download via links from the official sites. There was no need to compile a crippled version.
4. They should have also tested the free/evaluation versions of Oracle, as there are optimized versions available for both Linux and OS X. Assuming this was not a test of only OSS but rather of performance as a "server", I do not see why they did not include it.
Virtualization (Score:5, Insightful)
It seems reasonable to think that a tiny microkernel built for virtualization and able to support multiple virtual OSes with minimal overhead is really going to be a very attractive platform. If we then get minimal, very application-specific kernels to run on top of it for specific needs, we could get an environment in which various applications (http servers, databases, network servers of other sorts, browsers) could run in secure environments which could leverage multi-processor architectures, provide for increased user security, make inter-OS communications work nicely and generally be a Good Thing. Certainly that would not prohibit complete unix/MS/??? systems from running as well. (Granting, of course, that OS vendors go along with the idea, which some of the big players may find economically threatening.)
Could be very fun stuff and make viable setups that are currently difficult or impossible to manage well.
Re:Virtualization (Score:3, Interesting)
The article (bad me, I don't read linked articles, I know) actually mentions hypervisors as sort-of microkernels, so yes. Xen and VMware ESX fit rather well into such a mapping, as they both have their own 'kernel', and the controller domain is merely another virtual machine.
"If we then get minimal, very application specific kernels to run on top of it for specific needs"
You can accomplish a similar setup by just fudging your
Tanenbaum is wrong, and should know it (Score:5, Informative)
Kernels don't often crash for reasons related to lack of memory protection. It's quite silly to imagine that memory protection is some magic bullet. Kernel programmers rarely make beginner mistakes like buffer overflows.
Kernels crash from race conditions and deadlocks. Microkernels only make these problems worse. The interaction between "simple" microkernel components gets horribly complex. It's truly mind-bending for microkernel designs that are more interesting than a toy OS like Minix.
Kernels also crash from drivers causing the hardware to do Very Bad Things. The USB driver can DMA a mouse packet right over the scheduler code or page tables, and there isn't a damn thing that memory protection can do about it. CRASH, big time. A driver can put a device into some weird state where it locks up the PCI bus. Say bye-bye to all devices on the bus. A driver can cause a "screaming interrupt", which is where an interrupt is left demanding attention and never gets turned off. That eats 100% CPU. If the motherboard provides a way to stop this, great, but then any device sharing the same interrupt will be dead.
I note that Tanenbaum is trying to sell books. Hmmm. He knows his audience well too: those who can't do, teach. In academia, cute theories win over the ugly truths of the real world.
Re:Tanenbaum is wrong, and should know it (Score:5, Insightful)
- couldn't DMA a mouse packet over scheduler code (which ought to be read-only at the MMU) or the MMU's page tables. That is what Tanenbaum's research is asking: Can such a system be built? Does it perform? What are the tradeoffs? Does the end result offer enough benefits (reliability and security) to overcome the costs (performance)?
Re:Tanenbaum is wrong, and should know it (Score:4, Insightful)
I do believe that Tanenbaum was addressing security in his article, not kmem protection. His point was that the segregation of the servers prevents a hole in these programs from opening an elevated-privilege attack. Furthermore, he points out that the elevated permissions of the kernel are likely to be far more secure due to the minuscule size of the kernel itself.
You make an interesting point about the stability of the kernel, but that wasn't his point in the slightest.
Re:Feh. (Score:4, Interesting)
On the other hand, if you architect the system so that it is impossible to pass a bad message, you may find that performance can actually be *increased*. My own preference has always been an OS based on a VM like Java where it is literally impossible to write code that can cross memory barriers. The result would be that the hardware protection of an MMU would be unnecessary, as would the firewall between the kernel and usermode. Performance would increase substantially due to a lack of kernel mode (i.e. Ring 0) interrupts or jumps.
Re:Feh. (Score:4, Interesting)
Re:Feh. (Score:5, Interesting)
Are they not upgrading the kernel? I know that Win2K has had some critical updates in the last few years that required a reboot.
Microkernels do have the potential to be easier to secure than monolithic kernels.
In theory a secure system is a secure system. It is possible to make a monolithic kernel as secure as a microkernel, however it will be harder to make a monolithic kernel as secure as a microkernel.
Just like everything else it is a trade off.
Monolithic
Easier to make a hi-performance kernel.
Harder to secure and to test security.
Microkernel.
Easier to make secure and to test security.
Harder to make hi-performance.
There are secure monolithic systems OpenBSD, Linux, Solaris, and Z/OS jump to my mind.
There are fast microkernels. QNX is a very nice system.
I really like the idea of a microkernel OS. I will try out the first stable, useful OSS Microkernel OS that I find.
Re:Feh. (Score:3, Interesting)
A microkernel can indeed be secure enough that the system doesn't have to reboot for years; with a monolithic kernel and over a million lines of code, this is just a wet dream.
In fact, the microkernel, if written well enough, can take a lot of updates, including updates to disk drivers or graphics drivers and such, without restarting the whole system; so far the monolithic kernels have fallen flat on this feature.
I would give away a few percentage points of performance if the
Re:Feh. (Score:3, Informative)
Where does NT implement all that? In kernel space. A NULL pointer in that code brings the system down.
Just because it was STARTED from a microkernel (like Mac OS X) doesn't mean it's a REAL microkernel. How can you call something that implements the filesystem in kernel space a "microkernel"?
Re:Feh. (Score:5, Informative)
Well, trust is placed in those user-land programs to perform the task for which they are responsible. Whereas in a monolithic kernel, trust is placed in each subsystem to not only perform the task it is responsible for, but also not to muck with the workings of every other subsystem in the kernel as they all reside in the same address space. Therefore in a microkernel you can have a bug in your network stack without compromising your file system driver or authentication module, while this isn't necessarily true in a macrokernel. Compartmentalization is very good for security.
Which is just one of the reasons Mach is so popular as a research OS, despite never seeing any success in the real world. Compartmentalization also makes the OS easier to maintain, easier to understand, and easier to make modifications for. Plus it's very easy to port to new hardware, if that's required.
In a sense, most OSes are "microkerneled" anyway. Most functionality is implemented by programs running on top of the kernel, which pass messages back and forth between themselves and the kernel. Perhaps my view on this is a little naive, but I don't see too much of a difference between a microkernel module and any other process on the machine.
I think you underestimate the things that are handled by the kernel. Unix uses many user-land services, but also has many services integrated into the kernel. Take the concept of moving functionality into user space to the limit, and you have a microkernel. Your last observation isn't naive, it's correct: a microkernel module isn't necessarily any different than any other process on your machine.
Re:Feh. (Score:3, Insightful)
Microkernels actually may help with that as well. If it is very obvious to the OS -- and to the user -- which drivers are crashing, that will provide incentive for the hardware vendors to write drivers correctly. Right now there is no accountability, so as long as the whole system works most of the time, users will buy it. But with microkernels, if new hardware comes out and you have review sites saying "That hardware driver is crash
Not entirely accurate (Score:4, Insightful)
The 2nd approach (paravirtualization) could actually be used with Linux as a means of not only separating user mode from the device drivers, but it would also allow for some nice networking capabilities. After all, the average system does not really need all the capabilities that it has. If a simple server (or several) can be set up for the house, and then multiple driverless desktops are set up, it simplifies life.
Titanic (Score:4, Insightful)
How is kernel compartmentalization going to protect against users installing spyware and doing things they're already authorized to do?
Re:Feh. (Score:3, Insightful)
The industry has better and more important things to worry about.
Like what? Reliability and security ought to be paramount. The IT industry (relating to multipurpose computers, anyway) is currently a joke in that area - compare to virtually any other industry.
Screenshot of GNU Hurd (Score:3, Funny)
Computer bought the farm
Re:Oh Dear (Score:5, Insightful)
All I ask is that the GUI is reasonably slick, the screen design doesn't actively give me hives and the mail application is pleasant. Performance? Within reason, I really couldn't care less.
ian
OS X - First make it work, then make it fast (Score:5, Insightful)
First make it work, then make it fast
Specifically:
Write it as simply and cleanly as you can,
THEN check performance,
THEN optimize, but ONLY where measurement tells you to.
Judging by the performance improvements over time, this is what the OS X team has been doing. Their stuff has been getting bigger, with more functionality, AND faster on the same hardware, with each release. If anyone else has been doing that, I haven't heard of it.
Re:OS X - First make it work, then make it fast (Score:3, Insightful)
FIRST design the system, and make sure the algorithms and architecture are sufficiently straightforward and efficient.
No amount of optimizing will save you if your system is slow by design, and there is no place where this was more true than in the early microkernels. That is why the microkernel architecture was rejected by Linux and Windows kernel developers.
The Mach kernel has some fundamental efficiency issues, and while it has been improved since its introduction, there are
Re:OS X - First make it work, then make it fast (Score:3, Insightful)
While it stops developers tweaking every possible piece of code to be an unreadable high-performance mess, I've also seen it used as an excuse to 'not think about performance now' at design time. Even when you are showing them evidence that they are repeating a known performance problem - and some performance problems require major restructuring - i.e. the stuff you can't fix by tuning the code inside a
Re:Oh Dear (Score:3, Interesting)
QNX for teh win :) (Score:4, Informative)
Unlike certain other OSes, QNX is used in control applications with life-and-death implications (nuclear reactors and medical equipment, for example).
QNX has been through a lot of changes since then. And I have not kept up with most of them. I do know that as of a few years ago they did make a "free for personal use" release that included their development system. And a few years before that, they had a 1.44meg demo disk that had their entire OS, GUI and web browser on it.
But don't take my word for it; go check out their website.
Re:Metaphors eh? (Score:4, Insightful)
-matthew
Re:Metaphors eh? (Score:4, Informative)
The goal is to have a system such that you maximize the segregation of the parts. If the SCSI subsystem crashes -- for example -- you flush it and restart it. While it may not be possible to totally isolate every subsystem, with a microkernel subsystems should be more robust than in monolithic kernels.
For all of Linus' scorn of microkernels, Linux borrows heavily from the concept, if not from the theory. One could almost say that Linux implements a microkernel poorly through the kernel module interface. It fails to be a true microkernel in a number of ways, though, not least of which is the low degree to which it isolates modules.
In any case, your nervousness about a system where a "fundamental" subsystem craps out is understandable in someone whose main experience is with monolithic kernels, because the corruption of one subsystem often infects other systems. For example, IME when the Linux SCSI module starts barfing (which happens with distressing regularity), if you're lucky, you can unload and reload the SCSI modules, but eventually you're going to have to reboot, because it never quite works well after a reload. In a microkernel, subsystems are just services that other subsystems may use, but aren't intimate with. A corruption in one subsystem shouldn't lead to corruptions in any other subsystem.
--- SER
Re:Metaphors eh? (Score:4, Interesting)
Re:Lessons on ID for csoto (Score:3, Insightful)
That's looking at it from an end-user standpoint. The problem with that view is that the better method will never become viable.
To extend your evolution metaphor, you're limiting yourself to a subset of the genepool. Sure, a species that has already been selected for / adapted to that particular niche would outcompete *now* in that niche; but that does not mean that another species allowed to adapt to that niche wouldn'
Re:Let's let the users sort it out... (Score:3)
Re:A Good example? (Score:3, Informative)
Mac OS X is not a true microkernel architecture. Apple has a kernel plug-in architecture and allows drivers to run in kernel space, which goes against true microkernel design principles.
How can the average user see this? When "Software Update" runs, almost any update to the system
Re:Isn't Tannenbaum the one who said... (Score:3, Insightful)
I think it is clear that Linux won that argument.
This is not at all clear. By what metric do you claim that Linux won that argument? Popularity? Then surely the Windows kernel wins even more.
Truth is, just because one technology is superior to another (in terms of, say, stability, maintainability, whatever) doesn't mean that it will immediately win in the marketplace. I think that Linux became a success because of other factors, such as that it was easy for people to contribute, and because it cons