KernelTrap Talks With GNU/Hurd Developer Neal Walfield
An Anonymous Coward writes: "One of the GNU/Hurd developers, Neal Walfield, was recently interviewed by KernelTrap. Nice read."
Interesting (Score:1)
He doesn't deny that the performance sucks, but he feels the added flexibility will be worth it.
Why do I doubt this?
Re:Interesting (Score:1, Insightful)
chance of surviving on any Intel-based architectures, as context switching is just too damn expensive (well over 500 cycles to do it properly). If drivers and other speed-critical code can't live all within the same context, there's no way you can get viable performance out of it.
Re:Interesting (Score:2)
Re:Interesting (Score:2)
It's true; betting on Moore's law has been proven to be a winning strategy. Ask Bill Gates.
It makes me think of Windows 95. When I first installed it on a 486/33, it seemed huge, bloated and slow. If I run it now on a PIII/800, it seems to be fast, lean, stripped down and almost elegant. I guess context is important.
Re:Interesting (Score:3, Funny)
I'm not quite sure that's true. Win95 seemed bloated and slow on the ancient 486/25 I first used it on.
Best explanation I can come up with is that there hasn't been any increase in processor speed in the last 5 years. I'm convinced that they hit a wall around the 386 or so, and have simply been rebranding the same chips every year or so, trusting that we'll convince ourselves that things really are going faster.
Re:Interesting (Score:2)
Fortunately, I have been wearing a tinfoil hat which reflects the government mind-control ray away!
They should change the kernel (Score:1)
These guys also need to consider device drivers. If they want their OS to become popular, it's going to need to support a wide variety of hardware. Linux already offers that.
I really like the ideas of Hurd, but they are not being proactive enough in getting more developers on board. This reminds me of the Atheos guy, who'd rather write the OS himself. One of the best things about Linux is that a lot of people are working on it, and BSD also has a wide developer base.
Hopefully HURD will become more relevant than that OS from MIT. I'm looking forward to trying that out and making some comparisons.
Maybe not as crazy as it sounds (Score:2)
Re: Device drivers (Score:2, Informative)
So in answer to your point: they have considered the device drivers.
Re:They should change the kernel (Score:1)
Jeremy
Using the HURD in production (Score:5, Interesting)
The HURD machine has been surprisingly stable since we set it up last year. We may have had a few instances where it would get into an undesirable state and need rebooting, but by and large its downtime has been attributable to hardware upgrades and power interruptions. Its integrated userspace/kernel space has provoked us to write some very interesting programs on that box that we would not have been able to create with an ordinary UNIX or clone.
What's interesting about the HURD is that, despite its departures from many UNIX conventions, its developers are striving to form a clean upgrade path from Linux to HURD. Likewise, many HURD features (like POSIX b.1 capabilities) have made it into Linux in recent years. It's too early to tell, but perhaps the future holds a merging of Linux with HURD in a couple of years.
~wally
Re:Using the HURD in production (Score:1, Insightful)
Re:Using the HURD in production (Score:1)
Re:Using the HURD in production (Score:1, Insightful)
And even so, karma or lying, he wouldn't be a troll.
Re:Using the HURD in production (Score:2)
Re:Using the HURD in production (Score:2, Insightful)
Besides, Linus hates the HURD design.
Re:Using the HURD in production (Score:2)
Agreed. You might as well hope for a FreeBSD / Plan9 merge or something on that order.
Don't forget, people, that Linux is a kernel, not an operating system. The interview mentioned that certain code from Linux was used in Hurd, but a merging of the two is simply not going to happen.
That aside, with the GPL, Linus has very little to say should someone try and merge Linux and Hurd.
Re:Merging? (Score:2)
Re:Merging? (Score:2, Informative)
insmod
read the man page for insmod.
Re:need a development release (Score:1)
I think the next one is planned for 7 Dec.
microkernel == too slow on x86 (Score:1, Insightful)
chance of surviving on any Intel-based architectures, as context switching is just too damn expensive (well over 500 cycles to do it properly). If drivers and other speed-critical code can't live all within the same context, there's no way you can get viable performance out of it.
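If you want to put a rough number on that claim yourself, here's a quick sketch of my own (not from the interview): the classic ping-pong measurement, where two processes bounce a byte back and forth over a pair of pipes so that each round trip forces at least two context switches. The iteration count is arbitrary.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/time.h>

    /* Crude ping-pong benchmark: each round trip through the two pipes
       forces at least two context switches, so elapsed / (2 * N) is an
       upper bound on the cost of one switch (plus pipe overhead). */
    int main(void)
    {
        enum { N = 100000 };
        int p2c[2], c2p[2];              /* parent->child and child->parent pipes */
        char byte = 'x';

        if (pipe(p2c) < 0 || pipe(c2p) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {               /* child: echo every byte back */
            for (int i = 0; i < N; i++) {
                read(p2c[0], &byte, 1);
                write(c2p[1], &byte, 1);
            }
            _exit(0);
        }

        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        for (int i = 0; i < N; i++) {    /* parent: send, wait for the echo */
            write(p2c[1], &byte, 1);
            read(c2p[0], &byte, 1);
        }
        gettimeofday(&t1, NULL);

        double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
        printf("~%.2f microseconds per switch (upper bound)\n", us / (2.0 * N));
        return 0;
    }

On any recent box the answer comes out in the fractions-of-a-microsecond to low-microsecond range, which is why the raw cycle count alone tells you so little.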
Re:microkernel == too slow on x86 (Score:3, Interesting)
Re:microkernel == too slow on x86 (Score:4, Insightful)
My first comment is that "performance" means different things to different people. To some it means "throughput", that is, the amount of work that the system can do just prior to being overloaded. To some it means how well it can handle overload. To some it means low latency, that is, that the system can respond to an important event quickly. Which one is important for you depends on what you're doing.
Secondly, you're basing your assumptions on "microkernels" like Mach, which dates from around the same era as the original Windows NT. That's an "old style" microkernel. Back then, we thought that the only advantage of using microkernels was flexibility, so kernels didn't have to be very "micro".
Nowadays we know that merely reducing the kernel's domain of influence doesn't buy you much. You also need to simplify your kernel to realise performance gains. You do lose something (cost of context switch etc) but you also gain lots too, so it's not so much of a penalty, but rather it's a tradeoff.
For example, consider this: Linux often has to suspend a task deep inside a system call. If you call read() on a block of disk which is not in the buffer cache, say, you need to suspend until the block is read in from disk. A monolithic kernel may have to do this for a hundred reasons, depending on which modules are loaded. So a context switch consists of dumping registers on the kernel stack then switching stacks.
Now consider a microkernel. You already know in advance what operations you may have to suspend on, and that number is quite small. (In the read() example, you only suspend on IPC while the server responsible for disk I/O does the hard work.) So you can separate the "front half" of each system call from the "back half". You can come up close to the surface and then context switch if necessary. (Note: You have to do this anyway on a modern system, because a higher priority task may have unblocked during the system call.) Once you've done that, each thread doesn't need its own kernel stack, which makes the context switch a little cheaper, saves memory, makes thread creation cheaper, and the kernel can be made re-entrant, delivering an IRQ in the middle of delivering another IRQ, thus improving latency. It also means you don't need to hack around the problem of signal delivery while suspended. (BSD does this by ensuring that the "front half" of every system call is idempotent, and thus possibly less efficient than it could be.)
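To make the front-half/back-half split a bit more concrete, here's a toy continuation-style sketch of my own (not Hurd, Mach or BSD code; all the names are made up): the front half validates the request, fires it off to the I/O server, and parks a tiny continuation record instead of a whole per-thread kernel stack; the back half runs when the reply arrives.

    #include <stdio.h>
    #include <stddef.h>

    /* Toy continuation-style syscall split (illustration only).  The
       "front half" runs on a shared stack, records a small continuation
       and returns; the "back half" is invoked later when the I/O server
       replies.  No per-thread kernel stack is kept while suspended. */

    typedef void (*continuation_fn)(void *ctx, const char *data, size_t len);

    struct continuation {
        continuation_fn resume;   /* back half to run on completion */
        void           *ctx;      /* per-request state, not a stack */
    };

    static struct continuation pending;      /* one outstanding request */

    /* Front half: validate, send the request to the I/O server, park. */
    static void sys_read_front(void *ctx, continuation_fn resume)
    {
        pending.resume = resume;
        pending.ctx = ctx;
        /* ...send an async IPC to the disk/filesystem server here... */
    }

    /* Called when the server's reply arrives: run the back half. */
    static void ipc_reply_arrived(const char *data, size_t len)
    {
        pending.resume(pending.ctx, data, len);
    }

    /* Back half: copy out the result and wake the caller. */
    static void sys_read_back(void *ctx, const char *data, size_t len)
    {
        printf("request %s completed, %zu bytes: %s\n", (char *)ctx, len, data);
    }

    int main(void)
    {
        sys_read_front("r1", sys_read_back); /* front half runs, then parks */
        ipc_reply_arrived("hello", 5);       /* later: reply runs the back half */
        return 0;
    }
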
So you can see that focussing on the cost of context switching alone can be misleading.
Plus, of course, keep this in mind: if raw throughput was our most important criterion, we wouldn't have virtual memory.
Re:microkernel == too slow on x86 (Score:2, Informative)
Re:microkernel == too slow on x86 (Score:5, Insightful)
"Blahblah is to slow" arguments are lame. Microkernels are too slow. Java is too slow. 3D graphics are too slow. GUIs are too slow. Virtual memory is too slow. Accessing files over a network is too slow. Calling the OS instead of directly banging the hardware is too slow. *yawn*
Upgrade your 386SX to a new Athlon, dude. (Or better yet, a dual Athlon -- one less context switch ;-). Then nothing is too slow anymore. You can run CPU-bound stuff continuously at 87.5% utilization and your computer will still be just as fast as what you had 6 years ago, which was already overkill.
500 clock cycles, with 2 billion clock cycles per second, that works out to... a good-looking excuse to blame things on the next time you get fragged in Quake.
What's really funny is that you'd dare to say that something with a slight performance decrease has a "very little chance of surviving on Intel-based architectures." And yet just last week, I saw someone at my office spending way too much time, struggling to copy a bunch of files with MS Windows' explorer shell. I guess Windows has very little chance of surviving too. Unless .. wait .. unless maybe people don't care? Could it be?!?
Anyone Know Where this guy went to College? (Score:1)
Re:Anyone Know Where this guy went to College? (Score:1)
Not to be an ass but behold the power
of google
http://www.google.com/search?hl=en&q=UMASS+Neal+W
Jeremy
Hurd Speed (Score:3)
Re:Hurd Speed (Score:1)
Re:Hurd Speed (Score:2, Interesting)
Which makes me really excited about implementing real-time software in Hurd.
Re:Hurd Speed (Score:3, Insightful)
Mach is an old-style microkernel. It comes from the same era as Windows NT.
QNX/Neutrino is a modern microkernel which comes from the same era as BeOS.
There's no comparison. Mach is big and tries to do too much, even for a microkernel. But it comes from an era when we thought that the most important advantage of microkernels was flexibility. We now know that by making them very "micro" they can give performance too.
You won't find hard real-time in Hurd any time soon. Not as long as they're using Mach and allow use of Linux device drivers, anyway. Hard real-time needs to be designed in everywhere, from the driver to the kernel to the application.
The world does need a free real-time general-purpose OS, you're right. Real-time is becoming ever more important even in server applications (e.g. ATM routing, streaming media). You won't find it in the Hurd, but there are one or two projects happening in relative secrecy at the moment. Watch this space.
Re:Hurd Speed (Score:2, Interesting)
Many people say computers get faster. Sure, I agree, but software gets slower as well. The real question is whether the software gets slower faster than the CPU gets faster. For example, while a C++ GUI is hard to program, C++ GUIs are miles faster than any Java GUI. On top of that, C++ GUIs are nicer than Java's. Hence Java has not made it on the desktop.
Folks, sometimes you need the speed!
Re:Hurd Speed (Score:2)
Well, some of the servers are still there, but yes, you're right. NT until version 3.51 used to be an old-style object-oriented microkernel OS not unlike Mach (only better than Mach). Now it's a kind of a bastard child of microkernel and monolithic kernel, retaining some of the benefits and most of the drawbacks thereof.
Re:Hurd Speed (Score:2)
Fiasco will be fantastic when it's finished, although I think using OSKit is a mistake except in the short term. However, a "real-time kernel" won't help if your drivers and servers aren't also low-latency. Beware of Linux drivers in particular. They almost all assume that interrupts are disabled during execution of the interrupt handler. Excising that will be hard.
Re:Hurd Speed (Score:4, Insightful)
There are going to be speed problems with any microkernel-based OS -- OS X is not necessarily exempt either.
Basically, if you spend a lot of time copying data between address spaces of different chunks of the kernel, you're going to pay for it. If you have to switch address spaces to switch kernel tasks, you're going to pay for it (in cache misses).
Even in a monolithic-kernel OS (which will always be superior, if you assume the parts of the kernel are well-enough written that they can be trusted by other parts of the kernel), you have some cost moving data from userspace to kernel space. You can get around that in clever cases -- Linux does this with zerocopy networking, passing sets of (physical) pages around and dumping them directly to the card driver.
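For what it's worth, the userspace-visible face of that zero-copy idea on Linux is sendfile(2); a minimal sketch follows. The connected socket descriptor is assumed to come from elsewhere, and this is only meant to show the shape of the call, not anyone's production code:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Push an open file down an already-connected socket without copying
       it through a userspace buffer.  Returns 0 on success. */
    int send_file_zero_copy(int sock_fd, const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return -1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return -1; }

        off_t offset = 0;
        while (offset < st.st_size) {
            ssize_t n = sendfile(sock_fd, fd, &offset, st.st_size - offset);
            if (n <= 0) { perror("sendfile"); close(fd); return -1; }
            /* the kernel advances 'offset' itself; no user buffer involved */
        }
        close(fd);
        return 0;
    }
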
As Linus once said "Mantra: `Everything is a stream of bytes'. Repeat until enlightened." In other words, any obsessiveness that gets in the way of moving streams of bytes around extremely efficiently is not good architecture. Message passing (and separate address spaces for kernel "server" modules) fall into this category.
Re:Hurd Speed (Score:2)
First: You don't need to spend any more time copying data between address spaces in a modern microkernel OS than in a monolithic-kernel OS unless you've designed it badly. For example, it's lunacy to put your disk driver and file system driver in different address spaces. Performance-conscious microkernels do what monolithic kernels do: use dynamically loaded objects. The difference is that the disk server dynamically loads the filesystem it's going to use, or the network server loads the NIC driver and protocol implementations. There's no reason at all why a microkernel OS can't use zero-copy networking. (I think that BeOS even did zero-copy sound in some situations. Digitised data would come in off the soundcard and go straight into the mixer. Try doing that in Linux without hacking the kernel.)
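As a tiny illustration of the "server loads its own driver" point -- my own sketch, not how the Hurd actually wires this up, and the shared-object name and the fs_mount symbol are invented -- a disk server can pull in a filesystem implementation with dlopen/dlsym, so the filesystem runs in the same address space as the server and every request is a plain function call:

    /* Build with: cc server.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    /* Hypothetical interface a filesystem shared object would export. */
    typedef int (*fs_mount_fn)(const char *device);

    int main(void)
    {
        void *handle = dlopen("./ext2fs.so", RTLD_NOW);
        if (!handle) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

        fs_mount_fn fs_mount = (fs_mount_fn)dlsym(handle, "fs_mount");
        if (!fs_mount) { fprintf(stderr, "dlsym: %s\n", dlerror()); return 1; }

        /* The filesystem code now lives in the disk server's own address
           space -- no IPC and no extra copies on the disk/fs boundary. */
        return fs_mount("/dev/hd0s1") == 0 ? 0 : 1;
    }
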
As for context switches, true, you have to do more of them, but you get performance gains elsewhere as a result, as I have noted previously [slashdot.org].
If you need a microkernel mantra, here it is: "It's not a penalty, it's a tradeoff." Repeat until enlightened. :-)
Re:Hurd Speed (Score:2)
Re:Hurd Speed (Score:2)
Just to clarify, aheitner's assertion was not that Hurd or Mach's IPC is slow, but that any microkernel-based OS will be slow compared with a monolithic kernel-based OS. I don't know enough about Hurd to comment specifically, but there are plenty of modern microkernel-based OSes which are competitive in this area (e.g. L4, BeOS, QNX).
Re:Hurd Speed (Score:2, Interesting)
The solution for that is to use posix_spawn (in the latest POSIX drafts). This signals that a new task can be set up cheaply. Hopefully when bash, make, and gcc use that, we'll see a huge improvement in speed.
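For reference, the call looks roughly like this (a minimal sketch; the command and arguments are just placeholders): posix_spawn lets the implementation create the child task directly instead of fork()ing a copy of a possibly huge process only to exec() over it a moment later.

    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void)
    {
        pid_t pid;
        char *argv[] = { "ls", "-l", NULL };   /* placeholder command */

        /* No fork(): the implementation may create the child task
           directly, which is much cheaper than duplicating a big
           address space just to throw it away. */
        int err = posix_spawnp(&pid, "ls", NULL, NULL, argv, environ);
        if (err != 0) { fprintf(stderr, "posix_spawnp failed: %d\n", err); return 1; }

        int status;
        waitpid(pid, &status, 0);
        return 0;
    }
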
So far raw execution speed seems fine. I don't use X (since mostly my machine just sits and compiles binaries), but even when it's going full tilt, it's quite usable on a PII/233MHz. (Multiple ssh sessions, irc)
Bring back Multics and VME (Score:1)
When was the last time you used a kernel that was really monolithic, one that had been built, supplied and tested as a unit, by a common engineering team? The fact is that all modern systems are supplied in untestably complex configurations, which, if reliability is not to be compromised, must be able to protect themselves from problem components.
If the design choice was really between copying memory and passing pointers that allowed the receiver to stamp all over the sender's address space then life would be rather depressing. However, in the absence of hardware features like capabilities and Multics-style protection levels, there is a solution in the form of a safe, intermediate language such as Java bytecode. This way, you only have to trust your VM/JIT compiler for basic address-space integrity.
Slow? Well, device drivers probably shouldn't be the first part of Linux to be bullet-proofed in this way, but for serious components (think KDE applications currently using DCOP etc.) the VM can easily outperform native code, because it can optimize the execution path *across* separately loaded components, and eliminate null procedures such as unused access checks, RPCs for local objects etc.
Linux (and Linus, by the sound of it) need to wake up to the power of VMs. MS apps will soon no longer be tied to x86, Java is still growing, while efforts that could be used for Linux (Perl/Python and a few LISP engines) are niche environments, to say the least.
Anybody that believes Linux is still going to grow when it possesses zero inbuilt protection and requires apps to be manually recompiled for every platform variant is living in cloud-cuckoo land.
--
alex
Microkernels and Monokernels (Score:2)
I think that micro and monolithic kernels each have their place. For my PC, though, a monolithic kernel probably meets my needs best. Also, I will be referring to monolithic kernels as "monokernels" even if this is not technically correct.
Microkernels often beat out monokernels when it comes to Really Big Servers and Supercomputers, in part because SMP is much more difficult to do on monokernel designs. I suspect that this is why UNICOS/mk is a microkernel (of course, it is from Cray).
As Linus once said "Mantra: `Everything is a stream of bytes'. Repeat until enlightened." In other words, any obsessiveness that gets in the way of moving streams of bytes around extremely efficiently is not good architecture. Message passing (and separate address spaces for kernel "server" modules) fall into this category.
Exactly, but your considerations for that 64-proc supercomputer or mainframe are different from your considerations for your 1-proc workstation, aren't they? Microkernels may be more efficient for the former, but monokernels are more efficient for the latter.
I am not a kernel hacker ... (Score:3, Insightful)
Seriously, having read the interview, it seems like Hurd does some interesting stuff, removing features that are part of the kernel in other Unix systems and moving them into userspace.
The real question, though, is whether we need an entirely new operating system to gain these features or whether they could instead be implemented into the standard Linux kernel. Unless they can really get a large group of people starting to develop and use it, it may go the way of the buffalo. By working on getting their changes into Linux, however, they would have a much larger userbase to start from.
Re:I am not a kernel hacker ... (Score:1)
Re:I am not a kernel hacker ... (Score:2)
By most definitions, especially the course I took on "Operating Systems", the kernel pretty much is the operating system.
Difference in philosophy (Score:5, Insightful)
Nope. The Linux developers are hell-bent on sticking to their monolithic design. Even if you could develop the Hurd as a set of patches, they would never make it into the "standard Linux kernel". (Curious use of the word "standard", BTW.)
The rift of Hurd vs Linux is like vi vs emacs. Vi and Hurd are meant to be small tools designed to work in conjunction with other small specialised tools, the whole being greater than the sum of the parts. Emacs and Linux are meant to be "all features under one roof".
Actually, that's a good way of looking at your question. Asking "can't you just implement these features in Linux?" is like asking "why do you need all those POSIX commands like diff(1); can't you just implement that in Emacs?" The answer is "yes", but would you want to?
Re:Difference in philosophy (Score:1)
I would say Emacs follows Hurd philosophy: a small kernel (Lisp interpreter; yes, Lisp is small), that serves as base to implement a lot of modules that interact together via a known protocol.
Re:Difference in philosophy (Score:3, Informative)
I've read it, and I think he's wrong. A well-designed modern microkernel OS should do no more copying than a monolithic kernel does (and a monolithic kernel does copy, say, between user-space and kernel-space).
I suspect that Linus was talking about old systems like Mach, Minix and Windows NT, and not modern systems like QNX, L4 or BeOS. If so, his mistake is understandable.
Re:Difference in philosophy (Score:2)
Emacs users don't just edit ascii files. They also compile code, read news and get therapy from Eliza. :-)
I dunno about "capability". I'll agree with the other two, though. The stability and efficiency are mostly because of its longevity, of course. "Get it working then get it fast."
Re:I am not a kernel hacker ... (Score:1)
Nope, that would undermine the greatest thing about the Hurd: the ability to do lots of things as a regular user.
What we need is a standard for device-drivers, so you've got one source for *BSD's, Linux and other OSes. And one binary for all OSes running on some microkernel.
Software just needs good design.
Isn't GNU/HURD redundant? (Score:1, Redundant)
Re:Isn't GNU/HURD redundant? (Score:1)
Enough said.
fsmunoz
Re:Isn't GNU/HURD redundant? (Score:3, Funny)
Didn't you know? There's already a GNU OS. It's also been called "Emacs" in some circles.
Re:Isn't GNU/HURD redundant? (Score:1)
"Built on NT Technology"
Re:Isn't GNU/HURD redundant? (Score:1)
But "HURD" is a much cooler complex metaphor than "GNU OS".
Re:Isn't GNU/HURD redundant? (Score:1)
GRUB/HURD/GNU is just as valid.
Re:Isn't GNU/HURD redundant? (Score:1)
I don't think GNU needs an official kernel. You should be able to choose your kernel for the GNU operating system.
Re:Isn't GNU/HURD redundant? (Score:2)
Who is using it? (Score:1)
Re:Who is using it? (Score:1)
Re:Who is using it? (Score:1)
It's just that powerusers like a flexible system, which GNU provides.
Huh? (Score:1)
Re:Huh? (Score:1)
He is saying that users can mount any file system to any place they wish (given they have the permission).
Linux does have some of this ability coming, though, with GNOME's vfs, but this is not quite the same thing.
Not much new stuff (Score:2)
The authentication part looked nice, but I thought I saw a contradiction when he first spoke of the safety of the system because the authentication daemon ups privileges, and second talks about a user-owned authentication daemon which is secure because cracked passwords cannot be used on daemons outside this user's space. This would imply that the public authentication server is also hackable in a way that authentication tokens can be had illegally.
Nevertheless I like the removal of root access necessity for a lot of stuff.
Re:Not much new stuff (Score:2)
A user-owned authentication daemon is not just user-owned. It could also be user-written. You can't guarantee the security of something a user writes. So, it can be cracked, possibly. And, you can get passwords and tokens from it, possibly. However, they do you no good in any other authentication daemon.
The main, global, authentication server for the OS should be very damned secure.
Justin Dubs
Re:Not much new stuff (Score:2)
Re:Not much new stuff (Score:2)
Re:Not much new stuff (Score:2)
You would get a root token on the SYSTEM'S authentication server, therefore granting you root access to the complete system.
The authentication scheme can prohibit you from becoming root by overflowing the authentication server and 'becoming it' because it doesn't run as root. It doesn't prevent you, however, from overflowing it and getting the root token after which you can abuse that one accordingly. Therefore this scheme only partly solves UNIX' inherent security problems.
Re:Not much new stuff (Score:2)
You will still be able to overflow the authentication server and get the authentication token for root. Please explain why you would only get root "at the user level one", whatever you mean by that.
Re:Not much new stuff (Score:2)
Oh Lord, How Long? (Score:2)
Better hope it's the last one. Anything else reflects very negatively on Project GNU's ability to make actual friggin progress. They've been working on Hurd since 1991!
Re:Oh Lord, How Long? (Score:2, Insightful)
As far as GNU's ability to deliver is concerned, what about that editor you use (emacs)? What about tools like make, flex, yacc, et al.? Get real, GNU has delivered so much to the computing community, and for free.
Honestly, getting the Hurd up 'n running has not been as important since we already have Linux. After Linux, there was no urgent need for a Free OS (what GNU was really all about at that time).
As far as having a robust OS is concerned, we already have Linux. Whatever Hurd is going to be, it is going to be well thought out and based on good, new ideas that are markedly better than conventional UNIX. Hurd is not there to replace Linux; the project exists solely to get a new kind of OS out.
Read the interview. It's good.
Long, long, long (Score:2)
And I'm sorry, but the basic GNU software set is a disaster. I speak from personal experience. I've been hassling with their bugs for years. Ten or so years ago it was hassling with the official port of GNU source control to DOS -- done by somebody who didn't understand FAT filesystem semantics. Last year I had to sweat blood to deal with the reference counting bug [redhat.com] in glibc. This bug was eventually fixed -- after glibc maintenance moved from GNU to Red Hat. What can you say about a team that takes so long to fix such a basic bug? Aside from the fact that it has Lots of Really Great Ideas?
Face it, GNU has "succeeded" only because you need it to do anything useful with the Linux kernel.
Well, of course not. Hurd has been around much longer, if you count its Project Mach origins. If Hurd had had its act together back when LT was a grad student, he probably never would have bothered to write the Linux kernel. Probably just as well... Look, if Hurd is so wonderful and important, then you should want something serious to happen with it. That is not going to happen if all its supporters just stand around saying "cool!" And it's certainly not going to happen if nobody asks why this project has been chasing its own tail for so long.
Re: Production Grade (Score:1)
At the time, as soon as I discovered Slackware, I thought it was great and switched to it right away for "production" work.
The trouble is that now we use Linux, with its stability and featurefulness, and it's easy to look at the Hurd with jaded vision.
So it's relative. GNU/Hurd as it is now would have been considered fine for production work back in 1995. Not to mention that GNU/Hurd now has Debian infrastructure... Is that good enough? For some, I'd say yes.
Anyway, it's way more stable than Win98 and is getting better much faster than it used to.
Roll on woody+1.
Re:Oh Lord, How Long? (Score:2)
HURD vs Plan9 (Score:2, Interesting)
Re:HURD vs Plan9 (Score:1, Informative)
Re:HURD vs Plan9 (Score:3, Interesting)
This limited set of "messages" also makes it trivial to insert filters between components. In Mach I believe any kind of filter will have to interpret the entire message description structure, right?
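A generic sketch of why a small fixed vocabulary helps (my illustration, loosely in the spirit of a 9P-style protocol; the opcodes are made up and this is not actual Plan 9 or Mach code): a filter inserted between two components only has to switch on a handful of known message types, rather than walk an arbitrary self-describing message descriptor.

    #include <stdint.h>
    #include <stdio.h>

    /* A small, fixed message vocabulary; opcodes invented for the sketch. */
    enum msg_type { MSG_OPEN, MSG_READ, MSG_WRITE, MSG_CLOSE };

    struct msg {
        enum msg_type type;
        uint32_t      fid;      /* which file the client is talking about */
    };

    /* A filter only needs this switch -- it never has to interpret an
       arbitrary typed message description structure. */
    static int filter_allows(const struct msg *m)
    {
        switch (m->type) {
        case MSG_WRITE:  return 0;   /* e.g. a read-only filter drops writes */
        case MSG_OPEN:
        case MSG_READ:
        case MSG_CLOSE:  return 1;
        }
        return 0;
    }

    int main(void)
    {
        struct msg w = { MSG_WRITE, 7 };
        printf("write allowed? %s\n", filter_allows(&w) ? "yes" : "no");
        return 0;
    }
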
Stability (Score:2)
In this age of MS-think, that means it's time to release it!
That said, I would not recommend using the GNU/Hurd on a server. At least not yet.
Hmm, that never stopped our friends in Redmond.
(Seriously, though, an interesting interview.)
Wine? (Score:1)
Re:Wine? (Score:1)
The important parts:
Charming.
fsmunoz
Re:Wine? (Score:1)
avoiding flamebait (Score:1)
The HURD could be in public use today (Score:5, Interesting)
Anyways, the long and the short of it was that RMS threw a giant hissy fit about the license so they never did business together. It seems that RMS can't see the forest for the trees sometimes. Instead of giving the community a rock-n-roll new kernel, he decided to cut off his nose to spite his face.
Yours,
-Jack
A few notes from a Hurd user (Score:5, Informative)
A few people have mentioned trying to merge Linux with the Hurd. For many reasons, this probably won't happen, and would probably detract from some of the advantages the Hurd's design offers. For example, Neal Walfield mentions in the interview that there's a fellow who has succeeded, by himself, at substantially porting the Hurd to the PowerPC architecture. He took OSFMach from the MkLinux project, slightly modified the four core servers and libc, and had a system capable of running bash, fileutils, and I think some other standard apps. This feat confirms the portability of the Hurd's design, which might not be as easily accomplished with the Linux kernel.
I don't know Linux's internal arrangement very well, but I have read comments [alaska.edu] of Linus's to the effect that kernel development shouldn't be easy. While writing Hurd servers or an implementation of Mach isn't particularly easy, it looks as though the portability and modularity promises of the microkernel advocates may be borne out.
In addition, at least one fellow has succeeded at running Lites, the BSD single-server, alongside the Hurd on a machine running Mach. In principle it should be possible to run the MkLinux single server in a similar way atop Mach, perhaps concurrently with the Hurd. This would be similar, according to the Hurd's developers in a recent list discussion, to the virtual server capabilities discussed last week someplace [slashdot.org].
The Hurd accomplishes this while maintaining POSIX compliance, sufficient to make the user experience indistinguishable from standard *nixes. At first my biggest disappointment with the Hurd was that nothing much seemed different. All the standard utilities were there, I got X working (though I don't use kde or gnome -- just windowmaker), and found myself somewhat surprised that most of what I need to do I can get done with my GNU/Hurd machine. This seems to have been accomplished by about ten or so kernel developers plus maybe fifty application porters over a long time; naturally, if the user and developer bases were larger, things would be farther along.
My GNU/Hurd system is, however, slow. I haven't done any careful tests, but it feels sluggish at times. File access and network operations are fairly slow; similar operations are noticeably faster with Linux. There's a lot of driver support missing [e.g. no sound].
Anyway, it's not quite there yet, but things are coming along, both feature- and performance-wise. It's worth trying out, if you've got a spare PC with a gigabyte of disk or so.
So Linus doesn't like microkernels... (Score:5, Insightful)
Right, right.
As others have noted, there's no way for a microkernel to be as speedy at flipping bits around as a monolithic kernel, copying between address spaces and everything. Apple attempts to mitigate some of those costs by keeping all their Mach threads in one address space, IIRC, but even with that speed up there's still some overhead.
But that doesn't mean that microkernels suck.
Apache doesn't serve static pages as fast as other web servers. It doesn't serve dynamic content as fast as some servers. But people use Apache for other reasons, things like configurability, extensibility, and support. And because the only thing you *can't* do with an Apache module is make babies.
Microkernels are interesting to computer scientists because they allow abstraction in the kernel, and God only knows that there's no word that computer scientists like more than "abstraction". Microkernels, for all their faults, are just plain prettier. And as research continues into microkernels and how to mitigate their many flaws, there might come a time when the extra processing they require might be worth it. Maybe all the abstraction and objectiness will be worth it to some system designer in the future.
At some point, there may be people who will be willing to trade some latency and throughput for extensibility and configurability in their kernels. They might be willing to trade some clock cycles for the ease with which they can implement different security policies, ala HURD. The point is that not everyone needs the same kernel, and not everyone needs the same kind of kernel.
And competition is good for them both. The HURD developers are encouraged to speed up the kernel, thanks to Linux. Linux kernel hackers will eventually desire some of the design niceties of the HURD kernel; they just won't admit it on the LKML.
But Linus *does* like microkernels! (Score:3, Insightful)
The thing is, Linus does care about speed difference. A lot.
To sum things up (Score:3, Interesting)
The HURD isn't popular yet because
Maybe instead of reinventing the wheel, they should just use the NT kernel with the GNU runtime tools and release GNU/NT.
Something I perhaps don't understand... (Score:2, Interesting)
Better scaling and less forking (Score:2, Interesting)
Vast wars already take place about what should or should not be in the Linux kernel. The rate of progress on all of these fronts together is approaching glacial. There is just no easy way for a small team to keep a grip on a monolithic kernel that is being pulled in different directions by different developers. It has taken years for a journaling filesystem to be accepted into the kernel.
The Hurd neatly sidesteps all of these issues by not requiring any of these things to be in the kernel. Useful functionality like reiserfs and tux can be developed and released without any requirement for them to be "accepted" by any controlling group. At the moment, the only way to do something similar with Linux is to release a patch. This makes Linux much more susceptible to forking.
In case anyone disagrees with me on the forking issue, consider that Linux already has a very high profile fork: RedHat. RH have been distributing a forked kernel for a long time. Admittedly they only patch the kernel in minor ways, but nonetheless they maintain an alternative version. This is because if their views on what they want out of the OS differ from the kernel admins they have no choice but to fork.
But a RedHat version of GNU/Hurd could be composed of whatever OS servers they choose without any patching. They could mix and match the kernel distribution in much the same way as they currently mix and match the userspace distribution. Because most of the Hurd kernel *is* a userspace distribution.
xMach (Score:2)
Re:hurd = turd (Score:2)
Re:It is (Score:2)
I think the thing that Hurd is really lacking is a Linus or Theo-like figure with an established cult of personality.
And it looks like Neal's got some hope. He's reasonably pragmatic (important) and can use humor effectively (as in the comment that he hasn't done drugs).
Of course, the problem with Mach is that there are five people working on it, and it's not ready for a mess of people to start hacking on it. It has some interesting ideas, and some really cool directions that it could go in. Somebody's going to have to provide some leadership and start setting it on a course from R&D to a real production project sometime.
Re:Can someone answer my stupid questions? (Score:1)
GNOME and KDE need pthreads (again, right there in the interview) before anything more can be attempted.
fsmunoz
Re:Can someone answer my stupid questions? (Score:1)
that's something best left to opinion
2> Will Hurd have an Xfree port soon? Will existing drivers be easily portable?
Read the article...they already have XFree working on HURD...XFree runs on OS/2, of course it will run on a POSIX system
3> What about window managers and environments. How hard would it be to bring KDE/Gnome and some window managers to hurd?
Gnome will obviously get ported...think about it...Gnome has RMS endorsement...besides window managers are generally portable across UNIXes
Re:Can someone answer my stupid questions? (Score:1)
Maybe when there is a fancy menu-install.
Re:Why???? (Score:1)
Uh, maybe because it's all of that in one OS? I can't exactly use Plan 9's namespace stuff along with Be's translators on top of EROS right now...
Re:Why???? (Score:1, Interesting)
2) As for L4, the article states that Hurd is being ported to L4.
4) Microkernels, by their design, are inherently more secure than regular kernels since they allow you to do more in user space with less privilege.
5) QNX is *also* a microkernel. The main problem about QNX is that it's not free software.
7) Amoeba is also a microkernel, so it should be possible to port it to HURD so that it co-resides with it. As stated in the article, you can have multiple authentication services, device managers, file systems, etc. Just *try* and do that with Linux.
I think that answers your questions.
Re:Why???? (Score:2)
Re:Why???? (Score:2)
The Hurd is aimed at being a Free, POSIX, usable system, unlike BeOS and QNX (not Free), Minix (a teaching system), Eros (an experimental system) and Plan9 (not Free, and not POSIX). As for L4, the article points out that the Hurd can be ported to L4, it's just that nobody has put in the elbow grease yet.
Re:Why???? (Score:2)