GNU is Not Unix

KernelTrap Talks With GNU/Hurd Developer Neal Walfield

An Anonymous Coward writes: "One of the GNU/Hurd developers, Neal Walfield, was recently interviewed by KernelTrap. Nice read."
  • by Anonymous Coward

    He doesn't deny that the performance sucks, but he feels the added flexibility will be worth it.

    Why do I doubt this?

    • Re:Interesting (Score:1, Insightful)

      by Anonymous Coward
      Any sort of microkernel architecture has very little chance of surviving on any Intel-based architectures, as context switching is just too damn expensive (well over 500 cycles to do it properly). If drivers and other speed-critical code can't live all within the same context, there's no way you can get viable performance out of it.
    • By the time the Hurd is stable and featureful enough for everyone to use, we'll all be using 20GHz machines. At that point, it may be worth the loss of speed.
      • By the time the Hurd is stable and featureful enough for everyone to use, we'll all be using 20GHz machines.

        It's true; betting on Moore's Law has proven to be a winning strategy. Ask Bill Gates.

        It makes me think of Windows 95. When I first installed it on a 486/33, it seemed huge, bloated and slow. If I run it now on a PIII/800, it seems to be fast, lean, stripped down and almost elegant. I guess context is important.

        • It makes me think of Windows 95. When I first installed it on a 486/33, it seemed huge, bloated and slow. If I run it now on a PIII/800, it seems to be fast, lean, stripped down and almost elegant.


          I'm not quite sure that's true. Win95 seemed bloated and slow on the ancient 486/25 I first used it on ... and yet, it's _still_ bloated and slow on new machines. One would think that it would be quick as lightning, and yet I still "click the start button ... drum fingers for a second..." Same really goes for both KDE and GNOME. On the flip side, Blackbox was lightning fast on the first machine I installed Linux on (a 486/66, IIRC), and it seems just as fast on my machine today.


          Best explanation I can come up with is that there hasn't been any increase in processor speed in the last 5 years. I'm convinced that they hit a wall around the 386 or so, and have simply been rebranding the same chips every year or so, trusting that we'll convince ourselves that things really are going faster.

  • They should port the Hurd to Linux. Now, I know they want a microkernel, etc., but I think that the Linux kernel can be built to meet their needs much more than Mach, or whatever else they want to use in the future.

    These guys also need to consider device drivers. If they want their OS to become popular, it's going to need to support a wide variety of hardware. Linux already offers that.

    I really like the ideas of Hurd, but they are not being proactive enough in getting more developers on board. This reminds me of the Atheos guy, who'd rather write the OS himself. One of the best things about Linux is that a lot of people are working on it, and BSD also has a wide developer base.

    Hopefully HURD will become more relevant than that OS from MIT. I'm looking forward to trying that out and making some comparisons ;-)
    • Although a Linux kernel has tons of functionality that you wouldn't use if you were running the Hurd on top of it, there would be potential advantages (SMP, running on a lot of hardware, etc). However, there are (at least) two factors which I suspect might stop this show: IPC speed and memory footprint.

      (1) The Hurd uses a lot of interprocess communication calls between one of the Hurd servers and another (or between a user application and a server). Therefore a good microkernel will try to make IPC very fast. Without having looked at Linux benchmarks in this area, I don't know how it stacks up, but I doubt it has gotten as much attention as in your average microkernel.

      (2) Would the Linux kernel use a lot of memory on unused functionality? (This one might be less of a big deal, although I don't happen to remember things like whether Linux swaps out kernel pages which aren't getting touched.)
    • Re: Device drivers (Score:2, Informative)

      by SpringRevolt ( 1046 )
      If I understand correctly, the Hurd will in future be moving to a new microkernel called OSKit-Mach. OSKit-Mach is based (as you may guess) on OSKit [utah.edu], which is distributed and maintained at the University of Utah and contains Linux device drivers. As you may know, the (vast?) majority of Linux code is actually the device drivers - so most of Linux is now available for users of the Hurd.

      So in answer to your point: they have considered the device drivers.
  • by Walter Bell ( 535520 ) <wcbell@bel l a n d h o r o witz.com> on Monday November 12, 2001 @07:13PM (#2556347) Homepage
    My employer likes to stay on the leading edge in the operating systems field, and makes it a point to try to integrate up-and-coming technologies into our server farms. It should come as no surprise, then, that our team does use a HURD machine as a file/web/application server.

    The HURD machine has been surprisingly stable since we set it up last year. We may have had a few instances where it would get into an undesirable state and need rebooting, but by and large its downtime has been attributable to hardware upgrades and power interruptions. Its integrated userspace/kernel space has provoked us to write some very interesting programs on that box that we would not have been able to create with an ordinary UNIX or clone.

    What's interesting about the HURD is that, despite its departures from many UNIX conventions, its developers are striving to form a clean upgrade path from Linux to HURD. Likewise, many HURD features (like POSIX b.1 capabilities) have made it into Linux in recent years. It's too early to tell, but perhaps the future holds a merging of Linux with HURD in a couple of years.

    ~wally
    • by Anonymous Coward
      I don't know about you guys, but my Troll Detector is going haywire right about now.
      • Hmm, mine isn't. Care to elaborate?
        • by Anonymous Coward
          80% of the /. public seems to think that troll==anything they don't like. In this case, that might have been posting for karma or lying. But, upon cursory examination of Wally's account, it seems unlikely that he'd do either.

          And even so, karma or lying, he wouldn't be a troll.
    • A merging would probably be beneficial. Linux and HURD both have unique and useful features (Linux's biggest advantage being the very large number of open source developers backing it), and I would love to see them together in one operating system. Of course, those who are tired of RMS's "GNU/Linux" should buy one-way tickets to Mars if there's any word on this from the people who control them :) Then again, Linux HURD doesn't sound too bad...
      • The problem being that Linux is largely monolithic in design while HURD is almost completely modular.

        Besides, Linus hates the HURD design.

        • Agreed. You might as well hope for a FreeBSD / Plan9 merge or something on that order.

          Don't forget, people, that Linux is a kernel, not an operating system. The interview mentioned that certain code from Linux was used in Hurd, but a merging of the two is simply not going to happen.

          That aside, with the GPL, Linus has very little to say should someone try and merge Linux and Hurd. :P
  • by Anonymous Coward
    Any sort of microkernel architecture has very little chance of surviving on any Intel-based architectures, as context switching is just too damn expensive (well over 500 cycles to do it properly). If drivers and other speed-critical code can't live all within the same context, there's no way you can get viable performance out of it.
    • Let's be frank. It's too damn slow on any CPU. I think Mach (and Hurd) are really great projects, but they appear to be fundamentally flawed (messaging+context-switch). I'm very disappointed that the interviewer didn't go into any details on the well-known problems associated with micro-kernel design. Also, I've never heard a really good comparison of the micro-kernel approach vs. the monolithic design with loadable modules (I'm sure there is one; I just haven't heard it). I'd really like to hear the Hurd developer's comments on these issues.
      • by Pseudonym ( 62607 ) on Monday November 12, 2001 @08:52PM (#2556701)

        My first comment is that "performance" means different things to different people. To some it means "throughput", that is, the amount of work that the system can do just prior to being overloaded. To some it means how well it can handle overload. To some it means low latency, that is, that the system can respond to an important event quickly. Which one is important for you depends on what you're doing.

        Secondly, you're basing your assumptions on "microkernels" like Mach, which dates from around the same era as the original Windows NT. That's an "old style" microkernel. Back then, we thought that the only advantage of using microkernels was flexibility, so kernels didn't have to be very "micro".

        Nowadays we know that merely reducing the kernel's domain of influence doesn't buy you much. You also need to simplify your kernel to realise performance gains. You do lose something (cost of context switch etc) but you also gain lots too, so it's not so much of a penalty, but rather it's a tradeoff.

        For example, consider this: Linux often has to suspend a task deep inside a system call. If you call read() on a block of disk which is not in the buffer cache, say, you need to suspend until the block is read in from disk. A monolithic kernel may have to do this for a hundred reasons, depending on which modules are loaded. So a context switch consists of dumping registers on the kernel stack then switching stacks.

        Now consider a microkernel. You already know in advance what operations you may have to suspend on, and that number is quite small. (In the read() example, you only suspend on IPC while the server responsible for disk I/O does the hard work.) So you can separate the "front half" of each system call from the "back half". You can come up close to the surface and then context switch if necessary. (Note: You have to do this anyway on a modern system, because a higher priority task may have unblocked during the system call.) Once you've done that, each thread doesn't need its own kernel stack, which makes the context switch a little cheaper, saves memory, makes thread creation cheaper, and the kernel can be made re-entrant, delivering an IRQ in the middle of delivering another IRQ, thus improving latency. It also means you don't need to hack around the problem of signal delivery while suspended. (BSD does this by ensuring that the "front half" of every system call is idempotent, and thus possibly less efficient than it could be.)
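
        To make that concrete, here is a minimal sketch in C of the "front half"/"back half" split. This is not Hurd, Mach, or Linux code; every type and helper in it is hypothetical:

            /* Sketch only -- all types and helpers below are hypothetical. */
            #include <stddef.h>
            #define EFAULT 14

            struct ipc_msg { int op; int fd; void *buf; size_t len; };

            extern int  valid_user_buffer(const void *buf, size_t len);
            extern int  fs_server_port(int fd);
            extern long ipc_call(int port, struct ipc_msg *msg);

            /* Front half: validation and request building.  Nothing here
             * can block, so no state needs a per-thread kernel stack. */
            long sys_read(int fd, void *buf, size_t len)
            {
                if (!valid_user_buffer(buf, len))
                    return -EFAULT;

                struct ipc_msg req = { /* op = */ 1, fd, buf, len };

                /* Back half: the single well-known suspension point.  We
                 * block on IPC while the server responsible for disk I/O
                 * does the hard work; a higher-priority thread can run,
                 * and our kernel stack can be reused in the meantime. */
                return ipc_call(fs_server_port(fd), &req);
            }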

        So you can see that focussing on the cost of context switching alone can be misleading.

        Plus, of course, keep this in mind: if raw throughput was our most important criterion, we wouldn't have virtual memory.

    • Strange. Last time we did measurements here (L4Ka [l4ka.org]), we ended up with 99 cycles on a 450MHz PIII to send a message from one process to another. If one has communication within a single task the numbers are an order of magnitude lower (i.e., about 15 cycles).
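
      Numbers like these are typically read straight from the CPU's timestamp counter. A rough sketch of the measurement idiom in C (not L4Ka's actual benchmark harness; the IPC round-trip under test is left as a placeholder):

          #include <stdio.h>
          #include <stdint.h>

          /* Read the x86 timestamp counter.  A serious benchmark would
           * serialize with cpuid and average over many iterations. */
          static inline uint64_t rdtsc(void)
          {
              uint32_t lo, hi;
              __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
              return ((uint64_t)hi << 32) | lo;
          }

          int main(void)
          {
              uint64_t start = rdtsc();
              /* ... the IPC round-trip under test goes here ... */
              uint64_t end = rdtsc();
              printf("%llu cycles\n", (unsigned long long)(end - start));
              return 0;
          }
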
    • by Sloppy ( 14984 ) on Monday November 12, 2001 @10:41PM (#2556973) Homepage Journal

      "Blahblah is to slow" arguments are lame. Microkernels are too slow. Java is too slow. 3D graphics are too slow. GUIs are too slow. Virtual memory is too slow. Accessing files over a network is too slow. Calling the OS instead of directly banging the hardware is too slow. *yawn*

      Upgrade your 386SX to a new Athlon, dude. (Or better yet, a dual Athlon -- one less context switch ;-). Then nothing is too slow anymore. You can run CPU-bound stuff continuously at 87.5% utilization and your computer will still be just as fast as what you had 6 years ago, which was already overkill.

      500 clock cycles, with 2 billion clock cycles per second, that works out to... a good-looking excuse to blame things on the next time you get fragged in Quake.

      What's really funny is that you'd dare to say that something with a slight performance decrease has a "very little chance of surviving on Intel-based architectures." And yet just last week, I saw someone at my office spending way too much time, struggling to copy a bunch of files with MS Windows' explorer shell. I guess Windows has very little chance of surviving too. Unless .. wait .. unless maybe people don't care? Could it be?!?

  • I graduated UMASS Lowell with a Neal Walfield recently so I was curious as to whether it was the same person.
  • by Elwood P Dowd ( 16933 ) <judgmentalist@gmail.com> on Monday November 12, 2001 @07:24PM (#2556390) Journal
    Hmm... He says HURD is slower than it should be, and it will get better. I was under the impression that it was doing some high-order operations, and that was why it was going to be slow. Eventually with high-order operations, don't you sort of hit a brick wall in optimization? For example, I thought they were using the very expensive (computationally) Mach messaging system for some of their features. Is this the case? Is their switch to L4 going to improve this issue, since Mach's speed issues are related to operation order? I dunno. I guarantee that I'm less educated in this subject than the majority of /. readers, so can someone elucidate?
    • I guess this will drive the DirectFB people nuts. They are complaining about how X is slow (when the problem is actually XFree86), and upon hearing this they shall say: we don't need another slowdown... "we need to specialize, not generalize!"
    • Re:Hurd Speed (Score:2, Interesting)

      by Ivan Raikov ( 521143 )
      I believe QNX [qnx.com] has a philosophy similar to the Hurd -- most of the traditional OS facilities there are moved into user space. Nevertheless, QNX is a leading hard real-time operating system with a very small footprint and an elegant architecture.

      Which makes me really excited about implementing real-time software in Hurd.
      • Re:Hurd Speed (Score:3, Insightful)

        by Pseudonym ( 62607 )

        Mach is an old-style microkernel. It comes from the same era as Windows NT.

        QNX/Neutrino is a modern microkernel which comes from the same era as BeOS.

        There's no comparison. Mach is big and tries to do too much, even for a microkernel. But it comes from an era when we thought that the most important advantage of microkernels was flexibility. We now know that by making them very "micro" they can give performance too.

        You won't find hard real-time in Hurd any time soon. Not as long as they're using Mach and allow use of Linux device drivers, anyway. Hard real-time needs to be designed in everywhere, from the driver to the kernel to the application.

        The world does need a free real-time general-purpose OS, you're right. Real-time is becoming ever more important even in server applications (e.g. ATM routing, streaming media). You won't find it in the Hurd, but there are one or two projects happening in relative secrecy at the moment. Watch this space.

        • Re:Hurd Speed (Score:2, Interesting)

          by SerpentMage ( 13390 )
          What is interesting to note is that the original design of Windows NT with its servers was abandoned in NT 4.0. And from what I know, in XP it does not even exist. While HURD did sound very interesting and many of its concepts sounded good, I think speed is still an issue.

          Many people say computers get faster. Sure, I agree, but software gets slower as well. The real question is whether the software gets slower faster than the CPU speeds up. For example, while a C++ GUI is hard to program, C++ GUIs are miles faster than any Java GUI. On top of that, C++ GUIs are nicer than Java's. Hence Java has not made it on the desktop.

          Folks, sometimes you need the speed!
          • What is interesting to note is that the original design of Windows NT with its servers was abandoned in NT 4.0.

            Well, some of the servers are still there, but yes, you're right. NT until version 3.51 used to be an old-style object-oriented microkernel OS not unlike Mach (only better than Mach). Now it's a kind of a bastard child of microkernel and monolithic kernel, retaining some of the benefits and most of the drawbacks thereof.

    • Re:Hurd Speed (Score:4, Insightful)

      by aheitner ( 3273 ) on Monday November 12, 2001 @09:13PM (#2556760)
      (caveat: I think microkernels are silly firewalls in code that should be correct and trusted)

      There are going to be speed problems with any microkernel-based OS -- OS X is not necessarily exempt either.

      Basically, if you spend a lot of time copying data between address spaces of different chunks of the kernel, you're going to pay for it. If you have to switch address spaces to switch kernel tasks, you're going to pay for it (in cache misses).

      Even in a monolithic-kernel OS (which will always be superior, if you assume the parts of the kernel are well-enough written that they can be trusted by other parts of the kernel), you have some cost moving data from userspace to kernel space. You can get around that in clever cases -- Linux does this with zerocopy networking, passing sets of (physical) pages around and dumping them directly to the card driver.

      As Linus once said "Mantra: `Everything is a stream of bytes'. Repeat until enlightened." In other words, any obsessiveness that gets in the way of moving streams of bytes around extremely efficiently is not good architecture. Message passing (and separate address spaces for kernel "server" modules) fall into this category.
      • First: You don't need to spend any more time copying data between address spaces in a modern microkernel OS than in a monolithic-kernel OS unless you've designed it badly. For example, it's lunacy to put your disk driver and file system driver in different address spaces. Performance-conscious microkernels do what monolithic kernels do: use dynamically loaded objects. The difference is that the disk server dynamically loads the filesystem it's going to use, or the network server loads the NIC driver and protocol implementations. There's no reason at all why a microkernel OS can't use zero-copy networking. (I think that BeOS even did zero-copy sound in some situations. Digitised data would come in off the soundcard and go straight into the mixer. Try doing that in Linux without hacking the kernel.)
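
        To illustrate the "dynamically loaded objects" point, here is a sketch of a userspace disk server pulling a filesystem implementation into its own address space with dlopen(), so disk-to-filesystem traffic never crosses a protection boundary. The plugin interface here is hypothetical, not any real microkernel's API:

            #include <dlfcn.h>
            #include <stdio.h>

            /* Hypothetical interface a filesystem plugin would export. */
            struct fs_ops {
                int (*mount)(const char *device);
                int (*read_block)(unsigned long blkno, void *buf);
            };

            /* Load a filesystem into the disk server's address space,
             * much as a monolithic kernel loads a module. */
            struct fs_ops *load_filesystem(const char *plugin_path)
            {
                void *handle = dlopen(plugin_path, RTLD_NOW);
                if (!handle) {
                    fprintf(stderr, "dlopen: %s\n", dlerror());
                    return NULL;
                }
                /* The plugin exports one well-known symbol. */
                return (struct fs_ops *)dlsym(handle, "fs_ops");
            }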

        As for context switches, true, you have to do more of them, but you get performance gains elsewhere as a result, as I have noted previously [slashdot.org].

        If you need a microkernel mantra, here it is: "It's not a penalty, it's a tradeoff." Repeat until enlightened. :-)

        • Hmmm. I thought that they were doing exactly that. A lot of copying between servers by using the Mach messaging system. Maybe not between the filesystem and the disk drivers, but I'm sure some of those servers do a lot of communication. Now, I have *no* idea if any of that communication is something that would have to be copied in a monolithic kernel. Perhaps all of it would. I realize that Mach messaging was a well thought out tradeoff, and I don't think that it's a bad idea for the HURD. They certainly took their time thinking about it :) I'm just curious if further optimization is really going to improve their speed a lot. I can't imagine that Neal would say it's going to improve if it isn't, so I imagine that some of my post is just plain incorrect. Is the messaging going to slow them down, or not?
          • Just to clarify, aheitner's assertion was not that Hurd or Mach's IPC is slow, but that any microkernel-based OS will be slow compared with a monolithic kernel-based OS. I don't know enough about Hurd to comment specifically, but there are plenty of modern microkernel-based OSes which are competitive in this area (e.g. L4, BeOS, QNX).

          • Re:Hurd Speed (Score:2, Interesting)

            by jbailey999 ( 146222 )
            One of the biggest speed problems we face right now is how expensive fork is. Every time something forks, all the ports (and send rights) have to be copied from one task to another (many, many RPCs for that). If that's followed by an exec (which then has to clear them), it's quite expensive.

            The solution for that is to use posix_spawn (in the latest POSIX drafts). This signals that a new task can be set up cheaply. Hopefully when bash, make, and gcc use that, we'll see a huge improvement in speed.
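
            For illustration, replacing a fork()+exec() pair with posix_spawn() looks roughly like this (a generic POSIX sketch, not Hurd-specific code; the run() helper is made up for the example):

                #include <spawn.h>
                #include <stdio.h>
                #include <sys/types.h>
                #include <sys/wait.h>

                extern char **environ;

                /* Launch a program directly: the system knows up front
                 * that nothing a fork() would have copied (ports, send
                 * rights, ...) is going to be kept. */
                int run(const char *path, char *const argv[])
                {
                    pid_t pid;
                    int err = posix_spawn(&pid, path, NULL, NULL,
                                          argv, environ);
                    if (err != 0) {
                        fprintf(stderr, "posix_spawn: %d\n", err);
                        return -1;
                    }
                    int status;
                    waitpid(pid, &status, 0);
                    return status;
                }

            e.g. char *argv[] = { "make", "all", NULL }; run("/usr/bin/make", argv);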

            So far raw execution speed seems fine. I don't use X (since mostly my machine just sits and compiles binaries), but even when it's going full tilt, it's quite usable on a PII/233MHz. (Multiple ssh sessions, irc.)
      • A completely impractical assumption (and, incidentally, one that is spectacularly incompatible with Open Source, at least, open source written in C).

        When was the last time you used a kernel that was really monolithic, one that had been built, supplied and tested as a unit by a common engineering team? The fact is that all modern systems are supplied in untestably complex configurations which, if reliability is not to be compromised, must be able to protect themselves from problem components.

        If the design choice was really between copying memory and passing pointers that allowed the receiver to stamp all over the sender's address space then life would be rather depressing. However, in the absence of hardware features like capabilities and Multics-style protection levels, there is a solution in the form of a safe, intermediate language such as Java bytecode. This way, you only have to trust your VM/JIT compiler for basic address-space integrity.

        Slow? Well, device drivers probably shouldn't be the first part of Linux to be bullet-proofed in this way, but for serious components (think KDE applications currently using DCOP etc.) the VM can easily outperform native code, because it can optimize the execution path *across* separately loaded components, and eliminate null procedures such as unused access checks, RPCs for local objects etc.

        Linux (and Linus, by the sound of it) need to wake up to the power of VMs. MS apps will soon no longer be tied to x86, Java is still growing, while efforts that could be used for Linux (Perl/Python and a few LISP engines) are niche environments, to say the least.

        Anybody that believes Linux is still going to grow when it possesses zero inbuilt protection and requires apps to be manually recompiled for every platform variant is living in cloud-cuckoo land.

        --
        alex
      • Hi all;

        I think that micro and monolithic kernels each have their place. For my PC, though, a monolithic kernel probably meets my needs best. Also, I will be referring to monolithic kernels as "monokernels" even if this is not technically correct ;)

        Microkernels often beat out monokernels when it comes to Really Big Servers and Supercomputers, in part because SMP is much more difficult to do on monokernel designs. I suspect that this is why UNICOS/mk is a microkernel (of course, it is from Cray).

        As Linus once said "Mantra: `Everything is a stream of bytes'. Repeat until enlightened." In other words, any obsessiveness that gets in the way of moving streams of bytes around extremely efficiently is not good architecture. Message passing (and separate address spaces for kernel "server" modules) fall into this category.

        Exactly, but your considerations for that 64-proc supercomputer or mainframe are different than your considerations for your 1-proc workstation, aren't they? Microkernels may be more efficient for the former, but monokernels are more efficient for the latter.
  • by PoiBoy ( 525770 ) <brian@poihol[ ]gs.com ['din' in gap]> on Monday November 12, 2001 @07:28PM (#2556409) Homepage
    I'm not a kernel hacker, but I play one on Slashdot. :-)

    Seriously, having read the interview, it seems like Hurd does some interesting stuff, removing features that are part of the kernel in other Unix systems and moving them into userspace.

    The real question, though, is whether we need an entirely new operating system to gain these features or whether they could instead be implemented into the standard Linux kernel. Unless they can really get a large group of people starting to develop and use it, it may go the way of the buffalo. By working on getting their changes into Linux, however, they would have a much larger userbase to start from.

    • by Anonymous Coward
      It isn't "an entirely new operating system", it's a replacement for the kernel.
    • by Pseudonym ( 62607 ) on Monday November 12, 2001 @09:13PM (#2556757)

      Nope. The Linux developers are hell-bent on sticking to their monolithic design. Even if you could develop the Hurd as a set of patches, they would never make it into the "standard Linux kernel". (Curious use of the word "standard", BTW.)

      The rift of Hurd vs Linux is like vi vs emacs. Vi and Hurd are meant to be small tools designed to work in conjunction with other small specialised tools, the whole being greater than the sum of the parts. Emacs and Linux are meant to be "all features under one roof".

      Actually, that's a good way of looking at your question. Asking "can't you just implement these features in Linux?" is like asking "why do you need all those POSIX commands like diff(1); can't you just implement that in Emacs?" The answer is "yes", but would you want to?

      • I would say Emacs follows the Hurd philosophy: a small kernel (a Lisp interpreter; yes, Lisp is small) that serves as a base for implementing a lot of modules that interact with each other via a known protocol.

    • > implemented into the standard Linux kernel.

      Nope, that would undermine the greatest thing about the Hurd: the ability to do lots of things as a regular user.

      What we need is a standard for device drivers, so you've got one source for the *BSDs, Linux and other OSes. And one binary for all OSes running on some microkernel.

      Software just needs good design.
  • Since HURD is the official kernel of the GNU OS project, isn't "GNU" or "GNU OS" a sufficient name for the operating system?
  • I'm still trying to figure out who exactly uses Hurd??? The microkernel stuff is very interesting, but who is using it? What's its device compatibility? Binary compatibility? I think the idea of servers having no privileges is interesting; it definitely provides for a better security model, but how many NICs does it work with? I see that it has Linux driver compatibility glue, but that still doesn't mean it works with all of the drivers... All in all, it is probably a great server platform, but for desktops/workstations, it doesn't sound too hot. At least that is my take/opinion on it. I've not actually used it, so my opinion is worth a grain of salt, but hey... think what you want.
    • Somehow, I don't think the GNU folks give a rat's ass about binary compatibility. With free software, really all you need is POSIX compliance. (If you want binary compatibility out the yinyang use BSD.) Also, the HURD team can theoretically port Linux and BSD device drivers, the source code is all there. In fact, the OSKit from the University of Utah does just that. The number of users is actually less important for long term survival than the number of maintainers, which as far as I can see is the only real problem the HURD team will run into.
    • Dude, for Desktop/WS any OS will do.

      It's just that powerusers like a flexible system, which GNU provides.
  • Why does he keep saying that only root can mount filesystems? Sure, that's the default mount behavior, but certainly not the rule.
    • In Linux there are some suid and configuration exceptions, but they all imply getting root in some form, usually suid.

      He is saying that users can mount any filesystem at any place they wish (given they have the permission).

      Linux does have some of this ability coming, though, with GNOME's VFS. But this is not quite the same thing.
  • The whole micro-kernel idea exists in MINIX, with user-space FS, MM, etc.

    The authentication part looked nice, but I thought I saw a contradiction: he first speaks of the safety of the system because the authentication daemon ups privileges, and second talks about a user-owned authentication daemon which is secure because cracked passwords cannot be used on daemons outside this user's space. This would imply that the public authentication server is also hackable in a way that lets authentication tokens be obtained illegally.

    Nevertheless, I like the removal of the root access necessity for a lot of stuff.
    • When he spoke of user-owned authentication daemons being secure because the cracked passwords are unusable in other places, that's exactly what he meant.

      A user-owned authentication daemon is not just user-owned. It could also be user-written. You can't guarantee the security of something a user writes. So, it can be cracked, possibly. And, you can get passwords and tokens from it, possibly. However, they do you no good in any other authentication daemon.

      The main, global, authentication server for the OS should be very damned secure.

      Justin Dubs
      • I don't think you understand my point. Which is that the authentication system is still vulnerable to attacks where tokens are illegally obtained. Therefore a buffer overflow will not make you root, because the server isn't running as root, but it might give you root's (or any other user's) password, which has the same effect.
        • It would not give a root password. It would give a root token. That token, however, would only be valid on that one authentication server, so it would only make a difference if you wanted to do horrible things to programs that use that authentication server as their primary authentication server. Regular programs would still default to the system's authentication server which would laugh at your supposed root token.
          • I'm still not getting through.

            You would get a root token on the SYSTEM'S authentication server, therefore granting you root access to the complete system.

            The authentication scheme can prohibit you from becoming root by overflowing the authentication server and 'becoming it', because it doesn't run as root. It doesn't prevent you, however, from overflowing it and getting the root token, after which you can abuse that one accordingly. Therefore this scheme only partly solves UNIX's inherent security problems.
  • Is the assertion that Hurd isn't "production grade" official? Or just the consensus of its users? Or just the opinion of the interviewer?

    Better hope it's the last one. Anything else reflects very negatively on Project GNU's ability to make actual friggin progress. They've been working on Hurd since 1991!

    • I don't know what you gathered from the interview, but if you ask me, I was mighty impressed by the ideas in it. Hurd, to me, seems like an excellent idea.

      As far as GNU's ability to deliver is concerned, what about that editor you use (emacs)? What about tools like make, flex, yacc, et al.? Get real: GNU has delivered so much to the computing community, and for free.

      Honestly, getting the Hurd up 'n' running has not been as important since we already have Linux. After Linux, there was no urgent need for a Free OS (which is what GNU was really all about at that time).
      As far as having a robust OS is concerned, we already have Linux. Whatever Hurd is going to be, it is going to be well thought out and based on good new ideas that are markedly better than conventional UNIX. Hurd is not there to replace Linux; the project exists solely to get a new kind of OS out.

      Read the interview. It's good.
      • I don't know what you gathered from the interview, but if you ask me, I was mighty impressed by the ideas in it. Hurd, to me, seems like an excellent idea.
        Excellent ideas are a dime a dozen. World peace, an end to hunger, a home computer that a non-techie can install and administer -- these are all excellent ideas. But in the real world, you don't judge an effort by what it wants to do. You judge it by what it accomplishes. There's a word [dictionary.com] for this kind of gap between conception and reality.
        As far as GNU's ability to deliver is concerned, what about that editor you use (emacs)? What about tools like make, flex, yacc, et al.? Get real: GNU has delivered so much to the computing community, and for free.
        To my mind, EMACS represents a lot of what's wrong with GNU software. Lots of Wonderful Big Ideas, but little attention to practical design.

        And I'm sorry, but the basic GNU software set is a disaster. I speak from personal experience. I've been hassling with their bugs for years. Ten or so years ago it was hassling with the official port of GNU source control to DOS -- done by somebody who didn't understand FAT filesystem semantics. Last year I had to sweat blood to deal with the reference counting bug [redhat.com] in glibc. This bug was eventually fixed -- after glibc maintenance moved from GNU to Red Hat. What can you say about a team that takes so long to fix such a basic bug? Aside from the fact that it has Lots of Really Great Ideas?

        Face it, GNU has "succeeded" only because you need it to do anything useful with the Linux kernel.

        Hurd is not there to replace Linux
        Well, of course not. Hurd has been around much longer, if you count its Project Mach origins. If Hurd had had its act together back when LT was a grad student, he probably never would have bothered to write the Linux kernel. Probably just as well...

        Look, if Hurd is so wonderful and important, then you should want something serious to happen with it. That is not going to happen if all its supporters just stand around saying "cool!" And it's certainly not going to happen if nobody asks why this project has been chasing its own tail for so long.

    • I would say that the current GNU/Hurd system is about as stable/everyday-useful as Slackware was when it came on 12 floppies (without X11) and used Linux 1.3.x, some time in the mid-1990s.

      At the time, as soon as I discovered Slackware, I thought it was great and switched to it right away for "production" work.

      The trouble is that now we are used to Linux's stability and featurefulness, and it's easy to look at the Hurd with jaded vision.

      So it's relative. GNU/Hurd as it is now would have been considered fine for production work in 1995. Not to mention that GNU/Hurd now has Debian infrastructure... Is that good enough? For some, I'd say yes.

      Anyway, it's way more stable than Win98 and is getting better much faster than it used to.

      Roll on woody+1.
    • Actually they've been working on it longer than that. In 1987 [gnu.org] they started "negotiating with Professor Rashid of Carnegie-Mellon University about working with them on the development of the Mach kernel". 1991 is only when they started working on a detailed plan. It still took them until 1994 before they got to the milestone of "it boots".
  • HURD vs Plan9 (Score:2, Interesting)

    by line-bundle ( 235965 )
    I have been looking at Plan 9 from Bell Labs recently. Has anyone here used both Hurd and Plan 9 enough to give the advantages of each?
    • Re:HURD vs Plan9 (Score:1, Informative)

      by Anonymous Coward
      Plan 9 is a whole OS; the Hurd is, well, the Hurd. Basically on Plan 9 *everything* is a file and accessible as such... even things like the mouse coordinates/buffer.
      • Re:HURD vs Plan9 (Score:3, Interesting)

        by spitzak ( 4019 )
        Actually I think Plan 9 is a very good idea, and may be a much better approach to making a microkernel. You could consider Plan 9 to be doing "message passing", but the set of messages is limited (to 17, I think). I.e., there is a "read" and a "write" and a "walk" and several others. Each of these "messages" has a very fixed set of arguments: some have a big block of memory that needs to be sent to the message receiver, some the other way around, some return a new message pipe, etc. Since there is this limited set of message types, they can each be hand-coded for maximum efficiency.

        This limited set of "messages" also makes it trivial to insert filters between components. In Mach I believe any kind of filter will have to interpret the entire message description structure, right?
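
        For reference, the fixed message set of the later 9P2000 revision of the protocol looks roughly like this as a C enum (names follow Plan 9's fcall.h, where each constant also gets an explicit numeric value; the exact count differs between protocol versions):

            /* 9P2000 message types: each T-message (request) pairs with
             * an R-message (reply); Rerror has no T counterpart. */
            enum {
                Tversion, Rversion,  /* negotiate protocol version */
                Tauth,    Rauth,     /* authentication */
                Tattach,  Rattach,   /* attach to a file tree */
                Rerror,              /* error reply */
                Tflush,   Rflush,    /* abort an outstanding request */
                Twalk,    Rwalk,     /* walk the name hierarchy */
                Topen,    Ropen,     /* open a file */
                Tcreate,  Rcreate,   /* create a file */
                Tread,    Rread,     /* bulk data, server to client */
                Twrite,   Rwrite,    /* bulk data, client to server */
                Tclunk,   Rclunk,    /* forget a fid */
                Tremove,  Rremove,   /* remove a file */
                Tstat,    Rstat,     /* read file metadata */
                Twstat,   Rwstat     /* change file metadata */
            };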

  • From the interview: The GNU/Hurd, as a desktop system is quite usable, albeit, a bit slow. In terms of stability, there are not many major crashers. Which is to say, an uptime of over a week is quite possible.

    In this age of MS-think, that means it's time to release it!

    That said, I would not recommend using the GNU/Hurd on a server. At least not yet.

    Hmm, that never stopped our friends in Redmond.

    (Seriously, though, an interesting interview.)

  • Can somebody please explain what was meant when the HURD coder said something about having wine, but not other non-prescribed drugs? I think it would probably be helpful to a number of people.
    • Neal's comment about wine & drugs is a reference to some of the tasteful and on-topic remarks made by Linus about this article [iu.edu] on the kernel mailing list.

      The important parts:
      (...)

      Trust me. The people who came up with MAP_COPY were stupid. Really. It's an idiotic concept, and it's not worth implementing.

      And this all for what is a administration bug in the first place.

      In short: just say NO TO DRUGS, and maybe you won't end up like the Hurd people.

      Linus


      Charming.

      fsmunoz
    • Never mind. I clicked on the link.
    • And the cool part is that the interviewer was trying to get him to start some kind of argument. But instead of getting defensive or fighting back with something he didn't like about Linux or whatever, he just sidestepped it, and humorously too. Well done.
  • by Jack Wagner ( 444727 ) on Monday November 12, 2001 @08:39PM (#2556667) Homepage Journal
    When I was doing some contractor work for a huge *nix shop (think purple) I met a fellow who told me an interesting tale. It seems this huge *nix company (think purple) had actually spent a week with RMS and some of the HURD developers to talk with them about using the base code from the HURD for a project they were kicking around. The company would have been willing to give back some of the code, under a community-type BSD license, which would have brought the HURD up to a version 1.0 level. Now bear in mind I got this story second hand, but the guy who told me was a very reputable source who had been part of the compiler team for years there. He let it slip out while we were discussing the flaws in the BSD threading model, and once the cat was out of the bag he spilled his guts.

    Anyways, the long and the short of it was that RMS threw a giant hissy fit about the license, so they never did business together. It seems that RMS can't see the forest for the trees sometimes. Instead of giving the community a rock-n-roll new kernel, he decided to cut off his nose to spite his face.

    Yours,
    -Jack

  • by Anonymous Coward on Monday November 12, 2001 @09:13PM (#2556758)
    I've got a GNU/Hurd machine right next to me compiling Emacs as I write this, but I'm no expert. Take the following with a salt shaker, if you like :)



    A few people have mentioned trying to merge Linux with the Hurd. For many reasons, this probably won't happen, and would probably detract from some of the advantages the Hurd's design offers. For example, Neal Walfield mentions in the interview that there's a fellow who has succeeded, by himself, at substantially porting the Hurd to the PowerPC architecture. He took OSFMach from the MkLinux project, slightly modified the four core servers and libc, and had a system capable of running bash, fileutils, and I think some other standard apps. This feat confirms the portability of the Hurd's design, which might not be as easily accomplished with the Linux kernel. I don't know Linux's internal arrangement very well, but I have read comments [alaska.edu] of Linus's to the effect that kernel development shouldn't be easy. While writing Hurd servers or an implementation of Mach isn't particularly easy, it looks as though the portability and modularity promises of the microkernel advocates may be borne out. In addition, at least one fellow has succeeded at running Lites, the BSD single-server, alongside the Hurd on a machine running Mach. In principle it should be possible to run the MkLinux single server in a similar way atop Mach, perhaps concurrently with the Hurd. This would be similar, according to the Hurd's developers in a recent list discussion, to the virtual server capabilities discussed last week someplace [slashdot.org].



    The Hurd accomplishes this while remaining POSIX compliant, sufficiently so to make the user experience indistinguishable from standard *nixes. At first my biggest disappointment with the Hurd was that nothing much seemed different. All the standard utilities were there, I got X working (though I don't use KDE or GNOME -- just windowmaker), and found myself somewhat surprised that most of what I need to do I can get done with my GNU/Hurd machine. This seems to have been accomplished by about ten or so kernel developers plus maybe fifty application porters over a long time; naturally if the user and developer bases were larger, things would be farther along.


    My GNU/Hurd system is, however, slow. I haven't done any careful tests, but it feels sluggish at times. File access and network operations are fairly slow; similar operations are noticeably faster with Linux. There's a lot of driver support missing [e.g. no sound :( ], which will be a problem for the foreseeable future.


    Anyway, it's not quite there yet, but things are coming along, both feature- and performance-wise. It's worth trying out, if you've got a spare PC with a gigabyte of disk or so.

  • by nadador ( 3747 ) on Monday November 12, 2001 @10:14PM (#2556909)
    Microkernels are slow, therefore they are bad.

    Right, right.

    As others have noted, there's no way for a microkernel to be as speedy at flipping bits around as a monolithic kernel, copying between address spaces and everything. Apple attempts to mitigate some of those costs by keeping all their Mach threads in one address space, IIRC, but even with that speed up there's still some overhead.

    But that doesn't mean that microkernels suck.

    Apache doesn't serve static pages as fast as other web servers. It doesn't serve dynamic content as fast as some servers. But people use Apache for other reasons, things like configurability, extensibility, and support. And because the only thing you *can't* do with an Apache module is make babies.

    Microkernels are interesting to computer scientists because they allow abstraction in the kernel, and God only knows that there's no word that computer scientists like more than "abstraction". Microkernels, for all their faults, are just plain prettier. And as research continues into microkernels and how to mitigate their many flaws, there might come a time when the extra processing they require might be worth it. Maybe all the abstraction and objectiness will be worth it to some system designer in the future.

    At some point, there may be people who will be willing to trade some latency and throughput for extensibility and configurability in their kernels. They might be willing to trade some clock cycles for the ease with which they can implement different security policies, ala HURD. The point is that not everyone needs the same kernel, and not everyone needs the same kind of kernel.

    And competition is good for them both. The HURD developers are encouraged to speed up the kernel, thanks to Linux. Linux kernel hackers will eventually desire some of the design niceties of the HURD kernel; they just won't admit it on the LKML.
    • Read the Linus vs. Tanenbaum thread [www.dina.dk]. While it is a comparison of Linux and Minix, it gives you a pretty good indication of how Linus feels (or felt at the time) about microkernels in general. Here is one quote (my bold): "True, linux is monolithic, and I agree that microkernels are nicer. With a less argumentative subject, I'd probably have agreed with most of what you said. From a theoretical (and aesthetical) standpoint linux looses." --Linus

      The thing is, Linus does care about speed difference. A lot.

  • To sum things up (Score:3, Interesting)

    by Waffle Iron ( 339739 ) on Tuesday November 13, 2001 @12:02AM (#2557188)
    The main points from the interview seemed to be:

    The HURD isn't popular yet because

    • It's still slow
    • It's still buggy
    • It's POSIX compliant in theory, but not as used by the real world
    • Hardly anyone is working on it
    • There hasn't been an official release in 4 years (that's 83 years in Internet time)
    • There is no prospect of a date for any further official releases
    OTOH, the HURD is cool because:
    • It's a microkernel
    • It has granular security
    Except for the last three bad points, the HURD sounds a lot like the Windows NT kernel. In fact, the biggest difference could be bad point #4 (since #5 and #6 flow from #4).

    Maybe instead of reinventing the wheel, they should just use the NT kernel with the GNU runtime tools and release GNU/NT.

  • If it is possible to implement the Hurd with any underlying microkernel, isn't it essentially possible to provide Hurd functionality with a macro-kernel as well? Layer your Hurd servers around the macro-kernel as user-space apps, disable much of the macro-kernel functionality, and still suffer from not doing stuff in kernel space. This is possible, isn't it? Wouldn't this allow for the "enhanced" security model which is at the heart of the Hurd? In other words, why does one have to re-invent the wheel to accomplish what the Hurd does, as well as work alongside existing systems, and not require a "compatibility" layer? The interviewer admits the Hurd suffers from a scalability standpoint.

    Also, isn't it possible to provide the security contexts the Hurd offers through projects for the Linux kernel, such as what the NSA is working on with SELinux and SIDs? (Known in the article as "tokens to perform specific tasks".) This is perhaps a different solution to the problem, but the result appears to be the same...
  • A point I've not seen anyone make is that the Hurd scales better than Linux ever will, and I don't mean in terms of hardware, I mean in terms of development.

    Vast wars already take place about what should or should not be in the Linux kernel. The rate of progress on all of these fronts together is approaching glacial. There is just no easy way for a small team to keep a grip on a monolithic kernel that is being pulled in different directions by different developers. It has taken years for a journaling filesystem to be accepted into the kernel.

    The Hurd neatly sidesteps all of these issues by not requiring any of these things to be in the kernel. Useful functionality like reiserfs and tux can be developed and released without any requirement for them to be "accepted" by any controlling group. At the moment, the only way to do something similar with Linux is to release a patch. This makes Linux much more susceptible to forking.

    In case anyone disagrees with me on the forking issue, consider that Linux already has a very high profile fork: RedHat. RH have been distributing a forked kernel for a long time. Admittedly they only patch the kernel in minor ways, but nonetheless they maintain an alternative version. This is because if their views on what they want out of the OS differ from the kernel admins they have no choice but to fork.

    But a RedHat version of GNU/Hurd could be composed of whatever OS servers they choose without any patching. They could mix and match the kernel distribution in much the same way as they currently mix and match the userspace distribution. Because most of the Hurd kernel *is* a userspace distribution.
  • How does Hurd compare to xMach?
