GNU is Not Unix

GNU/Hurd Delayed To Fix Disk Size, Serial I/O Limitations

gregger writes "This Infoworld article indicates that the GNU/Hurd is still waiting to stampede. Evidently they have to switch from the GNU Mach implementation they're using now to OSKit's Mach which will help them support faster serial I/O and larger hard discs. Currently GNU/Hurd will only support somewhere between 1 to 2 GB partitions."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Thursday November 07, 2002 @10:26PM (#4622324)
    I'm shocked! SHOCKED!
  • HURD has been in the works for over 20 years. This is starting to sound like "delays" on the Big Dig.


    • And after "20 years," they switch to someone else's microkernel...
    • Re:Delayed??? (Score:5, Interesting)

      by gmack ( 197796 ) <gmack@@@innerfire...net> on Thursday November 07, 2002 @10:41PM (#4622425) Homepage Journal
      I actually watched Stallman speak in Montreal recently. One interesting tidbit was that he still seems dumbfounded about the fact that the Linux kernel beat them into production, even though one of the supposed advantages of a microkernel is ease of design, and Mach already had half of the work done.
      • Re:Delayed??? (Score:5, Insightful)

        by Servo ( 9177 ) <dstringf@noSPam.tutanota.com> on Thursday November 07, 2002 @10:57PM (#4622515) Journal
        The biggest issue is that Stallman is an idealist. Torvalds just wanted a working Unix-like OS.
      • Re:Delayed??? (Score:3, Interesting)

        by Zeinfeld ( 263942 )
        I actually watched Stallman speak in Montreal recently. One interesting tidbit was that he still seems dumbfounded about the fact that the Linux kernel beat them into production, even though one of the supposed advantages of a microkernel is ease of design, and Mach already had half of the work done.

        I have to think that the Hurd is a case of following the fashion rather than evaluating the microkernel technology on its merits.

        There are lots of folk out there who will blather on at great length about the merits of kernel design for absolutely no other reason than they think it makes them look clever.

        I have not done anything at the O/S level since writing one ten years ago (unless you count the Web as an O/S). At that time Mach was flavor of the month because OSF and NeXT had used it as the basis of their operating systems, and Cutler had used a lot of the concepts of Mach to design his follow-on operating system to VMS. Then Rashid joined Cutler at Microsoft in a very high profile move.

        So yes microkernels were flavor of the month ten years ago. However the reason why they were flavor of the month had more to do with the politics and problems at the time.

        OSF was trying to build a kernel quickly to compete with System V. Microsoft was building Windows NT to get to market as fast as possible. Microkernels were touted as the equivalent of RISC in CPUs, a design that allowed for shorter development time and hence faster to market.

        Microsoft had another issue: they wanted to be able to emulate other O/S, in particular POSIX, so they could sell in the federal market. They also wanted to be able to migrate VMS to run on WNT at a later date as a subsystem. This is actually in the works now and will take place when HP transitions from Alpha to Itanium on the high end server line. One of the reasons Microsoft was keen to do this is that Cutler and his principal staff had left DEC after DEC cancelled the Prism project; Cutler's stated objective at the time was to make DEC have to pay for the O/S they could have had for free. At the time DEC was bigger than Microsoft.

        There are advantages to microkernels, but the NT design has not been pure microkernel for some time. In order to get acceptable performance on early hardware they had to allow the display drivers to run in kernel mode.

        The problem that I think will prevent HURD ever working is that to build a real O/S you have to really understand the reasons behind the principles you follow and break them when necessary. RMS is unfortunately a prisoner of many dogmatic beliefs which once fixed he simply will not abandon regardless of the evidence.

        Linus may or may not have known what he was doing when he had the argument with Andy Tanenbaum, but he made the right decision. Andy has written a lot of good books that are widely used as textbooks; I don't know if people like Cutler, Rashid, Hoare and Co would rate him as being in the front rank. It is the same situation in most fields: everyone has heard of Bruce Schneier, fewer have heard of Ron Rivest, and only people in the field tend to know names like Paul Kocher (SSL 3.0, the one that works), Butler Lampson (ACLs, lotsa stuff), Clark (end-to-end principle), Bellovin (firewalls), Schiller (IETF Security Area director).

        Offtopic: Mark Goldston, CEO of United Online (Juno/blue light), is a clueless dweeb; he just tried to tell Mark Haynes on CNBC that cable modem router boxes are not a threat to his business as few people can afford them... Not only are WiFi cable routers $100 at Fry's, they will be built into the cable modems soon. So either he is uninformed (unlikely) or another lying CEO.

  • cool! (Score:5, Funny)

    by Anonymous Coward on Thursday November 07, 2002 @10:27PM (#4622331)
    I hope it will be able to run the new Mosaic software. Have you guys seen that? It's like Gopher but you can add pictures, change the font size, etc.
    • Re:cool! (Score:5, Insightful)

      by paladin_tom ( 533027 ) on Friday November 08, 2002 @01:38AM (#4623310) Homepage

      This isn't the right way to measure the "goodness" of a system. The Hurd has concepts that are actually innovative.

      If you're going to say that the Hurd sucks because it doesn't support some piece of hardware or software, then *damn*, Linux really sucks... and it did even more so at version 0.2. Gee, what am I doing... where's a Windows box? Win 98 must obviously be superior to all these Free/Open Source systems, with all the hardware and software it supports.

      • Re:cool! (Score:3, Funny)

        by Anonymous Coward
        Well, the PROPER way to measure the goodness of a system is by a simple qualitative measure known as...

        wait for it..

        *B*F*I* the BEARD FULLNESS INDEX

        Let's compare the major players in our discussion:

        HURD - Richard Stallman has a rich full beard. So.. BFI_stallman = BFI_hurd = 10

        WINDOWS - ever seen a picture of that Paul Allen dude when he was younger? BEARD! What about Bill Gates? Well, he always looked like a faggot or an old ugly woman, no facial hair. So...
        BFI_allen = 7,
        BFI_gates = -1, and therefore it follows that
        BFI_windows = (7 - 1)/2 = 3.

        LINUX - Linus is beardless (but nowhere near as faggly as Gates) .. Alan Cox on the other hand .. whew. He gets an 5.6 on the beard-o-meter, due to straggliness. Therefore:
        BFI_torvalds = 0,
        BFI_cox = 5.6, and therefore,
        BFI_linux = 2.8

        Now, clearly, beards indicate intellectuals who want to hide their faces from society. Everybody knows that. That's why all professors have them.

        So it logically follows that the LOWEST BFI will yield the most POPULAR and USABLE operating system. To recap our examples:

        BFI_hurd = 10
        BFI_windows = 3
        BFI_linux = 2.8

        Therefore, clearly the winner is Linux, with Windows a close second. Unfortunately, Stallman pushes the HURD's BFI off the chart and out of the public eye. Sorry, Dicky!
  • whoops (Score:5, Interesting)

    by seanw ( 45548 ) on Thursday November 07, 2002 @10:28PM (#4622336)

    The release of a production version of the free GNU operating system (OS) has been delayed beyond the end of the year, as the current development version of the system does not support large disk partitions and high speed serial I/O (input-output), according to Richard Stallman

    is it just me, or does it sound like they had it all ready to ship, date planned and everything, and then someone pointed out that it was lacking some major I/O features/performance, and the developers collectively slapped their foreheads and went "oh shit, yeah, we kinda forgot about that one."

    like, all this took them by surprise? sucks to forget to implement a couple crucial features, eh?

  • 19 years (Score:5, Funny)

    by Anonymous Coward on Thursday November 07, 2002 @10:29PM (#4622344)
    GNU/Hurd. 19 years [gnu.org] in the making, and worth every minute of it.

    Finally the world will have a politically correct OS that works just like other Unices have for decades.
    • Re:19 years (Score:3, Informative)

      by MyHair ( 589485 )
      GNU/Hurd. 19 years [gnu.org] in the making, and worth every minute of it.

      That's not entirely fair. A lot of Linux-based OSes contain very healthy doses of GNU software and are compiled with GCC, one of the first major contributions of GNU.

      The kernel was one of the last things they tackled, but along came Linus Torvalds and now many OS kernel developers would rather work on Linux than the Hurd.
  • Does anyone here know why they let the partition size issue languish for so long? Hell, I've had files larger than 1GB (and not porn! go figure). Hard disks have been at the 10 GB mark for years, where it really doesn't make sense to have 10 partitions. I wish Richard luck. On another note, does anyone know how HURD benchmarks against Linux?
    • by cscx ( 541332 ) on Thursday November 07, 2002 @10:34PM (#4622373) Homepage
      On another note, does anyone know how HURD benchmarks against linux?

      Currently, the HURD doesn't support benchmarking software. But they hope to add that functionality within the next few years, if that answers your question.
      • Currently, the HURD doesn't support benchmarking software

        Oh man! I take back every bad thing I ever said about RMS and friends. Anyone who can disable benchmarking software deserves not only a MacArthur grant, but international acclaim and recognition at every level. Why... oh... OK. Go ahead. Give him his Free Software tax. He deserves it. I'll be there to applaud him at the ribbon-cutting ceremony for the Bureau Of Software Development in Washington DC. Imagine! At last. No more benchmarks. This is a red letter day indeed.

    • IIRC the partition size limit is due to the fact that the filesystem server mmaps the partition; on 32-bit systems there isn't enough address space for large partitions.

      As for benchmarks, I think the answer is "don't ask".
      • by Waffle Iron ( 339739 ) on Thursday November 07, 2002 @11:00PM (#4622531)
        IIRC the partition size limit is due to the fact that the filesystem server mmaps the partition; on 32-bit systems there isn't enough address space for large partitions.

        Hmmm. That is a very elegant way to handle disk partitions. Maybe they shouldn't rush into a quick fix for this that loses the benefits of mmaping the whole partition. In a few short years, AMD, Intel and IBM will all be offering mainstream 64-bit CPUs, and they'll be able to mmap exabyte sized partitions without throwing out the current codebase.

        I think that they should just hold tight until then. No need for reckless haste.

    • I have to wonder if GNU/Hurd developers have been partitioning their HDs with so many 1 GB partitions up until now. And at what point did it occur to them that there was a better way to do it?
    • The project has been underfunded

      They must have just got their first 10 GB hard disk in the P100.

    • by defile ( 1059 ) on Friday November 08, 2002 @12:11AM (#4622920) Homepage Journal

      It's a really hard problem to fix.

      The ext2fs implementation is actually a userland process which takes a partition as an argument and attaches a translator to the mountpoint. The translator's job is to send requests under this namespace to this server.

      The ext2fs server actually mmap()s the partition containing the filesystem and performs all operations as if it were a contiguous block of memory. Unfortunately the ext2fs driver, since it's a userland process, can only address 2GB of memory (the kernel often takes half). Adding a heap and a stack leaves mmap() with about 1GB to safely play with.

      Eliminating this limitation would mean using either a 64-bit architecture, or using a read/write/lseek interface instead of mmap, which may mean totally throwing out the ext2fs server as-is. Perhaps they weren't concerned with the limitation because they thought everyone would be using 64-bit architectures by now?

      I'm not sure if there are any other reasons other than the filesystem servers using mmap() for the limitation.
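The scheme described above can be sketched in a few lines. This is an illustrative toy, not Hurd code: Python stands in for the C the servers are actually written in, and a small image file stands in for a real partition.

```python
import mmap
import os
import struct

# A small image file standing in for a disk partition.
fd = os.open("fs.img", os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, 4096)

# Map the whole "partition" and treat it as one contiguous byte array,
# as the ext2fs server does. On a 32-bit system this mapping must fit
# into the ~1 GB of address space left to the process -- which is
# exactly the partition-size limit under discussion.
disk = mmap.mmap(fd, 4096)

# Filesystem structures become plain memory accesses: the ext2 magic
# number (0xEF53) sits 56 bytes into the superblock, which starts at
# byte offset 1024 on disk.
struct.pack_into("<H", disk, 1024 + 56, 0xEF53)
magic, = struct.unpack_from("<H", disk, 1024 + 56)
print(hex(magic))  # -> 0xef53

disk.close()
os.close(fd)
os.unlink("fs.img")
```

The appeal is that the server never issues explicit reads or writes; the VM pages disk blocks in and out behind its back, which is exactly the cache-management point made further down the thread.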

      • by Permission Denied ( 551645 ) on Friday November 08, 2002 @04:42AM (#4623853) Journal
        The ext2fs server actually mmap()s the partition

        This is completely nuts.

        I can appreciate that the filesystem driver is a userland process - this means I can write a "filesystem" as a normal userland process (eg, make some things have a filesystem-like interface, so you can do interesting things with databases like make /etc/passwd a directory). This is a cool idea.

        However, mmapping an entire partition is just crazy. This is poor design. What were they thinking?

        Are they trying to avoid the system call overhead for seek() calls? This is the only reason I can think of that someone would do this - when you read/write/seek, you have to do a system call for seek(), but that comes "for free" when you mmap() because you specify the address. This would only be a problem with HURD-like systems, because there is no overhead if the filesystem driver runs in kernel space along with whatever other subsystems it needs to use.

        I know everyone hates backseat designers, but I'd like to know if the following approach has been considered:

        Make two new system calls, say "readaddr" and "writeaddr". These work like read and write, except that you also specify an offset (perhaps with a "whence" field like lseek). This saves you the overhead of calling seek all the time by combining the operations into one system call. This might be useful for other things as well, but I would imagine a filesystem driver is one speed-critical piece of code that does a lot of jumping around. Actually, I just remembered that there already are calls for this: pread/pwrite.
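For what it's worth, pread()/pwrite() really are standard POSIX calls. A quick sketch of the difference (Python's os.pread wraps the same syscall; the file name is made up):

```python
import os

# A scratch file with known contents.
fd = os.open("blocks.bin", os.O_RDWR | os.O_CREAT)
os.write(fd, bytes(range(16)))

# Two syscalls: lseek() to the offset, then read().
os.lseek(fd, 8, os.SEEK_SET)
two_calls = os.read(fd, 4)

# One syscall: pread() takes the offset directly, and doesn't even
# move the file position -- convenient for a filesystem server whose
# threads all jump around the same disk.
one_call = os.pread(fd, 4, 8)

print(two_calls == one_call)  # -> True

os.close(fd)
os.unlink("blocks.bin")
```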

        Another possible approach: using mmap is nice, but mapping in an entire partition is fubar. Why not map in specific parts of the partition as they become needed? Eg, keep superblock mapped right away from startup, and map in other parts as they are needed. Seems a bit complex, but could be done more easily with another layer of abstraction (some library which keeps a hash or something of mmap()ed bits and provides a nice interface for filesystem drivers). I'm guessing mmap was used as this transfers certain operations (eg, cache management) "deeper" into the "kernel" and avoids code duplication.
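That windowed-mmap idea can be sketched too: keep a cache of fixed-size mapped regions and fault them in on demand, so mapped address space stays bounded no matter how big the partition is. Again a toy sketch, not anything from the Hurd: the window size, class, and image file are all invented for illustration.

```python
import mmap
import os

# Window size must be a multiple of the mmap allocation granularity.
WINDOW = mmap.ALLOCATIONGRANULARITY

class WindowedDisk:
    """Lazily map fixed-size chunks of a partition, so only the parts
    actually touched consume address space."""
    def __init__(self, fd):
        self.fd = fd
        self.windows = {}  # window index -> mmap object

    def read(self, offset, size):
        # A real version would handle reads straddling a window
        # boundary; this sketch doesn't.
        idx = offset // WINDOW
        if idx not in self.windows:  # fault the window in on demand
            self.windows[idx] = mmap.mmap(self.fd, WINDOW,
                                          offset=idx * WINDOW)
        start = offset - idx * WINDOW
        return self.windows[idx][start:start + size]

fd = os.open("fs.img", os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, 4 * WINDOW)           # a 4-window "partition"
os.pwrite(fd, b"SUPR", WINDOW + 100)   # a datum in the second window

d = WindowedDisk(fd)
print(d.read(WINDOW + 100, 4))         # -> b'SUPR'
print(len(d.windows))                  # only one window got mapped

os.close(fd)
os.unlink("fs.img")
```

This keeps the VM-backed caching of the mmap approach for hot windows while capping total address-space use, at the cost of the extra bookkeeping layer described above.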

        But anyway, mmaping an entire partition is really nuts. They're not getting any sympathy from me.

        • by Olivier Galibert ( 774 ) on Friday November 08, 2002 @05:44AM (#4623958)
          No, what they're trying to do is offload the cache management to the virtual memory manager. With mmap() backed by the partition itself, the VM can read and write the pages transparently w.r.t the ext2 server. With read/write/lseek, you have to do actual memory management. Last time I looked, there was no interface for collaboration between the VM and the servers for cache management.

          And this kind of cache management is horribly hard in a monolithic kernel for a start. Look how long 2.4 took before the VM behaviour was considered decent (2.4.16 iirc). A decently fast distributed one is even worse to design.

          OG.
    • by paladin_tom ( 533027 ) on Friday November 08, 2002 @01:13AM (#4623210) Homepage

      Does anyone here know why they let the partition size issue languish for so long?

      The partition size is limited because the Hurd maps the entire disk partition into main memory, and the 32-bit architecture of current Intel processors limits the size of a virtual address space to 2^32 bytes, hence the limitation. Changing the Hurd to do things differently isn't exactly a one-weekend patch.

      On another note, once we go to 64-bit processors, we'll see a much larger virtual address space (its current size doubled 32 times over), and hence a much higher cap on the partition size (assuming no fix).

      On another note, does anyone know how HURD benchmarks against linux?

      This really isn't the right question to ask: remember that the Hurd is at version 0.2, and that "Premature optimization is the root of all evil." No new Free/Open Source kernel is going to ship and be immediately as fast and full-featured as Linux... things just don't work that way.

      What's important is that the Hurd represents new OS technology... and that's more important than any current lack of performance or drivers.

    • by Some Dumbass... ( 192298 ) on Friday November 08, 2002 @01:21AM (#4623238)
      Hell, I've had files larger than 1GB (and not porn! go figure).

      Hint: "man logrotate"
  • Systems work (Score:5, Insightful)

    by peterb ( 13831 ) on Thursday November 07, 2002 @10:31PM (#4622355) Homepage Journal
    Is harder than most people seem to think it is.

    That being said, I think the Hurd is pretty much a solution in search of a problem. Who cares? And why? The FreeBSD kernel does everything Hurd purports to want to be able to do, and is more mature, stable, and feature-complete. The same could probably be said of the Linux kernel.

    Does that mean the Hurd guys should stop what they're doing? Of course not. Writing operating systems is fun.

    It does, however, probably mean that the stuff they're doing isn't really news.
  • Ironic (Score:3, Insightful)

    by jeramybsmith ( 608791 ) on Thursday November 07, 2002 @10:31PM (#4622356)
    GNU was intended to solve the problem of there not being a free Unix-like OS. Now there are like 50, but still no GNU. Maybe they should refocus on providing a great userland?
  • by LinuxGeek ( 6139 ) <djand.ncNO@SPAMgmail.com> on Thursday November 07, 2002 @10:32PM (#4622363)
    added Stallman. "I don't think it was realized how bad it is practically speaking not to be able to use whatever your disk partitioning is. Clearly most people are not going to repartition their disks to be able to try out our Hurd based system."

    What kind of systems are they using for development that they just noticed the inability to read current large partitioning schemes and interact with them? This doesn't do much to encourage me to try HURD and hope it will support much of my newfangled hardware.
  • What is HURD? (Score:5, Informative)

    by randomErr ( 172078 ) <ervin,kosch&gmail,com> on Thursday November 07, 2002 @10:33PM (#4622366) Journal
    [gnu.org]
    http://www.gnu.org/software/hurd/hurd.html

    GNU HURD is a slimmer re-write of the UNIX kernel that is completely OOP.

    Here's a cut and paste from the homepage:

    The Hurd is not the most advanced kernel known to the planet (yet), but it does have a number of enticing features:

    it's free software
    Anybody can use, modify, and redistribute it under the terms of the GNU General Public License (GPL).
    it's compatible
    The Hurd provides a familiar programming and user environment. For all intents and purposes, the Hurd is a modern Unix-like kernel. The Hurd uses the GNU C Library, whose development closely tracks standards such as ANSI/ISO, BSD, POSIX, Single Unix, SVID, and X/Open.
    it's built to survive
    Unlike other popular kernel software, the Hurd has an object-oriented structure that allows it to evolve without compromising its design. This structure will help the Hurd undergo major redesign and modifications without having to be entirely rewritten.
    it's scalable
    The Hurd implementation is aggressively multithreaded so that it runs efficiently on both single processors and symmetric multiprocessors. The Hurd interfaces are designed to allow transparent network clusters (collectives), although this feature has not yet been implemented.
    it's extensible
    The Hurd is an attractive platform for learning how to become a kernel hacker or for implementing new ideas in kernel technology. Every part of the system is designed to be modified and extended.
    it's stable
    It is possible to develop and test new Hurd kernel components without rebooting the machine (not even accidentally). Running your own kernel components doesn't interfere with other users, and so no special system privileges are required. The mechanism for kernel extensions is secure by design: it is impossible to impose your changes upon other users unless they authorize them or you are the system administrator.
    it exists
    The Hurd is real software that works Right Now. It is not a research project or a proposal. You don't have to wait at all before you can start using and developing it.
    • well, duh (Score:3, Insightful)

      by cowtamer ( 311087 )

      Unlike other popular kernel software, the Hurd has an object-oriented structure that allows it to evolve without compromising its design. This structure will help the Hurd undergo major redesign and modifications without having to be entirely rewritten.

      And you guys are wondering why it's taken 19 years???
    • by Eil ( 82413 ) on Thursday November 07, 2002 @11:45PM (#4622763) Homepage Journal

      This structure will help the Hurd undergo major redesign and modifications without having to be entirely rewritten.

      "...but we might change out the whole kernel from time to time when things aren't looking so good."
  • by BitHive ( 578094 ) on Thursday November 07, 2002 @10:34PM (#4622374) Homepage
    My fortune at the bottom of this story reads:

    When does later become never?

    Coincidence or a subtle jab at RMS?

  • by Anonymous Coward on Thursday November 07, 2002 @10:35PM (#4622382)
    A usable GNU/HURD and Duke Nukem Forever bundle. Accepting preorders now.
    Available real soon now
  • inclusion of this sort of language...
    The Affero GPL requires anyone modifying a software program to give immediate access by HTTP (Hypertext Transfer Protocol) to the complete source code of the modified software to other users interacting with the software on the network, if the original program had a provision for this kind of access.

    I guess just emailing the diffs back to the source would not be sufficient. You would be required to set up and maintain a web server to serve any code you modify.
    Some sort of FSF database at which you could register the fact that you've tweaked something, and the nature of those tweaks, might help to add some value to this requirement.
    OTOH, BeelzeBill will welcome anything that might blunt the Open Source onslaught, I would think. Who was that with the Law of Unintended Consequences?
  • Why OSKit? (Score:3, Insightful)

    by Animats ( 122034 ) on Thursday November 07, 2002 @10:41PM (#4622424) Homepage
    OSKit [utah.edu] is a collection of operating systems parts for use by researchers, not a production system. It's intended to be straightforward and modular, but not heavily optimized. If the Hurd team is switching to OSKit, they must be in deep trouble.

    Now if they were switching to L4, that would be cool. But it would be a research effort.

    And why does anyone, at this late date, care much about high-speed serial line support?

    • Re:Why OSKit? (Score:4, Informative)

      by jpmorgan ( 517966 ) on Friday November 08, 2002 @12:07AM (#4622895) Homepage
      The Hurd will be switching to L4, when the NDAs on the latest version of L4 are dropped and there's a real implementation to work from.

      That port is fairly extensive, so right now they're moving to Mach + OSKit, since OSKit supports the Linux 2.2 device drivers, something Hurd is sorely lacking.

  • by Royster ( 16042 ) on Thursday November 07, 2002 @10:42PM (#4622429) Homepage
    From Open Sources: Voices from the Open Source Revolution [oreilly.com]

    So at the time I started work on Linux in 1991, people assumed portability would come from a microkernel approach. You see, this was sort of the research darling at the time for computer scientists. However, I am a pragmatic person, and at the time I felt that microkernels (a) were experimental, (b) were obviously more complex than monolithic Kernels, and (c) executed notably slower than monolithic kernels. Speed matters a lot in a real-world operating system, and so a lot of the research dollars at the time were spent on examining optimization for microkernels to make it so they could run as fast as a normal kernel. The funny thing is if you actually read those papers, you find that, while the researchers were applying their optimizational tricks on a microkernel, in fact those same tricks could just as easily be applied to traditional kernels to accelerate their execution.
    • Linus was right at the time, yes. If he were starting Linux today, however, none of those three points would be correct. They're not experimental any more (WinNT before version 4.0, QNX and BeOS are/were all mainstream microkernel OSes), they do not execute notably slower than monolithic kernels (yeah, the Mindcraft survey was rigged, but Linux and NT are still competitive) and (apart from NT) they are no more complex than Linux is today.

      On the other hand, "I know I can do it and get it working" would still be a valid argument for him to write a monolithic kernel today, but that's open source for you.

    • The choice that Linus made was great for getting the project going: something that's easy to understand and easy to hack. But that doesn't mean it's good in the long run--the same could have been said for DOS. Microsoft makes such expedient choices constantly--that doesn't mean they are "right" in the long term.

      The Linux kernel is running out of steam--the software development is becoming more and more unmanageable (see the BitKeeper debates), and drivers and new functionality often take years to appear in stable, up-to-date form in the kernel.

      Those are the kinds of problems microkernels were supposed to address. I have no idea whether the GNU/Hurd does or does not address them, and even if it does, it is 15 year old technology. But I do know that Linux isn't addressing them right now, and that's a problem.

      I suspect that what will actually happen is that in a couple of years, there will be a severely hacked Linux kernel fork that keeps driver and file system compatibility at the source level for a while but otherwise goes its own way.

      • by Ian Bicking ( 980 ) <(moc.ydutsroloc) (ta) (bnai)> on Friday November 08, 2002 @02:54AM (#4623575) Homepage
        I have no idea whether the GNU/Hurd does or does not address them, and even if it does, it is 15 year old technology.
        The Hurd isn't really a good microkernel -- it's not really a microkernel at all, but a bunch of services built on top of a microkernel (Mach). Of course, the microkernel is essential to the actual operation, and the services have been written with a specific microkernel in mind... but it's not unreasonable to consider the Hurd running on a different (better, more advanced, faster) microkernel. People in the Hurd community have talked about just this, though of course no one has actually done the hard work of converting it.

        But sadly, my impression of what the Hurd has shown is that just because something is userspace doesn't mean it's easy to debug. It seems like code accessibility -- even for original developers -- has not been very good. I think it's in the same way that threaded programming is much harder to debug... a complex set of interworking services is even worse.

        And while microkernels allow a certain level of modularity, it really should be possible to achieve a great deal of modularity in a monolithic kernel as well -- just not in as safe a manner. I don't know that safety is the difficult part of Linux development. Well... I'm not entirely clear on what is the difficult part; I've never tried to program on the kernel. Probably an issue of factoring -- when refactoring needs to occur across module boundaries (for whatever reason), it requires different developers to communicate and agree on things (which is where the overhead is occurring). But that same problem will exist in a microkernel -- only the refactoring will be occurring between processes. That's not a big difference.

        Maybe with enough thoughtfulness you can refactor everything in the Right Way, so that interfaces are entirely stable and development can occur without as much interdependence. That's not impossible -- there's a lot of experience from Linux and elsewhere to learn from. But I don't think that is related to monolithic or microkernel design.

    • by Animats ( 122034 ) on Friday November 08, 2002 @02:31AM (#4623493) Homepage
      As it turns out, microkernels aren't all that portable, because they need close ties to the hardware to support all the fast interprocess communication that they do. But because they're small, the fact that they contain CPU-dependent code isn't as much of a limitation. Porting is more work per line of code, but there aren't as many lines to port.

      Look at QNX and L4 as examples of fast microkernels. About all the kernel does is manage memory, interprocess communication, and task switching. Everything else is in user space, where it's easier to debug and can't mess up as much when it breaks.

      In addition, if you're serious about security, a system where only the microkernel is trusted is the only thing that has any hope of working. In a microkernel system, the kernel tends not to change much over time, once it's working. New functionality is all in user space. You're not patching holes forever, like we are now.

      You do take a performance hit, but in a world where Java, Perl, and XML are used for production work, it's tiny by comparison.

  • by dacarr ( 562277 ) on Thursday November 07, 2002 @10:47PM (#4622461) Homepage Journal
    I'm seeing it now. Hurd will be to Linux what OS/2 Warp is to Windoze.

    Kudos to RMS for fighting the good fight, but he's already contributed significantly to Linux. I really don't think it'll go farther than that.

  • Delayed? (Score:5, Funny)

    by grub ( 11606 ) <slashdot@grub.net> on Thursday November 07, 2002 @10:54PM (#4622498) Homepage Journal

    I want a new slashdot poll:

    Which long awaited project will be the first to become reality?
    a) Duke Nukem Forever
    b) SMP for OpenBSD
    c) GNU/Hurd
    d) The second coming of Jeebus

  • by nuckin futs ( 574289 ) on Thursday November 07, 2002 @11:02PM (#4622544)
    according to the article:
    if you get a moderate size disk you have to divide it into smaller partitions, which is a nuisance.

    I'm sorry, but I have an 80 GB drive. If I need between 40 and 80 partitions (between 1 and 2 GB each), it's not just a nuisance.
  • by zurab ( 188064 ) on Thursday November 07, 2002 @11:07PM (#4622567)
    To solve the serial port problem, the GNU project is switching from the GNU Mach to the OSKit Mach, a Mach based on the OSKit for OS development from the University of Utah in Salt Lake City, Utah. "That version of Mach is supposed to get high speed serial line support, although it apparently isn't there in it yet," Stallman said. Before the GNU project could switch to the OSKit Mach, it had to rewrite the terminal support in the Hurd to support virtual consoles.

    By the time these guys switch to the new kernel, test all modules, etc., they will have to update it again for new speed improvements and HD sizes.

    Linus was right that microkernels tend to be overdesigned, give up speed, and are less practical than monolithic kernels. This is living proof.
    • There are plenty of other microkernels in use very successfully: WinNT/2k/XP, Mac OS X, MkLinux, and Minix, just to name a few!

      Don't make such wide-ranging judgements based on one case. You're counting your chickens before they're hatched.
  • by happystink ( 204158 ) on Thursday November 07, 2002 @11:24PM (#4622658)
    They better make sure that Hurd supports hard drives up to 20 terabytes or so, since that'll be about the average size by the time Hurd ever gets done.
  • by matman ( 71405 ) on Thursday November 07, 2002 @11:28PM (#4622673)
    I wish that fewer people would be so damned hardline pragmatic. It's worth putting time into stuff that could be cool and to try to do things in ways that are nice. Maybe it'll fail, but it's worth the attempt, even if it only serves as an example of what doesn't work.
  • by Art Popp ( 29075 ) on Friday November 08, 2002 @12:08AM (#4622898)
    The best reason for HURD: "Because they want it that way."

    No one should have to justify what they want to build to you or anyone. Free software is not about the GPL. It's about freedoms. If these people want to build the most paradigmatically pure kernel ever conceived of, I think that's great.

    If they want to turn an architecturally useful chunk of marble into a useless statue of some kid named David, that's great too.

    When I enter a bunch of keywords into freshmeat and pick over the results, I occasionally ask myself, "What was this guy thinking?" Others with that same list ask that same question, but about different projects. It's the fact that we are free to combine conceptual purity, modifiability, stability, speed, and dozens of other engineering trade-offs in exactly the manner that we think is "right" that makes picking through Freshmeat like picking through a box of Dark Chocolates.

    Oddly, the same rule applies. If you don't like a particular chocolate, don't eat it; don't whine about it; just pick a different one.

    I wish Mr. Stallman the fewest alpha particles and the best of luck in his noble pursuit.
  • In 1985... (Score:5, Informative)

    by ttfkam ( 37064 ) on Friday November 08, 2002 @12:53AM (#4623134) Homepage Journal
    microkernels were the rage. HURD answered the call and started work. Now, almost 20 years later, MIT pulls the rug out with exokernels [mit.edu]. So will we wait until 2020 to get a working model of that too?

    God bless HURD for trying to advance the state of the art and improve upon the dated UNIX model, but sheesh! I wish HURD were ready for prime time. I really do. But a working model with caveats (Linux, OSX, *BSD) will always be better than a better model that's mostly theoretical in the real world.

    That said, no one's paying the HURD developers. If it floats their boat, have at it. RMS needs to relax and realize that it is little more than a research experiment and not the second coming.
  • by slashdot_commentator ( 444053 ) on Friday November 08, 2002 @03:33AM (#4623707) Journal

    HURD is not the operating system of choice for "hackers" or slashdotters. Hackers want to run computer applications (reliably and speedily). That is not what HURD is about. It's the utopian platform for computer science geeks; people who want to go beyond the current paradigm of UNIX, classic sequential computing, etc. By abstracting the ukernel down to a couple of critical operations (time slicing, memory allocation, and IPC), and moving every other operation to user mode, you have a tool that can be used to implement new concepts in computer operating systems.

    It's not an alternative to Linux. It's an orange to Linux's apple. It will suck as an alternative to Linux. It will run slower than Linux (especially if they stick with Mach). It will not run more stably than Linux (given its increased complexity). It may be a better platform for multiple-CPU configurations, but we won't know that for sure until its ukernel design is complete and an implementation of HURD actually proves it to be faster. Very few people will want to port useful packages to HURD; they'll go to Linux for reliability and performance. HURD's purpose is not to be a platform to run applications. It's a platform for computer science research.

    That is the reason why I do not wish death on HURD, and I rejoice when there is good news for it. It does not really compete with Linux for mindshare. If it proves to be a superior platform for MP processing, only then will it have a mundane use.

    I have massive contempt for its project management. It's currently looking like the OS that will never get released. And it does not deserve a serious look until it gets a quality ukernel, like L4 (which itself is unfinished). Mach will not cut it, nor will its UKS(?) version.
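    The ukernel abstraction described above — a tiny kernel that only routes messages, with every service in user space — can be illustrated with a toy sketch. The class and message names here are purely illustrative; this is not Mach's actual API:

```python
import queue

class MicroKernel:
    """Toy model: the kernel only routes messages; services live in user space."""
    def __init__(self):
        self.ports = {}               # service name -> message queue

    def register(self, name):
        self.ports[name] = queue.Queue()

    def send(self, name, msg):        # IPC is the only way to reach a service
        self.ports[name].put(msg)

    def receive(self, name):
        return self.ports[name].get_nowait()

# A user-space "server" -- e.g. a filesystem translator, in Hurd terms
kernel = MicroKernel()
kernel.register("fs")
kernel.send("fs", ("read", "/etc/motd"))
op, path = kernel.receive("fs")       # the fs server would handle this request
print(op, path)                       # prints: read /etc/motd
```

    The point of the sketch is that the kernel never interprets the request; a crash or upgrade of the "fs" server never touches the routing core.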
  • OpenBEOS (Score:3, Funny)

    by Abnormal Coward ( 575651 ) on Friday November 08, 2002 @06:11AM (#4623998)
    At this rate, the OpenBEOS team will have the entire OS rewritten before the Hurd kernel gets to version 1.0 :).

  • Glad you noticed!!! (Score:3, Interesting)

    by 3seas ( 184403 ) on Friday November 08, 2002 @10:57AM (#4625116) Homepage Journal
    How many of those commenting would ever have noticed any delay had they not read about it here?

    Regardless of any negativity being expressed here towards the efforts of the Hurd Developers or the goal, this is a project that needs to be done.

    I have no doubt that had Linux not come along, there would have been more man power and efforts put into the Hurd these past years.

    Of course Linux was a distraction for many, yet it was NOT a destructive distraction. A lot of GNU and GPL software has been developed and put into use. Enough so that, as we all know, MS has taken notice and has even launched a competitive campaign against not just Open Source, Linux, and GNU, but with a focus on the GPL.

    What software has been made to run on Linux can be, and probably already has been, ported to run on the Hurd (except for a few packages that just don't make sense to port, as they deal with monolithic kernel issues that don't exist in the Hurd). The count of software packages ported is in the thousands.

    Even the drivers written for Linux are usable on the Hurd.

    All of that porting and compatibility was, and is, a lot of work, which the Hurd development team has done. So there have been energies going into a lot more than just the Hurd core.

    Perhaps the really good part of all this is that MS probably doesn't have a clue what to expect of developers who will write applications for the Hurd, to take advantage of the Hurd. And it should be understood that the Hurd opens the door much wider for development innovations.

    So what will you have when the Hurd is officially publicly released as a production version?

    You will have what appears, on the surface, little or no different from using Linux. But under the hood, it's more versatile, more stable and, in sum, more powerful in its ability to bring about advancements.

    There are quite a few other OSes being developed today under the open source idea. And there is nothing about the Hurd that says you cannot attach a personal choice of smart user-space interface OS to the Hurd, integrating it to benefit from the security of the Hurd, the GNU number-crunching software already written, etc., through the IPC of the Hurd.

    I like the idea of plugging my personal smart interface OS into a Hurd system for such benefits. A 3" CD, a smart card, or some yet-to-be-developed rewritable device I can take with me.

    • I have no doubt that had Linux not come along, there would have been more man power and efforts put into the Hurd these past years.

      Really, this is getting to be too much. After over a decade of floundering around, the FSF has yet to produce anything even remotely useful as a production operating system kernel. Linus and the people who worked on Linux did that in 7-8 years.

      It's great what the FSF has done, to give the world a compiler to produce free software, and the tools and utilities to make Linux and other OS's a finished OS. And even the Hurd as a computer science experimental kernel to play with new ideas.

      But it is ridiculous to say that Linux has distracted from the Hurd effort. The Hurd simply is not about designing a useful kernel; it is a playground for ideas in OS architecture, and it will be many more years of flounder/play/redesign before it is known which ideas in there will even be useful for a production kernel.
  • RMS (Score:5, Interesting)

    by thoolihan ( 611712 ) on Friday November 08, 2002 @11:27AM (#4625337) Homepage
    I'm sure I will get flamed on /. for this one, but since 98% of the comments are along the lines of "down with RMS", I have to say this.

    At some point you have to decide if you are going to go along with the pithy flames or do real research. It's not popular, but it reveals the truth. If not, go to the next comment, this isn't for you.

    From a practical standpoint, I understand the "Linux" side of the argument. However, people make that argument with statements like "Don't do drugs, you'll end up like the Hurd people" - LT. RMS makes his argument respectfully on the GNU website and encourages people to use GNU/Linux. On the GNU site, he says the easiest and best way to start using free software is to go get a GNU/Linux distro. Personally, I respect people who make their arguments with facts instead of one-liners. If you buy things because they sound like a good quick answer, then you start going for things like "trusted computing".

    Finally, since this is a discussion of the HURD kernel, I think people should find this interesting. The GNU tools we are already familiar with are going to get a microkernel. Merit arguments aside, there are a lot of people who choose/like microkernels (Apple, *BSD). Also, it's a kernel project that offers a ton of work to be done. After all, 1-2 GB partition limits are a sign that there is a long way to go. Entry-level kernel hacking on a system that has a LONG way to go is easier than "even though you've never kernel hacked, figure out how to save a few cycles with this kernel module that has been working for five years". Also, keep in mind, the HURD has one major advantage over the Linux kernel: there is not a one-man bottleneck.

    Personally, I like the linux kernel and use several Gentoo systems, and some OpenBSD. But I always welcome another choice in software and look forward to seeing the HURD in a more usable state.


    There is a fine line between picking your battles and cowardice
