
No 2.7 Linux Kernel Branch Due Soon

An anonymous reader writes "At the fourth annual Linux Kernel Developers Summit, it was decided that there won't be a 2.7 Linux kernel development branch any time soon. Instead, Linux creator Linus Torvalds and the official 2.6 maintainer Andrew Morton have decided to continue working as a team, further enhancing the 2.6 kernel. Up to this point, kernel series with an odd minor version number (2.1, 2.3, 2.5, etc.) were considered development kernels, and series with an even minor number (2.2, 2.4, 2.6, etc.) were considered stable kernels. However, according to this KernelTrap article, active development will now continue in the mainline 2.6 tree, and the final stabilization will be left up to the companies that provide Linux distributions."
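
The odd/even convention is mechanical enough to state in code. Below is a minimal C sketch, not from the article, that classifies a few example version strings under the historical rule:

    /* Classify kernel versions under the pre-announcement convention
     * described above: odd minor number = development series,
     * even minor number = stable series. Example strings only. */
    #include <stdio.h>

    int main(void)
    {
        const char *releases[] = { "2.4.26", "2.5.75", "2.6.7" };
        for (int i = 0; i < 3; i++) {
            int major, minor, patch;
            if (sscanf(releases[i], "%d.%d.%d", &major, &minor, &patch) == 3)
                printf("%s: %s series\n", releases[i],
                       minor % 2 == 0 ? "stable" : "development");
        }
        return 0;
    }
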
  • Wow (Score:3, Insightful)

    by Anonymous Coward on Thursday July 22, 2004 @08:31AM (#9768628)
    Well, I guess that's what it was like with 2.4, unofficially.
  • This is bad. (Score:4, Insightful)

    by dangermen ( 248354 ) on Thursday July 22, 2004 @08:33AM (#9768647) Homepage
    This is bad. Not all distribution maintainers have armies of patch people. This will push people to one of a few distributions such as RedHat or Suse. Especially if 2.6 becomes an unstable piece of crap.
    • Re:This is bad. (Score:5, Interesting)

      by stevesliva ( 648202 ) on Thursday July 22, 2004 @08:40AM (#9768692) Journal
      I was thinking the same thing, but there's a bit of ambiguity in what is meant by "stable." I think they may have meant code stability, not OS stability.

      That said, it could be a good thing to preempt the distros from forking in order to add new features that they do not want to wait for, and it also adds the benefit of Linux providing the OS features that you want ASAP, not in 2005, err, maybe 2006, 2007 or 2008 when the next major release is planned; that would be the Longhorn development model.

      • That said, it could be a good thing to preempt the distros from forking in order to add new features that they do not want to wait for, and it also adds the benefit of Linux providing the OS features that you want ASAP, not in 2005, err, maybe 2006, 2007 or 2008 when the next major release is planned; that would be the Longhorn development model.

        Fair enough, but most distros were back-porting patches anyway. I usually got rid of my default Red Hat kernels and downloaded fresh ones from kernel.org.
    • Re:This is bad. (Score:2, Insightful)

      by archen ( 447353 )
      Yeah, I started to see that with RedHat 7.x. When they decided to rip out the new VM and go with something else on a "stable" kernel, plus who knows how many RedHat modifications. But if you wanted to stay secure, you were going to have to go with whatever new kernel and 'enhancements' RedHat gave you. I think I had enough of that ride, and once RedHat dumped our support I just jumped ship.

      To FreeBSD. Now I get software that is as up to date as I want it, and the base system and kernel are left alone other than for security fixes. I'm free to stay with that release, or move to a newer release as I choose. I like Linux, and use it at home, but I've gotten kind of weary of letting any Linux vendor drag me along on a production machine.
      • Re:This is bad. (Score:5, Informative)

        by 10Ghz ( 453478 ) on Thursday July 22, 2004 @08:55AM (#9768782)
        I started to see that with RedHat 7.x. When they decided to rip out the new VM and go with something else on a "stable" kernel, plus who knows how many RedHat modifications.


        Not quite. It was Linus who ripped the VM out. Red Hat stuck with the VM that originally shipped with 2.4.
      • Re:This is bad. (Score:3, Insightful)

        by Laur ( 673497 )
        I think I had enough of that ride, and once RedHat dumped our support I just jumped ship. To FreeBSD. Now I get software that is as up to date as I want it, and the base system and kernel are left alone other than for security fixes. I'm free to stay with that release, or move to a newer release as I choose. I like Linux, and use it at home, but I've gotten kind of weary of letting any Linux vendor drag me along on a production machine.

        Dude, then use Debian stable or testing or another community distro.

    • This could set back Linux by years, if loads of unstable kernels keep coming out. People will be forced to wait until a kernel distributor (RedHat, Suse, etc.) gets around to putting one out. That may not be fast enough for some people, and may be too expensive as well. It could be darned inconvenient if you need new hardware support.

      Microsoft, Apple and Sun will be falling over themselves with laughter when they see this. Linux's legendary stability will go down the pan, and people will leave in droves. Bil

      • Re:Very true (Score:3, Interesting)

        by tomhudson ( 43916 )

        This could set back Linux by years, if loads of unstable kernels keep coming out. People will be forced to wait until a kernel distributor (RedHat, Suse, etc.) gets around to putting one out. That may not be fast enough for some people, and may be too expensive as well. It could be darned inconvenient if you need new hardware support.

        I too was worried when I saw the "final stabilization" phrase. If you read the article, it's no big deal.

        What I really liked was the multiple admissions that devfs was a pie

        • Re:Very true (Score:5, Insightful)

          by turgid ( 580780 ) on Thursday July 22, 2004 @09:45AM (#9769109) Journal
          What scares me is that "the community" is no longer going to aim to produce a stable kernel; rather, it will be up to the commercial distributors. This goes against the whole idea of Free Software. The idea is to make a working, useful product, not half-baked cripple-ware that you then have to pay someone to fix. I understand the concept of paying other people to "add value", i.e. enhanced features, but I don't regard stability as an "enhanced feature." A lot of people who don't already run Linux (but maybe Windows) will now have one less reason to change and one more piece of FUD to beat the Free and Open Source Software movement with.

          I'm a Slackware man myself, but I don't like sitting around waiting for Patrick to make a new kernel. I like to update my kernel myself from the official Linus tarball as and when required. This will no longer be possible.

          • Re:Very true (Score:3, Informative)

            by hackstraw ( 262471 ) *
            This goes against the whole idea of Free Software.

            No it doesn't. Free Software is about free software. It comes with no warranty, and a brand new kernel from kernel.org has not been thoroughly tested outside of the developers' own boxes.

            I don't like sitting around waiting for Patrick to make a new kernel. I like to update my kernel myself from the official Linus tarball as and when required. This will no longer be possible.

            You are free to download it or not, however it's pretty ignorant to blin
            • Re:Very true (Score:3, Insightful)

              by turgid ( 580780 )
              I'm not talking about "bleeding edge," I'm talking about the even-middle-numbered kernels, i.e. stable. I do not have the time nor the patience to run development kernels. I am protesting the developers turning the "main kernel tree" into a constant, perpetual development kernel. That belongs in an odd number.

              It comes with no warranty

              In the hope that it might be useful.

              Plus, what feature or bugfix have you needed that required a brand new kernel?

              Bug and security fixes mainly.

              It was not the latest at the

          • Re:Very true (Score:3, Insightful)

            by peatbakke ( 52079 )
            Actually, I think this shows a maturity in the Open Source model, as the community has spread deeply into the commercial realm.

            Red Hat, IBM, Novell, SGI, Mandrake, and all the other companies that support, market, and contribute to Linux are a very significant part of the community. Many of the top kernel hackers are employed by these companies. They are heavily invested in both pushing the envelope with Linux, and supporting a solid and stable platform for their customers.

            Given that the commercial vend
          • Re:Very true (Score:3, Informative)

            by MartinG ( 52587 )
            Firstly,

            This goes against the whole idea of Free Software.

            No it does NOT.

            The whole idea of free software is that anyone is free to develop the software in whatever way they want to develop it. If you don't like how the developers are doing it, fork the kernel and do it your way. Who are you to dictate whether the idea is to make a "working, useful product"? The idea is whatever the developer wants the idea to be.

            Secondly, your suggestion that this new model will produce "half-baked cripple-ware" is
        • What's wrong with devfs? I use it, it works. I don't like the idea of a userspace tool for that.
    • Especially if 2.6 becomes an unstable piece of crap.

      Your tense is incorrect.
    • > Especially if 2.6 becomes an unstable piece of crap.

      I can't remember any development kernel in 2.5 that was an unstable piece of crap. Despite installing most of the latest releases when they came out, I have not seen a kernel crash since 1996. Are there really people who have crashes on a regular basis?
    • This is bad. Not all distribution maintainers have armies of patch people. This will push people to one of a few distributions such as RedHat or Suse. Especially if 2.6 becomes an unstable piece of crap.

      It's not really any different than it is now. There have been production kernel releases that have sucked, and most people who are using Linux beyond a hobby are already relying on a distribution to provide a stable kernel. I will say that the Linux kernel has never been an "unstable piece of crap" since I
    • I can understand why this seems like a frightening move, but is it really that bad? Aren't numbers just numbers?

      Let's say for the sake of argument that 2.6.8 is a stable kernel, but things break at .9 when some new development goes in. Why not just continue using 2.6.8? If it's stable, it works, and you don't need access to armies of patch people.

      In my opinion it doesn't really matter whether the unstable kernel is named 2.7.x or 2.6.9.
    • Re:This is bad. (Score:2, Insightful)

      by woods ( 17108 )
      Not all distribution maintainers have armies of patch people. This will push people to one of a few distributions such as RedHat or Suse.

      Distro maintainers who don't want to do their own patching of the kernel mainline can simply grab the source RPM for RedHat's or SuSE's kernel, and use that kernel and patchset for their own distro. It is GPL, after all.

  • what next? (Score:5, Funny)

    by bje2 ( 533276 ) * on Thursday July 22, 2004 @08:33AM (#9768648)
    we get stories for every kernel release as it is, and now we get stories when there's *not* gonna be a kernel release?

    what's next? a story on microsoft *not* putting out a new version of windows?...oh, wait...
  • by BJZQ8 ( 644168 ) on Thursday July 22, 2004 @08:33AM (#9768650) Homepage Journal
    Nothing like doing development on a production machine! I love the smell of flaky kernels in the morning.
  • Oh dear... (Score:3, Insightful)

    by cheesybagel ( 670288 ) on Thursday July 22, 2004 @08:35AM (#9768660)
    IMHO this will just increase the fragmentation between the vendor kernels. There should really be one, and only one, stable kernel used by all the vendors. We have enough problems with binary compatibility in Linux already.
    • Re:Oh dear... (Score:4, Insightful)

      by ZorroXXX ( 610877 ) <[hlovdal] [at] [gmail.com]> on Thursday July 22, 2004 @09:15AM (#9768910)
      Come on. No major distribution except Slackware ships with a plain vanilla kernel. "All" distributions patch their kernel (based on a stable release) more or less heavily, sometimes resulting in problems; see for example here [talkroot.com] or here [kerneltrap.org].

      So with development continuing longer on the 2.6 branch, it might help decrease the diversity of the different vendor kernels. At least it is worth trying.

  • by martinbogo ( 468553 ) on Thursday July 22, 2004 @08:35AM (#9768662) Homepage Journal

    The 2.6 Linux kernel has been a roller-coaster ride of development, and it was obvious from the switch from 2.5 to 2.6 that the kernel was far from ready for prime time.

    So, now we're stuck with a rapidly developing 2.6 kernel that poses a lot of risks for anyone wishing to adopt the new so-perceived "stable" kernel into an OS/Embedded/Other product.

    In a way, this is just an acknowledgement that things went a bit too fast with 2.6, and that waiting to release it -after- some pretty solid core feature freezes would have been good.

    There is still a lot of development and teething going on, and it's going to be a real pain on the part of "third party distributors" to find and use whatever build-of-the-week is more stable than another in a given sub-branch of the 2.6 kernel.

    Oh well, so much for having a nice stable 2.6 base to build new functionality into.

    • by Nailer ( 69468 ) on Thursday July 22, 2004 @08:50AM (#9768752)
      Distros will pick a 2.6 release.

      Say, 2.6.6.

      Then they'll backport security fixes just like they did for 2.4.

      The difference is nothing.
      • And in the meantime, the 2.4 series seems to work very well. If it ain't broke... For real, why risk your production machine with an experimental/unstable kernel unless you really, REALLY need to?

        I just finished installing a webserver and it doesn't seem to suffer from having a 2.4-series kernel. Maybe it would be COOLER with a 2.6 but I somehow doubt it. Then again, some people like to rice their cars, too...
      • Distros will pick a 2.6 release. Say, 2.6.6. Then they'll backport security fixes just like they did for 2.4.

        Then why have 2.6 vanilla releases at all? What are they there for? If they serve as a new-feature testing ground, then why not create a 2.7 branch?

        If the vanilla 2.6 series is used as a "base reference", then it must be a stable base point. This idea, however, has just been tossed out the window by Torvalds & Co.

        What about people, like myself, who like to use the vanilla releases from ke

      • So instead of having 2.6 stable and 2.7 in development, we'll have 2.6.X stable and 2.6.Y dev. Sounds like USB 2.0 High Speed or USB 2.0 Full Speed.

        Version numbers may not matter to developers, but I think this is an example of a usability problem. The old version naming was good and well understood. It's almost like an unwritten contract with users that you don't switch these things mid-stream. Naming is part of the interface.

        From the article...

        Andrew's vision, as expressed at the summit, is that the mainlin

    • by vadim_t ( 324782 ) on Thursday July 22, 2004 @08:58AM (#9768813) Homepage
      Please explain?

      So far for me, 2.6 is turning out to be pretty stable, and I switched to it quite early, starting with 2.6.3 I think. In comparison, 2.4.3 was really bad. It was almost a miracle that I managed to avoid critical data loss after switching to 2.4.0 and using ReiserFS on my root partition.

      2.6 so far just works and that's it. Maybe there's some lack of polish somewhere, but so far it works fine here on SMP.
  • Not again (Score:3, Insightful)

    by flossie ( 135232 ) on Thursday July 22, 2004 @08:37AM (#9768676) Homepage
    Does this mean a return to the 2.4.x type of problem, where no-one could agree on which virtual memory manager to use and the stable series ended up being, well, unstable, until it reached about 2.4.19 (or thereabouts)?
    • Does this mean a return to the 2.4.x type of problem, where no-one could agree on which virtual memory manager to use and the stable series ended up being, well, unstable, until it reached about 2.4.19 (or thereabouts)?

      That's exactly what it means. Regardless of what people say, the even-numbered kernels aren't stable until a couple of releases *after* the next odd-numbered branch is created. Anyone who tells you different is a fool. Don't trust 2.6 for production machines. For your personal des

  • by Lispy ( 136512 ) on Thursday July 22, 2004 @08:38AM (#9768680) Homepage
    So what exactly does this mean for distributions such as Slackware which ship a vanilla kernel? Personally I always preferred having it "as it was meant to be," without any tweaking by the distributor.
    The latest Fedora Core 2 debacle proves that this can lead to trouble (NVidia Binaries broken, etc.).

    Distributions such as SLKX (which ships a vanilla 2.4.22) didn't include the 2.6 series as the default kernel. My guess is Patrick didn't trust the beast yet. So what is a man like Pat to do if there isn't the manpower or will to patch the kernel, but the "stable" branch can't be trusted anymore either?
    • Dude, there was no Fedora Core 2 debacle. The 4K stacks patch went into the vanilla kernel also. Thanks to the Fedora team, NVidia had a 4K stacks driver much sooner for you vanilla guys.
    • "The latest Fedora Core 2 debacle proves that this can lead to trouble (NVidia Binaries broken, etc.)."

      Just to be clear, it was a standard (and reasonable) kernel option that broke the nVidia drivers - it's not like the FC2 team put a crazy patch in the kernel that broke them.

      -Erwos
    • The latest Fedora Core 2 debacle proves that this can lead to trouble (NVidia Binaries broken, etc.).
      No, IMO it proves Linus's point of view regarding binary modules.
    • Slackware 10 shipped with 2.4.26 as the default kernel. The 2.6.7 kernel was in /testing on one of the other CDs. So it changes nothing about Slackware shipping with a vanilla stable kernel.
  • Question (Score:3, Insightful)

    by FraggedSquid ( 737869 ) on Thursday July 22, 2004 @08:39AM (#9768689)
    As "final stabilization will be left up to the companies that provide Linux distributions." does this not introduce the risk that the final versions will begin to diverge and over time a RedHat OS, a SuSe OS etc will emerge? Will we have a VHS vs Betamax style battle in Linux?
  • by haplo21112 ( 184264 ) <haplo@epithnaFREEBSD.com minus bsd> on Thursday July 22, 2004 @08:42AM (#9768700) Homepage
    ...and neither is it totally out of the ordinary either...
    Linus hung onto the 2.0 and 2.2 kernels for a long time before truly turning them over to a new maintainer. 2.4 was really the first he jumped out of quickly, because he had ideas that truly shouldn't have been going into the "stable" kernel.
  • Good! (Score:3, Insightful)

    by miffo.swe ( 547642 ) <daniel@hedblom.gmail@com> on Thursday July 22, 2004 @08:43AM (#9768713) Homepage Journal
    I like this idea as it makes development go even faster. Not that I am a big fan of development for its own sake, but rather because I feel there is much left to be done before things are really mature. I really hope, though, that this won't make security suffer. My hope lies in better security being built into the kernel and new ways of protecting it being developed.
  • vendor support (Score:2, Insightful)

    by mokeyboy ( 585139 )
    This will hopefully provide product vendors (e.g. NVIDIA, User-Mode Linux, etc.) with the opportunity to catch up with the release cycle. A good thing for the next 12-24 months.
  • No end user product (Score:2, Interesting)

    by jmkrtyuio ( 560488 )
    Seems to me (strictly from what I just read) that these changes will lead to users who just want their kernel to work either using their distro's kernel or staying on 2.4.

    Seems to me that planning on NOT offering a usable, stable product, and relying on significant and independent effort by third parties, is not the way to go about keeping your users happy.

    Seems to me that this will cause way more forking pressure. This may even open the possibility for a new vanilla stable kernel fork, not distro specific.
    • I started to see that with RedHat 7.x. When they decided to rip out the new VM and go with something else on a "stable" kernel, plus who knows how many RedHat modifications.


      Why not just run 2.6? I'm running 2.6.7 and if future kernels get too unstable (I see no indication of that), I'll just stick to 2.6.7 (which has proven to be rock solid for me).
  • Maybe they don't want to go to 2.7 because they're worried that they may run out of digits before they have to go to Linux 3.0

    Hmmmm.. The step to Linux 3.0. Could be a PR disaster. 2 is a sexy sequel, 3 is usually a not so sexy sequel. 4 is the beginning of something mature and steady, but 3 is just... well it's just a number! :E
    • Hmmmm.. The step to Linux 3.0. Could be a PR disaster. 2 is a sexy sequel, 3 is usually a not so sexy sequel. 4 is the beginning of something mature and steady, but 3 is just... well it's just a number! :E

      They could take the Sun route and just renumber the next major version to Linux 7. Solaris 2.6 -> Solaris 7, Linux 2.6 -> Linux 7.

    • Maybe they don't want to go to 2.7 because they're worried that they may run out of digits before they have to go to Linux 3.0


      2.7 ==> 2.8 ==> 2.9 ==> 2.10 ==> 2.11....

      I fail to see a problem here.
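
      A small aside on why the digits don't run out: version components are whole numbers compared field by field, not decimal fractions. A toy C comparison, invented for this example:

          /* Compare (major, minor) pairs field by field; returns
           * negative, zero, or positive, like strcmp. */
          #include <stdio.h>

          static int version_cmp(int maj_a, int min_a, int maj_b, int min_b)
          {
              if (maj_a != maj_b)
                  return maj_a - maj_b;
              return min_a - min_b;
          }

          int main(void)
          {
              /* "2.10" < "2.9" as strings, but 2.10 is the later release */
              printf("%d\n", version_cmp(2, 10, 2, 9)); /* positive */
              return 0;
          }
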
    • I don't think that they're worried about having to go to 3.0. If they wanted to, they could just do 2.8, 2.10, 2.175, etc.

      On the other side of the coin, we'd have to worry about people associating Linux 3.1 with Windows 3.1. And let's not forget 'Linux for Workgroups' 3.11, where they simply rehash 3.1, only they make it actually work.

  • It has been a while since I started to consider Linux too complex. 2.4 also took quite some time to stabilise, and 2.6 still isn't production-ready in some quite relevant situations.

    For example, trying 2.6 + LVM2 + soft RAID5 + ext3 is asking for data loss. I and several other people reported this, but seemingly either we were statistical noise or we weren't well connected enough, and the kernel hackers never paid attention up to at least 2.6.5. I just gave up and haven't followed it up at all since 2.6.6.
    • I know it has taken some bad decisions and now lacks critical mass, but perhaps the Hurd is the way to go... it should enable better isolation of disruptive development, and enable kernel development to continue adding features.

      The Hurd's biggest problem right now is a lack of developers. Right now a few core hackers are doing a formidable job at porting the Hurd from Mach to L4, a modern research microkernel with extremely low IPC latency. This redesign is also attempting to learn from earlier mistakes.

    • Too complex, certainly. Time for microkernels? If there were a good one with servers for all the relevant hardware ready? Sure! The HURD? HELL, NO!

      Compared to its peers (QNX, VSTa), the HURD is tremendously bloated and slow. Its development involved massive (and performance-reducing) changes to its libc (which have had negative impacts on *other* OSes that use that same libc). Compare to QNX, which runs on tiny embedded devices and which has a demo disk available with massive hardware support and a full we
      • > Compared to its peers (QNX, VSTa), the HURD is tremendously bloated and slow

        Problem is, these are not peers. They do not aim for full POSIX compatibility, they do not aim for multiple personalities nor for the level of development flexibility of the Hurd. They may fit on floppies, but so does GNU/Linux if you trim it enough.

        You know just enough to be dangerous

    • The idea behind microkernels was that you could break up a complex monolithic kernel into simple little parts that would be easy to understand. These parts would work together to implement UNIX or whatever else you wanted.

      Reality isn't like that. Artificial barriers between components lead to various hacks. Glue isn't free: there is complexity in the interaction of the many components. Microkernel systems pretty much rule out many common ways of writing code, such as the use of global hash tables.

      • > Glue isn't free

        We have enough system resources to pay for the cost of glue. Yes, the initial implementation is more complex, but further modular development gets simpler.

        • Glue isn't free for the development effort. I didn't even address the horrid things it does for performance. Dealing with the glue is complexity. You now have complex interactions between components. That's not an improvement.

          As a Linux developer, I never have to deal with the complexity of doing RPC. I just make a function call. I never have to contort my design to avoid shared data structures. I can do trees, hashes, hashes of trees, and so on. I don't have to worry that something might not be mapped. I
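
          A toy C sketch, invented purely for illustration, of the "glue" being described: the same request made once as a direct function call and once marshalled through a message and dispatched by hand:

              /* Monolithic style: the subsystem is one call away. */
              #include <stdio.h>
              #include <string.h>

              static int block_read_direct(int block)
              {
                  return block * 2;   /* stand-in for real work */
              }

              /* Message-passing style: the same request needs a message
               * format, marshalling, and a dispatcher on the other side. */
              struct msg { int op; int arg; int reply; };

              static void server_dispatch(struct msg *m)
              {
                  if (m->op == 1)             /* op 1 = "read block" */
                      m->reply = m->arg * 2;  /* same stand-in work */
              }

              int main(void)
              {
                  printf("direct call: %d\n", block_read_direct(21));

                  struct msg m;
                  memset(&m, 0, sizeof m);
                  m.op = 1;
                  m.arg = 21;
                  server_dispatch(&m);        /* stands in for an IPC round trip */
                  printf("via message: %d\n", m.reply);
                  return 0;
              }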

  • You have to become a kernel hacker if you want your own distro.

    Sure, you can go for one of those 'stable' commercial kernels... but for desktops, remember to take one of the older ones they supply, as the latest kernels from commercial distros are usually too bleeding-edge to be considered rock solid. Not true for the server editions, I guess.

    But does that mean that if I want a rock-solid 2.6 kernel I can't use the ones on kernel.org????

    That sounds silly.
  • by Spooker ( 22094 ) *
    I remember waiting patiently for the 2.5 track to be turned into a 2.6 release ... then when it finally happened I admit to waiting until 2.6.3 before I even tried it (can we say coward?) ... and after running it for a few days I quickly reverted back to 2.4 ... my story probably mirrors a lot of users out there who found that it didn't stand up to the reputation that the 2.2 and 2.4 releases built ...

    But can they (the kernel team) hope to rebuild the trust by making this policy change? Has Redhat (sorry, ha
  • by bakreule ( 95098 ) <<moc.oohay> <ta> <neluerkb>> on Thursday July 22, 2004 @09:00AM (#9768818) Homepage
    The "release" kernel of 2.6.7 broke the ethernet driver of my board (the forcedeth driver wouldn't load properly). This bug was introduced by a patch someone put in (I have the kernel bugzilla # if you really want it). This was a *stable* release that broke a major component of a very commonly used configuration (LOTS of people use the forcedeth driver on their Asus mobos). A simple testing period would have found this bug....

    Am I the only one that thinks this new dev model is a really bad idea?? Stability is the hallmark of Linux, but that is now effectively broken. If we have a problem, we can't say anymore "oh! There's a new kernel! We should try that!" There's NOTHING wrong with the current odd-test/even-stable scheme. If Linus and Morton want to play around with new features, MAKE A 2.7 BRANCH! 2.6 is finalized, let it be! If you don't think that there's enough features in it, YOU SHOULDN'T HAVE RELEASED IT!

    A lot of people use the vanilla sources, myself included obviously; I shouldn't have to go to RedHat to get a working kernel. The 2.6 branch is NOT a playground; that's what the -mm branch is for.....

    2.6.6 works for me, and I'm not changing. For the first time in my life, I *DON'T* trust what's coming from Linus & Co.... and that's scary.... It's like God is forsaking you to go play with some toys....
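
    One standard way out-of-tree and backported drivers cope with interface churn between point releases is a compile-time version check. Here is a minimal userspace sketch of that pattern; the KERNEL_VERSION packing matches what the kernel's <linux/version.h> defines, but the two setup functions are hypothetical stand-ins:

        #include <stdio.h>

        /* same packing as the kernel's <linux/version.h> */
        #define KERNEL_VERSION(a, b, c) (((a) << 16) + ((b) << 8) + (c))
        /* pretend we are compiling against 2.6.7 */
        #define LINUX_VERSION_CODE KERNEL_VERSION(2, 6, 7)

        int setup_new_api(void) { return 0; }  /* hypothetical stand-in */
        int setup_old_api(void) { return 0; }  /* hypothetical stand-in */

        int main(void)
        {
        #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 7)
            puts("built against >= 2.6.7: using the newer interface");
            return setup_new_api();
        #else
            puts("built against an older 2.6.x: using the old interface");
            return setup_old_api();
        #endif
        }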

    • by Anonymous Coward on Thursday July 22, 2004 @09:11AM (#9768889)
      I'm sorry, but your forcedeth driver example is a harsh one. The driver is clearly labelled as "EXPERIMENTAL", and what's more, it is *reverse engineered*, because nvidia didn't want to give out the documentation for it. You are lucky someone was actually working on it at all.

      As a side note, nvidia is now actually contributing to this very driver, though only since 2.6.7.

      So this line of argument holds no water.
    • If you don't want to be on the bleeding edge, don't upgrade right after a kernel is released! Wait a week and watch the bug reports.
  • So basically (Score:4, Insightful)

    by Bruha ( 412869 ) on Thursday July 22, 2004 @09:08AM (#9768861) Homepage Journal
    We can no longer count on the bare kernels to have any stability, and must depend on companies to stabilize the kernel.

    I disagree with this method for a few reasons.

    Everyone still probably remembers when you had to use a bare kernel to recompile and get Nvidia HW Accel drivers to work with the 4k stack problem.

    Also, this will pose a problem for the many distros that do not have armies of people to sit around and stabilize the kernel for their distro.

    Another problem is with drivers being fixed. A bare kernel will be fixed, but the customer may have to wait 2-3 months before their specific distro catches up and includes a fixed kernel.

    Lastly, this will increase costs for developers of distros such as Redhat and Novell, since they will now have to employ kernel hackers to deal with problems that may exist in the kernel.

    I can see no good coming from this approach to Linux; it may hurt us in the long run. I hope they reconsider.
  • by Anonymous Coward on Thursday July 22, 2004 @09:08AM (#9768865)
    However, according to this KernelTrap article, active development will now continue in the mainline 2.6 tree, and the final stabilization will be left up to the companies that provide Linux distributions.

    No sane person will now touch the 2.6 tree for a production server, knowing that development happens in the 2.6 tree, unless they buy from RedHat etc. who may guarantee stability.

    Let's hope they reconsider and create 2.7 ASAP; otherwise I know some companies will probably want to either stay at the current release, or abandon Linux ({cough} some may even go back to Micro$oft).
  • by 4of12 ( 97621 ) on Thursday July 22, 2004 @09:12AM (#9768890) Homepage Journal

    Even "production" kernels can have problems. Remember the VM changes around 2.4.10?

    New production kernels deserve every developer's full attention until they're really, really ready.

  • The only thing that really upset me here was the offloading of stabilization to distros. What about distros that don't have a huge developer army (Slackware, for example)? Before this, the vanilla kernel was actually usable. I wonder if this will make it less so. OTOH, 2.6 isn't really feature-complete quite yet. Some things like LVM2 are still missing from 2.6. So I guess I don't mind feature additions as long as they don't destabilize the main tree, thus requiring distros to use a non-vanilla kernel. On
  • Chill. (Score:5, Insightful)

    by thecombatwombat ( 571826 ) on Thursday July 22, 2004 @09:16AM (#9768915)
    To everyone saying this will kill the independent distro, chill.

    If you were going to make a new distro right now, in my opinion you'd be better off starting with Fedora or Progeny's Componentized Linux or vanilla Debian or something as it is; stand on some shoulders, people. Linus and his crew produce a kernel, not an operating system. I'm sure they're doing this to produce the best kernel they can, not because they hate you.

    Like other people said, 2.4 had so many changes go in during it's "stable" life, maybe their just trying to be realistic and make 2.6 actually be more stable than 2.4 this way?
    • Like other people said, 2.4 had so many changes go in during it's "stable" life, maybe their just trying to be realistic and make 2.6 actually be more stable than 2.4 this way?

      That is entirely contradictory; active development should not exist on a STABLE branch, in order to prevent unforeseen stability issues when introducing new code, ideas, and features.

      On a STABLE branch you only want bug and security patches. People are very realistic that stable (even) Linux kernels have had serious issues, but introduc

    • So? Nobody's talking about creating a new distro. We're all thinking about existing distros that don't have the manpower to ensure that their system is properly tested with each new feature of each kernel release. Taking the time to do so means less time for adding and testing new applications and user-level features, the stuff which many more people care about.

      2.4 had so many changes go in during it's [sic] "stable" life, maybe their [sic] just trying to be realistic and make 2.6 actually be more stab
    • Re:Chill. (Score:3, Interesting)

      by oconnorcjo ( 242077 )
      Like other people said, 2.4 had so many changes go in during it's "stable" life, maybe their just trying to be realistic and make 2.6 actually be more stable than 2.4 this way?

      Only when Linus was the maintainer. As soon as the kernel was handed over to Marcelo Tosatti, 2.4 got SIGNIFICANTLY more stable and development slowed to a crawl. Many people's point is that 2.6 will never be anything but early 2.4, because Linus refuses to leave it behind and start something new. I blame a lot on Andrew Morton. M

  • by Decaff ( 42676 ) on Thursday July 22, 2004 @09:19AM (#9768933)
    I don't believe 2.7 will ever happen. In a move guaranteed to improve the acceptance of Linux by CEOs and PHBs, it will surely be...

    Linux 3000 Xtreme Professional Plus (codename: BiggerHorn) - based on NT (New Tux) technology.

  • Won't this cause Linux kernel forking? Each distribution will be adding "stabilization" patches to the kernel, which may or may not be compatible with other distributions' "stabilization" patches. These "stabilization" patches may or may not be accepted back into the Torvalds/Morton kernel.

    They probably thought of all of this at the Kernel summit. The KernelTrap article only mentions:

    "Andrew's vision, as expressed at the summit, is that the mainline kernel will be the fastest and most feature-rich kern
  • This is a wonderful idea if you want to make it nearly impossible for small groups or individuals to develop and maintain Linux distributions. Has Linus now joined Microsoft in its contempt for the small developer?

    The great advantage Linux has had over Windows and several other operating systems is its stability. Now that stability is going to be placed in the hands of those maintaining the distributions rather than those who have made Linux into what it is today. Instead of being assured that every

  • by leinhos ( 143965 ) on Thursday July 22, 2004 @09:45AM (#9769110) Homepage Journal
    It seems to me that everyone is assuming that there will never be a 2.7 tree. In the article [kerneltrap.org], they simply quote Jonathan Corbet as saying that "2.7 will only be created when it becomes clear that there are sufficient patches which are truly disruptive enough to require it. When 2.7 *is* created, it could be highly experimental, and may turn out to be a throwaway tree."

    They are just concentrating on the stable branch for now, and collecting a patch set (Andrew Morton's -mm patch set, that is) as a testing ground for proposed (stable) kernel changes.

    This really doesn't seem like a big deal, and it implies that the kernel people will be focusing on stability for the time being.
  • by tacocat ( 527354 ) <tallison1&twmi,rr,com> on Thursday July 22, 2004 @10:00AM (#9769188)

    IMHO this is bad. If the development process can no longer separate what's stable from what's under development, then they are probably incapable of measuring the stability of the current release product.

    But I wonder how Debian and Gentoo will handle this, since they aren't the stereotypical corporation that stabilizes the kernel prior to release. Debian, at least, does a very slim job of customizing the Linux kernel when compared to RedHat, Mandrake, and SuSE.

    Does this imply that the LKML has decided to abandon its origins of Free, and to just hack code and let someone else actually worry about a finished, workable product? Sounds like they are kind of blowing off their community.

    Or is the community filling up with whiny-assed wimps who don't know what "make clean" means?

  • I need to know what all that number gibberish means. What's the pussy cat name for the new version? Tony the Tiger? Panther? Kitty Witty? Hello Kitty? Without a cute kitty name and crazy water bubble buttons, I'm too afraid to leave my Tree Huggers meeting and face the world!

    :)

    Joke Alert: Relax Mac hippies, I'm typing this on my Powerbook. (Cred to El Reg for the Joke Alert tag).

  • by His name cannot be s ( 16831 ) on Thursday July 22, 2004 @10:10AM (#9769257) Journal
    I've been constantly amazed by people imagining that version numbers on the kernel (or other software, for that matter) actually reflect anything other than a simple fact: a build at a point in time.

    Read the kernel mailing list. Sure, as a version approaches, Linus makes an attempt not to include patches which have not undergone much testing, but the simple fact of the matter is that the kernel is no more or less stable at version 2.6.n than it is at version 2.6.n-mm4, or 2.6.n-rc7, etc...

    Versions in the kernel are really just "points in time." The apparent stability of a version is really perception as to what is working and what isn't, and is completely outside of the versioning.

    What's worse, in order to facilitate the versioning mechanism, kernel maintainers have moved closer to the 2.n.m-rc1234 bla bla bla in order to signify that the whole number m is "stable", which, just as often, it isn't quite, and requires patches anyway.

    Honestly, for my money, Linus could just use 2.6.n.20040722 to signify his builds, and I'd be just as happy.

    Aside notes:
    For those who used the 2.5.xx series and found it unstable: did you report what instability you had?

    For those who tried the 2.6.xx series and found instability: did you report what instability you had?

    I think that when you claim that a version is unstable, you should back it up with what is wrong, and how it affected you, and pass that information forward to the developers. If you don't, you are robbing the developers of the very feedback they need. Complaining about it doesn't do much good if they don't know about it.

    I think I rant too much
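
    For reference, the release strings being discussed break down into the 2.6-era kernel Makefile fields VERSION, PATCHLEVEL, and SUBLEVEL, plus an optional EXTRAVERSION tag such as -rc7 or -mm4. A small illustrative C parser (the example string is made up):

        #include <stdio.h>

        int main(void)
        {
            const char *release = "2.6.7-mm4";  /* example string */
            int version, patchlevel, sublevel;
            char extra[32] = "";

            /* %31s grabs whatever trails the x.y.z triple, e.g. "-mm4" */
            if (sscanf(release, "%d.%d.%d%31s",
                       &version, &patchlevel, &sublevel, extra) >= 3)
                printf("VERSION=%d PATCHLEVEL=%d SUBLEVEL=%d EXTRAVERSION=%s\n",
                       version, patchlevel, sublevel, extra);
            return 0;
        }
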
  • by Builder ( 103701 ) on Thursday July 22, 2004 @10:36AM (#9769428)

    This decision scares me. I believe that this will create pressure that will lead to one of two things:

    • Kernel fork so that companies can have a stable branch that they can trust, and just cherry pick new things from the main tree as and when they want
    • OR - vendors like Oracle, Sybase, IBM, etc. only supporting one or at most two distributions
    Before you shout me down, hear my reasoning out...

    Vendors developing applications need stable APIs and ABIs. We're already too close to a potential fragmentation situation with multiple distributions on different kernel versions, glibc versions, etc. It's giving vendors headaches because they claim to support Linux, but then the masses spew insults when their particular distribution doesn't work.

    Ignoring performance stability, instability of the code base will hurt Linux acceptance. If it costs vendors more to keep up with the ever-changing world than they can make from selling Linux solutions, they'll either find a way to freeze that world (i.e. fork), or they'll discontinue or reduce their support, and tie it to just one or two distributions. These are both bad options for the end-user.

    You also have to wonder how much trust should be placed in the distribution companies. Going for a Red Hat solution is in many cases more expensive than a similar Sun solution, and Red Hat don't provide a lot of choice. Want to use XFS or JFS on your Red Hat Enterprise Linux AS 3 server that you paid them $1000 for? That's just tough, because if you do that, they won't support the box.

    Pay a grand and get no support - that's the price of 'choice' with Red Hat. I'm sure other distribution vendors will be the same, because at the end of the day, they need a known-good installation to troubleshoot against. That's fair enough, they're in this for business reasons. But to say that we should rely on their altruism for our stable kernels? Doesn't seem like good forward thinking to me!
  • by mark-t ( 151149 ) <markt.nerdflat@com> on Thursday July 22, 2004 @11:48AM (#9770124) Journal
    I can think of no other words for it.

    While I have no problem with them making patches in 2.6 for security reasons or to do bug fixes or corrections, the dev branches have traditionally been an opportunity for the kernel devs to tinker and to begin adding newer and cooler features to the kernel. One could rely on the fact that in a Linux kernel numbered x.y.z, if the only thing that changed was the z, one could usually expect that nothing significant would be changing within the system: that if they upgraded from x.y.a to x.y.b, the only things of note would be bug fixes, security fixes, and _MAYBE_ a minor feature or two that 99.99% of people wouldn't notice anyway, but nothing too significant would have actually changed unless it was previously broken. However, now they want to make 2.6.x a development tree, and I can see that this could have one of two negative consequences:

    1. Addition of new features to linux is slowed down drastically, in order to keep the feature set in 2.6 as consistent as possible. With no 2.7 to put new features into, there's not much breathing room for creative development.
    2. The devs won't have a problem adding new features to 2.6, creating a high diversity in the 2.6.x feature set. This can cause some level of distrust in the 2.6.x branch, just as there has usually been in most of the previous development kernel versions, and this would slow down its acceptance and use.

    In other words, not having a 2.7 is a Bad Thing. Why they don't see this is beyond me.
