Linux Kernel Performance: How Will 2.6 Measure Up?
An anonymous reader writes "This story offers some interesting performance comparisons between the latest stable Linux kernels (2.4.x) and the latest development Linux kernels (2.5.x), comparing performance on both a single processor and dual processors. These numbers help validate that the upcoming 2.6 kernel will outperform the current 2.4 kernel, at least in some instances..."
It's faster. (Score:2, Informative)
But how is this news? Ever since the thread on Kernel Notes a month or so ago, most of us have known this.
Re:It's faster. (Score:2, Insightful)
Most of us?
I'm not sure that most of us read the threads on Kernel Notes... Many, perhaps... Some, maybe...
You overestimate us, I fear.
P.
Re:It's faster. (Score:2, Funny)
Re:It's faster. (Score:5, Funny)
Compile time speedups (Score:5, Interesting)
Of course, the real solution would be to not need to compile software (plug plug :)
Re:Compile time speedups (Score:5, Insightful)
That'll enable ATA??/DMA features, and everything will be much faster...
Re:Compile time speedups (Score:5, Informative)
Basically, run 'hdparm' on the drive.
You can even use it to test CD-ROMs and RAID arrays. Just remember that when you optimize an array, you want to optimize each disk (/dev/hd[x]), not the array device itself.
One other note: the '-t' flag, like most synthetic tests, may not show the best settings for the drive. A lot of times a timed kernel compile (or my new fav test, a Mozilla 1.0 compile) will reveal benefits, or detractions, not shown in a synthetic benchmark.
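For anyone who hasn't played with it, here's a minimal sketch of the tuning the parent describes. The flags are the classic hdparm ones; /dev/hda is just an example device name, and you'd normally run this as root:

```shell
#!/bin/sh
# Sketch of classic IDE tuning with hdparm.
# /dev/hda is a hypothetical example; substitute your own drive.
DEV=/dev/hda
if [ -b "$DEV" ] && command -v hdparm >/dev/null 2>&1; then
    hdparm -d1 -c1 -u1 "$DEV"   # -d1 DMA on, -c1 32-bit I/O, -u1 unmask IRQs
    hdparm -t "$DEV"            # timed sequential reads (synthetic benchmark)
else
    echo "skipping: $DEV not present or hdparm not installed"
fi
```

As the parent says, treat the synthetic '-t' numbers as a starting point and confirm any setting with a real workload like a timed kernel compile.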
Re:Compile time speedups (Score:2)
Re:Compile time speedups (Score:4, Informative)
"A setting of 1 permits the driver to unmask other interrupts during processing of a disk interrupt, which greatly improves Linux's responsiveness"
Re:Compile time speedups (Score:2)
Of course, you didn't mention what type of server it is. If it's not a file or mail server, you might not care so much about disk I/O.
Re:Compile time speedups (Score:2)
Re:Compile time speedups (Score:2)
Re:Compile time speedups (Score:2, Offtopic)
My aim was a little different from yours though. I was going for complete binary packaging from beginning to end. No source building, as automated ./configure; make; make install runs tend to make distro-specific code.
Maybe I should start it back up. It's not like I have much else going on lately. hmm...
Re:Compile time speedups (Score:5, Informative)
Nope, you're right, but autopackage can figure out what libraries are present and retrieve (assuming they've been packaged) the libraries from a DNS style distributed network, apt style.
My aim was a little different from yours though. I was going for complete binary packaging from beginning to end. No source building, as automated ./configure; make; make install runs tend to make distro-specific code.
Hmmm, how did you get the impression that autopackage is source-based? A .package is a binary package from end to end; the user doesn't need to compile anything.
All I provided was an archive format and a self-extracting GUI or command-line installer that totaled under 50k of overhead.
We're using a similar idea except the scripting language and front end code is external and installed-on-demand when you run a .package file if it's not already present to minimize package file bloat.
Maybe I should start it back up. It's not like I have much else going on lately. hmm...
If you're interested in the problem, please take a close look at autopackage first, and feel free to hop onto IRC (freenode #autopackage) and talk to us. We're normally around in the evenings GMT (both of the core developers are in Europe). It'd be a shame to duplicate effort when our projects sound so similar.
Re:Compile time speedups (Score:3, Interesting)
Hopefully it will be solved for you Linux guys in 2.6.
Re:Compile time speedups (Score:4, Interesting)
I first tried FreeBSD about a month ago, and that's exactly what I noticed about FreeBSD. Smooooooooth.
For example, in Linux (2.2.x and 2.4.18+) I found that when something demanding was going on (like building Mozilla or the kernel), X11 became choppy: the mouse stuttered and typing lagged in bursts.
Not so with FreeBSD. Many times I've had GnomeICQ hit a bug and use 100% CPU, but I was unaware of it until days later when looking at top.
A few days ago I installed FreeBSD onto my P100, with 64MB of RAM. Playing around, I ran many, many dnetc's by launching thousands of them with 'nohup'.
At a load of 350 the P100 box was still very happy to do what I told it, with very surprising responsiveness. However, once the load got up to 450, my ssh connection to the box was terminated, and I had to restart sshd locally. Which is fair enough, I guess: one will run out of swap and RAM sooner or later.
I can recall doing this same dnetc thing with Slackware, running 2.4.something, and after a short while at load 50, I started getting seg faults every time I ran a command.
Until Linux shrugs off huge loads effortlessly and in a stable manner like FreeBSD does, it's not going to live in my boxen.
Tux needs to get fit and learn how to balance on one leg properly.
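The load test described above can be sketched like this. It's a tame reconstruction, not the original commands: 'sleep' stands in for dnetc, and 5 is a safe count where the parent used thousands:

```shell
#!/bin/sh
# Spawn a handful of nohup'd background processes, report how many started,
# then clean up. Scale the count up to actually stress a box.
pids=""
i=0
while [ $i -lt 5 ]; do
    nohup sleep 30 >/dev/null 2>&1 &
    pids="$pids $!"
    i=$((i + 1))
done
echo "spawned: $(echo $pids | wc -w)"
cat /proc/loadavg 2>/dev/null   # watch the load climb on a real run
kill $pids 2>/dev/null
```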
A contrary viewpoint (Score:3, Informative)
After a few months of that, I am back in Mandrake 9.0 with relief and no regrets. Why?
1) I found that, for the things that I do, FreeBSD offered no advantages at all. Performance and stability were no better than in Mandrake 8.2. In fact, under heavy loads, my experience is that Linux 2.4.x is much better. (I run lots of octave math simulations and lots of Fortran number-crunching programs, often several at a time.)
2) For people used to working with Linux, there are lots of annoyances in working with FreeBSD. I missed the convenience of RPMs. Many of my favorite programs did not compile properly.
3) When push came to shove, my friend had no suggestions as to why the FreeBSD install did not perform as well as Linux, except to tell me that I must be mistaken about how well the Linux install performed! Duh!
Now, maybe under some circumstances FreeBSD does outperform Linux. But I could not care less. For the work I do (mostly on the desktop, running simulations, Mozilla, and xine), Linux is demonstrably the better system.
Magnus.
Re:A contrary viewpoint (Score:2)
Now you know what it feels like to wade through all this Linux sycophantry. Geez, reading Slashdot is like reading about how Linux can do anything including your laundry.
Speaking of laundry, your laundry list of complaints boils down to one item:
1) FreeBSD is different from Linux. Duh! First, optimize the system and applications for your compiler. Second, learn how to use ports instead of assuming that anything that isn't RPMs must be bad. Third, who the hell needs performance during a system install?
Re:Compile time speedups (Score:2)
Re:Compile time speedups (Score:3, Informative)
It sucks, but that is your fault. No, hear me out; this isn't one of those "so fix it" rants:
I've been building my own kernel since the 1.2.13 days. I've [until recently] never built a crap kernel (sometimes I left things out I wanted in, like sound, but it always worked smoothly, even under load).
I recently tried to roll my own 2.4.{7,18} kernels under RH 7.2, and it did exactly what you describe. The slightest bit of IO concurrent with load would stutter up the entire system.
However, redhat's kernels (based on the same version) would NOT have this problem. Smooth as astroglide on a banana peel.
So the conclusion is that the kernel broke sometime during the 2.4 task-switcher/VM-mapper debacle: not in a "no longer works" sense, but rather in that deep wizardry is needed to build a "good" kernel. Obviously you and I forgot to check the "do not fuck up" box.(*)
I would totally go for *BSD, but a clean RH install works ok, and I judge the overhead of applying updates to keep the system secure less than a complete OS shift and learning to administer a new not-quite-the-same system.
(*)The first instance of the "do not fuck up" box was found on the LaserWriter driver for System 7.x: if you unchecked download fonts, then _nothing_ would come out, otherwise it worked like a charm. Likewise, acrobat reader has an "avoid Level-1 PS like the plague" option that allows you to print what appears on screen even when it contains greek letters.
The problem is that each configuration dialog has this box hidden as something else, but it is always there, somewhere.
People in glasshouses shouldn't throw stones (Score:5, Interesting)
Amazing. I've been running FreeBSD since 2.8 and I've never had an unresponsive system, even while doing a buildworld; I guess the 2.4 kernel is a lot worse than imagined.
This is, by and large, the fault of the scheduler, largely unchanged in 10 years and described by Linus, even whilst he wrote it, as a 'hack'. However, it worked, and Linus, being the extremely sensible and conservative maintainer that he is, kept it until recently - process schedulers are difficult things to get right, and their performance is crucial to the performance of the kernel as a whole. Not to mention that for the tasks that Linux has been used for historically, primarily low-volume server tasks on low-end hardware, it isn't really a bottleneck.
Still, the scheduler has been gutted and rewritten for 2.6 by Ingo Molnar - the now somewhat-famous O(1) scheduler, which performs much more fairly under load, and dispenses with almost all of the strange pauses and scheduling glitches under load. Current vendor kernels based on 2.4 (Red Hat's and SuSE's at least, I think) have had the O(1) scheduler backported to them as well. In fact, if you're running near enough any current 2.4 kernel other than mainline, you get the O(1) scheduler and your share of scheduling fairness.
The new scheduler is also a fundamental basis for Linux 2.6's new NPTL 1:1 threading, which has so far proved spectacularly (record-breakingly?) fast. Hmm, on second thought, perhaps I shouldn't mention threads and FreeBSD in the same post. I mean, isn't this the same FreeBSD that's still waiting for a single half-decent pthread implementation? Oh well, better hope 5.0 is out soon...
Re:People in glasshouses shouldn't throw stones (Score:2)
Re:People in glasshouses shouldn't throw stones (Score:2)
There's nothing wrong with the POSIX threads implementation; it's perfectly adequate. It's just that the FreeBSD scheduler doesn't operate on the internals of user processes.
Ok, let's get this straight. FreeBSD's default pthread implementation is an entirely userspace affair, everything occurs within a single process and the thread library does co-operative scheduling of threads within the process. Not quite sure what you're getting at when you talk about writing your own thread scheduler, the pthread specification requires that the thread library schedules and run your threads for you. A thread library that doesn't make sure threads get scheduled isn't really a thread library anyway, is it?
It works, just about, I'll give it that. However, I wouldn't call it perfectly adequate, it sucks donkey balls. It is slow, inefficient, and prone to threads blocking the whole process. It doesn't take advantage of SMP. You link to the re-entrant libc version, libc_r instead of the normal libc, and libc_r is incomplete, which means that quite a lot of pthread-using code that works fine on other platforms craps out on FreeBSD. Forget about realtime threads - you can attempt to create them, but without kernel assistance they can never be truly realtime, so it's not really a full pthread implementation anyway. Worst of all, it's buggy - it doesn't take a lot of load for very long for things to start behaving erratically.
Ok, that's just the default FreeBSD pthreads, and there are better alternatives - ironically, probably the best is LinuxThreads, the standard Linux pthreads before NPTL, which attempts to do 1:1 kernel threading. It seems to work ok, but it's a whole lot slower than on Linux, because there's no screaming-fast clone() syscall like there is on Linux and precious little other kernel assistance. It's a little buggy too, not surprising given that it's not running on the platform it was designed for, although it's bearable. Having to link to something in ports just to get halfway-usable threads is a really dumb situation for FreeBSD to be in, and a total PITA for developers who want their threaded software to run on FreeBSD too.
This ought to all be fixed in FreeBSD 5.0: it's getting a new pthread implementation using KSEs (Kernel Scheduled Entities), basically the M:N threading most modern OSes use (and which Linux's NPTL appears to have just obsoleted as a concept ;), but it isn't finished yet and FreeBSD 5.0 isn't out. Meanwhile, threading in FreeBSD 4.x is still dreadful.
Re:Compile time speedups (Score:2)
Of course back in the 2.2 days, when 2.4 was on its way, 2.4 was touted as being better able to recover from heavy loads. We ran several Linux web servers, and a bad CGI (or a good
2.4 improved that significantly, and even my home boxes noticed the difference. Another poster mentioned a new scheduler finally going into 2.6, so perhaps this will improve even further. Whether it will be as solid as FreeBSD or not, any improvement will still be great...
Re:Compile time speedups (Score:2, Interesting)
Re:Compile time speedups (Score:3, Informative)
nice -n 19 make && nice -n 19 make install
No responsiveness problems, AND your computer uses all those spare cycles while you are doing other things.
I do this all the time and desktop responsiveness doesn't bog down at all. Kernel doesn't take much longer to compile either.
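For reference, a quick way to confirm the niceness actually sticks: 'nice' run with no arguments prints the current niceness, so running it under 'nice -n 19' should print 19.

```shell
#!/bin/sh
# Verify that 'nice -n 19' really applies: the inner 'nice', run with no
# command, reports its own niceness.
nice -n 19 nice
# Real usage, as in the parent post:
#   nice -n 19 make && nice -n 19 make install
```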
Re:Compile time speedups (Score:2)
Linux has a big problem with software packaging. Deb files are debian specific. The answer is not for everybody to use debian, it's to build packages that are not specific to a particular distribution. If you never ever want software that's more up to date than the stuff in Debian, if you think dpkg is the essence of perfection itself and cannot be improved in any way, then my project is not for you. Sorry :(
For people who don't use Debian though (ie the majority) I think this system has major advantages.
Quick question (Score:5, Interesting)
Re:Quick question (Score:1)
Re:Quick question (Score:5, Funny)
I am currently using a dual Athlon MP 2400 system with 4 GB RAM and a 10-drive RAID setup. I find performance acceptable, although I am looking to upgrade from my circa 1983 Hercules MGA card sometime soon...while the onscreen text is clean and crisp, I have found the preview of Doom III to be virtually unplayable.
Re:Quick question (Score:1, Interesting)
It's not the fastest server in the world, but I use it as a router for my ADSL/cable connection, and I am very happy with it.
Re:Quick question (Score:2)
Re:Quick question (Score:1, Interesting)
Re:Quick question (Score:1)
A 386SX-16 with 4MB RAM. It was used as a tiny Samba server.
Re:Quick question (Score:5, Insightful)
Hell, a P75 works fine as a Windows NT4 PDC for a small network and can also handle low-to-medium file serving for around 20 users at the same time.
Then there's the idea of using Linux network client stations, as in "How to create a Linux-based network of computers for peanuts [linuxworld.com]", to which this site linked more than a year ago. This system can even make use of 386s -- I've already tried it. True, performance is a bit slack, but just how much power do you really need to write documents? A network-based 386 (or one running Slack 2.x) with Abiword or maybe pico/vi/emacs (some people do actually like those) works just fine.
woof.
Re:Quick question (Score:2)
I currently run Linux on some oddball Olivetti server with dual P100s. Works perfectly (so far; haven't tested the SCSI tape drive yet), even though I had to compile the kernel on my main machine. It would have taken 2 to 3 hours on one proc (SMP wasn't compiled in yet), which really isn't the most viable solution. Maybe I'll dig up a few 486s, restore them, install some older stuff like the 2.0 or 2.2 kernel, and try to sell them as cheap fileservers/webservers/routers?
Re:Quick question (Score:2, Interesting)
I have on my network at home the following ancient hardware:
I also have some other odd hardware on my network, that is so ancient that linux won't even run on it (2 DEC turbochannel machines, one MIPS-based and one Alpha, both running netBSD)...
I acquired all this hardware when it was being thrown out by various people - bang per buck is essentially infinite.
Re:Quick question (Score:3, Informative)
not just desktops (Score:2)
Re:Quick question (Score:2)
Plus, with this crate I learned what the "EISA Bus Configuration" option in the Linux kernel is for...
I have also had a 486DX/2 100 for the same purpose, ran great too, but had to decommission it because it wouldn't take larger HDs.
(On a side note, I was able to recycle the 30-pin SIMMs from the 486 by putting them into the cache expansion slots on the P90's EISA SCSI controller card. I miss the old times.)
Re:Quick question (Score:2)
It's been running for years now without any problems -- it could probably use a bit more memory, but then again, swap space hasn't even been touched at this point.
All in all, a very good box -- especially considering all of the power outages/brownouts that I've had the past few years. This box just keeps on going.
Re:Quick question (Score:2)
My weakest spec machine I regularly use is a P75 laptop. 16M memory, 750M disk.
OS is Debian Woody.
Desktop is XFree86/Ratpoison.
Primary programs are ssh, w3m, centericq, and dillo. (Although I'm moving more towards 'View' in w3m, as well as email and newsgroups in mutt).
Why? Because it's nice to have a cheap, sturdy laptop I can take anywhere without worrying too much about it.
I also use a P166 at home as a fileserver/mailserver/newsserver/sambaserver/firewall/dialout server.
Re:Quick question (Score:2)
I found RAM to be the limiting factor, rather than CPU speed. Thirty-two megs is the minimum Red Hat recommends for its 7.x system (which is what I have). The kernel plus modules and buffers seems to take up most of the available memory (according to /proc/meminfo), so I wouldn't want to try it with much less. And for networking, you're much better off if you don't have to swap stuff out to disk just to route a packet.
I think you could get it to work with 16 megs, but with less I'd worry that performance would truly suck. Anyone know different?
Re:Quick question (Score:3, Informative)
I have a 486 (75 MHz, I think) laptop, with 12 MB RAM. It runs Debian Woody with X11 and the Blackbox window manager. I was using it yesterday to read slashdot and debianplanet. I was using the Dillo browser. Performance was slow, but tolerable. If I want to use emacs, I kill X, or else it starts swapping.
The real holdup for speed is the lack of RAM, rather than the anemic processor, but as long as I'm careful about how many processes I have going, it's fast enough. The holdup for usability is the small, 640x480 screen, and that's more a matter of it being an old laptop than being old.
It's interesting to notice that this little box could run Win3.1, but isn't a speed demon with that either. Win3.1, of course, is old and single-tasking and insecure and nothing modern runs on it. Putting a modern, proprietary OS on this sort of old hardware is probably out of the question.
Based on this, I think that any Pentium with a lot of RAM (more than 64 MB) would be just peachy for email/websurfing. Any Pentium with >32MB should be useable, if set up sensibly.
Re:Quick question (Score:2)
I'm also running a SparcStation IPC (24.88 bogomips, 32MB RAM) as an internal web server and CD image server. (Running SuSE SPARC 7.3).
Up until a few months ago my development machine at work was a P-166. (My main home machine is a dual CPU P3-550 with a half gig of RAM, and yes, I noticed the difference!)
Re:Quick question (Score:2)
Re:Quick question (Score:2)
Works fine. The hardware is more than adequate for shuffling a few Mbps from one nic to another while inspecting some ip-headers.
Re:Quick question (Score:2)
And I don't get fat thanks to a LARGE stomach.
Re:Quick question (Score:2)
X supports remote use natively, no need for a hack like VNC, and it's sure a hell of a lot faster.
What's more, X natively lets you export single apps at a time, without the overhead of a complete desktop and window manager, and the apps from one server integrate perfectly with the ones you're running locally.
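A sketch of exporting a single app, for the curious. The hostnames ('otherbox', 'mydesktop') are made up, and 'ssh -X' needs X11 forwarding enabled on the server:

```shell
#!/bin/sh
# X clients pick the display to draw on from $DISPLAY, so exporting a
# single remote app is just running it with DISPLAY pointing at your
# local X server.
#
# The tunnelled way -- ssh sets DISPLAY for you:
#   ssh -X otherbox xclock
#
# The old-school way, pointing the client straight at your X server:
#   xhost +otherbox                 # run locally: allow the remote host
#   DISPLAY=mydesktop:0 xclock &    # run on otherbox
DISPLAY=mydesktop:0
echo "client would connect to: $DISPLAY"
```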
Re:Quick question (Score:2)
It may be faster... (Score:4, Interesting)
Performance is important, certainly, but I think some people (*cough* overclockers *cough*) assign it a bit too much importance.
SMP (Score:4, Informative)
It looks like the new kernel better utilizes multiple CPUs. This is a great thing. Linux needs better support for SMP systems if it is going to play with the big kids in the high-end server market. (I know, Linux is partially there).
SMP is overrated (Score:3, Interesting)
When it comes down to it, you only get cost-effective scalability by using distributed systems or clustering. In fact, for really large systems, it's the only possible way at all.
Something like OpenMosix [openmosix.org] should really be a standard part of the Linux kernel already, as should other support for simplifying clustering, distributed computing, communications, and distributed shared memory. Distributed systems and clustering are the future, not SMP.
Re:SMP is overrated (Score:2)
Re:SMP is overrated (Score:2)
Mosix, of course, is no substitute for the kinds of problems in which many processes share a lot of state, but other approaches are (including distributed shared memory and various other communications libraries).
Re:SMP is overrated (Score:3, Insightful)
Latency: Mosix is just too unresponsive compared to an SMP option.
Time: the more people that buy SMP boxes, the better they will get. The MHz wars and Windows killed off home SMP; if Intel had invested in SMP design instead of MHz, there'd be cheaper, cooler SMP machines out there.
Re:SMP is overrated (Score:2)
Latency depends on the problem and the design of the distributed system. For problems where Mosix is an alternative to SMP, latency doesn't enter into the picture at all because Mosix processes are usually only loosely coupled and because network and disk I/O migrates with the processes.
For problems where IPC latency is a performance concern, it can almost always be dealt with even in a distributed system through better design. The money you save on overpriced SMP designs more than lets you make up for any remaining performance losses from network latency (and I'm not convinced that with modern networking technologies, there are significant losses anyway).
The more people that buy SMP boxes the better they will get,
No, they won't. There is no magic tooth fairy of SMP, and you can't scale SMP indefinitely. If you put, say, 64 processors onto "the same memory", they aren't really on the same memory anymore--you are just paying for a very expensive box with a bunch of hamstrung processors inside that are essentially doing distributed shared memory over a special-purpose network.
Re:SMP is overrated (Score:2)
Well, you're kind of wrong.
x86 is crap for SMP, mainly because of the way memory and cache are handled. I wouldn't put more than 4 x86s in an SMP configuration.
Other architectures support point-to-point busses and better cache handling, more memory channels/controllers, etc... you could probably scale to 100 or so processors with that type of architecture.
Now, if some of the patents Cray has were available, then you could scale to thousands of processors, where memory controllers record pages that have been written to and only sync on demand (when there's a read request for a page that's been written to by another processor).
Basically, x86 sucks for SMP (which is probably why it's such a cheap architecture).
Re:SMP is overrated (Score:2)
That's an illusion. If there is any significant per-processor caching going on, you basically have a distributed system with a fast, non-standard network in between, and a costly, complex page-fault handler hardwired in hardware.
You can achieve the same thing more cheaply with distributed shared memory and a standard fast network. That is, instead of building lots of expensive, inflexible, special-purpose hardware, you treat the main memory of each machine as the "per-processor cache" and you do the synchronization in software over the network using page fault handlers. It simply makes no economic sense to put something that complicated and costly in hardware, in particular if the market for it is so small.
Note that architectures like MIPS already do page fault handling in software, so those kinds of software-based approaches are competitive with hardware implementations.
Re:SMP is overrated (Score:2, Insightful)
For example, I can do distributed compilation and get about 140% speed with 2 identical machines, but with SMP I can get more like 180%. It's cheaper to buy an SMP motherboard, 2 CPUs and some slightly more expensive RAM than a whole other machine.
Dual Athlons are great, price/performance, for compiling large C++ projects (where g++ needs lots of CPU for each file.)
Rik
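For concreteness, the two approaches above look roughly like this. The distcc lines are a sketch (distcc is one common distributed-compilation tool; 'otherbox' is a made-up hostname):

```shell
#!/bin/sh
# Distributed compile (sketch, assuming distcc is installed on both machines):
#   export DISTCC_HOSTS="localhost otherbox"
#   make -j4 CC=distcc
#
# SMP compile on a dual box -- just let make run two jobs at once:
#   make -j2
#
# Crude illustration of why -j2 helps: two jobs overlap instead of queueing.
( sleep 1 & sleep 1 & wait )
echo "both jobs finished"
```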
Re:SMP is overrated (Score:3, Insightful)
Three years ago you would have been right, but today the cheapest way to (nearly) double your computing power is to put in a dual processor board. I.e., the day of the home dual-processor has arrived. For example, you can now get a dual processor Athlon board [iwillusa.com] for $200, and in spite of what the docs say, you can put $50 processors in it instead of the $500 big brothers AMD recommends.
It's only a matter of time before you start seeing 3D games that can take advantage of dual-processor configurations. In fact, they already can, in the sense that a single-threaded game can load up one processor 100% and your box still remains entirely responsive for other applications. That is, you can play Return to Castle Wolfenstein at the same time you run a compile.
Re:SMP is overrated (Score:2)
Or, when your one user that insists on running rtin on the server quits, and rtin spirals upward to 99% CPU usage, the machine is still usable by everyone else because rtin is only pegging one CPU. Replace rtin with any other program that does that (though the kernel is getting good at murdering rogue processes).
Re:/r_smp 1 (Score:2)
Thanks, dear AC, for pointing that out. In other words, id has already ushered in the era of multi-CPU gaming, and there's no question at all that dual CPUs get you more bang for the buck than a single high-end processor twice as fast. At least this goes for id games, but IMHO, where John goes, the entire 3D game industry goes in the long run.
So now the interesting question is: when do quad machines hit the sweet spot on the cost/performance curve? I'll go out on a limb and guess "quite soon", that is, 3 years or so from now, and that is entirely due to the fact that AMD has already integrated most of the glue you need for SMP onto the Hammer. Now it's mainly a matter of waiting for quad mainboards to start hitting the overclocker market. Yes, there is an overclocker market, and there are companies serving it.
Quad Hammer for gaming anyone?
Will 2.6 make servers... (Score:3, Funny)
BSD? (Score:4, Interesting)
Re:BSD? (Score:3, Funny)
Score: -1, Bad Pun
Re:BSD? (Score:5, Insightful)
Not exactly right. There's no question it was very much Linux's fault for having a less-than-totally-robust virtual memory manager for a number of years. In the push to add features such as memory above 4 gigabytes, stability in corner cases and swap performance kind of got left behind. This has been corrected in Linux 2.5 with the new reverse-mapped VM, which sacrifices a little raw speed in such things as process forking (look closely at the benchmarks and notice 2.5 is slightly slower in Con's "process load" benchmarks) and mallocing, in return for far better and more predictable swapping performance. Plus, the new VM provides a better base for new developments you'll see in the next series, such as active memory defragmentation. Over time, we're likely to win back the slight performance losses in (certain areas of) the 2.5 VM, and then some. In the meantime, there's no question that 2.5 is the smoothest-running Linux kernel ever.
BSD continues to edge out Linux in some areas, notably NFS server performance. It used to be, BSD had a lot more advantages over Linux than it does now (the BSD developers are darn good). But in the end, Linux offers a much broader range of hardware support and has way more programmers working on it, so slowly but surely is catching up and surpassing in the few areas where BSD still has the edge. If I had to speculate about why Linux gets the massive herds of programmers, I'd say it's because of the license - many volunteer programmers prefer the GPL because of the legal guarantee that their work will remain open and not end up fading away because it had to compete against some heavily-funded proprietary product based on their own code. However, it's clear there are enough top-flight programmers to whom such considerations are unimportant to keep the BSDs not only alive, but vibrant.
See here [osnews.com] for a look at some of the nice features of BSD, and some ideas for the future. In case anybody thinks the much-talked-about rivalry between Linux and BSD is some kind of war, it isn't. BSD and Linux people often work together, there is a lot of cross-pollination, and the prevailing attitude is one of mutual respect. At the end of the day, it's worth noting that, technically speaking, the closest rival to Linux in the operating system space is another open source project.
Re:BSD? (Score:3, Insightful)
"The complete OS+tools should be called GNU/Linux (as RMS insists)."
Hogwash. Adding "GNU/" when communicating on the subject of Linux does absolutely nothing to improve communication. It's well known that GNU tools are used with every common distribution of Linux. Prepending additional letters and symbols to the word "linux" needlessly complicates communication. I'll consider doing it when non-GNU tools become common enough with Linux distros to cause an ambiguity.
Re:BSD? (Score:2)
Intel/Maxtor/AmiBIOS/Nvidia/Via/Logitech/G
After all, the hardware is important too! And don't forget the firmware.
Which makes me wonder: why don't we have OpenBOOT/MacOSX, or OpenBOOT/Solaris, or maybe SRM/OpenVMS? Why not? Because it's stupid; having a short name is much easier for people to refer to.
Aside from that, many people don't use GRUB to boot Linux, and many other people use GRUB to boot other OSes such as Windows.
Statistics bullshit (Score:4, Insightful)
By the same logic: these numbers help validate that the current 2.4 kernel will outperform the upcoming 2.6 kernel, at least in most instances...
Sometimes I wonder if anyone ever thinks before posting. Especially editors. Come on, dudes: if you don't specify at least the importance, and preferably the specs, of those instances, you'd better STFU.
Re:Statistics bullshit (Score:5, Insightful)
No; you've forgotten the other possibility - that in the majority of instances, performance will be the same.
Note that I've not read the article - I am merely correcting your interpretation of the sentence you quoted, given that it is apparently "insightful", despite being incomplete...
Sometimes I wonder if anyone ever thinks before posting.
Oh, the irony
Re:Statistics bullshit (Score:2)
But my post being +4 Insightful proves my statement extends from editors to moderators (to me
Re:Statistics bullshit (Score:5, Insightful)
My _guess_ is that the 2.6 kernel will seem more responsive due to the pre-emptive kernel, etc., and that another six months of performance tuning will get 2.6 very close to, if not past, most 2.4 benchmarks. But the fact is, the linux kernel generally is mature enough that really big improvements are getting harder to come by.
New better than old shock... (Score:5, Interesting)
So just to summarise: the newer versions, which have focused on SMP development and performance, will be better than the old ones. I welcome the efforts of these people; they certainly know their stuff. But this isn't really a surprise. A surprise would be "2.6 to be slower than 2.4 for SMP", which would mean that someone somewhere has made a VM-style error.
There are lots of other advancements to talk about, and performance has never really been regarded as a weakness for Linux (except SMP). With hardware so cheap and still getting faster, do these increases really merit the effort? They do, because of the new structure and clean-up that also results; but when 18 months pass and your processor speed doubles, why does an extra few percent from your OS actually matter?
What matters is that this newer code is better quality than the older stuff: it's better constructed while doing a harder task. This reduces the maintenance effort on Linux, which is a Good Thing(tm). It always seemed to me that the code would be faster; the great thing here is that the code also seems to be better.
Kudos to all involved for proving that fast != unreadable.
Re:New better than old shock... (Score:3, Interesting)
This is not just about a few percent higher speed, this is about (much) better responsiveness under high loads. This is about the difference between being horribly slow or just plain dead when being slashdotted.
Spotted on www.bash.org ... (Score:5, Funny)
linux hacker 1: i'm bored.
linux hacker 2: let's re-write the whole kernel!
linux hacker 1: ok.
*hackety-hack*
linux hacker 1: wow, it's 0.00001% faster and takes up 1kb less space!
linux hacker 2: w00t.
Re:Spotted on www.bash.org ... (Score:5, Funny)
Measure up (Score:4, Funny)
I can only imagine the spam that Linus gets...
"Increase the length AND thickness of your kernel"
It depends on so much (Score:5, Insightful)
Everything from file system speed to DMA to X to window managers to widget class speed right down to the applications themselves. You can have a damn fast core system and still have a system that feels slow especially on a desktop box.
Take everything into consideration first. Personally, I would be happier if they locked the XFree86 guys, gtk, qt, Mozilla, OpenOffice, KDE and gnome guys in a damn room and not let them out till they came up with some serious ways to improve the GUI response overall for the total user experience.
My SMP app servers will be happy though.
Re:It depends on so much (Score:2)
Generally, I've had little to complain about with my Linux desktop box performance over the past few years.
Hardware being as cheap as it is, there's little reason to be dissatisfied with Linux performance for 99% of desktop users.
The 2.6 kernel enhancements seem to be heavily oriented towards increased performance on enterprise level servers. It's pretty much as if most kernel developers know that Linux desktop performance is not a burning issue.
And with Linux gaming a virtual non-market, the only way I see for interactive desktop issues to be further addressed in the Linux kernel is from the embedded area, where interactive response for humans is still alive and well as a problem.
When Linux desktop users start using demanding applications, such as video editing, there'll be more attention paid to performance. Maybe we'll even get "X12" or "Y" instead of X11R6.5.3.2:)
summary would be good? (Score:5, Interesting)
In fact, if I count correctly, in 13 of the 23 tests 2.5 was slower than 2.4! It also didn't seem like the margins where it was faster tended to be larger.
Looking at the second set of results, comparing SMP with UP, it seems that 2.5 does worse with SMP than 2.4 (for example, xtar_load is twice as slow in 2.5 under SMP). This is again the opposite of what everyone seems to be saying and what these tests are supposed to be proving!
Without some overall summary, it's damn difficult to draw any meaningful conclusions, but my impression is that it appears no faster and in fact generally slower. No huge surprise in my opinion that Linux is getting slower (look at Windows' history), but why is it not possible to first generate some meaningful statistical conclusions from this data if you find it important enough to post on Slashdot? It might save on the hundreds of meaningless comments the article seems to have generated.
Actually, 2.5 is quite a lot faster (Score:5, Interesting)
Also, establishing connections is much faster. Multi-process or multi-threading appears to be more efficient than poll now for a large number of connections.
Re:Actually, 2.5 is quite a lot faster (Score:2, Informative)
poll() has known scaling issues. Check out /dev/epoll -- this is coming in 2.6 and is available as patches for 2.4. It will likely be faster than multithreaded techniques. Combining aio for reads and epoll for writes will probably be the fastest approach, but that remains to be seen.
Re:Actually, 2.5 is quite a lot faster (Score:2, Interesting)
However, it is also Linux-specific, while poll and fork/exec are not. While it is a good sign that Linux scales well for applications written and optimized specifically for Linux, it is even more important that Linux scales well for portable POSIX/SUSv3 applications.
What will ZDnet say? (Score:4, Insightful)
But, you just know that when 2.6 comes out, ZDnet will be saying things like "Linux now supports SMP." (Except for those ZDnetters who have received this month's check from Microsoft; those folks will be talking about how Windows 2000 outperformed Linux 2.6 in an "independent" benchmark test.)
Will 2.6 be better than 2.4 or just faster? (Score:2)
Logical Volume Manager? (Score:2, Interesting)
Already in 2.4 (Score:2)
But, when are they going to provide a friggin' LVM?
Err... Linux 2.4 has included Sistina's LVM for some time. 2.6 will have a more generalized kernel interface, the Device Mapper, that will allow both version 2 of the Sistina LVM, and the IBM alternative, EVMS, to be built on top. Or at least, that's what Linus seems to have decided on for the moment.
The Device Manager looks pretty, too.
I think that perhaps you are confused. Device manager? And a pretty one, at that? No LVM? Hmm, ok. Maybe you need some spelling help: L-i-n-u-x spells Linux, not Windows 2000. ;)
Re:Logical Volume Manager? (Score:2, Interesting)
Features are the one thing Linux doesn't lack; it's got so many tools that, outside of custom applications with extreme requirements, it can basically do anything. Oh, also, it can't run freakish proprietary code from monopolists, but what can?
In short, these guys are good, and there is so much under the hood that is really incredible (and has been there for so long...) that, outside of vendors supporting their own hardware with drivers (GPL please) like they do for Windows, there is little that needs to be added to the Linux kernel. Not that I don't applaud them trying, nor would I stand in the way of a cool hack.
They are even trying to put a generic crash dump capability in that allows userland to dump kernel cores to:
Filesystem of your choice
Serial console
Network device of your choice
Other server
etc etc
Amazing stuff. Just freaking amazing. If you haven't looked over the capabilities of the kernel lately, you owe it to yourself to do so.
andy
Irrelevant stats (Score:2, Insightful)
In case you're wondering, no, I'm not a troll. I've done *extensive* testing in this area. So have others, which is why they've been working hard on scheduling.
Re:Sorry to Intrude here, but... (Score:5, Insightful)
As with all experimental endeavours, you do sometimes get better results, sometimes worse, but from those mistakes lessons are learned and better methods are devised.
It's not about "marketing". It never was.
Re:Sorry to Intrude here, but... (Score:3, Insightful)
cheer!
Why does it seem that so many people in the current Linux community *think* that it's about marketing and money, though? *sigh*
Re:Sorry to Intrude here, but... (Score:3, Insightful)
Because a lot of the newcomers have been beaten up with Billy G's ugly-stick for so many years. That's all they know: marketing ploys and false hype.
monkeyboy anyone?
Re:Sorry to Intrude here, but... (Score:3, Insightful)
I'm afraid part of it might be because of the Linux community itself, though.
After RedHat and VA made it big, way back when, a certain amount of "make money fast!" thinking crept into the Linux community. It started seeping into our news and other "internal" communications too (I mean /., Newsforge, Linux e-zines and that sort of stuff, not the lkml) -- people started focusing more on how one could use Linux and open source to make a profit rather than on technological issues. The line between hacker and marketer seemed to be breaking down. An O(1) scheduler is all fine and dandy, but how can my favourite business use it to make more money?
Perhaps I'm seeing something that isn't there -- I hope I am -- but seriously, folks, am I the only one who has noticed such a shift of focus? Reading the articles at NewsForge about the last LinuxWorld Expo, Roblimo seemed to agree that the old bunch of long-haired hackers in sandals had largely been replaced with business reps -- and to add insult to injury, Microsoft was present.
My fear is that Linux will end up becoming as sterile and dead as other "rebellious" technological (or otherwise) ventures tend to become when they're subjected to corporate clutches. Greed kills.
Re:Sorry to Intrude here, but... (Score:2)
It Should Be About Marketing And Money (Score:2)
I mean, if it were *just* about technical cleverness, the story would be over. Okay, Linux is clever. Film at eleven.
But there's also the crusade out there to *prove* to "them" that Linux can hack it in the enterprise, that it stacks up against Solaris and Windows NT. Space in server rooms is at a premium, and it's a victory for open source whenever a rack slot gets filled with a Linux or BSD box.
There's also the cherished open source community belief that "sharing" and openness comprise a valid business model. In light of that, marketing and dollars are extremely important. The marketplace is a democracy, and every dollar is a vote.
(gasp)
And that's all I have to say about that.
Re:It Should Be About Marketing And Money (Score:3, Insightful)
You wrote:
Open-Source, perhaps. The concept of Open-Source was devised specifically to sell Free Software to the suits. However, Linux is as much Free Software (which is about freedom, and keeping you in control of your computer) as it is about Open Source, so whether it all boils down to corporate acceptance is completely up to the community. Linus didn't write Linux to sell it to corporations, he wrote it because it was fun. I'm not saying that corporate acceptance is intrinsically a bad thing (it isn't), I'm just saying that a lot of people seem to be forgetting about the whole aspect of having fun, when faced with cold cash.
Surprise -- since Free Software is about freedom, I think it's also about freedom from corporations and the marketplace (knowing full well that even suggesting such vile satanism is sure to get me modded far into /dev/null). It's OK that they use the software our community creates -- freedom is all about not taking that right away from anyone, including them -- but we shouldn't be spending our voluntary programming time sucking up to them for money. The vision of a completely "de-geekified" Linux that we sometimes hear about may be attractive to suits, but it sure sounds sterile and boring to me.
Against Microsoft, I completely agree. Microsoft are bent on our destruction, so yeah, we definitely have to fight them. Against Sun, though? I'm not too keen on Sun and their proprietary software, but seriously, one of the things I liked about Linux (and the BSDs) was how it was a free community-driven effort rather than a market contender like the rest. We're not a competitor, we're an alternative. As an old Amiga nut, I was attracted to this aspect because not being a market contender means traditional market methods won't be able to bring us down. It may slow corporate acceptance, but it won't stop the kernel hackers, or the other volunteers working on projects that they work on because they're fun. I think that such a community will be far harder for Microsoft to kill than a bunch of corporate ass-kissers (not that we are corporate ass-kissers, I just think that there is an increasing tendency towards that sad fate), because they're far better at dealing with corporate opposition than community opposition. This is why Linux got them to shit their pants in the first place. Unfortunately, we can already see how all our "selling Linux to suits" efforts have gotten some suits to make the association "Linux == dotcom fad == stupid business models == cash loss".
The marketplace is, by definition, a plutocracy. Who has more power in the marketplace, you or Bill Gates? If every dollar is a vote, how can it be democratic, considering the very uneven distribution of "votes"?
.......I think I've run out of things to say about this subject too, interesting as it may be. :.)
Re:It Should Be About Marketing And Money (Score:2)
Re:Sorry to Intrude here, but... (Score:2)
j/k
Re:M$ Publicity (Score:2, Insightful)
Most