High Performance Network Applications

An Anonymous Coward sent in this: "An article over at SysAdmin magazine seeks the truth while comparing network application performance under RH Linux, Solaris x86, FreeBSD 4.2, and Windows 2000. I'm a little suspicious of the writer's results, but you be the judge."
This discussion has been archived. No new comments can be posted.

  • If their test app is anything like the one other application of theirs I'm familiar with (Lyris ListManager 4.x), it'll run slower on Solaris SPARC with multiple CPUs than on any x86 system with one CPU. This is for two reasons, as they've told me in the past when I've tried to get support from them. First, ListManager doesn't handle context switching well at all; for the best performance they recommend binding your ListManager process to a single CPU using pbind on Solaris SPARC. Second, and most surprising: because the database is backended on FoxPro, ListManager has to do endian conversion on the fly when running on Solaris SPARC. If their "benchmark" product is architected anything like their ListManager product, you should be very wary of the results. And now it's time for me to go back to rebuilding my ListManager databases, because one of the FoxPro .fpt files hit the 2GB mark again and corrupted itself. Whee.
  • by Anonymous Coward
    The only "Tux code" that X15 uses is the new zero-copy version of the sendfile syscall. FreeBSD also has this, and pre-TUX-merge Linux still had sendfile, it just wasn't as fast. Many commercial Unices also have sendfile.

    The only other major Linuxism is the use of the RT signals notification mechanism. Others have written libraries that abstract RT signals on Linux, kqueue on Free/OpenBSD, and /dev/poll on Solaris.

    I don't even have TUX in my kernel, but X15 still runs like greased lightning. The zero-copy work was done as part of TUX, but it is a completely separate patch, and adds no new syscalls.
  • by Anonymous Coward
    The flaw here was that the tests relied on 'simple C++ programs' to 'evenly' benchmark the different OSs. The problem is, in the real world, this is not how serious large-scale web applications are written and the sorts of results that this study comes up with are effectively meaningless. Show me a transaction server (or object broker). Show me how the systems scale with thousands of simultaneous users. Show me web performance based on code that people are actually likely to write in real life, not the TCP/IP equivalent of "hello, world" and you may have something that may be of interest outside of the context of an assignment for an undergraduate CS course in networking.
  • by Anonymous Coward
    I'm just sick and tired of these so-called "studies" which proclaim that they are, once and for all, going to end some religious battle. These studies do nothing for professionals or the industry, so why do people still bother?

    As any professional will tell you, "it depends". Performance always depends on your needs, capabilities, money, skills, software, and hardware. Someone claiming that there is a simple answer by running some simple tests is just trying to either (1) sell consulting services, or (2) sell advertising space.

    And nothing else.

    So someone, tell me, please oh please, why I should pay attention to salesmen who claim to hold "answers". And tell me which CIOs really bite at these numbers. This is just for hit generation. Page views. These are not for me. They are not for the community. They are not for making good decisions.
  • by Anonymous Coward on Friday June 15, 2001 @12:52PM (#148272)
    They say:

    > At Lyris Technologies, we write high-performance, cross-platform,
    > email-based server applications. Better application performance is
    > a competitive advantage, so we spend a great deal of time tuning all
    > aspects of an application's performance profile (software, hardware,
    > and operating system). Our customers frequently ask us which operating
    > system is best for running our software. Or, if they have already chosen
    > an OS, they ask how to make their system run our applications faster.
    > Additionally, we run a hosting (outsourcing) division and want to reduce
    > our hardware cost while providing the best performance for our hosting
    > customers.

    What a crap! They're claiming to be experts! Ha!
    They just don't know how to tune Solaris or FreeBSD properly.
    Results would be completely different if they tuned them well.

    Solaris Tuning Guide.

    1) Apply latest recommended patches from http://sunsolve.sun.com
    2) Add the following to the end of /etc/system:

    * Increase TCP connection hash table size
    set tcp:tcp_conn_hash_size=262144
    * Increase various kernel buffers
    set maxusers=2048
    * Set hard limit on file descriptors
    set rlim_fd_max=1024
    * Set soft limit on file descriptors
    set rlim_fd_cur=1024
    * Increase directory name lookup cache
    set ncsize=100000
    * Should be the same as setting above
    set ufs_ninode=100000
    * Enable priority paging
    set priority_paging=1

    (These settings are based on information taken from:
    http://docs.iplanet.com/docs/manuals/messaging/nms415/patch1/TuningGuide.html )

    3) The following should be at the bottom of /etc/init.d/inetinit:

    # TCP stack tuning
    # default is 7200000
    ndd -set /dev/tcp tcp_keepalive_interval 30000
    # default is 240000
    # change to "tcp_close_wait_interval" on Solaris 2.6
    ndd -set /dev/tcp tcp_time_wait_interval 15000
    # default is 128
    ndd -set /dev/tcp tcp_conn_req_max_q 1024
    # default is 1024
    ndd -set /dev/tcp tcp_conn_req_max_q0 1024
    # default is 8192
    ndd -set /dev/tcp tcp_xmit_hiwat 32768
    # default is 8192
    ndd -set /dev/tcp tcp_recv_hiwat 32768

    4) Speed up filesystem access under Solaris 2.7 and later.
    Add logging to filesystem mount options in /etc/vfstab, like this:

    /dev/dsk/c0t1d0s7 /dev/rdsk/c0t1d0s7 /opt ufs 2 yes logging,noatime

    I have added noatime - this is another setting that might help
    on a very busy filesystem, though not as much as logging.

    FreeBSD Tuning Guide

    Recompile the kernel with an increased MAXUSERS (a good number
    to start with is 256) and NMBCLUSTERS (I use 10000; see netstat -m
    under load to find a number that's good for you).
    You might want to play with "options HZ=1000".

    Add this to /etc/sysctl.conf:

    kern.maxfiles=65536
    kern.maxfilesperproc=32768
    net.inet.tcp.delayed_ack=0
    net.local.stream.recvspace=65535
    net.local.stream.sendspace=65535
    net.inet.tcp.sendspace=65535
    net.inet.tcp.recvspace=65535

    Turn on softupdates on all filesystems
    using tunefs -n enable (noatime might help as well).

    Vadim Mikhailov
  • If that were the case for Linux, the Tux guys wouldn't be trying to put an http daemon in the kernel. They'd just keep it in user-mode and 'just code it normally'

    Have a look at X15, a userspace http server that's neck-and-neck with tux. Of course it benefits from the general infrastructure improvements that came of tux, but it's still strictly userspace.

    Here's the first announcement: http://www.uwsg.indiana.edu/hypermail/linux/kernel/0104.3/0788.html [indiana.edu]
    --
    Change is inevitable.

  • For Linux and FreeBSD, use sendfile (man 2 sendfile). Note that the call takes different parameters on each platform... but it is the same thing, more or less, as TransmitFile.
  • So you're saying that if you want good performance from Linux, you just code it normally - but if you want good performance from windows, you have to use all the platform dependent nonportable operating system extensions.

    It might not be a valid benchmark, but perhaps there is a point to be learned from it after all...

    -dentin
  • You said:

    3. They only tuned the Linux, FreeBSD and Solaris setups -- they should have tuned Win2k server as well.

    Well, that's not a fair assertion. They made exactly one modification to each Unix kernel: changing the number of file handles. They set each of them to 65536. IIRC, Windows 2000 doesn't need this tweak due to its internal way of record keeping.

    The greatest problem with benchmarks is deciding what tweaking to do. Out-of-box tests fail because "any competent admin will use tweak foo", and tweaked tests fail because "tweak foo on OS1 is vastly more potent than tweak bar on OS3" (think of the first Mindcraft test).
  • Perhaps it *will* be a smoking little OS by the time a new version is released, but it is not now.
    ___
  • You're absolutely right. Their "benchmark" is perfectly valid, for their product running on a naively tuned operating system. But only a neophyte would put an out-of-the-box OS -- whether Linux, Solaris, Windows, or BSD -- into production as a high-performance network server. All the complaining boils down to two things:

    1. The article asserts that the performance of a single bulk-email program is a valid way to rank the four OSes, and
    2. It ignores the system tuning that a competent system administrator would have performed.

    The FreeBSD folks are especially upset because the article states that the OS was logging resource failures but the testers still didn't perform any tuning. That's an amazing level of incompetence to display in a magazine which is supposed to inform system administrators.

    Now do you see what all the noise is about?

    -Ed
  • Agreed -- it's been a long time since I've seen a "benchmark" as poor as this one. But I don't think Windows was treated any more poorly than the other OSes. It wasn't a fair test of any of them.

    The "tuning" for the Unix systems consisted of bumping up the maximum number of file descriptors. That's it. The FreeBSD system in particular was left completely mistuned and clearly running out of socket resources -- they report that it was logging errors but seem entirely ignorant of what those errors were (beyond their being load-related) and how to correct them.

    Polling is hardly the best system interface for multiplexing TCP connections on either Windows or FreeBSD. As you mention, completion ports are best for Windows. Kqueue is best for FreeBSD. It just happens that polling is used in the crappy commercial SPAM program they "benchmarked". (All the OSes support scatter/gather, BTW, so you can't claim Windows was treated unfairly by its omission.)

    None of the systems were tested in a way that shows their actual capabilities. The article is just a thinly disguised commercial for a (barely) cross-platform "bulk email" product.

    -Ed
  • Perhaps one of the largest opt-in mailing list houses on the Internet? A good example: Lyris provides the horsepower for the weekly mailing of http://www.thisistrue.com.
  • On the other hand, Tux's performance was recently replicated in user space [zork.net]. Linux really does have kick-ass TCP/IP performance these days.
  • How do I/O completion ports work exactly? And how are they better than WaitForMultipleObjects? Why does WaitForMultipleObjects have that limitation?

  • You're right about LIFO. I apologize. I confused my acronyms. I meant LRU, not LIFO. :-(

  • The architecture they say performs the fastest, One-thread-many-tasks (asynchronous), is exactly the one encouraged and supported by my StreaModule system [omnifarious.org]. I knew that things worked out this way, but I'm quite surprised to find such clear agreement by a third party. This idea doesn't really seem to crop up in many places.

  • Nice! So in other words, they used straight BSD sockets for their implementation - which is NOT the way to get performance from Windows. You need to use:

    1. Asynchronous, Event based socket handling.
    2. Completion ports.
    3. Scatter/Gather buffering.

    Polling is lousy no matter what way you do it. You'll lose most of your performance spent going round a small loop.

    You're an idiot. They're using the 'poll' system call. If you bothered to read anything, you'd realize that 'poll' is the way to do asynchronous event based I/O under Unix. It's close to what 'WaitForMultipleObjects' does under NT.

    They may use the sockets API, but as far as I know, that's the way to do TCP/IP under Windows. There are a few special calls to get NT 'handles' for your sockets so you can then do WaitForMultipleObjects based event based I/O handling. I'm betting this is exactly what they did.

    As for scatter/gather buffering, that depends a lot on your internal application architecture. I would agree that, in general, it's a good idea. I doubt their code does scatter/gather under Unix but not under NT; scatter/gather is implemented nearly identically on both platforms.

    Your comment shows a great deal of ignorance. It's a travesty that you were moderated to +5. *sigh*

  • You misunderstand 'poll' completely. poll asks the OS to suspend your process until one of the indicated events happens, then you get to go respond to it. It's essentially the same thing.

    Say, for example, that you're dumping data into a socket. Under Unix, you set the socket non-blocking and write to it until the OS tells you the socket buffer is full by having write return EAGAIN. Then you put the ability to write to that socket on the list of OS events you're interested in, and go do whatever else you have to do. After you've serviced everything you can, you call poll and it blocks your process (possibly running others) until one of the indicated events happens and there's something else to service. Same basic paradigm.

  • Also, VirtualAlloc there sounds an awful lot like 'mmap'. Again, same basic idea, just done completely differently by Microsoft.

    I know a fair amount about the insides of NT, and most design choices they made that are different than Unix's are worse.

    Here are just two:

    • A FIFO VM?!?!? How stupid can you get? LIFO is much better, and while not really achievable, you can come closer than FIFO using a mark & sweep-like system (or perhaps there are better algorithms today).
    • WaitForMultipleObjects, you mean, every single semaphore and mutex call is an OS call now? No 10-20 cycle mutex grabs when there's no contention?
  • Additionally, they used an Intel EtherExpress Pro 10/100 card (fxp driver). My understanding from the FreeBSD mailing lists is that this driver is being completely rewritten to eliminate significant performance issues in the FreeBSD 4.x versions. I suspect that even network performance would be noticeably different had they used hardware with optimized drivers across all platforms.

    To say an OS's network or disk performance is poor, without considering the drivers used for your hardware, is kinda irresponsible.

    It's clear, as your comment shows as well, they did not make any effort to properly tune and configure the overall system for each OS tested.

  • by Royster ( 16042 ) on Friday June 15, 2001 @11:36AM (#148289) Homepage
    I'm sure Linux will talk just fine to Linux, but other platforms might not be tuned the same. (2.4 kernels were having trouble because of this recently. Linux implemented some feature that lots of routers didn't, and performance was hosed sometimes.)

    You don't seem to understand ECN. ECN is now (as of June 12) an Internet standard. It will improve the performance of the Internet by allowing ECN-aware stacks to note congestion and respond appropriately instead of waiting for packets to fail to be ACKed and backing off the transmission speed. (Ever got a 'stalled' message loading a /. page? ECN is supposed to help avoid that.)

    Buggy routers responded incorrectly to ECN packets by terminating the connection. It appears as if the other computer isn't even on the net. Cisco has released bug fixes to correct this bug. They have not been applied by all of the admins.

    Yes, Linux 2.4 shipped with ECN enabled. The distribution packagers generally (all?) included a command in the start-up scripts to disable the feature.

    Because TCP/IP is a standard, there should not be performance differences whereby a stack performs better speaking to another stack of the same design. TCP/IP should be completely interoperable.
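    For reference, the workaround those distribution start-up scripts applied is a one-liner (a sketch, assuming the standard Linux 2.4 proc interface):

```shell
# Disable TCP ECN until the broken routers in your path are patched
echo 0 > /proc/sys/net/ipv4/tcp_ecn
# or, equivalently:
sysctl -w net.ipv4.tcp_ecn=0
```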
  • Most Windows programs should be using CriticalSection objects, which are user-level only for the no-contention case.
  • >Because TCP/IP is a standard, there should not
    >be performance differences between stacks
    >whereas a stack performs better speaking to
    >another stack of the same design. TCP/IP
    >should be completely interoperable.

    Where did you get the idea that interoperability has anything to do with performance?

    The correctness of a TCP/IP implementation has to do with whether it changes state properly, gives appropriate responses, delivers data intact, etc. It has nothing to do with benchmarks as compared to other implementations. The carrier pigeons implementation should make this clear - correctness doesn't depend upon speed.
  • by cpeterso ( 19082 )
    A FIFO VM?!?!? How stupid can you get? LIFO is much better, and while not really achievable, you can come closer than FIFO using a mark & sweep-like system (or perhaps there are better algorithms today).

    Why is FIFO so stupid? FIFO is an approximation of LRU. NT uses a variation of local FIFO page replacement policy for MP x86 and all Alphas, but a Unix-like clock replacement policy for UP x86.

    LIFO would be a particularly bad replacement policy because program executables are designed to have code and data locality. If the OS swaps in a code page, you don't want it to swap out that same page. You'll swap the entire page back into RAM when your program tries to execute the next instruction! :-(

    WaitForMultipleObjects, you mean, every single semaphore and mutex call is an OS call now? No 10-20 cycle mutex grabs when there's no contention?

    Win32 mutexes and semaphores are kernel calls because they are named objects. That is, they can be used to synchronize threads in different processes. If you only care about threads in your local process, then use CRITICAL_SECTIONs. They are fast.
  • by cpeterso ( 19082 )

    Linux's super-high performance numbers come from using Tux in the kernel. The new X15 userspace web server is neck and neck with Tux because it uses the Tux code that has been merged into Linus' mainstream kernel. Is X15 portable? Can you run X15 on Solaris or BSD without removing its use of Linux's non-portable zero-copy APIs and other Tux kernel code?

  • I agree. Solaris does have l33t name. Much kewler than SunOS or Phreex!

  • An IO Completion Port is just a thread pool blocked on a counted semaphore to call select() or WaitForMultipleObjects(). If you set the initial semaphore count to the number of processors, then the OS scheduler should efficiently pin each thread processing a select/WFMO event to its own processor.
  • They used all three of these systems "out-of-the-box". So look at what these systems are pre-tuned out of the box for: Redhat Linux for speed and FreeBSD for stability (and Windows Y2K for media benchmarking). Think about it.
  • So, based on using 1 of the dozen or so filesystems linux supports, you determine it's
    crap?

    Try reiserfs....
    I bet that 18 gigs takes *forever* to fsck if you reboot...
  • crappy benchmark, to say the least.

    If the question is 'which network stack is fastest?', there are ways to sort that out. Same for 'which is better under high load?'.
    There are so many questions that can be asked...

    And any of the systems tested are capable of blindingly fast network operations if the programmer takes into account the best way to do things on that particular machine.
    Compiling the same code on 4 machines and testing the output is more of a compiler/library benchmark than a system benchmark.
  • by AT ( 21754 ) on Friday June 15, 2001 @11:30AM (#148299)
    While your point that this benchmark is somewhat flawed is correct, you also point out a large problem with Windows:

    You are forced to use proprietary MS-only extensions rather than straight, standardized POSIX calls to achieve the best performance. That means you have to suffer proprietary lock-in if you want to code high-performance network applications for Windows.

    I think this is deliberate: there is no reason why calls like malloc, creat, mmap, poll, whatever, couldn't have been tuned to get similar performance to the Windows-specific VirtualAlloc, CreateFile, etc. Microsoft wants you to trade off portability for speed.
  • All this so-called test shows is "who runs my software better".
    Not only that, but they are comparing differently tuned systems.

    Are these guys for real? Come on!

    Quarrel Numero Uno: Platform

    This is their software, which is probably targeted at THEIR dev platform, which is probably Linux. Well, DUH, let me guess which platform it'll run fastest on..

    Quarrel Numero Dos FS IO
    Linux: fs mounted Async
    Windows: fs is a JFS.
    Solaris: fs is mounted sync
    FreeBSD: fs is mounted sync..


    COME ON... these guys are comparing apples and cars here.... try mounting ext2 sync and see what happens....

    Better yet, enable softupdates on BSD FFS and see what happens...

    Or even better, use veritas, on solaris...

    The reason Linux came out on top is that the fs was mounted async, which, while it gives you speed, gives you VERY LITTLE RELIABILITY. That's why the BSDs have their mounts sync. Now, soft updates closes the gap on FFS between an async and a sync mount.

    The reason MS came up high also: it's a journaling file system. Compare it to Veritas, another JFS, and see what happens...

    Quarrel Numero Tres APP test
    Heh, the test is an application THEY wrote, to connect to a server THEY wrote.... HAH HAH HAH, enough said....


    These guys never cease to amaze me....
  • Well, maybe that's because if you had one, you wouldn't be able to call them spammers without reason. Wouldn't something like that be useful for performance testing something like sendmail? Or maybe a large product announcement list for one's customers?

    Take a look at http://www.dummiesdaily.com [dummiesdaily.com]. They have a tip of the day feature, where they send you daily tips on your topic of interest. They have over 300,000 unique subscribers - they mention this on the "About" page.

    There's a nice, big list of over 200,000 email addresses, used for legitimate purposes. There is such a thing as an "opt-in" list.

    For what it's worth, a company owned by a friend of mine hosts a ton of discussion & announcement lists using Lyris ListManager. They work very closely with Lyris (http://www.lyris.com [lyris.com]), and I can tell you that Lyris does NOT condone spam. In fact, they also sell a product designed to help STOP spam. So please, get your facts straight before making accusations.
  • It wasn't tuning per se, just the raising of maxfiles because Unix defaults to lower settings. They point out that it wasn't necessary under Windows, presumably because there is no equivalent limit.

    --
  • If there were a Geek Speak generator on the net similar to the Mission Statement generators, that's what it would sound like.

    How embarrassed you must be.

    --
  • That doesn't mean that if_fxp is a poor driver currently. Everything can be improved, however, which is what the mii rewrite is doing. if_fxp is already an excellent driver and is the best card/driver combo under FreeBSD (and probably most OSes).

    --
  • In addition, they appear to have used a Linux box to connect to the test platforms. While I'm all for Linux, subtleties of the TCP/IP stack implementation could have influenced the results a bit. I'm sure Linux will talk just fine to Linux, but other platforms might not be tuned the same. (2.4 kernels were having trouble because of this recently. Linux implemented some feature that lots of routers didn't, and performance was hosed sometimes.)

    It would have been nice if they'd tried Solaris, Windows, and FreeBSD clients, too.

  • Ok, so that's what it looks like. However, I did an awful lot of benchmarking at my last job to get our performance up on our hardware. So what I benchmarked was our software. It was a modified version of the apache webserver. So I had extensive results from the use of our product, and virtually NO results for SQL servers, spreadsheets, 3d-games or email applications.

    I just think these guys did their job (optimizing their software), and ended up publishing their benchmark results to enlighten other people. I wish I had gathered up everything and put it out there.

    Basically we tested a version of apache on BSDI 4.01, redhat linux 6.2 and solaris 7. The systems were compaq 1850r p2 450x2 boxen. BSDI needed a LOT of tweaks, but ended up being the most efficient. Solaris was pretty stable, but a little slower. Linux was about the same performance as BSDI... sometimes. Sometimes it would flake out at high loads. I'm sure it's much better now, especially with tux.
  • In reading the top-moderated comments, one thought came to mind: Slashdot readers, who are accused of being rabid Linux supporters, are bashing a benchmark that came out pro-Linux.

    Kudos to the Slashdot community for being objective, despite your theoretical biases.

  • So you're saying that if you want good performance from Linux, you just code it normally - but if you want good performance from windows, you have to use all the platform dependent nonportable operating system extensions.

    If that were the case for Linux, the Tux guys wouldn't be trying to put an http daemon in the kernel. They'd just keep it in user-mode and 'just code it normally'

    Simon
  • by spectecjr ( 31235 ) on Friday June 15, 2001 @11:53AM (#148309) Homepage
    I think this is deliberate: there is no reason why calls like malloc, creat, mmap, poll, whatever, couldn't have been tuned to get similar performance to the Windows-specific VirtualAlloc, CreateFile, etc.

    ... apart from the fact that they expose different paradigms entirely?

    Malloc - heap based allocation
    VirtualAlloc - allocates entire pages from the VMM. Allows you to reserve or commit pages when and as you need them.

    fopen - opens a file handle
    CreateFile - Allows you to open a file handle, specifying buffers to use, etc etc etc.

    poll - you sit there waiting and doing nothing most of the time because you're asking all your connections "are we there yet?"
    CompletionPorts - the OS comes back to you when it's done, and tells you that it's finished. You can now use those spare cycles doing something else - like another 1000 network connections.

    Simon
  • by spectecjr ( 31235 ) on Friday June 15, 2001 @10:46AM (#148310) Homepage
    I'm sorry, but I can't see how this is a valid benchmark.

    "As a real-world test, we measured how quickly email could be sent using our MailEngine software. MailEngine is an email delivery server, ships on all the tested platforms (plus on Solaris for Sparc), and uses an asynchronous architecture (with non-blocking TCP/IP using the poll() system call). So that email was not actually delivered to our 200,000-member test list, we ran MailEngine in test mode. In this mode, MailEngine performs all the steps of sending mail, but sends the RSET command instead of the DATA command at the last moment. The SMTP connection is then QUIT, and no email is delivered to the recipient. Our workload consisted of a single message being delivered to 200,000 distinct email addresses spread across 9113 domains. Because the same message was queued in memory for every recipient, disk I/O was not a significant factor. We slowly raised the number of simultaneous connections to see how the increased load altered performance."


    Nice! So in other words, they used straight BSD sockets for their
    implementation - which is NOT the way to get performance from Windows. You
    need to use:

    1. Asynchronous, Event based socket handling.
    2. Completion ports.
    3. Scatter/Gather buffering.

    Polling is lousy no matter what way you do it. You'll lose most of your
    performance spent going round a small loop.

    Similarly you can infer that they used straight malloc() for their memory
    handling, and most likely file handling - again very lousy
    performance-wise on windows compared to the alternatives, such as
    VirtualAlloc, CreateFile(), scatter-gather file handling and more.

    As for the second test, we can guess (from their comments) that they're
    using straight C++/C file operations under windows instead of tuning them to
    the architecture, so of course performance is going to be lousy -- they're
    benchmarking Microsoft's C runtime implementation, nothing more, nothing
    less.

    Also note that:
    1. They don't provide details of which compiler they're using.

    2. They don't provide details of the actual benchmark code for test 2.

    3. They only tuned the Linux, FreeBSD and Solaris setups -- they should have
    tuned Win2k server as well.

    Sheesh. Talk about a crappy way to benchmark.

    Simon
  • by Restil ( 31903 ) on Friday June 15, 2001 @12:37PM (#148311) Homepage
    Anyone else notice the heavy concentration in that article on the efficiency of mailing out large numbers of email messages? Now, I'm certain there are many MANY legitimate reasons why someone would have a "test list" of 200,000 email addresses, it's just that I can't seem to think of any at the moment.

    -Restil
  • True, the benchmark seems mighty suspicious, but doesn't it slightly resemble a reversed Mindcraft one?

    It doesn't really matter which OS wins; the sad thing is that some will take these numbers as authentic and realistic. More useless overhead in the neverending "which OS got the most wang" debate.

    Has the FUD-wheel of the mighty penguin started to turn?
  • In case you didn't notice, NT is not Unix, never has been Unix and never will be Unix. There are so many design differences in the underlying system that it is hard to believe you are even suggesting it's a good idea to use a single code base.

    I've seen plenty of code that has a single source for Unix and NT, but NONE of it is high performance and most of it behaves very strangely on NT when you compare it to a properly written NT service or application.

    If you are writing high performance code then you are almost certainly writing for a particular system and have to write the code for that system. Writing for NT is different to writing for Unix (I prefer writing for NT personally but that's another issue) and trying to say that it's lock-in is just stating the obvious.

    By your argument Linux is deliberately encouraging the use of non-portable code through applications like Tux which only work on Linux boxes and not Windows, or even other Unixes.

    It's just daft.
  • select() is a pig of a system call (IMHO). Replacing it is probably not a good idea though - you'll break compatibility all over the place. I believe this is discussed every so often in the Linux kernel and improvements are slowly being made here and there with single thread wakeups and the like.

    There's plenty of evidence that Unix is every bit as fast as NT with its API. My point is that just because NT supports the BSD sockets API doesn't mean you should use it in high-performance code. Personally, I prefer using ReadFileEx() and WriteFileEx() on NT for socket (and file, named pipe, and everything else) I/O. If you do it properly you don't have to wait for any events at all - the system just calls your completion routine when it's finished, all by itself (kinda like a signal).

    Winsock 2.x is good now that socket handles are full blooded file handles. Damn shame you still can't pass them as stdin or stdout to a child process though. :-(
  • by throx ( 42621 ) on Friday June 15, 2001 @11:29AM (#148315) Homepage
    I'm not sure. It looks like they've tried to use the same methods on 4 different operating systems. This is something that is doomed to failure in a benchmark situation as there are different programming paradigms for the different systems.

    A much better benchmark would have been simply comparing IIS to Apache or Tux. Oh yeah. That's been done. Tux won. Hehe.
  • by throx ( 42621 ) on Friday June 15, 2001 @05:16PM (#148316) Homepage
    True. There's another way that's also very fast in NT that would be really difficult to emulate on Unix (probably because it wouldn't be fast on Unix):

    To set this up you treat the sockets as file handles and use ReadFileEx() and WriteFileEx() with the lpCompletionRoutine parameter set to point to a function that the OS should call directly when the I/O is done. When you are blocked waiting for activity, put the thread in an alertable wait state using *WaitForXXXObjectEx() function and the completion routine you specified will be called by magic (actually via an Asynchronous Procedure Call or APC, but close enough to magic) when the I/O has finished.

    This works very quickly on NT because it mirrors the way the underlying kernel and device driver stack works. Basically the I/O completion can come straight up from the driver routine into user space with a minimal delay and minimal number of context switches. The second advantage is you don't have to open event handles for every I/O you have outstanding, and so you don't run into the limit of waiting on 64 objects at a time.

    The only drawback to this method (if you can call it a drawback) is that I/O that is initiated on one thread is always sent back to that thread, so you have to run one thread per CPU and round-robin them.

    The closest thing on Unix to this sort of behaviour is signals, but signals and multithreaded code tend not to mix very well.

    Just a FYI really, not saying it's good or bad compared to Unix - just another thing to have in your bag of tricks.
  • by throx ( 42621 ) on Friday June 15, 2001 @10:51AM (#148317) Homepage
    The method used here for programming Windows 2000 is almost certain to guarantee slow results. Assuming he's written his code to use select() or even WaitForSingleObject(), then he's significantly slowing down the system.

    If you want to write high performance socket applications on Windows you MUST use I/O completion ports (something this article failed to mention at all). Most high load applications I've written using sockets have shown a 50% to 100% improvement in throughput for the same CPU load when switching to I/O completion ports from a traditional (Unix-style) asynchronous I/O model.

    I'm not saying in this case that Win2k would beat Linux, just that the tests were skewed by the author's inadequate knowledge of writing high performance code on Windows 2000.
  • LOL Well said sir!

    ROFLMAO!
  • If you want people to take your comments seriously, you probably shouldn't hold someones education (or lack of same) against them.

    ;)

    (yes yes, we all know people whose brains filled to capacity around the time they got those letters, but just as not all college dropouts are worthless, not all people of letters are geniuses.)
  • true enough. Of course, PhDs have high marketing points on the BS scale too! LOL
  • Thank you for agreeing with my point, but coming to a different conclusion. As far as developing for all systems... does it really matter? Can you really tell me 'all coders are created equal'? Isn't it more logical to conclude that the majority of development is done on one system and then 'ported' to the others (why else do we have portable code?)? Most coders would probably agree a native version is always going to take more advantage of that system's abilities than a simple port.

    It doesn't really matter in this case, however.
    (shrug) As I tried to point out, the results are only relevant if you are using Lyris' software, hence my conclusion that this is a commercial, not a test. They even say, right in the article, that this is a result of testing customers have asked for so they know which platform is best suited for THEIR apps. The results are wrapped up in something that is guaranteed to cause controversy at the 'religious' level. I say, well done to the marketing weenies.
  • by Brew Bird ( 59050 ) on Friday June 15, 2001 @10:51AM (#148322)
    I read this a couple of weeks back when a linux-centric friend sent it to me... my main observation: this is obviously a commercial masquerading as a 'test'. When the 'device' being used to do this so-called 'benchmark' is a software application written by the testers for something else, there is nothing else to call it. Maybe the title of the article is a bit misleading; the meat clearly says all they are doing is showing which OS they have optimized their application for. They then use that as the FLAWED basis for determining which OS is 'best'? Give me a break.
  • You forgot OS/2 users too. Shame on you.
  • One can look at it another way. The majority of developers use Win32! Why can't Linux get with the program?
    If you're benchmarking an OS, you use whatever is fastest on the OS. And since most Win2K server programs WILL use whatever is fastest on Win2K, the benchmark can be valid as a real-world test (assuming all other factors are correct, of course!)

    >>>>>
    For the English-impaired, I'm not advocating that Linux switch to Win32. I'm simply stating that what is "normal" is in the eye of the beholder.
  • Actually, for 2k, they could have turned off 8.3 filename creation via a registry entry. On NTFS with tens of thousands of files, the 8.3 name creation can be a drag. On an email server you should be able to make the registry change, even though it breaks DOS compatibility. Since one of the tests specifically did create 10k files in one dir, the tweak might help.

    ostiguy
  • by staplin ( 78853 ) on Friday June 15, 2001 @02:02PM (#148326) Homepage Journal
    First off, I have to agree with many of the above comments. The benchmarks are suspicious. But then you have to take it all with a grain of salt because the source is SysAdmin magazine.

    My own complimentary subscription for presenting at LISA '99 just ran out, but as anyone who's read this journal before can tell you, this article was just written by Joe Admin, and was about on par for the magazine. Even if you haven't read the journal before, you could click on the big "Write For Us" [sysadminmag.com] link at the top of the page, and see that "all of our articles are written by readers."

    Now, I'm not slamming the magazine! It's a decent piece of work, and actually has some good articles about tricks and tools that help sys admins get their day to day jobs done. But at the same time, it's also subject to some one-sided reviews and some articles take a lot of flak for their controversial positions. Just look at who wrote the article (the original developer of the mail engine) and take it with a grain of salt.

    And if you really disagree write them a counter piece, or at least a letter to the editor pointing out the flaws.

  • The reasoning behind I/O completion ports is that it permits you to do something on completion of I/O, such as initiate yet another I/O (a "feedme" signal that is delivered reliably as an event, unlike a UNIX signal, which is merely a persistent condition).
  • (paraphrased) "FreeBSD is not CPU scalable"

    That is certainly true of FreeBSD 4.x and previous. The FreeBSD kernel at the moment has a single giant lock (think: n kids and 1 bathroom). But I do feel compelled to point out that FreeBSD 5.x is slated to have the locks pushed down with an eye towards making the kernel highly scalable across multiple CPUs. Will 5.0-RELEASE be as good as Solaris in this regard? Probably not. It took Sun a long time to get it right. FreeBSD hopefully will be able to get it right quickly.
  • by nsayer ( 86181 ) <`moc.ufk' `ta' `reyasn'> on Friday June 15, 2001 @11:18AM (#148329) Homepage
    It's clear from their comments that they did not turn on Softupdates on the filesystems when they set up their FreeBSD machine for the testing. It's no wonder that they found disk I/O to be slower on FreeBSD, therefore.

    Traditionally, Linux has traded speed for safety in filesystem meta data handling. FreeBSD has always refused to do so, insisting that metadata be updated synchronously. With softupdates, the metadata is cached, but the cache is flushed in the right order. The upshot is that you get the speed and the safety.

    In short (too late), I am sure that their opinion of FreeBSD would improve markedly if they would set it up properly.

    From what I see, just about every other OS represented has a defender saying exactly the same thing. That doesn't speak well for the thoroughness of the testing. I'll leave it at that.
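    For anyone who wants to try this at home, enabling softupdates on an existing FreeBSD filesystem is a quick tunefs operation. This is only a sketch; the device and mount point below are placeholders, and the filesystem should not be mounted read-write when you run it:

    ```shell
    # Enable soft updates on an example filesystem
    # (/dev/ad0s1f and /var are placeholders for your own layout)
    umount /var
    tunefs -n enable /dev/ad0s1f
    mount /var

    # tunefs -p /dev/ad0s1f prints the current tunables afterwards,
    # including the soft updates flag
    ```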
  • Actually WaitForMultipleObjects wouldn't be the way to go either, since it only allows a maximum of 64 handles (MAXIMUM_WAIT_OBJECTS). As the original dude said, use I/O completion ports. If they are using WaitForMultipleObjects, they are truly sad bastards.
  • Well, you'll probably never read this, but:

    1) I/O completion ports have a bunch of threads and a bunch of devices associated with them. Devices are handles, and these can be just about anything. Whenever one of the devices is ready (incoming network packets, an I/O device ready for read or write, etc.) the kernel awakens one (!) and only one thread to receive the event. The thread is chosen based on how recently it ran: the kernel chooses the most recently run thread.

    2) They're sweet because you don't have the overhead of thread creation (mildly expensive), no unnecessary context switches (some models wake up every single thread just to tell them they have nothing to do), multiple threads can wait on the same pool of objects with some smarts as to which is chosen, a limitless number of objects can be waited on, and the OS is smart about the number of CPUs. If you have 2 CPUs there's no sense in waking 3 threads; the kernel doesn't do that, it will choose 2 that are ready for action.

    3) I don't know why WaitForMultipleObjects has that limit, but my guess is that it was never intended for many handles.
  • > The only part that I will have to agree with is that EXT2 fs is very fast.

    It's fast because it doesn't do synchronous updates of metadata as default.

    BTW - if you want a reliable mailserver you *NEED* synchronous metadata updates. So if they had wanted a realistic and fair benchmark they should have mounted EXT2FS synchronously or used a different filesystem (e.g. reiserfs).
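    For reference, forcing synchronous writes on an ext2 spool is a one-line mount option. The mount point and device below are examples, not anything from the article:

    ```shell
    # Remount an existing ext2 spool with synchronous writes
    # (/var/spool is a placeholder mount point)
    mount -o remount,sync /var/spool

    # or permanently, via /etc/fstab (example device):
    # /dev/hda3  /var/spool  ext2  defaults,sync  1 2
    ```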
  • Actually, reliability doesn't seem to be an important point for those benchmarkers. Or why else would they be using EXT2FS in its default (asynchronous) configuration?!? This is not much better than using a RAM disk for spool files...

  • Thank you for pointing that out. I've just read the MSDN docs describing the completion ports API, and it seems interesting. I have always programmed network code on Windows the traditional BSD way, or using window messages (even slower, I understand).
    Do we need to think about getting something like that into *NIX? Or can traditional network programming be just as fast on *NIX? What do you think?


  • I was surprised that FreeBSD didn't show better in the "benchmark", too.

    While I'm a Linux user, I've always admired the real world performance of ftp.cdrom.com, a FreeBSD based site, IIRC. It would handle legendarily humongous loads of network connections and file transfer (bytes/day) on cheapo x86 hardware.

    Compared to the other OSes in the above review, I got the impression that FreeBSD meant not only getting the free{beer,speech} advantage over W2K and Solaris/x86, and reliability (which Linux has), but also, as of several years ago, significantly better network and VM performance than Linux.

    I was impressed.

  • There is a difference between good performance (standard use) and exceptional performance, like the one Tux gets.
  • You need to use:...1. Asynchronous, Event based socket handling. ... Polling is lousy no matter what way you do it. You'll lose most of your performance spent going round a small loop.
    Please type 'man poll'. You'll find that poll(2) is asynchronous and event based. Nothing to do with cycling in a tight loop. Which doesn't detract much from your point that the benchmarkers showed no signs of understanding or adapting to the Windows OS.
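    For anyone who hasn't typed 'man poll' yet, here is a minimal sketch of what "asynchronous and event based" means in practice: the process sleeps in the kernel until a descriptor is ready, with no tight loop anywhere. A pipe stands in for a real socket here:

    ```c
    /* Minimal sketch of event-driven readiness with poll(2): the
     * process blocks in the kernel until a descriptor is ready --
     * no busy loop. A pipe stands in for a network socket. */
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0)
            return 1;

        /* Make the read end readable. */
        write(fds[1], "hi", 2);

        struct pollfd pfd = { .fd = fds[0], .events = POLLIN };

        /* Blocks until fds[0] is readable or 1000 ms elapse; the
         * kernel wakes us on the event, we never spin. */
        int n = poll(&pfd, 1, 1000);
        if (n == 1 && (pfd.revents & POLLIN)) {
            char buf[8];
            ssize_t got = read(fds[0], buf, sizeof buf);
            printf("poll reported readiness, read %zd bytes\n", got);
        }
        return 0;
    }
    ```

    The same pollfd array scales to many descriptors; the call returns as soon as any of them has an event pending.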
  • It's clear from their comments that they did not turn on Softupdates on the filesystems when they set up their FreeBSD machine for the testing.

    Isn't soft update still in testing? My FreeBSD 4.2-RELEASE system says this in /usr/src/sys/ufs/ffs/README:

    This package constitutes the alpha distribution of the soft update code updates for the fast filesystem.

    I don't see anything to support the idea that this changed in 4.3. In particular, I don't see any mention of soft updates at all in the 4.3 handbook. I'd expect to if it were a completed feature.

  • That wasn't updated. It's been in use since before 4.1-RELEASE. It was finally included (after years of testing) in OpenBSD 2.9. I'd say it's pretty much a proven implementation at this point.

    Thanks for the correction. I may be enabling it on my machine soon, then.

  • Obviously this "test" is a crock, as FreeBSD would not be that far behind, what with its superior network stack and thread handling.

    This sounds very much to me like "I found this benchmark's results surprising, therefore I rejected them." You've given absolutely no evidence to support your claim that FreeBSD's network stack and thread handling are superior. You've said nothing about what mistakes may have caused their benchmarks to be skewed. If you are going to reject results you don't expect, what's the point of running the test?

    I am not biased. I have all of these operating systems installed on my machines. The desktop I am at now is running Linux 2.4 and also has a copy of Windows 2000 installed. My colocated server is running FreeBSD. The old SPARCstation in the basement has Solaris (though, admittedly, I don't really use it).

    It may be true that this is a very bad benchmark...but don't reject the results simply because they surprise you. Look into it...what was wrong with their test procedure? Without an answer to that, you have no credibility.

  • I'm reading all sorts of comments by various people that this or that OS can be much faster if a certain tuning action is taken. They forget something: Joe Schmoe doesn't know about this. Joe Schmoe doesn't know he has to turn on command tag queueing in the Adaptec driver on Linux, doesn't know about FreeBSD softupdates, doesn't know that he has to tweak the hell out of W2K to make it run secure on an Internet connection. Joe Schmoe doesn't know that no matter what OS you choose, you need an expert to get it to run the way it's intended, fast and secure.

    The explanation of the way different OSes treat their threading and I/O, and the influence this has on the performance of a mailserver, is at least clear and should be a help to many admins and coders seeking performance gains on their systems.
  • So, my son, in order to beat [windows|linux|freebsd|solaris] you must now master the secret fu manchu drunken port style.
    When you have done this you shall find that you can allocate many more virtual sockets than your enemy. You will not only cut the virtual load on your CPU, but your disk I/O will remain virtually nothing compared to your enemy's. Your mallocs shall allocate many times more memory than you ever thought was possible.

    All this will give you a much greater throughput on your servers and put fear in the hearts of your enemy. Now, get your <stdio.h> and we shall begin.

    Sorry about that..
  • nice! slashdotted already!

    "But you said this was the best net OS!"

    "Quiet! I'm working on my resume!"
  • it's in -current now

    :)
  • It looks like you're trolling Slashdot. Perhaps Clippy [min.net] can be of assistance.
  • I would like to run my own test. In particular I'd like to benchmark Linux vs. 2k vs. FreeBSD (assuming the latter can support my gigabit NIC; haven't tried it yet). What I'm looking for is benchmarking file server performance. I've written a test tool before, and on Windows I used TransmitFile() and completion ports, but I don't have an equivalent method on Linux. I would like to find a tool that can serve files *optimally* under every OS supported. Portability is not a concern. Any suggestions?
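    The rough Linux analogue of TransmitFile() is sendfile(2), which copies file data to a socket entirely inside the kernel with no user-space buffer. A sketch of the idea, with a socketpair standing in for an accepted TCP connection and most error handling trimmed:

    ```c
    /* Linux sendfile(2) sketch: serve a file over a socket without
     * copying it through user space. The socketpair stands in for a
     * real TCP connection; the /tmp path is just a demo file. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/sendfile.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* A throwaway file to serve. */
        const char *payload = "hello from sendfile";
        int fd = open("/tmp/sf_demo.txt", O_CREAT | O_TRUNC | O_RDWR, 0600);
        write(fd, payload, strlen(payload));

        /* A connected socket pair in place of an accepted socket. */
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

        /* The whole transfer happens in the kernel. */
        off_t off = 0;
        ssize_t sent = sendfile(sv[0], fd, &off, strlen(payload));

        char buf[64] = {0};
        read(sv[1], buf, sizeof buf - 1);
        printf("sent %zd bytes: %s\n", sent, buf);
        return 0;
    }
    ```

    A real file server would loop on sendfile() until the whole file is transferred, since the call may send fewer bytes than requested on a nonblocking socket.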
  • That's true. Also keep in mind that Solaris is ass-slow on IDE systems.

    If they were testing on an Ultra-5 or 10, that would make Solaris look lousy.
  • by pizen ( 178182 ) on Friday June 15, 2001 @10:52AM (#148348)
    I was going to read this article and make an informed comment about it. But, because of my laziness to wait forever for it to load, I'm just going to post this summary of comments to come:

    Linux users: Linux is better, Windows is unstable.
    Win users: Windows is better, Linux is hard.
    BSD users: You're both wrong.
    Mac users: Hey, look at us. We are pretty.
    Top 3: Mac, shut up.
    BeOS users: We're better but y'all will never know it.
    Bill Gates: All your $$ is belong to me.

    ---
  • For anyone tracking the stable releases, the MIIBUS type driver for fxp cards is now in the source tree. 4.3-RELEASE doesn't have it, but grab the source to that and cvsup to the current stable tree and you get it.

    I've done no real testing, but it builds, and appears to work. ;-)


    --
  • Sure, Windows isn't POSIX compatible, but be fair: the POSIX calls on Windows are EMULATED. For example, not only is the malloc call on Windows implemented with its own set of memory management structures (on top of VirtualAlloc), it is also thread-safe, which means that it has to lock its internal structures on every call. Very expensive.

    Imagine if you wrote the original test in Win32 then ported to the other operating systems using a portability library (such as Mainsoft's MainWin). Would you expect as good performance as if you had written it using POSIX in the first place? No, you'd be stupid to.

    In order to do a benchmark like this you need to write different programs, one for each platform, that make the best use of the APIs on those platforms. Only then can you know the true performance of the platform itself. Of course, you have to make sure that the non API-specific parts of your programs are as similar as possible to reduce any discrepancies introduced.

    As far as this benchmark goes, I have found significant performance and stability increases by moving my code from the POSIX-style sockets API on Windows to the Winsock2 API. Yeah, it's not portable, but hey, I'm not porting it.

  • The price would have been portability. Proprietary I/O calls would have had to be used instead of POSIX-compliant ones. I am not the first to notice this. A nice side effect of this benchmark is to point out that Microsoft deliberately encourages the use of non-portable code in order to get performance: a textbook lock-in technique.
  • Do you have any test results that "prove" this test's results are flawed, or are you just parroting generally-accepted preconceptions about each OS?

  • From reading the article, there's no mention of either mounting the filesystem asynchronously (not so good) or enabling softupdates (good).

    It's ironic that these server 'benchmarks' you see give more emphasis to speed than data security.

    --
    C-YA
    Jon
  • Windows was never about portability in the first place.

    Stick to the priorities.

  • Everyone here seems to be stating over and over the blatantly obvious fact that they didn't use code that was optimized for each target platform.

    Well, I think the test was fucked for a completely different reason: they used a live Internet connection for the test. Don't they know that the latency of those connections will change from execution to execution? For example, sometimes I hit Slashdot, and on my nice slick T1 here at work it loads instantly. Other times, during heavy loads, it takes up to thirty or forty-five seconds. My point is that they are connecting to mail servers whose current load they know nothing about. This probably skewed their results wildly. In order for this test to be fair, they should have set up some boxen on their own intranet and tested the connections with no other traffic around to mess up the results.



    Well, your fingers weave quick minarets; Speak in secret alphabets;
  • All these comments blasting those benchmarks for not being fair to Win2k just make my heart swell.
    Now I won't feel nearly so bad when everybody complains about the next silly review that puts win2k above linux.
  • by BoarderPhreak ( 234086 ) on Friday June 15, 2001 @11:31AM (#148360)
    You use the platform that:

    • Supports your software best
    • Has the support you need from vendors
    • Meets your hardware requirements for redundancy, failover, high-availability and robustness, among other buzzwords

    It means nothing if "A" is fastest, if it runs on a bad OS, cheap commodity hardware or isn't supported. You go with "B" because it DOES.

    Fast != correct all the time.

  • The only part that I will have to agree with is that EXT2 fs is very fast. Although my one mail server that runs linux has an 18G mail partition that has just reached 50% fragmentation. The other 3 servers run FreeBSD with 1% fragmentation. That server will be switched to FreeBSD on Tuesday. After seeing that I am having trouble convincing anyone that Linux is a good OS.. especially myself.

    Sudo Chop!
  • In today's cheap iron world, I don't think narrow margins in performance matter that much.
    Everything is about reliability. I would rather have an OS that is up >99% of the time, and shell out a few more bucks for hardware, than an OS that runs itself into a blue screen like a freakin' rocket.


    Whatcha doooo with those rollin' papers?
    Make doooooobieees?
  • > Because TCP/IP is a standard, there should not be performance differences between stacks whereas a stack performs better speaking to another stack of the same design. TCP/IP should be completely interoperable.

    TCP/IP is indeed interoperable, but some things will be faster on one system and some on another, because of differences in the TCP/IP implementations.

    --
    Two witches watch two watches.
  • Obviously this "test" is a crock, as FreeBSD would not be that far behind, what with it's surperior network stack and thread handling

    Thanks AC for the nice plug. I'm a frontline developer for FreeBSD and I've been tasked with doing a complete overhaul of the tcp/ip stack, as well as doing a POSIX implementation of threads, something it sorely needed.

    It's quite possible this test was done without the BSD_booey_stack extensions compiled in, that would account for the less than stellar results. It's almost impossible for the tcp/ip stack to be the bottleneck for any problems anymore since I introduced double pass ID caching into the CVS. Basically it uses user space memory to reduce the overhead on the kernel, while performing table lookup translations on a stack hashtable. It's getting still better times if you enable my optimized dynamic MTU settings, even though that's still in the experimental stages.

    I don't let this stuff get me down though, when we release the next point version at comdex next year the industry will watch with awe, it's going to be a smoking little OS by then.

  • by sethbag ( 448116 ) on Friday June 15, 2001 @12:47PM (#148372)
    Solaris is much more finely grained in its locking than any of the other OSes mentioned. Because of that, comparisons with other OSes running on one or two CPUs (usually on PCs) do not give Solaris its due. Sure, Linux or FreeBSD, which aren't very finely grained in their locking (but are working towards changing that), spend less overhead in locking calls, so they run faster.

    But how fast can they run on a 32-cpu machine? Or a 64-cpu machine? According to some public documents I saw, Sun will release a 72-cpu machine this summer. They currently support 64 cpus on their E10000 machines. Solaris is a highly scalable OS. Linux is not. FreeBSD is most certainly not. Windows2000 may like to style itself scalable, but come on, we all know they are dreaming. Maybe scalable to 4 CPUs (if you own Pentium Xeons), and maybe in someone's wet dream it could scale to 16 CPUs or so, but none, I repeat none, of these OSes can scale like Solaris.

    Solaris' strength isn't the fact that it's blazing fast on a single CPU, because a lot of tests can show Linux is faster. But Solaris *is* blazing fast on massively parallel machines. Solaris shows time and again an amazing ability to scale performance with the addition of more CPUs. The overhead required to build that scalability into the OS penalizes Solaris on single or dual-cpu machines, and that *must* be taken into account by people.

    And don't even talk about 64-bit. Sure, Solaris for Intel is limited to 32-bit address spaces due to the constraints of the CPU architecture on which it runs, but Solaris the OS is built through and through as a 64-bit OS, and Solaris running on UltraSparc hardware supports zillions of bytes of RAM. The new SunFire 6800s can support in the hundreds of gigabytes of RAM.

    Can Windows2000 do that? Can Linux do that? Can FreeBSD do that? Really we are talking about different markets here, that's all. You really need to test the OSes in the areas they are designed to operate, and then you'll see who the real champ is.
  • Sorry about that, I screwed up the link.

    This [freebsd.org] one should work.

  • by (char *) jmi ( 451718 ) on Friday June 15, 2001 @12:41PM (#148375)
    In case anybody out there has a FreeBSD system which needs some tuning, I would advise reading the tuning(7) man page, in recent -STABLE and -CURRENT (after around the start of June). Those of you without a copy can fetch one from the CVS web interface:

    http://www.FreeBSD.org/cgi/cvsweb.cgi/src/share/man/man7/tuning.7 [freebsd.org]
  • Why isn't Novell Netware included here? It seems kind of strange that THE best network OS is being left out. -- Jim
  • Well, of course he's suspicious of the results! It looks exactly like someone just took a poll of people's favorite server operating systems! I mean, think about it: who the hell likes x86 Slowaris? Meanwhile, of course, Windows 2000 and Redhat are tied for first. What's curious, now, is that FreeBSD is so far behind both Linux and Windows 2000.

    Obviously this "test" is a crock, as FreeBSD would not be that far behind, what with its superior network stack and thread handling. Windows 2000 may be a fast system, but it certainly isn't as fast as FreeBSD, despite the fact that MS largely ripped their TCP/IP stack off the FreeBSD project. The jury, however, is still out on where Linux should have placed. It may not be the fastest system in the world, but it certainly rates at least above Windows 2K.

  • I have to say that I am really tired of seeing tons of people jump at the chance to trash an OS comparison. I have been reading posts like this for years, and this is the FIRST time that I have ever posted any comments.

    Did anyone actually read the article? Under the heading "Real-World Test" the author said in very clear terms: "The operating systems were the latest version available from a commercial distribution and were not recompiled (i.e., everything was tested right out of the box)." The author made only one change, to the number of file descriptors available. There is always someone who has to say that if they tweaked a little more then their favorite OS would blow the doors off all the others. This logic brings us back to the Mindcraft study, where Microsoft installed an experimental patch that allowed the admin to bind CPUs to NICs. You have to remember that most commercial distros cater to about 75% - 90% of their users, not to the 5% that worry about things like 2 million emails per hour or 10 billion hits per day. Also, the more you tweak, the less stable the system may become.

    There will NEVER be an end-all-be-all benchmark between all OSes. They are too different.

    Something that everyone ALWAYS seems to forget is that certain applications are better suited for different OSes.

    Also consider the distribution of this article, SysAdmin Magazine. They have limited space for their articles in their magazine. If the author included all of the data, code, graphs, it would probably fill the entire magazine. This was not meant to be a white paper or a doctoral thesis. If you want the entire thing including all 12 graphs that comprise the data of Figure 3, then email the author. I'm sure he would be happy to send it to you.

    There is a moral to my rant. Benchmarks are only good for the person/group doing the benchmarks. Anyone reading those benchmarks outside the author's environment should only use that information as a guide, not an absolute. What works in one environment might not work in another. If you have an application that is supported on multiple platforms, then test it yourself in YOUR environment. MOST people do not do that, and usually end up spending way more time and money than they would have if they had tested it in the first place.

    -GH
