Java Programming

Java IO Faster Than NIO

rsk writes "Paul Tyma, the man behind Mailinator, has put together an excellent performance analysis comparing old-school synchronous programming (java.io.*) to Java's asynchronous programming (java.nio.*), showing a consistent 25% performance deficit for the asynchronous code. As it turns out, old-style blocking I/O with modern threading libraries like Linux NPTL and multi-core machines gives you idle-thread and non-contending thread management at extremely low cost, lower than the cost of constantly switching and restoring connection state with a selector-based approach."
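For readers who haven't written a server both ways, the contrast is easy to see in code. Below is a minimal, hypothetical sketch (not taken from Tyma's benchmark; the class name, port, and echo behavior are made up for illustration) of the old-school blocking approach the summary describes: plain java.io sockets with one thread dedicated to each connection, letting the OS scheduler do the multiplexing.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingEchoServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newCachedThreadPool(); // threads are cheap with a modern threading library
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket socket = server.accept();      // blocks until a client connects
                pool.submit(() -> handle(socket));    // dedicate a thread to this connection
            }
        }
    }

    private static void handle(Socket socket) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {  // blocking read; the thread simply waits
                out.println(line);                    // echo the line back
            }
        } catch (IOException e) {
            // connection dropped; try-with-resources handles cleanup
        }
    }
}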
  • And this is news? (Score:4, Insightful)

    by Just_Say_Duhhh ( 1318603 ) on Tuesday July 27, 2010 @04:30PM (#33050280)
    Of course old school techniques are faster. We don't drop old school because we want better performance, we drop it because we're lazy, and want easier ways to get the job done!
  • by bolthole ( 122186 ) on Tuesday July 27, 2010 @04:33PM (#33050330) Journal

    Naw, old school gets dropped simply because it's "old" (i.e., not trendy/buzzword-compliant).
    Many times, the "old school" way is EASIER than the newfangled way.

    Example: the 100-200 line Perl scripts that can be done in 10 lines of regular old-fashioned shell.

  • by ShadowRangerRIT ( 1301549 ) on Tuesday July 27, 2010 @04:36PM (#33050374)
    Asynchronous I/O is by no means easier. There's a hell of a lot more to keep track of, and a lot more work to do to make asynchronous I/O work correctly; synchronous I/O is much easier to code, and apparently it's faster on Linux to boot.
  • by djKing ( 1970 ) on Tuesday July 27, 2010 @04:40PM (#33050438) Homepage Journal

    Except NIO is the old school C/C++ way to do it. One thread per socket was the new Java way. So NIO was new to Java, but still old school.

  • by cosm ( 1072588 ) <thecosm3NO@SPAMgmail.com> on Tuesday July 27, 2010 @04:43PM (#33050470)

    Of course some old school techniques are faster. We don't drop old school because we want better performance, we drop it because we're lazy, and want easier ways to get the job done!

    Minor addition to your comment, as some may get the wrong impression if it gets modded up the chain.

    That is a bit of a generalization, and not necessarily accurate. I would say that heavily tested, tried-and-true techniques are faster. Libraries in that category tend to be older, and so have had more time for testing and refinement, but being old doesn't guarantee something will always be faster, as your comment implies.

  • by yvajj ( 970228 ) on Tuesday July 27, 2010 @04:44PM (#33050478)

    I'm not sure where or when NIO got equated with lower latency. The primary benefit of NIO (from my experience designing and deploying both IO- and NIO-based servers) is that it allows better concurrency on a single box, i.e. you can service many more calls/transactions on a single machine, since you aren't limited by the number of threads you can spawn on that box (and you aren't limited as much by memory, since each thread consumes a fair amount of resources).

    For the most part (and from my experimentation), NIO actually has slightly higher latency than standard IO (especially on heavily loaded boxes).

    The question you need to ask yourself is: do you require higher concurrency and fewer boxes (cheaper to run and maintain) at the expense of slightly higher latency (which works well for most web sites), or are your transactions latency-sensitive/real-time, in which case standard IO works better (at the cost of requiring more hardware and support)?
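For contrast with the thread-per-connection sketch above, here is an equally minimal, hypothetical sketch of the selector-based NIO style being discussed: a single thread multiplexing many non-blocking connections, which is where the concurrency (rather than latency) benefit comes from. The class name, port, and echo behavior are again made up for illustration.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                              // wait for any channel to become ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept(); // accept new connection
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) == -1) {        // peer closed the connection
                        key.cancel();
                        client.close();
                    } else {
                        buffer.flip();
                        client.write(buffer);               // echo back (may be a partial write)
                    }
                }
            }
        }
    }
}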

  • by ShadowRangerRIT ( 1301549 ) on Tuesday July 27, 2010 @04:51PM (#33050572)

    Example: the 100-200 line Perl scripts that can be done in 10 lines of regular old-fashioned shell.

    Clearly you're not using Perl the way it was meant to be used. This obsession with coding Perl the way you'd code Java (with classes/objects, libraries to do what shell utilities do, etc.) makes it very verbose. But if you use it the old way (quick and dirty scripts, no compunctions about calling out to external shell utilities where they can do the job quicker, not bothering with use strict or use warnings, using the implicit variables shamelessly, etc.), Perl is, almost by definition, just as compact as shell. After all, if shell can do it, so can Perl; you just need to wrap it in backticks (and most of the time, Perl can do it natively with equal or greater compactness). Granted, when you code Perl like that it becomes more fragile and the code is hard to maintain. But then, so was the shell script.

    The problem with a lot of verbose Perl scripts is that the developers were taught to program Perl like C with dynamic typing (as I was initially, before I had to do it for a job and read Learning Perl and Effective Perl Programming cover to cover). I'm not completely insane, so I do code with use strict and warnings enabled, but I don't use the awful OO features, and even with the code overhead from use strict, my Perl scripts are usually no more than 120% the length of an equivalent shell script (and often much shorter). Plus, using Perl means you don't need to learn the intricacies of every one of the dozens of shell utilities, most of your code transfers to environments without the GNU tools (and heck, it doesn't explode if the machine you run on only offers csh and you wrote in bash), and most of what you're doing runs in a single process, instead of requiring multiple processes piping text from one to another and constantly reparsing strings into a process-usable form.

  • uh...... DUH?! (Score:5, Insightful)

    by Michael Kristopeit ( 1751814 ) on Tuesday July 27, 2010 @05:07PM (#33050744)
    The entire point of asynchronous I/O is to acknowledge that you will be waiting for I/O and to try to do something else useful rather than just wait. Asynchronous will obviously end up taking more time because of the overhead of managing state and performing the switches, but the tradeoff is that something useful gets done while waiting a little longer for the I/O, instead of doing nothing except wait for it to complete. Which method is best is completely application-specific.
  • by phantomcircuit ( 938963 ) on Tuesday July 27, 2010 @05:07PM (#33050752) Homepage

    You'll laugh, hysterically.

  • by CODiNE ( 27417 ) on Tuesday July 27, 2010 @05:17PM (#33050862) Homepage

    In agreement with your post...

    As a recent article showed, traditional algorithms may be less optimal on modern systems with multiple layers of cache and various speed memory systems. New or old it's always important to benchmark and find the right tool for your particular needs.

  • by Monkeedude1212 ( 1560403 ) on Tuesday July 27, 2010 @05:27PM (#33050950) Journal

    Of course old school techniques are faster

    Ha! Hahaha!

    Nonono, that's not the case. You're thinking of language levels. Low-level programming is very close to the hardware, so since you are using very specific instructions you don't lose any efficiency unless you wrote your code illogically. A higher-level language abstracts away the hardware, so your commands have to be translated into the proper opcodes to execute.

    Techniques, however, are not languages. I can use the same technique in C as I would in Assembly or C# or possibly some other very high-level language.

    The idea they're trying to convey here is: you are trying to thread efficiently for a multicore machine. To achieve this you can either
    A) use Java's asynchronous IO (NIO), or
    B) use regular Java IO with a modern threading library (like Linux NPTL!).

    Turns out - B is faster.

  • No shit Watson (Score:3, Insightful)

    by Alex Belits ( 437 ) * on Tuesday July 27, 2010 @05:38PM (#33051060) Homepage

    If you have multiple cores that would otherwise do nothing (like all benchmarks happen to assume), multithreading will use them and asynchronous nonblocking I/O won't, so the maximum transfer rate for static in-memory data over a low-latency network will always be faster with blocking threads.

    In real-life applications, if you always have enough work to distribute between cores/processors, your nonblocking I/O process or thread will only depend on the data production and transfer rate, not the raw throughput of the combination of syscalls it makes. If output buffers are always empty, and input buffers are empty every time a transaction happens, then data transfer speed is already maxed out, and adding more threads that perform I/O simultaneously will only increase overhead. If it is not maxed out, the same applies to data queued before/after processing -- that is, if there is any processing. So if worker threads/processes do more than copy data, giving additional cores to them is more useful than throwing those cores at I/O.

  • by Lunix Nutcase ( 1092239 ) on Tuesday July 27, 2010 @05:45PM (#33051106)

    In the past, successful developers were all highly skilled. It was a necessary trait for success both because development was difficult, and because there were so few ways to make money developing software. Unsuccessful developers stopped developing, and their code does not persist until today.

    You must not work with much legacy code. I've dealt with shitty code ranging from a couple of years old to many decades old (a mix of C, Fortran, Ada, various assembly, etc.). This notion that all old programmers were godlike gurus is mostly myth.

  • Re:Old news. (Score:5, Insightful)

    by Anonymous Coward on Tuesday July 27, 2010 @06:53PM (#33051678)

    Would that be the problem of never having heard of Nmap?

  • by Anonymous Coward on Tuesday July 27, 2010 @07:10PM (#33051782)

    What really pains me is when people decide to do the "easy" threaded I/O and then evolve strange monstrosities like thread pools to try to deal with the scaling problems. Allocating N*k worker threads to N processors and then doing select-style polling in each of them is just so ugly. Either the OS and language runtime should provide high performance async I/O or unlimited threading via smarter compilers and schedulers (or both). Having every application evolve from "trivial but slow" (simplistic async I/O) or "trivial but fragile" (simplistic threaded I/O) into "very complex but usually fast and scalable" (weird hybrids with thread pooling and application level work dispatchers) is just a terrible waste of software engineering resources.
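To make the criticized hybrid concrete, here is a rough, hypothetical sketch (the class name and the processors-times-two sizing are illustrative, not from the parent post) of the pattern described above: a fixed pool of worker threads, each running its own select-style polling loop.

import java.io.IOException;
import java.nio.channels.Selector;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HybridPoolSketch {
    public static void main(String[] args) throws IOException {
        int workers = Runtime.getRuntime().availableProcessors() * 2; // the "N*k" sizing heuristic
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            Selector selector = Selector.open();              // one selector per worker thread
            pool.submit(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        selector.select(500);                 // select-style polling inside the worker
                        selector.selectedKeys().clear();      // a real server would dispatch work here
                    }
                } catch (IOException e) {
                    // a real implementation would log and recover
                }
            });
        }
        // as connections arrive, channels would be registered with one of the selectors
    }
}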

  • by kaffiene ( 38781 ) on Tuesday July 27, 2010 @07:34PM (#33051960)

    Exactly!

    It's frustrating to see that 98% of the commentary on this article is clearly from people who don't understand the select vs. single-thread/poll trade-off, or who are just out-and-out ill-informed Java haters. *sigh* This *is* Slashdot, I suppose.

  • by Sarten-X ( 1102295 ) on Tuesday July 27, 2010 @08:12PM (#33052302) Homepage

    So from a different perspective, Microsoft had to kill off Java to get anyone to use XNA, and this is supposed to be evidence of XNA's superiority?

    ...But I digress...

    I don't think you quite got my point. Let's try a few more examples:

    • Which version of XNA can run on a toaster?
    • Which of Boeing's 7x7 series airplanes works best underwater?
    • Which brand of cream cheese is most effective for use as a boat anchor?

    As should be painfully obvious by now, placing arbitrary restrictions on a comparison makes the comparison meaningless. Your original statement was that comparisons are null if the target systems aren't equal. Limiting the discussion to a single case where you know the comparison is flawed makes the comparison useless.

    Instead, let's simply compare where the two technologies can be used. Java can be targeted for many [java-virtual-machine.net] systems. XNA can run on four [wikipedia.org]. For any randomly-selected non-PC target platform, it seems the chance of Java working is significantly higher than XNA (or anything else, for that matter).

    A more equally-weighted comparison is Java vs. .NET. Both are based on publicly-available specifications, and both offer similar functionality. I'd argue that neither is any better than the other in theory, though in practice Java has better support.

  • by icebraining ( 1313345 ) on Tuesday July 27, 2010 @10:50PM (#33053126) Homepage

    Easier is less important than readable. Remember, "Programs must be written for people to read, and only incidentally for machines to execute."
    (sure, that's not always true - other concerns may take priority, like performance - but it's a good practice).

    Now, about the script itself: wtf is $_? Poor naming conventions.
    wtf is the regex being applied to? The code is less than explicit, hard to follow.
    Also, wtf is the print printing?

    Sure, in that case it's easy to follow, but with a larger script that you didn't write... well, look at this website!

  • by dave87656 ( 1179347 ) on Wednesday July 28, 2010 @02:06AM (#33053788)

    My understanding is that it is not supposed to be faster. It is non-blocking and asynchronous, which serves a different need.

  • by gringer ( 252588 ) on Wednesday July 28, 2010 @08:31AM (#33054946)

    Now, about the script itself:

    If you're familiar with Perl, these things are obvious. You need to learn Perl basics before you can understand Perl code.

    wtf is $_ ?

    The default variable, or the "default input and pattern matching space". Many functions are implicitly applied to this variable, and return this variable as a result.

    wtf is the regex being applied to?

    If not otherwise specified, regex are applied to the "default input and pattern matching space".

    wtf is the print printing?

    The print statement is printing the "default input and pattern matching space", which in this case is the result from the previous command.

    Most of the confusion that you have mentioned can be overcome by understanding the "default" concept of Perl.

  • by slim ( 1652 ) <john.hartnup@net> on Wednesday July 28, 2010 @10:44AM (#33056370) Homepage

    fork() is still the way forward in many, many situations. Having every server session in its own protected memory space gives me warm fuzzy feelings. One can segfault and the rest will keep on running, and that's just the tip of the security iceberg.

    select() has advantages, described in other posts here, but it has disadvantages too.
