Java IO Faster Than NIO 270
rsk writes "Paul Tyma, the man behind Mailinator, has put together an excellent performance analysis comparing old-school synchronous programming (java.io.*) to Java's asynchronous programming (java.nio.*) — showing a consistent 25% performance deficiency with the asynchronous code. As it turns out, old-style blocking I/O with modern threading libraries like Linux NPTL and multi-core machines gives you idle-thread and non-contending thread management for an extremely low cost; less than it takes to switch-and-restore connection state constantly with a selector approach."
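For readers who haven't used both styles, here is a minimal sketch of the thread-per-connection blocking model the article favors. The class name, echo protocol, and port handling are illustrative, not taken from Tyma's benchmark:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingEchoServer {
    // Start a server that dedicates one thread to each accepted connection.
    // The read() call simply blocks; with NPTL the kernel parks the thread cheaply.
    static ServerSocket start() throws IOException {
        ServerSocket server = new ServerSocket(0);  // ephemeral port
        Thread acceptor = new Thread(() -> {
            try {
                while (true) {
                    Socket client = server.accept();
                    new Thread(() -> handle(client)).start();  // thread per connection
                }
            } catch (IOException e) {
                // server closed; acceptor exits
            }
        });
        acceptor.setDaemon(true);
        acceptor.start();
        return server;
    }

    static void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(line);  // echo each line back
            }
        } catch (IOException ignored) {
        }
    }

    // Round-trip one line through the server, for demonstration.
    public static String echoOnce(String msg) throws IOException {
        ServerSocket server = start();
        try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println(msg);
            return in.readLine();
        } finally {
            server.close();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(echoOnce("hello"));  // prints "hello"
    }
}
```

Each connection gets a dedicated thread that just blocks in read(); the article's claim is that parking and waking such threads now costs less than the selector bookkeeping of the NIO approach.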
And this is news? (Score:4, Insightful)
Re:And this is news? (Score:5, Insightful)
naw, old school gets dropped simply because it's "old" (ie: not trendy/buzzword compliant).
Many times, the "old school" way is EASIER than the newfangled way.
Example: the 100-200 line Perl scripts that can be done in 10 lines of regular old-fashioned shell.
Re:And this is news? (Score:5, Insightful)
Example: the 100-200 line Perl scripts that can be done in 10 lines of regular old-fashioned shell.
Clearly you're not using Perl the way it was meant to be used. This obsession with coding Perl the way you'd code Java (with classes/objects, libraries to do what shell utilities do, etc.) makes it very verbose. But if you use it the old way (quick and dirty scripts, no compunctions about calling out to external shell utilities where they can do the job quicker, not bothering with use strict or use warnings, using the implicit variables shamelessly, etc.), Perl is, almost by definition, just as compact as shell. After all, if shell can do it, so can Perl, you just need to wrap it in backticks (and most of the time, Perl can do it natively with equal or greater compactness). Granted, when you code Perl like that it becomes more fragile and the code is hard to maintain. But then, so was the shell script.
The problem with a lot of verbose Perl scripts is that the developers were taught to program Perl like C with dynamic typing (as I was initially, before I had to do it for a job and read Learning Perl and Effective Perl Programming cover to cover). I'm not completely insane, so I do code with use strict and warnings enabled, but I don't use the awful OO features, and even with the code overhead from use strict, my Perl scripts are usually equal to or less than 120% the length of an equivalent shell script (and often much shorter). Plus, using Perl means you don't need to learn the intricacies of every one of the dozens of shell utilities, most of your code can transfer to environments without the GNU tools (and heck, it doesn't explode if the machine you run on only offers csh and you wrote in bash), and most of what you're doing runs in a single process, instead of requiring multiple processes, piping text from one to another, constantly reparsing from string form to process usable form.
Re: (Score:2, Funny)
With regard to Perl OO, I'm reminded of one of my favorite quotes; I think it's from Advanced Perl Programming. It requires some explanation for those who are not Perl nerds.
For those unfamiliar, Perl's idea of "objects" is effectively just an OO framework on top of a procedural model. So you have packages; think C++ namespaces. All a Perl object is, is a hash that is "blessed" into the package. You just call bless $hashref, $package; and it makes it so you can do neat shit like $hashref->doShit, and then $hashre
Re: (Score:2, Informative)
Of course there are 3 ways to do this and each has subtle differences.
Likewise, whether you pass a hash, or a reference to a hash, or you shift single parameters off the stack. It's totally up to you!
I love using Perl for integrating with the shell and other systems, plus its text-parsing abilities, but man, its OO is brutal, and I wouldn't use Perl in any large project, especially if multiple developers are required.
Re: (Score:2)
Clearly you're not using Perl the way it was meant to be used.
I think Larry might disagree [wall.org] with your assertion that Perl was meant to be used in a specific fashion.
Re: (Score:2)
He said there's more than one way, he didn't say all of them were good.
Re: (Score:2)
Re: (Score:2)
Example: the 100-200 line perl scripts
200 lines of perl? Isn't that enough to code SkyNet? (While disguising it as simple line noise.)
Re:And this is news? (Score:4, Funny)
Some people are less afraid of SkyNet than they are of regular expressions.
Re: (Score:3, Funny)
Perl is not New Fangled. I am sorry to say Perl is one of those .COM languages that sparked people's interest for a few years but has settled down into a niche language. So it is now an Old School Language... Sorry...
:o
GET OFF MY LAWN!
Re:And this is news? (Score:5, Informative)
Re: (Score:3, Funny)
Re: (Score:2)
Funny that, I find Perl really shines when I use it to extract data and create a report.
Re: (Score:2)
I find scripts written by other people to be hard to read and because I don't write perl this includes all perl scripts.
Re: (Score:2)
Other than that he probably meant the last 'c' in the string to be a 'z', the code in his sig is quite simple and little more than a kid's decoder ring. How would you do the same thing in your favorite language that would make it so much easier?
Re: (Score:3, Insightful)
Easier is less important than readable. Remember, "Programs must be written for people to read, and only incidentally for machines to execute."
(sure, that's not always true - other concerns may take priority, like performance - but it's a good practice).
Now, about the script itself: wtf is $_ ? Poor naming conventions.
wtf is the regex being applied to? Code less than explicit, hard to follow.
also, wtf is the print printing?
Sure, in that case it's easy to follow, but with a larger script that you didn't write
Re: (Score:3, Insightful)
Now, about the script itself:
If you're familiar with Perl, these things are obvious. You need to learn Perl basics before you can understand Perl code.
wtf is $_ ?
The default variable, or the "default input and pattern matching space". Many functions are implicitly applied to this variable, and return this variable as a result.
wtf is the regex being applied to?
If not otherwise specified, regex are applied to the "default input and pattern matching space".
wtf is the print printing?
The print statement is printing the "default input and pattern matching space", which in this case is the result from the previous statement.
Re: (Score:3, Interesting)
Java may not be "sexy" anymore (or "all the rage" as you put it), but it is not exactly a niche language. It still runs in surprisingly many places, like cell phone apps (yes, a lot of us still use regular cell phones, and Android is Java-ish but with some tweaks), and more importantly just about every corporate data center uses Java. That last "niche" is pretty huge, and the only thing that threatens Java in that space is dot-Net, the Java platform clone.
Java, like it or not, has become the COBOL of the
Re:And this is news? (Score:4, Informative)
Re: (Score:2)
Yah, forget those .com apps. These days we're only happy with .net apps. But one of these days we'll finally be freed when the days of .exe code are here!
Re: (Score:3, Interesting)
Re:And this is news? (Score:5, Informative)
Re:And this is news? (Score:5, Funny)
Re: (Score:2)
you forgot "disable the trackpad in the BIOS." Your best bet is to join the rest of us and use /dev/event/foo and configure X to only use the pointer stick. Or your mouse.
Re:And this is news? (Score:4, Funny)
Re: (Score:2)
Oh for the days of BASIC in ROM when peek and poke were the go.
Re:And this is news? (Score:4, Insightful)
Re: (Score:3, Interesting)
The extra stuff to take care of is why asynchronous I/O applications tend to have lower throughput than synchronous I/O if you have good OS threading.
There have only ever been two good reasons to use application-multiplexed I/O: Your OS sucks at threading (like Windows and Solaris the last time I looked at them), or you have more clients than memory. Languages like C and Java require applications to dedicate multiple kilobytes per thread for the thread's stack -- but usually default to megabytes per thread
Re:And this is news? (Score:4, Informative)
but usually default to megabytes per thread, so if you have thousands of concurrent clients, you will soak up memory in fairly large quantities.
There's an important distinction to make here: a thread's stack will reserve (so many) kilobytes/megabytes of address space, but it won't actually use up very much RAM unless/until the thread starts to actually use a lot of stack space (e.g. by doing a lot of recursion).
On a 32-bit machine, starting too many threads can allocate all of your process's 2-4 gigabytes of address space, which can cause problems even though you have plenty of RAM still free.
On a 64-bit machine, on the other hand, the amount of available address space is mind-bogglingly huge, so running out of address space isn't a problem you're likely to run into, even if you run a gazillion threads at once.
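To the parent's point about reservation vs. use: the per-thread stack reservation can also be tuned down, either JVM-wide with the -Xss flag or per thread via the four-argument Thread constructor, whose stackSize argument is only a hint that the JVM and OS may round up or ignore. A small sketch, with illustrative names and sizes:

```java
public class SmallStackThreads {
    // Start `count` threads that each request a small stack reservation,
    // then wait for all of them and report how many actually ran.
    public static int runMany(int count) throws InterruptedException {
        final int[] done = {0};
        Thread[] threads = new Thread[count];
        for (int i = 0; i < count; i++) {
            // The fourth argument is a stack-size *hint* in bytes; the JVM
            // and OS are free to round it up or ignore it entirely.
            threads[i] = new Thread(null, () -> {
                synchronized (done) { done[0]++; }
            }, "small-stack-" + i, 64 * 1024);
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return done[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runMany(1000));  // prints 1000
    }
}
```

With a small hint honored, thousands of mostly-idle threads reserve far less address space, which matters most on the 32-bit machines discussed above.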
Re: (Score:2)
Except, in this case, the newer version is also not any easier to use, and the old version isn't going to be dropped.
Re: (Score:3, Insightful)
Except NIO is the old school C/C++ way to do it. One thread per socket was the new Java way. So NIO was new to Java, but still old school.
Re: (Score:3, Informative)
One _thread_ is indeed a new way (for certain values of "new"), but back in the day we used fork() instead of non-blocking IO.
Re: (Score:2)
Re: (Score:3, Insightful)
fork() is still the way forward in many, many situations. Having every server session in its own protected memory space gives me warm fuzzy feelings. One can segfault and the rest will keep on running, and that's just the tip of the security iceberg.
select() has advantages, described in other posts here, but it has disadvantages too.
Re: (Score:2)
NIO supports async and is a part of the official class library standard.
You can't have a 1.5+ certified java implementation without it working.
Re: (Score:3, Informative)
It's not in the C Standard Library, but there most certainly is async IO for C. See 'aio.h' in POSIX for example.
Re:And this is news? (Score:5, Insightful)
Of course some old school techniques are faster. We don't drop old school because we want better performance, we drop it because we're lazy, and want easier ways to get the job done!
Minor addition to your comment, for some may get the wrong impression if it gets modded up the chain.
That is a bit of a generalization, and not necessarily accurate. I would say that heavily tested, tried-and-true techniques are faster. Libraries in that realm tend to be older, and hence have had more time for testing and refinement, but being old doesn't guarantee being faster all of the time, as your comment implies.
Re:And this is news? (Score:5, Insightful)
In agreement with your post...
As a recent article showed, traditional algorithms may be less optimal on modern systems with multiple layers of cache and various speed memory systems. New or old it's always important to benchmark and find the right tool for your particular needs.
Re: (Score:2)
Short of some algorithmic breakthrough, it does imply that older implementations are necessarily faster.
The answer is that most of the newer methods are merely bloat, developed not for speed and efficiency, but for ease of development and maintenance.
In the past, successful developers were all highly skilled. It was a necessary trait for success both because development was difficult, and because there were so few ways to make money developing software. Unsuccessful developers stopped developing, and their
Re:And this is news? (Score:4, Insightful)
In the past, successful developers were all highly skilled. It was a necessary trait for success both because development was difficult, and because there were so few ways to make money developing software. Unsuccessful developers stopped developing, and their code does not persist until today.
You must not work with much legacy code. I've dealt with shitty code that is anywhere from a couple of years old to many decades old (a mix of C, Fortran, Ada, various assembly, etc.). This notion that all old programmers were godlike gurus is mostly myth.
Re: (Score:2)
You must not work with much legacy code. I've dealt with shitty code that is anywhere from a couple of years old to many decades old (a mix of C, Fortran, Ada, various assembly, etc.). This notion that all old programmers were godlike gurus is mostly myth.
Also a lot of them were gurus in stuff that's not really relevant anymore, like saving two bytes of memory here, a function call there and two processor instructions elsewhere - at least on the simple C compiler they used in 1987 - at the cost of code clarity, encapsulation and so on. Almost all the spectacular performance failures I see are not due to issues like that; it's that you've created a spaghetti mess and eventually settle for the only solution you found, even though it's probably 100x slower than it should be
Re: (Score:2)
Re: (Score:2)
We don't drop old school because we want better performance, we drop it because we're lazy...
Hehe yep. Software engineers are lazy and overworked!
Re: (Score:3, Insightful)
Of course old school techniques are faster
Ha! Hahaha!
Nonono, that's not the case. You're thinking of language levels. Low-level programming is very close to the hardware; since you are issuing the specific instructions yourself, you don't lose any efficiency unless you wrote your code illogically. A higher-level language abstracts away the hardware, so your commands have to be mapped to the proper opcodes to execute.
Techniques however, are not languages. I can use the same technique I would in C as I would in Assembly or C# or possibly some other
Waiting for JDK 7 (Score:4, Informative)
JDK7 will bring a new IO API that underneath uses epoll (Linux) or I/O completion ports (Windows). High-performance servers will be possible in Java too.
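For reference, here is roughly what that API looks like as it eventually shipped (java.nio.channels.AsynchronousSocketChannel and friends). The round-trip helper below is an illustrative sketch, not JDK sample code, and it assumes the whole message arrives in a single read:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Future;

public class Nio2EchoDemo {
    // One request/response round trip using the Future-style NIO.2 API.
    public static String roundTrip(String msg) throws Exception {
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        Future<AsynchronousSocketChannel> pendingAccept = server.accept();

        AsynchronousSocketChannel client = AsynchronousSocketChannel.open();
        client.connect(new InetSocketAddress("127.0.0.1", port)).get();
        AsynchronousSocketChannel peer = pendingAccept.get();

        client.write(ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8))).get();

        ByteBuffer buf = ByteBuffer.allocate(256);
        peer.read(buf).get();   // completes when data arrives; no thread spins waiting
        buf.flip();

        String received = StandardCharsets.UTF_8.decode(buf).toString();
        peer.close();
        client.close();
        server.close();
        return received;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("ping"));  // prints "ping"
    }
}
```

Underneath, the default channel group uses the platform's native mechanism (epoll on Linux, completion ports on Windows), which is exactly the pairing the parent describes.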
Re:Waiting for JDK 7 (Score:4, Informative)
Finally, all the world's enterprise systems can switch to Java... ....oh wait
Re: (Score:2, Informative)
101 Reasons why Java is better than .NET - http://helpdesk-software.ws/it/29-04-2004.htm [helpdesk-software.ws]
You do more harm than good to Java by comparing it to a 6-8-year-old version of .NET, since your ignorance gives the impression that we (Java developers) just aren't keeping up with the times. Then again, given how long you've kept that pageful of crap there in spite of multiple comments like mine, I begin to think that this is your intention.
How else to explain the fact that in addition to a bunch of invalid arguments, the links to detail for each one bring you to an error page?
Re: (Score:3, Informative)
Not really. Here:
Re: (Score:2)
Re: (Score:2)
So, you're touting .NET's cross-platform friendliness against Java's?
Really?
Re: (Score:2)
So, you're touting .NET's cross-platform friendliness against Java's?
It doesn't matter how many platforms language X and library Y run on if the set of such platforms doesn't include platform P, and platform P is the only platform that remotely matches the business plan for your application.
Re: (Score:2)
PropJavelin [wikispaces.com], for the HYDRA [wikipedia.org].
Which XNA version lets the public program for the Wii or Playstation?
Re: (Score:2)
Which XNA version lets the public program for the Wii or Playstation?
Nintendo and Sony have made the choice not to let the public develop for their platforms. (Even on PS3, the latest firmware will erase your Other OS.) Microsoft is the only game console maker with a public SDK, and the only way to use Java on its platform is through J#, which Microsoft appears to be phasing out.
Re: (Score:3, Insightful)
So from a different perspective, Microsoft had to kill off Java to get anyone to use XNA, and this is supposed to be evidence of XNA's superiority?
...But I digress...
I don't think you quite got my point. Let's try a few more examples:
As should be painfully obvious by now, placing arbitrary restrictions on a comparison makes the comparison
Re: (Score:2)
A more equally-weighted comparison is Java vs. .NET.
If your game's back-end (physics and AI) is written in any 100% Pure .NET language, an XNA front-end (input, graphics, and sound) for this game can be created. But the only way to share the back-end between Java front-ends and .NET front-ends appears to be J#, and I don't know how long Microsoft plans to maintain that.
Re: (Score:2)
Really? That's your argument?
Re: (Score:2)
That's your argument?
Allow me to rephrase: If you're trying to bring an application to a given platform, sometimes the platform forces the language choice. On iPhone, it's Objective-C++; on Windows Phone 7, it's a .NET managed language. I agree that Java is useful on platforms with a JVM, but you may need to reconsider use of Java if your business plan includes expanding to a platform that lacks one.
Re: (Score:3, Informative)
gcj (Score:2)
Java will run on platforms that support C.
XNA Game Studio does not support C or Standard C++. It supports a largely incompatible C++ dialect called "C++/CLI with /clr:safe".
Please see GNU gcj and the Classpath project.
Does gcj compile Java into C or directly into object code? The iPhone developer program requires that Xcode and only Xcode compile your program's source code, which must be written in Objective-C (of which C is a subset) or Objective-C++ (of which Standard C++ is a subset).
Re: (Score:2, Informative)
JDK7 will bring a new IO API that underneath uses epoll (Linux)
From TFA:
To work around the not-so-performant/scalable poll() implementation on Linux, we tried using epoll with the Blackwidow JVM on a 2.6.5 kernel. While epoll improved the overall scalability, the performance still remained 25% below the vanilla thread-per-connection model. With epoll we needed a lot fewer threads to get to the best performance mark that we could get out of NIO.
Old news. (Score:5, Informative)
Look at the timestamp of this presentation :) It's a bit of old news.
It was discussed here: http://www.theserverside.com/news/thread.tss?thread_id=48449 [theserverside.com]
And it mostly shows that NIO is deficient. I encountered similar problems in my tests. Solved them by using http://mina.apache.org/ [apache.org] .
Re:Old news. (Score:4, Informative)
Mina is great although the brains behind the project left and started a new project, Netty [jboss.org].
I've heard from multiple sources that Netty tends to outperform Mina, although I've been using Mina with no problems.
Re:Old news. (Score:5, Interesting)
I had a problem where the customer wanted to discover a class B network in a reasonable amount of time.
Aside from Java's lack of ping causing huge heartaches, the limitation was that old Java IO allocated a thread per connection while waiting for a response.
This limited me to 2,000-4,000 outstanding connection attempts at any time. Since most didn't connect, I needed at least 3 retries on each with progressive back-off times--the threads were absolutely the bottleneck.
I reduced the time for this discovery process from days (or the machine just locked up) to 15 minutes. With NIO I probably could have reduced it significantly more (although at some point packet collisions would have become problematic).
NIO may not be defective, it just may be solving a problem you haven't conceived of.
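The parent's scenario--thousands of outstanding connection attempts without a thread parked on each--can be sketched with a single selector driving non-blocking connects. The class, counts, and timeout below are hypothetical, not the poster's actual code:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class NonBlockingConnectDemo {
    // Launch `count` connect attempts at once on one thread and return
    // how many completed before the deadline. No thread blocks per attempt.
    public static int connectAll(InetSocketAddress addr, int count, long timeoutMs)
            throws IOException {
        Selector selector = Selector.open();
        int connected = 0;
        int pending = 0;
        for (int i = 0; i < count; i++) {
            SocketChannel ch = SocketChannel.open();
            ch.configureBlocking(false);
            if (ch.connect(addr)) {        // completed immediately (possible on loopback)
                connected++;
                ch.close();
            } else {
                ch.register(selector, SelectionKey.OP_CONNECT);
                pending++;
            }
        }
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (pending > 0 && System.currentTimeMillis() < deadline) {
            if (selector.select(100) == 0) continue;
            for (SelectionKey key : selector.selectedKeys()) {
                SocketChannel ch = (SocketChannel) key.channel();
                if (key.isConnectable() && ch.finishConnect()) {
                    connected++;
                    pending--;
                    key.cancel();
                    ch.close();
                }
            }
            selector.selectedKeys().clear();
        }
        selector.close();
        return connected;
    }

    public static void main(String[] args) throws Exception {
        try (java.net.ServerSocket srv = new java.net.ServerSocket(0, 128)) {
            int ok = connectAll(new InetSocketAddress("127.0.0.1", srv.getLocalPort()), 20, 5000);
            System.out.println(ok + " connected");
        }
    }
}
```

A real scanner would also track per-target retry counts and back-off deadlines in the loop, but the point stands: the number of in-flight attempts is bounded by file descriptors, not threads.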
Re: (Score:2)
Sounds like Java was the wrong tool for the job. There are other languages designed explicitly for massive concurrency which may have worked out better.
Re:Old news. (Score:5, Insightful)
Would that be the problem of never having heard about Nmap?
NIO != lower latency (Score:5, Insightful)
I'm not sure where / when NIO got equated to lower latency. The primary benefits of NIO (from my understanding of having designed and deployed both IO and NIO based servers) is that NIO allows you to have better concurrency on a single box i.e. you can service many more calls / transactions on a single machine since you aren't limited by the number of threads you can spawn on that box (and you aren't limited as much by memory, since each thread consumes a fair number of resources on the box).
For the most part (and from my experimentation), NIO actually has slightly higher latency than standard IO (especially with heavy loaded boxes).
The question you need to ask yourself is... do you require higher concurrency and fewer boxes (cheaper to run / maintain) at the expense of slightly higher latency (which would work well for most web sites), or are your transactions latency sensitive / real-time, in which case using standard IO would work better (at the cost of requiring more hardware and support).
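For concreteness, the single-threaded selector loop being described might look like this minimal sketch (class name and echo behavior are illustrative; a real server would also handle partial writes and per-connection state):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorEchoServer {
    private final Selector selector;
    private final ServerSocketChannel server;

    public SelectorEchoServer() throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    public int port() throws IOException {
        return ((InetSocketAddress) server.getLocalAddress()).getPort();
    }

    // One pass of the event loop: a single thread services every connection.
    public void poll(long timeoutMs) throws IOException {
        if (selector.select(timeoutMs) == 0) return;
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();
            if (key.isAcceptable()) {
                SocketChannel client = server.accept();
                client.configureBlocking(false);
                client.register(selector, SelectionKey.OP_READ);
            } else if (key.isReadable()) {
                SocketChannel client = (SocketChannel) key.channel();
                ByteBuffer buf = ByteBuffer.allocate(4096);
                int n = client.read(buf);
                if (n < 0) { key.cancel(); client.close(); continue; }
                buf.flip();
                while (buf.hasRemaining()) client.write(buf);  // echo back
            }
        }
    }

    // Demo: drive the loop on this thread until one echo round trip completes.
    public static String echoOnce(String msg) throws IOException {
        SelectorEchoServer srv = new SelectorEchoServer();
        Socket client = new Socket("127.0.0.1", srv.port());
        client.getOutputStream().write(msg.getBytes());
        client.getOutputStream().flush();
        byte[] reply = new byte[msg.length()];
        int read = 0;
        long deadline = System.currentTimeMillis() + 5000;
        while (read < reply.length && System.currentTimeMillis() < deadline) {
            srv.poll(50);  // accept, then read+echo
            if (client.getInputStream().available() > 0) {
                read += client.getInputStream().read(reply, read, reply.length - read);
            }
        }
        client.close();
        return new String(reply, 0, read);
    }
}
```

This is the trade-off the parent describes: one thread multiplexes every socket, so concurrency scales with descriptors rather than threads, at the cost of the selector bookkeeping on each event.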
Re: (Score:3, Informative)
Re: (Score:2)
The presentation went on to explain what happened: years ago, in Java 1.1 and 1.2, it was nasty having to write for concurrency on servers using IO, so you'd switch to NIO and be happy.
NIO was new in 1.4, so something's been garbled somewhere.
Re: (Score:2)
A couple years back when we tested this (around 3-4 years back I think), we could at most sustain 6k-7k connections per box using standard IO (this was not HTTP / web traffic). The issues we ran into were spawning new native threads on the box (we ran out of handles on Linux) as well as running out of memory.
Anyone who wants scalability NEVER uses a 1:1 thread model. This is why Apache specifically uses a hybrid model to avoid such insanity. A 1:1 model is a recipe for strong contention which is completely contrary to the notion of scalability and actively defeats hardware scalability by adding additional cores/CPUs.
The fact that you run out of handles on linux means the box was not properly configured.
The memory issue is easily fixed by either adding more memory or using a sane connection model.
Re: (Score:3, Insightful)
Exactly!
It's frustrating to see that 98% of the commentary on this article is clearly from people who don't understand the select vs. single-thread/poll trade-off, or who are just out-and-out ill-informed Java haters. *sigh* This *is* slashdot, I suppose.
is this what I think it is? (Score:2)
This looks like polling vs. pending, and if it is, pending won that war about 40 years ago.
Re: (Score:2)
No. And you have no idea what you are talking about.
uh...... DUH?! (Score:5, Insightful)
Re: (Score:2)
So use the old faster io stuff in a thread
Provided that you have enough threads available to do "the old faster io stuff". Once you start juggling more than a thousand connections to a server, you run into the kind of limit that bill_kress mentioned [slashdot.org].
Suggestion: Skip to page 21 (Score:3, Insightful)
You'll laugh, hysterically.
Re: (Score:2)
Re: (Score:2)
Should be using Scatter/Gather +IOCP on windows (Score:3, Informative)
On Windows, the fastest way to do multithreaded I/O with a producer/consumer queue pattern is IO Completion Ports.
The fastest way to write a bunch of buffers to disk is WriteFileGather. The fastest way to read data from disk into a bunch of buffers is ReadFileScatter.
SQL Server uses these APIs to scale.
When I used to work at MS in evangelism, there was a big debate about how Unix does things one way, and Microsoft does it a COMPLETELY different way that you just can't #define away - it's just different. A guy named Michael Parkes said "I cannot go to these clients and say REPENT! and use IO completion ports! They do thread per client, because they have fork()".
When you listen to the technical explanations, the Microsoft way actually IS better - it's just that it's totally incompatible with everything else.
Learn IOCP and watch your context switches drop.
Re: (Score:2)
This is a problem, not a solution. One of the bigger problems with Win32 development is the multitude of totally incompatible APIs that do the same thing.
Re: (Score:2)
Re: (Score:2)
you must call read/write again
I meant the case where the previous call returned with the pending status.
Re: (Score:3, Interesting)
Actually, they are better for different things. In Linux you get notified when you can perform an I/O, perform a bunch of non-blocking I/O, and then wait for another notification. In Windows you perform an I/O, and it will either complete immediately or notify you when it does. This means async I/O on Linux can use less memory, while on Windows it can give higher throughput.
Of course, these are merely API advantages -- if the implementation is poor, that won't matter. I'm not aware of any serious tests
Re: (Score:2)
This means async I/O on Linux can use less memory, while on Windows it can give higher throughput.
No. It means that you don't know how pending I/O controls the scheduler. On Linux, processes are almost never "notified" by asynchronously arriving signals; they are either processing a request or sleeping in a syscall, and they are awakened by the scheduler when whatever they are waiting for (usually data arriving or a buffer becoming available) has happened. The scheduler takes into account the number of pending I/O requests and priorities, so processes are awakened in the optimal order -- I/O priority dictates
Re: (Score:2)
In Linux you get notified when you can perform an I/O, perform a bunch of non-blocking I/O
In edge-triggered epoll on Linux, a read call can also get an immediate result if the data is already there. If not, it returns with a pending code.
Re:Should be using Scatter/Gather +IOCP on windows (Score:5, Interesting)
I'm afraid I have to disagree. No fan of Microsoft, but I helped build a the-Java-Programming-Language-TM Virtual Machine on Windows, with M:N threads, back before Java 1.4, and IO Completion ports worked well, and we got good performance out of them. We rewrote the network IO to work behind the curtain with threads, with the result that the one-socket-per-thread model actually did the I/O completion port thing, with as many as 32k Java threads running in a grand total of about a dozen Windows threads (stacks were small, stacks grew on demand. Certain things were tricky.).
The largest wins of doing it this way were:
1) got to use the underlying OS's preferred way of doing async IO (on another OS, we might do it differently)
2) lots of threads allowed
3) because Java "context switches" were extremely lightweight, lots of "expensive" stuff got faster (e.g., lock contention).
I also accidentally (really -- I had to choose one of two threads to go first, and chose the right one, on a whim) built-in an anti-convoying heuristic for contended locks, that was really useful when code contained a hot lock.
But, the rest of the system was not especially Microsoft-y; all of us came from a Unix background, and when we were done, we did Unix again. IO Completion ports, at least on Windows, were the best choice (and I tried it 2 or 3 other ways, and they sucked).
Re: (Score:3, Interesting)
The goodness of this strategy assumes some sort of linear-in-delay metric. If there's a deadline, with high penalties for exceeding it (say, if you are serving web pages), you don't want to be stochastically fair, you want to be fair.
The scheduler I wrote was 100% fair, EXCEPT in the case where a thread exited a critical section that had other threads competing for (i.e., blocked). In that case, the exiting thread would give up its quantum to the head (longest waiter) of the queue, who would do the same,
No shit Watson (Score:3, Insightful)
If you have multiple cores that are otherwise doing nothing (as happens in all benchmarks), multithreading will use them and asynchronous nonblocking I/O won't, so the maximum transfer rate for static data in memory over a low-latency network will always be faster with blocking threads.
In real-life applications, if you always have enough work to distribute between cores/processors, your nonblocking I/O process or thread will only depend on the data production and transfer rate, not the raw throughput of the combination of syscalls that it makes. If output buffers are always empty, and input buffers are empty every time a transaction happens, then the data transfer speed is maxed out, and adding more threads that perform I/O simultaneously will only increase overhead. If it is not maxed out, the same applies to data queued before/after processing -- that is, if there is processing. So if worker threads/processes do more than copying data, then giving additional cores to them is more useful than throwing them at I/O.
2008 (Score:2)
This presentation is actually from 2008 (as indicated by every single slide in the PDF -- and thanks for the PDF warning, BTW). Aside from being old, is there any indication that it's still true?
This is news? (Score:2)
True for JAVA, but not generally true... (Score:5, Interesting)
This may be true for Java.
It isn't true for C/C++.
With C/C++ and NPTL, the many-thread blocking IO style yields slightly lower latency at low IO rates, but offers significant latency variability and sharply decreased throughput at higher IO rates.
It seems that the Linux scheduler is much to blame for this -- the number of times that a thread is scheduled on a different CPU increases dramatically with more threads, and this thrashes the caches.
I've seen order-of-magnitude decreases in performance and order-of-magnitude increases in latency as a result of what appears to be the cache thrashing.
Re: (Score:2)
Re:True for JAVA, but not generally true... (Score:5, Interesting)
Unfortunately, nothing I can publish without permission.
I can say that I'm in charge of maintaining the software that terminates all HTTP traffic for Google. Draw your own conclusions.
Re: (Score:3, Funny)
You're head of R&D for a supervillain's start-up attempting to kill the internet?
The ideal method (Score:2)
The best way to write IO is to use one thread or process per CPU core and in that thread use non-blocking IO. I thought everyone knew this.
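A skeleton of that pattern--one event-loop thread per core, each owning its own Selector--might look like this (names and the 100 ms timeout are illustrative; channel registration and dispatch are elided):

```java
import java.io.IOException;
import java.nio.channels.Selector;

public class PerCoreEventLoops {
    // Spawn one event-loop thread per core, each with its own Selector,
    // so connections can be partitioned across cores without shared locks.
    public static Thread[] start() throws IOException {
        int cores = Runtime.getRuntime().availableProcessors();
        Thread[] loops = new Thread[cores];
        for (int i = 0; i < cores; i++) {
            final Selector selector = Selector.open();
            loops[i] = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        selector.select(100);  // wait for events on this loop's channels
                        // ... dispatch ready keys for this loop's connections here ...
                    }
                    selector.close();
                } catch (IOException ignored) {
                }
            }, "event-loop-" + i);
            loops[i].setDaemon(true);
            loops[i].start();
        }
        return loops;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(start().length + " event loops running");
    }
}
```

New connections would be assigned to a loop (round-robin or by hash) and registered with that loop's selector, keeping each core busy while avoiding cross-thread contention.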
Speed isn't the point... (Score:2)
It's not supposed to be faster ... (Score:3, Insightful)
My understanding is that it is not supposed to be faster. It is non-blocking and asynchronous which serves a different need.
Re: (Score:2)
Probably won't happen. My bet on what's going on here is Java OO being so damn heavy it is heavier than the kernel's thread/task struct.
Re: (Score:2)
Well, your bet is wrong.
Re: (Score:2)
I wonder how much slower nio2 will be...