Scalable Nonblocking Data Structures
An anonymous reader writes "InfoQ has an interesting writeup of Dr. Cliff Click's work on developing highly concurrent data structures for use on the Azul hardware (which is in production with 768 cores), supporting 700+ hardware threads in Java. The basic idea is to use a new coding style that involves a large array to hold the data (allowing scalable parallel access), atomic update on those array words, and a finite-state machine built from the atomic update and logically replicated per array word. The end result is a coding style that has allowed Click to build 2.5 lock-free data structures that also scale remarkably well."
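The coding style the summary describes (a flat array for scalable parallel access, atomic compare-and-swap on individual array words, and a small state machine replicated per word) can be sketched in plain Java. This is not Click's actual NonBlockingHashMap code; the class, state names, and methods below are invented for illustration, assuming a simple EMPTY → CLAIMED → PUBLISHED transition per slot:

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

/**
 * Hypothetical sketch of the per-word state-machine style: each array slot
 * moves through EMPTY -> CLAIMED -> PUBLISHED, driven only by atomic updates.
 */
public class SlotStateMachine {
    static final int EMPTY = 0, CLAIMED = 1, PUBLISHED = 2;

    final AtomicIntegerArray states;  // one state word per slot
    final int[] values;               // payload; only read once state is PUBLISHED

    SlotStateMachine(int size) {
        states = new AtomicIntegerArray(size);
        values = new int[size];
    }

    /** Try to publish v into slot i; returns false if another thread won the slot. */
    boolean tryPublish(int i, int v) {
        // Step 1: CAS EMPTY -> CLAIMED. Exactly one thread wins this race.
        if (!states.compareAndSet(i, EMPTY, CLAIMED)) return false;
        values[i] = v;              // Step 2: write the payload while claimed
        states.set(i, PUBLISHED);   // Step 3: volatile write publishes the payload
        return true;
    }

    /** Read slot i, or defaultValue if nothing has been published there yet. */
    int read(int i, int defaultValue) {
        return states.get(i) == PUBLISHED ? values[i] : defaultValue;
    }
}
```

Because every transition is a single atomic step and no state is ever revisited, no thread can observe a torn update and no lock is ever held; a stalled thread can delay only its own slot.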
Re:Java???? (Score:4, Insightful)
Or... is this just a way to avoid having to get the really, really good coders who are more costly than the burn-bags?
Re:Java???? (Score:3, Insightful)
Hmm... lemme think about that. Maybe because Java has decent threading support built into the language? Maybe because the platform is portable to any architecture? Maybe because the JVM can "optimize the hell" out of the running Java code far better than you could "optimize the hell" out of your C++ by hand?
"Java is Slow" is a mantra that is easily 5+ years out of date. Java surpassed C++ performance many years ago, and by such a wide margin that no one even bothers running benchmarks anymore. Anyone repeating the "Java is Slow" mantra is merely branding themselves ignorant.
scalable noNBLocking data sTRructures .. :) (Score:2, Insightful)
KeyWords:
concurrent data structures, hardware threads, java, large array, scalable parallel access, atomic update, words, finite-state machine, lock-free, data structures
Bulk-Synchronous Parallel model, anyone ? (Score:2, Insightful)
Re:Java???? (Score:5, Insightful)
But it's not conclusively faster than C++, at least not in a general sense. For a small task involving lots of simple operations, you'll still often see a significant speed increase using C++. This [multicon.pl] is a good example. Now I'm sure there are more optimizations available on both sides - and plenty of stuff to tweak - but C++ is going to come out ahead by a significant margin on a lot of these tasks.
A good example where the participants on both sides have some motivation is on TopCoder (where I spend a fair bit of time). Performance isn't usually the driving factor in language choice there - but sometimes it is, and when it is the answer is pretty much always C++ (unless it's a comparison between Java BigInteger and a naive implementation of the same in C++).
Reasonably often you'll see people write an initial solution in Java, find it runs a bit too slow, and quickly port it to C++ (or pre-emptively switch to C++ if they think they'll be near the time limit). It's not uncommon to see a factor of two difference in performance.
To be clear - these are not usually "real world" tasks. As more memory and objects come into play, Java is going to do better and better. But these kinds of tasks still exist - there's still plenty of places where C++ is going to be the choice because of performance.
In any case, your contention that Java is so much faster that nobody does benchmarks anymore is unsupported and wrong.
Benchmark from many years ago (Score:3, Insightful)
Now, before you C/C++ or Java fanboys get too excited, the absolute hands-down fastest language on most of the benchmarks was LISP (I think it was Common LISP). I don't use LISP because I hate the syntax (and LISP fanboys regard it as a requirement of club membership that you like it). But the performance is quite impressive (with GC too), and that's what I would look into if I needed top performance from a high-level language. Eat your heart out, C++.
I use Java quite a bit, and the biggest drawback performance-wise is the amount of memory chewed up by the GC schemes and the JIT. It is otherwise quite fast. I often turn off the JIT to save memory (at the expense of CPU), and I would like to see options for a GC algorithm that minimizes memory consumption (at the expense of CPU).
From the video... (Score:3, Insightful)
The other thing is that his algorithm shows a remarkable departure from traditional concurrent programming (as I learned it a decade ago): he's not getting bogged down with locking and synchronized blocks... instead he has a very simple "think about it" approach that uses the state machine as a thinking aid. Whoever posted the video... thank you, that was very enlightening. Perhaps many of these concurrency problems just need some creativity after all?
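The "think about it" style the parent describes comes down to this: instead of asking "what lock protects this?", decide which transitions of a single word are legal and enforce them with a compare-and-swap retry loop. The class below is a hypothetical illustration (a lock-free running-maximum tracker, not anything from Click's talk); the only legal transition is "replace with a strictly larger value", so any interleaving of CAS winners still converges on the true maximum:

```java
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical lock-free maximum tracker built on a single CAS retry loop. */
class LockFreeMax {
    private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

    void offer(long v) {
        long cur = max.get();
        // Retry until either v is no longer an improvement, or our CAS wins.
        while (v > cur && !max.compareAndSet(cur, v)) {
            cur = max.get();  // another thread advanced the state; re-read and re-check
        }
    }

    long get() { return max.get(); }
}
```

There is no blocking anywhere: a thread that loses the CAS race simply observes the new state and decides whether its update is still needed, which is exactly the state-machine reasoning the video advocates.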
Re:Java???? (Score:3, Insightful)
And you're also postulating that there's more likely to be a fully standards-compliant C++ compiler with a standard thread interface on all those esoteric machines of which you speak than a basic JVM?
I understand it's cool to hate Java on Slashdot, but sometimes I think the Java bashers are trying a little too hard.
Re:Sounds great! (Score:3, Insightful)
Researchers don't work at the fringes of what can be done "more easily". They work at the fringes of what is currently possible.
Think about multi-tasking for a moment. If your problem is inherently sequential (e.g. everything is effectively sequentialised by a shared resource that can't be split, such as a piece of hardware), then you can use a single-process event loop. If your problem is inherently parallel (e.g. a web server), then you can use multiple forked processes. In other words: if your problem is easy, use an easy solution.
But not every job is easy. That's why we have multi-threading.
Similarly, these new data structures are designed to solve difficult problems.
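The two easy models the parent describes can be sketched side by side in Java. This is an invented illustration, using a single-threaded executor to stand in for the event loop and a thread pool to stand in for forked processes; class and method names are hypothetical:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/** Hypothetical contrast of the two "easy" multi-tasking models. */
class ConcurrencyModels {

    // Inherently sequential: a single-threaded executor is an event loop.
    // Tasks touching the shared resource never overlap, so no locking is needed.
    static String eventLoop() throws InterruptedException {
        ExecutorService loop = Executors.newSingleThreadExecutor();
        StringBuilder device = new StringBuilder(); // not thread-safe, but only one thread touches it
        for (int i = 0; i < 3; i++) {
            final int n = i;
            loop.submit(() -> device.append(n));
        }
        loop.shutdown();
        loop.awaitTermination(5, TimeUnit.SECONDS);
        return device.toString(); // tasks ran one at a time, in submission order
    }

    // Inherently parallel: independent requests handled by a pool of workers,
    // the threaded analogue of multiple forked processes.
    static int workerPool() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger served = new AtomicInteger();
        for (int i = 0; i < 100; i++) {
            pool.submit(() -> { served.incrementAndGet(); });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return served.get();
    }
}
```

Neither model helps when the work is parallel but the threads must coordinate on shared mutable state; that is the hard middle ground these lock-free structures target.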