Scalable Nonblocking Data Structures
An anonymous reader writes "InfoQ has an interesting writeup of Dr. Cliff Click's work on developing highly concurrent data structures for use on the Azul hardware (which is in production with 768 cores), supporting 700+ hardware threads in Java. The basic idea is to use a new coding style that involves a large array to hold the data (allowing scalable parallel access), atomic update on those array words, and a finite-state machine built from the atomic update and logically replicated per array word. The end result is a coding style that has allowed Click to build 2.5 lock-free data structures that also scale remarkably well."
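For a flavor of the coding style described in the summary, here is a minimal hypothetical sketch in plain Java built on AtomicReferenceArray. The class name, the slot states, and the two-step publish protocol are illustrative assumptions, not Click's actual implementation.

    import java.util.concurrent.atomic.AtomicReferenceArray;

    // Hypothetical illustration of the coding style described above: one big
    // array, per-word atomic updates, and a tiny state machine replicated per
    // slot. The EMPTY -> CLAIMED -> value transitions are made up for
    // illustration; this is not Click's actual code.
    public class SlotStateMachine {
        private static final Object EMPTY = null;           // initial state of every slot
        private static final Object CLAIMED = new Object(); // sentinel: slot is mid-update

        private final AtomicReferenceArray<Object> slots;

        public SlotStateMachine(int capacity) {
            slots = new AtomicReferenceArray<Object>(capacity);
        }

        // Try to publish a value into slot i; returns false if another thread won.
        public boolean tryPublish(int i, Object value) {
            // EMPTY -> CLAIMED: only one thread can win this compare-and-set.
            if (!slots.compareAndSet(i, EMPTY, CLAIMED)) {
                return false;
            }
            // CLAIMED -> value: the claim means no one else can race us here.
            slots.set(i, value);
            return true;
        }

        public Object read(int i) {
            Object v = slots.get(i);
            return (v == CLAIMED) ? null : v; // an in-flight slot reads as empty
        }
    }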
why (Score:5, Interesting)
Re:why (Score:5, Informative)
It was not meant to imply that the 768 processor system will use exactly 700 worker threads. It was meant to imply that the system breaks through the traditional scalability limits of 50-100 threads, thus the 700+.
Re: (Score:2)
Surely, 640 threads ought to be enough for anybody.
But seriously, have you seen the price of this beast? HUGE price tag. Why not build your own cluster?
There are technologies today (like, say, Terracotta Server [markturansky.com]) that allow for easy distribution of work across a large number of JVMs.
I suppose the companies that need all those cores and threads in one machine can afford the Honkin' Big Iron. For the rest of us, clustering is getting cheaper and cheaper these days.
Re: (Score:2)
Chances are it is being used for R&D and not actually crunching numbers.
When systems like that hit mass production then we have the data structures for them.
Can't do that with a cluster.
Re: (Score:2)
Hint: it's not about how many CPUs you have or how fast they are; it's about how fast the interconnects between processors are.
Re: (Score:2)
Of course, the same programming techniques used to build this are absolutely not going to scale to a cluster. Probably vice versa, but I'm not convinced of that yet.
Re:why (Score:5, Informative)
Consider an in-memory database. (MySQL Cluster (NDB), for example.) You wouldn't want to pass the entire database around (or even portions of it) for each 'job'. Instead, you'd want to partition the data so that a massive working set resides on each partition, and do the inter-data operations there. Then your message passing is limited to only interactions that aren't held in the local memory space (i.e. NUMA).
With Terracotta you are breaking a sequential application into a series of behind-the-scenes messages which go from clustered node to clustered node as necessary (I'm not very well versed on this product, but I've reviewed it a couple times).
Thus for certain problems that do not nicely break down into small messages, you are indeed limited to single-memory-space hardware. And thus, the more CPUs (leveraging the MESI cache-coherence protocol), the more efficient the overall architecture.
Now, I can't imagine that a 768CPU monster is that cost effective - your problem space is probably pretty limited. But a simultaneous 700 thread application is NOT hard to write in java at all. I regularly create systems that have between 1,000 and 2,000 semi-active threads. But I try to keep a CPU-intensive pool down to near the number of physical CPUs (4, 8 or 16 as the case may be). Java has tons of tools to allow execution-pools of configurable size.
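As a sketch of that "execution pools of configurable size" point, something like the following; the class name, task count, and crunch() placeholder are made up for illustration.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PoolDemo {
        public static void main(String[] args) {
            // Keep the CPU-bound pool near the number of physical CPUs, as above;
            // thousands of tasks can still be queued onto those few threads.
            int cpus = Runtime.getRuntime().availableProcessors();
            ExecutorService cpuPool = Executors.newFixedThreadPool(cpus);

            for (int i = 0; i < 2000; i++) {
                final int job = i;
                cpuPool.submit(new Runnable() {
                    public void run() {
                        crunch(job);
                    }
                });
            }
            cpuPool.shutdown();
        }

        private static void crunch(int job) {
            // stand-in for CPU-intensive work
        }
    }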
Re: (Score:2)
Consider an in-memory database.
OK. [apache.org]
Instead, you'd want to partition the data so that a massive working set resides on each partition, and do the inter-data operations there.
Got it. [danga.com] Can't find a link, but I'm thinking specifically the hashing mechanism. Given a key, I can find which node should be caching that key.
Thus for certain problems that do not nicely break down into small messages, you are indeed limited to single-memory-space hardware.
I'm not sure I've seen such a problem. The CPU cache alone is an example of what happens when you break a problem down into smaller chunks.
I can see where a single memory space might do better, though.
a simultaneous 700 thread application is NOT hard to write in java at all.
Once you know how, I suppose. Consider that most programmers who use threads find ways to deadlock on one or two cores.
The reason I'm drawn to mes
Re: (Score:2)
Will someone please take the expected jokes about "patching" or "darning" over the 640 thread limit?
Re: (Score:2, Informative)
Re: (Score:2)
How I'd love to have one of these boxes
Re: (Score:2, Funny)
Inspiration... (Score:5, Informative)
Re: (Score:2)
Google Talk (Score:5, Informative)
Re: (Score:3, Funny)
768 Cores? (Score:3, Funny)
Re:768 Cores? (Score:4, Funny)
Google Tech Talk (Score:4, Informative)
scalable noNBLocking data sTRructures .. :) (Score:2, Insightful)
KeyWords:
concurrent data structures, hardware threads, java, large array, scalable parallel access, atomic update, words, finite-state machine, lock-free, data structures
Re:scalable noNBLocking data sTRructures .. :) (Score:5, Interesting)
Your turn.
Theory Pong (Score:3, Funny)
Catamorphisms. Linear Logic.
Back to you
false sharing? (Score:2, Interesting)
1. A common killer in parallelization is false sharing. That is, threads on two processors fight over a cache-line even though they are accessing independent variables. A cache-line is typically bigger than an individual variable. The approach of using adjacent elements of an array for parallelism sounds naive. One needs to pad the array.
2. Updating a shared variable, especially a non-scalar, in an inner loop is naive. One should ref
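To illustrate the padding idea from point 1 above, a rough sketch; the stride of 8 longs assumes a 64-byte cache line, which is typical but not universal, and the class is purely illustrative.

    import java.util.concurrent.atomic.AtomicLongArray;

    // Hypothetical padded counter array: each logical counter is spaced 8 longs
    // (64 bytes) apart so that two threads' counters never share a cache line.
    public class PaddedCounters {
        private static final int STRIDE = 8; // assumes 64-byte cache lines

        private final AtomicLongArray counters;

        public PaddedCounters(int nThreads) {
            counters = new AtomicLongArray(nThreads * STRIDE);
        }

        public void increment(int threadId) {
            counters.incrementAndGet(threadId * STRIDE);
        }

        public long get(int threadId) {
            return counters.get(threadId * STRIDE);
        }
    }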
Re: (Score:2)
As for Java, in the article Dr. Click says it has a well-understood and well-implemented memory model.
Re: (Score:3, Informative)
1. A common killer in parallelization is false sharing. That is, threads on two processors fight over a cache-line even though they are accessing independent variables. A cache-line is typically bigger than an individual variable. The approach of using adjacent elements of an array for parallelism sounds naive. One needs to pad the array.
The keyword in your statement is "typically". Click is working on the Azul processor, which is desig
Geek serendipity in a summary (Score:4, Funny)
Now *that* is what I call geek speak.
Re: (Score:2)
Now *that* is what I call geek speak.
Well, sort of lock-free. (Score:2, Informative)
It's not really "lock free". The algorithms in the slides still have WHILE loops wrapped around atomic compare-and-swap operations, so they are effectively spin locks, keeping the CPU busy while the other CPUs make progress. However, the design is such that the WHILE loops shouldn't stall for too long.
This concept has two parts - a way of constructing bigger atomic operations from hardware-supported word-sized atomic operations, and a scheme for resizing arrays while they're in use. The latter is more impo
No it is Lock Free (Score:2, Interesting)
Re: (Score:2)
From the article: (Score:4, Funny)
Clearly, the data structures have been touched by his noodly appendage.
one per (Score:3, Funny)
Azul hardware (which is in production with 768 cores), supporting 700+ hardware threads in Java
Hmmm... one core per Java thread?
That sounds about right for Java apps...
Re: (Score:2)
Bulk-Synchronous Parallel model, anyone ? (Score:2, Insightful)
Re: (Score:2)
So where's the code ?
(Yeah, I know about it, played with it... lots of noise, not enough code!)
WTF? (Score:2, Informative)
Re: (Score:2)
Quoth the article: "On Azul's hardware it obtains linear scaling to 768 CPUs"
That kind-of implies 768 hardware threads are in use.
Re: (Score:2)
The factor does not have to be 1.
If it's 4, then it would be 4 threads for one core, and 2800 threads for 700 cores. And that would still be linear scaling.
From the video... (Score:3, Insightful)
The other thing is that his algorithm shows a remarkable departure from traditional concurrent programming (as I learned it a decade ago) and he's not getting bogged down with locking and synchronize... instead he has a very simple "think about it" approach that uses the state machine as a thinking aid. Whoever posted the video... thank you, that was very enlightening. Perhaps many of these concurrency problems just need some creativity after all?
Welcome back (Score:2)
And to all you industry people, if you'd stop firing everyone when they turn 30, you'd know this too!
Re:Sounds great! (Score:5, Funny)
Re: (Score:2, Insightful)
Re: (Score:3, Insightful)
Researchers don't work at the fringes of what can be done "more easily". They work at the fringes of what is currently possible.
Think about multi-tasking for a moment. If your problem is inherently sequential (e.g. everything is effectively sequentialised by a shared resource that can't be split, such as a piece of hardware), then you can use a single-process event loop. If your problem is inherently parallel (e.g. a web server), then you can use multiple forked processes. In other words: If your proble
Re:Sounds great! (Score:5, Informative)
With such applications you will often have an array of variables that are accessible by all threads (e.g. current processing modes of the application).
To preserve the integrity of the system, you need to allow only one thread to write to each variable at any time. If you have a single read/write lock for all the variables, you will end up with a large number of threads queuing up in a suspended state waiting to read a variable while one thread writes.
The author uses the Load-Link/Store-Conditional [wikipedia.org] pair of instructions to guarantee that the new value is written atomically. Load-Link loads the value from memory. Store-Conditional writes the new value back only if no other write has been performed on that location since the load; otherwise it fails.
Compare-And-Swap [wikipedia.org] only replaces the variable with a new value if the current value of the variable matches a previously read old value.
Using these methods (having the writer check for any changes) eliminates the need for suspending threads when trying to read shared variables.
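In Java terms, that "writer checks for changes" idiom boils down to a compare-and-swap retry loop along these lines; this is a generic sketch, not code from the article, and the names are made up.

    import java.util.concurrent.atomic.AtomicInteger;

    public class CasLoopExample {
        private final AtomicInteger mode = new AtomicInteger(0);

        // Atomically bump the value without ever suspending a thread:
        // read, compute, then commit only if nobody changed it in the meantime.
        public int bumpMode() {
            while (true) {
                int old = mode.get();       // read the current value
                int updated = old + 1;      // compute the new value privately
                if (mode.compareAndSet(old, updated)) {
                    return updated;         // commit succeeded
                }
                // another thread won the race; loop and retry with a fresh read
            }
        }
    }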
Re:Sounds great! (Score:4, Funny)
Thanks
Re:Sounds great! (Score:5, Funny)
Re: (Score:2)
They said they'd provide nonblocking executions in a scalable manner.
Sounds bogus? (Score:2)
Well... if I remember what I learned from OS and hardware classes right, LL/SC and CAS operations do involve locks at the hardware level. These operations may need no OS system call and may use no explicit semaphore or lock, but the memory bus has to be locked briefly -- especially to guarantee all CPUs see the same updated value, it has to do a write-through and cannot just update the values in the cache local to the CPU. And when you have a large number of CPU cores running, the memory bus becomes the bottlenec
Re:Sounds bogus? (Score:5, Informative)
If a thread locks in software, any subsequent thread must block, waiting for the first thread to finish. If the thread is preempted, then the waiting threads wait needlessly. If the thread dies, then the waiting threads are hosed.
Lock-free techniques prevent this problem, at the expense of more complicated algorithms and data structures. The basic structure of most lock-free algorithms is read a value, do something to it, and then attempt to commit the changed value back to memory. The attempt fails if another thread has changed the value from underneath you, and you must try again. (This is detected through operations like compare-and-swap.) This allows greater concurrency and guarantees that the system as a whole will make progress, even if a thread is preempted or dies.
Lock-free algorithms and data structures are a well-established area. What Click has done here is provide a Java implementation of some data structures that yield good performance on the manycore systems his company makes.
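As a concrete example of that read/modify/commit pattern, here is a sketch of the classic Treiber-style lock-free stack in Java. It is not one of Click's structures, just the textbook illustration of the technique.

    import java.util.concurrent.atomic.AtomicReference;

    public class LockFreeStack<T> {
        private static class Node<T> {
            final T value;
            Node<T> next;
            Node(T value) { this.value = value; }
        }

        private final AtomicReference<Node<T>> head = new AtomicReference<Node<T>>();

        public void push(T value) {
            Node<T> node = new Node<T>(value);
            while (true) {
                Node<T> current = head.get();        // read
                node.next = current;                 // modify (still private)
                if (head.compareAndSet(current, node)) {
                    return;                          // commit; fails if head moved
                }
                // another thread pushed or popped first; retry with the new head
            }
        }

        public T pop() {
            while (true) {
                Node<T> current = head.get();
                if (current == null) {
                    return null;                     // empty stack
                }
                if (head.compareAndSet(current, current.next)) {
                    return current.value;
                }
            }
        }
    }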
Re: (Score:2)
Re: (Score:2)
I'm not sure there is anything radically new in this project. However, given the exceptional difficulty of writing correct lock-free algorithms, just about any project explo
Re:Sounds bogus? (Score:5, Informative)
That's not strictly true.
First, most lock operations do not require a full bus lock. All you have to do is to ensure atomicity of the load and store. Which effectively means you have to 1) acquire the cache line in the modified state (you're the only one who has it here), and 2) prevent system probes from invalidating the line before you can write to it by NACKing those probes until the LOCK is done. Practically this means the locked op has to be the oldest on that cpu before it can start, which ultimately delays its retirement, but not by as much as a full bus lock. Also it has minimal effect on the memory system. The LOCK does not fundamentally add any additional traffic.
Second, the way the value is propagated to other CPUs is the same as any other store. When the cache line is in the modified state, only one CPU can have a copy. All other CPUs that want it will send probes, and the CPU with the M copy will send its data to all the CPUs requesting it, either invalidating or changing to Shared its own copy depending on the types of requests, coherence protocol, etc. If nobody else wants it, and it is eventually evicted from the CPU cache, it will be written to memory. This is the same, LOCK or no.
Third, an explicit mutex requires at least two separate memory requests, possibly three: One to acquire the lock, and the other to modify the protected data. This is going to result in two cache misses for the other CPUs, one for the mutex and one for the data, which are both going to be in the modified state and thus only present in the cache of the cpu that held the mutex. In some consistency models, a final memory barrier is required to let go of the mutex to ensure all writes done inside the lock are seen (x86 not being one of them).
Fourth, with fine enough granularity, most mutexes are uncontested. This means the overhead of locking the mutex is really just that, overhead. Getting maximal granularity/concurrency with mutexes would mean having a separate mutex variable for every element of your data array. This is wasteful of memory and bandwidth. Building your assumptions of atomicity into the structure itself means you use the minimal amount of memory (and thus mem bw), and have the maximal amount of concurrency.
So basically, while it isn't necessarily "radical" (practical improvements often aren't), it is definitely more than bogus marketing. There's a lot more to it than that.
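To make that fourth point concrete, a toy sketch contrasting a per-element mutex with atomicity built into the array itself; the names and structure are illustrative, not Azul code.

    import java.util.concurrent.atomic.AtomicLongArray;

    public class GranularityDemo {
        // Fine-grained mutex approach: one lock object per element, which doubles
        // the memory touched (a lock word plus the data word) even when uncontended.
        private final long[] data;
        private final Object[] locks;

        // Atomicity built into the structure: one word per element, no lock object.
        private final AtomicLongArray atomicData;

        public GranularityDemo(int n) {
            data = new long[n];
            locks = new Object[n];
            for (int i = 0; i < n; i++) {
                locks[i] = new Object();
            }
            atomicData = new AtomicLongArray(n);
        }

        public void addLocked(int i, long delta) {
            synchronized (locks[i]) { // mostly uncontended, so this is pure overhead
                data[i] += delta;
            }
        }

        public void addAtomic(int i, long delta) {
            atomicData.addAndGet(i, delta); // a single atomic update, no separate lock
        }
    }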
Can also be done with a clean cache (Score:3, Interesting)
Re: (Score:2)
Ooh, don't know anything about that code actually. A cache flush is a lot heavier weight than a normal lock, if you're only accessing one cache line that is, but I could see it being a good idea in a number of situations in OSes. Interesting.
Re: (Score:2)
Re: (Score:2)
Talk about thinking outside of the box... probably the reason I don't work for Google =)
Re:Sounds great! (Score:5, Funny)
Is this something to do with a Blondie tour?
Re:Java???? (Score:4, Insightful)
Or... is this just a way to avoid having to get the really, really good coders who are more costly than the burn-bags?
Re:Java???? (Score:5, Funny)
Or... is this just a way to avoid having to get the really, really good coders who are more costly than the burn-bags?
Re: (Score:2)
Oh, and Erlang is wannabe-functional. Go play with Haskell if you want a purely-functional language. No side effects means, among other things, the ability to have the parallelism done for you, instead of having to explicitly spawn a thread -- or
Re: (Score:2)
I'm also not entirely sure I like transactions yet. Haven't thought hard enough about it, yet.
Re: (Score:2)
Re: (Score:2)
When you have a highly parallel system, it is counter-productive to try to make the workload sequential.
Re: (Score:2)
The parent, on the other hand, I can agree with: Java development is not much faster than C++ provided you are working on more esoteric things and can't take advantage of the huge library selection. The syntax and formal patterns are mostly the same, after all. Developing a mass
Re: (Score:3, Insightful)
And you're also postulating that there's more likely to be a fully standards-compliant C++ compiler with a standard thread interface on all those esoteric machines of which you speak than a basic JVM?
I understand it's cool to hate Java on Sla
Stoopid (Score:2)
And coding a good, hyper-parallelizable algorithm in Java vs C will give you at worst a 2x slowdown, and at the same time an n-times speedup, where n=768.
And I'm not even talking about the mess ASM would be.
Re: (Score:3, Insightful)
Hmm... lemme think about that. Maybe because Java has decent threading support built into the language? Maybe because the platform is portable to any architecture? Maybe because the JVM can "optimize the hell" out of the running Java code far better than you could "optimize the hell" out of your C++ by hand?
"Java is Slow" is a mantra that is easily 5+ years out of date. Java surpassed C++ performance many years ago, and by such a wide margin that no one even bothers runni
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Okay, I'll agree that well-written Java code is generally performance-competitive with compiled code, but this is a pretty sweeping assertion. Do you have any evidence for it -- or is it just a little too convenient that "no one even bothers" with benchmarks?
Benchmark from many years ago (Score:3, Insightful)
Now, before you C/C++ or Java fanboys get too excited, the absolute hands down fastest language on most of the benchmarks
Re:Java???? (Score:5, Insightful)
But it's not conclusively faster than C++, at least not in a general sense. In terms of a small task involving lots of simple operations, you'll still often see a significant speed increase using C++. This [multicon.pl] is a good example. Now I'm sure there are more optimizations available on both sides - and plenty of stuff to tweak - but C++ is going to come out ahead by a significant margin on a lot of these tasks.
A good example where the participants on both sides have some motivation is on TopCoder (where I spend a fair bit of time). Performance isn't usually the driving factor in language choice there - but sometimes it is, and when it is the answer is pretty much always C++ (unless it's a comparison between Java BigInteger and a naive implementation of the same in C++).
Reasonably often you'll see people write an initial solution in Java, find it runs a bit too slow, and quickly port it to C++ (or pre-emptively switch to C++ if they think they'll be near the time limit). It's not uncommon to see a factor of two difference in performance.
To be clear - these are not usually "real world" tasks. As more memory and objects come into play, Java is going to do better and better. But these kinds of tasks still exist - there's still plenty of places where C++ is going to be the choice because of performance.
In any case, your contention that Java is so much faster that nobody does benchmarks anymore is unsupported and wrong.
Re: (Score:2, Interesting)
In any case, the author of that test failed. The test includes start-up times in the results. Java is always going to lose micro-benchmarks where start-up time is included. Why? Because the JVM load gets included in the benchmarks. As anyone who knows
Re: (Score:2)
The test includes start-up times in the results.
This may be a significant factor for some of his smaller tests, but I don't think it jeopardizes the overall conclusion. For example, the Ackerman test was 15s vs. 60s. There isn't going to be more than a second of startup, and most of the tests [multicon.pl] are showing differences on the order of many seconds.
In terms of "is it harder to write fast code in C++", I guess it depends on what you're good at. Looking at the changes he did to the C++ code, none of it is esot
Re: (Score:2)
More accurately, Java is usually more than fast enough, even though it's nearly never as fast as equivalent C++. When you get down to it, performance rarely matters much.
Re: (Score:3, Interesting)
Of course when your boss/customer requires Java, use Java
I'm mixed on Lisp- Lisp is kind of high level and kind of not... I guess I'm just too lazy - perl has CPAN = lots of wheels I don't have to reinvent - and document
It's not the Java language that is the problem (Score:2)
So it goes, the circular argument....
Re: (Score:2)
Re: (Score:2)
Whether you will actually achieve the highest possible performance depends on whether you actually write the most efficient code possible. This, in turn, depends on your skills and the time you have to complete the project.
Thus, what you will see in practice is that there is a lot of v
Re: (Score:3, Informative)
He never said that Java had surpassed C in speed, he said that Java had surpassed C++. C++ library classes are not the same as C library classes, and many C++ libraries (especially the ones outside STL and Boost) are woefully under optimized. Java has many more optimized libraries "packaged in" with the language itself.
Second, neither the Doom nor the Unreal engine is multi-threaded. Java has threading support built into the language. To get the same with C you'd have to use POSIX threads (killing Windows
Re: (Score:2)
Java is not faster than C++ and it never will be. You can't fundamentally add the overhead of a byte code interpreter and somehow come out ahead. What Java does, and does very well, is save programmers from themselves.
Like most programmers I am not God's gift to software development. Chances are pretty good that as any given software project I am working on becomes larger and starts to involve more objects and/or data structures, the memory management in Java is going to be bette
Re: (Score:2)
Java doesn't necessarily involve a byte-code interpreter. JVMs have supplied Just In Time compilation for years now.
That's not a panacea though: optimization is generally a difficult task, and good optimization can be quite difficult indeed -- many (perhaps most) NP-complete problems are basically ones of optimization. While optimizing code isn't nec
Re: (Score:2)
Re: (Score:2)
Mind you the manual does specifically state that you shouldn't use them inside interrupts.
Re: (Score:2)
What's absurd about one bit of native binary code outperforming a different bit of native binary code? o_O
Re: (Score:2)
Well, nothing... but based on the GP's usage of the phrase "JIT bytecode", my best guess is that he's talking about Python (compiles source code to bytecode "JIT") as opposed to Java (compiles bytecode to machine code JIT).
Re:Java???? (Score:4, Informative)
Re: (Score:2)
Even if the read in Thread B occurs after the write in Thread A chronologically, Thread B may be looking at stale data.
If Thread B has stale data, then its commit (through a compare-and-swap or similar atomic operation) should fail, and the thread should try the operation again.
Take a look at the algorithms in the presentation. I've done a fair amount of lock-free programming [vt.edu], but all of it was in C. His algorithms look like standard lock-free practices, but since I'm not familiar with what correct lock-free Java code looks like, I can't say with confidence.
I will, however, say that I would be surprised if Cliff Click mad
java.util.concurrent.atomic (Score:2)
Re:Java???? (Score:5, Insightful)
Re: (Score:3, Interesting)
Umm, because Azul runs the Java in hardware. It *is* optimized by being in Java.
Re: (Score:2)
To get the work done and working correctly before the hardware is obsolete.
Re: (Score:2)
Re: (Score:2)
Sorry, remind me what language the JIT compiler is written in. Java, is it? No, thought not.
Re: (Score:2, Informative)
In the problems that TFA addresses, I'd wager that most of the time is spent in dissemination barriers of some sort, since invariably the problems in parallel computing move into issues within the problem domain (which is ideally what we want, after all).
As for a JIT being inefficient compared to a static optimizing compiler, it depends so much on the code in question and on the platform, as to not be something you can make blanket statements about.
Let's hear from some HPC researchers on this. Get some rea
Re:Java???? (Score:4, Funny)
Sure, Java manages memory for you, but it's generally much easier to incorporate a garbage collector into C than it is to write java without file I/O.
Re: (Score:2)
Call me old fashioned, but ummmmm, you malloc() it , you free() it.
I mean, when did this whole notion of malloc() it and forget it come to pass? Did Ron Popeil suddenly start writing programming languages or something? I don't care how complex your code is; at some point that bit of memory is no longer required, and at that point it is your job as the programmer to clean up after yourself and free() it, no?
Re: (Score:2)
Re:"2.5"? WTF? (Score:5, Informative)
I doubt that very much (Score:2)
But a FSM-controlled data access for multi-threading is a bit different. The closest I've seen is the implementation of dual-port RAM which uses hardware state machines to control access to the underlying shared RAM.
Oh, I've been doing embedded for over 20 years too.