Multi-Threaded Programming Without the Pain 327
holden karau writes "Gigahertz are out and cores are in. Programmers must begin to develop applications that take full advantage of the increasing number of cores present in modern computers. However, multi-threaded development has been notoriously hard to do. Researcher Stefanus Du Toit discusses and demonstrates RapidMind, a software system he co-authored, that takes the pain out of multi-threaded programming in C++. For his demo he created a program on the PlayStation 3 representing thousands of chickens, each independently tracked by a single processing core. The talk itself is interesting but the demo is golden."
Which Comes First? (Score:5, Funny)
--josh
Re:Which Comes First? (Score:5, Funny)
Re:Which Comes First? (Score:4, Funny)
Re:Which Comes First? (Score:5, Funny)
Deadlocked! (Score:2, Funny)
Huh? (Score:5, Insightful)
I think what he meant was 'each tracked in a separate thread'... obviously each core is still handling many threads. I haven't watched the presentation and don't plan to until later today; too much to do, and I'd rather read something about it. It just sounds like it provides an efficient, high-level way to write a multi-threaded app. Evolutionary but not revolutionary?
Re:Huh? (Score:5, Insightful)
Oh, and there's no such thing as "easy" multi-threading. Hell, the average programmer can't even grasp OO, so what makes them think they can grasp threading, which has many, many more aspects to it?
Re:Huh? (Score:4, Interesting)
Fsck the chickens... show me what this does with a real game or a real-world app that lends itself to highly parallel operations, then demo it on a quad quad-core Xeon.
Re:Huh? (Score:4, Funny)
Re:Huh? (Score:5, Interesting)
Not scalable? I beg to differ. Thousands of threads scale a lot better than just two or four or whatever, since with thousands you don't really have an upper limit on how many CPUs you want to throw at the problem. The real issue with threads is that OS threads are extremely slow, so you can't have thousands of them or your machine would slow to a crawl. Threads are also painful to work with, since the languages just aren't up to the task.
However, for both of these issues there exist solutions, namely Erlang: using user-level threads there is no upper limit, you really can give each chicken its own thread without a problem, and the language is also built from the ground up to work nicely with threads.
Now I haven't seen the talk yet (BitTorrent is still busy downloading it), but I seriously doubt that it will just be yet another simple wrapper class.
Re:Huh? (Score:5, Informative)
Using the threadpool concept, however, you can tune the size of the threadpool via performance metrics from the threads in the threadpool for the optimum size of threadpool, after which you can place however many objects on the pool you'd like. Generally, this is based on the work the thread has to do. If there is no I/O blocking, I've found that 2-3 threads per CPU with moderate CPU time work units will load it to 100% (read moderate CPU time work units as work units that take on the order of 100-1000 ms to complete). If you start adding in any type of I/O blocking, including large amounts of memory access, then that number goes up. A DB retriever system wound up running 64 threads for my particular work load due primarily to the lag involved in the synchronous calls made to the DB. I could have tuned that further using future tasks and reducing the number of threads (a Doug Lea addition to the JDK 1.5 and also available in his previous concurrency library) but my particular case didn't have any negative effects by running 64 threads, so we left it at that. This particular DB access module ran across 64 systems (64*64 threads) serving roughly 35K concurrent customers.
I haven't run Erlang, so can't comment. I have heard nice things about it though, and I'm curious about it. One day I'll have enough time to play with it.
Re:Huh? (Score:5, Insightful)
What makes massive concurrency easy to grasp in these languages is one of two things:
1) In Erlang and Termite (a Scheme dialect) there is no mutable state and there are no globals. Every function is in essence a "service" that simply receives messages and responds with replies. There is no need to think about locking in such a system, and there are very easy message-passing idioms for doing what you would normally do with mutable object orientation.
2) In languages like Haskell, there is no concept of a "thread" at all... not even a single thread. There is no concept of "ordering". Things are defined as they are in mathematics: as relationships between functions and variables. No mutable state is allowed. This strictness allows the compiler to draw very deep conclusions about what can be parallelized. The compiler can then load-balance under the covers across any number of procs without exposing any issues of concurrency to the user at all.
So yes, in Java (and OO in general), concurrency is very, very difficult. In other paradigms though it can be trivial, or even transparent.
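For readers who haven't used Erlang, the mailbox idiom from point 1 can be approximated even in C++. This is only an illustration of the style, not Erlang semantics: the mailbox uses a lock internally, but the application state itself is never shared, so the program's logic needs no locking.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// A blocking mailbox: the only channel between threads is message passing.
template <typename Msg>
class Mailbox {
public:
    void send(Msg m) {
        std::lock_guard<std::mutex> lk(m_);
        q_.push(std::move(m));
        cv_.notify_one();
    }
    Msg receive() {  // blocks until a message arrives
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Msg m = std::move(q_.front());
        q_.pop();
        return m;
    }
private:
    std::queue<Msg> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

// A counter "service" in the Erlang style: all of its state lives inside
// the actor's own thread, so nothing about it ever needs a lock.
struct Add { int amount; Mailbox<int>* reply_to; };

inline void counter_actor(Mailbox<Add>& inbox, int rounds) {
    int total = 0;  // private state, never shared
    for (int i = 0; i < rounds; ++i) {
        Add msg = inbox.receive();
        total += msg.amount;
        msg.reply_to->send(total);  // respond with a reply, service-style
    }
}
```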
Re: (Score:3, Interesting)
Yes, the upper limit is thousand(s)! Go directly to jail. Do not pass Go. Do not collect $200.
Seriously, with companies already offering 4 cores per CPU, and promising to offer 16 cores in the near future, and Moore's law being what it is, you don't exactly have to be a visionary to predict that the futu
Re: (Score:3, Informative)
As it goes, when it comes to multithreading, the model used by C++, Java and similar languages is rapidly becoming outdated.
Bah humbug (Score:2, Insightful)
Multithreaded development is commonplace in applications that need it. The places it's not common in are:
-- old-style Unix development, because of the 'lightweight process model'. It's a unix-ism that's on the way out but until it disappears we will have some things like Ruby that don't 'get it'.
-- places that have absolutely no need for it, which certainly includes the chicken demo. One core per chicken?? Seems more like the guy just discovered threads but hasn't quite grasped what they're for.
Re:Bah humbug (Score:4, Insightful)
Multithreading is a tool. Just like more traditional tools, like the hammer, this one is useful for certain applications. But multithreading is not the only tool at your disposal - people need to stop looking at everything as if it were a nail.
Re:Bah humbug (Score:4, Insightful)
I'm not sure I follow you there. Lightweight process models are perfect for multi-cores. The more the merrier. Given the abundance of high-quality networking and commodity machines, heavyweight programs that use internal threads (outside of very niche areas) are less suitable for distributed computing than lightweight process models that can call across the network or the OS to other lightweight processes. A heavyweight process can only scale to the number of cores available on the machine it is running on, whereas a flock of lightweight processes can scale to the locally available cores and on to other machines in a distributed fashion, without a major bump in the road between local and remote. Any machine that has multi-cores today could easily run, say, one Ruby process per core with negligible overhead.
Re:Bah humbug (Score:4, Insightful)
And it's silly for it to be "on the way out".
Anyone remember the Amiga? It had a preemptive multitasking OS that lacked hardware memory protection because the hardware it was running on couldn't support it. And while the OS itself was very fast and efficient, the overall system was relatively crash-prone, because any memory-related programming error in any running application had a decent chance of taking down the system.
Fast forward to today. Every computer sold has hardware memory protection built-in. Anyone who doesn't know why that's a good thing needs to spend time on an Amiga.
And yet, despite that, threads are all the rage. Why? Because people have this idiotic belief that they're somehow "more efficient" than processes. Such people probably program about as well as they think, which is to say not very well. Threads are indeed more efficient at context switching than processes, but the real question is: does that really matter? In the vast majority of cases, it doesn't, because in the vast majority of cases multiple threads are being used to make the user interface responsive. There's no way a human being can tell the difference between a millisecond-level context switch time and a microsecond-level one.
On top of that, processes bring one critical advantage to the table that threads don't: memory protection. And for the same reason memory protection is important at the OS and hardware level, so too is it important at the process and thread level: it allows clean, protected separation of concern and greater overall application stability.
The vast, vast majority of applications that are multithreaded don't actually need the slight additional context switch performance advantage that threads bring to the table, but they very much need the memory protection facilities that processes bring to the table. Which is another way of saying that if your application needs concurrency, you're a fool if you blindly use threads instead of processes.
Even Windows supports fork() these days, with the POSIX subsystem (available, as far as I know, on any Windows 2000 and later system), so creating a clone of your current process is dirt simple even under Windows. End result: application authors have no good reason to use threads over processes unless they've actually done the math and can prove that their application really needs the slight performance advantage of threads more than the significant reliability advantage of processes.
As to the other reason for using threads, the sharing of memory, there's this really cool new technology out these days. Maybe you've heard of it. It's called "shared memory". It's only been available for 20 years or so. No wonder most people haven't heard of it. Being forced to explicitly declare what's shared and what isn't is a good thing, because it makes your program easier to maintain, easier to debug, and more reliable -- all at the same time.
The bottom line is this: if you need concurrency in your application, you should be using processes, not threads. If you insist on using threads, you'd better have a damned good reason for it, because the reliability implications of threads are hugely negative while the performance implications are modest at best.
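Whatever one makes of the conclusion, the mechanics this argument rests on — private memory by default, sharing only what is explicitly declared — are easy to demonstrate on a POSIX system. A sketch, assuming Linux-style fork() and anonymous mmap(); the function name is made up for illustration:

```cpp
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

// Two processes, each with private memory by default, explicitly sharing
// one region. A write to ordinary memory in the child is invisible to the
// parent; a write to the mmap'd MAP_SHARED region is not.
inline int shared_counter_demo() {
    int* shared = static_cast<int*>(mmap(nullptr, sizeof(int),
                                         PROT_READ | PROT_WRITE,
                                         MAP_SHARED | MAP_ANONYMOUS, -1, 0));
    *shared = 0;
    int private_val = 0;
    pid_t pid = fork();
    if (pid == 0) {            // child process
        *shared = 42;          // visible to the parent: explicitly shared
        private_val = 99;      // invisible to the parent: private copy
        _exit(0);
    }
    waitpid(pid, nullptr, 0);  // parent waits for the child to finish
    int ok = (*shared == 42 && private_val == 0) ? 1 : 0;
    munmap(shared, sizeof(int));
    return ok;
}
```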
Re: (Score:2, Insightful)
Please correct me if I'm wrong, but it seems to me this discussion has gone into apples and oranges mode. Threads, as far as I'm aware, are supposed to be used for single, explicit tasks and always under supervision by a parent thread. I've used multi-threading with excellent results, but then I've taken pains to ensure that the threads don't have any privileges whatsoever. Processes, on the other hand, are more like stand-alone programs working in the same context.
Re: (Score:2, Interesting)
Since you need a reason, here's one: it's called concurrency. With processes I have to consume finite system resources to handle concurrency issues or roll my own, which is called reinventing the wheel (aka a waste of time). Thread libraries will do this for me.
Re: (Score:2)
Right now C (and other imperative languages) are starting to look like assembler did in the 50s and 60s, with lots of people insisting that the only way to get decent performance is to program at the lowest level possible. As the number of cores inc
Re: (Score:2)
You haven't done a lot of GUI programming, have you?
From your text above, I'd guess you worked in Microsoft for the Outlook programming team.
Re:Bah humbug (Score:5, Interesting)
If you need concurrency in your apps, there isn't that much between threads and processes. However, if you need inter-process communication then you are far better off with threads: they are significantly faster with respect to locking than processes, since all process-based locks must be taken at the OS level, using shared (and finite) system resources. Threads can just use a critical section and be done with it, with almost no overhead.
Threads are not more efficient at context switching than processes, the same procedure happens whether a thread is switched, or a process is (in fact, a process is really an app with 1 thread). However, as threads can share memory more efficiently, locking is often not needed as much so they appear to be more efficient.
The best argument for threads v processes is Apache. Personally, I agree with the Apache group that Apache 2 with its thread-based model is better. They should know.
Re:Bah humbug (Score:4, Informative)
This and the reduced startup time are the most compelling reasons to use threads instead of processes on a single core.
However, on a large number of cores, things aren't so clear-cut, since if you have as many cores as active processes, you're not doing the context switching as much, and the benefit of using threading to reduce cache flushes isn't so clear. You'd still benefit from the quick startup of threads, so for things like a highly concurrent web server that creates a thread per user, threads may still be a better solution.
Interestingly, the much maligned cooperative threads (user-space) are the fastest of all since the programmer can control when the context switch happens. However, if there's blocking or an infinite loop, the whole application will hang. You have to use asynchronous I/O and make sure no thread runs for too long.
Like most things, it's a trade off between protection from various mistakes and errors vs. speed and control. Processes give you the most protection with the greatest amount of overhead, while user level threads give you the best performance, but only if you design everything correctly.
Re:Bah humbug (Score:4, Insightful)
Once you've got multiple cores, getting multiple threads of execution (either in multiple processes or in multiple threads) makes a lot of sense. I believe hyperthreading particularly benefits code that has multiple threads executing in the same bit of code, since the parallelism there is within a memory-management domain; so OpenMP is better there than pthreads, and pthreads is (probably) better than processes. On the other hand, if you're potentially working across a cluster (cue the beowulf jokes!) your code had better be written with processes (and probably MPI) in mind. Of course, if you're going that way, you also ought to spend on getting a good interconnect network...
All in all, getting proper high performance is tricky. The best guide to making things go faster is to try to reduce the amount of shared state between threads-of-execution. Reducing shared state also helps to make the code easier to debug. (Alas, dealing with the bits of state that must be shared is what makes life hard.)
"hundreds of cores"? (Score:2)
Re: (Score:3, Interesting)
Just rampant speculation, but it is certainly possible.
Re:"hundreds of cores"? (Score:4, Informative)
Yeah, they're looking ahead too eagerly. That's what academics do.
Let's not forget that Intel [intel.com] and IBM [ibm.com] both recently found a manufacturing process to keep Moore's law going for the next several years. Most people in 2006 thought we had hit a wall and that the multicore revolution was inevitably under way, but that just might not be true anymore. That said, it is always nice to have at least a few cores available in your system.
At the same time, AMD's Fusion [tgdaily.com] strategy looks pretty interesting. I really wonder what's going to become of that.
RapidMind = vendor lock-in (Score:5, Insightful)
Re:RapidMind = vendor lock-in (Score:4, Informative)
Re:RapidMind = vendor lock-in (Score:4, Informative)
Does anyone know if there is progress being made on this?
The GPUs will ship with C compilers soon enough; they already support limited forms of C. Actually, in time we will see hybrid CPUs (the Cell being a first example) capable of massive amounts of parallel math operations, stacked in alongside some of your CPU cores. As the number of cores grows, room is made for specialized processors where that makes sense in the market.
Pthreads? (Score:4, Informative)
Pthreads has been out for a while. It is open source, and runs on Linux, Windows, and Mac(?).
Whether or not you believe concurrency should be an explicit library or a matter of compiler extension is a bit of a religious argument. But pthreads does offer the functionality, and works fairly well.
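For reference, the explicit-library style being discussed looks like this with pthreads — a minimal spawn/join sketch, with the usual caveat that pointer-sized casts are the traditional (if ugly) way to pass small values through the void* interface:

```cpp
#include <pthread.h>

// Worker function with the signature pthreads requires: void* in, void* out.
// Here the argument and result are just integers smuggled through pointers.
static void* square(void* arg) {
    long n = reinterpret_cast<long>(arg);
    return reinterpret_cast<void*>(n * n);
}

// Spawn one worker thread, wait for it, and collect its result.
inline long run_square_in_thread(long n) {
    pthread_t tid;
    pthread_create(&tid, nullptr, square, reinterpret_cast<void*>(n));
    void* result = nullptr;
    pthread_join(tid, &result);  // blocks until the worker returns
    return reinterpret_cast<long>(result);
}
```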
Re: (Score:2)
And Pthreads is a C API. TFS says this is C++. Still, it's not clear how this is better than Boost.Threads.
Dr Stephan better hold off that new mortgage! (Score:2)
Despite what the USPTO* clerks tell you, programming ideas are a dime a dozen. He's got as much chance of getting you to pay for this as I have of convincing all you C++ programmers to switch to my new proprietary (*D)++++(R)(TM) language. Only $1,250 a seat! What are you waiting for boys?
* = At least the guy who picks up garbage knows trash wh
Ah, but... (Score:2)
many of the things their stuff does with SH.
use gcc4.2 (Score:3, Informative)
Functional programming (Score:4, Interesting)
The main problem I see is that there is a lack of focus in the functional arena. Many current functional languages are designed to use a VM with bytecode (Erlang, for example) and don't support native threads easily (often requiring multiple VM instances and slow[er] message passing). The languages that do support native compilation almost always have other problems, like horrible syntax (O'Caml, Lisp) or just a general lack of refinement. Arguably Haskell comes the closest, but it suffers from a complicated and large backend support requirement, like Java.
Without native thread support it's hard to take advantage of multiple processor cores. Too bad we don't see more mature native compiled functional languages out there.
Re: (Score:2, Insightful)
What?
Sorry, that's bullshit. If you want to take advantage of multiple processor cores, use multiple processes! Even Windows has fork() these days, thanks to its POSIX subsystem, so creating a clone of your process is very easy.
You should use threads over processes only if you can prove that the context switch savings really does
Re: (Score:3, Insightful)
The key to performance and stability does not lie in the discovery of high-level tools that abstract away all the hardware details for you. And it definitely doesn
Re: (Score:3, Insightful)
In the Real World (ie not game consoles), programs must be portable. They must be maintainable. They must be writeable in a short time. Your approach completely ignores these requirements which enormously outweigh the tiny performance gains that you can get by tw
Re: (Score:3, Informative)
Whoa whoa whoa! You may not like Erlang's implementation, but you can hardly attribute it to a lack of focus. The whole language was built with concurrency in mind. Heck, the concurrency even has built-in network awareness. And Erlang's been multi-core since last May.
Erlang goes multi-core [ericsson.com]
Yeah, that doesn't say anything about your VM worries. I don't have those, though. Seamless multi-threading and a language par
What?! (Score:5, Interesting)
You choose to go with a multi-threaded application when it is necessary. Anyone who just starts adding threads because they feel they need to utilize the number of cores is a complete idiot in my book. Hell, why don't we just put spin locks in there so your CPU usage shoots up and it looks like I'm using it to its full potential?
My point is that there have been a few applications I've written that required a multi-threaded solution. Perhaps this API would have made my life easier, but I doubt it, as I pretty much had to structure each thread by hand. There are also frameworks and graphical libraries that use multi-threading which the scheduler has taken care of in the past. Hurray for multi-core if you use those.
A good programmer keeps things as simple as possible; they will be easier to maintain in the future. I'm afraid that this is an unneeded layer of abstraction, or some nut case trying to "utilize cores" for the sake of it. No one has only one application running at a time. The OS is usually running, you have a network process, etc. If I write my application to use one core, I'm giving the user more options to do whatever he wants with the other cores. Let the scheduler work with the futuristic hardware and sort that crap out.
Also, not everyone is multi-core already. Take us into consideration, please!
Re: (Score:3)
why don't we just put spin locks in there so your CPU usage shoots up and it looks like I'm using it to its full potential?
I heard stories of this being done by games companies when their publishers complained they weren't using the VU1 on the PS2 enough. That was the VU which was really hard to utilize because it had no access to the rendering hardware. And yes, publishers ran the diagnostic tools available when you submitted builds.
Yep, concurrency is a problem, not a solution! (Score:3, Interesting)
Re:What?! (Score:5, Insightful)
I believe what he is saying is that if you're an application developer who is pushing the limits of what a single core is capable of in terms of performance, then you are going to see a decreasing rate of improvement and then stagnation, because the focus of hardware development is shifting away from more power in a single core to more power through more cores.
At some point you will hit a wall, and for single-threaded applications you're going to reach a point where there isn't any more power to be had.
Therefore, if you want to tap the extra power that a multi-core processor has, you will by definition *need* to start multi-threaded programming. This isn't about the people who are happy with the speed and power they already have; research is pointless if you already have everything you could possibly need. This is for the people who push the edge: at some point, if you need more, you will need to learn to multi-thread correctly.
And a simpler way to do it, is gold in my books.
*From a former University classmate of Stephanos*
Re: (Score:2)
This is not about dreaming up ways to add concurrency, but utilizing concurrency options that already exist. For example, when a user of your application double clicks a row in your table, you need to grab the detail from the server and create a complex dialog to display that data. Clearly these tasks can run concurrently, but generally they are coded sequentially. On a single co
what a joke (Score:5, Insightful)
From the site [rapidmind.net]:
Man, that's some funny stuff. Wow, that cracked me up. A *games* company using a tool with this level of indirection?!? I sure hope these guys got a lot of money from their sucker VC to roll in.
Look guys. There is no multi-processing silver bullet. It isn't even such a hard problem, *if you stop trying to solve it at such a low level*. Break your application into separate pieces that *don't need to communicate very often*. Then this is the same kind of problem that scalable websites like Google, MySpace, Hotmail and so on already have, just without having to factor in the reliability issues. Finer-grained multi-threading just leads to deadlocks and is really hard to debug. If you *really must* render the same sphere on 100 processors at the same time, then you need the speed of a custom-coded solution. But you don't, so let it go. The main loop of your program will be just fine as a single-threaded implementation, 1 processor will do, and farm the 10% of the code that does 90% of the heavy lifting out in big clean chunks to other processors. If you find yourself writing some bizarre multi-threaded message passing system so that you can have 100s of threads all modifying the same live object model at the same time -- you are fucked, just forget about it 'cause you will never be able to debug that one killer bug that you know is going to get you right as you go to ship.
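The coarse-grained style argued for here — big independent chunks, one synchronization point at the end — might be sketched like this (std::async is used purely for illustration, over a made-up partial-sums workload):

```cpp
#include <algorithm>
#include <future>
#include <numeric>
#include <vector>

// Split the data into big, independent chunks, farm each chunk out, and
// combine the results once at the end -- no shared mutable state, and no
// fine-grained locking to debug. chunks must be >= 1.
inline long long parallel_sum(const std::vector<int>& data, std::size_t chunks) {
    std::vector<std::future<long long>> parts;
    std::size_t step = (data.size() + chunks - 1) / chunks;
    for (std::size_t begin = 0; begin < data.size(); begin += step) {
        std::size_t end = std::min(begin + step, data.size());
        parts.push_back(std::async(std::launch::async, [&data, begin, end] {
            // each task reads only its own slice of the input
            return std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
        }));
    }
    long long total = 0;
    for (auto& p : parts) total += p.get();  // the only synchronization point
    return total;
}
```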
400MB download (Score:2)
Re:400MB download (Score:5, Funny)
Square peg, round hole. (Score:3, Informative)
No. Whether something can be done effectively on multiple cores doesn't depend on the programmer, but on the type of processing. Some things have to be done in a certain order, and there's nothing even the best programmer in the world can change about that, period. If you try hacking something together that uses multiple threads for this type of processing, you'll just end up making things slower and messier.
On the other hand, there are other types of processing that just lend themselves fantastically to being done multithreaded.
Re: (Score:2)
I'll be damned... that's the same excuse my wife uses when I try to get her in bed!
Toy Supercomputer (Score:4, Interesting)
The PS3 doesn't seem to have the PCI-Express bus that would solve all these problems. For some reason Sony left out its old pet, FireWire, which could have added buses at 800Mbps each. There doesn't seem to be any expansion whatsoever, except changing the HD on the single SATA connector. A huge amount of complex, heterogeneous IO management is necessary to use the power it's got.
It's strange to think that a $600 machine with around 5Gbps throughput and 7Tbps processing is a "toy", but the cropped IO makes the PS3 look that way, relative to its full power. Maybe a HW mod, even at $500 or possibly up to $2000, that adds PCIe for a half-dozen 2x10Gig-E cards, or even InfiniBand, will make this crazy little toy into more than just a development platform for games or prototypes for really expensive Cell machines. Who's got the way out?
Re: (Score:2)
Snippet 1:
while (list != NULL) {
    if (list->val == i)
        break;
    list = list->next;
}
Snippet 2:
fprintf(IOPORT, "Startin
Re: (Score:2)
You don't seem to know much about DSP. The vast majority of DSP is not logic, but arithmetic - the logic isn't usually that fast (except sometimes zero-overhead looping), but the arithmetic is extremely fast. The entire game in DSP is keeping the pipeline full. 2000:1 keeps the compute pipeline, the critical link, empty much of the time.
Moreover, there's no time for cache fetches in DSP loops - it
Re: (Score:3, Insightful)
If I listen to you, Anonymous defeatist Coward, and just cry "waaaahhh, I'm too dumb to hack a toy into a tool", then I'll just have a really cool toy.
Allow me to introduce you to the term hack [catb.org], which is what Slashdotters used to do before we were mostly posers [catb.org].
Re: (Score:2)
The part of building my own Cell machine that's not "hacking" is that it's really system design, while getting the PS3 to do things its designers didn't expect is what hacking is all about. Using the existing device to exploit all its pent-up power is the soul of hacking.
I'm not bitching about what Sony didn't do - I even pointed out that they did quite a lo
How many cores? (Score:3, Informative)
Wait wait wait... How many cores does a PS3 have? Thousands? I suspect someone has their facts sadly mistaken. I think they meant 'each with its own thread, using multiple cores to process the threads,' but that isn't nearly as impressive-sounding.
Where are the chicks? (Score:2, Funny)
Relativity (Score:2, Interesting)
Only at first, once you wrap your head around it it becomes second nature.
To a newbie, recursion is hard to do. To somebody who's been writing functional FORTRAN for 25 years, object oriented is hard to do.
It's just another way of thinking about problems. The real bitch is having the toolkits and thread safe libraries at your disposal.
Re: (Score:2, Informative)
So no, it's not only 'another way of thinking'.
And good luck trying to 'extend' multithreaded stuff.
Multithreading should only be used on the very special occasions where it is really needed.
That is hardly ever, in most end-user applications.
Active Objects (Score:2, Interesting)
Life is Pain (Score:3, Insightful)
First of all, I, and many others before me, have been writing multithreaded applications for years in the likes of Linux and UNIX. I have had to maintain multithreaded applications created by others. My collective experience tells me:
It is not trivial.
Let me repeat: It is not a trivial task. Even if you have libraries and an API which abstracts out the ugly stuff, you still have the problem of concurrency, proper locking, deadlocks, etc...
The majority of problems with multithreaded programming come not from the "ugly" parts of the OS/API layer, but from a misunderstanding of the problem. A few problems in computer science - particularly in the physical sciences - do benefit from multithreading. And it is easier to use threads when writing a game than to execute all of the IO in one big loop (Hello DOS!). But for most applications, using threads is not only unnecessary but overkill, and introduces the possibility of yet another class of bugs for which the application must be tested. Furthermore, as deadlocks and race conditions are often timing-related, they are the most difficult type of bug to find and fix. Finding and fixing this class of bugs is still somewhat of a black art in the industry, and is highly dependent on the skill and experience of the programmer.
In short, unless your system/application design cannot do without multithreaded programming, it is best not to use it. Even with a glossy API, you still cannot escape the fact that debugging a multithreaded application is an order of magnitude more difficult than a single threaded one. In any case, you shouldn't be using threads just because you can.
OS/2? (Score:2)
The problem, I think, is that the majority of programmers out there today who were just hobbyists back then were learning on a very single-threaded platform. Because the model was never there, it's 'hard'. With OS/2 3+, it was always there, and anybody who dabbled on that platform was immediately exposed to how to implement threads, as they we
Comments from the presenter (Score:5, Insightful)
Good morning slashdot!
As the (slightly terrified to find himself mentioned on slashdot) presenter in the video linked to above I thought I'd respond to a couple of comments in bulk. First off, I'm part of a much bigger team at RapidMind that builds this software to make targeting multicore and stream processors easier -- the system and the "chicken demo" was a group effort, and you can read more about it and the company in general in the article linked to from here [rapidmind.net], which unfortunately is PDF-only.
For those crying out about multi-threading not being the solution: you're absolutely right! Our platform's approach to programming multi-core processors is to expose a data parallel model. In this model, the programmer explicitly deals with parallel programming (writing algorithms to work well on arbitrarily many cores) but all of the standard multi-threading issues such as deadlocks and race conditions are avoided, and the developer doesn't worry about how many cores there actually are.
And no, the chicken demo didn't run each chicken on an individual core ;). But it did automatically scale to however many cores were available -- 6 SPUs and a PPU on the PS3, and 16 SPUs and 2 PPUs on a Cell Blade (on which we originally showed the simulation at GDC 2006).
If you want to learn more, drop by our website at http://www.rapidmind.net [rapidmind.net]. You can sign up for a free no-strings-attached evaluation version if you want to try it yourself.
Or you could just use Ada (Score:3, Insightful)
Tripe (Score:2)
This is ridiculous tripe. Multi-threaded programming is hard not because the libraries are hard to use, but because it requires a lot of planning and thought to decide whether you can actually gain a benefit by going multi-threaded.
The main benefit of multiple cores will not happen in userland. It will be in the kernels and the libcs'. Once userland processes can effectively get memory from the heap with minimal locking we will see a performance boost system wide(I'm talking 100 processes can all request memo
There is already an open source solution (Score:2)
And it is called boost::futures [boost-consulting.com]
The theory behind it, though, is not new: the Actor model is quite old, and it has been used in Erlang [erlang.org] for quite some time.
CSP Occam and Transputers (Score:3, Interesting)
The OCCAM language implemented this style of processing, and the Transputer chip implemented fast context-switching hardware that OCCAM could run on.
This was all done back in the 1980s.
I even implemented the original version of the Java Communicating Sequential Processes API, which brought CSP-style programming to the Java world, although it is based on Java's underlying Thread mechanism, so context switching isn't as fast as it could be.
Transactional Memory (Score:4, Interesting)
This isn't multithreading in the traditional sense (Score:4, Informative)
This is about propping up an obsolete technology (Score:3)
The marketplace wants and needs new technologies for more powerful processors. Multicore serves the needs of chip makers, not their customers. Making all software multi-threaded is trying to solve the wrong problem. It's going to result in lower-quality software without a significant increase in performance.
Dining Chickens Problem (Score:4, Insightful)
Download and play with QtConcurrent today (Score:3, Interesting)
From the project page:
The classes and functions available in the Qt Concurrent package allow you to write multi-threaded applications without having to use the basic threading synchronization primitives such as mutexes and wait conditions. This makes it easier to reason about and test parallel programs to make sure that they are correct.
The Qt Concurrent components manage the threads they use automatically. Each application has a global thread pool, which limits the maximum number of threads used at the same time. The maximum is scaled according to the number of CPU cores on the system at runtime. This means that programs written with Qt Concurrent today will continue to scale when deployed on many-core systems in the future.
Very cool.
Inmos had the elegant solution (Score:3, Interesting)
parallel
{   /* execute these statements in parallel if possible */
    statement_1;
    statement_2;
    statement_n;
}
sequential
{   /* execute these statements in order as written */
    statement_1;
    statement_2;
    statement_n;
}
Re: (Score:2)
Multi-threaded programming is a skill that comes with a level of understanding, much like students of mathematics must reach a level of understanding to comprehend Algebra, Calculus, Differential Equations, and Partial Differential Equations (yeah, that last one's a bear, especially when you apply it to various physical
Re: (Score:2)
Writing good code is harder, designing good OO code is even harder, and designing and writing good multi-threaded code is yet a step beyond that.
In theory, writing good multi-threaded code shouldn't be much harder than designing good OO code - it's a matter of actually learning the right paradigm to think about things, and then it all flows easily (presuming you've got a language that supports your paradigm well - otherwise it is doable, but a little clunky, much like OO). If you're willing to let go of shared-state concurrency and think in terms of message passing, then things get much easier. Think of actors passing messages and try writing multi-
Re: (Score:3, Informative)
Well, if you're going to remove 99+% of the common trouble spots of multi-threaded coding by moving to a messaging paradigm, then yes, it probably is conceptually easier than OO. It can also be significantly slower, depending on the application's design and function, and can greatly increase its memory footprint. e.g., I don't think a game like Quake would work all that well under this paradigm.
I think it is nowhere near as bad as you seem to think - it all depends on how message passing is handled. If you're doing it via some slow, complex scheme then, sure, it will be slow. But the trick is to think conceptually in terms of message passing - that doesn't mean it actually has to be handled with a big, clunky message-passing interface internally; just in terms of how you think about it. Take SCOOP, for instance. The "message passing" mechanism there is feature calls, the message being parameters passed
Re: (Score:2)
Re: (Score:3, Informative)
The code so generated can be run immediately, or deferred (note -- it's been a while since I've looked at Sh, so I am being vague).
I didn't think that this was a GENERAL multi-threading solution; more a way to easily generate code for the parallel machines that
Re:Don't Bother (Score:4, Funny)
Re: (Score:3, Interesting)
Think of the most basic email app possible. Now, when a user presses "send mail", would you create a new fork(), try to micromanage the remote connection in a thread that handles the GUI, or force the user to wait around?
Next think about video where you have a resource intensive task AND you still want a highly responsive GUI.
Granted, if all you ever work with is simple biz apps with one user, you have a point, but I think your 99.99% estimate says more about
Re: (Score:2)
Re: (Score:2)
Now you could easily have a GUI thread that uses message queues to talk with the networking layer(s) without changing much at a
Re: (Score:2)
"Transactional memory" [wikipedia.org]
--jeffk++
Re: (Score:2)
As the articles say, the lock pressure is moved from the reader to the writer. Transactional memory scales amazingly better when you have multiple threads which are reading common data. Please note that in today's system architectures, even READING data on different cores at the same time may not be thread-safe without memory barriers put in place to synchronize the caches.
There have been many papers written about the efficiency gains.
And as a bonus, writing multithreade
Re: (Score:2)
Re:Don't Bother (Score:4, Informative)
Plus, you completely and utterly missed the point of the poster you replied to. Most apps (who cares about major?) are single-threaded. The poster's point is that writing a multi-threaded app JUST BECAUSE THERE ARE MORE CPUs/CORES to handle them is pointless and stupid. If the app only requires a single thread, use just one. The other resources will get used by the OS or by other apps (that may, God forbid, *also* be single-threaded). He wasn't talking about dedicating a computing resource to an app. He was saying that an app should only use what it needs, with the understanding that the OS will make good use of any remaining resources for other tasks.
What a lot of multi-thread-happy people seem to miss is that as long as the OS is multi-tasking, the other resources will not go to waste just because the app in the foreground isn't using them.
JUST GIVE US THE CHICKENS (Score:2)
Many complete distros fit in under 411MB.
Nuts --
Just give us a little animated GIF of the chickens; we don't believe the rest of your claims anyway.
Re: (Score:2)