Programming

Multi-Threaded Programming Without the Pain 327

holden karau writes "Gigahertz are out and cores are in. Programmers must begin to develop applications that take full advantage of the increasing number of cores present in modern computers. However, multi-threaded development has been notoriously hard to do. Researcher Stefanus Du Toit discusses and demonstrates RapidMind, a software system he co-authored, that takes the pain out of multi-threaded programming in C++. For his demo he created a program on the PlayStation 3 representing thousands of chickens, each independently tracked by a single processing core. The talk itself is interesting but the demo is golden."
This discussion has been archived. No new comments can be posted.


  • by jforest1 ( 966315 ) on Thursday March 22, 2007 @08:35AM (#18441581)
    The multi-threaded chicken or the multi-threaded egg?

    --josh
  • Deadlocked! (Score:2, Funny)

    by Trimbo2 ( 661670 )
    Deadlock detected!
  • Huh? (Score:5, Insightful)

    by dreamchaser ( 49529 ) on Thursday March 22, 2007 @08:44AM (#18441649) Homepage Journal
    I didn't know the PS3 had thousands of cores ;)

    I think what he meant was 'each tracked in a separate thread'...obviously each core is still handling many threads. I haven't watched the presentation and don't plan to until later today; too much to do, and I'd rather read something about it. It just sounds like it provides an efficient high-level way to write a multi-threaded app. Evolutionary but not revolutionary?
    • Re:Huh? (Score:5, Insightful)

      by Gr8Apes ( 679165 ) on Thursday March 22, 2007 @09:03AM (#18441809)
      Even so, this is a "bad" implementation. There's absolutely no reason for there to be 1 thread per chicken. That's inherently not scalable. What you really want is an optimal number of threads for the number of cores, in a pool that handles work units (chickens). This will scale much higher than the 1-thread-per-object model discussed in this topic.

      Oh, and there's no such thing as "easy" multi-threading. Hell, the average programmer can't even grasp OO, so what makes them think they can grasp threading, which has many, many more aspects to it?
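      For illustration, the pooled approach described above might be sketched like this in C++ (a hypothetical sketch: `Chicken` and `updateChicken` are made-up names, and `std::thread` stands in for whatever RapidMind actually does); a handful of workers claim chicken indices from a shared counter instead of each chicken owning a thread:

```cpp
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

struct Chicken { float x = 0.0f; };

// One simulation step for one chicken (illustrative stand-in).
void updateChicken(Chicken& c) { c.x += 1.0f; }

// A fixed number of workers pull indices from an atomic counter,
// so thousands of chickens map onto a handful of threads.
void updateFlock(std::vector<Chicken>& flock, unsigned nthreads) {
    std::atomic<std::size_t> next{0};
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < nthreads; ++t) {
        workers.emplace_back([&flock, &next] {
            // Each fetch-and-increment claims exactly one chicken.
            for (std::size_t i = next++; i < flock.size(); i = next++)
                updateChicken(flock[i]);
        });
    }
    for (auto& w : workers) w.join();
}
```

      Set nthreads to the core count and the pool size stays fixed no matter how many work units are queued. (Compile with -pthread.)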
      • Re:Huh? (Score:4, Interesting)

        by dreamchaser ( 49529 ) on Thursday March 22, 2007 @09:07AM (#18441843) Homepage Journal
        Having written more than my share of threaded apps, I agree 100%. I still haven't looked into this further, but it's probably a C++ class library that abstracts the creation and management of threads. Too many threads thrash the processor nicely in many cases, so unless they have some magic behind the scenes managing the number of threads vs. cores, this is just a hyped-up multi-threading library.

        Fsck the chickens...show me what this does with a real game or a real-world app that lends itself to highly parallel operations, then demo it on a quad quad-core Xeon.
        • Re: (Score:3, Interesting)

          by Gr8Apes ( 679165 )
          The chicken scenario described removed any curiosity I had about looking into the library further. Why? Because it's very similar to the Java 101 bouncing-ball thread demo (one thread per ball), which is used to show first-time would-be multi-threaded programmers why one thread per ball doesn't scale.
      • Re:Huh? (Score:5, Interesting)

        by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Thursday March 22, 2007 @10:22AM (#18442813) Homepage
        ### That's inherently not scalable.

        Not scalable? I beg to differ. Thousands of threads scale a lot better than just two or four or whatever, since with thousands you don't really have an upper limit on how many CPUs you can throw at the problem. The real issue with threads is that OS threads are extremely slow, so you can't have thousands of threads or your machine would slow to a crawl. Threads are also painful to work with since the languages just aren't up to the task.

        However, for both of these issues there are solutions, namely Erlang: using user-level threads there is no upper limit, you really can give each chicken its own thread without a problem, and the language is also built from the ground up to work nicely with threads.

        Now, I haven't seen the talk yet (bittorrent is still busy downloading it), but I seriously doubt that it will be just yet-another-simple-wrapper class.
        • Re:Huh? (Score:5, Informative)

          by Gr8Apes ( 679165 ) on Thursday March 22, 2007 @10:47AM (#18443179)
          First, last time I ran the ball test just to see how processors had improved in their capabilities to run code, I got to over 2K threads in a single JVM before significant degradation occurred and then it occurred rapidly.

          Using the threadpool concept, however, you can tune the size of the threadpool via performance metrics from the threads in the threadpool for the optimum size of threadpool, after which you can place however many objects on the pool you'd like. Generally, this is based on the work the thread has to do. If there is no I/O blocking, I've found that 2-3 threads per CPU with moderate CPU time work units will load it to 100% (read moderate CPU time work units as work units that take on the order of 100-1000 ms to complete). If you start adding in any type of I/O blocking, including large amounts of memory access, then that number goes up. A DB retriever system wound up running 64 threads for my particular work load due primarily to the lag involved in the synchronous calls made to the DB. I could have tuned that further using future tasks and reducing the number of threads (a Doug Lea addition to the JDK 1.5 and also available in his previous concurrency library) but my particular case didn't have any negative effects by running 64 threads, so we left it at that. This particular DB access module ran across 64 systems (64*64 threads) serving roughly 35K concurrent customers.

          I haven't run Erlang, so can't comment. I have heard nice things about it though, and I'm curious about it. One day I'll have enough time to play with it.
          • Re:Huh? (Score:5, Insightful)

            by Procyon101 ( 61366 ) on Thursday March 22, 2007 @03:08PM (#18447923) Journal
            You are using JVM threads. Most massively scalable threaded languages, like Erlang, use green threads. A green thread acts like a thread from the standpoint of the programmer, but carries little or no context switch cost (because it's not really a thread). The underlying platform then load balances these green threads across the actual hardware in an optimal pool of true threads.

            What makes massive concurrency easy to grasp in these languages is one of two things:

            1) In Erlang and Termite (a Scheme dialect) there is no mutable state and no globals. Every function is in essence a "service" that simply receives messages and responds with replies. There is no need to think about locking in such a system, and there are very easy message-passing idioms for what you would normally do with mutable object orientation.

            2) In languages like Haskell, there is no concept of a "thread" at all... not even a single thread. There is no concept of "ordering". Things are defined as they are in mathematics: as relationships between functions and variables. No mutable state is allowed. This strictness allows the compiler to draw very deep conclusions about what can be parallelized. The compiler can then load-balance under the covers across any number of procs without exposing any issues of concurrency to the user at all.

            So yes, in Java (and OO in general), concurrency is very, very difficult. In other paradigms though it can be trivial, or even transparent.
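            The message-passing style described in (1) can be roughly approximated even in C++ with a blocking channel. This is only a sketch of the idiom, not Erlang's semantics (no process isolation, no distribution); `Channel` is an illustrative name:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>

// A blocking channel: each "service" owns its state and talks to the
// rest of the program only through channels like this, so no other
// locking is needed anywhere else.
template <typename T>
class Channel {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void send(T msg) {
        std::lock_guard<std::mutex> lk(m_);
        q_.push(std::move(msg));
        cv_.notify_one();
    }
    T receive() {  // blocks until a message arrives
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty(); });
        T msg = std::move(q_.front());
        q_.pop();
        return msg;
    }
};
```

            A green-thread runtime does essentially this, but with channels and "threads" cheap enough to have millions of them.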
        • Re: (Score:3, Interesting)

          by joto ( 134244 )

          Thousands of threads scale a lot better than just two or four or whatever, since with thousands you don't really have an upper limit on how many CPUs you can throw at the problem.

          Yes, the upper limit is thousand(s)! Go directly to jail. Do not pass Go. Do not collect $200.

          Seriously, with companies already offering 4 cores per CPU, and promising to offer 16 cores in the near future, and Moore's law being as it is, you don't exactly have to be a visionary to predict that the futu

      • Re: (Score:3, Informative)

        Have a read please.

        As it goes, when it comes to multithreading, the model used by C++, Java and similar languages is rapidly becoming outdated.

  • Bah humbug (Score:2, Insightful)

    by kahei ( 466208 )

    Multithreaded development is commonplace in applications that need it. The places it's not common in are:

    -- old-style Unix development, because of the 'lightweight process model'. It's a unix-ism that's on the way out but until it disappears we will have some things like Ruby that don't 'get it'.

    -- places that have absolutely no need for it, which certainly includes the chicken demo. One core per chicken?? Seems more like the guy just discovered threads but hasn't quite grasped what they're for.
    • Re:Bah humbug (Score:4, Insightful)

      by ari_j ( 90255 ) on Thursday March 22, 2007 @09:14AM (#18441911)
      You are, of course, correct. The other thing that people need to keep in mind is that there is rarely only a single process running on a given machine. For applications where it makes sense, such as video rendering on a machine doing nothing else, multithreading can increase overall performance. For applications where it doesn't or where there are other things running on the same machine, you normally end up with worse overall performance by trying to get your naturally single-threaded program to run on multiple cores at once when the extra cores would be better dedicated to running things other than your program.

      Multithreading is a tool. Just like more traditional tools, like the hammer, this one is useful for certain applications. But multithreading is not the only tool at your disposal - people need to stop looking at everything as if it were a nail.
    • Re:Bah humbug (Score:4, Insightful)

      by aldheorte ( 162967 ) on Thursday March 22, 2007 @09:39AM (#18442253)
      "old-style Unix development, because of the 'lightweight process model'. It's a unix-ism that's on the way out but until it disappears we will have some things like Ruby that don't 'get it'."

      I'm not sure I follow you there. Lightweight process models are perfect for multi-cores. The more the merrier. Given the abundance of high-quality networking and commodity machines, heavyweight programs (outside of very niche areas) that use internal threads are less suitable for distributed computing than lightweight process models that can call across the network or the OS to other lightweight processes. A heavyweight process can only scale to the number of cores available on the machine it is running on, whereas a flock of lightweight processes can scale to the locally available cores and on to other machines in a distributed fashion, without a major bump in the road between local and remote. Any machine that has multi-cores today could easily run, say, one Ruby process per core with negligible overhead.
    • Re:Bah humbug (Score:4, Insightful)

      by kcbrown ( 7426 ) <slashdot@sysexperts.com> on Thursday March 22, 2007 @09:43AM (#18442321)

      -- old-style Unix development, because of the 'lightweight process model'. It's a unix-ism that's on the way out but until it disappears we will have some things like Ruby that don't 'get it'.

      And it's silly for it to be "on the way out".

      Anyone remember the Amiga? It had a preemptive multitasking OS that lacked hardware memory protection because the hardware it was running on couldn't support it. And while the OS itself was very fast and efficient, the overall system was relatively crash-prone, because any memory-related programming error in any running application had a decent chance of taking down the system.

      Fast forward to today. Every computer sold has hardware memory protection built-in. Anyone who doesn't know why that's a good thing needs to spend time on an Amiga.

      And yet, despite that, threads are all the rage. Why? Because people have this idiotic belief that they're somehow "more efficient" than processes. Such people probably program about as well as they think, which is to say not very well. Threads are indeed more efficient at context switching than processes, but the real question is: does that really matter? In the vast majority of cases, it doesn't, because in the vast majority of cases multiple threads are being used to make the user interface responsive. There's no way a human being can tell the difference between a millisecond-level context switch time and a microsecond-level one.

      On top of that, processes bring one critical advantage to the table that threads don't: memory protection. And for the same reason memory protection is important at the OS and hardware level, so too is it important at the process and thread level: it allows clean, protected separation of concern and greater overall application stability.

      The vast, vast majority of applications that are multithreaded don't actually need the slight additional context switch performance advantage that threads bring to the table, but they very much need the memory protection facilities that processes bring to the table. Which is another way of saying that if your application needs concurrency, you're a fool if you blindly use threads instead of processes.

      Even Windows supports fork() these days, with the POSIX subsystem (available, as far as I know, on any Windows 2000 and later system), so creating a clone of your current process is dirt simple even under Windows. End result: application authors have no good reason to use threads over processes unless they've actually done the math and can prove that their application really needs the slight performance advantage of threads more than the significant reliability advantage of processes.

      As to the other reason for using threads, the sharing of memory, there's this really cool new technology out these days. Maybe you've heard of it. It's called "shared memory". It's only been available for 20 years or so. No wonder most people haven't heard of it. Being forced to explicitly declare what's shared and what isn't is a good thing, because it makes your program easier to maintain, easier to debug, and more reliable -- all at the same time.

      The bottom line is this: if you need concurrency in your application, you should be using processes, not threads. If you insist on using threads, you'd better have a damned good reason for it, because the reliability implications of threads are hugely negative while the performance implications are modest at best.
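      As a sketch of the process-plus-explicit-sharing approach argued for above (POSIX-specific; the function name is illustrative): parent and child share exactly one explicitly mapped word and nothing else, so a stray pointer in the child cannot corrupt the parent's heap.

```cpp
#include <cassert>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

// Fork a child that writes into one explicitly shared integer;
// everything else in each process stays private and protected.
int run_in_child_and_read() {
    int* shared = static_cast<int*>(mmap(nullptr, sizeof(int),
        PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0));
    if (shared == MAP_FAILED) return -1;
    *shared = 0;
    pid_t pid = fork();
    if (pid == 0) {           // child: may only touch *shared
        *shared = 42;
        _exit(0);
    }
    waitpid(pid, nullptr, 0); // parent waits, then reads the result
    int result = *shared;
    munmap(shared, sizeof(int));
    return result;
}
```

      The shared region is the one and only declared point of contact; compare that with threads, where every byte of the address space is implicitly up for grabs.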

      • Re: (Score:2, Insightful)

        Please correct me if I'm wrong, but it seems to me this discussion has gone into apples and oranges mode. Threads, as far as I'm aware, are supposed to be used for single, explicit tasks and always under supervision by a parent thread. I've used multi-threading with excellent results, but then I've taken pains to ensure that the threads don't have any privileges whatsoever. Processes, on the other hand, are more like stand-alone programs working in the same context.

      • Re: (Score:2, Interesting)

        Huh? Threads and processes are two different animals.
        Since you need a reason, here's one: it's called concurrency. With processes I have to consume finite system resources to handle concurrency issues or roll my own, which is called reinventing the wheel (aka a waste of time). Thread libraries will do this for me.
      • by porlw ( 169848 )
        The main reason that multi-threaded programming is the in thing is that process creation on Windows is many times slower than thread creation. You can see this if you compare the performance of a complex shell script on Unix vs. Windows. See also Apache 1 vs Apache 2.

        Right now C (and other imperative languages) are starting to look like assembler did in the 50s and 60s, with lots of people insisting that the only way to get decent performance is to program at the lowest level possible. As the number of cores inc
      • The bottom line is this: if you need concurrency in your application, you should be using processes, not threads. If you insist on using threads, you'd better have a damned good reason for it, because the reliability implications of threads are hugely negative while the performance implications are modest at best.

        You haven't done a lot of GUI programming, have you?



        From your text above, I'd guess you worked in Microsoft for the Outlook programming team.

      • Re:Bah humbug (Score:5, Interesting)

        by gbjbaanb ( 229885 ) on Thursday March 22, 2007 @11:38AM (#18444039)
        Unfortunately, you say processes give you memory protection, which is better than threads having to synchronise their own access to shared memory, but then you go on about process-based shared memory needing its own additional protection.

        If you need concurrency in your apps, there isn't that much between threads and processes. However, if you need inter-process communication then you are far better off with threads: they are significantly faster with respect to locking, as all process-based locks must be taken at the OS level, using shared (and finite) system resources. Threads can just use a critical section and be done with it, with almost no overhead.

        Threads are not more efficient at context switching than processes, the same procedure happens whether a thread is switched, or a process is (in fact, a process is really an app with 1 thread). However, as threads can share memory more efficiently, locking is often not needed as much so they appear to be more efficient.

        The best argument for threads v processes is Apache. Personally, I agree with the Apache group that Apache 2 with its thread-based model is better. They should know.
  • Where is the abstract getting "hundreds of cores in desktops on the horizon" from? Is this actually expected soon, or are they just looking ahead a bit too eagerly?
    • Re: (Score:3, Interesting)

      by dreamchaser ( 49529 )
      If by 'on the horizon' they mean 'possibly in the next ten years', then sure, I can see that happening. Quad cores are already here. If they double the number of cores every 18 months, that means in 7.5 years we'll have 128 cores. I'm just throwing that out as an example, but it's certainly possible even if all the cores are not on the same package. Take 8 physical CPUs with 16 cores each, for example.

      Just rampant speculation, but it is certainly possible.
    • by Lazerf4rt ( 969888 ) on Thursday March 22, 2007 @11:07AM (#18443525)

      Yeah, they're looking ahead too eagerly. That's what academics do.

      Let's not forget that Intel [intel.com] and IBM [ibm.com] both recently found a manufacturing process to keep Moore's law going for the next several years. Most people in 2006 thought we had hit a wall and that the multicore revolution was inevitably under way, but that just might not be true anymore. That said, it is always nice to have at least a few cores available in your system.

      At the same time, AMD's Fusion [tgdaily.com] strategy looks pretty interesting. I really wonder what's going to become of that.

  • by Anonymous Coward on Thursday March 22, 2007 @08:54AM (#18441717)
    Both RapidMind and PeakStream are proprietary commercial solutions, and those companies are trying to lock users into their particular frameworks. What we really need is an equivalent true open-source solution, perhaps as a gcc extension. Does anyone know if there is progress being made on this?
    • by Anonymous Coward on Thursday March 22, 2007 @09:15AM (#18441915)
      OpenMP is implemented in GCC 4.2 (I think; I've never used it in GCC).
    • by acidrain ( 35064 ) on Thursday March 22, 2007 @09:24AM (#18442051)

      Does anyone know if there is progress being made on this?

      The GPUs will ship with C compilers soon enough; they already support limited forms of C. Actually, in time we will see hybrid CPUs (the Cell being a first example) that are capable of massive amounts of parallel math operations, stacked in alongside some of your CPU cores. As the number of cores grows, room is made for specialized processors where that makes sense in the market.

    • Pthreads? (Score:4, Informative)

      by gillbates ( 106458 ) on Thursday March 22, 2007 @09:43AM (#18442327) Homepage Journal

      Pthreads has been out for a while. It is open source, and runs on Linux, Windows, and Mac(?).

      Whether or not you believe concurrency should be an explicit library or a matter of compiler extension is a bit of a religious argument. But pthreads does offer the functionality, and works fairly well.
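      A minimal pthreads example (the C API, callable from C++; compile with -pthread): spawn a worker, hand it an argument, join, and read the result back through a shared struct.

```cpp
#include <cassert>
#include <pthread.h>

// Argument/result bundle passed to the worker through its void* slot.
struct Work { int input; int output; };

void* square(void* arg) {
    Work* w = static_cast<Work*>(arg);
    w->output = w->input * w->input;
    return nullptr;
}

int square_in_thread(int x) {
    Work w{x, 0};
    pthread_t tid;
    pthread_create(&tid, nullptr, square, &w);
    pthread_join(tid, nullptr);  // wait for the worker to finish
    return w.output;
}
```

      All the synchronisation, pooling, and lifetime management is left to you -- which is exactly the pain the library in TFA claims to remove.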

      • And Pthreads is a C API. TFS says this is C++. Still, it's not clear how this is better than Boost.Threads.

    • Agreed. The question is "will this be free?" If the answer is "no" then everyone leaves and Dr. Stephan finishes giving his speech to the guy picking up the rubbish.

      Despite what the USPTO* clerks tell you, programming ideas are a dime a dozen. He's got as much chance of getting you to pay for this as I have of convincing all you C++ programmers to switch to my new proprietary (*D)++++(R)(TM) language. Only $1,250 a seat! What are you waiting for boys?

      * = At least the guy who picks up garbage knows trash wh
    SH, from which RapidMind's core tech came, is FOSS, and you can do many of the things their stuff does with SH.
    • use gcc4.2 (Score:3, Informative)

      by drerwk ( 695572 )
      Yes, gcc 4.2 supports OpenMP. As others note, parallel programming is still not trivial. But OpenMP is very nice. I have a write-up on building and testing gcc 4.2 on OS X here: http://alphakilo.com/openmp-on-os-x-using-gcc-42/ [alphakilo.com]. A serious advantage is that OpenMP can be retrofitted to existing C/C++ and Fortran code. I know that everyone prefers to start from scratch and use Erlang or some other solution, but in a project I am working on, we already have about a million lines of C++. Current OpenMP imple
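      As a sketch of the retrofitting point: one pragma on an existing loop is often all OpenMP needs. Built with -fopenmp the iterations spread across cores; without the flag the pragma is simply ignored and the loop runs serially with the same result.

```cpp
#include <vector>

// Dot product of two equal-length vectors. The reduction clause tells
// OpenMP to give each thread a private partial sum and combine them.
double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < (long)a.size(); ++i)
        sum += a[i] * b[i];
    return sum;
}
```

      That's the appeal for a million-line codebase: the serial code stays valid and the parallelism is additive.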
  • by Cthefuture ( 665326 ) on Thursday March 22, 2007 @08:56AM (#18441749)
    Also note that certain programming languages can make multithreaded programming a lot easier. Nothing against C++ (one of my favorite languages) but no matter what you do it's relatively hard to use in multithreaded applications compared to a functional language. We are already seeing more functional features put into existing languages.

    The main problem I see is that there is lack of focus in the functional arena. Many current functional languages are designed to use a VM with bytecode (Erlang for example) and don't support native threads easily (often requiring multiple VM instances and slow[er] message passing). The languages that do support native compiling almost always have other problems like horrible syntax (O'Caml, Lisp) or just general lack of refinement. Arguably Haskell comes the closest but suffers from a complicated and large backend support requirement like Java.

    Without native thread support it's hard to take advantage of multiple processor cores. Too bad we don't see more mature native compiled functional languages out there.
    • Re: (Score:2, Insightful)

      by kcbrown ( 7426 )

      Without native thread support it's hard to take advantage of multiple processor cores. Too bad we don't see more mature native compiled functional languages out there.

      What?

      Sorry, that's bullshit. If you want to take advantage of multiple processor cores, use multiple processes! Even Windows has fork() these days, thanks to its POSIX subsystem, so creating a clone of your process is very easy.

      You should use threads over processes only if you can prove that the context switch savings really does

    • Re: (Score:3, Insightful)

      by Lazerf4rt ( 969888 )
      I recently shipped an Xbox 360 game and am about to ship a PS3 game, and having done a lot of system-level programming and optimization for both, I can tell you don't know what you're talking about. You're probably a smart guy and a good programmer but you're obviously speaking out of academic experience without having much real-world experience.

      The key to performance and stability does not lie in the discovery of high-level tools that abstract away all the hardware details for you. And it definitely doesn
      • Re: (Score:3, Insightful)

        That approach works just fine if you know exactly what hardware your code is going to be running on, and you know that it will never have to run on any other hardware, and you know that you won't have to ever work on it again once it is released.

        In the Real World (ie not game consoles), programs must be portable. They must be maintainable. They must be writeable in a short time. Your approach completely ignores these requirements which enormously outweigh the tiny performance gains that you can get by tw
      • Re: (Score:3, Interesting)

        Well, speaking of game development, I hope you'll excuse me if I hold the opinion of that guy Tim Sweeney, you know, the one behind Unreal, higher than yours? 'Cause he seems to disagree [uni-sb.de] with you pretty strongly on many things, threading issues among them. Tools (which languages are) are key to solving this problem, and a lot of it does come from academia, just as all things heavily used in the industry today (like OOP) did.
    • Re: (Score:3, Informative)

      by Communomancer ( 8024 )
      The main problem I see is that there is lack of focus in the functional arena.

      Whoa whoa whoa! You may not like Erlang's implementation, but you can hardly attribute it to a lack of focus. The whole language was built with concurrency in mind. Heck, the concurrency even has built-in network awareness. And Erlang's been multi-core since last May.

      Erlang goes multi-core [ericsson.com]

      Yeah, that doesn't say anything about your VM worries. I don't have those, though. Seamless multi-threading and a language par
  • What?! (Score:5, Interesting)

    by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Thursday March 22, 2007 @08:59AM (#18441765) Journal

    Programmers must begin to develop applications that take full advantage of the increasing number of cores present in modern computers.
    I'm a developer. I may not be the greatest one but I enjoy it. This declaration baffles me.

    You choose to go with a multi-threaded application when it is necessary. Anyone who just starts adding threads because they feel they need to utilize the number of cores is a complete idiot in my book. Hell, why don't we just put spin locks in there so your CPU usage shoots up and it looks like I'm using it to its full potential?

    My point is that there have been a few applications I've written that required a multi-threaded solution. Perhaps this API would have made my life easier, but I doubt it, as I pretty much had to structure each thread by hand. There are also frameworks and graphical libraries that use multi-threading, which the scheduler has taken care of in the past. Hurray for multi-core if you use those.

    A good programmer keeps things as simple as possible; they will be easier to maintain in the future. I'm afraid that this is an unneeded layer of abstraction, or some nut case trying to "utilize cores" for the sake of it. No one has only one application running at a time. The OS is usually running, you have a network process, etc. If I write my application to use one core, I'm giving the user more options to do whatever he wants with the other cores. Let the scheduler work with the futuristic hardware and sort that crap out.

    Also, not everyone is multi-core already. Take us into consideration, please!
    • by acidrain ( 35064 )

      why don't we just put spin locks in there so your CPU usage shoots up and it looks like I'm using it to its full potential?

      I heard stories of this being done by games companies when their publishers complained they weren't using the VU1 on the PS2 enough. That was the VU which was really hard to utilize, because it had no access to the rendering hardware. And yes, publishers ran the diagnostic tools available when you submitted builds.

    • 100% agree. Concurrency is a problem, not a solution, and it needs to be abstracted out early if you need it at all.
      • by grumbel ( 592662 )
        Concurrency is a problem, but it's one that you *can't* avoid. Everything in today's CPU development points very strongly in a multi-core direction; in a few years you won't be able to buy single cores any more, and a few years further down the road something like eight cores might be the norm. So how exactly do you want to write programs then? Single-threaded, using only 12.5% of the available computing power? I don't think so. Now, it is of course hard to write multi-threaded code in C++, but other languages such as Erlang
    • Re:What?! (Score:5, Insightful)

      by zx75 ( 304335 ) on Thursday March 22, 2007 @09:34AM (#18442175) Homepage
      I think you've missed the mark a little.

      I believe what he is saying is that if you're an application developer who is pushing the limits of what a single core is capable of in terms of performance, then you are going to see a decreasing rate of improvement and then stagnation, because the focus of hardware development is shifting away from more power in a single core to more power through more cores.
      At some point you will hit a wall, and for single-threaded applications you're going to reach a point where there isn't any more power to be had.

      Therefore, if you want to tap the extra power that a multi-core processor has, you will by definition *need* to start multi-threaded programming. This isn't about people who are happy with the speed and power they already have; research is pointless if you already have everything you could possibly need. This is for the people who push the edge: at some point, if you need more, you will need to learn to multi-thread correctly.

      And a simpler way to do it, is gold in my books.

      *From a former University classmate of Stephanos*
    • by LWATCDR ( 28044 )
      You are right. Some programs will never need to be multi-threaded. However, if a program is running slow right now, you can no longer bet on the next generation of hardware to give it a performance boost unless it is multi-threaded. It will really depend on your application.
    • "Anyone who just starts adding threads because they feel they need to utilize the number of cores is a complete idiot in my book."

      This is not about dreaming up ways to add concurrency, but utilizing concurrency options that already exist. For example, when a user of your application double clicks a row in your table, you need to grab the detail from the server and create a complex dialog to display that data. Clearly these tasks can run concurrently, but generally they are coded sequentially. On a single co
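      That double-click scenario might look like this with concurrent tasks (a hypothetical sketch using today's std::async; fetchDetail and buildDialog are stand-ins for app code, not any real API):

```cpp
#include <cassert>
#include <future>
#include <string>

// Stand-ins: one simulates the server round-trip, one builds the UI.
std::string fetchDetail(int row)  { return "detail:" + std::to_string(row); }
std::string buildDialog()         { return "dialog"; }

// Kick off the fetch asynchronously, build the dialog meanwhile,
// then join the two results when both are ready.
std::string openRow(int row) {
    auto detail = std::async(std::launch::async, fetchDetail, row);
    std::string dialog = buildDialog();  // runs while the fetch proceeds
    return dialog + "/" + detail.get();  // block only at the join point
}
```

      On one core this degrades to roughly the sequential version; on two, the dialog construction hides the server latency. (Compile with -pthread.)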
  • what a joke (Score:5, Insightful)

    by acidrain ( 35064 ) on Thursday March 22, 2007 @09:00AM (#18441773)

    From the site [rapidmind.net]:

    • 1. Replace types: The developer replaces numerical types representing floating point numbers and integers with the equivalent RapidMind platform types.
    • 2. Capture computations: While the user's application is running, sequences of numerical operations invoked by the user's application can be captured, recorded, and dynamically compiled to a program object by the RapidMind platform.
    • 3. Stream execution: The RapidMind platform runtime is used for managed parallel execution of program objects on the target hardware platform, which can be a GPU, the Cell processor, or a multicore CPU.

    Man, that's some funny stuff. Wow, that cracked me up. A *games* company using a tool that has this level of indirection?!? I sure hope these guys got a lot of money from their sucker VC to roll in.

    Look guys, there is no multi-processing silver bullet. It isn't even such a hard problem, *if you stop trying to solve it at such a low level*. Break your application into separate pieces that *don't need to communicate very often*. Then this is the same kind of problem that scalable websites like Google, MySpace, Hotmail and so on have already solved, just without having to factor in the reliability issues. Finer-grained multi-threading just leads to deadlocks and is really hard to debug.

    If you *really must* render the same sphere on 100 processors at the same time, then you need the speed of a custom-coded solution. But you don't, so let it go. The main loop of your program will be just fine as a single-threaded implementation, 1 processor will do, and farm the 10% code / 90% heavy lifting out in big clean chunks to other processors. If you find yourself writing some bizarre multi-threaded message-passing system so that you can have 100s of threads all modifying the same live object model at the same time -- you are fucked, just forget about it 'cause you will never be able to debug that one killer bug that you know is going to get you right as you go to ship.
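    A minimal sketch of this "big clean chunks" style in modern C++ (an illustration of the comment's point, not the poster's code): the main loop stays single-threaded and heavy work is farmed out in large, independent slices, so no locks or shared mutable state are needed.

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <future>
    #include <numeric>
    #include <vector>

    // Each worker sums its own slice of the input and returns a value.
    // The single-threaded caller combines the results afterwards, so the
    // workers never communicate while running.
    long long parallel_sum(const std::vector<int>& data, int chunks) {
        std::vector<std::future<long long>> parts;
        std::size_t step = data.size() / chunks;
        for (int c = 0; c < chunks; ++c) {
            std::size_t begin = c * step;
            std::size_t end = (c == chunks - 1) ? data.size() : begin + step;
            parts.push_back(std::async(std::launch::async, [&data, begin, end] {
                return std::accumulate(data.begin() + begin,
                                       data.begin() + end, 0LL);
            }));
        }
        long long total = 0;
        for (auto& p : parts) total += p.get();  // single-threaded combine
        return total;
    }
    ```

    Because each chunk is read-only over a disjoint range, there is nothing to deadlock on; the coordination cost is one join per chunk rather than fine-grained locking.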

  • Is there a version that isn't a 400+MB movie file? I was expecting an article.
  • by Ihlosi ( 895663 ) on Thursday March 22, 2007 @09:06AM (#18441839)
    Programmers must begin to develop applications that take full advantage of the increasing number of cores present in modern computers.



    No. Whether something can be done effectively on multiple cores doesn't depend on the programmer, but on the type of processing. Some things have to be done in a certain order, and there's nothing even the best programmer in the world can change about that, period. If you try hacking something together that uses multiple threads for this type of processing, you'll just end up making things slower and messier.



    On the other hand, there are other types of processing that just lend themselves fantastically to being done multithreaded.

  • Toy Supercomputer (Score:4, Interesting)

    by Doc Ruby ( 173196 ) on Thursday March 22, 2007 @09:17AM (#18441945) Homepage Journal
    The problem with programming the PS3 is that once the complexity of its parallel processors is handled, the CPU is so fast that it consumes and produces data much faster than the IO available. The Cell is a basically 204GFLOPS/32bit machine (plus the Power RISC, basically a Mac G5), with an internal 1.6Tbps bus. But even its builtin gigabit ethernet is puny compared to that kind of dataflow. It's not clear whether the USB slots are 1, 2 or 4 buses at 480Mbps each, but even 2Gbps more isn't so much. Maybe another gig-e can plug into its CompactFlash slot, bringing the total up to 4Gbps, but that's still only 0.25% the chip bus. In desperation, perhaps the SATA bus could also be used for another 1.3Gbps. Adding the HDMI output with some fancy codecing (especially on the receiving host) gives 10.2Gbps out, so the other 5.3Gbps can be used for input, but that's still only 5.3Gbps throughput, probably a lot less at under 100% efficiency per channel. The Cell can spin its wheels with 2000 instructions on the data it's got before it gets more. There are lots of "multimedia mixing" and transformation applications that could run multiple cycles in that 2K instructions, which instead need more machines for more IO.

    The PS3 doesn't seem to have the PCI-Express bus that would solve all these problems. For some reason Sony left out its old pet, FireWire, which could have added buses at 800Mbps each. There doesn't seem to be any expansion whatsoever, except changing the HD on the single SATA connector. A huge amount of complex, heterogeneous IO management is necessary to exploit the power the machine does have.

    It's strange to think that a $600 machine with around 5Gbps throughput and 7Tbps processing is a "toy", but the cropped IO makes the PS3 look that way, relative to its full power. Maybe a HW mod, even at $500 or possibly up to $2000, that adds PCIe for a half-dozen 2x10Gig-E cards, or even InfiniBand, will make this crazy little toy into more than just a development platform for games or prototypes for really expensive Cell machines. Who's got the way out?
    • by giminy ( 94188 )
      Your post seems to imply that *every* operation a CPU does requires some I/O. This is hardly the case. Most of the computation that occurs in a normal well-written program only requires the CPU. For example, compare these two code snippets and tell me which one is more likely to be seen in an actual program that gets run on your computer:

      Snippet 1:

      while (list != NULL) {
          if (list->val == i)
              break;
          list = list->next;
      }

      Snippet 2:

      fprintf(IOPORT, "Startin
      • Not only did I not imply that, I explicitly mentioned that the CPU:IO throughput means 2000 instructions per IO.

        You don't seem to know much about DSP. The vast majority of DSP is not logic, but arithmetic - the logic isn't usually that fast (except sometimes zero-overhead looping), but the arithmetic is extremely fast. The entire game in DSP is keeping the pipeline full. 2000:1 keeps the compute pipeline, the critical link, empty much of the time.

        Moreover, there's no time for cache fetches in DSP loops - it
  • How many cores? (Score:3, Informative)

    by Aladrin ( 926209 ) on Thursday March 22, 2007 @09:20AM (#18441991)
    "For his demo he created a program on the PlayStation 3 representing thousands of chickens, each independently tracked by a single processing core. "

    Wait wait wait... How many cores does a PS3 have? Thousands? I suspect someone has their facts sadly mistaken. I think they meant 'each with its own thread, using multiple cores to process the threads,' but that isn't nearly as impressive-sounding.
  • I browsed through the 411 MB ogg file, but could not find any chicks. Where are they?
  • Relativity (Score:2, Interesting)

    by stratjakt ( 596332 )
    However, multi-threaded development has been notoriously hard to do

    Only at first; once you wrap your head around it, it becomes second nature.

    To a newbie, recursion is hard to do. To somebody who's been writing functional FORTRAN for 25 years, object oriented is hard to do.

    It's just another way of thinking about problems. The real bitch is having the toolkits and thread safe libraries at your disposal.
    • Re: (Score:2, Informative)

      by Anonymous Coward
      The real bitch is when you have a bug, because your bug is not reproducible as easily as in any other programming method.
      So no, it's not only 'another way of thinking'.
      And good luck trying to 'extend' multithreaded stuff.
      Multithreading should only be used on very special occasions where it is really needed.
      That is hardly ever, in most end-user applications.

  • Active Objects (Score:2, Interesting)

    by lefticus ( 5620 )
    I'm not sure what techniques the developer is using, as the um, "article" is a little light on details (unless I missed something). But the concept of Active Objects (a trivialized way of using threads) has been around for a while, with generic implementations of them rapidly becoming more mainstream. In the past week there has been much discussion about active objects [boost.org] and "futures" [boost.org] on the boost [boost.org] mailing list and it is likely that both will become part of boost shortly. To put it simply, an active object is an o
  • Life is Pain (Score:3, Insightful)

    by gillbates ( 106458 ) on Thursday March 22, 2007 @09:33AM (#18442171) Homepage Journal

    First of all, I, and many others before me, have been writing multithreaded applications for years in the likes of Linux and UNIX. I have had to maintain multithreaded applications created by others. My collective experience tells me:

    It is not trivial.

    Let me repeat: It is not a trivial task. Even if you have libraries and an API which abstracts out the ugly stuff, you still have the problem of concurrency, proper locking, deadlocks, etc...

    The majority of problems with using multithreaded programming come not from "ugly" parts of the OS/API layer, but from a misunderstanding of the problem. A few problems in computer science - particularly in the physical sciences - do benefit from multithreading. And it is easier to use threads when writing a game than just to execute all of the IO in one big loop (Hello DOS!). But for most applications, using threads is not only unnecessary, but overkill, and introduces the possibility of yet another class of bugs for which the application must be tested. Furthermore, as deadlock and race conditions are often timing related, they are the most difficult type of bug to find and fix. Finding and fixing this class of bugs is still somewhat of a black art in the industry, and is highly dependent on the skill and experience of the programmer.

    In short, unless your system/application design cannot do without multithreaded programming, it is best not to use it. Even with a glossy API, you still cannot escape the fact that debugging a multithreaded application is an order of magnitude more difficult than a single threaded one. In any case, you shouldn't be using threads just because you can.
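    A small sketch of why those timing-related bugs are so nasty, and of one standard mitigation (my illustration, not the poster's): the classic lock-ordering deadlock occurs when two threads take the same two mutexes in opposite order. Acquiring both together (here via C++'s std::lock) removes that failure mode.

    ```cpp
    #include <cassert>
    #include <mutex>
    #include <thread>

    struct Account {
        std::mutex m;
        int balance = 100;
    };

    // Naively locking from.m then to.m would deadlock when two threads
    // transfer in opposite directions. std::lock acquires both mutexes
    // with a deadlock-avoidance algorithm, so either direction is safe.
    void transfer(Account& from, Account& to, int amount) {
        std::lock(from.m, to.m);
        std::lock_guard<std::mutex> g1(from.m, std::adopt_lock);
        std::lock_guard<std::mutex> g2(to.m, std::adopt_lock);
        from.balance -= amount;
        to.balance += amount;
    }
    ```

    Note that even this "fixed" version only prevents one specific bug class; races on other shared state, priority inversion, and starvation still have to be reasoned about separately, which is the poster's point.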

  • I seem to recall pretty much every app I used under OS/2 took advantage of threading. The workplace shell, of course, being the prime example. This was in 1992.

    The problem, I think, is that the majority of programmers out there today who were just hobbyists back then, were learning on a very single-threaded platform. Because the model was never there, it's 'hard'. With OS/2 3+, it was always there, and anybody who dabbled on that platform were immediately exposed to how to implement threads, as they we
  • by sdt ( 7606 ) on Thursday March 22, 2007 @10:01AM (#18442549) Homepage

    Good morning slashdot!

    As the (slightly terrified to find himself mentioned on slashdot) presenter in the video linked to above I thought I'd respond to a couple of comments in bulk. First off, I'm part of a much bigger team at RapidMind that builds this software to make targeting multicore and stream processors easier -- the system and the "chicken demo" was a group effort, and you can read more about it and the company in general in the article linked to from here [rapidmind.net], which unfortunately is PDF-only.

    For those crying out about multi-threading not being the solution: you're absolutely right! Our platform's approach to programming multi-core processors is to expose a data parallel model. In this model, the programmer explicitly deals with parallel programming (writing algorithms to work well on arbitrarily many cores) but all of the standard multi-threading issues such as deadlocks and race conditions are avoided, and the developer doesn't worry about how many cores there actually are.

    And no, the chicken demo didn't run each chicken on an individual core ;). But it did automatically scale to however many cores were available -- 6 SPUs and a PPU on the PS3, and 16 SPUs and 2 PPUs on a Cell Blade (on which we originally showed the simulation at GDC 2006).

    If you want to learn more, drop by our website at http://www.rapidmind.net [rapidmind.net]. You can sign up for a free no-strings-attached evaluation version if you want to try it yourself.

  • by Black Parrot ( 19622 ) on Thursday March 22, 2007 @10:07AM (#18442627)
    Ada, which has had easy-to-use multithreading constructs built right into the language for the past 25 years or so.
    • Re: (Score:3, Insightful)

      by Coryoth ( 254751 )
      Unfortunately I think many programmers that read Slashdot are scared off by the clear, readable, maintainable syntax. Typing end is clearly too much work, or something, and as we all know IDEs can't possibly help with that... I would like to see Ada get more use, but unfortunately I doubt it is going to happen.
  • This is ridiculous tripe. Multi-threaded programming is hard not because the libraries are hard to use but because it requires a lot of planning and thought to decide if you can actually gain a benefit by going multi-threaded.

    The main benefit of multiple cores will not happen in userland. It will be in the kernels and the libcs. Once userland processes can effectively get memory from the heap with minimal locking, we will see a performance boost system-wide (I'm talking 100 processes can all request memo

  • And it is called boost::futures [boost-consulting.com].

    The theory behind it, though, is not new: the Actor model is quite old, and it has been used in Erlang [erlang.org] for quite some time.

  • by Anonymous Coward on Thursday March 22, 2007 @10:24AM (#18442845)
    The Communicating Sequential Processes style of programming allows for many lightweight, simple threads that communicate over channels rather than relying on monitor-based thread synchronization.

    The OCCAM language implemented this style of processing and the Transputer chip implemented a fast context switching hardware that OCCAM could run on.

    This was all done back in the 1980s.

    I even implemented the original version of the Java Communicating Sequential Processes API which brought CSP style programming to the Java world, although it is based on Java's underlying Thread mechanism so context switching isn't as fast as it could be.
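    A rough sketch of the channel idea in plain C++ (an approximation of the CSP style described above, not the occam or JCSP API): threads share nothing and interact only by sending values over a channel, so the synchronisation lives in one small, reusable place.

    ```cpp
    #include <cassert>
    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>

    // An unbounded blocking channel: send() enqueues a value and wakes a
    // receiver; receive() blocks until a value is available. The mutex and
    // condition variable are hidden inside the abstraction.
    template <typename T>
    class Channel {
        std::queue<T> q;
        std::mutex m;
        std::condition_variable cv;
    public:
        void send(T v) {
            {
                std::lock_guard<std::mutex> g(m);
                q.push(std::move(v));
            }
            cv.notify_one();
        }
        T receive() {
            std::unique_lock<std::mutex> g(m);
            cv.wait(g, [this] { return !q.empty(); });
            T v = std::move(q.front());
            q.pop();
            return v;
        }
    };
    ```

    Real CSP implementations add features this sketch omits (rendezvous semantics, alternation over several channels), but even this much keeps application code free of raw locks.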
  • Transactional Memory (Score:4, Interesting)

    by omnirealm ( 244599 ) on Thursday March 22, 2007 @10:31AM (#18442961) Homepage
    For those who have not caught wind of this yet, transactional memory [wikipedia.org] is currently the most promising solution to this problem and perhaps the most-covered subject in research conferences on parallel computing today. There have been several proposals for both hardware-based (at the cache level) and software-based architectures. Transactional memory greatly simplifies concurrent programming. When using transactions instead of locks, deadlocks go away completely and there is increased concurrency.
  • by igomaniac ( 409731 ) on Thursday March 22, 2007 @10:36AM (#18443031)
    There are a lot of posts saying that multithreading is really hard, which is completely true... But what RapidMind is providing is something else, something more like a SIMD model or vector computations. It solves things like elementwise operations on large arrays in an efficient manner, using whatever parallel computing resources are available. It's a language with semantics that don't require complicated synchronisation, because you're basically telling the compiler which operations are independent and then it can go off and compute them in the most efficient way possible. RapidMind was designed to make GPGPU programming easy, so it's a generalisation of the pixel shader model where you have a lot of 'threads' computing the color of each pixel on the display in parallel. This is an easy problem, because there is basically no communication between threads.
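    The shader-like model can be sketched in ordinary C++ (a hand-rolled illustration of the idea, not RapidMind's actual API): a pure per-element kernel is applied across a large array, split over however many hardware threads exist. Since elements never communicate, no synchronisation is needed beyond the final joins.

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Apply a side-effect-free kernel to every element, one contiguous
    // slice per hardware thread. The caller never sees the thread count.
    template <typename Kernel>
    void parallel_for_each(std::vector<float>& data, Kernel kernel) {
        unsigned n = std::max(1u, std::thread::hardware_concurrency());
        std::size_t step = (data.size() + n - 1) / n;
        std::vector<std::thread> workers;
        for (unsigned t = 0; t < n; ++t) {
            std::size_t begin = t * step;
            std::size_t end = std::min(data.size(), begin + step);
            if (begin >= end) break;
            workers.emplace_back([&data, kernel, begin, end] {
                for (std::size_t i = begin; i < end; ++i)
                    data[i] = kernel(data[i]);  // each element independent
            });
        }
        for (auto& w : workers) w.join();
    }
    ```

    This is exactly the pixel-shader discipline the comment describes: the programmer promises the elements are independent, and the runtime is then free to use one core or sixteen.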
  • "Gigahertz are out and cores are in. Programmers must begin to develop applications that take full advantage of the increasing number of cores present in modern computers."

    The marketplace wants and needs new technologies for more powerful processors. Multicore serves the needs of chip makers, not their customers. Making all software multi-threaded is trying to solve the wrong problem. It's going to result in lower-quality software without a significant increase in performance.
  • by MS-06FZ ( 832329 ) on Thursday March 22, 2007 @12:25PM (#18444717) Homepage Journal
    I'm sure the demonstration would've been a lot more difficult if he'd used philosophers instead of chickens. Thing is, chickens can't even hold chopsticks. A chicken just goes straight for the feed, so there's just one resource being acquired. It's still possible for a chicken to starve, but as chickens don't eat that much it's more likely that any shut-out chickens would simply go hungry for a while, and then get to eat before starving.
  • by IceFox ( 18179 ) on Thursday March 22, 2007 @03:29PM (#18448261) Homepage
    A project that you can download and play with today is Trolltech's QtConcurrent [trolltech.com]. Given a task it will automatically manage creating threads and distributing the task among your cores.

    From the project page:

    The classes and functions available in the Qt Concurrent package allow you to write multi-threaded applications without having to use the basic threading synchronization primitives such as mutexes and wait conditions. This makes it easier to reason about and test parallel programs to make sure that they are correct.
    The Qt Concurrent components manage the threads they use automatically. Each application has a global thread counter, which limits the maximum number of threads used at the same time. The maximum is scaled according to the number of CPU cores on the system at runtime. This means that programs written with Qt Concurrent today will continue to scale when deployed on many-core systems in the future.

    Very cool.
  • by Tjp($)pjT ( 266360 ) on Thursday March 22, 2007 @03:33PM (#18448325)
    The Inmos Transputer's C language development environment had an elegant solution, which in my opinion should be migrated to mainstream C and C++:

    parallel
    {
        /* execute these statements in parallel if possible */
        statement1;
        statement2;
        ...
        statementn;
    }

    sequential
    {
        /* execute these statements in order as written */
        statement_1;
        statement_2;
        ...
        statement_n;
    }
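    A hedged approximation of the occam-style `parallel { ... }` block in standard C++ (my sketch, not the Transputer toolchain): each "statement" is passed as a callable, run on its own thread, and the block does not complete until all of them have joined, mirroring occam's PAR semantics.

    ```cpp
    #include <cassert>
    #include <thread>

    // Run every callable concurrently and wait for all of them, so the
    // construct behaves like a block: nothing after it runs until every
    // branch has finished. Requires at least one argument.
    template <typename... Fs>
    void parallel(Fs... statements) {
        std::thread threads[] = { std::thread(statements)... };
        for (auto& t : threads) t.join();
    }
    ```

    Ordinary sequencing already gives the `sequential { ... }` half, which is arguably why C never grew the keyword; the interesting addition is only the structured fork/join.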
