Software

Is Parallel Programming Just Too Hard? 680

pcause writes "There has been a lot of talk recently about the need for programmers to shift paradigms and begin building more parallel applications and systems. The need to do this, and the hardware and systems to support it, have been around for a while, but we haven't seen a lot of progress. The article says that gaming systems have made progress, but MMOGs are typically years late, and I'll bet part of the problem is trying to be more parallel/distributed. Since this discussion has been going on for over three decades with little progress in terms of widespread change, one has to ask: is parallel programming just too difficult for most programmers? Are the tools inadequate, or is it perhaps that it is very difficult to think about parallel systems? Maybe it is a fundamental human limit. Will we really see progress in the next 10 years that matches the progress of the silicon?"
  • Nope. (Score:2, Insightful)

    by Anonymous Coward
    Parallel programming isn't all that hard; what is difficult is justifying it.

    What's hard is trying to write multi-threaded Java applications that work on my VT-100 terminal. :-)
    • Re:Nope. (Score:5, Interesting)

      by lmpeters ( 892805 ) on Monday May 28, 2007 @11:49PM (#19305295)

      It is not difficult to justify parallel programming. Ten years ago, it was difficult to justify because most computers had a single processor. Today, dual-core systems are increasingly common, and 8-core PCs are not unheard of. And software developers are already complaining because it's "too hard" to write parallel programs.

      Since Intel is already developing processors with around 80 cores [intel.com], I think that multi-core (i.e. multi-processor) processors are only going to become more common. If software developers intend to write software that can take advantage of current and future processors, they're going to have to deal with parallel programming.

      I think that what's most likely to happen is we'll see the emergence of a new programming model, which allows us to specify an algorithm in a form resembling a Hasse diagram [wikipedia.org], where each point represents a step and each edge represents a dependency, so that a compiler can recognize what can and cannot be done in parallel and set up multiple threads of execution (or some similar construct) according to that.
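      To make the idea concrete, here is a toy sketch of that model using today's java.util.concurrent (the steps a, b, and c are hypothetical): each step is a future, each dependency an edge, and the runtime is free to run steps with no edge between them in parallel.

        import java.util.concurrent.CompletableFuture;

        public class DagSketch {
            public static void main(String[] args) {
                // a and b share no edge, so the runtime may execute them in parallel
                CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> 2);
                CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> 3);

                // c has edges to both a and b, so it runs only after both complete
                CompletableFuture<Integer> c = a.thenCombine(b, Integer::sum);

                System.out.println(c.join()); // prints 5
            }
        }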

      • Re:Nope. (Score:5, Interesting)

        by poopdeville ( 841677 ) on Tuesday May 29, 2007 @12:11AM (#19305465)
        I think that what's most likely to happen is we'll see the emergence of a new programming model, which allows us to specify an algorithm in a form resembling a Hasse diagram, where each point represents a step and each edge represents a dependency, so that a compiler can recognize what can and cannot be done in parallel and set up multiple threads of execution (or some similar construct) according to that.

        This is more-or-less how functional programming works. You write your program using an XML-like tree syntax. The compiler utilizes the tree to figure out dependencies. See http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-10.html#%25_sec_1.1.5 [mit.edu]. More parallelism can be drawn out if the interpreter "compiles" as yet unused functions while evaluating others. See the following section.
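        A rough flavor of that in Java, assuming the mapped function is pure: because no call depends on any other, the runtime may apply it across elements in any order, on any thread.

          import java.util.List;
          import java.util.stream.Collectors;

          public class PureMap {
              // A pure function: no shared state, so calls are independent.
              static long square(long n) { return n * n; }

              public static void main(String[] args) {
                  List<Long> squares = List.of(1L, 2L, 3L, 4L).parallelStream()
                          .map(PureMap::square)          // runtime parallelizes the map
                          .collect(Collectors.toList()); // result order is preserved
                  System.out.println(squares); // [1, 4, 9, 16]
              }
          }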
      • Re:Nope. (Score:5, Interesting)

        by Lost Engineer ( 459920 ) on Tuesday May 29, 2007 @12:34AM (#19305587)
        It is still difficult to justify if you can more easily write more efficient single-threaded apps. What consumer-level apps out there really need more processing power than a single core of a modern CPU can provide? I already understand the enterprise need. In fact, multi-threaded solutions for enterprise and scientific apps are already prevalent, that market having had SMP for a long time.
        • Re: (Score:3, Insightful)

          by lmpeters ( 892805 )

          What consumer-level apps out there really need more processing power than a single core of a modern CPU can provide?

          The iLife suite. Especially iMovie. And let's not forget the various consumer and professional incarnations of Photoshop--none of them support more than two cores.

        • Re:Nope. (Score:4, Informative)

          by Anonymous Coward on Tuesday May 29, 2007 @01:00AM (#19305731)
          Functional programming is no harder than procedural/OO programming. A good functional interpreter can already draw this kind of parallelism out of a program. And yes, there are compiled functional languages with parallelizing compilers. (Erlang and OCaml come to mind)
          • Re:Nope. (Score:4, Insightful)

            by RzUpAnmsCwrds ( 262647 ) on Tuesday May 29, 2007 @02:22AM (#19306095)

            Functional programming is no harder than procedural/OO programming.


            That's difficult to say because true functional programming is so vastly different. We have so much time and energy invested in imperative algorithms that it's difficult to know whether functional methods are easier or harder to design.

            In a sense, it's like saying that Hybrid Synergy Drive is simpler than a traditional transmission. It's true on a conceptual level, but Toyota hasn't tried to put HSD everywhere it has put a traditional transmission and therefore we may not fully understand the complexities of trying to extrapolate the results to the entire problem space.

            So, I think the bottom line is, functional programming probably wouldn't be any harder if it existed in a world where it was dominant.

            Remember, a large part of being an effective programmer is understanding how (and if) your problem has been solved before. It may be far from optimal, but CPUs are largely designed to look like they are executing sequential instructions. Multicore and multithreaded designs are changing that model, but it's not a change that happens overnight.
        • Re: (Score:3, Insightful)

          by bcrowell ( 177657 )
          It is still difficult to justify if you can more easily write more efficient single-threaded apps. What consumer-level apps out there really need more processing power than a single core of a modern CPU can provide?
          Agreed. I just replaced my old machine with a dual-core machine a few months ago, and I find that I basically always have one cpu idle. When things are slow, they're usually slow because they're waiting on I/O. At one point I thought I had a killer app, which was ripping MP3s from CDs. I would
      • Re:Nope. (Score:4, Informative)

        by Durandal64 ( 658649 ) on Tuesday May 29, 2007 @01:30AM (#19305865)
        Multi-threaded programming is very difficult. But some things just shouldn't be done on multiple threads either. Multi-threading is a trade-off to get (generally) better efficiency and performance in exchange for vastly more complex control logic in many cases. This greater complexity means that the program is much more difficult to debug and maintain. Sometimes multi-threading a program is just a matter of replacing a function call with a call to pthread_create(...). But sometimes a program just can't be multi-threaded without introducing unacceptable complexity. A lot of the complaints of difficulty in multi-threading come from people trying to multi-thread programs that can't be easily changed.
      • Re: Nope. (Score:4, Informative)

        by Dolda2000 ( 759023 ) <fredrik@nOsPAM.dolda2000.com> on Tuesday May 29, 2007 @06:16AM (#19307171) Homepage
        It is definitely hard to justify parallel programming, even though many computers are gaining SMP capabilities. The thing isn't necessarily that it is particularly hard to write multi-threaded applications -- the thing is that it is a lot harder to write a multi-threaded program than to write a single-threaded program. Suddenly, you have to introduce locks in all shared data structures and ensure proper locking in all parts of the program. That kind of thing adds significantly to the complexity of a program, and it requires a lot more testing as well. Therefore, justification is definitely needed.

        The real question then, is: Is it justified? To be honest, for most programs, the answer is no. Most interactive programs have a CPU-time/real-time ratio of a lot less than 1% during their lifetime (and very likely far less than 10% during normal, active use), so any difference brought by parallelizing them won't even be noticed. Other programs, like compilers, don't need to be parallelized, since you can just run "make -j8" to use all of your 8 cores at once. I would also believe that there are indeed certain programs that are rather hard to parallelize, like games. I haven't written a game in quite a long time now, and I don't know what advances the industry has made of late, but a game engine's step cycle usually involves a lot of small steps, where the execution of the next depends on the result of the previous one. You can't even coherently draw a scene before you know that the state of all game objects has been calculated in full. Not that I'm saying that it isn't parallelizable, but I would think it is, indeed, rather hard.

        So where does that leave us? I, for one, don't really see a great segment of programs that really need parallelizing. There may be a few interactive programs, like movie editors, where the program logic is heavy enough for it to warrant a separate UI thread to maintain interactive responsiveness, but I'd argue that segment is rather small. A single CPU core is often fast enough not to warrant parallelizing even many CPU-heavy programs. There definitely is a category of programs that do benefit from parallelization (e.g. database engines which serve multiple clients), but they are often parallelized already. For everyone else, there just isn't incentive enough.
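        To illustrate the locking point above, a toy Java sketch: a counter that is trivial single-threaded picks up locking ceremony the moment it is shared, and every future caller has to respect it.

          import java.util.concurrent.locks.ReentrantLock;

          class SharedCounter {
              private final ReentrantLock lock = new ReentrantLock();
              private long count; // shared state: every access must hold the lock

              void increment() {
                  lock.lock();
                  try {
                      count++; // the read-modify-write must be atomic
                  } finally {
                      lock.unlock(); // unlock even if the body throws
                  }
              }

              long get() {
                  lock.lock();
                  try { return count; } finally { lock.unlock(); }
              }
          }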

    • Re:Nope. (Score:4, Funny)

      by dch24 ( 904899 ) on Tuesday May 29, 2007 @12:07AM (#19305443) Journal

      Parallel programming isn't all that hard
      Then why is it that (as of right now) all the up-modded posts are laid out sequentially down the comment tree?
      • Re:Nope. (Score:5, Insightful)

        by Gorshkov ( 932507 ) <AdmiralGorshkov@[ ]il.com ['gma' in gap]> on Tuesday May 29, 2007 @12:47AM (#19305651)

        Then why is it that (as of right now) all the up-modded posts are laid out sequentially down the comment tree?
        Because one of the things TFM neglects to mention is that parallel programming, like any other programming method, is suitable for some things and not for others .... and the hard reality is that most application programs you see on the desktop are basically serial, simply because of the way PEOPLE process tasks & information.

        There is a very real limit as to how much you can parallelize standard office tasks.
    • Re: (Score:3, Insightful)

      by gnalre ( 323830 )
      Most of the problems of parallel programming were solved decades ago. Unfortunately the industry can be very conservative with new programming methods and tends to shoehorn the tried and trusted onto new problems.

      The truth is languages such as C/C++ and Java (to a lesser extent) are not good languages to write parallel code in. They do not have the constructs built in, such as process communication links, mutexes, and semaphores, that parallel programs rely on. You end up writing arcane code to get this sort of fu
      • Re: (Score:3, Informative)

        by stony3k ( 709718 )

        The truth is languages such as C/C++ and Java(to lesser extent) are not good languages to write parallel code in.
        To a certain extent Java is trying to correct that with the java.util.concurrent library. Unfortunately, not many people seem to have started using it yet.
        • Re: (Score:3, Insightful)

          by EricTheRed ( 5613 )
          That is true. Over the last six months I've been doing interviews for a single developer position, and the one common question I've asked is about java.util.concurrent. Of some 20 interviewees, only one had a vague idea what it was.

          Here we use it quite extensively because it's the best way to handle parallel code and shared data structures.

          It's probably one of the most useful things in Java 5 & 6, and the least used.
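          For anyone who hasn't looked at it, a small sketch of why it helps (a hypothetical word-count use): the library's structures absorb the locking you would otherwise write by hand.

            import java.util.concurrent.ConcurrentHashMap;
            import java.util.concurrent.atomic.AtomicLong;

            class WordCounts {
                // No explicit locks: ConcurrentHashMap handles contention internally,
                // so any number of threads may call record() at once.
                private final ConcurrentHashMap<String, AtomicLong> counts =
                        new ConcurrentHashMap<>();

                void record(String word) {
                    counts.computeIfAbsent(word, w -> new AtomicLong()).incrementAndGet();
                }

                long count(String word) {
                    AtomicLong n = counts.get(word);
                    return n == null ? 0 : n.get();
                }
            }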
    • by node159 ( 636992 ) on Tuesday May 29, 2007 @05:02AM (#19306817)
      I've seen it over and over in the industry, there is a distinct lack of parallel programming skill and knowledge, and even senior developers with years under their belt struggle with the fundamentals.

      Parallel programming does add a layer of complexity, and its inherent lack of general solutions makes abstracting that complexity away difficult, but I suspect that the biggest issue is the lack of training of the work force. It isn't something you can pick up easily - the learning curve is steep and full of hard lessons - and it definitely isn't something that can be learnt on the fly with deadlines looming.
      Another aspect is that it's fundamental to the design: parallelism can and often will dictate the design, and if the software architects are not actively designing for it, or are not experienced enough to ensure that it remains a viable future option, later attempts to parallelise can be difficult at best.

      Ultimately the key issues are:
      * Lack of skill and training in the work force (including the understanding that there are no general solutions)
      * Lack of mature development tools to ease the development process (debugging tools especially)
      • Re: (Score:3, Interesting)

        by CodeBuster ( 516420 )
        Lack of skill and training in the work force (including the understanding that there are no general solutions)

          You may be shocked then to discover that a substantial percentage of the lecturers and professors in computer science programs at American (and probably foreign as well) universities have little or no *practical* experience in programming large multithreaded applications as well (they know what it is, of course, and they wave their hands while describing it, but the programming details are often le
  • by rritterson ( 588983 ) * on Monday May 28, 2007 @11:28PM (#19305175)
    I can't speak for the rest of the world, or even the programming community. That disclaimer spoken, however, I can say that parallel programming is indeed hard. The trivial examples, like simply running many processes in parallel that are doing the same thing (as in, for example, Monte Carlo sampling), are easy, but the more difficult examples of parallelized mathematical algorithms I've seen, such as those in linear algebra, are difficult to conceptualize, let alone program. Trying to manage multiple threads and process communication in an efficient way when actually implementing it adds an additional level of complexity.

    I think the biggest reason why it is difficult is that people tend to process information in a linear fashion. I break large projects into a series of chronologically ordered steps and complete one at a time. Sometimes if I am working on multiple projects, I will multitask and do them in parallel, but that is really an example of trivial parallelization.

    Ironically, the best parallel programmers may be those good managers, who have to break exceptionally large projects into parallel units for their employees to simultaneously complete. Unfortunately, trying to explain any sort of technical algorithm to my managers usually elicits a look of panic and confusion.
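    To show how low the bar is for the trivial case, here's a toy Java sketch of Monte Carlo sampling (estimating pi): every sample is independent, so splitting the work across cores needs no coordination at all.

      import java.util.concurrent.ThreadLocalRandom;
      import java.util.stream.LongStream;

      public class MonteCarloPi {
          public static void main(String[] args) {
              long n = 10_000_000L;
              // Samples are independent, so the range splits cleanly across cores.
              long hits = LongStream.range(0, n).parallel().filter(i -> {
                  double x = ThreadLocalRandom.current().nextDouble();
                  double y = ThreadLocalRandom.current().nextDouble();
                  return x * x + y * y <= 1.0; // inside the quarter circle?
              }).count();
              System.out.println(4.0 * hits / n); // approaches 3.14159...
          }
      }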
    • by Anonymous Coward on Monday May 28, 2007 @11:36PM (#19305223)
      I do a lot of multithreaded programming; this is my bread and butter really. It is not easy - it takes a specific mindset, though I would disagree that it has much to do with management. I am not a manager, never was one and never will be. It requires discipline and careful planning.

      That said, parallel processing is hardly a holy grail. On one hand, everything is parallel processing (you are reading this message in parallel with others, aren't you?). On the other, when we are talking about a single computer running a specific program, parallel usually means "serialized but switched really fast". At most there is a handful of processing units. That means that whatever it is you are splitting among these units has to lend itself well to splitting that number of ways. Do more - and you are overloading one of them; do less - and you are underutilizing resources. In the end, it would often be easier to do the processing serially. The potential performance advantage is not always very high (I know, I get paid to squeeze out this last bit), and is usually more than offset by the difficulty in maintenance.
      • Re: (Score:3, Insightful)

        you are reading this message in parallel with others, aren't you?

        I don't know about the rest of Slashdot, but I read comments in a linear fashion - one comment, then the next comment, etc. Most people that I have known read in a linear fashion.

        Walking and chewing gum is a parallel process. Reading is generally linear.
        • by poopdeville ( 841677 ) on Tuesday May 29, 2007 @12:35AM (#19305593)
          That's debatable. You don't look at text one letter at a time to try to decipher the meaning, do you? You probably look at several letters, or even words, at a time. Understanding how the letters or words relate (spatially, syntactically, semantically) is a parallel process. Unless you're a very slow reader, your eyes have probably moved on from the words you're interpreting before you've understood their meaning. This is normal. This is how you establish a context for a particular word or phrase -- by looking at the surrounding words. Another parallel process.

          Every process is serial from a broad enough perspective. Eight hypothetical modems can send 8 bits per second. Or are they actually sending a single byte?
          • Re: (Score:3, Insightful)

            If you want to make that argument, you might as well make the argument that a computer doing the simple addition of two numbers is doing parallel processing.

            Your view could also be turned around: looked at narrowly enough, anything can be considered to be parallel. However, realistically, we know that isn't really the case.
          • That's irrelevant. (Score:5, Insightful)

            by Estanislao Martínez ( 203477 ) on Tuesday May 29, 2007 @02:54AM (#19306233) Homepage

            Our cognitive system does many things at the same time, yes. That doesn't answer the question that's being posed here: whether explicit, conscious reasoning about parallel processing is hard for people.

          • by Moraelin ( 679338 ) on Tuesday May 29, 2007 @03:54AM (#19306453) Journal
            Actually, it's more like pipelined. The fact that your eyes have already moved to the next letter just says that the old one is still going through the pipeline. Yeah, there'll be some Bayesian prediction and pre-fetching involved, but it's nowhere near consciously doing things in parallel.

            Try reading two different texts side by side, at the same time, and it won't work that neatly parallel any more.

            Heck, there were some recent articles about why most Powerpoint presentations are a disaster: in a nutshell, because your brain isn't that parallel, or doesn't have the bandwidth for it. If you try to read _and_ hear someone saying something (slightly) different at the same time, you just get overloaded and do neither well. The result is those time-wasting meetings where everyone goes fuzzy-brained and forgets everything as soon as the presentation flipped to the next chart.

            To get back to the pipeline idea, the brain seems to be quite the pipelined design. Starting from say, the eyes, you just don't have the bandwidth to consciously process the raw stream of pixels. There are several stages of buffering, filtering out the irrelevant bits (e.g., if you focus on the blonde in the car, you won't even notice the pink gorilla jumping up and down in the background), "tokenizing" it, matching and cross-referencing it, etc, and your conscious levels work on the pre-processed executive summary.

            We already know, for example, that the shortest term buffer can store about 8 seconds worth of raw data in transit. And that after about 8 seconds it will discard that data, whether it's been used or not. (Try closing your eyes while moving around a room, and for about 8 seconds you're still good. After that, you no longer know where you are and what the room looks like.)

            There's a lot of stuff done in parallel at each stage, yes, but the overall process is really just a serial pipeline.

            At any rate, yeah, your eyes may already be up to 8 seconds ahead of what your brain currently processes. It doesn't mean you're that much of a lean, mean, parallel-processing machine, it just means that some data is buffered in transit.

            Even time-slicing won't really work that well, because of that (potential) latency and the finite buffers. If you want to suddenly focus on another bit of the picture, or switch context to think of something else, you'll basically lose some data in the process. Your pipeline still has the old data in it, and it's going right to the bit bucket. That or both streams get thrashed because there's simply not enough processing power and bandwidth for both to go through the pipeline at the same time.

            Again, you only need to look at the fuzzy-brain effect of bad Powerpoint presentations to see just that in practice. Forced to try to process two streams at the same time (speech and text), people just make a hash of both.
            • by Mattintosh ( 758112 ) on Tuesday May 29, 2007 @10:25AM (#19309595)
              Again, you only need to look at the fuzzy-brain effect of bad Powerpoint presentations to see just that in practice. Forced to try to process two streams at the same time (speech and text), people just make a hash of both.

              There's way more to it than that. The brain is an efficient organizer and sorter. It also tends to be great at estimating the relative importance of things and discarding the lesser ones it can't deal with. Thus, 99.999999999% of the time, I ignore Powerpoint presentations. It goes in my eyes, the brain deciphers the signal, decides that the Powerpoint stuff is useless drivel, and it continues processing the audio (in parallel!) and terminates the visual processing thread. Shortly thereafter, the audio signal is also determined to be of minimal benefit and is discarded as useless drivel as well, leaving more processing time for other things. Things like pondering the answer to the age-old question, "What's for lunch?"
          • Re: (Score:3, Funny)

            by Hoi Polloi ( 522990 )
            Actually I parallel process when I read most Slashdot comments. Right now for instance I was reading your comment but thinking about cupcakes. Ummm, cupcakes...
    • Re: (Score:3, Interesting)

      People may or may not process information in a linear fashion, but human brains are, apparently, massively parallel computational devices.

      Addressing architecture for Brain-like Massively Parallel Computers [acm.org]

      or from a brain-science perspective

      Natural and Artificial Parallel Computation [mit.edu]

      The common tools (Java, C#, C++, Visual Basic) are still primitive for parallel programming. Not much more than semaphores and some basic multi-threading code (start/stop/pause/communicate from one thread to another v
      • Re: (Score:3, Insightful)

        by buswolley ( 591500 )
          Our brains may be massively parallel, but this doesn't mean we can effectively attend to multiple problem domains at once. In particular, our conscious attentional processes tend to be able to focus on at most a couple of things at once, albeit at a high level of organizational description.
        • by TheMCP ( 121589 ) on Tuesday May 29, 2007 @01:20AM (#19305821) Homepage
          Most programmers have difficulty thinking about recursive processes as well, but there are still some who don't and we still have use for them. I should say "us", as I make many other programmers batty by using recursion frequently. Programmers tell me all the time that they find recursion difficult - difficult to write, difficult to trace, difficult to understand, difficult to debug. Conversely, I find it easier - all I have to do is reduce the problem to its simplest form and determine the end case, and a tiny snip of code will do where a huge mess of iterative code would otherwise have been required. So, I don't understand why anyone would want to write iterative code when recursion can solve the problem.
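          To make the recipe concrete, a toy sketch in Java: reduce the problem to its simplest form, name the end case, and the code writes itself.

            public class Reverse {
                static String reverse(String s) {
                    if (s.isEmpty()) return s;                    // end case
                    return reverse(s.substring(1)) + s.charAt(0); // simplest reduction
                }

                public static void main(String[] args) {
                    System.out.println(reverse("parallel")); // prints "lellarap"
                }
            }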

          I suspect that parallel programming may be similar - some programmers will "get it", others won't. Those who "get it" will find it fun and easy and be unable to understand why everyone else finds it hard.

          Also, most development tools were created with a single processor system in mind: IDEs for parallel programming are a new-ish concept and there are few. As more are developed we'll learn about how the computer can best help the programmer to create code for a parallel system, and the whole process can become more efficient. Or maybe automated entirely; at least in some cases, if the code can be effectively profiled the computer may be able to determine how to parallelize it and the programmer may not have to worry about it. So, I think it's premature to argue about whether parallel programming is hard or not - it's different, but until we have taken the time to further develop the relevant tools, we won't know if it's really hard or not.

          And of course, for a lot of tasks it simply won't *matter* - anything with a live user sitting there, for example, only has to be fast enough that the person perceives it as being instantaneous. Any faster than that is essentially useless. So, for anything that has states requiring user input, there is a "fast enough" beyond which we need not bother optimizing unless we're just trying to speed up the system as a whole, and that sort of optimization is usually done at the compiler level. It is only for software requiring unusually large amounts of computation or for systems which have been abstracted to the point of being massively inefficient beneath the surface that the fastest possible computing speed is really required, and those are the sorts of systems to which specialist programmers could be applied.
          • Re: (Score:3, Insightful)

            by gnalre ( 323830 )
            Brought up programming C and Fortran, I struggled for a long time with recursion. When I moved on to using functional languages (Erlang to be specific), where recursion is integral (there is no loop construct), I eventually became quite comfortable with it.

            It goes to show: to become a better programmer, investigate as many programming paradigms as possible.

    • by bloosqr ( 33593 ) on Monday May 28, 2007 @11:52PM (#19305331) Homepage
      Back when i was in graduate school we used to joke .. in the future everything will be monte carlo :)

      While everything perhaps can't be solved using monte carlo type integration tricks .. there is more that can be done w/ 'variations of the theme' than is perhaps obvious .. (or perhaps you can rephrase the problem and ask the same question a different way) .. perhaps if you are dreaming like .. what happens if i have a 100,000 processors at my disposal etc

    • by jd ( 1658 ) <imipak&yahoo,com> on Tuesday May 29, 2007 @12:00AM (#19305395) Homepage Journal
      Parallel programming is indeed hard. The standard approach these days is to decompose the parallel problem into a definite set of serial problems. The serial problems (applets) can then be coded more-or-less as normal, usually using message-passing rather than direct calls to communicate with other applets. Making sure that everything is scheduled correctly does indeed take managerial skills. The same techniques used to schedule projects (critical path analysis) can be used to minimize time overheads. The same techniques used to minimize the use of physical resources (the simplex method, aka operations research) work just as well for the physical resources software uses.

      The first problem is that people who make good managers make lousy coders. The second problem is that people who make good coders make lousy managers. The third problem is that plenty of upper management types (unfortunately, I regard the names I could mention as genuinely dangerous and unpredictable) simply have no understanding of programming in general, never mind the intricacies of parallelism.

      However, resource management and coding is not enough. These sorts of problems are typically either CPU-bound and not heavy on the networks, or light on the CPU but are network-killers. (Look at any HPC paper on cascading network errors for an example.) Typically, you get hardware which isn't ideally suited to either extreme, so the problem must be transformed into one that is functionally identical but within the limits of what equipment there is. (There is no such thing as a generic parallel app for a generic parallel architecture. There are headaches and there are high-velocity exploding neurons, but that's the range of choices.)

  • by Anonymous Coward on Monday May 28, 2007 @11:29PM (#19305183)
    Implement it, add CPUs, earn billion$. Just Google it.

    • by allenw ( 33234 ) on Tuesday May 29, 2007 @12:15AM (#19305487) Homepage Journal
      Implementing MapReduce is much easier these days: just install and contribute to the Hadoop [apache.org] project. This is an open source, Java-based MapReduce implementation, including a distributed filesystem called HDFS.

      Even though it is implemented in Java, you can use just about anything with it, using the Hadoop streaming [apache.org] functionality.
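      For readers who haven't seen the model, this is not Hadoop's actual API, just the shape of it in plain Java: a map phase over independent records, then a reduce phase that groups by key - the only point of coordination.

        import java.util.Arrays;
        import java.util.Map;
        import java.util.stream.Collectors;

        public class WordCount {
            public static void main(String[] args) {
                String text = "the cat sat on the mat";
                // Map: each word is an independent (word, 1) record.
                // Reduce: group by key and count - the lone synchronization point.
                Map<String, Long> counts = Arrays.stream(text.split(" ")).parallel()
                        .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
                System.out.println(counts); // the=2, cat=1, sat=1, on=1, mat=1 (in some order)
            }
        }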

    • Re: (Score:3, Insightful)

      Two words: map-reduce. Implement it, add CPUs, earn billion$. Just Google it.

      Wow, fantastic. Turns out parallel programming was really really easy after all. I guess we can make all future computer science courses one week then, since every problem can be solved by map-reduce! The AC has finally found the silver bullet. Bravo, mods.
      • Re: (Score:3, Funny)

        by pchan- ( 118053 )
        I guess we can make all future computer science courses one week then, since every problem can be solved by map-reduce!

        What do you mean? I just wrote a clone of Half-Life 2 using only Map-Reduce. It works great, but it requires 600 exabytes of pre-computed index values and data sets and about 200 high end machines to reach decent frame rates.
  • by Corydon76 ( 46817 ) on Monday May 28, 2007 @11:33PM (#19305207) Homepage

    Oh noes! Software doesn't get churned out immediately upon the suggestion of parallel programming! Programmers might actually be debugging their own code!

    There's nothing new here: just somebody being impatient. Parallel code is getting written. It is not difficult, nor are the tools inadequate. What we have is non-programmers not understanding that it takes a while to write new code.

    If anything, that the world hasn't exploded with massive amounts of parallel code is a good thing: it means that proper engineering practice is being used to develop sound programs, and the johnny-come-lately programmers aren't able to fake their way into the marketplace with crappy code, like they did 10 years ago.


  • by Opportunist ( 166417 ) on Monday May 28, 2007 @11:34PM (#19305211)
    Aside from my usual lament that people already call themselves programmers when they can fire up Visual Studio, parallelizing your tasks opens quite a few cans of worms. Many things can't be done simultaneously, many side effects can occur if you don't take care, and generally, programmers don't really enjoy multithreaded applications, for exactly those reasons.

    And often enough, it's far from necessary. Unless you're actually dealing with an application that does a lot of "work", calculating or displaying, preferably simultaneously (games would be one of the few applications that come to my mind), most of the time your application is waiting. Either for input from the user or for data from a slow source, like a network or even the internet. The average text processor or database client is usually not in the situation that it needs more than the processing power of one core. Modern machines are by magnitudes faster than anything you usually need.

    Generally, we'll have to deal with this issue sooner or later, especially if our systems become more and more overburdened with "features" while the advance of processing speed will not keep up with it. I don't see the overwhelming need for parallel processing within a single application for most programs, though.
  • Not justifyable (Score:4, Interesting)

    by dj245 ( 732906 ) on Monday May 28, 2007 @11:40PM (#19305235) Homepage
    I can see this going down in cubicles all through the gaming industry. The game is mostly coming together, the models have been tuned, textures drawn, code is coming together, and the coder goes to the pointy haired boss.

    Coder: We need more time to make this game multithreaded!
    PHB: Why? Can it run on one core of an X?
    Coder: Well I suppose it can but...
    PHB: Shove it out the door then.

    If Flight Simulator X is any indication (a game that should have been easy to parallelize), this conversation happens all the time and games are launched taking advantage of only one core.
    • Re:Not justifyable (Score:5, Insightful)

      by Grave ( 8234 ) <awalbert88@hotma ... m minus math_god> on Tuesday May 29, 2007 @12:18AM (#19305501)
      I seem to recall comments from Tim Sweeney and John Carmack that parallelism needed to start from the beginning of the code - i.e., if you weren't thinking about it and implementing it when you started the engine, it was too late. You can't just tack it on as a feature. Unreal Engine 3 is a prime example of an engine that is properly parallelized. It was designed from the ground up to take full advantage of multiple processing cores.

      If your programmers are telling you they need more time to turn a single-threaded game into a multi-threaded one, then the correct solution IS to push the game out the door, because it won't benefit performance to try to do it at the end of a project. It's a fundamental design choice that has to be made early on.
    • Re: (Score:3, Insightful)

      PHB: Why? Can it run on one core of a X?
      Coder: Well I suppose it can but...
      PHB: Shove it out the door then.


      I think the PHB may have a point. Games are expensive to make, gamers are fickle, and if reviews say a game "looks dated" it is a death sentence for sales. Rewriting an engine to be multi-threaded after it is done will take a lot of time and money, and most PCs out there have more dated hardware than you think, so most current players won't see any benefits whatsoever from it anyway.
  • by ArmorFiend ( 151674 ) on Monday May 28, 2007 @11:40PM (#19305243) Homepage Journal
    For this generation of "average" programmers, yes, it's too hard. It's the programming language, stupid. The average programming language has come a remarkably short distance in the last 30 years. Java and Fortran really aren't very different, and neither is well suited to parallelizing programs.

    Why isn't there a mass stampede to Erlang or Haskell, languages that address this problem in a serious way? My conclusion is that most programmers are just too dumb to do major mind-bending once they've burned their first couple languages into their ROMs.

    Wait for the next generation, or make yourself above average.

  • Programmers (Score:2, Insightful)

    by Anonymous Coward
    Well, the way they 'teach' programming nowadays, programmers are simply just typists.

    No need to worry about memory management, java will do it for you.

    No need to worry about data location, let the java technology of the day do it for you.

    No need to worry about how/which algorithm you use, just let java do it for you, no need to optimize your code.

    Problem X => Java cookbook solution Y
  • by vortex2.71 ( 802986 ) on Monday May 28, 2007 @11:42PM (#19305251)
    Though there are many very good parallel programmers who make excellent use of the Message Passing Interface, we are entering a new era of parallel computing where MPI will soon be unusable. Consider when the switch was made from assembly language to higher-level programming languages - when the "processor" contained too many components to be effectively programmed with machine language. That same threshold has long since been passed with parallel computers. Now that we have computers with more than 100 thousand processors and are working to build computers with more than a million processors, MPI has become the assembly language of parallel programming. It hence needs to be replaced with a new parallel language that can control great numbers of processors.
  • In one word, the answer is yes. It's difficult for people who are used to programming for a single CPU.

    Programmers that are accustomed to non-parallel programming environments forget to think about the synchronization issues that come up in parallel programming. Many conventional programs do not take into account the shared-memory synchronization or message-passing requirements needed for them to work correctly in a parallel environment.

    This is not to say that there will not be an
  • Clusters? (Score:4, Insightful)

    by bill_mcgonigle ( 4333 ) * on Monday May 28, 2007 @11:43PM (#19305263) Homepage Journal
    Since this discussion has been going on for over three decades with little progress in terms of widespread change

    Funny, I've seen an explosion in the number of compute clusters in the past decade. Those employ parallelism, of differing types and degrees. I guess I'm not focused as much on the games scene - is this somebody from the Cell group writing in?

    I mean, when there's an ancient Slashdot joke about something there has to be some entrenchment.

    The costs are just getting to the point where lots of big companies and academic departments can afford compute clusters. Just last year multi-core CPUs made it into mainstream desktops (ironically, more so in laptops so far). Don't be so quick to write off a technology that's just out of its first year of being on the desktop.

    Now, that doesn't mean that all programmers are going to be good at it - generally programmers have a specialty. I'm told the guys who write microcode are very special, are well fed, and generally left undisturbed in their dark rooms, for fear that they might go look for a better employer, leaving the current one to sift through a stack of 40,000 resumes to find another. I probably wouldn't stand a chance at it, and they might not do well in my field, internet applications - yet we both need to understand parallelism - they in their special languages and me, perhaps with Java this week, doing a multithreaded network server.

  • Yes and No (Score:5, Interesting)

    by synx ( 29979 ) on Monday May 28, 2007 @11:47PM (#19305289)
    The problem with parallel programming is we don't have the right set of primitives. Right now the primitives are threads, mutexes, semaphores, shared memory and queues. This is the machine language of concurrency - it's too primitive for anyone who isn't a genius to write much code with effectively.

    What we need is more advanced primitives. Here are my 2 or 3 top likely suspects:

    - Communicating Sequential Processes - CSP. This is the programming model behind Erlang - one of the most successful concurrent programming languages available. Writing large, concurrent, robust apps is as simple as 'hello world' in Erlang. There is a whole new way of thinking that is pretty much mind bending. However, it is that new methodology that is key to the concurrency and robustness of the end applications. Be warned, it's functional!
    - Highly optimizing functional languages (HOFL) - These are in the proto-phase, and there isn't much available, but I think this will be the key to extremely high performance parallel apps. Erlang is nice, but it isn't high performance computing; HOFLs, in turn, won't be as safe as Erlang. You get one or the other. The basic concept is that most computation in high performance systems is bound up in various loops. A loop is a 'noop' from a semantic point of view. To get efficient highly parallel systems Cray uses loop annotations and special compilers to get more information about loops. In a functional language (such as Haskell) you would use map/fold functions or list comprehensions. Both of these convey more semantic meaning to the compiler. The compiler can auto-parallelize a functional map where each individual map-computation is not dependent on any other.
    - Map-reduce - the paper is elegant and really cool. It seems like this is a halfway model between C++ and HOFLs that might tide people over.

    In the end, the problem is the abstractions. People will consider threads and mutexes as dangerous and unnecessary as we consider manual memory allocation today.
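    As a taste of the CSP/Erlang style imitated in plain Java (a toy sketch, not Erlang's actual semantics): the process owns its state outright, and the mailbox is the only way in - no locks anywhere.

      import java.util.concurrent.BlockingQueue;
      import java.util.concurrent.LinkedBlockingQueue;

      public class CounterProcess {
          private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();

          void send(String msg) { mailbox.offer(msg); }

          void run() throws InterruptedException {
              long count = 0; // state confined to this thread: no locks needed
              while (true) {
                  switch (mailbox.take()) { // block until a message arrives
                      case "inc"   -> count++;
                      case "print" -> System.out.println(count);
                      case "stop"  -> { return; }
                  }
              }
          }

          public static void main(String[] args) throws Exception {
              CounterProcess p = new CounterProcess();
              Thread t = new Thread(() -> { try { p.run(); } catch (InterruptedException e) {} });
              t.start();
              p.send("inc"); p.send("inc"); p.send("print"); p.send("stop"); // prints 2
              t.join();
          }
      }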
    • Re: (Score:3, Interesting)

      Communicating Sequential Processes - CSP. This is the programming model behind Erlang

      Technically speaking, Erlang has more in common with the Actor model than with CSP. The Actor model (like Erlang) is based on named actors (cf Erlang pids) that communicate via asynchronous passing of messages sent to specific names. CSP is based on anonymous processes that communicate via synchronized passing of messages sent through named channels. Granted, you can essentially simulate one model within the other. But it pays

  • by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Monday May 28, 2007 @11:47PM (#19305293) Homepage
    Parallel programming and construction share one crucial fundamental requirement: Proper communication. But building a house is much easier to grasp. Programming is abstract.

    I think part of the problem is that many programmers tend to be lone wolves, and having to take other people (and their code, processes, and threads) into consideration is a huge psychological hurdle.

    Just think about traffic: If everyone were to cooperate and people wouldn't cut lanes and fuck around in general we'd all be better off. But traffic laws are still needed.

    I figure what we really need is to develop proper guidelines and "laws" for increased parallelism.

    Disclaimer: This is all totally unscientific coming from the top of my head...
  • by Coryoth ( 254751 ) on Monday May 28, 2007 @11:51PM (#19305319) Homepage Journal
    Parallel programming doesn't have to be quite as painful as it currently is. The catch is that you have to face the fact that you can't go on thinking with a sequential paradigm and have some tool, library, or methodology magically make everything work. And no, I'm not talking about functional programming. Functional programming is great, and has a lot going for it, but solving concurrent programming issues is not one of those things. Functional programming deals with concurrency issues by simply avoiding them. For problems that have no state and can be coded purely functionally this is fine, but for a large number of problems you end up either tainting the purity of your functions, or wrapping things up in monads which end up having the same concurrency issues all over again. It does have the benefit that you can isolate the state, and code that doesn't need it is fine, but it doesn't solve the issue of concurrent programming.

    No, the different sorts of paradigms I'm talking about are no-shared-state, message-passing concurrency models à la CSP [usingcsp.com], the pi calculus [wikipedia.org], and the Actor Model [wikipedia.org]. That sort of approach in terms of how to think about the problem shows up in languages like Erlang [erlang.org] and Oz [wikipedia.org], which handle concurrency well. The aim here is to make message passing and threads lightweight and integrated right into the language. You think in terms of actors passing data, and the language supports you in thinking this way. Personally I'm rather fond of SCOOP for Eiffel [se.ethz.ch], which elegantly integrates this idea into OO paradigms (an object making a method call is, ostensibly, passing a message after all). That's still research work though (only available as a preprocessor and library, with promises of eventually integrating it into the compiler). At least it makes thinking about concurrency easier, while still staying somewhat close to more traditional paradigms (it's well worth having a look at if you've never heard of it).

    The reality, however, is that these new languages which provide the newer and better paradigms for thinking and reasoning about concurrent code just aren't going to get developer uptake. Programmers are too conservative and too wedded to their C, C++, and Java to step off and think as differently as the solution really requires. No, what I expect we'll get is kludginess retrofitted onto existing languages in a slipshod way that sort of works, in as much as it is an improvement over previous concurrent programming in that language, but doesn't really make the leap required to make the problem truly significantly easier.
  • Amdahl's law (Score:5, Insightful)

    by apsmith ( 17989 ) * on Monday May 28, 2007 @11:51PM (#19305325) Homepage
    I've worked with parallel software for years - there are lots of ways to do it, and there were lots of good programming tools around even a couple of decades back (my stuff ranged from custom message passing in C to using "Connection-Machine Fortran"; now it's Java threads) - but the fundamental problem was stated long ago by Gene Amdahl [wikipedia.org]: if half the things you need to do are simply not parallelizable, then it doesn't matter how much you parallelize everything else, you'll never go more than twice as fast as using a single thread.

    Now there's been lots of work on eliminating those single-threaded bits in our algorithms, but every new software problem needs to be analyzed anew. It's just another example of the no-silver-bullet problem of software engineering...
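    The arithmetic deserves to be spelled out, because it surprises people. If a fraction p of the work parallelizes over n cores, speedup = 1 / ((1 - p) + p/n); a quick sketch:

      public class Amdahl {
          // Speedup for parallel fraction p on n cores: 1 / ((1 - p) + p / n)
          static double speedup(double p, int n) {
              return 1.0 / ((1.0 - p) + p / n);
          }

          public static void main(String[] args) {
              System.out.println(speedup(0.50, 8));      // ~1.78: half serial, 8 cores
              System.out.println(speedup(0.50, 100000)); // ~2.00: the hard ceiling
              System.out.println(speedup(0.95, 8));      // ~5.93: serial bits still bite
          }
      }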
  • by putaro ( 235078 ) on Tuesday May 29, 2007 @12:01AM (#19305401) Journal
    I've been doing true parallel programming for the better part of 20 years now. I started off writing kernel code on multi-processors and have moved on to writing distributed systems.

    Multi-threaded code is hard. Keeping track of locks, race conditions and possible deadlocks is a bitch. Working on projects with multiple programmers passing data across threads is hard (I remember one problem that took days to track down where a programmer passed a pointer to something on his stack across threads. Every now and then by the time the other thread went to read the data it was not what was expected. But most of the time it worked).

    At the same time we are passing comments back and forth here on Slashdot between thousands of different processors using a system written in Perl. Why does this work when parallel programming is so hard?

    Traditional multi-threaded code places way too much emphasis on synchronization of INSTRUCTION streams, rather than synchronization of data flow. It's like having a bunch of blind cooks in a kitchen and trying to work it so that you can give them instructions so that if they follow the instructions each cook will be in exactly the right place at the right time. They're passing knives and pots of boiling hot soup between them. One misstep and, ouch, that was a carving knife in the ribs.

    In contrast, distributed programming typically puts each blind cook in his own area with well defined spots to use his knives that no one else enters and well defined places to put that pot of boiling soup. Often there are queues between cooks so that one cook can work a little faster for a while without messing everything up.

    As we move into this era of cheap, ubiquitous parallel chips we're going to have to give up synchronizing instruction streams and start moving to programming models based on data flow. It may be a bit less efficient but it's much easier to code for and much more forgiving of errors.
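    In code, the kitchen above is about a dozen lines (a toy Java sketch): the two cooks share nothing but a bounded queue, which is also what gives one cook room to work a little ahead.

      import java.util.concurrent.ArrayBlockingQueue;
      import java.util.concurrent.BlockingQueue;

      public class Kitchen {
          public static void main(String[] args) throws InterruptedException {
              // The only shared object; its bound is the "work ahead" slack.
              BlockingQueue<Integer> pass = new ArrayBlockingQueue<>(16);

              Thread prep = new Thread(() -> {
                  try {
                      for (int order = 1; order <= 5; order++) pass.put(order);
                      pass.put(-1); // done marker
                  } catch (InterruptedException e) { /* shutting down */ }
              });

              Thread cook = new Thread(() -> {
                  try {
                      for (int order = pass.take(); order != -1; order = pass.take())
                          System.out.println("cooked order " + order);
                  } catch (InterruptedException e) { /* shutting down */ }
              });

              prep.start(); cook.start();
              prep.join(); cook.join();
          }
      }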
    • Re: (Score:3, Interesting)

      by underflowx ( 1108573 )
      My experience with data flow is LabVIEW [wikipedia.org]. As a language designed to handle simultaneous slow hardware communication and fast dataset processing, it's a natural for multi-threading. Parallelization is automated within the compiler based on program structure. The compiler's not all that great at it (limited to 5 explicit threads plus whatever internal tweaking is done), but... the actual writing of the code is just damn easy. Not to excuse the LabVIEW compiler: closed architecture, tight binding to the IDE
  • bad education (Score:4, Insightful)

    by nanosquid ( 1074949 ) on Tuesday May 29, 2007 @12:09AM (#19305459)
    No, parallel programming isn't "too hard", it's just that programmers never learn how to do it because they spend all their time on mostly useless crap: enormous and bloated APIs, enormous IDEs, gimmicky tools, and fancy development methodologies and management speak. Most of them, however, don't understand even the fundamentals of non-procedural programming, parallel programming, elementary algorithms, or even how a CPU works.

    These same programmers often think that ideas like "garbage collection", "extreme programming", "visual GUI design", "object relational mappings", "unit testing", "backwards stepping debuggers", and "refactoring IDEs" (to name just a few) are innovations of the last few years, when in reality, many of them have been around for a quarter of a century or more. And, to add insult to injury, those programmers are often the ones that are the most vocal opponents of the kinds of technologies that make parallel programming easier: declarative programming and functional programming (not that they could actually define those terms, they just reject any language that offers such features).

    If you learn the basics of programming, then parallel programming isn't "too hard". But if all you have ever known is how to throw together some application in Eclipse or Visual Studio, then it's not surprising that you find it too hard.
  • Two Problems (Score:4, Insightful)

    by SRA8 ( 859587 ) on Tuesday May 29, 2007 @12:11AM (#19305469)
    I've encountered two problems with parallel programming:

    1. For applications which are constantly being changed under tight deadlines, parallel programming becomes an obstacle. Parallelism adds complexity which hinders quick changes to applications. This is a very broad generalization, but often the case.

    2. Risk. Parallelism introduces a lot of risk: things that aren't easy to debug. Some problems I faced happened only once every couple of weeks, and involved underlying black-box libraries. For financial applications, this was absolutely too much risk to bear. I would never do parallel programming for financial applications unless management was behind it fully (as they are for HUGE efforts such as Monte Carlo simulations and VaR apps).
  • A Case Study (Score:3, Informative)

    by iamdrscience ( 541136 ) on Tuesday May 29, 2007 @12:37AM (#19305603) Homepage
    My brother just recently started doing IT stuff for the research psych department at a respected university I won't name. They do a lot of mathematical simulations with Matlab and in order to speed these up they decided to buy several Mac Pro towers with dual quad core Xeons at $15,000*. The problem is, their simulations aren't multithreaded (I don't know if this is a limitation of Matlab or of their programming abilities -- maybe both) so while one core is cranking on their simulations to the max, the other 7 are sitting there idle! So while a big part of this ridiculous situation is just uninformed people not understanding their computing needs, it also shows that there are plenty of programmers stuck playing catch-up since computers with multiple cores (i.e. Core 2 Duo, Athlon X2, etc.) have made their way onto the desktops of normal users.

    I think this is a temporary situation though, and something that has happened before: there have been many cases where new powerful hardware seeped into the mainstream before programmers were prepared to use it.



    *I know what you're thinking: "How the hell do you spend $15,000 on a Mac?". I wouldn't have thought it was possible either, but basically all you have to do is buy a Mac with every single option that no sane person would buy: max out the overpriced RAM, buy the four 750GB hard drives at 100% markup, throw in a $1700 Nvidia Quadro FX 4500, get the $1K quad fibre channel card, etc.
  • The Bill comes due (Score:5, Insightful)

    by stox ( 131684 ) on Tuesday May 29, 2007 @12:46AM (#19305637) Homepage
    After years of driving the programming profession to its least common denominator, and eliminating anything that was considered non-essential, somebody is surprised that current professionals are not elastic enough to quickly adapt to a changing environment in hardware. Whoda thunk it? The ones you may have left with some skills are nearing retirement.
  • by david.emery ( 127135 ) on Tuesday May 29, 2007 @01:04AM (#19305763)
    Many of the responses here, frankly, demonstrate how poorly people understand both the problem and the potential solutions.

    Are we talking about situations that lend themselves to data flow techniques? Or Single Instruction/Multiple Data (e.g. vector) techniques? Or problems that can be broken down and distributed, requiring explicit synchronization or avoidance?

    I agree with other posters (and I've said it elsewhere on /.) that for parallel/distributed processing, Language Does Matter. (And UML is clearly part of the problem...)

    Then there's the related issue of long-lived computation. Many of the interesting problems in the real-world take more than a couple of seconds to run, even on today's machines. So you have to worry about faults, failures, timeouts, etc.

    One place to start for distributed processing kinds of problems, including fault tolerance, is the work of Nancy Lynch of MIT. She has 2 books out (unfortunately not cheap), "Distributed Algorithms" and "Atomic Transactions". (No personal/financial connection, but I've read & used results from her papers for many years...)

    I get the sense that parallelism/distributed computation is not taught at the undergrad level (it's been a long time since I was an undergrad, and mine is not a computer science/software engineering/computer engineering degree.) And that's a big part of the problem...

            dave
  • Not too hard at all (Score:4, Interesting)

    by JustNiz ( 692889 ) on Tuesday May 29, 2007 @01:21AM (#19305831)
    Parallel programming is NOT too hard. Yes, it's harder than a single-threaded approach sometimes, but in my experience the problem usually maps more naturally onto a multi-threaded paradigm.
    The real problem is this: I've seen time and again that most companies do not require, recognise, or reward high technical skill, experience, and ability; instead they favour minimal cost and fast product delivery over quality.
    Also, it seems most Human Resources staff/agents don't have the necessary skills to distinguish skilled Software Developers from useless ones with a few matching buzzwords on their resume, because they themselves don't understand enough to ask the right questions, so they resort to resume-keyword matching.
    The consequence is that the whole notion of Software Development being a skilled profession is being undermined and devalued. This is allowing a vast number of people to be employed as Software Developers who don't have the natural ability and/or proper training to do the job.
    To those people, parallel programming IS hard. To anyone with some natural ability, a proper understanding of the issues (the kind you get from, say, a BS Computer Science degree) and a naturally rigorous approach, no, it really isn't.
  • by netfunk ( 32040 ) <(icculus) (at) (icculus.org)> on Tuesday May 29, 2007 @02:03AM (#19306013) Homepage
    Tom Leonard, a programmer from Valve, gave a fascinating talk about this at GDC this year on retrofitting multicore support into Half-Life 2 (specifically, into the Source Engine that powers it). Not surprisingly, the talk was named "Dragged Kicking and Screaming" [gdconf.com] ...

    There was a lot of really good wisdom in there, whether you are writing a game or something else that needs to get every possible performance boost.

    I'm sure they drew from 20+ years' worth of whitepapers (and some newer ones about "lock-free" mutexes; see chapter 1.1 of "Game Programming Gems 6"), but what I walked away from the talk asking was: "why the hell didn't _i_ think of that?"

    There were several techniques they used that, once you built a framework to support it, made parallelizing tasks dirt simple. A lot of it involves putting specific jobs onto queues and letting worker threads pick them up when they are idle, and being able to assign specific jobs to specific cores to protect your investment in CPU cache.
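
    A minimal sketch of that job-queue idea, written in Java rather than the engine's C++, with invented job names, and ignoring the assign-jobs-to-specific-cores part (which needs platform-specific calls). The point is just the shape: submit independent work, keep going, collect results later.

        import java.util.concurrent.*;

        public class JobQueueDemo {
            // Stand-in for real work (physics, AI, particles, ...).
            static long expensiveJob(String name) {
                long sum = 0;
                for (int i = 0; i < 10_000_000; i++) sum += i % 7;
                return sum;
            }

            public static void main(String[] args) throws Exception {
                // One worker thread per core; idle workers pull jobs off the queue.
                int cores = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(cores);

                // Queue up independent jobs whose results aren't needed immediately.
                Future<Long> physics = pool.submit(() -> expensiveJob("physics"));
                Future<Long> ai      = pool.submit(() -> expensiveJob("ai"));

                // ...do other per-frame work here without blocking...

                // Collect the results only at the point they're actually needed.
                System.out.println(physics.get() + ai.get());
                pool.shutdown();
            }
        }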

    Most of the rest of the work is building things that don't need a result immediately, and trying to build things that can be processed without having to compete for various pieces of state...sometimes easier said than done, sure. But after hearing his talk, I was of the opinion that while parallelism is always more complex than single-threaded code, doing this well is something most developers aren't even _thinking_ about yet. In most cases, we're not even at the point where we can talk about _languages_ and _tools_, since we aren't even using the ones we have well.

    --ryan.

  • No. No. No. (Score:4, Interesting)

    by Duncan3 ( 10537 ) on Tuesday May 29, 2007 @02:06AM (#19306029) Homepage
    We've had a steady stream of multi-core guests here at Stanford lecturing on the horror and panic in the industry about multi-core, and how the world is ending. I've seen Intel guys practically shit themselves on stage, they're so terrified we can't find them a new killer multi-core app in time. That's BS. OK, so it's an hour's lecture, maybe two if you're not that fast. Parallel/SMP is not that hard; you just have to be careful and follow one rule, and it's not even a hard rule. There are even software tools to check that you followed it!

    But that's not the problem...

    The problem is, a desktop PC that's several years old still does IM, email, web surfing, Excel, facebook/myspace, and even video playback fast enough that a new one won't "feel" any faster once you load up all the software -- not one bit. For everyone but the hardcore momma's-basement-dwelling gamers, the PC on your home or work desk is already fast enough. All the killer apps are now considered low-CPU; bandwidth is the problem.

    Now sure, I use 8-core systems at the lab and sit on top of the 250k-node Folding@home, so it sounds like I've lost my mind, but you know what -- we super/distributed-computing science geeks are still 0% of the computer market if you round. (And we'll always need every last TFLOP of computation on earth, and still need more.)

    That's it. Simple. Sweet. And a REAL crisis -- but only to the bottom line. The market has nowhere to go but lower-wattage, lower-cost systems, which means lower profits. Ouch.
    • Re: (Score:3, Insightful)

      by GigsVT ( 208848 )
      Games have been driving PC performance lately. You shouldn't be opposed to such things.

      You can get a four-core chip for under $600 now because of it. If you're into high-performance computing, you should beg the game developers for something that can use as many cores as you can throw at it. Because, as you said, you are 0% of the market, and gamers are a huge chunk of it.

  • by 12357bd ( 686909 ) on Tuesday May 29, 2007 @02:18AM (#19306077)

    One option could be to add a specifically parallel iterator keyword to programming languages, e.g.:

    for-every a in list; do something(a); done;

    The compiler then assumes that something(a) can be safely executed in parallel for every element in the list.

    It's not rocket science, and it could lead to parallel spaghetti, but it's a straightforward way to promote parallel programming.
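
    For what it's worth, some mainstream languages have since grown almost exactly this construct. A minimal sketch using Java's parallel streams (which arrived years after this discussion); the assumption that makes it safe is that something() touches no shared mutable state:

        import java.util.List;

        public class ForEvery {
            // The "something(a)" from the example above; safe to run in
            // parallel only because it touches no shared mutable state.
            static void something(int a) {
                System.out.println(Thread.currentThread().getName() + ": " + (a * a));
            }

            public static void main(String[] args) {
                List<Integer> list = List.of(1, 2, 3, 4, 5, 6, 7, 8);
                // "for-every a in list; do something(a); done;"
                list.parallelStream().forEach(ForEvery::something);
            }
        }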

    • Re: (Score:3, Interesting)

      by TeknoHog ( 164938 )
      Something like this exists in many languages, for example Fortran with its built-in matrix math, or the map() construct in Python. The problem is that people don't use these languages, and as long as they stick with something like C they are forced to reinvent the wheel with explicit threads.
  • by ebunga ( 95613 ) on Tuesday May 29, 2007 @10:00AM (#19309271)
    Most "programmers" seem to have a difficult time churning out normal, non-parallel code that runs, much less code that is bug free. Do you seriously think your average code monkey reusing snippets from the language-of-the-week cookbook is going to be able to wrap his head around something that actually requires a little thought?
  • Multics (Score:4, Informative)

    by Mybrid ( 410232 ) on Tuesday May 29, 2007 @10:21AM (#19309551)
    Multics was the precursor to Unix. Multics was a massively parallel operating system, designed to be the OS for a public utility. The thinking back then was that all Americans would have computer terminals, something like a VT100, and plug into a municipal computer over a network.

    Multics was designed for long-lived processes. Short-lived processes are something we take for granted today, but they weren't assumed back then. Today we assume the sequence is: open a program, perform a task, close the program. Microsoft Outlook, for example, relies on Outlook being closed as its cue for when to purge email that's been deleted. Programs are not designed to stay up for years on end. In fact, Microsoft reboots their OS every month with Windows Update. I've often speculated that the reboot is requested not because the patch requires it, but because Microsoft knows its OS needs rebooting, often.

    Why? Why wouldn't one just leave every application you've ever opened, open?

    The reason is that programmers cannot reliably write long-running code. Programs crashed all the time in Multics, something Multics wasn't very good at handling back in the 1960s. Some research was done, and it was observed that programmers could write code for short-lived processes fairly well, but not for long-lived ones.

    So one lesson learned from the failure of Multics is that programmers do not write reliable, long-running code.

    Parallel processing is better suited to long-running processes. Since humans are not good at writing long-running processes, it makes sense that parallel processing is rare. The innovation that dealt with this sticky dilemma was the client-server model. Only the server needs to stay up for long periods of time; the clients can and should perform short-lived tasks, and only the server needs to be reliably programmed to run 24/7. Consequently you see servers with clusters, RAID storage, SAN storage, and other parallel engineering, while clients have none of it. In some sense, Windows is the terminal they were thinking of back in the Multics days. The twist is that, given that humans are not very good at writing long-running processes, the client OS, Windows, is designed around short-lived processes: open, perform task, close. If I leave Firefox open for more than a couple of days, it ends up using 300MB of memory and slowing my machine down. I close and reopen Firefox daily.

    Threads didn't exist in the computing world until Virtual Memory, with its program isolation, came to be. What happened before threads? Programmers were in a shared, non-isolated environment. Where Multics gave isolation to every program, Unix just recognizes two users: root and everyone else. Before Virtual Memory, all user programs could easily step on each other, and programs could bring each other down -- which happened a lot.

    Virtual Memory is one of the greatest computing advances because it specifically deals with a shortcoming in human nature: humans are not very good at programming memory directly, i.e. with pointers.

    It wasn't very long after VM came out that threads were invented to allow intra-application memory sharing. Here's the irony, though: there has still been no advancement in getting humans to program reliably. Humans today are still not very good at programming memory directly, even with monitors, semaphores, and other OS helpers.
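
    As a toy illustration of how thin the line is (hypothetical Java code, not from any real system): the version below is correct, and deleting one keyword makes it silently lose updates.

        public class RaceDemo {
            static long counter = 0;

            // Remove 'synchronized' and two threads can read the same old value
            // of counter and each write back old+1, silently losing increments.
            static synchronized void increment() { counter++; }

            public static void main(String[] args) throws InterruptedException {
                Runnable work = () -> { for (int i = 0; i < 1_000_000; i++) increment(); };
                Thread a = new Thread(work), b = new Thread(work);
                a.start(); b.start();
                a.join();  b.join();
                System.out.println(counter); // 2000000 with the lock; almost never without
            }
        }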

    When I was in my graduate OS class, the question was raised: when do you spawn a thread, given that, to avoid instability, you probably shouldn't?

    The answer was to think of threads as "lightweight processes". The teaching was that, given this, a thread was appropriate for:

    1. IO-blocked requests.

      Have one thread per IO device -- keyboard, mouse, monitor, etc. One thread should be dedicated to the CPU alone, and that CPU thread controls all the IO threads. The IO threads are given the simple task of servicing requests on behalf of the CPU thread.

    2. Performance

      Onl

  • System of Systems (Score:3, Interesting)

    by Zarf ( 5735 ) on Tuesday May 29, 2007 @11:14AM (#19310253) Journal
    System of Systems [wikipedia.org] is a philosophy that should look a helluva lot like the Unix Philosophy [catb.org] because... it is the same philosophy... and people do write code for these environments. If you want Parallel Programming to succeed, make it work as systems of systems interacting to create a complex whole: create environments where independent "objects" or code units can interact with each other asynchronously, as in the sketch below.
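
    A tiny sketch of what asynchronously interacting code units can look like in Java (invented names; one possible shape, not any particular framework). Neither side calls the other; each knows only the channel between them.

        import java.util.concurrent.*;

        public class PipelineDemo {
            public static void main(String[] args) throws InterruptedException {
                // Each "system" knows only the channel, not the other system.
                BlockingQueue<String> pipe = new LinkedBlockingQueue<>();

                // Producer system: emits work asynchronously and moves on.
                Thread producer = new Thread(() -> {
                    for (String s : new String[] { "alpha", "beta", "STOP" }) {
                        try { pipe.put(s); } catch (InterruptedException e) { return; }
                    }
                });

                // Consumer system: processes whatever arrives, whenever it arrives.
                Thread consumer = new Thread(() -> {
                    try {
                        for (String s = pipe.take(); !s.equals("STOP"); s = pipe.take())
                            System.out.println("processed " + s.toUpperCase());
                    } catch (InterruptedException e) { /* shutting down */ }
                });

                producer.start(); consumer.start();
                producer.join();  consumer.join();
            }
        }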

    Otherwise, as discussed in TFA, there are certain problems that just don't parallelize. There are also whole classes of algorithms that never get developed in modern Computer Science because they are too radical a departure from classical mathematics. Consider HTA [wikipedia.org] as a computing model: it is just too far away from traditional programming.

    Parallel programming is simply alien to traditional procedural, declarative programming models. One solution is to abandon traditional programming languages and/or models as inadequate for representing more complex parallel problems. Another is to isolate procedural code into units that message each other asynchronously. Another is to create a system of systems -- or some combination of these.

    If there is a limitation, it is in the declarative sequential paradigm, and that limitation is partially addressed by all the latest work in asynchronous 'net technologies. These distributed paradigms still haven't filtered through the whole industry.
