Intel Programming IT Technology

Choice Overload In Parallel Programming (288 comments)

scott3778 writes to recommend a post by Timothy Mattson over at Intel's Research Blog. He argues, convincingly, that the most important paper for programming language designers to read today is one written by two social psychology professors in 2000. This is the well-known academic study, "When Choice is Demotivating: Can One Desire too Much of a Good Thing?" "And then we show them the parallel programming environments they can work with: MPI, OpenMP, Ct, HPF, TBB, Erlang, Shmemm, Portals, ZPL, BSP, CHARM++, Cilk, Co-array Fortran, PVM, Pthreads, windows threads, Tstreams, GA, Java, UPC, Titanium, Parlog, NESL, Split-C... and the list goes on and on. If we aren't careful, the result could very well be a 'choice overload' experience with software vendors running away in frustration."
  • Fortran (Score:4, Funny)

    by confused one ( 671304 ) on Tuesday October 02, 2007 @08:54PM (#20832205)
    Because I'm sick (in the head) I say we go with the Fortran option!

    'twas my second language; after BASIC. Ahhh, the fond memories...
    • Re: (Score:3, Funny)

      by AuMatar ( 183847 )
      Dear god, the mind damage from Fortran doesn't just affect your logic skills, it affects your language centers as well! You can't even remember the difference between "fond" and "hell on earth".
    • Re:Fortran (Score:4, Insightful)

      by Anonymous Coward on Tuesday October 02, 2007 @09:47PM (#20832633)
      Uh, most good parallel programming frameworks are cross-language. MPI has APIs in C, Fortran, and C++ (I believe) ... so does OpenMP. Also, most supercomputing programs are either written in C or Fortran. So, YES!, choose Fortran (and MPI) for all of your supercomputing needs. Or write it in C. Or write subroutines/functions in both languages and compile away (the newest version of Fortran will be fully C-compatible).

      The main differences between these parallel programming frameworks are ... the actual implementation of parallel programming! Is it distributed (MPI) or shared (OpenMP)? Does it have elegant syntax for accessing variables across processors (Co-Array) or just function based? Because there is no one true way to write a parallel program (it really depends on the algorithm), there will always be multiple frameworks to choose from. O well! The people who write parallel programs are typically smart enough to deal with excessive choices. (No comment on others).
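
      For illustration, here is a minimal sketch in C of that distributed-versus-shared split, assuming an MPI implementation (e.g. mpicc) and an OpenMP-capable compiler: each MPI rank keeps its own private memory and exchanges results by messages, while the OpenMP threads inside a rank share one address space.

      /* Illustrative sketch: sum a series across MPI ranks, with OpenMP
         threads sharing memory inside each rank. */
      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);                 /* distributed: private memory per rank */
          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          double local_sum = 0.0;
          /* shared: threads within one rank all touch local_sum via a reduction */
          #pragma omp parallel for reduction(+:local_sum)
          for (int i = rank; i < 1000000; i += size)
              local_sum += 1.0 / (1.0 + i);

          double global_sum = 0.0;                /* ranks combine results by explicit messages */
          MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
          if (rank == 0)
              printf("sum = %f\n", global_sum);
          MPI_Finalize();
          return 0;
      }

      Built with something like "mpicc -fopenmp", the same source can run on one core, across the cores of one box, or across a cluster.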

      • Re:Fortran (Score:4, Insightful)

        by try_anything ( 880404 ) on Wednesday October 03, 2007 @01:36AM (#20833879)

        Because there is no one true way to write a parallel program (it really depends on the algorithm), there will always be multiple frameworks to choose from. O well! The people who write parallel programs are typically smart enough to deal with excessive choices. (No comment on others).

        You're right; there's nothing more tragic than watching a programmer butcher his well-written program in a futile attempt to make it fit the only concurrency model he knows. Closely associating a language with a single, well-designed concurrency framework would at best do the same thing for it that Rails did for Ruby: bring it a flurry of popularity in the short run and damage its reputation in the long run as people doggedly apply the framework to unsuitable problems and blame the language for the results.

        On the other hand, at some point we're all supposed to face up to the end of the free lunch [www.gotw.ca], and a fad for an exotic kind of concurrency might be a clumsy, spasmodic step in the right direction.
        • by Anonymous Brave Guy ( 457657 ) on Wednesday October 03, 2007 @07:47AM (#20835609)

          I always find it amusing (in a sad kind of way) how people talk about Herb Sutter's "call to action" over this. It's not that I've got anything against Herb himself: he's a decent writer, an excellent speaker, and a guy who can use the word "expert" legitimately in areas like C++. But it's also not like he's the first guy to notice that modern desktop computer architectures have been heading for parallelisation rather than increased speed for several years now.

          Despite being right in the thick of this culture shift myself — I'm sure I'm not the only one here who has been talking about this for a while, and is just seeing management catch up — I don't think this is going to be that big of a deal for most people. The harsh reality, for the buzzword-wielding consultants rubbing their hands with glee at a new programming approach they can hype up, is that most people just don't need all this.

          Your average desktop PC is more than powerful enough for most things that most people do with it: Internet communications, writing documents, working with databases, shop floor software, and the like. As long as the operating system is reasonably smart about scheduling, the guys writing these common types of applications don't really have to know anything about multithreading, locking, message passing, and all that jazz. Similarly, your average mobile device has more than enough juice to dial another phone, write a quick e-mail, or capture a digital photo.

          At the other end of the spectrum, serious servers (database, communications, whatever) have been dealing with parallel processing of many requests since forever. High-end systems doing serious maths (the guys modelling weather systems, say) have also been using massive parallelisation on their supercomputers for zillions of years now.

          There is a gap between these different areas, which we might traditionally have called the "workstation" market: the guys doing moderate number crunching for CAD, scientific visualisation, simulations, and the like. Many modern games also fall into this classification. This market is ripe for a parallel processing revolution: historically it hasn't followed this approach very much because the hardware wouldn't really take advantage of it, yet the extra power is genuinely useful. But I don't think this represents some huge proportion of the software development industry as a whole. The guys working in these areas tend to be pretty smart, and will no doubt adopt useful practices and conventions fairly quickly now that the hardware has reached the point where they are useful.

          As to what those conventions are, I just don't buy the whole "choice overload" theory. There are relatively few basic models for parallel processing: for example, you can have no shared state and communicate only through message passing, or you can have shared state. In the latter case, you then have the question of how to make sure that the sharing is safe, which leads to lock-based or lock-free approaches. Funky toys like transactional memory run at a slightly higher level than this, but they are ultimately constructed from the same building blocks, and again there are only a small number of approaches at this level to consider.

          I'm not familiar with all of those libraries mentioned in the story, but I'll bet that those three classifications (no shared state, shared state with explicit locking, shared state without explicit locks) probably cover the models used by most if not all of them. If you understand the trade-offs in those, you can produce a sensible design, and then the toolkit or framework you use to code it up is mostly just an implementation detail. Given that the trade-offs are pretty obvious and will often steer projects clearly in one direction, I don't think there's really that much to choose at all.
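
          To make the middle of those three classifications concrete, here is a minimal sketch, assuming nothing more than plain C and POSIX threads, of shared state guarded by an explicit lock; the message-passing and lock-free models differ mainly in what replaces the mutex.

          /* Shared state with explicit locking: several threads increment one
             counter, and a mutex makes the increments safe. */
          #include <pthread.h>
          #include <stdio.h>

          static long counter = 0;                           /* the shared state */
          static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

          static void *worker(void *arg)
          {
              for (int i = 0; i < 100000; i++) {
                  pthread_mutex_lock(&lock);                 /* explicit locking */
                  counter++;
                  pthread_mutex_unlock(&lock);
              }
              return NULL;
          }

          int main(void)
          {
              pthread_t t[4];
              for (int i = 0; i < 4; i++)
                  pthread_create(&t[i], NULL, worker, NULL);
              for (int i = 0; i < 4; i++)
                  pthread_join(t[i], NULL);
              printf("counter = %ld\n", counter);            /* deterministically 400000 */
              return 0;
          }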

    • Re:Fortran (Score:4, Interesting)

      by TeknoHog ( 164938 ) on Wednesday October 03, 2007 @05:07AM (#20834817) Homepage Journal
      I say we go with Fortran, because it supports array programming natively without any of this bolt-on stuff like OpenMP.
  • by Prune ( 557140 )
    What's wrong with OpenMP?
    • Actually (from the blog author's bio):

      Among his many roles at Intel, he was applications manager for the ASCI teraFLOPS project, helped create OpenMP, founded the Open Cluster Group...
      You might notice, he helped create OpenMP.
    • Re:Hmm (Score:4, Interesting)

      by Frumious Wombat ( 845680 ) on Tuesday October 02, 2007 @09:30PM (#20832479)
      It's designed for shared memory boxes. This is great if you own an E25K, not so great if you've chained together a couple thousand Itaniums.
    • Re:Hmm (Score:4, Insightful)

      by Bill, Shooter of Bul ( 629286 ) on Tuesday October 02, 2007 @09:46PM (#20832617) Journal
      The other choices are what's wrong with OpenMP. The market needs to shake out the pretenders before more people will make the correct choices. That's the basic idea of this story.

      Having said that, I'm praying for Fortran 95 to take over. It's the only Malt Liquor I drink.
    • Re: (Score:2, Informative)

      by n dot l ( 1099033 )
      It's useless in cases where you don't have shared memory (though, really, most "general-purpose" solutions are going to be useless in this case).

      Its implementation is, essentially, compiler magic. This automatically rules it out in a lot of cases where you need precise control of what your code is doing. If, for example, you need logic to spin for a few cycles while a DMA operation completes (so as to not interrupt/stall something else - and yes, people do actually optimize to that level on some platforms), then that compiler magic just gets in the way.
    • Here's a list from Wikipedia.

      • Currently runs efficiently only on shared-memory multiprocessor platforms.
      • Requires a compiler that supports OpenMP.
      • Scalability is limited by the memory architecture.
      • Reliable error handling is missing.
      • Lacks fine-grained mechanisms to control thread-processor mapping.
      • Synchronization between a subset of threads is not allowed.
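
      Most of those points follow from OpenMP's design: the parallelism lives entirely in compiler directives over a shared address space. A minimal illustrative C sketch, assuming an OpenMP-aware compiler such as gcc with -fopenmp:

      #include <omp.h>
      #include <stdio.h>

      int main(void)
      {
          double a[1000];
          /* The directive is the whole parallel "language": the compiler
             splits the loop across threads that all share the array a[]. */
          #pragma omp parallel for
          for (int i = 0; i < 1000; i++)
              a[i] = i * 0.5;

          printf("max threads: %d\n", omp_get_max_threads());
          return 0;
      }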
  • link to the paper (Score:5, Informative)

    by skywire ( 469351 ) * on Tuesday October 02, 2007 @08:59PM (#20832229)
    • Re:link to the paper (Score:4, Interesting)

      by SurturZ ( 54334 ) on Tuesday October 02, 2007 @10:36PM (#20832931) Homepage Journal
      Someone once told me about a study showing that people are also more likely to choose something out of a group if it is clearly superior to another item in the group.

      I found this true for myself once when I was looking to buy a DVD and had the usual overload of choices. I noticed that there was a copy of "LXG movie only" for $30, but also a copy of "LXG with special features" for $20. That triggered me into buying the LXG with special features. In hindsight, it was the fact that I "knew" I was getting a "bargain" that tipped me over the edge. No doubt it was a deliberate marketing ploy.

      A bit sexist to say this, but women seem to be especially targeted by "discount marketing" of this sort. Mainly with shoes :-)
  • by Anonymous Coward on Tuesday October 02, 2007 @09:00PM (#20832237)
    Microsoft will come along and tell you what your choice will be.
    • Re:Don't worry.... (Score:5, Insightful)

      by tomhudson ( 43916 ) <barbara@hudson.barbara-hudson@com> on Tuesday October 02, 2007 @09:12PM (#20832341) Journal
      > "Microsoft will come along and tell you what your choice will be."

      And they'll change it every three years, so as to make more money off of certifications and software sales, and save money by not having to fix bugs in that "old, obsolete" stuff that was so shiny and new so recently.

      If Microsoft wants to tell me what to do, they'd better be ready to sign a check with 6 figures to the left of the decimal point ...

      • And they'll change it every three years,
        Damn, isn't that the truth.
      • *Hands tomhudson a check for 000001.00 from Microsoft*

        You are hereby instructed to use Basic, and only Basic, for all of your code.
        • Dave, is that you?

          Unfortunately, that sounds like where I work... Even though I can make a (very good) case that it's NOT the best language to use for our application

        • Re: (Score:3, Funny)

          by tomhudson ( 43916 )

          Hand in your geek card. Leading zeros are not significant, unless you're filming "Tora Tora Tora!"

      • Re: (Score:3, Insightful)

        by Tim C ( 15259 )
        What would you rather they do, stick with version 1.0 for the rest of time, never incorporating new features?

        You have to strike a balance - change stuff too fast and people avoid it as it's unstable. Change things too slowly and people avoid it as it doesn't provide a required or desired feature that some competing product provides.

        To be honest, depending on what exactly it is, three years doesn't sound unreasonable. It's not like anyone's forcing you to get certified either - I don't know about your country
      • by weicco ( 645927 )

        And they'll change it every three years

        So? I have VB code from 1993 which still runs properly. All the new code is written in C# and guess what? Everything runs side-by-side nicely on the same machine. The difference between the 1993 and 2007 code is that the old code is about 100000 lines longer, which means more bugs and more testing. It's not that you need to rewrite everything every three years.

  • by Anonymous Coward on Tuesday October 02, 2007 @09:00PM (#20832247)
    Write concurrently in two languages, then you're sure to make full use of available CPU cores.
  • It's drivel (Score:3, Insightful)

    by Harmonious Botch ( 921977 ) * on Tuesday October 02, 2007 @09:01PM (#20832249) Homepage Journal
    This whole idea of 'choice overload' is so much drivel, IMHO. And, no, I'm not trying to flame here.

    Have you ever known anybody to say: "There are just too many girls to choose from, I guess I'll go hide in the basement."?
    Or: "There are ten thousand restaurants in this city. I just can't cope. I'm going to stop eating."?

    A better label for the whole subject would be: " How a small minority of people fail to learn tree-pruning techniques, and dissolve in panic." Then we all could say: "Yep, sounds like my ex-girlfriend. Been there, done that. Next?"
    • Re: (Score:2, Funny)

      by shaka ( 13165 )

      Have you ever known anybody to say: "There are just too many girls to choose from, I guess I'll go hide in the basement."?
      Or: "There are ten thousand restaurants in this city. I just can't cope. I'm going to stop eating."?

      A better label for the whole subject would be: " How a small minority of people fail to learn tree-pruning techniques, and dissolve in panic." Then we all could say: "Yep, sounds like my ex-girlfriend. Been there, done that. Next?"

      Girls to choose from?! Ex-girlfriend?!?!! Last night just c

    • Re:It's drivel (Score:5, Insightful)

      by nycguy ( 892403 ) on Tuesday October 02, 2007 @09:17PM (#20832375)
      Actually, both of your analogies are poor. The problem with "choice overload" in the software context is that with so many platforms to choose from, no one platform builds the critical mass to be useful for a broad range of problems, and developers are almost certain to build systems and components that do not interoperate because they are built on separate frameworks. In software, there's a benefit to having everyone choose the same platform to build on.

      On the other hand, I don't know about the benefit of everyone choosing the same girl or the same restaurant--unless you like gang-bangs, long lines, etc.
      • I don't recall mentioning software. Did I mention software anyplace? If I had, then they might be poor analogies.

        What I said was: "This whole idea of 'choice overload' is..." I was writing about the concept, not the particulars of the concept as applied to software.

        I'll agree with you that there are problems in making the choice about software, but they are consensus issues, not overload issues.
    • How about choice overload for the restaurants? If there are thousands, some are going out of business? I'm sure this analogy is leading somewhere...
    • Too many choices between languages, where in many of them the pros and cons balance and cancel for your particular application, can be deadly.
      • by timeOday ( 582209 ) on Tuesday October 02, 2007 @10:48PM (#20833029)
        Meh. This is a good example of social sciences run amok. You do some interesting little study, but then try to apply it to everything in the world. Here are some reasons why the analogy between parallel programming languages and choosing a flavor of jam (yeah, that's what the study was about) might not hold:
        • Jams are functionally equivalent; the choice is inconsequential. This is far from the case with programming languages, which have meaningful differences.
        • Programming languages solve important problems, so a choice will be made. You can't just give up on the whole idea and walk away as with specialty jams.
        • There are so many different aspects of a language, you can have a great number of them, yet they can all be very different from each other.
        • Significant resources are devoted to developing and choosing parallel languages. This greatly increases the number of choices that can be evaluated. Consider how much time you spend shopping for the right car vs. a jar of jam.
        Now would be a terrible time to stop developing parallel languages, because the problem is just now coming to the forefront, with single-core performance hitting its limits and multi-cores taking over. I suspect the parallel programming paradigm of the next 40 years hasn't been invented yet, and I'm almost certain it hasn't yet been popularized. So I say, let a hundred flowers bloom.
    • Re: (Score:3, Insightful)

      by Otter ( 3800 )
      Have you ever known anybody to say...

      That people don't consciously think something, let alone admit to thinking something, doesn't mean their behavior isn't driven by it.

    • Well put. Just because there are many alternatives doesn't mean we should stop trying new things. Somehow, though, it's not surprising that someone at Intel would be pushing for earlier standardization. They're a player in this space, and I'm sure they'd like to tie everyone to their chosen flavor before the market heats up even more.
    • Re: (Score:3, Funny)

      by Lisandro ( 799651 )
      This whole idea of 'choice overload' is so much drivel, IMHO. And, no, I'm not trying to flame here.

      Ditto. You think too many choices of programming languages is a bad thing? Let's have two. And let one be Perl, that should be fun to watch.

      (Yes, i do like Perl :)
    • At a store by my home you can choose from 17 different options to buy water, not counting the hose for filling your radiator and three kinds of ice. The only omitted choice is water that costs less than gasoline. Choice is not inherently bad so I agree with you.
    • Re: (Score:2, Insightful)

      by ljw1004 ( 764174 )
      Well, yes, humans do suffer psychologically if there's too much choice. As the number of girls to choose from increases, we get only a marginal increase in happiness from the choice we've made, but a progressively larger increase in unhappiness from regrets, second-guessing, and worrying that our choice was sub-optimal.

      There's a good summary of the research in the article "The Tyranny of Choice" by Barry Schwartz, Scientific American, April 2004, pp. 70--75.
      • So that's why we are driven to choose only one; once you are locked into a platform (language, mate, restaurant), *and only then*, can it begin to fulfill its potential to maximize your happiness! No, really; it's not until you commit and focus deeply on that one platform that you learn not only the basic functionality (keywords / favorite romantic phrases / items on the menu) but the minute details, to the point of being able to hack said platform into doing things no one else has thought of yet (I'll let
    • There's more to the concept of choice overload than you may think. I found this talk [ted.com] to be quite interesting.
    • Re: (Score:2, Informative)

      by Alphax.au ( 913011 )

      This whole idea of 'choice overload' is so much drivel, IMHO. And, no, I'm not trying to flame here.
      I saw an episode of Catalyst [wikipedia.org] where they interviewed the people who wrote that paper, and gave a demonstration at a supermarket using jars of jam: people threw up their hands and said that there were too many to choose from. The results are definitely not bogus.
    • This whole idea of 'choice overload' is so much drivel, ... Have you ever known anybody to say: "There are just too many girls to choose from, I guess I'll go hide in the basement."?

      So that's why I like living in the basement.

    • Thinking logically, that's how you'd hope people would act, but you would be wrong.

      When someone is presented with a large number of options, with no real objective way to quantify the difference between them, people often choose to go without rather than risk making the "wrong" choice.

      Imagine going in to buy a dishwasher, assuming you don't trust what the salesman says and you have no ideas about what makes a good dishwasher, you are pretty much SOL when it comes to being able to tell the difference between w
    • Re:It's drivel (Score:4, Insightful)

      by Anonymous Brave Guy ( 457657 ) on Wednesday October 03, 2007 @08:00AM (#20835725)

      Have you ever known anybody to say: "There are just too many girls to choose from, I guess I'll go hide in the basement."?

      Have you never heard the expression "too much love will kill you"? Never been in (or seen) a situation where someone is torn between a relationship with one person and with another, when they genuinely care for both?

      The results of such a dilemma are usually very unpleasant for the losing party, and all too often don't work out for the others either because there's that nagging doubt about whether the eventual choice was the right one. People can put off making that choice for a long time, just to avoid the sadness and doubts. And that's (usually) just with two alternatives.

      Now, clearly a choice between programming libraries isn't going to have the same kind of emotional effect on a normal person. (I'd suggest that if it does for you, then you need to reevaluate your priorities in life!) But the basic situation is still the same: analysis paralysis, where you're so afraid of making the wrong choice that you don't commit to any approach at all.

    • Re: (Score:3, Funny)

      by NewWorldDan ( 899800 )
      No, but I have heard people say, "There are just too many girls to choose from. I'll take the easy way out and hook up with one of the ugly ones." This apparently explains the continued existence of COBOL, FORTRAN and BASIC.
      • Re: (Score:3, Insightful)

        A better analogy is the guy who goes to college and meets a bunch of smart, sexy, fascinating women [named Haskell, Erlang, and Lisp] and then lets his parents [college loans] pressure him into marrying a dull hometown girl [named COBOL] with a dowry [a job offer] and good family connections [predictable future employment.]
  • by Grond ( 15515 ) on Tuesday October 02, 2007 @09:04PM (#20832281) Homepage
    Quoth the blogger: "With hundreds of languages and API's out there, is anyone really dumb enough to think "yet another one" will fix our parallel programming problems?"

    Yet Intel touts its Threading Building Blocks [intel.com] library as just such a fix to many parallel programming problems. Now, TBB is a very nice product, and in many ways it is superior to a lot of existing libraries, APIs, and languages, but one gets the sense that maybe the left hand doesn't know what the right hand is doing at Intel.

    I might also draw an analogy to the open source world, where there are often dozens of solutions to both simple/mundane problems (text editors, media players, command line shells, etc) and more complex ones (window managers, Linux distributions, etc). I wonder if the free and open source software world wouldn't also benefit from a "culling of the herd," so to speak.
    • by Arabani ( 1127547 ) on Tuesday October 02, 2007 @09:43PM (#20832585)

      Yet Intel touts its Threading Building Blocks library as just such a fix to many parallel programming problems. Now, TBB is a very nice product, and in many ways it is superior to a lot of existing libraries, APIs, and languages, but one gets the sense that maybe the left hand doesn't know what the right hand is doing at Intel.
      Do note that this is from the blog of a single developer at Intel. It's really just his own opinion of the situation, and nothing more. There's a big difference between what one guy thinks and says, and what the marketing department decides to do.
    • Quoth the blogger: "With hundreds of languages and API's out there, is anyone really dumb enough to think "yet another one" will fix our parallel programming problems?"

      Yet Intel touts its Threading Building Blocks library as just such a fix to many parallel programming problems. Now, TBB is a very nice product, and in many ways it is superior to a lot of existing libraries, APIs, and languages, but one gets the sense that maybe the left hand doesn't know what the right hand is doing at Intel.

      Not only TBB, b

    • Re: (Score:3, Funny)

      It isn't a silver bullet, but if it helps, so much the better; I like having lots of bullets to choose from.
  • good lord. (Score:5, Insightful)

    by russellh ( 547685 ) on Tuesday October 02, 2007 @09:18PM (#20832381) Homepage
    and I say that as an atheist.

    Ok, first: he writes as if all choices are equivalent. One jam might as well be the same as another; they just differ by taste. It's not like I walk into the store already invested in raspberries. It's not as if Java programmers are going to decide that the Fortran parallel library is better, so why not just switch to Fortran.

    Second, I doubt explicit parallel programming is going to be mainstream anytime soon. No, make that ever. Ever! Parallel programming will only happen in the mainstream when it is handled implicitly by the language, like a dataflow language. Asking normal programmers to deal with parallel programming is trouble when basic logic eludes most of them.

    Third, all you people, including the author of TFA, who think that more than one or two standards is a bad thing ("the great thing about standards is there are so many to choose from!"): it's time to wake up. The world is not about to consolidate. The future is going to require C3PO and R2D2: there will be so many fricking languages and standards that your translator is going to require AI and legs to come along with you. For every one thing that fades away, eventually, probably 10 or 100 replace it. The future is a big mess.
    • Amen. (Score:4, Insightful)

      by Blob Pet ( 86206 ) on Tuesday October 02, 2007 @10:16PM (#20832805) Homepage
      And I say that as an agnostic.

      This guy must be missing the point of having different programming languages and environments - parallel or not. He lists ZPL, which is, first and foremost in my opinion, a really cool array-based language. There are certain things you're going to want to do in ZPL as opposed to non-array-based languages, such as image processing (which lends itself really well to parallel processing IMHO). For things that don't require multi-dimensional array processing, you wouldn't want to use ZPL.
    • I disagree about the choices being equivalent. Having sampled from several, but not all, of those options, I can say with near certainty that the choice is not like choosing which flavor of jam you like. The choice is really which major surgery you'd like to undergo without benefit of anesthetic.

      On one extreme, you have things like OpenMP, which uses special comments to give hints on how to essentially vectorize code on a shared memory machine (and likely a UMA machine at that). On the other extreme, you
    • Basically, with any new interesting technology people try out many different approaches, and as the technology matures, a few of them will survive as the de-facto standards.

  • Choices are Good (Score:5, Insightful)

    by ChaoticCoyote ( 195677 ) on Tuesday October 02, 2007 @09:18PM (#20832383) Homepage

    Choice is good if it provides different tools for different tasks. The list provided is somewhat silly, since several of the technologies address completely different issues and applications. There's a reason Sears sells thirty different shapes of hammers -- all nails are not the same.

    After considerable deliberation and experimentation, I've chosen OpenMP for most task-parallel applications. The syntax is simple, it operates across C, C++, and Fortran, and it is supported by most major compilers on Linux, Windows, and Sun. The only quirk has been problematic support in GCC 4.2, but that will likely be cleared up within a few months. For cluster work, I tend to use MPI, because it has a long history and good support. I'm sure other tools have good versatility in environments different from those I frequent.

  • Bullshit (Score:4, Insightful)

    by n dot l ( 1099033 ) on Tuesday October 02, 2007 @09:24PM (#20832425)

    They created two displays of gourmet jams. One display had 24 jars. The other had 6.

    The larger display was better at getting people's attention. But the number of choices overwhelmed them and they just walked away without deciding to purchase a jam.
    Er. Sorry, in my experience programmers tackle parallel programming because it's somewhere in the requirements of their program that they do so, not because it sounds cool. It's not like multi-threading is a random fad or anything. The only way this would have been a relevant comparison is if the group of people had been pre-screened to those who definitely intended to buy a jam of some sort.

    On top of that, if this really is something that affects programmers then why the hell aren't we all rendered utterly useless by the number of programming languages? Or all the possible ways one could format code? Etc.

    But hey, the guy's writing in a "research" blog and, as in academia, when you don't have anything real to contribute you can cite something completely unrelated and pretend it has relevance.

    Honestly, this sounds vaguely like "there's too much to choose from, so everyone just use Intel Thread Building Blocks, K? You can't possibly do better so just use our stuff because we cover all cases..."
  • I read a lot of whining in the comments for this article. Let me put this in a different perspective: this is more like a frontier in programming that we're on here. In the past, a single-core processor running serial code was fine on any desktop. Any applications needing parallel programming were run on high-end servers or Beowulf clusters. They weren't for the average computer user out there.

    Now we have all these new-fangled dual/tri/quad core processors in the average microcomputer. It would be foolish t
    • It just became economical for just about every application to be written in parallel.

      Not really. Especially not in the desktop world. Seriously, why would any developer waste time and money multi-threading something as inherently serial as an event loop that doesn't come anywhere near saturating even a single core?

      What does a web browser need more than one core for? Or a word processor? Or an IM client? The only "desktop PC" type tasks I can think of that might actually be able to saturate even a single CPU are multimedia and gaming. In the case of the former, it's usually enough for the O

      • Seriously, why would any developer waste time and money multi-threading something as inherently serial as an event loop that doesn't come anywhere near saturating even a single core?
        Responsiveness. If my GUI program will occasionally do things that take a second or two, those things should be done in a second thread so the interface doesn't freeze up.

        What does a web browser need more than one core for?
        Because when I open some huge page (something like a big slashdot discussion) in a second tab, it freezes for a few seconds trying to render it. That sort of thing should go in an extra thread to prevent this.
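
        In other words, the second thread is there for responsiveness, not throughput. A minimal pthreads sketch of the pattern, where slow_render() is a hypothetical stand-in for the expensive page layout:

        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        static void slow_render(void) { sleep(2); }   /* pretend big-page layout */

        static void *render_worker(void *arg)
        {
            slow_render();
            printf("page ready\n");                   /* would notify the UI here */
            return NULL;
        }

        int main(void)
        {
            pthread_t worker;
            pthread_create(&worker, NULL, render_worker, NULL);

            /* The event loop keeps handling input while the render runs. */
            for (int i = 0; i < 4; i++) {
                printf("UI still responsive (%d)\n", i);
                usleep(500000);
            }
            pthread_join(worker, NULL);
            return 0;
        }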
    • There is something unsatisfying about developing for a small number of cores or parallel execution units.

      Here's the reasoning. To do heavy multi-threaded parallelism to speed up some kind of multimedia, game, or data visualization program, you probably want a higher-level language with garbage collection to handle some kind of data flow model -- say Java with a good class library to support this -- in place of C with pthreads and trying to place locks on data without the whole thing deadlocking. Assume for s

      • by thatskinnyguy ( 1129515 ) on Tuesday October 02, 2007 @10:53PM (#20833065)

        On the other hand, I heard that maybe 16 processors is an upper limit for the shared memory multi-threaded model because of all of the cache synchronization issues, and to go beyond that, you will need to go to clusters with communication between local processor memory.
        Beowulf Cluster on one processor die. How badass would that be? And by badass, I mean, totally 1337. And by "be", I mean be.
  • My LM-2 beats all yo puny non-programmable languages!

    (mapcar* #'traveling-salesman '(ottawa sydney new-york berlin))
  • by bcrowell ( 177657 ) on Tuesday October 02, 2007 @09:40PM (#20832559) Homepage

    I've been gradually trying to learn more about functional programming, partly because I think fp techniques and ways of thinking come in handy even if you're programming in a procedurally oriented language, and partly because fp seems like a paradigm that is likely to get more and more useful as we get machines with more and more cores. Okay, fp!=parallel, but, e.g., one of the big selling points of Erlang is supposed to be that it lends itself to completely transparent use of parallel processors.

    The choice overload does seem like kind of an issue to me. For as long as I continue to keep programming comfortably in the procedural languages I'm comfy with (e.g., perl), I'm never going to really wrap my mind around the radically different ways of thinking that you get in a more fp world. I've been thinking for a long time that it would be fun to do a coding project in ocaml ... or haskell ... or lisp ... or erlang ... or -- you get the idea.

    The trouble is, it's really not clear what to hitch my wagon to. Ocaml seems to have a very high quality implementation, but its garbage collector isn't multithreaded, the only book you can buy is in French (it's nice that you can download the English version for free, but I'd prefer to buy something bound), and the availability of libraries (and documentation for them) isn't quite as wonderful as I've gotten used to with perl. Lisp could be cool, but I hate the fact that it's not standardized, and I'm not convinced that eschewing arbitrary syntax really carries more pros than cons. Haskell? Maybe, but it sounds like putting on a hair shirt. The list goes on. I really feel like a deer in the headlights.

    • Re: (Score:2, Informative)

      by ljw1004 ( 764174 )
      F# is an implementation of ocaml on .net, so it benefits from microsoft's fast multithreaded garbage collector. Also you get lots of libraries and documentation for free. I've switched most of my preliminary programming to F#, to be reimplemented in C# or C++ depending on what's needed. The reimplementation's always between 4 and 10 times as big (in terms of lines of code).
    • Re: (Score:3, Informative)

      by Coryoth ( 254751 )

      Okay, fp!=parallel, but, e.g., one of the big selling points of Erlang is supposed to be that it lends itself to completely transparent use of parallel processors.

      What makes Erlang good for parallel programming, however, is its Actor-model approach to concurrency -- no shared memory, message passing based; the FP side is somewhat incidental (it certainly doesn't hurt, but it isn't required). You can have similarly clean and easy parallelism, as long as you take a message-passing style approach, in a non-FP language: take a look at SCOOP for Eiffel [se.ethz.ch], which provides fairly transparent parallel code with an OO language.
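
      For what it's worth, even plain C can be written in that style. A minimal illustrative sketch (not SCOOP or Erlang, just pthreads and a pipe): two threads share nothing and coordinate only by sending messages.

      #include <pthread.h>
      #include <stdio.h>
      #include <unistd.h>

      static int channel[2];                       /* pipe: [0] read end, [1] write end */

      static void *producer(void *arg)
      {
          for (int i = 0; i < 5; i++)
              write(channel[1], &i, sizeof i);     /* send a message; no shared state */
          int stop = -1;
          write(channel[1], &stop, sizeof stop);
          return NULL;
      }

      int main(void)
      {
          pipe(channel);
          pthread_t t;
          pthread_create(&t, NULL, producer, NULL);

          int msg;                                 /* consumer keeps its own local state */
          while (read(channel[0], &msg, sizeof msg) > 0 && msg != -1)
              printf("received %d\n", msg);

          pthread_join(t, NULL);
          return 0;
      }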

    • Most people think of Python as procedural or object-oriented, but it actually has all of the tools required to do functional programming. And, being Python, the syntax is logical and easy to read.

      Check out this series of articles for more info: http://www.ibm.com/developerworks/library/l-prog.html [ibm.com]
      • Re: (Score:3, Informative)

        Python ... actually has all of the tools required to do functional programming.
        Much as I like Python (and I do like it), last I checked it doesn't support tail-call optimization, which is pretty much required for many of the recursive algorithms used when doing FP.
    • and partly because fp seems like a paradigm that is likely to get more and more useful as we get machines with more and more cores.

      Well a functional language is just a declarative language where you explicitly state what parts can be run in parallel -- these choice points are called functions and the parallel parts are called parameters. An example:

      Functional:
      return add(calculateA(), calculateB());

      Iterative:
      ## BEGIN ANY ORDER
      tmpA = calculateA();
      tmpB = calculateB();
      ## END
      return add(tmpA, tmpB);
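
      Here is the same idea spelled out in C with threads (an illustrative sketch; calculateA/calculateB are the made-up names from the pseudocode above): the two arguments are independent, so they can be evaluated in either order or at the same time.

      #include <pthread.h>
      #include <stdio.h>

      static int calculateA(void) { return 40; }
      static int calculateB(void) { return 2; }

      static void *runA(void *out) { *(int *)out = calculateA(); return NULL; }
      static void *runB(void *out) { *(int *)out = calculateB(); return NULL; }

      int main(void)
      {
          int a, b;
          pthread_t ta, tb;
          pthread_create(&ta, NULL, runA, &a);     /* the "ANY ORDER" section */
          pthread_create(&tb, NULL, runB, &b);
          pthread_join(ta, NULL);
          pthread_join(tb, NULL);
          printf("%d\n", a + b);                   /* add(tmpA, tmpB) */
          return 0;
      }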

      What's really nee

  • There are hundreds of languages that support loops, variable assignments, recursion, definition of subroutines and Joe knows what else.

    Language constructs to support mp are bound to be just as numerous. I'm not normally one to be so dismissive of a post, but I think this is one of the more pointless items ever shared with this erudite little community.
  • Silly... (Score:4, Insightful)

    by ed.markovich ( 1118143 ) on Tuesday October 02, 2007 @09:43PM (#20832583) Homepage
    That's just silly. There are two types of programmers that could be making choices like this, and neither one of them would suffer from too much choice.

    The first kind is a programmer just trying to parallelize existing code. In that case, the choice of threading platforms is pretty much obvious. Existing Windows code? Use Windows threads. C/C++ on Unix? Pthreads, probably. Java code? Java threads... Probably not even 2 seconds' worth of thought will go into considering the alternatives (and that's probably fine).

    The other type of programmer is one who's actively looking to develop high-performance parallelized software. I am talking about cases where performance is the primary objective and it drives the choice of programming language and platform. In these cases, the nuances of the different threading models might matter, but a programmer of this type would be happy (rather than scared) to investigate all the options. After all, if he didn't care, he'd just go with the default choices like the first programmer.

    • The parent has the correct perspective on this whole issue.

      Programmers tasked with writing another workflow app or another e-commerce website are not going to even think about the dynamics of parallel programming (and don't need to). The developers/engineers building real-time robotic machinery will have been thinking about this since they were 16 years old.
  • by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Tuesday October 02, 2007 @09:47PM (#20832625) Homepage
    A nice video about The Paradox of Choice [google.com] is available at Google Video. It is an interesting topic, but I don't think it applies all that much to parallel programming. The issue isn't that there are too many languages, but simply that there are a bunch of very well established languages that provide you little to no help with writing parallel programs properly, so everybody just continues to write their programs the way they did for the last 20 years and thus takes little or no advantage of the available multiprocessor systems. And I doubt that just reducing the choice would help much at all right now, since we really still don't know how to write parallel programs on a large scale (i.e. in a way that everybody can and does do it), so some more research and experimentation is needed.

  • RubyMPI (Score:3, Interesting)

    by GrEp ( 89884 ) <crb002@NOSPAM.gmail.com> on Tuesday October 02, 2007 @09:52PM (#20832663) Homepage Journal
    Just use RubyMPI when I release it next week :)
    The power of MPI wrapped in the beauty of Ruby.

    http://www.public.iastate.edu/~crb002/ [iastate.edu]
  • I am confused (Score:5, Insightful)

    by LWATCDR ( 28044 ) on Tuesday October 02, 2007 @10:02PM (#20832709) Homepage Journal
    Okay, this list seems to be of several different technologies, some of which overlap, but several are used for very different tasks. You cannot replace MPI with Pthreads.
    I don't see the problem. Just as we have many different programming languages, these different interfaces all have different niches.

  • Ahh politics (Score:4, Insightful)

    by Rufus211 ( 221883 ) <rufus-slashdot@@@hackish...org> on Tuesday October 02, 2007 @10:02PM (#20832719) Homepage
    It's kind of amusing looking at the languages he lists. MPI and OpenMP are by far the most-used environments, but pthreads and Java should probably be next, not at the end of the list. Ct, Intel's new parallel language, hasn't even been formally announced yet, let alone there being any released documentation / code for it. CUDA, however, Nvidia's competing parallel language, isn't even mentioned even though it's been released for months now.
    • Re: (Score:2, Insightful)

      by kangasloth ( 114799 )

      I think your comment helps illustrate the point. The information you can so easily put your hands on wasn't free. Finding the best tool for the job can require significant research. I don't pass up orange marmalade because I'm confused; I walk away because I don't want to spend a half hour sampling jam. When faced with a set already winnowed down to those with the broadest appeal, I'm far more likely to invest the time because I'll feel like I've got a shot at finding one that's good enough in an acceptable

  • With hundreds of drugs out there, is anyone really dumb enough to think "yet another one" will fix their illness? With hundreds of energy sources out there, is anyone really dumb enough to think "yet another one" will solve our problems? With hundreds of CPU/computer architectures, is anyone really dumb enough to think "yet another one" will solve our problems? Sometimes the answer is yes, because it's not "just another" one. Sometimes the first hundred alternatives all suck, but the hundred-and-first
  • by SoupIsGood Food ( 1179 ) on Tuesday October 02, 2007 @10:11PM (#20832759)
    Part of the problem is that there isn't a good solution yet, so there's a lot of effort being put into trying to find a way for a bad solution to be more comfortable.

    Old-school iterative languages are a clumsy fit. They're nigh impossible to debug, and ones that let you do clever things at the hardware level will bring the whole project down in screaming flames when someone tries to get clever. So new libraries for old languages seldom fill the bill.

    New-hotness functional languages are insane. It's very, very, very difficult for seasoned programmers to get their heads around them, and impossible for n00bz who don't have heavy math backgrounds. Compounding the issue is that the syntax tends to be on the wrong side of horrible, with little or no syntactic sugar to make the medicine go down. So re-imagining the paradigm is a bit like picturing a five-dimensional sphere - great fun, if you're smart enough to do it. No one is smart enough to do it.

    We're probably looking at a problem space that is best tackled by something that doesn't exist yet - an elegant, easily understood tool that simply makes sense, like objects or everything-is-a-file or scripting languages or regex. We're seeing so many different approaches to MPP because programmers are trying to figure out what that tool is. Once someone hits on it, the field will shake itself out.

    Since we haven't hit on it, too much choice is a good thing - it means people will take the initiative to do something on their own that works better, rather than trying to get something suboptimal to work because it's the "standard".
    • Re: (Score:3, Insightful)

      by Coryoth ( 254751 )

      Part of the problem is that there isn't a good solution yet, so there's a lot of effort being put into trying to find a way for a bad solution to be more comfortable...Old-school iterative languages are a clumsy fit...New-hotness functional languages are insane.

      I think you're looking at the wrong dichotomy there. If you want languages that make concurrency easy to write, easy to reason about, and easy to get right, then you want languages that are based on message passing and no shared state. That can be either functional, like Erlang [wikipedia.org], or iterative OO like E [wikipedia.org].

      The problem is more that iterative programmers are used to using shared state as a crutch, and having message passing systems that incurred significant overhead. FP solves the shared-state problems by elimina



    • A 5 dimensional sphere is easy to visualize. The other two dimensions could be color and size, with the other three being the normal x y z coordinates.

      People always assume that the extra dimensions are obscure and bizarre extensions of space-time. They don't have to be. A dimension can be used for any variable you want. A dimension could be reflectivity of light, smell, fluffiness, firmness, hardness, etc.

      Diamonds, for instance, are priced on a four-dimensional scale (carat, color, clarity, cut). Those dimen
      • Re: (Score:3, Interesting)

        by paul248 ( 536459 )
        No, a red fuzzy sphere is NOTHING like a 5-dimensional hypersphere. Even a sphere of 4 spatial dimensions would blow your fucking mind. If we lived in a universe with 4 dimensions of space, then planets would be 4D spheres with a 3D surface. You would be able to walk around, and "turn" up and down without ever leaving the ground, just like you can turn left and right in our universe. It would be impossible to tie a knot. Imagine, if you will, people living on the surface of a circular planet in a 2D un
  • Are there any electricians or mechanics here who have the problem of too much choice when they go over to Sears, or wherever it is that you shop?
  • This "choice overload" is just a symptom of bad computer science in general: nobody knows whether any of those systems is "better" than any of the others. Nobody even knows what "better" would mean.

    Furthermore, the academic process rewards people for not doing the work to find out. If you spend six months finding out that your hot new idea is actually (1) worse than what was there before and (2) not so new anyway, you don't get tenure because you don't publish enough.
  • by GreatDrok ( 684119 ) on Tuesday October 02, 2007 @11:00PM (#20833119) Journal
    MPI, pthreads and so on are really a poor way of doing parallel programming. The reality is that these languages are all simply serial languages with parallelism bolted on. What you really need to do is use a truly parallel language. Way back in 1990 I learned to program transputers using Occam, which was parallel through and through. On that platform it was trivial to write pure parallel code and, more to the point, you could write it in a very fine-grained way which could easily be serialised to run on a smaller number of processors. In some ways it was similar to MPI but far more potent because of the built-in support in the transputer architecture. It is very sad that in the intervening 20 years or so since the transputer was first invented, parallel programming has gone largely nowhere. Attempts at automatic parallelisation of serial code are doomed to failure and threading within serial languages is always going to be a blunt tool. Maybe in another 20 years we will be back where we were in the late 1980s.
  • I'm about (Score:3, Funny)

    by jsse ( 254124 ) on Tuesday October 02, 2007 @11:01PM (#20833121) Homepage Journal
    to welcome our new Choice Overlord personally until I found that I misread.

    Sorry.
  • by 12357bd ( 686909 ) on Wednesday October 03, 2007 @12:03AM (#20833429)

    It's not an excess of choice, it's an excess of improvisation.

    Long story short... now that hardware speed is not easily doubled every few years, the industry has found a 'simple' way to keep pushing the wheel: duplicate cores! Well, it turns out that after decades of ignoring the parallel programming demands from academics, now they are trying to push the 'somewhat parallel' mess they are producing.

    The problem is that 'duplicated cores' != 'parallel programming'.

  • by RAMMS+EIN ( 578166 ) on Wednesday October 03, 2007 @04:15AM (#20834551) Homepage Journal
    Choice overload is a problem, but only if you are actually faced with all the choices. It need not be that way. There are various kinds of parallelism and various angles from which to attack each. Some of the choices one has will not make a lot of sense for the type of problem one is looking at. So, categorizing can help.

    Some technologies will be in rapid development, others will be no longer actively maintained, and yet others will be stable but actively maintained. This also affects which choices are good.

    Then there's licensing. Depending on the task, closed-source or copyleft licenses might not be acceptable.

    Some of the solutions may be low-level, allowing programmers to build something matching their application out of the provided building blocks, where other solutions may focus on providing higher level constructs, ready to be used. Sometimes, these will match what you need, and sometimes, they won't.

    I am sure there are other axes of differentiation. Setting requirements will narrow one's choices, as well as illustrate why choice is a Good Thing. If there were only a few choices, it would be unavoidable that none of them would actually fit some sets of requirements.

    Now, the thing is that categorizing the various solutions is not something that every potential user of the solutions has to do. Part of the work can be done by the developers of each solution. Presumably, the solution is developed because a satisfactory solution did not already exist. In my opinion, the developers _should_ list related work, compare their solution to it, and explain why they saw fit to develop their solution. This is a standard part of research.

    Another part of the work is comparisons done by third parties. Some independent person would go and investigate a number of solutions, and provide a write-up of the requirements they assumed, the solutions they investigated, how these solutions fit their requirements, and what their overall impression of the solutions was (w.r.t. things like ease of setup, documentation, development status, etc.). This, too, is valid research. It should be published, so everyone benefits.

    In the end, what you get to do when you need to pick a solution for parallel programming, is

    1. Define your requirements
    2. Get a list of possible solutions
    3. See what has been written about them
    4. Check if that seems to be valid (it might be out of date, for one)
    5. Possibly investigate any solutions that you found but that haven't been covered by others.
    6. Decide which one to go with, based on the information you have gathered.

    Sure, this is a far cry from

    1. Find the only available solution
    2. There is no step 2

    but in return you are almost guaranteed to get a choice that better fits your requirements (you would be very lucky to have the only available solution be a great match), without having to pay the full cost of investigating every solution out there.

    The thing to remember about the paradox of choice is that you will probably _feel_ less happy (there is always the nagging feeling that you could have made a better choice), but you will generally end up with something _better_ than if the choice hadn't been there in the first place.

    If you _really_ aren't happy about having to choose, you can always pick one (say, at random) and pretend that was your only choice. I conjecture that this is what the situation of having only one option is really like.
  • by Colin Smith ( 2679 ) on Wednesday October 03, 2007 @04:15AM (#20834553)
    Seems to me to be a philosophical problem. With a single CPU you have a single gate processing a single sequence of instructions. It's easy to push the instructions and data through that gate in order. When you have 10, 20, or 100 gates, choosing which direction to push the instructions and data becomes exponentially more complex.

    The solution, it would seem to me, would be to start pulling the instructions and data through the gates instead of pushing them.
     
  • Apples and Oranges (Score:3, Informative)

    by master_p ( 608214 ) on Wednesday October 03, 2007 @04:28AM (#20834623)
    Parallel programming has many facets. Libraries like OpenMP cannot be compared to Windows threads, for example: the former is a mechanism for doing task/data parallelism at the language level, the latter offers the primitives for making threads. Pthreads is a threading API similar to Win32 threads. I don't see how there is an overload of choices, really.
  • by pcause ( 209643 ) on Wednesday October 03, 2007 @08:10AM (#20835837)
    The overload is just a symptom of the real problem, which is that parallel programming is just plain hard. We've had these issues for over a decade and we haven't seen a step function in the use of parallel programming. It is difficult for most people to think of many things happening at the same time and to design and debug this class of program. We tend to start by thinking of a task in serial steps and then look for ways to add a little parallelism.

    The folks who are low-level systems programmers (OS and networks) tend to be folks who have an aptitude for thinking about parallelism and designing with parallelism in mind. There is a group of people in the scientific space who make use of parallelism, but then again they are PhD mathematicians and physicists. After that it drops off rapidly.

    Maybe it has something to do with the way we are educated. Perhaps it is a more fundamental issue of brain wiring. After all, we can perform complex physical tasks in parallel, but maybe only a small segment of the population is wired to think about programming problems in parallel.

    The chip guys are throwing more cores at us and we can't create the software to fully utilize the hardware because of this issue. Perhaps it is time to take a step back, stop trying to solve the problem by throwing more and different programming packages at it, and examine why folks have so much trouble in this area.
  • BS Overload (Score:3, Insightful)

    by wytcld ( 179112 ) on Wednesday October 03, 2007 @09:19AM (#20836789) Homepage
    So do you want to go to a bar with only one woman in it? Or a bar with 20? If you believe the premise that "choice overload" makes you unhappy, you should choose the first, right?

    Aside from the fact that in many contexts the "choice overload" hypothesis is flat-out wrong (unless you really can claim to have felt a special rush of happiness just when you went into that bar with only one dame in it - and she wasn't, say, your girl already), there are open questions about how representative the test sample was. Psychological problems run culturally in certain populations. How can we be sure that the population tested for "choice overload" didn't share a psychological problem regarding choice that has no foundation in basic human psychology, but rather was relative to their own cultural limitations?

    For most people in most cultures over history, the trick is to be happy with not much choice. That's generally the case for the working class, for the infantry soldiers, and for tribal peoples in environments of scarcity. Yet even in those cultures there are other classes for whom the trick is to be happy with a great deal of choice - the upper class, the generals, and tribal peoples in environments of plenty. Those whose cultures and religions derive primarily from desert (scarcity) environments are those driven craziest by "choice overload" - thus the Islamic meltdown, and the rejection of modern freedom by American fundamentalists. But we do have other cultures here. And the studies associated with the "choice overload" hypothesis, do not, I'll bet dimes to dollars, correct in any way for (sub)culture and psychological diagnosis.
