
New Languages Vs. Old For Parallel Programming

Posted by timothy
from the nothing-new-will-ever-happen dept.
joabj writes "Getting the most from multicore processors is becoming an increasingly difficult task for programmers. DARPA has commissioned a number of new programming languages, notably X10 and Chapel, written especially for developing programs that can be run across multiple processors, though some see them as too much of a departure to ever gain widespread usage among coders."


  • by Nursie (632944) on Sunday June 07, 2009 @02:52PM (#28243091)

    How blinkered are you?

    There exist whole classes of software that have been doing parallel execution, be it through threads, processes or messaging, for decades.

    Look at any/all server software, for god's sake, look at apache, or any database, or any transaction engine.

    If you're talking about desktop apps, then make it clear. The thing with most of those is that a single core far exceeds their requirements most of the time. But stuff like video encoding has been threaded for a while too.

  • by number6x (626555) on Sunday June 07, 2009 @02:55PM (#28243127)

    Erlang [erlang.org] is an older established language designed for parallel processing.

    Erlang was first developed in 1986, making it about a decade older than Java or Ruby. It is younger than C, slightly older than Perl, and a few years older than Python. It is a mature language with a large support community, especially in industrial applications. It is time-tested and proven.

    It is also open source and offers many options for commercial support.

    Before anyone at DARPA decides they can design a better language for concurrent parallel programming, I think they should be forced to spend one year learning Ada and a second year working in Ada. If they survive, they will most likely be cured of the notion that the Defense Department can design good programming languages.

  • by Nursie (632944) on Sunday June 07, 2009 @02:55PM (#28243131)

    Bullshit.

    Tell that to apache, and oracle, and basically anything that runs in a server room.

  • Re:What's so hard? (Score:5, Informative)

    by beelsebob (529313) on Sunday June 07, 2009 @02:59PM (#28243165)

    It's not creating threads that's hard - it's getting them to communicate with each other without ever reaching a state where thread A is waiting for thread B while thread B is waiting for thread A.
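
    Here's a minimal sketch of that trap in C with POSIX threads (illustrative names, not from any real program; the sleep() just widens the race window so the deadlock is nearly certain):

        /* Lock-ordering deadlock: A takes lock1 then lock2, B takes
         * lock2 then lock1. Once each holds its first lock, both
         * block forever. Build: gcc deadlock.c -lpthread */
        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
        pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

        void *thread_a(void *arg) {
            pthread_mutex_lock(&lock1);
            sleep(1);                    /* give B time to grab lock2 */
            pthread_mutex_lock(&lock2);  /* blocks: B holds lock2 */
            pthread_mutex_unlock(&lock2);
            pthread_mutex_unlock(&lock1);
            return NULL;
        }

        void *thread_b(void *arg) {
            pthread_mutex_lock(&lock2);
            sleep(1);                    /* give A time to grab lock1 */
            pthread_mutex_lock(&lock1);  /* blocks: A holds lock1 */
            pthread_mutex_unlock(&lock1);
            pthread_mutex_unlock(&lock2);
            return NULL;
        }

        int main(void) {
            pthread_t a, b;
            pthread_create(&a, NULL, thread_a, NULL);
            pthread_create(&b, NULL, thread_b, NULL);
            pthread_join(a, NULL);  /* never returns once deadlocked */
            pthread_join(b, NULL);
            puts("finished (only prints if the race was lost)");
            return 0;
        }

    The standard cure is to make every thread acquire locks in one agreed global order.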

  • by Raul654 (453029) on Sunday June 07, 2009 @03:03PM (#28243199) Homepage

    This is a subject near and dear to my heart. I got to participate in one of the early X10 alpha tests (my research group was asked to try it out and give feedback to Vivek Sarkar's IBM team). Since then, I've worked with lots of other specialized HPC programming languages.

    One extremely important aspect of supercomputing, a point that many people fail to grasp, is that application code tends to live a long, long, long time. Far longer than the machines themselves. Rewriting code is simply too expensive and economically inefficient. At Los Alamos National Lab, much of the code they run consists of nuclear simulations written in Fortran 77 or Fortran 90. Someone might have updated it to use MPI, but otherwise it's the same program. So it's important to bear in mind that those older languages, while not nearly as well suited for parallelism (either for programmer ease-of-use/efficiency, or to allow the compiler to do deep analysis/optimization/scheduling), are going to be around for a long time yet.

  • Re:Clojure (Score:3, Informative)

    by slasho81 (455509) on Sunday June 07, 2009 @03:35PM (#28243437)
    Erlang is meant for distributed computation, which is grand overkill for most programs. See here: http://groups.google.com/group/clojure/msg/2ad59d1c4bb165ff [google.com]

    Scala, unlike Clojure, did not embrace the importance of immutability to concurrent programming, which is why I think it's badly lacking. See here: http://clojure.org/state [clojure.org]
  • by coppro (1143801) on Sunday June 07, 2009 @03:36PM (#28243457)
    Erlang is probably the best language available for servers and similar applications. Not only is it inherently parallel (though they've only recently made the engine itself multithreaded; the parallelism is in the software), but it is very easily networked as well. As a result, a well-written Erlang program can only be taken down by simultaneously killing an entire cluster of computers.

    What's more, it has a little-seen feature of being able to handle code upgrades to most any component of the program without ever stopping - it keeps two versions of each module (old and new) in memory, and code can be written to automatically ensure a smooth transition into the new code when the upgrade occurs.

    If I recall correctly, Ericsson, the Swedish telecom where Erlang was designed, had one server running it with seven years of continuous uptime.
  • by Raul654 (453029) on Sunday June 07, 2009 @05:25PM (#28244193) Homepage

    I think you're confusing two different uses of parallelism. One is "small" parallelism -- the kind you see in graphical user interfaces. That is to say, if Firefox is busy loading a page, you can still click on the menus and get a response. Different aspects of the GUI are handled by different threads, so the program is responsive instead of hanging. That's done with threading libraries like POSIX threads. But that's really a negligible application of parallelism. The really important use of parallelism is for large programs that require lots of hardware to run in a reasonable amount of time.

    The threading libraries that you use for GUI applications don't work well for computationally intensive applications requiring parallelism. They require a shared-memory architecture. (A shared-memory architecture is one in which all processors see the same values in RAM. E.g., if processor #1 writes X to memory block 0x87654321, and processor #2 then reads 0x87654321, it gets back X instead of whatever value processor #2 last wrote there.) Shared-memory architectures don't scale -- the biggest ones you can buy have about 64 CPUs. If you do want to run computationally intensive applications on shared-memory architectures, then OpenMP is the library of choice. It's also fairly simple to use.
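
    For a sense of how simple OpenMP can be, here's a minimal sketch (my own illustrative example, assuming a compiler with OpenMP support, e.g. gcc -fopenmp):

        /* One pragma asks the runtime to split the loop's independent
         * iterations across the available cores; shared memory is
         * assumed. Build: gcc -fopenmp add.c */
        #include <omp.h>
        #include <stdio.h>

        #define N 1000000

        static double a[N], b[N], c[N];

        int main(void) {
            for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

            #pragma omp parallel for
            for (int i = 0; i < N; i++)
                c[i] = a[i] + b[i];

            printf("c[42] = %g, max threads = %d\n", c[42], omp_get_max_threads());
            return 0;
        }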

    If you want to run big applications, you need to use a distributed-memory architecture. And MPI (message passing interface) is pretty much the only game in town where that is concerned. It's by far the dominant player.
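
    A rough sketch of the message-passing style (again illustrative only, assuming an MPI implementation such as MPICH or Open MPI; build with mpicc, run with mpirun):

        /* Each rank sums its own slice of 1..1000000; MPI_Reduce
         * combines the partial sums on rank 0. No shared memory is
         * assumed -- ranks may live on different machines. */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
            MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes */

            long local = 0;
            for (long i = rank + 1; i <= 1000000; i += size)
                local += i;

            long total = 0;
            MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

            if (rank == 0)
                printf("sum = %ld from %d processes\n", total, size);

            MPI_Finalize();
            return 0;
        }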

  • Re:Clojure (Score:3, Informative)

    by slasho81 (455509) on Sunday June 07, 2009 @05:26PM (#28244217)
    Here's what Rich Hickey wrote about the matter in http://clojure.org/state [clojure.org]

    I chose not to use the Erlang-style actor model for same-process state management in Clojure for several reasons:

    • It is a much more complex programming model, requiring 2-message conversations for the simplest data reads, and forcing the use of blocking message receives, which introduce the potential for deadlock. Programming for the failure modes of distribution means utilizing timeouts etc. It causes a bifurcation of the program protocols, some of which are represented by functions and others by the values of messages.
    • It doesn't let you fully leverage the efficiencies of being in the same process. It is quite possible to efficiently directly share a large immutable data structure between threads, but the actor model forces intervening conversations and, potentially, copying. Reads and writes get serialized and block each other, etc.
    • It reduces your flexibility in modeling - this is a world in which everyone sits in a windowless room and communicates only by mail. Programs are decomposed as piles of blocking switch statements. You can only handle messages you anticipated receiving. Coordinating activities involving multiple actors is very difficult. You can't observe anything without its cooperation/coordination - making ad-hoc reporting or analysis impossible, instead forcing every actor to participate in each protocol.
    • It is often the case that taking something that works well locally and transparently distributing it doesn't work out - the conversation granularity is too chatty or the message payloads are too large or the failure modes change the optimal work partitioning, i.e. transparent distribution isn't transparent and the code has to change anyway.
  • by Anonymous Coward on Sunday June 07, 2009 @06:26PM (#28244715)

    The AXD301, the first major product Ericsson implemented in Erlang, was measured to achieve nine nines of uptime over its first couple of years.

    To read more about concurrency in Erlang and uptime, have a look at this article by Steve Vinoski:
    http://www.computer.org/portal/pages/dsonline/2007/10/w5tow.xml

  • by Haeleth (414428) on Monday June 08, 2009 @04:10AM (#28248209) Journal

    That's not what he meant. Yes, technically the index variable is changing, and it may also be used inside the loop. But the relevant questions are: "does it matter what order the iterations run in", and "does one iteration have to finish before the next can begin".

    If the answer to both questions is "no", then you can run several loop bodies at once on different processors. Bingo, instant speedup.
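
    To make those two questions concrete, here's a small illustrative C sketch (array names are made up):

        #include <stdio.h>
        #define N 8

        int main(void) {
            double a[N], c[N], s[N];
            for (int i = 0; i < N; i++) a[i] = i + 1;

            /* Parallelizable: iterations are independent, so order
             * doesn't matter and no iteration waits on another. */
            for (int i = 0; i < N; i++)
                c[i] = 2.0 * a[i];

            /* NOT parallelizable as written: iteration i reads
             * iteration i-1's result (a loop-carried dependency),
             * so each iteration must finish before the next begins. */
            s[0] = a[0];
            for (int i = 1; i < N; i++)
                s[i] = s[i - 1] + a[i];

            printf("c[3] = %g, s[%d] = %g\n", c[3], N - 1, s[N - 1]);
            return 0;
        }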

  • by ceoyoyo (59147) on Monday June 08, 2009 @10:20AM (#28250685)

    Haeleth is correct. The problem with a for loop is that it may or may not be parallelizable. There are some compilers that attempt to guess, but they generally do a poor job.

    The secret to efficient programming in interpreted languages is that whenever you want to do something where you'd normally use a for loop, you call a compiled function to do that for you. Those utility functions are generally explicitly either parallelizable or not, so the compiler doesn't have to guess - it knows.

    Suppose I want to do C = A + B, where A and B are large arrays. In a language like C I would write a for loop:

    for (int i = 0; i < aLength; i++) {
        C[i] = A[i] + B[i];
    }

    That's a fairly trivial example, but even that simple loop can foul up a parallelizing compiler. The equivalent in, say, Python, is:

    C = A + B

    where the + operator calls a compiled C function. Whoever writes that C function KNOWS the necessary loop can be calculated in parallel, and so can code it that way. From then on, everyone who uses the + operator benefits.

    Yes, you can do the same thing with parallel libraries in compiled languages, but programmers in those languages tend not to be used to that way of thinking. As an interpreted-language programmer you see many, many clever tricks to, for example, do large array computations using provided compiled functions rather than straightforward for loops. Those tricks are precisely the ones you need to learn as a major component of effective parallel programming.

  • Re:Chapel? (Score:3, Informative)

    by Eskarel (565631) on Monday June 08, 2009 @11:25AM (#28251485)

    Mandarin is/was the court language of China, and it was created in a rather clever way.

    The country, being very large, had a number of different dialects. Mandarin was developed so that while the spoken word might differ from one part of the country to another, the Mandarin text was identical regardless of where you were. It was a language of scholars, and regular people never really learned it (they didn't need to). The spoken form wasn't used much by anyone who wasn't a courtier (though neither was the written form).

    For example, Peking and Beijing are the same place: Peking is what they call(ed) it in the south, and Beijing is what they call it in the north, but the written Mandarin for both words is the same.

    To a certain extent, nearly all written languages were initially created deliberately, because so very few people actually wrote in the early days that there wasn't really any sort of natural evolution for a lot of written languages. For a more specific example: during the early years of the Soviet Union, when the Soviets were encouraging the development of their ethnic minorities (as opposed to kicking them off their land and putting them in work camps, as they did a few years later), the Soviet government actually sent linguists out to some of the more nomadic of these groups to develop a written form of their language, which prior to this effort had never existed. That's not even counting resurrected languages, which likely bear no significant resemblance to their previous forms but which are spoken by real live people, or made-up languages like Klingon that regular folks nonetheless learn to speak.

    Languages are both created and naturally evolve, and written languages and spoken languages do not always begin at the same time, are not always used by the same people, and are sometimes rather arbitrary.
