New Languages Vs. Old For Parallel Programming 321
joabj writes "Getting the most from multicore processors is becoming an increasingly difficult task for programmers. DARPA has commissioned a number of new programming languages, notably X10 and Chapel, written especially for developing programs that can be run across multiple processors, though others see them as too much of a departure to ever gain widespread usage among coders."
Re:Parallel is here to stay but not for every app (Score:5, Informative)
How blinkered are you?
There exist whole classes of software that have been doing parallel execution, be it through threads, processes or messaging, for decades.
Look at any/all server software, for god's sake, look at apache, or any database, or any transaction engine.
If you're talking about desktop apps then make it clear. The thing with most of those is that the machines far exceed their requirements with a single core, most of the time. But stuff like video encoding has been threaded for a while too.
Old languages designed for parallel processing? (Score:5, Informative)
Erlang [erlang.org] is an older established language designed for parallel processing.
Erlang was first developed in 1986, making it about a decade older than Java or Ruby. It is younger than C, roughly contemporary with Perl, and a tad older than Python. It is a mature language with a large support community, especially in industrial applications. It is time tested and proven.
It is also Open source and offers many options for commercial support.
Before anyone at DARPA thinks they can design a better language for concurrent parallel programming, I think they should be forced to spend one year learning Ada and a second year working in Ada. If they survive, they will most likely be cured of the notion that the Defense Department can design good programming languages.
Re:Parallel programming is dead. No one uses it... (Score:4, Informative)
Bullshit.
Tell that to apache, and oracle, and basically anything that runs in a server room.
Re:What's so hard? (Score:5, Informative)
It's not creating threads that's hard; it's getting them to communicate with each other without ever getting into a deadlock, where thread A is waiting for thread B while thread B is waiting for thread A.
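A minimal sketch of that situation in Python (the function and variable names are hypothetical, not from any particular codebase): the classic fix is to make every thread acquire locks in the same global order, so the circular wait can never form.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

# Deadlock-prone pattern: thread 1 takes lock_a then lock_b, while
# thread 2 takes lock_b then lock_a. If each grabs its first lock
# before the other's second acquire, both wait forever.

# Common fix: every thread acquires the locks in one fixed order.
def transfer(amount, balances):
    # All threads take lock_a first, then lock_b, so no cycle can form.
    with lock_a:
        with lock_b:
            balances["a"] -= amount
            balances["b"] += amount

balances = {"a": 100, "b": 0}
threads = [threading.Thread(target=transfer, args=(10, balances))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balances)  # {'a': 50, 'b': 50}
```

Lock ordering is only one of several disciplines (lock hierarchies, try-lock with backoff, or avoiding shared state entirely), but it is the simplest to retrofit.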
Established vs new programming languages for HPC (Score:4, Informative)
This is a subject near and dear to my heart. I got to participate in one of the early X10 alpha tests (my research group was asked to try it out and give feedback to Vivek Sarkar's IBM team). Since then, I've worked with lots of other specialized HPC programming languages.
One extremely important aspect of supercomputing, a point that many people fail to grasp, is that application code tends to live a long, long, long time. Far longer than the machines themselves. Rewriting code is simply too expensive and economically inefficient. At Los Alamos National Lab, much of the code they run consists of nuclear simulations written in Fortran 77 or Fortran 90. Someone might have updated it to use MPI, but otherwise it's the same program. So it's important to bear in mind that those older languages, while not nearly as well suited for parallelism (either for programmer ease-of-use/efficiency, or for allowing the compiler to do deep analysis/optimization/scheduling), are going to be around for a long time yet.
Re:Old languages designed for parallel processing? (Score:5, Informative)
What's more, it has a little-seen feature of being able to handle code upgrades to most any component of the program without ever stopping - it keeps two versions of each module (old and new) in memory, and code can be written to automatically ensure a smooth transition into the new code when the upgrade occurs.
If I recall correctly, the Swedish telecom where Erlang was designed had one server running it with 7 continuous years uptime.
Re:Established vs new programming languages for HP (Score:3, Informative)
I think you're confusing two different uses of parallelism. One is "small" parallelism, the kind you see in graphical user interfaces. That is to say, if Firefox is busy loading a page, you can still click on the menus and get a response. Different aspects of the GUI are handled by different threads, so the program stays responsive instead of hanging. That's done with threading libraries like POSIX threads. But that's really a negligible application of parallelism. The really important use is for very large programs that need lots of hardware to run in a reasonable amount of time.
The threading libraries that you use for GUI applications don't work well for computationally intensive applications requiring parallelism. They require a shared-memory architecture. (A shared-memory architecture is one in which all processors see the same values in RAM: if processor #1 writes X to memory block 0x87654321 and processor #2 then reads 0x87654321, it gets X instead of whatever value processor #2 last wrote there.) Shared-memory architectures don't scale; the biggest ones you can buy have about 64 CPUs. If you do want to run computationally intensive applications on a shared-memory architecture, then OpenMP is the library of choice. It's also fairly simple to use.
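OpenMP itself is a pragma-based API for C, C++, and Fortran; as a rough stdlib-only stand-in, Python threads illustrate the shared-memory property described above, since all threads in a process genuinely see the same values in the same buffer (the function and variable names here are made up for illustration):

```python
import threading

def fill(shared, start, stop):
    # Each worker writes its own slice of the one shared buffer.
    for i in range(start, stop):
        shared[i] = i * i

n = 8
shared = [0] * n  # a single buffer visible to every thread
t1 = threading.Thread(target=fill, args=(shared, 0, n // 2))
t2 = threading.Thread(target=fill, args=(shared, n // 2, n))
t1.start(); t2.start()
t1.join(); t2.join()
# Both threads' writes are visible here, in the same memory.
print(shared)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because the workers write disjoint slices, no locking is needed; the moment two writers touch the same element, you are back to the locking problems discussed elsewhere in this thread.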
If you want to run a big application, you need a distributed-memory architecture. And MPI (Message Passing Interface) is pretty much the only game in town where that is concerned; it's by far the dominant player.
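The core MPI discipline, workers that share nothing and communicate only by explicit send/receive, can be sketched in stdlib Python with queues standing in for MPI_Send/MPI_Recv (the worker function and the four-way split are hypothetical, chosen just for illustration):

```python
import threading
import queue

def worker(inbox, outbox, rank):
    # MPI-style discipline: no shared state, only explicit messages.
    chunk = inbox.get()              # roughly analogous to MPI_Recv
    outbox.put((rank, sum(chunk)))   # roughly analogous to MPI_Send

data = list(range(100))
outbox = queue.Queue()
threads = []
for rank, start in enumerate(range(0, 100, 25)):
    inbox = queue.Queue()
    t = threading.Thread(target=worker, args=(inbox, outbox, rank))
    t.start()
    inbox.put(data[start:start + 25])  # scatter one chunk per worker
    threads.append(t)
for t in threads:
    t.join()
# Gather the partial sums; order of arrival doesn't matter for a sum.
total = sum(outbox.get()[1] for _ in threads)
print(total)  # 4950
```

In real MPI the workers would be separate processes on separate nodes with no common address space at all, which is exactly why it scales where shared memory cannot.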
Re:Old languages designed for parallel processing? (Score:2, Informative)
The AXD301, the first major product Ericsson implemented in Erlang, was measured to achieve nine-nines uptime over its first couple of years.
To read more about concurrency in Erlang and uptime have a look at this article:
http://www.computer.org/portal/pages/dsonline/2007/10/w5tow.xml by Steve Vinoski
Re:"for" or "while" loops (Score:3, Informative)
That's not what he meant. Yes, technically the index variable is changing, and it may also be used inside the loop. But the relevant questions are: "does it matter what order the iterations run in", and "does one iteration have to finish before the next can begin".
If the answer to both questions is "no", then you can run several loop bodies at once on different processors. Bingo, instant speedup.
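A sketch of that idea in Python (the loop body here is hypothetical): when both answers are "no", the bodies can be handed to a pool of workers instead of run one after another, and the results come back in the original order.

```python
from concurrent.futures import ThreadPoolExecutor

def body(i):
    # Hypothetical loop body: it depends only on its own index, so the
    # iteration order doesn't matter and no iteration waits on another.
    return i * i

# Sequential form: results = [body(i) for i in range(10)]
with ThreadPoolExecutor(max_workers=4) as pool:
    # Iterations may run concurrently; map() still preserves order.
    results = list(pool.map(body, range(10)))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

For a CPU-bound body in CPython you would reach for a process pool or a compiled extension instead of threads, but the independence test is the same either way.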
Re:I'm waiting for parallel libs for R (Score:3, Informative)
Haeleth is correct. The problem with a for loop is that it may or may not be parallelizable. There are some compilers that attempt to guess, but they generally do a poor job.
The secret to efficient programming in interpreted languages is that whenever you want to do something where you'd normally use a for loop, you call a compiled function to do that for you. Those utility functions are generally explicitly either parallelizable or not, so the compiler doesn't have to guess - it knows.
Suppose I want to do C = A + B, where A and B are large arrays. In a language like C I would write a for loop:
for (int i = 0; i < aLength; i++) {
    C[i] = A[i] + B[i];
}
That's a fairly trivial example, but even that simple loop can foul up a parallelizing compiler. The equivalent in, say, Python is:
C = A + B
where the + operator calls a compiled C function. Whoever writes that C function KNOWS the necessary loop can be calculated in parallel, and so can code it that way. From then on, everyone who uses the + operator benefits.
Yes, you can do the same thing with parallel libraries in compiled languages, but programmers in those languages tend not to be used to that way of thinking. As a programmer in an interpreted language, you see many, many clever tricks for, say, doing large array computations with provided compiled functions rather than straightforward for loops. Those tricks are precisely the ones you need to learn as a major component of effective parallel programming.
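A stdlib-only sketch of the trick being described (NumPy's array `+` is the real-world version of the `C = A + B` example above): push the per-element loop out of interpreted code and into a compiled helper.

```python
import operator

A = list(range(5))
B = [10, 20, 30, 40, 50]

# Interpreted loop: the interpreter executes every iteration itself,
# and a parallelizing tool would have to guess whether the
# iterations are independent.
C_loop = []
for i in range(len(A)):
    C_loop.append(A[i] + B[i])

# Compiled helper: map() and operator.add run the element-wise loop
# in C. Whoever wrote that helper KNOWS it is parallelizable, so
# nothing has to guess.
C_vec = list(map(operator.add, A, B))

print(C_vec)  # [10, 21, 32, 43, 54]
```

`map` here is not itself parallel, but it demonstrates the habit: once the loop lives inside one well-understood compiled function, that function's author can vectorize or parallelize it once for every caller.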
Re:Chapel? (Score:3, Informative)
Mandarin is/was the court language of China, and it was created in a rather clever way.
The country, being very large, had a number of different dialects. Mandarin was developed so that while the spoken word might differ from one part of the country to another, the Mandarin text was identical regardless of where you were. It was a language of scholars, and regular people never really learned it (they didn't need to). The spoken form wasn't used much by anyone who wasn't a courtier (though neither was the written form).
For example, Peking and Beijing are the same place, Peking is what they call(ed) it in the south, and Beijing is what they call it in the north, the written mandarin for both words is the same.
To a certain extent, nearly all written languages were initially created deliberately: so few people actually wrote in the early days that there wasn't really any natural evolution for a lot of written languages. For a more specific example, during the early years of the Soviet Union, when the Soviets were encouraging the development of their ethnic minorities (as opposed to kicking them off their land and putting them in work camps, as they did a few years later), the Soviet government sent linguists out to some of the more nomadic of these groups to develop a written form of their language, which prior to that effort had never existed. That's not even counting resurrected languages, which likely bear no significant resemblance to their previous forms but are spoken by real live people, or made-up languages like Klingon.
Languages are both created and naturally evolve, and written languages and spoken languages do not always begin at the same time, are not always used by the same people, and are sometimes rather arbitrary.