
New Languages Vs. Old For Parallel Programming 321

joabj writes "Getting the most from multicore processors is becoming an increasingly difficult task for programmers. DARPA has commissioned a number of new programming languages, notably X10 and Chapel, written especially for developing programs that can be run across multiple processors, though others see them as too much of a departure to ever gain widespread usage among coders."
  • I'm waiting for parallel libs for R, even if I'm told that scripting languages won't have much of a future in parallel processing. All I can do is hope. Sigh.

    • Re: (Score:3, Interesting)

      There are some packages on CRAN that claim to implement parallel processing for R -- go to http://cran.r-project.org/web/packages/ [r-project.org] and search for the text "parallel" to find several examples. I haven't tried any of them out yet, but sooner or later I'm going to have to.

      And actually, I think that "scripting" languages in general will have a very bright future in the parallel processing world. If memory management and garbage collection are implemented invisibly (and well!) in the core language, then the pr

    • by ceoyoyo ( 59147 ) on Sunday June 07, 2009 @08:57PM (#28245833)

      Whoever told you that is mistaken.

      The easiest way to take advantage of a multiprocessing environment is to use techniques that will be familiar to any high-level programmer. For example, you don't write for loops; you call functions written in a low-level language to do things like that for you. Those low-level functions can be easily parallelized, giving all your code a boost.

  • by Meshach ( 578918 ) on Sunday June 07, 2009 @02:39PM (#28242987)
    Parallel is not going to go anywhere, but it is only really valid for certain types of applications. Larger items like operating systems or most system tasks need it. Whether it is worthwhile in lowly application land is a case-by-case decision; it will mostly depend on the skill of the programmers involved and the budget for the particular application in question.
    • Re: (Score:2, Insightful)

      by aereinha ( 1462049 )

      Parallel is not going to go anywhere, but it is only really valid for certain types of applications.

      Exactly, some problems are inherently serial. These programs would run slower if you made them run in parallel.

      • Most compute-intensive problems that a user will encounter at home are easily parallelizable, e.g. video encoding, gaming, Photoshop filters, web browsing and so on. The number of times I have maxed out a single CPU on a problem that would not have been parallelizable to some large degree is close to zero.

        The trouble is that they are only "easily" parallelizable in concept; implementing parallelization on an existing serial codebase is where it gets messy.

      • Exactly, some problems are inherently serial. These programs would run slower if you made them run in parallel.

        If they are inherently sequential, then obviously they cannot be made to run in parallel. The truth is that the vast majority of computing applications, both existing and future, are inherently parallel. As soon as some maverick startup (forget the big players like Intel, Microsoft, or AMD because they are too married to the old ways) figures out the solution to the parallel programming crisis (see

      • ... and they often run in parallel with other software.

        It blows my mind how many people don't realize their computer is almost always doing more than one thing at a time. A good OS that knows how to schedule the correct processes to the correct processors can give you a good benefit from parallelism without needing to run multithreaded software.

        Certainly if you only ever use one program on your OS it will be minimal, but it will still be there, even if it's just your anti-virus software running in parallel

    • by Nursie ( 632944 ) on Sunday June 07, 2009 @02:52PM (#28243091)

      How blinkered are you?

      There exist whole classes of software that have been doing parallel execution, be it through threads, processes or messaging, for decades.

      Look at any/all server software, for god's sake, look at apache, or any database, or any transaction engine.

      If you're talking about desktop apps then make it clear. The thing with most of those is that the machines far exceed their requirements with a single core, most of the time. But stuff like video encoding has been threaded for a while too.

      • Re: (Score:3, Insightful)

        by Meshach ( 578918 )
        I guess in hindsight I should have been clearer and less "blinkered"...

        For real-time apps that do transactions, parallelism is needed. What I was comparing them to is desktop apps, where in many cases the benefit does not really exist. The main point I was trying to get across is that parallel programming is difficult and not needed for every application.
        • Define the applications for which parallel processing is not needed. It might be a smaller list than you think. For example, think of a spreadsheet - parallel processing can really help here when trying to resolve all the cells with complex inter-related calculations. I mean, they already need to do tricks to keep them responsive today, trying to recalculate only the stuff that's showing rather than the entire document (all n sheets).

          Word processors? Well, besides having embedded spreadsheets, they also

      • Ah, but all of those server-side apps are effectively doing a single task, multiple times - i.e., each request occurs in a different thread; they do not split one request onto several CPUs. That's what all this talk of 'desktop parallelism' is all about.

        So now everyone sees multiple cores on the desktop and think to themselves, that data grid is populating really slowly.. I know, we need to parallelise it, that'll make it go faster! (yeah, sure it will)

        I'm sure there are tasks that will benefit from parallel pr

      • Re: (Score:3, Insightful)

        by Eskarel ( 565631 )

        While technically most servers are somewhat parallel in nature, it isn't really the same sort of thing that these sorts of languages are designed to achieve.

        Servers, for the most part, are parallel because they have to be able to handle a lot of requests simultaneously, so they spin off a new thread or process (depending on the architecture) for each request to the server, do some relatively simple concurrency checking and then run each request, for the most part, in serial. They're parallel because the task

    • by Daniel Dvorkin ( 106857 ) * on Sunday June 07, 2009 @02:53PM (#28243111) Homepage Journal

      True enough, but the class of applications for which parallel processing is useful is growing rapidly as programmers learn to think in those terms. Any program with a "for" or "while" loop in which the results of one iteration do not depend on the results of the previous iteration, as well as a fair number of such loops in which the results do have such a dependency, is a candidate for parallelization -- and that means most of the programs which most programmers will ever write. We just need the languages not to make coding this way too painful.
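
      As a minimal sketch of the kind of loop he's describing (C with OpenMP purely as an illustration; TFA's languages would express it differently, and the function here is hypothetical), an independent-iteration loop parallelizes with a single directive:

          /* Each iteration writes only out[i] from in[i], so no iteration
             depends on another -- exactly the "trivially parallel" case. */
          void scale(const double *in, double *out, int n, double k)
          {
              #pragma omp parallel for
              for (int i = 0; i < n; i++)
                  out[i] = in[i] * k;
          }

      Remove the pragma and the function still computes the same result serially, which is part of why this style is so painless.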

      • by Tablizer ( 95088 )

        Any program with a "for" or "while" loop in which the results of one iteration do not depend on the results of the previous iteration, as well as a fair number of such loops...is a candidate for parallelization

        A lot of those kinds of operations can be farmed off to a database or database-like thing where explicit loops are needed less often. The database is an excellent place to take advantage of parallelism because most query languages are functional-like.

        I remember in desktop-databases (dBase, FoxPro, Par

    • "Parallel is not going to go anywhere..."

      Really? Look inside any machine nowadays. I'm working on an 8 core machine right now. The individual cores aren't going to get that much faster in the years to come, but the number of cores in a given processor is going to increase dramatically. Unless you want your programs to stay at the same execution speed for the next 5-10 years, you need parallel. And what we need is languages and compilers that abstract away the actual hard work so that anybody can mak
      • Re: (Score:2, Funny)

        by cheftw ( 996831 )

        The individual cores aren't going to get that much faster in the years to come

        I'm sure I've heard something like this before...

        • Except in this case it's likely to be true. Transistors can only get so small before shrinking them becomes technically impossible, and infeasible somewhat before that. Additionally, electrons can only travel so fast through a circuit, and you need a certain number of them to work well. To put it another way, we're getting relatively close to the point of diminishing returns on that aspect of computing. Sure, engineers could make things go quite a bit faster, but realistically it's questionable as to how much faster a
          • There are a number of advancements in microarchitecture that will keep Moore's Law going for a good while yet.

            The processor makers have added parallel cores because it was easier and less expensive for them to do so. On the other hand, there are "sharpening" techniques that will allow light lithography to be scaled smaller still, and none of the major chip houses have even started using X-Ray lithography yet, which will allow them to go smaller still.

            Yes, it is getting to the point that individual fe
          • by jbolden ( 176878 )

            Don't forget coprocessors. Imagine if your video card understood video decoding itself and cached....

        • Re: (Score:3, Interesting)

          by peragrin ( 659227 )

          Yeah, why can't you buy 6 GHz cores? Is it because you can't clock them that high unless you super-cool them?

          The 3.8 GHz P4 was released in 2005. Since then, Intel has instead focused on power savings and on adding cores while shrinking die sizes.

          Quantum computing is a long way off; heck, they can't even get a good memristor yet. The advantage we do have is that memory speeds are finally catching up to processor speeds. Combine that with a memristor at that speed and computing will take a whole new direction for ef

      • by Moochman ( 54872 )

        I think he means "not going anywhere" as in "here to stay". In other words he's agreeing with you.

        But yeah, I read it the other way too the first time around.

      • by AuMatar ( 183847 ) on Sunday June 07, 2009 @03:54PM (#28243603)

        And how many of those cores are above 2% utilization for 90% of the day? Parallelization on the desktop is a solution in search of a problem - in a single core we have dozens of times what the average user needs. My email, web browsing, word processor, etc. aren't CPU limited. They're network limited, and after that they're user limited (a human can only read so many Slashdot stories a minute). There's no point in anything other than servers having 4 or 8 cores. But if Intel doesn't fool people into thinking they need new computers, their revenue will go down, so 16-core desktops next year it is.

        • by bertok ( 226922 ) on Monday June 08, 2009 @04:10AM (#28248205)

          The % utilization metric is a red herring. Most servers are underutilized by that metric, which is why VMware is making so much money consolidating them!

          Users don't actually notice, or care, about CPU utilization. What users notice, is latency. If my computer is 99% idle, that's fine, but I want it to respond to mouse clicks in a timely fashion. I don't want to wait, even if it's just a few hundred milliseconds. This is where parallel computation can bring big wins.

          One thing I noticed is that MS SQL Server still has its default "cost threshold for parallelism" set to 5, which AFAIK means that if the query planner estimates that a query will take more than 5 seconds, it'll attempt a parallel query plan instead. That's insane! I don't know what kind of users Microsoft is thinking of, but in my world, if a form takes 5 seconds to display, it's way too slow to be considered acceptable. Many servers now have 8 or more cores, and 24 (4x hexacore) is going to be common for database servers very soon. In that picture, even if you assume only a 15x speedup because of overhead, 5 seconds becomes something like 300 milliseconds!

          Ordinary Windows applications can benefit from the same kind of speedup. For example, a huge number of applications use compression internally (all Java JAR files, all the docx-style Office 2007 files, etc.), yet the only parallel compressor I know of is WinRAR, which really does get 4x the speed on my quad-core. Did you know that the average compression rate for a normal algorithm like zip is something like 10 MB/sec/core? That's pathetic. A Core i7 with 8 threads could probably do the same thing at 60 MB/sec or more, which is more in line with, say, gigabit ethernet speeds, or a typical hard drive.

          In other words, for a large class of apps, your hard-drive is not the bottleneck, your CPU is. How pathetic is that? A modern CPU has 4 or more cores, and it's busy hammering just one of those while your hard-drive, a mechanical component, is waiting to send it more data.

          You wait until you get an SSD. Suddenly, a whole range of apps become "cpu limited".

        • Re: (Score:3, Insightful)

          And how many of those cores are above 2% utilization for 90% of the day? Parallelization on the desktop is a solution in search of a problem-

          The indication of the need for parallelization isn't the number of cores that are above 2% utilization for 90% of the day, but the number that are above 90% utilization for any part of the day.

          My email, web browsing, word processor, etc aren't cpu limited.

          Some of us use our computers for more than that.

        • Re: (Score:3, Insightful)

          by PitaBred ( 632671 )
          What about that other 10% of the day? It's mostly for gamers and developers, but multiple cores really does speed a lot of things up. And they're starting to be quite useful now that Joe User is getting into video and audio editing and so on. Those most certainly are CPU-limited applications, and they are pretty amenable to parallelism as well. Just because you only email, browse the web and use a word processor doesn't mean that's what everyone does.
    • by nurb432 ( 527695 )

      If we all become part of some huge cloud and share our ( mostly mobile ) resources by default, it may apply even to the most lowly of text editors.

  • All you need is Fortran.

  • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Sunday June 07, 2009 @02:44PM (#28243031) Homepage
    A lot of problems are I/O driven -- I would like to see more database client libraries allow a full async approach that lets us not block the threads we are trying to do concurrent work on.
  • What's so hard? (Score:3, Interesting)

    by 4D6963 ( 933028 ) on Sunday June 07, 2009 @02:48PM (#28243069)

    Not trying to troll or anything, but I'd always heard about how complicated parallel programming is for programmers. Then I learnt to use pthreads in C to parallelise everything in my C program, from concurrent processing of the same things to threading any aspect of the program, and I was surprised by how simple and straightforward it was using pthreads. Even creating a number of threads depending on the number of detected cores was simple.

    OK, maybe what I did was simple enough, but I just don't see what's so inherently hard about parallel programming. Surely I am missing something.

    • Re: (Score:2, Insightful)

      by Anonymous Coward
      Do your programs ever leak memory? Did you have to work with a team of 100+ SWE's to write the program? Did you have technical specs to satisfy, or was this a weekend project? This is the difference between swimming 100 meters and sailing across the Pacific.
    • Re:What's so hard? (Score:5, Informative)

      by beelsebob ( 529313 ) on Sunday June 07, 2009 @02:59PM (#28243165)

      It's not creating threads that's hard; it's getting them to communicate with each other without ever getting into a situation where thread A is waiting for thread B and thread B is waiting for thread A.
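
      A minimal C/pthreads sketch of that trap (the function and lock names are made up for illustration): two threads take the same two locks in opposite order, so each can end up holding one lock while waiting forever for the other.

          #include <pthread.h>

          pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
          pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

          void *thread_one(void *arg)      /* takes a, then b */
          {
              pthread_mutex_lock(&lock_a);
              pthread_mutex_lock(&lock_b); /* blocks forever if thread_two holds b */
              pthread_mutex_unlock(&lock_b);
              pthread_mutex_unlock(&lock_a);
              return NULL;
          }

          void *thread_two(void *arg)      /* takes b, then a: opposite order */
          {
              pthread_mutex_lock(&lock_b);
              pthread_mutex_lock(&lock_a); /* blocks forever if thread_one holds a */
              pthread_mutex_unlock(&lock_a);
              pthread_mutex_unlock(&lock_b);
              return NULL;
          }

      The usual cure is a global lock ordering: every thread that needs both locks must take them in the same order.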

    • Re:What's so hard? (Score:4, Interesting)

      by Unoti ( 731964 ) on Sunday June 07, 2009 @03:21PM (#28243335) Journal

      The fact that it seems so simple at first is where the problem starts. You had no trouble in your program. One program. That's a great start. Now do something non-trivial. Say, make something that simulates digital circuits: AND gates, OR gates, NOT gates. Let them be wired up together. Accept an arbitrarily complex setup of digital logic gates. Have it simulate the outputs propagating to the inputs. And make it so that it expands across an arbitrary number of threads, and make it expand across an arbitrary number of processes, both on the same computer and on other computers on the same network.

      There are some languages and approaches you could choose for such a project that will help you avoid the kinds of pitfalls that await you, and provide most or all of the infrastructure that you'd have to write yourself in other languages.

      If you're interested in learning more about parallel programming, why it's hard, and what can go wrong, and how to make it easy, I suggest you read a book about Erlang [amazon.com]. Then read a book about Scala. [amazon.com]

      The thing is, it looks easy at first, and it really is easy at first. Then you launch your application into production, and stuff goes real funny and it's nigh unto impossible to troubleshoot what's wrong. In the lab, it's always easy. With multithreaded/multiprocess/multi-node systems, you've got to work very very hard to make them mess up in the lab the same way they will in the real world. So it seems like not a big deal at first until you launch the stuff and have to support it running every day in crazy unpredictable conditions.

    • Re: (Score:3, Insightful)

      by quanticle ( 843097 )

      Making a threaded application in C isn't difficult. Testing and debugging said application is. Given that threads share memory, rigorously testing buffer overflow conditions becomes doubly important. In addition, adding threading introduces a whole new set of potential errors (such as race conditions, deadlocks, etc.) that need to be tested for.

      It's easy enough to create a multi-threaded version of a program when it's for personal use. However, there are a number of issues that arise whenever a threaded p

    • by synaptik ( 125 )

      even creating a number of threads depending on the number of detected cores was simple.

      Are you guaranteed that those spawned threads will be evenly distributed amongst the cores, on a given architecture? There's also a matter of locality; you want the threads that are dealing with certain data to run on cores that are close to that data.

      MT is not the same thing as MP. You may have written a multi-threaded app, but when on a single-core you likely didn't see any perf gains. MT apps on a single CPU core can have benefits-- such as, your UI can remain responsive to the user during seriou

    • Re: (Score:3, Interesting)

      by Yacoby ( 1295064 )
      Data communication in a foolproof way. Writing a threaded program is easy if the program is simple. You can even get a bit more performance out of a program using multiple threads and locking, but once you use locking you end up with the possibility of race conditions, deadlock and other nightmares.

      Extending this to something like a game engine is much harder. Say we split our physics and rendering into two threads. How does the physics thread update the render thread? We could just lock the whole sc
    • by grumbel ( 592662 )

      Implementing threading in a new app written from scratch isn't that hard (even so, it has quite a few problems of its own); the really troublesome part is retrofitting legacy code that wasn't built for threading, as such code often makes a lot of assumptions that simply break under threading.

    • Not trying to troll or anything, but I'd always heard about how complicated parallel programming is for programmers. Then I learnt to use pthreads in C to parallelise everything in my C program, from concurrent processing of the same things to threading any aspect of the program, and I was surprised by how simple and straightforward it was using pthreads. Even creating a number of threads depending on the number of detected cores was simple.

      Really? With the pthread API? Pray tell, how does that work?

      Note that reading from /proc/ is neither part of the pthread API, nor portable...
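
      For what it's worth, a common (though still not pthread-API, and only semi-portable) way to do it is sysconf(); _SC_NPROCESSORS_ONLN is an extension supported by glibc, the BSDs and others, not strict POSIX:

          #include <pthread.h>
          #include <unistd.h>

          static void *worker(void *arg) { return arg; }   /* placeholder work */

          int main(void)
          {
              long n = sysconf(_SC_NPROCESSORS_ONLN);   /* online cores, or -1 */
              if (n < 1)  n = 1;
              if (n > 64) n = 64;                       /* cap to our static array */

              pthread_t tid[64];
              for (long i = 0; i < n; i++)
                  pthread_create(&tid[i], NULL, worker, (void *)i);
              for (long i = 0; i < n; i++)
                  pthread_join(tid[i], NULL);
              return 0;
          }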

    • by jbolden ( 176878 )

      An example is the locking problem on variables that are shared. Which variables get locked, and for how long? How does the lock get released? Too many locks and you run sequentially; too few and you corrupt your shared data.
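
      A tiny C/pthreads sketch of that trade-off (the counter is hypothetical): an unprotected counter++ is a read-modify-write race, so increments get lost (the "too few" end); the mutex fixes it at the cost of serializing every increment (the "too many" end).

          #include <pthread.h>

          long counter = 0;
          pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

          void *bump(void *arg)
          {
              for (int i = 0; i < 1000000; i++) {
                  pthread_mutex_lock(&m);   /* without this, two threads can read
                                               the same old value and lose updates */
                  counter++;
                  pthread_mutex_unlock(&m);
              }
              return NULL;
          }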

    • OK, maybe what I did was simple enough, but I just don't see what's so inherently hard about parallel programming. Surely I am missing something.

      For me, the two things that are hardest are designing an efficient parallel algorithm for the target platform and ensuring fast but proper synchronization.

      For instance, if your target is a GPU, then you have a bunch of execution units in parallel, but communication between them is limited. You have to take this into consideration when designing the algorithm.

      If your target is regular CPUs, then you might have a handful of execution units and communication can be fast. However you need to ensure proper syn

    • Spawning threads to handle isolated tasks within a single address space isn't all that hard. Handling interrelated tasks across more processors than could possibly share one address space, doing it correctly so it doesn't have deadlocks or race conditions, distributing the work so there aren't performance-killing bottlenecks even in a non-trivial system topology handling an ever-varying work distribution, etc. . . . that's where things get just a bit more challenging, and what the newer crop of languages a

  • by number6x ( 626555 ) on Sunday June 07, 2009 @02:55PM (#28243127)

    Erlang [erlang.org] is an older established language designed for parallel processing.

    Erlang was first developed in 1986, making it about a decade older than Java or Ruby. It is younger than C, about the same age as Perl, and a tad older than Python. It is a mature language with a large support community, especially in industrial applications. It is time tested and proven.

    It is also Open source and offers many options for commercial support.

    Before anyone at DARPA decides they can design a better language for concurrent parallel programming, I think they should be forced to spend one year learning Ada and a second year working in Ada. If they survive, they will most likely be cured of the notion that the Defense Department can design good programming languages.

    • by coppro ( 1143801 ) on Sunday June 07, 2009 @03:36PM (#28243457)
      Erlang is probably the best language available for servers and similar applications. Not only is it inherently parallel (though they've only recently made the engine itself multithreaded, as the parallelism is expressed in the software), but it is very easily networked as well. As a result, a well-written Erlang program can only be taken down by simultaneously killing an entire cluster of computers.

      What's more, it has a little-seen feature of being able to handle code upgrades to most any component of the program without ever stopping - it keeps two versions of each module (old and new) in memory, and code can be written to automatically ensure a smooth transition into the new code when the upgrade occurs.

      If I recall correctly, the Swedish telecom company where Erlang was designed (Ericsson) had one server running it with seven years of continuous uptime.
    • Ada was created by a French team: http://en.wikipedia.org/wiki/Ada_(programming_language) [wikipedia.org]

      Four teams competed to create a new language suitable for the DoD, and the French team won.

    • Before anyone at DARPA decides they can design a better language for concurrent parallel programming, I think they should be forced to spend one year learning Ada and a second year working in Ada. If they survive, they will most likely be cured of the notion that the Defense Department can design good programming languages.

      Well, it's based on Pascal, so whatya expect? Still, it does work. (The 777 flight control system is written in it... if it were written in, for example, C or VB, would you get on the 'plane?)

  • by theNote ( 319197 ) on Sunday June 07, 2009 @02:55PM (#28243129)

    The example in the article is atrocious.

    Why would you want the withdrawal and balance check to run concurrently?

    • Re: (Score:2, Funny)

      by Anonymous Coward

      The example in the article is atrocious.

      Why would you want the withdrawal and balance check to run concurrently?

      Because it would make it much easier to profit from self-introduced race conditions and other obscure bugs when I get to the ATM tomorrow :)

    • by awol ( 98751 ) on Sunday June 07, 2009 @03:27PM (#28243379) Journal

      The example in the article is atrocious.

      Why would you want the withdrawal and balance check to run concurrently?

      Because I can do a whole lot of "local" withdrawal processing whilst my balance check is off checking the canonical source of balance information. If it comes back OK, then the work I have been doing in parallel is now committable work and my transaction is done, perhaps in no more time than either the balance check or the withdrawal, whichever is the longest. Whilst the balance check/withdrawal example may seem ridiculous, there are some very interesting applications of this kind of problem in securities (financial) trading systems, where the canonical balances of different instruments would conveniently (and sometimes mandatorily) be stored in different locations and some complex synthetic transactions require access to balances from more than one instrument in order to execute properly.

      It seems to me that most of the interesting parallelism problems relate to distributed systems, and it is not just a question of N-phase-commit databases but rather of "end to end" dependencies in your processing chain, where the true source of data cannot be accessed from all the nodes in the cluster at the same time from a procedural perspective.

      It is this fact that suggests to me that the answer to these issues is a radical change in language toward the functional or logical types of languages like Haskell and Prolog, with Erlang being a very interesting place on that path right now.

  • I see some potential in combining innovations meant for netbooks with multiple processors. Low-power, lightweight software may mix well with multiple CPUs.
    • Not when Microsoft and Intel are limiting the number of cores on netbooks to 1 for fear of them competing with their more lucrative OSs or CPUs.
  • Clojure (Score:5, Interesting)

    by slasho81 ( 455509 ) on Sunday June 07, 2009 @03:00PM (#28243173)
    Check out Clojure [clojure.org]. The only programming language around that really addresses the issue of programming in a multi-core environment. It's also quite a sweet language besides that.
    • by Cyberax ( 705495 )

      Check Erlang ;)

      • Check my reply to the other reply.
        • Re: (Score:3, Insightful)

          by Cyberax ( 705495 )

          Erlang is quite OK for non-distributed programming. Its model of threads exchanging messages is just a natural fit for it, as it is for multicore systems.

          • Re: (Score:3, Informative)

            by slasho81 ( 455509 )
            Here's what Rich Hickey wrote about the matter in http://clojure.org/state [clojure.org]

            I chose not to use the Erlang-style actor model for same-process state management in Clojure for several reasons:

            • It is a much more complex programming model, requiring 2-message conversations for the simplest data reads, and forcing the use of blocking message receives, which introduce the potential for deadlock. Programming for the failure modes of distribution means utilizing timeouts etc. It causes a bifurcation of the progr
    • Re: (Score:3, Insightful)

      Check out Clojure [clojure.org]. The only programming language around that really addresses the issue of programming in a multi-core environment.

      That's a rather bold statement. You do realize that those neat features of Clojure like STM or actors weren't originally invented for it? In fact, you could do most (all?) of that in Haskell before Clojure even appeared.

      On a side note, while STM sounds great in theory for care-free concurrent programming, the performance penalty that comes with it in existing implementations is hefty. It's definitely a promising area, but it needs more research before the results are consistently usable in production.

      • Re: (Score:3, Interesting)

        by slasho81 ( 455509 )

        That's a rather bold statement. You do realize that those neat features of Clojure like STM or actors weren't originally invented for it? In fact, you could do most (all?) of that in Haskell before Clojure even appeared.

        I do realize that many of the innovations in Clojure are not brand new, but Clojure did put them into a practical form that incorporates many "right" innovations into one language. Haskell is a fine language and one of the languages that heavily influenced Clojure. Clojure makes some parad

  • by Anonymous Coward on Sunday June 07, 2009 @03:01PM (#28243189)

    Rehash time...

    Parallelism typically falls into two buckets: data parallel and functional parallel. The first challenge for the general programming public is identifying which is which. The second challenge is synchronizing the parallelism in as bug-free a way as possible while retaining the performance advantage of the parallelism.

    Doing fine-grained parallelism - what the functional crowd is promising - is something that will take a *long* time to become mainstream (other interesting examples are things like LLVM and K, but they tend to focus more on the data-parallel side). Functional is too abstract for most people to deal with (yes, I understand it is easy for *you*).

    Short term (i.e. ~5 years), the real benefit will be in threaded/parallel frameworks (my app logic can be serial, tasks that my app needs happen in the background).

    Changing industry tool-chains to something entirely new takes many many years. What most likely will happen is transactional memory will make it into some level of hardware, enabling faster parallel constructs, a cool new language will pop up formalizing all of these features. Someone will tear that cool new language apart by removing the rigor and giving it C/C++ style syntax, then the industry will start using it

  • by Raul654 ( 453029 ) on Sunday June 07, 2009 @03:03PM (#28243199) Homepage

    This is a subject near and dear to my heart. I got to participate in one of the early X10 alpha tests (my research group was asked to try it out and give feedback to Vivek Sarkar's IBM team). Since then, I've worked with lots of other specialized HPC programming languages.

    One extremely important aspect of supercomputing, a point that many people fail to grasp, is that application code tends to live a long, long, long time -- far longer than the machines themselves. Rewriting code is simply too expensive and economically inefficient. At Los Alamos National Lab, much of the code they run consists of nuclear simulations written in Fortran 77 or Fortran 90. Someone might have updated it to use MPI, but otherwise it's the same program. So it's important to bear in mind that those older languages, while not nearly as well suited for parallelism (either for programmer ease-of-use/efficiency, or for allowing the compiler to do deep analysis/optimization/scheduling), are going to be around for a long time yet.

    • Being on the inside, could you perhaps explain to me why they went with threading instead of message passing?

      • Re: (Score:3, Informative)

        by Raul654 ( 453029 )

        I think you're confusing two different uses of parallelism. One is "small" parallelism -- the kind you see in graphical user interfaces. That is to say, if Firefox is busy loading a page, you can still click on the menus and get a response. Different aspects of the GUI are handled by different threads, so the program is responsive instead of hanging. That's done using threading libraries like POSIX threads and the like. But that's really a negligible application of parallelism. The really important use of parall

  • But my first thought upon reading "Chapel" was...

    I'm multi-threaded, bitch!
  • The mess (Score:5, Interesting)

    by Animats ( 122034 ) on Sunday June 07, 2009 @03:44PM (#28243519) Homepage

    I've been very disappointed in parallel programming support. The C/C++ community has a major blind spot in this area - they think parallelism is an operating system feature, not a language issue. As a result, C and C++ provide no assistance in keeping track of what locks what. Hence race conditions. In Java, the problem was at least thought about, but "synchronized" didn't work out as well as expected. Microsoft Research people have done some good work in this area, and some of it made it into C#, but they have too much legacy to deal with.

    At the OS level, in most operating systems, the message passing primitives suck. The usual approach in the UNIX/Linux world is to put marshalling on top of byte streams on top of sockets. Stuff like XML and CORBA, with huge overhead. The situation sucks so bad that people think JSON is a step forward.

    What you usually want is a subroutine call; what the OS usually gives you is an I/O operation. There are better and faster message passing primitives (see MsgSend/MsgReceive in QNX), but they've never achieved any traction in the UNIX/Linux world. Nobody uses System V IPC, a mediocre idea from the 1980s. For that matter, there are still applications being written using lock files.

    Erlang is one of the few parallel languages actually used to implement large industrial applications.

    • Please keep up. Granted, the next C++ standard is seemingly mired in bureaucracy, but it has at least been addressing threading for a decade. It would be great if the powers that be would finally call it quits and finalize what we have; then we could move on to the next issue, automatic failover.
    • Inmos had it right (Score:3, Interesting)

      by Tjp($)pjT ( 266360 )
      In the "let the compiler decide" attitude of the C language family... Inmos C had the correct solution: you add two new keywords to the language, parallel and sequential.
      sequential
      {
          stmt1;
          stmt2;
          stmt3;
      }

      as opposed to

      parallel
      {
          stmt4;
          stmt5;
          stmt6;
      }

      stmt1 must be executed before stmt2, which must be executed before stmt3, in the sequential construct. C languages actually already support this in a bit more awkward way with the ravel operator. But sequential is an easier to understand
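
      For comparison, a rough modern analogue of the parallel block (my sketch, not anything Inmos shipped) is OpenMP's sections construct in plain C; stmt4..stmt6 are hypothetical functions standing in for the statements above:

          #include <stdio.h>

          void stmt4(void) { puts("stmt4"); }
          void stmt5(void) { puts("stmt5"); }
          void stmt6(void) { puts("stmt6"); }

          void run_parallel(void)
          {
              /* Each section may execute on a different thread; the construct
                 joins at the closing brace, like the end of the parallel block. */
              #pragma omp parallel sections
              {
                  #pragma omp section
                  stmt4();
                  #pragma omp section
                  stmt5();
                  #pragma omp section
                  stmt6();
              }
          }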
  • There needs to be an equivalent of Donald Knuth's "Art of Computer Programming" as a definitive reference for parallel algorithms. Until then, I don't care how many cores you have, you won't get the most out of them.
  • I remember back in the '90s UC Berkeley had developed Split-C and Titanium.

    Split-C was interesting in that it was C-based and had some interesting concepts, such as running a block on one or many processors, synchronizing processors, and spread pointers (pointers to memory spread across machines).

    Titanium was a Java-like language for parallel processing, but at the time didn't have multithreading implemented.

    MPI seemed to be the main API used with standard languages.

  • Chapel (Score:3, Interesting)

    by jbolden ( 176878 ) on Sunday June 07, 2009 @04:41PM (#28243861) Homepage

    Looking at the 99 bottles Chapel code (from original article)
    http://99-bottles-of-beer.net/language-chapel-1215.html [99-bottles-of-beer.net]

    This looks like the way you do stuff in Haskell. Functions compute the data and the I/O routine is moved into a "monad" where you need to sequence. This doesn't seem outside the realm of the possible.

  • by ipoverscsi ( 523760 ) on Sunday June 07, 2009 @04:51PM (#28243923)

    I have not read the article (par for the course here), but I think there is probably some confusion among the commenters regarding the difference between multi-threaded programs and parallel algorithms. Database servers, asynchronous I/O, background tasks and web servers are all examples of multi-threaded applications, where each thread can run independently of every other thread with locks protecting access to shared objects. This is different from (and probably simpler than) parallel programs. Map-reduce is a great example of a parallel distributed algorithm, but it is only one parallel computing model: Multiple Instruction / Multiple Data (MIMD). Single Instruction / Multiple Data (SIMD) algorithms implemented on super-computers like Cray (more of a vector machine, but it's close enough to SIMD) and MasPar systems require different and far more complex algorithms. In addition, purpose-built supercomputers may have additional restrictions on their memory accesses, such as whether multiple CPUs can concurrently read or write from memory.
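
    To make the SIMD half of that distinction concrete (x86 SSE intrinsics chosen purely as an illustration; Cray and MasPar hardware worked differently): in SIMD, one instruction operates on several data elements at once, whereas MIMD threads each run their own instruction stream.

        #include <immintrin.h>

        /* One add instruction processes four float lanes at once. */
        void add4(const float *a, const float *b, float *out)
        {
            __m128 va = _mm_loadu_ps(a);             /* load 4 floats */
            __m128 vb = _mm_loadu_ps(b);
            _mm_storeu_ps(out, _mm_add_ps(va, vb));  /* one add, 4 lanes */
        }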

    Of course, the Cray and MasPar systems are purpose-built machines, and, much as special-built processors have fallen behind general-purpose CPUs in performance, Cray and MasPar systems have fallen into disuse and virtual obscurity; therefore, one might argue that SIMD-type systems and their associated algorithms should be discounted. But there is a large class of problems -- particularly sorting algorithms -- well suited to SIMD algorithms, so perhaps we shouldn't be so quick to dismiss them.

    There is a book called An Introduction to Parallel Algorithms by Joseph JaJa (http://www.amazon.com/Introduction-Parallel-Algorithms-Joseph-JaJa/dp/0201548569 [amazon.com]) that shows some of the complexities of developing truly parallel algorithms.

    (Disclaimer: I own a copy of that book but otherwise have no financial interests in it.)

  • by drfireman ( 101623 ) <dan@kiMOSCOWmberg.com minus city> on Sunday June 07, 2009 @10:08PM (#28246253) Homepage

    Recent versions of gcc support OpenMP, and there's now experimental support for a multithreading library that I gather is going to be in the next C++ standard. These don't solve everyone's problems, but it's certainly getting easier, not harder, to take advantage of multi-processor, multi-core systems. I recently retrofitted some of my own code with OpenMP as a test, and it was ridiculously easy. Five years ago it would have been a much more irritating process. I realize not everyone develops in C/C++, nor does everyone use a compiler that supports OpenMP. But I doubt it's actually getting harder; probably the rate at which it's getting easier is just not the same for everyone.
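
    As a hedged illustration of how small such a retrofit can be (the function is hypothetical; build with gcc -fopenmp): even a loop with a carried sum needs only one pragma, because the reduction clause gives each thread a private partial sum and combines them at the end.

        #include <math.h>

        double sum_of_sqrts(const double *v, int n)
        {
            double sum = 0.0;
            /* The only change from the serial version is this line. */
            #pragma omp parallel for reduction(+:sum)
            for (int i = 0; i < n; i++)
                sum += sqrt(v[i]);
            return sum;
        }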

  • LIBRARIES!! (Score:3, Interesting)

    by HiThere ( 15173 ) <charleshixsn@@@earthlink...net> on Monday June 08, 2009 @12:41AM (#28247157)

    The main problem faced by each new language is "How do I access all the stuff that's already been done?"

    The "Do it over again" answer hasn't been successful since Sun pushed Java, and Java's initial target was an area that hadn't had a lot of development work. Sun spent a lot of money pushing Java, and was only partially successful. Now it probably couldn't be done again even by a major corporation.

    The other main answer is to make calling stuff written in C or C++ (or Java) trivial. Python has used this to great effect, and Ruby to a slightly lesser one. Also note Jython, Groovy, Scala, etc. But if you're after high performance, Java has the dead weight of an interpreter (i.e., virtual machine). So that basically leaves easy linkage with C or C++. And both are purely DREADFUL languages to link to, due to pointer/integer conversions and macros. And callbacks. Individual libraries can be wrapped, but it's not easy to craft global solutions that work nicely. gcc has some compiler options that could be used to eliminate macros. Presumably so do other compilers. But they definitely aren't standardized. And you're still left not knowing what's a pointer, so you don't know what memory can be freed.

    The result of this is that to get a new language into a workable state means a tremendous effort to wrap libraries. And this needs to be done AFTER the language is stabilized. And the people willing to work on this aren't the same people as the language implementers (who have their own jobs).

    I looked over those language sites, and I couldn't see any sign that thought had been given to either foreign function interfaces or wrapping external libraries. Possibly they just used different terms, but I suspect not. My suspicion is that the implementers aren't really interested in language use so much as proving a concept. So THESE aren't the languages that we want, but they are test-beds for working out ideas that will later be imported into other languages.
