Programming Hardware

Revisiting Amdahl's Law

An anonymous reader writes "A German computer scientist is taking a fresh look at the 46-year-old Amdahl's law, which first described the limits that serial code places on parallel computing. The fresh look considers software development models as a way to overcome parallel computing limitations. 'DEEP keeps the code parts of a simulation that can only be parallelized up to a concurrency of p = L on a Cluster Computer equipped with fast general-purpose processors. The highly parallelizable parts of the simulation run on a massively parallel Booster system with a concurrency of p = H, H >> L. The Booster is equipped with many-core Xeon Phi processors connected by a 3D-torus network of sub-microsecond latency based on EXTOLL technology. The DEEP system software dynamically distributes tasks to the most appropriate parts of the hardware to achieve the highest computational efficiency.' Amdahl's law has been revisited many times, most notably by John Gustafson."
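For readers who want the law itself rather than the vendor pitch, here is a minimal sketch of classic Amdahl scaling in Python. The function and the example fractions are illustrative, not taken from TFA:

```python
def amdahl_speedup(parallel_fraction, n):
    """Classic Amdahl's law: speedup on n identical processors when
    parallel_fraction of the work can be parallelized."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n)

# A perfectly parallel job scales linearly:
print(amdahl_speedup(1.0, 4))        # 4.0

# But a 5% serial part caps speedup near 1/0.05 = 20x,
# no matter how many processors you throw at it:
print(amdahl_speedup(0.95, 1_000_000))
```

The second print is the whole point of the law: past a few hundred cores, the serial 5% dominates and extra processors buy almost nothing.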
This discussion has been archived. No new comments can be posted.
  • Re:Xeon dream on (Score:4, Informative)

    by godrik ( 1287354 ) on Wednesday June 19, 2013 @03:23AM (#44047167)

    "Xeon Phi = unavailable vaporware"

    You know, I wrote a paper on SpMV for Xeon Phi, and I got quite a lot of people from all over the world asking me for clarification and for code. So it seems to be quite widespread. You can actually buy some online; Google points to several vendors.

    "in order to discourage folks from porting big science applications to CUDA"

    There are two things wrong with this statement. First, I do not think scientists are discouraged from giving CUDA a shot. Check any scientific conference and you'll see GPUs and CUDA everywhere. Actually, we see so much GPU programming that it is getting boring.
    Also, porting to CUDA is difficult and alien for most people. If we can get similar performance using a programming model people are already used to, how is that not a good thing? What is so good about CUDA? It is pretty much just the only way to get good performance out of NVIDIA GPUs.

    The tradeoff between performance, hardware cost, and developer cost is a difficult one. I say let's throw them all in the arena and see what stands.

    Disclaimer: my research is supported by both Intel and NVIDIA.

  • Poor summary (Score:5, Informative)

    by Anonymous Coward on Wednesday June 19, 2013 @03:57AM (#44047315)

    Amdahl's Law still stands. TFA is about changing the assumptions that Amdahl's Law is based on; instead of homogeneous parallel processing, you stick in a few big grunty processors for the serial components of your task and a huge pile of basic processors for the embarrassingly parallel components. You're still limited by how fast the non-parallel parts can run, but by using a heterogeneous mix of processors you're not wasting CPU time (and thus power and money) leaving processors idle.
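    The heterogeneous variant this comment describes can be sketched in a few lines. This is a toy model, not the DEEP project's actual cost function: the serial part runs on one fast core that is `fast_speedup` times a baseline core, and the parallel part runs on `n_small` baseline cores (all constants are illustrative):

    ```python
    def hetero_speedup(serial_fraction, fast_speedup, n_small):
        """Speedup over one baseline core for a heterogeneous design:
        serial work on a single fast core, parallel work spread over
        n_small slow baseline cores."""
        parallel_fraction = 1.0 - serial_fraction
        time = serial_fraction / fast_speedup + parallel_fraction / n_small
        return 1.0 / time

    # 5% serial work, one core 4x faster than baseline, 256 small cores:
    print(hetero_speedup(0.05, fast_speedup=4, n_small=256))

    # Same workload on 256 homogeneous baseline cores (fast_speedup=1):
    print(hetero_speedup(0.05, fast_speedup=1, n_small=256))
    ```

    The heterogeneous configuration wins roughly 3x here, because speeding up the serial fraction attacks exactly the term that Amdahl's law says is the bottleneck.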
