Programming Hardware

Revisiting Amdahl's Law (54 comments)

An anonymous reader writes "A German computer scientist is taking a fresh look at the 46-year-old Amdahl's law, which first described how the serial portion of a program limits the speedup achievable through parallel computing. The fresh look considers software development models as a way to work around those limits. 'DEEP keeps the code parts of a simulation that can only be parallelized up to a concurrency of p = L on a Cluster Computer equipped with fast general-purpose processors. The highly parallelizable parts of the simulation are run on a massively parallel Booster system with a concurrency of p = H, H >> L. The Booster is equipped with many-core Xeon Phi processors and connected by a 3D-torus network of sub-microsecond latency based on EXTOLL technology. The DEEP system software dynamically distributes the tasks to the most appropriate parts of the hardware in order to achieve the highest computational efficiency.' Amdahl's law has been revisited many times, most notably by John Gustafson."
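For readers who want the actual formula (the summary never states it, and the notation f, N, S here is ours, not the article's): Amdahl's Law says that if a fraction f of a program's work can be parallelized and the remaining 1 - f is serial, then the speedup on N processors is at most

S(N) = 1 / ((1 - f) + f/N), which approaches 1 / (1 - f) as N grows without bound.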
  • Buzzword-heavy (Score:5, Insightful)

    by Animats ( 122034 ) on Wednesday June 19, 2013 @01:33AM (#44046925) Homepage

    The article makes little sense. The site of the DEEP project [deep-project.eu] is more useful. It has the look of an EU publicly funded boondoggle. Those have a long history; see Plan Calcul [wikipedia.org], the 1966 plan to create a major European computing industry. That didn't do too well.

    The trouble with supercomputers is that only governments buy them. When they do, they tend not to use them very effectively. The US has pork programs like the Alabama Supercomputer Center [asc.edu]. One of their main activities is providing the censorware for Alabama schools. [asc.edu]

    There's something to be said for trying to come up with better ways of making sequential computation more parallel. But the track record of failures is discouraging. The game industry beat its head against the wall for five years trying to get the Cell processor in the PS3 to do useful work. Sony has given up; the PS4 is an ordinary shared-memory multiprocessor, as are all the Xbox machines.

    It's encouraging to see how much useful work people are getting out of GPUs, though.

    • Re:Buzzword-heavy (Score:4, Interesting)

      by cold fjord ( 826450 ) on Wednesday June 19, 2013 @01:51AM (#44047029)

      The article makes sense, but the work doesn't appear to be especially innovative, even if it could be very useful.

      It is more than governments that buy supercomputers. They are also used in industry for things like oil and gas exploration, economic modeling, and weather forecasting. Universities and research organizations also use them for a variety of purposes. Time on an actual supercomputer tends to be highly valuable and sought after. You may disagree with a given use, but that is a different question from whether the machines are used effectively.

      The Secret Lives of Supercomputers, Part 1 [technewsworld.com]

      "It is probably the biggest trend in supercomputers -- the movement away from ivory-tower research and government-sponsored research to commerce and business," Michael Corrado, an IBM spokesperson, told TechNewsWorld. In 1997, there were 161 supersystems deployed in business and industry, but that figure grew to 287 by June 2008, he noted. "More than half the list reside in commercial enterprises. That's a huge shift, and it's been under way for years."

      Uses for supercomputers [zdnet.com]

    • How dare you criticise the author - he is a physicist, and he has stooped to coming here and telling us computer science types how to do it properly!

      There is a deeply appropriate xkcd, but I cannot be bothered to find it. Decoding the garbage in the pcworld story tells us that he is going to break Amdahl's Law by dynamically partitioning the workload between a fast single-threaded processor and many slower parallel processors. I would guess that by failing to make a fair comparison they can claim that the portion running under the boosted clock somehow beats the bounds predicted by Amdahl's law.

      • by mysidia ( 191772 )

        claim that the portion running under the boosted clock somehow beats the bounds predicted by Amdahl's law.

        Right... their system cannot 'break' Amdahl's law. They bypass it by allowing the sequential portion of the workload to run on faster hardware, and the parallel portion of the workload to run on the massively parallel (but slower) architecture.

        Designing an approach that allows better parallel computing despite Amdahl's law does not necessarily imply breaking the law.

        It's more like: working cleverly within the law's constraints.

        • Go back further to Von Neumann and you'll see that this is a hybrid model, where the state machine is respected, with management processes acting as controller daemons to child processes. It's not really a bypass, just a hybrid representation, as the distributed portions still respect Amdahl's precepts.

        • Your phrasing is kind of hard to parse - I actually can't tell if you are agreeing with what I wrote or arguing in a passive-aggressive way. That probably means I have had too many arguments with passive-aggressive people recently and need to learn to read things more neutrally again. But yes, that is what I was pointing out: tweaking the frequency in the fast sequential part is still covered by Amdahl's law, contrary to their wild hyperbole.

      • Hey, don't disrespect physicists in parallel computing. Some of us actually understand how to do it properly and agree with what you state. Superlinear speedup is not precisely unknown, but it is rare and depends on architectural "tricks" that typically preserve Amdahl's law at a low level while apparently violating it at a higher level. In the naivest, stupidest example, if we counted processors instead of cores, even embarrassingly parallel code would appear to exhibit superlinear speedup on a single multi-core processor.

    • The trouble with supercomputers is that only governments buy them.

      Actually, not so. For about 15 minutes, I once owned a supercomputer myself, believe it or not.

      It wasn't a major supercomputer, but it was classified as a true supercomputer, and I was acting as an intermediary for an oil industry company that had offended the seller, so the seller wouldn't sell to them directly.

      Governments are definitely big consumers of supercomputers, but universities also do a lot of computationally intensive work, not all of which is necessarily government-funded. I've already mentioned the oil industry.

      • Double ditto. I've written magazine articles on Beowulf-style supercomputers I've built at home (I used to write a column for "Cluster World" magazine in the brief time that it existed, but I also wrote an article or two for more mainstream computer mags). I have also set up clusters for companies I've founded and helped others set up corporate clusters. Some of what are arguably the world's largest parallel supercomputers -- Google's cluster, for example -- are not government funded. Many others aren't either.

    • I agree. The article is next to worthless. In particular, it appears (and that is the problem - the article is just too vague) that they are not counting the GPU time against Amdahl's law. That's splitting hairs, at best.

      There might be some "there there" if they tried to refine Amdahl's law to include different kinds of processors, and the kinds of physical restrictions they talk about. All the article does is say such a thing might be possible - I think we already knew that.

    • by delt0r ( 999393 )
      I use supercomputers all the time for my work. I am at a university, so perhaps this counts as a government one, but we are not idiots and we use it quite effectively, thank you very much. Most of the supercomputers in the EU, at least, are for universities, and they are mostly used quite well; at least all the ones I have used, which is quite a few of them.
    • Somebody has an axe to grind.

  • by klapaucjusz ( 1167407 ) on Wednesday June 19, 2013 @02:05AM (#44047095) Homepage
    SMBC [smbc-comics.com]
    • by godrik ( 1287354 )

      Yeah, there is nothing wrong with Amdahl's law. People who need to care about it clearly understand what it means: when you increase parallelism, the sequential parts become the bottleneck, and you need to re-engineer the problem/algorithm/architecture around that new bottleneck.

      • by Anonymous Coward

        "Sequential parts" usually means "we can't proceed to the next step until the previous step is done." However, if you have really massive parallelism, on the order of 2^data_word_length or higher, then you can actually try guessing: execute the next step on all possible outcomes of the previous step, then throw away every result but one as the previous step completes. Even if your parallelism is of lower scale, statistically it may still yield some speedup whenever you happen to have a lucky guess. Sure beats letting the extra hardware sit idle.

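        Roughly, in Python, that speculation idea might look like the sketch below; the step functions, the small set of possible outcomes, and all names are invented purely for illustration, not taken from the post:

        from concurrent.futures import ThreadPoolExecutor

        def previous_step():
            # Stand-in for the slow sequential step; returns one of a few known outcomes.
            return 2

        def next_step(outcome):
            # Stand-in for the follow-on work that depends on that outcome.
            return outcome * outcome

        possible_outcomes = [0, 1, 2, 3]

        with ThreadPoolExecutor() as pool:
            # Speculatively run the next step for every possible outcome in parallel...
            speculative = {o: pool.submit(next_step, o) for o in possible_outcomes}
            actual = previous_step()               # ...while the sequential step finishes,
            result = speculative[actual].result()  # then keep only the matching branch.

        print(result)  # prints 4

        The wasted branches are exactly the cost the reply below points out: the non-matching guesses still burn cores and power.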
        • by mysidia ( 191772 )

          then you can actually try guessing: execute the next step on all possible outcomes of the previous step, then throw away every result but one as the previous step completes.

          However... this costs power, and it still takes time and ties up your infrastructure working on the 'guess'. Meanwhile, the previous step completes while your CPUs are all still busy working through the guesses, and you need additional sequential overhead to initiate and terminate the guessing process.

          Y

          • by TheLink ( 130905 )

            I've long wondered whether you can set up a quantum computer to process "all possible paths" and then "collapse" onto the most probable right answer.

            After all, you can have light beams (and other quantum-state-friendly systems) that are superpositions of all possible states, and perform functions on them.

  • Poor summary (Score:5, Informative)

    by Anonymous Coward on Wednesday June 19, 2013 @02:57AM (#44047315)

    Amdahl's Law still stands. TFA is about changing the assumptions that Amdahl's Law is based on; instead of homogeneous parallel processing, you stick a few big grunty processors in for the serial components of your task and a huge pile of basic processors for the embarrassingly parallel components. You're still limited by how fast the non-parallel parts can be processed, but by using a heterogeneous mix of processors you're not wasting CPU time (and thus power and money) leaving processors idle.
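    A toy cost model of that trade-off, in Python, with invented numbers (the fractions, core count, and fast-core factor below are illustrative, not from TFA):

    # Work splits into a serial fraction and a perfectly parallel fraction.
    # Homogeneous: everything runs on N identical slow cores.
    # Heterogeneous: the serial part runs on one fast core, the rest on the N slow cores.
    serial_fraction   = 0.05    # invented: 5% of the work cannot be parallelized
    parallel_fraction = 1.0 - serial_fraction
    N                 = 1000    # invented: number of slow cores
    fast_core_factor  = 4.0     # invented: fast core speed relative to one slow core

    t_homogeneous   = serial_fraction + parallel_fraction / N
    t_heterogeneous = serial_fraction / fast_core_factor + parallel_fraction / N

    print(f"homogeneous:   {1 / t_homogeneous:5.1f}x over one slow core")    # ~19.6x
    print(f"heterogeneous: {1 / t_heterogeneous:5.1f}x over one slow core")  # ~74.3x
    # The serial part still sets the ceiling; the fast core only raises it from
    # 1/serial_fraction (20x) to fast_core_factor/serial_fraction (80x).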

    • Repeat after me: (Score:5, Insightful)

      by Mashdar ( 876825 ) on Wednesday June 19, 2013 @08:06AM (#44048667)

      Amdahl's Law only applies to individual algorithms. Amdahl's Law only applies to individual algorithms. Amdahl's Law only applies to individual algorithms.

      Besides which, Amdahl's law is an obvious truth unless you can make a process take negative time. All attempts to make Amdahl's Law sound fancy or complicated are a disservice. All attempts to pigeonhole Amdahl's Law into only applying to parallel design are a disservice. Any attempts to "revisit" it are either fallacious or focus on algorithm changes, which Amdahl made no attempt to address.

      Amdahl's law in a nutshell: if you spend 10% of your time on X and 90% of your time on Y, you will never get more than a 1/0.9 speedup by optimizing X, even if you manage to make X instantaneous. Another way to put it: if Y takes 9 seconds, you are never going to get the process under 9 seconds by modifying X alone.
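      The same 10%/90% example in a few lines of Python, just to make the bound concrete (the times are of course made up):

      time_x, time_y = 1.0, 9.0                  # seconds spent in X and in Y
      for speedup_of_x in (2, 10, 1000):
          total_time = time_x / speedup_of_x + time_y
          print(speedup_of_x, round((time_x + time_y) / total_time, 3))
      # Prints about 1.053, 1.099 and 1.111 -- the bound is 1/0.9 = 10/9, roughly 1.11x.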

    • Most of the cool stuff is purely parallel anyway, like the brain, or simulations of bodies made of atoms or cells. Plenty of room to grow regardless of a few algorithms that can't be de-serialized.

  • In 2006 I submitted this (http://slashdot.org/comments.pl?sid=183461&cid=15153431):

    "Researchers in the parallel processing community have been using Amdahl's Law and Gustafson's Law to obtain estimated speedups as measures of parallel program potential. In 1967, Amdahl's Law was used as an argument against massively parallel processing. Since 1988 Gustafson's Law has been used to justify massively parallel processing (MPP). Interestingly, a careful analysis reveals that these two laws are in fact identical."

  • I am sure that means something...

  • by deadline ( 14171 ) on Wednesday June 19, 2013 @07:55AM (#44048563) Homepage

    You can't cheat Amdahl's law any more than you can give birth in one month with nine women. The law is a rather simple idea, similar to chemical kinetics when you think about it: there is a rate-limiting step.

    If you are interested in a non-mathematical description of Amdahl's law have a look at http://www.clustermonkey.net/Parallel-Programming/parallel-computing-101-the-lawnmower-law.html [clustermonkey.net]

  • by sjames ( 1099 ) on Wednesday June 19, 2013 @08:07AM (#44048687) Homepage Journal

    This most certainly does NOT break Amdahl's law. It simply partitions the problem to use the cheap gear for the embarrassingly parallel portion of the workload and the expensive gear for the harder to parallelize workload.

    It necessarily cannot make a non-parallelizable portion (the serial part) run in parallel.

    Note that which part of the problem is serial depends on the hardware. The lower the latency and the higher the bandwidth of the interconnect, the more of the problem you can get to run effectively in parallel. However, there comes a point where the problem cannot be decomposed further. The atoms that remain after that may all be run at once, but each individual atom will run serially. No matter what you do, 5*(2+3) can go no faster than serially adding and then multiplying (yes, you could do two multiplications in parallel and then add, but you gain nothing by it).
