Mortimer.CA writes with a recently released report from Berkeley entitled "The Landscape of Parallel Computing Research: A View from Berkeley": "Generally, they conclude that the 'evolutionary approach to parallel hardware and software may work from 2- or 8-processor systems, but is likely to face diminishing returns as 16 and 32 processor systems are realized, just as returns fell with greater instruction-level parallelism.' This assumes things stay 'evolutionary' and that programming stays more or less how it has been done in previous years (though languages like Erlang can probably help to change this)." Read on for Mortimer.CA's summary, from the paper, of some "conventional wisdoms" and their replacements.
Old and new conventional wisdoms:
- Old CW: Power is free, but transistors are expensive.
- New CW is the "Power wall": Power is expensive, but transistors are "free." That is, we can put more transistors on a chip than we have the power to turn on.
- Old CW: Monolithic uniprocessors in silicon are reliable internally, with errors occurring only at the pins.
- New CW: As chips drop below 65-nm feature sizes, they will have high soft and hard error rates.
- Old CW: Multiply is slow, but load and store is fast.
- New CW is the "Memory wall" [Wulf and McKee 1995]: Load and store is slow, but multiply is fast.
- Old CW: Don't bother parallelizing your application, as you can just wait a little while and run it on a much faster sequential computer.
- New CW: It will be a very long wait for a faster sequential computer (see above).
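To make the last point concrete: under the new conventional wisdom, the only way to keep getting speedups is to split the work across cores yourself. Below is a minimal, hedged sketch (not from the report) of an embarrassingly parallel computation divided among worker processes with Python's standard-library `multiprocessing` module; the function names and chunking scheme are illustrative assumptions, not anything the paper prescribes.

```python
# Illustrative sketch: parallelizing a sum of squares across worker
# processes. Names (partial_sum, parallel_sum_of_squares) are made up
# for this example.
from multiprocessing import Pool


def partial_sum(bounds):
    """Sum of i*i for i in [lo, hi) -- one worker's share of the range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))


def parallel_sum_of_squares(n, workers=4):
    """Split range(n) into contiguous chunks and sum them in parallel."""
    step = n // workers
    # Last chunk absorbs any remainder so the chunks cover range(n) exactly.
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))


if __name__ == "__main__":
    n = 1_000_000
    assert parallel_sum_of_squares(n) == sum(i * i for i in range(n))
```

The sequential version of this loop sees no benefit from extra cores; the chunked version scales (up to communication overhead) with the number of workers, which is exactly the kind of restructuring the report argues programmers can no longer avoid.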