Panic in Multicore Land 367
MOBE2001 writes "There is widespread disagreement among experts on how best to design and program multicore processors, according to the EE Times. Some, like senior AMD fellow, Chuck Moore, believe that the industry should move to a new model based on a multiplicity of cores optimized for various tasks. Others disagree on the ground that heterogeneous processors would be too hard to program. The only emerging consensus seems to be that multicore computing is facing a major crisis. In a recent EE Times article titled 'Multicore puts screws to parallel-programming models', AMD's Chuck Moore is reported to have said that 'the industry is in a little bit of a panic about how to program multicore processors, especially heterogeneous ones.'"
Panic? (Score:4, Insightful)
Re:Panic? (Score:4, Insightful)
The future is here (Score:5, Insightful)
Re:Panic? (Score:5, Insightful)
Re:Panic? (Score:5, Insightful)
Re:Self Interest (Score:3, Insightful)
If he's saying that his multicore processors are going to be hard to program, then self-interest suggests he be very very quiet (;-))
Seriously, though, adding what used to be a video board to the CPU doesn't change the programming model. I suspect he's more interested in debating future issues with more tightly coupled processors.
--dave
Why choose? (Score:2, Insightful)
Re:The future is here (Score:2, Insightful)
figure out which specifics you can use without driving up the cost or compromising the design ideal of a general-purpose computer.
The key now is figuring out what to do with your Amiga now that no one writes applications for it anymore.
I suggest NetBSD
Specialisation is inevitable (Score:3, Insightful)
Flick the CPU monitor to aggregate usage rate mode, and I rarely clear 35% usage, and I've never seen it higher than about 55% (and even that for only a second or two once an hour). A normal PC, even fairly heavily loaded up with apps, just can't use the extra power.
And since cores aren't going to get much faster, there's no real chance of getting big wins there either.
Unless you have a specialized workload (heavy number crunching, kernel compilation, etc) there's going to simply be no point having more parallelism.
So as far as I can tell, for general loads it seems to be inevitable that if we want more straight line speed, we'll need to start making hardware more attuned for specific tasks.
So in my 16-core workstation of the future, if my Photoshop needs to apply some relatively intensive transform that has to be applied linearly, it can run off to the vector core, while I'm playing Supreme Commander on one generic core (the game) two GPU cores (the two screens) and three integer-heavy cores (for the 3 enemy AIs), and the generic System Reserved Core (for interrupts, and low-level IO stuff) hums away underneath with no pressure.
Heterogeneity also has economics on its side.
There's very little point having specialized cores when you've only got two.
Once there's no longer scarcity in quantity, you can achieve higher productivity by specialization.
Really, any specialized core whose usage rate you can keep running higher than the overall system usage rate is a net win in productivity for the overall computer. And over time, anything that increases productivity wins.
Occam and Beyond (Score:3, Insightful)
[Although, personally, I prefer Occam's syntax to C's.]
http://en.wikipedia.org/wiki/Occam_programming_language [wikipedia.org]
I think that a thread-aware programming language would be good in our multi-core world.
Re:Panic? (Score:5, Insightful)
It is in general, an impossible problem.
Most existing code is imperative. Most programmers write in imperative programming languages. Object orientation does not change this. Imperative code is not suited for multiple CPU implementation. Stapling things together with threads and messaging does not change this.
You could say that we should move to other programming "paradigms". However, in my opinion, the reason we use imperative programs so much is that most of the tasks we want accomplished are inherently imperative in nature. Outside of intensive numerical work, most tasks people want done on a computer are done sequentially. The availability of multiple cores is not going to change the need for these tasks to be done in that way.
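To make the distinction concrete, here's a minimal sketch (my own illustration, not from the post): a chain of dependent steps can't be split across cores, while a map over independent items can.

```python
def step(x):
    # stand-in for one imperative step whose input is the previous output
    return 3 * x + 1

def run_sequential(x, n):
    # each iteration depends on the last; no way to split this across cores
    for _ in range(n):
        x = step(x)
    return x

def run_parallel(items):
    # each item is independent; this map could be farmed out to a
    # process pool, one chunk per core
    return [step(x) for x in items]
```

Most everyday tasks look like `run_sequential`; only the second shape benefits from more cores.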
However, what multiple cores might do is enable previously impractical tasks to be done on modest PCs. Things like NP problems, optimizations, simulations. Of course these things are already being done, but not on the same scale as things like, say, spreadsheets, video/sound/picture editing, gaming, blogging, etc. I'm talking about relatively ordinary people being able to do things that now require supercomputers, experimenting and creating on their own laptops. Multi core programs can be written to make this feasible.
Considering I'm beginning to sound like an evangelist, I'll stop now. Safe money says PCs stay at 8 CPUs or below for the next 15 years.
Re:Panic? (Score:5, Insightful)
"...the speed the user experiences has not improved much [in the last 5-7 years]."
This may almost be true if you stay on the cutting edge, but not even close for the average user (or the power-user on a budget, like myself). 5 years ago I was running a 1.2 GHz Duron. Today I have a 2.3 GHz Athlon 64 in my notebook (which is a little over a year old, I think), and an Athlon 64 X2 5600+ (that's a dual-core 2.8 GHz, for those who don't know) in my desktop. I'd be lying if I said I didn't notice much difference between the three.
Re:My heterogeneous experience with Cell processor (Score:4, Insightful)
You're not the only person using heterogeneous cores, however. In fact, the Cell is a minority. Most people have a general purpose core, a parallel stream processing core that they use for graphics and an increasing number have another core for cryptographic functions. If you've ever done any programming for mobile devices, you'll know that they have been using even more heterogeneous cores for a long time because they give better power usage.
No problems for servers (Score:5, Insightful)
"For most typical workloads most servers don't have enough I/O to keep 80 cores busy."
If there's enough I/O there's no problem keeping all 80 cores busy.
Imagine a slashdotted webserver with a database backend. If you have enough bandwidth and disk I/O, you'll have enough concurrent connections that those 80 cores will be more than busy enough.
If you still have spare cores and mem, you can run a few virtual machines.
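A rough sketch of that scenario (names and return values are made up for illustration): many concurrent, I/O-bound requests spread over a pool of workers. A real server would block on sockets and database calls inside the handler.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    # stand-in for parsing a request and querying the database backend;
    # while one worker blocks on I/O, others keep the cores busy
    return "response-%d" % request_id

with ThreadPoolExecutor(max_workers=8) as pool:
    responses = list(pool.map(handle_request, range(100)))
```

With enough connections in flight, the pool size (and core count) becomes the limit rather than any single request.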
As for desktops - you could just use Firefox without NoScript; after a few days the machine will be using all 80 CPUs and all its memory just to show Flash ads and other junk.
Re:Panic? (Score:3, Insightful)
Re:Panic? (Score:2, Insightful)
That must be the reason why CPU companies are looking for niches of the consumer market where there is a realistic chance of programmers actually utilizing all available processing power, despite the difficulties. It is no surprise to me that "gaming" is a common answer. But the only consumer-related answer I could find in the article is this: "It could also create desktops that automatically index personal pictures based on facial recognition software." Judge for yourself.
Re:My heterogeneous experience with Cell processor (Score:4, Insightful)
Re:Panic? (Score:4, Insightful)
Re:Panic? (Score:5, Insightful)
I write large and complex engineering applications. I have a few threads around, mostly for the purpose of doing calculation and dealing with slow devices. But I'm not going to add in more threads just because there are more cores for me to use. I'll add threads when performance issues requires that I add threads, and not before.
Most software today runs fine as a single thread anyway. The specialized software that requires maximum CPU performance (and is not already bottle-necked by HD or GPU access) will be harder to write, but for everything else the current model is just fine.
If anything, Intel should worry about 99% of all people simply not needing 80 cores to begin with...
Re:Panic? (Score:4, Insightful)
"...the speed the user experiences has not improved much [in the last 5-7 years]."
This may almost be true if you stay on the cutting edge, but not even close for the average user (or the power-user on a budget, like myself). 5 years ago I was running a 1.2 GHz Duron. Today I have a 2.3 GHz Athlon 64 in my notebook (which is a little over a year old, I think), and an Athlon 64 X2 5600+ (that's a dual-core 2.8 GHz, for those who don't know) in my desktop. I'd be lying if I said I didn't notice much difference between the three.
Do notice that multi-cores don't increase the overall clock frequency, just divide the work up among a set of lower clock frequency cores - yet most programs don't take advantage of that.
Do notice that despite clock frequencies going from 33 MHz to 2.3 GHz, the user's perceived performance of the computer has either stayed the same (most likely) or diminished over that same time period.
Do notice that programs are more bloated than ever, and programmers are lazier than ever.
In the end the GP is right.
Re:Panic? (Score:3, Insightful)
Assuming clock rates don't increase much (and they have not been), and instruction sets don't improve much (and they have not been), then beyond 3-4 cores I don't get any kind of improvement in the desktop world. I don't even see much improvement in the server world, other than for running VMware and a few applications like database software that are somewhat parallelized; even that stuff stops scaling well in most cases past 8 cores.
That means there will be no demand for new chips across the majority of the business sector. That is a big problem for Intel and AMD.
Re:Panic? (Score:5, Insightful)
Odd selection of examples. The processing of cells can almost trivially be allocated across 80 cores. Media work can almost trivially be split into chunks across 80 cores. Games are usually relatively easy to split, either by splitting the graphics into chunks or by parallelizing physics or other simulation aspects.
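"Split into chunks" looks something like this sketch (a toy, not a real encoder): divide the frame list into independent slices and hand each to a worker. A CPU-bound encoder would use a process pool rather than threads, but the chunking logic is the same.

```python
from concurrent.futures import ThreadPoolExecutor

def chunks(seq, size):
    # slice the work into independent pieces, one per worker
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def encode_chunk(chunk):
    # toy stand-in for encoding a slice of frames
    return [frame * 2 for frame in chunk]

frames = list(range(40))
with ThreadPoolExecutor() as pool:
    encoded = [f for part in pool.map(encode_chunk, chunks(frames, 10))
               for f in part]
```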
Oh, and blogging.
My optical mouse has enough processing horsepower inside for blogging.
OPTICAL MOUSE CIRCUITRY:
Has the user pressed a key?
No.
Has the user pressed a key?
No.
Has the user pressed a key?
No.
(repeat 1000 times)
Has the user pressed a key?
No.
Has the user pressed a key?
No.
Has the user pressed a key?
Yes.
OOOO! YES!
QUICK QUICK QUICK! HURRY HURRY HURRY! PROCESS A KEYPRESS! YIPEE!
-
Re:Panic? (Score:3, Insightful)
Re:Panic? (Score:2, Insightful)
Re:Panic? (Score:4, Insightful)
8087 - Been There Done That (Score:3, Insightful)
Been there, done that, already. The 8087 and its 80x87 follow-on co-processors were exactly that: specialized processors for specific tasks. Guess what? We managed to use them just fine a mere 27 years ago. DSPs have come along since and been used as well. Graphics card GPUs are specialized co-processors for graphics-intensive functions, and we talk to them just fine. They're already on the chipsets, and soon to be on the processor dies. I don't think this is anything new, or anything that programming can't handle.
Re:Panic? (Score:4, Insightful)
And yes, the OS can, and has been able to for years since SMP first came about, spread loads across multiple processors and cores. But that cannot change how a single program functions in and of itself - it cannot make that single program work at any given moment on more than one single core if it was not designed to do so (i.e. if the program is not designed to use multiple threads or processes).
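A minimal sketch of what "designed to use multiple threads" means in practice (my illustration; note that in CPython the GIL limits pure-Python threads, so CPU-bound code would use processes instead):

```python
import threading

results = [0, 0]

def work(slot, n):
    # each thread computes its own share and writes to its own slot
    total = 0
    for k in range(n):
        total += k
    results[slot] = total

# the program must explicitly create the threads; the OS can then
# schedule them on separate cores
threads = [threading.Thread(target=work, args=(i, 1000)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without that explicit division of labor, the scheduler has a single stream of instructions and nothing to spread out.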
All-in-all, the OP is correct.
Re:Panic? (Score:2, Insightful)
How to use so many cpu's (Score:4, Insightful)
I had been working with a 100-PC cluster of P4-based systems to do H.264 HDTV compression in realtime. I spread the compression function across the cluster, using each system to work on a small part of the problem and flowing the data across the CPUs.
Based on this I wanted to build an array of processors on one chip, but I am not a silicon person, just software, drivers and some basic electronics. So I looked at various FPGA cores, ARM, MIPS, etc. Then I went to a talk given by Chuck Moore, the creator of the FORTH language. He had been building his own CPUs for many years using his own custom tools.
I worked with Chuck Moore for about a year in 2001/2002 on creating a massive multicore processor based on Chuck's stack processor.
The idea was, instead of having 1, 2 or 4 large processors, to have 49 (7x7) small, light but fast processors on one chip. This would be for tackling a different set of problems than your classic CPUs: not for running an OS or word processing, but for multimedia, cryptography, and other mathematical problems.
The idea was to flow data across the array of processors.
Each processor would run at 6 GHz, with 64K words of RAM each.
Words and buses would be 21 bits wide (based on the F21 processor); this allows 4x 5-bit instructions per word on a stack processor that has only 32 instructions.
Since it's a stack processor, it runs more efficiently: in 16K transistors (4000 gates), the F21 at 500 MHz performed about the same as a 500 MHz 486 on JPEG compression and decompression.
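As a rough illustration of the instruction packing (my own sketch, not the actual F21 encoding): four 5-bit opcodes fit in a 21-bit word with one bit to spare.

```python
def pack(ops):
    # pack four 5-bit opcodes into one word, first opcode in the
    # highest bits
    word = 0
    for op in ops:
        word = (word << 5) | (op & 0x1F)
    return word

def unpack(word):
    # recover the four opcodes in execution order
    return [(word >> shift) & 0x1F for shift in (15, 10, 5, 0)]
```

One instruction fetch thus feeds the core four operations, which is part of why such a small design can stay busy.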
With the parallel core design, instead of a common bus or network between the processors, there would be only 4 connections into and out of each processor. These would be 4 registers shared with its 4 neighboring processors, laid out in a grid. So each core would have a north, south, east and west register.
Data would be processed in what's called a systolic array, where each core picks up some data, performs operations on it, and passes it along to the next core.
The chip, with its 7x7 grid of processors, would expose the 28 (4x7) bus lines off the edge processors, so that chips could be tiled into a much larger grid of processors.
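Functionally, the west-to-east flow through one row looks like this sketch (ignoring clocking and the north/south links; the operations are placeholders):

```python
def flow(row_ops, stream):
    # each datum enters at the west edge, is transformed by each core
    # in turn, and exits at the east edge
    out = []
    for x in stream:
        for op in row_ops:  # pass through each core's operation
            x = op(x)
        out.append(x)
    return out
```

For example, a row of two cores, one adding 1 and one doubling, turns the stream [1, 2, 3] into [4, 6, 8]; in hardware all cores work on different data at once, pipeline-style.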
Each chip could perform around 117 Billion instructions per second at 1 Watt of power.
Unfortunately I was unable to raise money, partly because I couldn't get any commitment from Chuck.
Below are some links and other miscellaneous information on this project. Sorry it's not better organized.
This was my project.
---------
http://www.enumera.com/chip/ [enumera.com]
http://www.enumera.com/doc/Enumeradraft061003.htm [enumera.com]
http://www.enumera.com/doc/analysis_of_Music_Copyright.html [enumera.com]
http://www.enumera.com/doc/emtalk.ppt [enumera.com]
--------
This was Jeff Fox's independent web site; he worked on the F21 with Chuck.
http://www.ultratechnology.com/ml0.htm [ultratechnology.com]
http://www.ultratechnology.com/f21.html#f21 [ultratechnology.com]
http://www.ultratechnology.com/store.htm#stamp [ultratechnology.com]
http://www.ultratechnology.com/cowboys.html#cm [ultratechnology.com]
------
http://www.colorforth.com/ [colorforth.com] 25x Multicomputer Chip
Chuck's site. The 25x pages have been pulled down, but they're accessible on archive.org.
http://web.archive.org/web/*/www.colorfo [archive.org]
A problem that isn't getting solved anytime soon (Score:3, Insightful)
Look at most people's screens. Even if they have multiple programs running, they tend to have the one they are working on full screen. Studies have shown that people who multitask are less efficient than people who do one job at a time. Perhaps we are not educated to look at problems as solvable in a parallel fashion, or perhaps there is some other human-based problem. Maybe, like many other skills, being able to think and program in a multithreaded fashion is a talent that only a small fraction of the population has.
This "panic" isn't going away, and there is NO quick fix on the programming horizon. The hardware designers can stuff more cores in the box, but programmers won't keep up. What can consume the extra CPU power are things like speech recognition, handwriting and gesture recognition, and rich media. Each of these can run on its own 1-4 cores and help us serial humans interact with those powerful computers more easily.