
 



Programming IT Technology Hardware

Panic in Multicore Land 367

MOBE2001 writes "There is widespread disagreement among experts on how best to design and program multicore processors, according to the EE Times. Some, like senior AMD fellow Chuck Moore, believe that the industry should move to a new model based on a multiplicity of cores optimized for various tasks. Others disagree on the grounds that heterogeneous processors would be too hard to program. The only emerging consensus seems to be that multicore computing is facing a major crisis. In a recent EE Times article titled 'Multicore puts screws to parallel-programming models', AMD's Chuck Moore is reported to have said that 'the industry is in a little bit of a panic about how to program multicore processors, especially heterogeneous ones.'"
This discussion has been archived. No new comments can be posted.


  • Panic? (Score:4, Insightful)

    by jaavaaguru ( 261551 ) on Tuesday March 11, 2008 @07:34AM (#22713926) Homepage
    I think "panic" is a bit of an over-reaction. I use a multicore CPU. I write software that runs on it. I'm not panicking.
  • Re:Panic? (Score:4, Insightful)

    by shitzu ( 931108 ) on Tuesday March 11, 2008 @07:42AM (#22713986)
    Still, the fact remains that the x86 processors (due to the OSes that run on them, actually) have not gotten much faster in the last 5-7 years. The only thing that has shown serious progress is power consumption and heat dissipation. I mean, the speed the user experiences has not improved much.
  • The future is here (Score:5, Insightful)

    by downix ( 84795 ) on Tuesday March 11, 2008 @07:48AM (#22714038) Homepage
    What Mr Moore is saying does have a grain of truth, that generic will be beaten by specific in key functions. The Amiga proved that in 1985, being able to deliver a better graphical solution than workstations costing tens of thousands more. The key now is to figure out which specifics you can use without driving up the cost or compromising the design ideal of a general-purpose computer.
  • Re:Panic? (Score:5, Insightful)

    by leenks ( 906881 ) on Tuesday March 11, 2008 @07:57AM (#22714100)
    How is an 80-core cpu a cut down version of a dual-CPU box? This is the kind of technology the authors are discussing, not your Core2 duo MacBook...
  • Re:Panic? (Score:5, Insightful)

    by Chrisq ( 894406 ) on Tuesday March 11, 2008 @08:04AM (#22714148)
    Yes panic is strong, but the issue is not with multi-tasking operating systems assigning processes to different processors for execution. That works very well. The problem is when you have a single CPU-intensive task, and you want to split that over multiple processors. That, in general, is a difficult problem. Various solutions, such as functional programming, threads with spawns and waits, etc. have been proposed, but none are as easy as just using a simple procedural language.
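    The "spawns and waits" approach mentioned above can be sketched in Python (a hypothetical illustration, not from the discussion): one CPU-bound task is split into chunks, a worker process is spawned per chunk, and the parent waits for and combines the partial results.

```python
# Hedged sketch of "spawn and wait": split one CPU-bound task into
# chunks, hand each chunk to a worker process, then wait and combine.
# The function and chunk sizes are illustrative assumptions.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, workers=4):
    step = n // workers
    # last chunk absorbs the remainder
    chunks = [(k * step, n if k == workers - 1 else (k + 1) * step)
              for k in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))  # spawn, then wait

if __name__ == "__main__":
    print(parallel_sum_squares(100))  # same result as a serial loop
```

    The hard part, as the comment says, is that most real tasks do not decompose this cleanly: the chunks here are independent by construction.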
  • Re:Self Interest (Score:3, Insightful)

    by davecb ( 6526 ) * <davecb@spamcop.net> on Tuesday March 11, 2008 @08:10AM (#22714188) Homepage Journal

    If he's saying that his multicore processors are going to be hard to program, then self-interest suggests he be very very quiet (;-))

    Seriously, though, adding what used to be a video board to the CPU doesn't change the programming model. I suspect he's more interested in debating future issues with more tightly coupled processors.

    --dave

  • Why choose? (Score:2, Insightful)

    by Evro ( 18923 ) <evandhoffman AT gmail DOT com> on Tuesday March 11, 2008 @08:21AM (#22714266) Homepage Journal
    Just build both and let the market decide.
  • by funkboy ( 71672 ) on Tuesday March 11, 2008 @08:24AM (#22714304) Homepage
    The Amiga proved that in 1985, being able to deliver a better graphical solution than workstations costing tens of thousands more. The key now is to
    figure out which specifics you can use without driving up the cost or compromising the design ideal of a general-purpose computer.


    The key now is figuring out what to do with your Amiga now that no one writes applications for it anymore.

    I suggest NetBSD :-)
  • by adamkennedy ( 121032 ) <adamk@c[ ].org ['pan' in gap]> on Tuesday March 11, 2008 @08:26AM (#22714318) Homepage
    I have a 4-core workstation and ALREADY I get crap usage rates out of it.

    Flick the CPU monitor to aggregate usage rate mode, and I rarely clear 35% usage, and I've never seen it higher than about 55% (and even that for only a second or two once an hour). A normal PC, even fairly heavily loaded up with apps, just can't use the extra power.

    And since cores aren't going to get much faster, there's no real chance of getting big wins there either.

    Unless you have a specialized workload (heavy number crunching, kernel compilation, etc) there's going to simply be no point having more parallelism.

    So as far as I can tell, for general loads it seems to be inevitable that if we want more straight line speed, we'll need to start making hardware more attuned for specific tasks.

    So in my 16-core workstation of the future, if my Photoshop needs to apply some relatively intensive transform that has to be applied linearly, it can run off to the vector core, while I'm playing Supreme Commander on one generic core (the game) two GPU cores (the two screens) and three integer-heavy cores (for the 3 enemy AIs), and the generic System Reserved Core (for interrupts, and low-level IO stuff) hums away underneath with no pressure.

    Heterogeneity also has economics on its side.

    There's very little point having specialized cores when you've only got two.

    Once there's no longer scarcity in quantity, you can achieve higher productivity by specialization.

    Really, any specialized core whose usage rate you can keep higher than the overall system usage rate is a net win in productivity for the overall computer. And over time, anything that increases productivity wins.
  • Occam and Beyond (Score:3, Insightful)

    by BrendaEM ( 871664 ) on Tuesday March 11, 2008 @08:33AM (#22714404) Homepage
    Perhaps panic is a little strong. At the same time, programming languages such as Occam, which were built from the ground up for concurrency, seem very provocative now. Perhaps Occam's syntax could be modified into a Python-style syntax for more popularity.

    [Although, personally, I prefer Occam's syntax over C's.]

    http://en.wikipedia.org/wiki/Occam_programming_language [wikipedia.org]

    I think that a thread-aware programming language would be good in our multi-core world.
  • Re:Panic? (Score:5, Insightful)

    That works very well. The problem is when you have a single CPU-intensive task, and you want to split that over multiple processors. That, in general, is a difficult problem.

    It is, in general, an impossible problem.

    Most existing code is imperative. Most programmers write in imperative programming languages. Object orientation does not change this. Imperative code is not suited for multiple CPU implementation. Stapling things together with threads and messaging does not change this.

    You could say that we should move to other programming "paradigms". However, in my opinion, the reason we use imperative programs so much is that most of the tasks we want accomplished are inherently imperative in nature. Outside of intensive numerical work, most tasks people want done on a computer are done sequentially. The availability of multiple cores is not going to change the need for these tasks to be done in that way.
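    The distinction drawn above can be made concrete with a small, hypothetical Python example: the first loop carries a dependency from each step to the next and is inherently sequential; the second has no dependencies between elements and splits trivially across cores.

```python
def compound(balance, rates):
    # Inherently sequential: each step needs the previous result,
    # so extra cores cannot help.
    for r in rates:
        balance = balance * (1 + r)
    return balance

def brighten(pixels, amount):
    # Embarrassingly parallel: each element is independent, so the
    # list could be split across any number of cores.
    return [min(255, p + amount) for p in pixels]

print(round(compound(100.0, [0.10, 0.10]), 2))  # 121.0
print(brighten([0, 250, 100], 10))              # [10, 255, 110]
```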

    However, what multiple cores might do is enable previously impractical tasks to be done on modest PCs. Things like NP problems, optimizations, simulations. Of course these things are already being done, but not on the same scale as things like, say, spreadsheets, video/sound/picture editing, gaming, blogging, etc. I'm talking about relatively ordinary people being able to do things that now require supercomputers, experimenting and creating on their own laptops. Multi core programs can be written to make this feasible.

    Considering I'm beginning to sound like an evangelist, I'll stop now. Safe money says PCs stay at 8 CPUs or below for the next 15 years.

  • Re:Panic? (Score:5, Insightful)

    by Saurian_Overlord ( 983144 ) on Tuesday March 11, 2008 @08:51AM (#22714598) Homepage

    "...the speed the user experiences has not improved much [in the last 5-7 years]."

    This may almost be true if you stay on the cutting edge, but not even close for the average user (or the power-user on a budget, like myself). 5 years ago I was running a 1.2 GHz Duron. Today I have a 2.3 GHz Athlon 64 in my notebook (which is a little over a year old, I think), and an Athlon 64 X2 5600+ (that's a dual-core 2.8 GHz, for those who don't know) in my desktop. I'd be lying if I said I didn't notice much difference between the three.

  • by TheRaven64 ( 641858 ) on Tuesday March 11, 2008 @09:19AM (#22714888) Journal
    Well, part of your problem is that you're using a language which is a bunch of horrible syntactic sugar on top of a language designed for programming a PDP-11, on an architecture that looks nothing like a PDP-11.

    You're not the only person using heterogeneous cores, however. In fact, the Cell is a minority. Most people have a general purpose core, a parallel stream processing core that they use for graphics and an increasing number have another core for cryptographic functions. If you've ever done any programming for mobile devices, you'll know that they have been using even more heterogeneous cores for a long time because they give better power usage.

  • by TheLink ( 130905 ) on Tuesday March 11, 2008 @09:25AM (#22714978) Journal
    For servers the real problem is I/O. Disks are slow, network bandwidth is limited (if you solve that then memory bandwidth is limited ;) ).

    For most typical workloads most servers don't have enough I/O to keep 80 cores busy.

    If there's enough I/O there's no problem keeping all 80 cores busy.

    Imagine a slashdotted webserver with a database backend. If you have enough bandwidth and disk I/O, you'll have enough concurrent connections that those 80 cores will be more than busy enough ;).

    If you still have spare cores and mem, you can run a few virtual machines.

    As for desktops - you could just use Firefox without noscript, after a few days the machine will be using all 80 CPUs and memory just to show flash ads and other junk ;).
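    The server scenario above can be sketched with a thread pool (a hypothetical illustration; `time.sleep` stands in for disk and network waits): given many more concurrent connections than workers, every worker stays busy.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(req_id):
    time.sleep(0.01)  # simulated disk/network I/O
    return "response-%d" % req_id

def serve(n_requests, n_workers):
    # With far more requests than workers, the pool stays saturated,
    # which is the parent's point about a slashdotted server.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(handle_request, range(n_requests)))

print(len(serve(n_requests=40, n_workers=8)))  # 40
```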
  • Re:Panic? (Score:3, Insightful)

    by mollymoo ( 202721 ) * on Tuesday March 11, 2008 @09:39AM (#22715144) Journal
    The 386 could run existing 16-bit code faster than the processors it replaced, so there was a market for it despite the lack of 32-bit code. This is not the same situation; an 80-core processor won't run today's code any faster than an 8-core processor (assuming the cores are the same). Nobody will buy an 80-core processor until there is software which would benefit from it.
  • Re:Panic? (Score:2, Insightful)

    by Sebastian Reichelt ( 1241416 ) on Tuesday March 11, 2008 @09:45AM (#22715224)
    I think you are right that a lack of demand is the reason for the panic, but that is probably a broader issue: CPU manufacturers seem to be desperately looking for fields in which more processing power would be an advantage, even though it becomes more difficult to use. For the average user, even the increasing CPU speeds of the past have not shown much of a benefit, as software has become more demanding just because it could, not because users wanted features requiring a lot of CPU power (except in certain areas such as image processing). Now that CPU speeds cannot be increased much further, the wasting of CPU time will also have to stop at the current level. It is not realistic for the same programmers who have been writing more and more inefficient code to start using multiple threads just to continue this trend.

    That must be the reason why CPU companies are looking for niches of the consumer market where there is a realistic chance of programmers actually utilizing all available processing power, despite the difficulties. It is no surprise to me that "gaming" is a common answer. But the only consumer-related answer I could find in the article is this: "It could also create desktops that automatically index personal pictures based on facial recognition software." Judge for yourself.
  • by neomunk ( 913773 ) on Tuesday March 11, 2008 @09:48AM (#22715272)
    Heterogeneous cores are already in almost every PC I've seen so far this millennium. Anyone with a GPU is running heterogeneous cores in their machine. How do we handle it? The first half of your second sentence: libraries and frameworks. OpenGL, DirectX and whatnot provide the frameworks we need while the various manufacturers provide the drivers to maintain compatibility with the various APIs. We'll see soon enough (as a result of the Cell) if the same thing (two or more different libraries for the same processor, one for each of its core types) becomes the norm for other heterogeneous core systems. I think so, but it may be overlooked by manufacturers who want to view a processor as a unit instead of a collection of various units. They'll figure it out, these guys aren't MBAs, they're the truly educated. :-D

  • Re:Panic? (Score:4, Insightful)

    by mollymoo ( 202721 ) * on Tuesday March 11, 2008 @09:51AM (#22715302) Journal
    No matter how easy they make knitting I'm never going to do it, because I don't want to knit my own clothes. I just want ones which look good and work. No matter how easy you make programming most people just aren't going to do it, because they don't want to write their own programs. They just want programs that work.
  • Re:Panic? (Score:5, Insightful)

    by johannesg ( 664142 ) on Tuesday March 11, 2008 @09:56AM (#22715380)
    Let's not be too harsh on ourselves. In most systems today, the bottleneck is the hard disk, not the CPU. No amount of threading will rescue you if your memory has been swapped out.

    I write large and complex engineering applications. I have a few threads around, mostly for the purpose of doing calculation and dealing with slow devices. But I'm not going to add in more threads just because there are more cores for me to use. I'll add threads when performance issues require that I add threads, and not before.

    Most software today runs fine as a single thread anyway. The specialized software that requires maximum CPU performance (and is not already bottle-necked by HD or GPU access) will be harder to write, but for everything else the current model is just fine.

    If anything, Intel should worry about 99% of all people simply not needing 80 cores to begin with...

  • Re:Panic? (Score:4, Insightful)

    by TemporalBeing ( 803363 ) <bm_witness@BOYSENyahoo.com minus berry> on Tuesday March 11, 2008 @10:01AM (#22715438) Homepage Journal

    "...the speed the user experiences has not improved much [in the last 5-7 years]."

    This may almost be true if you stay on the cutting edge, but not even close for the average user (or the power-user on a budget, like myself). 5 years ago I was running a 1.2 GHz Duron. Today I have a 2.3 GHz Athlon 64 in my notebook (which is a little over a year old, I think), and an Athlon 64 X2 5600+ (that's a dual-core 2.8 GHz, for those who don't know) in my desktop. I'd be lying if I said I didn't notice much difference between the three.

    Do notice that in 5 years we have barely increased the clock frequency of the CPUs.

    Do notice that multi-cores don't increase the overall clock frequency, just divide the work up among a set of lower clock frequency cores - yet most programs don't take advantage of that. ;-)

    Do notice that despite clock frequencies going from 33 MHz to 2.3 GHz, the user's perceived performance of the computer has either stayed the same (most likely) or diminished over that same time period.

    Do notice that programs are more bloated than ever, and programmers are lazier than ever.
    ...
    In the end the GP is right.
  • Re:Panic? (Score:3, Insightful)

    by DarkOx ( 621550 ) on Tuesday March 11, 2008 @10:09AM (#22715552) Journal
    It's not the same as before, though. In 1986 I could get something for my money buying a 386, even if there was no new software in my plans. You got speed. Moving your DOS-based accounting package from that PC-AT at 6 MHz to a 386 running at 20 MHz let you do your payroll cycle faster.

    Assuming clock rates don't increase much (and they have not been), and instruction sets don't improve much (and they have not been), then beyond 3-4 cores I don't get any kind of improvement in the desktop world. I don't even see much improvement in the server world, other than for running VMware and a few applications like database software that are somewhat parallelized; even that stuff stops scaling well in most cases past 8 cores.

    That means there will be no demand for new chips across the majority of the business sector. That is a big problem for Intel and AMD.
  • Re:Panic? (Score:5, Insightful)

    by Alsee ( 515537 ) on Tuesday March 11, 2008 @11:16AM (#22716806) Homepage
    spreadsheets, video/sound/picture editing, gaming, blogging

    Odd selection of examples. The processing of cells can almost trivially be allocated across 80 cores. Media work can almost trivially be split into chunks across 80 cores. Games are usually relatively easy to split, either by splitting the graphics into chunks or by parallelizing physics or other simulation aspects.

    Oh, and blogging.
    My optical mouse has enough processing horsepower inside for blogging.

    OPTICAL MOUSE CIRCUITRY:
    Has the user pressed a key?
    No.
    Has the user pressed a key?
    No.
    Has the user pressed a key?
    No.
    (repeat 1000 times)
    Has the user pressed a key?
    No.
    Has the user pressed a key?
    No.
    Has the user pressed a key?
    Yes.
    OOOO! YES!
    QUICK QUICK QUICK! HURRY HURRY HURRY! PROCESS A KEYPRESS! YIPEE!


    -
  • Re:Panic? (Score:3, Insightful)

    by TuringTest ( 533084 ) on Tuesday March 11, 2008 @11:17AM (#22716828) Journal
    Ah, but they DO want their tedious tasks automated. If you provide users with a way to automate their tasks without them writing a whole program, just by learning what they do often [wikipedia.org], they will program the machine without knowing.
  • Re:Panic? (Score:2, Insightful)

    by nekokoneko ( 904809 ) on Tuesday March 11, 2008 @11:35AM (#22717226)

    Of course, comparing a P4 to a Core2 is like comparing Apples to Oranges as there are architecture changes across the whole chip that would change that (like the move away from P4's netburst architecture). So there are reasons other than clock frequency for that performance difference.
    That was my point. In opposition to what you had said, the fact that the clock frequency has not increased does not mean that CPU performance has not increased. Unless you didn't mean that an increase in clock frequency is necessary for an increase in performance, in which case I don't understand why you posted about clock frequency at all.

    That only works across all the different programs. An OS cannot break a single program into multiple threads/processes for the program - the program has to be coded to do so.
    Again, that was my point, quote with emphasis added: (...) the OS can still run different programs in each core, improving the overall user performance. I would suggest reading my post with a little more attention.

    In the end, despite the increase in processing power, the programs run as slow or slower than before. Numerous reasons for it. The GP of my original post in this thread is still correct.
    Quoting the GP, emphasis added: the fact remains that the x86 processors (due to the OSes that run on them, actually) have not gotten much faster in the last 5-7 years. The only thing that has shown serious progress is power consumption and heat dissipation. What do the OSes that run on them have to do with the processors' performance? Processors have had significant improvements in performance in the last 5-7 years, which makes the GP incorrect.
  • Re:Panic? (Score:4, Insightful)

    by cens0r ( 655208 ) on Tuesday March 11, 2008 @11:52AM (#22717548) Homepage
    If the 80 core processor can run 10 virtual machines as fast as one machine on the 8 core processor, I would be interested.
  • by Nom du Keyboard ( 633989 ) on Tuesday March 11, 2008 @11:53AM (#22717580)

    Others disagree on the ground that heterogeneous processors would be too hard to program.

    Been there, done that, already. The 8087 and its 80x87 follow-on co-processors were exactly that: specialized processors for specific tasks. Guess what? We managed to use them just fine a mere 27 years ago. DSPs have come along since and been used as well. Graphics-card GPUs are specialized co-processors for graphics-intensive functions, and we talk to them just fine. They're already on the chipsets, and soon to be on the processor dies. I don't think this is anything new, or anything that programming can't handle.

  • Re:Panic? (Score:4, Insightful)

    by TemporalBeing ( 803363 ) <bm_witness@BOYSENyahoo.com minus berry> on Tuesday March 11, 2008 @11:57AM (#22717646) Homepage Journal

    What do the OS's that run on them have to do with the processors' performance? Recent processors have had significant improvements in performance in the last 5-7 years, which makes the GP incorrect.
    Perhaps you missed my statement about the user's perceived performance. It is true, I grant you, that hardware performance has gotten better. But the user's perception of that performance has not; it has gone in the opposite direction. Some of that is because programmers rely on a single faster core to correct for their inept programming, lack of optimization, added abstraction layers, etc. However, that is no longer how processors function: they are now two or more slower cores working together.

    And yes, the OS can, and has been able to for years since SMP first came about, spread loads across multiple processors and cores. But that cannot change how a single program functions in and of itself - it cannot make that single program work at any given moment on more than one single core if it was not designed to do so (i.e. if the program is not designed to use multiple threads or processes).

    All-in-all, the OP is correct.
  • Re:Panic? (Score:2, Insightful)

    by nekokoneko ( 904809 ) on Tuesday March 11, 2008 @01:21PM (#22718986)

    Perhaps you missed my statement about the user's perceived performance. It is true, I grant you, that hardware performance has gotten better. But the user's perception of that performance has not - it's gone the opposite.
    Yes, I had noticed that statement both in your post and in the GP's post, and there is anecdotal evidence that the perceived performance has not increased. Questions about the objectivity of such a statement notwithstanding, one could argue that this increase in performance has not led to an increase in the users' perceived performance, but this argument has at best a tenuous relation to the other statements presented in your and the GP's posts, such as the statements about clock frequency. In particular, the statement by the GP that x86 processors have not been speeding up for the past 5-7 years is patently false.

    And yes, the OS can, and has been able to for years since SMP first came about, spread loads across multiple processors and cores. But that cannot change how a single program functions in and of itself - it cannot make that single program work at any given moment on more than one single core if it was not designed to do so (i.e. if the program is not designed to use multiple threads or processes).
    I find it baffling that you insist on trying to explain to me the point I myself had made in my first post in the thread.
  • by John Sokol ( 109591 ) on Tuesday March 11, 2008 @02:03PM (#22719658) Homepage Journal
    Back in 2000 I realized that 50 million transistors' worth of 4004s (the first processor ever created) would outperform a P4 with the same transistor count, done in the same fab and running at the same clock rates. It would be over 10x faster, I worked out. But how to use such a device?
    I had been working with a 100-PC cluster of P4-based systems to do H.264 HDTV compression in real time. I spread the compression function across the cluster, using each system to work on a small part of the problem and flowing the data across the CPUs.

    Based on this I wanted to build an array of processors on one chip, but I am not a silicon person, just software, drivers, and some basic electronics. So I looked at various FPGA cores: ARM, MIPS, etc. Then I went to a talk given by Chuck Moore, author of the language Forth. He had been building his own CPUs for many years using his own custom tools.

    I worked with Chuck Moore for about a year in 2001/2002 on creating a massive multicore processor based on Chuck's stack processor.

    The idea was, instead of having 1, 2, or 4 large processors, to have 49 (7x7) small, light, but fast processors on one chip. This would be for tackling a different set of problems than your classic CPUs: it wouldn't be for running an OS or word processing, but for multimedia, cryptography, and other mathematical problems.

    The idea was to flow data across the array of processors.
    Each processor would run at 6 GHz, with 64K words of RAM each.
    21-bit-wide words and bus (based on the F21 processor);
    this allows four 5-bit instructions per word on a stack processor that has only 32 instructions.
    Since it's a stack processor, they run more efficiently: in 16K transistors (4,000 gates),
    the F21 at 500 MHz performed about the same as a 500 MHz 486 at JPEG compression and decompression.
    With the parallel core design, instead of a common bus or network between the processors, there would be only 4 connections into and out of each processor. These would be 4 registers shared with its 4 neighboring processors, laid out in a grid. So each core would have a north, south, east, and west register.

    Data would be processed in what's called a systolic array, where each core would pick up some data, perform operations on it, and pass it along to the next core.

    The chips with a 7x7 grid of processors would expose the 28 (4x7) bus lines off the edge processors, so that these could be tiled into a much larger grid of processors.

    Each chip could perform around 117 Billion instructions per second at 1 Watt of power.
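    The data flow described above can be mimicked in a few lines of Python (a toy illustration, not the F21 design): each "core" in a 1-D chain applies its own small operation and passes the result along to its neighbor.

```python
def make_pipeline(stages):
    # Chain of "cores": data flows from one stage to the next, the way
    # values move through a systolic array.
    def run(value):
        for stage in stages:
            value = stage(value)
        return value
    return run

# Three illustrative "cores": scale, offset, clamp.
pipeline = make_pipeline([
    lambda x: x * 2,
    lambda x: x + 3,
    lambda x: min(x, 100),
])

print([pipeline(v) for v in (1, 10, 60)])  # [5, 23, 100]
```

    A real systolic array runs all stages concurrently, so throughput is set by the slowest stage rather than the sum of them; this sketch only shows the data movement.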

    Unfortunately I was unable to raise money, partly because I couldn't get any commitment from Chuck.

    Below are some links and other miscellaneous information on this project. Sorry it's not better organized.
    This was my project.

    ---------
    http://www.enumera.com/chip/ [enumera.com]
    http://www.enumera.com/doc/Enumeradraft061003.htm [enumera.com]
    http://www.enumera.com/doc/analysis_of_Music_Copyright.html [enumera.com]
    http://www.enumera.com/doc/emtalk.ppt [enumera.com]

    --------
    This was Jeff Fox's independent web site; he worked on the F21 with Chuck.

    http://www.ultratechnology.com/ml0.htm [ultratechnology.com]

    http://www.ultratechnology.com/f21.html#f21 [ultratechnology.com]
    http://www.ultratechnology.com/store.htm#stamp [ultratechnology.com]

    http://www.ultratechnology.com/cowboys.html#cm [ultratechnology.com]

    ------
    http://www.colorforth.com/ [colorforth.com] 25x Multicomputer Chip

    Chuck's site. The 25x pages have been pulled down, but they're accessible on archive.org.
    http://web.archive.org/web/*/www.colorfo [archive.org]
  • by pcause ( 209643 ) on Tuesday March 11, 2008 @05:12PM (#22722132)
    The issue of the lack of progress in creating tools to simplify multithreaded programming has been a topic of discussion for well over a decade. Most programmers just don't make much use of multithreading. They take advantage of multithreading because their Web server and database support it and the Web server runs each request in a separate thread. Even then, some activity is complex and is usually not further parallelized. Operating systems programmers and some realtime programmers tend to be good at multithreading and parallel programming, but this is a small minority of programmers. Heck, look at Rails, one of the most popular Web frameworks - it isn't thread-safe!

    Look at most people's screens. Even if they have multiple programs running, they tend to have the one they are working on full screen. Studies have shown that people who multitask are less efficient than people who do one job at a time. Perhaps we are not educated to look at problems as solvable in a parallel fashion or perhaps there is some other human based problem. Maybe like many other skills, being able to think and program in a multithreaded fashion is a talent that only a small fraction of the population has.

    This "panic" isn't going away, and there is NO quick fix on the programming horizon. The hardware designers can stuff more cores in the box, but programmers won't keep up. What can consume the extra CPU power are things like speech recognition, handwriting and gesture recognition, and rich media. Each of these can run on its own 1-4 cores and help us serial humans interact with those powerful computers more easily.

