
Inside Intel's $20M Multicore Research Program (187 comments)

An anonymous reader writes "You may have heard about Intel's and Microsoft's efforts to finally get multi-core programming into gear so that there actually will be developers who can program all those fancy new multicore processors, which may have dozens of cores on one chip within a few years. TG Daily has an interesting article about the project, written by one of the researchers. It looks like there is a lot of excitement around the opportunity to create a new generation of development tools. Let's hope that we will soon see software that can exploit those 16+ core babies. 'The problem of multi-core programming is staring at us right now. I am not sure what Intel's and Microsoft's expectations are, but it is quite possible that they are in fact looking at fundamental results from the academic centers to leverage their large work force to polish and realize the ideas that come forth. It calls for a much closer collaboration between the centers and the companies than it appears at first sight.'"
This discussion has been archived. No new comments can be posted.

  • It's easy (Score:5, Funny)

    by Anonymous Coward on Thursday April 03, 2008 @02:35PM (#22955272)
    ./configure --num-cores=16
  • by OrangeTide ( 124937 ) on Thursday April 03, 2008 @02:37PM (#22955296) Homepage Journal
    The thing is, most PCs have plenty of computing power as a single core system. The hard sell is getting people to upgrade those machines mainly used for email and browsing and video playback. I think as time moves on and quad core becomes the "low-end" you will see less demand for higher end hardware. Unless the next version of Windows requires a core dedicated to the OS or something in the future.
    • by pla ( 258480 ) on Thursday April 03, 2008 @02:55PM (#22955524) Journal
      Unless the next version of Windows requires a core dedicated to the OS or something in the future.

      So, uh, you haven't tried Vista yet, I see...
    • Re: (Score:3, Insightful)

      The software currently in use does not involve computationally complex problems, and so the computers appear to have "plenty of computational power." This is likely to be the case for a very long time, but there are useful but complex tasks computers might do. For example, a computer that might interact with its user purely by voice -- more advanced voice and language recognition systems are likely to require significantly more cores and computational power than is currently in wide use. Even more advanc
      • by peragrin ( 659227 ) on Thursday April 03, 2008 @03:15PM (#22955758)
        Yes, but voice processing is done best by dedicated hardware rather than generic hardware. Would a voice chip that can do that processing and only that processing be far more efficient? Call it the VPU; it can go next to the GPU and PPU, or it can be one of the 8 cores surrounding a Cell processor. The trend that generic processors can do everything will end. Maybe a plug and pray architecture where you can pick which cores you want installed on your system.

        • That sounds like a great way to continue to sell hardware at a premium, and allow vendors to keep making money off of Moore's law.

          It doesn't sound like a very good way to allow end users to make efficient use of their hardware.

          If users want a cost effective piece of hardware that's good for a lot of things, I don't see how generic hardware that can be load balanced isn't a win.
        • by treeves ( 963993 )
          ...maybe a plug and pray architecture ...

          Is AMD doing so poorly that that's their only hope?

      • You can wonder whether it is sane to control your computer with interfaces which chew up the bulk of your available computing power. But I think that when such systems enter the market, they will have one or two dedicated cores for them (though, while facial expression recognition may seem complex, language recognition and interpretation really are a lot harder) and leave the rest of the cores alone.
        • Well there is also going to be a shift in the way computers are thought of and used. For example, right now you use a computer like a tool...a very complex, very delicate tool, but a tool nonetheless. A future system that responds to facial expressions would be an entirely different way to view a computer, since your facial expressions aren't very useful for controlling something like a web browser. In that case, the computer, or at least that facet of the computer, would be invisible to the user...
      • Re: (Score:3, Informative)

        by OrangeTide ( 124937 )
        MPEG-4 decompression is far more complex than voice recognition. The processing involved is simply not that great, even for "more advanced voice and language recognition". The difficulty lies in better algorithms to do it. Turns out dynamic voice control and interpretation is not something that can be brute forced.

        Game physics needs computational power, but I'm not considering game systems.

        Scientific and Engineering projects need computational power and benefit from cost reduction in high performance process
    • by KillerCow ( 213458 ) on Thursday April 03, 2008 @03:09PM (#22955674)

      The thing is, most PCs have plenty of computing power as a single core system


      And 640k ought to be enough for anyone.

      I think as time moves on and quad core becomes the "low-end" you will see less demand for higher end hardware.


      My last purchase (6 to 8 months ago) was a "low-end" machine. I chose carefully to make sure that it was low-end and not bargain-basement. It has two cores. I don't think it's even possible to buy a single core machine through mainstream channels anymore. Today's low-end (multi-core) is more than adequate for most users to use over the next few (read: four) years.

      Unless the next version of Windows requires a core dedicated to the OS or something in the future.


      You do not understand how the scheduler works.
      • by OrangeTide ( 124937 ) on Thursday April 03, 2008 @04:08PM (#22956600) Homepage Journal

        And 640k ought to be enough for anyone.
        Funny, you quoted my response to that issue immediately after: "I think as time moves on and quad core becomes the "low-end" you will see less demand for higher end hardware."

        I don't think it's even possible to buy a single core machine through mainstream channels anymore.
        Conroe-L's are still shipping. And Intel has a single-core, ultra-low-power chip on the horizon designed to compete with ARM. Your phone, PDA, heart monitor, etc. won't be symmetric multiprocessor any time soon.

        You do not understand how the scheduler works.
        The Xbox 360 already works this way: three cores, two for the game, one for the OS.

        As a professional kernel developer, I realize that locking cores into specific tasks is a lot easier than writing a general purpose scheduler that performs equivalently.
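
        A minimal sketch of the kind of core-locking described above, assuming Linux (illustrative only; pthread_setaffinity_np() is a non-portable GNU extension, and consoles such as the 360 expose their own affinity calls):

        // Pin the calling thread to one core so the scheduler never migrates it.
        #ifndef _GNU_SOURCE
        #define _GNU_SOURCE
        #endif
        #include <pthread.h>
        #include <sched.h>

        bool pin_current_thread_to_core(int core)
        {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(core, &set);
            // Returns 0 on success, an errno-style code otherwise.
            return pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0;
        }
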
        • I realize that locking cores into specific tasks is a lot easier...
          Would you please show McAfee how to lock their anti-virus processes to unique non-shared cores so I can get some work done? Thanks
          • >>>"The thing is, most PCs have plenty of computing power as a single core system

            >"And 640k ought to be enough for anyone."

            No not really, but I think PCs will follow a progression similar to cars. When cars were first available they were a mere 10-20 horsepower. As the technology developed, engineers learned to make better cars until the 1950s when a family car might have 400-500 horsepower. In theory engineers could have continued building more-and-more powerful cars, so that we would have 4
    • by bhima ( 46039 ) *
      I wish I could count the number of times I've heard variations of this. I think the first time I heard it was when Intel released the 80387. Didn't seem to be accurate then either.

      Pretty soon social networking will include 1080p video mail or 50 megapixel photos of Jr. or there will be another DOOM II or something like that golf game that had every executive upgrading their Windows 95 'business' computers. Or perhaps the latest 4x1080p 3D media encoder will have us all wanting something faster.

      But that's
      • by PitaBred ( 632671 ) <slashdot@pitabre ... org minus distro> on Thursday April 03, 2008 @04:07PM (#22956582) Homepage
        We've gotta get the bandwidth before 1080p is even remotely possible for video mail. The thing is, for the VAST majority of people, there is no killer app that will require an upgrade right now. A low-end machine will push 1080p in H.264 no problem. A 50MP picture of junior would again require more bandwidth, and a bigger monitor. Not a faster machine.
        • by Nikker ( 749551 ) *
          Definitely bandwidth is needed, but as far as 1080p goes, compressed or uncompressed, I was actually surprised when I tried to play that on my 2.53GHz P4 and it choked; I was getting about 10fps. That is really the only reason I currently have for shopping for a new system.
          • You need a halfway decent video card - they do a lot of the heavy lifting for HD video compression now.
            • So what you're saying is, he should offload the processing from his system's one core to another core ... just on the video card.

              • Or else buy a CPU that's not 6 years old. ;-) He's got the same CPU I bought back in 2002! Of course a 2002-era CPU cannot handle 1080p. I don't even think 1080p existed back then, and 1080i was mainly just a pipe dream reserved for only $10,000 TV sets, not PCs.

                However a modern 2007-designed CPU should be able to handle that movie just fine.

                So why upgrade if the 2007-designed CPU can play 1080p movies flawlessly? What possible *real world* (not star trek) application could make someone want to get a
      • Scientific and engineering are a completely different market. Although a lot of us do modeling on equipment that has less powerful processors than our desktops these days.

        The thing is, people don't really need a more powerful machine. The hardware is capable of handling the workloads you described. And when someone said that we won't need anything faster than a 387 FPU, or more than 640K, or whatever, they were right. And were never proven wrong. Just because a market is out there for flashy gadgets that don
        • by Jeremi ( 14640 )
          And most of us would rather have 1 cpu that runs 16x faster than 16 cpus.


          Sure, anyone would. Barring some major breakthroughs in superconducting circuits, however, it's not going to happen anytime soon.... well, not unless you want to run one of those liquid-helium cooled machines.... :^)

        • You forget about those of us using compute farms to do work. My place has 700 blades that are dual core right now. If we had 8 or 16 cores, that's a lot more available compute time - I often see 10-20K jobs pending to hit the farm, so having more compute to dispatch all those processes to would be great, and it wouldn't take up more server room (which is a big issue when space becomes limited.)
        • >>>"when someone said that we won't need anything faster than a 387 fpu, or more than 640K, or whatever. They were right. And were never proven wrong. Just because a market is out there for flashy gadgets that don't really work any better than 20 year old technology doesn't prove the naysayers wrong."

          Well...

          I gotta disagree. It's a nice modern feature to be able to hear REAL music instead of sid-music (C=64) or watch REAL movies instead of 5-minute graphics demos (Commodore Amiga). There have bee
    • Re: (Score:2, Insightful)

      The thing is, most PCs have plenty of computing power as a single core system. . . .
      Rather than multi-core technology resulting in elegant new software to take advantage of it, I suspect that software will get worse (think loop-until-done rather than schedule an interrupt). Faster processors have not made software better; rather, they have resulted in an abundance of bad software!
    • by bberens ( 965711 )
      I think you're absolutely right in the short to medium term. We'll need another technology revolution in order for more than 1-2 cores to be really beneficial to the average home user. Outside the web/db server market there's not a lot of use that isn't somewhat fringe. Of course, the web/db server market is huge.
    • by Locutus ( 9039 )
      I think it has more to do with the lack of multi-threading in applications. Imagine you have a graphics app which pegs the CPU doing some particular image processing and you see the Windows hourglass for 5 minutes. So you go out and get a dual-core system and fire up that same app but still see a 5 minute wait. Because that application is not or is poorly multi-threaded, the 2nd CPU is doing very little to help speed things up when one program is doing the CPU hogging. Of course, the poor design of Windows
  • In the new octagon (8-way processors), a battle of the ages, Crapware vs AntiCrapware

    Most of the new cores are being used to isolate crapware and anticrapware in a Battle Royal.

    And it looks like Crapware is going to win in a submission tapout at the current rate.
  • by alta ( 1263 ) on Thursday April 03, 2008 @02:40PM (#22955356) Homepage Journal

    I am not sure what Intel's and Microsoft's expectations are, but it is quite possible that they are in fact looking at fundamental results from the academic centers to leverage their large work force to polish and realize the ideas that come forth.
    Maybe my brain needs a new compiler. This must be a multi-core sentence.
  • Multicore Programs (Score:4, Insightful)

    by Ironsides ( 739422 ) on Thursday April 03, 2008 @02:42PM (#22955376) Homepage Journal
    Software that will exploit 16+ cores already exists. The problem is, it is not consumer (home/office) software. There does not yet exist an application that people use that really needs multiple cores. Video encoding is getting there, but most people will never use it.
    • by eh2o ( 471262 )
      Dunno about the UIUC side, but a significant part of the UCB/ParLab grant includes research into applications.

      There is also a recent paper that shows how the MapReduce pattern can be easily applied to just about every machine-learning algorithm with near-linear speedup. This stuff isn't just going to be used to make the next Clippy, but for more interesting stuff like video processing, speech recognition, sensor fusion in "smart" handhelds, etc.

      The applications and the need exist, but so far they have not
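
      For reference, the MapReduce pattern mentioned above just means mapping the same computation over independent chunks of data and then reducing the partial results. A minimal sketch (hypothetical sum-of-squared-errors step, assuming C++ threads are available; this is not the code from the paper):

      #include <algorithm>
      #include <future>
      #include <vector>

      // "Map": each worker computes a partial sum of squared errors over its chunk.
      double chunk_sse(const std::vector<double>& pred, const std::vector<double>& label,
                       size_t begin, size_t end)
      {
          double sse = 0.0;
          for (size_t i = begin; i < end; ++i) {
              const double d = pred[i] - label[i];
              sse += d * d;
          }
          return sse;
      }

      // "Reduce": add the partial results together. Speedup is near-linear because
      // the chunks share no state until the final sum.
      double parallel_sse(const std::vector<double>& pred, const std::vector<double>& label,
                          unsigned workers)
      {
          std::vector<std::future<double>> parts;
          const size_t chunk = (pred.size() + workers - 1) / workers;
          for (unsigned w = 0; w < workers; ++w) {
              const size_t b = std::min(pred.size(), size_t(w) * chunk);
              const size_t e = std::min(pred.size(), b + chunk);
              parts.push_back(std::async(std::launch::async, chunk_sse,
                                         std::cref(pred), std::cref(label), b, e));
          }
          double total = 0.0;
          for (auto& p : parts) total += p.get();
          return total;
      }
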
    • Apple's multi-touch track pads will let us see if gestures are a good way to control computers. If they are, then I can see that the next step will be to do away with the touch pad and just move fingers in the air while a web-cam type device watches. That could use a few cores. Watching a user's eyes to see what part of the screen is being read could be useful too. What about sorting my library of photos by object, for example finding all the ones with a given person in them? There are lots of uses for comp
  • I hope they do better at getting useful coding tools into the hands of home coders than GPU manufacturers have done at letting them utilise the parallel programmable nature of modern GPUs.
  • It should not be very hard... The algorithm begs for multi-threading — once you divide your array, you apply the same algorithm to the two parts, recursively. The parts can be sorted in parallel — this has potentially huge performance implications for database servers (... ORDER BY ...), etc.

    Anyone?
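
    A minimal sketch of that recursive split, assuming C++ threads (hypothetical helper; a real implementation would tune the cutoffs and handle arbitrary element types):

    #include <algorithm>
    #include <future>
    #include <vector>

    // Partition once, then sort the two halves concurrently, recursing until the
    // pieces are small (or the depth budget runs out), at which point sort serially.
    void parallel_quicksort(std::vector<int>& v, size_t lo, size_t hi, int depth)
    {
        if (hi - lo < 2) return;
        if (depth <= 0 || hi - lo < 10000) {
            std::sort(v.begin() + lo, v.begin() + hi);
            return;
        }
        const int pivot = v[lo + (hi - lo) / 2];
        auto mid = std::partition(v.begin() + lo, v.begin() + hi,
                                  [pivot](int x) { return x < pivot; });
        const size_t m = mid - v.begin();
        auto left = std::async(std::launch::async,
                               parallel_quicksort, std::ref(v), lo, m, depth - 1);
        parallel_quicksort(v, m, hi, depth - 1);   // right half on this thread
        left.get();
    }

    // Usage: parallel_quicksort(data, 0, data.size(), 4);  // up to ~2^4 concurrent sorts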

    • by Yokaze ( 70883 ) on Thursday April 03, 2008 @03:12PM (#22955710)
      You mean something like parallel_sort [gnu.org] in libstdc++, since GCC 4.3.0?

      One of several parallelised standard algorithms [209.85.135.104].
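
      Rough usage of that libstdc++ parallel mode (GCC 4.3+): either call the __gnu_parallel algorithm explicitly, as below, or compile with -D_GLIBCXX_PARALLEL -fopenmp so that plain std::sort dispatches to it. Exact headers and flags may vary by GCC version.

      #include <parallel/algorithm>
      #include <vector>

      void sort_big_vector(std::vector<double>& v)
      {
          // Splits the range across cores via OpenMP under the hood.
          __gnu_parallel::sort(v.begin(), v.end());
      }
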
      • And how does that help? Any dataset that fits in memory can be sorted almost instantaneously using a single core. And datasets that do not fit in memory don't benefit from having more cores since they are IO-bound anyway.
        • by mi ( 197448 )

          Any dataset that fits in memory can be sorted almost instantaneously using a single core.

          Even in 128GB of memory? Even if the comparison function (qsort()'s last argument) takes a while to complete?

      • by mi ( 197448 )

        You mean something like parallel_sort in libstdc++, since GCC 4.3.0?

        Uhm, yes, something like that... But it ought to be transparent to the caller — I just want to keep calling qsort() from my (portable) code and have it take advantage of the multiple CPUs, when available.

  • Sun? (Score:4, Funny)

    by Anne Thwacks ( 531696 ) on Thursday April 03, 2008 @02:44PM (#22955398)
    Of course some of you will know that Sun have had 8/16/32 cores for quite a while, and that Solaris, *BSD, and probably even Linux support this stuff just fine.

    It's only you peasants that persist in using old-hat Wintel stuff that are so last-year. Get with it people! You too could be running NetBSD on your toaster (it will probably outperform Windows Vista on a 4-core Pentium anyway). Hell, it might even eat Nando's peri-peri Vista for breakfast!

    • Re: (Score:3, Informative)

      by GreggBz ( 777373 )

      Of course some of you will know that Sun have had 8/16/32 cores for quite a while, and that Solaris, *BSD, and probably even Linux support this stuff just fine.

      The NT kernel has supported SMP for 10 years. So what?

      It's all about the applications. Sure, there are some development tools in *nix for multicore. I doubt they are efficient and accessible though. Can y'all tell me how great GCC is with 16 cores and thread-level parallelism? I'm sure some academic and/or low-level solutions exist everywhere. However,

      • by Locutus ( 9039 )

        The NT kernel has supported SMP for 10 years. So what?

        It sucked at it compared to OS/2 and probably Solaris 10+ years ago, and because of how poorly it did threads, most Windows apps did what Microsoft did and pretty much stayed away from threading. And to be relevant to the current discussion, Windows threading did not cross CPU/core boundaries while OS/2's threading did 10+ years ago.

        So, are you saying that Windows (XP and/or Vista) threading can cross core boundaries? If so, why would Microsoft be trying to come up with a way to get developers to target mult

        • by drsmithy ( 35869 )

          It sucked at it compared to OS/2 and probably Solaris 10+ years ago, and because of how poorly it did threads, most Windows apps did what Microsoft did and pretty much stayed away from threading. And to be relevant to the current discussion, Windows threading did not cross CPU/core boundaries while OS/2's threading did 10+ years ago.

          What do you mean by "cross CPU/core boundaries"? Windows NT has been able to schedule arbitrary threads onto arbitrary processors since *at least* NT 4.0 (and probably 3.1).

          • by Locutus ( 9039 )

            What do you mean by "cross CPU/core boundaries"? Windows NT has been able to schedule arbitrary threads onto arbitrary processors since *at least* NT 4.0 (and probably 3.1).

            You are right, since I found a couple of pages which say that NT threads can cross CPU boundaries. Interesting, since when I was working on OS/2 and NT apps in the mid-90's, NT performance was really bad on dual CPU systems with a heavily threaded app. It was explained to me at the time that the NT kernel didn't let a process's threads spread out across the CPUs. Whatever it was, the OS/2 port was much faster on the same dual CPU system as the NT port and that was before any 32bit data structure alignment wa

      • Firstly, NT supports SMP, but it doesn't scale well to utilise it. Windows Server 2008 might be tolerable, but it's not going to compete with current, let alone future, Linux, and the higher the core count, the bigger the divide gets.

        Secondly, GCC doesn't care about threading scalability. It's all up to you as the application architect to design a parallel system.

        Academic and real-world examples are well known. Once you get the basic ideas down, the vast majority of throughput bottlenecks parallelise out ve
        • by drsmithy ( 35869 )

          Firstly, NT supports SMP, but it doesn't scale well to utilise it. Windows Server 2008 might be tolerable, but it's not going to compete with current, let alone future, Linux, and the higher the core count, the bigger the divide gets.

          Benchmarks?

  • by stratjakt ( 596332 ) on Thursday April 03, 2008 @02:54PM (#22955514) Journal
    SMT processors of this type are only useful for accelerating a certain type of problem set, and useless for most general computing.

    We've had SIMD multicore PC's forever, and they're useless as desktops. I write this from a quad Xeon machine, repurposed as my dev box, as CPU1 grinds away at about 75% all day long, the rest idle. It's been like that for more than a decade, it'll be like that until MIMD hits the street with a whole new paradigm of programming languages behind it - a handful of C compiler #pragma directives from Intel isn't going to make this work.

    It's not simply a matter of "coders don't know how to do it." It's a matter of these multi-core "general purpose" CPUs are only really useful for a fairly limited set of specific problems.

    E.g., writing a game engine with a video thread, audio thread and an input thread still leaves 13 cores idle. You really can't thread those much farther (the ridiculously parallel problem of rendering is handled by the GPU).

    Simply starting processes on different procs doesn't help all that much, since they all fight over memory and I/O time. The point of diminishing returns is reached fairly quickly.

    But hey, if all you do is run Folding@home so you can compare your e-cock with the other kids on hardextremeoverclockermegahackers.com, well I have some good news!

    As for me, I'm seeing AMD's multiple specific purpose core approach as being more viable, as far as actually making my next desktop computer perform faster.

    Savain says it best at rebelscience.org: "Even after decades of research and hundreds of millions of dollars spent on making multithreaded programming easier, threaded applications are still a pain in the ass to write."
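
    For what it's worth, Amdahl's law is the usual back-of-the-envelope way to put numbers on that point of diminishing returns: speedup = 1 / (s + (1 - s) / n) for a serial fraction s on n cores. A tiny illustrative calculation, with made-up serial fractions:

    #include <cstdio>

    int main()
    {
        const double serial_fraction[] = {0.05, 0.25, 0.50};
        const int cores[] = {2, 4, 16};
        for (double s : serial_fraction)
            for (int n : cores)
                std::printf("s=%.2f  n=%2d  speedup=%.2fx\n",
                            s, n, 1.0 / (s + (1.0 - s) / n));
        return 0;                 // e.g. s=0.50, n=16 tops out near 1.9x
    }
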
    • "Eg; writing a game engine with a video thread, audio thread and an input thread still leaves 13 cores idle. You really cant thread those much farther (the ridiculously parallel problem of rendering is handled by the GPU)."

      Woopsie. I think you presume that games don't need much more processing before the GPU.

      What if you could thread out, and preprocess the video? We don't know, because it's not yet practical. The tools to write that software don't exist.

      Actually, if we get enough cores as CPU, when do w
      • Actually, if we get enough cores as CPU, when do we start to need less GPU?

        When CPUs get better at churning out FP math solutions. The whole purpose of the GPU is that it's a massive net of FPUs. I think Cell-style technology is going to be more similar to the type of chip we see in 10 years than an Intel C Core w/ 100 Pentium-type cores in it. Ideally, I think you are looking at a processor 'office' for each thread - 1 supervisor core, multiple FPUs, a couple of CPU cores, perhaps 1 or 2 GPUs & a few FGP

    • by everphilski ( 877346 ) on Thursday April 03, 2008 @03:58PM (#22956496) Journal
      a handful of C compiler #pragma directives from Intel isn't going to make this work.

      That's OpenMP, and depending on the program, it can work wonders. In an hour I parallelized 90% of a finite element CFD code with it. Yes, it sucks for fine-grained parallelization.

      Intel's product is Threading Building Blocks, which is not built around pragmas and is available both commercially and as OSS. It's pretty slick and will let you do the more fine-grained optimizations.

      It's a matter of these multi-core "general purpose" CPUs are only really useful for a fairly limited set of specific problems.

      Not entirely true, it's just useful for problems that need a processor.

      I write this from a quad Xeon machine, repurposed as my dev box, as CPU1 grinds away at about 75% all day long, the rest idle.

      ... obviously, you have more processor than you need. I, on the other hand, have a quad core Opteron that is currently at over 350% utilization. I tank it almost 24/7.

      the ridiculously parallel problem of rendering is handled by the GPU

      Not for long. Raytracing is making a comeback.

      As for me, I'm seeing AMD's multiple specific purpose core approach as being more viable, as far as actually making my next desktop computer perform faster.

      If you can't even tank one core of your Xeon, it's doubtful.

      "Even after decades of research and hundreds of millions of dollars spent on making multithreaded programming easier, threaded applications are still a pain in the ass to write."

      I'd caveat that by saying "threading arbitrary program X is a pain in the ass." There are plenty of useful programs that are easily parallelized.
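
      A minimal sketch of the kind of coarse-grained OpenMP loop parallelization mentioned above (hypothetical relaxation sweep, not the poster's CFD code; assumes a compiler built with OpenMP support, e.g. -fopenmp):

      #include <vector>

      // One pragma splits the outer loop's iterations across the available cores.
      // next and cur hold an n x n grid stored row-major.
      void relax(std::vector<double>& next, const std::vector<double>& cur, int n)
      {
          #pragma omp parallel for
          for (int i = 1; i < n - 1; ++i)
              for (int j = 1; j < n - 1; ++j)
                  next[i * n + j] = 0.25 * (cur[(i - 1) * n + j] + cur[(i + 1) * n + j]
                                            + cur[i * n + j - 1] + cur[i * n + j + 1]);
      }
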
    • Re: (Score:3, Interesting)

      by Rhys ( 96510 )
      The desktop PC should be idle most of the time. User input is really slow and in general the machine is waiting on the user, not the other way around. However, ask yourself whose time is more valuable, the machine you bought for $1,500 that lasts 3 years (at least, that's the hardware update cycle around my work), or the person you pay $150,000 over a similar time frame? (give or take on location, entry-level position) Pay 10% more ($150) for the computer to save the person 0.1% ($150) of their time? That's an
    • You seem to be wrong on just about every point that you've tried to make. Take a look at the previous article on multithreading that made the slashdot frontpage (I think about a week ago). Somebody tried the same argument as you and was shot down quite forcefully in the largest thread in the discussion.

      None of the people that argued could come up with a single real-world problem that couldn't be hacked into working on multi-core systems. When you say "SMT" I assume you've made a typo and mean SMP. SMP is a
    • by Jeremi ( 14640 )
      We've had SIMD multicore PC's forever, and they're useless as desktops. [...] it'll be like that until MIMD hits the street


      Those acronyms, I don't think they mean what you think they mean. SIMD refers to a Single Instruction operating on Multiple Data values in parallel... think Altivec or SSE. MIMD is Multiple Instructions, Multiple Data... i.e. the multiple CPU machines you and I have been running for years.

    • E.g., writing a game engine with a video thread, audio thread and an input thread still leaves 13 cores idle. You really can't thread those much farther (the ridiculously parallel problem of rendering is handled by the GPU).

      Hi! My name is AI, I'll be happy to eat any number of cores you throw at me!

      NOT SO FAST, AI! I'm RAY, RAY TRACING AND....

      Anyways, you get the picture. 640k ram yadda yadda.
    • The problem is that current programming languages don't use a programming paradigm like the Actor model, which makes every object a separate thread. If that were the case, each new core would increase parallelism, as permitted by the underlying algorithm.
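
      A minimal actor-model sketch (illustrative only, not any particular library): each actor owns its own thread and mailbox and is touched only through messages, so adding actors adds exploitable parallelism without exposing shared state to the caller:

      #include <condition_variable>
      #include <mutex>
      #include <queue>
      #include <string>
      #include <thread>

      class Actor {
      public:
          Actor() : worker_([this] { run(); }) {}
          ~Actor() { send("quit"); worker_.join(); }

          void send(std::string msg) {                  // callable from any thread
              { std::lock_guard<std::mutex> lk(m_); mailbox_.push(std::move(msg)); }
              cv_.notify_one();
          }

      private:
          void run() {                                  // the actor's private thread
              for (;;) {
                  std::unique_lock<std::mutex> lk(m_);
                  cv_.wait(lk, [this] { return !mailbox_.empty(); });
                  std::string msg = std::move(mailbox_.front());
                  mailbox_.pop();
                  lk.unlock();
                  if (msg == "quit") return;
                  // ...react to the message here; internal state is only ever
                  // touched by this thread, so no user-level locking is needed...
              }
          }

          std::mutex m_;
          std::condition_variable cv_;
          std::queue<std::string> mailbox_;
          std::thread worker_;                          // started last, joined in ~Actor
      };
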
  • by Anonymous Coward on Thursday April 03, 2008 @02:55PM (#22955526)
    The structure of VHDL is inherently parallel as all processes (blocks of hardware) run at the same time. Only the code within the processes is evaluated sequentially (in most cases).

    Although VHDL is a hardware description language, couldn't similar concepts be used to make a parallel-centric computer programming language?
  • We all already have networks of servers all running in parallel. Multi core processing is simply squashing the network onto a little bit of silicon.
     
  • Didn't we already see this one? Intel (this time AMD also) develops radical new processor arch that will be insanely great once a quantum leap in developer tools is made to utilize it.

    Itanic crashed, burned and sank against the rocks of the compiler tech not being able to keep up. I see it happening again.

    Yes we will find ways to make a quad core system stay busy enough to sell em to corporate desktops and home users. Hell, you can assign one to the virus/crapware scanner. Waste another or two doing ev
    • by turgid ( 580780 )

      Itanic crashed, burned and sank against the rocks of the compiler tech not being able to keep up

      There is a fundamental flaw in the Itanic design philosophy that no compiler will ever be able to make up for. There are some optimisations that have to be done at run time. They can't be done at compile time. Itanic was conceived out of 1970's supercomputer research, before RISC processors with out-of-order execution, speculative execution and dynamic branch prediction had been invented.

      There was an Itanic forerunner back in t

    • About the time the 386 was still considered a high end PC, a former roommate of mine who was doing a software engineering related Master's degree was convinced that no home user would ever find a use for multi-threading, except for printer spooling. He was as obviously wrong as Bill was with his 640K comment.

      Even your humble word processor can be broken into hundreds of threads; I would bet OpenOffice has dozens. Today, threads are considered expensive, as RAM once was, but in the 16 core world they will b
  • by MarkEst1973 ( 769601 ) on Thursday April 03, 2008 @03:11PM (#22955702)

    Forget software not being written for multi-cores, the entire infrastructure around the computer needs to "go wide" for massive parallelism, not just the software. This includes disk, memory, front-side bus, etc.

    I'm doing highly concurrent projects (grid computing) for my company and we're finding that some things parallelize just fine, but others simply move the pain and bottleneck to a piece of infrastructure that hasn't quite caught up yet.

    For example, my laptop has a dual-core 2.2GHz processor, which you'd think is great for development. It's no better than a single CPU machine because my disk IO light is on all the time. IntelliJ pounds the disk. Maven and Ant pound the disk. Outlook pounds the disk. Even surfing the web puts pages into disk cache, so browsing while building a project is slow. Until I get a SCSI drive, I'm still limited on disk IO, so those extra cores don't help that much.

    All the cores are great on the server, though. I've recently completed a massive integration project where I grid-enabled my company's enterprise apps. All those cores running grid nodes is giving us very high throughput. Our next bottleneck is the database (all those extra grid nodes pounding away at another bottleneck resource...)

    Terracotta Server as a Message Bus [markturansky.com]. It's been a very interesting project.

    • by geekoid ( 135745 )
      well, dual-core 2.2 that tells us everything we need to know for your point to be valid~

    • by Chirs ( 87576 )
      Outlook I can understand. It needs to flush the emails to disk before replying back to the server.

      However, there's no reason why the web browser needs to ensure that the data hits the disk cache right away, so it should be just fine sitting in RAM until the disk frees up. Similarly, IntelliJ, Maven, and Ant should be slow the first time but faster later on since they should be reading from the page cache.

      There's no reason for your disk I/O light to be on unless you don't have enough RAM or the disk algori
    • by Prune ( 557140 )
      Get two 10000 RPM Raptor drives and run them in RAID 0. The difference over a regular, single 7200 RPM drive is immense, and I've never seen as big a one from any other upgrade. Sure, they're only 150 GB, but I use slower drives for video etc.
      I also found that getting faster RAM makes more difference than a faster CPU, which suggests that too many programs have poor cache behavior.
  • The solution is right in front of our faces. If you use virtualization then you can easily make use of a 16 core system. I can have IIS, Exchange, a Linux Apache Server, and a Terminal Server all on the same physical machine.
    • This is exactly what Intel is trying to avoid. They want people to do more computing, not consolidate their purchases. They need to sell more PCs with more CPUs. They want people to find uses for the extra computing power. In business, there are plenty of uses [ampl.com] for that power. If people's applications are using just one core at a time, the consolidation effect will occur and Intel's sales will plummet. If applications make good use of the multiple cores, the applications themselves become more usef
    • Man, I hope this was a tongue-in-cheek post. Virtualization, used in this manner, is precisely equivalent to scheduling multiple processes across cores, only you also get the virtualization overhead. It's most definitely not a solution to the problem Intel is trying to solve (making it easy for developers to write individual pieces of software whose problems can be naturally broken down and distributed across multiple cores).
      • Virtualization, used in this manner, is precisely equivalent to scheduling multiple processes across cores, only you also get the virtualization overhead.

        Obviously you've never used a Microsoft product before. They aren't exactly stable. The virtualization means that one thing doesn't take down another. Also certain apps don't get along. MSSQL for instance running with anything else gives you headaches. Plus I mentioned running multiple operating systems (Windows Server / Linux) on the same machine.

    • How does Intel persuade people to buy new CPUs if there is no benefit delivered to the buyer?

      How does Microsoft sell you new licenses if you don't buy a new computer?

      Virtualization at the OS image level only allows you to run multiple different applications. Running more applications at once isn't the primary goal of the average user. They want the application which has the focus of their attention to be slick and fast.

      Multicore CPUs do not allow you to run a single application faster. Intel's PC market
  • Scala is a JVM-based language that has good features for working well with multiple cores (Actors, immutable collections, functional style, etc.), so why not sponsor it?

    Mats
  • ...for Linux, Mac and Windows [intel.com] supporting multicore and also cluster [intel.com] architectures.
    Obviously it would be nice if these worked better and were easier to use, but many people are unaware of the tools that are available right now.
  • Seriously, Folks, who can do anything for a mere $20M today, let alone change the entire programming paradigm of the last 65 years?
    • by geekoid ( 135745 )
      The current programming paradigm isn't 65 years old. In fact, that's been changed. I am of course talking about newer, widely used systems. Do you know how programming was done 25 years ago?

      I do, and it's not even remotely the same.
  • by EEPROMS ( 889169 ) on Thursday April 03, 2008 @05:28PM (#22957536)
    I've got a dual core machine sitting on the desk before me and the CPU rarely goes above 20% load. The strange thing though is it is still slow when loading programs, and this is due to the hard disk (SATA II) being the bottleneck on my system. I could fix this to some degree with a RAID setup, but the real question is why isn't this being looked at more closely?
    • due to the hard disk (SATA II) being the bottleneck on my system

      And people tried to make fun of Vista using free RAM for advanced HD caching... Weird how Microsoft was on top of that, and even stranger is the Linux project to mimic the intelligent caching of SuperFetch, all the while Slashdot people were making fun of it, until a few people realized how beneficial it was to overall performance.

      BTW HD bottleneck technologies are being looked at more closely, as on a Vista system with I/O priority and intellig
