Using GPUs For General-Purpose Computing
Paul Tinsley writes "After seeing the press releases from both Nvidia and ATI announcing their next generation video card offerings, it got me thinking about what else could be done with that raw processing power. These new cards weigh in with transistor counts of 220 and 160 million (respectively), with the P4 EE core at a count of 29 million. What could my video card be doing for me while I am not playing the latest 3d games? A quick search brought me to some preliminary work done at the University of Washington with a GeForce4 Ti 4600 pitted against a 1.5GHz P4. My favorite excerpt from the paper:
'For a 1500x1500 matrix, the GPU outperforms the CPU by a factor of 3.2.' A PDF of the paper is available here."
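A rough feel for the CPU side of that comparison is easy to get. The sketch below is not the paper's code; it is a hypothetical NumPy baseline that times a 1500x1500 single-precision multiply (single precision because GPUs of that era handled 32-bit floats at most):

```python
import time
import numpy as np

# Hypothetical re-run of the CPU half of the comparison; the paper's own
# GPU-side code (vertex/fragment programs) is not reproduced here.
n = 1500
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

# The naive algorithm does roughly 2*n^3 floating-point operations.
gflops = 2 * n ** 3 / elapsed / 1e9
print(f"{n}x{n} matmul: {elapsed:.3f} s ({gflops:.2f} GFLOP/s)")
```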
video stuff (Score:5, Interesting)
Re:Not the Point (Score:4, Interesting)
No, it's like using your pop-up camper for storage space when you're using it on holidays.
DSP using GPUs (Score:3, Interesting)
Not just the GPU : the RAM (Score:5, Interesting)
Wow (Score:5, Interesting)
The CPU can do six billion instructions a second, the GPU can do 18 billion, and every last cycle is being used to stuff a 40MB texture into memory faster. What a waste. Yeah, the walls are even more green and slimy. Whoop-de-fucking-do.
Wouldn't it be great if all that processing power could be used for something other than yet another graphics demo?
Like, maybe some new and innovative gameplay?
http://www.gpgpu.org/ is a great resource (Score:4, Interesting)
Not so... (Score:4, Interesting)
Apple's Newton had no CPU, only a GPU that was more than adequate.
Ideas like these are good in general. I'd like to see the industry move away from the CPU-as-chief status quo. Amigas were years ahead of their time in large part because the emphasis wasn't as much on central processing. The CPU did only what it was supposed to do -- hand out instructions to the gfx and audio subsystems.
Hardly using a "motorcycle to tow a pop-up camper." If anything, the conventional wisdom is, "when all you have is a hammer, everything looks like a nail."
So when do we get unified memory? (Score:2, Interesting)
I can see it now.... (Score:3, Interesting)
Oh well, the idea was good while it lasted.
Imagine... (Score:4, Interesting)
seriously, we have a 16-node Beowulf cluster and each node has an unnecessarily good graphics card in it. A lot of the calculations are matrix-based, e.g. several variables, each 1 x thousands (1D) or hundreds x hundreds (2D).
how feasible and worthwhile do you think it would be to tap into the extra processing power?
Altivec (Score:2, Interesting)
Re:As has been said many time before ... (Score:5, Interesting)
Documentation (Score:3, Interesting)
Frogger (Score:5, Interesting)
SETI (Score:2, Interesting)
Re:Maybe that's the answer... (Score:3, Interesting)
My understanding is that they used GCC.
Further, "Another said that some version of Linux had to be used to compare apples to apples. Well, MacOS X isn't Linux, and the desktop standard for x86 machines is Windows (not that using a properly optimized Linux bothered the Opterons very much). You want to know what machine is fastest, you test in their native environment."
Oh, silly me. Processors are so obviously made to run only one operating system!
I'll take this site's info with a grain of salt.
Re:Imagine... (Score:3, Interesting)
All I can suggest is to download the Brook [stanford.edu] libraries and try them out. See if they help, and see if the results are accurate enough. And yes, Fortran can be used if you can bind it - Intel's compiler suite worked for me.
Re:Link to previous discussion on same/similar sub (Score:5, Interesting)
However, it seems a few organisations have actually beaten us to it.
Apple, for example, uses the 3d aspect of the GPU to accelerate its 2d compositing system with quartz extreme [apple.com]. Microsoft, as usual, announced the feature after Apple shipped it [microsoft.com], and with any luck Windows users might have it by 2007.
-- james
Unused computing Power? (Score:1, Interesting)
The graphics card has a lot of computing power, nearly equal to the computer's main processor if not more, that sits unused whenever no game or video is being played, right?
Is there no way to tap into this power?
Perhaps it could be used for the main display on the computer (I think you guys call it GUI?)?
What else could it be used for?
Could Linux be modified to make use of this power?
Just a know-nothing nobody with questions.
J
Let me check my notes... (Score:4, Interesting)
Alternative use (Score:3, Interesting)
Re:Maybe that's the answer... (Score:3, Interesting)
The compilers that seem best optimized for the G5 are the new IBM XL compilers, released at the beginning of the year.
http://forums.macnn.com/showthread.php?s=&threa
There don't seem to be many benchmarks done using them yet, but all information points to significant gains in performance when using the IBM compiler versus GCC (not surprising, since IBM built the chip). The only benchmark I can find is from a German site:
http://www.heise.de/ct/Redaktion/as/spec/ct0408
I don't believe the G5 is indeed the "fastest" personal computer in the world as Apple claims, but it is certainly comparable to the best in the x86 world. Not to mention it is a very new architecture, and there are still plenty of optimizations that can be made to make it faster. But the claim that GCC is fully optimized for the G5, and that Apple was using it to justify being the "fastest", is incorrect. Apple used a compiler that is arguably good, but certainly not excellent, for the chip.
As for comparing Mac OS X to Linux rather than Windows, I think the comparison is valid considering the market Apple has been targeting recently. Apple seems to have backed off from wooing the MS crowd, focusing instead on firms that use UNIX workstations. Apple wants these companies to switch to the PowerMac rather than to an x86/Linux platform. This is highlighted by its advocacy of using OS X for biotech and film/video effects production. I remember one of its earlier OS X ads even told the reader to send all of their old UNIX boxes to "/dev/null" - or something like that.
-B
Dual Core (Score:5, Interesting)
Even with the ATI 800XT, 1600x1200 can dip below 30FPS with AA/AF on higher settings. Still a ways to go for that full virtual reality look.
Re:Dual Core (Score:3, Interesting)
There were dual ATI GPUs, or Matrox, or even the old Voodoo2 SLI. It seems you can increase speed with more cores.
Commodore 64 (Score:5, Interesting)
This concept was being used back in 1988. The Commodore 64 (1 MHz 6510, a 6502-like microprocessor) had a peripheral 5.25" disk drive called the 1541, which itself had a 1 MHz 6502-family CPU in it, connected via a serial link.
It became common practice to introduce fast loaders: these were resident partially in the C64 and partially in the 1541, effectively replacing the 1541's limited firmware.
Demo programmers, however, figured out how to utilise the 1541 further: one particular demo involved uploading a program to the 1541 at start; then, upon every screen redraw, uploading vectors to the 1541, on which the 1541 would perform calculations in parallel with the C64; then, at the end of the screen, the C64 would fetch the results from the 1541 and incorporate them into the next frame.
Equally, a GPU provides a similar capability if used this way.
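The upload-compute-fetch loop described above maps onto any coprocessor. A toy sketch, with a Python worker thread standing in for the drive's CPU and a made-up `rotate` function standing in for the program uploaded to the 1541:

```python
import math
from concurrent.futures import ThreadPoolExecutor

# 'rotate' is a hypothetical stand-in for the program uploaded to the 1541
# once at start: given a batch of 2D vectors, rotate them by an angle.
def rotate(vectors, angle):
    s, c = math.sin(angle), math.cos(angle)
    return [(x * c - y * s, x * s + y * c) for x, y in vectors]

coprocessor = ThreadPoolExecutor(max_workers=1)  # the lone CPU in the drive
vectors = [(1.0, 0.0), (0.0, 1.0)]

for frame in range(3):
    # "Upload" this frame's vectors; the coprocessor crunches them while
    # the main CPU would be busy drawing the current screen.
    pending = coprocessor.submit(rotate, vectors, 0.1)
    # ... main CPU renders the current frame here, in parallel ...
    vectors = pending.result()  # fetch results at the end of the screen

coprocessor.shutdown()
```

After three frames the vectors have been rotated by 0.3 radians in total, all computed off the "main" thread.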
Expand this thinking! (Score:4, Interesting)
You're absolutely correct that these "game snobs" are looking at the past through rose-colored graphics, forgetting all of the stinkers of yesteryear. However, it's not just games where this applies. How many times have you heard people complain about how bad movies are now, or music, or books? It's exactly the same phenomenon. When your grandfather tells you how much better things were "back in the day", it's for exactly the same reason. He's looking back at all the good things, while ignoring all of the bad.
Face it, everything mostly sucks. It always has, and it always will. There will always be some gems that really stand out, and those will be what are remembered when people fondly look back on "the old days". Get over it.
Re:and a sourceforge project too (Score:2, Interesting)
So? How do you program this? (Score:1, Interesting)
The 'acceleration' layer (DirectX, Xv?) is not even exposed to programmers. The programmer asks DirectX or SDL to draw a polygon; DirectX or SDL then invokes the acceleration features of the card. But we do not have direct access to those features. They are not even documented.
Will the kernel provide those facilities?
Because it would be stupid to go through SDL to perform an FFT with the video card's capabilities.
what's really needed (Score:3, Interesting)
What's really needed is to couple the GPU and CPU in such a way that the GPU actually runs a very low level O/S, like an L4Ka style kernel (http://l4ka.org/), and becomes "just another" MP resource.
Then, on top of this low level, actually runs the UI graphics driver and so on. Other tasks can also run, but ultimately the priority is given to the UI driver.
Then, the O/S on the CPU needs to be able to know generally how to distribute tasks across to the GPU. Fairly standard for a tightly coupled MP that has shared bus memory.
Why do I say this? Because the result is
(a) if you're running an especially high performance application, the GPU runs full throttle dedicated to rendering/etc and acts as per normal;
(b) if you're not, e.g. when running Office, engineering, or other compute-intensive tasks (e.g. recoding video without displaying it), then the GPU is just another multiprocessor resource to soak up cycles.
Then CPU/GPU becomes a seamless computing resource. The fantastic benefit of this is that if the O/S is designed properly, it could allow simply buying and plugging in additional PCI cards (well, PCI is probably not good because of low speed; perhaps AGP?) that are simply "additional processors" - then you get a relatively cheap way of putting more MP into your machine.
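One minimal way to picture "priority is given to the UI driver" is a priority queue in front of the GPU's run queue. This is only an illustration of the scheduling idea, not any real driver interface, and the task names are made up:

```python
import heapq

# Toy sketch: the GPU is "just another" processor, but its run queue
# always drains UI/rendering work before soaked-up general-purpose tasks.
UI, BATCH = 0, 1  # lower number = higher priority

gpu_queue = []

def submit(priority, name):
    heapq.heappush(gpu_queue, (priority, name))

submit(BATCH, "recode video chunk 1")
submit(UI, "render desktop frame")
submit(BATCH, "recode video chunk 2")

order = [heapq.heappop(gpu_queue)[1] for _ in range(len(gpu_queue))]
print(order)  # UI work first, batch work soaks up the leftover cycles
```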
Re:Maybe time for a new generation of math-process (Score:3, Interesting)
Remember the co-processors? Well, actually I don't (I'm a tad too young). But I know about them.
Dig deeper. 8087 FPUs were nice, though they ran hot enough to cook on, but the idea had existed for 15 or more years before they appeared. Try looking into the old DEC PDP-11 archives. There you'll find DEC's own "CIS", or "commercial instruction set", which was a set of boards (later an add-on chip) that added string, character and BCD math instructions. DEC also had an FPU card set that implemented a 64-bit FPU out of AMD 2901 bit-slice processors. Many low-budget not-quite-supercomputers were really add-on hardware boxes attached to general-purpose computers. Basically add-on stunt boxes.
Damn... I'm too young to feel this old! Most of this stuff was in play when I was in grade school.
Temkin
compiler? (Score:1, Interesting)
gcc --with-gpu somebigprog.c
Very bad article (Score:3, Interesting)
Just look at the matrix multiplication case. Look at the graph and see that 1000x1000 takes 30 seconds on the CPU and 7 seconds on the GPU. Let's translate that to millions of operations per second: CPU -> 33 Mop/s, GPU -> 143 Mop/s. Matrix multiplication has cubic complexity, so for the CPU: 1000 * 1000 * 1000 / 30 seconds / 1000000 = 33 Mop/s.
Now think a while: 33 million operations per second on a 1.5 GHz Pentium 4 with SSE (I assume there is no SSE2). The Pentium 4 has a fused multiply-add unit which lets it do two ops per clock. So we get 3 billion ops per second peak performance! What they measure implies the CPU runs almost 100 times below peak for matrix multiply. That is unlikely. You can get 2/3 of peak on a Pentium 4. Just look at the ATLAS [sourceforge.net] or FLAME [utexas.edu] projects. If you use one of these you can multiply 1000x1000 matrices in half a second: 14 times faster than the quoted GPU.
Another thing is the floating-point arithmetic. The GPU uses 32-bit numbers (at most). This is too small for most scientific codes. The CPU can do 64 bits. Also, if you use 32 bits on the CPU it will be 4 times as fast as 64-bit (SSE). So in 32-bit mode, a Pentium 4 is 28 times faster than the quoted GPU.
Finally, the length of the program. The reason matrix multiply was chosen is because it can be encoded in very short code - three simple loops. This fits well within the 128-instruction vertex program length limit, so you don't have to keep reloading the code. More challenging codes will exceed the allowed vertex program length. The three-loop matrix multiply implementation stresses memory bandwidth, and the CPU has MB/s where the GPU has GB/s. No wonder the GPU wins. But I could guess that without running any tests.
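The back-of-the-envelope numbers in this comment are easy to verify, assuming the article's quoted timings of 30 s (CPU) and 7 s (GPU) for a 1000x1000 multiply:

```python
# Sanity-check of the Mop/s arithmetic, using the article's quoted timings.
n = 1000
ops = n ** 3  # multiply-adds in the naive triple-loop algorithm

cpu_mops = ops / 30 / 1e6  # ~33 Mop/s measured on the CPU
gpu_mops = ops / 7 / 1e6   # ~143 Mop/s measured on the GPU

# 1.5 GHz P4, 2 ops per clock via multiply-add -> 3000 Mop/s peak
peak_mops = 1.5e3 * 2

print(f"CPU: {cpu_mops:.0f} Mop/s, GPU: {gpu_mops:.0f} Mop/s")
print(f"CPU peak is {peak_mops / cpu_mops:.0f}x the measured rate")
```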
Re:Let me check my notes... (Score:4, Interesting)
Notice that they're quick to point out the problem isn't likely a hardware issue. There should be plenty of bandwidth on the AGP bus, but graphics chip makers don't seem to have written their drivers to handle transfers from AGP cards to main memory properly.
Then they run some tests and conclude:
That means even if you can render high-quality images at 30 frames per second, you won't be able to get them out of the graphics card at anything near that rate.
Three questions (Score:3, Interesting)
2. Has anyone tried something similar to what Quartz Extreme does but for non-graphical tasks?
3. How come GPU makers are not trying to make a CPU by themselves?
Folding@Home is actually working on this... (Score:4, Interesting)
Re:audio stuff (Score:1, Interesting)
Re:The day is saved (Score:5, Interesting)
Re:The day is saved (Score:3, Interesting)
As for the positioning and cooling, well, there is one in there right now, and more than likely there is enough space even for more than one.
Also, I'm not saying let's not give the sucker a cache. It would more than likely need a cache and dedicated memory of its own to operate effectively, just like any processor.
When I was about 15 and first started reading about the first GPUs, all I could think was, "Boy, is this a step in the wrong direction." I believe in hardware whose purposes are cleanly separated. Well, the GPU thing has had its heyday, so why not start making general-purpose coprocessors now, so every application (well, a lot of applications) can get a nice boost. The instructions already resemble a normal processor's anyway, so why not.
Re:Not just the GPU : the RAM (Score:3, Interesting)
Is it just me, or should the cards have maybe 64 or 128MB of high speed memory, and then a couple of DIMM slots that take ordinary DDR SDRAM? That would still be pretty fast stuff, especially if the cards had dual-channel memory controllers, and plenty fast enough for textures. The card could cache the most-used textures in whatever video memory was left after drawing screens.