Graphics Software

Using GPUs For General-Purpose Computing

Paul Tinsley writes "After seeing the press releases from both Nvidia and ATI announcing their next generation video card offerings, it got me thinking about what else could be done with that raw processing power. These new cards weigh in with transistor counts of 220 and 160 million (respectively), with the P4 EE core at a count of 29 million. What could my video card be doing for me while I am not playing the latest 3D games? A quick search brought me to some preliminary work done at the University of Washington with a GeForce4 Ti 4600 pitted against a 1.5GHz P4. My favorite excerpt from the paper: 'For a 1500x1500 matrix, the GPU outperforms the CPU by a factor of 3.2.' A PDF of the paper is available here."
This discussion has been archived. No new comments can be posted.

  • video stuff (Score:5, Interesting)

    by rexguo ( 555504 ) on Sunday May 09, 2004 @02:54AM (#9098538) Homepage
    At my work place, I'm looking into using the GPUs to do video analysis. Things like cut-scene detection, generating multi-resolution versions of a video frame, applying video effects and other proprietary technologies that were previously done on the CPU. The combination of pixel shaders and floating-point buffers really makes GPUs a Super-SIMD machine if you know how to exploit it.
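As an aside for readers wondering what "cut-scene detection" amounts to: the core operation is a per-pixel difference between consecutive frames, which maps naturally onto a single pixel-shader pass. A minimal CPU-side sketch of the idea (function names and the threshold are illustrative, not the poster's actual code):

```python
def frame_difference(frame_a, frame_b):
    # Mean absolute per-pixel difference. Each pixel is independent,
    # which is what makes this a natural fit for a pixel shader.
    assert len(frame_a) == len(frame_b)
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def detect_cuts(frames, threshold=60.0):
    # Report frame indices where the image changes abruptly (a scene cut).
    return [i for i in range(1, len(frames))
            if frame_difference(frames[i - 1], frames[i]) > threshold]

# Frames as flat lists of 0-255 grey values: two dark frames, then an
# abrupt switch to a bright scene.
dark, bright = [10] * 16, [200] * 16
print(detect_cuts([dark, dark, bright, bright]))  # [2]
```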
  • Re:Not the Point (Score:4, Interesting)

    by JonoPlop ( 626887 ) <me.JonathonMah@com> on Sunday May 09, 2004 @02:59AM (#9098554) Homepage
    The whole point of graphic cards is that they have a dedicated purpose. Using the cards for anything that is general purpose is like using a motorcycle to tow a pop-up camper.

    No, it's like using your pop-up camper for storage space when you're using it on holidays.

  • DSP using GPUs (Score:3, Interesting)

    by crushinghellhammer ( 727226 ) on Sunday May 09, 2004 @03:01AM (#9098563)
    Does anybody know of pointers to papers/research pertaining to using GPUs to perform digital signal processing for, say, real-time audio? Replies would be much appreciated.
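One pointer on the shape of the problem: much of real-time audio DSP is FIR filtering, and each output sample is an independent dot product, exactly the data-parallel structure a GPU exploits. A plain-Python sketch of the computation (illustrative only, not a GPU implementation):

```python
def fir_filter(signal, taps):
    # Each output sample is an independent dot product of the taps with a
    # window of the input, so all samples could be computed in parallel.
    n = len(taps)
    padded = [0.0] * (n - 1) + list(signal)
    return [sum(taps[k] * padded[i + n - 1 - k] for k in range(n))
            for i in range(len(signal))]

# A 2-tap moving average smooths the signal.
print(fir_filter([1.0, 3.0, 5.0], [0.5, 0.5]))  # [0.5, 2.0, 4.0]
```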
  • by ratboot ( 721595 ) on Sunday May 09, 2004 @03:10AM (#9098588)
    What's interesting about new video cards is their memory capacity, 128 or 256 MB, and that on some new cards this memory is accessible at 900 MHz with a 256-bit data path (which is a lot faster than a CPU with DDR400 installed).
  • Wow (Score:5, Interesting)

    by cubicledrone ( 681598 ) on Sunday May 09, 2004 @03:10AM (#9098589)
    All that processing power, and the latest games still run at about 22 frames per second, if that.

    The CPU can do six billion instructions a second, the GPU can do 18 billion, and every last cycle is being used to stuff a 40MB texture into memory faster. What a waste. Yeah, the walls are even more green and slimy. Whoop-de-fucking-do.

    Wouldn't it be great if all that processing power could be used for something other than yet-another-graphics-demo?

    Like, maybe some new and innovative gameplay?
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) * on Sunday May 09, 2004 @03:12AM (#9098594)
    Comment removed based on user account deletion
  • by aancsiid ( 135857 ) on Sunday May 09, 2004 @03:21AM (#9098628) Homepage
    http://www.gpgpu.org/ [gpgpu.org] is a great resource for general purpose graphics processor usage.
  • Not so... (Score:4, Interesting)

    by oboylet ( 660310 ) on Sunday May 09, 2004 @03:27AM (#9098642)
    High-powered GPUs can make for really good general-purpose devices.

    Apple's Newton had no CPU, only a GPU that was more than adequate.

    Ideas like these are good in general. I'd like to see the industry move away from the CPU-as-chief status quo. Amigas were years ahead of their time in large part because the emphasis wasn't as much on central processing. The CPU did only what it was supposed to do -- hand out instructions to the gfx and audio subsystems.

    Hardly using a "motorcycle to tow a pop-up camper." If anything, the conventional wisdom is, "when all you have is a hammer, everything looks like a nail."

  • by Anonymous Coward on Sunday May 09, 2004 @03:28AM (#9098646)
    Many of the problems stated in using a GPU for non-graphics tasks would be implicitly solved if the GPU and CPU shared memory. While this would slightly slow down the GPU's memory access, in 3 years, I don't think that would be an issue. Especially compared to the benefits of having only one memory pool.
  • I can see it now.... (Score:3, Interesting)

    by TypoNAM ( 695420 ) on Sunday May 09, 2004 @03:29AM (#9098649)
    ...Several indies and companies figure out how to use the powerful GPUs in an efficient manner that would benefit everyone who uses computers on a daily basis, improving the usefulness of the computer and making it the best thing in the world again. Then some greedy bastard comes along flashing his granted patent from the U.S. Patent Office, which makes us all screwed...

    Ohh well the idea was good while it lasted. ;)
  • Imagine... (Score:4, Interesting)

    by rokzy ( 687636 ) on Sunday May 09, 2004 @03:32AM (#9098661)
    a beowulf cluster of them.

    seriously, we have a 16 node beowulf cluster and each node has an unnecessarily good graphics card in it. a lot of the calculations are matrix-based e.g. several variables each 1xthousands (1D) or hundredsxhundreds (2D).

    how feasible and worthwhile do you think it would be to tap into the extra processing power?
  • Altivec (Score:2, Interesting)

    by ensignyu ( 417022 ) on Sunday May 09, 2004 @03:41AM (#9098681)
    I'm curious how GPUs stack up against the Altivec engine in G4/G5s.
  • by lazy_arabica ( 750133 ) on Sunday May 09, 2004 @03:50AM (#9098705) Homepage
    GPUs are very fast ... at performing vector and matrix calculations. This is the whole point. If general computing CPUs were capable of doing vector or matrix calcs very efficiently, we would probably not have GPUs.
    Yes. But 3D graphics are not the only use of these mathematical objects; I wonder if it would be possible to use a GPU to perform video encoding or digital sound manipulation at a higher speed, as both operations require matrices. I'm also sure they could take advantage of these processors' vector manipulation capabilities.
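On the video-encoding speculation: the transform step of the codecs of that era (JPEG/MPEG-style) is a 2D DCT, and it reduces to two matrix multiplies per block, precisely the operation GPUs are built for. A small sketch in plain Python (a 4x4 block for brevity; real codecs use 8x8):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(row) for row in zip(*M)]

def dct_matrix(n):
    # Orthonormal DCT-II basis; the 2-D DCT of a block is C * block * C^T,
    # i.e. two matrix multiplies, a GPU-friendly shape.
    return [[math.sqrt((1 if i == 0 else 2) / n) *
             math.cos((2 * j + 1) * i * math.pi / (2 * n))
             for j in range(n)] for i in range(n)]

n = 4
C = dct_matrix(n)
block = [[128.0] * n for _ in range(n)]          # a flat grey image block
coeffs = matmul(matmul(C, block), transpose(C))  # 2-D DCT

# A constant block concentrates all its energy in the DC coefficient:
print(round(coeffs[0][0]), round(coeffs[1][1]))  # 512 0
```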
  • Documentation (Score:3, Interesting)

    by Detritus ( 11846 ) on Sunday May 09, 2004 @03:51AM (#9098708) Homepage
    Do any of the video chip manufacturers make free and complete documentation available for their GPUs? Everything that I have read in the past has said that they are encumbered with NDAs and claims of trade secrets. I'd prefer not to waste my time dealing with companies that treat their customers as potential enemies.
  • Frogger (Score:5, Interesting)

    by BiggerIsBetter ( 682164 ) on Sunday May 09, 2004 @03:52AM (#9098712)
    Some dude wrote Frogger almost entirely in pixel shaders. http://www.beyond3d.com/articles/shadercomp/results/ [beyond3d.com] (2nd from the bottom).
  • SETI (Score:2, Interesting)

    by ryanw ( 131814 ) on Sunday May 09, 2004 @04:04AM (#9098735)
    I wonder what SETI could do with the extra cycles, in parallel with the CPU. Is it possible to get 2x or 3x the data-crunching for SETI clients?
  • by trg83 ( 555416 ) on Sunday May 09, 2004 @04:11AM (#9098758)
    From the link you mentioned: "while Apple used a compiler you've never heard of (at least in the x86 world)."

    My understanding is that they used GCC.

    Further, "Another said that some version of Linux had to be used to compare apples to apples. Well, MacOS X isn't Linux, and the desktop standard for x86 machines is Windows (not that using a properly optimized Linux bothered the Opterons very much). You want to know what machine is fastest, you test in their native environment."

    Oh, silly me. Processors are so obviously made to run only one operating system!

    I'll take this site's info with a grain of salt.
  • Re:Imagine... (Score:3, Interesting)

    by BiggerIsBetter ( 682164 ) on Sunday May 09, 2004 @04:12AM (#9098760)
    It's a good idea if your datasets take a long enough time to process. You could run 6 or so cards (maybe 1 AGP super fast, 5 PCI slowish (eg FX5200)) in your machine and send a dataset to each GPU and the main CPU, then get the results back. The trick is to keep them working without blowing all your bandwidth or PSU. Also depends on the resolution required, because the GPU is only 32 bits FP, compared to 80 bits for the CPU.

    All I can suggest is download the Brook [stanford.edu] libraries and try it out. See if it helps, and see if the results are accurate enough. And yes, Fortran can be used if you can bind it - Intel's compiler suite worked for me.
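The 32-bit-precision caveat is easy to probe on the CPU before porting anything: round-trip your data through IEEE-754 single precision and see whether the answers still hold up. A sketch using only the standard `struct` module:

```python
import struct

def to_float32(x):
    # Round-trip a Python double through IEEE-754 single precision,
    # mimicking the 32-bit-only arithmetic of these GPUs.
    return struct.unpack('f', struct.pack('f', x))[0]

x = 16777217.0  # 2**24 + 1: representable as a double, but not as a single
print(to_float32(x) == x)                    # False: the +1 is silently lost
print(to_float32(16777216.0) == 16777216.0)  # True: 2**24 still fits exactly
```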
  • There's some good stuff in there.

    However, it seems a few organisations have actually beaten us to it.

    Apple, for example, uses the 3d aspect of the GPU to accelerate its 2d compositing system with quartz extreme [apple.com]. Microsoft, as usual, announced the feature after Apple shipped it [microsoft.com], and with any luck Windows users might have it by 2007.

    -- james
  • by JLang22 ( 774790 ) on Sunday May 09, 2004 @04:26AM (#9098810)
    I am a novice in a lot of these discussions so I don't post much. Let me see if I understand this:

    The graphics card has a lot of computing power, nearly equal to the main processor chip in the computer if not more, that is not being used when there is no game or video being played, right?

    Is there no way to tap into this power?

    Perhaps it could be used for the main display on the computer (I think you guys call it GUI?)?

    What else could it be used for?

    Could Linux be modified to make use of this power?

    Just a know nothing, nobody with questions.
    J
  • by Impeesa ( 763920 ) on Sunday May 09, 2004 @04:58AM (#9098875)
    I did a paper on the topic of general-purpose GPU programming for my parallel computing course just this last semester here, interestingly enough. I believe our research indicated that even a single PCI card was so badly throttled by the bus throughput that it was basically useless. AGP does a lot better taking data in, but it's still pretty costly sending data back to the CPU. I have a feeling your proposed setup will be a whole lot more feasible if/when PCI Express [pcisig.com] becomes mainstream.
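The readback cost can be folded into a crude break-even test: offloading pays only when GPU compute time plus bus-transfer time beats CPU compute time. All rates below are made-up round numbers for illustration:

```python
def worth_offloading(flops, cpu_flops_per_s, gpu_flops_per_s,
                     bytes_moved, bus_bytes_per_s):
    # Offloading wins only if GPU compute time plus transfer time
    # is less than just doing the work on the CPU.
    cpu_time = flops / cpu_flops_per_s
    gpu_time = flops / gpu_flops_per_s + bytes_moved / bus_bytes_per_s
    return gpu_time < cpu_time

# Big matrix multiply: lots of compute per byte moved, so the bus cost amortizes.
print(worth_offloading(2e9, 1e9, 1e10, 12e6, 100e6))  # True
# Small job dominated by readback over a slow bus: keep it on the CPU.
print(worth_offloading(2e6, 1e9, 1e10, 12e6, 100e6))  # False
```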
  • Alternative use (Score:3, Interesting)

    by Zog The Undeniable ( 632031 ) on Sunday May 09, 2004 @05:01AM (#9098880)
    Remember the story about PS2's being used in Iraqi WMDs? No doubt the next "outlaw state" will be accused of using GeForce Ti4600's to manage fast breeder reactors.
  • by phatsharpie ( 674132 ) on Sunday May 09, 2004 @05:09AM (#9098893)
    Actually, GCC may have optimization for the G5, but it is far from being optimal:

    The compiler that seems to be best/fully optimized for the G5 is the new IBM XL compilers, released at the beginning of the year.

    http://forums.macnn.com/showthread.php?s=&threadid=197118

    There don't seem to be many benchmarks done using it yet, but all information points to significant gains in performance when using the IBM compiler versus GCC (not surprising, since IBM built the chip). The only benchmark I can find is from a German site:

    http://www.heise.de/ct/Redaktion/as/spec/ct0408230/

    I don't believe the G5 is indeed the "fastest" personal computer in the world as claimed by Apple, but it certainly is comparable to the best in the x86 world. Not to mention it is a very new architecture, and there are still plenty of optimizations that can be made to make it faster. But to claim that GCC is fully optimized for the G5, and that Apple was using it to justify its claim of being the "fastest", is incorrect. It used a compiler that is arguably good, but certainly not excellent, for it.

    In regards to comparing Mac OS X to Linux rather than Windows, I think the comparison is valid considering the market Apple has been targeting recently. Apple seems to have backed off from wooing the MS crowd, focusing instead on firms that use UNIX workstations. Apple wants these companies to switch to the PowerMac rather than to an x86/Linux platform. This is highlighted by their advocacy of using OS X for biotech and film/video effects production. I remember one of their earlier OS X ads even told the reader to send all of their old UNIX boxes to "/dev/null" - or something like that.

    -B
  • Dual Core (Score:5, Interesting)

    by BrookHarty ( 9119 ) on Sunday May 09, 2004 @05:16AM (#9098907) Journal
    With dual-core CPUs going to be the norm, why not a dual-core GPU for even faster gfx cards? With everyone wanting 16x antialiasing at 1600x1200 to get over 100fps, it's gonna take some very powerful GPUs (or some dual cores).

    Even with the ATI 800XT, 1600x1200 can dip below 30FPS with AA/AF on higher settings. Still a ways to go for that full virtual reality look.
  • Re:Dual Core (Score:3, Interesting)

    by BrookHarty ( 9119 ) on Sunday May 09, 2004 @06:00AM (#9098994) Journal
    Video cards are already able to run many things in parallel- they are beyond dual-core.

    There were dual ATI GPUs, and Matrox, or even the old Voodoo2 SLI. Seems you can increase speed with more cores.
  • Commodore 64 (Score:5, Interesting)

    by curator_thew ( 778098 ) on Sunday May 09, 2004 @06:01AM (#9098995)

    This concept was being used back in 1988. The Commodore 64 (1 MHz 6510, a 6502-like microprocessor) had a peripheral 5.25" disk drive called the 1541, which itself had a 1 MHz 6502 CPU in it, connected via a serial link.

    It became common practice to introduce fast loaders: these were partially resident in the C64 and partially in the 1541, effectively replacing the 1541's limited firmware.

    However, demo programmers figured out how to utilise the 1541 further: one particular demo involved uploading a program to the 1541 at the start; then, on every screen redraw, uploading vectors to the 1541, which would perform calculations on them in parallel with the C64; at the end of the screen, the C64 would fetch the results from the 1541 and incorporate them into the next frame.

    Equally, a GPU provides a similar capability if used this way.
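The C64/1541 trick is the same offload pipeline GPGPU uses: upload a program once, stream work to the coprocessor, overlap its computation with your own, and fetch results at frame end. A toy model with a thread standing in for the 1541 (the squared-sum is an arbitrary stand-in calculation):

```python
import threading, queue

def drive(jobs, results):
    # The "1541": runs its uploaded program, crunching vectors
    # while the "C64" keeps doing its own work.
    while True:
        vec = jobs.get()
        if vec is None:       # shutdown sentinel
            break
        results.put(sum(v * v for v in vec))

jobs, results = queue.Queue(), queue.Queue()
worker = threading.Thread(target=drive, args=(jobs, results))
worker.start()                # "upload the program" once, at the start

jobs.put([1, 2, 3])           # this frame's vectors go to the peripheral
local = 10 * 10               # meanwhile the main CPU does its own work
offloaded = results.get()     # fetch the peripheral's answer at frame end

jobs.put(None)                # tell the worker to stop
worker.join()
print(local, offloaded)       # 100 14
```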

  • by Osty ( 16825 ) on Sunday May 09, 2004 @06:06AM (#9099002)

    You're absolutely correct that these "game snobs" are looking at the past through rose-colored graphics, forgetting all of the stinkers of yesteryear. However, it's not just games where this applies. How many times have you heard people complain about how bad movies are now, or music, or books? It's exactly the same phenomenon. When your grandfather tells you how much better things were "back in the day", it's for exactly the same reason. He's looking back at all the good things, while ignoring all of the bad.


    Face it, everything mostly sucks. It always has, and it always will. There will always be some gems that really stand out, and those will be what are remembered when people fondly look back on "the old days". Get over it.

  • by WinterpegCanuck ( 731998 ) on Sunday May 09, 2004 @06:11AM (#9099012)
    What about a general abstraction layer at the OS level? I am by no means at that programming level, but could you not have calculations that are proven to run well on GPUs (ints maybe?) be redirected by the OS, and the rest just sent to the CPU as normal? To me this would be an advantage for all programs running on the system (except the games that want exclusive GPU use) instead of only those coded to take advantage of it. I know a few programs in the oil industry that could use all the bogomips they could get.
  • by Anonymous Coward on Sunday May 09, 2004 @06:50AM (#9099102)
    Are there interrupts? Available to userland?

    The 'acceleration' layer (DirectX, Xv?) is not even available to programmers. The programmer asks DirectX or SDL to draw a polygon; DirectX or SDL then invokes the acceleration features of the card. But we do not have direct access to those features. They are not even documented.

    Will the kernel provide those facilities?
    Because it would be stupid to go through SDL to perform FFT with the video card's capabilities.
  • what's really needed (Score:3, Interesting)

    by curator_thew ( 778098 ) on Sunday May 09, 2004 @08:11AM (#9099279)

    What's really needed is to couple the GPU and CPU in such a way that the GPU actually runs a very low level O/S, like an L4Ka style kernel (http://l4ka.org/), and becomes "just another" MP resource.

    Then, on top of this low level, actually runs the UI graphics driver and so on. Other tasks can also run, but ultimately the priority is given to the UI driver.

    Then, the O/S on the CPU needs to be able to know generally how to distribute tasks across to the GPU. Fairly standard for a tightly coupled MP that has shared bus memory.

    Why do I say this? Because the result is

    (a) if you're using an especially high performance application, the GUI runs full throttle dedicated to rendering/etc and acts as per normal;

    (b) if you're not, e.g. when running Office, or engineering or other compute-intensive tasks (e.g. recoding video without displaying the video), then the GPU is just another multiprocessor resource to soak up cycles.

    Then, CPU/GPU is just a seamless computing resource. The fantastic benefit of this is that if the O/S is designed properly, then it could allow simply buying/plugging in additional PCI (well, PCI probably not good because of low speed, perhaps AGP?) cards that are simply "additional processors" - then you get a relatively cheaper way of putting more MP into your machine.

  • by Temkin ( 112574 ) on Sunday May 09, 2004 @08:35AM (#9099328)

    Remember the co-processors? Well, actually I don't (I'm a tad too young). But I know about them.



    Dig deeper. 8087 FPUs were nice, though they ran hot enough to cook on, but the idea had existed for 15 or more years before they appeared. Try looking into the old DEC PDP-11 archives. There you'll find DEC's own "CIS" or "commercial instruction set", which was a set of boards (later an add-on chip) that added string, character and BCD math instructions. DEC also had an FPU card set that implemented a 64-bit FPU out of AMD 2901 bit-slice processors. Many low-budget not-quite-supercomputers were really add-on hardware boxes attached to general-purpose computers. Basically add-on stunt boxes.


    Damn... I'm too young to feel this old! Most of this stuff was in play when I was in grade school.


    Temkin

  • compiler? (Score:1, Interesting)

    by Anonymous Coward on Sunday May 09, 2004 @08:40AM (#9099339)
    Could this be integrated with a compiler, so that the compiler could elect to use the GPU? That would be really cool:

    gcc --with-gpu somebigprog.c

  • Very bad article (Score:3, Interesting)

    by Slash.ter ( 731367 ) on Sunday May 09, 2004 @09:58AM (#9099570)
    This is a very poor-quality article; I analyzed it before. There are possibly better ones mentioned by others.

    Just look at the matrix multiplication case. Look at the graph and see that 1000x1000 takes 30 seconds on the CPU and 7 seconds on the GPU. Let's translate that to millions of operations per second. Matrix multiplication has cubic complexity, so for the CPU: 1000 * 1000 * 1000 / 30 seconds / 1000000 = 33 Mop/s, and for the GPU: 1000 * 1000 * 1000 / 7 seconds / 1000000 = 142 Mop/s.

    Now think a while: 33 million operations per second on a 1.5 GHz Pentium 4 with SSE (I assume there is no SSE2). The Pentium 4 has a fused multiply-add unit which makes it do two ops per clock. So we get 3 billion ops per second peak performance! What they claim amounts to the CPU being about 100 times slower than its peak for matrix multiply. That is unlikely. You can get 2/3 of peak on a Pentium 4. Just look at the ATLAS [sourceforge.net] or FLAME [utexas.edu] projects. If you use one of these you can multiply 1000x1000 matrices in half a second: 14 times faster than the quoted GPU.

    Another thing is the floating-point arithmetic. The GPU uses 32-bit numbers (at most). This is too small for most scientific codes. The CPU can do 64 bits. Also, if you use 32 bits on the CPU it will be twice as fast as 64-bit (SSE extension). So in 32-bit mode, the Pentium 4 is 28 times faster than the quoted GPU.

    Finally, the length of the program. The reason matrix multiply was chosen is because it can be encoded in very short code - three simple loops. This fits well within the 128-instruction vertex code length. You don't have to keep reloading the code. More challenging codes will exceed the allowed vertex code length. The three-loop matrix multiply implementation stresses memory bandwidth, and the CPU has MB/s where the GPU has GB/s. No wonder the GPU wins. But I could guess that without making any tests.
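For readers who want to check the arithmetic, the figures reduce to a few lines (30 s CPU / 7 s GPU for a 1000x1000 multiply as quoted from the paper's graph, counting n^3 operations as the parent does; all numbers illustrative):

```python
n = 1000
ops = n ** 3                 # the parent's operation count for one multiply

cpu_mops = ops / 30 / 1e6    # measured CPU rate from the paper's graph
gpu_mops = ops / 7 / 1e6     # measured GPU rate
peak = 1.5e9 * 2             # 1.5 GHz at two ops per clock: 3 Gop/s peak

print(int(cpu_mops))                  # 33
print(int(gpu_mops))                  # 142
print(int(peak / (cpu_mops * 1e6)))   # 90, i.e. roughly 100x below peak

# A tuned BLAS at ~2/3 of peak finishes the same multiply in ~0.5 s:
tuned_seconds = ops / (peak * 2 / 3)
print(round(7 / tuned_seconds))       # 14 times faster than the quoted GPU
```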

  • by sonamchauhan ( 587356 ) <sonamc.gmail@com> on Sunday May 09, 2004 @10:21AM (#9099663) Journal
    Somewhere in this story, I found a post with a link [tech-report.com] that explains this is a software problem:
    Notice that they're quick to point out the problem isn't likely a hardware issue. There should be plenty of bandwidth on the AGP bus, but graphics chip makers don't seem to have written their drivers to handle transfers from AGP cards to main memory properly.

    Then they run some tests and conclude:
    That means even if you can render high-quality images at 30 frames per second, you won't be able to get them out of the graphics card at anything near that rate.
  • Three questions (Score:3, Interesting)

    by pvera ( 250260 ) <pedro.vera@gmail.com> on Sunday May 09, 2004 @10:50AM (#9099793) Homepage Journal
    1. Is anyone except Apple trying to leverage the GPU for non-3D tasks? Apple has been doing Quartz Extreme for a while but I have not heard if anyone else is doing it.

    2. Has anyone tried something similar to what Quartz Extreme does but for non-graphical tasks?

    3. How come GPU makers are not trying to make a CPU by themselves?
  • Some day you may be able to Fold proteins with your GPU [folding-community.org].
  • Re:audio stuff (Score:1, Interesting)

    by Anonymous Coward on Sunday May 09, 2004 @03:24PM (#9101381)
    I use Matlab at work a lot. It runs mathematical simulations of a system. Lots of digital communications people do it. I wonder if a system can be converted into a matrix and then solved numerically.
  • Re:The day is saved (Score:5, Interesting)

    by Directrix1 ( 157787 ) on Sunday May 09, 2004 @03:39PM (#9101457)
    Doesn't anybody find it annoying that 3D operation is being hardwired into the video card to begin with? Why aren't we making 200-million-transistor math coprocessors with high bus speeds, uncoupled from the video card? This way we wouldn't have to keep getting a new video card every time we want to upgrade our system's 3D performance. Since these operations are highly symmetric, you could put an array of these into one machine to incrementally upgrade. Also, this would make the issue of how to access your GPU for other purposes irrelevant, as it would be a math coprocessor expected to be used as such anyway. And the best reason for doing it this way: OpenGL (and DirectX too) could become more of a thick software layer on top of the generic coprocessor, and since the coprocessors would eventually standardize on a common instruction set, you wouldn't need a new version of OpenGL or DirectX for every new coprocessor release. What do you guys think?
  • Re:The day is saved (Score:3, Interesting)

    by Directrix1 ( 157787 ) on Monday May 10, 2004 @01:37AM (#9104126)
    I don't understand your first statement. The fact that these GPUs exist and are being used to do so many things would imply that it's actually not that specialized. It just has a fat pipeline. Matrix operations are very common, and many common tasks, such as web browsing even, can easily take advantage of them for image decompression and video/audio streaming. And maybe in the future, if we get the whole "we don't need a dedicated coprocessor" idea out of our heads, it could be used for things like neural network assistants, faster/better speech recognition, and other more complex tasks which are not commonplace on the desktop only because the desktop can't effectively handle them right now.

    For the positioning and cooling, well there is one in there right now. There is enough space more than likely even for more than one.

    Also, I'm not saying lets not give the sucker a cache. It would more than likely need a cache of its own dedicated memory to effectively operate just like any processor.

    When I was about 15 and first started reading about the first GPUs, all I could think was, "Boy, is this a step in the wrong direction." I believe in hardware whose purposes are cleanly separated. Well, the GPU thing has had its heyday; why not start making general-purpose coprocessors now so every application can get a nice boost (well, a lot of applications). The instructions already resemble a normal processor's anyway, so why not.
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Tuesday May 11, 2004 @08:46PM (#9122765) Homepage Journal
    The part that annoys me is that it's all the same speed. Texture memory doesn't have to be nearly as fast as video memory, and furthermore you could have two classes of texture memory, which will make sense as video cards reach and exceed 512MB. There have in the past been video cards with high-speed video memory and something like EDO for textures, which makes a lot of sense, especially if you're willing to cache the most-used textures somewhere in video memory.

    Is it just me, or should the cards have maybe 64 or 128MB of high speed memory, and then a couple of DIMM slots that take ordinary DDR SDRAM? That would still be pretty fast stuff, especially if the cards had dual-channel memory controllers, and plenty fast enough for textures. The card could cache the most-used textures in whatever video memory was left after drawing screens.
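The "cache the most-used textures in whatever fast memory is left" idea is essentially LRU caching; a sketch of the policy (texture names and capacity are arbitrary):

```python
from collections import OrderedDict

class TextureCache:
    # A small pool of fast on-card memory holding the most recently used
    # textures; everything else lives in the slower DIMM-based pool.
    def __init__(self, capacity):
        self.capacity = capacity
        self.fast = OrderedDict()

    def fetch(self, tex_id):
        if tex_id in self.fast:
            self.fast.move_to_end(tex_id)   # refresh its recency
            return 'fast'                   # hit: served from fast memory
        if len(self.fast) >= self.capacity:
            self.fast.popitem(last=False)   # evict the least recently used
        self.fast[tex_id] = True
        return 'slow'                       # miss: fetched from DIMMs, now cached

cache = TextureCache(capacity=2)
hits = [cache.fetch(t) for t in ['wall', 'floor', 'wall', 'sky', 'floor']]
print(hits)  # ['slow', 'slow', 'fast', 'slow', 'slow']
```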
