Using GPUs For General-Purpose Computing 396

Paul Tinsley writes "After seeing the press releases from both Nvidia and ATI announcing their next generation video card offerings, it got me to thinking about what else could be done with that raw processing power. These new cards weigh in with transistor counts of 220 and 160 million (respectively) with the P4 EE core at a count of 29 million. What could my video card be doing for me while I am not playing the latest 3d games? A quick search brought me to some preliminary work done at the University of Washington with a GeForce4 TI 4600 pitted against a 1.5GHz P4. My favorite excerpt from the paper: 'For a 1500x1500 matrix, the GPU outperforms the CPU by a factor of 3.2.' A PDF of the paper is available here."
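
For readers who want to reproduce the CPU side of that comparison, here is a minimal sketch assuming Python with NumPy is available (the paper's own harness is not reproduced here; this only times the CPU baseline, not a GPU port):

    import time
    import numpy as np

    # Time a 1500x1500 single-precision matrix multiply on the CPU, the same
    # problem size quoted in the submission above.
    n = 1500
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)

    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start

    flops = 2 * n ** 3  # one multiply and one add per inner-loop step
    print(f"{elapsed:.3f} s, {flops / elapsed / 1e9:.2f} GFLOP/s")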
  • by drsmack1 ( 698392 ) * on Sunday May 09, 2004 @01:50AM (#9098517)
    Now I finally have a use for the 20 Voodoo 2 cards I have in a box in the basement. Now I can have my very own supercomputer. I just need some six pci slot motherboards.... Instant cluster!
  • What?!?!?! (Score:5, Funny)

    by DarkHelmet ( 120004 ) * <.mark. .at. .seventhcycle.net.> on Sunday May 09, 2004 @01:50AM (#9098518) Homepage
    What? Matrix operations run faster on a massively parallel form of vector processor over a general purpose processor? How can that be?

    Intel's been telling me for years that I need faster hardware from THEM to get the job done...

    You mean........ they were lying?!?!?

    CRAP!

    • by Anonymous Coward on Sunday May 09, 2004 @03:11AM (#9098756)
      Don't worry, the Intel processor is *much* faster at the internet thingy. Graphics cards only do the upload to screen thing, and everyone knows the internet is all about downloading.

      And besides, nobody needs or wants Matrix operations anyway. Did you see how bad Matrix Reloaded was? That was *just* reloading, imagine how bad Matrix Multiplying is. You get the idea.
    • There's some good stuff in there.

      However, it seems a few organisations have actually beaten us to it.

      Apple, for example, uses the 3d aspect of the GPU to accelerate its 2d compositing system with quartz extreme [apple.com]. Microsoft, as usual, announced the feature after Apple shipped it [microsoft.com], and with any luck Windows users might have it by 2007.

      -- james
      • by Crazy Eight ( 673088 ) on Sunday May 09, 2004 @05:00AM (#9098993)
        QE is cool, but it doesn't do anything similar at all to what they're talking about here. FFTs on an NV30 are only incidentally related to texture mapping window contents. Check out gpgpu.org or BrookGPU. In a sense, the idea is to treat modern graphics hardware as the next step beyond SIMD instruction sets. Incidentally, e17 exploited (hardware) GL rendering of 2D graphics via evas a bit before Apple put that into OS X.
      • As for organizations beating slashdot to the punch on this one, that's true... but it's good to see this getting even more exposure. :)

        GPGPU (General-Purpose computation on GPUs) was a hot topic at various conferences in 2003; a number of papers were published on the subject. At SIGGRAPH 2004 [siggraph.org] there will be a full-day course [gpgpu.org] on GPGPU given by eight of the experts in the field (including myself).

        Mark Harris of NVIDIA [nvidia.com] maintains a website [gpgpu.org] dedicated to GPGPU topics, including discussion forums and news posts.

  • Googled HTML (Score:5, Informative)

    by balster neb ( 645686 ) on Sunday May 09, 2004 @01:54AM (#9098535)
    Here's an HTML version of the PDF [216.239.57.104], thanks to Google.

  • video stuff (Score:5, Interesting)

    by rexguo ( 555504 ) on Sunday May 09, 2004 @01:54AM (#9098538) Homepage
    At my work place, I'm looking into using GPUs to do video analysis. Things like cut-scene detection, generating multi-resolution versions of a video frame, applying video effects and other proprietary technologies that were previously done on the CPU. The combination of pixel shaders and floating-point buffers really makes GPUs a super-SIMD machine if you know how to exploit it.
  • by keltor ( 99721 ) on Sunday May 09, 2004 @01:55AM (#9098542)
    GPUs are very fast ... at performing vector and matrix calculations. This is the whole point. If general computing CPUs were capable of doing vector or matrix calcs very efficiently, we would probably not have GPUs.
    • by lazy_arabica ( 750133 ) on Sunday May 09, 2004 @02:50AM (#9098705) Homepage
      GPUs are very fast ... at performing vector and matrix calculations. This is the whole point. If general computing CPUs were capable of doing vector or matrix calcs very efficiently, we would probably not have GPUs.
      Yes. But 3D graphics are not the only use of these mathematical objects; I wonder if it would be possible to use a GPU to perform video encoding or digital sound manipulation at a higher speed, as both operations require matrices. I'm also sure they could take advantage of these processors' vector manipulation capabilities.
      • by Slack3r78 ( 596506 ) on Sunday May 09, 2004 @11:24AM (#9100393) Homepage
        Actually, the GeForce 6800 includes the hardware to do just that [anandtech.com]. I'm surprised no one else has mentioned it by now, as I thought it was one of the cooler features of the new NV40 chipset.
        • by Anonymous Coward
          ATI has had this for even longer. The all-in-wonder series uses the video card to do accelerated encoding and decoding.

          Also, I believe that mplayer, the best video player/encoder I have seen, also uses OpenGL (and thus the video card on a properly configured system) to do playback.

          Personally, I don't think there is anything really new in this article.
  • by 2megs ( 8751 ) on Sunday May 09, 2004 @01:57AM (#9098548)
    The Pentium 4 EE actually has 178 million transistors, which puts it in between ATI's and NVIDIA's latest.

    In all of this, keep in mind that there's computing and there's computing...the kind of computing power in a GPU is excellent for doing the same numeric computation to every element of a large vector or matrix, not so much for branchy decisiony type things like walking a binary tree. You wouldn't want to run a database on something structured like a GPU (or an old vector-processing Cray), but something like a simulation of weather or molecular modelling could be perfect for it.

    The similarities of a GPU to a vector processing system bring up an interesting possibility...could Fortran see a renaissance for writing shader programs?
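
    To make that distinction concrete, here is a toy Python/NumPy sketch (NumPy standing in for a vector or stream processor; none of this comes from the paper): the uniform per-element operation is exactly the shape of work a GPU-like machine handles well, while the tree walk branches on data at every step.

        import numpy as np

        # Uniform arithmetic over a whole array: a natural fit for a vector/stream processor.
        x = np.random.rand(1_000_000).astype(np.float32)
        y = 2.5 * x + 1.0  # the same scale-and-add applied to every element

        # Data-dependent branching: every step depends on a comparison, so there
        # is no wide, uniform batch of work to hand to a GPU-style pipeline.
        def tree_search(node, key):
            while node is not None:
                if key == node["key"]:
                    return node["value"]
                node = node["left"] if key < node["key"] else node["right"]
            return None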

    • by Knightmare ( 12112 ) on Sunday May 09, 2004 @01:59AM (#9098556) Homepage
      Yes, it's true that it has that many transistors, BUT only 29 million of them are part of the core; the rest is memory. The transistor count on the video cards does not count the ram.
      • by LinuxGeek ( 6139 ) <djand.nc@NoSpAM.gmail.com> on Sunday May 09, 2004 @02:27AM (#9098641)
        If they are ignoring the cache on the P4 EE, then why mention the Extreme Edition at all? Cache size is the only difference between the Xeon-based EE and a regular Northwood P4. Also, modern GPUs certainly do have cache. Read this old GeForce4 preview [pcstats.com].
        The Light Speed Memory Architecture (LMA) that was present in the GeForce3 has been upgraded as well, with its major advancements in what nVidia calls Quad Cache. Quad Cache includes a Vertex Cache, Primitive Cache, Texture Cache and Pixel Caches. With similar functions to caches on CPUs, these are specific: they store exactly what they say.
        Another good article [digit-life.com] has a block diagram showing the cache structures of the GeForce FX GPU. Nvidia and ATI both keep quiet about the cache sizes on their GPUs, but that doesn't mean that the full transistor count is dedicated to the processing core.
      • by Bender_ ( 179208 ) on Sunday May 09, 2004 @04:02AM (#9098883) Journal
        The transistor count on the video cards does not count the ram

        How do you know? In fact, modern GPUs require a large number of small, scattered memory blocks: texture caches, FIFOs for fragments/pixels/texels when they are not in sync, caches for vertex shader and pixel shader programs, etc.

        More recent GPUs are notorious for their incredibly long latencies. Long latencies imply that a lot of data has to be stored on chip.

        • by Hast ( 24833 )
          Well, it's really more that the pipelines are very long. On the order of 600 pipeline stages, and that's pretty damned long. (The P4, a CPU with a deep pipeline, has about 20 stages IIRC.)

          They do of course store data between those stages, and there are caches on the chip. Otherwise performance would be shot all to hell.

          I doubt that the original statement that GPU designs don't count the on chip memory is correct. That just seems like an odd way to do it.
      • Yes, it's true that it has that many transistors BUT, only 29 million of them are part of the core, the rest is memory. The transistor count on the video cards does not count the ram.

        Sure it does, it's just that the ram isn't cache, it's mostly huge register files.

    • Please ANYTHING BUT FORTRAN!!!!!!! Seriously, FORTRAN needs serious reworking to be user friendly in today's age. It was fine a decade or two ago when people were not used to user friendly languages. COBOL anyone? FORTRAN has its uses, but it's horribly, horribly tough to use if you want to combine number crunching with other stuff such as string manipulation.
  • by Anonymous Coward on Sunday May 09, 2004 @01:57AM (#9098550)
    General-purpose computation using graphics hardware has been a significant topic of study for the last few years. Pointers to a lot of papers and discussion on the subject are available at: www.gpgpu.org [gpgpu.org]
    • by Lord Prox ( 521892 ) on Sunday May 09, 2004 @02:28AM (#9098645) Homepage
      BrookGPU [stanford.edu]
      from the BrookGPU website...
      As the programmability and performance of modern GPUs continues to increase, many researchers are looking to graphics hardware to solve problems previously performed on general purpose CPUs. In many cases, performing general purpose computation on graphics hardware can provide a significant advantage over implementations on traditional CPUs. However, if GPUs are to become a powerful processing resource, it is important to establish the correct abstraction of the hardware; this will encourage efficient application design as well as an optimizable interface for hardware designers.

      From what I understand, this project is aimed at making an abstraction layer for GPU hardware, so that writing code to run on it is easier and standardised.
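
      As a rough illustration of that abstraction (plain Python below, not Brook syntax; map_kernel and saxpy are names made up for this sketch): a kernel is applied independently to every element of its input streams, which is the kind of work a GPU can spread across fragments.

          import numpy as np

          def map_kernel(kernel, *streams):
              # Apply `kernel` elementwise across equally sized input streams.
              return np.array([kernel(*elems) for elems in zip(*streams)], dtype=np.float32)

          saxpy = lambda x, y: 2.0 * x + y  # the per-element kernel body
          out = map_kernel(saxpy,
                           np.arange(4, dtype=np.float32),
                           np.ones(4, dtype=np.float32))
          print(out)  # [1. 3. 5. 7.]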
  • by pyrrhonist ( 701154 ) on Sunday May 09, 2004 @02:01AM (#9098562)
    What could my video card be doing for me while I am not playing the latest 3d games?

    Two words: virtual pr0n

  • DSP using GPUs (Score:3, Interesting)

    by crushinghellhammer ( 727226 ) on Sunday May 09, 2004 @02:01AM (#9098563)
    Does anybody know of pointers to papers/research pertaining to using GPUs to perform digital signal processing for, say, real-time audio? Replies would be much appreciated.
    • here ya go (Score:4, Informative)

      by dave1g ( 680091 ) on Sunday May 09, 2004 @02:22AM (#9098630) Journal
      someone else posted this...

      www.gpgpu.org [gpgpu.org]

      Website on this topic (Score:0)
      by Anonymous Coward on Sunday May 09, @01:57AM (#9098550)
      General-purpose computation using graphics hardware has been a significant topic of study for the last few years. Pointers to a lot of papers and discussion on the subject are available at: www.gpgpu.org [gpgpu.org]
  • Hacking the GPU (Score:5, Informative)

    by nihilogos ( 87025 ) on Sunday May 09, 2004 @02:03AM (#9098572)
    is a course that has been offered at Caltech since last summer on using GPUs for numerical work. The course page is here [caltech.edu].
  • by CherniyVolk ( 513591 ) on Sunday May 09, 2004 @02:04AM (#9098576)

    "Utilize the sheer computing power of your video card!"

    New market blitz, hmmmm.

    SETI ports their code, and within five days their average completed work units increase 1000 fold. 13 hours later, they have evidence of intelligent life at 30000 locations within one degree.

    Microsoft gets the hint, and comes out with a brilliant plan to utilize GPUs to speed up their OS and add bells and whistles to their UI.

    And, once again, Apple and Quartz Extreme is ignored.
    • by Barbarian ( 9467 ) on Sunday May 09, 2004 @02:12AM (#9098593)
      Then they throw away the results, because the GPUs are not able to calculate in double-precision floating point, but only at 24 or 32 bits.
      • by renoX ( 11677 )
        Yes, one thing shocked me in their paper: they don't talk much about the precision they use.

        Strange, because it is a big problem for using GPUs as coprocessors: scientific computation usually uses 64-bit floats, or 80-bit floats on Intel!
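
        A short Python/NumPy illustration of why that matters (an editorial sketch, not taken from the paper): 32-bit floats silently drop contributions that 64-bit floats keep.

            import numpy as np

            big = np.float32(2 ** 24)          # 16777216; above this, float32 skips integers
            print(big + np.float32(1.0))       # 16777216.0 -- the added 1.0 is lost at 32 bits
            print(np.float64(2 ** 24) + 1.0)   # 16777217.0 -- 64-bit keeps it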
  • by Anonymous Coward on Sunday May 09, 2004 @02:05AM (#9098580)
    Before you get excited, just remember how asymmetric the AGP bus is. Those GPUs will be of much better use when we get them on 64-bit PCI cards.
  • by ratboot ( 721595 ) on Sunday May 09, 2004 @02:10AM (#9098588)
    What's interesting about new video cards is their memory capacity, 128 or 256 MB, and that on some new cards this memory is accessible at 900 MHz over a 256-bit data path (which is a lot faster than a CPU with DDR400 installed).
  • Wow (Score:5, Interesting)

    by cubicledrone ( 681598 ) on Sunday May 09, 2004 @02:10AM (#9098589)
    All that processing power, and the latest games still run at about 22 frames per second, if that.

    The CPU can do six billion instructions a second, the GPU can do 18 billion, and every last cycle is being used to stuff a 40MB texture into memory faster. What a waste. Yeah, the walls are even more green and slimy. Whoop-de-fucking-do.

    Wouldn't it be great if all that processing power could be used for something other than yet-another-graphics-demo?

    Like, maybe some new and innovative gameplay?
    • Frogger (Score:5, Interesting)

      by BiggerIsBetter ( 682164 ) on Sunday May 09, 2004 @02:52AM (#9098712)
      Some dude wrote Frogger almost entirely in pixel shaders. http://www.beyond3d.com/articles/shadercomp/results/ [beyond3d.com] (2nd from the bottom).
    • Re:Wow (Score:5, Insightful)

      by PitaBred ( 632671 ) <slashdot@pitabre ... org minus distro> on Sunday May 09, 2004 @03:23AM (#9098794) Homepage
      You don't seem to understand that GPUs are very specific-purpose computing devices. They aren't general-purpose processors like your CPU. They crunch matrices, and that's about it. Even all the programmable stuff is just putting parameters on the matrix churning.
    • by Sycraft-fu ( 314770 ) on Sunday May 09, 2004 @03:42AM (#9098848)
      Oh, shut the fuck up.

      Sorry for the flames, but seriously, I get so damn sick of all the "all new games suck" whiners. Look, there are legit reasons to want new technology. It is nice to have better graphics, more realistic sound, etc. It is NICE to have a game that looks and sounds more like reality. Yes, that doesn't make the game great, but that doesn't mean it's worthless.

      What's more, don't pretend like all modern games suck while old games ruled. That's a bunch of bullshit. Sure, there are plenty of modern games that suck, but guess what? There are tons of old games that suck too. Thing is, you just tend to forget about them. You remember the greats that you enjoyed or heard about, the ones that helped shape gaming today. You forget all the utter shit that was released, just as is released today.

      So get off it. If you don't like nice graphics, fine. Stick with old games, no one is forcing you to upgrade. But don't pretend like there is no reason to want better graphics in games.
      • by Tim C ( 15259 ) on Sunday May 09, 2004 @04:16AM (#9098908)
        Hear, hear.

        There's something that's always puzzled me a little about this site - attached to every single article about some new piece of PC tech - a faster processor, better graphics card, etc - there are a number of comments bemoaning the advance. All of them saying that people don't need the power/speed they have already, that they personally are just fine with 4 year old hardware, or, in this case, that better graphics don't make for better games. Hell, the same is true for mobile phones - I've lost count of the number of comments bemoaning advances in them, too.

        It's funny, but I thought this was supposed to be a site for geeks; aren't geeks supposed to *like* newer, better toys?

        To get back on topic - no, better graphics are not sufficient for a better game. However, if the gameplay is there, then they can certainly make the experience more enjoyable. Would Quake have been as much fun if it was rendered in wireframes?

        Better graphics help add to the sense of realism, making the game a more immersive experience. The whole point of the majority of games is entertainment and (to an extent) escapism. Additionally, what a lot of people like the grandparent poster seem to forget is that most of the big-name game engines are licensed for use in a number of games. Let people like id spend their time and money coming up with the most graphically intensive, realistic engine they can. Think Doom 3'll suck because the gameplay will be crap? Fine, then wait for someone to license the engine and create a better game with it. In the meantime, please shut up and remember that there are those of us who like things to be pretty, as well as useful/well made/fun/(good at $primaryPurpose).

        Good graphics on their own won't make a good game, but they will help make a good game great.
      • by Osty ( 16825 ) on Sunday May 09, 2004 @05:06AM (#9099002)

        You're absolutely correct that these "game snobs" are looking at the past through rose-colored graphics, forgetting all of the stinkers of yesteryear. However, it's not just games where this applies. How many times have you heard people complain about how bad movies are now, or music, or books? It's exactly the same phenomenon. When your grandfather tells you how much better things were "back in the day", it's for exactly the same reason. He's looking back at all the good things, while ignoring all of the bad.


        Face it, everything mostly sucks. It always has, and it always will. There will always be some gems that really stand out, and those will be what are remembered when people fondly look back on "the old days". Get over it.

  • This is BIG (Score:5, Insightful)

    by macrealist ( 673411 ) on Sunday May 09, 2004 @02:18AM (#9098612) Journal
    Creating a way to use these specialized GPUs for vector processing that is not graphics-related is ingenious. Like a lot of great ideas, it is sooo obvious AFTER you see someone else do it.

    Don't miss the point that this is not intended for general purpose computing. Don't port OOo to the graphics chip.

    Where it is huge is in signal processing. FPGAs have begun replacing even the G4s in this area recently because of the huge gains in speed vs. power consumption an FPGA affords. However, FPGAs are not bought and used as is, and end up costing a significant amount (of development time/money) to become useful. Being able to use these commodity GPUs for vector processing creates a very desirable price/processing power/power consumption option. If I were nVIDIA or ATI, I would be shoveling these guys money to continue their work.
  • Siggraph 2003 (Score:5, Informative)

    by Adam_Trask ( 694692 ) on Sunday May 09, 2004 @02:21AM (#9098626)
    Check out the publication list in Siggraph 2003. There is a whole section named "Computation on GPUs" (papers listed below). And the papers for Siggraph 2004 should be out shortly.

    If you have a matrix solver, there is no telling what you can do. And I remember that these papers show the GPU being faster than the CPU at the same matrix calculations.

    # Linear Algebra Operators for GPU Implementation of Numerical Algorithms
    Jens Krüger, Rüdiger Westermann

    # Sparse Matrix Solvers on the GPU: Conjugate Gradients and Multigrid
    Jeff Bolz, Ian Farmer, Eitan Grinspun, Peter Schröder

    # Nonlinear Optimization Framework for Image-Based Modeling on Programmable Graphics Hardware
    Karl E. Hillesland, Sergey Molinov, Radek Grzeszczuk

  • by aancsiid ( 135857 ) on Sunday May 09, 2004 @02:21AM (#9098628) Homepage
    http://www.gpgpu.org/ [gpgpu.org] is a great resource for general purpose graphics processor usage.
  • I can see it now.... (Score:3, Interesting)

    by TypoNAM ( 695420 ) on Sunday May 09, 2004 @02:29AM (#9098649)
    ...Several indies and companies figure out how to use these powerful GPUs in an efficient manner that would benefit everyone who uses computers on a daily basis and improve the usefulness of the computer, making it the best thing in the world again; then some greedy bastard comes along flashing a patent granted by the U.S. Patent Office, and we're all screwed...

    Ohh well the idea was good while it lasted. ;)
  • Imagine... (Score:4, Interesting)

    by rokzy ( 687636 ) on Sunday May 09, 2004 @02:32AM (#9098661)
    a beowulf cluster of them.

    seriously, we have a 16-node beowulf cluster and each node has an unnecessarily good graphics card in it. a lot of the calculations are matrix-based, e.g. several variables each 1 x thousands (1D) or hundreds x hundreds (2D).

    how feasible and worthwhile do you think it would be to tap into the extra processing power?
    • Re:Imagine... (Score:3, Interesting)

      It's a good idea if your datasets take a long enough time to process. You could run 6 or so cards (maybe 1 super-fast AGP, 5 slowish PCI (e.g. FX5200)) in your machine and send a dataset to each GPU and the main CPU, then get the results back. The trick is to keep them working without blowing all your bandwidth or PSU. It also depends on the resolution required, because the GPU only does 32-bit FP, compared to 80 bits for the CPU.

      All I can suggest is to download the Brook [stanford.edu] libraries and try it out. See if it helps.
      • by Impeesa ( 763920 ) on Sunday May 09, 2004 @03:58AM (#9098875)
        I did a paper on the topic of general-purpose GPU programming for my parallel computing course just this last semester here, interestingly enough. I believe our research indicated that even a single PCI card was so badly throttled by the bus throughput that it was basically useless. AGP does a lot better taking data in, but it's still pretty costly sending data back to the CPU. I have a feeling your proposed setup will be a whole lot more feasible if/when PCI Express [pcisig.com] becomes mainstream.
        • Seems worth checking out: GPGPU.ORG [gpgpu.org] - "General-Purpose Computation Using Graphics Hardware"

          > AGP does a lot better taking data in, but it's still pretty
          > costly sending data back to the CPU.
          I've heard that mentioned a few times, is it true?

          From the AGP 3.0 spec [intel.com]:
          The AGP3.0 interface is designed to support several platform generations based upon 0.25µm (and
          smaller) component silicon technology, spanning several technology generations. As with AGP2.0, the
          physical interface is designed to operate at a c
  • When... (Score:3, Insightful)

    by alexandre ( 53 ) on Sunday May 09, 2004 @02:32AM (#9098662) Journal
    ...will someone finally port john the ripper to a new video card's graphical pipeline? :)
  • Pseudo repost (Score:4, Informative)

    by grape jelly ( 193168 ) on Sunday May 09, 2004 @02:42AM (#9098682)
    I thought this looked familiar:

    http://developers.slashdot.org/developers/03/12/21/169200.shtml?tid=152&tid=185 [slashdot.org]

    At least, I would imagine most of the comments would be the same or similar....
  • Finally (Score:5, Funny)

    by Pan T. Hose ( 707794 ) on Sunday May 09, 2004 @02:45AM (#9098693) Homepage Journal

    Using GPUs For General-Purpose Computing

    I'm glad that finally they started to use the General-Purpose Unit. What took them so long?

  • by Anonymous Coward on Sunday May 09, 2004 @02:47AM (#9098699)
    Remember the co-processors? Well, actually I don't (I'm a tad too young). But I know about them.

    Maybe it's time to start making co-processing add-on cards for advanced operations such as matrix mults and other operations that can be done in parallel at a low level. Add to that a couple of hundred megs of RAM and you have a neat little helper when raytracing etc. You could easily emulate the cards if you didn't have them (or didn't need them). The branchy nature of the program itself would not affect the performance of the co-processor, since it should only be used for calculations.

    I for one would like to see this.
    • by BlueJay465 ( 216717 ) on Sunday May 09, 2004 @03:37AM (#9098831)
      Well they already make DSP cards for audio processing. Simply do a google(TM) search for "DSP card" and you will get [uaudio.com] several [tcelectronic.com] vendors. [digidesign.com]

      I can't imagine it would take a whole lot to hack them for just their processing power outside of audio applications.
    • by pe1chl ( 90186 ) on Sunday May 09, 2004 @04:11AM (#9098898)
      What I remember about co-processing cards and "intelligent peripheral cards" (like raid controllers or network cards with an onboard processor) is this:

      There is a certain overhead, because a communications protocol has to be established between the main processor and the co-processor. For simple tasks the main processor often stops and waits for the co-processor to complete the task, then retrieves the results. For more complicated tasks, the main processor continues, but later an interrupt occurs that the main processor must service.

      You must be very careful or the extra overhead of this communication makes the execution of the task slower than without the co-processor. This is certainly going to happen at some time in the future, when you increase central processor power all the time but keep using the same co-processor.

      For example, your matrix co-processor needs to be fed the matrix data, told to start working, and then it has to tell you when it is finished. Your performance would be limited not only by the processor speed, but also by the bus transfer rate, and by the impact those fast bus transfers have on the available CPU-memory bandwidth and the on-CPU cache validity.
      If you are unlucky, the next CPU you buy is faster at performing the task itself.
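
      That break-even point can be sketched in a few lines of Python (every number below is a made-up placeholder, not a measurement of any real bus or co-processor):

          def offload_wins(n_bytes, flops, cpu_flops, coproc_flops, bus_bytes_per_s):
              # Offloading pays off only if the co-processor's compute saving exceeds
              # the cost of shipping the data across the bus (result transfer ignored).
              t_cpu = flops / cpu_flops
              t_coproc = n_bytes / bus_bytes_per_s + flops / coproc_flops
              return t_coproc < t_cpu

          # Example: a 1000x1000 single-precision matrix multiply over a ~1 GB/s bus.
          n = 1000
          print(offload_wins(3 * n * n * 4, 2 * n ** 3, 3e9, 20e9, 1e9))  # True with these numbers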
    • Remember the co-processors? Well, actually I don't (I'm a tad to young). But I know about them.

      Dig deeper. 8087 FPUs were nice, though they ran hot enough to cook on, but the idea had existed for 15 or more years before they appeared. Try looking into the old DEC PDP-11 archives. There you'll find DEC's own "CIS" or "commercial instruction set", which was a set of boards (later an add-on chip) that added string, character and BCD math instructions. DEC also had an FPU card set that implemented a 64-

  • Documentation (Score:3, Interesting)

    by Detritus ( 11846 ) on Sunday May 09, 2004 @02:51AM (#9098708) Homepage
    Do any of the video chip manufacturers make free and complete documentation available for their GPUs? Everything that I have read in the past has said that they are encumbered with NDAs and claims of trade secrets. I'd prefer not to waste my time dealing with companies that treat their customers as potential enemies.
  • Bass Ackwards? (Score:5, Insightful)

    by Anonymous Coward on Sunday May 09, 2004 @03:01AM (#9098725)
    Perhaps offloading the CPU to the GPU is the wrong way to look at things? With the apparently imminent arrival of commodity (low-power) multi-CPU chips [slashdot.org], maybe we should be considering what we need to add to perform graphics more efficiently (a la MMX et al)?

    While it's true that general-purpose hardware will never perform as well or as efficiently as a design specifically targeted to the task (or at least it had better not), it is equally true that eventually general-purpose/commodity hardware will reach a price-performance point where it is more than "good enough" for the majority.
  • GPU = (Score:5, Funny)

    by greppling ( 601175 ) on Sunday May 09, 2004 @03:29AM (#9098815)
    Now I finally understand that acronym: General purpose unit!
  • by nothings ( 597917 ) on Sunday May 09, 2004 @03:42AM (#9098843) Homepage
    Transistor counts keep growing, so I keep updating this and reposting it about once a year.

    486 : 1.2 million transistors
    Pentium : 3 million transistors
    Pentium Pro : 5.5 million transistors
    Pentium 2 : 7.5 million transistors
    Nvidia TNT2 : 9 million transistors
    Alpha 21164 : 9.3 million (1994)
    Alpha 21264 : 15.2 million (1998)
    Geforce 256 : 23 million transistors
    Pentium 3 : 28 million transistors
    Pentium 4 : 42 million transistors
    P4 Northwood : 55 million transistors
    GeForce 3 : 57 million transistors
    GeForce 4 : 63 million transistors
    Radeon 9700 : 110 million transistors
    GeForce FX : 125 million transistors
    P4 Prescott : 125 million transistors
    Radeon X800 : 160 million transistors
    P4 EE : 178 million transistors
    GeForce 6800 : 220 million transistors
    here's the non-sucky version [nothings.org] since <ecode> doesn't actually preserve spacing like <pre>.

    • > Transistor counts keep growing, so I keep updating this and reposting it about once a year.

      For those who don't already know, what we now think of as "Moore's Law" was originally a statement about the rate of growth in the number of transistors on a chip, not about CPU speed.
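
      The doubling time implied by the list above can be checked in a couple of lines (the years here are approximate and added only for the arithmetic):

          import math

          t0, n0 = 1993, 3e6     # Pentium, ~3 million transistors
          t1, n1 = 2004, 220e6   # GeForce 6800, ~220 million transistors
          doubling_years = (t1 - t0) / math.log2(n1 / n0)
          print(f"~{doubling_years:.1f} years per doubling")  # close to the classic 18-24 months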

  • Alternative use (Score:3, Interesting)

    by Zog The Undeniable ( 632031 ) on Sunday May 09, 2004 @04:01AM (#9098880)
    Remember the story about PS2's being used in Iraqi WMDs? No doubt the next "outlaw state" will be accused of using GeForce Ti4600's to manage fast breeder reactors.
  • Dual Core (Score:5, Interesting)

    by BrookHarty ( 9119 ) on Sunday May 09, 2004 @04:16AM (#9098907) Journal
    With dual-core CPUs set to become the norm, why not a dual-core GPU for even faster gfx cards? With everyone wanting 16x antialiasing at 1600x1200 to get over 100 fps, it's gonna take some very powerful GPUs (or some dual cores).

    Even with the ATI X800 XT, 1600x1200 can dip below 30 FPS with AA/AF at higher settings. Still a ways to go for that full virtual-reality look.
    • Re:Dual Core (Score:3, Informative)

      Video cards are already able to run many things in parallel- they are beyond dual-core.
      • Re:Dual Core (Score:3, Interesting)

        by BrookHarty ( 9119 )
        Video cards are already able to run many things in parallel- they are beyond dual-core.

        There were dual-GPU ATI cards, Matrox boards, and even the old Voodoo2 SLI. It seems you can increase speed with more cores.
  • Audio DSP (Score:4, Informative)

    by buserror ( 115301 ) * on Sunday May 09, 2004 @04:23AM (#9098926)
    I've been thinking about using the GPU for audio DSP work for some time, even got to a point where I could transform some signal by "rendering" it into a texture (in a simple way, I could mix two sounds using the alpha as factor).
    The problem is that these cards are made to be "write only": basically, fetching anything back from them is *very* slow, which makes them totally useless for the purpose, since you *know* the results are there but you can't fetch them in a useful/fast manner.
    I wonder if it's deliberate, to sell the "pro" cards they use for the rendering farms.
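
    For reference, the "mix two sounds using the alpha as factor" trick above is just a per-sample blend; written out on the CPU with NumPy it looks like this (an illustrative sketch, not the poster's shader code):

        import numpy as np

        a = np.random.uniform(-1, 1, 44100).astype(np.float32)  # one second of signal A
        b = np.random.uniform(-1, 1, 44100).astype(np.float32)  # one second of signal B
        alpha = 0.3
        mixed = alpha * a + (1.0 - alpha) * b  # the same blend a shader would render into a texture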
    • Re:Audio DSP (Score:4, Insightful)

      by SmackCrackandPot ( 641205 ) on Sunday May 09, 2004 @06:23AM (#9099176)
      I wonder if it's deliberate, to sell the "pro" cards they use for the rendering farms

      No, it's just the way that the OpenGL and DirectX APIs evolved. There was never any need in the past for substantial data feedback. The only need back then was to read pixel maps and selection tags for determining when an object had been picked.
  • Commodore 64 (Score:5, Interesting)

    by curator_thew ( 778098 ) on Sunday May 09, 2004 @05:01AM (#9098995)

    This concept was being used back in 1988. The Commodore 64 (1 MHz 6510, a 6502-like microprocessor) had a peripheral 5.25" disk drive called the 1541, which itself had a 1 MHz 6502 CPU in it, connected via a serial link.

    It became common practice to introduce fast loaders: these were partially resident in the C64 and partially in the 1541, effectively replacing the 1541's limited firmware.

    However, demo programmers figured out how to utilise the 1541: one particular demo involved uploading a program to the 1541 at the start; then, upon every screen redraw, it uploaded vectors to the 1541, which performed calculations on them in parallel with the C64; at the end of the screen, the C64 would fetch the results from the 1541 and incorporate them into the next frame.

    Equally, a GPU provides a similar capability if used this way.

    • I would be interested in a reference for that, since the 1541 serial link was so slow. If you are talking about Mindsmear [ntlworld.com], that was not actually released, and a demo would have to be pretty clever to make the communication time worthwhile (and accurate with the screen still turned on).
      • Re:Commodore 64 (Score:3, Informative)

        I don't recall exactly: maybe Horizon, definitely Scandinavian. I remember because I decompiled it! What happened was that I started the demo and, unusually, the disk drive kept spinning, so I turned it off, which caused the demo to fail. I tested loading, then tried to start the demo again and it didn't work; so curiosity, an Action Replay and an IRQ investigation revealed what was going on. I think it was a single-part demo: the most memorable C64 demo for me because of that trick.

  • by tiger99 ( 725715 ) on Sunday May 09, 2004 @06:53AM (#9099239)
    ... there are a few snags, such as the fact that a GPU will not have (because it normally does not need) memory management and protection, so it is really only safe to run one task at a time. And, does this not need the knowledge of the architecture and instruction set that Nvidia seem to be unable or unwilling to disclose, hence the continuing controversy over the binary-only Linux drivers?

    However I do know that a lot of people had been wondering about this for a while, could it be done, and was it worth attempting, so now we know. Maybe we shall soon see PCI cards containing an array of GPUs, I imagine the cooling arrangements will be quite interesting!

    There are other things which are faster than a typical CPU; are not some of the processors in games machines 128-bit? Again, you could in theory put some of these together as a co-processor of some sort.

    This was a good piece of work technically, but it says something about society that the fastest mass-produced processors, whether for GPUs or games consoles, exist because people want a higher frame rate in Quake. I can't think of any professional application that needs really fast graphics output, but many that could use faster processing. So why can't Intel and AMD stop putting everything in the one CPU (multiple CPUs with one memory are not really much better), and make co-processors again, which will do fast matrix operations on very large arrays, etc, for those who need them? The ultimate horror of the one CPU philosophy was the winmodem and winprinter, both ridiculous. Silicon is in fact quite cheap, as Nvidia have proved, people's time while they wait for long calculations to finish is not.

    Maybe we are going to see an architectural change coming, I expect it will be supported by FOSS long before Longhorn, just like the AMD64.

  • what's really needed (Score:3, Interesting)

    by curator_thew ( 778098 ) on Sunday May 09, 2004 @07:11AM (#9099279)

    What's really needed is to couple the GPU and CPU in such a way that the GPU actually runs a very low level O/S, like an L4Ka style kernel (http://l4ka.org/), and becomes "just another" MP resource.

    Then, on top of this low level, actually runs the UI graphics driver and so on. Other tasks can also run, but ultimately the priority is given to the UI driver.

    Then, the O/S on the CPU needs to be able to know generally how to distribute tasks across to the GPU. Fairly standard for a tightly coupled MP that has shared bus memory.

    Why do I say this? Because the result is

    (a) if you're using an especially high performance application, the GUI runs full throttle dedicated to rendering/etc and acts as per normal;

    (b) if you're not, e.g. when running Office, or engineering or other compute-intensive tasks (e.g. re-encoding video without displaying it), then the GPU is just another multiprocessor resource to soak up cycles.

    Then, CPU/GPU is just a seamless computing resource. The fantastic benefit of this is that if the O/S is designed properly, then it could allow simply buying/plugging in additional PCI (well, PCI probably not good because of low speed, perhaps AGP?) cards that are simply "additional processors" - then you get a relatively cheaper way of putting more MP into your machine.

  • Very bad article (Score:3, Interesting)

    by Slash.ter ( 731367 ) on Sunday May 09, 2004 @08:58AM (#9099570)
    This is a very poor-quality article; I analyzed it before. There are possibly better ones mentioned by others.

    Just look at the matrix multiplication case. Look at the graph and see that 1000x1000 takes 30 seconds on the CPU and 7 seconds on the GPU. Let's translate that to millions of operations per second: CPU -> 33 Mop/s, GPU -> 142 Mop/s. Matrix multiplication has cubic complexity, so for the CPU: 1000 * 1000 * 1000 / 30 seconds / 1000000 = 33 Mop/s (and the same operation count over 7 seconds gives 142 Mop/s for the GPU).

    Now think a while: 33 million operations per second on a 1.5 GHz Pentium 4 with SSE (I assume there is no SSE2). The Pentium 4 has a fused multiply-add unit which makes it do two ops per clock. So we get 3 billion ops per second peak performance! What they claim implies the CPU runs almost 100 times below peak for matrix multiply. That is unlikely: you can get 2/3 of peak on a Pentium 4. Just look at the ATLAS [sourceforge.net] or FLAME [utexas.edu] projects. If you use one of them you can multiply a 1000x1000 matrix in half a second: 14 times faster than the quoted GPU.

    Another thing is the floating-point arithmetic. The GPU uses 32-bit numbers (at most). This is too small for most scientific codes; the CPU can do 64 bits. Also, if you use 32 bits on the CPU it will be 4 times as fast as 64-bit (SSE extension). So in 32-bit mode, the Pentium 4 is 28 times faster than the quoted GPU.

    Finally, the length of the program. The reason matrix multiply was chosen is that it can be encoded in very short code: three simple loops. This fits well within the 128-instruction vertex code length, so you don't have to keep reloading the code. More challenging codes will exceed the allowed vertex code length. The three-loop matrix multiply implementation stresses memory bandwidth, and the CPU has MB/s where the GPU has GB/s. No wonder the GPU wins. But I could have guessed that without making any tests.
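
    The arithmetic behind those Mop/s figures, spelled out (using the poster's reading of the paper's graph: roughly 30 s on the CPU and 7 s on the GPU for a 1000x1000 multiply):

        n = 1000
        ops = n ** 3            # multiply-adds in the naive triple loop
        print(ops / 30 / 1e6)   # ~33 Mop/s on the CPU
        print(ops / 7 / 1e6)    # ~142.9 Mop/s on the GPU
        print(1.5e9 * 2 / 1e9)  # ~3 Gop/s peak claimed for the 1.5 GHz P4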

  • Three questions (Score:3, Interesting)

    by pvera ( 250260 ) <pedro.vera@gmail.com> on Sunday May 09, 2004 @09:50AM (#9099793) Homepage Journal
    1. Is anyone except Apple trying to leverage the GPU for non-3D tasks? Apple has been doing Quartz Extreme for a while but I have not heard if anyone else is doing it.

    2. Has anyone tried something similar to what Quartz Extreme does but for non-graphical tasks?

    3. How come GPU makers are not trying to make a CPU by themselves?
    • Re:Three questions (Score:3, Informative)

      by be-fan ( 61476 )
      1. Is anyone except Apple trying to leverage the GPU for non-3D tasks? Apple has been doing Quartz Extreme for a while but I have not heard if anyone else is doing it.
      Microsoft, for Longhorn, and freedesktop.org, for X11. Both go quite a bit beyond Quartz Extreme by using D3D/OpenGL for all drawing, not just compositing.

      3. How come GPU makers are not trying to make a CPU by themselves?
      GPUs are very different from CPUs. Graphics is almost infinitely parallelizable, so you are really just limited by how man
  • by thurin_the_destroyer ( 778290 ) on Sunday May 09, 2004 @09:54AM (#9099818)
    Having done similar work for my final-year project this year, I have some experience attempting general-purpose computation on a GPU. The results I received when comparing the CPU with the GPU were very different, with many of the applications coming in at 7-15 times slower on the GPU. Further, I discovered some problems, which I mention below:

    ! Matrix results
    As mentioned earlier in the report, the graphics pipeline does not support a branch instruction. So, with a limited number of assembly instructions that can be executed in each stage of the pipeline (either 128 or 256 on current cards), how is it possible for them to perform a 1500x1500 matrix multiplication? To calculate a single result, 1500 multiplications need to take place, and even if they are really clever about how they encode the data into textures to optimise access, they would need two texture accesses for every 4 multiplications. By my calculations that is 1875 instructions, where you can only do 128 or 256.

    My tests found that, using the Cg compiler provided by NVIDIA, a matrix of size 26x26 could be multiplied before the unrolling of the for loop exceeded the 256-instruction limit.

    One aspect that my evaluation did not get to examine was the possibility of reading partial results back from the framebuffer into texture memory, along with loading a slightly modified program to generate the next partial result. They don't mention whether they used this strategy, so I assume they didn't.

    ! Inclusion of a branch instruction
    Even if a branch instruction were to be included in the vertex and fragment stages of the pipeline, it would cause serious timing issues. As a student of Computer Science, I have been taught that a pipeline operates at the speed of its slowest stage, and from designing simple pipelined ALUs I see the logic behind it. However, if a branch instruction is included, then the fragment processing stage could become the slowest, as the pipeline stalls waiting for the fragment processor to output its information into the framebuffer. I believe it is for this reason that the GPU designers specifically did not include a branch instruction.

    ! Accuracy
    My work also found a serious accuracy issue with attempting computation on the GPU. Firstly, the GPU hardware represents all numbers in the pipeline as floating-point values. As many of you can probably guess, this brings up the ever-present problem of 'floating point error'. The interface between GPU and CPU is traditionally 8-bit values. Once they are imported into the 32-bit floating-point pipeline, the representation has them falling between 0 and 1, meaning that these numbers must be scaled up to their intended representations (integers between 0 and 255, for example) before computation can begin. Combine these two necessary operations and what I saw was a serious accuracy issue, where five of my nine results (in the 3x3 matrix) were one integer value out.
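
    A CPU-side sketch of that round trip (assuming NumPy, with 16-bit half floats standing in for the limited fragment precision of early shader hardware; the input values are arbitrary): 8-bit inputs are scaled into [0,1], multiplied, then scaled back and rounded, and the limited precision visibly shifts the resulting integers.

        import numpy as np

        a = np.array([7, 200, 45], dtype=np.uint8)
        b = np.array([3, 50, 90], dtype=np.uint8)

        fa = (a / 255.0).astype(np.float16)                   # into the [0,1] pipeline representation
        fb = (b / 255.0).astype(np.float16)
        prod = (fa * fb).astype(np.float64) * 255.0 * 255.0   # scale back to the integer range
        print(np.round(prod).astype(np.int32))                # comes out as [21 9993 4052]
        print(a.astype(np.int32) * b.astype(np.int32))        # exact answer: [21 10000 4050]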

    While I don't claim to be an expert on these matters, I do think there is the possibility of using commodity graphics cards for general-purpose computation. However, using hardware that is not designed for this purpose imposes some serious constraints, in my opinion. Anyone who cares to look at my work can find it here [netsoc.tcd.ie].
  • by peter303 ( 12292 ) on Sunday May 09, 2004 @11:01AM (#9100235)
    GPUs pass input and output from GPU memory at 4-12 bytes per flop. This is much faster than CPUs, which are limited by bus speeds that are likely to deliver a number only every several operations. So CPU benchmarks are bogus: they use algorithms that reuse internal memory over and over again.

    It's not always easy to reformulate algorithms to fit the streaming memory and other limitations of GPUs. This issue has come up in earlier generations of custom computers. So there are things like cyclic matrices that map multi-dimensional matrix operations into 1-D streams, and so on.

    The 2003 SIGGRAPH had a session [siggraph.org] on this topic showing you could implement a wide variety of algorithms outside of graphics.
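
    To put a number on the reuse argument, here is the arithmetic intensity of a plain matrix multiply (a back-of-the-envelope sketch; no hardware figures from the comment are reused here):

        n = 1000
        flops = 2 * n ** 3           # multiply-adds in C = A * B
        bytes_moved = 3 * n * n * 4  # read A and B, write C, in single precision
        print(flops / bytes_moved)   # ~167 flops per byte of off-chip traffic if operands are reused on chip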
  • Some day you may be able to Fold proteins with your GPU [folding-community.org].
