Using GPUs For General-Purpose Computing 396
Paul Tinsley writes "After seeing the press releases from both Nvidia and ATI announcing their next generation video card offerings, it got me to thinking about what else could be done with that raw processing power. These new cards weigh in with transistor counts of 220 and 160 million (respectively), with the P4 EE core at a count of 29 million. What could my video card be doing for me while I am not playing the latest 3D games? A quick search brought me to some preliminary work done at the University of Washington with a GeForce4 Ti 4600 pitted against a 1.5GHz P4. My favorite excerpt from the paper:
'For a 1500x1500 matrix, the GPU outperforms the CPU by a factor of 3.2.' A PDF of the paper is available here."
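For a sense of what the CPU side of such a comparison looks like, here is a minimal sketch, assuming NumPy/BLAS as the CPU baseline (not the paper's actual test harness) and counting roughly 2*N^3 floating-point operations for a dense multiply:

```python
import time
import numpy as np

N = 1500
a = np.random.rand(N, N).astype(np.float32)
b = np.random.rand(N, N).astype(np.float32)

start = time.perf_counter()
c = a @ b                      # dense single-precision matrix multiply
elapsed = time.perf_counter() - start

flops = 2 * N**3               # a dense multiply does about 2*N^3 floating-point ops
print(f"{N}x{N} multiply: {elapsed:.3f} s, {flops / elapsed / 1e9:.2f} GFLOP/s")
```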
The day is saved (Score:5, Funny)
Re:The day is saved (Score:5, Funny)
Maybe they have. They've been trapped in that box together in the basement for a long time.
Re:The day is saved (Score:5, Interesting)
Re:The day is saved (Score:5, Insightful)
Re:The day is saved (Score:3, Interesting)
What?!?!?! (Score:5, Funny)
Intel's been telling me for years that I need faster hardware from THEM to get the job done...
You mean........ they were lying?!?!?
CRAP!
Re:What?!?!?! (Score:5, Funny)
And besides, nobody needs or wants Matrix operations anyway. Did you see how bad Matrix Reloaded was? That was *just* reloading, imagine how bad Matrix Multiplying is. You get the idea.
Link to previous discussion on same/similar sub... (Score:5, Informative)
Re:Link to previous discussion on same/similar sub (Score:5, Interesting)
However, it seems a few organisations have actually beaten us to it.
Apple, for example, uses the 3D aspect of the GPU to accelerate its 2D compositing system with Quartz Extreme [apple.com]. Microsoft, as usual, announced the feature after Apple shipped it [microsoft.com], and with any luck Windows users might have it by 2007.
-- james
Re:Link to previous discussion on same/similar sub (Score:5, Informative)
Re:Link to previous discussion on same/similar sub (Score:3, Informative)
As for organizations beating slashdot to the punch on this one, that's true... but it's good to see this getting even more exposure. :)
GPGPU (General-Purpose computation on GPUs) was a hot topic at various conferences in 2003; a number of papers were published on the subject. At SIGGRAPH 2004 [siggraph.org] there will be a full-day course [gpgpu.org] on GPGPU given by eight of the experts in the field (including myself).
Mark Harris of NVIDIA [nvidia.com] maintains a website [gpgpu.org] dedicated to GPGPU topics, including discussion forums and news posts.
Re:Link to previous discussion on same/similar sub (Score:4, Insightful)
Microsoft can afford to be lazy with their products; they make money either way. I don't think that will last forever, though. Sometimes they do try hard, NT for example, but then they pile a bunch of poorly designed stuff on top of it and that ruins it. If you can, check out OS X's directory structure: it's beautiful. Now compare that to Windows' cryptic system...
"Microsoft, as usual, announced the feature after Apple shipped it"
"God I'm tired of hearing that phrase over and over again when 95% of the time it's just because Apple can control the hardware and it would be a total disaster if MS included a technology as fast as they do..."
Re:Link to previous discussion on same/similar sub (Score:3, Informative)
Boy, you really have no idea what the heck you are talking about, do you? Of course the basic UNIX stuff is there: /bin, /sbin, /usr/local, all that stuff.
Those directories have very few files in them, and you will also notice a lack of init.d startup scripts. Most of the system is contained in /System.
For example, rather than /etc/init.d, it has startup services in /System/Library/StartupItems.
Googled HTML (Score:5, Informative)
video stuff (Score:5, Interesting)
Comment removed (Score:5, Interesting)
Re:audio stuff (Score:4, Informative)
Re:audio stuff (Score:3, Informative)
There's a company that actually does this. The Universal Audio UAD-1 [uaudio.com] audio DSP had a previous life as a video card and a DVD hardware accelerator. Check out this thread on the UAD forums [chrismilne.com] for more technical information.
As has been said many times before ... (Score:5, Insightful)
Re:As has been said many times before ... (Score:5, Interesting)
Re:As has been said many times before ... (Score:4, Informative)
Re:As has been said many times before ... (Score:3, Informative)
Also, I believe that MPlayer, the best video player/encoder I have seen, also uses OpenGL (and thus the video card on a properly configured system) to do playback.
Personally, I don't think there is anything really new in this article.
178 Million in the P4EE (Score:5, Insightful)
In all of this, keep in mind that there's computing and there's computing... the kind of computing power in a GPU is excellent for doing the same numeric computation to every element of a large vector or matrix, not so much for branchy, decision-heavy things like walking a binary tree. You wouldn't want to run a database on something structured like a GPU (or an old vector-processing Cray), but something like a simulation of weather or molecular modeling could be perfect for it.
The similarities of a GPU to a vector processing system bring up an interesting possibility...could Fortran see a renaissance for writing shader programs?
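A toy illustration of that distinction, in plain Python/NumPy rather than GPU code: the first function is the uniform, data-parallel shape of work a GPU is built for; the second is the branchy, data-dependent kind it is not.

```python
import numpy as np

# GPU-friendly: one uniform arithmetic operation applied to every element,
# with no data-dependent branching (the shape of work a fragment program does).
def saxpy(a, x, y):
    return a * x + y

# CPU-friendly: a data-dependent walk of a binary search tree. Every step
# branches on the value just read, which a wide data-parallel pipeline hates.
def tree_search(node, key):
    while node is not None:
        if key == node["key"]:
            return node["value"]
        node = node["left"] if key < node["key"] else node["right"]
    return None

x = np.arange(1_000_000, dtype=np.float32)
y = np.ones_like(x)
print(saxpy(2.0, x, y)[:3])

root = {"key": 5, "value": "five",
        "left": {"key": 2, "value": "two", "left": None, "right": None},
        "right": None}
print(tree_search(root, 2))
```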
Re:178 Million in the P4EE (Score:5, Informative)
Re:178 Million in the P4EE (Score:4, Informative)
Re:178 Million in the P4EE (Score:4, Insightful)
How do you know? In fact, modern GPUs require a large number of small, scattered memory blocks: texture caches, FIFOs for fragments/pixels/texels when they are not in sync, caches for vertex shader and pixel shader programs, etc.
More recent GPUs are notorious for their incredibly long latencies. Long latencies imply that a lot of data has to be stored on chip.
Re:178 Million in the P4EE (Score:3, Informative)
They do of course store data between those stages, and there are caches on the chip. Otherwise performance would be shot all to hell.
I doubt that the original statement that GPU designs don't count the on chip memory is correct. That just seems like an odd way to do it.
Re:178 Million in the P4EE (Score:3, Informative)
Sure it does; it's just that the RAM isn't cache, it's mostly huge register files.
Re:178 Million in the P4EE (Score:3, Funny)
Re:178 Million in the P4EE (Score:5, Insightful)
IMHO, the perfect friend is someone who is interested in maximum performance, knows how to program, and knows something about computer hardware.
Have you looked at Fortran 90, 95 or 2000?
Re: (Score:3, Informative)
Website on this topic (Score:5, Informative)
and a sourceforge project too (Score:5, Informative)
from the BrookGPU website...
As the programmability and performance of modern GPUs continues to increase, many researchers are looking to graphics hardware to solve problems previously performed on general purpose CPUs. In many cases, performing general purpose computation on graphics hardware can provide a significant advantage over implementations on traditional CPUs. However, if GPUs are to become a powerful processing resource, it is important to establish the correct abstraction of the hardware; this will encourage efficient application design as well as an optimizable interface for hardware designers.
From what I understand, this project is aimed at making an abstraction layer for GPU hardware so that writing code to run on it is easier and more standardised.
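Roughly, the abstraction is "kernels mapped over streams". Below is a toy model of that idea; this is illustration only, not Brook syntax, and run_kernel and length_squared are made-up names.

```python
import numpy as np

# Toy model of the stream/kernel abstraction. A "stream" is just an array;
# a "kernel" is a side-effect-free function that the runtime is free to apply
# to every element in parallel (on a GPU, one fragment per element).
def run_kernel(kernel, *streams):
    return np.array([kernel(*elems) for elems in zip(*streams)], dtype=np.float32)

def length_squared(x, y):
    return x * x + y * y

xs = np.arange(8, dtype=np.float32)
ys = np.ones(8, dtype=np.float32)
print(run_kernel(length_squared, xs, ys))
```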
While not playing games? (Score:4, Funny)
Two words: virtual pr0n
DSP using GPUs (Score:3, Interesting)
here ya go (Score:4, Informative)
www.gpgpu.org [gpgpu.org]
Website on this topic (Score:0)
General-purpose computation using graphics hardware has been a significant topic of study for the last few years. Pointers to a lot of papers and discussion on the subject are available at: www.gpgpu.org [gpgpu.org]
Hacking the GPU (Score:5, Informative)
What comes next. (Score:5, Funny)
"Utilize the sheer computing power of your video card!"
New market blitz, hmmmm.
SETI ports their code, and within five days their average completed work units increase 1000 fold. 13 hours later, they have evidence of intelligent life at 30000 locations within one degree.
Microsoft gets the hint, and comes out with a brilliant plan to utilize GPUs to speed up their OS and add bells and whistles to their UI.
And, once again, Apple and Quartz Extreme is ignored.
Re:What comes next. (Score:5, Funny)
Re:What comes next. (Score:3, Insightful)
Strange, because it is a big problem for using GPUs as coprocessors: scientific computation usually uses 64-bit floats, or on Intel even 80-bit floats!
Re:What comes next. (Score:4, Informative)
That's 64 bits for a four-element vector (RGBA or XYZW), which is thus 16 bits per float. This is referred to as the 'half' floating point data type, as opposed to 'float' or 'double'. This is compatible with RenderMan.
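For a feel of what 16 bits buys you, here is a quick check using NumPy's float16 as a stand-in for the GPU 'half' type (the card's exact rounding behaviour may differ):

```python
import numpy as np

# float16 has a 10-bit mantissa: roughly 3 decimal digits, and integers are
# exact only up to 2048. Values get rounded to the nearest representable 'half',
# and the last value below overflows the half range entirely.
for v in (1000.1, 2049.0, 4099.0, 100000.0):
    print(v, "->", float(np.float16(v)))
```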
It's nice, but could be nicer (Score:5, Informative)
AGP read latency not important when not real time. (Score:3, Insightful)
Not just the GPU : the RAM (Score:5, Interesting)
Wow (Score:5, Interesting)
The CPU can do six billion instructions a second, the GPU can do 18 billion, and every last cycle is being used to stuff a 40MB texture into memory faster. What a waste. Yeah, the walls are even more green and slimy. Whoop-de-fucking-do.
Would it be great if all that processing power could be used for something other than yet-another-graphics-demo?
Like, maybe some new and innovative gameplay?
Frogger (Score:5, Interesting)
Re:Wow (Score:5, Insightful)
I think I speak for many of us (Score:5, Insightful)
Sorry for the flames, but seriously, I get so damn sick of all the "all new games suck" whiners. Look, there are legit reasons to want new technology. It is nice to have better graphics, more realistic sound, etc. It is NICE to have a game that looks and sounds more like reality. Yes, that doesn't make the game great, but that doesn't mean it's worthless.
What's more, don't pretend like all modern games suck while old games ruled. That's a bunch of bullshit. Sure, there are plenty of modern games that suck, but guess what? There are tons of old games that suck too. Thing is, you just tend to forget about them. You remember the greats that you enjoyed or heard about, the ones that helped shape gaming today. You forget all the utter shit that was released, just as is released today.
So get off it. If you don't like nice graphics, fine. Stick with old games, no one is forcing you to upgrade. But don't pretend like there is no reason to want better graphics in games.
Re:I think I speak for many of us (Score:5, Insightful)
There's something that's always puzzled me a little about this site - attached to every single article about some new piece of PC tech - a faster processor, better graphics card, etc - there are a number of comments bemoaning the advance. All of them saying that people don't need the power/speed they have already, that they personally are just fine with 4 year old hardware, or, in this case, that better graphics don't make for better games. Hell, the same is true for mobile phones - I've lost count of the number of comments bemoaning advances in them, too.
It's funny, but I thought this was supposed to be a site for geeks; aren't geeks supposed to *like* newer, better toys?
To get back on topic - no, better graphics are not sufficient for a better game. However, if the gameplay is there, then they can certainly make the experience more enjoyable. Would Quake have been as much fun if it was rendered in wireframes?
Better graphics help add to the sense of realism, making the game a more immersive experience. The whole point of the majority of games is entertainment and (to an extent) escapism. Additionally, what a lot of people like the grandparent poster seem to forget is that most of the big-name game engines are licensed for use in a number of games. Let people like id spend their time and money coming up with the most graphically intensive, realistic engine they can. Think Doom 3'll suck because the gameplay will be crap? Fine, then wait for someone to license the engine and create a better game with it. In the meantime, please shut up and remember that there are those of us who like things to be pretty, as well as useful/well made/fun/(good at $primaryPurpose).
Good graphics on their own won't make a good game, but they will help make a good game great.
Expand this thinking! (Score:4, Interesting)
You're absolutely correct that these "game snobs" are looking at the past through rose-colored graphics, forgetting all of the stinkers of yesteryear. However, it's not just games where this applies. How many times have you heard people complain about how bad movies are now, or music, or books? It's exactly the same phenomenon. When your grandfather tells you how much better things were "back in the day", it's for exactly the same reason. He's looking back at all the good things, while ignoring all of the bad.
Face it, everything mostly sucks. It always has, and it always will. There will always be some gems that really stand out, and those will be what are remembered when people fondly look back on "the old days". Get over it.
This is BIG (Score:5, Insightful)
Don't miss the point that this is not intended for general purpose computing. Don't port OOo (OpenOffice.org) to the graphics chip.
Where it is huge is in signal processing. FPGAs have begun replacing even the G4s in this area recently because of the huge gains in speed vs. power consumption an FPGA affords. However, FPGAs are not bought and used as is, and end up costing a significant amount (of development time/money) to become useful. Being able to use these commodity GPUs for vector processing creates a very desirable price/processing power/power consumption option. If I were nVIDIA or ATI, I would be shoveling these guys money to continue their work.
Siggraph 2003 (Score:5, Informative)
If you have a matrix solver, there is no telling what you can do. And as I remember, these papers show that these calculations are faster than doing the same matrix math on the CPU.
# Linear Algebra Operators for GPU Implementation of Numerical Algorithms
Jens Krüger, Rüdiger Westermann
# Sparse Matrix Solvers on the GPU: Conjugate Gradients and Multigrid
Jeff Bolz, Ian Farmer, Eitan Grinspun, Peter Schröder
# Nonlinear Optimization Framework for Image-Based Modeling on Programmable Graphics Hardware
Karl E. Hillesland, Sergey Molinov, Radek Grzeszczuk
http://www.gpgpu.org/ is a great resource (Score:4, Interesting)
I can see it now.... (Score:3, Interesting)
Ohh well the idea was good while it lasted.
Imagine... (Score:4, Interesting)
Seriously, we have a 16-node Beowulf cluster and each node has an unnecessarily good graphics card in it. A lot of the calculations are matrix-based, e.g. several variables each 1 x thousands (1D) or hundreds x hundreds (2D).
How feasible and worthwhile do you think it would be to tap into that extra processing power?
Re:Imagine... (Score:3, Interesting)
All I can suggest is to download the Brook [stanford.edu] libraries and try them out. See if it helps.
Let me check my notes... (Score:4, Interesting)
Re:Let me check my notes... (Score:3, Informative)
> AGP does a lot better taking data in, but it's still pretty
> costly sending data back to the CPU.
I've heard that mentioned a few times, is it true?
From the AGP 3.0 spec [intel.com]:
The AGP3.0 interface is designed to support several platform generations based upon 0.25μm (and smaller) component silicon technology, spanning several technology generations. As with AGP2.0, the physical interface is designed to operate at a c
Re:Let me check my notes... (Score:4, Interesting)
Notice that they're quick to point out the problem isn't likely a hardware issue. There should be plenty of bandwidth on the AGP bus, but graphics chip makers don't seem to have written their drivers to handle transfers from AGP cards to main memory properly.
Then they run some tests and conclude:
That means even if you can render high-quality images at 30 frames per second, you won't be able to get them out of the graphics card at anything near that rate.
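For scale, this is the readback bandwidth that pulling frames back out at full rate implies; the resolution, pixel size, and frame rate below are illustrative choices, not numbers from the article.

```python
# Readback bandwidth needed to pull rendered frames back out at full rate.
width, height, bytes_per_pixel, fps = 1600, 1200, 4, 30
required = width * height * bytes_per_pixel * fps
print(f"required readback: {required / 2**20:.0f} MiB/s")    # about 220 MiB/s
```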
When... (Score:3, Insightful)
Pseudo repost (Score:4, Informative)
http://developers.slashdot.org/developers/03/12/2
At least, I would imagine most of the comments would be the same or similar....
Finally (Score:5, Funny)
Using GPUs For General-Purpose Computing
I'm glad that finally they started to use the General-Purpose Unit. What took them so long?
Maybe time for a new generation of math-processor? (Score:4, Insightful)
Maybe it's time to start making co-processing add-on cards for advanced operations such as matrix mults and other operations that can be done in parallel at a low level. Add to that a couple of hundred megs of RAM and you have a neat little helper when raytracing, etc. You could easily emulate the cards if you didn't have them (or didn't need them). The branchy nature of the program itself would not affect the performance of the co-processor, since it would only be used for calculations.
I for one would like to see this.
Re:Maybe time for a new generation of math-process (Score:5, Informative)
I can't imagine it would take a whole lot to hack them for just their processing power outside of audio applications.
Re:Maybe time for a new generation of math-process (Score:5, Insightful)
There is a certain overhead, because a communications protocol has to be established between the main processor and the co-processor. For simple tasks the main processor often stops and waits for the co-processor to complete the task and then retrieves the results. For more complicated tasks, the main processor continues, but later an interrupt occurs that the main processor must service.
You must be very careful, or the extra overhead of this communication makes the execution of the task slower than without the co-processor. This is certainly going to happen at some point in the future, when you increase central processor power all the time but keep using the same co-processor.
For example, your matrix co-processor needs to be fed the matrix data, told to start working, and to signal when it is finished. Your performance would not only be limited by the co-processor's speed, but also by the bus transfer rate, and by the impact those bus transfers have on the available CPU-memory bandwidth and on the validity of the on-CPU cache.
When you are unlucky, the next CPU you buy is faster at performing the task itself.
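A back-of-the-envelope way to see that break-even point; all the numbers in the example calls are invented placeholders, not measurements of any real bus or co-processor.

```python
# Offloading pays off only if the co-processor's compute time plus the transfer
# and synchronisation costs beat doing the job on the CPU directly.
def offload_pays_off(n_bytes, cpu_time_s, coproc_time_s,
                     bus_bytes_per_s, sync_overhead_s):
    transfer = n_bytes / bus_bytes_per_s          # ship the data out, results back
    return coproc_time_s + transfer + sync_overhead_s < cpu_time_s

# 8 MB of matrix data over a ~500 MB/s bus with 1 ms of synchronisation cost:
print(offload_pays_off(8e6, cpu_time_s=0.050, coproc_time_s=0.010,
                       bus_bytes_per_s=500e6, sync_overhead_s=0.001))
# ...and the same job once the CPU has gotten three times faster:
print(offload_pays_off(8e6, cpu_time_s=0.017, coproc_time_s=0.010,
                       bus_bytes_per_s=500e6, sync_overhead_s=0.001))
```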
Re:Maybe time for a new generation of math-process (Score:3, Interesting)
Remember the co-processors? Well, actually I don't (I'm a tad too young). But I know about them.
Dig deeper. 8087 FPUs were nice, though they ran hot enough to cook on, but the idea had existed for 15 or more years before they appeared. Try looking into the old DEC PDP-11 archives. There you'll find DEC's own "CIS" or "commercial instruction set", which was a set of boards (later an add-on chip) that added string, character and BCD math instructions. DEC also had an FPU card set that implemented a 64-
Documentation (Score:3, Interesting)
Bass Ackwards? (Score:5, Insightful)
While it's true that general purpose hardware will never perform as well or as efficiently as a design specifically targeted at the task (or at least it had better not), it is equally true that eventually general purpose/commodity hardware will reach a price-performance point where it is more than "good enough" for the majority.
GPU = (Score:5, Funny)
transistor counts through the ages (Score:5, Informative)
Re: transistor counts through the ages (Score:3, Informative)
> Transistor counts keep growing, so I keep updating this and reposting it about once a year.
For those who don't already know, what we now think of as "Moore's Law" was originally a statement about the rate of growth in the number of transistors on a chip, not about CPU speed.
Alternative use (Score:3, Interesting)
Dual Core (Score:5, Interesting)
Even with the ATI X800 XT, 1600x1200 can dip below 30FPS with AA/AF on higher settings. Still a ways to go for that full virtual reality look.
Re:Dual Core (Score:3, Informative)
Re:Dual Core (Score:3, Interesting)
There were dual-GPU ATI cards, and Matrox, and even the old Voodoo2 SLI. Seems you can increase speed with more cores.
Re:Dual Core (Score:3, Informative)
I remember a while back someone did Quake 2 benchmarks on accuracy vs. FPS, and how 79 FPS (I think) was the sweet spot; faster frame rates or a lower refresh rate had a negative effect on accuracy.
But I won't argue for 20 FPS over 80; 100 seems to be the target, IMHO.
Audio DSP (Score:4, Informative)
The problem is that these cards are made to be "write only": basically, fetching anything back from them is *very* slow, which makes them totally useless for this purpose, since you *know* the results are there, but you can't fetch them in a useful/fast manner.
I wonder if it's deliberate, to sell the "pro" cards they use for the rendering farms.
Re:Audio DSP (Score:4, Insightful)
No, it's just the way that the OpenGL and DirectX APIs evolved. There never was any need in the past to have substantial data feedback. The only need back then was to read pixelmaps and selection tags for determining when an object had been picked.
Commodore 64 (Score:5, Interesting)
This concept was being used back in 1988. The Commodore 64 (1 MHz 6510, a 6502-like microprocessor) had a peripheral 5.25" disk drive called the 1541, which itself had a 1 MHz 6502 CPU in it, connected via a serial link.
It became common practice to introduce fast loaders: these were partially resident in the C64 and also in the 1541, effectively replacing the 1541's limited firmware.
However, demo programmers figured out how to utilise the 1541 further: one particular demo involved uploading a program to the 1541 at the start, then upon every screen refresh uploading vectors to the 1541, on which the 1541 would perform calculations in parallel with the C64; at the end of the screen, the C64 would fetch the results from the 1541 and incorporate them into the next frame.
Equally, a GPU provides a similar capability if used this way.
Re:Commodore 64 (Score:3, Insightful)
Re:Commodore 64 (Score:3, Informative)
All very impressive, but.... (Score:4, Insightful)
However, I do know that a lot of people had been wondering about this for a while: could it be done, and was it worth attempting? So now we know. Maybe we shall soon see PCI cards containing an array of GPUs; I imagine the cooling arrangements will be quite interesting!
There are other things which are faster than a typical CPU; aren't some of the processors in games machines 128-bit? Again, you could in theory put some of these together as a co-processor of some sort.
This was a good piece of work technically, but it says something about society that the fastest mass-produced processors, whether for GPUs or games consoles, exist because people want a higher frame rate in Quake. I can't think of any professional application that needs really fast graphics output, but many that could use faster processing. So why can't Intel and AMD stop putting everything in the one CPU (multiple CPUs with one memory are not really much better), and make co-processors again, which will do fast matrix operations on very large arrays, etc, for those who need them? The ultimate horror of the one CPU philosophy was the winmodem and winprinter, both ridiculous. Silicon is in fact quite cheap, as Nvidia have proved, people's time while they wait for long calculations to finish is not.
Maybe we are going to see an architectural change coming, I expect it will be supported by FOSS long before Longhorn, just like the AMD64.
what's really needed (Score:3, Interesting)
What's really needed is to couple the GPU and CPU in such a way that the GPU actually runs a very low level O/S, like an L4Ka style kernel (http://l4ka.org/), and becomes "just another" MP resource.
Then, on top of this low level, actually runs the UI graphics driver and so on. Other tasks can also run, but ultimately the priority is given to the UI driver.
Then, the O/S on the CPU needs to be able to know generally how to distribute tasks across to the GPU. Fairly standard for a tightly coupled MP that has shared bus memory.
Why do I say this? Because the result is
(a) if you're using an especially high performance application, the GUI runs full throttle dedicated to rendering/etc and acts as per normal;
(b) if you're not, e.g. when running Office or engineering or other compute-intensive tasks (e.g. recoding video without displaying it), then the GPU is just another multiprocessor resource to soak up cycles.
Then CPU/GPU is just a seamless computing resource. The fantastic benefit of this is that if the O/S is designed properly, it could allow simply buying/plugging in additional PCI cards (well, PCI is probably not good because of low speed, perhaps AGP?) that are simply "additional processors" - then you get a relatively cheap way of putting more MP into your machine.
Very bad article (Score:3, Interesting)
Just look at the matrix multiplication case. Look at the graph and see that 1000x1000 takes 30 seconds on the CPU and 7 seconds on the GPU. Let's translate that to millions of operations per second: CPU -> 33 Mop/s, GPU -> 142 Mop/s. Matrix multiplication has cubic complexity, so for the CPU: 1000 * 1000 * 1000 / 30 seconds / 1000000 = 33 Mop/s.
Now think a while: 33 million operations per second on a 1.5 GHz Pentium 4 with SSE (I assume there is no SSE2). The Pentium 4 can do a multiply and an add per clock, which makes two ops per clock. So we get 3 billion ops per second peak performance! What they are effectively claiming is that the CPU runs matrix multiply at about 1/100th of its peak. That is unlikely. You can get 2/3 of peak on a Pentium 4. Just look at the ATLAS [sourceforge.net] or FLAME [utexas.edu] projects. If you use one of these, you can multiply a 1000x1000 matrix in half a second: 14 times faster than the quoted GPU.
Another thing is the floating point arithmetic. The GPU uses 32-bit numbers (at most). This is too small for most scientific codes. The CPU can do 64 bits. Also, if you use 32 bits on the CPU it will be 4 times as fast as 64-bit (SSE extension). So in 32-bit mode, the Pentium 4 is 28 times faster than the quoted GPU.
Finally, the length of the program. The reason matrix multiply was chosen is because it can be encoded in very short code: three simple loops. This fits well within the 128-instruction vertex program length, so you don't have to keep reloading the code. For more challenging codes it will exceed the allowed vertex program length. The three-loop matrix multiply implementation stresses memory bandwidth, and the CPU has MB/s where the GPU has GB/s. No wonder the GPU wins. But I could guess that without making any tests.
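To check the arithmetic above explicitly (same assumptions as the post: n^3 multiply-adds counted as single ops, 2 ops per clock at 1.5 GHz):

```python
# Re-running the post's numbers.
n = 1000
ops = n ** 3

cpu_time, gpu_time = 30.0, 7.0                 # seconds, read off the paper's graph
cpu_mops = ops / cpu_time / 1e6                # ~33 Mop/s
gpu_mops = ops / gpu_time / 1e6                # ~143 Mop/s (the post rounds to 142)

peak_mops = 1.5e9 * 2 / 1e6                    # 1.5 GHz at 2 ops/clock = 3000 Mop/s
print(f"CPU: {cpu_mops:.0f} Mop/s, GPU: {gpu_mops:.0f} Mop/s")
print(f"CPU as a fraction of peak: {cpu_mops / peak_mops:.1%}")   # roughly 1%
```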
Three questions (Score:3, Interesting)
2. Has anyone tried something similar to what Quartz Extreme does but for non-graphical tasks?
3. How come GPU makers are not trying to make a CPU by themselves?
Re:Three questions (Score:3, Informative)
Microsoft, for Longhorn, and freedesktop.org, for X11. Both go quite a bit beyond Quartz Extreme by using D3D/OpenGL for all drawing, not just compositing.
3. How come GPU makers are not trying to make a CPU by themselves?
GPUs are very different from CPUs. Graphics is almost infinitely parallelizable, so you are really just limited by how many parallel pipelines you can fit on the chip.
Interesting work that raises some questions... (Score:4, Informative)
! Matrix results
As mentioned earlier in the report, the graphics pipeline does not support a branch instruction. So with a limited number of assembly instructions that can be executed in each stage of the pipeline (either 128 or 256 in current cards), how is it possible for them to perform a 1500x1500 matrix multiplication? To calculate a single result, 1500 multiplications need to take place, and even if they are really clever about how they encode the data into textures to optimise access, they would need two texture accesses for every 4 multiplications. By my calculations that is 1875 instructions, where you can only do 128 or 256.
My tests found that, using the Cg compiler provided by NVidia, a matrix of size 26x26 could be multiplied before the unrolling of the for loop exceeded the 256-instruction limitation.
One aspect that my evaluation did not get to examine was the possibility of reading partial results back from the framebuffer to texture memory, along with loading a slightly modified program to generate the next partial result. They don't mention whether they used this strategy, so I assume that they didn't.
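Working backwards from the figures above (4 values packed per RGBA texel and about 5 instructions per group of 4 multiplications, which reproduces the 1875 count), a rough pass count for that read-back-partials strategy looks like this; the function and its parameters are illustrative, not from the report.

```python
# Rough pass count for a long dot product under a fragment-program instruction cap.
def instructions_and_passes(n, insns_per_group=5, values_per_group=4, insn_limit=256):
    groups = -(-n // values_per_group)            # ceil(n / 4)
    total_insns = groups * insns_per_group        # 1500 -> 1875, as computed above
    groups_per_pass = insn_limit // insns_per_group
    passes = -(-groups // groups_per_pass)        # render passes, reading back partials
    return total_insns, passes

print(instructions_and_passes(1500))              # (1875, 8)
```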
! Inclusion of a branch instruction
Even if a branch instruction were to be included in the vertex and fragment stages of the pipeline, it would cause serious timing issues. As a student of Computer Science, I have been taught that a pipeline operates at the speed of its slowest stage, and from designing simple pipelined ALUs I see the logic behind it. However, if a branch instruction is included, then the fragment processing stage could become the slowest, as the pipeline stalls waiting for the fragment processor to output its information into the framebuffer. I believe it is for this reason that the GPU designers specifically did not include a branch instruction.
! Accuracy
My work also found a serious accuracy issue with attempting computation on the GPU. Firstly, the GPU hardware represents all numbers in the pipeline as floating point values. As many of you can probably guess, this brings up the ever-present problem of floating point error. The interface between GPU and CPU traditionally carries 8-bit values. Once they are imported into the 32-bit floating point pipeline, the representation has them falling between 0 and 1, meaning that these numbers must be scaled up to their intended representations (integers between 0 and 255, for example) before computation can begin. Combine these two necessary operations and what I saw was a serious accuracy issue where five of my nine results (in the 3x3 matrix) were one integer value out.
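A small illustration of the drift that the normalise/rescale round trip introduces, using NumPy's float16 as a stand-in for a limited-precision channel (an assumption; the report doesn't say what the card used internally):

```python
import numpy as np

# 8-bit integers are normalised into [0, 1], pass through a limited-precision
# float pipeline (float16 assumed here), and are rescaled to integers afterwards.
# The round trip alone drifts the values off the integers; compounded over a
# multiply-accumulate, that drift can push a result across an integer boundary.
ints = np.arange(256)
channel = (ints / 255.0).astype(np.float16)   # value as stored in a texture channel
rescaled = channel.astype(np.float64) * 255.0  # scaled back up for an integer result
drift = np.abs(rescaled - ints)
print(f"worst single-value drift after the round trip: {drift.max():.3f}")
```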
While I don't claim to be an expert on these matters, I do think there is the possibility of using commodity graphics cards for general purpose computation. However, using hardware that is not designed for this purpose carries some serious constraints, in my opinion. Anyone who cares to look at my work can find it here [netsoc.tcd.ie]
the magic of "streaming i/o" (Score:4, Informative)
It's not always easy to reformulate algorithms to fit streaming memory and the other limitations of GPUs. This issue has come up in earlier generations of custom computers. So there are things like cyclic matrices that map multi-dimensional matrix operations onto 1-D streams, and so on.
The 2003 SIGGRAPH had a session [siggraph.org] on this topic showing you could implement a wide variety of algorithms outside of graphics.
Folding@Home is actually working on this... (Score:4, Interesting)
Re:Not the Point (Score:4, Interesting)
No, it's like using your pop-up camper for storage space when you're using it on holidays.
Re:Not the Point (Score:4, Insightful)
What's relevant is that to the processor on a graphics card, its dedicated purpose is simply a bunch of logic. There's no dedicated "this must be used for pixels only, all else is waste" logic inherent in the system. There are MANY purposes for which the same/similar logic that applies in generating 3D imagery can be used, and that seems to be the purpose of this paper: run THOSE types of operations on the GPU. Some things they won't be able to do well, no doubt - but those they can, they can do extremely well.
Re:Not the Point-headbanger. (Score:4, Insightful)
On those operating systems that require them, that could very well be.
It still makes a nice thought that a Linux box without even X installed, but with a kickass graphics card, could crunch away doing something 4 times quicker than any windowed machine.
Not so... (Score:4, Interesting)
Apple's Newton had no CPU, only a GPU that was more than adequate.
Ideas like these are good in general. I'd like to see the industry move away from the CPU-as-chief status quo. Amigas were years ahead of their time in large part because the emphasis wasn't as much on central processing. The CPU did only what it was supposed to do -- hand out instructions to the gfx and audio subsystems.
Hardly using a "motorcycle to tow a pop-up camper." If anything, the conventional wisdom is, "when all you have is a hammer, everything looks like a nail."
Re:Not the Point (Score:5, Funny)
KFG
Re:Maybe that's the answer... (Score:3, Interesting)
My understanding is that they used GCC.
Further, "Another said that some version of Linux had to be used to compare apples to apples. Well, MacOS X isn't Linux, and the desktop standard for x86 machines is Windows (not that using a properly optimized Linux bothered the Opterons very much). You want to know what machine is fastest, you test in their native environment."
Oh, silly me. Processors are s
Re:Maybe that's the answer... (Score:3, Insightful)
In any case,
Re:Maybe that's the answer... (Score:3, Interesting)
The compilers that seem to be best/fully optimized for the G5 are the new IBM XL compilers, released at the beginning of the year.
http://forums.macnn.com/showthread.php?s=&threadid=197118
There doesn't seem to be much benchmarking done using them yet, but all information points to significant gains in performance when using the IBM compiler versus GCC (not surprising, since IBM built the chip). The only benchmark I can find is
Re:Unused computing Power? (Score:5, Insightful)
a) Not equal. Apples and oranges. A GPU will do repeated calculations very, very fast, like matrix transforms and the like. A CPU on the other hand will make decisions based on input, rather than just crunching numbers
b) The main display (the GUI) already uses many tricks on the graphics card. The hard part is making sure that all graphics cards support the features. Things like the xrender extension and such are becoming more common as graphics cards and drivers get "standard" capabilities
c) Your imagination is the limit as to what it could be used for. Just realize that it's a good data processing unit, not a good program execution unit. Use each for their strengths.
d) Modified? With new cards/drivers, all it takes is OpenGL calls to start taking advantage of this power. All it really takes is someone who knows what they're doing and has a bit of inspiration.
Re:Unused computing Power? (Score:4, Informative)
Longhorn is supposed to offload a lot of the GUI stuff to the card. So yeah, it'd take advantage of untapped power of the card. However, for other general purpose stuff, it wouldn't be so interesting. It's kinda like comparing a Ferrari to a school bus. The Ferrari will run circles around the bus, but can only ferry 2 people. The bus can move a LOT of cargo, but not as fast as the Ferrari. We're talking about specialization here. The trick is to find ways to take what the GPU is good at and make it useful.
Re:Violation of Compartmentalization (Score:4, Insightful)
No, having a CPU that does everything is what violates the tenet.
I don't know about you, but I don't have a chip that does my video processing for me, I don't have a chip that does all the encryption for me, and I don't have a chip that handles (en/de)capsulating network traffic, as well as handling interrupts and routing.
Having a second processor that does some specialized work that a CPU isn't good at is an improvement, not a nightmare. I'd love to be able to plug in a chips or two into my PC and have them do better-than realtime MPEG-4 encoding that doesn't affect my processor at all... Who wouldn't?