Using GPUs For General-Purpose Computing
Paul Tinsley writes "After seeing the press releases from both Nvidia and ATI announcing their next generation video card offerings, it got me to thinking about what else could be done with that raw processing power. These new cards weigh in with transistor counts of 220 and 160 million (respectively) with the P4 EE core at a count of 29 million. What could my video card be doing for me while I am not playing the latest 3d games? A quick search brought me to some preliminary work done at the University of Washington with a GeForce4 Ti 4600 pitted against a 1.5GHz P4. My favorite excerpt from the paper:
'For a 1500x1500 matrix, the GPU outperforms the CPU by a factor of 3.2.' A PDF of the paper is available here."
Link to previous discussion on same/similar sub... (Score:5, Informative)
Googled HTML (Score:5, Informative)
Website on this topic (Score:5, Informative)
Re:178 Million in the P4EE (Score:5, Informative)
PDF to HTML (Score:2, Informative)
Hacking the GPU (Score:5, Informative)
It's nice, but could be nicer (Score:5, Informative)
Siggraph 2003 (Score:5, Informative)
If you have a matrix solver, there is no telling what you can do. And as I recall, these papers show that the GPU is faster at these matrix calculations than the CPU doing the same work.
# Linear Algebra Operators for GPU Implementation of Numerical Algorithms
Jens Krüger, Rüdiger Westermann
# Sparse Matrix Solvers on the GPU: Conjugate Gradients and Multigrid
Jeff Bolz, Ian Farmer, Eitan Grinspun, Peter Schröder
# Nonlinear Optimization Framework for Image-Based Modeling on Programmable Graphics Hardware
Karl E. Hillesland, Sergey Molinov, Radek Grzeszczuk
here ya go (Score:4, Informative)
www.gpgpu.org [gpgpu.org]
Website on this topic (Score:0)
by Anonymous Coward on Sunday May 09, @01:57AM (#9098550)
General-purpose computation using graphics hardware has been a significant topic of study for the last few years. Pointers to a lot of papers and discussion on the subject are available at: www.gpgpu.org [gpgpu.org]
Re:178 Million in the P4EE (Score:4, Informative)
and a sourceforge project too (Score:5, Informative)
from the BrookGPU website...
As the programmability and performance of modern GPUs continues to increase, many researchers are looking to graphics hardware to solve problems previously performed on general purpose CPUs. In many cases, performing general purpose computation on graphics hardware can provide a significant advantage over implementations on traditional CPUs. However, if GPUs are to become a powerful processing resource, it is important to establish the correct abstraction of the hardware; this will encourage efficient application design as well as an optimizable interface for hardware designers.
From what I understand, this project is aimed at making an abstraction layer for GPU hardware so that writing code to run on it is easier and standardised.
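As a rough sketch of what such an abstraction buys you (plain C++ for illustration only, not Brook's actual syntax; the names below are made up): you write a small per-element kernel and hand it to the layer, which is then free to schedule it on the fragment units instead of a CPU loop.

    #include <cstdio>
    #include <vector>

    // Conceptual sketch: a "stream" is a large array and a "kernel" is a pure
    // function applied independently to every element. The abstraction layer
    // decides how the loop is actually executed -- on a GPU it would become a
    // fragment program run over a texture.
    using Stream = std::vector<float>;

    template <typename Kernel>
    Stream map_kernel(const Stream& a, const Stream& b, Kernel k) {
        Stream out(a.size());
        for (std::size_t i = 0; i < a.size(); ++i)  // every element independent
            out[i] = k(a[i], b[i]);
        return out;
    }

    int main() {
        Stream x(1024, 2.0f), y(1024, 3.0f);
        Stream z = map_kernel(x, y, [](float a, float b) { return a * b + 1.0f; });
        std::printf("%f\n", z[0]);  // 7.0
    }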
Pseudo repost (Score:4, Informative)
http://developers.slashdot.org/developers/03/12/2
At least, I would imagine most of the comments would be the same or similar....
Re:Altivec (Score:2, Informative)
Re:Not so... (Score:2, Informative)
Re:Maybe time for a new generation of math-process (Score:5, Informative)
I can't imagine it would take a whole lot to hack them for just their processing power outside of audio applications.
transistor counts through the ages (Score:5, Informative)
Re:Unused computing Power? (Score:4, Informative)
Longhorn is supposed to offload a lot of the GUI stuff to the card. So yeah, it'd take advantage of untapped power of the card. However, as for other general purpose stuff, it wouldn't be so interesting. It's kinda like comparing a Ferrari to a school bus. The Ferrari will run circles around the bus, but can only ferry 2 people. The bus can move a LOT of cargo, but not as fast as the Ferrari. We're talking about specialization here. The trick is to find ways to take what the GPU is good at and make it useful.
Audio DSP (Score:4, Informative)
The problem is that these cards are made to be "write only": basically, fetching anything back from them is *very* slow, which makes them totally useless for the purpose. You *know* the results are there, but you can't fetch them in a useful/fast manner.
I wonder if it's deliberate, to sell the "pro" cards they use for rendering farms.
Re:audio stuff (Score:4, Informative)
Re:178 Million in the P4EE (Score:3, Informative)
They do of course store data between those stages, and there are caches on the chip. Otherwise performance would be shot all to hell.
I doubt that the original statement, that GPU designs don't count the on-chip memory, is correct. That just seems like an odd way to do it.
Re:Dual Core (Score:3, Informative)
Re:Link to previous discussion on same/similar sub (Score:5, Informative)
Re: transistor counts through the ages (Score:3, Informative)
> Transistor counts keep growing, so I keep updating this and reposting it about once a year.
For those who don't already know, what we now think of as "Moore's Law" was originally a statement about the rate of growth in the number of transistors on a chip, not about CPU speed.
Re:Let me check my notes... (Score:3, Informative)
> AGP does a lot better taking data in, but it's still pretty
> costly sending data back to the CPU.
I've heard that mentioned a few times, is it true?
From the AGP 3.0 spec [intel.com]:
The AGP3.0 interface is designed to support several platform generations based upon 0.25 µm (and smaller) component silicon technology, spanning several technology generations. As with AGP2.0, the physical interface is designed to operate at a common clock frequency of 66 MHz. Its source synchronous data strobe operation, however, is octal-clocked and transfers eight double words (Dwords) of data within the span of time consumed by a single common clock cycle. The AGP3.0 data bus provides a peak theoretical bandwidth of 2.1 GB/s (32 bits per transfer at 533 MT/s). Both the common clock and source synchronous data strobe operation and protocols are similar to those employed by AGP2.0.
Later on Page 96:
Traditional AGP devices can demand up to the maximum bandwidth available over the AGP ports. However, the AGP system does not guarantee to deliver the requested bandwidth, nor does it guarantee transfers will take place within some clearly specified request/transfer latency time.
This is done by the system guaranteeing to process a specified number (N) of read or write transactions of a specified size (Y) during each isochronous time period (T). An AGP3.0 device can divide this bandwidth between read and write traffic as appropriate. Further, the system transfers isochronous data over the AGP3.0 Port within a specified latency (L).
(emphasis mine)
I'm no expert, just asking if the "low upstream bandwidth" assumption is true. If it is, there could still be some applications (e.g. simple data compression) that could use it. Also, maybe output from the VGA/DVI ports could be tapped.
Re:audio stuff (Score:3, Informative)
There's a company that actually does this. The Universal Audio UAD-1 [uaudio.com] audio DSP had a previous life as a video card and a DVD hardware accelerator. Check out this thread on the UAD forums [chrismilne.com] for more technical information.
Comment removed (Score:3, Informative)
Re:Dual Core (Score:3, Informative)
I remember a while back someone did Quake 2 benchmarks on accuracy vs. FPS, and found that 79 FPS (I think) was the sweet spot; going faster, with a lower refresh rate, had a negative effect on accuracy.
But I won't argue 20 FPS over 80; 100 seems to be the target, IMHO.
Interesting work that raises some questions... (Score:4, Informative)
! Matrix results
As mentioned earlier in the report, the graphics pipeline does not support a branch instruction. So with a limited number of assembly instructions that can be executed in each stage of the pipeline (either 128 or 256 in current cards), how is it possible for them to perform a 1500x1500 matrix multiplication? To calculate a single result, 1500 multiplications would need to take place, and even if they are really clever about how they encode the data into textures to optimise access, they would need two texture accesses for every 4 multiplications. By my calculations that is 1875 instructions, where you can only do 128 or 256.
My tests found that, using the Cg compiler provided by NVIDIA, a matrix of size 26x26 could be multiplied before the unrolling of the for loop exceeded the 256-instruction limitation.
One aspect that my evaluation did not get to examine was the possibility of reading partial results back from the framebuffer to texture memory, along with loading a slightly modified program to generate the next partial result. They don't mention whether they used this strategy, so I assume that they did not.
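For what it's worth, the multi-pass idea is easy to see in a CPU-side sketch (my own illustration, not anything from the paper): each "pass" may only touch a bounded number of elements, standing in for the 128/256-instruction limit, and the running total plays the role of a partial result written to the framebuffer and fed back in as a texture.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Split a 1500-element dot product into passes that each fit an
    // instruction budget, carrying the accumulated value between passes.
    float dot_multipass(const std::vector<float>& a, const std::vector<float>& b,
                        std::size_t budget) {
        float accum = 0.0f;  // the value carried from pass to pass
        for (std::size_t start = 0; start < a.size(); start += budget) {
            std::size_t end = std::min(start + budget, a.size());
            for (std::size_t i = start; i < end; ++i)  // one bounded-length pass
                accum += a[i] * b[i];
        }
        return accum;
    }

    int main() {
        std::vector<float> a(1500, 1.0f), b(1500, 2.0f);
        std::printf("%f\n", dot_multipass(a, b, 64));  // 3000
    }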
! Inclusion of a branch instruction
Even if a branch instruction were to be included in the vertex and fragment stages of the pipeline, it would cause serious timing issues. As a student of Computer Science, I have been taught that the pipeline operates at the speed of the slowest stage, and from designing simple pipelined ALUs, I see the logic behind it. However, if a branch instruction is included, then the fragment processing stage could become the slowest, as the pipeline stalls waiting for the fragment processor to output its information into the framebuffer. I believe it is for this reason that the GPU designers specifically did not include a branch instruction.
! Accuracy
My work also found a serious accuracy issue with attempting computation on the GPU. Firstly, the GPU hardware represents all numbers in the pipeline as floating point values. As many of you can probably guess, this brings up the ever-present problem of 'floating point error'. The interface between GPU and CPU is traditionally 8-bit values. Once they are imported into the 32-bit floating point pipeline, the representation has them falling between 0 and 1, meaning that these numbers must be scaled up to their intended representations (integers between 0 and 255, for example) before computation can begin. Combine these two necessary operations, and what I saw was a serious accuracy issue where five of my nine results (in the 3x3 matrix) were one integer value out.
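To make the scaling round trip concrete, here is a minimal CPU-side sketch with made-up numbers; it is my own illustration of the failure mode, not the author's test code.

    #include <cmath>
    #include <cstdio>

    // 8-bit integer inputs are mapped into [0,1], a dot product is computed on
    // the scaled floats, and the result is scaled back up to integer range.
    // The accumulated floating point error decides whether a truncating
    // conversion lands on the correct integer or one below it.
    int main() {
        int a[3] = {12, 200, 37};
        int b[3] = {90, 3, 255};

        long exact = 0;                             // reference integer result
        for (int i = 0; i < 3; ++i) exact += (long)a[i] * b[i];

        float acc = 0.0f;                           // GPU-style scaled path
        for (int i = 0; i < 3; ++i)
            acc += (a[i] / 255.0f) * (b[i] / 255.0f);
        float recovered = acc * 255.0f * 255.0f;    // undo the [0,1] scaling

        std::printf("exact     = %ld\n", exact);
        std::printf("truncated = %ld\n", (long)recovered);        // may be one low
        std::printf("rounded   = %ld\n", std::lround(recovered)); // exact here
    }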
While I don't claim to be an expert on these matters, I do think there is the possibility of using commodity graphics cards for general purpose computation. However, using hardware that is not designed for this purpose imposes some serious constraints, in my opinion. Anyone who cares to look at my work can find it here [netsoc.tcd.ie]
Re:178 Million in the P4EE (Score:3, Informative)
Sure it does; it's just that the RAM isn't cache, it's mostly huge register files.
Re:Commodore 64 (Score:3, Informative)
Re:Commodore 64 (Score:1, Informative)
Actually, the processor in the 1541 was an ordinary 6502. The 6510 added some memory mapping stuff that the drive didn't need.
the magic of "streaming i/o" (Score:4, Informative)
It's not always easy to reformulate algorithms to fit streaming memory and other limitations of GPUs. This issue has come up in earlier generations of custom computers. So there are things like cyclic matrices that map multi-dimensional matrix operations into 1-D streams, and so on.
The 2003 SIGGRAPH had a session [siggraph.org] on this topic showing you could implement a wide variety of algorithms outside of graphics.
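As a minimal sketch of the reformulation idea (my own illustration, not taken from the SIGGRAPH course material): store the 2-D matrix row-major in one flat array and let the kernel recover (row, col) from the stream index, so the whole operation becomes a single linear sweep, which is exactly the access pattern streaming hardware likes.

    #include <cstdio>
    #include <vector>

    int main() {
        const int N = 4;
        std::vector<float> A(N * N), B(N * N), C(N * N);
        for (int i = 0; i < N * N; ++i) { A[i] = (float)i; B[i] = 2.0f * i; }

        for (int i = 0; i < N * N; ++i) {   // one pass over the 1-D stream
            int row = i / N, col = i % N;   // recovered 2-D coordinates
            C[i] = A[i] + B[i] + (row == col ? 1.0f : 0.0f);  // e.g. add identity
        }
        std::printf("C[0]=%g C[5]=%g\n", C[0], C[5]);  // 1 and 16
    }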
Re:As has been said many time before ... (Score:4, Informative)
Re:As has been said many time before ... (Score:3, Informative)
Also, I believe that mplayer, the best video player/encoder I have seen, also uses OpenGL (and thus the video card, on a properly configured system) to do playback.
Personally, I don't think there is anything really new in this article.
Re:What comes next. (Score:4, Informative)
That's 64 bits for a four-element vector (RGBA or XYZW), which is thus 16 bits per float. This is referred to as the 'half' floating point data type, as opposed to 'float' or 'double'. This is compatible with Renderman.
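For reference, that 16-bit layout is 1 sign bit, 5 exponent bits (bias 15) and 10 mantissa bits; a minimal decoder looks roughly like this (my own C++ sketch, not code from any driver).

    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    // Decode a 16-bit "half" into a 32-bit float.
    float half_to_float(std::uint16_t h) {
        int sign = (h >> 15) & 1;
        int exp  = (h >> 10) & 0x1F;
        int man  = h & 0x3FF;
        float s = sign ? -1.0f : 1.0f;
        if (exp == 0)                      // zero or denormal
            return s * std::ldexp(man / 1024.0f, -14);
        if (exp == 31)                     // infinity or NaN
            return man ? NAN : s * INFINITY;
        return s * std::ldexp(1.0f + man / 1024.0f, exp - 15);
    }

    int main() {
        std::printf("%g\n", half_to_float(0x3C00));  // 1
        std::printf("%g\n", half_to_float(0x4248));  // 3.140625 (closest to pi)
        std::printf("%g\n", half_to_float(0x7BFF));  // 65504 (largest finite half)
    }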
Re:Three questions (Score:3, Informative)
Microsoft, for Longhorn, and freedesktop.org, for X11. Both go quite a bit beyond Quartz Extreme by using D3D/OpenGL for all drawing, not just compositing.
3. How come GPU makers are not trying to make a CPU by themselves?
GPUs are very different from CPUs. Graphics is almost infinitely parallelizable, so you are really just limited by how many execution units you can stick on the chip. Assuming enough memory bandwidth, you get a nearly linear increase with increasing numbers of execution units. CPUs, on the other hand, deal with general-purpose code that has an inherent parallelism of about 3-way to 4-way at most. So CPU manufacturers have to do clever things like SMT to take advantage of increased execution resources, but mainly must concentrate on ramping up clock speed and memory bandwidth.
Interestingly enough, GPU makers wouldn't be very good at making CPUs. GPUs are designed using high-level hardware description languages like VHDL. This has a big impact on their maximum clock speed, but that doesn't really matter, because they can always double the number of pipelines and get a nearly 2x increase in performance. Meanwhile, CPUs are designed by hand and tweaked to get every last MHz, because throwing twice as many execution units on the CPU wouldn't help performance much at all.
Re:Link to previous discussion on same/similar sub (Score:3, Informative)
Boy, you really have no idea what the heck you are talking about, do you? Of course the basic UNIX stuff is there, /bin, /sbin, /usr/local, all that stuff.
Those directories have very few files in them; you will also notice a lack of init.d startup scripts. Most of the system is contained in /System.
For example, rather than /etc/init.d, it has startup services in /System/Library/StartupItems. There is an apache folder there, and in it are the scripts necessary to start Apache, along with a file which describes Apache's dependencies. Also, these startup items are multilingual: you can boot into any language you want. All of this in one folder. That's f*cking elegance, yet it is only a very small example.
Check it out, you will see.
Re:video stuff (Score:1, Informative)
Re:Link to previous discussion on same/similar sub (Score:3, Informative)
As for organizations beating slashdot to the punch on this one, that's true... but it's good to see this getting even more exposure. :)
GPGPU (General-Purpose computation on GPUs) was a hot topic at various conferences in 2003; a number of papers were published on the subject. At SIGGRAPH 2004 [siggraph.org] there will be a full-day course [gpgpu.org] on GPGPU given by eight of the experts in the field (including myself).
Mark Harris of NVIDIA [nvidia.com] maintains a website [gpgpu.org] dedicated to GPGPU topics, including discussion forums and news postings. Well worth a browse if you're interested in GPGPU topics.
I look forward to seeing some of you at SIGGRAPH! :)
--Cliff