Graphics Programming Hardware

Sorting Algorithm Breaks Giga-Sort Barrier, With GPUs

An anonymous reader writes "Researchers at the University of Virginia have recently open sourced an algorithm capable of sorting at a rate of one billion (integer) keys per second using a GPU. Although GPUs are often assumed to be poorly suited for algorithms like sorting, their results are several times faster than the best known CPU-based sorting implementations."
  • Re:Um... (Score:5, Interesting)

    by PrecambrianRabbit ( 1834412 ) on Sunday August 29, 2010 @11:19PM (#33412196)

    Given that the particular hardware setup is detailed here (a GTX 480 achieves the 1 billion keys/sec figure), and the algorithm used (radix sort) has known asymptotic behavior (O(nk) for n keys of length k), 10^9 keys/sec is quite meaningful, particularly since it's a significant implementation challenge (possibly even an algorithmic challenge) to port this algorithm to a GPU.

    Furthermore, I think sorting speed is appropriately measured in keys/sec. Big-O does not in fact describe speed, but rather an upper bound on the asymptotic growth of an algorithm's running time; it has to be paired with the implementation, architecture, and data set to determine an actual speed. It turns out the constant factors can be quite important in practice.
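
    As a rough illustration of what keys/sec means in practice, here is a minimal single-threaded CPU sketch (my own, not the researchers' code) of an LSD radix sort on 32-bit keys: four 8-bit passes, so O(nk) work, timed and reported in keys per second. The key count is an arbitrary assumption.

        // Minimal CPU baseline sketch: LSD radix sort of 32-bit keys, timed in keys/sec.
        #include <chrono>
        #include <cstdint>
        #include <cstdio>
        #include <random>
        #include <vector>

        static void radix_sort_u32(std::vector<uint32_t>& keys) {
            std::vector<uint32_t> tmp(keys.size());
            for (int shift = 0; shift < 32; shift += 8) {                 // 4 passes of 8 bits: O(n*k)
                size_t count[257] = {0};
                for (uint32_t k : keys) ++count[((k >> shift) & 0xFF) + 1];
                for (int i = 0; i < 256; ++i) count[i + 1] += count[i];   // digit -> start offset
                for (uint32_t k : keys) tmp[count[(k >> shift) & 0xFF]++] = k;
                keys.swap(tmp);
            }
        }

        int main() {
            const size_t n = 1 << 24;                                     // 16M keys (assumption)
            std::vector<uint32_t> keys(n);
            std::mt19937 rng(42);
            for (auto& k : keys) k = rng();

            auto t0 = std::chrono::steady_clock::now();
            radix_sort_u32(keys);
            auto t1 = std::chrono::steady_clock::now();
            double secs = std::chrono::duration<double>(t1 - t0).count();
            std::printf("%.1f million keys/sec\n", n / secs / 1e6);
        }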

  • by crovira ( 10242 ) on Sunday August 29, 2010 @11:21PM (#33412204) Homepage

    Assuming you've got lots and lots of RAM, with room for the keys and lots of space to waste on unfilled pointers:

    Pass 1: read the records and, at the position given by the key's radix, store a record URI.
    Pass 2: sweep RAM, read the record URIs in the order you encounter them, and copy them onto a sequential write device.

    I was doing this years ago.

    If you are careful with the number and sizes of your read buffers, the re-read done for pass 2 doesn't have to be all that disk intensive.

    You can even generate the keys using whatever hash function you find that is truly efficient, and store collisions separately (leave a bit high and go into a linked list maintained by the hash generator for those few keys that hash redundantly).
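
    To make the idea concrete, here is a rough sketch of my reading of the two passes (assuming unique keys drawn from a bounded range, with a record index standing in for the URI; colliding keys would need the separate linked-list handling mentioned above):

        // Pass 1: scatter each record's index into a table addressed by its key.
        // Pass 2: sweep the table in key order and write the hits out sequentially.
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        struct Record { uint32_t key; /* payload ... */ };

        int main() {
            const uint32_t KEY_SPACE = 1 << 20;              // assumed key range
            std::vector<Record> records = {{42}, {7}, {100000}, {3}};

            const int32_t EMPTY = -1;
            std::vector<int32_t> table(KEY_SPACE, EMPTY);    // the "wasted" pointer space
            for (size_t i = 0; i < records.size(); ++i)      // pass 1
                table[records[i].key] = static_cast<int32_t>(i);

            for (uint32_t k = 0; k < KEY_SPACE; ++k)         // pass 2
                if (table[k] != EMPTY)
                    std::printf("record %d (key %u)\n", table[k], k);
        }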

  • Re:Um... (Score:5, Interesting)

    by SpazmodeusG ( 1334705 ) on Sunday August 29, 2010 @11:36PM (#33412258)

    I wish they'd start putting the "P" into these Big-O notations, where the "P" is the number of processors. Some algorithms don't scale well, some do. Putting the P in illustrates this.

    e.g. O(n/P) describes an algorithm that scales perfectly as cores are added; O(n / log(P)), not so much.
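
    As a toy illustration of the O(n/P) case, here is a sketch that splits n items evenly across P threads, so each thread touches roughly n/P of them (the sizes are arbitrary assumptions):

        // Parallel sum over n items with P threads: ~n/P work per thread.
        #include <algorithm>
        #include <cstdint>
        #include <cstdio>
        #include <numeric>
        #include <thread>
        #include <vector>

        int main() {
            const size_t n = 1 << 24;
            const unsigned P = std::max(1u, std::thread::hardware_concurrency());
            std::vector<uint32_t> data(n, 1);
            std::vector<uint64_t> partial(P, 0);

            std::vector<std::thread> workers;
            for (unsigned t = 0; t < P; ++t)
                workers.emplace_back([&, t] {
                    size_t lo = n * t / P, hi = n * (t + 1) / P;   // each thread gets ~n/P items
                    partial[t] = std::accumulate(data.begin() + lo, data.begin() + hi, uint64_t{0});
                });
            for (auto& w : workers) w.join();

            uint64_t total = std::accumulate(partial.begin(), partial.end(), uint64_t{0});
            std::printf("sum = %llu using %u threads\n", (unsigned long long)total, P);
        }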

  • Re:x86 (Score:2, Interesting)

    by jhachen ( 1889398 ) on Monday August 30, 2010 @12:47AM (#33412466)
    The answer to hitting a wall with traditional CPUs is complicated. The number of transistors on new CPUs is actually keeping up with Moore's law, and transistor size and power consumption are also steadily decreasing. However, clock speeds have been stagnant, and performance per clock cycle hasn't been increasing as fast as it has in the past.

    When it comes to raw performance numbers, GPUs destroy CPUs. The problem is taking advantage of the power GPUs offer. For starters, the algorithm has to be parallel in nature; not just part of it, but the majority of it, or the overhead will erase any performance gain. The application also has to run long enough to justify offloading it to the GPU or, again, the overhead will get you.

    Even if you have a parallel algorithm, implementing it isn't trivial. To use CUDA or OpenCL you need not only a good understanding of general parallel programming but also a good understanding of the GPU hardware architecture. These languages are not user friendly; they really put the burden on the programmer. On the other hand, that does mean they can be very powerful in the right hands.
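
    For a sense of what that burden looks like even before any real algorithm shows up, here is a minimal CUDA sketch (purely illustrative, not from TFA): allocate on the device, copy the data over, launch a trivial kernel, copy the result back. The kernel and the launch dimensions are arbitrary choices for the example.

        // Host-side ceremony around a trivial CUDA kernel.
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void scale(float* x, float a, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) x[i] *= a;
        }

        int main() {
            const int n = 1 << 20;
            float* h = new float[n];
            for (int i = 0; i < n; ++i) h[i] = 1.0f;

            float* d;
            cudaMalloc(&d, n * sizeof(float));                 // overhead a kernel-only timer never sees
            cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

            scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);       // the actual "work"

            cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
            std::printf("h[0] = %f\n", h[0]);

            cudaFree(d);
            delete[] h;
        }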

    Now let's say your application meets these criteria, you implemented it in CUDA, and you got a 10x speedup. No one with an ATI card can run it. Sure, you could implement it in OpenCL instead to be cross-platform, but OpenCL still seems to be in its infancy and is not as mature as CUDA.

    I'm not trying to say GPGPU computing has no future, just that it has a long way to go. Parallel programming has actually had quite the revival lately, and I'm truly interested to see where it goes. Some type of parallel compiler that relieves the programmer of all the headaches associated with parallel programming would be groundbreaking and have awesome implications. Some people claim this isn't possible. If this topic interests you, I'd recommend looking into reconfigurable computing. There's some really interesting stuff going on in that area, and it supports a much wider range of algorithms than GPGPU currently does.
  • Efficient? (Score:2, Interesting)

    by scotty.m ( 1881826 ) on Monday August 30, 2010 @01:19AM (#33412578)
    Not sure how they are defining efficient in this experiment. I'd like to see how many clock cycles it took to sort each problem, and maybe how much memory the radix sort is using too.
  • Also (Score:5, Interesting)

    by Sycraft-fu ( 314770 ) on Monday August 30, 2010 @01:30AM (#33412624)

    CS people get way too caught up in Big O, forgetting that, as you note, it describes a theoretical upper bound, not actual practice, AND is only relevant for one factor, n. A great example is ray tracing. CS people love ray tracing because most create a ray tracer as part of their studies. They love to go on about how great it is for rendering because "it is O(log n)!" They love to hype it over the rasterization current GPUs do. However, there are two really important things they forget:

    1) What is n? In this case, polygons. Ray tracing scales with the log of the number of polygons you have, which is why ray-tracer demos love showing off spheres made of millions of polygons and so on; it is cheap to do. However, it turns out polygons aren't the only thing that matters for graphics. Pixels also matter, and ray tracing is O(n) with respect to pixels, and it gets worse with anti-aliasing: for FSAA you cast multiple rays per pixel, so 4x FSAA means 4x the computation (the sketch after point 2 puts rough numbers on this). Rasterization scales much better with resolution and AA; in fact, these days 4x FSAA is often O(1) in actual implementation, in that turning it on doesn't hit the frame rate. That is also why ray-tracing demos are low resolution: THAT'S the hard part.

    2) Real hardware limits. In the case of ray tracing, it is memory access. While those polygons scale in a perfectly logarithmic fashion in theory, real system RAM isn't so forgiving. As you have more and more of them you run into RAM bandwidth limits and things slow down. All the memory access required becomes a limit that wasn't on paper. You can cry all you like that it wasn't a problem in theory; on actual hardware you run into other limits.
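
    To put rough numbers on the pixel scaling in point 1, here is a tiny sketch (resolutions chosen arbitrarily) counting primary rays per frame as resolution and FSAA sample count grow:

        // Primary rays per frame scale linearly with pixels and with FSAA samples.
        #include <cstdio>

        int main() {
            struct { int w, h; } res[] = {{640, 480}, {1920, 1080}, {3840, 2160}};
            for (auto r : res)
                for (int samples : {1, 4})                     // no AA vs. 4x FSAA: 4x the rays
                    std::printf("%4dx%-4d %dx FSAA -> %10ld rays/frame\n",
                                r.w, r.h, samples, (long)r.w * r.h * samples);
        }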

    People need to better understand that Big O is a theoretical tool for comparing the scaling of algorithms. That is useful, but you then have to consider the reality of the situation. CS people also need to understand that for users there is only one benchmark that matters: the clock on the wall. Whatever is faster is better. It doesn't matter what the theory says; if the hardware can do X faster than Y, then X is better according to users.

  • Re:Um... (Score:3, Interesting)

    by frosty_tsm ( 933163 ) on Monday August 30, 2010 @02:36AM (#33412816)
    While it's an interesting idea, many algorithms behave unpredictably on multiple processors (depending on how much communication is required). Some will even be slower!

    The effect that extra CPUs have is too dependent on the hardware implementation to be formalized like this.
  • by kramulous ( 977841 ) on Monday August 30, 2010 @03:18AM (#33412946)

    So, does that mean that if they went out and got the fastest Xeon processor (they used the fastest GPU, excluding the C2070), wrote parallel code, and used the Intel compiler (writing to it), the speedup over the CPU would be less than zero?

    After having just looked at their code: also remove the CPU STL shit (actually any template, since they don't vectorise). If you are writing speedy code for the GPU, then to perform an adequate test you also have to write the CPU code appropriately.

    hahahahaha

    This gets better and better...
    They only timed the compute time. cudaMalloc was not part of the timing, nor was cudaMemcpy.

    Sorry, I only count "time to solution". That is all I'm interested in. I thought it was strange that a sub-O(n**2) algorithm was that much faster on the GPU.

    It is like writing benchmarks that ignore disk IO, reading from system memory, L3 cache, etc., and only time stuff that is already in the registers.
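
    The distinction looks roughly like this sketch (not their benchmark code): kernel-only time via CUDA events versus a wall-clock "time to solution" that also covers cudaMalloc and both cudaMemcpy transfers.

        // Kernel-only timing vs. time-to-solution for a trivial CUDA workload.
        #include <chrono>
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void bump(int* d, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) d[i] += 1;
        }

        int main() {
            const int n = 1 << 24;
            int* h = new int[n]();

            auto wall0 = std::chrono::steady_clock::now();     // time-to-solution starts

            int* d;
            cudaMalloc(&d, n * sizeof(int));
            cudaMemcpy(d, h, n * sizeof(int), cudaMemcpyHostToDevice);

            cudaEvent_t k0, k1;
            cudaEventCreate(&k0);
            cudaEventCreate(&k1);
            cudaEventRecord(k0);
            bump<<<(n + 255) / 256, 256>>>(d, n);              // what kernel-only timing measures
            cudaEventRecord(k1);
            cudaEventSynchronize(k1);

            cudaMemcpy(h, d, n * sizeof(int), cudaMemcpyDeviceToHost);
            auto wall1 = std::chrono::steady_clock::now();     // time-to-solution stops

            float kernel_ms = 0.0f;
            cudaEventElapsedTime(&kernel_ms, k0, k1);
            double total_ms = std::chrono::duration<double, std::milli>(wall1 - wall0).count();
            std::printf("kernel-only: %.2f ms, time to solution: %.2f ms\n", kernel_ms, total_ms);

            cudaFree(d);
            delete[] h;
        }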

  • Re:Also (Score:2, Interesting)

    by Anonymous Coward on Monday August 30, 2010 @09:18AM (#33414156)

    You are missing the same part almost every other CS student out there misses:

    K*O(N)+C

    There is usually some constant K applied to every operation. That can turn a log n operation into something huge. Or the setup (the C part) could take longer than just brute-forcing the issue. Many people make this mistake. However, hardware does not stand still either; people expect next year's graphics to be more photorealistic than last year's.
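
    A toy example of that point: with a large per-operation constant K and a setup cost C, the "better" K*n*log(n) + C can lose to a cheap n^2 brute force for modest n. All constants below are made up for illustration.

        // Cost crossover between a heavy O(n log n) and a cheap O(n^2).
        #include <cmath>
        #include <cstdio>

        int main() {
            const double K1 = 50.0, C1 = 1e6;   // fancy algorithm: heavy constant + setup
            const double K2 = 1.0;              // brute force: tiny constant, no setup
            for (double n : {100.0, 1000.0, 10000.0, 100000.0}) {
                double fancy = K1 * n * std::log2(n) + C1;
                double brute = K2 * n * n;
                std::printf("n=%7.0f  fancy=%12.0f  brute=%12.0f  -> %s wins\n",
                            n, fancy, brute, fancy < brute ? "fancy" : "brute");
            }
        }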

    You use raster vs. ray (because you hate how long ray tracing takes) as the reason you hate Big O. Instead you should wonder: isn't rasterization just a special low-cost implementation of ray casting where the reflection depth is set to 1 or 2? Eventually ray tracing will take over. It has to. There is only so much more pre-cooking you can do (which is what rasterization is). The results do look good and people use them to good effect, but eventually you have to ray trace. In fact, many techniques used today, such as z-buffers, per-vertex shading, per-pixel shaders, etc., were invented to speed up ray casting. They work with rasterization because it is a special case of ray tracing, one where many of the pixels are pre-rendered. Eventually you cannot get the correct colors unless you account for all the light sources in the scene.

    Rasterization also has a scaling problem, in that you can stretch the pixels out but they turn into jaggy squares. Anti-aliasing helps some but can only do so much, as the information just does not exist. As screens get larger, with higher pixel depths and higher geometry rates, you eventually just cannot move the raster images around the system fast enough to keep up.

    How do people know these things? They use Big O and the other constants to see what an approach is doing. If you think nVidia, AMD, and Intel are going to keep making raster-only systems from here until the end of time, you're dreaming. Intel can't make a graphics part, we all know this, but to date they have made the most compelling ray-tracing one. nVidia and AMD have both stated that eventually it will be ray casting; they know rasterization only works for so long, and you will see more and more general, highly parallel, special-purpose GPUs coming from these folks. Eventually it will be easy to do ray casting in combination with rasterization to create very compelling graphics. Notice I didn't say rasterization goes away. Anything but; it will be around for a long time.

    Ray casting has a distinct long-term advantage over rasterization: it is done "as code", meaning you are not shoving huge bitmaps through the system bus but instead rendering directly into video memory. The code has the advantage that it can be loaded and probably fit into the high-speed memory system cache, while bitmaps will need to get ever larger to accommodate better-looking graphics and higher polygon counts.

    Also, you use FSAA as a "this looks better" when in fact in quite a few cases it looks worse. You forget the FS part of that: full screen, meaning things that should not be anti-aliased are. Ray casting can take that into account per object, per pixel, and depending on the z-buffer, and then anti-alias only where it's needed. With ray tracing you don't need FSAA all the time, because you have already handled aliasing at the ray-cast level.

    Also, raster image memory is not cheap either. It is why they built special buses to handle it; you know, AGP, PCIe, etc. That was a practical solution to what the industry is doing right now. But notice that PCIe is not only about graphics but about bandwidth in general, while AGP was a special bus for graphics. They know there are things coming up that need more bandwidth, ray casting being one of many.

    Also, have you not wondered why the graphics chip guys have been in a huge hurry to turn themselves into parallel processors? They do not want to be thrown to the side. The CPU would eventually be able to handle the raster images they do today, with room to spare. They must make the graphics better and do it faster than a CPU could. It just so happens there are stacks of graphics algs t

  • Re:Also (Score:4, Interesting)

    by pz ( 113803 ) on Monday August 30, 2010 @09:33AM (#33414256) Journal

    People need to better understand that Big O is a theoretical tool for comparing the scaling of algorithms. That is useful, but you then have to consider the reality of the situation.

    Right. And any good programmer understands that *first* you pick the right algorithm, and *then* you optimize the implementation. Working the other way around is wasting time.

    But, more importantly, what the parent seems to miss is that the speed improvement from changing the order of growth of an algorithm can swamp nearly any possible improvement in hardware. Going from O(n^4) to O(n^2) when n is 10,000 is like replacing your desktop box with ten thousand supercomputers. No amount of implementation tweaking or upgrading of processors and memory (which only affects the constants in an algorithm's speed) is going to give you the same level of speedup.

    There is a very, very good reason to pay attention to O() analysis of algorithms and, in particular, of their implementations. You can take bubble sort, which is O(n^2) when done correctly, and implement it in a brain-dead way that makes it O(n^3) (if you, for example, re-extend the output array for each new element), and the difference is huge. Extra factors of n creep in easily, and understanding them can be highly useful.
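
    A sketch of how such an extra factor of n creeps in: appending to an output buffer by re-extending it and copying everything for every new element turns n appends into roughly n^2/2 element copies before the sort proper does any work (the size here is arbitrary):

        // Brain-dead append: re-extend the output array for each element.
        #include <cstdio>
        #include <vector>

        int main() {
            const size_t n = 10000;
            long copies = 0;

            std::vector<int> out;
            for (size_t i = 0; i < n; ++i) {
                std::vector<int> bigger(i + 1);            // fresh buffer, one element larger
                for (size_t j = 0; j < i; ++j) {           // copy everything accumulated so far
                    bigger[j] = out[j];
                    ++copies;
                }
                bigger[i] = static_cast<int>(i);
                out.swap(bigger);
            }
            std::printf("%zu appends cost %ld element copies (~n^2/2)\n", n, copies);
        }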

    So, the parent can review real-world constraints all he wants, but in the end, order of growth is more important.
