Open Source Solution Breaks World Sorting Records

allenw writes "In a recent blog post, Yahoo's grid computing team announced that Apache Hadoop was used to break the current world sorting records in the annual GraySort contest. It topped the 'Gray' and 'Minute' sorts in the general purpose (Daytona) category. They sorted 1TB in 62 seconds, and 1PB in 16.25 hours. Apache Hadoop is the only open source software to ever win the competition. It also won the Terasort competition last year."
  • Overlords (Score:3, Funny)

    by Narpak ( 961733 ) on Saturday May 16, 2009 @09:25AM (#27978823)
    I for one welcome our new datasorting overlords!
  • by Anonymous Coward

    Just give me a few minutes to patch together a bubblesort from my high school Pascal class. I'll show them record speed!

    • My sort [wikipedia.org] will totally beat yours!

      • Re: (Score:3, Funny)

        Bogosort: for when you're paid by the hour, but aren't penalised for being late.

        • Re: (Score:3, Funny)

          by Anonymous Coward

          Bogosort: for when you're paid by the hour, but aren't penalised for being late.

          with my luck, bogosort would get it right the first time.

      • I've asked lots of interview candidates to implement randomSort. They've never heard of it, so then I describe the algorithm.

        Watching their eyes go wide is the highlight of the interview, typically.

        Occasionally some person who has overcome their interview nervousness will, with eager honesty, try to impress upon me that this is not a very good sort algorithm, and that much better ones are taught in universities these days.

        Good Times.

        • I've asked lots of interview candidates to implement randomSort. They've never heard of it, so then I describe the algorithm.

          Did you change roles from developer to HR?

        • by mwvdlee ( 775178 )

          In what way exactly does this help weed out the bad candidates from the good ones?

          Any candidate that actually tries and successfully implements the algorithm is someone you DON'T want on your team.
          Any candidate that runs off screaming is one you DO want, but they're already gone.

          • by bmajik ( 96670 )

            It's a good question for university students because they've been thinking about sorting problems "recently". So asking someone how to implement a given sort is a fair question. This has the upside of not being one they've had to implement, and it doesn't work like anything they have done. Of course the implementation is trivial; that's not the point. The solution is just the gateway into the rest of the conversation. The follow up questions are

            "is it guaranteed to return? Why or why not?"

            "what is the

        • by Repton ( 60818 )

          Hah. Easy-peasy in python!

          import random
          while lst != sorted(lst): random.shuffle(lst)

    • Just give me enough money to remember my Logo sort algorithm from grade school. The turtle will always be the fastest! :P
  • by Jah-Wren Ryel ( 80510 ) on Saturday May 16, 2009 @09:47AM (#27978987)

    So, it appears they have finally sorted out whether open source beats proprietary.

  • by AlexBirch ( 1137019 ) on Saturday May 16, 2009 @09:50AM (#27979011) Homepage
    If it's winning competitions at 0.20, when will they release it?
  • ... truth be told, a lot of good engineering could happen if many people's favorite commercial applications had the source distributed with them; a lot of old games, for instance, could be updated and maintained.

    I think what holds back the progress of open source is that the interesting projects people want to work on are locked away under corporate lock and key.

    • by Bert64 ( 520050 )

      People maintaining old games is not what the companies that produced those games want... They would rather sell you new games, or sell the old ones to you again.
      Corporations will always put their own interests first, and those interests will often be detrimental to everyone else.

  • But has anyone patented it yet? Patents trump copyright after all.

    • You can't patent Apache 2.0 licensed stuff.
      Also, you can't patent software*.

      *in Europe

      • by berend botje ( 1401731 ) on Saturday May 16, 2009 @10:19AM (#27979221)
        Also, you can't patent software in Europe

        Not yet, but they are working on it. They tried to sneak it through by hiding it in the amendments of an agricultural bill. Luckily Poland kept watch and raised a stink about it.

        It's not over. There is too much money to be gained for that.
        • Re: (Score:3, Interesting)

          by haruchai ( 17472 )
          Why isn't this illegal - adding unrelated legislation to a bill? Is there anywhere in the world where this practice is not permitted, or better yet, prosecuted?
          • Why isn't this illegal - adding unrelated legislation to a bill? Is there anywhere in the world where this practice is not permitted, or better yet, prosecuted?

            I'd never heard of it happening here in the UK; as far as I knew, only the US did that. Shows how little I knew.

            • by jimicus ( 737525 ) on Saturday May 16, 2009 @11:22AM (#27979657)

              Here in the UK, the patent office has been issuing software patents for some time in "anticipation" of them becoming legal at some point in the future.

              No, I don't understand that either.

              • GP was referring to tacking legislation onto an unrelated bill, i.e. patent legislation on an agricultural bill. It's my understanding that this is sometimes used in the US to block a bill by appending something that nobody would vote for.
                • This is indeed the case (for killing bills). The nastier version is tacking some random crap on to the annual budget, and using the excuse of getting the budget passed to ram it through even though the bill alone wouldn't even get to a vote by itself.
                  • by zarqman ( 64555 )
                    Sadly, tacking stuff on isn't limited to the budget bills. It happens routinely.
                    • by haruchai ( 17472 )
                      I'd heard of a US bill getting pork added to it AFTER it had been voted on - sometime within the last year. I haven't been able to find the story again and would hope that this isn't a common occurrence.
                    • by zarqman ( 64555 )
                      Yeah. They'll add it in conference committee, where, after the initial vote, they reconcile differences between the House and Senate versions of a bill. It goes back for a quick final vote in each chamber, but that's usually considered procedural, as I understand it.

                      I don't know for sure, but somehow doubt that it's uncommon. More likely, the changes snuck in aren't enough to raise significant ire so they get away with it. And if people figure it out and are unhappy, there's always plausible deniability: "S
            • That depends how you define unrelated, but I think that the Anti-terrorism, Crime and Security Act of 2001 [guardian.co.uk] is a perfect example of the fact that the name of a law is chosen to try and make sure that it gets passed.

          • by Halo1 ( 136547 ) on Saturday May 16, 2009 @11:36AM (#27979757)

            Why isn't this illegal - adding unrelated legislation to a bill? Is there anywhere in the world where this practice is not permitted, or better yet, prosecuted?

            The GP is confusing a bunch of things. First, the Council of Ministers threw out all limiting amendments from the European Parliament and reached a Political Agreement on a shoddy text through backdoor maneuvering by Germany and the European Commission [google.com]. That text would have turned the European Patent Office's practice of granting software patents into EU legislation.

            A Political Agreement has neither juridical nor legislative value, but it has never happened that a political agreement was later annulled and negotiations reopened. So in this case too, even though the German, Dutch, Spanish and Danish parliaments afterwards passed motions asking to reopen the discussions, the Council's bureaucrats did not want to do that because it "would undermine the efficiency of the decision making process".

            Anyway, once you have a Political Agreement (which is reached by the representatives of the ministries responsible for the matter at hand) and nobody "wants" to discuss it anymore, the agreement can be placed as an "A item" on any EU Council of Ministers meeting, since it only needs rubber stamping in that case. In the case of the Software Patents Directive, it appeared several times as an A item on the agenda of an Agriculture and Fisheries meeting (which is presumably where the GP's confusion stems from).

            In principle, there would have been nothing wrong with that, but in this case there was no actual political agreement, and in particular Poland was very unhappy with the way it had been treated. So 4 times in a row, Poland either had this "A item" removed from the agenda (sometimes at the last minute, because the responsible Polish minister had to be informed that they were again trying to get it through at a meeting he had no business with), or turned it into a "B item", which means that it can't be rubber stamped but that they first have to talk a bit about it (which nobody wanted to do).

            In the end it still did get approved, but that whole circus helped in convincing the EU Parliament to table a resolution asking the Commission to restart the directive's process [ffii.org] and, when the Commission refused, to later squarely reject it [ffii.org].

            You can find some more of my thoughts on the Council's behaviour here [ffii.org].

          • Re: (Score:3, Interesting)

            by jjohnson ( 62583 )

            There was an episode of the Simpsons where Springfield is going to be destroyed by a meteor. Congress meets to quickly pass legislation to fund the evacuation of the city. At the last moment, a Congressman steps up to the podium and says "I'd like to add a rider providing $30 million for the perverted arts". The bill is defeated.

            It's funny because it's true.

          • by turbidostato ( 878842 ) on Saturday May 16, 2009 @08:05PM (#27983283)

            "Why isn't this illegal"

            Because they made it legal by passing it on a Totally Unrelated Bill.

    • But has anyone patented it yet? Patents trump copyright after all.

      There are a number of patent applications related to MapReduce from Google and Yahoo.

  • What data? (Score:1, Insightful)

    by Tinctorius ( 1529849 ) *

    They sorted 1TB in 62 seconds, and 1PB in 16.25 hours.

    This doesn't say anything if we don't know what kind of records were supposed to be sorted.

    • by eddy ( 18759 ) on Saturday May 16, 2009 @10:03AM (#27979115) Homepage Journal
      Probably why the second sentence in the article is "All of the sort benchmarks measure the time to sort different numbers of 100 byte records. The first 10 bytes of each record is the key and the rest is the value."
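      For concreteness, here is a minimal sketch of that record format; it is illustrative only (the class names are made up, real runs use the official gensort input generator rather than random bytes, and they sort terabytes across a cluster rather than a small in-memory array):

      import java.util.Arrays;
      import java.util.Comparator;
      import java.util.Random;

      // Toy model of the benchmark data: 100-byte records, first 10 bytes are the key.
      public class TinyGraySort {
          static final int RECORD_LEN = 100;
          static final int KEY_LEN = 10;

          public static void main(String[] args) {
              Random rnd = new Random(42);
              byte[][] records = new byte[100000][];
              for (int i = 0; i < records.length; i++) {
                  records[i] = new byte[RECORD_LEN];
                  rnd.nextBytes(records[i]); // stand-in for gensort data
              }
              // Sort by the 10-byte key, comparing bytes as unsigned values.
              Arrays.sort(records, new Comparator<byte[]>() {
                  public int compare(byte[] a, byte[] b) {
                      for (int i = 0; i < KEY_LEN; i++) {
                          int d = (a[i] & 0xFF) - (b[i] & 0xFF);
                          if (d != 0) {
                              return d;
                          }
                      }
                      return 0;
                  }
              });
              System.out.println("Sorted " + records.length + " records ("
                      + (records.length * RECORD_LEN / (1024 * 1024)) + " MB) in one JVM.");
          }
      }

      The contest runs do the same key comparison; the hard part is partitioning and shuffling the records across thousands of nodes, not the comparison itself.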
    • Re: (Score:2, Insightful)

      by Antisyzygy ( 1495469 )
      Things can be sorted by any of their properties. What is important is that this software sorted data objects this quickly regardless of what property they were being ordered by.
    • by rackserverdeals ( 1503561 ) on Saturday May 16, 2009 @10:31AM (#27979299) Homepage Journal

      They sorted 1TB in 62 seconds, and 1PB in 16.25 hours.

      This doesn't say anything if we don't know what kind of records were supposed to be sorted.

      It's amazing what you can learn if you actually RTFA.

      All of the sort benchmarks measure the time to sort different numbers of 100 byte records.

      If that's not good enough for you, post your email address and maybe someone will be kind enough to send you the 100TB and 1PB data files they used.

      • by allenw ( 33234 )

        I see Hadoop Summit door prizes.

      • They sorted 1TB in 62 seconds, and 1PB in 16.25 hours.

        This doesn't say anything if we don't know what kind of records were supposed to be sorted.

        It's amazing what you can learn if you actually RTFA.

        If that's not good enough for you, post your email address and maybe someone will be kind enough to send you the 100TB and 1PB data files they used.

        It's amazing what you can learn if you actually RTFS, or read the comment, or read the quote you just quoted: 100TB != 1TB. We can be pedantic all day though if you like, Mr. Smug "I can RTFA".

  • Gonna pass this on to my boss; hopefully now we can move off of our terrible, terrible proprietary sorting software... Good to see open source making inroads in so many areas!!!
    • Maybe your boss could write to Google and let them know just how special sorting is; I'm sure they'd love to hear it.
  • by nathan.fulton ( 1160807 ) on Saturday May 16, 2009 @10:02AM (#27979099) Journal
    ...this cluster had nearly 4 times as many nodes as the previous record holders. This competition was really testing whose nodes worked together best, and when you have that many more nodes, it would be hard not to top the other clusters.
    • by Rockoon ( 1252108 ) on Saturday May 16, 2009 @10:41AM (#27979363)
      I was doing some back-of-the-envelope math, and they are sorting 17.7GB/second, which at a minimum would require 177 HDs if each drive can write 100MB/sec.

      If it's not written to disk, then there is no achievement here (you don't perform 1-minute-plus sorts and then throw the result away in real-world scenarios).
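      Checking that envelope with explicit arithmetic (a sketch; it assumes 1 TB means 2^40 bytes and roughly 100 MB/s of sequential write bandwidth per 2009-era SATA drive):

      // Aggregate write rate for 1 TiB in 62 seconds, and the minimum spindle count.
      public class EnvelopeMath {
          public static void main(String[] args) {
              double bytes = Math.pow(2, 40);           // 1 TiB
              double seconds = 62.0;                    // the winning minute-class run
              double perDrive = 100e6;                  // assumed per-drive write speed
              double aggregate = bytes / seconds;       // ~17.7 GB/s
              double minDrives = aggregate / perDrive;  // ~177 drives just to absorb the output
              System.out.printf("aggregate %.1f GB/s, at least %.0f drives at 100 MB/s%n",
                      aggregate / 1e9, minDrives);
          }
      }

      For what it's worth, the sort benchmark rules do require the sorted output to be written out to files, so the result isn't thrown away; and with thousands of spindles in the cluster, raw write bandwidth is not the limiting factor.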
    • Yep, when you have a problem that big, you want scalability before anything else. How well could the other candidates use a machine that big?
      • That's the problem. All of the other groups used big clusters, but with different interconnects and different hardware. All the result shows is that this algorithm on this hardware performs better than other algorithms on other hardware. It doesn't show you whether the hardware or the algorithm was the determining factor.

        Ideally, each group should have run all of the competing algorithms on their own hardware so you would have a two-dimensional data set for the results and be able to see whether an al

    • by drizek ( 1481461 )

      So Open Source isn't all that great at sorting, but since they had the biggest toys, it obviously shows that it works as a business model.

  • Java (Score:5, Insightful)

    by cratermoon ( 765155 ) on Saturday May 16, 2009 @10:02AM (#27979101) Homepage
    OK, so where are the "Java is slow" comments? o.O
    • Here. [thedailywtf.com]
    • by dodobh ( 65811 )

      Waiting for similar hardware to become available for other languages.

      I think Tim Bray is on the right track with his widefinder idea.

      See Widefinder 1 [tbray.org] and Widefinder 2 [sun.com] for details.

    • Re:Java (Score:5, Insightful)

      by hey! ( 33014 ) on Saturday May 16, 2009 @10:50AM (#27979411) Homepage Journal

      Well, not to endorse the "Java is slow" meme or anything, but starting from a red light I can beat most cars across the intersection on my bike.

      Likewise if I had to drive across country in the shortest time possible, I'd choose a Ford F250 if the challenge stipulated I had to bring 3000 pounds of bricks with me.

      Speed is a very task specific notion.

      • by dodobh ( 65811 )

        Just use a Boeing 747. You drive so fast that your wheels don't touch the ground.

        (That's like throwing RAM at an IO limited problem).

    • well, actually... i sort of do. look, google has already reported on sorting 1 PB in 6 hours [blogspot.com] on a 4000 node cluster. their implementation is in C++. yahoo's result is 16.25 hours on a 3800 node cluster and hadoop is written in java. even taking into account the 200 node difference, yahoo's implementation is ~2.6 times slower than google's. it may not all be java's fault, but still.
      • Something doesn't seem right.

        Yahoo's cluster had 3,800 nodes with 4 disks per node giving it roughly 15,200 drives plus or minus the dead nodes/drives in the cluster.

        The Google cluster had 4,000 nodes with 48,000 hard drives. 12 drives per node doesn't sound like the typical Google servers [youtube.com] I've seen. That one looks like 2-4 drives. This other video [youtube.com] seems to show the storage node which looks like it has 5-10 drives.

        The reason I bring up drives is that sorting 1PB likely involves heavy hd access; the more drives, th

      • Let's compare oranges and apples.
        A node with 2 GB of memory is equivalent to one with 4 GB, because a node is a node. And Myrinet is the same as Ethernet, right? What about gigabits?
        If we really wanted to compare the numbers, we would make them sort the same data on the same hardware.
    • by Timmmm ( 636430 )

      Running mathematical algorithms is usually a very small part of what the average desktop program does. Java can be very fast at this since it essentially produces the same assembly as the equivalent C program. For example something like

      for (int i = 0; i < 1000; ++i) total += i;

      is going to produce exactly the same machine code in Java or C. I think the difference comes when you start writing real-world desktop programs. These make use of things like vectors, strings, function calls, etc., which seem to slow ja

  • Java doesn't fit my environment; does someone know of an open source C++ port of Java Hadoop?

    If there is no such port, is anyone interested in starting one? An example of such a port is CLucene, a C++ port of the Java Lucene search engine. It usually outperforms its Java sibling by an order of magnitude.

    • Comment removed based on user account deletion
    • Re: (Score:3, Informative)

      Comment removed based on user account deletion
    • There isn't a C++ port of Hadoop's map/reduce, but there is a C++ interface to the Java code. It is used by Yahoo's WebMap, which is the largest Hadoop application. It lets you write your mapper and reducer code as C++ classes.

      The Hadoop Distributed File System (HDFS) also has C bindings to let C programs access the system. If you want another alternative, the Kosmos File System (KFS) is also a distributed file system and was written in C++. Hadoop includes bindings for HDFS and KFS, so that the application

  • According to a post on the Yahoo developer forums:

    2005 winner used only 80 cores and achieved it in 435 seconds. So with 800 cores what 2007 winner achieved is 297 seconds ?

    Its not only number of cores its how the logic to use parallel nodes properly to do a particular task is important.

    Hadoop won with 1820 cores (910 nodes w/ 2 cores each) at 209 seconds.

    I'm all for better sorting algorithms, but eventually the cost of parallelizing something overtakes the profit made. That being said, Hadoop's internal filesystem is made to be redundant, which is an important feature whenever you're dealing with large amounts of data.

    Hadoop uses Google's MapReduce model, by the way, whereas the competition didn't. It's nice to see MapReduce being used in a more public setting.

    While better sorting algorithms -do- matter, I have to
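    Since MapReduce keeps coming up, here is a minimal word count against the Hadoop 0.20-era Java API (org.apache.hadoop.mapreduce) to give the flavor of the model. This is the stock tutorial shape, not the terasort code used for the record run, and the class names are just illustrative:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Map phase: emit (word, 1) for every token in the input split.
    class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tok = new StringTokenizer(line.toString());
            while (tok.hasMoreTokens()) {
                word.set(tok.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: the framework has already sorted and grouped by key,
    // so the reducer just sums the counts for each word.
    class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable c : counts) {
                sum += c.get();
            }
            context.write(word, new IntWritable(sum));
        }
    }

    The sort benchmark is close to the degenerate case of this model: essentially identity map and reduce functions, with the real work done by the framework's sampling-based partitioner and the shuffle/merge between the two phases.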

    • By the way, the number of nodes and the hardware in the nodes for this Hadoop cluster is -optimized- for this contest.

      The number of nodes was reduced to run the 100TB benchmark, but I don't see anything that backs up your comment that the hardware was optimized for this contest. The cluster hardware doesn't look like anything special. Maybe it's optimized for Hadoop, which is different from being optimized for the contest.

    • by leenks ( 906881 )

      There is a C++ framework called Sector/Sphere that is quite a bit faster but not as stable. I don't think it scales as well either (yet).

      http://sector.sourceforge.net/ [sourceforge.net]

  • by Sangui5 ( 12317 ) on Saturday May 16, 2009 @10:53AM (#27979441)

    Google's sorting results from last year (link [blogspot.com]) are much faster; they did a petabyte in 362 minutes, or about 2.8 TB/min. The minute sort didn't exist last year, but Google did 1TB in 68 seconds last year, so I think it may be safe to assume that they could do 1 TB in under a minute this year. Google just hasn't submitted any of their runs to the competition.

    From the sort benchmark page [sortbenchmark.org], they list the winning run as Yahoo's 100TB run, leaving out the 1PB run; that implies the 1PB run didn't conform to the rules, or was late, or something.

    People have commented that this is a "who has the biggest cluster" competition; the sort benchmark also includes the 'penny' sort (how much can you sort for one penny of computer time, assuming your machine lasts 3 years) and the 'Joule' sort (how much energy does it take to sort a set amount of data). Not surprisingly, the big clusters appear to be neither cost efficient nor energy efficient.

    • In sorting a terabyte, Hadoop beat Google's time (62 versus 68 seconds). For the petabyte sort, Google was faster (6 hours versus 16 hours). The hardware is of course different. (from Yahoo's blog [yahoo.net] and Google's blog [blogspot.com])

      Terabyte:
          Machines: Yahoo 1,407, Google 1,000
          Disks: Yahoo 5,628, Google 12,000
      Petabyte:
          Machines: Yahoo 3,658, Google 4,000
          Disks: Yahoo 14,632, Google 48,000

      Yahoo published their network specifications, but Google did not. Clearly the network speed is very relevant.

      The two takeaway points are that Hadoop is getting faster and that it is closing in on Google's performance and scalability.
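      One rough way to read those figures is per-disk throughput. A sketch of the arithmetic (a single write pass only, ignoring the replication and intermediate spills discussed below, and taking 1 PB as 10^15 bytes):

      // Implied per-disk rate for the two petabyte runs, using the figures above.
      public class PerDiskRate {
          public static void main(String[] args) {
              double petabyte = 1e15;                             // bytes
              double yahoo  = petabyte / 14632 / (16.25 * 3600);  // ~1.2 MB/s per disk
              double google = petabyte / 48000 / (362.0 * 60);    // ~1.0 MB/s per disk
              System.out.printf("Yahoo ~%.1f MB/s per disk, Google ~%.1f MB/s per disk%n",
                      yahoo / 1e6, google / 1e6);
          }
      }

      Neither run looks anywhere near per-disk bandwidth limits, which supports the point that the interesting differences are replication strategy, the network, and how well the shuffle scales.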

      • One other difference:

        Google:
        "we asked the Google File System to write three copies of each file to three different disks."

        Yahoo:
        "On the larger runs, failure is expected and thus replication of 2 is required. HDFS protects against data loss during rack failure by writing the second replica on a different rack and thus writing the second replica is relatively slow."

        Google is using 4x as many disks, but writing 1.5x as much data.

        I'm actually more impressed that Google is cramming 12 disks onto a single machine, how do they get them to fit?

        • Er... Take a bigger case?

          You can easily get 12, 18 or 24 disks into a server....

        • by zarqman ( 64555 )

          I'm actually more impressed that Google is cramming 12 disks onto a single machine, how do they get them to fit?

          umm... a rubber mallet?

          More seriously, Google has a history of not even using cases some of the time -- at least not cases as most people think of them.

          As I recall, they're even using custom motherboards and such, so custom cases (or special racks if they're still doing the caseless thing) to accommodate 12 disks per mobo seems very reasonable for them.

  • Imagine a beowulf cluster of those!
  • I really hope that this works across multiple drives, because my p0rn collection is so spread out it would take forever to sort manually!

  • Google Sort (Score:2, Interesting)

    by jlebrech ( 810586 )

    I'm looking forward to sorting my search results by Date, Title, Description, Author, etc.

    • This may get easier if HTML5 catches on. I've been playing with it, and the new <time> and <article> tags are extremely useful.

      I used to be sympathetic to the "limited view of html" argument, but after writing a couple of tools that need to search the dom, I'm convinced that the semantic tags work a lot better than abusing css classes. The consistency is going to help search engines, too.

  • My sorting algorithm operates in constant time. I should really enter it into one of these competitions. It's called Intelligent Design Sort: http://www.dangermouse.net/esoteric/intelligentdesignsort.html [dangermouse.net]
  • Like the widely-held belief that sorting speed is related to the software license used.
