Google Sorts 1 Petabyte In 6 Hours
krewemaynard writes "Google has announced that they were able to sort one petabyte of data in 6 hours and 2 minutes across 4,000 computers. According to the Google Blog, '... to put this amount in perspective, it is 12 times the amount of archived web data in the US Library of Congress as of May 2008. In comparison, consider that the aggregate size of data processed by all instances of MapReduce at Google was on average 20PB per day in January 2008.' The technology making this possible is MapReduce, 'a programming model and an associated implementation for processing and generating large data sets.' We discussed it a few months ago. Google has also posted a video from their Technology RoundTable discussing MapReduce."
Kudos to Google (Score:5, Funny)
for knowing how important the Library of Congress metric is to us nerds!
Re: (Score:1)
Re:Kudos to Google (Score:5, Funny)
for knowing how important the Library of Congress metric is to us nerds!
But at least now we know Google can sort out petafiles.
Re:Kudos to Google (Score:5, Funny)
Re: (Score:2)
True, but once the Library of Congress's entire collection is converted to digital form, then they can brag about the comparison. However, I doubt they'll want to, seeing as that will be far larger than a petabyte of data.
Re:Kudos to Google (Score:4, Funny)
So Google can sort through 12 LoCs in 6 hours.
Wow, that's 2 LoC/pH
Re: (Score:2)
Re: (Score:2)
Try the low-bandwidth view and/or disable the dynamic comments, then filter at 1. Oh, and hand in your geek card for not being able to circumvent censorware at work.
Re: (Score:2)
Why don't you address the problems with your "censorship" program instead? It appears to be completely broken.
Unit conversion (Score:5, Funny)
Yay! We finally have a unit conversion from LoC to bytes! So... 20 PB = 6 LoC means that 1 LoC = 3.333... PB :)
Re: (Score:3, Informative)
Re:Unit conversion (Score:4, Informative)
No, 1 PB = 12 LoC, so 1 LoC = 0.0833... PB
Also, I'd like to make some kind of swimming pool reference.
Re: (Score:2, Interesting)
Assuming it was written out in binary in a font that allows 1 digit per 2 mm, the data would be 183,251,938 m long, or 1,145,324 times the perimeter of an Olympic-sized swimming pool.
Re: (Score:2)
Yes, but how much is that in football fields?
Re: (Score:1, Interesting)
Yes, but how much is that in football fields?
You silly sod, you can't measure something in football fields! There's internationalization to take into account!
Canadian football fields are 100 x 59 m, American football fields are 109 x 49 m, and the rest of the world doesn't even play the same game on a football field. And THEIR sport has a standard range, anywhere from 90-120 m by 45-90 m (thank you, Wikipedia [wikipedia.org]).
You've now introduced variable-variables! We can't get an absolute number!
Re: (Score:2)
Can we get an absolute variable instead?
Re: (Score:2)
Re: (Score:2, Informative)
Re: (Score:2, Funny)
Re: (Score:2)
This is an excellent point. No American football player has used his feet since the NFL adopted hoverchairs into the rules in 1974.
If that's enough "foot" for you, I really want to see the American handball team.
Re: (Score:2)
Yes, yards are close enough for us, but by God, we're still using English measurements.
Re: (Score:3, Informative)
Oh darn. Clearly I was converting pound-congresses to kilos first.
Re: (Score:2)
Re: (Score:2)
What format are they using for the books when calculating the size of the LoC?
Raw text?
PDF?
JPEG? ...
Re: (Score:2)
Re: (Score:3, Funny)
Sure, it's -4.15 Edsels.
Re: (Score:2)
Sure, it's -4.15 Edsels.
That's rounding it off a bit generously, don't you think?
That's Easy (Score:5, Interesting)
Re:That's Easy (Score:5, Insightful)
I came here to post the same thing. If they sorted a petabyte of Floats, that might be pretty impressive. But if they're sorting 5-terabyte video files, their software really sucks.
Not enough info to judge the importance of this.
Re:That's Easy (Score:5, Informative)
I think this is the data set. I could be wrong though. The article (yeah yeah) says that
In our sorting experiments we have followed the rules of a standard terabyte (TB) sort benchmark.
Which led me to this page [hp.com] that describes the data (and it's available for download).
Re: (Score:2)
For the record, you can download a file that will generate the data for it. Because otherwise, well, posting a link to a 1TB file on Slashdot might melt the entire Internet.
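For a feel of what those records look like without the full download, here's a tiny generator in Python. It assumes the usual benchmark layout of a 10-byte key followed by filler bytes; TFA only says "100-byte records", so the exact split is an assumption.

    import os

    KEY_LEN, RECORD_LEN = 10, 100

    def make_records(n):
        # Each record: a random 10-byte key plus padding out to 100 bytes.
        return [os.urandom(KEY_LEN) + b"." * (RECORD_LEN - KEY_LEN) for _ in range(n)]

    records = make_records(1000)
    records.sort(key=lambda r: r[:KEY_LEN])      # sort on the key prefix, as the benchmark does
    print(len(records), "records,", len(records) * RECORD_LEN, "bytes")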
Re: (Score:1)
Re: (Score:2, Interesting)
You got baited into a flame in a very elaborate scheme to mock your intelligence (or lack thereof).
There is no "meta-flamebait" category, so you're proving the mods right, I'd say.
I hope this helps.
Re:That's Easy (Score:5, Informative)
From TFA: they sorted "10 trillion 100-byte records"
Re:That's Easy (Score:5, Funny)
And yet Google doesn't even convert petabytes to Libraries of Congress in the Google calculator.
Or perhaps I got the syntax wrong.
Re:That's Easy (Score:5, Funny)
Huh? This isn't the parent post I was trying to reply to.
Re: (Score:2)
Chances are they're now going to ask potential employees being interviewed there how to do it in half the time and with one tenth of the machines...
Need to benchmark against the best sorts (Score:5, Insightful)
Sorts have been parallelized and distributed for decades. It would be interesting to benchmark Google's approach against SyncSort [syncsort.com]. SyncSort is parallel and distributed, and has been heavily optimized for exactly such jobs. Using map/reduce will work, but there are better approaches to sorting.
Re: (Score:2)
And Google is trying to make money off MapReduce (as an API of sorts), so are you really surprised they're using their massive influence over the market, especially geeks, to heighten awareness of their product?
On the other hand, what they're trying to prove is MapReduce's worth as a workload divider (how to break up 20 PB for sorting), not necessarily how optimal it is in this particular situation. They have a better test/sample of MapReduce, but it's a trade secret to them (how it's used to index the pages).
Re: (Score:3, Interesting)
Parallel/distributed sorting doesn't eliminate the need for map/reduce; it just helps spread the problem set across machines.
Here's the thing, though... it's the distributing of the problem set and the combining of the results that is the hard part, not map/reduce.
Map and reduce are simple functional programming paradigms. With map, you apply a function to a list, which could hold either atomic values or other functions. With reduce, you take a single function (like add or multiply, for instance) and use that to fold the list down to a single value.
Re: (Score:2)
I suspect maybe you don't quite understand how MapReduce works. Take a look at the references section of the MapReduce paper [google.com]; the paper's authors are well aware of research in the parallel sorting field. In particular their reference 1 [google.com] is most relevant.
Re:Need to benchmark against the best sorts (Score:4, Insightful)
>>Using map/reduce will work, but there are better approaches to sorting.
It kinda bugs me that Google trademarked (or at least named) their software after a programming modality that has been used in parallel processing for ages. MPI, for instance, has long had collective operations (scatter, gather, reduce) that, well, do a map/reduce-style job: farm out instances of a function to a cluster, then gather the data back in, combine it, and present the results to someone.
It also kind of bugs me (in their YouTube video linked in TFA, at least) that they make it seem as if this model is their brilliant idea, when all they've really done is write the job-control layer under it. There are other job-control layers that handle spawning new processes, fault tolerance, etc., and there have been for many, many years. Maybe it's nicer than other packages, in the same way that Google Maps is nicer than other map packages, but I think most people like it just because they don't realize how uninspired it is.
It'd be like them coming out with Google QuickSort (beta) next.
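For what it's worth, here is a minimal sketch of that long-standing scatter/compute/reduce pattern using mpi4py. Note that MPI's actual building blocks are collectives like scatter and reduce rather than a single mapreduce() call, and the squaring workload here is just an illustration.

    # Run with: mpiexec -n 4 python mpi_sketch.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    if rank == 0:
        data = list(range(1, 101))                        # the full problem set
        chunks = [data[i::size] for i in range(size)]     # one slice per rank
    else:
        chunks = None

    local = comm.scatter(chunks, root=0)                  # farm out the slices
    partial = sum(x * x for x in local)                   # "map" + local fold on each rank
    total = comm.reduce(partial, op=MPI.SUM, root=0)      # gather and combine

    if rank == 0:
        print(total)                                      # 338350, the sum of squares of 1..100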
Re: (Score:2)
Using map/reduce will work, but there are better approaches to sorting.
I think we can safely assume that the hordes of egghead computer scientists are already exploring the alternate approaches.
Perhaps SyncSort has better theoretical performance, but Map/Reduce yields better results in Google's real-world scenarios? I don't know, it's all way above my head.
Finally... (Score:5, Funny)
I will be able to catalog my pr0n in my lifetime:
Blondes, Brunettes, Red heads, Beastial^H^H^H^H^H "Other"
tagging (Score:5, Interesting)
It's not enough to sort by blond, black, gay, scat, etc. Some categories are a combination that don't belong in a hierarchy.
That is where tagging comes in. Sorting can be done on-the-fly, with no one category intrinsically more important.
Re:tagging (Score:5, Funny)
pr0n for Geeks, volume 18: Sorting On-the-Fly
Re: (Score:2)
I swear I shouldn't admit this, but good lord, you're right: I _REALLY_ wanted WinFS to come out, and I wanted it to be good.
A database-driven filesystem would be so goddamned useful it would change the way we work with computers, but noooo, Microsoft screwed up. (Besides, WinFS was a hack on top of NTFS anyhow; I heard it was SQL Server or something. Sounds messy.)
Porn is a fantastic example. I realise it's kind of immature, but it would be genuinely useful.
Set tags like:
threesome
oral
brunette
money shot
Re: (Score:2, Funny)
Re: (Score:2)
One ups Yahoo & Hadoop (Score:3, Interesting)
Let's see if Yahoo responds!
Re: (Score:2)
Hadoop uses MapReduce :) From their site:
Re: (Score:2)
MapReduce isn't something invented by Google. It's a design pattern.
Re:One ups Yahoo & Hadoop (Score:4, Informative)
Re: (Score:2)
Since we're relating things to human proportions today, I'll compare your comparison to comparing a 100 m sprint to running a marathon. Apply storytelling skills and score.
Re:One ups Yahoo & Hadoop (Score:5, Interesting)
Sort? Sort what? (Score:1, Insightful)
One quadrillion bytes, or 1 million gigabytes.
How big are the fields being sorted? Is it an exchange sort or a reference sort?
It is probably very impressive, but without a LOT of details, it is hard to know.
Re:Sort? Sort what? (Score:5, Informative)
I realize this is Slashdot..., but maybe you could glance at the article, which states:
10 trillion 100-byte records
Re: (Score:2)
10 trillion records across 4,000 computers comes to 2.5 billion records per computer.
It took 6 hours for a computer to sort 2.5 billion records? 250 GB?
Yawn.
Re: (Score:3, Insightful)
You do have to merge them all back together at the end...
But I'm sure you can do better tonight.
Re: (Score:1, Flamebait)
You do have to merge them all back together at the end...
Technically speaking, that's not true. In fact, you wouldn't want to.
Assuming some sort of search paradigm, you'd keep the records on their 4,000 separate servers, each server doing its own search functionality, and *only* merge the results of the searches as needed and cache them in the web layer.
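A toy sketch of that "merge only as needed" idea, using heapq.merge to lazily combine already-sorted per-server result lists (the three shards below are made up for illustration):

    import heapq

    # Each "server" holds its own locally sorted shard of results.
    server_a = [("aardvark", 3), ("kumquat", 9), ("zebra", 1)]
    server_b = [("banana", 7), ("yak", 2)]
    server_c = [("apple", 5), ("mango", 4)]

    # heapq.merge is lazy: it pulls from each shard only as results are consumed,
    # so nothing forces a full merge of everything up front.
    for key, value in heapq.merge(server_a, server_b, server_c):
        print(key, value)
        if key >= "m":          # stop early once we have what we need
            break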
Re: (Score:2)
How did someone see this as flamebait?
Re: (Score:2)
you consistently oversimplifying
That's just the issue, isn't it? I mean, seriously, all the great theoretical work on sorting algorithms has been done. No one is going to come along and give us an order-of-magnitude better performance in a general-purpose algorithm. It just isn't going to happen.
So, it *is* a simple problem for which there are ample tools from which to choose. The challenge is not the *sort* but the scale. This, too, is pretty pedestrian. The cluster "divide and conquer" approach is not new
Re: (Score:3, Insightful)
Re: (Score:2)
Right, so it's 250 GB sorted in 6 hours... now where does the sorting and integration of the 4,000 250 GB blocks of sorted data come in?
You wouldn't merge it into one set; you'd keep it all on the individual servers and only merge the results as needed.
Re: (Score:2)
If you sort 4,000 blocks of random data into an actual order but don't combine the data in any serious way, what you have is tons of overlap between all these separate blocks of data. Just think about sorting the numbers 1-20 across 4 servers: each server ends up with its own sorted handful of numbers scattered across the whole range.
That data may be sorted, but it's a mess, and doing this type of sort for a competition is nothing more than getting fast servers, sticking them in the same room, and having them all sort random blocks of data.
Re: (Score:2)
Odds are they're using the mythical "google algorithm", so they're probably going to keep what they're doing quiet.
Re:Sort? Sort what? (Score:5, Funny)
It's About Time.... (Score:2, Funny)
Finally... a system with enough power to run Vista efficiently.
Re:It's About Time.... (Score:4, Informative)
Are you sure? It wasn't marked Vista capable.
Re:It's About Time.... (Score:4, Funny)
Not only that, the extra processors aren't covered under the EULA and require special extra licenses.
Not impressive... (Score:5, Funny)
Not even close. (Score:2)
Dude, that's barely half my porn stash.
Is it new data (Score:2)
0s and 1s (Score:2, Funny)
That's a lot of computing power to use just to get 4,000,000,000,000,000 0s and 4,000,000,000,000,000 1s.
nice one, Google... (Score:2, Funny)
...fancy doing my mp3 collection?
Re: (Score:2)
hmmm... not having audited them for a long while, I glance at my shelf and see eight 320GB pocketdrives and know they're all jam packed. Average bitrate is 160 and duration is 7m, so figure an average filesize of 10MB. There's a lot of live stuff in there as well as my entire CDA collection and a fair few audiobooks and vinyl/minicassette rips. That's 31,000 tracks per drive, or 248,000 tracks total (give or take). With my hardware and assuming I could be arsed, that's a month's work, although I really shou
Libraries of congress? (Score:3, Insightful)
Honestly, how am I supposed to know what "...the amount of archived web data in the US Library of Congress as of May 2008" looks like!? I've been to the Library of Congress; I've seen it; it's a metric shit-ton of books (1 shit-ton = shit * assloads^fricking-lots). But I have no clue what the LoC is archiving, at what rate they're going at it, or what the volume of it is.
Wow (Score:1)
clever strategy (Score:3)
They clearly have the ability to respond to emergencies. And this puts it out there that they can...
e.g.:
1) A foot-and-mouth outbreak in cattle
2) A supplement to census data
3) Finding information on dissidents/traitors (bloggers)
20,111 Servers ?? (Score:1, Interesting)
Re:20,111 Servers ?? (Score:4, Insightful)
Re: (Score:2, Informative)
Re:20,111 Servers ?? (Score:4, Insightful)
Oh dear. 4000 * 362 ~= 1440 * 20111 / 20. So you assumed that the sorting would scale linearly. Fail.
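Decoding the arithmetic, this is presumably where the 20,111 figure in the thread title comes from, assuming (as the parent rightly objects) that sorting scales perfectly linearly:

    machines, minutes = 4000, 6 * 60 + 2          # the petabyte sort: 4,000 machines, 362 minutes
    machine_minutes_per_pb = machines * minutes
    pb_per_day, minutes_per_day = 20, 24 * 60

    # Machines needed to push 20 PB through in one day at the same rate.
    servers = machine_minutes_per_pb * pb_per_day / minutes_per_day
    print(round(servers))                         # ~20111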
just in perspective... (Score:2, Interesting)
Re: (Score:2)
They probably didn't hold the source data on a single machine in the first place (or did Seagate break the petabyte barrier yet?).
48 GB/s broken down over 4,000 servers boils down to "only" about 12 MB/s per machine.
So indeed, impressive aggregate performance, but each individual node was only moving data at roughly Fast Ethernet (100 Mbit/s) throughput, about a tenth of what Gigabit Ethernet can carry.
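Back-of-the-envelope, taking a decimal petabyte and the 6h02m figure from the summary (which lands a little under the parent's round numbers):

    petabyte = 10 ** 15                 # bytes
    seconds = 6 * 3600 + 2 * 60         # 6 hours 2 minutes
    nodes = 4000

    aggregate = petabyte / seconds      # ~46 GB/s across the whole cluster
    per_node = aggregate / nodes        # ~11.5 MB/s per machine
    print(f"{aggregate / 1e9:.1f} GB/s aggregate, {per_node / 1e6:.1f} MB/s per node")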
Holy shit... (Score:1)
I'm surprised (Score:1, Redundant)
...that Google hasn't implemented the Libraries of Congress metric into their auto-calculator. [google.ca]
C'mon Google, get on the ball(s)!
Is this our standard of measurement? (Score:1)
Amazing feat... (Score:5, Funny)
Today from Google, the god of all things and doer of all things good in the universe, many millions of dollars in computer equipment were able to sort lots of things, in about the amount of time you would think it would take for millions of dollars of equipment to sort things.
In other news, a woodchuck was found chucking wood as fast as a woodchuck could chuck wood.
Congrats Google, you have a HUGE data set, and an even bigger wallet.
MapReduce = map + reduce (Score:4, Interesting)
If you feel the urge to play with MapReduce (or read the paper), you don't need a fancy Linux distro [apache.org] to do it. MapReduce is simply the map() and reduce() functions, exactly as implemented in Python. Granted, Google's implementation can work with absurdly large data sets, but for small data sets, Python is all you need.
Re: (Score:3, Informative)
True, but that's not quite the point. The map and reduce functions are, as you say, implemented in Python (and a great many other languages), but what makes MapReduce special is that you replace the map function with one that distributes the work out to other computers. Because any map function can be applied in parallel, you get a speed boost for however many machines you have (dependent on network speeds, etc.).
So yeah, you can do it in Python, but you aren't going to be breaking any records until you implement your own distributed version of it.
Re: (Score:2)
But it's the distributing part that is special, not the map/reduce part.
You're basically just dividing up a huge list and sending each part to a different machine. Tacked onto each list are the map and reduce functions themselves, so each machine knows what to do with its piece.
It's the parallelization of the problem that is the hard part. "Map" does not mean mapping the problem onto thousands of machines; it means mapping a function over a list, and that is not a terribly difficult problem.
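A toy version of that division of labor, using multiprocessing worker processes on one box as stand-ins for the 4,000 machines. The chunking scheme and the squaring function are my own illustration, not anything from Google's implementation.

    from functools import reduce
    from multiprocessing import Pool
    from operator import add

    def square(x):
        return x * x

    def map_chunk(chunk):
        # The work each "machine" does: map the function over its slice of the list.
        return [square(x) for x in chunk]

    def chunked(seq, n):
        # Divide the big list into n pieces, one per worker.
        return [seq[i::n] for i in range(n)]

    if __name__ == "__main__":
        data = list(range(1, 1001))
        with Pool(processes=4) as pool:
            mapped = pool.map(map_chunk, chunked(data, 4))    # farm the chunks out
        # Fold each chunk's results, then fold the partial sums together.
        total = reduce(add, (reduce(add, part) for part in mapped))
        print(total)                                          # sum of squares of 1..1000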
Re: (Score:2)
Re: (Score:3, Informative)
Exactly. There is nothing special about map and reduce.
Here's an example. Map and reduce are functional programming tools that work with lists, so we'll start with a simple one:
1 2 3 4 5
Now we'll take a function, x^2, and map it over the list. The list becomes:
1 4 9 16 25
Now we'll apply a reduce function to combine the list into a single value. I'll use "+" to keep it simple. We end up with:
55
And that is pretty much all there is to map and reduce.
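The same walkthrough as runnable Python (reduce lives in functools these days):

    from functools import reduce
    from operator import add

    data = [1, 2, 3, 4, 5]
    squared = list(map(lambda x: x ** 2, data))   # map x^2 over the list -> [1, 4, 9, 16, 25]
    total = reduce(add, squared)                  # fold the list with "+"
    print(squared, total)                         # [1, 4, 9, 16, 25] 55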
Re: (Score:3, Informative)
Almost, but not quite. MapReduce has a slightly different shape than plain map() and reduce(). Here are the signatures of map and reduce in a typical functional language:
map(): A* -> B*
reduce(): B* -> C
Whereas in MapReduce:
map: (K, V)* -> (K1, V1)*
reduce: (K1, (V1)*)* -> (K2, V2)*
I think that is mostly accurate. For a more accurate/detailed treatment, see MapReduce Revisited [cs.vu.nl] [PDF].
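To make those key/value signatures concrete, here is a tiny single-process word count in the MapReduce shape: map emits (word, 1) pairs, a shuffle step groups values by key, and reduce collapses each (key, list-of-values) into a (key, count) pair. The dictionary standing in for the shuffle is, of course, a simplification; in the real system that grouping is the distributed sort between the two phases.

    from collections import defaultdict

    def map_fn(doc_id, text):
        # (K, V) -> list of (K1, V1): emit one (word, 1) pair per occurrence.
        return [(word, 1) for word in text.split()]

    def reduce_fn(word, counts):
        # (K1, list of V1) -> (K2, V2): collapse the value list for one key.
        return (word, sum(counts))

    documents = {1: "the cat sat on the mat", 2: "the dog sat"}

    grouped = defaultdict(list)                   # the "shuffle": group values by key
    for doc_id, text in documents.items():
        for key, value in map_fn(doc_id, text):
            grouped[key].append(value)

    results = [reduce_fn(k, v) for k, v in sorted(grouped.items())]
    print(results)   # [('cat', 1), ('dog', 1), ('mat', 1), ('on', 1), ('sat', 2), ('the', 3)]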
But.... (Score:1)
It really only took two hours - the rest of the time was spent stuffing in paid ads.
MapReduce (Score:2)
Re:MapReduce (Score:5, Informative)
The individual functions map and reduce are quite standard. The innovation here is the systems work they've done to make them run at such a large scale. All the programmer needs to worry about is implementing the two functions; they don't have to worry about distributing the work, ensuring fault tolerance, or anything else for that matter. That is the innovation.
They mention in the article that if you try to sort a petabyte you WILL get hard disk and computer failures. Hell, you can only read a terabyte hard disk a few times before you encounter unrecoverable errors. The system for executing those maps and reduces is what is important here.
The important parts are in the design details, such as dealing with stragglers. If you have 4,000 identical machines, you won't necessarily get equal performance from each. If a few of those machines have a bit flipped and started without disk cache, they might see a huge drop in read/write performance. The system needs to recognize this and schedule the work differently, and that can make a huge difference in execution time. If you graph the percentage complete of a MR job, you'll often see that it quickly reaches 95% and then plateaus; the last 5% may take 20% of the time, and good scheduling is required to bring that time down.
But like I said, the innovation isn't in the idea of using a map and a reduce function; it is the system that executes the work.
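A very rough sketch of the "backup task" idea for stragglers, using threads on one machine and a simulated workload; this is a toy of my own, not a claim about how Google's scheduler actually works. Once most tasks are done, the remaining ones are re-issued and whichever copy finishes first wins.

    import concurrent.futures as cf
    import random
    import time

    def work(task_id):
        # Simulate a task; a few "straggler" workers are much slower than the rest.
        time.sleep(1.0 if random.random() < 0.1 else 0.01)
        return task_id, task_id * task_id

    def run_with_backups(task_ids, workers=8, threshold=0.9):
        results, backed_up = {}, set()
        with cf.ThreadPoolExecutor(max_workers=workers) as pool:
            futures = {pool.submit(work, t) for t in task_ids}
            while len(results) < len(task_ids):
                done, futures = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
                for f in done:
                    tid, value = f.result()
                    results.setdefault(tid, value)              # first copy to finish wins
                if len(results) >= threshold * len(task_ids):
                    for t in task_ids:
                        if t not in results and t not in backed_up:
                            futures.add(pool.submit(work, t))   # launch a backup copy
                            backed_up.add(t)
        return results

    print(len(run_with_backups(list(range(100)))))   # 100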
Re: (Score:2)
I should have been more specific/clear. If you do a full read of a terabyte disk a dozen times, you are likely to see an unrecoverable read error:
"Typically, [Unrecoverable Error Rate (UER) for read operations] will be 1 per 10^14 bits read for consumer class drives and 1 per 10^15 for enterprise class drives. This can be alarming, because you could also say that consumer class drives should see 1 UER per 12.5 TBytes of data read."
That quote is from a Sun blog [sun.com] that has lots of information about Mean Ti
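The quoted rate does work out as claimed; a quick sanity check of the 12.5 TB figure, plus the odds of hitting at least one unrecoverable error after a dozen full reads of a 1 TB consumer drive, assuming independent bit errors:

    import math

    uer_per_bit = 1e-14                            # consumer-class unrecoverable read error rate
    print(1 / uer_per_bit / 8 / 1e12, "TB read per expected error")   # 12.5

    bits_read = 12 * 1e12 * 8                      # a dozen full reads of a 1 TB drive
    p_any_error = 1 - math.exp(-uer_per_bit * bits_read)   # Poisson approximation
    print(round(p_any_error, 2))                   # ~0.62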
Re: (Score:2)
As the owner of terabyte drives who hasn't had unrecoverable errors (yet), I was expressing my skepticism that such a thing was inevitable after only a few reads. That's not flamebait, but a request for further support of what I considered to be an unlikely statement.