
Google Caffeine Drops MapReduce, Adds "Colossus"

An anonymous reader writes "With its new Caffeine search indexing system, Google has moved away from its MapReduce distributed number crunching platform in favor of a setup that mirrors database programming. The index is stored in Google's BigTable distributed database, and Caffeine allows for incremental changes to the database itself. The system also uses an update to the Google File System codenamed 'Colossus.'"
  • Well, then. (Score:2, Insightful)

    That sums it up nicely. Nothing more needs to be added.
    • Re: (Score:1, Troll)

      by arivanov ( 12034 )

      Not quite. This explains why, about two years back, Google search result quality suddenly went down the drain.

      It now had news and key sites indexed within minutes of an update, so I guess they got more advertising revenue. However, the quality of search results on terms not related to the news of the day actually dropped. Most pundits attributed this to Google losing the war against blog spam.

      • by ls671 ( 1122017 ) *

        > It now had news and key sites indexed within minutes of an update...

        I noticed that too a few years ago, so it made me wonder to what extent TFA was news. Maybe they were previously using it in a less prominent manner, but they sure had something similar, in functionality at least, a few years ago.

  • Sounds inefficient (Score:5, Interesting)

    by martin-boundary ( 547041 ) on Sunday September 12, 2010 @12:01AM (#33550272)
    This sounds like it's going to be highly inefficient for nonlocal calculations, or am I missing something? Basically, if the calculation at some database entry is going to require inputs from arbitrarily many other database entries which could reside anywhere in the database, then the computation cost per entry will be huge compared to a batch system.
    • Re: (Score:3, Interesting)

      by iONiUM ( 530420 )

      I read TFA (I know, that's crazy). They don't come right out and say it, but I believe what they did is put a MapReduce-type system (MapReduce splits the work into subtasks for faster calculation) on database triggers. So basically this new system spreads a database across their file system, across many computers, and allows incremental updates that, when they occur, trigger a MapReduce-type algorithm to crunch the new data.

      This way they get the best of both worlds. At least, I think that's what t
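      If the parent's reading is right, the scheme would look roughly like the toy sketch below: observers registered on a table fire on every write and patch only the affected index entries. Everything here (Table, register, reindex) is an invented name for illustration, not Google's actual API.

```python
# Toy sketch: incremental indexing driven by write "triggers".
# All names are hypothetical; this is not Google's API.

class Table:
    """A tiny in-memory stand-in for a BigTable-like store."""
    def __init__(self):
        self.rows = {}
        self.observers = []               # callbacks fired on every write

    def register(self, fn):
        self.observers.append(fn)

    def put(self, key, value):
        self.rows[key] = value
        for fn in self.observers:         # the "database trigger"
            fn(key, value)

index = {}

def reindex(url, text):
    """Patch only the index entries this one document touches."""
    for word in text.split():
        index.setdefault(word, set()).add(url)

table = Table()
table.register(reindex)
table.put("example.com/page", "google drops mapreduce for caffeine")
print(index["mapreduce"])                 # {'example.com/page'}
```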

      • by kurokame ( 1764228 ) on Sunday September 12, 2010 @02:46AM (#33550928)

        No, that's not it.

        MapReduce is a sequence of batch operations, and generally, Lipkovitz explains, you can't start your next phase of operations until you finish the first. It suffers from "stragglers," he says. If you want to build a system that's based on a series of map-reduces, there's a certain probability that something will go wrong, and this gets larger as you increase the number of operations. "You can't do anything that takes a relatively short amount of time," Lipkovitz says, "so we got rid of it."

        "[The new framework is] completely incremental," he says. When a new page is crawled, Google can update its index with the necessarily changes rather than rebuilding the whole thing.

        There are still cases where Caffeine uses batch processing, and MapReduce is still the basis for myriad other Google services. But prior to the arrival of Caffeine, the indexing system was Google's largest MapReduce application, so use of the platform has been significantly, well, reduced.

        They're not still using MapReduce for the index. It's still supported in the framework for secondary computations where appropriate, and it's still used in some other Google services, but it's been straight-up replaced for the index. Colossus is not a new, improved version of MapReduce; it's a completely different approach to maintaining the index.

        • by kurokame ( 1764228 ) on Sunday September 12, 2010 @02:47AM (#33550936)

          Sorry, Colossus is the file system. Caffeine is the new computational framework.

          I've made the same error in several posts now... but Slashdot doesn't support editing. Oh well! Everyone reads the entire thread, right?

        They're not still using MapReduce for the index. It's still supported in the framework for secondary computations where appropriate, and it's still used in some other Google services, but it's been straight-up replaced for the index. Colossus is not a new, improved version of MapReduce; it's a completely different approach to maintaining the index.

          Yes, it sounds like they're treating their data structures as mainly static and propagating the changes that result from input changes at each stage of the algorit

    • Re: (Score:3, Informative)

      by maraist ( 68387 ) *
      BigTable scales pretty well (go read its white papers), though perhaps not as efficiently as map-reduce for something as simple as text-to-keyword statistics (otherwise, why wouldn't they have used it all along?).

      I'll caveat this whole post with: this is all based on my reading of the BigTable white paper a year ago, plus having played with Cassandra, Hadoop, etc. occasionally since then. Feel free to call me out on any obvious errors. I've also looked at a lot of DB internals (Sybase, MySQL MyISAM/InnoDB an
  • So does that mean Microsoft is developing a competing distributed computing system called "Guardian"? And how does that possibly seem like a good idea?

  • by Scytheford ( 958819 ) on Sunday September 12, 2010 @12:14AM (#33550332)

    "This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. [...] We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple."
    -Colossus.

    Source: http://www.imdb.com/title/tt0064177/ [imdb.com]

  • Colossus? That sounds ominous.

  • by tpstigers ( 1075021 ) on Sunday September 12, 2010 @12:27AM (#33550404)
    I am so glad Google has moved away from the Argus platform and into the Mercedes system. It makes it so much easier for those of us who are used to programming in Gibberish. Don't get me wrong - the days of Jabberwocky code were brilliant, but it's high time we moved into the Century of the Fruitbat.
  • by Rocky ( 56404 )

    ...is this a fancy way of saying a transactional system? Just say it then!

    • Re: (Score:3, Informative)

      by kurokame ( 1764228 )

      No, the old system was transactional as well. The problem was that it was transactional across a very large number of operations being run in parallel, and any failure could cause the entire transaction to fail. The new system is incremental rather than monolithic. While it may not be quite as fast across a large number of transactions, it doesn't risk major processing losses either. Such failures are very unlikely, but the Google index has grown large enough that it is probably running into unlikely proble

      • No, the old system was transactional as well.

        As I read the description, the old system wasn't really transactional as the term is normally used; it rebuilt the index (at least, the index for each layer) from scratch each iteration rather than doing transactional updates to an existing index.

        Processing a batch of data with Colossus is probably slower than using MapReduce under ideal circumstances.

        From the description, I'm not sure that the new system is ever faster at processing a (similar) batch of data than

    • Re:Well (Score:4, Informative)

      by TheRaven64 ( 641858 ) on Sunday September 12, 2010 @06:52AM (#33551674) Journal

      Yes and no. With MapReduce, they were hitting Amdahl's Law: the speed limit of any concurrent system is set by its slowest serial component. This is why IBM still makes money selling very fast POWER CPUs when you can get the same speed, on paper, from a couple of much cheaper chips.
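      For concreteness, Amdahl's Law puts the speedup from n processors, when a fraction p of the work parallelizes, at 1 / ((1 - p) + p/n). A quick sketch (the 95% figure is invented for illustration, not a measurement of anyone's pipeline):

```python
# Amdahl's Law: speedup on n processors when a fraction p of the work
# is parallelizable. The serial 5% caps the speedup near 20x forever.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 16, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))
# 2 1.9
# 16 9.1
# 1024 19.6
```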

      The old algorithm (massive oversimplifications follow) worked by indexing a small part of the web on each node, building a small index, and then combining them all in the last step. Think of a concurrent mergesort or quicksort - the design was (very broadly) similar.

      The problem with this was that the final step was the one that updated the index. If one of the nodes failed and needed restarting, or was slow because its CPU fan failed and the processor down-clocked itself, the entire job was delayed. The final step was also largely serial (although it was actually done as a series of hierarchical merges), so it suffered from scalability problems of its own.

      The new approach runs the partial indexing steps independently. Rather than having a separate step to merge them all, each one is responsible for merging itself into the database. This means that if indexing slashdot.org takes longer than expected, it just delays updates for slashdot.org; it doesn't delay the entire index update.
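      A rough single-machine sketch of that contrast, with made-up shard data (real workers would of course run concurrently on separate machines):

```python
# Made-up partial results from three indexing "shards".
shard_results = [{"foo": 3}, {"bar": 1}, {"foo": 2}]

# Old shape: a global barrier, then one merge step over everything.
batch_index = {}
for partial in shard_results:        # can't start until ALL shards finish
    for term, hits in partial.items():
        batch_index[term] = batch_index.get(term, 0) + hits

# New shape: each shard merges itself in as soon as it finishes,
# so a slow shard only delays its own keys.
live_index = {}

def merge_partial(partial):
    for term, hits in partial.items():
        live_index[term] = live_index.get(term, 0) + hits

for partial in shard_results:        # imagine these firing independently
    merge_partial(partial)

assert batch_index == live_index     # same totals, different availability
```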

      The jab at Microsoft in the El Reg article is particularly funny, because Google is now moving from a programming model created at MIT's AI labs to one very similar to the model created at Microsoft Research's Cambridge lab, in collaboration with Glasgow University.

  • by kurokame ( 1764228 ) on Sunday September 12, 2010 @02:26AM (#33550850)

    Colossus is incremental, whereas MapReduce is batch-based.

    In MapReduce, you run code against each item with each operation spread across N processors, then you reduce it using a second set of code. You have to wait for the first stage to finish before running the second stage. The second stage is itself broken up into a number of discrete operations and tends to be restricted to summing results of the first stage together, and the return profile of the overall result needs to be the same as that for a single reduce operation. This is really great for applications which can be broken up in this fashion, but there are disadvantages as well.
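    The canonical illustration of that two-stage shape is word counting. A minimal single-machine sketch (in the real thing the map tasks, the shuffle, and the reduce tasks are spread across many nodes):

```python
from collections import defaultdict

docs = ["google drops mapreduce", "mapreduce is batch based"]

# Map stage: each document could be processed on a different node.
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle: group the intermediate (key, value) pairs by key.
groups = defaultdict(list)
for word, one in mapped:
    groups[word].append(one)

# Reduce stage: can't run for a key until every map task has finished.
counts = {word: sum(ones) for word, ones in groups.items()}
print(counts["mapreduce"])   # 2
```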

    MapReduce is a sequence of batch operations, and generally, Lipkovitz explains, you can't start your next phase of operations until you finish the first. It suffers from "stragglers," he says. If you want to build a system that's based on a series of map-reduces, there's a certain probability that something will go wrong, and this gets larger as you increase the number of operations. "You can't do anything that takes a relatively short amount of time," Lipkovitz says, "so we got rid of it."

    The problem for Google is that the disadvantages scale. The fact that you have to wait for all operations from the first stage to finish and that you have to wait for the whole thing to run before you find out if something broke can have a very high cost at high item counts (noting that MapReduce typically runs against millions of items or more, so "high" is very high). With the present size, it's apparently more advantageous to get changes committed successfully the first time, even if MapReduce might be able to compute the result faster under ideal circumstances.
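    Back-of-the-envelope for "this gets larger as you increase the number of operations": if each operation independently fails with probability p, the chance of at least one failure across n of them is 1 - (1 - p)^n. The failure rate below is invented purely for illustration:

```python
# Chance that a chain of n operations sees at least one failure,
# given an assumed per-operation failure probability p.
p = 1e-6
for n in (10**3, 10**6, 10**8):
    print(n, round(1 - (1 - p) ** n, 4))
# 1000 0.001
# 1000000 0.6321
# 100000000 1.0
```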

    For example, why do you use ECC memory in a server? Because you have a bloody lot of memory across a bloody lot of computers running a bloody lot of operations, and failures potentially have more serious consequences than they would in a program on someone's desktop. At higher scales, non-ideal circumstances are more common and have more serious consequences. So while they still use MapReduce for some functions where it's appropriate, it's no longer appropriate for maintaining the search index. The index has just gotten too big.

  • Googled around for more information on this Caffeine architecture. The best I could come up with was a paper on BigTable [googleusercontent.com], purported in news articles to be the basis of Caffeine.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      A paper about it will be published at OSDI '10 in October.

  • Recently I googled the subject of a slashdot article I was reading. The /. article was the third result from google. So how does google know a new article is up? Is there a special interface for that?

    • Off the top of my head, there's often an XML site map which google hits frequently to see what pages have changed. I can't see one linked anywhere on slashdot, but I think you can make one and submit it to google in your own time.

      There is also a robots.txt [slashdot.org] file which allows crawlers to fetch a page every 100 seconds - I wouldn't be surprised if google crawls the slashdot frontpage for new articles every 200 seconds or so.

      Another option is that google might have subscribed to the slashdot RSS feed - it's als
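      For what it's worth, the robots.txt claim above is easy to check with nothing but the Python standard library (crawl_delay needs Python 3.6+; whether Google honors exactly that delay is speculation):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("http://slashdot.org/robots.txt")
rp.read()                                  # fetch and parse the live file
print(rp.crawl_delay("*"))                 # the Crawl-delay value, if set
print(rp.can_fetch("*", "http://slashdot.org/"))
```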

    • Re: (Score:3, Interesting)

      by Surt ( 22457 )

      I assume Google polls sites, polling faster every time it finds a change and slower every time it doesn't. Eventually it settles into a wobble around the site's probable update rate. Otherwise they'd have to trust sites to call their API with updates, and that would let any search engine which DID employ a wobbly poll strategy beat them on results.
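      A sketch of that "wobbly poll" heuristic: poll faster after a hit, back off after a miss, clamped to sane bounds. The halving and 1.5x factors and the bounds are invented; nothing here is documented Google behavior:

```python
import time

def poll_loop(fetch, changed, interval=300.0, lo=60.0, hi=86400.0):
    """fetch() returns page content; changed(content) says whether it's new."""
    while True:
        if changed(fetch()):
            interval = max(lo, interval / 2)    # hit: poll faster
        else:
            interval = min(hi, interval * 1.5)  # miss: back off
        time.sleep(interval)
```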

  • Mod Offtopic, please (Score:3, Interesting)

    by Khyber ( 864651 ) <techkitsune@gmail.com> on Sunday September 12, 2010 @05:52AM (#33551500) Homepage Journal

    This is going to give my Camfrog name a new meaning, as I *LOVE* screwing around with file systems. Colossus Hunter, indeed!
