MapReduce — a Major Step Backwards?

The Database Column has an interesting, if negative, look at MapReduce and what it means for the database community. MapReduce is a software framework developed by Google to handle parallel computations over large data sets on cheap or unreliable clusters of computers. "As both educators and researchers, we are amazed at the hype that the MapReduce proponents have spread about how it represents a paradigm shift in the development of scalable, data-intensive applications. MapReduce may be a good idea for writing certain types of general-purpose computations, but to the database community, it is: a giant step backward in the programming paradigm for large-scale data intensive applications; a sub-optimal implementation, in that it uses brute force instead of indexing; not novel at all -- it represents a specific implementation of well known techniques developed nearly 25 years ago; missing most of the features that are routinely included in current DBMS; incompatible with all of the tools DBMS users have come to depend on."
  • by yagu ( 721525 ) * <yayagu@[ ]il.com ['gma' in gap]> on Friday January 18, 2008 @04:56PM (#22100086) Journal

    I don't know why this article is so harshly critical of MapReduce. They base their critique on the following five tenets, which they elaborate in detail in the article:

    1. A giant step backward in the programming paradigm for large-scale data intensive applications
    2. A sub-optimal implementation, in that it uses brute force instead of indexing
    3. Not novel at all -- it represents a specific implementation of well known techniques developed nearly 25 years ago
    4. Missing most of the features that are routinely included in current DBMS
    5. Incompatible with all of the tools DBMS users have come to depend on

    If you take the time to read the article you'll find they use axiomatic arguments with lemmas like "schemas are good" and "separation of the schema from the application is good", etc. First, they make the assumption that these points are relevant and germane to MapReduce. But they mostly aren't.

    Also taking the five tenets listed, here are my observations:

    1. A giant step backward in the programming paradigm for large-scale data intensive applications

      they don't offer any proof, merely their view... However, the fact that Google used this technique to re-generate their entire internet index leads me to believe that if this were indeed a giant step backward, we must have been pretty darned evolved to step "back" into such a backwards approach

    2. A sub-optimal implementation, in that it uses brute force instead of indexing

      Not sure why brute force is such a poor choice, especially given what this technique is used for (a sketch of one of these uses follows at the end of this comment). From Wikipedia:

      MapReduce is useful in a wide range of applications, including: "distributed grep, distributed sort, web link-graph reversal, term-vector per host, web access log stats, inverted index construction, document clustering, machine learning, statistical machine translation..." Most significantly, when MapReduce was finished, it was used to completely regenerate Google's index of the World Wide Web, and replaced the old ad hoc programs that updated the index and ran the various analyses.
    3. Not novel at all -- it represents a specific implementation of well known techniques developed nearly 25 years ago

      Again, not sure why something "old" represents something "bad". The most reliable rockets for getting our space satellites into orbit are the oldest ones.

      I would also argue their bold approach to applying these techniques in such a massively aggregated architecture is at least a little novel, and based on results of how Google has used it, effective.

    4. Missing most of the features that are routinely included in current DBMS

      They're mistakenly assuming this is for database programming.

    5. Incompatible with all of the tools DBMS users have come to depend on

      See previous bullet

    Are these guys just trying to stake a reputation based on being critical of Google?
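
    As a concrete illustration of one item from that Wikipedia list, here is a rough sketch of web link-graph reversal in the map/reduce model. This is toy, single-machine Python of my own; the function names and the serial driver loop are illustrative only and have nothing to do with Google's actual C++ API.

      from collections import defaultdict

      # Input: (source_page, [pages it links to]) -- the forward link graph.
      forward_links = [
          ("a.html", ["b.html", "c.html"]),
          ("b.html", ["c.html"]),
      ]

      def map_fn(source, targets):
          # Emit one (target, source) pair per outgoing link.
          for target in targets:
              yield (target, source)

      def reduce_fn(target, sources):
          # Everything linking *to* a page ends up grouped under that page's key.
          return (target, sorted(sources))

      # Toy serial driver; the real framework runs map and reduce across
      # thousands of machines and handles the grouping and failures itself.
      groups = defaultdict(list)
      for source, targets in forward_links:
          for target, src in map_fn(source, targets):
              groups[target].append(src)

      print(sorted(reduce_fn(t, s) for t, s in groups.items()))
      # [('b.html', ['a.html']), ('c.html', ['a.html', 'b.html'])]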

  • Just watch. (Score:2, Insightful)

    by jonnythan ( 79727 ) on Friday January 18, 2008 @04:59PM (#22100174)
    It's a technical step backwards, they're doing it all wrong, experts say you should do it this other way....

    And watch. It'll be massively successful because it works.
  • by CajunArson ( 465943 ) on Friday January 18, 2008 @05:02PM (#22100220) Journal
    "Are these guys just trying to stake a reputation based on being critical of Google?" I tend to agree. I could probably write a nice article about how map-reduce would be a terrible system to use for making a 3D game. Could an article like that be technically true? Sure. Would it be anything more than a logical non sequitur? Not unless Google all of a sudden came out and claimed MapReduce is the new platform for all 3D game development (not likely).
  • Databases? WTF? (Score:5, Insightful)

    by mrchaotica ( 681592 ) * on Friday January 18, 2008 @05:02PM (#22100228)

    Since when did MapReduce have anything to do with databases? It's actually about parallel computations, which are entirely different.
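
    For anyone who hasn't seen it, here is a toy word-count sketch of the model that shows where the parallelism comes from. The local process pool is just my stand-in for a cluster, and the function names are made up; this is not Google's C++ API.

      from collections import defaultdict
      from multiprocessing import Pool

      def map_fn(doc):
          # Each document is processed completely independently -- exactly
          # what lets the map phase be farmed out to many machines.
          doc_id, text = doc
          return [(word, 1) for word in text.split()]

      def reduce_fn(word, counts):
          return (word, sum(counts))

      if __name__ == "__main__":
          docs = [("a", "the cat sat"), ("b", "the cat ran")]
          with Pool(2) as pool:                 # stand-in for a cluster
              mapped = pool.map(map_fn, docs)   # "map" phase, in parallel
          groups = defaultdict(list)
          for pairs in mapped:
              for word, count in pairs:
                  groups[word].append(count)    # the grouping ("shuffle") step
          print(sorted(reduce_fn(w, c) for w, c in groups.items()))
          # [('cat', 2), ('ran', 1), ('sat', 1), ('the', 2)]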

  • Money, meet mouth (Score:4, Insightful)

    by tietokone-olmi ( 26595 ) on Friday January 18, 2008 @05:03PM (#22100242)
    Perhaps the traditional RDBMS experts will return when they can scale their paradigms to datasets that are measured in the tens of terabytes and stored on thousands of computers. Following the airplane rule, the solution needs to be able to withstand crashes in a bunch of those hosts without coming unglued.

    Now, this is not to say that a more sophisticated approach wouldn't work. It's just that when you have thousands of boxes in a few ethernet segments, communication overhead becomes really quite large; so large, in fact, that whatever communication a brute-force computation saves you will usually be worth the extra CPU. Consider that, from what I've heard, at Google these thousands of boxes are mostly containers for RAM modules, so there's rather a lot of computation power per gigabyte available to throw away on a brute-force system.

    Also, I would like to point out that map/reduce is demonstrated to work. Apparently quite well too. Certainly better than any hypothetical "better" massively parallel RDBMS available in a production quality implementation today.
  • ...entry says:

    "You seem to not have noticed that mapreduce is not a DBMS."

    Exactly. These are the same sort of criticisms that you hear around memcached [danga.com] - the feature set is smaller, etc - and they make the same mistake. It's not a DBMS, and it's not supposed to be. But it does what it does quite well nonetheless!
  • by dezert_fox ( 740322 ) on Friday January 18, 2008 @05:08PM (#22100342)
    > If you take the time to read the article you'll find they use axiomatic arguments with lemmas like: "schemas are good", and "Separation of the schema from the application is good", etc.

    Actually, it says: "The database community has learned the following three lessons from the 40 years that have unfolded since IBM first released IMS in 1968. Schemas are good. Separation of the schema from the application is good. High-level access languages are good." Way to conveniently drop important contextual information. Axioms like these, derived from 40 years of experience, carry a lot of weight for me.
  • by dazedNconfuzed ( 154242 ) on Friday January 18, 2008 @05:10PM (#22100388)
    it represents a specific implementation of well known techniques developed nearly 25 years ago

    There are many classic/old techniques which are only now being used - and very successfully - precisely because the hardware simply wasn't there. A recent /. post told of ray-tracing being soon used for real-time 3D gaming, and how it beats the socks off "rasterized" methods when a critical mass of polygons is involved; the techniques were well known and developed nearly 25 years ago, but only now do we have the CPU horsepower and vast fast memory capacities available for those "old" techniques to really shine. Likewise "old" "brute force" database techniques: they may not be clever and efficient like what we've been using for highly stable processing of relatively small-to-medium databases, but they work marvelously well when involving big unreliable networks of processors working on vast somewhat-incoherent databases - systems where modern shiny techniques just crumble and can't handle the scaling.

    Sometimes the "old" methods are best - you just need the horsepower to pull it off. Clever improvements only scale so long.
  • by abes ( 82351 ) on Friday January 18, 2008 @05:13PM (#22100462) Homepage
    Well, INDBE, but MapReduce seems like a pretty cool idea (even if it is old [which in my books does not equate to bad]). A similar argument could be made against SQL -- it's not appropriate for all solutions. It's used for most of them nowadays, in part because it's the simplest to use, but that doesn't necessarily make it better. It (of course) depends on what data you want to represent.

    Even more importantly, you can create schemas with MapReduce by how you write your Map/Reduce functions (a rough sketch of what I mean follows this comment). This is a matter of the data/function exchange (all data can be represented as a function; likewise, all functions can be represented as data). I admit ignorance of how this MapReduce system works, but I would be surprised if you couldn't get a relational database back out.

    The advantage you get with MapReduce is that you aren't necessarily tied to a single representation of the data. Especially for companies like Google, which may want to create dynamic groupings of data, this could be a big win. Again, this is all speculative, as I have very little experience with these systems.
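
    As a for-instance (this is my own toy Python, with a made-up log format; nothing here is Google's actual API): the "schema" can live entirely inside the map function, which decides what fields each raw record has and what the key is.

      from collections import defaultdict

      # Hypothetical raw input: free-form web server log lines, no declared schema.
      raw_lines = [
          "10.0.0.1 GET /index.html 200",
          "10.0.0.2 GET /missing 404",
          "10.0.0.1 GET /about.html 200",
      ]

      # The "schema" is whatever this function parses out of each record and
      # chooses as the key; change the function and you've changed the schema.
      def map_fn(line):
          host, method, path, status = line.split()
          yield (status, {"host": host, "method": method, "path": path})

      # Reduce sees every record sharing a key; here it just counts them.
      def reduce_fn(status, records):
          return (status, len(records))

      # Toy serial driver standing in for the distributed framework.
      groups = defaultdict(list)
      for line in raw_lines:
          for key, value in map_fn(line):
              groups[key].append(value)

      print(sorted(reduce_fn(k, v) for k, v in groups.items()))
      # [('200', 2), ('404', 1)]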
  • by Anonymous Coward on Friday January 18, 2008 @05:13PM (#22100478)
    The reaction seems straightforward enough. The MapReduce paradigm has proved to be very effective for a company that lives and breathes scalability, while it apparently ignores a whole bunch of database work that's been going on in academia. The fact that industry was able to produce something so effective without making use of all that knowledge base at least implicitly undercuts the importance of that work, and is thus threatening to the community which produced it. Is it any surprise that the researchers whose work was completely side-stepped by this approach aren't happy with the current situation?
  • FTFA (Score:5, Insightful)

    by smcdow ( 114828 ) on Friday January 18, 2008 @05:19PM (#22100598) Homepage

    Given the experimental evaluations to date, we have serious doubts about how well MapReduce applications can scale.


    That's a joke, right?

    I think Google's already taken care of all the experimental evaluations you'd need.

  • by brundlefly ( 189430 ) on Friday January 18, 2008 @05:25PM (#22100720)
    The point of MapReduce is that It Works. Cheaply. Reliably. It's not a solution for the Cathedral, it's one for the Bazaar.

    Comparing it to a DBMS on fanciness is pointless, because the DBMS solution fails where MapReduce succeeds.
  • by samkass ( 174571 ) on Friday January 18, 2008 @05:45PM (#22101040) Homepage Journal
    Speaking as someone who works for a company whose product uses a database that is neither relational nor object-oriented, I can say from experience that folks who have devoted a significant amount of their lives to mastering that methodology see anything else as a threat. There are definitely use-cases for non-relational databases-- they're used at both Google and Amazon, as well as many other places. You can either burn significant effort defending your decision to go non-relational, or you can move on and ignore these folks and produce great products. The problem is that sometimes they make good points (especially about some aspects of indexing), but it's almost always lost in the "but... but... but... you're not relational!" argument.
  • by ShakaUVM ( 157947 ) on Friday January 18, 2008 @05:48PM (#22101090) Homepage Journal
    Map/Reduce is a very common operation in parallel processing. From my very quick look, it does seem as if the authors are right -- it looks like a quick and dirty implementation of a common operation, and not a "paradigm shift" in the slightest.
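
    The name itself comes from the map and fold/reduce primitives that functional languages have had for decades. In plain Python terms (this is just the primitives, a toy of my own, not Google's implementation):

      from functools import reduce

      # "Map": apply a function to every element independently -- which is
      # why that phase farms out across machines so naturally.
      lengths = map(len, ["distributed", "grep", "is", "one", "use"])

      # "Reduce": fold the mapped results into a combined answer.
      total = reduce(lambda acc, n: acc + n, lengths, 0)
      print(total)  # 23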
  • by steveha ( 103154 ) on Friday January 18, 2008 @05:48PM (#22101098) Homepage
    I read through the whole article, and was just bemused. According to the article, MapReduce isn't as good as a real database at doing the sorts of things real databases do well. Um, okay, I guess, but MapReduce can do quite a lot of other things that they seem to have missed.

    Also, I had a major WTF moment when I read this:

    Given the experimental evaluations to date, we have serious doubts about how well MapReduce applications can scale.

    Empirical evidence to date suggests that MapReduce scales insanely well. Exhibit A: Google, which uses MapReduce running on literally thousands of servers at a time to chew through literally hundreds of terabytes of data. (Google uses MapReduce to index the entire World Wide Web!)

    This in turn suggests that the authors of TFA are firmly ensconced in the ivory tower.

    They complained that brute-force is slower than indexed searches. Well, nothing about MapReduce rules out the use of indexes; and for common problems, Google can add indexes as desired. (Google uses MapReduce to build their index to the Web in the first place.) And because Google adds servers by the rackful, they have quite a lot of CPU power just waiting to be used. Brute force might not be slower if you split it across thousands of servers!

    Likewise, they complain that one can't use standard database report-generating tools with MapReduce; but if the Reduce tasks insert their results into a standard database, one could then use any standard report-generating tools.

    MapReduce lets Google folks do crazy one-off jobs like asking every single server they own to check through its system logs for a particular error and, if it's found, return a bunch of config files and log files (a rough sketch of what such a job might look like follows at the end of this comment). Even if you had some sort of distributed database that could run on thousands of machines, any of which might die at any moment, and if you planned ahead and set the machines to copy their system logs into the database, I don't see how a database would be better for that task. That's just a single task I invented as an example; there are many others, and MapReduce can do them all.

    And one of the coolest things about MapReduce is how well it copes with failure. Inevitably some servers will respond very slowly, or will die and not respond; the MapReduce scheduler detects this and sends the Map tasks out to other servers so the job still finishes quickly. And Google keeps statistics on how often a computer is slow. At a lecture, I heard a Google guy explain how there was a BIOS bug that made one server in 50 disable some cache memory, thus greatly slowing down server performance; the MapReduce statistics helped them notice they had a problem, and isolate which computers had the problem.

    MapReduce lets you run arbitrary jobs across thousands of machines at once, and all the authors of the article seem to be able to see is that it's not as database-oriented as a real database.

    steveha
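
    For illustration, here is a rough, single-machine sketch of the kind of one-off "scan every server's logs" job described above. All the names and the fake log data are mine; this is not Google's API, just the shape of the computation.

      import re

      # Pretend each entry is (hostname, contents of that host's system log).
      # In the real thing, the map phase runs on thousands of machines, each
      # scanning the logs it holds locally.
      logs = [
          ("host-001", "boot ok\ncache disabled by BIOS\nservice up"),
          ("host-002", "boot ok\nservice up"),
      ]

      error_pattern = re.compile(r"cache disabled")

      def map_fn(hostname, log_text):
          # Emit the host only if its log contains the error we're hunting for.
          if error_pattern.search(log_text):
              yield ("affected", hostname)

      def reduce_fn(key, hostnames):
          return sorted(hostnames)

      emitted = []
      for host, log in logs:
          emitted.extend(map_fn(host, log))

      print(reduce_fn("affected", [host for _, host in emitted]))
      # ['host-001']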
  • by SharpFang ( 651121 ) on Friday January 18, 2008 @06:18PM (#22101570) Homepage Journal
    Indexing works by picking a small slice of the data you have (say, as a list of hashes) and turning it into a much smaller table that maps each value onto the group of records matching it. The index is smaller and conforms to a strict format, so it's very fast to brute force. Then, once you get the list of matching entries, you scan just those, and this way you get the record.

    This works well if you can create such a slice - a piece of data you will match against. It becomes increasingly unwieldy if there are many ways to match the data - multiple columns mean multiple indices. And if you remove columns entirely, making records just long strings, and start matching arbitrary words in the record, the index becomes useless - hashes become bigger than the chunks of data they match against, indexing all possible combinations of words you could match against yields an index bigger than the database, and generally... bummer. An index doesn't work well against freestyle data searched in arbitrary ways.

    Imagine a database whose main column is a VARCHAR(255), using about the full length of it; then search it using a lot of LIKE and AND, picking various short pieces out of that column, with the database being terabytes big. Try to invent a way to index that.
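
    A toy illustration of the point (plain Python of my own, nothing to do with any real DBMS internals):

      # Toy table: record id -> one long free-text column.
      rows = {
          1: "error in module foo while loading bar",
          2: "bar loaded fine, foo skipped",
          3: "nothing interesting happened",
      }

      # An index helps when you can key on one fixed slice of each record,
      # e.g. the first word; lookup is then a cheap hash probe.
      index = {}
      for rid, text in rows.items():
          index.setdefault(text.split()[0], []).append(rid)

      print(index.get("bar"))  # [2] -- fast, but only for this one access path

      # A query like ... WHERE col LIKE '%foo%' AND col LIKE '%bar%' has no
      # fixed slice to key on, so you end up scanning every row anyway.
      print([rid for rid, text in rows.items() if "foo" in text and "bar" in text])
      # [1, 2]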
  • by gnuman99 ( 746007 ) on Friday January 18, 2008 @06:32PM (#22101816)
    I thought Google searches weren't exact. You know, they were more statistical in nature. The entire algorithm is probably not based on absolute numbers (guessing, but otherwise it would not make sense).

    The thing is, if Google uses this to create their index-like structure of the internet for their search engine, and it is not exactly like an RDBMS, well, so what? The MapReduce thing seems to be targeted at large sets of data and semi-accurate data mining, not exact results. No one really cares if there are 3,000,000,000 sites or 3,000,000,002 sites with Linux in them somewhere.

    Comparing an RDBMS to MapReduce is like comparing a math function to a paper graph of that function. The first gives you exact results for all data in its domain. The second gives quick, pain-free, semi-accurate results for some parts of the domain.

    Now, I will not be using MapReduce but then I don't see why Google should not. It is their business.
  • Hmmm.... ISTM that the basic critiques come down to:

    1) No indexing.

    Which means

    2) Certain types of constraints probably don't work (such as UNIQUE constraints)

    Which also means

    3) Referential integrity checking and other things don't work.

    This leads to the conclusion that the idea is good for certain types of data-intensive but not integrity-intensive applications (think Ruby on Rails-type apps) but *not* good for anything Edgar Codd had in mind....
  • Re:Databases? WTF? (Score:3, Insightful)

    by martin-boundary ( 547041 ) on Friday January 18, 2008 @08:42PM (#22103380)
    Um, nope. You're not thinking abstractly enough, that is, you're not thinking like a computer scientist. MapReduce is a (rather obvious) framework for processing large lists of (key,data) pairs in parallel, therefore it can be compared with other such systems. Both MapReduce and RDBMSes basically compute a function on a set of (key,data) pairs.

    1) The fact that MapReduce is being used for specific low-level applications does not make it intrinsically different from, or incomparable to, an RDBMS, although the comparison may not always be worthwhile.

    2) The more MapReduce gets used for things other than search engine calculations, the more it becomes worthwhile to do the comparison.
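
    To make the comparison concrete, here is the same tiny aggregation expressed both ways (toy Python of my own; the SQL is only in a comment, and none of this is Google's or any vendor's actual interface):

      from collections import defaultdict

      pairs = [("a", 1), ("b", 2), ("a", 3)]

      # In an RDBMS you would declare the computation:
      #
      #     SELECT key, SUM(value) FROM t GROUP BY key;
      #
      # In MapReduce you hand the system the two halves of it yourself:

      def map_fn(key, value):
          yield (key, value)          # identity map in this case

      def reduce_fn(key, values):
          return (key, sum(values))   # the aggregate

      groups = defaultdict(list)
      for k, v in pairs:
          for key, value in map_fn(k, v):
              groups[key].append(value)

      print(sorted(reduce_fn(k, vs) for k, vs in groups.items()))
      # [('a', 4), ('b', 2)]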

  • Re:Databases? WTF? (Score:3, Insightful)

    by martin-boundary ( 547041 ) on Friday January 18, 2008 @08:53PM (#22103522)

    More and more systems use databases simply as a data archive, not for primary work.
    I wouldn't count on even that being a long term trend. It takes time for people to come up with things to do with a database. Especially really big databases. Wait another ten years, and people will complain that their dumb data archives are not RDBMSes.
  • Re:Databases? WTF? (Score:3, Insightful)

    by Temporal ( 96070 ) on Friday January 18, 2008 @10:12PM (#22104204) Journal
    I guess if you consider anything that involves (key, value) pairs to be basically an RDBMS, you might as well classify almost everything as an RDBMS, which seems to make the term pointless. Why write software anymore when we can just use a database? The reality is that I would use MapReduce and MySQL to solve very different problems.

    I think TFA is being silly in trying to compare MapReduce to DBMSs. Yes, of course MapReduce compares unfavorably, because it isn't a DBMS. The comment that MapReduce is "A sub-optimal implementation, in that it uses brute force instead of indexing" is particularly telling: MapReduce is not intended for situations where you would want indexing, and never was. In general, the whole article is trying to judge MapReduce on points that are completely irrelevant to what it was designed for and the way it is actually used.

    Really, if MapReduce were a DBMS, then why did the creators of MapReduce also create BigTable? BigTable *is* meant to be like a database, although it omits a lot of features in favor of scalability. MapReduce and BigTable are used for completely different things. I think Jeff and Sanjay (creators of both MapReduce and BigTable) probably find it pretty amusing to see MapReduce evaluated as a DBMS.
