
MapReduce Goes Commercial, Integrated With SQL

CurtMonash writes "MapReduce sits at the heart of Google's data processing — and Yahoo's, Facebook's and LinkedIn's as well. But it's been highly controversial, due to an apparent conflict with standard data warehousing common sense. Now two data warehouse DBMS vendors, Greenplum and Aster Data, have announced the integration of MapReduce into their SQL database managers. I think MapReduce could give a major boost to high-end analytics, specifically to applications in three areas: 1) Text tokenization, indexing, and search; 2) Creation of other kinds of data structures (e.g., graphs); and 3) Data mining and machine learning. (Data transformation may belong on that list as well.) All these areas could yield better results if there were better performance, and MapReduce offers the possibility of major processing speed-ups."
  • Good question. I had to look it up [wikipedia.org]. (Would it have killed the submitter or editor to include a link?)

    Basically, the software gets its name from the list-processing functions "map" (take every item in a list and transform it, producing a list of the same size) and "reduce" (perform an operation on a list that produces a single value or a smaller list). The actual software has nothing to do with "map" and "reduce" as library functions, but it does do tokenization and processing on massive amounts of data.

    Presumably the Map/Reduce part comes from first normalizing the items being processed (a map operation) then reducing them down to a folded data structure (reduce), thus creating indexes of data suitable for fast searching.
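
    For what it's worth, here is a minimal Python sketch of the two list operations the name comes from (the word list is just an illustrative input I made up):

        # Minimal illustration of the "map" and "reduce" list operations the name comes from.
        from functools import reduce

        words = "the quick brown fox the lazy dog the".split()

        # map: transform every item, producing a list of the same size
        pairs = list(map(lambda w: (w, 1), words))

        # reduce: fold the list down to a single, smaller value -- here a dict of counts,
        # which is the sort of index-like structure the parent describes
        def combine(counts, pair):
            word, n = pair
            counts[word] = counts.get(word, 0) + n
            return counts

        counts = reduce(combine, pairs, {})
        print(counts)  # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}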

  • Re:Um. (Score:1, Informative)

    by EvilIntelligence ( 1339913 ) on Tuesday August 26, 2008 @05:20PM (#24756653)
    Yes, it's called hash partitioning. It's been around since version 7 or 8, about 10 years ago (the current release is 11).
  • by ELProphet ( 909179 ) <davidsouther@gmail.com> on Tuesday August 26, 2008 @05:30PM (#24756749) Homepage

    Actually, MapReduce doesn't do anything about the way data is stored; it's just a pipe between two sets of stored data, and really just needs an interface on both ends to get the task into MapReduce (which is what the projects TFS/A mention seem to do). BigTable is the storage mechanism that's incompatible with most traditional row-based RDBMSs. GFS is just the underlying file system.

    http://labs.google.com/papers/gfs.html [google.com]
    http://labs.google.com/papers/bigtable.html [google.com]
    http://labs.google.com/papers/mapreduce-osdi04.pdf [google.com]

    Note that all of those were published several years ago; I'd bet dollars to donuts that Google is _WAY_ beyond this internally if it's just reaching commercial use by their competitors.

  • Re:Um. (Score:1, Informative)

    by Anonymous Coward on Tuesday August 26, 2008 @05:30PM (#24756757)
    Uh, no. MapReduce is a parallel programming model -- not a way of laying out data on disk.
  • The correct project name is Hadoop [apache.org]. It was factored out of Nutch 2.5 years ago, and Yahoo has been putting a lot of effort into making it scale up. We run 15,000 nodes with Hadoop in clusters of up to 2,000 nodes each, and soon that will be 3,000 nodes. I used 900 nodes to win Jim Gray's terabyte sort benchmark [yahoo.com] by sorting 1 TB of data (10 billion 100-byte records) in 3.5 minutes. It is also used to generate Yahoo's Web Map [yahoo.com], which has 1 trillion edges in it.

  • by Anonymous Coward on Tuesday August 26, 2008 @06:06PM (#24757125)

    If my memory's right, this Java API for doing grid computing uses this pattern and gives quite a good explanation of it (I think it was developed by Google):
    http://www.gridgain.com/

  • by Anonymous Coward on Tuesday August 26, 2008 @06:14PM (#24757195)

    Don't confuse the search engine with MapReduce. The MapReduce engine creates the indexes for the search engine; it's a batch-job processor. Just because Google chose C++ doesn't mean it's the only choice, even if it was the best choice for them. Hadoop (a Java project at Yahoo, and open source too) has a MapReduce implementation.

  • Re:Um. (Score:3, Informative)

    by raddan ( 519638 ) on Tuesday August 26, 2008 @06:18PM (#24757231)
    Actually, the two are paired: programming model and implementation. The reason there's a programming model is that functional methods allow Google's implementation to automatically parallelize the input data for feeding to the cluster. So the implementation is very important, because that's actually how the data is processed and returned.

    In that sense, Oracle's clustering optimizations are also a paired programming model and implementation, since, presumably, you need to know Oracle's SQL language extensions in order to take advantage of them (disclaimer: I don't use Oracle). From what I understand about functional programming, SQL should be ideally positioned to take advantage of these kinds of optimizations, since the actual implementation details of any SQL query are always left to the query optimizer, SQL being a declarative language. I'm going to speculate wildly and say that you could probably write a SQL interpreter using a functional style as well, and that good ones probably already do.
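
    To make that speculation concrete, here is a rough Python sketch (my own illustration, not anything from Oracle or TFA) of how a declarative aggregation like SELECT word, COUNT(*) FROM docs GROUP BY word decomposes into a map phase and a reduce phase that an engine is free to parallelize however it likes:

        from collections import defaultdict
        from itertools import chain

        # Rough equivalent of: SELECT word, COUNT(*) FROM docs GROUP BY word
        docs = ["john likes sue", "sue doesn't like john"]

        # Map phase: emit (key, 1) pairs per row; each doc could go to a different node
        def map_phase(doc):
            return [(word, 1) for word in doc.split()]

        # Shuffle: group emitted pairs by key (GROUP BY declares *what*, not *how*)
        grouped = defaultdict(list)
        for key, value in chain.from_iterable(map_phase(d) for d in docs):
            grouped[key].append(value)

        # Reduce phase: aggregate each key's values; keys are independent, so this parallelizes too
        result = {key: sum(values) for key, values in grouped.items()}
        print(result)  # {'john': 2, 'likes': 1, 'sue': 2, "doesn't": 1, 'like': 1}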
  • by jbolden ( 176878 ) on Tuesday August 26, 2008 @06:59PM (#24757601) Homepage

    Here is the connection between map and reduce.

    In programming

    map takes a function from A to B and a list of A's, and produces a list of B's.

    reduce is an associative fold: it takes a list of B's and an initial value and produces a single C.

    So, for example, MAP a collection of social security numbers to ages, and then select (REDUCE TO) the maximum age from the collection.

    Now there are results called "fusions" which allow you to make computational reductions, for example:
    foldr f a . map g = foldr (f.g) a

    So, in other words, the data set is being treated like a large array, using array-manipulation commands.
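
    A small Python sketch of that SSN/age example (the age_of lookup table is made up for illustration):

        from functools import reduce

        # Hypothetical lookup from social security number to age
        age_of = {"111-11-1111": 34, "222-22-2222": 57, "333-33-3333": 42}
        ssns = ["111-11-1111", "222-22-2222", "333-33-3333"]

        # MAP: ssn -> age (a list of A's becomes a list of B's)
        ages = list(map(lambda ssn: age_of[ssn], ssns))

        # REDUCE: fold the ages down to a single value C, here the maximum
        max_age = reduce(lambda acc, age: max(acc, age), ages, 0)

        # The "fusion" the parent mentions: map and fold combine into a single pass
        max_age_fused = reduce(lambda acc, ssn: max(acc, age_of[ssn]), ssns, 0)
        assert max_age == max_age_fused == 57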

  • by grae ( 14464 ) <graeNO@SPAMimho.com> on Tuesday August 26, 2008 @07:11PM (#24757739) Homepage
    If you're interested in one of the sorts of things that Google has done with MapReduce, look no further than Sawzall.

    http://research.google.com/archive/sawzall.html [google.com]

    Sawzall is essentially designed around the mapreduce framework. It's impossible to *not* write a mapreduction in Sawzall. The way it works:

    Your program is written to process a single record. The magic part happens when you output: you have to output to special tables. Each of these table types has a different way that it combines data emitted to it.

    So, during the map phase, your program is run in parallel on each input record. During the reduce phase, each output table performs whatever aggregation operation was specified for it.
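
    Roughly, in Python rather than actual Sawzall (the SumTable type and its emit() call are my own illustrative stand-ins, not Google's API):

        from collections import defaultdict

        # Stand-in for one of Sawzall's aggregating output tables: a "sum" table
        # adds up whatever is emitted to each index, regardless of which worker emitted it.
        class SumTable:
            def __init__(self):
                self.data = defaultdict(int)

            def emit(self, index, value):
                self.data[index] += value  # the reduce step lives in the table

        word_counts = SumTable()

        # The user's program only ever sees one record at a time (the map part);
        # in the real system many copies of this run in parallel over the input.
        def process_record(record):
            for word in record.split():
                word_counts.emit(word, 1)

        for record in ["John likes Sue", "Sue doesn't like John"]:
            process_record(record)

        print(dict(word_counts.data))  # {'John': 2, 'likes': 1, 'Sue': 2, "doesn't": 1, 'like': 1}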

    There was some work to be done in having enough different output-table types to do everything that was useful, especially since you might want to take the output and plug it in as the input to another phase of mapreduction.

    One of the biggest reasons this was a major innovation for Google was that it let some of the people who weren't really programmers still come up with useful programs, because the Sawzall language was pretty simple (especially when combined with some of the library functions that had been implemented to do common sorts of computations.) There were also some interesting ways in which the security model was implemented, but as far as I know they haven't been published yet.

    There certainly are plenty of other technical things that can be done to improve a system like MapReduce (and I know that many of them were in various forms of experimentation when I left the company) but at least some of them are highly dependent on Google's infrastructure, and not really relevant to a general discussion. (I suspect that the papers linked above might have some hints, but it has been a while since I looked at them.)

  • by Jack9 ( 11421 ) on Tuesday August 26, 2008 @07:13PM (#24757757)

    Google's MapReduce framework has a native resource manager that knows what resources are available, is aware of failures, and is prepared to reschedule failed processes and decide where (and when?) to direct finished tasks. Basically it's a job queue for distributed processing on a private network. MapReduce is just one tool. You aren't going to get much out of it after you max out your local machine's processing until you start work on the rest of it. What's really scary is that MySQL announces that they finally discovered the ancient algorithm of multithreaded recursive aggregation: "Hey look, in some cases MySQL won't waste processing power!" //i'm a mysql fanboy, but this is really an embarrassing announcement

  • by Rakishi ( 759894 ) on Tuesday August 26, 2008 @08:31PM (#24758603)

    Well, someone should tell that to the people working on Hadoop. I'm sure they'd love to know that their Java MapReduce-based framework is impossible. Maybe they'll even be able to use the paradox to build a perpetual motion machine and power the world.

    See: http://developers.slashdot.org/comments.pl?sid=900359&cid=24756761 [slashdot.org]

  • by severoon ( 536737 ) on Tuesday August 26, 2008 @08:49PM (#24758825) Journal

    Map-Reduce is definitely a technique related to grid computing, but they are not one and the same.

    The most popular (to my knowledge) open source Java library implementing MR is Hadoop [apache.org].

    Here's the algorithm in a nutshell (anyone who knows more than me, please correct, and I'll be forever grateful). I have a bunch of documents and I want to generate a list of word counts. So I begin with the first document and map each word in the document to the value 1. I return each mapping as I do it, and it is merge-sorted by key into a map. Let's say I start with a document of a single sentence: John likes Sue, but Sue doesn't like John. At the end of the map phase, I have compiled the following map, sorted by key:

    • but - 1
    • doesn't - 1
    • like - 1
    • likes - 1
    • John - 1
    • John - 1
    • Sue - 1
    • Sue - 1

    Now begins the reduce phase. Since the map is sorted by key, all the reduce phase does is iterate through the keys and add up the associated values until a new key is encountered. The result is:

    • but - 1
    • doesn't - 1
    • like - 1
    • likes - 1
    • John - 2
    • Sue - 2

    Simple. Stupid. What's the point? The point is that the way this algorithm divides up the work happens to be extremely convenient for parallel processing. So the map phase of a single document can be split up and farmed out to different nodes in the grid, and processed separately from the reduce phase. The merge-sort can even be done at a different processing node as mappings are returned. Redundancy can be achieved if the same document chunk is farmed out to several nodes for simultaneous processing, and the first one that returns the result is used, the others simply ignored or canceled (maybe they're queued up at redundant nodes that were busy, so canceling means simply removing from the queue with very few cycles wasted). Similarly, because the resulting map is sorted by key, an extremely large map can easily be split and sent to several processing nodes in parallel. The original task of counting words across a set of documents can be decomposed to a ridiculous extent for parallelization.
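
    Here is that word-count example as a runnable Python sketch; the thread pool just stands in for grid nodes, and the one-document input and punctuation stripping are my own simplifications:

        from collections import defaultdict
        from concurrent.futures import ThreadPoolExecutor

        documents = ["John likes Sue, but Sue doesn't like John."]

        # Map: each document (or chunk) maps every word to 1; any node can do this independently
        def map_words(doc):
            return [(word.strip(".,"), 1) for word in doc.split()]

        # Reduce: sort the pairs by key (the "merge-sorted by key" step), then add up each key's values
        def reduce_counts(pairs):
            counts = defaultdict(int)
            for word, n in sorted(pairs):
                counts[word] += n
            return dict(counts)

        # The map phase is farmed out in parallel; the reduce phase runs over the collected output
        with ThreadPoolExecutor() as pool:
            mapped = [pair for chunk in pool.map(map_words, documents) for pair in chunk]

        print(reduce_counts(mapped))
        # {'John': 2, 'Sue': 2, 'but': 1, "doesn't": 1, 'like': 1, 'likes': 1}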

    Of course, it doesn't make much sense to actually do this unless you have a very large number of documents. Or, let's say you have a lot of computing resources, but each resource on its own is very limited in terms of processing power. Or both.

    This is very close to the problem a company like Google has to solve when indexing the web. The number of documents is huge (every web page), and they don't have any super computers—just a whole ton of cheap, old CPUs in racks.

    At the end of the day, Map-Reduce is only useful for tasks that can be decomposed, though. If you have a problem with separate phases, where the input of each phase is determined by the output of the previous phase, then they must be executed serially and Map-Reduce can't help you. If you consider the word-counting example I posted above, it's easy to see that the result required depends upon state that is inherent in the initial conditions (the documents)—it doesn't matter how you divide up a document or if you jumble up the words, the count associated with each word doesn't change, so the result you're after doesn't depend on the context surrounding those words. On the other hand, if you're interested in counting the number of sentences in those documents, you might have a much more difficult problem. (You might think you could just chunk the documents up at the sentence level, but whether or not something is a sentence depends upon surrounding context—a machine can easily mistake an abbreviation like Mr. for the end of a sentence, especially if that Mr. is followed by a capital letter which could indicate the beginning of a new sentence...which it almost always is. Actually...if you're smart you can probably come up with a very compelling argument that this

  • by Anonymous Coward on Tuesday August 26, 2008 @10:40PM (#24759819)

    This classic word count example by Google is exactly what Aster demonstrated in their webinar via a live demo of their In-database MapReduce software:

    http://www.asterdata.com/product/webcast_mapreduce.html

  • by Anonymous Coward on Tuesday August 26, 2008 @11:17PM (#24760111)

    Hadoop is written in Java and does a fine job. And Google uses more Java than you can imagine.
