
MapReduce Goes Commercial, Integrated With SQL

Posted by kdawson
from the patterns-in-the-data dept.
CurtMonash writes "MapReduce sits at the heart of Google's data processing — and Yahoo's, Facebook's and LinkedIn's as well. But it's been highly controversial, due to an apparent conflict with standard data warehousing common sense. Now two data warehouse DBMS vendors, Greenplum and Aster Data, have announced the integration of MapReduce into their SQL database managers. I think MapReduce could give a major boost to high-end analytics, specifically to applications in three areas: 1) Text tokenization, indexing, and search; 2) Creation of other kinds of data structures (e.g., graphs); and 3) Data mining and machine learning. (Data transformation may belong on that list as well.) All these areas could yield better results if there were better performance, and MapReduce offers the possibility of major processing speed-ups."
  • by Anonymous Coward on Tuesday August 26, 2008 @04:53PM (#24756325)

    and can I run Linux on it? Or it on Linux? Is it available for my iPhone?

    • MapReduce is the algorithm used to determine the optimum folding pattern used to reduce a standard road map back into its folded state. Duh.

      • Re: (Score:2, Funny)

        by Anonymous Coward

        Why can't they just look at the creases? Duuuuuuh.

      • by AmberBlackCat (829689) on Tuesday August 26, 2008 @07:11PM (#24757735)
        I thought those were like Rubik's Cubes where you just rip them apart and put them back together right.
      • by spazdor (902907)

        That problem has already been solved by a collaboration of millions of computers! Haven't you ever heard of Folding@Home?

"Ok, what about accordion-style from the leftmost edge, with a vertical fold at the beginning!?"

      • by zevans (101778)

        MapReduce is the algorithm used to determine the optimum folding pattern used to reduce a standard road map back into its folded state. Duh.

        Coded for, we assume, on the Y chromosome only.

    • Good question. I had to look it up [wikipedia.org]. (Would it have killed the submitter or editor to include a link?)

Basically, the software gets its name from the list processing functions "map" (to take every item in a list and transform it, thus producing a list of the same size) and "reduce" (to perform an operation on a list that produces a single value or a smaller list). The actual software has nothing to do with "map" and "reduce" in the literal sense, but it does do tokenization and processing on massive amounts of data.

      Presumably the Map/Reduce part comes from first normalizing the items being processed (a map operation) then reducing them down to a folded data structure (reduce), thus creating indexes of data suitable for fast searching.
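
  The two list functions the name comes from can be sketched in a few lines of Python (a generic illustration, not Google's code):

  ```python
  from functools import reduce

  # "map": transform every item in a list, producing a list of the same size.
  squares = list(map(lambda x: x * x, [1, 2, 3, 4]))   # [1, 4, 9, 16]

  # "reduce": combine a list's items down to a single value.
  total = reduce(lambda acc, x: acc + x, squares, 0)   # 30
  ```

  MapReduce-the-system generalizes exactly this shape: a map step applied independently per item, and a reduce step that combines the results.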

      • Re: (Score:1, Informative)

        by Anonymous Coward

        If my memory's right, this Java API for doing grid computing uses the same pattern and gives quite a good explanation of it (I think the pattern was developed by Google):
        http://www.gridgain.com/

        • by severoon (536737) on Tuesday August 26, 2008 @08:49PM (#24758825) Journal

          Map-Reduce is definitely a technique related to grid computing, but they are not one and the same.

          The most popular (to my knowledge) open source Java library implementing MR is Hadoop [apache.org].

          Here's the algorithm in a nutshell (anyone who knows more than me, please correct, and I'll be forever grateful). I have a bunch of documents and I want to generate a list of word counts. So I begin with the first document and map each word in the document to the value 1. I return each mapping as I do it, and it is merge-sorted by key into a map. Let's say I start with a document of a single sentence: John likes Sue, but Sue doesn't like John. At the end of the map phase, I have compiled the following map, sorted by key:

          • John - 1
          • John - 1
          • Sue - 1
          • Sue - 1
          • but - 1
          • doesn't - 1
          • like - 1
          • likes - 1

          Now begins the reduce phase. Since the map is sorted by key, all the reduce phase does is iterate through the keys and add up the associated values until a new key is encountered. The result is:

          • John - 2
          • Sue - 2
          • but - 1
          • doesn't - 1
          • like - 1
          • likes - 1

          Simple. Stupid. What's the point? The point is that the way this algorithm divides up the work happens to be extremely convenient for parallel processing. So, the map phase of a single document can be split up and farmed out to different nodes in the grid for processing, which can be processed separately from the reduce phase. The merge-sort can even be done at a different processing node as mappings are returned. Redundancy can be achieved if the same document chunk is farmed out to several nodes for simultaneous processing, and the first one that returns the result is used, the others simply ignored or canceled (maybe they're queued up at redundant nodes that were busy, so canceling means simply removing from the queue with very few cycles wasted). Similarly, because the resulting map is sorted by key, an extremely large map can easily be split and sent to several processing nodes in parallel. The original task of counting words across a set of documents can be decomposed to a ridiculous extent for parallelization.
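
          The walkthrough above can be condensed into a single-process Python sketch (function names are illustrative; punctuation is stripped for simplicity, and the real system distributes each phase across nodes):

          ```python
          from itertools import groupby
          from operator import itemgetter

          def map_phase(document):
              """Emit a (word, 1) pair for every word in the document."""
              return [(word, 1) for word in document.split()]

          def reduce_phase(pairs):
              """Sort pairs by key (the merge-sort/shuffle step), then sum
              the values associated with each distinct key."""
              pairs = sorted(pairs, key=itemgetter(0))
              return {key: sum(count for _, count in group)
                      for key, group in groupby(pairs, key=itemgetter(0))}

          doc = "John likes Sue but Sue doesn't like John"
          counts = reduce_phase(map_phase(doc))   # {'John': 2, 'Sue': 2, 'but': 1, ...}
          ```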

          Of course, it doesn't make much sense to actually do this unless you have a very large number of documents. Or, let's say, you have a lot of computing resources, but each resource on its own is very limited in terms of processing power. Or both.

          This is very close to the problem a company like Google has to solve when indexing the web. The number of documents is huge (every web page), and they don't have any super computers—just a whole ton of cheap, old CPUs in racks.

          At the end of the day, Map-Reduce is only useful for tasks that can be decomposed, though. If you have a problem with separate phases, where the input of each phase is determined by the output of the previous phase, then they must be executed serially and Map-Reduce can't help you. If you consider the word-counting example I posted above, it's easy to see that the result required depends upon state that is inherent in the initial conditions (the documents)—it doesn't matter how you divide up a document or if you jumble up the words, the count associated with each word doesn't change, so the result you're after doesn't depend on the context surrounding those words. On the other hand, if you're interested in counting the number of sentences in those documents, you might have a much more difficult problem. (You might think you could just chunk the documents up at the sentence level, but whether or not something is a sentence depends upon surrounding context—a machine can easily mistake an abbreviation like Mr. for the end of a sentence, especially if that Mr. is followed by a capital letter which could indicate the beginning of a new sentence...which it almost always is. Actually...if you're smart you can probably come up with a very compelling argument that this

          • by Anonymous Coward on Tuesday August 26, 2008 @10:40PM (#24759819)

            This classic word count example by Google is exactly what Aster demonstrated in their webinar via a live demo of their In-database MapReduce software:

            http://www.asterdata.com/product/webcast_mapreduce.html

          • by gslavik (1015381)

            I don't think that there is sorting going on in MapReduce (from what I've read). Could be that I missed something ...

            • by severoon (536737)

              I only have a passing familiarity with Map-Reduce, so I'm definitely not an authoritative source. It's definitely possible that sorting isn't part of the algorithm itself, but rather one example of context around how it's often implemented. It definitely makes sense, though—why not merge-sort the results as mappings are returned? If you do implement it this way, it just makes it possible to deal with really large maps that need to be spread over multiple nodes.

          • I'm not quite entirely sure what you mean by the verb "map", the noun "map", and in which sense you use it in each instance. Also, I'm unsure why you think sorting enters into it.

            My understanding of MapReduce is that it's (surprise!) all about applying the higher-order functions map and then reduce. Here's what they do:

            Map takes a function f and a list [x_1, ..., x_n], then returns [f(x_1), ..., f(x_n)]. That is, it applies f to all the elements of the list. [variants takes multi-argument functions and

            • by severoon (536737)

              map, v. - to perform a mapping

              map, n. - a collection of mappings

              I think you describe the nuts and bolts of the algorithm...but that's not really that helpful when it comes to understanding the usefulness.

              The big fuss about map-reduce (not necessarily Google's) is that we've pretty much hit the speed limit in single core processing power. 4GHz is about it...it's not going to get any faster for some time. Unfortunately, most programs are written to only run on a single core, so adding more cores is only go

      • by jbolden (176878) on Tuesday August 26, 2008 @06:59PM (#24757601) Homepage

        Here is the connection between map and reduce.

        In programming

        map takes a function from A to B, a list of A's and produces a list of B's

        reduce is an associative fold function. It takes a list of B's and an initial value and produces a single C.

        Say, for example, you MAP a collection of social security numbers to ages and then select (REDUCE TO) the maximum age from the collection.

        Now there are results called "fusions" which allow you to make computational reductions, for example:
        foldr f a . map g = foldr (f.g) a

        So in other words the data set is being treated like a large array using array manipulation commands.
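
        The fusion law above is Haskell; a Python analogue (using a left fold via functools.reduce rather than foldr, which is fine here because the combining function is associative) shows the two forms agree:

        ```python
        from functools import reduce

        xs = [1, 2, 3, 4]
        g = lambda x: x * x           # the MAP step: A -> B
        f = lambda acc, b: acc + b    # the REDUCE step: fold B's into a C

        # Unfused: map first, then fold the mapped list.
        unfused = reduce(f, map(g, xs), 0)

        # Fused: one fold that applies g inside the combining function.
        fused = reduce(lambda acc, x: f(acc, g(x)), xs, 0)

        assert unfused == fused == 30   # 1 + 4 + 9 + 16
        ```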

      • Re: (Score:3, Informative)

        by Jack9 (11421)

        Google's mapreduce framework has a native resource manager that's aware of what resources are available, aware of failures, and is prepared to reschedule failed processes and decide where (and when?) to direct finished tasks. Basically it's a job queue for distributed processing using a private network. MapReduce is just one tool. You aren't going to get much out of it after you max out your local machine's processing until you start work on the rest of it. What's really scary is that MySQL announces that they final

      • Basically, the software gets its name from the list processing functions "map" (to take every item in a list and transform it, thus producing a list of the same size) and "reduce" (to perform an operation on a list that produces a single value or smaller list).

        As does my Slashdot user name. Great, now everyone is going to think I'm calling on people to "filter" this software somehow, which I'd never heard of before this story. And it's "highly controversial", that's helpful.

      • In Haskell, there is the function "fold" (foldr or foldl) for this. What's so special about this?

        Haskell has "map", "filter", "zip", "reverse" and whatnot...
        (... why must I think of Missy Elliott songs now?)

    • by Anonymous Coward on Tuesday August 26, 2008 @05:03PM (#24756451)

      and can I run Linux on it? Or it on Linux?

      Have you ever considered that it might itself be a distro? A, like, super-leet distro that the big Valley firms have been hacking together for the past ten years, only giving access to employees that sign a super-nasty NDA? A distro that traces back to a Photoshop 1.0 plugin for resizing GIFs?

    • by jefu (53450)

      MapReduce is just an idiom (a pattern, if you will) for processing collections (arrays, lists, trees, database tables...) of data. There is often another piece, filter, that cuts out bits you don't want; that can easily be done in the reduce step, though sometimes it is done somewhere else.

      For example, suppose you want to compute exp(x) using the usual Taylor series expansion with 20 terms. Start with the list [0,1,2,3,4,5, .. 19]. Then map the function:
      f(i) = x^i / i!
      to each entry in the li
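
      The comment is cut off, but the idea it sketches would finish roughly like this in Python (my completion, not the original author's):

      ```python
      import math
      from functools import reduce

      def taylor_exp(x, terms=20):
          # Map: i -> x^i / i! for each i in [0, 1, ..., terms-1]
          series = map(lambda i: x ** i / math.factorial(i), range(terms))
          # Reduce: sum the mapped terms into a single value
          return reduce(lambda a, b: a + b, series, 0.0)
      ```

      With 20 terms, taylor_exp(1.0) agrees with math.e to well past 12 decimal places.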

    • and can I run Linux on it? Or it on Linux? Is it available for my iPhone?

      First let's figure out if we can run Vista with it. Vista is toooooo slow.

  • by Anonymous Coward

    People who don't know LISP are bound to reinvent it, badly.

  • by MarkWatson (189759) on Tuesday August 26, 2008 @05:02PM (#24756441) Homepage

    Data warehousing (here I mean databases stored in column order for faster queries, etc.) may get a lift from using map reduce over server clusters. This would get away from using relational databases for massive data stores for problems where you need to sweep through a lot of data, collecting specific results.

    I think that it is interesting, useful, and cool that Yahoo is supporting the open source Nutch system, which implements MapReduce APIs for a few languages - it makes it easier to experiment with MapReduce on a budget.

    • Re: (Score:3, Interesting)

      by roman_mir (125474)

      Except that relational databases are not just indexed objects copied across a large network of cheap PCs. What's good for Google may not be suitable for other databases, which actually care about the ACID properties of transactions and don't necessarily have the infrastructure to run highly parallel select queries.

      • by jefu (53450)

        I'm currently working on a project where users will be able to apply different types of transformation and collection to timestamped data and map/filter/reduce style algorithms are perfect ways to give them that capability.

        The kind of capability might look something like : give me the average temperature at hourly intervals for each day in the year for a dataset that spans multiple years. In this case there's no map, and the reduce does the work, in other cases this may be turned around.

        The data invol
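
        A single-machine sketch of that capability, assuming readings arrive as (datetime, temperature) pairs (the names here are mine, not the project's):

        ```python
        from collections import defaultdict
        from datetime import datetime

        def hourly_averages(readings):
            """Group temperatures by (day-of-year, hour) across all years,
            then reduce each group to its average."""
            buckets = defaultdict(list)
            for ts, temp in readings:
                buckets[(ts.timetuple().tm_yday, ts.hour)].append(temp)
            return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

        readings = [(datetime(2007, 1, 1, 12), 10.0),
                    (datetime(2008, 1, 1, 12), 20.0)]
        averages = hourly_averages(readings)   # {(1, 12): 15.0}
        ```

        Here the grouping key plays the role of the map, and the averaging is the reduce; as the parent notes, in other cases this may be turned around.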

      • by Zaaf (190878)

        Since the main difference between a RDBMS and MapReduce seems to be that the former is most suited for structured data and the latter best suited for unstructured data, it might be a good fit to use them both. And according to studies [computerworld.com], it might be that north of 80% of our data is unstructured. This has been a big topic in data warehousing and led to the start of the whole DWH 2.0 thing.

        So the fact that MapReduce is used in massive parallel processing machines like the ones from Greenplum (as quoted from the

    • by ELProphet (909179) <davidsouther@gmail.com> on Tuesday August 26, 2008 @05:30PM (#24756749) Homepage

      Actually, MapReduce doesn't dictate anything about the way data's stored; it's just a pipe between two sets of stored data, and really just needs an interface on both ends to get the task into MapReduce (which is what it seems the projects TFS/A mention do). BigTable is the storage mechanism that's incompatible with most traditional row-based RDBMSs. GFS is just the underlying storage mechanism.

      http://labs.google.com/papers/gfs.html [google.com]
      http://labs.google.com/papers/bigtable.html [google.com]
      http://labs.google.com/papers/mapreduce-osdi04.pdf [google.com]

      Note that all of those were published several years ago. I'd bet dollars to donuts that Google is _WAY_ beyond this internally if it's just reaching commercial use by their competitors.

      • by grae (14464) <grae&imho,com> on Tuesday August 26, 2008 @07:11PM (#24757739) Homepage
        If you're interested in one of the sorts of things that Google has done with MapReduce, look no further than Sawzall.

        http://research.google.com/archive/sawzall.html [google.com]

        Sawzall is essentially designed around the mapreduce framework. It's impossible to *not* write a mapreduction in Sawzall. The way it works:

        Your program is written to process a single record. The magic part happens when you output: you have to output to special tables. Each of these table types has a different way that it combines data emitted to it.

        So, during the map phase, your program is run in parallel on each input record. During the reduce phase, the reduction happens according to the way the output tables do whatever operation was specified.
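
        Not Sawzall itself (which has its own syntax), but the "emit to a combining table" idea can be mimicked in Python; here a hypothetical SumTable combines emitted values by addition:

        ```python
        from collections import defaultdict

        class SumTable:
            """A toy output table: values emitted under the same key are
            combined by addition -- the table, not the program, defines the reduce."""
            def __init__(self):
                self.data = defaultdict(int)

            def emit(self, key, value):
                self.data[key] += value

        def per_record_program(record, table):
            """The user writes only this: logic for one record, emitting to tables."""
            for word in record.split():
                table.emit(word, 1)

        hits = SumTable()
        for record in ["to be or not to be", "to err is human"]:
            per_record_program(record, hits)   # map phase: run once per record
        # hits.data["to"] == 3
        ```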

        There was some work to be done having enough different output tables to do everything that was useful, especially since you might want to take the output and plug it in as the input to another phase of mapreduction.

        One of the biggest reasons this was a major innovation for Google was that it let some of the people who weren't really programmers still come up with useful programs, because the Sawzall language was pretty simple (especially when combined with some of the library functions that had been implemented to do common sorts of computations.) There were also some interesting ways in which the security model was implemented, but as far as I know they haven't been published yet.

        There certainly are plenty of other technical things that can be done to improve a system like MapReduce (and I know that many of them were in various forms of experimentation when I left the company) but at least some of them are highly dependent on Google's infrastructure, and not really relevant to a general discussion. (I suspect that the papers linked above might have some hints, but it has been a while since I looked at them.)

      • by tuomoks (246421)

        Correct. I sometimes wonder how many /. readers are really developers? MapReduce is old, old technology; Google just made it famous and, maybe, documented it. It is not always useful in all cases, but it's never worse than any other method in throughput. If you have to "map" information, then the more unbalanced it is, the better it gets.

        Actually, the question about developers came up because a lot of replies are talking about an API - if you code, write your own; it is very easy once you understand the principle. And I c

    • The correct project name is Hadoop [apache.org]. It was factored out of Nutch 2.5 years ago. And Yahoo has been putting a lot of effort into making it scale up. We run 15,000 nodes with Hadoop in clusters of up to 2,000 nodes each, and soon that will be 3,000 nodes. I used 900 nodes to win Jim Gray's terabyte sort benchmark [yahoo.com] by sorting 1 TB of data (10 billion 100-byte records) in 3.5 minutes. It is also used to generate Yahoo's Web Map [yahoo.com], which has 1 trillion edges in it.

    • by targyros (1351955)
      This is a great point. To add to that, the way we see it is that MapReduce serves two purposes:

      1) Go beyond SQL. This is not a big deal for transactional databases, where most of the logic is well-expressible in standard SQL. But analytics are another story since there is so much custom logic (how do you implement a data mining algorithm, like association rules, in SQL? It's not easy!)

      2) Go parallel. Nobody knew what a good parallel API looked like before Google brought MapReduce and proved its valu
  • by Anonymous Coward

    they go together like paint and peanut butter.

    Map/Reduce is better suited for read-only data mining situations.

  • First they attack it (Score:4, Interesting)

    by Intron (870560) on Tuesday August 26, 2008 @05:10PM (#24756545)
    • by sohp (22984)

      Mahatma Gandhi actually said, "First they ignore you, then they ridicule you, then they fight you, then you win."

      The custodians of the massively complex relational data warehouse tools are seeing their world turn obsolete as the lighter-weight MySQL, the more flexible MapReduce, and the BASE [neu.edu] worlds evolve beyond them, so yes, they are going to kick up a fight. Don't let the screen door hit you in the butt on the way out, guys.

    • by Bazouel (105242) on Tuesday August 26, 2008 @08:19PM (#24758441)

      From a comment made about the article:

      You [the article's authors] seem to be under the impression that MapReduce is a database. It's merely a mechanism for using lots of machines to process very large data sets. You seem to be arguing that MapReduce would be better (for some value of better) if it were a data warehouse product along the lines of Teradata. Unfortunately the resulting tool would be less effective as a general-purpose mechanism for processing very large data sets.

      • It would only be fair to include the article authors' answer as well:

        It's not that we don't understand this viewpoint. We are not claiming that MapReduce is a database system. What we are saying is that like a DBMS + SQL + analysis tools, MapReduce can be and is being used to analyze and perform computations on massive datasets. So we aren't judging apples and oranges. We are judging two approaches to analyzing massive amounts of information, even for less structured information.

        • by Bazouel (105242)

          That answer does not make sense at all given the points they try to make in the article, which clearly show their misunderstanding of what MapReduce is.

  • What a silly name... (Score:1, Interesting)

    by Anonymous Coward

    In functional programming map and reduce is very very old knowledge (and, yup, functional programming has its use and, yes, there are some very good and very successful programs written using functional languages).

    What's next? A product called DepthFirstSearch (notice the uber broken camel case for a product name) that has nothing to do with the depth-first search algorithm?

    Google? Allo?

  • Doesn't Oracle have this sort of feature already, without the Google "MapReduce" buzzword buzz?
    • Re: (Score:1, Informative)

      Yes, it's called hash partitioning. It's been around since version 7 or 8, about 10 years ago (the current release is 11).
      • Re: (Score:1, Informative)

        by Anonymous Coward
        Uh, no. MapReduce is a parallel programming model -- not a way of laying out data on disk.
        • Re: (Score:3, Informative)

          by raddan (519638)
          Actually, the two are paired: programming model and implementation. The reason there's a programming model is that functional methods allow Google's implementation to automatically parallelize the input data for feeding to the cluster. So the implementation is very important, because that's actually how the data is processed and returned.

          In that sense, Oracle's clustering optimizations are also a paired programming model and implementation, since, presumably, you need to know Oracle's SQL language exte
          • IIRC, Oracle has features for parallelizing query execution automatically. These features are enabled by various combinations of session settings and query hints, and can parallelize execution either within a single server machine or across multiple machines in a cluster.

            I'm going to speculate wildly and say that you could probably write a SQL interpreter using a functional style as well, and that good ones probably already do.

            It's deeper than that. Save for relvar update operations, relation [wikipedia.org]

            • by raddan (519638)
              Yeah, that's why I speculated that SQL might be done so easily-- Oracle really is a rabbit hole. I've done some relational algebra in a database course (and also was exposed to set theory in my discrete maths course), but it was unclear to me whether query optimizers actually broke a query down into relational algebra or not. In fact, I remember that despite having had prior experience with SQL, relational algebra was much easier for me to wrap my head around than SQL. My professor was hesitant to go too
              • The optimizer in an Oracle database (and others, I'm sure) actually determines "access path" based on resource cost. It automatically generates many different access paths, and based on known statistics about the underlying objects in question, determines the cost in resources to execute that path (CPU, memory, disk I/O, etc, etc), then chooses the one with the least cost. It's not always correct 100% of the time, but you can influence the optimizer through configuration parameters at the database level a
                • The optimizer in an Oracle database (and others, I'm sure) actually determines "access path" based on resource cost. It automatically generates many different access paths, and based on known statistics about the underlying objects in question, determines the cost in resources to execute that path (CPU, memory, disk I/O, etc, etc), then chooses the one with the least cost.

                  Leaving aside the issue of where the "query rewriter" ends and where the "optimizer" starts, no, that's not all that happens to go from S

  • by Anonymous Coward

    I am with Bjarne on this one.
    Bjarne Stroustrup, creator of the C++ programming language, claims that C++ is experiencing a revival and
    that there is a backlash against newer programming languages such as Java and C#. "C++ is bigger than ever.
    There are more than three million C++ programmers. Everywhere I look there has been an uprising
    - more and more projects are using C++. A lot of teaching was going to Java, but more are teaching C++ again.
    There has been a backlash," said Stroustrup.

    He continues.. ..What

    • Got what right? (Score:4, Interesting)

      by argent (18001) <peter@slashdot.2 ... m ['nga' in gap]> on Tuesday August 26, 2008 @05:59PM (#24757053) Homepage Journal

      I don't think you can credit Bjarne with "compiled code is faster than interpreted code" (or the 21st century version: "compilers can perform better optimizations than JIT translators").

      C++ happens to be the most popular fully compiled language, having edged Fortran out of that position some time near the end of the last century.

      Back in the early '80s, when he was coming up with C++, the big Fortran savants were saying stuff like "Fortran is bigger than ever. There are more than X million Fortran programmers. Everywhere I look there has been an uprising... a lot of teaching was going to Pascal, but more are teaching Fortran again. There has been a backlash."

      ----

      And that's not the only thing C++ has in common with Fortran, either.

      • Re: (Score:3, Interesting)

        by johanatan (1159309)

        (or the 21st century version: "compilers can perform better optimizations than JIT translators")

        Actually, JITters can do some optimizations that compilers can't--by splitting the compilation into a frontend and a backend. The front end is essentially just a parser, and the later the back-end compile happens, the more opportunities for optimizations actually open up (including such things as utilizing specific instruction sets for given architectures and fine tuning the compile based on run time statistics).

        See the LLVM for more info: http://llvm.org/ [llvm.org]

        (or .NET for that matter--but we're anti-MS

        • by argent (18001)

          including such things as utilizing specific instruction sets for given architectures and fine tuning the compile based on run time statistics

          1. That's a nice theory but in practice JIT implementations of interpreters are not actually anywhere near as fast as compilers for real world workloads.

          2. When performance is critical (or even if you only THINK it's critical, see "Gentoo Linux"), compilers can use the same techniques, and still take advantage of the better regional and global optimizations they can do

    • You are aware that Python has built-in support for map and reduce, no? And that the Python interpreter and most JVMs are written in C++ (not to mention many operating systems)? When did the implementation language ever prove the abstraction worthwhile?
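
      For reference, the built-ins in question (note that reduce was moved into the functools module in Python 3):

      ```python
      from functools import reduce  # built into Python 2; in functools since Python 3

      lengths = list(map(len, ["mapreduce", "sql", "hadoop"]))  # [9, 3, 6]
      longest = reduce(max, lengths)                            # 9
      ```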
      • Python is written in C, actually.
        • Re: (Score:3, Insightful)

          by johanatan (1159309)

          To most people, C++ is C. :-) Unfortunate but true.

          • Stop embracing the ignorance.
            • Oh, I don't embrace it! In fact, I don't care to ever use C (proper) and I certainly never intend to use C++ as if it were C (that's actually my biggest gripe with C++ currently as recent co-workers do not always agree that high-level design is good and the language [and apparently sound arguments] do nothing to convince them of that).

              But, my original point still stands if you substitute 'C' for 'C++'. Heck, I could've even mentioned assembly if we really want to talk perf. Everyone knows that hand-tu
          • To me, C is basically a subset of C++ (and I am well aware that C came first, and that it is exactly a subset).

            That is, if I can program in C, I can do C++ as well, and if I can do C++, I can use many of the techniques when programming C.

            Of course, I can't program either C or C++ (Java and PHP are the closest I've got).

            So, your original comment that the "Python interpreter and most JVMs are written in C++" is correct, if you understand C as being a subset of C++. But actually, you are wrong when it comes down th

            • It was a slip. I am and was aware that Python is written in C (though I fail to see why really). C++ can do everything C can and better. And, I disagree with the statement about C programmers being able to program C++. That is just not true. C++ is a multi-paradigm language and C is essentially only a single paradigm--namely, procedural. It is exactly C++'s support for the [obsolete] procedural/structured methodology that would [mis]-lead a C programmer into thinking that they know C++.
            • And, one other minor point-- C is not exactly a subset of C++. Ever since C99 brought about new features to C (the specific details of which I do not recall) which C++ does not support (and possibly even before then), they have diverged. It is true though that C is essentially a subset of C++.
              • I meant to say "not exactly", damn brain running ahead of myself again...

                It makes more sense if you automatically insert the "not" that I inadvertently missed.

                (I seem to be doing it quite often as well, forgetting my negatives...)

        • by Rakishi (759894)

          Well, technically the most popular (and fastest, I believe) implementation of Python is written in C, but Python itself doesn't need to be written in C. There is a Java implementation, a Python implementation, a .NET implementation, and probably a few others.

    • Re: (Score:1, Informative)

      by Anonymous Coward

      Don't confuse the search engine with MapReduce. The MapReduce engine creates the indexes for the search engine; it's a batch job processor. Just because Google chose C++ does not mean it is the only choice, even if it was the best choice for them. Hadoop (a Java project at Yahoo, and open source too) has a MapReduce implementation.

    • by samkass (174571) on Tuesday August 26, 2008 @06:40PM (#24757425) Homepage Journal

      If Java (or Python etc., for that matter) were fast enough, why did Google choose C++ to build their insanely fast search engine?

      Because their developers knew it better? Because it had better 64-bit support when they started it? Because full GC's weren't compatible with their use case and IBM's parallel GC VM hadn't been released yet? Because they could get and modify all the source to all the libraries?

      I don't know the answer, but there are a lot of possibilities besides speed. You're jumping to an awfully big conclusion there, Mr. Coward.

    • by Jack9 (11421)

      Only C++ can allow you to create applications as powerful as MapReduce which allows them to create fast searches.

      Except that MapReduce is not an application, that it was originally codified in LISP, and that Google started using the technology because they bought AltaVista, where it was originally used for searching.

      An AC getting it all wrong? Unpossible.

      • by adpowers (153922)

        Except that AltaVista was bought by Overture [wikipedia.org] who were then bought by Yahoo!. Also, I wouldn't really call MapReduce a technology. The individual functions (Map and Reduce) come from functional programming, but the concept is becoming popular because Google's implementation and Hadoop have made it easy to write large scale data processing applications without having to worry about scaling or failures yourself. It also doesn't hurt that many problems can be solved with MapReduce.

        A five digit user getting it a
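For readers unfamiliar with the functional roots mentioned above, here is a minimal Python sketch of the two primitives the comment refers to — a toy word count, with made-up input documents, not Google's or Hadoop's actual API:

```python
from functools import reduce

# The two functional building blocks MapReduce is named after:
# map applies a function to every item; reduce folds results together.
docs = ["the quick fox", "the lazy dog", "the fox"]

# "map" phase: emit (word, 1) pairs from each document
pairs = [pair for doc in docs for pair in map(lambda w: (w, 1), doc.split())]

# "reduce" phase: fold the pairs into per-word counts
def merge(counts, pair):
    word, n = pair
    counts[word] = counts.get(word, 0) + n
    return counts

word_counts = reduce(merge, pairs, {})
```

The distributed frameworks add the hard parts this sketch omits: partitioning the pairs across machines, grouping by key before the reduce, and recovering from failures.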

    • by Rakishi (759894) on Tuesday August 26, 2008 @08:31PM (#24758603)

Well, someone should tell that to the people working on Hadoop. I'm sure they'd love to know that their Java MapReduce-based framework is impossible. Maybe they'll even be able to use the paradox to build a perpetual motion machine and power the world.

      See: http://developers.slashdot.org/comments.pl?sid=900359&cid=24756761 [slashdot.org]

    • Re: (Score:1, Informative)

      by Anonymous Coward

Hadoop is written in Java and does a fine job. And Google uses more Java than you can imagine.

  • wrong argument? (Score:2, Insightful)

    by fragbait (209346)

    Though this post is my introduction to both MapReduce and the argument, it strikes me that the people arguing are arguing the wrong problem.

While MapReduce might be used against some structured data, it looks to be aimed at unstructured data and at dynamically inventing structure within it. Additionally, you might want to keep that new structure around for a while, and you might want to load it up with terabytes of data. At the same time, this data is less and less useful over time.

    Think about

  • Anyone remember this story: http://tech.slashdot.org/tech/08/07/08/201245.shtml [slashdot.org]? According to Google:

    Protocol buffers are now Google's lingua franca for data -- at time of writing, there are 48,162 different message types defined in the Google code tree across 12,183 .proto files. They're used both in RPC systems and for persistent storage of data in a variety of storage systems.

    (See http://code.google.com/apis/protocolbuffers/docs/overview.html [google.com].)

    If you think about it, Protocol Buffers are just about perfect for MapReduce applications. First, Protocol Buffers data streams are "flat" structures, very similar to database tables. If you need hierarchical data, I think that you'd tend to use multiple tables that incorporate foreign keys, rather than embedding the hierarchy every time
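A minimal Python sketch (toy records, not real Protocol Buffers) of the flat-tables-plus-foreign-keys layout the parent describes — two flat record streams linked by a key, instead of children nested inside each parent record:

```python
# Hypothetical illustration: a hierarchy stored as two flat,
# table-like record streams joined by a foreign key.
authors = [
    {"author_id": 1, "name": "Ada"},
    {"author_id": 2, "name": "Linus"},
]
posts = [  # each post points back to its author via a foreign key
    {"post_id": 10, "author_id": 1, "title": "On engines"},
    {"post_id": 11, "author_id": 1, "title": "More engines"},
    {"post_id": 12, "author_id": 2, "title": "Kernels"},
]

# A map-style pass over the flat "posts" stream, joining on the key
by_id = {a["author_id"]: a["name"] for a in authors}
titles_by_author = {}
for p in posts:
    titles_by_author.setdefault(by_id[p["author_id"]], []).append(p["title"])
```

Each record in such a stream is self-contained and fixed in shape, which is what makes it easy to split across map workers.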

    • I suspect that this, rather than SQL compatibility, is the road to success with MapReduce processes.

      Why not both? :-)

A lot of distributed databases already implicitly support functionality that's equivalent to MapReduce, especially Greenplum and Netezza.

i.e., a map operation is just:

      create table output as
      select [cols] from [table] where [condition] distribute on (key1,key2,key3);

      Which will scan the table stored on all nodes, and deposit the data across all the nodes in netezza distributed on key1,key2,key3---i

Stonebraker isn't exactly the one to complain about this: just as MapReduce is being overhyped these days, relational databases were being overhyped in the 1970s, and he rode that wave all the way to fame and fortune. 30 years later, although every database system in the world calls itself "relational", very few database applications actually are relational.

    MapReduce is indeed a simple, decades-old parallel programming technique. It's not the be-all-and-end-all of parallel programming, but it's good for s
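As a concrete (if toy) illustration of that decades-old parallel technique, the following Python sketch runs the map phase across worker processes and then folds the partial results together; the input strings and pool size are made up for the example:

```python
from functools import reduce
from multiprocessing import Pool

def count_words(doc):
    # map phase: turn one document into a partial word-count table
    counts = {}
    for w in doc.split():
        counts[w] = counts.get(w, 0) + 1
    return counts

def merge(a, b):
    # reduce phase: combine two partial tables into one
    for w, n in b.items():
        a[w] = a.get(w, 0) + n
    return a

if __name__ == "__main__":
    docs = ["a b a", "b c", "a"]
    with Pool(2) as pool:
        partials = pool.map(count_words, docs)  # maps run in parallel
    totals = reduce(merge, partials, {})        # sequential fold of partials
```

The parallelism here is only across one machine's processes; the distributed frameworks apply the same shape across clusters, plus fault tolerance and data locality.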

  • I'm astounded that so few people here know about MapReduce. There are lots of good videos about it made by Google.
    There's a five-part lecture about it starting here [youtube.com] (use this link [google.com] to view the rest)

    Or simply search for "google mapreduce". I suggest watching one of the videos though :)

COMPASS [for the CDC-6000 series] is the sort of assembler one expects from a corporation whose president codes in octal. -- J.N. Gray
