
Is the One-Size-Fits-All Database Dead?

jlbrown writes "In a new benchmarking paper, MIT professor Mike Stonebraker and colleagues demonstrate that specialized databases can have dramatic performance advantages over traditional databases (PDF) in four areas: text processing, data warehousing, stream processing, and scientific and intelligence applications. The advantage can be a factor of 10 or higher. The paper includes some interesting 'apples to apples' performance comparisons between commercial implementations of specialized architectures and relational databases in two areas: data warehousing and stream processing." From the paper: "A single code line will succeed whenever the intended customer base is reasonably uniform in their feature and query requirements. One can easily argue this uniformity for business data processing. However, in the last quarter century, a collection of new markets with new requirements has arisen. In addition, the relentless advance of technology has a tendency to change the optimization tactics from time to time."
  • by AndroidCat ( 229562 ) on Tuesday January 09, 2007 @10:56PM (#17534110) Homepage
    Well it's about time we had some change around here!
  • by Ant P. ( 974313 ) on Tuesday January 09, 2007 @10:57PM (#17534118)
    The closest thing I can think of that fits that description is Postgres.
  • by BillGatesLoveChild ( 1046184 ) on Tuesday January 09, 2007 @10:58PM (#17534126) Journal
    Have you noticed when you code your own routines for manipulating data (in effect, your own application-specific database) you can produce stuff that is very, very fast? In the good old days of the Internet Bubble 1.0 I took an application-specific database like this (originally for a record store) and generalized it into a generic database capable of handling all sorts of data. But every change I made to make the code more general also made it less efficient. The end result wasn't bad by any means: we sold it as an eCommerce database into a number of solutions, but as far as the original record store database went, the original version was by far the best. Yes, I *know* generic databases with fantastic optimization engines designed by database experts should be faster, but have you noticed how much time you have to spend with the likes of Oracle or MySQL trying to get them to do what to you is an exceedingly obvious way of doing something?
    • by smilindog2000 ( 907665 ) <bill@billrocks.org> on Tuesday January 09, 2007 @11:16PM (#17534244) Homepage
      I write all my databases with the fairly generic DataDraw database generator. The resulting C code is faster than if you wrote it manually using pointers to C structures (really). http://datadraw.sourceforge.net [sourceforge.net]. It's generic, and faster than anything EVER.
    • by sonofagunn ( 659927 ) on Wednesday January 10, 2007 @10:31AM (#17538942)
      In our company, we use the database mostly as a warehouse. Our daily processing is done via flat files and Java code. It's just much, much, much faster that way and easier to maintain. I think we're kind of a special case though.
      • by suggsjc ( 726146 ) on Wednesday January 10, 2007 @11:51AM (#17540184) Homepage
        I think we're kind of a special case though.
        Yep, you are special...just like everyone else.


        On a side note: I know the term flat files can mean different things to different people, but I find that they are almost always a bad idea (to some degree, and depending on your definition). You always run the risk of whatever you are using as a delimiter showing up in the data you are parsing, causing those "bugs." You always think "we sanitize our data..." and it will never happen to me, but more often than not, it will.
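
        Not from the original post: below is a minimal sketch of the kind of quoting that avoids the delimiter-collision problem described above, roughly RFC 4180 style. The class, field values and delimiter are invented for illustration.

        // Illustration only: quote a field if it contains the delimiter, a quote, or a newline,
        // and double any embedded quotes, so delimiter characters in the data can't break the parse.
        public final class FlatFileEscaping {
            private static final char DELIMITER = ',';   // assumed delimiter for the example

            static String quote(String field) {
                boolean needsQuoting = field.indexOf(DELIMITER) >= 0
                        || field.indexOf('"') >= 0
                        || field.indexOf('\n') >= 0
                        || field.indexOf('\r') >= 0;
                if (!needsQuoting) {
                    return field;
                }
                return '"' + field.replace("\"", "\"\"") + '"';
            }

            public static void main(String[] args) {
                // "ACME, Inc." contains the delimiter, so it is quoted instead of corrupting the record.
                System.out.println(quote("ACME, Inc.") + DELIMITER + quote("42"));   // "ACME, Inc.",42
            }
        }
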
    • by fingusernames ( 695699 ) on Wednesday January 10, 2007 @01:21PM (#17541934) Homepage
      Back in the late 90s, I worked on a data warehouse project. We tried Oracle, and had an Oracle tuning expert work with us. However, we couldn't get the performance we needed. We wound up developing a custom "database" system, where data was extracted from the source databases (billing, CDRs, etc.) and de-normalized into several large tables in parallel. The de-normalization performed global transformations and corrections. Those tables were then loaded into shared memory (a 64-bit HP multi-CPU system with a huge amount of RAM for those days, 32GB IIRC), indices were built, and a highly optimized algorithm (over time it kept getting tighter and smaller) was used to join the data based on various criteria using standard, left, right and some hybrid methods. The join algorithm operated on pointers to tables of pointers.

      Initially, developers used a Perl script to pre-process simple pseudo-SQL into C code/macros that would be linked to their report application. As the project grew, I developed a SQL-derived language that was run through a cross-compiler to generate the C code and macros to link to applications. That language supported joins, views, temporary tables, and some other useful features that enabled developers to work quickly in implementing report requests.

      The system was very fast for our purposes, performing fraud analysis and sales trends analysis nightly. In parallel to that analysis, on a different server, the de-normalized data was also exported to a Redbrick database so users could perform desktop reporting over historical data. I was the overall technical architect for the system, and the developer of the joining system and the SQL-like language and compiling/development tools. I'm sure that today there are data-warehouse-specific tools that would eliminate most of that.

      Larry
  • Prediction... (Score:5, Insightful)

    by Ingolfke ( 515826 ) on Tuesday January 09, 2007 @11:00PM (#17534140) Journal
    1) More and more specialized databases will begin cropping up.
    2) Mainstream database systems will modularize their engines so they can be optimized for different applications and they can incorporate the benefits of the specialized databases while still maintaining a single uniform database management system.
    3) Someone will write a paper about how we've gone from specialized to monolithic...
    4) Something else will trigger specialization... (repeat)

    Dvorak if you steal this one from me I'm going to stop reading your writing... oh wait.
    • Re:Prediction... (Score:4, Interesting)

      by Tablizer ( 95088 ) on Tuesday January 09, 2007 @11:47PM (#17534506) Journal
      2) Mainstream database systems will modularize their engines so they can be optimized for different applications and they can incorporate the benefits of the specialized databases while still maintaining a single uniform database management system.

      I agree with this prediction. Database interfaces (such as SQL) do not dictate implementation. Ideally, query languages only ask for what you want, not tell the computer how to do it. As long as it returns the expected results, it does not matter if the database engine uses pointers, hashes, or gerbils to get the answer. It may, however, require "hints" in the schema about what to optimize. Of course, you will sacrifice general-purpose performance to speed up a specific usage pattern. But at least they will give you the option.

      It is somewhat similar to what "clustered indexes" do in some RDBMSs. Clustering speeds up access by one chosen key, at the expense of the other keys and certain write patterns, by physically grouping the data in that *one* chosen index/key order. The other keys still work, just not as fast.
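
      Not part of the parent comment: a small sketch of what such a schema "hint" looks like in practice, issued through plain JDBC. The clustered-index syntax is SQL Server/Sybase style, with Oracle's rough equivalent (an index-organized table) noted in a comment; the connection URL, table and column names are invented.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.Statement;

      // Sketch only: the query interface (SQL) stays the same; the DDL below merely hints
      // at the physical layout. Vendor syntax varies; names and URL here are made up.
      public class ClusteredIndexSketch {
          public static void main(String[] args) throws Exception {
              try (Connection con = DriverManager.getConnection("jdbc:example://warehouse");
                   Statement stmt = con.createStatement()) {
                  // SQL Server / Sybase style: physically order the rows by customer_id.
                  stmt.execute("CREATE CLUSTERED INDEX ix_orders_customer ON orders (customer_id)");
                  // Oracle's rough equivalent is declared at table-creation time:
                  //   CREATE TABLE orders (order_id NUMBER PRIMARY KEY, ...) ORGANIZATION INDEX;
              }
          }
      }
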
             
      • Re:Prediction... (Score:3, Interesting)

        by Pseudonym ( 62607 ) on Wednesday January 10, 2007 @01:56AM (#17535468)

        Interfaces like SQL don't dictate the implementation, but they do dictate the model. Sometimes, the model that you want is so far from the interface language, that you need to either extend or replace the interface language for the problem to be tractable.

        SQL's approach has been to evolve. It isn't quite "there" for a lot of modern applications. I can foresee a day when SQL can efficiently model all the capabilities of, say, Z39.50, but we're not there now.

      • Re:Prediction... (Score:3, Informative)

        by Decaff ( 42676 ) on Wednesday January 10, 2007 @08:04AM (#17537476)
        I agree with this prediction. Database interfaces (such as SQL) do not dictate implementation. Ideally, query languages only ask for what you want, not tell the computer how to do it.

        This can be taken a stage further, with general persistence APIs. The idea is that you don't even require SQL or relational stores: you express queries in a more abstract way and let a persistence engine generate highly optimised SQL, or some other persistence process. I use the Java JDO 2.0 API like this: I can persist and retrieve information from relational stores, CSV, XML, LDAP, object databases or even flat text files using exactly the same code and queries, and yet I get optimised queries on each - if I persist to Oracle, the product knows enough about Oracle (and even the specific version of Oracle) to generate very optimised SQL.
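
        To make that concrete, here is a minimal sketch of the JDO 2.0 pattern described above. The Customer class, property values and connection URL are invented, and the implementation-specific configuration (plus the usual JDO metadata and bytecode enhancement) is assumed to be handled elsewhere.

        import java.util.List;
        import java.util.Properties;
        import javax.jdo.JDOHelper;
        import javax.jdo.PersistenceManager;
        import javax.jdo.PersistenceManagerFactory;
        import javax.jdo.Query;

        // Sketch of the store-agnostic pattern: the same code and JDOQL query run whether the
        // configured datastore is Oracle, LDAP, XML, etc.; the JDO implementation translates the
        // query into whatever the backend understands (e.g. optimised SQL for Oracle).
        public class JdoSketch {
            // Hypothetical persistent class; JDO metadata and enhancement assumed handled by the build.
            public static class Customer {
                String lastName;
                double balance;
                public Customer(String lastName, double balance) {
                    this.lastName = lastName;
                    this.balance = balance;
                }
            }

            public static void main(String[] args) {
                Properties props = new Properties();
                // The choice of datastore lives in configuration, not code (other properties omitted).
                props.setProperty("javax.jdo.option.ConnectionURL", "jdbc:example://crm");
                PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
                PersistenceManager pm = pmf.getPersistenceManager();
                try {
                    pm.currentTransaction().begin();
                    pm.makePersistent(new Customer("Smith", 1200.0));
                    pm.currentTransaction().commit();

                    // JDOQL: say *what* you want; the engine decides how to fetch it.
                    Query q = pm.newQuery(Customer.class, "balance > threshold");
                    q.declareParameters("double threshold");
                    List<?> bigAccounts = (List<?>) q.execute(1000.0);
                    System.out.println(bigAccounts.size() + " customers over the threshold");
                } finally {
                    pm.close();
                }
            }
        }
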
    • by theshowmecanuck ( 703852 ) on Wednesday January 10, 2007 @01:33AM (#17535328) Journal
      The reasons for this "one size fits all" (OSFA) strategy include the following:
      Engineering costs...
      Sales costs...
      Marketing costs...

      What about the cost of maintenance for the customer?

      Maybe people will keep buying 'one size fits all' DBMSs if they meet enough of their requirements and they don't have to hire specialists for each type of database they might have for each type of application. That is, it is easier and cheaper to maintain a smaller number of *standard* architectures (e.g. one) for a company. Otherwise you have to pay for all sorts of different types of specialists. Now, if your company only does, say, data warehousing, then that is another matter and it is smart to purchase a specialized system. Or if you are a mega corporation you might be able to afford a number of specialist teams, one for each type of system. But I think smaller shops might need to make do with the poor old vanilla DBMS.

  • one size fits 90% (Score:5, Insightful)

    by JanneM ( 7445 ) on Tuesday January 09, 2007 @11:03PM (#17534158) Homepage
    It's natural to look at the edges of any feature or performance envelope: people who want to store petabytes of particle accelerator data, do complex queries to serve a million web pages a second, or have hundreds of thousands of employees doing concurrent things to the backend.

    But for most uses of databases - or any back-end processing - performance just isn't a factor and hasn't been for years. Enron may have needed a huge data warehouse system; "Icepick Johnny's Bail Bonds and Securities Management" does not. Amazon needs the cutting edge in customer management; "Betty's Healing Crystals Online Shop (Now With 30% More Karma!)" not so much.

    For the large majority of uses - whether you measure in aggregate volume or number of users - one size really fits all.
    • by smilindog2000 ( 907665 ) <bill@billrocks.org> on Tuesday January 09, 2007 @11:22PM (#17534296) Homepage
      This is more true all the time. I work in the EDA industry, in chip design. The database sizes I work with are naturally well correlated with Moore's Law. In effect, I'm a permanent power user, but my circle of peers is shrinking into oblivion...
    • by TubeSteak ( 669689 ) on Wednesday January 10, 2007 @03:23AM (#17535984) Journal
      For the large majority of uses - whether you measure in aggregate volume or number of users - one size really fits all.
      I'm willing to concede that...
      But IMO it is not 100% relevant.

      Large corporate customers usually have a large effect on what features show up in the next version of [software]. Software companies put a lot of time & effort into pleasing their large accounts.

      And since performance isn't a factor for the majority of users, they won't really be affected by any performance losses resulting from increased specialization/optimizations. Right?
    • by Bacon Bits ( 926911 ) on Wednesday January 10, 2007 @10:41AM (#17539082)
      The same argument is what gave rise to re-programmable generic processing components: CPUs. You'll note that the processor industry today (AMD in particular) is now also moving towards this kind of diversification. Gaming systems have been using dedicated GPUs for ages (today they're more powerful than entire PCs from 5 years ago) and I'm sure we remember back when math co-processors (i387) were introduced. You'll note that math co-processors were just absorbed back into the generic model.

      It's another pendulum in the computing world (much like the serial/parallel dichotomy): we swing from a large collection of diverse, specialized systems to a small number of all-purpose systems and back. The advances are always for performance, and they typically happen when the current generation plateaus. We've mastered the concepts of one generation, so it's time to explore new concepts (by re-exploring old concepts).

      In 10 or 15 years people will be complaining about the difficulty of data portability, the esoteric nature of these unique data files, and the lack of features in area X in one product and area Y in a second product, and the archaic languages you have to use on these old, unsupported systems. There will be a move back to generic storage engines, bringing with it the lessons learned from that round of insight.

      Of course, there will always be demands for specialized components just as there will always be demand for generic, standard components. It's the centrists whose demands are for the best combination of performance and features that determine popularity.
    • by Bozdune ( 68800 ) on Wednesday January 10, 2007 @10:55AM (#17539264)
      Then why do we need specialized OLAP systems like Essbase, Kx Systems, etc.? So much for OSFA (one size fits all). Any transaction-oriented database of sufficient size, requiring multi-way joins between tables, and requiring sub-second response times to queries, is way out of range of OSFA. Furthermore, it doesn't require petabytes to bring a relational database system to its knees. Just a few million transactions, and your DBMS will be on its back waving its arms feebly, along with your server.

      Performance IS a factor, a very serious factor indeed, for many applications. Not for Betty or for Icepick Johnny, to be sure; but for almost any business with more than about $200M in sales, I guarantee there's a dataset kicking around that will require specialized tools to analyze properly. Since those specialized tools are typically expensive, and typically difficult to use, that dataset will not get analyzed properly, and the business will be "running blind."

  • Imagine that.... (Score:5, Insightful)

    by NerveGas ( 168686 ) on Tuesday January 09, 2007 @11:09PM (#17534210)
    ... a database mechanism particularly written for the task at hand will beat a generic one. Who would have thought?

    steve

    (+1 Sarcastic)
    • by shis-ka-bob ( 595298 ) on Wednesday January 10, 2007 @09:17AM (#17538004)
      There is an article, and it has many references. How is a 'Captain Obvious' sort of comment labeled Insightful? The insightful part is in the article. The first author, Michael Stonebraker, architected Ingres and Postgres. He looked at OLAP databases, a market that is much larger than a special case. He proposed storing the data in columns rather than in rows. He tested this, and it works. In fact it works so well that he can clobber a $300,000 server cluster with an $800 PC. I know that I would be pretty happy to spend a year porting to his database if I could pocket half of that annual hardware cost savings. The savings in electricity would be enough to pay for several pretty serious Starbucks addictions. His key insight seems to be that he can vastly improve OLAP performance by storing the data in columns rather than in rows. This change could be quite transparent to the end users & developers, except for the massive speed-up and cost savings, of course. This paper describes a general solution for a common problem. Stonebraker has developed Vertica [vertica.com], which still supports ad-hoc queries in SQL. This seems like a pretty general-purpose solution for OLAP.
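
      Not part of the parent comment, but a toy sketch of why the column layout pays off for OLAP-style scans: aggregating one attribute only touches that attribute's contiguous array instead of dragging whole rows through the cache. Purely illustrative; this is not Vertica's actual design, and the table and field names are invented.

      // Toy illustration of row layout vs. column layout for a scan that aggregates one attribute.
      public class ColumnVsRow {
          // Row layout: one object per sale, all attributes stored together.
          static class Sale {
              int customerId;
              int productId;
              double amount;
          }

          public static void main(String[] args) {
              int n = 1_000_000;
              Sale[] rows = new Sale[n];
              double[] amountColumn = new double[n];   // column layout: one array per attribute
              for (int i = 0; i < n; i++) {
                  Sale s = new Sale();
                  s.customerId = i % 1000;
                  s.productId = i % 50;
                  s.amount = i * 0.01;
                  rows[i] = s;
                  amountColumn[i] = s.amount;
              }

              // Row-store style scan: walks every row object to read one field.
              double totalRows = 0;
              for (Sale s : rows) totalRows += s.amount;

              // Column-store style scan: reads only the one contiguous column being aggregated.
              double totalColumn = 0;
              for (double a : amountColumn) totalColumn += a;

              System.out.println(totalRows + " == " + totalColumn);
          }
      }
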
  • Dammit (Score:5, Insightful)

    by AKAImBatman ( 238306 ) * <akaimbatman@g m a i l . c om> on Tuesday January 09, 2007 @11:15PM (#17534238) Homepage Journal
    I was just thinking about writing an article on the same issue.

    The problem I've noticed is that too many applications are becoming specialized in ways that are not handled well by traditional databases. The key example of this is forum software. Truly hierarchical in nature, the data is also of varying sizes, full of binary blobs, and generally unsuitable for your average SQL system. Yet we keep trying to cram them into SQL databases, then get surprised when we're hit with performance problems and security issues. It's simply the wrong way to go about solving the problem.

    As anyone with a compsci degree or equivalent experience can tell you, creating a custom database is not that hard. In the past it made sense to go with off-the-shelf databases because they were more flexible and robust. But now that modern technology is causing us to fight with the databases just to get the job done, the time saved from generic databases is starting to look like a wash. We might as well go back to custom databases (or database platforms like BerkeleyDB) for these specialized needs.
    • Re:Dammit (Score:3, Funny)

      by Jason Earl ( 1894 ) on Wednesday January 10, 2007 @12:36AM (#17534894) Homepage Journal

      Eventually the folks working on web forums will realize that they are just recreating NNTP and move on to something else.

    • by Jerf ( 17166 ) on Wednesday January 10, 2007 @12:56AM (#17535050) Journal
      Truly hierarchical in nature, the data is also of varying sizes, full of binary blobs, and generally unsuitable for your average SQL system.
      Actually, I was bitching about this very problem [jerf.org] (and some others) recently, when I came upon this article about recursive queries [teradata.com] on the programming reddit [reddit.com].

      Recursive queries would totally, completely solve the "hierarchy" part of the problem, and halfway decent database design would handle the rest.

      My theory is that nobody realizes that recursive queries would solve their problems, so nobody asks for them, so nobody ever discovers them, so nobody ever realizes that recursive queries would solve their problem. I don't know of an open source DB that has this, and I'd certainly never seen this in my many years of working with SQL. I wish we did have it, it would solve so many of my problems.

      Now, if we could just deal with the problem of having a key that could relate to any one of several tables in some reasonable way... that's the other problem I keep hitting over and over again.
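
      For reference, a sketch of what such a recursive query looks like for a thread tree, run through plain JDBC. The posts(id, parent_id, author, body) schema, the connection URL, and the assumption that the target database implements the SQL:1999 WITH RECURSIVE syntax are all hypothetical; as the parent notes, vendor support varied at the time.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;

      // Sketch: fetch an entire forum thread (root post plus all descendants) in one query.
      public class ThreadQuerySketch {
          private static final String THREAD_SQL =
              "WITH RECURSIVE thread (id, parent_id, author, body, depth) AS ( " +
              "  SELECT id, parent_id, author, body, 0 FROM posts WHERE id = ? " +   // the root post
              "  UNION ALL " +
              "  SELECT p.id, p.parent_id, p.author, p.body, t.depth + 1 " +
              "  FROM posts p JOIN thread t ON p.parent_id = t.id " +                // walk the replies
              ") " +
              "SELECT id, author, body, depth FROM thread ORDER BY depth, id";

          public static void main(String[] args) throws Exception {
              try (Connection con = DriverManager.getConnection("jdbc:example://forum");
                   PreparedStatement ps = con.prepareStatement(THREAD_SQL)) {
                  ps.setLong(1, 42L);   // root post id (made up)
                  try (ResultSet rs = ps.executeQuery()) {
                      while (rs.next()) {
                          // Indent each post by its depth; real threaded ordering is left out of the sketch.
                          System.out.println("  ".repeat(rs.getInt("depth"))
                                  + rs.getString("author") + ": " + rs.getString("body"));
                      }
                  }
              }
          }
      }
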
      • by a_ghostwheel ( 699776 ) on Wednesday January 10, 2007 @03:32AM (#17536044)
        Or just use hierarchical queries - like START WITH / CONNECT BY clauses in Oracle. Probably other vendors have something similar too - not sure about that.
      • by Imsdal ( 930595 ) on Wednesday January 10, 2007 @05:02AM (#17536506)
        My theory is that nobody realizes that recursive queries would solve their problems, so nobody asks for them, so nobody ever discovers them, so nobody ever realizes that recursive queries would solve their problem.


        It used to be that execution plans in Oracle were retrieved from the plan table via a recursive query. Since even the tiniest application will need a minimum amount of tuning, and since all db tuning should start by looking at the execution plans, everyone should have run into recursive queries sooner rather than later.


        My theory is instead that too few developers are properly trained. They simply don't know what they are doing or how it should be done. During my years as a consultant, I spent a lot of time improving db performance, and never even once did I run into in-house people who even knew what an execution plan was, let alone how to interpret it. (And, to be honest, not all of my consultant colleagues knew either...)


        Software development is a job that requires the training of a surgeon, but it's staffed by people who are trained to be janitors or, worse, economists. (I realise that this isn't true at all for the /. crowd. I'm talking about all the others all of us have run into on every job we have had.)

    • by poot_rootbeer ( 188613 ) on Wednesday January 10, 2007 @12:31PM (#17540900)
      The key example of this is forum software. Truly hierarchical in nature, the data is also of varying sizes, full of binary blobs, and generally unsuitable for your average SQL system.

      Hierarchical? Yes, but I don't see any problem using SQL to access hierarchical information. It's easy to have parent/child relationships.

      Data of varying sizes? I thought this problem was solved 20 years ago when ANSI adopted a SQL standard including a VARCHAR datatype.

      Full of binary blobs? Why? What in the hell for? So that each user can have an enormous, obnoxious "signature banner" graphic that readers have to look at 20 times in any given thread?

      There's very little data that belongs in a forum interface that can't be represented in plaintext. For the rest, store it on the filesystem and just store a reference to it in the database.

      As anyone with a compsci degree or equivalent experience can tell you, creating a custom database is not that hard.

      And as anyone who has ever done software development in the real world can tell you, custom components almost always suck worse than similar standard components.
  • Duh (Score:5, Insightful)

    by Reality Master 101 ( 179095 ) <RealityMaster101@nOSpAM.gmail.com> on Tuesday January 09, 2007 @11:18PM (#17534258) Homepage Journal

    Who thinks that a specialized application (or algorithm) won't beat a generalized one in just about every case?

    The reason people use general databases is not because they think it's the ultimate in performance, it's because it's already written, already debugged, and -- most importantly -- programmer time is expensive, and hardware is cheap.

    See also: high level compiled languages versus assembly language*.

    (*and no, please don't quote the "magic compiler" myth... "modern compilers are so good nowadays that they can beat human written assembly code in just about every case". Only people who have never programmed extensively in assembly believe that.)

    • Re:Duh (Score:5, Informative)

      by Waffle Iron ( 339739 ) on Tuesday January 09, 2007 @11:43PM (#17534468)
      *and no, please don't quote the "magic compiler" myth... "modern compilers are so good nowadays that they can beat human written assembly code in just about every case". Only people who have never programmed extensively in assembly believe that.

      I've programmed extensively in assembly. Your statement may be true up to a couple of thousand lines of code. Past that, to avoid going insane, you'll start using things like assembler macros and your own prefab libraries of general-purpose assembler functions. Once that happens, a compiler that can tirelessly do global optimizations is probably going to beat you hands down.

      • Re:Duh (Score:5, Insightful)

        by wcbarksdale ( 621327 ) on Wednesday January 10, 2007 @12:07AM (#17534670)
        Also, to successfully hand-optimize you need to remember a lot of details about instruction pipelines, caches, and so on, which is fairly detrimental to remembering what your program is supposed to do.
      • by Pseudonym ( 62607 ) on Wednesday January 10, 2007 @01:50AM (#17535424)

        The reason why assembly programmers can beat high-level programmers is they can write their code in a high-level language first, then profile to see where the hotspots are, and then rewrite a 100 line subroutine or two in assembly language, using the compiler output as a first draft.

        In other words, assembly programmers beat high-level programmers because they can also use modern compilers.

      • Re:Duh (Score:3, Insightful)

        by RAMMS+EIN ( 578166 ) on Wednesday January 10, 2007 @07:07AM (#17537176) Homepage Journal
        Also, the compiler may know more CPUs than you do. For example, do you know the pairing rules for instructions on an original Pentium? The differences one must pay attention to when optimizing for a Thoroughbred Athlon vs. a Prescott P4 vs. a Yonah Pentium-M vs. a VIA Nehemiah? GCC does a pretty good job of generating optimized assembly code for each of these from the same C source code. If you were to do the same in assembly, you would have to write separate code for each CPU, and know the subtle differences as well as the compiler does.
    • by smilindog2000 ( 907665 ) <bill@billrocks.org> on Tuesday January 09, 2007 @11:55PM (#17534558) Homepage
      I've never heard the "magic compiler myth" phrase, but I'll help educate others about it. It's refreshing to hear someone who understands reality. Of course, a factor of 2 to 4 improvement in speed is less and less important every day...
    • Re:Duh (Score:3, Interesting)

      by suv4x4 ( 956391 ) on Tuesday January 09, 2007 @11:57PM (#17534580)
      "modern compilers are so good nowadays that they can beat human written assembly code in just about every case". Only people who have never programmed extensively in assembly believe that.

      Only people who haven't seen recent advancements in CPU design and compiler architecture will say what you just said.

      Modern compilers apply optimizations at such a sophisticated level that it would be a nightmare for a human to maintain an equally optimized solution by hand.

      As an example, modern Intel processors can process certain "simple" commands in parallel, while other commands are broken apart into simpler commands and processed serially. I'm simplifying the explanation a great deal, but anyone who has read about how a modern CPU works, branch prediction algorithms and so on, is familiar with the concept.

      Of course "they can beat human written assembly code in just about every case" is an overstatement, but still, you gotta know there's some sound logic & real reasons behind this "myth".
      • Re:Duh (Score:2, Insightful)

        by mparker762 ( 315146 ) on Wednesday January 10, 2007 @12:20AM (#17534776) Homepage
        Only someone who hasn't recently replaced some critical C code with assembler and gotten substantial improvement would say that. This was MSVC 2003 which isn't the smartest C compiler out there, but not a bad one for the architecture. Still, a few hours with the assembler and a few more hours doing some timings to help fine-tune things improved the CPU performance of this particular service by about 8%.

        Humans have been writing optimized assembler for decades, the compilers are still trying to catch up. Modern hand-written assembler isn't necessarily any trickier or more clever than the old stuff (it's actually a bit simpler). Yes compilers are using complicated and advanced techniques, but it's still all an attempt to approximate what humans do easily and intuitively. Artificial intelligence programs use complicated and advanced techniques too, but no one would claim that this suddenly makes philosophy any harder.

        Your second point about the sophistication of the CPU's is true but orthogonal to the original claim. These sophisticated CPU's don't know who wrote the machine code, they do parallel execution and branch prediction and so forth on hand-optimized assembly just like they do on compiler-generated code. Which is one reason (along with extra registers and less segment BS) that it's easier to write and maintain assembler nowadays, even well-optimized assembler.

        • Re:Duh (Score:2, Insightful)

          by suv4x4 ( 956391 ) on Wednesday January 10, 2007 @02:20AM (#17535618)
          This was MSVC 2003 which isn't the smartest C compiler out there, but not a bad one for the architecture. Still, a few hours with the assembler and a few more hours doing some timings to help fine-tune things improved the CPU performance of this particular service by about 8%... These sophisticated CPU's don't know who wrote the machine code, they do parallel execution and branch prediction and so forth on hand-optimized assembly just like they do on compiler-generated code. Which is one reason (along with extra registers and less segment BS) that it's easier to write and maintain assembler nowadays, even well-optimized assembler.

          Do you know which types of commands, when ordered in quadruples, will execute at once on a Core Duo? Incidentally, they're not the same ones that will on a Pentium 4.

          I hope you're happy with your 8% improvement; enjoy it until your next CPU upgrade, which will require a different approach to assembly optimization.

          The advantage of a compiler is that compiling for a target CPU is a matter of a compiler switch, so compiler programmers can concentrate on performance and smart use of the CPU specifics, and you can concentrate on your program features.

          If you were that concerned about performance in the first place, you'd use a compiler provided by the processor vendor (Intel, I presume) and use the Intel libraries for processor-specific implementations of the common math and algorithm routines applications need.

          Most likely this would've given you more than an 8% boost while keeping your code somewhat less bound to a specific CPU than assembler does.

          An example of an "optimization surprise" I like is the removal of the barrel shifter in Pentium 4 CPUs. You see, lots of programmers know that it's faster (on most platforms) to bit shift rather than multiply or divide by 2, 4, 8, etc.

          But bit shifting on the P4 is handled by the ALU, and is slightly slower than multiplication (why, I don't know, but it's a fact). Code "optimized" for bit shifting would be "anti-optimized" on P4 processors. (A small illustration follows this comment.)

          I know some people adapted their performance-critical code to meet this new challenge. But then what? The P4 is obsolete, we're back to the P3-derived architecture, and the barrel shifter is back!

          When I code a huge and complex system, I'd rather buy an 8% faster machine and use a better compiler than have to manage this hell each time a new CPU comes out.
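
          Editorial sketch of the shift-versus-multiply point above, not the poster's code: the two forms below are equivalent for Java ints, and which machine instruction ends up being used is a per-CPU detail the compiler or JIT tracks so the source doesn't have to.

          // Illustration only: multiply-by-constant vs. shift. For Java ints the results are
          // identical (both wrap on overflow); modern compilers/JITs pick whichever encoding the
          // target CPU prefers, which is why hard-coding the "fast" form can backfire, as with
          // the Pentium 4's missing barrel shifter described above.
          public class ShiftVsMultiply {
              static int scaleByMultiply(int x) { return x * 8; }   // what the code means
              static int scaleByShift(int x)    { return x << 3; }  // the hand-"optimized" form

              public static void main(String[] args) {
                  for (int x : new int[] {0, 1, 7, 123456, -5}) {
                      System.out.println(x + ": " + scaleByMultiply(x) + " == " + scaleByShift(x));
                  }
              }
          }
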
      • Re:Duh (Score:4, Insightful)

        by try_anything ( 880404 ) on Wednesday January 10, 2007 @02:59AM (#17535854)
        Modern compilers apply optimizations at such a sophisticated level that it would be a nightmare for a human to maintain an equally optimized solution by hand.

        There are three quite simple things that humans can do that aren't commonly available in compilers.

        First, a human gets to start with the compiler output and work from there :-) He can even compare the output of several compilers.

        Second, a human can experiment and discover things accidentally. I recently compiled some trivial for loops to demonstrate that array bounds checking doesn't have a catastrophic effect on performance. With the optimizer cranked up, the loop containing a bounds check was faster than the loop with the bounds check removed. That did not inspire confidence.

        Third, a human can concentrate his effort for hours or days on a single section of code that profiling revealed to be critical and test it using real data. Now, I know JIT compilers and some specialized compilers can do this stuff, but as far as I know I can't tell gcc, "Compile this object file, and make the foo function as fast as possible. Here's some data to test it with. Let me know on Friday how far you got, and don't throw away your notes, because we might need further improvements."

        I hope I'm wrong about my third point (please please please) so feel free to post links proving me wrong. You'll make me dance for joy, because I do NOT have time to write assembly, but I have a nice fast machine here that is usually idle overnight.

        • by ciggieposeur ( 715798 ) on Wednesday January 10, 2007 @11:42AM (#17540002)
          With the optimizer cranked up, the loop containing a bounds check was faster than the loop with the bounds check removed.

          That actually makes sense to me. If your bounds check was very simple and the only loop outcome was breaking out (throw an exception, exit the loop, exit the function, etc., without altering the loop index), the optimizer could move it out of the loop entirely and alter the loop index check to incorporate the effect of the bounds check. Result is a one-time bounds check before entering the loop and a simplified loop, hence faster execution.

          I remember in the discussion on the D compiler someone pointed this out.
    • Re:Duh (Score:2, Insightful)

      by kfg ( 145172 ) on Wednesday January 10, 2007 @12:00AM (#17534606)
      The reason people use general databases is not because they think it's the ultimate in performance, it's because it's already written, already debugged, and -- most importantly. . .

      . . .has some level of definable and guaranteed data integrity.

      KFG
    • I had a "simple" optimization project. It came down to one critical function (ISO JBIG compression). I coded the thing by hand in assembler, carefully manually scheduling instructions. It took me days. Managed to beat GNU gcc 2 and 3 by a reasonable margin. The latest Microsoft C compiler? Blew me away. I looked at the assembler it produced -- and I don't get where the gain is coming from. The compiler understands the machine better than I do.

      Go figure -- I hung up my assembler badge. Still a useful skill for looking at core dumps, though. And for dealing with micro-controllers.

      So, have you had at it and benchmarked your assembler vs. a compiler's output?
      • I looked at the assembler it produced -- and I don't get where the gain is coming from. The compiler understands the machine better than I do.

        All that proves is that the compiler knew a trick you didn't (probably it understood which instructions will go into which pipelines and will parallelize). I bet if you took the time to learn more about the architecture, you could find ways to be even more clever.

        I'm not arguing for a return to assembly... it's definitely too much of a hassle these days, and again, hardware is cheap, and programmers are expensive. Just that given enough programmer time, humans can nearly always do better than the compiler, which shouldn't be surprising since humans programmed the compiler, and humans have more contextual knowledge of what a program is trying to accomplish.

      • by TheLink ( 130905 ) on Wednesday January 10, 2007 @06:44AM (#17537074) Journal
        "The compiler understands the machine better than I do."

        Actually, the people paid lots of money to write Microsoft's C compiler understand the machine better than you do. I don't think you should be surprised.

        And the compiler will hopefully be able to keep all the tricks in mind (a human might forget to use one in some cases).

        I'm just waiting/hoping for the really smart people to make stuff like perl and python faster.

        Java has improved in speed a lot and already is quite fast in some cases, but I don't consider it a high level language (given the amount of code people have to write just to do simple stuff).
  • by meta-monkey ( 321000 ) on Tuesday January 09, 2007 @11:20PM (#17534288) Journal
    This reminds me of the parallel databases class I took in college. Sure, specialized parallel databases (not distributed, mind you, parallel) using specialized hardware were definitely faster than the standard SQL-type relational databases...but so what? The costs were so much higher they were not feasible for most applications.

    Specialized software and hardware outperforms generic implementations! Film at 11!
  • by Doc Ruby ( 173196 ) on Tuesday January 09, 2007 @11:45PM (#17534488) Homepage Journal
    SW platform development always features a tradeoff between general purpose APIs and optimized performance engines. Databases are like this. The economic advantages for everyone in using an API even as awkward and somewhat inconsistent as SQL are more valuable than the lost performance in the fundamental relational/query model.

    But it doesn't have to be that way. SQL can be retained as an API, but different storage/query engines can be run under the hood to better fit different storage/query models for different kinds of data/access. A better way out would be a successor to SQL that is more like a procedural language for objects with all operators/functions implicitly working on collections like tables. Yes, something like object lisp, best organized as a dataflow with triggers and events. So long as SQL can be automatically compiled into the new language, and back, for at least 5 years of peaceful coexistence.
    • by sonofagunn ( 659927 ) on Wednesday January 10, 2007 @11:49AM (#17540146)
      Databases already have the ability to change storage engines as long as they support SQL. The reason my company shuns the database for many specific tasks is that SQL is ill-suited to performing many types of transformations, calculations, and aggregations on data. What may take many pages of SQL (and many temp tables) in a stored proc can be written in a simple Java class and will perform much better, as well as being easier to maintain. A lot of our processing goes like this: raw data from the database (simple select queries, which are very fast) -> flat files -> custom Java code -> reporting engine or another database. The speedup over using stored procs or SQL-based ETL tools ranges between a factor of 10 and a factor of 100. MDX is a better language than SQL for a lot of purposes, but not all.
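
      A toy version of the pipeline step described above (editorial sketch, not the poster's code): read a delimited extract and aggregate it with a plain map, the sort of thing that replaces a pile of temp tables. The file name, delimiter and column layout are invented.

      import java.io.BufferedReader;
      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Paths;
      import java.util.HashMap;
      import java.util.Map;

      // Toy sketch of "flat file -> custom Java code -> report": sum an amount per region
      // from a pipe-delimited extract with an assumed region|customer|amount layout.
      public class FlatFileRollup {
          public static void main(String[] args) throws IOException {
              Map<String, Double> totalByRegion = new HashMap<>();
              try (BufferedReader in = Files.newBufferedReader(Paths.get("sales_extract.psv"))) {
                  String line;
                  while ((line = in.readLine()) != null) {
                      String[] fields = line.split("\\|");
                      totalByRegion.merge(fields[0], Double.parseDouble(fields[2]), Double::sum);
                  }
              }
              totalByRegion.forEach((region, total) -> System.out.printf("%s: %.2f%n", region, total));
          }
      }
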
  • by TVmisGuided ( 151197 ) <alan,jump&gmail,com> on Tuesday January 09, 2007 @11:56PM (#17534566) Homepage
    Sheesh...and it took someone from MIT to point this out? Look at a prime example of a high-end, heavily-scaled, specialized database: American Airlines' SABRE. The reservations and ticket-sales database system alone is arguably one of the most complex databases ever devised, is constantly (and I do mean constantly) being updated, is routinely accessed by hundreds of thousands of separate clients a day...and in its purest form, is completely command-line driven. (Ever see a command line for SABRE? People just THINK the APL symbol set looked arcane!) And yet this one system is expected to maintain carrier-grade uptime or better, and respond to any command or request within eight seconds of input. I've seen desktop (read: non-networked) Oracle databases that couldn't accomplish that!
    • by sqlgeek ( 168433 ) on Wednesday January 10, 2007 @03:12AM (#17535930)
      I don't think that you know Oracle very well. Let's say you want to scale, and so you want clustering or grid functionality -- built into Oracle. Let's say that you want to partition your enormous table into one physical table per month or quarter -- built in. Oh, and if you query the whole giant table you'd like parallel processes to run against each partition, balanced across your cluster or grid -- yeah, that's built in too. Let's say you almost always get a group of data together rather than piece by piece, so you want it physically colocated to reduce disk I/O -- built in.

      This is why you pay a good wage for your Oracle data architect & DBA -- so that you can get people who know how to do these sort of things when needed. And honestly I'm not even scratching the surface.

      Consider a data warehouse for a giant telecom in South Africa (with a DBA named Billy, in case you wondered). You have over a billion rows in your main fact table, but you're only interested in a few thousand of those rows. You have an index on dates, another index on geographic region, and another index on customer. Any one of those indexes will reduce the 1.1 billion rows to tens of millions of rows, but all three restrictions will reduce it to a few thousand. What if you could read three indexes, perform bitmap comparisons on the results to get only the rows that match all three, and then fetch only those few thousand rows from the 1.1 billion row table? Yup, that's built in, and Oracle does it for you behind the scenes. (A sketch of the idea follows this comment.)

      Now yeah, you can build a faster single-purpose db. But you better have a god damn'd lot of dev hours allocated to the task. My bet is that you'll probably come out way ahead in cash & time to market with Oracle, a good data architect and a good DBA. Any time you want to put your money on the line, you let me know.
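
      The sketch promised above, as an editorial illustration rather than anything Oracle-specific: each index scan yields a bitmap of candidate row positions, and ANDing the bitmaps leaves only the rows matching all three predicates, which are the only ones fetched from the big table. Real bitmap indexes work on compressed bitmaps over rowids; this only shows the idea, with made-up data.

      import java.util.BitSet;

      // Toy illustration of combining three index results as bitmaps.
      public class BitmapAndSketch {
          public static void main(String[] args) {
              int rows = 10_000_000;                    // stand-in for the 1.1 billion row fact table
              BitSet byDate = new BitSet(rows);
              BitSet byRegion = new BitSet(rows);
              BitSet byCustomer = new BitSet(rows);

              // Pretend index scans: mark candidate row positions for each predicate.
              for (int i = 0; i < rows; i += 7)  byDate.set(i);
              for (int i = 0; i < rows; i += 11) byRegion.set(i);
              for (int i = 0; i < rows; i += 13) byCustomer.set(i);

              BitSet matchingRows = (BitSet) byDate.clone();
              matchingRows.and(byRegion);
              matchingRows.and(byCustomer);             // rows satisfying all three predicates

              System.out.println(matchingRows.cardinality() + " rows left to fetch from the fact table");
              for (int row = matchingRows.nextSetBit(0); row >= 0; row = matchingRows.nextSetBit(row + 1)) {
                  // ...fetch only these row positions from the big table...
              }
          }
      }
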
      • by georgewilliamherbert ( 211790 ) on Wednesday January 10, 2007 @07:17AM (#17537218)
        Nevertheless - anyone doing serious data warehousing who cares about read performance has been using Teradata (older apps) or column-oriented Sybase-IQ (newer apps). Oracle can store over a billion rows, sure; a terabyte's a lot of data, and people have had multi-terabyte databases for the better part of a decade, for some projects.

        Why? Despite all the tuning, Sybase-IQ can still run through a general purpose query into its data around ten times faster than tuned Oracle.

        It may not matter in the telephone company, but for people who actually have money on the line (financial companies), huge data processing uses appropriate tools. IQ and columns win.
      • Now yeah, you can build a faster single-purpose db. But you better have a god damn'd lot of dev hours allocated to the task. My bet is that you'll probably come out way ahead in cash & time to market with Oracle, a good data architect and a good DBA. Any time you want to put your money on the line, you let me know.

        Seems to me this describes AA perfectly...SABRE has been around since what, the mid- to late-70s? And it's still actively developed and maintained. At a fairly hefty annual price tag. And yeah, the user interface is antiquated and arcane, but no one's come up with anything better yet.

        Now, I don't know what they're using to get it to play nice with the Internet (since Travelocity is tied directly into SABRE), but that must have been an interesting exercise in programming on its own. That, however, is a discussion topic for another time and place.

  • by suv4x4 ( 956391 ) on Wednesday January 10, 2007 @12:06AM (#17534660)
    We're all sick of "new fad: X is dead?" articles. Please reduce lameness to an acceptable level!
    Can't we get used to the fact that specialized & new solutions don't magically kill existing popular solutions to a problem?

    And it's not a recent phenomenon, either. I bet it goes back to when the first proto-journalistic phenomena formed in early human societies, and it haunts us to this very day...

    "Letters! Spoken speech dead?"

    "Bicycles! Walking on foot dead?"

    "Trains! Bicycles dead?"

    "Cars! Trains dead?"

    "Aeroplanes! Trains maybe dead again this time?"

    "Computers! Brains dead?"

    "Monitors! Printing dead yet?"

    "Databases! File systems dead?"

    "Specialized databases! Generic databases dead?"

    In a nutshell: don't forget that a database is a very specialized form of a storage system; you can think of it as a very special sort of file system. It didn't kill file systems (as noted above), so specialized systems will thrive just as well without killing anything.
    • by msormune ( 808119 ) on Wednesday January 10, 2007 @02:48AM (#17535752)
      I'll chip in: Public forums! Intelligence dead? Slashdot confirms!
    • Death to Trees! (Score:3, Interesting)

      by Tablizer ( 95088 ) on Wednesday January 10, 2007 @02:50AM (#17535770) Journal
      Don't forget that a database is a very specialized form of a storage system, you can think of it as a very special sort of file system. It didn't kill file systems

      Very specialized? Please explain. Anyhow, I *wish* file systems were dead. They have grown into messy trees that are unfixable because trees can only handle about 3 or 4 factors and then you either have to duplicate information (repeat factors), or play messy games, or both. They were okay in 1984 when you only had a few hundred files. But they don't scale. Category philosophers have known since before computers that hierarchical taxonomies are limited.

      The problem is that the best alternative, set-based file systems, have a longer learning curve than trees. People pick up hierarchies pretty fast, but sets take longer to click. Power does not always come easy. I hope that geeks start using set-oriented file systems and then others catch up. The thing is that set-oriented file systems are enough like relational that one might as well use relational. If only the RDBMS were performance-tuned for file-like uses (with some special interfaces added).
      • Re:Death to Trees! (Score:3, Insightful)

        by suv4x4 ( 956391 ) on Wednesday January 10, 2007 @03:02AM (#17535874)
        Anyhow, I *wish* file systems were dead. They have grown into messy trees that are unfixable because trees can only handle about 3 or 4 factors and then you either have to duplicate information (repeat factors), or play messy games, or both.

        You know, I've seen my share of RDBMS designs to know the "messiness" is not the fault of the file systems (or databases in that regard).

        Sets have more issues than you describe, and you know very well Vista had lots of set-based features that were later scaled down, hidden and reduced - not because WinFS was dropped (the sets in Vista don't use WinFS; they work with indexing too), but because they were terribly confusing to the users.
  • by Dekortage ( 697532 ) on Wednesday January 10, 2007 @12:24AM (#17534804) Homepage

    I've made some similar discoveries myself!

    • Transporting 1500 pounds of bricks from the store to my house is much faster if I use a big truck rather than making dozens (if not hundreds) of trips with my Honda Civic.
    • Wearing dress pants with a nice shirt and tie often makes an interview more likely to succeed, even if I wear jeans every other day after I get the job.
    • Carving pumpkins into "jack-o-lanterns" always turns out better if I use a small, extremely sharp knife instead of a chainsaw.

    Who woulda thought that specific-use items might improve the outcome of specific situations?

  • by yagu ( 721525 ) * <yayagu@@@gmail...com> on Wednesday January 10, 2007 @12:53AM (#17535032) Journal

    I've seen drop dead performance on flat file databases. I've seen molasses slow performance on mainframe relational databases. And I've seen about everything in between.

    What I see as a HUGE factor is less the database chosen (though that is obviously important) and more how interactions with the database (updates, queries, etc) are constructed and managed.

    For example, we one time had a relational database cycle application that was running for over eight hours every night, longer than the allotted time for all nighttime runs. One of our senior techs took a look at the program, changed the order of a couple of parentheses, and the program ran in less than fifteen minutes, with correct results.

    I've also written flat file "database" applications, specialized with known characteristics, that operated on extremely large databases (for the time, greater than 10G), and transactions were measured in milliseconds (typically .001 - .005 seconds) under heavy load. This application would never have held up under any kind of moderate requirement for updates, but I knew that.

    I've many times seen overkill, with hugely expensive databases hammering lightweight applications into some mangled relational solution.

    I've never seen the world as a one-size-fits-all database solution. Vendors of course would tell us all different.

  • by bytesex ( 112972 ) on Wednesday January 10, 2007 @04:32AM (#17536348) Homepage
    It's just not called SQL driven RDBMS. It's called Sleepycat.
  • by Terje Mathisen ( 128806 ) on Wednesday January 10, 2007 @06:38AM (#17537036)
    23 years ago I wrote a custom DB to maintain the status of millions of "universal" gift cards; it ran 3-5 orders of magnitude faster (on a 6 MHz IBM AT) than a commercial database running on a big IBM mainframe.

    I reduced the key operations (what is the value of this gift card, when was it sold, has it been redeemed previously? etc) to just one operation:

    Check and clear a single bit in a bitmap.

    My program used 1 second to update 10K semi-randomly-ordered (i.e. in the order we got them back from the shops that had accepted them) records in a database of approximately 10 M records.
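
    A minimal sketch of that kind of bit-map bookkeeping using java.util.BitSet; the original was hand-written C on the AT, and the card-numbering scheme here is invented.

    import java.util.BitSet;

    // Sketch of "check and clear a single bit": bit n is set while card n is outstanding
    // (sold but not yet redeemed) and cleared when it is redeemed.
    public class GiftCardBitmap {
        private final BitSet outstanding = new BitSet(10_000_000);   // ~10M possible cards

        void recordSale(int cardNumber) {
            outstanding.set(cardNumber);
        }

        /** True if the card was outstanding and is now marked redeemed; false if unknown or already used. */
        boolean redeem(int cardNumber) {
            boolean valid = outstanding.get(cardNumber);   // the "check"
            outstanding.clear(cardNumber);                 // the "clear"
            return valid;
        }

        public static void main(String[] args) {
            GiftCardBitmap cards = new GiftCardBitmap();
            cards.recordSale(123_456);
            System.out.println(cards.redeem(123_456));     // true: first redemption
            System.out.println(cards.redeem(123_456));     // false: double redemption caught
        }
    }
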

    20 years later I wrote a totally new version of the same application, but this time the gift cards are electronic debit cards. This time I used Linux-Apache-MySQL-Perl to make a browser-based version, and I stored everything in the DB. Today that is plenty fast enough, and it allows us to make any kind of query against the DB, like "How many transactions of less than 100 kr were accepted in December, broken down by business area/chain/shop/etc."

    Terje
  • by pfafrich ( 647460 ) <richNO@SPAMsingsurf.org> on Wednesday January 10, 2007 @07:27AM (#17537278) Homepage
    Has anyone noticed that this article is published under a Creative Commons License Agreement? It's the first time I've seen this applied to an academic paper. Another small step for the open-content movement.
  • by FlopEJoe ( 784551 ) on Wednesday January 10, 2007 @09:04AM (#17537886)
    This is titled "OSFA? - Part 2: Benchmarking Results." Has anyone found Part 1?
  • by dpbsmith ( 263124 ) on Wednesday January 10, 2007 @10:11AM (#17538670) Homepage
    This is, of course, what MUMPS [wikipedia.org] advocates have been saying for years.

    MUMPS is a very peculiar language that is very "politically incorrect" in terms of current language fashion. Its development has been entirely governed by pragmatic real-world requirements. It is one of the purest examples of an "application programming" language. It gets no respect from academics or theoreticians.

    Its biggest strength is its built-in "globals," which are multidimensional sparse arrays. These arrays and the elements in them are automatically created simply by referring to them. The array indices are arbitrary strings. There can be an arbitrary number of subscripts and the same array can have elements with different numbers of subscripts. Oh, and they're always sorted automatically; each element is created automatically in its proper sequence, and there are fundamental operators for traversing arrays in sequence.

    "Global" arrays are persistent across sessions, are stored on the disk, and as in ordinary practice can be hundreds of megabytes in size.

    Before you say "this can all be done simply by writing a C++ class," I have to mention the important point, which is that the use of the globals is so intrinsic to the ordinary way MUMPS is really used in practice that successful implementations of MUMPS must, and in practice do, make the implementation of globals efficient.

    You really can just use "globals" all the time for everything. They work well enough that you don't need to reserve their use for when they're really needed. They're not a luxury. MUMPS programmers rarely use files, except for interchange in and out of the MUMPS universe. Within MUMPS, data is simply kept in globals; it's just the MUMPS way.

    "Globals" are extremely flexible and lend themselves naturally to representations of real-world databases. These representations are typically one-off, ad-hoc representations designed by the programmer, who needs to make up-front decisions about the hierarchical organization in which the data will be stored, and writes special-purpose code to perform the accesses. Naturally, this sounds like the dark ages compared to relational technology, but there is an impressive tradeoff. If MUMPS fits the application, development times are short, and performance is dramatically better than for relational systems.

    Whether or not this is important in the year 2006, it was very clear a decade ago when medium-scale database applications were typically hosted on minicomputers, that the same hardware resources could support several times as many users running a MUMPS application as a similar application implemented with a relational database, as various organizations found when they converted... in either direction.

    Of course relational systems can be, and are, implemented on top of MUMPS.

    MUMPS underlies InterSystems' Cache product, and a MUMPS-like language with historical connections to MUMPS underlies the products of Meditech. I'm not sure what the current status of Pick [wikipedia.org] is, but it has some similarities. The company I currently work for has nothing whatsoever to do with either system... except that our business IT system happens to be Pick-based.

    Regardless of what you think of MUMPS itself, there are almost certainly lessons to be learned from the durability of this language and its effectiveness.

"Show business is just like high school, except you get paid." - Martin Mull

Working...