First "Real" Benchmark for PostgreSQL 275

anticlimate writes "A new benchmark published on SPEC shows PostgreSQL's performance approaching Oracle's and surpassing or on par with MySQL's (however, the test hardware for the other DB systems is somewhat different). The test was put together by PostgreSQL's core developers working at Sun. They certainly are not unbiased, but this is the first 'real' benchmark for PostgreSQL — according to Josh Berkus's blog. The main difference compared to earlier benchmarks (and anecdotes) seems to be the tuning of PostgreSQL."
  • by Control Group ( 105494 ) * on Monday July 09, 2007 @04:17PM (#19805251) Homepage
    however, the test hardware for the other DB systems is somewhat different

    Which makes the results pretty much useless. But, being the intrepid slashdotter I am, I went ahead and R'ed the FA anyway, in case I could glean some useful information from it.

    Which revealed that the linked article doesn't actually contain any information whatsoever about Oracle* or MySQL, much less benchmarks on named hardware.

    So...what am I supposed to get out of this, again? Or is this just supposed to be some kind of PostgreSQL love-in, so I should take my wet blanket elsewhere?

    *Well, the second link contains someone claiming that Oracle is only 15% faster...but without providing any actual data.
    • Mod parent way up! (Score:3, Insightful)

      by khasim ( 1285 )
      You cannot compare benchmarks without SOMETHING standard between them.

      Okay, if they can't match the hardware (why not?) then focus on price points. I notice that they're looking at "$65,500 for the hardware". That's a LOT of hardware at today's prices.

      I'm sure MySQL would (and will) come back with a "benchmark" on hardware costing $10,000.

      There is nothing "real" about this "benchmark".
      • Re: (Score:2, Interesting)

        by Anonymous Coward
        I can't speak for complex queries, but here are some simple findings from my testing:

        Inserting 20 million rows, all simple inserts, only one primary key (int) with autoincrement for mysql and a sequence for postgres:
        Avg Mysql time per 1000 inserts: 3 seconds
        Avg Postgres time per 1000 inserts: 15 seconds (and gets worse over time)

        That was after increasing the caches and disabling fsync on postgres too.

        I also did a delete then insert for both (to flush out already existing rows), with similar results.
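
        As a rough illustration of the methodology the AC describes (not their actual script), a minimal timing loop in Python might look like the following; the psycopg2 driver, connection string, and table are assumptions:

          import time
          import psycopg2  # assumed driver; any DB-API module works similarly

          # Hypothetical table: CREATE TABLE t (id serial PRIMARY KEY, payload int);
          conn = psycopg2.connect("dbname=bench")  # hypothetical connection string
          cur = conn.cursor()

          BATCH = 1000
          for batch in range(20000):  # 20M rows total, as in the post
              start = time.time()
              cur.executemany("INSERT INTO t (payload) VALUES (%s)",
                              [(i,) for i in range(BATCH)])
              # One commit per batch. Committing per row forces postgres to
              # fsync each time and is a common cause of exactly the kind of
              # slowdown described above.
              conn.commit()
              print("batch %d: %.2fs per %d inserts" % (batch, time.time() - start, BATCH))
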
        • by Doctor Memory ( 6336 ) on Monday July 09, 2007 @05:40PM (#19806269)

          Inserting 20 million rows, all simple inserts, only one primary key (int) with autoincrement for mysql and a sequence for postgres:
          Avg Mysql time per 1000 inserts: 3 seconds
          Avg Postgres time per 1000 inserts: 15 seconds (and gets worse over time)
          OK, now do a seven-table join, including a self-join with a correlated subquery (MySQL does those now, right?). I think everybody knows by now that MySQL is pretty much untouchable as long as all you're doing is simple single-table stuff. Kind of like comparing a pickup truck to a moving van: if all you're doing is moving a couple of boxes around, then the pickup kicks. But when you need to move serious loads, then it's the pickup that gets to sit by the curb...
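
          For anyone who hasn't met one: a correlated subquery is re-evaluated against each candidate row of the outer query. A hedged sketch (the employees table, its columns, and the psycopg2 usage are all made up for illustration):

            import psycopg2

            conn = psycopg2.connect("dbname=bench")  # hypothetical DSN
            cur = conn.cursor()

            # Self-join pairs each employee with their manager; the correlated
            # subquery keeps only those paid above their department's average.
            cur.execute("""
                SELECT e.name, m.name AS manager
                FROM employees e
                JOIN employees m ON m.id = e.manager_id
                WHERE e.salary > (SELECT AVG(e2.salary)
                                  FROM employees e2
                                  WHERE e2.dept = e.dept)
            """)
            for row in cur.fetchall():
                print(row)
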
          • by jedidiah ( 1196 ) on Monday July 09, 2007 @06:19PM (#19806683) Homepage
            If you do that to even a "fast and robust" RDBMS server you are bound to be bludgeoned by the DBA.
            • Re: (Score:3, Informative)

              As the DBA for a Postgres DB where we do exactly that for a website that searches over several million rows (with daily updates of up to 25% of the DB) in under 10 seconds, I can say that a similar query is ugly as hell, but it only took a couple of days of tweaking to get the average case under 3 seconds; certain parameters max out at 20 seconds.

              Certainly postgres plays nice with self-joins and natural joins; the query turned out faster than an iterative stored proc, so we just access a view from the proc.
          • Re: (Score:3, Insightful)

            Great analogy. Just remember that there are far, far fewer moving vans in this world for a reason and that they sit next to the curb more than the pickups.
          • by arivanov ( 12034 ) on Tuesday July 10, 2007 @02:39AM (#19810195) Homepage
            Who cares if MySQL does them or not. Show me the developers who can both develop applications and do SQL. That is a dying breed. Most developers nowadays go for a really trivial schema and an abstraction layer. At that point the only thing that matters is row speed on simple table operations, and there MySQL or in-memory OO database frameworks with a simple backing store wipe the floor. This is the reality of life, and it is not going to get better.

            If you look at the books on the market, the only book that used to teach "proper" SQL (with joins and the lot) strictly from the context of application development was the old DB2 bible. It has not been reprinted since the late '90s. All the rest that is out there is either heavily slanted toward the app side or toward the DB side (usually the latter). Add to that the fact that many universities try to teach "real life software engineering skills" instead of proper data structure and data manipulation classes and the picture is complete: http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html [joelonsoftware.com]. Add to that the fact that DBDs, when you actually corner them to ask something meaningful, answer with SQL technobabble like in your post. To the average developer it sounds like Fortran. And if it looks like Fortran, walks like Fortran and talks like Fortran, it's gotta be Fortran.

            From the point of view of an average software engineer, SQL and especially stored procedures look like a blast from the past. He expects to see objects, constructors, destructors, private and public structures. And what does he see? Something that looks like it was written by his grandparents. As a result he turns around and starts doing delete/insert/last_insert_id instead of REPLACE, and sequential deletions in software instead of foreign keys. I have tried in the past to work with developers who write commercial apps on top of SQL to optimise their code, and I have wanted to scream all along. In 95% of the cases you deal with one of the following:
            • A nice schema, designed once upon a time by a proper DBD, that is vandalised in the application abstraction layer because the developers are sorely pissed off by the endless whingeing of the SQL server and/or its abysmal performance. So they take matters into their own hands and violate ACID by caching and bypassing restrictions in the app. Sooner or later someone comes around and says: WTF, why don't we rewrite this all in software and sod off the expensive database. And surprise, surprise, it ends up being done in MySQL.
            • An abysmal schema, or no schema at all, where all restrictions are done in the app. That is MySQL country all the way.
            MySQL is a result of the way current software development is taught and done. Unless Joe the Average Developer starts understanding how to use SQL in his application (and he does not), and unless SQL data representation grows up to modern non-Fortran-like OO semantics, MySQL will proliferate. And if you think that MySQL is bad, think twice: there are the object persistence frameworks and in-memory crap that follow in its wake.
            • Re: (Score:3, Insightful)

              by kpharmer ( 452893 )
      > unless SQL data representation grows up to modern non-Fortran-like OO semantics, MySQL will proliferate

      You do realize that in activities like reporting, SQL and its set-based operations are far, far, far faster and easier to work with than OO implementations, right?

              You can set up a typical star-schema and have certain tools (like microstrategy) immediately recognize it and generate queries for you. These queries will typically perform just fine and allow very powerful and fast drill-downs, drill-across,
            • Re: (Score:3, Insightful)

              by Doctor Memory ( 6336 )

              Most developers nowdays go for a really trivial schema and an abstraction layer. At that point the only thing that matters is row speed on simple table operations and there MySQL or in-memory OO database frameworks with a simple backing store wipe the floor.

              Until, of course, they don't. All it takes is a couple of users who want to actually get information *out* of the database ("How many widgets do we typically sell in Poughkeepsie in March? And when I say 'Poughkeepsie', I mean the greater Poughkeepsie metroplex.") and you're stuck building indexes and making joins in your code. Eventually your code either becomes unmaintainable, or collapses under its own bulk. Agile/XP developers like the DRY axiom: Don't Repeat Yourself. Why write code to do what the d

        • by jaredmauch ( 633928 ) <jared@puck.nether.net> on Monday July 09, 2007 @05:54PM (#19806395) Homepage
          AC, sorry, but I have a postgres install where I get 70k inserts/second or more with a single index on the table during the day. The first insert of course is faster, as the index doesn't exist yet. I'm not sure what you're doing, but I can tell you that I have tuned postgres by increasing some simple parameters. If you're using some Linux package, you're likely not seeing the benefits that are possible from things like changing the block size parameter in the source. Yes, it's kinda lame that you have to do this, but at the same time, it's not too unreasonable. I'd like it if I could set this larger.

          This is on 'decent' hardware running Solaris 10 (amd64). Obviously you need to tweak stuff like WAL size, checkpoints, etc., but getting this type of performance is not hard to do. I can scan an hour's worth of data in a short amount of time. Each one of these 'hourly' tables contains roughly 30-32M rows. This is nothing to sneeze at, from what I can tell. I haven't had a reason to re-evaluate mysql to see if there are enough tweaks to make it perform similarly, but if you're getting the crappy insert rate you're talking about, you clearly need to change something; you're doing it wrong if you truly care about performance. E-mail me if you're interested in my postgresql config files. I'm happy to share, to minimize the FUD out there.
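
          (For the curious: rates in that range usually come from aggressive batching or from COPY rather than row-at-a-time INSERTs. A hedged sketch of the COPY route with psycopg2; the table and data are made up:)

            import io
            import psycopg2

            conn = psycopg2.connect("dbname=bench")  # hypothetical DSN
            cur = conn.cursor()

            # COPY streams rows through a single command instead of issuing one
            # INSERT per row; for bulk loads it is typically many times faster.
            rows = "".join("%d\t%d\n" % (i, i * 2) for i in range(100000))
            cur.copy_from(io.StringIO(rows), "t", columns=("id", "payload"))  # hypothetical table
            conn.commit()
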

        • by ctr2sprt ( 574731 ) on Monday July 09, 2007 @07:40PM (#19807435)

          You can do way better than that with PostgreSQL, at least, and I suspect with MySQL as well. I wrote a benchmark similar to yours, but a good bit more complex. I had two tables, one of which was seeded and another which was populated by the benchmark. The benchmark table had six columns (int, timestamp, 4x bigint), a primary key (int + timestamp), four check constraints (on the bigints), a foreign key constraint (int, to the seeded table), and two indexes (one int, one timestamp). I would do a commit every 75k rows, with 24 such commits per iteration and 30 passes per benchmark run, so 54 million rows total. I also used a thread pool, and there are two reasons for that. First, some amount of parallelism improves DB performance. Second, it more accurately simulated our predicted usage patterns of the database. We ran my benchmark against PSQL and IBM DB2.

          The results were interesting (at least, I thought they were). First, PSQL can only handle about 10 threads doing work at once. Past about 10 threads, the DB completely falls apart. DB2, however, could handle more busy threads than Linux could, with a very gradual (and linear) degradation in performance past about 25 threads. I stopped testing at 100 threads. Second, PSQL's inserts per second (IPS) rate was cut in half by the end of the benchmark. DB2 followed a similar trend until about 5 million rows, at which point IPS went up to where it started and stayed there without moving. Third, DB2 was I/O-bound, whereas PSQL was CPU-bound. I suspect that's why DB2 was able to handle an order of magnitude greater concurrency: more threads just meant the CPUs had something to do while waiting on the disks. However, it does mean that PSQL might do better with faster CPUs, whereas DB2 would not (it'd just be able to handle more threads).

          And the numbers: DB2 averaged 1100 IPS, PSQL 600. Note that for the first million rows or so PSQL was faster: it just eventually dropped down to ~400 IPS after ten million rows or so, killing the average. Of course, since this table would never have fewer than 54M rows - actually, it would typically have 160M - the IPS I got at the end was the one that mattered. Also, this was on a pretty weak server, at least for this kind of workload. With more (and faster) cores, more memory, and more spindles, I'm pretty sure you could increase those numbers by 50% or more. With tuning, perhaps that much again.
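
          A skeletal Python version of that kind of multi-threaded insert benchmark, for anyone who wants to reproduce its shape (the schema, DSN, and data are stand-ins, and psycopg2 is an assumed driver):

            import threading
            from concurrent.futures import ThreadPoolExecutor
            import psycopg2

            # Hypothetical schema loosely following the post, e.g.:
            #   bench(id int, ts timestamp, a bigint, b bigint, c bigint, d bigint,
            #         PRIMARY KEY (id, ts), FOREIGN KEY (id) REFERENCES seed(id))
            # Assumes seed already contains rows with id 0..999.
            local = threading.local()

            def get_conn():
                # One connection per worker thread; a psycopg2 connection should
                # not be shared across threads mid-transaction.
                if not hasattr(local, "conn"):
                    local.conn = psycopg2.connect("dbname=bench")  # hypothetical DSN
                return local.conn

            def insert_chunk(rows):
                conn = get_conn()
                cur = conn.cursor()
                cur.executemany(
                    "INSERT INTO bench (id, ts, a, b, c, d) "
                    "VALUES (%s, now(), %s, %s, %s, %s)", rows)
                conn.commit()  # the post committed every 75k rows; here, per chunk

            # Around 10 workers, the point past which the poster saw PSQL degrade.
            with ThreadPoolExecutor(max_workers=10) as pool:
                chunks = ([(j, i, i, i, i) for j in range(1000)] for i in range(100))  # made-up data
                list(pool.map(insert_chunk, chunks))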

      • by Vellmont ( 569020 ) on Monday July 09, 2007 @04:53PM (#19805679) Homepage

        You cannot compare benchmarks without SOMETHING standard between them.

        The thing that's standard is the benchmarking software.

        If I were to buy a database server, do I really care which component of the solution is providing me with the great performance, or do I just want the performance? At the end of the day the only thing that really matters is the performance that comes out of the box.

        It doesn't really matter if "Postgresql" is faster than "MySQL", because they always run on a certain physical computer. What matters is: "I need to accomplish X, Y and Z. I have A dollars to spend. Which solution accomplishes X, Y and Z best within my budget?" You can't separate the software from the hardware and get an answer that's very meaningful.

        This benchmark isn't the last word on anything. Even a benchmark run on the exact same hardware means very little if you have a 2 core machine instead of 8.
        • You can't separate the software from the hardware and get an answer that's very meaningful

          I take your point in general, but this statement is somewhat misleading. While you can't separate software from hardware entirely, you can be in a situation where you have a given supply of hardware, and need to know how best to use it - which amounts to much the same thing.

          In that situation, knowing how each piece of software performs on a specific platform may be excellent information for you to have.

          • you can be in a situation where you have a given supply of hardware, and need to know how best to use it - which amounts to much the same thing.

            Sure, but you're talking about a specific piece of hardware. Sometimes you DO have a given hardware box and need to find software that works well on it. But how is a benchmark run on a totally different piece of hardware going to help you?

            Benchmarks like these might give you kind of general ideas about the software, like "postgresql is in the same class as Oracle
      • I notice that they're looking at "$65,500 for the hardware".

        Yes, but they are comparing it to $74,000 for Oracle's hardware. The part that worried me is that "all benchmark runs were extensively optimized by the Sun performance team, with the help of performance experts from the databases represented." And they said they spent 6 months optimizing it. My goal is to find the sweet spot that combines: Money, Performance, and Development (and Code Maintenance) costs. If it's possible to get great performa

        • by jedidiah ( 1196 )
          If you need one of Sun's key luminaries in order to get that level of performance it is also of limited value. I don't want to have to drag Burleson or Niemiec out to my shop just to get my database to run well. I need to be able to get the database to run well on my own. My company won't spring for Niemiec's pool boy, nevermind Niemiec himself.
          • Re: (Score:3, Interesting)

            by Gorshkov ( 932507 )

            If you need one of Sun's key luminaries in order to get that level of performance it is also of limited value. I don't want to have to drag Burleson or Niemiec out to my shop just to get my database to run well. I need to be able to get the database to run well on my own. My company won't spring for Niemiec's pool boy, nevermind Niemiec himself.

            You're missing the entire point of what he's saying.

            When you see benchmarks run and comparisons made between different databases that are conducted by a single pe

      • Re: (Score:3, Funny)

        by timmarhy ( 659436 )
        that would be because they are testing serious db applications, not your fucking toy shit. $65,000 on a server is no big deal at all.
    • by Ngarrang ( 1023425 ) on Monday July 09, 2007 @04:26PM (#19805363) Journal
      To paraphrase an old saying:

      There are lies, damned lies and benchmarks.
    • by KillerCow ( 213458 ) on Monday July 09, 2007 @04:34PM (#19805467)
      I think that somebody sent the wrong link and (surprise!) the editors didn't even follow it to check.

      Here's a more useful one: All SPEC jAppServer2004 Results Published by SPEC [spec.org]

      The benchmarks aren't standardized enough for any useful comparison. The hardware and configurations vary in almost every one.
      • Re: (Score:3, Insightful)

        If you want to set up a dedicated database server, you want to know what software with what hardware will run the fastest. So while the benchmarks may not be useful to people wanting to set up a small multi-purpose server, they can still be useful for some people.
        • by jedidiah ( 1196 )
          I am more interested in how it handles an axe-wielding maniac going postal in the datacenter, or what would happen if a mile-wide tornado struck town, or if there's a bug, or if the project becomes wildly successful and does 10x or 100x the processing.
    • however, the test hardware for the other DB systems is somewhat different

      Which makes the results pretty much useless.


      Not necessarily.

      It's essentially useless for separating out how much of the performance difference is the result of the software's design, implementation, and tuning versus how much is due to the platform differences.

      But such tests CAN be used to examine the performance of competing ENTIRE SYSTEMS, to inform choices between them.

      They say: "Oracle on does THIS well, PostgreSQL on can be tuned so it does THAT well on the same benchmark."

      This lets administrators (presuming they have access to the hardware info) get a bang-for-the-buck comparison.

      For the rest of us, the interesting point is that PostgreSQL, running on its team's idea of realistic hardware, can produce performance in the same ballpark as Oracle running on Oracle's choice of hardware.

      (Whether the necessary remaining data (what are hardwares x and y? how was PostgreSQL tuned?) is published now, later, or never, is a separate issue. B-) )
      • That should have read:

        They say: "Oracle on {hardware x} does THIS well, PostgreSQL on {hardware y} can be tuned so it does THAT well on the same benchmark."
      • by Minwee ( 522556 )
        As I recall, Oracle's choice of hardware consists of a very large chequebook and a stamp with your signature on it.
      • by Control Group ( 105494 ) * on Monday July 09, 2007 @04:53PM (#19805683) Homepage
        Oh, I agree. A benchmark of whole systems can be just as (or more) useful as a benchmark of individual pieces of software, depending on what your goals are.

        But what's been presented here isn't even that. Link #1 takes us to a SPEC benchmark of PostgreSQL. It doesn't provide any information about anything else; there isn't anything to compare the benchmark to. Link #2 provides an unreferenced statement about Oracle's marginally superior performance on much more expensive equipment.

        So, perhaps, one can begin to draw conclusions about PostgreSQL vs Oracle in the contexts of full systems. But neither link #1 nor link #2 provide any information about MySQL (except the quote: "[t]his publication shows that a properly tuned PostgreSQL is not only as fast or faster than MySQL").

        Really, my criticism isn't of the benchmark (the data are the data, after all) or of the blog (one expects a vested PostgreSQL interest to comment on such a benchmark), but of the blurb here that either a) draws totally unwarranted conclusions, or b) depends on information it doesn't bother sharing.
      • by Qzukk ( 229616 )

        (Whether the necessary remaining data (what are hardwares x and y? how was PostgreSQL tuned?) is published now, later, or never, is a separate issue. B-) )

        From the SPEC site [spec.org], click on the "Disclosures" links to find out the hardware and software used for each part of the test. For instance, the postgres server ran on a SunFire T2000 with one 8-core (4 virtual threads per core) UltraSPARC T1 processor at 1.2GHz and 16GB of RAM, running 64-bit Solaris 10, etc. The HTML Disclosure links to the "Disclosure Archive", which is a .jar with all of the configuration files used.

    • http://www.spec.org/jAppServer2004/results/jAppServer2004.html [spec.org] contains all the results. Unfortunately the different hardware configurations make it rather hard to draw any conclusions. Which raises the question: how did the submitter know whether these specific guys were biased or not? From what I can see, the whole setup is inherently biased.
  • by CaptainPatent ( 1087643 ) on Monday July 09, 2007 @04:24PM (#19805325) Journal
    Because Sun systems will always be different from the x86-based cores that run MySQL and Oracle, I think the best way to compare such software would be by constructing servers of equal price and seeing how PostgreSQL fares. The true question on any business person's mind is "how much to implement?"
    • by Doctor Memory ( 6336 ) on Monday July 09, 2007 @04:37PM (#19805499)

      Sun systems will always be different from the x86-based cores that run MySQL and Oracle
      Umm, wrong both ways. Oracle runs really well on Sun SPARC hardware (and I suspect MySQL at least runs), and Sun also makes x86-based servers (built with AMD's Opteron chips). It shouldn't be any trouble to benchmark all three on the same hardware.

      Well, no technical trouble, anyway — I doubt Oracle would like to have its performance compared to two free-as-in-beer competitors. Even if it comes out on top, people will still be tempted to think "Jeez, with the money I save on Oracle licenses, I can buy a faster server and make up the speed difference"...
      • Well, no technical trouble, anyway -- I doubt Oracle would like to have its performance compared to two free-as-in-beer competitors. Even if it comes out on top, people will still be tempted to think "Jeez, with the money I save on Oracle licenses, I can buy a faster server and make up the speed difference"...

        Well yeah, especially since the primary metric for TPC-C isn't TPM (transactions per minute) but TPM/$. If the cost of Oracle means you could throw more hardware at the free DBs and get better overall
    • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday July 09, 2007 @04:39PM (#19805531)
      Get people from each group, give them the requirements and 5 different dollar amounts.

      Let each team set up their systems however they want at each price point. Some will go with clustered servers. Some will go with a single monster server. They know their products best, so they'll be the ones best suited to choosing the configuration.

      Then run the benchmarks. And keep hammering on them until AFTER the next patch release.

      Yeah, it might run fast, but still be a bitch to patch/upgrade.

      At $5,000 you might find that a cluster of MySQL boxes beats everything.

      At $10,000 maybe something else is best.

      $25,000

      $50,000

      $100,000

      etc.

      And finally, break it. Break it bad. What happens when something goes wrong? Oracle might cost a lot, but if they can come through with your data they might just be worth it.

      If nothing else, you'll get the "best practices" nicely demonstrated by each group. :)
      • by suv4x4 ( 956391 ) on Monday July 09, 2007 @05:27PM (#19806091)
        Get people from each group, give them the requirements and 5 different dollar amounts.
        Let each team set up their systems however they want at each price point. Some will go with clustered servers. Some will go with a single monster server. They know their products best, so they'll be the ones best suited to choosing the configuration.


        We gave each team $10,000 and told them to build the best hardware and DB setup they could:

        ** PostgreSQL got a small IBM Blade quad machine redundant setup:
        "We're relying on standard industry solutions and reliability."

        ** MySQL clustered 4 PlayStation3 machines and wasted the rest on booze and women:
        "We're practical, plus we know what is money best spent on!".

        ** Oracle purchased a 1200 square foot datacenter and installed a megacluster of 8132 quad-Xeon 64GB RAM 4TB disk machines. With $10'000...?!
        "We... uhmm... we hit a great bargain, guys! You wouldn't believe it, but it's true!"
    • This is wrong on a whole bunch of levels:

      1. Postgres runs on, among other things, linux, and windows.
      2. SOLARIS runs on, among other things, x86.
      3. MySQL and Oracle run on, among other things, Solaris on a Sparc.

      There is a basis for identical comparisons. I've done it.

      OTOH, you got this one right:

      The true question on any business person's mind is "how much to implement?"
    • by jhines ( 82154 ) <john@jhines.org> on Monday July 09, 2007 @04:45PM (#19805593) Homepage
      If you read the details, while they are Sun machines, they are Opteron-based, so yeah, they compare.
    • by PCM2 ( 4486 )
      Wait ... you're saying MySQL and Oracle only run on x86? What rock have you crawled out from under?
  • Bad firehose! (Score:5, Informative)

    by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Monday July 09, 2007 @04:27PM (#19805377) Homepage
    Why this emaciated post made it while mine didn't I'll never know...here's how I submitted this story:
     
    The current version of PostgreSQL now has its first real benchmark [ittoolbox.com], a SPECjAppServer2004 [spec.org] submission [spec.org] from Sun Microsystems. The results required substantial tuning [sun.com] of many performance-related PostgreSQL parameters, some of which are set to extremely low values in the default configuration — a known issue that contributes to why many untuned PostgreSQL installations appear sluggish compared to its rivals. The speed result is close but slightly faster than an earlier Sun submission using MySQL 5 [spec.org] (with enough hardware differences to make a direct comparison of those results unfair), and comes close to keeping up with Oracle on similarly priced hardware — but with a large software savings. Having a published result on the level playing field of an industry-standard benchmark like SPECjAppServer2004, with documentation on all the tuning required to reach that performance level, should make PostgreSQL an easier sell to corporate customers who are wary of adopting open-source applications for their critical databases.
  • by Anonymous Coward
    For those of us who don't have dozens of hours to do the necessary research, can some postgresql gurus sum up some of the most significant tuning parameters so us mere mortals can see similar performance gains?

    I realize that a large part of the answer is going to be "it depends on your application, your hardware, and your query types", but surely there must be some general tips that we can follow given various typical setups. MySQL, for example, ships with several different configuration files: One suitable for a sm
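
    (The "substantial tuning" link in greg1104's submission above goes into detail. As a rough, hedged starting point for a dedicated 8.x server, the usual suspects in postgresql.conf look something like this; the values are purely illustrative, not recommendations:)

      # postgresql.conf - illustrative values for a dedicated box with plenty
      # of RAM; the right numbers depend entirely on workload and hardware.
      shared_buffers = 1GB            # the stock default is very conservative
      effective_cache_size = 12GB     # planner hint: roughly the size of the OS cache
      work_mem = 32MB                 # per-sort / per-hash-table memory
      maintenance_work_mem = 512MB    # speeds up index builds and VACUUM
      wal_buffers = 1MB               # the 64kB default is low for write-heavy loads
      checkpoint_segments = 64        # the default of 3 forces constant checkpoints
      # fsync = off                   # tempting for benchmarks (the AC above
      #                               # tried it); unsafe for data you care about
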
  • by Pap22 ( 1054324 ) on Monday July 09, 2007 @04:44PM (#19805585)

    This publication shows that a properly tuned PostgreSQL is not only as fast or faster than MySQL, but almost as fast as Oracle (since the hardware platforms are different, it's hard to compare directly). This is something we've been saying for the last 2 years, and now we can prove it.
    Postgresql 8.2 on UltraSPARC T1 [spec.org]
    MySQL 5 on AMD Opteron 285 [spec.org]

    The UltraSPARC has 8 cores on 1 chip and 16GB of memory.
    The Opteron has 4 cores and 8GB of memory.

    The UltraSPARC should smoke it every time.

    • Postgres scored 778 on a probably more expensive machine with twice the RAM of the MySQL machine, which scored 720. You call an 8% improvement "smoking it"?
  • by suv4x4 ( 956391 )
    They certainly are not unbiased

    I guess that's a deal breaker right there, no?

    The test was put together by PostgreSQL core developers at Sun. Didn't we agree earlier, when talking about Intel/AMD benchmarks, that vendor-supplied tests are wildly inaccurate?

    PostgreSQL should concentrate on more developer tools and better marketing. The "it's got a ton of features you don't need" pitch on a cryptic site doesn't help its cause.

    People use MySQL because there's a wide support and lots of dev tools for it, and because th
    • by pavera ( 320634 )
      I really don't understand this argument. I've used PostgreSQL and MySQL pretty much interchangeably for the last 7 years. I have never felt a lack of dev tools or documentation/help from postgresql. Maybe I'm just smarter than the average joe, but setting up and running postgresql is not that much more difficult than mysql.

      For the features it provides I much prefer postgresql. sure, clustering is harder, but for the "easy" things that mysql is supposed to be good at, you don't need clustering either. O
  • Elephant (Score:4, Funny)

    by suv4x4 ( 956391 ) on Monday July 09, 2007 @04:50PM (#19805633)
    Won't you guys agree, "elephant" doesn't exactly communicate "fast and modern" very well?
    "Dolphin" comes a bit closer.

    Who's coming up with those logos?
    • by afabbro ( 33948 )
      Elephant communicates "really long memory".

      Which, of course, is what you want in a RDBMS.

      Who's coming up with this education system?
      • by suv4x4 ( 956391 )
        Elephant communicates "really long memory".

        Which, of course, is what you want in a RDBMS.
        Who's coming up with this education system?

        --

        I'm not convinced.

        Dolphins are way smarter than elephants: we'll need an elephant/dolphin benchmark.

        I'll get some monkeys to set one up.
    • Re:Elephant (Score:4, Insightful)

      by pavera ( 320634 ) on Monday July 09, 2007 @05:01PM (#19805797) Homepage Journal
      "Dolphin" also conveys "fun play thing" to me...

      I'd prefer the elephant that never forgets.
    • Re: (Score:3, Funny)

      by turing_m ( 1030530 )
      If the final decision in choosing an RDBMS comes down to the logo, the choice of database will be the least of your problems.
  • Which MySQL? (Score:5, Interesting)

    by itsdapead ( 734413 ) on Monday July 09, 2007 @05:33PM (#19806183)

    MySQL is modular - pick'n'mix data storage engines sharing a SQL front end. I can't find the bit in TFA that says which one they compared.

    I've always suspected that most MySQL vs Postgres flame wars are based on comparing Postgres with the speed of MySQL/MyISAM (No transactions or relational integrity checks - so, big surprise, dead fast for simple queries) and then waving MySQL/InnoDB around when the functionality issue is raised.

    MySQL/MyISAM hits the speed/functionality sweet spot for LAMP data-driven websites, is supported by lots of free webapp software and offered by most decent web hosting services. Comparing it speedwise with Postgres has always been pointless, though. If Postgres has caught up, colour me impressed, but if they're pro-Postgres I bet they're comparing with MySQL/InnoDB (which is a bit closer to like-with-like).

    Never quite seen the point of MySQL/InnoDB really - all the advantages of MySQL/MyISAM minus the speed, support by popular webapps, and availability on low-cost hosts... and it still lacks the features of Postgres.
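
    (Worth remembering that the engine is chosen per table, so a "MySQL" benchmark number means little until you know which engine was in play. A hedged way to check, assuming the MySQLdb driver and made-up table names:)

      import MySQLdb  # assumed driver

      conn = MySQLdb.connect(db="bench")  # hypothetical DSN
      cur = conn.cursor()

      # The engine is per-table: same SQL front end, very different behaviour.
      cur.execute("CREATE TABLE t_myisam (id INT PRIMARY KEY) ENGINE=MyISAM")
      cur.execute("CREATE TABLE t_innodb (id INT PRIMARY KEY) ENGINE=InnoDB")

      # Verify what an existing table actually uses:
      cur.execute("SHOW TABLE STATUS LIKE 't_innodb'")
      print(cur.fetchone()[1])  # the Engine column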

  • by Stinking Pig ( 45860 ) on Monday July 09, 2007 @06:15PM (#19806623) Homepage
    I worked for a company whose product ran on MS-SQL, PostgreSQL, and Oracle. Should I explain why we didn't support MySQL or not? It'll draw fanboys either way. I used the same server, reinstalled the OS (Red Hat Enterprise 3 or Windows 2000) and database between each test, and rebuilt the application server to be extra sure.

    Since it was more difficult to write Oracle-compliant SQL and we didn't have a lot of Oracle customers, the developers didn't care to spend time on it, and our stuff ran about 20 percent slower there. That's after a lot of tuning time; it was 50% slower on a default install. Oracle 9 took two days to install and tune, plus another two days of preparation. I was particularly underwhelmed that I had to deal with stupid errors like tarballs that extracted onto themselves and assumptions about the shell being used. At the time, Oracle was a very Solaris-like experience; user-unfriendly in the extreme.

    Postgresql 7 ran great; it was neck and neck with MS-SQL in all tests, after proper tuning, and 30 percent slower on a default install. Postgres took half a day to install and tune, but it took me a week and conversation with the postgres mailing lists to find out what needed tuning. Still, we were able to put together a document that took users from bare metal to RHEL+Postgres in four hours if they had all their media handy.

    Microsoft SQL Server 2000 ran great with no tuning at all, and took fifteen minutes to install. It also cost as much as paying me to do the entire set of tests. OS installation/patching times and tested workloads were the same for all three tests.

    YMMV.
