
Ask Slashdot: Is Postgres On Par With Oracle?

grahamsaa writes "I work at a medium-sized company that offers a number of products that rely fairly heavily on backend databases, some of which are hundreds of gigabytes and deal with hundreds or thousands of queries per second. Currently, we're using a mix of Postgres, Oracle, and MySQL, though we're working hard to move everything to Postgres. The products that are still on MySQL and Oracle were acquisitions, so we didn't get to choose the RDBMS at the time these products were designed. So far, we've been very happy with Postgres, but I know next to nothing about Oracle. It's expensive and has a long history of use in large enterprises, but I'm curious about what it offers that Postgres might not — I'm not saying this because I think that sticking with Oracle would be a good idea (because in our case, it probably isn't), but I'm curious as to how some companies justify the cost — especially considering that EnterpriseDB makes transitioning from Oracle to Postgres feasible (though not painless) in most cases. For those that use Oracle — is it worth the money? What's keeping you from switching?"
  • previous life (Score:3, Interesting)

    by Anonymous Coward on Friday July 12, 2013 @09:20PM (#44266875)

    my previous employer had a similar decision to make when they were restructuring the company. the powers that be decided to pay Oracle big $$ just because of name recognition ... and on the off chance that it would make the company a more appealing acquisition candidate.

    imo, if your enterprise is optimized for postgres, you'd be crazy to switch. rearchitecting would be a son-of-a-bitch.

  • by jbolden ( 176878 ) on Friday July 12, 2013 @09:22PM (#44266895) Homepage

    Materialized views (and all the related magic)
    Flashback queries and flashback archives (they are really cool)
    Index only scans (can be a major performance boost)
    Transaction control in stored procedures (Postgres functions offer no transaction control)

    Oracle handles queries that return 50k-plus records far, far better.

    Oracle uses a statistical optimizer for execution plans in the engine. They are working through the 2nd generation of it to handle situations where there are lots of high-frequency values.

    Temporary table undos

    Oracle is really an excellent product for a database in which there will be DBA maintenance. If there aren't DBAs, Oracle's complexity becomes a minus, not a plus.
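
    To make the first couple of items concrete, here's a hedged sketch of Oracle's incrementally maintained materialized views (the table, log, and view names are hypothetical); plain Postgres has no built-in equivalent at the time of writing:

        -- A materialized view log records changes so the view can be
        -- refreshed incrementally; INCLUDING NEW VALUES is required for
        -- fast-refreshable aggregate views.
        CREATE MATERIALIZED VIEW LOG ON orders
            WITH SEQUENCE, ROWID (customer_id, amount)
            INCLUDING NEW VALUES;

        -- REFRESH FAST ON COMMIT keeps the aggregate in sync as each
        -- transaction commits, instead of recomputing from scratch.
        CREATE MATERIALIZED VIEW mv_order_totals
            REFRESH FAST ON COMMIT
            AS SELECT customer_id, COUNT(*) AS cnt, SUM(amount) AS total
               FROM orders
               GROUP BY customer_id;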

  • by hibiki_r ( 649814 ) on Friday July 12, 2013 @10:14PM (#44267163)

    There's also the Evil Oracle Magic that lets you change query plans on the db directly, if Oracle itself is unable to come up with the best plan. In Postgres, the database is expected to figure everything out based on costs and statistics, which works well most of the time, but will kill you for specific kinds of queries. For instance, if you have 4 WHERE clauses on different tables, Postgres' static analysis assumes each clause is exactly as selective as it would be against the entire dataset, with no idea whether the clauses are correlated. When that assumption doesn't hold, Postgres can make very wrong estimates of how many rows you'll fetch, and thus come back with very silly query plans.

    In Oracle, you have a chance of being saved by the fact that the optimizer learns from these kinds of mistakes, or, in the worst-case scenario, the DBA can just assign a very specific plan to your query on the fly, which leads to great performance gains without having to change code. Postgres keeps getting better with every release, though, and Oracle's licenses are not getting any cheaper.
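
    For illustration, a hedged sketch of the cheapest form of that steering, an inline optimizer hint (table and index names are hypothetical; a DBA pinning a plan without touching code would more likely use SQL Profiles or SQL Plan Baselines):

        -- Force the join order to start at orders and force an index
        -- access path, overriding whatever the optimizer estimated.
        SELECT /*+ LEADING(o) INDEX(o orders_status_idx) */
               o.id, c.name
        FROM   orders o
        JOIN   customers c ON c.id = o.customer_id
        WHERE  o.status = 'OPEN';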

  • by denmarkw00t ( 892627 ) on Friday July 12, 2013 @10:35PM (#44267243) Homepage Journal

    Flashback queries and flashback archives (they are really cool)

    This this this. Working with Oracle was very interesting a few years back. Odd things, like not being able to do a LIMIT, OFFSET in an easy manner (read: any way but LIMIT, OFFSET), were so strange - the DBAs explained it as something to do with how Oracle manages row count and the uncertainty of the rows returned? idk, it's been a while. They did give us a way around...but, I digress.
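
    (The workaround was presumably the classic nested-ROWNUM trick; a sketch against a hypothetical table, fetching "rows 21-30". ROWNUM is assigned before ORDER BY is applied, hence the double nesting:)

        SELECT *
        FROM (
            SELECT t.*, ROWNUM AS rn
            FROM (SELECT * FROM orders ORDER BY created_at DESC) t
            WHERE ROWNUM <= 30
        )
        WHERE rn > 20;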

    Flashbacks are nasty cool. The way I understand it (I was only watching the dev who, about two hours before, had hosed a production database) is that you can SELECT INTO from a point in time. We had a DBA on the line who walked him through the flashback, and before we knew it, the DB was back to the state it was in hours before.
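
    (For the curious, a hedged sketch of what that looks like, with a hypothetical table; flashback depends on enough undo history being retained, and FLASHBACK TABLE needs row movement enabled:)

        -- Query the table as it looked two hours ago.
        SELECT * FROM orders
        AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '2' HOUR);

        -- Or rewind the table itself in place.
        ALTER TABLE orders ENABLE ROW MOVEMENT;
        FLASHBACK TABLE orders
        TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '2' HOUR);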

    HOWEVER. Go with Postgres. Stick with Postgres. No reason to shell out all that cash for licenses, and Postgres is powerful enough to do just about anything you need it to, imho.

  • by ulatekh ( 775985 ) on Friday July 12, 2013 @11:29PM (#44267531) Homepage Journal

    Ever try to store an array of strings? Better to store it as one field and parse it in code!

    If you're trying to store complex data structures in SQL, I would recommend protocol buffers [wikipedia.org]. Imagine XML, but more compact, and with built-in support for versioning. It's open source [google.com] too.
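
    (Worth noting for the quoted complaint, though: plain Postgres handles the array-of-strings case natively, no parsing in code required. A minimal sketch with hypothetical names:)

        CREATE TABLE posts (
            id   serial PRIMARY KEY,
            tags text[]              -- a real array column, indexable with GIN
        );
        INSERT INTO posts (tags) VALUES (ARRAY['postgres', 'oracle']);
        SELECT id FROM posts WHERE 'postgres' = ANY (tags);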

  • If you go w/ Oracle (Score:2, Interesting)

    by GodfatherofSoul ( 174979 ) on Friday July 12, 2013 @11:58PM (#44267613)

    Based on a rumor I heard about a local mega-development firm, make sure you build your app with the flexibility to swap out DB implementations and make sure the Oracle sales guys *know* it. Prices change drastically when they know you're not locked in. As for PostgreSQL, in my experience where you'll lose out is on tools and the development environment.

  • by Anonymous Coward on Saturday July 13, 2013 @12:56AM (#44267855)

    I have been an Oracle DBA for over 15 years. I agree Oracle is an excellent product. In fact, I have built two tech companies based on Oracle.

    But, really, Oracle is totally replaceable by Postgres. The resistance does not come from feature limitations; rather, it's environmental. EnterpriseDB has laid out the "vendor lock-in" problem well in this whitepaper: http://www.enterprisedb.com/wp-oracle-cost-containment.

  • Re:Clustering... (Score:4, Interesting)

    by evilviper ( 135110 ) on Saturday July 13, 2013 @01:24AM (#44267969) Journal

    FWIW, when you're faced with the option, go with Postgres if at all possible. Oracle is a crufty mess that's painful black magic to administer. There's sufficient documentation out there that you CAN manage it, but it seems to be made up of nothing but dark corners and ancient cruft.

    I hate, hate, HATE always having to manually manage tablespaces, data file size limitations, recompiling invalid objects, the HORRIBLE, painful, not-entirely-documented syntax of the tools (e.g. expdp), the admin-hostile tools that make the MS-DOS command line look futuristic (sqlplus), etc. From an administrative perspective, Oracle is a nightmare compared to the relatively approachable and user-friendly Postgres.

  • by Meetch ( 756616 ) on Saturday July 13, 2013 @10:36AM (#44269497)

    I've been a relatively mild-mannered open source advocate for over 20 years now, and have been running Linux for all of it. My first DBA job was with Postgres (6 or 7, ~12 years ago now!); these days it's Oracle. This is all about databases, completely ignoring the application-related acquisitions they've made in the last decade...

    A lot of the difference I see, and that is evident from the discussion here, is that Oracle usually has the features earlier (not always, but yes, usually). The earliest example I witnessed is write-ahead logging: Postgres' WAL was definitely cool, but Oracle was there first. More recently, with 11gR2 you have Advanced Compression (pay $$$$ and it will store all your data compressed if you want), and with 12c there are a bunch of features that make me drool. Pluggable databases is just one of them.
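
    (A hedged sketch of what the 11gR2 compression option looks like in use; it's a separately licensed extra, and the table name is hypothetical:)

        -- OLTP table compression, compressing blocks as rows are inserted
        -- and updated, not just on bulk loads.
        CREATE TABLE orders_compressed
            COMPRESS FOR OLTP
            AS SELECT * FROM orders;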

    Again, I'm not entirely sure about Postgres, but Oracle builds a lot of instrumentation into the database software itself. Tracing custom events is a great way of profiling your application as well as database deficiencies. Pay for the license to unlock the full power of ASH and AWR and you have a great deal of ability to see exactly what's going on and figure out how best to resolve any performance issues. The best bit is that this instrumentation doesn't make the database run like a dog: a few percent overhead gives you a lot of debugging power, and it's ALWAYS turned on, with basic event tracking always happening anyway. But you can add MOAR.
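
    (For example, with the Diagnostics Pack licensed, Active Session History can be queried like any other view; a hedged sketch of a quick "what is everyone waiting on right now" profile:)

        SELECT   event, wait_class, COUNT(*) AS samples
        FROM     v$active_session_history
        WHERE    sample_time > SYSTIMESTAMP - INTERVAL '5' MINUTE
        GROUP BY event, wait_class
        ORDER BY samples DESC;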

    I see some impressive performance on Oracle databases these days, but I'm not entirely convinced that Postgres cannot match it. Then again, Oracle can run on anything from 32-bit x86 to some seriously beefy hardware (and when it does, it runs well). I'm not entirely sure about Postgres, but I know Oracle has been compiled for RISC architectures (POWER, SPARC, HP-UX boxes, others?) for a long time. These days they tend to lean towards x86, and will even sell you a "database machine" (google for Exadata). This extends to scaling out on any of the supported architectures with their cluster software (Grid Infrastructure), which is quite mature now. Again, Postgres probably does this too, but each generation sees a significant improvement for Oracle.

    Having said all that, leading edge can also be bleeding edge... The biggest problem for me with Oracle continues to be the time it takes to resolve software bugs, combined with their support infrastructure. While it usually gets there in the end, for the price you pay for enterprise support one might expect quicker resolution if you happen to be the first person to hit a specific problem. Unfortunately this tends to tie in with the need to certify fixes against all the Oracle applications they release and support. The one and only bug I reported when I was a Postgres DBA was a date calculation issue: from the behaviour I reported it was tracked down and patched in ~2 days, and I had a workaround in the meantime anyway.

    Oracle have also done some cool stuff in the open source domain with OCFS (and now OCFS2), in the free domain with their base GI cluster software, and in the plain-cool domain with ASM (dynamically manageable disk pooling with a Stripe And Mirror Everything methodology providing solid data robustness) and ACFS, which lets you carve out clustered POSIX-compliant filesystems on top of ASM at will. This all helps with scaling (you don't need OCFS2 now if you use ACFS, though).

    Hmmm, it seems they really are turning me to the dark side.... heeellllllppppp!!!!

  • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Saturday July 13, 2013 @10:56AM (#44269603) Homepage

    Oracle has to do a complicated implementation in all cases because of how their rollback logs are structured, which makes certain types of things impossible to reverse without dipping into a record of earlier database states. But Postgres has fully transactional DDL [postgresql.org] in a way that's even a bit better than how Oracle approaches the problem. When you can already roll back a DROP TABLE, small jobs returning to an earlier state are possible without even dipping into the history.
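
    A minimal illustration of that transactional DDL, runnable on any recent Postgres (table name hypothetical):

        BEGIN;
        DROP TABLE orders;              -- gone, but only inside this transaction
        ROLLBACK;
        SELECT count(*) FROM orders;    -- the table is intact again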

    We would have to dump a lot of disk space into the obvious ways to implement deeper rollback targets in Postgres, and Oracle has surely put a lot of engineering work into making that efficient. The design for Postgres I have in mind would be a disk pig. I know it would work, though, because I'm already hacking together business SLA level rollback targets for people: things like "we must be able to recover from any fat-finger error made in the last 2 hours in less than 10 minutes". All of the hard pieces needed are already inside the replication and rollback code; they just need to be reassembled for this goal. There's even a generic background worker infrastructure in Postgres now, which makes it easy to create more schedule-driven daemons like the existing background writer or autovacuum worker. Two years ago, even an inefficient Flashback clone would have been impossible for PostgreSQL. Now enough of the internal components are there that it's just waiting for someone to pick an initial business-goal target and build a UI to reach it.
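
    (The building blocks for those SLA targets are ordinary point-in-time recovery; a hedged sketch of a 9.x-era recovery.conf, with hypothetical paths and timestamp, that restores from a base backup and replays WAL only up to just before the mistake:)

        restore_command      = 'cp /archive/%f %p'
        recovery_target_time = '2013-07-13 08:50:00'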

  • by fuzzytv ( 2108482 ) on Saturday July 13, 2013 @01:44PM (#44270471)

    PostgreSQL is about the closest open-source equivalent to Oracle. Reputedly, they both come from the same parent

    Ummmm, what? AFAIK there's no common ancestor or anything like that, and the projects come from very different environments. PostgreSQL originates from Berkeley research projects; Oracle is a purely commercial project. They are both ACID-compliant, WAL-based relational databases and may be similar in other respects, but there are many significant technical differences (e.g. in the MVCC implementation).

    Regardless, if you discover your developers have been indulging in extensive use of stored procedures, you should immediately escort them out the door. I'm speaking from bitter experience.

    Stored procedures can be very efficient sometimes, but often, they just add additional load to the DBMS server that could have been distributed among application servers. Stored procedure code is also not as likely to be version-controlled, and restoring code backups means a database restore. Also, splitting logic between the application server and database server can result in even the most trivial mods requiring a time (money) consuming "treasure hunt" to locate where the affected code is and careful co-ordination of the mods between the two servers.

    This, however, is mostly nonsense. It's true that stored procedures may add load to the DBMS, but it's equally true they may significantly reduce the overhead. What's more efficient: transferring 1M rows to an application server and handling them there (often submitting the result back to the DBMS as a parameter for the next query), or just processing them locally within a stored procedure? Not to mention that the "application servers" usually manipulate the data using languages that are a very poor match for such operations (which is basically everything except SQL).
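
    A hedged sketch of the server-side alternative in PL/pgSQL (schema and names hypothetical): one call does the whole job inside the database instead of shipping a million rows to the app tier and back.

        -- Move old orders into an archive table and report how many moved,
        -- all in a single round trip and a single transaction.
        CREATE OR REPLACE FUNCTION archive_old_orders(cutoff date)
        RETURNS integer AS $$
        DECLARE
            moved integer;
        BEGIN
            INSERT INTO orders_archive
                SELECT * FROM orders WHERE order_date < cutoff;
            DELETE FROM orders WHERE order_date < cutoff;
            GET DIAGNOSTICS moved = ROW_COUNT;
            RETURN moved;
        END;
        $$ LANGUAGE plpgsql;

        SELECT archive_old_orders(DATE '2013-01-01');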

    If you had poor development practices (and the fact that you say "stored procedure code is also not as likely to be version-controlled" strongly suggests this), then sure, it might be a nightmare. But that's not the fault of stored procedures; it's purely your fault.

    It's true that it may not always be straightforward to decide what belongs in the application server and what belongs in stored procedures. You simply need to treat them as separate components. I know of apps that built an API on the database using stored procedures and access it almost exclusively through that, and it works perfectly. But if you're just piling up a query here and a call to a procedure there, then sure, it's going to be a mess. I see this frequently in ORM-based Java apps that somehow start with the idea "the ORM is going to handle all of this, no SQL code at all" and then start writing plain SQL queries and procedures as workarounds for performance issues. Well, they certainly deserve that.

    Last, but hardly least, is the fact that if you do want to switch out DBMS products, it's going to be very, very expensive, since not only would the aforementioned PL/SQL have to be rewritten, but the applications would likely be seriously traumatized as well.

    Amen, brother! Many believe that all relational databases are somehow equal and you can painlessly migrate an application from one to another. Well, it turns out that's not true: a common API (SQL) does not mean they share the same features, characteristics, etc.

  • Re:MongoDB--run away (Score:4, Interesting)

    by lgw ( 121541 ) on Saturday July 13, 2013 @02:12PM (#44270609) Journal

    MongoDB can't be beat for "store it now, figure out what to index later" data, especially log files. It's probably the best tool for that job.

    Where there's a gaping hole in available software, either open source or commercial, is cloud-scale commodity-server DB sharding. If you know how to partition your data (such that the only queries that need to run across partitions are map-reduce-style queries, and everything ACID stays inside one partition per query), it should be off-the-shelf easy to do that with 10 or 10,000 cheap database servers.

    But that's all homebrew now. AFAIK, all the big cloud players other than Google work that way for their non-logfile data; e.g., Facebook uses sharding over MySQL. But everyone has rolled their own, and no one has open-sourced it or made a product from it. It's very strange.
