PostgreSQL Getting Parallel Query

New submitter iamvego writes: A major feature PostgreSQL users have requested for some time is for the query planner to "parallelize" a query. Thanks to Robert Haas and Amit Kapila, this has now materialized in the 9.6 branch. Robert Haas writes in his blog entry that so far it only supports splitting a sequential scan between multiple workers, but it should hopefully be extended to work with multiple partitions before the final release, and much more besides in future releases.
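
Roughly, exercising the new feature looks like this (a minimal sketch against a 9.6 development build; the table and filter here are hypothetical, and max_parallel_degree is the setting named in the blog post):

    -- Off by default; a value above zero lets the planner use that many workers.
    SET max_parallel_degree = 4;

    EXPLAIN SELECT * FROM big_table WHERE payload LIKE '%needle%';
    -- Illustrative plan shape: a Gather node collecting rows from workers
    -- that run a Parallel Seq Scan and test the filter in parallel.
    --   Gather
    --     Number of Workers: 4
    --     ->  Parallel Seq Scan on big_table
    --           Filter: (payload ~~ '%needle%'::text)
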
  • This will greatly improve the already impressive PostgreSQL database engine and help it compete against the better-known enterprise relational database engines at a much better price point (free).
    • by kimanaw ( 795600 )
      Hmmm, if so, then there are probably at least six other RDBMSes (mostly owned by deep-pocketed corps, and actually based on Pg) that ORCL could sue and actually hope to collect a check from. Given Pg's open-source nature, suing the project would at best remove the code, but it wouldn't contribute a single dime toward that new dock on Larry's island.
  • I believe Oracle owns a patent on using more than one CPU in a query, if I am correct, so that their own RDBMS looks faster. Maybe the feature is only being enabled now because the patent expired; maybe not.

    I don't want to be sued for using it.

  • by rbrander ( 73222 ) on Thursday November 12, 2015 @08:14PM (#50918833) Homepage

    Just a few months back, lwn.net had a longish story on PostgreSQL. They were scoring a victory with the "UPSERT" command added in 9.5, which quickly updates an existing record or, if none exists, inserts a new one. A big feature on your commercial databases. Apparently, PostgreSQL's biggest worry lately is that it has so many developers adding cool new features that resources are lacking for maintaining and cleaning up the base code. (Possibly an unfair oversimplification of the lwn.net story.)
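
    For reference, the 9.5 syntax looks like this (a minimal sketch; the table and columns are hypothetical):

      INSERT INTO accounts (id, balance)
      VALUES (42, 100.00)
      ON CONFLICT (id) DO UPDATE
          SET balance = accounts.balance + EXCLUDED.balance;
      -- EXCLUDED is the row that would have been inserted.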

    I discovered PostgreSQL while looking for a free geodatabase for mapping, with the PostGIS plug-in... the open plug-in architecture being one of the greatest things about a FLOSS database. After nearly 25 years with Oracle, thinking everything else was a toy by comparison, PG blew me away. Amazing features, high performance, reliable. It's an amazing project, and this news is both impressive and unsurprising.

    • by Kjella ( 173770 )

      This one on the other hand seems like baby steps:

      One rather enormous limitation of the current feature is that we only generate Gather nodes immediately on top of Parallel Seq Scan nodes. This means that this feature doesn't currently work for inheritance hierarchies (which are used to implement partitioned tables) because there would be an Append node in between. Nor is it possible to push a join down into the workers at present. (...) With things as they are, about the only case that benefits from this feature is a sequential scan of a table that cannot be index-accelerated but can be made faster by having multiple workers test the filter condition in parallel.

      No partitioning, no joins; right now the only thing you can speed up is a simple table scan where you can't or won't use an index. This is more "proof of concept" parallelism than a useful feature. I guess in a release or two this will be a big thing.
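
      Concretely, a scan over a classic inheritance-partitioned table goes through an Append node, so per the quoted limitation it can't get a Gather node on top yet (a sketch; the schema is hypothetical):

        CREATE TABLE events (ts timestamptz, payload text);
        CREATE TABLE events_2015 () INHERITS (events);

        EXPLAIN SELECT * FROM events WHERE payload LIKE '%error%';
        -- Illustrative plan: the Append node blocks parallelism for now.
        --   Append
        --     ->  Seq Scan on events
        --     ->  Seq Scan on events_2015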

      • by guruevi ( 827432 )

        The biggest issue with parallelism is that a lot of stuff can't be parallelized in a way that makes sense. The way it is done (dispatching and gathering nodes) only makes sense if the query takes a really long time, otherwise there is a lot of overhead that destroys any type of speedup and could actually make everything else slower. Typically multi-threading in databases is done to speed up multiple independent queries, not a single query.

        • by Kjella ( 173770 )

          The way it is done (dispatching and gathering nodes) only makes sense if the query takes a really long time, otherwise there is a lot of overhead that destroys any type of speedup and could actually make everything else slower.

          Define "really long time", in the example he's running a query that takes less than a second and gets it down to <250ms. Sure, it's not useful for transaction processing but I got many queries running on millions of rows where anything from a minute to an hour is more like it and milliseconds are peanuts. I would think most people have at least some reports that would benefit.

    • by phantomfive ( 622387 ) on Thursday November 12, 2015 @09:56PM (#50919233) Journal
      PostgreSQL has one of the best-commented and cleanest code bases I've ever reviewed. FWIW
      • by Anonymous Coward

        It's like the FreeBSD of RDBMS!

      • by Anonymous Coward

        Having worked with some of the core committers, and attended conferences (and dinners) with others, I'm not the least bit surprised. I consider myself a pretty competent developer, and every single one of them made me feel like a total rookie. Also, none of them had the chest-thumping ego that you see from the heads of a lot of other open-source projects. It was refreshing.

    • by short ( 66530 )
      SQLite already has that as "INSERT OR REPLACE", MySQL as "REPLACE INTO" etc.
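
      They aren't quite the same semantics, though: MySQL's REPLACE (like SQLite's INSERT OR REPLACE) deletes the old row and inserts a fresh one, while 9.5's ON CONFLICT updates the existing row in place (a sketch; the table is hypothetical):

        -- MySQL: delete-then-insert, so columns not listed get reset.
        REPLACE INTO accounts (id, balance) VALUES (42, 100.00);

        -- Postgres 9.5: the existing row is updated in place.
        INSERT INTO accounts (id, balance) VALUES (42, 100.00)
        ON CONFLICT (id) DO UPDATE SET balance = EXCLUDED.balance;
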
  • I only ever got to develop a single project on pgsql and I regret that. This was back in 2001. MySQL was pretty immature at the time but had the enormous install base. I went with PostgreSQL because it was more mature. It never let me down. The deployment went fine, it ran great, customer used it on and off for about 6 years and then it was just no longer needed.

    Fast forward to 2011, ten years later, and now I'm running the show, developing a point of sale for the family business I'm in, and I run with MySQL.

    • Re: (Score:2, Interesting)

      by mysidia ( 191772 )

      That and the master-master replication suite from Percona.

      I think that is a good reason to pick MySQL.

      As much as I like Postgres..... it seems to be a heck of a lot easier to do replication with MySQL and put together a highly-survivable system.

      I'm not even sure how to start going about it with Postgres... although in the past I have had a cold/warm standby Postgres with Slony-I based replication; it was, quite frankly, a PITA.

      • I agree. It is the biggest issue for PostgreSQL. A straightforward native high-availability solution should be one of the highest priorities, if not the highest.
        • The native solution (streaming replication, with many options) has been production-stable since 9.1 and has only increased in features, speed, and reliability.
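
          The moving parts are small (a minimal sketch for a 9.x primary/standby pair; the hostname and role are hypothetical):

            # primary's postgresql.conf
            wal_level = hot_standby
            max_wal_senders = 3

            # standby's recovery.conf
            standby_mode = 'on'
            primary_conninfo = 'host=primary.example.com user=replicator'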

      • Pg replication systems are a dime a dozen; pick one and implement it.

        • Re: (Score:2, Insightful)

          by mcrbids ( 148650 )

          ... and the replication systems are typically not worth much more than a dime, sadly.

          We have a pretty beefy setup: 4x 16-core Xeon DB servers with 128 GB of RAM each and enterprise SSDs, serving hundreds of instances of like-schema databases, one per (organizational) customer, at an aggregate peak of about 1,000 queries/second in a mixed read/write load.

          And we've never been able to get replication to work reliably, ever. In every case we've tried, we've seen a net reduction in reliability.

          • Even with streaming replication in 9.3 onward? And tuning it? Is this a single instance with multiple schemas, or separate instances?

          • by dskoll ( 99328 )

            You're doing something wrong, then. Streaming replication works fine for us on a fairly similarly-sized setup.

      • Re:I miss pgsql (Score:5, Informative)

        by nullchar ( 446050 ) on Thursday November 12, 2015 @11:10PM (#50919457)

        MySQL/MariaDB are still toys in comparison to PostgreSQL.

        Postgres has recursive CTEs, regex replacement, native JSON support (as a column type, plus the ability to trivially convert any query result to JSON), and even base64 decoding and XPath parsing.
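
        A quick taste of those, using only stock Postgres functions (the employees table is hypothetical):

          -- Recursive CTE: walk a hierarchy, returning each row as JSON.
          WITH RECURSIVE subs AS (
              SELECT id, manager_id, name FROM employees WHERE id = 1
              UNION ALL
              SELECT e.id, e.manager_id, e.name
              FROM employees e JOIN subs s ON e.manager_id = s.id
          )
          SELECT row_to_json(subs) FROM subs;

          -- Regex replacement and base64 decoding are built in.
          SELECT regexp_replace(name, '\s+', ' ', 'g') FROM employees;
          SELECT convert_from(decode('aGVsbG8=', 'base64'), 'UTF8');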

        MySQL has had some nice features for years, like REPLACE, but since the 9.x branch Postgres's writable CTEs can do that and more. And now PG has UPSERT for simplicity. Replication has always been great with MySQL, but PG's replication is now easy to administer too. I've relied on Postgres's streaming replication in production since 9.1 and it's been great for years.

        • Re: (Score:2, Informative)

          by mysidia ( 191772 )

          MySQL has had some nice features for years, like REPLACE, but since the 9.x branch

          The features MySQL has are good enough for 99% of real-world web applications.

          Yes, Postgres has more, but the extra features it has don't necessarily add much value for most programs.

          MySQL multi-master replication features are immensely valuable by comparison, and Postgres lacking them has prevented me from using Postgres, more than once.

          • by jedidiah ( 1196 )

            > The features MySQL has are good enough for 99% of real-world web applications.

            It doesn't even have complete SQL support.

            While that might not matter for a trivial web application, for the more interesting ones that involve non-trivial development teams it will cause problems.

          • Yes, Postgres has more, but the extra features it has don't necessarily add much value for most programs.
            MySQL multi-master replication features are immensely valuable

            I guess it depends on your programs. It does take effort to maintain two or more connections (one for write, the others for reads).

            If your use case needs balanced multi-master replication, but simple features, you should use a NoSQL solution.

            Postgres is amazing for reporting, where you can bring back anything you want in a single query, including JSON output (without using PL/pgSQL procedures), and bulk updates are fantastic, with CTEs for selects and layers of them for updates/deletes, especially together with regexp_replace.
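
            A sketch of that layering pattern (the schema is hypothetical): one CTE selects the targets, the next updates them, and the outer query returns the result as JSON:

              WITH stale AS (
                  SELECT id FROM orders
                  WHERE updated_at < now() - interval '30 days'
              ),
              archived AS (
                  UPDATE orders SET status = 'archived'
                  WHERE id IN (SELECT id FROM stale)
                  RETURNING id, status
              )
              SELECT row_to_json(archived) FROM archived;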

    • > I will eventually migrate off MySQL but I don't know if it'll be MariaDB

      So from MySQL to still MySQL?

  • I've been using Postgres for well over a decade now, and I still love it. Yes, you have to tune it, like any powerful tool.

    Granted, this first pass is only for sequential scans, but those are the simplest to parallelize and generally the slowest. Some queries rely on table scans, as not every column can be indexed.
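
    For example, a leading-wildcard match defeats a plain btree index, leaving only a (now parallelizable) sequential scan (a sketch; the names are hypothetical):

      CREATE INDEX ON customers (address);
      -- The btree index can't help with a leading wildcard, so this scans:
      SELECT * FROM customers WHERE address LIKE '%Main Street%';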

    Postgres' growing feature set is amazing. Thanks team!

  • It will be interesting to see how it compares with MongoDB. MongoDB does not use joins. MongoDB is webscale. [youtube.com]

  • This optimisation caters for a niche (admittedly a relatively large one) where there are relatively few queries, but large ones. A more typical usage is many smaller queries. I hope that this does not compromise total throughput, i.e. that parallelisation across multiple concurrent queries is not slowed to allow parallelisation within individual queries. Either that, or it should be a switchable option.
    • by rhaas ( 804642 )
      As the fine blog post explains, it is a switchable option. You can set max_parallel_degree=0 to turn it off. Actually, right now, it's off by default, and you have to set max_parallel_degree>0 to turn it on.
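
      That is (the values here are only examples):

        SET max_parallel_degree = 0;  -- the default: parallel query disabled
        SET max_parallel_degree = 4;  -- allow up to four workers per query
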
    • by jedidiah ( 1196 )

      I wouldn't call it a "niche" exactly. It's one of the major use cases for employing an RDBMS.

      We just have a lot of "database people" with very limited experience and a limited mindset.

    • A more typical usage is where there are many smaller queries.

      Typical usage of what? If I have an OLTP system backing a transactional web-based application, sure, I'd agree. But if I am operating a data warehouse with fact tables housing hundreds of millions of rows, or trying to run largish reports on top of my OLTP system (say for state/federal reporting, or financial reporting), my "typical usage" is not many smaller queries.

      The last two projects I have been on, turning on auto-parallelism in Oracle has made huge performance gains, not just for reads, but also when enabled for writes.

  • I am building an object store where some of my data objects can each be a key-value store that is used as a column in a relational table. Some queries against a table require a full scan (e.g. SELECT * FROM my_table WHERE address LIKE '%Main Street%';). If I had a table with a billion rows, it can take a while to scan the whole address column looking for matches (I dedup the values in each column, but there can still be 100 million unique address values in such a table). The solution is to break each column into chunks that can be scanned in parallel.
