PostgreSQL 9.2 Out with Greatly Improved Scalability
The PostgreSQL project announced the release of PostgreSQL 9.2 today. The headliner: "With the addition of linear scalability to 64 cores, index-only scans and reductions in CPU power consumption, PostgreSQL 9.2 has significantly improved scalability and developer flexibility for the most demanding workloads. ... Up to 350,000 read queries per second (more than 4X faster) ... Index-only scans for data warehousing queries (2–20X faster) ... Up to 14,000 data writes per second (5X faster)" Additionally, there's now a JSON type (including the ability to retrieve row results in JSON directly from the database) à la the XML type (although lacking a broad set of utility functions). Minor, but probably a welcome relief to those who need them, 9.2 adds range restricted types. For the gory details, see the what's new page, or the full release notes.
version 9.2 (Score:1)
From the summary:
"9.1 adds range restricted types"
nice proof reading...
How PostgreSQL stacks up to Oracle? (Score:4, Interesting)
I've been searching for a comparison chart of the various SQL databases, but all I can find are very, very old articles.
There's a database project that I'm working on, and I'm choosing which SQL database to use.
MySQL is obviously not up to par.
I don't know how good PostgreSQL is - so, is there a comparison chart or something that can help those of us making the purchasing decision choose one over the other?
Thank you!
Re:How PostgreSQL stacks up to Oracle? (Score:5, Informative)
Generally there is very little in the way of logical data-manipulation capability in which Oracle exceeds PostgreSQL (usually the opposite, actually). The main advantage Oracle has is at the extreme high end of scalability and replication, and that benefit is offset by massive complexity in setup and configuration. Even there, PostgreSQL is closing fast these days, with built-in streaming replication, table partitioning, and all sorts of high-end goodies.
I do all sorts of PostgreSQL consulting, and you would be surprised at the number of large companies and government organizations considering migration from Oracle to PostgreSQL.
And if you *really* need PostgreSQL to go into high gear, just pay for the commercial Postgres Plus Advanced Server from EnterpriseDB and you will get a few heavy-duty add-ons, including an Oracle compatibility layer.
Also, IMHO one of the really cool things about PostgreSQL is the number of very geeky tools it puts at your disposal, such as a rich library of datatypes and additional features, along with the ability to create your own user-defined datatypes.
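As a small, hypothetical sketch of that last point (the names here are made up, not from the parent post), a domain wraps a built-in type with your own validation rule and can then be used like any other column type:

    -- a reusable type that only accepts plausible US ZIP codes
    CREATE DOMAIN us_zipcode AS text
        CHECK (VALUE ~ '^[0-9]{5}(-[0-9]{4})?$');

    CREATE TABLE addresses (
        street text,
        zip    us_zipcode
    );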
Re:How PostgreSQL stacks up to Oracle? (Score:4, Informative)
and you would be surprised at the number of large companies and government organizations considering migration from Oracle to PostgreSQL.
Not really.
I've had no experience with the database end of things, but I've been on the receiving end of some other Oracle "products" at two places I've been. Once you've been Oracled, there is a strong incentive never to go anywhere near them again, no matter how they look on paper.
When it comes to utter disdain and hatred for their customers, Oracle make Sony look like rank amateurs.
As far as Oracle are concerned, the customer is a fool whose sole purpose is to be screwed over for as much cash as possible.
Re: (Score:3)
Re: (Score:1)
Their developers suck. Go look at the sort of bugs MySQL gets, AND gets AGAIN.
MySQL is the PHP of databases.
Example: http://bugs.mysql.com/bug.php?id=31001 [mysql.com]
Notice the part where the bug is reintroduced. If they required regression tests to pass before releases, this bug would not have happened again.
Re: (Score:3)
Lots of people use it, but it is just hard to trust your data to MySQL. Just a moment ago I posted a link above to this video which illustrates it:
http://www.youtube.com/watch?v=1PoFIohBSM4 [youtube.com]
Re: (Score:2)
Lots of people use it, but it is just hard to trust your data to MySQL. Just a moment ago I posted a link above to this video which illustrates it:
http://www.youtube.com/watch?v=1PoFIohBSM4 [youtube.com]
The person narrating that video sounds so much like Mr. Garrison I couldn't make it past the first minute. However, the video is based largely on the info found here:
http://sql-info.de/mysql/gotchas.html [sql-info.de]
Re: (Score:2)
Yes, I missed that link the first time I watched it, thanks.
In the comments it says:
"the vast majority of these gotchas have been solved with SQL_MODE in MySQL 5.0. The SQL_MODE must be set in your configuration file once in then you're done."
So that's also interesting to know.
Re: (Score:2)
Re: (Score:1)
see http://www.postgresql.org/ [postgresql.org]
PostgreSQL has had ACID compliance built in from the beginning. MySQL added it much later.
Over the last 18 years I have 3 times gone searching on the Internet for comparisons - each time PostgreSQL came out better than MySQL!
PostgreSQL is more standards compliant than MySQL, and has far fewer gotchas (unintended consequences of doing something that seemed so straightforward).
I have the misfortune to have a client with a
Re: (Score:2)
And yeah, comparisons on the Internet are of course always true
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
All that I want to convey is that we use MySQL for bu
Re: (Score:2)
Interesting that you feel PostgreSQL easier to set up, I have the exac
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
MySQL has some nice replication built in, I believe; I've never used it.
Other than that, I would tread lightly with MySQL:
"Why not MySQL"
http://www.youtube.com/watch?v=1PoFIohBSM4 [youtube.com]
Re: (Score:2)
That's great and all, but . . . (Score:3, Funny)
Re:That's great and all, but . . . (Score:5, Informative)
9.3. Seriously.
http://rhaas.blogspot.com/2012/06/absurd-shared-memory-limits.html
Re: (Score:2)
What a seriously sensible and simple solution. If I could mod up, I would, but I can't so I will reply.
Re:That's great and all, but . . . (Score:5, Informative)
I just posted this to the blog, but I will repeat it here --
There is a very good reason we OS vendors do not ship with SysV default limits high enough to run a serious PostgreSQL database. There is very little software that uses SysV in any serious way other than PostgreSQL, and there is a fixed overhead to increasing those limits: you end up wasting RAM for all the users who do not need the limits to be that high. That said, you are late to the party here; vendors have finally decided that the fixed overheads are low enough relative to modern RAM sizes that the defaults can be raised quite high. DragonFly BSD has shipped with greatly increased limits for a year or so, and I believe FreeBSD has as well.
There is a serious problem with this patch on BSD kernels. All of the BSD SysV implementations have a shm_use_phys optimization which forces the kernel to wire up memory pages used to back SysV segments. This increases performance by not requiring the allocation of pv entries for these pages and also reduces memory pressure. Most serious users of PostgreSQL on BSD platforms use this well-documented optimization. After switching to 9.3, large and well-optimized Pg installations that previously ran well in memory will be forced into swap because of the pv entry overhead.
Re:That's great and all, but . . . (Score:5, Insightful)
I don't see your comment on the blog (maybe it has to be approved?), but the same issue was raised here [nabble.com] during review of the patch. The concern was mostly blown off (most PG developers use Linux instead of BSD, that might well be part of it), but if you had some numbers to back up your post, the -hackers list would definitely be interested. Ideally, you could give numbers and a repeatable benchmark showing a deterioration of 9.3-post-patch vs. 9.3-pre-patch on a BSD. If that's too much work, just the numbers from a dumb C program reading/writing shared memory with mmap() vs. SysV would be a good discussion basis.
Re: (Score:2)
My guess is that it is only used for PostgreSQL cluster setups, allowing different instances of PostgreSQL to chat together, as said in the blog.
Re:That's great and all, but . . . (Score:4, Informative)
Re:That's great and all, but . . . (Score:4, Informative)
While Postgresql does use the Apache model, there is middleware available (google 'pgpool' for an example) that amongst other things will queue requests so they can be serviced by a limited number of children. Of course this only matters if there are an awful lot of simultaneous queries (without the corresponding amount of server RAM).
However, your claim about threads per CPU is oversimplified, and especially wrong for a DB server where processes will most likely be IO bound. With 1 core, for example, there is nothing wrong with having 5 processes parsing and planning a query for a few microseconds while the 6th is monopolising IO, actually retrieving query results. Or the reverse - having 1 CPU-bound process occasionally being interrupted to service 5 IO-bound processes would negligibly impact the CPU-bound query while hugely improving latency on the IO-bound queries.
Re: (Score:2)
Ideally, any single service/application should NEVER have more than n+1 threads, where n is the number of logical CPUs.
In the ideal world you'll never have more nonparallelizable tasks than you have CPUs.
However in the real world you often do. It is usually better for the application developers to focus on having their application solve the application related problems, and let the OS take care of the multitasking and other OS related problems.
A process per client also means that if a process crashes it is less likely to affect other clients. And if there are memory leaks for whatever weird/stupid reason, if you close that
Re: (Score:2)
If you're the OP AC, you can try reducing your max worker threads to "n+1 logical CPUs" on a 1500-connection test DB server and see if the DB performs better.
I doubt it will. The thing is a thread of execution is a useful concept for a programmer - you set up a thread to handle each task and let the OS worry about multiplexing efficiently across logical/physical/whatever CPUs. Same goes for pr
Re:That's great and all, but . . . (Score:4, Informative)
I don't think this is true any more. Threads are lightweight... that's the whole point. They all share the same pmap (same hardware page table). Switching overhead is very low compared to switching between processes.
The primary benefit of the thread is to allow synchronous operations to be synchronous and not force the programmer to use async operations. Secondarily, people often don't realize that async operations can actually be MORE COSTLY, because it generally means that some other thread, typically a kernel thread, is involved. Async operations do not reduce thread switches, they actually can increase thread switches, particularly when the data in question is already present in system caches and wouldn't block the I/O operation anyway.
There is no real need to match the number of threads to the number of cpus when the threads are used to support a synchronous programming abstraction. There's no benefit from doing so. For scalability purposes you don't want to create millions of threads (of course), but several hundred or even a thousand just isn't that big a deal.
In DragonFly (and in most modern Unixes) the overhead of a thread is sizeof(struct lwp) = 576 bytes of kernel space, +16K kernel stack, +16K user stack. Everything else is shared. So a thousand threads have maybe ~40MB or so of overhead on a machine that is likely to have 16GB of ram or more. There is absolutely no reason to try to reduce the thread count to the number of cpu cores.
--
There are two reasons for using locked memory for a database cache. The biggest and most important is that the database will be accessing the memory while holding locks, and the last thing you want to have happen is for a thread to stall on a VM fault paging something in from swap. This is also why a database wants to manage its own cache and NOT mmap() files shared... because it is difficult, even with mincore(), to work out whether the memory accesses will stall or not. You just don't want to be holding locks during these sorts of stalls; it messes up performance across the board on a SMP system.
Anonymous memory mmap()'s can be mlock()'d, but as I already said, on BSD systems you have the pv_entry overhead which matters a hell of a lot when 60+ forked database server processes are all trying to map a huge amount of shared memory.
Having a huge cache IS important. It's the primary mechanism by which a database, including postgres, is able to perform well. Not just to fit the hot dataset but also to manage what might stall and what might not stall.
In terms of being I/O bound, which was another comment someone made here... that is only true in some cases. You will not necessarily be I/O bound even if your hot data exceeds available main memory if you happen to have an SSD (or several) between memory and the hard drive array. Command overhead to an SSD clocks in at around 18uS (versus 4-8mS for a random disk access). SSD caching layers change the equation completely. So now instead of being I/O bound at your ram limit, you have to go all the way past your SSD storage limit before you truly become I/O bound. A small server example of this would be a machine w/16G of ram and a 256G SSD. Whereas without the SSD you can become I/O bound once your hot set exceeds 16G, with the SSD you have to exceed 256G before you truly become I/O bound. SSDs can essentially be thought of as another layer of cache.
-Matt
Re: (Score:1)
Because it's not just caching ...
Most of the shared memory is usually reserved for shared buffers, i.e. cached blocks of data files - this is something like a filesystem cache (and yes, some data may be cached twice) with the additional infrastructure for shared access to these blocks (especially for write), and so on. But there's more that needs to be shared - various locks / semaphores etc. info on connections, cluster-wide caches (not directly files) etc.
I'm not saying some of this can't be done using a
Re: (Score:2)
Here's the problem in a nutshell... any memory mapping that is NOT a sysv shm mapping with the use_phys sysctl set to 1 requires a struct pv_entry for each pte.
Postgres servers FORK. They are NOT threaded. Each fork attaches (or mmap in the case of this patch) the same shared memory region, but because the processes fork instead of thread each one has a separate pmap for the MMU.
If you have 60 postgres forked server processes each mapping, say, a 6GB shared memory segment and each trying to fault in the e
Re: (Score:2)
Yes, and that is precisely what happens. But it means that we had to size-down the shared-memory segment in order to take into account that the machine had 7GB less memory available with that many servers running.
There is a secondary problem here... not as bad, but still bad, and that is the fact that each one of those servers has to separately fault-in the entire 6GB. That's a lot of faults. There would be 1/60th as many faults if the servers were threaded. This is a secondary problem because it only
Re: (Score:3, Informative)
http://postgresapp.com/
Re:That's great and all, but . . . (Score:5, Funny)
you atheists love to take all the fun out of things, don't you?
Eliminate the human sacrifice now and next you'll be saying we have to get rid of our Steve Jobs altars.
Re: (Score:3)
Get rid of your Steve Jobs altars!
Range data types (Score:5, Interesting)
Postgres's range data type allows you to create unique checks on ranges of time. In two lines of code, this can do every single logic check needed to ensure no two people schedule the same room at the same time.
How this is not showing up on anyone's radar is beyond me, or maybe we all just use Outlook or Google Calendar now. However, the range types are not limited to the application of time; they apply to anything that requires uniqueness along a linear dimension, as opposed to just checking whether any other record matches the one you are trying to insert.
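A minimal sketch of what the parent describes, with made-up table and column names (the btree_gist extension is needed so the equality part of the constraint can be handled by the GiST index):

    CREATE EXTENSION btree_gist;

    CREATE TABLE room_booking (
        room   text,
        during tsrange,
        EXCLUDE USING gist (room WITH =, during WITH &&)
    );

    -- the second insert fails because the ranges for room '101' overlap
    INSERT INTO room_booking VALUES ('101', '[2012-09-10 10:00, 2012-09-10 11:00)');
    INSERT INTO room_booking VALUES ('101', '[2012-09-10 10:30, 2012-09-10 11:30)');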
Re: (Score:2)
TFS apparently (from the link, which goes to range datatypes) meant to refer to them when it made the comment about "range-restricted datatypes".
Re: (Score:3)
Re: (Score:3)
No, the range is data, not part of the column definition, I would say "RTFA", but to be fair the link was mislabelled in TFS as being about "range-restricted types", rather than range types.
But here's the docs on range types [postgresql.org]. The scheduling use case is the basic example of exclusion constraints on range types (Sect 8.17.10 in the linked doc.)
Re: (Score:3)
Re: (Score:2)
Yeah, or, maybe you could go look at how range fields work, and find out that that's part of the range field already.
No, actually, it wouldn't, pretend-o-saurus. It's called "an exclude constraint."
Re: (Score:2)
Re:Range data types (Score:4, Informative)
Oh, it's simple enough to do with two separate fields and a check constraint. That's how you'd do it in other DB engines, in fact.
Ensuring there are no overlaps is an entirely different story, however: queries against those two fields cannot make any reasonable use of an index. The ranged type, by contrast, allows you to query the data using a nearest neighbour search and a GiST index.
Think of a GiST index as indexing the smallest boxes that enclose your shapes of interest. When queried, the DB scans for boxes that overlap your box of interest, and discards rows that don't match the data's actual shape.
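For instance, something like this (assuming a bookings table with a tsrange column called during; not taken from the parent) builds the kind of index being described and lets overlap queries use it:

    CREATE INDEX bookings_during_idx ON bookings USING gist (during);

    SELECT * FROM bookings
    WHERE during && tsrange('2012-09-10 10:00', '2012-09-10 11:00');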
Re: (Score:2)
Optimization of a constraint involving date ranges is a bit more difficult than you might think, and having it as one unified type makes queries a lot cleaner and indexes a lot more efficient (if done as GiST indexes, anyway).
Old: WHERE (a.starttime BETWEEN b.starttime AND b.endtime OR b.starttime BETWEEN a.starttime AND a.endtime)
New: WHERE a.timerange && b.timerange
The speedup when you're doing things like trying to find overlaps between two lists of tens of thousands of ranges each is phenomenal.
Before 9.
Re: (Score:2)
Also, strictly speaking, you can't do the first one as a constraint at all (you can do it as a query condition, or enforce a constraint-like be
Re: (Score:2)
Heh, I should have guessed.
It was a few years ago, but we actually tossed around some ideas on a standard format for applying the range concepts to types besides timestamps. One of the issues then was that a small handful of built-in types have a notion of infinity/-i
Infinities and unbounded ranges (Score:2)
I think it is a good decision in that it provides a syntactic construct for ranges that are unbounded on either
Postgres-Curious (Score:5, Interesting)
TL;DR: Is there an advanced PostgreSQL for MySQL Users guide out there somewhere? Something more than basic command-line equivalents? And preferably from the last two major releases of the software?
Long version
I've been using MySQL personally and professionally for a number of years now. I have set up read-only slaves, reporting servers, multi-master replication, converted between database types, set up hot backups (regardless of database engine), recovered crashed databases, and I generally know most of the tricks. However, I'm not happy with the rumors I'm hearing about Oracle's handling of the software since their acquisition of MySQL's grandparent company, and I'm open to something else if it's more flexible, powerful, and/or efficient.
I've always heard glowing, wonderful things online about PostgreSQL, but I know no one who knows anything about it, let alone advanced tricks like replication, performance tuning, or showing all the live database connections and operations at the current time. So for any Postgres fans on Slashdot, is there such a thing as a guide to PostgreSQL for MySQL admins, especially with advanced topics like replication, tuning, monitoring, and profiling?
MariaDB and Percona (Score:4, Insightful)
Oracle is not that big of a concern.
There is MariaDB [mariadb.org] which is data-compatible with MySQL, and has some nice additions (like microsecond performance data), and there is also Percona Server [percona.com].
If Oracle messes up, like they did with OpenOffice, there will be another version that they cannot touch, like LibreOffice.
Re:Postgres-Curious (Score:5, Informative)
PostgreSQL replication is new (revision 9.1), so there may be little out there (yes, there was replication before, but with additional software, like Slony).
I'm in the weird position of having used PostgreSQL mainly --- for seven years, writing dozens of applications --- but never MySQL. I've also used --- out of necessity only --- Microsoft SQL, Oracle, and Ingres, and PostgreSQL is much better. Just from a programming point of view, the syntax is, in my mind, simpler yet more powerful --- more ANSI-SQL-compliant, too, I've heard.
Anyway, the point is, I've never used anything I like more. I adore PostgreSQL. It's so powerful. So many useful datatypes, functions, syntax. Not to mention its ACIDity.
To your question, though --- are there any good books to help a MySQLite move to PostgreSQL? Not that I've come across. But then again, I haven't found any good PostgreSQL books --- or even, for that matter, very well-written SQL books, period. They all are stupefyingly boring --- but I got what I could out of them.
Actually, PostgreSQL's documentation is not that bad. In particular, try sections I, II, V, VI, and III, in that order. Skip anything that bores you at first. You can always come back. Honestly, there can't be that much of a learning curve for you, coming from MySQL.
Re: (Score:3)
9.0 was the first version with replication, not 9.1, and we have had things like warm standby since 8.1.
Re: (Score:3)
There are two PostgreSQL books I used a lot in the past: PostgreSQL 9.0 High Performance by Gregory Smith (Packt) and PostgreSQL Second Edition by Douglas Douglas (O'Reilly).
There is an extended list of books listed on the PostgreSQL homepage: http://www.postgresql.org/docs/books/ [postgresql.org]
Problem with all books is, they get outdated too quickly. While a lot of the basic info is still true for the books above, the O'Reilly book is very much based on 8.4, which is pretty ancient already. Perhaps getting an ebook is less
Re: (Score:2)
If you are looking for a good SQL programming book, the PL/SQL book from Oracle is the best book written in this area, IMHO. As for the MySQL to PostgreSQL book, there was no incentive to write it for PostgreSQL power users. We mostly looked over time at MySQL as a toy database and at its users as at best misguided and at worst not caring about data integrity (a cardinal sin in my book). So writing such a book would be sort of like "Black Hat Hacking for Script Kiddies". Sure it could be done, but who wants a bunch o
Re: (Score:3, Informative)
Well, recommending a PL/SQL book as a source for learning SQL is a bit silly IMHO. Moreover, I find the books from Oracle rather bad - there are better sources to learn PL/SQL (e.g. the one from Feuerstein is a much better book).
And in fact there's a great book about administering PostgreSQL from Hannu Krosing - it's called "PostgreSQL 9 Admin Cookbook" [http://www.packtpub.com/postgresql-9-admin-cookbook/book]. It's a great set of recipes for admins for common tasks, not an exhaustive documentation (that's
Re: (Score:2)
I'm not sure if the book available to Oracle employees on PL/SQL is the same as the one available externally; I assume so. The books by Oracle are generally not so good, I'd agree, but the PL/SQL one is a rare gem. It sounds like you read a bunch of Oracle books, but not this one, and you recommend what you did read on the subject, which is fine. But in this case ... Anyway....
Anyway, the point was not a good Postgres book; there are some. The point was a comparison book taking you from MySQL to PostgreSQL and
Re: (Score:2)
Not sure which Oracle books you mean - I've read e.g. "PL/SQL Programming" (ISBN 978-0072230666) and "Expert Oracle PL/SQL" (ISBN 978-0072261943) and probably some more when preparing for OCP exams. And I'd definitely recommend ISBN 978-0596514464 instead of the first one. But yeah, it's a matter of opinion.
But you're right - there are no "PostgreSQL for MySQL people" guides. The problem is that almost no one is able to write it. The people who are switching from MySQL to PostgreSQL don't have the knowledge
Re: (Score:2)
I have to admit, as a long-time MySQL user, it really messes with your head and makes you not do things in a way that works with MS SQL Server or PostgreSQL. Especially how MySQL does its lazy grouping.
I've only tried other databases for a short while and gave up, because I knew that I'd have to learn everything properly. If I were starting a brand new project, it might be great, but I wouldn't want to rewrite an existing database app with it.
Re:Postgres-Curious (Score:5, Informative)
Unfortunately, I haven't found a really good guide of the type you are looking for. I can give you my experiences, going from MySQL to PostgreSQL, back to MySQL to support it at a large company, and then back to PostgreSQL. Generally, these days there is really *nothing* that I can find about MySQL that can't be done better in PostgreSQL. I mean it. At least for a while MySQL could boast of native replication, but Postgres now has that, and it is arguably much more robust than MySQL's solution (I had the misfortune to support MySQL replication for 2 years). Ditto with full-text indexing, and just about any other MySQL feature.
Main differences:
1. PostgreSQL is much more "correct" in how it handles data and has very little (essentially no) unpredictable or showstoppingly odd behavior of the sort you find in MySQL all the time. Your main problem in migrating an app to PostgreSQL will be all those corner cases that MySQL just "accepts" when it really shouldn't, such as entering '0000-00-00' into a date field, or allowing every month to have days 0-31 (there is a small illustration after this list). In other words, PostgreSQL forces you to be a lot more careful with your data. Annoying, perhaps, if you are developing a non-mission-critical system like a web CMS or some such, but absolutely a lifesaver if you deal with data where large numbers of dollars and cents (or lives) depend on correct handling.
MySQL has provided for a fair amount of cleanup for those who enable ANSI standard behavior, but it is still nowhere close to PostgreSQL's level of data integrity enforcement.
2. MySQL has different table types, each of which supports different features. For example, you cannot have full-text indexing in InnoDB (transactional) tables. PostgreSQL has complete internal consistency in this regard.
3. MySQL has an almost entirely useless error log. PostgreSQL's can be ratcheted up to an excruciating level of detail, depending on what you want to troubleshoot. Ditto with error messages themselves.
4. MANY MANY more choices in datatypes and functions to manipulate them. Definitely a higher learning curve, but worth it for expressive capability.
5. Don't get me started on performance. Yes, if you have a few flat tables, MySQL will be faster. Once you start doing anything complicated, you are in for a world of pain. Did you know that MySQL re-compiles every stored procedure in a database on every new connection? PHP websites with per-page-load connections can really suffer.
6. Don't get the idea that PostgreSQL is more complex to work with. If you want simple, you can stick with the simple parts, but if you want to delve into complex database designs and methodologies, PostgreSQL pretty much opens up the world to you.
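To make point 1 concrete, here is a small illustration (hypothetical table; the exact error text may differ between versions):

    CREATE TABLE events (happened_on date);

    -- MySQL in its default, non-strict mode accepts this; PostgreSQL rejects it outright
    INSERT INTO events VALUES ('0000-00-00');
    -- ERROR:  date/time field value out of range: "0000-00-00"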
- Glad to be back in the PostgreSQL world...
Re: (Score:2)
Actually, it really doesn't; it will only recompile the stored procedure if the compiled version has left the cache, so as long as they fit into the cache you will see very little compiling going on.
Re: (Score:2)
Greg Smith's book "PostgreSQL 9.0 High Performance" is a good start.
Re: (Score:1)
No, I'm not aware of such thing ("PostgreSQL for MySQL people" style guide).
The best thing you can do is give it a ride - install it, use http://www.postgresql.org/docs/9.1/interactive/admin.html [postgresql.org] to do the setup etc.
Basically all you need to do to install and start PostgreSQL from source code is this (at least on Linux):
$ cd postgresql-9.1.5
$ ./configure --prefix=/path-to-install
$ make install
$ export PATH=/path-to-install/bin:$PATH
$ initdb -D /database-directory
... fiddle with the config in /database-directory ...
$ pg_ctl -D /database-directory start
Re: (Score:1)
Read the PostgreSQL docs.
Unlike MySQL, their docs are clear, complete and exhaustive. The MySQL docs require you to read user comments on the documentation to learn how the software actually works.
Not the case here; PostgreSQL's design and behavior are clearly and properly documented.
Re: (Score:1)
What's wrong with third-party stuff? I mean, looking back it was silly to expect this to happen with replication (third-party replication solutions, not included in the core), but with the management tools this should not be a problem - there are already tools like repmgr and more to come. The problem with in-core tools is that they hard-code a single way to do things, the release cycle is tightly bound to PostgreSQL itself, and it's a significant effort for the whole community.
Regarding the replication -
Re: (Score:2)
Kludgy? You must be talking about MySQL's "solution". The one that is not really truly transaction-safe, nor dependable. I can't tell you how many times I've logged into a MySQL server in the morning only to find replication broken.
Native replication has been available for almost two years now. The fact that it uses the write-ahead log in conjunction with streaming is exactly the kind of solution you need if you want dependable transaction-safe replication. I suggest that PostgreSQL took longer to achieve b
JSON (Score:3)
To me, JSON is very interesting. I don't know exactly how I'll use it, but it combines all that's great about PostgreSQL with some of what was interesting about CouchDB and other projects like it.
Mainly, one-to-many relationships may be easier. Usually, they are two separate select statements. For example, one to get the article, another to get the comments. Then you patch it all together in PHP, or whatever middle language you're using. With JSON support, that could be a single SELECT, crammed up in JSON, which you then uncram with a single json_decode function call in PHP, which would yield nice nested arrays.
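Something along these lines, with invented table and column names (row_to_json(), array_to_json() and array_agg() are the pieces doing the work):

    SELECT row_to_json(t)
    FROM (
        SELECT a.id,
               a.title,
               (SELECT array_to_json(array_agg(c))
                FROM comments c
                WHERE c.article_id = a.id) AS comments
        FROM articles a
        WHERE a.id = 42
    ) t;

On the PHP side, a single json_decode() of that one column then gives you the nested article-plus-comments structure.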
Re: (Score:1)
I think you just made the database fairies cry.
Re: (Score:1)
Then you definitely don't want to see the DB tables I've seen with just a primary key and a single XML field, then add on XPath indexes.
Re: (Score:2)
I'm not sure adding new SQL features is going to deal with the problem of people not using the features they already have. It's already quite possible in PostgreSQL to do a single select that gets the article data and an aggregate that contains all the comments. Feat
While Postgres is good for many things... (Score:3)
Until they fix the TX number issue (the infamous rollover), they are pretty much out of the running for DBs that have VERY high insert levels, since the vacuum process cannot hope to keep up with tables that have 100's of millions of rows.
I am an Oracle professional but I do keep track of Postgres and like it; the 32-bit TX id is a bit of an Achilles heel, though.
Re: (Score:2)
you can't vacuum your table every 2 billion transactions? did you know autovacuum exists?
There is no table with "100's of millions of rows" that can't be vacuumed every 2 BILLION transactions.
http://www.postgresql.org/docs/current/static/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND
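If you want to see how close each database is to that threshold, one common check (a plain catalog query, nothing version-specific) is:

    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;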
Re: (Score:2)
Re: (Score:1)
If you have a single table with 17 trillion rows then you're doing it wrong. And inserts aren't really an issue with MVCC in PG - I'd focus more on updates.
Partitioning in PostgreSQL will let you split that up into separate physical items on disk. As others have said, you just need to let vacuum scan the table once every 2 billion transactions or so to keep things in check. Rows that aren't updated regularly will be given the special frozen xid and won't be subject to any wraparound issues.
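For reference, the way this is typically set up on the 9.x line is inheritance plus CHECK constraints; a rough sketch with made-up names (enable constraint_exclusion so queries skip non-matching children):

    CREATE TABLE readings (logged_at timestamptz NOT NULL, value numeric);

    CREATE TABLE readings_2012_09 (
        CHECK (logged_at >= '2012-09-01' AND logged_at < '2012-10-01')
    ) INHERITS (readings);

    CREATE TABLE readings_2012_10 (
        CHECK (logged_at >= '2012-10-01' AND logged_at < '2012-11-01')
    ) INHERITS (readings);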
And as far as da
Re: (Score:2)
Yes, there are all sorts of interesting strategies you can employ once you separate the physical storage from the logical presentation... can't be said enough.
Re: (Score:3)
I don't see why not.
You had the IO to create those 17 trillion tuples in the first place, so vacuum will use that same IO capacity to maintain it.
The low billions of tuples isn't much of an issue despite being on spinning disk with very little in memory.
Re: (Score:2)
Until they fix the TX number issue (the infamous rollover), they are pretty much out of the running for DBs that have VERY high insert levels, since the vacuum process cannot hope to keep up with tables that have 100's of millions of rows.
Infamous to whom? A vacuum updates the frozen XID, which is a trivial operation and allows a subsequent XID to safely wrap around. And I'm struggling to think of any common use cases where the volume of inserts is so high that they can't afford a vacuum every two bil
Re: (Score:2)
A vacuum updates the frozen XID, which is a trivial operation and allows a subsequent XID to safely wrap around.
What if you have at least one outstanding transaction/connection? Can vacuum update the frozen XID then?
For example if you have a transaction that's open for a few weeks and happen to have 4 billion transactions during that time.
I believe perl DBI/DBD in AUTOCOMMIT OFF mode starts a new transaction immediately after you commit or rollback. So if you have an application using that library that is idling for weeks, a transaction would presumably be open for the entire time, since it would be connected to the d
Re: (Score:2)
A long open transaction (that has used the table in question) will block autovacuum for those rows.
You can set options in postgresql.conf to auto-kill long transactions if you like (set a hard limit for transaction time).
I solved this another way by only examining IDLE transactions via pg_stat_activity. Any long running transactions are left alone, while long idle transactions are killed.
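Roughly what that looks like on 9.2, where pg_stat_activity's columns changed to pid/state/state_change (the one-hour cutoff is an arbitrary example):

    SELECT pg_terminate_backend(pid)
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
      AND now() - state_change > interval '1 hour';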
Re: (Score:2)
So you're calling yourself an Oracle professional and you're not aware of this: http://www.infoworld.com/d/security/fundamental-oracle-flaw-revealed-184163-0 [infoworld.com] ?
I mean - PostgreSQL does have 32-bit transaction IDs and a well designed process to prevent wraparound.
Oracle has 48-bit transaction IDs, a number of bugs that speed up transaction ID growth, a feature that "synchronizes" transaction IDs through the whole cluster (thus the IDs grow according to the busiest of the instances), and a soft SCN limit
PostgreSQL is so cool (Score:2)
I'm sure most of this applies to MySQL these days, but historically it didn't, and I never saw the attraction of a DB which went through a succession of backends in order to obtain the behaviour PostgreSQL always supplied. It doesn't help that MySQL is Oracle owned, and all the issues with licencing a
Re: (Score:2)
I believe they both improved.
PostgreSQL 7.x, which didn't have autovacuum and needed a lot of tuning, wasn't as much fun either.
I haven't tried something like Drizzle but it seems they ditched a lot of old code and problems.
Re: (Score:2)
We didn't switch until 8.0 or 8.1, after we were able to install as a native Windows application and play with it. The pgsql database servers are actually Linux, but we were still feeling our way there as well.
Range types -- not range-restricted -- are major (Score:3)
First, it's 9.2, not 9.1.
Second, (as shown in the link) these are range types, not range-restricted types. Range-restricted types (as known from, e.g., Ada) are something that (via domains with check constraints) PostgreSQL has supported for a very long time.
Range types, combined with 9.2's support for exclusion constraints, are a pretty major new feature that gives 9.2 a great facility in dealing with (among other things) temporal data and enforcing common logical constraints on such data in the database as simple-to-express constraints rather than through triggers.
Re:/. Poll (Score:5, Insightful)
Re: (Score:1, Informative)
Because we love to bash our keyboards into so much plastic scrap whenever we come across one of its many standards-defiant idiosyncrasies?
Re: (Score:3, Insightful)
Because we love to bash our keyboards into so much plastic scrap whenever we come across one of its many standards-defiant idiosyncrasies?
You mean, idiosyncrasies different from Oracle's idiosyncrasies, Microsoft's idiosyncrasies and IBM's idiosyncrasies?
By the way, care to be specific? Oh yeah, posting anon. Right.
Re: (Score:2)
How are SQL Server's idiosyncrasies different from Microsoft's? Isn't SQL Server a Microsoft product?
Re: (Score:1)
Did you even read the article? The article talks about PostgreSQL, which is an SQL Server from a different vendor. There's also MySQL, and plenty of other SQL Servers.
Re: (Score:1)
"SQL Server" is not some generic name for a relational database -- it's a product from Microsoft. So "SQL Server", "PostgreSQL", "MySQL" etc are all relational database servers, not "SQL Servers".
Re: (Score:2)
Note that the two relevant entries that you mention are both spawned from the same product and code base. Originally, MS SQL Server was Sybase SQL Server.
Re: (Score:2)
I think they're trying to say that Microsoft calls theirs "SQL Server" in such a way as to make it seem that the SQL standard is something they own or control.
Re: (Score:2)
I took the original poster's use of SQL server to denote a Microsoft product.
Re: (Score:2)
Because we love to bash our keyboards into so much plastic scrap whenever we come across one of its many standards-defiant idiosyncrasies?
You mean, idiosyncrasies different from Oracle's idiosyncrasies, Microsoft's idiosyncrasies and IBM's idiosyncrasies?
By the way, care to be specific? Oh yeah, posting anon. Right.
I think the idiosyncrasy that keeps it from running on my Linux servers is probably sufficient. Although that extra level in the table naming hierarchy has been known to cause me to destroy things.
Re: (Score:2)
LOL just use SQLServer you nubs.
I really tried, honestly, but I couldn't find Debian packages for it anywhere on the MS web site.
Re: (Score:2)
Wake me when it catches up with MemSQL.
That would have to be called MemgresQL, wouldn't it?
Re: (Score:1)
Could you please compare a Ferrari F1 and a Liebherr T1-272 mining truck [e.g. http://www.flickr.com/photos/doncampbellmodels/3434490464/ [flickr.com]]? Not possible, right? Different products for different requirements.
Re: (Score:1)
Damn, this was supposed to be a response to the parent flamebait ...
Re: (Score:2, Insightful)
Re: (Score:1)
More seriously, unless you say which features you think MemSQL is ahead of PostgreSQL on, you sound very much like a troll.
The appropriate database software depend