Upgrades

PostgreSQL 7.1 Released

Moosbert writes: "The latest and greatest PostgreSQL has been released. Major new features include write-ahead logging for better performance and reliability, outer joins, unlimited row size, and more complex queries (subselects in FROM, views with aggregates, etc.). Get yours from a mirror."
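
For the curious, here is a rough sketch of the kind of SQL that 7.1 now accepts; the tables and columns are invented purely for illustration:

    -- outer joins
    SELECT c.name, o.total
      FROM customers c LEFT OUTER JOIN orders o ON c.id = o.customer_id;

    -- a subselect in FROM (note the required alias)
    SELECT spend.customer_id, spend.avg_total
      FROM (SELECT customer_id, AVG(total) AS avg_total
              FROM orders
             GROUP BY customer_id) AS spend
     WHERE spend.avg_total > 100;

    -- a view with an aggregate
    CREATE VIEW order_totals AS
        SELECT customer_id, SUM(total) AS grand_total
          FROM orders
         GROUP BY customer_id;
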
  • I think that Samba is considered more of a "Linux" piece of software because it's anti-Windows. PostgreSQL is useful on Windows, so it's just an upgrade.
  • I can't seem to find a changelog for this. Their site seems damn slow. Anyone have a mirror of the changelog or release notes?
  • postgres has a number of very nice features, like stored functions and transactions.

    Don't forget cursors. Since they require transactions I suppose you've included them though.

    There's nothing nicer than being able to ask the database for a top ten list or otherwise get a subset of data from a large table without pulling the entire table into Perl. Combine that with the stored functions and you can do some very powerful things within the database... which is actually exactly what databases are so popular for.

    Just about anyone can store things in a flat file and hash it to get fast search results. It's the ability to manipulate data that makes a database powerful, not just its raw speed on simple selects.
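
    For anyone who hasn't played with them, a rough sketch of the cursor approach (the table name is made up): inside a transaction you declare the cursor and fetch only the rows you actually want.

        BEGIN;
        DECLARE top_sellers CURSOR FOR
            SELECT item_id, units_sold
              FROM sales_summary
             ORDER BY units_sold DESC;
        FETCH 10 FROM top_sellers;   -- only ten rows ever leave the database
        CLOSE top_sellers;
        COMMIT;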

  • The one thing missing is online backup. Shutting down and using pg_dump just doesn't rock muh world.

    In every other respect though I love Postgres. It is a serious RDBMS and should have a spot in Open Source history right up there with Samba.

  • On a side note, I'm pretty sure I read somewhere that pg_dump uses MVCC to take a snapshot of the database when you run it. Correct me if I'm wrong, but wouldn't that imply that it can be run hot with no ill effects...?

    I'm not so sure it could be done with no ill effects... you could probably get away with it, but it's almost guaranteed that the time you need the data back is the time it didn't work. :-) The postgres documentation and mailing lists seem to suggest that even if it is possible right now, it is definitely NOT recommended.

    It would be very nice to be able to tar -czf /dev/tape /dev/postgresdata to get a live snapshot... no bogus backup programs, nada... just a device which, when read, would give a postgres-sanctioned copy of the database and, when written, would restore to that state.

  • I don't know about you, but I try to keep my database as separated as possible from the application/business logic.

    This is fine for some, maybe even most, instances, but when you are working on high-performance and/or large applications it becomes necessary to integrate more tightly with the datastore and let the database do the hard work. Hell, even letting the database do things like sorting and ordering the output (which doesn't rely on tight integration) tends to be faster and a more efficient use of resources than pulling it into application space and doing it there.

    This is especially true with large tables when you only want a certain subset (top 10 selling items, top 50 support issues, etc.). You don't pull all 10,000 (100k, 1M, whatever) rows into a Perl script or C program and strip out the ones you don't want; that's ludicrous. You ask the DB for just what you want, via cursors.

    Sticking bunches of code in the database via stored procedures violates this; the result is my app is now tied to the database. That can be quite a pain to upgrade.

    Moving from Postgres to Sybase or Oracle when stored procedures and other goodies are present is trickier, yes. But the performance you get with stored procedures not only allows you to forego the upgrade for (much) longer but also tends to out-and-out work better.

    Why, then, is it seen as an advantage to perform application logic in the database?

    As mentioned above: performance. Also, a more efficient use of resources (memory, cached storage, processor utilization, etc.): the database already knows where things are, and has them hashed, indexed, and possibly (pre-)cached and ready for your app.

    You don't want to put a lot of complex and "proprietary" (in the application sense -- things which aren't used to maintain database integrity or get results out faster) code in the stored procedure. You do, however, want to put complex selects, inserts and updates into the stored procedures and call them from application space as a higher-level command. Or have the database call them automatically to keep data integrity via triggers.

    • e.g. a large insurance-type application won't put the actual detailed calculations for premiums into database stored procedures, but you probably would put in a complex function which updates the premium matrices on each payout, based on the data entered for every payout. In my mind that doesn't violate any design policy; in fact it helps enforce database integrity, since it doesn't rely on an outside application to keep the premium matrices up to date. It's done inside the database with a trigger, at the exact instant each payout is entered (rough sketch at the end of this comment).

    For what it's worth, I agree with you about keeping the code that runs the application the hell out of the database, but I also see the value in integrating specific functions right into the database. It's a bit of a trade-off for upgradability, but really, if you're specifying a system you don't WANT to upgrade it for as long as possible, and you'll quote hardware/software which will do the job right the first time around and with a certain amount of anticipated growth.
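
    Here's a rough sketch of what I mean in PL/pgSQL -- all the table, column and function names are invented, and plpgsql has to be installed in the database first (createlang):

        CREATE FUNCTION update_premium_matrix() RETURNS opaque AS '
        BEGIN
            -- fold the new payout into the premium matrix as it is entered
            UPDATE premium_matrix
               SET total_paid = total_paid + NEW.amount
             WHERE risk_class = NEW.risk_class;
            RETURN NEW;
        END;
        ' LANGUAGE 'plpgsql';

        CREATE TRIGGER payout_entered
            AFTER INSERT ON payouts
            FOR EACH ROW EXECUTE PROCEDURE update_premium_matrix();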

  • "...while they keep one-upping each other..."

    um... as far as i know, the only advantage mysql has over postgres is speed. postgres has a number of very nice features, like stored functions and transactions.

    it does seem as though mysql is catching up with the features. should be interesting to see if it can maintain the speed.
  • a changelog can be found on any mirror. it's named readme.v7_1.

    an unloaded mirror can be found here: ftp://mars.capital-data.com/pub/postgresql/README.v7_1 [capital-data.com].

  • There is some documentation work in progress on the topic here -> http://techdocs.postgresql.org/installguides.html#replicating [postgresql.org]
  • As per tradition, it's really not that big of a deal. The INSTALL doc demonstrates how to do it in just a couple commands:
    1. pg_dumpall > db.out
    2. install postgresql
    3. psql -d template1 -f db.out

  • I don't know about you, but I try to keep my database as separated as possible from the application/business logic. The resultant 3-tiered design lets me change any portion of the app independently of the others. Most important, it lets me change the database to another that suits my needs with a minimum of effort. Sticking bunches of code in the database via stored procedures violates this; the result is my app is now tied to the database. That can be quite a pain to upgrade.

    Why, then, is it seen as an advantage to perform application logic in the database?
  • *BSD users (and developers) are all complete jackasses... you'll fit right in. - Linus Torvalds


    You know that the email that quote came from was an April Fool's joke [zork.net], right? Unless you think Linus Torvalds really uses MS Outlook...
  • According to the /. icons, PostgreSQL is an upgrade while Samba is Linux-related. Hello?

    I can't wait for the new religious wars to begin. MySQL vs. PostgreSQL. Who cares who comes out on top? I doubt either will. But while they keep one-upping each other and tweaking this and that and the other ad nauseam, one of these days we'll stop and look around. Suddenly they won't be the Open Source "challengers." They'll be the default choice. And most of us probably won't see it happen. We'll only notice it after the fact.

    The old guard will follow the road of most closed-source software: Specific niches for select clients.

    So has anyone used it on Win32 lately? I've only ever played with PostgreSQL on Linux. I'm curious as to how well the ODBC libraries hold up.
  • Heh heh... "select" clients. Sorry about that folks. Totally unintentional pun.
  • Yes, I've tried the 7.1 ODBC driver, and it works great. It's even been updated in the 7.1 source. (With minor tweaks.) That said, when I checked, (7.1 was still in beta, then) to build the Win32 ODBC driver, you still had to use MSVC, rather than Cygnus GCC. And the instructions for how to do that were not terribly well publicized. I'm sure the updated binaries of the driver will be available soon. (If not already.)
  • It's as simple as that - unless you have a really low-performance computer.

    Anything Mysql does, postgres can do. Or close to.

    Postgres supports stuff that mysql doesn't support, or has JUST started to support.

    The only real advantage to MySQL is that you're allowed more freedom in changing existing tables. I must admit I get kinda annoyed when I have to redo yet another PostgreSQL table because it turned out I didn't need that column anyway.
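
    (The usual workaround, since there's no ALTER TABLE ... DROP COLUMN yet, is to rebuild the table by hand -- something like the sketch below, with made-up names. Indexes, defaults and triggers all have to be recreated afterwards.)

        SELECT id, name, price      -- everything except the unwanted column
          INTO TABLE items_new
          FROM items;
        DROP TABLE items;
        ALTER TABLE items_new RENAME TO items;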
  • Actually, it's the topic that the user selected for their submission, and I could not find another one that fit any better.
  • Yeah, I'm using the ODBC driver to log MS VSS checkins to SourceForge, running a beta of 7.1.
  • Properly designed, the stored procedures can be considered another tier in your n-tier application -- there's quite a bit of data representation that can be done that isn't "business logic" per se. Sure, that's largely platform-dependent code, but it's also at the point of the biggest bottleneck, where compiled and optimised code matters the most.

    Things like EJB are just terrible for database performance if you aren't aware of what you are doing. (Ever wonder why Oracle and Sun are the biggest proponents?) Client-side joins and transactions have a huge cost in the responsiveness of an application and of the database server in general, and they are done behind the scenes by tools like entity beans. If you aren't aware of this because you are following some middleware mantra, you aren't doing your job correctly.

    Besides, you already have "business logic" in the database -- it's called a schema (unless your app has a sloow everything2-style generic node/attribute schema, in which case server-side joins are even more important.)
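
    To make the client-side join complaint concrete, compare the round trips (tables invented for the example):

        -- one round trip: the server does the join
        SELECT o.id, o.total, c.name
          FROM orders o JOIN customers c ON c.id = o.customer_id
         WHERE o.placed_on = CURRENT_DATE;

        -- what a naive entity-bean setup tends to do instead:
        --   SELECT id, total, customer_id FROM orders WHERE placed_on = CURRENT_DATE;
        --   ...and then, one query per row that comes back:
        --   SELECT name FROM customers WHERE id = ?;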
  • Last time I checked the newest ODBC driver for Windows PostgreSQL clients was 6.5.

    Worked with 7.0, though.

    Anyone tried 'em with 7.1?

  • Shouldn't be too long. New in 7.1 is transaction logging, which is the core of a good incremental on-line backup. They added it mostly for performance reasons and were in a hurry to get it out the door, so the backup functionality didn't get implemented. Now that the pressure is off a little, we can hope to see it being worked on (7.2 or 7.11 maybe?).

    On a side note, I'm pretty sure I read somewhere that pg_dump uses MVCC to take a snapshot of the database when you run it. Correct me if I'm wrong, but wouldn't that imply that it can be run hot with no ill effects...?

  • As per tradition, it's a pain converting the binary data-on-disk files between versions.
