PostgreSQL 7.3 Released

rtaylor writes "Nearly a year's worth of work is out. The new tricks include schema support, prepared queries, dependency tracking, improved privileges, table (record) based functions, improved internationalization support, and a whole slew of other new features, fixes, and performance improvements. Release Email - Download Here - Mirror FTP sites (at bottom)."
  • PostgreSQL is a great database. I always run it as a daemon on my iBook since the Smalltalk development environment that I run needs a relational database for source code control.
    -Mark
  • Quick question (Score:5, Interesting)

    by Noose For A Neck ( 610324 ) on Saturday November 30, 2002 @03:20PM (#4784730)
    Did they do anything to improve/add replication support? That seems to be the only real thing that was holding it back from replacing Oracle, as far as I can tell. I know several projects for such a thing were in the works, but they appeared to be very beta.
    • Re:Quick question (Score:4, Interesting)

      by Khalid ( 31037 ) on Saturday November 30, 2002 @04:17PM (#4784957) Homepage
      I don't know about this one, but one of the things holding it back from replacing Oracle was stored procedures. Table functions now bring one of the key features of stored procedures: the ability to return sets.

      Table Functions : Functions returning multiple rows and/or multiple columns are now much easier to use than before. You can call such a "table function" in the SELECT FROM clause, treating its output like a table. Also, PL/pgSQL functions can now return sets.
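
      A minimal sketch of what that looks like (the table and function names here are invented for illustration):

      ```sql
      -- A set-returning function, callable in the FROM clause as of 7.3.
      -- "items" and "get_cheap_items" are hypothetical names.
      CREATE FUNCTION get_cheap_items(numeric) RETURNS SETOF items AS '
          SELECT * FROM items WHERE price < $1;
      ' LANGUAGE sql;

      -- Treat its output like a table:
      SELECT * FROM get_cheap_items(10.00);
      ```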
      • Re:Quick question (Score:3, Informative)

        by Sokie ( 60732 )
        Postgres has had stored procedures for a while; look up CREATE FUNCTION. But adding better support for result sets does make them quite a bit more useful. Now if only there were a decent JDBC driver that implemented result sets more completely.

        -Sokie
    • I'm afraid that there hasn't been a big effort to make replication user friendly -- what's there works very nicely as asynchronous master -> multiple slaves.

      There are a few things on the developers' minds prior to replication: two-phase commits, improved inter-database connections (see dblink), and point-in-time recovery (via WAL logs).

      Once any two of those three are completed, replication should be a piece of cake :)

      That said, I'm holding out for asynchronous multi-master replication.
    • Re:Quick question (Score:5, Insightful)

      by slamb ( 119285 ) on Saturday November 30, 2002 @05:42PM (#4785219) Homepage
      Did they do anything to improve/add replication support? That seems to be the only real thing that was holding it back from replacing Oracle, as far as I can tell.

      I think that's the sort of thing where, as soon as that feature is filled in, people will say it's "just" something else that's missing. There are a bunch of features I can think of that would be nice but that PostgreSQL doesn't have. And probably there's someone who considers each one to be vital:

      • Database links to Oracle data warehouses. Obviously Oracle has a bit of an advantage here, but you might want to use PostgreSQL and link to an existing system outside your control.
      • Materialized views. These are kind of a cross between tables and views. They are used for expensive views; ones with complex calculations and/or ones over data links. They can be refreshed manually, every N hours, or in some cases when the underlying tables change. They can even be updateable. You can use them to rewrite queries that don't even know about them.
      • Index-organized tables. This is just a performance optimization - instead of the primary key index referencing the table row, the entire row is stored in the index. Good for tables with few columns where you often look for the primary key.
      • Point-in-time recovery. (Planned for 7.4, and not too big a step from what they already have with the WAL, I think.)
      • Savepoints/nested transactions. There's a discussion about this for 7.4. It would also allow a failed update/insert/whatever to not invalidate the entire transaction.
      • Better cursor support in JDBC bindings (and presumably other language interfaces). Right now, executing a query fetches the entire results to memory. That doesn't scale of course. But I hope to see this change soon. Nic Ferrier is working on a patch, though it won't work with resultsets you use across transactions. (PostgreSQL doesn't (yet?) support cursors outside of transactions.)
      • executeBatch and such that I think would be helpful for inserting a lot of rows quickly. There's COPY, but I think it's completely non-standard.
      • Surrounding tools. Oracle Forms & Reports, for instance. I consider GNUe Forms & Reports to be a long way from a replacement. Don't know of any other projects even as close as they are.
      • Tablespaces. Mostly for performance, I think - we just keep all the indexes in a different tablespace on a different array for less disk seeking.
      • Multi-column function-based indexes. "create index person_upper_name_idx on per.person (upper(lname), upper(fname)) tablespace bob".
      • Good Win32 support.
      • Database migration fairies. We use Oracle at work, even though it is a relatively small database. Even if all the other features were completed, I don't think we'd switch unless database migration fairies helped us with the transition.
      • Some of that stuff would be nice to have, some of it can be done by other tools (especially forms and reports).

        But really, it's free and it's great. If you want Oracle-like features, you are going to pay Oracle prices.
      • I think that's the sort of thing that as soon as that feature is filled in, people will say it's "just" something else that's missing

        Maybe, but I have had to disqualify PostgreSQL from consideration because of this issue on two projects this year. I really would have preferred to use it, too. I can't say that about any other feature.

      • Re:Quick question (Score:3, Informative)

        by GooberToo ( 74388 )
        Tablespaces. Mostly for performance, I think - we just keep all the indexes in a different tablespace on a different array for less disk seeking.

        Planned for 7.4 IIRC.

        Good Win32 support.

        Planned for 7.4. It seems the code is already available; it's just being cleaned up prior to merging.

  • Replication out of the box?
  • by limekiller4 ( 451497 ) on Saturday November 30, 2002 @03:25PM (#4784755) Homepage
    WOOHOO!

    DROP COLUMN [column] FROM TABLE [table];

    This up-until-now lacking feature has been the bane of my existence. I HATE cruft being left lying around.

    (btw, I don't know if that is the correct syntax, just a guess)
    • I don't think even Oracle had the drop column feature until Oracle 8i or something.
    • (btw, I don't know if that is the correct syntax, just a guess)

      Not sure whether it's the same in PostgreSQL, but in Oracle, it would be:

      ALTER TABLE [table] DROP COLUMN [column];


    • And you can rename tables and columns on the fly too!

      And the default identifier length is 63! ForReallyLongAndDescriptiveColumnNames!

      For all you people out there with Access who think this is old hat - table and column renaming and dropping can happen while people are connected to the PostgreSQL database - you don't have to kick anybody off the database.

      If you're considering migrating your Access database to MS SQL Server - do consider PostgreSQL. From experience, the amount of suffering is about the same for both transitions, but when you're done, PostgreSQL is more robust, less expensive and less buggy.


      • AND it doesn't have the nasty side-effect of being owned, controlled, and licensed by Microsoft.
      • And you can rename tables and columns on the fly too!

        Oh, it's even better than that. You can do these things within transactions. If you rename a table within a transaction and abort the transaction, it's as if the rename never took place.

        This is very cool stuff. I suspect that dropping columns works the same way. It means that you can do things like exchange table names atomically.
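
        A hypothetical sketch of that atomic table-name exchange (the table names are invented):

        ```sql
        -- Swap a rebuilt table into place inside one transaction.
        BEGIN;
        ALTER TABLE orders RENAME TO orders_old;
        ALTER TABLE orders_new RENAME TO orders;
        COMMIT;
        -- If anything fails before COMMIT, a ROLLBACK undoes both renames.
        ```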


        • Oh, it's even better than that. You can do these things within transactions.

          Good grief, that's cool! Do a whole database transformation in a transaction, and if it borks out, it rolls back? Cool!

          • Good grief, that's cool! Do a whole database transformation in a transaction, and if it borks out, it rolls back? Cool!

            Yeah, it is cool. The side effect, of course, is that the space from DROP COLUMN can't be automatically reclaimed - for that you need to run a VACUUM FULL, which does need an exclusive lock. You can batch updates, however, and then do the vacuum at some time in the future when the db isn't being heavily used (i.e. 3am on a public holiday).

            Newly added rows don't have much overhead for the dropped columns, though, apparently. The PostgreSQL websites are still pointing to the 7.2 docs, but the 7.3 docs say:

            "The DROP COLUMN command does not physically remove the column, but simply makes it invisible to SQL operations. Subsequent inserts and updates of the table will store a NULL for the column. Thus, dropping a column is quick but it will not immediately reduce the on-disk size of your table, as the space occupied by the dropped column is not reclaimed. The space will be reclaimed over time as existing rows are updated. To reclaim the space at once, do a dummy UPDATE of all rows and then vacuum, as in:

            UPDATE table SET col = col;
            VACUUM FULL table;"

      • If you're considering migrating your Access database to MS SQL Server - do consider PostgreSQL. From experience, the amount of suffering is about the same for both transitions, but when you're done, PostgreSQL is more robust, less expensive and less buggy.

        OK, you've got my attention. This is something I hope to do more of in the future, and since you have some experience, I'd like to ask a couple questions...

        1. I recently did an Access to MS SQL conversion and ended up with an Access database in .adp format which contained the connection string to the SQL server and required no ODBC setup on the client PCs. Is this similar in Postgres?
        2. In that conversion, the .adp file contained the forms, reports, and macros. The queries from the old Access db were stored on the server either as views or stored procedures. Is this also similar (or at least compatible)?
        3. MS SQL allows you to use your current Windows credentials as the authentication to the SQL server. This is nice because then the users don't have to enter/remember another password. Can I do this in Postgres?
        4. I've found that Access generates absolutely horrid SQL. Fortunately, because all these queries are stored on the SQL server, they can be edited/optimized after the conversion. This question ends up being a two-parter. How compatible is Postgres with Access' ugly SQL? And if there's a compatibility problem with the generated SQL, can I at least edit it server side and make Access not care that that's happened?

        I'm sure there are more issues I'd run across, but this is all I can think of sitting on my couch during my extended weekend away from work :)

        • The answer to all the above is: you are going to have to put in some effort, sorry. The Microsoft curve always runs easier at the start and harder as time goes on. The Unix curve generally loses some wizards and other gumpf in exchange for a need to get a bit low down and grunty. It'll take you longer to get going, but once it's up you'll understand *why* things are the way they are and how to maintain it in the long term. Harder at the start, easier as time goes on.

          I guess it depends where your head sits.

          Dave
        • by zulux ( 112259 ) on Saturday November 30, 2002 @06:03PM (#4785264) Homepage Journal


          1. I recently did an Access to MS SQL conversion and ended up with an Access database in .adp format which contained the connection string to the SQL server and required no ODBC setup on the client PCs. Is this similar in Postgres?

          Nope, you do need a PostgreSQL ODBC driver, but the link settings can be managed by your Access database if you relink on client startup with a VBA script.

          2. In that conversion, the .adp file contained the forms, reports, and macros. The queries from the old Access db were stored on the server either as views or stored procedures. Is this also similar (or at least compatible)?

          You can store the queries as views on the PostgreSQL server - no problem there. In 7.3, procedures can now return a set of data - though I'm waiting for reports from the field to come back saying it's working well before I jump on it myself.

          3. MS SQL allows you to use your current Windows credentials as the authentication to the SQL server. This is nice because then the users don't have to enter/remember another password. Can I do this in Postgres?

          I don't think there is any way you can do that in the PostgreSQL ODBC driver - you could rewire the ODBC link on the fly, though. Another login is a pain in the ass, but nobody seems to care. It may be possible to get this to work with a Linux server through PAM - if you can get PostgreSQL to work through PAM. I don't know, though.

          4. I've found that Access generates absolutely horrid SQL. Fortunately, because all these queries are stored on the SQL server, they can be edited/optimized after the conversion. This question ends up being a two-parter. How compatible is Postgres with Access' ugly SQL? And if there's a compatibility problem with the generated SQL, can I at least edit it server side and make Access not care that that's happened?

          Both the PostgreSQL server and ODBC driver can massage the horrid Access-built queries into normalcy. Typically you don't have to migrate the queries off of Access and into server views because of this - they just work. It's the KSQO that does the magic. From the docs: the Key Set Query Optimizer causes the query planner to convert queries whose WHERE clause contains many OR'ed AND clauses (such as "WHERE (a=1 AND b=2) OR (a=2 AND b=3) ...") into a UNION query. KSQO is commonly used when working with products like Microsoft Access, which tend to generate queries of this form.
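
          The transformation described there, sketched with hypothetical table and column names:

          ```sql
          -- The "keyset" shape Access tends to emit:
          SELECT * FROM t WHERE (a = 1 AND b = 2) OR (a = 2 AND b = 3);

          -- ...is treated roughly like this UNION, where each arm can
          -- use an index on (a, b):
          SELECT * FROM t WHERE a = 1 AND b = 2
          UNION
          SELECT * FROM t WHERE a = 2 AND b = 3;
          ```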

          This hasn't always been the case; Access queries used to crash PostgreSQL a few years ago because they were so odd.

          General thoughts on both:
          It takes a *bit* longer to get Access to play with PostgreSQL, but once it's there, there are no odd bugs to work out. Upgrade Jet to the latest version on the client boxes, and set the ODBC timeout in the registry from 600 to 0 - there's a bug in the way Access relinks to a timed-out ODBC session, so by setting the timeout to 0, it never times out.
          The Access/ODBC driver sometimes has problems with creating a record using continuous forms - any new record should be created using VBA rather than by filling out the blank entry in an Access continuous form or list of records.

          Good luck - I've been very happy with the migration myself.

          Oh, set up an hourly cron job to dump the database to a file, then gzip it and stash it on an NFS server. Easy hourly backups! Never had to use them, but it's nice to know that we'll never lose more than an hour's worth of work!

          • I stand corrected, it's easier than I thought :)

            Dave
          • MS SQL allows you to use your current Windows credentials as the authentication to the SQL server. This is nice because then the users don't have to enter/remember another password. Can I do this in Postgres?

            I don't think there is any way you can do that in the PostgreSQL ODBC driver - you could rewire the ODBC link on the fly, though. Another login is a pain in the ass, but nobody seems to care. It may be possible to get this to work with a Linux server through PAM - if you can get PostgreSQL to work through PAM. I don't know, though.

            Even if not, this is Unix. A simple glue script to fetch the necessaries from Samba and push them into the Postgres authentication table(s) should do the trick.

            When it works, make a hero of yourself, re-render it in C or something at least a little more robust than BASH or whatever you prototyped it in, and throw it at Postgres' contribs.

        • "# In that conversion, the .adp file contained the forms, reports, and macros. The queries from the old Access db were stored on the server either as views or stored procedures. Is this also similar (or at least compatible)?"

          You had to translate all those queries from Access SQL to MS-SQL's language, right? They are incompatible in many ways. Also, many Access queries tend to have functions in them like dlookup(), none of which will work in SQL Server. So depending on the complexity of the SQL, you will have to translate some of them.

          "# MS SQL allows you to use your current Windows credentials as the authentication to the SQL server. This is nice because then the users don't have to enter/remember another password. Can I do this in Postgres?"

          You can embed a username and a password in the ODBC driver and have everybody log in as the same user, or you can write some VB code to get the user name from the Windows login and make the right registry hacks. Not too big of an ordeal.

          "4. I've found that Access generates absolutely horrid SQL. Fortunately, because all these queries are stored on the SQL server, they can be edited/optimized after the conversion. This question ends up being a two-parter. How compatible is Postgres with Access' ugly SQL? And if there's a compatibility problem with the generated SQL, can I at least edit it server side and make Access not care that that's happened?"

          Pretty much the same as SQL Server. You can create stored procedures or views and link them up like tables. Study up on the Postgres RULE system. It basically allows you to create views with code and also makes all views writable if you want to code the writes.
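
          A minimal sketch of a rule-backed writable view (the table, view, and rule names are invented for illustration):

          ```sql
          CREATE VIEW active_customers AS
              SELECT id, name FROM customers WHERE active;

          -- Redirect INSERTs on the view into the base table:
          CREATE RULE active_customers_ins AS
              ON INSERT TO active_customers
              DO INSTEAD
              INSERT INTO customers (id, name, active)
              VALUES (NEW.id, NEW.name, true);
          ```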

          Look into SQL Porter http://www.realsoftstudio.com/ordersqlporter.php it might save you a ton of work and it does not cost a lot of money.

          You can also use "pass through" queries to take advantage of postgres features like regular expressions and such.

  • Drop Column (Score:4, Interesting)

    by farnsworth ( 558449 ) on Saturday November 30, 2002 @03:39PM (#4784819)
    Drop Column
    PostgreSQL now supports the ALTER TABLE ... DROP COLUMN functionality.

    HURRAY! This has been my biggest annoyance with PostgreSQL since I started using it. There are workarounds for older versions, but they become arduous when you have a lot of existing data.

    This is a *very welcome* implementation.

  • FINALLY!! (Score:4, Funny)

    by Anonymous Coward on Saturday November 30, 2002 @03:49PM (#4784861)

    Dancing Girls
    The PostgreSQL now includes a number of beautiful dancing girls

    I can't tell you how long I've been waiting for this feature! Now I can get rid of Oracle for GOOD!!

    Kudos to the PostgreSQL team!

  • by Anonymous Coward on Saturday November 30, 2002 @05:14PM (#4785144)
    The main feature I've been waiting for is replication.

    As of a couple of months ago, none of the replication options for Postgres were any good. Most were unreliable, offered very few features, or were very hard to set up.

    Some looked like they had promise, but were not there.

    Please, please, please, add replication to the next release :)

    I also wish performance for simple-case DBs was faster, e.g. key-value DBs, compared to the performance of Sleepycat's Berkeley DB.

    I'm sure there would be a *lot* of money to be had if someone were to make a good replication system. Possibly releasing it Blender-style? Or offering to implement replication for businesses for a fee?

    Perhaps one of the postgres groups could ask for donations from some of us users so some developers could work on it full time. I know I could easily convince my boss to cough up for it. Almost any business that relies on postgres could be convinced to chip in I think.
  • This is slightly off-topic but anyway...

    There have been some references to msAccess here, what I like about Access is the ease I can build an ad hoc database application (but where the data could be reused easily should there be a later requirement).

    While Postgres sounds great, I want to know if there are any tools that approach this ease of development within a Linux environment. The ability to choose the back-end database would be a huge plus - I'd certainly give Postgres a go.

    RG
    • The Kompany makes an Access-like tool.
      pgaccess does some of what Access does.
      OpenOffice has nice database development tools.
      You can build database apps very easily with Kylix, and there are even open source reporting engines available.

      If all of that is not good enough then you can always use postgres as a back end to access via ODBC.

    • Unfortunately there is no all-in-one rapid development environment and flat-file database rolled into one, like Access, in the Unix world. But don't let that stop you from using Access - it's a great tool for rapid development.

      I use it all the time - you rapidly develop the small database, and when it outgrows the Access flat-file .mdb storage at about the fifth concurrent user, migrate the back-end database to PostgreSQL and keep the Access front end. Once the database gets really popular, migrate the Access front end over to Delphi and keep the PostgreSQL back end. The Access to Delphi transition should be done in stages - migrate the data entry first, then migrate the reporting later.

      Once that is done and your database is really popular, migrate the front end again from Delphi to Delphi/Kylix and you'll be able to support Linux/FreeBSD and Windows desktops. People can VNC into a FreeBSD server that shares the Kylix app over VNC for other systems - Solaris, Mac, Psion.

      Cool stuff.
    • I want to know if there are and tools that approach this ease of development, within a linux environment. Ability to choose the back-end database would be a huge plus - I'd certainly give Postgres a go.

      I've just started looking at Rekall by theKompany.com (KDE etc). It's quite Access-like, and it runs on Windows, Linux, and Mac. It only costs $70 too!

      Not as versatile/powerful as Access yet, but I think it may get there.

  • I've used MySQL with PHP/Perl for a while on various projects, and always half considered Postgres, but since many web hosts don't offer it (why?) I never really considered it seriously (I don't wish to rewrite a lot of db code - and yes, I do know about PEAR with PHP, but it's a performance hit).

    So, does PG+PHP match the speed of My+PHP? Is it as thoroughly tested? As I'm moving away from using Perl, I'd be really interested in seeing some benchmarks with this new version of PG...
    • So, does PG+PHP match the speed of My+PHP? Is it as thoroughly tested? As I'm moving away from using Perl, I'd be really interested in seeing some benchmarks with this new version of PG...

      The question shouldn't be whether PG+PHP matches the speed of MySQL+PHP, but rather whether the speed of PG+PHP is good enough for your application and, more importantly, whether it will scale reasonably well under load.

      This [phpbuilder.com] seems to indicate that PG+PHP will scale better than MySQL+PHP, but that will certainly depend on the configuration of MySQL (in particular, which table type you use).

      You should be a lot more concerned about the features of the database that you will require, instead of the speed, because once you select a database engine you'll have a lot of trouble migrating to a different one. Speed can always be gained by throwing more hardware at the problem if necessary. Database features can't.

    • In my experience, PostgreSQL is faster if your code is well written and carefully optimized (particularly through careful use of transactions).
  • by esconsult1 ( 203878 ) on Saturday November 30, 2002 @07:53PM (#4785710) Homepage Journal
    The thing that makes Postgresql completely different from MySQL is that it is an *active* RDBMS. By active, I mean that you can set it up so if it gets certain kinds of data, it can operate on that data to create new records, delete records, update other tables etc.

    PostgreSQL has the *intelligence* built in. You can write all sorts of gorgeous functions to do stuff, especially if, like us, your shop uses several languages... PHP, Perl, Java, Python, C++, etc. Why replicate your business logic everywhere?

    Transaction support and file/record locking are the least of your problems. If you do serious database stuff, at some point, you are *going* to want VIEWS, TRIGGERS, RULES, and STORED PROCEDURES (functions). Having this functionality in the database engine, instead of in your code makes a heck of a lot of difference when the time comes to scale.

    Coming from a MySQL background in a multi-language shop, we clearly saw the limitations, and decided to switch the entire database platform over to PostgreSQL a year ago.

    We haven't looked back since.
    • by Anonymous Coward
      With all due respect, your business logic should stay in the middle tier, not embedded in the data layer. (I'm assuming you're talking about n-tier enterprise development) If your BL is all in stored procedures, then you've got your system IO bound, (standard db queries) as well as CPU bound ( business logic calculations ) which is just a painful situation to be in if you want to ever scale up.
      • Generally, you are right that "business logic" should be in the middle tier. A middle tier should have business logic, but there are a LOT of tasks that use SQL besides business logic.

        For the data-intensive operations used within the business logic, it is often helpful to encapsulate the data access using an API that resides in the database. This stops large amounts of data going across the network. For example, an order fulfilment system might have a middle tier that decides whether an order can be shipped. It might call a "get_backordered_part_count" function and make a series of decisions based on the result.

        Implementing that function in the middle tier accomplishes nothing because the same SQL hits the DB either way. If the logic in the function is complicated and can only be coded with several SQL statements, the extra network traffic and server round trips can be unacceptable.

        A lot of other situations call for stored procedures and triggers. For example, they are appropriate for writing a data integrity layer: if you denormalize your data model for performance, you need to write triggers to enforce the data integrity. Similarly, if you have raw data processing operations, as is common for external system interfaces, data load and transformation operations, periodic jobs, etc., then it is good to write these in the data layer unless you'd rather push lots of extra data across the network for no identifiable reason.
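
        A sketch of what the hypothetical "get_backordered_part_count" function above might look like as an in-database function (the schema, table, and column names are all invented for illustration):

        ```sql
        -- Hypothetical: count backordered lines for an order, entirely
        -- server-side, so the middle tier makes a single round trip.
        CREATE FUNCTION get_backordered_part_count(integer) RETURNS integer AS '
            SELECT count(*)::integer
            FROM order_lines ol, parts p
            WHERE p.id = ol.part_id
              AND ol.order_id = $1
              AND p.on_hand < ol.quantity;
        ' LANGUAGE sql;

        -- The middle tier then calls:
        --   SELECT get_backordered_part_count(42);
        ```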

  • Making The Switch? (Score:3, Interesting)

    by suwain_2 ( 260792 ) on Saturday November 30, 2002 @09:54PM (#4786064) Journal
    I'm surprised this hasn't been asked yet...

    Just today I found a need (not a chance to use, but a *need*) for a subquery. While contemplating copying and pasting (it's only like 30 rows) data between database tables, I happened to see this article.

    How easy is it to switch over from MySQL to PostgreSQL? Is there a simple tool to convert between the two? (And as a sidenote... The machine I want to do this on is a third-hand computer, a 300 MHz, 128 MB RAM webserver... Am I going to notice a performance hit if I put PostgreSQL on it?)

    • Not hard (Score:4, Informative)

      by einhverfr ( 238914 ) <[moc.liamg] [ta] [srevart.sirhc]> on Sunday December 01, 2002 @01:58AM (#4786669) Homepage Journal
      Prior to 7.3, I used to do most of my prototyping in MySQL. Then I would convert the database over, and test it, then I would dump, add triggers, etc. and restore.

      There are two scripts that come with PostgreSQL to take a database dump from MySQL and turn it into something you can use with PostgreSQL. So the switch is painless.

      Three cautions, though ;)
      1) PostgreSQL timestamps are time-zone independent, and the database manager will correct for the timezone if set. So if your timestamps are off by a certain factor, that is probably why.
      2) The timestamp format is different, so you may have to rewrite any timestamp parsers.
      3) LIMIT clauses in MySQL are non-standard.

      Coming from someone who supports both ;)
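
      Caution 3 in practice (the table name is hypothetical):

      ```sql
      -- MySQL's two-argument form, LIMIT offset, count:
      --   SELECT * FROM t LIMIT 20, 10;
      -- PostgreSQL spells the same thing:
      SELECT * FROM t LIMIT 10 OFFSET 20;
      ```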
    • by imroy ( 755 )

      Zachary Beane [xach.com] of GIMP fame, has a MySQL to PostgreSQL [xach.com] migration page with a Perl script and some advice.

  • by axxackall ( 579006 ) on Sunday December 01, 2002 @12:33AM (#4786498) Homepage Journal
    What PostgreSQL really needs is better marketing. Today, 90% of enterprise programmers, asked "Why Oracle [Sybase, MSSQL, DB2]? Why not an open source database in your project?", usually answer: "MySQL? We've tried. Doesn't really work for our projects." And if you press further, "Did you try PostgreSQL?", they counter-ask: "Postgres-who?"

    Too bad. When the Internet burned tons of startup money, they hired lots of "so-called programmers" to do web-development stuff. No wonder MySQL and PHP (and Linux!) were typically the choice. Who cares about transactions? Who cares about aspect separation? Just show the first home page to the boss!

    The positive outcome: big bosses heard about Linux. Could Linux be where it is now without those so-called programmers? I doubt it. Professional services from IBM and Microsoft would decide for you what technology to use after your boss had decided what partnership contract to sign.

    But that wasn't the only way to "educate" big bosses about Linux: the startup boom sparked a Linux marketing boom, creating OSDN and others, including Slashdot. As a result, Linux now sells itself: everyone loves Linux, therefore Linux is protecting your investments. Crowd effect.

    Could that have happened if Linux were really bad? No. So why hasn't it happened for PostgreSQL? I think because the few PostgreSQL-based companies didn't care about marketing. Or cared wrongly. Or didn't have the money to care. Compared to what? To Linux. Try to find some subject about Linux using Google - besides mailing lists you get many official documents, FAQs, HOWTOs, learning courses, support companies. Try to do it for PostgreSQL - mostly mailing lists and a few official docs.

    With better marketing, PostgreSQL may in one or two years be where Linux is today. Without good marketing, only PostgreSQL developers, a few enthusiasts, and some Slashdot readers will know that not all open-source databases are so bad.

    • by Khalid ( 31037 ) on Sunday December 01, 2002 @02:12PM (#4788481) Homepage
      Open source software relies mainly on the network effect for its development, and for its marketing too (that's what is called viral marketing). Yes, lately companies like IBM, Red Hat and many others have done a lot to make Linux mainstream, but the main marketing medium for Linux remains word of mouth.

      On the other hand, Linux is the flagship for all open source software; it comes bundled with it. Linux has shown that open source is viable, and its success will help open source prevail too. It only needs time.
  • by matsh ( 30900 ) on Sunday December 01, 2002 @01:51AM (#4786660) Homepage
    Does it support the latest JDBC standard, and does it work fine under heavy load?
