Object Prevalence: Get Rid of Your Database? 676

A reader writes:" Persistence for object-oriented systems is an incredibly cumbersome task to deal with when building many kinds of applications: mapping objects to tables, XML, flat files or use some other non-OO way to represent data destroys encapsulation completely, and is generally slow, both at development and at runtime. The Object Prevalence concept, developed by the Prevayler team, and implemented in Java, C#, Smalltalk, Python, Perl, PHP, Ruby and Delphi, can be a great a solution to this mess. The concept is pretty simple: keep all the objects in RAM and serialize the commands that change those objects, optionally saving the whole system to disk every now and then (late at night, for example). This architecture results in query speeds that many people won't believe until they see for themselves: some benchmarks point out that it's 9000 times faster than a fully-cached-in-RAM Oracle database, for example. Good thing is: they can see it for themselves. Here's an article about it, in case you want to learn more."
  • gigabytes? (Score:5, Insightful)

    by qoncept ( 599709 ) on Monday March 03, 2003 @09:49AM (#5423491) Homepage
    At first, I had a problem understanding object oriented methodology because I kept thinking of objects in terms of a database -- they seemed so much alike. But...

    Who uses a database small enough to fit in RAM?

    • Re:gigabytes? (Score:2, Interesting)

      I think the idea is that the databases are running on servers such as the SunFire, which has a stupid amount of RAM (somewhere in the terabytes, if I remember correctly)....
    • Just buy a lot of RAM!

      I think many small and mid-sized e-commerce vendors would benefit from this.
    • We're not talking about an entire OLTP system that runs a business -- we're talking about the object data used for the code itself. The article suggests a different way of managing the object data instead of using a flat file, XML, or a database.
    • Re:gigabytes? (Score:5, Insightful)

      by bmongar ( 230600 ) on Monday March 03, 2003 @09:56AM (#5423534)
      Who uses a database small enough to fit in RAM?

      Not every solution is for every problem. This isn't for huge data warehousing systems. My impression is that this is for smaller databases where there is a lot of interaction with fewer objects.

      I have also seen object databases used as the data entry point for huge projects, where the database is then periodically dumped into a large relational database for warehousing and reports.

      • Re:gigabytes? (Score:2, Insightful)

        by qoncept ( 599709 )
        Very true. Then again, if the database is that small anyway, you're probably not taking much of a performance hit unless you never should have been using a database to begin with.

        Offtopic, though: I'd love to see a solid-state revolution. With the amounts of RAM and flash memory available these days, I don't see why we couldn't run an OS off one. I'm not generally one to be anxious to jump into new technologies (I used to hate games that used polygons instead of sprites), but I think moving to solid state in an intelligent manner would be the biggest thing that could happen in the industry in the near future: i.e., along with Serial ATA, introduce fast, ~2GB boot drives that run your OS and favorite programs, and store everything else on a conventional magnetic hard drive.

      • Re:gigabytes? (Score:5, Insightful)

        by juahonen ( 544369 ) <jmz@iki.fi> on Monday March 03, 2003 @10:10AM (#5423650) Homepage

        And that goes for OO as well. Not every database (or collection of data) needs to be accessed in an object-oriented way. Most (or should I say all) data I store in small tables would not benefit from being objects.

        And how does this differ from storing non-object-oriented data structures in RAM? You'd still need to implement searches, and how do you search a collection of objects without falling back on something like the relational model?

        • Re:gigabytes? (Score:3, Insightful)

          by Arandir ( 19206 )
          How many times has this been said before? "Use the right tool for the job!" If you have a large collection of objects all of the same class, then use a database. If you have a large collection of objects of differing classes, then use an OO method. For small collections of objects, or if you don't have any real objects at all, neither may be appropriate.

          What irks me to no end are database freaks who have to do everything with a database, OO freaks who have to do everything with OO, and GP freaks who have to do everything as pure GP. They're like guys who only know how to use a screwdriver, so they end up using the screwdriver to hammer in nails and chisel wood.
    • Who uses a database small enough to fit in RAM?

      The Museum of 20th Century French Military Victories in Paris could make use of this technology on my old 8086 system.

    • Re:gigabytes? (Score:3, Insightful)

      by Ed Avis ( 5917 )

      Who uses a database small enough to fit in RAM?

      Even if your database doesn't fit in affordable RAM today, it probably will in a few years. RAM prices fall faster than database sizes increase. Already a couple of gigabytes of storage is more than enough for a big class of applications.
    • Re:gigabytes? (Score:5, Insightful)

      by mbourgon ( 186257 ) on Monday March 03, 2003 @11:18AM (#5424072) Homepage
      Who uses a database small enough to fit in RAM?

      I do, but I'll thank my SQL server for doing it for me. Most SQL servers aggressively cache data and databases - if Database A is used constantly, it'll be kept in RAM, whereas less-frequently-used databases will either stay on the hard disk, or certain tables of them will be kept in memory. It lets you make the most of your RAM.
  • Very large? (Score:2, Interesting)

    What about absolutely monstrous databases? What about huge queries? Or even querying across objects (the way we would do joins across tables)? I assume that while this can work, there will need to be some major shifts in thinking for it to be accepted. People like their databases. And enterprise-level software isn't going to go out and grab this up--until it does, it probably won't really take off.
  • Slashdotted (Score:5, Funny)

    by Cubeman ( 530448 ) on Monday March 03, 2003 @09:53AM (#5423520)
    For a scalability test, it sure fails the Slashdotting Test.

    It's about 9000 times slower right now :)
  • Neat concept... (Score:3, Interesting)

    by Gortbusters.org ( 637314 ) on Monday March 03, 2003 @09:56AM (#5423538) Homepage Journal
    but it doesn't really provide any compelling reasons NOT to use a database. Besides the fact that their home page, Prevayler.org [prevayler.org], seems to be non-existent, I think it's more of a "neat idea" type thing than a compelling reason for any product/project to drop relational DB support.

    You can always have a caching system, as the author states, but even then, what systems use this? The countless PHP/MySQL sites out there seem to perform just fine. This may be desirable for some very strict real-time communications systems, but for just about every other form of app, I don't see it.

    What are you going to tell your 3rd party integrators? Drop their XML/ODBC report and surf on over to prevayler.org?
    • Re:Neat concept... (Score:5, Informative)

      by truthsearch ( 249536 ) on Monday March 03, 2003 @10:12AM (#5423661) Homepage Journal
      The countless PHP/MySQL sites out there seem to perform just fine.

      Object-oriented programming and data persistence is about a lot more than public web sites. Private, corporate data warehouses with terabytes of persisted objects squeeze every bit of processing power available. For example, I used to work on Mastercard's Oracle data warehouse. An average of 14 million Mastercard transactions occur per day. That's 14 million new records in one table each day, with reporting needing hundreds of other related tables to look up other information. To get something of that scale to run efficiently for a client app (internal to the company) costs millions of dollars. Object persistence on a large scale is tough to get right and is far from perfected, and there's a lot more going on than public web site development. Every new idea helps. Consider the article written on IBM's developerWorks. Its readers are mostly corporate developers.
  • by koh ( 124962 ) on Monday March 03, 2003 @09:56AM (#5423541) Journal
    Their solution really seems to rock, and may finally be the OO to DB paradigm everyone was waiting for.

    That said, I wonder what their position is towards the import of existing data. Many projects would only benefit from the solution if existing data (usually object-oriented but saved in a roughly flat database, as the article points out) can be ported seamlessly to the new environment.

    My point is, this solution solves a known problem by introducing a new technology; however, this new technology will have to be bent towards the older systems in order to retrieve what was already saved. Same old story: in the database world, existing data is paramount.

    • by Steeltoe ( 98226 ) on Monday March 03, 2003 @11:10AM (#5424015) Homepage
      OODBMSes have been thoroughly and handily debunked. For the best opinions on relational database technology, visit these hardcore guys: http://www.dbdebunk.com/ [dbdebunk.com]

      The problems with OODBMSes can be summarized thus (OTOMHRN - off the top of my head right now):
      1) Proper relational technology can model OO-hierarchies, but the other way around is unnatural and cumbersome, if not impossible. Proper relational technology is a step up on the ladder in generalization from OO-technology. It's simply a generation or two ahead, while OODBMS is several steps backwards.

      2) Proper relational technology is built on proven concepts from mathematics and logic, while OODBMSes are just a hack to store application data "quick'n dirty". Everything can be modelled as general relations, while OO-technology lacks the fundamentals to model *ANYTHING* and is limited and impeded by having an obligatory and *meaningless* top-to-bottom hierarchy. (You cannot have *meaning* without relations of differing types to other entities.)

      3) Proper relational technology allows you to extract, convert and manipulate data in standardized methods (using query languages like SQL), in ways not thought of at the time of design. OODBMSes can only be used properly in the context of the OO-application layer, often relying on runtime data. If you need flexible solutions, you will have to spend extra time programming a specialized solution, instead of having the benefit of a fully relational query language (which unlike SQL, can express almost any problem to be solved).

      4) The future is relational. Current RDBMSes do not implement true relational technology; if they did, nothing else would be needed. The mathematics in the theories behind it would be at the programmer's disposal during programming, reducing time and potential errors. Yes, it requires understanding the theory, but wouldn't you want a true DBA to do that anyway?

      Don't buy into the hype; look into true relational technology and educate yourself. As for storing everything in RAM and "saving it for the night", I wouldn't risk having my bank account in such a DB. Such solutions are only usable for storing data that doesn't change often. For non-commercial game servers, it may be perfect.
        Hear, hear! Someone with their head on straight. Let me also add that when relational technology took hold, it was NOT because it was faster. In fact, at that time, relational databases were 50 TIMES SLOWER than the hierarchical databases of the day. The performance gap has narrowed, but the reasons for choosing relational remain the same. The industry at that time realized that the benefits of relational technology were much more important than speed, and hopefully we'll come to the same decision again. Those reasons include:

        * the separation between _logical_ and _physical_ layers of the database - the DBA controls physical record layout and indices, while the database designer and applications have access to the logical layers. This way they can do their roles independently of each other.

        * the ability for the data model to change without affecting the applications. Using VIEWS - you can do quite a bit of modification to the underlying data model, but applications using the older one will still run if the DBA sets up a view.

        * the ability to do arbitrary queries on the data

        * The ability to set up views to handle more complex interactions. For example, in a mail system I've written, we have a table for campaigns with a sent/not-sent flag, a list of addresses, and three layers of do-not-send lists. We then have a single view which puts all of this together and gets the list of addresses which need to be sent to. This is a view on top of several views.

        I'm sure I'm missing some others, too. Basically, a relational database system is a gigantic inference engine when designed appropriately.
    • by MisterFancypants ( 615129 ) on Monday March 03, 2003 @12:44PM (#5424712)
      Their solution really seems to rock, and may finally be the OO to DB paradigm everyone was waiting for.

      Not likely. The REAL problem with OO databases isn't that RDBMSes might be more mature or whatever else you might read; it is that the data is almost always more important to companies than the behaviors that operate on that data. For example, if a company has a database of customers, it might want to use that database in dozens of different ways, and it might want to grow it for years, if not decades. The OO-database view tends to look at things too much from the view of one single application of the data, and the data gets entangled with code behavior based on that specific application. With a clean RDBMS you can hit the same database from many different applications (assuming the database has a well-thought-out schema to begin with)... the data isn't so tightly wound up with a specific bit of application code.

      This 'solution' doesn't fix that aspect of OO databases. In fact, it makes it worse. I will grant that it is a neat technology, but I wouldn't expect to see it take the place of RDBMS systems any more than the OO databases of the past have.

  • OOP (Score:2, Interesting)

    by NitsujTPU ( 19263 )
    Couple things.

    1) You COULD use an object-relational database if you wanted to keep an OOD aspect.
    2) You COULD load non-object oriented data into RAM with lower overhead.
    3) A couple gigs of data in RAM... not really a deployable solution for the enterprise, don't you think?

    Other than that, nifty idea and all.

  • Two words... (Score:4, Informative)

    by Anonymous Coward on Monday March 03, 2003 @10:00AM (#5423567)
    Enterprise JavaBeans.

    Here's the definition of an EJB from the http://java.sun.com [sun.com] site.
    A component architecture for the development and deployment of object-oriented, distributed, enterprise-level applications. Applications written using the Enterprise JavaBeans architecture are scalable, transactional, and multi-user and secure.

    And more specifically, here's the definition of an Entity EJB:
    An enterprise bean that represents persistent data maintained in a database. An entity bean can manage its own persistence or it can delegate this function to its container. An entity bean is identified by a primary key. If the container in which an entity bean is hosted crashes, the entity bean, its primary key, and any remote references survive the crash.
    • Re:Two words... (Score:5, Informative)

      by neurojab ( 15737 ) on Monday March 03, 2003 @11:53AM (#5424340)
      Entity beans are all about transactions. You've got a transaction context that can propagate over several beans. The EJB container doesn't do this on its own, however. It uses the ACID properties of the database, along with the database's commitment-control mechanisms, to accomplish the properties you mentioned. Entity beans are usually mapped to tables, and could represent a join in the BMP case. That said, I'm not sure whether you're saying EJB would benefit from using this as a backend, or that EJB did this first? The latter is false, but the former... I'm not sure this technology would benefit entity beans, but it may benefit STATEFUL SESSION beans, because they're less RDBMS-centric.

  • by carstenkuckuk ( 132629 ) on Monday March 03, 2003 @10:00AM (#5423569)
    Have you looked at object-oriented databases? They give you ACID transactions, and also take care of mapping the data into your main memory so that you as a programmer only have to deal with in-memory objects. The leading OODBs are ObjectStore (www.exln.com), Versant (www.versant.com) and Poet (www.poet.com).
    • Yes, I've looked at object-oriented databases. I worked on a project that used Objectstore for a year or two before we gave up and went to Oracle.

      Here is why:

      1. We realized that, like _most_ projects, there really wasn't anything that object-oriented about the data. The code, yes. But the data was just as easily represented with typical RDBMS relationships, and it was much faster to do basic operations. We saw a several-thousand-fold increase in performance when querying the database for a particular object and its associated data. A join or ten wasn't nearly as expensive as getting data out of what was essentially just a dump.

      2. Objectstore, at the time, had no concept of administration. It was up to the developer to handle things like when files got too big, or creating the OODBMS concept of indexes, or what have you. The "DBA" could stop and start it, and that's about it. So if we grew, or got new hardware, or changed platforms, it was time to dump the old data (because migrating it was a programming project in itself) and start over.

      3. People would ask us questions about the data we were storing that would have been absolutely trivial to answer in an RDBMS (like "how many of these events occurred last month while this device was in this state?") but that we'd have to write long, slow-performing pieces of code to retrieve.

      4. Other people wanted to write applications that used our data. That wasn't too easy, because they wanted slightly different objects. We would have had to agree on an object for everything we shared, or store things twice. With an RDBMS we could use views, or generate the objects differently from the same tables.

      5. There was no way to get a read-consistent hot backup across a couple of hundred files. Maybe there is now. This was just foolish.
      • A join or ten wasn't nearly as expensive as getting data out of what was essentially just a dump.

        What do you mean by a "dump"? Sounds like you were using the OO db inappropriately, e.g. by querying for "SELECT * FROM extent1" and then linearly searching it, or something.

        3. People would ask us questions about the data we were storing that would have been absolutely trivial to find in a RDBMS (like "how many of these events occured last month when this device was in this state) that we'd have to write long slow-performing pieces of code to retrieve.

        Doesn't ObjectStore have a query language similar to SQL? The Object Data Standard defines OQL, but I know that the Object Data Standard is not exactly "industry standard" yet. As for speed, it's still possible to create indexes, optimise data structures and algorithms, etc.

        Other people wanted to write applications that used our data. That wasn't too easy, because they wanted slightly different objects. We would have had to agree on a object for everything we shared, or store things twice. With an RDBMS we used could use views, or generate the objects differently from the same tables.

        This is an odd complaint. In any decent OODBMS, creating views (even manually) should be fairly simple. Yes, you might have to write some repetitive code - there's room for improvement there. This is where aspect-oriented programming (specifically, composition filters) comes in, I think.

  • 3 issues I see (Score:4, Interesting)

    by foyle ( 467523 ) on Monday March 03, 2003 @10:00AM (#5423573)
    First off, I like the concept, but speaking as a former Oracle DBA, I have several issues:

    1) You're limited by how much RAM you have on your server, not how much disk space you have

    2) If you're making a lot of data changes and have a crash or power outage, I'd imagine that it can take a while to replay the log to get things back to the most recent point in time (you can have the same problem with Oracle, but your checkpoints would be a lot closer together than "once a day") - a rough sketch of this replay appears after this comment.

    3) There are millions of people that already know SQL and can write a decent query with it. How does this help them? Never underestimate the power of SQL.

    On the other hand, for projects dealing with small amounts of data I can see how implementing this would be far easier than integrating with Mysql, Postgresql or Oracle.
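
    Point 2 deserves a concrete picture: recovery in a prevalence-style system means loading the last snapshot and replaying every command logged since. A rough sketch, reusing the hypothetical AccountSystem and Command types from the example near the top of the story:

    import java.io.*;

    // Hypothetical recovery sketch: restore the snapshot, then replay the
    // command log. The longer since the last checkpoint, the longer this takes.
    class Recovery {
        static AccountSystem recover(File snapshot, File commandLog) throws Exception {
            AccountSystem system;
            if (snapshot.exists()) {
                ObjectInputStream in = new ObjectInputStream(new FileInputStream(snapshot));
                system = (AccountSystem) in.readObject();  // whole object graph at once
                in.close();
            } else {
                system = new AccountSystem();
            }
            if (commandLog.exists()) {
                ObjectInputStream log = new ObjectInputStream(new FileInputStream(commandLog));
                try {
                    while (true) {
                        ((Command) log.readObject()).executeOn(system);  // replay in order
                    }
                } catch (EOFException endOfLog) {
                    log.close();  // end of log reached: recovery complete
                }
            }
            return system;
        }
    }
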
    • Re:3 issues I see (Score:5, Interesting)

      by jhines0042 ( 184217 ) on Monday March 03, 2003 @10:23AM (#5423719) Journal
      1) What about swapping? I know that you would be limited by physical RAM (which IS getting cheaper), but couldn't you also get a really large virtual memory space and utilize that?

      2) You can probably set up your own checkpoints to be more than once a day.

      3) I agree. Lack of SQL would cause people to.... GASP.... learn a new system. SQL is very cool. And I admit that I have a system I am thinking of porting away from JDBC and into Prevalence just to see how it goes (No, it isn't mission critical) and one of the first things I realized is that I would have to design a new method of querying. But you know what... That can lead to new thinking and more powerful software in the future.

    • Buggy whips (Score:5, Insightful)

      by Camel Pilot ( 78781 ) on Monday March 03, 2003 @10:49AM (#5423882) Homepage Journal
      3) There are millions of people that already know SQL and can write a decent query with it. How does this help them? Never underestimate the power of SQL.

      There are millions of people who already know how to saddle and ride a horse. How do these newfangled automobiles help them? Never underestimate the power of a horse.

      While I agree with your other points... number 3 is never a reason to keep from embracing something new. People are surprisingly trainable.

    • by JohnDenver ( 246743 ) on Monday March 03, 2003 @11:23AM (#5424112) Homepage
      Personally, I still think it sounds a lot easier to just map objects to a database.

      4) Concurrency - If you haven't implemented locks for an object model, then you haven't lived. Seriously, I can see a lot of people screwing this up with deadlocks galore. Locking up concurrent systems can be a nightmare.

      5) Ad Hoc Support - Goodbye Crystal Reports, goodbye English Query, goodbye ANY ad hoc query support, because if you need anything different, you're going to have to write a lot more code to iterate over your objects. Have fun.

      6) Indexing - I hope you have a good B-Tree library and are familiar with Indexing/Searching algorithms when implementing HARDCODED indexing. Oh yeah, have fun rewriting all of your query procedures when you decide to change your hardcoded indexing.

      Nothing says flexible like HARDCODING! Yay!

      In all seriousness, this is a bad idea for 99% of projects out there. It's inflexible, unscalable, severely error-prone, and time-consuming to implement.

      (sarcasm) All this just to avoid the "cumbersome" process of mapping objects to tables?

      Seriously, people, it's not that hard (three orders of magnitude easier than this), and there are a lot of tools that help you do it.

      If you're REALLY hung up on not using a relational database, try an Object Database, XML Database, or an Associative Model Database.

      • If you haven't implemented locks for an object model, then you haven't lived. Seriously, I can see a lot of people screwing this up with deadlocks galore. Locking up concurrent systems can be a nightmare.

        Then just wrap all of your lock-sensitive stuff in Prevayler command objects. They've got that working fine, and it guarantees isolation.

        Goodbye Crystal Reports, Goodbye English Query, Goodbye ANY Ad Hoc query support, because if you need anything different, you're going to have to write a lot more code to enumerate throughout your objects. Have fun.

        Oh, please. If you really need SQL compatibility, then dump the data occasionally to a data warehouse, which is where you should be doing unconstrained ad-hoc queries anyhow.

        Or if it's so the programmers can peek at the live system, then put in something like BeanShell, which will let you see a lot more than just the persistent data.

        Or you could drop an SQL interpreter into your system and present your objects as tables. Many of the pieces are already open sourced, so it would be pretty easy.

        Indexing - I hope you have a good B-Tree library and are familiar with Indexing/Searching algorithms when implementing HARDCODED indexing. Oh yeah, have fun rewriting all of your query procedures when you decide to change your hardcoded indexing.

        Can you really not think of ways to write these things in flexible ways? If that's the case, you could learn something about being a programmer. Pick up Martin Fowler's Patterns of Enterprise Application Architecture [martinfowler.com]. (One flexible approach is sketched at the end of this comment.)

        In all seriousness, this is a bad idea for 99% of projects out there. It's inflexible, unscalable, severely error prone, and timely to implement.

        Perhaps you should try it before knocking it. As you are, in order, wrong, mostly wrong, wrong, and confused. It's no magic bullet, but it's a useful approach for some systems.
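
        For what it's worth, the flexible approach usually amounts to hiding the index behind the object that owns the data, so query code never depends on how lookups are implemented. A hand-rolled, illustrative sketch with invented names, nothing Prevayler-specific:

        import java.util.*;

        // Hand-rolled indexing for in-RAM objects: callers go through the
        // repository, so the indexing strategy can change in one place.
        class UserRepository {
            static class User {
                final String uid;
                final String lastName;
                User(String uid, String lastName) { this.uid = uid; this.lastName = lastName; }
            }

            private final Map<String, User> byUid = new HashMap<String, User>();
            private final SortedMap<String, List<User>> byLastName = new TreeMap<String, List<User>>();

            void add(User u) {
                byUid.put(u.uid, u);                    // O(1) primary-key lookup
                List<User> bucket = byLastName.get(u.lastName);
                if (bucket == null) {
                    bucket = new ArrayList<User>();
                    byLastName.put(u.lastName, bucket); // sorted secondary index
                }
                bucket.add(u);
            }

            User findByUid(String uid) { return byUid.get(uid); }

            // Range scan over the sorted index, e.g. all last names in [from, to):
            Collection<List<User>> lastNamesInRange(String from, String to) {
                return byLastName.subMap(from, to).values();
            }
        }

        Changing the indexing strategy later means touching only the repository, not every query site.
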
  • Interfacing (Score:3, Interesting)

    by MSBob ( 307239 ) on Monday March 03, 2003 @10:00AM (#5423576)
    This may be a great way to snapshot the state of a Java application but how on earth would you query anything out of it with a non-Java/non-OO language?

    A SOAP interface could go some way towards accomplishing this, but what about the traditional ACID properties of a DBMS? Durability is obviously guaranteed... Consistency? That would depend on programmers following the practices... Atomicity? Not sure about that one. For simple commands it seems to work, but what about compound commands? If no rollback occurs, how can I assert that I changed both objects, not just one? Isolation? Not sure about this one either.

  • C++ solution (Score:2, Funny)

    by debrain ( 29228 )
    I noticed the lack of C++ support, so I thought I'd throw my hat in. :)
    template<typename O, typename T>
    O& operator<<(O& o, const T& t) {
        // write the raw bytes of t and return the stream so calls can chain
        o.write(reinterpret_cast<const char*>(&t), sizeof(T));
        return o;
    }
  • It looks very much like a journaling filesystem, which basically also stores the executed commands in a log file. If you've had a crash with ReiserFS, for instance, you can see messages like "replaying log for..." at startup.

    Now they're doing the same for in-memory object data structures. Might be a nice idea.

    On a different note: the object database behind Zope has perhaps the same net effect. To the programmer, everything is in-memory. The object database reads stuff from disk if needed and keeps frequently requested things in memory. And it also has a list of transactions which can be replayed or rolled back.

    So: it looks nice, but I'm curious about the net results!

  • by sielwolf ( 246764 ) on Monday March 03, 2003 @10:04AM (#5423605) Homepage Journal
    I think this would work well for most web-server DB backends, as the data isn't changing on the fly that much. But what about even /., where the content of a discussion thread is changing possibly several times a second (with new posts and mods)? I'd think you'd then want to use the strong atomic operators of the DB to pull directly from the tables instead of relying on serial operators to try and refresh.

    Since the benchmark page was slashdotted, I might be speaking out of my ass. But I never trust "9000 times faster!" It sounds too much like "2 extra inches to your penis, guaranteed!"
  • by Ummite ( 195748 ) on Monday March 03, 2003 @10:05AM (#5423608)
    The advantage of putting data into a database isn't just speed! Just think about sharing data between applications or between many computers, exporting data into another format, or simply making a query to change some values! You simply don't want to write code that changes the value of data under some specific conditions: you'd prefer to make a single query that any database manager, or even an SQL newbie, could write - not just the 2-3 programmers who did the work on that code some years ago. You also sometimes need to visualise data, make reports, and sort data. You simply don't want to code all that. I think most serious databases can also keep data in RAM if you have enough, and are able to do commits/rollbacks when necessary. So RAM data with serialized in/out is OK only as long as you absolutely need 100% speed, don't need to do complex queries on your data, and use it on only one computer.
  • by Zayin ( 91850 ) on Monday March 03, 2003 @10:05AM (#5423611)

    This architecture results in query speeds that many people won't believe until they see for themselves: some benchmarks point out that it's 9000 times faster than a fully-cached-in-RAM Oracle database, for example. Good thing is: they can see it for themselves.



    Yes, I've seen it. The page on www.prevayler.org only took about 30 seconds to load. Does that mean that a fully-cached-in-RAM Oracle database would spend 75 hours loading that page...?

  • no queries (Score:5, Insightful)

    by The Pim ( 140414 ) on Monday March 03, 2003 @10:06AM (#5423613)
    Queries are run against pure Java language objects, giving developers all the flexibility of the Collections API and other APIs, such as the Jakarta Commons Collections and Jutil.org.

    In other words, "it doesn't have queries". What real project doesn't (eventually) need queries? And even if writing your queries "by hand" in Java is good enough for now, what real project doesn't eventually need indices, transactions, or other features of a real database system?
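
    Concretely, a query "by hand" over plain objects looks something like the following sketch (invented names; a loop stands in for the WHERE clause and a Comparator for the ORDER BY):

    import java.util.*;

    // A hand-written "query" over plain Java objects: roughly
    // SELECT ... WHERE manager AND salary > 50000 ORDER BY name.
    class ManagerQuery {
        static class Employee {
            final String name;
            final boolean manager;
            final long salary;
            Employee(String name, boolean manager, long salary) {
                this.name = name; this.manager = manager; this.salary = salary;
            }
        }

        static List<Employee> wellPaidManagers(Collection<Employee> all) {
            List<Employee> result = new ArrayList<Employee>();
            for (Employee e : all) {                      // the WHERE clause, by hand
                if (e.manager && e.salary > 50000) result.add(e);
            }
            Collections.sort(result, new Comparator<Employee>() {  // the ORDER BY
                public int compare(Employee a, Employee b) {
                    return a.name.compareTo(b.name);
                }
            });
            return result;
        }
    }

    Every such query is code to write, index, and maintain by hand, which is exactly the objection raised above.
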

    • Re:no queries (Score:4, Insightful)

      by sql*kitten ( 1359 ) on Monday March 03, 2003 @10:20AM (#5423700)
      In other words, "it doesn't have queries". What real project doesn't (eventually) need queries? And even if writing your queries "by hand" in Java is good enough for now, what real project doesn't eventually need indices, transactions, or other features of a real database system?

      Indeed. It looks like a high-level, language-neutral API for traversing linked lists of structs. Yes, you can rip through such a structure far faster than Oracle can process a relational table, but they are two different solutions to two different problems. I wouldn't use an RDBMS for storing vertex data for a scene rendering application, and I wouldn't use an in-memory linked list for storing bank transactions!
  • by ChrisRijk ( 1818 ) on Monday March 03, 2003 @10:07AM (#5423625)
    If you need performance for persistent data, this "new" system doesn't seem to be much different at all from what you can do today. Using JDO (Java Data Objects) with a file-system backend would be about identical, though easier to use, and would have more features.

    Of course, you can always write your own persistence layer. I've done this a few times - it's very easy in Java. Map a row in the DB to an object, and cache the object in memory. If you need to fetch that data again, check the cache first. When doing a write, write to the DB and update/flush your cache as necessary.

    That's just the basics - what's most optimal depends on how your data is accessed and changed (and also your programming language and capability as a programmer). Java has really nice stuff for caching built in, like SoftReference wrapper objects, and of course threading and shared memory that you can use in production.

    I'm currently working on a super optimised threaded message board system. Almost all pages (data fetch/change + HTML generation) complete in about 0.001s.
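
    A bare-bones version of the cache described above might look like this sketch. SoftReference lets the garbage collector reclaim entries under memory pressure; loadFromDatabase() is a hypothetical stand-in for the real JDBC fetch:

    import java.lang.ref.SoftReference;
    import java.util.*;

    // Read-through cache of DB rows mapped to objects. Values are held via
    // SoftReference so the GC may evict them when memory gets tight.
    abstract class RowCache<K, V> {
        private final Map<K, SoftReference<V>> cache = new HashMap<K, SoftReference<V>>();

        synchronized V get(K key) {
            SoftReference<V> ref = cache.get(key);
            V value = (ref == null) ? null : ref.get();
            if (value == null) {                      // miss, or entry was collected
                value = loadFromDatabase(key);
                cache.put(key, new SoftReference<V>(value));
            }
            return value;
        }

        synchronized void invalidate(K key) {         // call after writing through to the DB
            cache.remove(key);
        }

        protected abstract V loadFromDatabase(K key); // the real JDBC fetch goes here
    }
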
  • by jj_johny ( 626460 ) on Monday March 03, 2003 @10:11AM (#5423653)
    Reading through the article, it seems to lack a rather small but important item: multiple systems interacting read/write with the same database. This is not a very robust or scalable way of doing things. I wonder how this stacks up against one of the normal ways of improving performance: having one read/write database with lots of read-only replicas.
  • Sourceforge Link (Score:4, Informative)

    by BoomerSooner ( 308737 ) on Monday March 03, 2003 @10:17AM (#5423686) Homepage Journal
  • by GoldTeamRules ( 639624 ) on Monday March 03, 2003 @10:34AM (#5423793)

    In 1999, I worked for a company that used an OO database (ObjectStore) to develop an e-commerce shopping portal. It was a disaster.

    OO advocates point to extremely fast (extremely special-case, in practice) queries and natural persistent object mapping as reasons why OO is superior.

    However, this is very misleading.

    Some of the MAJOR problems we ran into in using ObjectStore were:

    • It is very difficult to "see" an OO database. By nature, the data isn't tabular. It's a persistent object heap. There's no "SELECT * FROM USERS". So tracking down data-related problems involves exporting data to an XML file and sifting through it.
    • Reporting tools don't exist for OODB. Try hooking up Crystal or another reporting tool to this. You end up writing every report from scratch.
    • DB Performance when querying outside the normal object hierarchy (aggregate queries grouping on object attributes, etc.) is orders of magnitude SLOWER on an OODB! This was a huge problem when allowing users to do a product search on an e-commerce portal.
    • 32-bit memory limited our max customer size dramatically.

    When developers first consider OO databases, their first assumption is that OODBMS is to RDBMS as OOP is to Procedural Programming. This is a FALSE analogy! Migrating to OODBMS offers precious little to support better software design while introducing significant maintenance and design issues that should be considered prior to using this technology.

    Unless I had a product that had an extremely specialized use case that matched OODB strengths, I would NEVER develop on this kind of platform again.

    • by Juju ( 1688 ) on Monday March 03, 2003 @11:13AM (#5424044)
      And, like in every object system, it is very important that you get the design (and objects) right before you start coding.

      If you are thinking of accessing your objects like you would with SQL, then you haven't understood how OODBs work. As for accessing your objects and doing your queries, there are tools (like Inspector for ObjectStore) that enable you to do just that.

      In terms of performance, Oracle and co. are nowhere near what you can reach with ObjectStore, provided you designed your application well.
      The 2 main problems with OODBs are:
      - schema evolution
      - reporting
      But these can easily be solved by good design of your application.

      OODB is a skill that takes time to master. After 4 years of seeing ObjectStore applications from various companies, I can tell the difference between the ones where people knew what they were doing and those from people who didn't have a clue...
      • And, like in every object system, it is very important that you get the design (and objects) right before you start coding

        And, like in every programming project, your requirements are incomplete, so your model will be incomplete, so you need to allow for flexibility. The OO DBMSes that I have used don't allow for that flexibility (schema evolution), so we build layers on top of the OODB, just the same as we do for relational DBs. I don't see the advantage. By the time we are done optimizing a relational DB, it has all the same indexes that the OODB would have, but we were able to evolve the system instead of designing it all up front.

        I suppose I could argue for an OO DBMS if the number of transactions was high enough and the application had a static set of requirements (general ledger, trade system, etc.).

        Joe
      • by Samrobb ( 12731 ) on Monday March 03, 2003 @12:55PM (#5424773) Journal
        The 2 main problems with OODBs are:
        - schema evolution
        - reporting
        But these can easily be solved by good design of your application.

        In other words, OODB technology is doomed.

    • by leomekenkamp ( 566309 ) on Monday March 03, 2003 @11:35AM (#5424220)

      Although you certainly have a point, there are some remarks I have to make here:

      There's no "SELECT * FROM USERS".

      That's just like saying Latin is a bad language because it does not have equivalents for 'the', 'le/la', 'de/het', 'der/die/das', whatever. An rdbms is *fundamentally* different from an oodbms.

      DB Performance when querying outside the normal object hierarchy (...) is orders of magnitude SLOWER on an OODB!

      That's right: you are trying to use an oodbms as an rdbms. Ever tried to drive a car like you ride a bicycle?

      Oodbms-es are relatively new, and they have their 'problems', just like rdbms-es have theirs. But the biggest problems arise when one approaches an oodbms like one would an rdbms - just like you run into problems using an OO language when you have only ever used a procedural language.

      • Nods head (Score:3, Interesting)

        by abulafia ( 7826 )
        > DB Performance when querying outside the normal object hierarchy (...)
        > is orders of magnitude SLOWER on an OODB!

        That's right: you are trying to use a oodbms as a rdbms. Ever tried to drive a car like you ride a bicycle?

        I've still never heard a good answer to this problem, only that I'm using the wrong hammer.

        When performing activities against pure OO storage in which selectively collecting data from a (potentially large) number of objects is required, what is the OC (object-correct) way to do so? Asking each one via a method call is horrendously slow in comparison to a RDBMS. For instance, contrast "select last_activity, uid from users" to
        my %blarg;
        # one method call per user, collecting last_activity keyed by uid
        while ( my $user = $users->next() ) {
            $blarg{ $user->{uid} } = $user->{last_activity};
        }

        I suppose if one is building a product instead of managing an ongoing project, one could say that lazy access to the hash will save a little time. I still don't see the performance win, and for ad hoc access, building the methods and accessors just takes too much time to be reasonable.

        Use the right tool for the right job, I say. And usually, for managing data, an RDBMS is the right tool. For interacting with that data, OO is frequently nice.
        Please correct my incorrect notions.

    • Some of the MAJOR problems we ran into in using ObjectStore were:

      No, the MAJOR problem you ran into was trying to get RDBMS guys to understand OODBMSs, and you clearly failed.

      It is very difficult to "see" an OO database. By nature, the data isn't tabular. It's a persistent object heap. There's no "SELECT * FROM USERS". So tracking down data-related problems involves exporting data to an XML file and sifting through it.

      Well, that would be the hard way to do it. I suppose the easy way would be to take two minutes and write a small program to scour through the DB looking for the problems, but my experience with Objectstore and other OODBMSs would lead me to ask a different question -- How did the "data-related problems" get created? Write your classes with strong invariants and tightly encapsulate your data and you won't really have many such issues.

      Reporting tools don't exist for OODB.

      Actually this isn't really true, but the point is still worth addressing because the available reporting tools aren't very good. This isn't the fault of the tools, it's just a fact that it's impossible to write a general-purpose tool that can intelligently traverse arbitrarily-structured data.

      Again, the solution is: write a small program to extract the data you want to report on.

      If you need to do lots of ad-hoc queries against the database, such that writing a program each time isn't reasonable, then your usage pattern suggests an RDBMS is more appropriate.

      DB Performance when querying outside the normal object hierarchy (aggregate queries grouping on object attributes, etc.) is orders of magnitude SLOWER on an OODB!

      Unless you create indexes for those queries, of course. Ad-hoc querying is a real weakness of OODBMSs. OTOH, queries that are planned for and for which good indexes exist are orders of magnitude FASTER on an OODB! Like, three orders of magnitude faster than an RDBMS.

      32-bit memory limited our max customer size dramatically

      That is a problem if you design your database badly, but Objectstore allows you to segment your DB so that the size of your address space isn't an issue. The segmentation is completely transparent to the programmer using the objects.

      Migrating to OODBMS offers precious little to support better software design while introducing significant maintenance and design issues that should be considered prior to using this technology.

      OODBMSs have advantages and disadvantages. The advantages are:

      • Ease of initial development. No more figuring out how to map between objects and tables.
      • Code can be more object-oriented. With an RDBMS, "tableitis" tends to infect your classes.
      • Performance! Particularly with Objectstore/C++, (a) the database representation is almost identical to the in-memory representation, and (b) client-side caching means that once an object has been retrieved from the persistent store there is *zero* overhead -- using a persistent object costs *exactly* the same as using a purely in-memory object. Together these mean that a well-structured Objectstore database is hugely faster than any RDBMS.

      The disadvantages vs. RDBMSs are:

      • Ongoing development requires schema migrations, and those can be difficult. Mind you, they're not easy in an RDBMS situation either, since you have to reswizzle all your object-relational mapping stuff.
      • Ad-hoc queries are hard.
      • Getting good performance requires more design effort, particularly with page-oriented OODBMSs like Objectstore (which really act more like a specialized virtual memory system than a database).
      • Very few people understand them.

      Overall, OODBMSs shine when your primary need is for an "application working store" more than a "database", and when you need maximum performance and minimum time to market (assuming you have staff that knows the tool). If you need ad-hoc queries you can still use an OODBMS, but you will want to export the data to a relational DB for query purposes.

      Actually, that's a very nice solution to many problems, IMO. Use an OODBMS as your high-performance working store, and periodically export the data to a relational "data warehouse" for ad-hoc queries and data mining. This means that you still have to implement and maintain an object-relational mapping, but it's much easier to manage a one-way mapping than a bi-directional mapping.

      The system described in the article is fine for some environments, I'm sure, but a high-quality OODBMS would be just as fast, more robust and would allow you to use databases that won't fit in RAM.

  • by digerata ( 516939 ) on Monday March 03, 2003 @10:37AM (#5423805) Homepage
    The first problem I see with this method is the lack of a powerful and flexible querying mechanism. One of the most powerful features of SQL databases is their capability for searching. Nowhere in the article did I see anything about advanced querying of the objects. Even if there is such a thing, I'm sure it's nowhere near as fast as MySQL or Oracle. The author states that it is several orders of magnitude faster, but I bet it is that much faster only on fetch routines where you already know what object you are looking for.

    Here's the issue they are trying to solve: mapping objects to records. That's it. Now, the problem with removing the records/database is that you lose all of the searching power that is inherent in relational databases. The author states that the codebase is 350 lines of code. How can any complex search engine be implemented in 350 lines of code that also covers persistence?

    • The first problem I see with this method is the lack of a powerful and flexible querying method.

      Maybe I don't understand this well enough (the Prevayler site is down), but if this is really a database based upon objects, and you can access them as normal objects, then any good programmer can build a "powerful and flexible querying method." You can write your own hashtables, searching functions, or whatever.

      One of the most powerful features of SQL databases is their capability for searching. Nowhere in the article did I see anything about advanced querying of the objects.

      Because they probably didn't put any searching routines into Prevayler. From the SourceForge page: "Ridiculously simple, Prevayler provides transparent persistence for PLAIN Java objects." You write the searching routines.

      Even if there is, I'm sure it's nowhere near as fast as MySQL or Oracle. The author states that it is several orders of magnitude faster, but I bet it is this much faster only on fetch routines where you already know what object you are looking for.

      Ever hear of hash codes and hash tables? You write the code yourself. How do you think MySQL and Oracle do it? They have code which does the searches. With this system you cut out the middleman. It'll have its own weaknesses and strengths, so every manager will have to decide if this system will fill their needs.

      At first glance, I see two weaknesses and two strengths to this system. Weaknesses: a) you'll have to be more of a programmer to implement a database. b) the database has to be small enough to fit in memory. Strengths: a) infinitely flexible. b) really fast for anything which will fit in RAM.

      Web hosting services won't want this. (they usually have many customers, and all their databases won't fit in RAM at once.) Big e-commerce sites won't want this for their customer databases. (again, probably won't fit in RAM) They may be able to use it for their product data, unless it's really huge--such as Barnes and Noble. I'm sure it'll be quite usable for most small businesses. The need for a programmer may seem like a huge obstacle, but I'm sure if Object Prevalence gets big, there'll be a book called "Object Prevalence in Java for Dummies" in no time.

  • Memory is CHEAP? (Score:3, Interesting)

    by mpxcz ( 448928 ) on Monday March 03, 2003 @10:39AM (#5423828)
    How much does a 25GB hard disk cost as opposed to 25GB of RAM? Is that what you call cheap? :)
  • by Tikiman ( 468059 ) on Monday March 03, 2003 @10:44AM (#5423859)
    In fact, this concept actually predates SQL-based databases! The first one I am aware of is MUMPS (Massachusetts General Hospital Utility Multi-Programming System), which goes back to 1966. One company that continues this legacy is Sanchez [sanchez-gtm.com]. Another commercial version is Caché [e-dbms.com]. This makes sense, really - the most obvious solution to serializing an object is to store all properties of a single object together (the OO solution), rather than store a single property of all objects together (the RDBMS solution).
  • by mojorisin67_71 ( 238883 ) on Monday March 03, 2003 @10:52AM (#5423904)
    Main Memory Databases have been researched for nearly 10 years now and there are a number of commercial products. For details you can check out:
    TimesTen [timesten.com]
    Polyhedra [ployhedra.com]
    DataBlitz [bell-labs.com]

    etc..
    The idea is to have enough RAM to be able to store the whole database in memory. This gives higher performance than a fully cached Oracle for two primary reasons:
    - there is no buffer manager so data can be directly accessed.
    - the index structures use smart pointers to access the data in memory.

    Typically the data is mapped using mmap or shared memory. Each application can have the database directly mapped into its memory space.
    For persistence, main memory databases typically provide transaction logging and checkpointing so the data can be recovered. Various techniques have been developed to do this without affecting performance.
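
    In Java, the memory-mapping technique described here corresponds roughly to java.nio's MappedByteBuffer. The snippet below sketches the mechanism only; it is not how TimesTen, Polyhedra, or DataBlitz actually work:

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    // Map a data file straight into the address space and touch records in
    // place -- no buffer-manager layer between the application and the data.
    class MappedStore {
        public static void main(String[] args) throws Exception {
            RandomAccessFile file = new RandomAccessFile("records.dat", "rw");
            FileChannel channel = file.getChannel();
            MappedByteBuffer map =
                channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024 * 1024);

            map.putLong(0, 42L);          // write a record field directly in the mapping
            long value = map.getLong(0);  // read it back without a copy through a cache
            System.out.println(value);

            map.force();                  // flush dirty pages to disk (a crude checkpoint)
            channel.close();
            file.close();
        }
    }
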
  • by kriegsman ( 55737 ) on Monday March 03, 2003 @11:07AM (#5423994) Homepage
    Things I want in a persistent datastore:
    - Atomicity of transactions (commit/rollback),
    - Consistency in the enforcement of my data integrity rules,
    - Isolation of each transaction from other competing transactions (locking)
    - Durable storage that can survive a crash without losing transactions (e.g., journaling)

    My experience with RAM-centric, disk-backed object storage is that you, the developer, often have to implement the ACID features yourself, from scratch. And from-scratch implementations of complex data-integrity mechanisms tend to be time-consuming to develop and test, and often take much, much longer than you think to "get right".

    Call me old-fashioned, but I really like using data storage (database) engines that pass the ACID test and have already been debugged and debugged and debugged and debugged and debugged.

    -Mark
    • At A Previous Employer Who Shall Remain Nameless:
      (product is still on the market)

      We had a product which did something (we'll call it "X") and tracked all its information in a "database" we built in-house. The primary architect, of course, was a pretty sharp guy. He had written a whitepaper for the company stating why he thought "unix was dead", why we should not waste our time, as a company, developing "portable" products, and why we should take full advantage of Microsoft's technologies on Windows.

      As far as the ACID test goes, NONE of those elements existed in this "database" we used. Nor were there any verification, export, import, or repair tools initially available.

      As soon as this product scaled to a reasonable level (the field was always one step ahead of our test lab, as far as scaling the application goes), we started seeing weird crashes and corruption that we just could not reproduce or isolate in the lab. When the term "database corruption" was used, the architect would throw a fit and blame some other component, denying that database corruption was even possible.

      The absence of tools meant that we could not troubleshoot in the field. Developing tools was the equivalent of admitting that there was a problem. As we scaled our lab in response, we started to uncover these problems. This was when our architect resigned. His job had suddenly changed from "Technical Primadonna" to "beleaguered fixer of uncounted bugs".

      That's when we REALLY started to get into trouble.

      At some point, there was serious talk about ripping the whole database out and going to a "real" commercial database solution - some third-party thing. That was shortly before I left that job. But in the end, there was much suffering and pain, and the product lost a great deal of ground to its competitors, all due to a lack of Respect For Those Who Have Gone Before.
  • by cushty ( 565359 ) on Monday March 03, 2003 @11:17AM (#5424067)

    Some people seem to be missing the point: this is not a "database", it is a persistence mechanism. What they are saying is that persisting objects is difficult (er, I tend to disagree, but I'll bite) and so they are solving that. Whether an RDBMS offers better searching is completely irrelevant, as searching, in their architecture, is handled by the application.

    What they seem to gloss over is that you need to take snapshots of the actual data. If you didn't, you'd have to keep every single "log" in order to safely play back the actions and know you have the same data in the same state. Lose one log - say, the very first one - and you're pretty much screwed.

  • by praetorian_x ( 610780 ) on Monday March 03, 2003 @11:29AM (#5424159)
    This is not a new idea. There are all sorts of object databases out there. (Versant springs to mind).

    The main problems I see with object databases:

    1) SQL is incredibly powerful. You give up *a lot* of power when you go from sql semantics to object semantics. Sub-selects, group bys and optimized stored procedures, to name just a few things. All the object language query constructs I've seen fall far short of these. (As a side note, most O/R tools make a hash of it as well.)

    2) You immediately make a massive reduction in the number of database administrators who will be willing and/or able to help you out on your project.

    3) Scaling is always a question. With Oracle, it just isn't.

    4) Backup, redundancy, monitoring, management, etc. Most mature relational databases have very good tools for doing these infrastructure activities. Developers often forget about banal things like this, but they are crucial for the long term health of IT systems.

    Don't get me wrong. Every time I construct some nasty query and go through the mind-numbing process of moving the results into an object, I think to myself "There has to be a better way!", but I've looked at the O/R tools and the object databases out there and, sadly, I don't feel they are worth the trade-off.

    Just my opinion,
    prat
  • by Rob Riggs ( 6418 ) on Monday March 03, 2003 @11:35AM (#5424217) Homepage Journal
    The persistent store is quite language-specific. It doesn't allow a Python application to access a Java store, for instance. It also doesn't seem to allow concurrent access to data, which would require significantly more than 350 lines of code.

    Both of these issues make this solution unusable in an enterprise environment. The RAM size issue has already been mentioned by others and is another very real limitation.

    In general, object caching mechanisms are not terribly difficult to create. This generic solution proves the point by requiring only 350 lines of Java code.

    I am sure that there is something worthy in this project, I just cannot see it used for anything other than very small-scale development efforts.
  • by Lucas Membrane ( 524640 ) on Monday March 03, 2003 @11:38AM (#5424244)
    This OO scheme is a database system, but it leaves out much of the management element. (1) Things like changing the database structure without bringing the whole company down probably won't work. (2) You lose all the enforcement of the rules of relational integrity that an RDBMS gives you right out of the box. (3) And you lose Crystal Reports. (1) and (2) kill it technically in many situations, and (3) kills it management-wise.

    Gadfly, a Python package, gives you an in-memory DB and SQL. If you want to trade SQL for extra speed and do more programming, you can run the ISAM-like engines of Btrieve or Berkeley DB without the SQL layer on top. We have SQL RDBMS's because the conventional wisdom is that such a trade is not a good idea.

  • BS (Score:5, Insightful)

    by bwt ( 68845 ) on Monday March 03, 2003 @12:11PM (#5424452)
    Sorry this just won't cut it in most enterprise systems.

    1) It doesn't scale. Most enterprise databases don't fit in RAM. Data volumes grow with the capacities of hard disks, which outpace RAM. If your database fits in memory now and you use this architecture, what do you do when it grows larger than your RAM capacity? You fire the guy that proposed this and switch to an RDBMS.

    2) The performance claims are BS. Good databases already serialize net changes to redo logs via a sort of binary diff of the data block. Redo logs are usually the limiting factor on transaction throughput, since they require IO to disk. Serializing the actual commands is less efficient than using a data block diff. You simply cannot minimize the space any better than an RDBMS does, therefore you cannot minimize the IO for this serialization any better, and therefore you cannot do it faster without sacrificing ACIDity. If your performance is too good to be true, then you gave up an essential feature of the RDBMS.

    3) Consistency. If there is only one object in memory for each record, then you'll be writing a tremendous amount of custom thread-safety code, and even then either A) writers block readers and readers block writers, or B) read consistency isn't guaranteed. Either is usually unacceptable. One alternative is to clone objects at every write, which sounds slow and horribly inefficient (see the sketch after this list). Of course, this too has to be serialized, or you don't have ACIDity. And if you are serializing these, then you aren't really different from an RDBMS that uses rollback/undo, except that you are wasting disk IO and are slower.

    4) Reliability. A hardware failure, software hang/crash, or system administration mistake would force recovery from the last full backup, and replaying a full day's transactions could take hours. Sure, you could continually make a disk image, except for the read-consistency issues above. It's not clear what you even do for a daily backup. Are all sessions simply blocked during it? Ouch.
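
    To make the clone-at-every-write alternative from point 3 concrete, here is a minimal sketch (the names are hypothetical, not from any real product): readers always see an immutable snapshot, and every write pays for a full copy.

    import java.util.HashMap;
    import java.util.Map;

    // Copy-on-write store: readers never block and never see a
    // half-applied write, but every write clones the whole map.
    class SnapshotStore {
        private volatile Map snapshot = new HashMap();

        // Reads go against whatever snapshot is current.
        public Object get(Object key) {
            return snapshot.get(key);
        }

        // Writes are serialized and pay the cloning cost,
        // the part that is "slow and horribly inefficient".
        public synchronized void put(Object key, Object value) {
            Map copy = new HashMap(snapshot);
            copy.put(key, value);
            snapshot = copy;
        }
    }

    Whether that per-write copy ever comes out cheaper than an RDBMS's rollback/undo machinery is exactly the question.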

    Every few years object fanatics try to come up with some way to get rid of RDBMS's. The methods invariably rely on sacrificing some of the core capabilities of the RDBMS: data integrity, performance, consistency, ACID, reliability, etc. These "innovations" are really only of interest to OO fanatics. In the real world, OO gets sacrificed way before RDBMS's do. This is not going to change.

    OO is a tool that is good for writing maintainable code. It is not good for performance-critical uses like OSes, device drivers, and real-time systems. It is not good for data-intensive systems. These things are not likely to change. If all you can accept is OO, then you are a niche player.
    • Re:BS (Score:3, Insightful)

      by p3d0 ( 42270 )
      I agree that OO is not so good for databases, but it works well in OSes, device drivers, and realtime systems. You just need to know how to get good abstractions without sacrificing any performance, and that's not an easy skill to master.

      OO is not all about classes and jump tables. For example, you can get polymorphism in C++ without using any virtual methods at all. If you disagree, then I think your view of what constitutes OO is quite limited, and I'm not surprised you think it's a "niche player".

  • by puppetman ( 131489 ) on Monday March 03, 2003 @12:30PM (#5424603) Homepage
    As has been mentioned, it fails the ACI portion of ACID: it's not Atomic (all or nothing), not Consistent (data is always left in a consistent state), and it doesn't provide Isolation (you appear to be the only transaction running; other processes don't affect your data mid-transaction). It passes Durable, I suppose.

    I've read a few posts saying the performance claims (vs. a relational database) are not true. I think this will be much faster than a database: it is an in-memory cache, and it will be very fast. Our Oracle databases have cache-hit ratios of 98 and 99+ percent, yet they will still be slower. Why?

    First, databases (especially Oracle) do a lot of work behind the scenes, logging everything from a user connecting to the SQL being run.

    Second, this sort of thing offers nearly direct access to the data. SQL usually needs to be parsed before it is executed, and the database has to come up with an optimal query plan before it actually runs the statement. A database offers different ways of joining and accessing data. Take "find me all managers who make more than $50,000 per year and have a last name starting with K": with prevalence you have to decide the best way to get the data yourself, whereas a database does all that work for you.
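
    To make that concrete, here is a minimal sketch of what "deciding yourself" looks like. The Employee class and the full-scan loop are hypothetical; a real RDBMS might answer the same query off an index instead.

    import java.util.ArrayList;
    import java.util.List;

    class Employee {
        String lastName;
        boolean manager;
        int salary;
        Employee(String lastName, boolean manager, int salary) {
            this.lastName = lastName;
            this.manager = manager;
            this.salary = salary;
        }
    }

    public class ManualQuery {
        public static void main(String[] args) {
            List employees = new ArrayList();
            employees.add(new Employee("Kowalski", true, 62000));
            employees.add(new Employee("Smith", true, 75000));
            employees.add(new Employee("Klein", false, 51000));

            // The "query plan" is whatever loop you wrote: here, a full scan.
            // An optimizer might instead pick an index on salary or lastName.
            for (int i = 0; i < employees.size(); i++) {
                Employee e = (Employee) employees.get(i);
                if (e.manager && e.salary > 50000
                        && e.lastName.startsWith("K")) {
                    System.out.println(e.lastName);
                }
            }
        }
    }

    Ad hoc queries like this are trivial to write, but every new access path means new hand-written code, which is exactly the work the optimizer does for you in an RDBMS.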

    This is a great idea, though, for a middle-tier cache. Say you want to do some fast searching on a small amount of data: you can use this in the middle tier and save yourself the trip to the database.

    A good object-oriented database that has not been mentioned yet is Matisse [fresher.com].
  • by Master of Transhuman ( 597628 ) on Monday March 03, 2003 @12:36PM (#5424649) Homepage
    Somebody has figured out that things in memory are faster than disk...

    After twenty years, we finally get to...

    the in-memory database!

    Oh wait, didn't my Atari ST have that?

  • OK (Score:4, Interesting)

    by anthony_dipierro ( 543308 ) on Monday March 03, 2003 @12:42PM (#5424690) Journal
    How is this any different from using a journaling filesystem and mmap?
  • MOO (Score:4, Interesting)

    by zerOnIne ( 128186 ) on Monday March 03, 2003 @01:16PM (#5424922) Homepage
    MOO has been doing this very thing for years, and it actually draws a lot of criticism for it. Keeping a persistent image of objects around and checkpointing it at set intervals doesn't really seem to be that big a deal, though it is cool to have bindings for all of those languages. But really, what's the big deal? (An honest question, not a flame.)
  • race conditions? (Score:4, Insightful)

    by vsync64 ( 155958 ) <vsync@quadium.net> on Monday March 03, 2003 @03:11PM (#5425725) Homepage
    Is anyone else bothered by the complete lack of the synchronized keyword in his example code? The ChangeUser Command can apparently be interrupted between these two lines:

    usersMap.remove(user.getLogin());
    usersMap.put(user.getLogin(), user);

    Meanwhile, someone else can run an AddUser Command with the same username. Guess what happens when ChangeUser gets to that second line?

    Maybe when this radical new concept in databases can be presented in a way that avoids race conditions, I'll pay a little more attention...
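
    For what it's worth, the minimal fix looks something like the following. User, usersMap, and the command bodies here are stand-ins for the article's example, not Prevayler's real API; the point is just that both commands must hold the same lock.

    import java.util.HashMap;
    import java.util.Map;

    class User {
        private final String login;
        User(String login) { this.login = login; }
        String getLogin() { return login; }
    }

    class UserStore {
        private final Map usersMap = new HashMap();

        // ChangeUser: the remove and the put now happen atomically.
        synchronized void changeUser(User user) {
            usersMap.remove(user.getLogin());
            usersMap.put(user.getLogin(), user);
        }

        // AddUser holds the same lock, so it cannot sneak in
        // between the remove and the put above.
        synchronized void addUser(User user) {
            if (usersMap.containsKey(user.getLogin())) {
                throw new IllegalStateException("login taken: " + user.getLogin());
            }
            usersMap.put(user.getLogin(), user);
        }
    }

    (If the framework funnels all commands through a single thread, the lock becomes unnecessary, but nothing in the quoted snippet shows that.)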

  • Congratulations! (Score:3, Informative)

    by I Am The Owl ( 531076 ) on Monday March 03, 2003 @07:18PM (#5428077) Homepage Journal
    You've invented an Object-Oriented database! Wowee zowie! Wait, what's that? You say this is nothing new? Well, you're [sleepycat.com] right [odbmsfacts.com]. Of course it's faster than an Oracle database stored in RAM. Oracle is not designed for the purpose of storing objects. It's a relational database, which is something else entirely.
  • The bad old days (Score:3, Insightful)

    by InnovATIONS ( 588225 ) on Tuesday March 04, 2003 @12:37AM (#5430417)
    Why did DBMSs really come about? It was not because of a need for secure transactions or to store a lot of data, although those are obviously necessary qualities of a DBMS.

    Before DBMSs, applications stored their data in very efficient data stores designed just for that application, but those stores were worthless for anything else and hard to upgrade or extend without breaking or rewriting the existing application.

    DBMSs were developed so that data could be kept in an application-independent store that could be used and extended for new applications without breaking everything that came before.

    DBMSs were never designed to be more efficient than the application-specific data stores they replaced, so somebody saying they can build a faster custom data store for one particular application is missing the point entirely.
