
Prevayler Quietly Reaches 2.0 Alpha, Bye RDBMS? 444

ninejaguar asks: "Slashdot did an article on an Open Source product called Prevayler, which could theoretically resolve all the problems associated with OO's rough courtship with Relational databases. Slashdot covered Prevayler when it was still 1.x. Despite fear, doubt, and memory concerns, it has reached 2.0 alpha. Is anyone currently using this non-database solution in production? If so, has it sped development because of the lack of OO-to-RDBMS complexity? Was there a significant learning curve to speak of? The LGPL'd product could be incorporated into proprietary commercial software, and few might know about it. Is anyone considering using it in a transactional environment where speed is the paramount need? And, are there any objections to using Prevayler that haven't been answered at the Prevayler wiki? Would those who use MySQL find Prevayler to be a better solution because it's tiny (less than 100kb), 3000 times faster and is inherently ACID compliant?" Update: 09/24 19:25 GMT by C: Quite a few broken links, now fixed.

"We've used relational databases for years despite incompatibilities in SQL implementation. Accessing them from an OOP paradigm has been so tedious, that Object-Relational mapping technologies have sprouted all over the Open Source landscape. Some competing examples and models are Hibernate, OJB, TJDO, XORM, and Castor; which in turn have supporting frameworks such as Spring and SQLExecutor. Because SQL is the dominant form of interfacing with the data in an RDBMS, there's now a specification to offer it a friendlier OO face.

Most of the above, including the SQL-variants, arguably appear to add yet another layer of complexity (even if only at the integration level) where they should be taking complexity away. These solutions are put together by some very smart people, but it's inescapable to get that feeling someone is missing the forest (simple answer) because all the trees (incompatible models) are in the way. If there are so many after-the-fact solutions attempting to simplify relational database access and manipulation from OO, isn't it reasonable to think that there is something generally wrong with trying to cobble-together two disparate concepts with what are essentially high-caliber hacks? Is Prevayler a better way?"

This discussion has been archived. No new comments can be posted.

  • From the faq (Score:5, Interesting)

    by FreeLinux ( 555387 ) on Tuesday September 23, 2003 @08:13PM (#7039188)
    Although MySQL is about 2.8 times faster for transaction processing, Prevayler retrieves 100 objects among one million 3251 TIMES FASTER than MySQL via JDBC, even when MySQL has all data cached in RAM!

    Now many different people will interpret this statement differently. But, my interpretation is that under very specific circumstances it is 3000 times faster than MySQL. However, for transaction processing, which is the primary and most common role of a RDBMS, MySQL is 2.8 times faster than Prevayler.

    Bottom line... For normal circumstances, MySQL is faster and many commercial SQL database servers are faster still.
  • by jonabbey ( 2498 ) * <jonabbey@ganymeta.org> on Tuesday September 23, 2003 @08:14PM (#7039192) Homepage

    Hell no! Are you stupid?

    I might be.. we use an in-memory Java objectbase solution for Ganymede.. when coupled with an on-disk transaction log, we get extremely high performance, transactional integrity, and the ability to run wherever there's a Java VM.

    While I wouldn't claim that this kind of solution will scale indefinitely, it works very well indeed for our application and our dataset size.

    It's not Prevayler, though, so I can't comment on its API or code quality.

  • by Elwood P Dowd ( 16933 ) <judgmentalist@gmail.com> on Tuesday September 23, 2003 @08:16PM (#7039202) Journal
    Their jingoism is absurd.

    "We have some features that are better than relational databases. RELATIONAL DATABASES SHOULD NEVER BE USED AGAIN."

    In their wiki, "When Should I Not Use Prevalence" [prevayler.org] lists three things:
    When you do not know how to program.
    When you cannot afford enough RAM to contain all your Business Objects.
    When you cannot find a Java Virtual Machine that is robust enough.

    That's obviously the stupidest thing ever. How about, "When I don't have the time and money to connect all my company's SQL DB stuff to your java stuff." Obviously my scenario encompasses a whole lot more users than their three, and perhaps explains why no one is using their product.

    What assholes.
  • SQLite (Score:3, Interesting)

    by Electrum ( 94638 ) <david@acz.org> on Tuesday September 23, 2003 @08:21PM (#7039239) Homepage
    SQLite [hwaci.com] is tiny, fast [hwaci.com] and ACID compliant. SQLite is a public domain embedded SQL database library. It is similar to BDB, but provides a complete SQL database.
  • by TheSunborn ( 68004 ) <mtilstedNO@SPAMgmail.com> on Tuesday September 23, 2003 @08:30PM (#7039314)
    Can they roll back? Their documentation is poor (or I could just not find it), but based on the description I found on the site, rollback is not possible.
  • by pVoid ( 607584 ) on Tuesday September 23, 2003 @08:33PM (#7039335)
    Something else that makes me think "is that even possible"...

    Zero bugs. Ever.

    No one has yet found a bug in Prevayler in a Production release. Who will be the first?

    link [prevayler.org].

    I didn't know people still made such bold assertions past the dot-com era.

  • by MaxTardiveau ( 629919 ) on Tuesday September 23, 2003 @08:43PM (#7039412) Homepage
    Consistency? Admittedly, most databases barely address that (domain checks and referential integrity are only a start).

    Not having a SQL interface closes so many avenues... For a nifty (and remarkably solid) light-weight database, take a look at McKoi [mckoi.com].

    Max [integrity-logic.com]

  • Thank you for the clarification.

    I'm sure that, with its entire database cached in memory, MySQL's performance *does* improve dramatically. But it's still using maladapted algorithms for doing all of its queries -- the poor algorithms think they're reading blocks from a disk, not pages from memory. And of course every time the MySQL process performs a "disk" read, control reverts to the kernel and the process sacrifices the rest of its quantum. So MySQL will never be able to take full advantage of the fact that it's running with a memory backing store.

    My point was simply that Prevayler is naturally going to be much faster than a MySQL database, because one of them is dealing with RAM and the other is dealing with disk. Hence, comparing Prevayler and MySQL performance is like comparing apples and oranges.

    We're agreed that Prevayler is very useful, provided that your application fits its memory and reliability constraints. Thus, instead of distracting us with how much faster their orange is than the MySQL apple, they should spend their time evangelizing the "orange" way of doing things.
  • by RevAaron ( 125240 ) <revaaron AT hotmail DOT com> on Tuesday September 23, 2003 @09:23PM (#7039695) Homepage
    No incompatibility per se, but it's a rough analogy. People often use an SQL RDBMS along with some layer in between it and their OO language, the layer "flattening" objects into rows in the database, and then dynamically reinstantiating the row into an object in that language.

    This is all fine and dandy when you have simple data- a "Person" object filled with nothing but integers and strings flattens fine into an SQL row. But then again, you're kind of just using that object as something not much more than a C struct.

    However, when you have more active objects- which generally arise when you are actually doing good OO design and programming- the interface falls apart some. What if I've got my data objects pointing to other complex objects, rather than just elemental/translatable types like strings, numbers, and dates? What if the state of object A depends directly on what is going on in object B?

    In an OO-to-RDBMS translation system, the objects are essentially dead while in storage. Object A can point to Object B - another row in a different table, holding data from a different class - but it is dead.

    A good object oriented database, like GemStone (for Java or Smalltalk) allows the objects to remain "alive" while in storage. I have never used or read much about this Prevayler db, so I don't know where it lies. There are some database systems which claim to be an "OODB," but are little more than an RDBMS and an object translation layer built in.

    Not all applications benefit from the OODB methodology, but there are plenty which do. For an OO system which saw some decent design, an OODB is often a good fit. If all of your data is relatively simple- for instance, a customer and parts database on an e-commerce site- an SQL database will probably fit well enough.
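The "flattening" described in this comment can be sketched roughly as follows. This is a hypothetical `Person` example, not the API of any particular mapping layer: simple fields map cleanly onto a row, but the object reference to `manager` must be reduced to a key, losing the "live" link between objects.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of O/R "flattening": a live object graph reduced to flat rows.
public class FlattenDemo {
    static class Person {
        String name;
        int age;
        Person manager; // reference to another "live" object
        Person(String name, int age, Person manager) {
            this.name = name; this.age = age; this.manager = manager;
        }
    }

    // Flatten a Person into a row (column -> value). The manager reference
    // is reduced to a key; once on disk, the link is no longer "alive".
    static Map<String, Object> toRow(Person p) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("name", p.name);
        row.put("age", p.age);
        row.put("manager_name", p.manager == null ? null : p.manager.name);
        return row;
    }

    public static void main(String[] args) {
        Person boss = new Person("Ada", 48, null);
        Person dev = new Person("Bob", 30, boss);
        System.out.println(toRow(dev)); // {name=Bob, age=30, manager_name=Ada}
    }
}
```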
  • Counter Hypothesis (Score:2, Interesting)

    by ogren ( 32428 ) on Tuesday September 23, 2003 @09:24PM (#7039703) Homepage

    From the Wiki:

    Prevalent Hypothesis:
    That there is enough RAM to hold all business objects in your system.

    IMO, this hypothesis doesn't hold water for 99% of business applications. Prevayler proposes that even if this hypothesis doesn't hold today, it might soon hold true because of breakthroughs in memory technology [prevayler.org]. But historically these kinds of predictions have been wrong: the real world has clearly shown that memory and disk needs grow even faster than the hardware's ability to supply them.

    Sure, I can take a 1998 application and run it with breakthrough performance by caching the whole application in memory. But I don't want a 1998 application, I want a 2003/2004 application. Do you think the entire Slashdot comment database would fit into a couple of gigs of memory? How about the entire customer database for an average enterprise?

    Frankly, if my database is so small that it fits into memory then I probably don't have performance as a primary concern anyway. Plus, do I really want to subject my entire database to constant mark and sweep garbage collection if performance is an issue?

    --
    ogren
    .no sig

  • Re:Project Promotion (Score:5, Interesting)

    by C10H14N2 ( 640033 ) on Tuesday September 23, 2003 @09:37PM (#7039774)
    ...the quasi-religious-cult sanctimonious denigration crap doesn't do much to convince either, as if storing tabular enterprise data in - GASP - "tables" is such a terrible thing to do, indicative of a lower order of being in need of spiritual and philosophical cleansing and release from porridge-feeding in prison.

    [Question: In SQL] Given a table of employees, and a table of offices, it's very easy to find those employees who don't have offices, and those offices who don't have employees. How can I do this in a consistent manner in Prevayler?

    [Their answer:]

    Trivial. But how about a query over polymorphically evaluated methods based on the current millisecond?

    Talking of "trivial": you can already persist your Java objects into the same store as all your other data and access them either through EJBs, simple entity beans, or straight SQL. IMHO, that is a far more useful model than locking everything into a pile of serialized object collections accessible only to a handful of Java gurus who have better things to do than write reams of application logic in order to tell Jim-Bob in accounts payable that he owes the janitor fifty bucks. If it takes less SQL than it would take to write the necessary includes in Java, and I can still map everything into Java objects when necessary, why give up 80% of your functionality to get only what is already there? Performance? Please. When the effort of generating a simple report exceeds the price of the CPU, just buy another CPU and save on labor. Besides, after being categorically insulted by these self-satisfied pricks, I wouldn't use their product if it would end hunger and bring about world peace.
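For what it's worth, the "employees without offices" query quoted above really does become plain collection code once all the business objects live in memory. The record types below are hypothetical illustrations, not Prevayler's API:

```java
import java.util.List;
import java.util.Objects;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of an in-memory "query": no SQL, just collection traversal.
public class InMemoryQuery {
    record Employee(String name, String officeId) {} // officeId may be null
    record Office(String id) {}

    static List<String> employeesWithoutOffices(List<Employee> emps) {
        return emps.stream()
                   .filter(e -> e.officeId() == null)
                   .map(Employee::name)
                   .collect(Collectors.toList());
    }

    static List<String> officesWithoutEmployees(List<Office> offices, List<Employee> emps) {
        Set<String> occupied = emps.stream()
                                   .map(Employee::officeId)
                                   .filter(Objects::nonNull)
                                   .collect(Collectors.toSet());
        return offices.stream()
                      .map(Office::id)
                      .filter(id -> !occupied.contains(id))
                      .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Employee> emps = List.of(new Employee("Jim", "o1"), new Employee("Bob", null));
        List<Office> offices = List.of(new Office("o1"), new Office("o2"));
        System.out.println(employeesWithoutOffices(emps));          // [Bob]
        System.out.println(officesWithoutEmployees(offices, emps)); // [o2]
    }
}
```

The trade-off, as other comments note, is that any index needed to make such traversals fast must be built and maintained by hand.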
  • Re:From the faq (Score:5, Interesting)

    by j3110 ( 193209 ) <samterrell&gmail,com> on Tuesday September 23, 2003 @10:56PM (#7040244) Homepage
    I guess in your development model you write more than you read?

    I bet you're even going to say that you run more transactions than queries.

    Finding/Reading/Loading data is always going to be the key to the best performance. Transactions are second to that. 3000X faster at reading affords it a lot of breathing space when it comes to transactions because everyone visiting slashdot is reading, but only a few post. This is the real world model.

    I'm skeptical when it comes to easily retrieving and indexing data for reporting. You need to be able to query aggregate (yes, this is easy enough to code) but you need to join related objects. Do I have to do my own query planning/joining? What's the transaction API like? Do I lock objects then update them? What about deadlock? (Easy enough to avoid by anyone smart enough to know any of the algorithms like having to sort the order in which you lock resources, but still a hassle.) So many questions!
  • by davi_slashdot ( 653133 ) on Tuesday September 23, 2003 @11:29PM (#7040414) Homepage Journal
    I really appreciate the idea of a RAM-targeted object repository layer, but as often happens, the Java guys tend to present their solutions as panaceas for every single application domain that slightly resembles the problems they are attacking. I am really curious how Prevalence compares to more naive in-RAM data storage approaches. Beyond drawing attention to the large quantities of unused memory in high-end servers (and to the project itself), I simply can't see the point of comparing it with MySQL.

    Furthermore, I can't help noticing the curious culture that has emerged with Java whereby you always have a single application/application instance per machine. The dedicated server concept came about to give reliability and CPU power, but WTF, do I really need a dedicated server for every application I run?
  • Bubble Memory Wanted (Score:2, Interesting)

    by Tablizer ( 95088 ) on Wednesday September 24, 2003 @01:30AM (#7041029) Journal
    A good object oriented database, like GemStone (for Java or Smalltalk) allows the objects to remain "alive" while in storage.

    I have concluded that what OO aficionados really want is the concept of bubble memory. Bubble memory was being heavily researched in the late '80s and promised a kind of RAM that would keep its contents even when the power was turned off. You only needed power to change state, not to keep state.

    This is more or less what Prevayler-like tools seem to do: give one the illusion of bubble memory.

    Sure, we would all like bubble memory, but it seems that OO needs it more because it depends on behavior as the primary communications conduit between system/application parts, whereas relational approaches use data and meta-data for such communication. It is easier to recreate data messaging between boot cycles than behavior.

    I don't know what happened to bubble memory. It never seemed to take off as promised.
  • by telbij ( 465356 ) on Wednesday September 24, 2003 @01:56AM (#7041127)
    Great post, thanks for saving me the time. It's amazing how the hype of OOP has caused so many programmers to just believe that it's 'better' without any kind of understanding of what OOP really provides. Now it's to the point that people are ready to blindly replace RDBMSs with Object-Oriented Databases without stopping to think why RDBMSs have been such a great tool to begin with.

    Citing incompatibility due to SQL variations is the biggest red herring I've ever seen. Designing a relational database to store company data has always been a much more straightforward process than designing a system of classes. The importance of keeping your data clean and flexible cannot be overstated. If you have good data you can build unlimited apps on top of it without running into the kind of brick walls that rigid object structures can impose. Sure, even normalized databases sometimes make tradeoffs based on how they will be used, but you aren't cutting off possibilities at anywhere near the rate you are the minute you try to cram your data into a class tree.
  • Not neccessarily (Score:3, Interesting)

    by Simon ( 815 ) * <simon@simonzoneS ... com minus distro> on Wednesday September 24, 2003 @02:38AM (#7041239) Homepage
    You are assuming that when Prevayler restarts it reapplies the "half transaction" and corrupts the data. What is more likely to happen is that Prevayler applies the transaction log, sees that the last transaction is incomplete/corrupt, and then stops. Result: a consistent DB state. (Actually, the state just before the software failure.)

    --
    Simon
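The recovery behaviour Simon describes can be sketched like this. The toy checksum scheme and the names below are made up for illustration; they are not Prevayler's actual log format. Replay stops at the first record that fails its integrity check, leaving the state as of the last complete command:

```java
import java.util.List;

// Sketch of transaction-log replay that tolerates a corrupt/truncated tail.
public class LogReplay {
    // A log record: a command payload plus a trivial integrity check.
    record Record(int delta, int checksum) {
        boolean intact() { return checksum == delta * 31; }
    }

    // Replay commands in order; a record that fails its check marks the
    // point where a crash interrupted the write, so we stop there.
    static int replay(List<Record> log) {
        int state = 0;
        for (Record r : log) {
            if (!r.intact()) break; // corrupt tail: keep pre-crash state
            state += r.delta;
        }
        return state;
    }

    public static void main(String[] args) {
        List<Record> log = List.of(
            new Record(10, 310), new Record(5, 155),
            new Record(7, 999)); // last write interrupted mid-flush
        System.out.println(replay(log)); // 15 -- consistent pre-crash state
    }
}
```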

  • Ok, I'll bite... (Score:3, Interesting)

    by DCowern ( 182668 ) * on Wednesday September 24, 2003 @05:20AM (#7041773) Homepage

    Ok, I'll bite. I don't get it.

    I've developed several medium-sized projects using OO languages interfacing with an RDBMS. One used MSSQL, one used Oracle, and a couple used MySQL. (Just to be a bit more precise: by medium-sized I mean a few tens of thousands of lines of code and up to 50 concurrent users per location to which the application is deployed.) I've never seen a "rough courtship with Relational databases" or a problem with "OO-to-RDBMS complexity", and I would not call developing OO applications that interface with SQL databases "tedious".

    Secondly, the incompatibilities can easily be addressed by creating a database connection class. The SQL implementations aren't so different that you can't pass the implementation name (e.g. MSSQL, ORA, MYSQL) to the constructor, or add some #ifdefs (or the equivalent) in appropriate places, and pass the right implementation-specific SQL based on that. If you do it well enough, you can reuse this class across projects. At least SQL is somewhat standardized (haha, I know). By using Prevayler (which sounds like some 12-year-old AOLer's screen name, btw), you're locked into one vendor. What if, for example, I have a client who can't/won't run this software? There are enough SQL variants to make pretty much anyone happy with the platform.

    The poster talks about a lot of high-caliber hacks... if a developer understands what an RDBMS is supposed to do, they don't need to resort to "hacks" (I was expecting an example or some useful supporting information from the poster's link... unfortunately, I was sadly mistaken). An RDBMS is a way of storing and retrieving data. The implementation of the RDBMS should have no impact on the design of a program. If it makes an OO zealot feel warm and fuzzy, they can just think of the entire RDBMS as a giant object sitting off in never-never land with connect and query methods (at the most basic level, at least).

    Lastly, I see a lot of hot air on the Prevayler website. Things talking about freedom from query languages, support for any algorithm, etc. They don't seem to have much information behind these statements. They don't say exactly how data is stored, yet they seem to claim that Prevayler will organize your data in the most optimal fashion, since they seem to think DBAs are evil people who, for no reason whatsoever, come up with wicked database designs intended to make developers miserable. These kinds of omissions scare me. I don't trust projects just because they're open source. (Note: if this information is available, it wasn't obvious. Even if these guys are the most awesome OO developers ever, they suck at presenting information.)

    What is the big deal here? Seriously... I'm just not seeing the problem.

    I'm sorry if something like this has been posted before but I really didn't see anything covering these questions. Also, if it sounds like I'm a bit mean in this post, it's because as I was writing the post, I tried to answer my own questions through the Prevayler website but was unable to find any useful information -- in fact, I ended up with more unanswered questions than I started with. In short, zealotry combined with lack of information makes me an unhappy camper.
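The vendor-aware connection class suggested in the comment above could be sketched roughly like this. The vendor strings and the `limitClause` method are illustrative only; a real version would also centralize JDBC URLs, driver names, and other per-vendor quirks in one place:

```java
// Sketch of a per-vendor SQL dialect class: one place to hide the
// implementation-specific differences the comment mentions.
public class DbDialect {
    private final String vendor;

    public DbDialect(String vendor) { this.vendor = vendor; }

    // Example of a per-vendor difference: row-limiting syntax.
    public String limitClause(int n) {
        switch (vendor) {
            case "MYSQL": return "LIMIT " + n;
            case "MSSQL": return "TOP " + n;          // goes after SELECT
            case "ORA":   return "WHERE ROWNUM <= " + n;
            default: throw new IllegalArgumentException("unknown vendor: " + vendor);
        }
    }

    public static void main(String[] args) {
        System.out.println(new DbDialect("MYSQL").limitClause(10)); // LIMIT 10
    }
}
```

Reusing one such class across projects is what keeps the rest of the application code vendor-neutral.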

  • Re:Why Java? (Score:3, Interesting)

    by LizardKing ( 5245 ) on Wednesday September 24, 2003 @07:35AM (#7042224)

    That same argument again? It does matter what language you write in, even though you can still get appalling code in any language if you don't know what you're doing.

    I have yet to see a good reason why Perl cannot be used _effectively_ for anything other than a small or stopgap system.

    Perl encourages bad coding style in the same way that C++ does. Both languages have a number of idioms for coding the same thing, and that often leads to highly "personalised" code. I have seen too many projects where Perl was chosen as the implementation language, and then the whole thing became unmaintainable once the original coder(s) moved on. In fact I used to make a living from going into companies as a contractor and fixing such abortions. The fact that I was in such high demand suggests this is a common problem. Ultimately I got sick of doing it, as it's not a very satisfying job. I then expunged the word "Perl" from my CV.

    In contrast, Java enforces a more limited set of programming idioms. Some people rail against this, claiming (wrongly in my view) that the limited syntax and lack of things like a preprocessor make coding considerably harder. However I see far less spaghetti code written in Java than I ever saw Perl programmers produce. Ditto for C compared to C++.

    Perl may be a good choice as long as you can enforce good coding standards and peer review, but how many typical software houses do that? I've worked at a large number, ranging from small to massive, and very few have such policies; even fewer actually enforce them.

    Now you're most likely going to come back with the tired "you're blaming bad programming practices on Perl". But the reality is most companies have bad programming practices, and Perl exacerbates them.

    Chris

    (Who once believed Perl was the panacea to everything, but nowadays isn't foolish enough to think that either it or Java is).

  • Re:From the faq (Score:3, Interesting)

    by julesh ( 229690 ) on Wednesday September 24, 2003 @08:13AM (#7042395)
    What's the transaction API like? Do I lock objects then update them? What about deadlock? (Easy enough to avoid by anyone smart enough to know any of the algorithms like having to sort the order in which you lock resources, but still a hassle.)

    It's pretty simple. You create command objects to represent each kind of transaction you can perform. They all implement a fairly simple interface. You present them to the persistence layer, which executes them one at a time. No locking is necessary. No deadlocks are possible.

    In fact, it's so simple that I see no need to use their code to do it. In one of my applications I recently used the same model with my own implementation, because I didn't want to use Java's serialization API (I find a self-implemented system that doesn't rely on reflection is generally much faster).
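The command-object model described above might look roughly like this. The names (`Bank`, `Deposit`, `executeOn`) are made up for illustration and are not Prevayler's real API; the point is that serial execution of serializable commands removes the need for locking:

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Sketch of the command pattern behind prevalence-style persistence.
public class CommandDemo {
    static class Bank implements Serializable {
        Map<String, Integer> balances = new HashMap<>();
    }

    // Every update is a serializable command; logging the command stream
    // is what makes the in-memory state recoverable.
    interface Command extends Serializable {
        void executeOn(Bank bank);
    }

    static class Deposit implements Command {
        final String account;
        final int amount;
        Deposit(String account, int amount) { this.account = account; this.amount = amount; }
        public void executeOn(Bank bank) { bank.balances.merge(account, amount, Integer::sum); }
    }

    // The persistence layer would append each command to a log before
    // executing it; here we just execute them one at a time.
    static synchronized void execute(Bank bank, Command c) { c.executeOn(bank); }

    public static void main(String[] args) {
        Bank bank = new Bank();
        execute(bank, new Deposit("jim", 100));
        execute(bank, new Deposit("jim", 50));
        System.out.println(bank.balances.get("jim")); // 150
    }
}
```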
  • Re:very naive (Score:3, Interesting)

    by julesh ( 229690 ) on Wednesday September 24, 2003 @08:52AM (#7042688)
    While I have to agree with some of the things you're saying (like RDBMSs are the way to go for most real world applications), I have to disagree with some things you've said:

    They claim there's no need for two-phase commit (2pc), as though the only systems they need to interact with are (or will be) prevaylor.

    The design decision they've made regarding transaction implementation is kind of orthogonal to their storage decision, although it is a design that works better in memory than it does on disk.

    But, rather than 'begin transaction, update some stuff, check some stuff, if it's all OK commit', they do 'prevent any updates that would change the outcome of the precondition; evaluate the precondition; if it's true, do the updates; allow other transactions to be processed'. Mathematically speaking, these two are equivalent[*], although the latter is a more difficult way of thinking about the problem and will slow development down.

    The whole "let's keep it in RAM" is cute, and for a lot of projects is probably all you need, but for any kind of large data set you just can't buy enough RAM to hold it all. Once it goes to disk, there's a whole new set of problems.

    The only real problems I can see are 'how do we reliably store this on disk?' and 'how do we load it back transparently?'. Admittedly, that's a big problem, but it's not one that hasn't been addressed before.

    [*]: I can't prove this. It just feels like they must be.
  • Re:Ok, I'll bite... (Score:3, Interesting)

    by julesh ( 229690 ) on Wednesday September 24, 2003 @09:05AM (#7042796)
    I think you've probably missed the more relevant parts of their site. It is a bit of a mess.

    Prevayler isn't a database server. It's an object storage library. So it's not a matter of a client not wanting to run it; you link it into your own app and manage your own data with it. The only reasons a client couldn't run it would be (a) no support for Java (unlikely) or (b) not enough RAM (which is the big problem).

    Prevayler stores all of your objects in RAM while you're working on them, and at the same time maintains an (apparently) reliable disk backup. To do this, it uses the standard Java object serialization API. To maintain consistency, it requires you to perform all updates through serializable command objects. So you end up with two files; one is a snapshot of all the objects in your store at some point in time, the other is a log of all the commands executed since that point in time.

    The code is, apparently, only 335 lines long. It is very easy to understand. There can be no vendor lockin because of this simplicity. I looked at the version 1 code about a year ago and probably remember enough about it that I could write a program to load in the stored objects.

    The "organising data in a more optimal fashion" is probably simply a reference to the fact that you can use it with any serialisable Java objects, so you can have references between objects that don't need index lookups, use most of the standard Java classes to organise your data (so you're not constrained to tree indices, you can use hashtables or arrays or whatever you think most appropriate), stuff like that. Basically, because it does less for you, you have more choice.
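The snapshot half of the two-file scheme described above can be sketched with standard Java object serialization. This is a simplified illustration, not Prevayler's actual code, and `snapshot`/`restore` are made-up names; in real use the bytes would be written to and read from the snapshot file:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;
import java.util.HashMap;

// Sketch of snapshotting an object graph via Java serialization.
public class SnapshotDemo {
    static byte[] snapshot(Serializable graph) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(graph); // whole graph, references included
            }
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @SuppressWarnings("unchecked")
    static <T> T restore(byte[] bytes) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (T) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        HashMap<String, Integer> store = new HashMap<>();
        store.put("answer", 42);
        HashMap<String, Integer> recovered = restore(snapshot(store));
        System.out.println(recovered.get("answer")); // 42
    }
}
```

Recovery is then snapshot restore plus replay of the commands logged since the snapshot was taken.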

  • by nicestepauthor ( 307146 ) on Wednesday September 24, 2003 @10:10AM (#7043443) Homepage
    I would tend to agree that the website oversells Prevayler, but it does get your attention. I used Prevayler in a project for work where we wanted to distribute a decision support system using Java Web Start. The system downloads data from a central server and works with it offline, producing reports and charts. It is meant for laptop users who will not always be connected to a network.

    Without Prevayler we would have had to install a DBMS on each client machine. With Prevayler we were able to avoid that and create a fully cross-platform system in 100% Java, entirely deployed through Java Web Start.

    Since this project we have used Prevayler on other systems where the amount of data involved did not justify buying an Oracle license. We have even used it on the server in a few cases.

    Prevayler will never replace Oracle here or anywhere else, but it IS useful and well worth checking out.

  • by plaurila ( 243726 ) on Wednesday September 24, 2003 @12:01PM (#7044715)
    From my weblog: [freeroller.net]

    Could loading all your objects into memory, and keeping them there, possibly work? That's what the folks at www.prevayler.org think. Breakthroughs in memory technology, cheaper RAM, 64-bit computing, better JVMs -- all these instances of technological progress, we are told, ought to finally make it possible to keep all your domain objects in main memory. The advantages are obvious. Systems become simpler. The need for caches is all but eliminated. Performance can be enhanced dramatically as there is no longer need for expensive disk access or network communication with the database. No longer are you limited by the query language of your database; use whatever query mechanism that best suits your needs.
    Wonderful! But, does it work in practice?

    1. Cheaper RAM. While it is true that RAM has been getting cheaper and cheaper, the same is true of hard disks. A look at the web site of a nearby computer shop reveals that a 512MB RAM unit costs 84 euros; an 80GB hard disk inflicts the same amount of monetary drain. This two-orders-of-magnitude difference is not expected to disappear anytime soon. Therefore, for large amounts of data, even if you could buy enough RAM, would you really want to?

    2. 64-bit computing and JVM support. Extensions like Intel Extended Server Memory Architecture aside, 32-bit computers can address only 4 gigabytes of data. In practice the number for an individual application is less, and for a Java application even more so. Java heaps on x86 platforms were restricted to approximately 1.5GB last I looked. This is not enough for many enterprise applications. It is not nice to be running at 1GB heap utilization, knowing that users are creating new objects all the time, fast...

    64-bit architectures get rid of the 4GB barrier, thus carrying with them a promise of prevalence nirvana. Right now, though, all 64-bit systems are stratospherically expensive. Have you seen the prices of those Itanium 2 boxes? AMD promises to come to the rescue with x86-64, but the practical use of this architecture for Java applications is still a little way off, since there isn't even a Java VM for x86-64 yet (though one has been promised for 2004).

    3. Simplicity. I'd love to get rid of caches. Caches suck! And don't even get me started on all that pesky O/R mapping stuff. Without a query language, though, there is a distinct possibility that the magic box full of promises will turn out to be half-empty. Writing queries in SQL is, well, convenient. If you have to manually construct hashtable-based indexes, then the value proposition of object prevalence diminishes. Fortunately, there are object query languages, though none of them seems to be widely accepted as a standard. However, there is no obstacle in principle to such standardization taking place.

    4. Performance. Queries from a hashtable-based index are fast. For updates the situation is different, as updates have to be written to the command log, and all in-memory indexes have to be updated as well. If such indexes are not maintained, databases could, for a huge data set, turn out to be faster, even though they often have to access the disk to get to the data. The big performance advantage I see coming from object prevalence is in the area of very complex, deeply linked data structures: you can build them and still not worry, given the right combination of hashtables.

    Conclusion? If an application deals with lots of data, or if the amount of data grows fast or unpredictably, object prevalence won't be viable until 64-bit architectures establish themselves in the marketplace (which should take a couple more years). Regardless, you can look for subsets of data that could be kept in memory in their entirety. The more often that data is accessed, the more performance benefits will accumulate. For example, in a server-side business application, it might be possible to
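The hand-maintained index trade-off from point 4 above can be sketched like this (a hypothetical `Customer` example): reads through the secondary index are a single hash lookup, but every write must update both the primary store and the index.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a manually maintained secondary index over in-memory objects.
public class IndexedStore {
    record Customer(String id, String city) {}

    private final Map<String, Customer> byId = new HashMap<>();          // primary store
    private final Map<String, List<Customer>> byCity = new HashMap<>();  // secondary index

    void add(Customer c) {
        byId.put(c.id(), c);
        // Index upkeep: this is the work an RDBMS would do for you.
        byCity.computeIfAbsent(c.city(), k -> new ArrayList<>()).add(c);
    }

    List<Customer> inCity(String city) {
        return byCity.getOrDefault(city, List.of()); // O(1) lookup
    }

    public static void main(String[] args) {
        IndexedStore store = new IndexedStore();
        store.add(new Customer("c1", "Oslo"));
        store.add(new Customer("c2", "Paris"));
        System.out.println(store.inCity("Oslo").size()); // 1
    }
}
```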
