
Prevayler Quietly Reaches 2.0 Alpha, Bye RDBMS?

ninejaguar asks: "Slashdot covered the Open Source product Prevayler back when it was still 1.x; it could theoretically resolve all the problems associated with OO's rough courtship with relational databases. Despite fear, doubt, and memory concerns, it has reached 2.0 alpha. Is anyone currently using this non-database solution in production? If so, has it sped development because of the lack of OO-to-RDBMS complexity? Was there a significant learning curve to speak of? The LGPL'd product can be incorporated into proprietary commercial software, and few might know about it. Is anyone considering using it in a transactional environment where speed is the paramount need? And are there any objections to using Prevayler that haven't been answered at the Prevayler wiki? Would those who use MySQL find Prevayler a better solution because it's tiny (less than 100 KB), claims to be 3000 times faster, and is inherently ACID compliant?" Update: 09/24 19:25 GMT by C: Quite a few broken links, now fixed.

"We've used relational databases for years despite incompatibilities in SQL implementation. Accessing them from an OOP paradigm has been so tedious that Object-Relational mapping technologies have sprouted all over the Open Source landscape. Some competing examples and models are Hibernate, OJB, TJDO, XORM, and Castor, which in turn have supporting frameworks such as Spring and SQLExecutor. Because SQL is the dominant form of interfacing with the data in an RDBMS, there's now a specification to offer it a friendlier OO face.

Most of the above, including the SQL variants, arguably add yet another layer of complexity (even if only at the integration level) where they should be taking complexity away. These solutions are put together by some very smart people, but it's hard to escape the feeling that someone is missing the forest (a simple answer) because all the trees (incompatible models) are in the way. If there are so many after-the-fact solutions attempting to simplify relational database access and manipulation from OO, isn't it reasonable to think that there is something generally wrong with trying to cobble together two disparate concepts with what are essentially high-caliber hacks? Is Prevayler a better way?"

This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward on Tuesday September 23, 2003 @06:53PM (#7039040)
    Is that really possible? How do you even benchmark that?
    • by Anonymous Coward on Tuesday September 23, 2003 @06:57PM (#7039070)
      here [prevayler.org]
      • From the faq (Score:5, Interesting)

        by FreeLinux ( 555387 ) on Tuesday September 23, 2003 @07:13PM (#7039188)
        Although MySQL is about 2.8 times faster for transaction processing, Prevayler retrieves 100 objects among one million 3251 TIMES FASTER than MySQL via JDBC, even when MySQL has all data cached in RAM!

Now many different people will interpret this statement differently. But my interpretation is that under very specific circumstances it is 3000 times faster than MySQL. However, for transaction processing, which is the primary and most common role of an RDBMS, MySQL is 2.8 times faster than Prevayler.

Bottom line... under normal circumstances, MySQL is faster, and many commercial SQL database servers are faster still.
        • Re:From the faq (Score:5, Interesting)

          by j3110 ( 193209 ) <samterrell @ g m a i l.com> on Tuesday September 23, 2003 @09:56PM (#7040244) Homepage
          I guess in your development model you write more than you read?

I bet you're even going to say that you run more transactions than queries.

Finding/reading/loading data is always going to be the key to the best performance. Transactions are second to that. Being 3000x faster at reading affords it a lot of breathing space when it comes to transactions, because everyone visiting Slashdot is reading but only a few post. That is the real-world model.

I'm skeptical when it comes to easily retrieving and indexing data for reporting. You need to be able to run aggregate queries (yes, that's easy enough to code), but you also need to join related objects. Do I have to do my own query planning/joining? What's the transaction API like? Do I lock objects and then update them? What about deadlock? (Easy enough to avoid by anyone smart enough to know the standard algorithms, like sorting the order in which you lock resources, but still a hassle.) So many questions!
          • Re:From the faq (Score:3, Interesting)

            by julesh ( 229690 )
            What's the transaction API like? Do I lock objects then update them? What about deadlock? (Easy enough to avoid by anyone smart enough to know any of the algorithms like having to sort the order in which you lock resources, but still a hassle.)

It's pretty simple. You create command objects to represent each kind of transaction you can perform. They all implement a fairly simple interface. You present them to the persistence layer, which executes them one at a time. No locking is necessary. No deadlock.
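The command-object scheme described above can be sketched in a few lines of Java. This is a toy illustration; the class and method names are made up, not Prevayler's actual API:

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// The system's state: a plain object graph held entirely in memory.
class Bank implements Serializable {
    final Map<String, Long> balances = new HashMap<>();
}

// Every write is a serializable command object. A real prevalence layer
// would log the command to disk before executing it, so the log can be
// replayed after a crash.
interface Transaction extends Serializable {
    void executeOn(Bank bank);
}

class Deposit implements Transaction {
    final String account;
    final long amount;
    Deposit(String account, long amount) { this.account = account; this.amount = amount; }
    public void executeOn(Bank bank) {
        bank.balances.merge(account, amount, Long::sum);
    }
}

class Prevalence {
    private final Bank system = new Bank();
    // Executing commands one at a time (here, via synchronized) is what
    // makes explicit locking inside the object graph unnecessary.
    synchronized void execute(Transaction t) {
        // a real implementation would serialize t to the journal here, then:
        t.executeOn(system);
    }
    synchronized long balance(String account) {
        return system.balances.getOrDefault(account, 0L);
    }
}
```

Serial execution is the design choice being described: since only one command ever touches the object graph at a time, there is nothing to deadlock on.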
    • by pVoid ( 607584 ) on Tuesday September 23, 2003 @07:33PM (#7039335)
      Something else that makes me think "is that even possible"...

      Zero bugs. Ever.

      No one has yet found a bug in Prevayler in a Production release. Who will be the first?

      link [prevayler.org].

      I didn't know people still made such bold assertions past the dot-com era.

      • by lokedhs ( 672255 ) on Wednesday September 24, 2003 @03:26AM (#7041624)
        In short: No it's not.

The long answer is: yes, it's a lot faster at retrieving data. It's about as fast as looking up an object in an in-memory array, because that's what it does. However, they are comparing apples to oranges.

Prevayler is a very neat piece of software. It's very simple, and really not hard to implement. The problem is that the people behind Prevayler seem to think that it's the be-all and end-all of databases. It's not.

What these people fail to mention is that you lose something that a lot of people find very useful in SQL databases: select. Yes people, that's right. It doesn't provide any way of searching or joining tables. In fact, you don't even have tables. All Prevayler really is is a method to encapsulate all data modifications in an action object which is persisted to disk, so that they can be re-played whenever the system starts after a crash. All data accesses are performed just like any other memory read, the same way you access any other object. Of course it'll be faster than accessing an SQL database.
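The log-and-replay mechanism the parent describes can be sketched roughly as follows. This is an illustration of the idea only, using plain Java serialization and made-up names, not Prevayler's real journal format:

```java
import java.io.*;
import java.nio.file.*;

// Every mutation is a serializable command; the journal is just a file of
// serialized commands in execution order.
interface Command extends Serializable {
    int applyTo(int state);
}

class Add implements Command {
    final int n;
    Add(int n) { this.n = n; }
    public int applyTo(int state) { return state + n; }
}

class Journal {
    // Append commands to the journal (simplified: one stream per batch).
    static void append(Path log, Command... cmds) throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(Files.newOutputStream(log))) {
            for (Command c : cmds) out.writeObject(c);
        }
    }

    // After a crash or restart, rebuild the state from scratch by
    // re-executing every logged command in its original order.
    static int replay(Path log) throws IOException, ClassNotFoundException {
        int state = 0;
        try (ObjectInputStream in =
                 new ObjectInputStream(Files.newInputStream(log))) {
            while (true) state = ((Command) in.readObject()).applyTo(state);
        } catch (EOFException endOfJournal) {
            return state;
        }
    }
}
```

The state here is just an int for brevity; in a real system it would be the whole business-object graph, which is why replay works only if commands are deterministic.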

The Prevayler people are very good at twisting the truth to create some amazing benchmark results. It's a good technology, but the attitude of the developers is slightly irritating.

    • In the prevalent design, every transaction is represented as a serializable object which is
      atomically written to the queue (a simple log file) and processed by the system.
      Taken from: Here [prevayler.org]

      And I thought Quantum Computing was still part of the distant future...
  • Project Promotion (Score:5, Insightful)

    by Bingo Foo ( 179380 ) on Tuesday September 23, 2003 @06:55PM (#7039051)
    I'm all for trying to use Slashdot to promote your pet project, but don't couch your story in questions about people's use of your admittedly relatively unknown software.
    • by Anonymous Coward
      I agree; the phrasing is pretty off-putting, especially with all the answers and assumptions presumed in the posing of the questions themselves.

      Leave that stuff for the "Messages for Marketdroids; Nerds that Natter" website.
    • Perhaps it should be changed from the lightweight-louie-vs-the-data-behemoth dept. to the shameless plug dept.?
    • Re:Project Promotion (Score:5, Interesting)

      by C10H14N2 ( 640033 ) on Tuesday September 23, 2003 @08:37PM (#7039774)
...the quasi-religious, sanctimonious denigration crap doesn't do much to convince either, as if storing tabular enterprise data in - GASP - "tables" is such a terrible thing, indicative of a lower order of being in need of spiritual and philosophical cleansing and release from porridge-feeding in prison.

[Question: In SQL] Given a table of employees and a table of offices, it's very easy to find those employees who don't have offices, and those offices that don't have employees. How can I do this in a consistent manner in Prevayler?

      [Their answer:]

Trivial. But how about a query over polymorphically evaluated methods based on the current millisecond?

Talk about "trivial": you can already persist your Java objects into the same store as all your other data and access them either through EJBs, simple entity beans, or straight SQL. IMHO, that is a far more useful model than locking everything into a pile of serialized object collections accessible only to a handful of Java gurus who have better things to do than write reams of application logic in order to tell Jim-Bob in accounts payable that he owes the janitor fifty bucks. If it takes less SQL than it would take to write the necessary includes in Java, and I can still map everything into Java objects when necessary, why bother giving up 80% of your functionality to get only what is already there? Performance? Please. When the effort of generating a simple report exceeds the price of the CPU, just buy another CPU and save on labor. Besides, after being categorically insulted by these self-satisfied pricks, I wouldn't use their product if it would end hunger and bring about world peace.
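For concreteness, the employees-without-offices query from the wiki exchange above is a one-line outer join in SQL; against an in-memory object graph you write the anti-join by hand. A rough Java sketch (all class and field names are hypothetical):

```java
import java.util.*;
import java.util.stream.*;

// In SQL this is roughly:
//   SELECT e.name FROM employee e
//   LEFT JOIN office o ON o.occupant_id = e.id
//   WHERE o.id IS NULL;
// In an object graph, the relationship is a pointer, and the "query" is a scan.
class Office {
    final String room;
    Office(String room) { this.room = room; }
}

class Employee {
    final String name;
    final Office office;  // null means no office assigned
    Employee(String name, Office office) { this.name = name; this.office = office; }
}

class Queries {
    // Hand-written anti-join: employees whose office pointer is null.
    static List<String> employeesWithoutOffices(Collection<Employee> employees) {
        return employees.stream()
                        .filter(e -> e.office == null)
                        .map(e -> e.name)
                        .collect(Collectors.toList());
    }

    // The reverse direction needs its own scan: offices no employee points at.
    static List<String> vacantOffices(Collection<Office> offices,
                                      Collection<Employee> employees) {
        Set<Office> occupied = employees.stream()
                                        .filter(e -> e.office != null)
                                        .map(e -> e.office)
                                        .collect(Collectors.toSet());
        return offices.stream()
                      .filter(o -> !occupied.contains(o))
                      .map(o -> o.room)
                      .collect(Collectors.toList());
    }
}
```

Each direction of the join is separate code, which is the "80% of your functionality" the parent is complaining about giving up.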
      • by baka_boy ( 171146 ) <lennon@@@day-reynolds...com> on Wednesday September 24, 2003 @12:12AM (#7040935) Homepage
        First off, SQL is an idealized, functional language for querying large datasets -- in theory, you can run any imaginable query, but in reality, you can't touch an un-indexed field on most production databases unless you've got *lots* of horsepower to burn, and very patient (read: non-existent) users. Personally, I find that there are a pretty large number of queries that are also pretty difficult to write in SQL.

        Really, it's not that different from the procedural-vs-functional or static-vs-dynamic typing issues in language design: a holy war that no one's going to win, except for the developers who learn to adopt the best pieces from each side without making dire claims about the imminent death of either side.
  • by Godeke ( 32895 ) * on Tuesday September 23, 2003 @06:58PM (#7039071)
This would be great for projects where interoperability isn't an issue or only occurs via edge connections like SOAP. However, I generally would be wary of a "database" which is only accessible in Java, via a unique interface. What do you do with your Crystal Reports users? How do I get this into data cubes for analysis?

Frankly, this is simply a persistence layer with some nice properties. It *isn't* a database. A database stands at the center of your applications and makes itself available to as wide an audience as possible. It shouldn't limit your choice of tools in such an absurd manner.
    • by Elwood P Dowd ( 16933 ) <judgmentalist@gmail.com> on Tuesday September 23, 2003 @07:16PM (#7039202) Journal
      Their jingoism is absurd.

      "We have some features that are better than relational databases. RELATIONAL DATABASES SHOULD NEVER BE USED AGAIN."

      In their wiki, "When Should I Not Use Prevalence" [prevayler.org] lists three things:
      When you do not know how to program.
      When you cannot afford enough RAM to contain all your Business Objects.
      When you cannot find a Java Virtual Machine that is robust enough.

      That's obviously the stupidest thing ever. How about, "When I don't have the time and money to connect all my company's SQL DB stuff to your java stuff." Obviously my scenario encompasses a whole lot more users than their three, and perhaps explains why no one is using their product.

      What assholes.
    • could theoretically resolve all the problems associated with OO's rough courtship with Relational databases
      Can someone enlighten me here? There's an incompatibility between OO and RDBMS? How so?
      • by RevAaron ( 125240 ) <revaaron@hotmail. c o m> on Tuesday September 23, 2003 @08:23PM (#7039695) Homepage
        No incompatibility per se, but it's a rough analogy. People often use an SQL RDBMS along with some layer in between it and their OO language, the layer "flattening" objects into rows in the database, and then dynamically reinstantiating the row into an object in that language.

This is all fine and dandy when you have simple data - a "Person" object filled with nothing but integers and strings flattens fine into an SQL row. But then again, you're using that object as little more than a C struct.

        However, when you have more active objects- which generally arise when you are actually doing good OO design and programming- the interface falls apart some. What if I've got my data objects pointing to other complex objects, rather than just elemental/translatable types like strings, numbers, and dates? What if the state of object A depends directly on what is going on in object B?

In an OO-to-RDBMS translation system, the objects are essentially dead while in storage. Object A can point to Object B - another row in a different table, which holds data from a different class - but it is dead.

        A good object oriented database, like GemStone (for Java or Smalltalk) allows the objects to remain "alive" while in storage. I have never used or read much about this Prevayler db, so I don't know where it lies. There are some database systems which claim to be an "OODB," but are little more than an RDBMS and an object translation layer built in.

        Not all applications benefit from the OODB methodology, but there are plenty which do. For an OO system which saw some decent design, an OODB is often a good fit. If all of your data is relatively simple- for instance, a customer and parts database on an e-commerce site- an SQL database will probably fit well enough.
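The "flattening" described above, and where it strains, can be sketched in a few lines of Java (all names here are made up for illustration):

```java
import java.util.*;

// Elemental fields (strings, numbers) map straight onto columns, so
// flattening a simple object into a row is mechanical.
class Person {
    String name;
    int age;
    Person manager;  // an object reference: this is where flattening strains

    // Flatten this object into a column-name -> value map (one table row).
    Map<String, Object> toRow() {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("name", name);
        row.put("age", age);
        // A reference can only be stored as a foreign key; the "live"
        // object on the other end must be re-instantiated on load, and
        // any behavior coupling between the two objects is lost in storage.
        row.put("manager_name", manager == null ? null : manager.name);
        return row;
    }
}
```

The `manager` field is the crux of the parent's point: while the data sits in the row, Object A's pointer to Object B is just a key, and the objects are "dead" until the mapping layer rebuilds them.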
  • by PCM2 ( 4486 ) on Tuesday September 23, 2003 @06:59PM (#7039073) Homepage
    Prevayler Quietly Reaches 2.0 Alpha, Bye RDBMS?
    "Quietly," eh?
  • Here's why (Score:5, Insightful)

    by Anonymous Coward on Tuesday September 23, 2003 @07:01PM (#7039090)
A direct quote from your wiki:

    Prevalence requires us to have enough RAM on our servers to contain all our business objects.

    What do you do if you have a nice big data set that won't fit in memory? Businesses my company works with have millions of customers. Do they have to have all those gigabytes of data in memory to do anything?

    Don't come at me with yet another memory resident thing that's supposed to be the greatest ever, when it doesn't come close to addressing the real needs of a database user.
    • by zekt ( 252634 ) on Tuesday September 23, 2003 @07:32PM (#7039333)
      Yeah we have a backup strategy... we have another machine sitting next to it with about 20 gig of memory..
    • .. you swap !
    • by davi_slashdot ( 653133 ) on Tuesday September 23, 2003 @10:29PM (#7040414) Homepage Journal
I really appreciate the idea of a RAM-targeted object repository layer, but as often happens, the Java guys tend to present their solutions as panaceas for every single application domain that slightly resembles the problems they are attacking. I am really curious how Prevalence compares to more naive in-RAM data storage approaches. While it does draw attention to the large quantities of memory going unused in high-end servers (and to the project itself), I simply can't see the point of comparing it with MySQL.

      Furthermore, I can't avoid noticing the curious culture that has emerged with Java that you always have a single application/application instance for each machine. The dedicated server concept came to give reliability and CPU power, but WTF, do I really need a dedicated server to every application I run?
    • Oh, no problem. I'll inform my (contract) employer that they'll have a much faster system when they switch. The only hangup is that they'll need a box with 30 gig of RAM to do it.

      Tell me again why this is better than SQLServer using a data file on a RAMdisk?
      No, really. Tell me how this is better than a Microsoft solution, 'cause the astroturfing article didn't do it yet.

  • Whoa wtf (Score:5, Funny)

    by Breakfast Pants ( 323698 ) on Tuesday September 23, 2003 @07:02PM (#7039099) Journal
I'm glad I'm not a subscriber right now, 'cause if you paid to have the advertisements removed, I don't think it would have caught this one.
  • by Xeger ( 20906 ) <slashdotNO@SPAMtracker.xeger.net> on Tuesday September 23, 2003 @07:05PM (#7039128) Homepage
    If you could cache an entire MySQL database in RAM, I'm sure your MySQL performance would improve dramatically.

    If you could then optimize MySQL's search routines for working on memory instead of disk blocks, your MySQL performance would improve even more dramatically. As it is now, MySQL must go through all sorts of contortions, probably using B-Tree-like structures for indexing, and other fanciful datatypes I can't even conceive of without a PhD. The reason for all of this silliness is the fact that MySQL's backing store is disk, not memory.

In a prevalence system like Prevayler, one of the fundamental tenets of the system is that ALL of your objects are ALWAYS in memory, and are only serialized to disk when they change, for persistence.

    So...yes: as always, a memory-based system will be three orders of magnitude (or more!) faster than a disk-based system. Prevayler vs. DBMS is no exception to this rule.

But when your website has grown popular, your prevalence database has swelled to 30 gigs and you find yourself hosting it on massive systems with 12 gigs of core memory and another 30 gigs of swap space - when your memory access times are starting to look like disk access times because of swapping - well, don't come crying to me.

Prevalence is a brilliant solution for small projects. But it only scales to the size of your physical memory, or slightly (50-100%) larger. You can't expect it to scale beyond that.
    • If you could cache an entire MySQL database in RAM, I'm sure your MySQL performance would improve dramatically.

      For what it's worth, 5th sentence on the Prevayler web site [prevayler.org] is:

      These hoax-like results are obtained even with the Oracle and MySQL databases fully cached in RAM and on the same physical machine as the test VM.

      That said, I think the Prevayler developers would agree with you. They are suggesting Prevayler as a good solution in situations where you:

      • have enough RAM to hold all your business objec
      • by Xeger ( 20906 ) <slashdotNO@SPAMtracker.xeger.net> on Tuesday September 23, 2003 @07:47PM (#7039437) Homepage
        Thank you for the clarification.

        I'm sure that, with its entire database cached in memory, MySQL's performance *does* improve dramatically. But it's still using maladapted algorithms for doing all of its queries -- the poor algorithms think they're reading blocks from a disk, not pages from memory. And of course every time the MySQL process performs a "disk" read, control reverts to the kernel and the process sacrifices the rest of its quantum. So MySQL will never be able to take full advantage of the fact that it's running with a memory backing store.

        My point was simply that Prevayler is naturally going to be much faster than a MySQL database, because one of them is dealing with RAM and the other is dealing with disk. Hence, comparing Prevayler and MySQL performance is like comparing apples and oranges.

        We're agreed that Prevayler is very useful, provided that your application fits its memory and reliability constraints. Thus, instead of distracting us with how much faster their orange is than the MySQL apple, they should spend their time evangelizing the "orange" way of doing things.
  • Broken link (Score:4, Informative)

    by Otter ( 3800 ) on Tuesday September 23, 2003 @07:06PM (#7039134) Journal
    Correct link for the previous Slashdot article [slashdot.org] -- important, since it provides the lucid, straightforward explanation of what the hell this thing actually is that today's pointless-link-filled, cutesy bit of Astroturfing neglects to offer.
  • by OYAHHH ( 322809 ) on Tuesday September 23, 2003 @07:07PM (#7039136)
I'm not sure what the proper term for the articles that appear on Slashdot should be, but I can say that this one is about the worst I've ever seen in terms of promoting a pet project.

Slashdot needs to go one step further with its moderating functionality.

Slashdot needs a way for crappy articles like this one to be moderated into the bit bucket so I don't have to see them anymore!
  • Why Java? (Score:2, Insightful)

    by Fred Nerk ( 128328 ) *
I'd be very interested in this, except for the single fact that it's Java.

    I may be offending some people here, but I hate Java. After having worked with it for a couple of years I hate it even more.

    I've often had the need to store objects in other languages (Perl, PHP) and I'd have to say I've had a bit of difficulty mapping the data into an RDBMS, but not enough trouble to make me want to switch languages.

    Actually, I don't have a huge problem with the Java language itself, just that it has to run in a
    • Re:Why Java? (Score:3, Insightful)

      by Xeger ( 20906 )
      Prevayler doesn't map data into an RDBMS. Prevayler removes the RDBMS entirely.

      In Prevayler, your "database" is represented natively as a collection of objects, which reference each other by holding pointers to each other, just like any other God-fearing object in core memory. It builds indices, also in memory, to support query operations on this big collection of objects. The whole shebang is periodically serialized to disk, to make it persistent between invocations of your application.

      So: Prevayler is n
      • How do you effectively query that thing? OO links are not efficient to follow unless you set up hashmaps all over the place which would make your object model absolutely horrible to maintain...
      • I disagree with Prevayler's approach to "replacing" RDBMS' for the simple fact that it fails the ACID test.
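One conventional answer to the "hashmaps all over the place" objection above is to keep the indices inside the prevalent system itself and maintain them in the same commands that mutate the data, so the domain objects stay clean. A rough sketch with made-up names, not Prevayler's actual API:

```java
import java.util.*;

// The prevalent system owns both the data and its indices; nothing
// outside this class ever touches the maps directly.
class UserDirectory {
    private final Map<Long, String> usersById = new HashMap<>();
    private final Map<String, Set<Long>> idsByCity = new HashMap<>();

    // The command that adds a user also maintains every index, so a
    // later lookup never has to scan the whole object graph.
    void addUser(long id, String name, String city) {
        usersById.put(id, name);
        idsByCity.computeIfAbsent(city, c -> new TreeSet<>()).add(id);
    }

    String nameOf(long id) {
        return usersById.get(id);
    }

    // An indexed "query": constant-time map lookup instead of a scan.
    Set<Long> userIdsIn(String city) {
        return idsByCity.getOrDefault(city, Collections.emptySet());
    }
}
```

This doesn't answer the maintainability complaint completely - every new query shape still means hand-writing and hand-maintaining another index - but it does keep the hashmaps out of the domain objects themselves.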
    • Really?

      Have you found that all Oracle datatypes were supported in your Perl DB driver? They weren't when I last used it. And forget trying to catch deadlock or timeout exceptions consistently and concisely.

      In fact, the quality and completeness of common JDBC drivers is probably one of Java's biggest assets.

      Java generally runs pretty quickly these days - can you point to a benchmark we can try? The Jikes and Borland compilers are very quick - did you try either of those? And JDK 1.4 has full regexp functi
  • by inertia187 ( 156602 ) * on Tuesday September 23, 2003 @07:08PM (#7039146) Homepage Journal
    I'm just going to assume their site uses Prevayler to store the page counter and revision information at the bottom of the linked page. Here's what I saw (times are Pacific):

    [16:47] : This page has 10151 hits and 60 revisions
    [16:50] : This page has 10156 hits and 60 revisions
    [16:52]*: This page has 10170 hits and 60 revisions
    [16:53] : This page has 10194 hits and 60 revisions
    [16:54] : This page has 10220 hits and 60 revisions
    [16:55] : This page has 10261 hits and 60 revisions
    [16:56] : This page has 10311 hits and 60 revisions
    [16:57] : This page has 10353 hits and 60 revisions
    [16:58] : This page has 10413 hits and 60 revisions
    [16:59] : This page has 10454 hits and 60 revisions
    [17:00] : This page has 10503 hits and 60 revisions
    [17:01] : This page has 10539 hits and 60 revisions
    [17:02] : This page has 10578 hits and 60 revisions
    [17:03] : This page has 10623 hits and 60 revisions
    [17:04] : This page has 10666 hits and 60 revisions
    [17:05] : This page has 10713 hits and 60 revisions
    [17:06] : This page has 10748 hits and 60 revisions
    [17:07] : This page has 10792 hits and 60 revisions


    * - The Mysterious Future

Oh well. It did seem to peak around 16:58, but I guess Slashdot users really didn't click on that link all that much. Too bad, that would have been a fun ad-hoc test.
  • by tstoneman ( 589372 ) on Tuesday September 23, 2003 @07:11PM (#7039172)
    Please don't go around saying, "Could this be the end of RDBMSes?" That is just a crock of shit, and it really bugs the hell out of me. How stupid do you think we are to have a tagline like that?

Watch "Bowling for Columbine": it argues that this type of exaggeration by the media, most notably the US media, is driving Americans crazy paranoid with fear. It seems like the editors have fallen for the same thing, and it should stop now!

    Let's have a modicum of dignity and avoid all the hyperbole, please. We are all somewhat tech-savvy, please treat us with the respect that we deserve!

I'm sure the technology is good, but for crying out loud, mainframes are still around 40 years later. RDBMSes are going nowhere for the next 20+ years, until true AI comes around.
  • by vlad_petric ( 94134 ) on Tuesday September 23, 2003 @07:16PM (#7039200) Homepage
The durability constraint of ACID implies that each transaction must be written to disk before the commit returns. This is why I don't buy the 3000x-faster claim - you can certainly make everything else go fast, but you'll still need one disk access per transaction (granted, a DB could try to coalesce two commits into one write, but that still won't fix much; furthermore, the DBs that I know of simply don't do it).

If they just benchmarked reads... then the results don't tell us much.
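The per-commit disk cost described above, and the group-commit coalescing idea, can be sketched like this. The class is hypothetical; the sync call is the standard `FileDescriptor.sync()`:

```java
import java.io.*;
import java.nio.file.*;

// A commit is only durable once the journal bytes have actually reached
// stable storage, so each commit must end with a sync. Group commit
// amortizes that cost by syncing once for a whole batch.
class DurableJournal implements Closeable {
    private final FileOutputStream out;

    DurableJournal(Path path) throws IOException {
        out = new FileOutputStream(path.toFile(), true);  // append mode
    }

    // One sync per commit: the slow-but-safe baseline the parent describes.
    void commit(byte[] record) throws IOException {
        out.write(record);
        out.getFD().sync();  // block until the OS confirms the write is on disk
    }

    // Group commit: several records, one sync, one disk access for the batch.
    void commitBatch(byte[]... records) throws IOException {
        for (byte[] r : records) out.write(r);
        out.getFD().sync();
    }

    public void close() throws IOException {
        out.close();
    }
}
```

If a "3000x faster" benchmark never pays this sync per write (or never writes at all), it is measuring something other than durable transactions.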

  • Zope (Score:4, Informative)

    by SlightOverdose ( 689181 ) on Tuesday September 23, 2003 @07:19PM (#7039228)
    This sounds a lot like the Zope ZODB. For those who are still stuck in the stone age, Zope is a python based application server that essentially uses object serialisation to store its data.

Imagine if every page in your website was an object - with methods, properties, Access Control, etc. You have different classes for different types of documents, and each document internally knows how to render itself. The ZODB is essentially one big persistent object-oriented namespace - you don't have to parse your data into SQL and back again; it's always just there, eliminating a huge amount of work (and bugs!). Having worked with it for a year, I can certainly testify that it is leaps and bounds over relational databases for most things.

    • Re:Zope (Score:3, Informative)

      by Evan ( 2555 )
      Like the ZODB, except that the ZODB has pluggable storage backends, so that your objects can live in a FileStorage, DirectoryStorage, OracleStorage, etc. Except that ZODB has true transactions, with rollback and two-stage commit (allowing transactions to span other systems, such as an RDBMS). Oh, and except that the ZODB doesn't force all of your objects to live in memory at once; you set the cache size and it dynamically loads and unloads objects as necessary. Oh yeah, it also has ZEO for client-server
  • SQLite (Score:3, Interesting)

    by Electrum ( 94638 ) <david@acz.org> on Tuesday September 23, 2003 @07:21PM (#7039239) Homepage
    SQLite [hwaci.com] is tiny, fast [hwaci.com] and ACID compliant. SQLite is a public domain embedded SQL database library. It is similar to BDB, but provides a complete SQL database.
  • Maybe we don't really need a better match between OO and RDBMS? If OO has such a strong impedance mismatch with Relational technology perhaps it's OO that is really flawed and not RDBMS?

    It once struck me that one could hypothetically develop a language that is table based. Not a language that is table friendly but a language that is built on tables. Sort of like Lisp treats everything as a list this language would treat everything as tables and have a built in capability to evaluate and modify tables. Of

I'm not sure about this, because I'm not exactly sure how the program works, but from what I understand, the main whoop about it is that it simply uses RAM as a hard drive to greatly increase speeds. I saw some users on here talking about how it would end up being slow, especially for millions of users, where most of the data on those users would just sit in RAM and eat up space. Well, wouldn't any well designed operating system swap out the memory pages that haven't been accessed in ages? It sort
  • by jadavis ( 473492 ) on Tuesday September 23, 2003 @07:34PM (#7039341)
    I'm not about to give up PostgreSQL.

    An RDBMS:
    * Allows a wide variety of applications to operate on the data, in a wide variety of languages, at different times or simultaneously.
    * Allows you to manipulate the data inside the database before you get it.
    * Allows for a lot more storage, which is sometimes important when you need the memory for some other task.

    What I like about an RDBMS (like postgres) is that my requirements constantly change. I try something for a while, then we have better ideas, but we need to work with our existing data. An RDBMS allows me a huge amount of flexibility with my data (also the reason I don't use MySQL...) and I've been able to drastically change the way my application works while still making use of the data that I have.

Maybe this is an OK database for some applications where you have the entire thing laid out in a perfect spec with no chance of a change. However, when I need to get at my data with another language, or it takes up more space than I thought, or I figure out that my application needs to change the way it works, I have no clue where to begin with a huge collection of "objects" that now happen to be obsolete. If I have relations, I have a solid representation of my data that an RDBMS can manipulate efficiently according to a fairly mature mathematical model. I'll take that over a collection of persistent objects any day.

    That said, I think this has application in areas where you just need some persistent objects, and they need to be fast, and they don't take up much room. I don't encounter that very often, and when I do I usually just use postgres because it seems fast enough when the tables are cached. I suppose if the objects are really intricate this would be a nice system, because you wouldn't have to spend so much code on a mapping. It just seems so much more narrow as far as usefulness.
  • by Lysol ( 11150 ) * on Tuesday September 23, 2003 @07:34PM (#7039345)
    I tried Prevayler and even loaded in 50k records from an old product table and object-ized them.

    Yah, it was fast. Searches were great. I even figured out how to do complex object joins.
But I started having trouble when I tried to figure out how all the transactions worked. This was complicated by the wiki they're using, which was quite useless to me. It led to many dead ends.

However, the real reason I found that I could not (possibly ever) use Prevayler is because it seemed the approach was for one machine and one machine only. There were no distributed mechanisms at all. Or at least, not the way I'm used to working.

    All the systems I've worked on in the past five years have all been with clusters of app servers. If all the objects on one machine were all in memory, then I couldn't think of an easy way to get them into the memory of the other machines. There was some talk about using Java Spaces, but that's kinda where I dropped off.

    And the other issue was getting to the data from non app server machines. Like stuff to do back end reporting and things like that. I basically figured out that for n-machine access, I needed something that, well, acted like a database.

    I thought the idea was very interesting and maybe these things have been addressed. But when I really sat down with it for a few weeks, it just didn't pan out for me.
  • How does that work when you have multiple processes, potentially implemented in various different programming languages with different object models, that need to access this data concurrently? Will my "database" still work after I modify my persistent classes, or do I have to convert it somehow? What about access control that is guaranteed to be shared by all database-accessing apps?

    This might be nice for a certain class of applications, but unless that all (and more) works - and I can't see how it could

    • Agreed.

      I just had the thought that this software will make people who just need object persistence "wake up" and figure out that they don't need to carry around a big RDBMS if all they have is a straightforward app that needs persistence.

      I think there are a lot of MySQL users out there that could get by with just a persistence system like this. That statement isn't made to reflect poorly upon MySQL (disclaimer: I don't like mysql), but that MySQL has a tendency to market their database, and position it, f
  • Can't run reports (Score:2, Insightful)

    by kpharmer ( 452893 ) *
    A quick search on the wiki showed no hits for the word 'report'.

    Note that the classic problem with object databases is that they focus on transactional queries, and that DSS or reporting queries are either too slow or too difficult to perform.

    So, yeah, it sounds nice if you want *both* an object database and a relational one. Not a bad solution if you already have a data warehouse on the side. But if you don't, it's just a lot of extra work.

    Next...
  • Consistency? Admittedly, most databases barely address that (domain checks and referential integrity are only a start).

    Not having a SQL interface closes so many avenues... For a nifty (and remarkably solid) light-weight database, take a look at McKoi [mckoi.com].

    Max [integrity-logic.com]

  • When we haven't said hello to one yet?

    I'm not sure how you can say your new technology is better than the old one when you haven't even bothered to check out a fully implemented version of the old one.

    KFG
  • but I don't buy from companies who market unethically.

    If you haven't got the guts to admit you're connected to a project you're pimping, you've lost any respect I might have had for the project.
    Can we predict the death of the RDBMS? Is it time for Oracle to roll over and play dead? Do I really know that the questions I'm asking are ridiculous? Is it possible that I'm just begging the question here? Do I realize that plenty of people have built systems to handle queries from memory without hitting the disk? Do I realize these are only appropriate for a very limited domain of problems? Could we come up with a worse name for a project than "Prevayler"?

    And most importantly, was Cliff smoking so

  • by Frater 219 ( 1455 ) on Tuesday September 23, 2003 @08:31PM (#7039736) Journal
    It's true that an RDBMS doesn't map well to the object-oriented ideology. That's because an RDBMS does not store objects, or anything like them.

    The object-oriented ideology as instantiated in C++ and Java is founded upon breaking data into objects, bearers of identity, which belong to classes, bearers of structure and behavior. (C++ and Java make little account of metaclasses, which are used in more dynamic object systems such as Python's class system and Common Lisp's CLOS. Templates are not metaclasses.) Objects have identity, so they can be equated; they are the unique bearers of attributes about themselves; and each object's structure is dictated by the class to which it belongs.

    When object-oriented partisans look at a database, they see its relvars (or table headers) as bearers of structure and think of classes, and its tuples (or rows) as bearers of identity and think of objects. They see a database as a place to store objects persistently.

    But this is not what an RDBMS does. An RDBMS isn't an object store; its relvars are not classes and its tuples are not objects. So what is an RDBMS? What is "relational" anyhow? Relational databases are founded upon relational mathematics, which is what you get when you cross set theory with predicate calculus.

    Set theory is the branch of math that deals with collections of elements which behave according to formal axioms. Set theory lets you say, for instance, that if you have a non-null set R and a non-null set S, that you can construct a set R*S of all the possible pairs of elements from R and S.

    Predicate calculus is the branch of logic that deals with quantified statements about entities. It lets you formalize logical arguments such as the syllogism: All men are mortal; Socrates is a man; therefore, Socrates is mortal. Predicate calculus deals with generalizations and instantiations of those generalizations.

    What do we get by combining set theory and predicate calculus? We get a system that allows us to operate upon sets of tuples of values satisfying predicates. A relation holds tuples of values which make some predicate true. For instance, consider the predicate "Person x owes me y dollars." Tuples which satisfy this predicate will be pairs (x,y) for which the sentence is true. For instance, if Fred owes me 40 dollars, (Fred, 40) satisfies the predicate. It could thus be a tuple in the relation described by the predicate -- the one relating people's names to how much they owe me.

    With the relational algebra (or an RDBMS) we can do operations upon this relation and others. We could, for instance, select a result set of all those people who owe me more than 50 dollars -- or join this result set with those people's addresses. Whatever result set we ask for will be calculated from the facts in the database. We might get back this result set:

    (Barney, 75, 40 Elm Road)
    (Megan, 60, 9 High Street)

    Now, are the elements of this result set objects in the object-oriented sense? They are not. They do not have identity. The tuple about Barney is not Barney himself, or even a machine representation of him. It doesn't uniquely store attributes of Barney -- after all, we created it by joining tables which also contain such attributes. It is not even, truly, a fact about Barney exclusively -- for it is also a fact about the number 75, and about the address 40 Elm Road. It isn't an object; it's a tuple value, and values do not have identity as objects do.
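The select-and-join described above can be sketched in plain Java over sets of tuple values (the names and figures are the post's own; note the result elements are values, equal by content, with no identity of their own):

```java
import java.util.*;
import java.util.stream.*;

public class RelationDemo {
    // Tuples are plain values: equal by content, not by identity.
    record Debt(String person, int dollars) {}
    record Address(String person, String street) {}

    static final Set<Debt> DEBTS = Set.of(
        new Debt("Fred", 40), new Debt("Barney", 75), new Debt("Megan", 60));
    static final Set<Address> ADDRESSES = Set.of(
        new Address("Barney", "40 Elm Road"), new Address("Megan", "9 High Street"));

    // SELECT those who owe more than the threshold, JOINed with their addresses.
    static List<String> owesMoreThan(int threshold) {
        return DEBTS.stream()
            .filter(d -> d.dollars() > threshold)
            .flatMap(d -> ADDRESSES.stream()
                .filter(a -> a.person().equals(d.person()))
                .map(a -> d.person() + ", " + d.dollars() + ", " + a.street()))
            .sorted()
            .toList();
    }

    public static void main(String[] args) {
        owesMoreThan(50).forEach(System.out::println);
    }
}
```

Running it prints exactly the result set from the post: a computed value, not a box of objects.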

    Moreover, note that by joining, we can construct new relations from old ones. Thus, not only are tuples not objects, but relvars are not classes. After all, in OO we do not create new classes by joining, but by inheritance or encapsulation of old ones -- and creating a new class does not cause it to be instantiated into known-correct objects.

    So what does this matter to OO people faced with RDBMS as a

    • Great post, thanks for saving me the time. It's amazing how the hype of OOP has caused so many programmers to just believe that it's 'better' without any kind of understanding of what OOP really provides. Now it's to the point that people are ready to blindly replace RDBMSs with Object-Oriented Databases without stopping to think why RDBMSs have been such a great tool to begin with.

      Citing incompatibility due to SQL variations is the biggest red herring I've ever seen. Designing a relational database to
  • very naive (Score:3, Insightful)

    by koehn ( 575405 ) * on Tuesday September 23, 2003 @09:03PM (#7039943)
    These folks are either very naive, or very silly.

    They claim there's no need for two-phase commit (2PC), as though the only systems they need to interact with are (or will be) Prevayler.

    Umm, hello. How about that 50TB database with all our transaction history? You gonna put that in your RAM-based database? No? Well, what happens when you need to do an insert into it, but commit only if the insert and the local transaction succeeds?

    Hell, forget the 50TB database, what about the little Oracle database the guys down the hall use? Or the asynchronous queue that you post into?

    It's a much bigger world than just your little project, guys, and you have to fit into it. 2PC is not an option. It's a requirement.
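For readers unfamiliar with it, the two-phase commit the parent is demanding can be sketched minimally (a toy coordinator, not any particular XA/JTA implementation):

```java
import java.util.List;

// A minimal sketch of two-phase commit: participants vote in phase 1,
// and only if everyone votes yes does the coordinator tell them all to
// commit in phase 2; otherwise everyone rolls back.
public class TwoPhaseCommit {
    public interface Participant {
        boolean prepare();   // phase 1: promise you can commit
        void commit();       // phase 2a: everyone voted yes
        void rollback();     // phase 2b: someone voted no
    }

    public static boolean run(List<Participant> participants) {
        boolean allPrepared = participants.stream().allMatch(Participant::prepare);
        if (allPrepared) {
            participants.forEach(Participant::commit);
        } else {
            participants.forEach(Participant::rollback);
        }
        return allPrepared;
    }

    // A trivial participant for demonstration: votes as configured.
    public static class Vote implements Participant {
        private final boolean yes;
        public Vote(boolean yes) { this.yes = yes; }
        public boolean prepare() { return yes; }
        public void commit() {}
        public void rollback() {}
    }
}
```

The point of the protocol is exactly the parent's scenario: the local insert and the remote one both succeed, or neither does.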

    The whole "let's keep it in RAM" is cute, and for a lot of projects is probably all you need, but for any kind of large data set you just can't buy enough RAM to hold it all. Once it goes to disk, there's a whole new set of problems.

    Also, the fact that you're responsible for defining and managing your transaction boundaries is really lame. It's not that hard to build check-in/check-out logic that can be used.

    Come back when you have a real system that can handle real load with real datasets. Until then, I'll keep my RDBMS. You may have performance beat on the tiny systems, but who cares? THEY'RE TINY SYSTEMS!
    • Re:very naive (Score:3, Interesting)

      by julesh ( 229690 )
      While I have to agree with some of the things you're saying (like RDBMSs are the way to go for most real world applications), I have to disagree with some things you've said:

      They claim there's no need for two-phase commit (2PC), as though the only systems they need to interact with are (or will be) Prevayler.

      The design decision they've made regarding transaction implementation is kind of orthogonal to their storage decision, although it is a design that works better in memory than it does on disk.

      But,
  • by hey! ( 33014 ) on Tuesday September 23, 2003 @10:18PM (#7040359) Homepage Journal
    I don't mean to say anything against their product, which as far as I know is the greatest object persistence scheme ever hatched.

    But they clearly don't know what a database actually is; they're confusing the issue with services that an RDBMS happens to perform as part of its job. It has always been possible to write procedural code that is faster than database queries, because underneath, a query is turned into a sequence of operations. When building a system to answer a single question, the system will always be faster without the database layer. Building a hash table or B-Tree to do a simple lookup simply can't be beat.
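The "can't be beat" lookup really is just a single hash probe; a trivial sketch of such a special-purpose index (names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// A special-purpose in-memory index of the kind the parent describes:
// built to answer exactly one anticipated question, nothing else.
public class BalanceIndex {
    private final Map<String, Integer> owed = new HashMap<>();

    public void record(String person, int dollars) {
        owed.put(person, dollars);
    }

    // Answering the one anticipated question is a single O(1) probe:
    // no parser, no planner, no I/O in the way.
    public Integer owes(String person) {
        return owed.get(person);
    }
}
```

What it can't do, of course, is answer the *unanticipated* question, which is the parent's whole point about what a database buys you.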

    People have lost sight of history. Years ago, we used to keep our data in indexed files, and guess what? They were faster than databases of the day at doing the tasks they were designed to do.

    However while databases are slower, in many cases much slower, than procedural code, they have an important property: they can be used to answer unanticipated questions acceptably quickly. How quickly is acceptably quickly? Well, if the database can come back with an answer faster than it takes a skilled programmer to come up with a special purpose program to answer the question, it has done its job. Compare this to their answer to the querying problem: write a java program. And that's a fine answer if having to answer an unanticipated question is a relatively rare event, which no doubt it is for many applications.

    This gets to what a database is: it is a collection of information that is organized to make reuse simple and efficient. This is different from business object re-use, which is about re-using logic; this is about re-using facts. Relational databases are unequaled at this task because they are based on sets and mathematical operations that are closed on these sets. This allows both the user, but more importantly the system's optimizer, to create sequences of operations that meet the user's requests.

    These people may have a wonderful system for a lot of purposes, but they're really talking about a particular set of applications, for which their system might be better than storing object data in a database. Probably is, as far as I know. But really. No need for rollback because "transactions are instantaneous"? Well, they would be right if transactions really were instantaneous. However, while their test case may be so fast the transactions appear instantaneous, that's a long way from actually being instantaneous. In the real world bound by the laws of physics, transactions take finite time. Given enough objects to update, or high enough system load, or both, you will have to either (1) accept a possibility, albeit small, of inconsistent transactions being processed or (2) lock all the objects that might be affected by a transaction, with attendant possibilities of deadlocking.

    In short, I wish them luck; it sounds like they are producing some interesting and useful stuff. However, it isn't a database or a replacement for a database.
  • Silly Project (Score:5, Insightful)

    by cartman ( 18204 ) on Wednesday September 24, 2003 @12:16AM (#7040956)

    The "Prevayler Team" has written a persistent HashMap with a redo log, using the command pattern. This is exceptionally trivial and is in no way comparable to a database. A database has things like: 4GL query language, referential integrity constraints, data integrity, queryable metadata, separation of logical and physical layers, data independence, declarative rather than imperative querying, dynamically assembled queries, and gazillions of other things. These are the real features that we mean when we say "database." These features are absolutely necessary. Prevayler includes none of them. It is an extremely trivial persistent HashMap, that's all.

    Thus, when the prevayler team says "throw away your database," I must assume one of two things. 1) They're trolling for publicity by saying outrageous and purposefully stupid things. Or 2) They are shockingly, mind-numbingly naive, and they don't know what a database is or what it does.

    The author of Prevayler wrote this about himself: "Carlos Eduardo Villela is a 19-year old Brazilian graduate in Information Systems... almost 8 years experience has made him a Java and Python enthusiast."

    Thus, I have to assume that the authors are mind-numbingly naive. Don't get me wrong, I'm sure the authors are very bright, and I know that some good insights went into the implementation of Prevayler. But let's not throw away our databases quite yet.

    • by cartman ( 18204 ) on Wednesday September 24, 2003 @12:31AM (#7041031)

      The somewhat naive authors of prevayler confidently announce the following on their website:

      "No one has yet found a bug in Prevayler in a Production release. Who will be the first? [bold in original text]."

      I already found a serious bug in the current production release. From the prevayler source:

      ObjectOutputStream oos = logStream();
      try {
          oos.writeObject(command);
          oos.reset();
          oos.flush();
      } catch (IOException iox) {
          // ... (handler elided in the original excerpt)
      }

      ObjectOutputStream does not guarantee atomicity. If your command object is larger than the page size of your disk, the "transaction" will take at least two page writes. A software failure between those page writes will lead to "half a transaction" being committed and a subsequent corruption of data. Once data integrity is lost, it is often difficult or impossible to recover. Prevayler has nothing to handle this case. Thus, prevayler does not correctly implement ACID, because it doesn't guarantee atomicity (half a transaction can be committed), consistency (referential integrity would be destroyed in such a case), isolation (this failure wouldn't be isolated to a single transaction) or durability (the problem would only show up upon reloading).

      Finding this bug took very little searching. I am apparently the first person ever to find a bug in prevayler. Do I get a prize?
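The standard guard against the torn-write hazard described above is to frame each log record with a length and checksum, so recovery can detect and discard a half-written record. A hypothetical sketch of that fix (not Prevayler's actual code):

```java
import java.io.*;
import java.util.zip.CRC32;

// Hypothetical redo-log framing: [length][crc32][payload] per record.
// On recovery, a record whose length or checksum doesn't check out is
// treated as a torn write and discarded, so half a transaction is
// never replayed.
public class FramedLog {
    public static byte[] frame(byte[] payload) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            CRC32 crc = new CRC32();
            crc.update(payload);
            out.writeInt(payload.length);
            out.writeLong(crc.getValue());
            out.write(payload);
            return bos.toByteArray();
        } catch (IOException impossible) {   // in-memory streams don't throw
            throw new UncheckedIOException(impossible);
        }
    }

    // Returns the payload if the record is complete and intact, else null.
    public static byte[] recover(byte[] raw) {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(raw))) {
            int len = in.readInt();
            long storedCrc = in.readLong();
            byte[] payload = new byte[len];
            in.readFully(payload);           // throws EOFException if truncated
            CRC32 crc = new CRC32();
            crc.update(payload);
            return crc.getValue() == storedCrc ? payload : null;
        } catch (IOException torn) {
            return null;                     // stop replay at the torn record
        }
    }
}
```

With framing like this, a crash mid-write costs you the last transaction, not the integrity of the whole log.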

      • Not necessarily (Score:3, Interesting)

        by Simon ( 815 ) *
        You are assuming that when Prevayler restarts it reapplies the "half transaction" and corrupts the data. What is more likely to happen is that Prevayler applies the transaction log, sees that the last transaction is incomplete/corrupt, and then stops. Result: consistent DB state. (Actually the state just before the software failure.)

        --
        Simon

  • by Zeno Davatz ( 705862 ) on Wednesday September 24, 2003 @02:32AM (#7041467) Homepage Journal
    I posted a news item some days ago about oddb.org (see also linuxmednews.org). The complete 'datastructure' of oddb.org is done with Madlaine - a prevayler solution as mnemonic. Yes it has sped up development, yes the search queries are delivered faster, yes we are very happy with this new technology. You can have a look at our source at download.ywesee.com/ruby/oddb.org. You can test online at http://www.oddb.org. We are using Madlaine also on other mission-critical projects and it also works great there.
  • Ok, I'll bite... (Score:3, Interesting)

    by DCowern ( 182668 ) * on Wednesday September 24, 2003 @04:20AM (#7041773) Homepage

    Ok, I'll bite. I don't get it.

    I've developed several medium-sized projects using OO languages interfacing with a RDBMS. One used MSSQL, one used Oracle, and a couple used MySQL. (Just to be a bit more precise, by medium-sized I mean a few tens of thousand lines of code and up to 50 concurrent users per location to which the application is deployed.) I've never seen a "rough courtship with Relational database" or a problem with "OO-to-RDBMS complexity" and I would not call developing OO applications that interface with SQL databases "tedious".

    Secondly, the incompatibilities can easily be addressed by creating a database connection class. The SQL implementations aren't so different that you can't pass the implementation name (i.e. MSSQL, ORA, MYSQL, etc.) to the constructor or add some #ifdefs (or the equivalent) in appropriate places and pass the right implementation-specific SQL based on that. If you do it well enough, you can reuse this class across projects. At least SQL is somewhat standardized (haha, I know). By using Prevayler (which sounds like it's some 12 year old AOLer's screen name, btw), you're locked in to one vendor. What if, for example, I have a client who can't/won't run this software? There are enough SQL variants to make pretty much anyone happy with the platform.
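The connection-class idea the poster describes might look like this minimal sketch (the vendor list and the LIMIT/TOP/ROWNUM rewrites are illustrative assumptions, not a complete dialect layer):

```java
// Hypothetical sketch of a dialect-aware helper: one class knows each
// vendor's quirks, and the rest of the application never sees them.
public class SqlDialect {
    public enum Vendor { MSSQL, ORA, MYSQL }

    private final Vendor vendor;

    public SqlDialect(Vendor vendor) {
        this.vendor = vendor;
    }

    // One place that knows each vendor's way to fetch the first n rows.
    public String limitQuery(String table, int n) {
        switch (vendor) {
            case MSSQL: return "SELECT TOP " + n + " * FROM " + table;
            case ORA:   return "SELECT * FROM " + table + " WHERE ROWNUM <= " + n;
            default:    return "SELECT * FROM " + table + " LIMIT " + n;
        }
    }
}
```

Swap the constructor argument and every query the app builds follows along, which is the whole portability argument in one class.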

    The poster talks about a lot of high-caliber hacks... if a developer understood what an RDBMS is supposed to do, they wouldn't need to resort to "hacks" (I was expecting an example or some useful supporting information from the poster's link... unfortunately, I was sadly mistaken). An RDBMS is a way of storing and retrieving data. The implementation of the RDBMS should have no impact on the design of a program. If it makes an OO-zealot feel warm and fuzzy, they can just think of the entire RDBMS as a giant object sitting off in never-never land with connect and query methods (at the most basic level at least).

    Lastly, I see a lot of hot air on the Prevayler website. Things talking about freedom from query languages, support of any algorithm, etc. They don't seem to have a lot of information backing these statements. They don't say exactly how data is stored. They seem to claim that Prevayler will organize your data in the most optimal fashion, since they seem to think DBAs are evil people who, for no reason whatsoever, come up with wicked database designs intended to make developers miserable. These kinds of omissions scare me. I don't trust projects just because they're open source. (Note: If this information is available, it wasn't obvious. Even if these guys are the most awesome OO developers ever, they suck at presenting information.)

    What is the big deal here? Seriously... I'm just not seeing the problem.

    I'm sorry if something like this has been posted before but I really didn't see anything covering these questions. Also, if it sounds like I'm a bit mean in this post, it's because as I was writing the post, I tried to answer my own questions through the Prevayler website but was unable to find any useful information -- in fact, I ended up with more unanswered questions than I started with. In short, zealotry combined with lack of information makes me an unhappy camper.

    • Re:Ok, I'll bite... (Score:3, Interesting)

      by julesh ( 229690 )
      I think you've probably missed the more relevant parts of their site. It is a bit of a mess.

      Prevayler isn't a database server. It's an object storage library. So it's not a matter of a client not wanting to run it; you link it to your own app and manage your own data with it. The only reason a client couldn't run it would be (a) no support for Java (unlikely) or (b) not enough RAM (which is the big problem).

      Prevayler stores all of your objects in RAM while you're working on them, and at the same time m
  • by tartley ( 232836 ) <user tartley at the domain tartley...com> on Wednesday September 24, 2003 @07:47AM (#7042655) Homepage
    If you're tempted by the RAM-based performance of this, but you don't want to give up on all the good things a RDBMS offers, check out MySQL HEAP tables [themoes.org] - they are a polymorphous implementation of MySql's tables that are designed to be stored in memory.
