MySQL Creator Contemplates RAM-only Databases

Aavidwriter writes "Peter Wayner asks Michael 'Monty' Widenius of MySQL, 'When will RAM prices make disk drives obsolete for database developers?' From Monty's answers, it sounds like hard drives may be nothing but backups before long." From experience, I'd wager that RAM failure rates are lower than hard drive failure rates, so it might also mean more stability from that perspective.

  • by Anonymous Coward on Saturday May 10, 2003 @08:23AM (#5925425)
    But also a very strong Memory Manager. We've all seen a poorly written program corrupt memory.
    • by Anonymous Coward
      Not exactly an original idea; in-memory DBs have been around for quite some time.

      Depending on the application, it may not be feasible to have an in-memory DB for a long time.

      I currently work on DBs that are many hundreds of gigabytes as well as terabytes; some are even petabytes. It will be a long time before we can afford to buy a terabyte of memory to hold our DBs.

      Eventually it could be a possibility, but it's still cheaper to buy hard drives and let the DB just cache the active store in memory.
  • *cough* Google? Big enough for ya? *cough*
    • Re:Umm...now? (Score:3, Informative)

      by Anonymous Coward
      And large image-bases are too much for RAM today. Remember the 640K limit? I'd guess that more than 50% of current corporate databases would fit in a single gigabyte, including indexes.

      The real speed improvements, according to the guys working on projects like Bamboo on SourceForge, come not from the fact that it's in RAM: they test against SQL in RAM and show that most of the performance improvement comes from keeping the data in the same process space as the application operating on the data. If they're ri
  • ECC RAM? (Score:4, Interesting)

    by Big Mark ( 575945 ) on Saturday May 10, 2003 @08:25AM (#5925435)
    I remember reading somewhere that, due to things like thermal radiation and cosmic rays, every so often a bit in RAM is changed by 'accident'... isn't the ECC RAM (which, IIRC, negates the effects of such interaction) horrendously expensive though, more so than the 'normal' SDRAM variants we have these days?
    • ECC RAM is actually incredibly cheap; just not quite as incredibly cheap as non-ECC RAM. All it means is that you get 9 memory chips instead of 8. It should be 12.5% more expensive, but it doesn't quite work that way.
    • Re: (Score:3, Informative)

      Comment removed based on user account deletion
    • Double the price isn't 'horrendously' expensive...
    • And when you are spending millions on your database infrastructure, who cares about the extra 20-40k in memory.
  • by Sayten241 ( 592677 ) on Saturday May 10, 2003 @08:26AM (#5925438)
    but doesn't RAM need power running through it to hold its data? If this is true and we do switch to RAM for our SQL servers, all it would take is one fool to trip over a power cord (or just a power outage) to lose one heck of a lot of data.
    • by sholden ( 12227 ) on Saturday May 10, 2003 @08:31AM (#5925458) Homepage
      There are these things called batteries. Of course the article mentions them, but who reads the articles anyway...

      RAM only needs a trickle to keep refreshed...
        You mean there are articles we're supposed to read? If we did that, how would we ever get in on the discussion while it was still "live"?

        If you ask me, RAM is still too expensive to make this feasible. The article seems to assume that the examples listed are typical. But for so many purposes the idea of keeping the whole database in memory is a huge waste unless RAM gets a lot cheaper. For $100 you can get 120GB of HD or 256MB of RAM. That's not a comparable expense. And depending on the database, you are
        • by Hungus ( 585181 ) on Saturday May 10, 2003 @09:39AM (#5925676) Journal
          You really can't compare things like that for databases. AISITA (as is stated in the article), the big bottlenecks for both are similar in nature but orders of magnitude apart in scope. I currently work with a medical database where everything has to be logged. Disk access is a big factor for us, so we use fibre channel SCSI (specifically Seagate 73.4GB 10000RPM [seagate.com]), where the cost is more like 700 dollars for 70GB (basically $10 per GB, not the $1 per GB you are showing). Also there is the issue of supporting hardware, but we will ignore that for the time being.

          time for some napkin math:

          1 512MB ECC reg PC2100 DIMM -> $78, or $156/GB

          1 70GB Fibre Channel drive -> $700, or $10/GB

          Now lets factor in raid (for access speed and redundancy)

          we typically put 8 drives in a bundle, which tends to give us 36% of the total drive capacity (mirrored RAID 5, aka RAID 6; remember the RAM is ECC reg, so that factoring is already in place for it)
          8 * $700 -> $5600 for
          36% * 8 * 70GB = 200GB
          This gives me approximately 1GB for $28.
          Now that's a factor of 5.6 (call it 6) in price versus RAM only, AND I still get probably a 4-fold increase in throughput. Not bad at all in my book.
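
          A quick sanity check of the napkin math above, as a rough Python sketch (the prices and the 36% usable-capacity figure are the ones quoted in this post, not independently measured):

            # Rough cost-per-GB comparison using the figures quoted above.
            ram_price, ram_gb = 78.0, 0.5            # one 512MB ECC reg PC2100 DIMM
            disk_price, disk_gb = 700.0, 70.0        # one 70GB fibre channel drive
            drives = 8
            usable_fraction = 0.36                   # usable capacity after mirroring/RAID, per the post

            ram_per_gb = ram_price / ram_gb                      # ~$156/GB
            raid_cost = drives * disk_price                      # $5600
            raid_usable_gb = usable_fraction * drives * disk_gb  # ~201.6 GB
            raid_per_gb = raid_cost / raid_usable_gb             # ~$28/GB

            print(f"RAM:  ${ram_per_gb:.0f}/GB")
            print(f"RAID: ${raid_per_gb:.0f}/GB ({ram_per_gb / raid_per_gb:.1f}x cheaper than RAM)")
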
    • True, however.... (Score:5, Insightful)

      by gilesjuk ( 604902 ) <giles DOT jones AT zen DOT co DOT uk> on Saturday May 10, 2003 @08:38AM (#5925492)
      If you have a database that is stored in RAM and periodically written out to the hard disk (for backup reasons) then you get better performance than if you have a database that is reading and writing most of the time.

      A UPS would prevent the data loss; the database could be written to disk when the power fails.
      • If you have a database that is stored in RAM and periodically written out to the hard disk (for backup reasons) then you get better performance than if you have a database that is reading and writing most of the time.

        EMC and similar storage arrays have done this for decades; the array cabinet itself contains enough backup power to flush the caches to disk and power down cleanly if the main power supply fails (assuming you didn't already have a UPS there, of course, and if you can afford EMC then you can a
      • I posted something else along the lines of this, but how would you do it under heavy load? The disk is so enormously slow compared to RAM, you'd overwhelm whatever buffer you're using to do the write-back. You'd have to throttle back requests on the RAM, thus negating the performance increases.
    • by Beliskner ( 566513 ) on Saturday May 10, 2003 @10:08AM (#5925812) Homepage
      RAM for our SQL servers, all it would take is one fool to trip over a power cord (or just a power outage) to lose one heck of a lot of data
      The military and banks already have databases on SANs with battery-backed RAM that dumps to HD upon battery failure: 'solid-state disks' [superssd.com]

      UPS is old technology, the battery needs constant replacement, and very few have multiple redundant batteries and/or transistors to deal with wear and tear. Yes even a simple MOSFET transistor is not 100% reliable [glary.com]. Usually the only way to tell a battery is dead is your UPS fails when you need it (this happened to us when my MD was demonstrating our service live to customers, afterwards was the only time he's taken less than 9 months to sign off a purchase order on new equipment). A UPS also has a power cord to pull out when you recoil after burning your fingers on a Seagate Cheetah 15000RPM HD in the server room. A UPS also trips if you overload it, which again means the UPS fails when you most need it.

      Other posts mention cosmic radiation at high altitude makes RAM fail. Last time I checked there were no Quad-Xeon Oracle databases on Concorde, although if the International Space Station were to use one this might pose a problem for non-ECC RAM. Anyway, somebody could always write a driver to do software-ECC with Reed-Solomon for RAM if it becomes necessary.

      Huge databases (>500 Gigabytes) would benefit most from this, as running a simple OUTER JOIN query on the biggest tables will require most of the database to be called into RAM.

      • Small databases become slow due to HD latency problems if they do a lot of WRITE operations (the database is stored in RAM, the transaction log is appended to, COMMIT TRANS). This would benefit least FROM RAMdisk because a HD append operation is cheap, however it would benefit database speed in mid-backup
      • Mid-size databases become HD-intensive due to aggregate queries/triggered operations over large '>RAM' datasets. For instance enforced cascading deletes where millions of tuples are being deleted cascaded to hundreds of other unindexed tables (in my job I go to the toilet whenever I run a query like this).
      • Huge databases where 'Index size' > 'RAM size' - the simplest query would benefit hugely from more RAM or faster storage or RAM-storage. With current databases this would be a 10Gig Eth connection to a Terabyte RAMSan solid-state disk [superssd.com].
      In the future, who knows, maybe a FPGA/ASIC DPU (Database Processing Unit) for INSTANT COMMIT like NVidia's GPU?
  • by VCAGuy ( 660954 ) on Saturday May 10, 2003 @08:26AM (#5925439)
    With our Exchange server, we use a Platypus [platypus.net] Qik Drive to send our retrieval times through the basement. We put the database on Qik Drives (but mirror it hourly on to HDDs)...it makes our effective Exchange bandwidth limited to the gigabit ethernet port on the server.
    • by FreeLinux ( 555387 ) on Saturday May 10, 2003 @08:44AM (#5925510)
      Wow! $25,000 [cdw.com] for 16 GB of RAM disk seems a tad high for widespread adoption.

      It's also interesting to note that Microsoft was going to release what they called In Memory Database (IMDB) support in Windows 2000. However, this feature was removed after Windows 2000 RC2 due to technical issues.

      • "However, this feature was removed after Windows 2000 RC2 due to technical issues."

        Not so much technical as priorities. They discovered that they were essentially rewriting all the code in SQL Server and that it wasn't significantly faster than SQL Server. It was decided it was simply a waste of time.

        Like others have pointed out SQL Server, Oracle, etc. already maintain their own caches of data in memory. So on SQL that accesses the same chunks of data, the response is really quite fast.
      • "However, this feature was removed after Windows 2000 RC2 due to technical issues."

        Wow - now that must be something to see, a technical issue that would stop _Microsoft_ releasing a product.
      • > Wow! $25,000 [cdw.com] for 16 GB of RAM disk seems a tad high or
        > widespread adoption.

        Wow! What's funny is someone just like you said the same thing about hard drives.
        I mean, $25,000,000 for a 10 gig drive?!? That's a tad high to EVER take off!

        Except for the fact we can now get a drive 25 times as large for 100,000 times less the price (250GB for $250), and not only that but 10 gig drives and much larger ARE commonplace, so I'd have to say you have very high chances of being wrong :)

        The same can b
  • bad idea (Score:2, Insightful)

    by Anonymous Coward
    RAM is highly susceptible to transient faults. Things like cosmic radiation at high altitudes make computing a real problem. ECC helps this but it won't totally eliminate it. With a hard drive, the probability of a hard fault goes up but a soft fault goes down.
  • Already exists? (Score:4, Insightful)

    by gilesjuk ( 604902 ) <giles DOT jones AT zen DOT co DOT uk> on Saturday May 10, 2003 @08:28AM (#5925450)
    Surely a well tuned database server uses quite a lot of RAM for buffering?

    Nobody in their right mind would have a busy database server which accesses the hard disk like crazy. A few years back I saw Oracle servers running NT with 4GB of RAM, so I guess they're using even more now.
    • Re:Already exists? (Score:3, Insightful)

      by NineNine ( 235196 )
      A few years back I saw Oracle servers running NT with 4GB of RAM, so I guess they're using even more now.


      A few years back, I saw a Sun box running Oracle with 64 Gig of RAM... They're already using quite a bit more. I can't even begin to fathom how much RAM a DB stored in RAM would take. It would be absolutely astronomical for any reasonable sized database. Sysadmins would spend all day swapping out RAM sticks as they died.
    • Re:Already exists? (Score:5, Insightful)

      by sql*kitten ( 1359 ) on Saturday May 10, 2003 @09:02AM (#5925566)
      Surely a well tuned database server uses quite a lot of RAM for buffering?

      Well, a professional database like Oracle manages its own cache, but MySQL really relies only on the OS-level cache. The problem with that approach is that the database knows a lot more than the OS about what you're doing, so it can make much smarter decisions about what to cache, what to age out, what to prefetch, etc. On an Oracle server, you want to lock the buffer cache in memory and leave as little as possible over for the OS filesystem cache. You see, if a block doesn't exist in the database cache, it has to be fetched from disk into the database cache, and if it does, the db will go straight to its own cache. Another caching layer in between, provided by the OS, is just wasted.

      I don't think Monty understands any of this; in the article he seems to say that ACID, rather than being a fundamental principle of relational databases, is just something you need to do because disks are slower than RAM. The only reason that you might not want full ACID and use semaphores instead, as he suggests, is because you are only updating one record in one table in a single transaction!

      Further, if he is thinking in terms of a few Gb of data, then he is a little out of touch with modern database usage. SouthWest airlines do not have a database that stores 10 bytes of data for every seat on every flight, and I have a hard time figuring out why they would want to - the database of seats would be tied into the customer records, the freight/baggage handling database, the billing records, the accounting system. That is the point of a relational database, that you can query data based on data that exists elsewhere. Monty thinks in terms of databases that have only a few tables, because that's all you can do in MySQL. He says that database programmers are forced to be very careful about the consistency of their data - well, those using MySQL are, but those using Oracle (or any other database with real transactions and real integrity constraints) find it's all taken care of transparently.
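
      To picture what "the database manages its own cache" means in practice, here is a minimal LRU buffer pool in Python (a sketch only; this is not how Oracle or MySQL actually implement theirs, and the disk read is stubbed out):

        from collections import OrderedDict

        def read_block_from_disk(block_id):
            # Stub: in a real engine this would be a physical read.
            return b"..."

        class BufferCache:
            """Minimal LRU buffer pool: the DB itself decides what to keep and what to age out."""
            def __init__(self, capacity_blocks):
                self.capacity = capacity_blocks
                self.blocks = OrderedDict()          # block_id -> block contents

            def get(self, block_id):
                if block_id in self.blocks:          # hit: serve straight from the DB's own cache
                    self.blocks.move_to_end(block_id)
                    return self.blocks[block_id]
                data = read_block_from_disk(block_id)    # miss: fetch into the cache
                self.blocks[block_id] = data
                if len(self.blocks) > self.capacity:     # age out the least recently used block
                    self.blocks.popitem(last=False)
                return data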

      • Thanks for the informative posts. Just out of curiosity, how much data are we talking about for a large corporation, say SW Air or BofA?

        • Re:Already exists? (Score:5, Interesting)

          by sql*kitten ( 1359 ) on Saturday May 10, 2003 @11:54AM (#5926189)
          Just out of curiosity, how much data are we talking about for a large corporation, say SW Air or BofA?

          Impossible to put a figure on the total amount of data that exists within an organization, but a typical SAN in a major financial institution has terabytes online. UBS Warburg has 2 TB [oracle.com] in just its general ledger database. Acxiom has 25 TB [oracle.com] in its data warehouse, which will mainly be used for queries, whereas the GL database will be more transaction heavy. SouthWest [oracle.com] is an Oracle customer, but it doesn't say here how much data they have.
      • Re:Already exists? (Score:3, Interesting)

        by PizzaFace ( 593587 )

        I don't think Monty understands any of this; in the article he seems to say that ACID, rather than being a fundamental principle of relational databases, is just something you need to do because disks are slower than RAM.

        In fairness to Monty, it's his interviewer, Peter Wayner, who suggests that ACID is just for keeping RAM and disk synchronized. Monty at one point cautions, "You still need commit/rollback, as this functionality is still very useful on the application level."

  • by Anonymous Coward
    ...goes to whoever is crazy enough to put their entire database in RAM.

    Now if the RAM was non-volatile and held its contents with the power off, that would rock, but volatile RAM - are you crazy?!!
    • in 1985. The system was a TRS-80 IV (CPU was an 8080) that had been overclocked and had a megabyte of RAM stuffed in it. The RAM cost more than the computer. The application was a point-of-sale system for video stores and it used floppies for backup (a 10 meg HD for a "trash-80" was even more expensive than the additional RAM). The idea was that the user would fire up the machine in the morning, the system would load program files and data from 360K floppies to RAM disk. Several times during the day, the user
    • Now if the RAM was non-volatile and was static with the power off that would rock

      So what about all this "flash ram" stuff we see for our mp3 players, portable pr0n viewers, and whatnot?

      Maybe someone could design a new kind of high-speed drive fitting into a standard drive bay, that takes a bunch of MultiMediaCard's in an (m+1) x (n+1) matrix to provide a total storage of m x n cards, using RAID principles in two dimensions such that the memory controller could correct any number of bit errors in a s
  • unique problems (Score:3, Interesting)

    by ramzak2k ( 596734 ) * on Saturday May 10, 2003 @08:38AM (#5925489)
    "RAM failure rates are less than hard drive failure rates, so it might also mean more stability from that perspective" Well that is because they havnt been subjected to that sort of load as yet. RAM could pose its set of unique problems once implemented as databases.
    • Excuse me, since when is RAM not subjected to constant unyielding load?!

      Or are you talking about the stability of RAM-resident databases (which is NOT what the line you quoted was talking about, it was purely about RAM failure rates).
  • by Anonymous Coward on Saturday May 10, 2003 @08:46AM (#5925518)
    Considering the non-existent ACID support in MySQL, it sounds like a good idea; it's not like MySQL will get any more error-prone than it is now...
  • It works! (Score:4, Interesting)

    by The Original Yama ( 454111 ) <lists.sridharNO@SPAMdhanapalan.com> on Saturday May 10, 2003 @08:46AM (#5925519) Homepage
    When our site [pclinuxonline.com] was slashdotted last year, we were able to cope with the load after putting our database into RAM. It's probably not the best solution, since the data in RAM would be lost if the system crashes (or the power goes out, etc.), but it's a good temporary measure.
  • by jkrise ( 535370 ) on Saturday May 10, 2003 @08:49AM (#5925529) Journal
    I guess the issue with databases is not only speed and reliability, but a totally different ballgame called 'user-perception'. Even now, tape drives are used to archive databases; despite the fact that less than 1 in 1000 of the tape media get used for actually retrieving the data during a crash. NAS devices and the like have changed this, but the temptation remains to use tape.

    I guess the RAM vs disk debate is on similar lines - but there are some vital differences:
    1. Disks (esp. IDE) have become a commodity item and can be accessed by different system architectures easily.
    2. IDE and SCSI standards have stood the test of time - 13 and 20 years respectively, unlike RAM which has evolved from Parity, non-EDO, EDO, DRAM, SDRAM, DDR-RAM, RAMBUS RAM etc., and suffers several patent and copyright encumbrances.
    3. Although RAM prices are coming down, the h/w to interface specialty RAM banks is proprietary and hence cost-prohibitive, and comes with attendant long-term supportability risks - think Palladium, or even Server mobos over the last 10 years. TCO for RAM based systems could thus be much higher than disk-based systems.

    Overall, except for apps that need super-high speeds, and users that can risk proprietary stuff, disk-based databases shall remain.

    My 0.02
  • Some thoughts on RAM (Score:5, Interesting)

    by Effugas ( 2378 ) on Saturday May 10, 2003 @08:52AM (#5925538) Homepage
    RAM-resident Database? Yes, that would be Google -- a massive, massive cluster of x86 boxen with a couple gigs of RAM apiece. Each gets a portion of the hashspace, leading to near-O(1) searchability. I'm pretty sure all the big search engines work this way, at this point -- the DB is checkpointed to disk, but is never actually run from there.
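
    The "portion of the hashspace" idea is easy to picture with a toy Python sketch (the node count and dict-per-node layout are invented for illustration, not how Google actually does it):

      # Toy hash partitioning: each node owns a slice of the hash space,
      # so finding the node responsible for a key is O(1).
      NUM_NODES = 1000
      shards = [dict() for _ in range(NUM_NODES)]   # each node's slice lives entirely in RAM

      def node_for(key):
          return hash(key) % NUM_NODES

      def put(key, value):
          shards[node_for(key)][key] = value

      def get(key):
          return shards[node_for(key)].get(key)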

    Recent discussions about disks vs. CPUs have ignored the massive decreases in the cost of RAM. For a very long time, the secret bottleneck in PCs (in that it wasn't advertised heavily) was RAM. That's starting to disappear -- there's a gig in my laptop, and there's no discernible improvement in all but the most intense applications if I were to go beyond that.

    Virtual Memory is already on the chopping block; any time it's imaginable that a system might need another gig of storage, it's probably worth going to the store and spending the hundred dollars.

    But what if more RAM is indeed needed? One of the most interesting developments in this department has involved RDMA [infinibandta.org]: Remote DMA over Ethernet. Effectively, with RAM several orders of magnitude faster than disk, and with Ethernet achieving disk-interface speeds of 120MB/s, we can either a) use other machines as our "VM" failover, or more interestingly, b) Directly treat remote RAM as a local resource -- a whole new class of zero copy networking. This Is Cool, though there are security issues as internal system architectures get exposed to the rough and tumble world outside the box. It'll be interesting to see how they're addressed (firewalls don't count).

    What next, for the RAM itself? I don't think there's much value in further doublings...either of capacity, or soon, of speed. What I'm convinced we're going to start seeing is some capacity for distributed computation in the RAM logic itself -- load in a couple hundred meg in one bank, a couple hundred meg in another, and XOR them together _in RAM_. It'd just be another type of read -- a "computational read". Some work's been done on this, though apparently there are massive issues integrating logic into what's some very dumb, very dense circuitry. But the logic's already done to some degree; ECC verifiers need to include adders for parity checking.

    My guess...we'll probably see it in a 3D Accelerator first.

    *yawns* Anyway, just some thoughts to spur discussion. I go sleep now :-)

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com

    • "Recent discussions about disks vs. CPU's have ignored the massive decreases in the cost of RAM"

      I don't recollect a disk vs CPU debate!
      RAM prices might have decreased, but implementing databases over RAM needs proprietary architectures over and above RAM, which drives the price up. Let me explain. A commodity $200 PC can support 500GB of disk space (4 * 120GB + USB drives). OTOH a mobo supporting even 4GB of RAM could cost over $2000, and it's likely a proprietary design.

      "For a very long time, the secret b
      • by yarbo ( 626329 )
        "OTOH a mobo supporting even 4GB of RAM could cost over $2000, and it's likely a proprietary design." My $200 MSI K7D Master L supports 4 Gigabytes of RAM link [msi.com.tw]
      • by Effugas ( 2378 )
        Others have rebutted your assertion on RAM availability.

        Clearly you haven't used XP much. I've got an XP Video Server hooked up to a TV; it has an uptime of around four months right now. Good luck getting a Win9x machine to do that -- 95 literally could never stay up more than 47 days, due to a clock-related overflow. They've done a LOT to fix stability, and it's nothing but ignorance to claim otherwise.

        It's nice to be able to finally change your IP address without rebooting, too. :-)

        95->98 was a huge
    • by Kevin Stevens ( 227724 ) <kevstev.gmail@com> on Saturday May 10, 2003 @09:47AM (#5925720)
      this is true, google does hold everything in RAM, but google does not care if one of those boxes goes down and they have to rely on a couple of hours old data for searches. However, a financial institution, or even a webstore, cannot afford to just lose a couple of transactions if a machine goes down. I do not think this model would work well except for databases that are primarily read-only (i.e., you won't have to write to disk that often), since for this to work for most DBs you are going to need an up-to-the-millisecond snapshot of the DB. Google is in a unique position where it is not critical for its data to be 100% up to the minute, and that is why it works. There are many applications for this, but this is not really a one-size-fits-all solution.
      • Um, in most data warehousing situations (for example, a bank), it is assumed that you are not working with the most recent data. You are working with a snapshot from 10 minutes ago. Or two hours ago. Or a day ago.

        Your characterization is still correct, in that my transactions from last week cannot disappear once they've been posted, but Google has this problem as well. Google solves it with massive redundancy. I don't know if that would be cost effective for my bank.
    • by Salamander ( 33735 ) <`jeff' `at' `pl.atyp.us'> on Saturday May 10, 2003 @09:51AM (#5925743) Homepage Journal
      any time it's imaginable that a system might need another gig of storage, it's probably worth going to the store and spending the hundred dollars.

      A gig is nothing in the enterprise space. What happens when a terabyte is the unit you allocate between applications or departments, and a petabyte is still big but no longer guaranteed to be the biggest on the block? Gonna walk down to the store and buy a terabyte of RAM, plus a CPU and chipset and OS capable of addressing it? This whole discussion is based on a faulty concept of what "big" is nowadays. For a truly big database, RAM isn't going to cut it. RAM and disk sizes have grown rapidly, but database sizes have (at least) kept pace. That will probably always be the case. If something came along that was even bigger than disks, but even more cumbersome to access, databases would interface to it anyway. General-purpose OS access won't be far behind, either. VM is far from dead; if anything, the change that's needed is to get rid of bogus 2x/4x physical-to-virtual ratios in broken operating systems like Linux and Windows so they can address even more virtual memory.

      we can either a) use other machines as our "VM" failover, or more interestingly, b) Directly treat remote RAM as a local resource

      I worked at a company several years ago (Dolphin) that allowed just this sort of remote memory access at the hardware level. Even then, there were so many issues around consistency, varying latency (this is NUMA where access is *really* non-uniform), and system isolation (it sucks taking a bus fault because something outside your box hiccuped) that the market resisted. InfiniBand HCAs don't even do that; access to remote memory is explicit via a library, not just simple memory accesses. RDMA over Ethernet is even less transparent, and has a host of other problems to solve before it's even where IB is today; it's a step backwards functionally from software DSM, which has been around for at least a decade without overcoming the same sort of acceptance issues mentioned above.

      What I'm convinced we're going to start seeing is some capacity for distributed computation in the RAM logic itself

      You could start with IRAM [berkeley.edu] at Berkeley. There are links to other projects as well, and some of the papers mention still more that don't seem to be in the links. A lot of what you talk about is possible, and being thought about, but a lot further off than you seem to think.

    • "b) Directly treat remote RAM as a local resource -- a whole new class of zero copy networking."

      Been there, done that - I once worked on a flight simulator that used a shared memory area across many machines to distribute data as things progressed.

      It's not as cool as it sounds and was eventually ditched for ethernet (not TCP/IP, just raw network messages) for real time information exchange.

      In the final analysis however, it's probably faster for high bandwidth applications to build a dedicated high speed
  • by Anonymous Coward on Saturday May 10, 2003 @08:53AM (#5925542)
    Many database applications contemplate database sizes in the tens to tens of thousands of gigabytes. For smaller databases, RAM is an easy thing, and even for a small number of gigabytes it might be reasonable. For larger databases RAM would be unthinkable. The fact that a database developer doesn't know what databases are used for is disturbing.

    Most modern databases also make very effective use of RAM as a cache in order to speed up queries. I don't know about MySQL since I don't use it. My guess, however, is that it does not, since that would eliminate the need for this stupid measure.

    As far as reliability, RAM is more vulnerable to transient things like cosmic radiation. ECC memory will take care of most single-bit problems (there are lots of them...), but all it can do for multi-bit failures is determine that yes, your data is screwed.

    Also, swapping out a bad hard disk in a RAID configuration is relatively simple and has a recovery process. Suppose your RAM stick fails; what is your recourse? You've permanently lost that data, and systems with hot-swappable RAM are much more costly than ones with similar capabilities for hard drives.

    Finally, consider the problem of catastrophic system failure. If the power goes out, your RAM dies but your hard disk is still safe. If it is worse (say your facility burns down) then it is much easier to recover data from the charred remnants of a hard disk than from the charred remnants of a DRAM chip.

    The idea of replacing disks with DRAMs has been around for quite a while now. But disks continue to get (a bit) faster and (much) larger. Every time the morons want to replace it they get shot down. More sensible people focus on using the resources available in ways such as caches that make systems faster and more reliable.
    • by ajs ( 35943 ) <ajs.ajs@com> on Saturday May 10, 2003 @09:46AM (#5925712) Homepage Journal
      It's true MySQL is not a real database. After all, it rarely has press releases, the support contracts don't cost nearly enough, and what's more it's so easy to administer that your average UNIX guy with a basic RDBMS background can get by. It's the freakin' anti-christ!!!

      Seriously, can we all just get over size comparisons? MySQL runs a lot of very useful databases from my dinky little statistics systems that are less than 10MB to giant multi-TB DBs. When you talk about the latter being RAM-resident, you're usually not talking about ALL of the data, but rather indexes and as much of the indexed columns as possible. In that sense a database can be fully RAM-resident on a 4-16GB machine and still have many more TB on disk.
    • You really don't know what you're talking about do you?

      a) Of course MySQL has RAM cache. Here is one part of it:
      http://www.mysql.com/documentation/mysql/bychapter/manual_Reference.html#Query_Cache

      b) More than a bit here or there and your disk data is probably toast also. Where do you think the data on the disk is computed from?

      c) There are hot-swap/raid type RAM motherboards available also. But that's not really the point.

      I run a master MySQL/innodb database from disks and I replicate to 20 slaves ea
      • You really don't know what you're talking about do you?

        It is fairly clear you are the one who is a bit confused here. Because, typically, disk is the primary data storage mechanism and main memory capacity is less than the total size of a database, the enterprise DBMS vendors (this does NOT include MySQL) have what is commonly referred to as a 'data cache' (vendors may call it something else) which stores data pages in main memory (there are other caches for other data structures, but we're only concern

  • Niche applications (Score:3, Insightful)

    by digitalhermit ( 113459 ) on Saturday May 10, 2003 @09:01AM (#5925562) Homepage
    This is interesting. Lots of responses so far have said that putting a database into volatile memory is preposterous. But from reading the article I'm not certain if it's such a bad idea in some situations. There are often sites that have a lot of relatively static data in their databases. These sites often use a backend database because it's easier, programmatically and as far as maintenance is concerned, to do so rather than create lots of static pages. Writes to the database could be done as a pass-through so they do get written to the disk "backup". A good example may be Google's cache -- the pages do not need to be re-indexed all the time but speed is critical. If RAM can be faster and, possibly even use less power than a hard drive, then there is a benefit. In Google's case, there is no writing, only queries.

    This means that in any situation where data is unchanging except for periodic updates this could be a good idea.
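
    The pass-through idea amounts to a write-through store: reads are served from RAM, but every write also goes to the disk "backup". A minimal Python sketch (the file format here is invented purely for illustration):

      import json

      class WriteThroughStore:
          """Reads come from RAM; every write also goes straight to the disk backup."""
          def __init__(self, backup_path):
              self.backup_path = backup_path
              self.data = {}                            # the in-RAM copy

          def get(self, key):
              return self.data.get(key)                 # served entirely from memory

          def put(self, key, value):
              self.data[key] = value                    # update RAM...
              with open(self.backup_path, "a") as f:    # ...and pass the write through to disk
                  f.write(json.dumps({key: value}) + "\n")
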
  • http://www.imperialtech.com/success_ebay.htm

    The basic idea: use solid state ram drives (with separate power supply) for your busy tablespaces and your redo logs.

    This leverages 'cheap ram' technology with existing (and proven and scalable) db architecture.

    For ebay, for example, they might store 'active items' in 'ram-drive-backed' tablespace and 'old items' in the 'hard-drive-backed tablespace'.

    These solid-state drives are expensive, but additional Oracle licenses (or moving from 'standard' to 'enterpr
  • This is nothing new (Score:2, Informative)

    by smackdaddy ( 4761 )
    There is already a leading in Memory database that is extremely fast. Check out TimesTen [timesten.com]. That is what we use. There is also another one called Polyhedra [polyhedra.com]. But the redundancy on Polyhedra doesn't appear to be as good as TimesTen, and it doesn't support Unicode either.
  • ...have nothing to do with the medium the data is stored in! What you're trying to guard against is concurrent access of resources by transactions which in cases can cause incorrect or inconsistent results in a RDBMS. I think this article is a bit obvious for most people who've had any training in how databases actually work and I think Monty was actually pretty gracious for taking the time to give the interviewer a smidgeon of clue.
    • by sql*kitten ( 1359 ) on Saturday May 10, 2003 @09:35AM (#5925660)
      I think this article is a bit obvious for most people who've had any training in how databases actually work and I think Monty was actually pretty gracious for taking the time to give the interviewer a smidgeon of clue.

      From the article:

      Is it easier to maintain ACID principles with pure RAM?

      Yes. This makes ACID almost trivial.


      Umm, no. There is no difference in the ACID algorithm whether the database is stored in memory or on disk. The only thing that is easier to do in memory is to fake it, because for low levels of concurrency you can serialize everything without anyone noticing. But that strategy will collapse under load. Far better to do it properly the first time. Yeah, it slows you down with a single user, but when you have tens or hundreds of users connected, it'll still work.


      With RAM the algorithms that one uses to store and find data will be the major bottleneck, compared to disk thrashing now.


      Actually, the algorithms are the bottleneck on disk too. Monty would know this if he had a query optimizer like the one in Oracle (or had even looked at an explain plan).


      It's when you store data on disk that you are still manipulating on disk that you need the ACID principles.


      Nonsense - you need ACID if there is any conceivable case in which two users might want to access the same data at the same time, or if there is any conceivable case that a write could fail, or if you want to support commit/rollback at all. In other words, if you're running a database and not a simple SQL interface to flat files. Hell, there's an ODBC driver for CSV that does all that MySQL does.
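
      To make the "fake it by serializing everything" point above concrete, here is the shortcut in miniature (a rough Python sketch with invented account names; it assumes the two accounts differ): one global lock gives trivial isolation for a single user, but every transaction queues behind every other one as concurrency grows.

        import threading

        accounts = {"alice": 100, "bob": 100}

        # The shortcut: one big lock serializes every transaction.
        # Trivially consistent, but under load everything waits on everything else.
        global_lock = threading.Lock()

        def transfer_serialized(src, dst, amount):
            with global_lock:
                if accounts[src] >= amount:
                    accounts[src] -= amount
                    accounts[dst] += amount

        # Sketch of finer-grained locking: only the rows touched are locked,
        # so unrelated transactions can run concurrently (src != dst assumed).
        row_locks = {name: threading.Lock() for name in accounts}

        def transfer_row_locked(src, dst, amount):
            first, second = sorted([src, dst])        # fixed lock order avoids deadlock
            with row_locks[first], row_locks[second]:
                if accounts[src] >= amount:
                    accounts[src] -= amount
                    accounts[dst] += amount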

  • Already in Use (Score:4, Informative)

    by NearlyHeadless ( 110901 ) on Saturday May 10, 2003 @09:16AM (#5925602)
    There are already memory-resident databases in use. For example, Lucent uses them for creating products which process cell-phone transactions. See http://www.bell-labs.com/project/dali/ [bell-labs.com].

    There are some cool ideas there. They use two copies on disk for backup in case of system failure. Because of this they don't have to do page-latching.

    In some configurations, though, this is irrelevant, because write transactions lock the whole database! Because they know all transactions will be extremely short, this is faster than locking at page or row level.

  • by satsuke ( 263225 ) on Saturday May 10, 2003 @09:21AM (#5925616)
    Memory sounds like a good idea in theory .. but what about power failures or momentary blips .. UPS can help but not eliminate that risk.

    A recent hardware write up I read from HP / Compaq has ram partitioning / raid'ing on some of the higher end x86 servers .. with some options for active standby and hot replacement available.

    Another little blurb was that with ram .. as the number of individual ram components increases the risk of a single bit non ecc correctable fault scales up accordingly .. such that with 8 gig + arrays the chance of uncorrectable error approaches 50% per time interval

    I know memory can develop stuck bits without any warning .. several of the Sun Fire 6800 series machines I work with on a regular basis develop these kinds of errors occasionally .. though with Sun the hardware is smart enough to map around the error and make relevant OBP console & syslog entries.
    • Another little blurb was that with ram .. as the number of individual ram components increases the risk of a single bit non ecc correctable fault scales up accordingly .. such that with 8 gig + arrays the chance of uncorrectable error approaches 50% per time interval

      So what? Most high-end systems scrub the ram every so often, correcting ECC faults as they go. Hell, some of the Opteron chipsets do this (go AMD!).

  • I've used Praedictus' [praedictus.com] (goofy name, I know..) database. It resides totally in memory, used (in my case) to do Statistical Process Analysis in a manufacturing environment.... closed source.

  • ACID? (Score:5, Interesting)

    by ortholattice ( 175065 ) on Saturday May 10, 2003 @09:37AM (#5925667)
    Maybe I read it wrong, but the interview seems to give the impression that Atomicity, Consistency, Isolation and Durability [webmasterworld.com] compliance is primarily concerned with keeping the disk in sync with memory.
    Q. Is it easier to maintain ACID principles with pure RAM?

    A. Yes. This makes ACID almost trivial.

    ...

    Q. Does writing the data to disk add some of the same problems of synchronization and ordering that led to the development of the ACID principles?

    A. If you use the disk only for backups, then things are much easier than before. It's when you store data on disk that you are still manipulating on disk that you need the ACID principles.

    I'm confused. I actually haven't used MySQL much, and someone else can clarify its current ACID compliance. My application involves multiuser financial transactions. When making my DB selection a couple of years ago, at that time it was claimed that MySQL had some ACID deficiencies that made me nervous. I settled on PostgreSQL, which I'm very happy with.

    But there's a lot more to ACID than just keeping RAM and disk in sync, and I don't see how RAM would make ACID that much easier, and certainly not "almost trivial". You still have all the transactional semaphores, record locking, potential deadlocks, rollbacks, etc. to worry about. In fact I don't see why you wouldn't just have the RAM pretend to be a disk and be done with it, since the disk version already has stable software. Then, if it is important to increase performance further, RAM-specific code optimization could be done over time, but slowly and carefully.

    I'm sorry - I really don't want to get into a religious war here, but the interview didn't do much to bolster my confidence in MySQL for mission-critical financial stuff. Educate me.

  • by thogard ( 43403 ) on Saturday May 10, 2003 @09:44AM (#5925701) Homepage
    If you've got enough ram for your database to fit in, why not mmap it and do a simple search? It tends to take up much less memory than a database and you can search a whole lot of records in the time it takes to do a context switch (which is what you get when you use a socket to talk to the database program).
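
    For the curious, the mmap-and-scan approach looks roughly like this in Python (the file name and record size are made up; a real system would use fixed-width records or its own index):

      import mmap

      # Map the data file into the process address space and scan it directly,
      # with no socket round trip to a separate database server.
      with open("records.dat", "rb") as f:
          mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
          offset = mm.find(b"customer_id=12345")    # simple linear scan over the mapping
          if offset != -1:
              record = mm[offset:offset + 64]       # pull out the matching record bytes
          mm.close()
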
  • by Anonymous Coward
    "Gosh, I don't know why anyone would need a database bigger than ram"

    Here's a tip kids, when you stop playing around with the toy databases, give us a call and we'll show you why you still need hard drives.

  • DIV (Score:3, Interesting)

    by Spiked_Three ( 626260 ) on Saturday May 10, 2003 @09:47AM (#5925723)
    Back in the early 90's IBM added a machine instruction to their mainframes called DIV. It treated data in a file system as if it were in virtual memory - i.e. addressRecord[12345] appeared to the program as an in-memory array, but was backed by disk storage - the same format that was used for paging virtual memory - brilliant. It's a shame it never caught on - it would make advances like this transparent in implementation. Well, I guess you can't really say it never caught on - it was a big reason IBM's mainframe databases outperformed everyone else's for so long.

    Is there a similar kind of instruction on Intel? It's probably too late though - indexed arrays have become less useful since associative array patterns have become better defined. A hardware implementation (RAM) of JDO would be interesting.
    • Re:DIV (Score:2, Interesting)

      by lokedhs ( 672255 )
      We have this already. And have had for quite some time. It was invented with Multics, and Linux makes it available using the system call mmap(). Look it up, you might like it. :-)
  • Why not use a RAM-based hard drive? The drive would have a battery backup and the speed of RAM.

    Plus, with new standards like Fibre Channel and various SCSI, you wouldn't lose much if any speed.
    • This would (and does) result in a speedup, but not the kind being talked about here. Instead of doing the same thing faster, as your suggestion would, Monty is discussing doing things differently because there aren't two active copies of the data at all times. (No delay from copying and checking between the two.) RAM-only is at least twice as fast as RAM-to-disk, even if your disks are as fast as the RAM.
  • by cartman ( 18204 ) on Saturday May 10, 2003 @09:54AM (#5925759)
    There are components of ACIDity that would be implemented very differently for RAM-persistent databases than for disk-persistent ones. Maintaining ACIDity on disk-persistent databases requires complicated algorithms to mitigate the disastrous disk seek times. These complicated algorithms would be rendered unnecessary if disks were no longer used.

    For example, disks have incredibly slow seek times and much better bandwidth; therefore it's far cheaper to write things to disk in big chunks. The purpose of write-ahead logging (or "redo logging") is to mitigate the performance impact of slow seek times by blasting all the transactions to disk at once, in the redo log, thereby avoiding the slow seeks that would be required by putting each transaction in its proper place. Putting the transaction data in its proper place is deferred until after the load has died down somewhat. This could be seen as exchanging seek times for bandwidth.

    This redo log mechanism would be unnecessary for ram-persistent databases. It's a significant source of complexity that would be obviated by the removal of disks. And that's just one example of complexity required to get adequate performance from disk, a medium that has disastrously slow seek times.
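
    A bare-bones sketch of the redo-log idea in Python (real engines add checksums, group commit, checkpointing and much more): a commit is one sequential append, and moving the data to its proper place is deferred.

      import json, os

      LOG_PATH = "redo.log"      # sequential appends only: no seeks on the commit path
      tables = {}                # the "proper place", updated lazily

      def commit(txn_id, changes):
          # changes: list of (table, key, value) tuples, written in one append
          with open(LOG_PATH, "a") as log:
              log.write(json.dumps({"txn": txn_id, "changes": changes}) + "\n")
              log.flush()
              os.fsync(log.fileno())     # durable once the append hits the platter

      def apply_log():
          # Later, when load dies down, replay the log into the real tables.
          with open(LOG_PATH) as log:
              for line in log:
                  for table, key, value in json.loads(line)["changes"]:
                      tables.setdefault(table, {})[key] = value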

  • The new Sparc chip was supposed to have some nifty way of massively boosting RAM access times. Without that sort of advance, RAM-resident databases aren't much of a win over RAM-cached DBs (what everyone does now), since the win for fully resident DBs (which Oracle can do if you force it) is that you can lock huge sections of RAM for just the database, but that leads to the problem that for very large amounts of RAM the latency is starting to get too large.

    Hopefully this will be a solved problem soon.
  • The ability to replicate a database, in real time, onto a slower system, would seem to be a key item.

    As long as the DB is replicated onto a slower HD-based DB. This would have other advantages, i.e. duplicating a DB to a remote site for disaster recovery purposes.

  • At present, the principal performance bottleneck of a relational database is disk seek time. Since disks have disastrous seek times, database servers often have an incredible number of disks (hundreds or more), in order to have those disks seeking in parallel, thereby mitigating disastrous seek times of individual disks.

    These hundreds of disks often have very little on them. The purpose of having lots of disks isn't for more storage, but for more drive heads, because lots of heads can be seeking in paralle
    • Re:What's at stake (Score:3, Interesting)

      by Beliskner ( 566513 )
      The difficulty with RAM is that it loses its data after you turn off the power.
      Why does everyone think this? If your motherboard cracks in mid-transaction you will lose transactions. Five lithium batteries (simple mechanics, every watch uses them, rarely fail) connected through diodes in parallel and going through a voltage regulator chip for the RAM provide a much more reliable persistent store than a hard disk.
      • Rarely? When you work in a bank, "Rarely" is a 100 billion dollar word. You don't want that sort of sword of Damocles hanging over you. When one disk fails, generally the other disks don't. If you've got 100 copies of your stock exchange, you're not going to cry if you've only got 99. The problem is that when power fails, every single copy of your data dies.

        Then again, bank dataservers are presumably distributed clusters, and taking out all of them at once is pretty damn hard. Guaranteeing power isn't all that hard
      • If your Motherboard cracks in mid-transaction you will lose transactions.

        No you won't. There's a difference between a transaction not happening, and a transaction being lost - a lost transaction is one that happened, and application and the user think it happened, but in fact the data was never stored. That's the difference between depositing $100 and it never appears in your bank account, and turning up to the bank with $100 but walking away because the bank was closed.

        If a database does proper ACID tr

  • by semanticgap ( 468158 ) on Saturday May 10, 2003 @10:14AM (#5925828)
    I believe it surfaced a while back on /. - can't find any links at the moment, but AFAIK the entire Google index is stored in RAM.
  • by ikekrull ( 59661 ) on Saturday May 10, 2003 @10:44AM (#5925925) Homepage
    Running the DB from RAM is nice, but as far as I can see this won't require any changes to the software itself; you could just mount your DB on a RAMdisk and be done with it. What's the big deal?

    What MySQL and PostgreSQL really lack is the ability to replicate on-the-fly and to support running on clusters for *real* failover and fault tolerance.

    For Postgres, this means multiple 'postmaster' processes being able to access the same database concurrently, and probably something similar for MySQL.

    Being able to run a database on an OpenMOSIX cluster, for example, would make it massively scalable, and being able to run multiple independent machines with an existing HA (High Availability) monitoring system would provide a truly fault-tolerant database.

    There are of course major technical difficulties involved in making databases work this way, but an Open Source DB that can compete with Oracle's 'Unbreakable' claims would be a huge shot in the arm for OSS in the business world.

  • I thought one of the key points of ACIDity was to maintain data integrity in the event of catastrophic system failure (ie, power goes out)?

    With a dynamic RAM system (DRAM also isn't all that reliable...SRAM is better, and SRAM is very expensive) you are highly vulnerable to this.

    I suppose you could implement a kind of write-back system to the disk where you pile up things in some kind of buffer, but under heavy load, you're going to overwhelm it. Or at the very least cause the thrashing that this suppose

  • DBD::RAM (Score:3, Informative)

    by suntse ( 672374 ) on Saturday May 10, 2003 @11:45AM (#5926163)
    Any Perl programmers in the audience may wish to check out DBD::RAM. From the CPAN documentation: "DBD::RAM allows you to import almost any type of Perl data structure into an in-memory table and then use DBI and SQL to access and modify it. It also allows direct access to almost any kind of file, supporting SQL manipulation of the file without converting the file out of its native format." More information here [cpan.org]
  • by panurge ( 573432 ) * on Saturday May 10, 2003 @11:53AM (#5926186)
    To a certain extent this is a dupe of any previous article about emulating hard disk drives in RAM. Perhaps it is worth making a few points.

    First, as others have said, a properly designed RAM subsystem can be battery backed up. In terms of getting the data out, loss of power to the RAM is no more catastrophic than loss of power to the CPU, the router, the computer running the middleware, or whatever. Because RAM is a purely semiconductor approach, any battery backup system can be simple and reliable.

    In fact, it should not be too difficult to design a system which, in the event of power fail, dumps data to backup disk drives. To get to that state, the main system has already failed to do a clean shutdown, so this is a last resort issue.

    The next thing is error detection and correction. It's true that single bit ECC is limited, but it also takes only limited resources (7 additional bits for a 32-bit subsystem, 8 for 64). Memory subsystems could have extra columns so that if bit errors start to multiply in one column, it can be switched out for a new one. Just as with any error detection and correction strategy, single bit detection in columns can be combined with further error correction down rows, using dedicated hardware to encode and decode on the fly. Just good old basic electronics.
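
    The "7 additional bits for 32, 8 for 64" figure is just the standard Hamming SECDED overhead; a quick Python check of the generic formula (not tied to any particular memory controller):

      def secded_check_bits(data_bits):
          # Hamming: smallest r with 2**r >= data_bits + r + 1,
          # plus one extra parity bit for double-error detection (SECDED).
          r = 1
          while 2 ** r < data_bits + r + 1:
              r += 1
          return r + 1

      print(secded_check_bits(32))   # 7
      print(secded_check_bits(64))   # 8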

    In the worst case, it should be possible to build an extremely reliable memory system for a bit penalty of 50% - no worse than mirroring two hard drives. It won't be quite as fast as writing direct to motherboard RAM, but we don't want to do that anyway (we want to be able to break the link on power fail, save to disk, then later on restore from disk. And we want the subsystem in its own cabinet along with the batteries. No one in their right mind is suggesting having a couple of C cells taped to this thing and held on with croc clips.)

    I'd even venture to suggest that most MySQL databases are not in the terabyte range, and that most databases aren't in the gigabyte range even if they are mission critical in SMEs.

    Conclusion? As usual we have the people trying to boast "My database is far too big and complicated for MySQL! So MySQL sucks! My database is too (etc.) to run in RAM! So running DBs from RAM sucks!" and ignoring the fact that there are many web databases where transactional integrity is not an issue, and the market for a RAM store for databases in the low Gbyte range might actually be rather substantial.

  • by Animats ( 122034 ) on Saturday May 10, 2003 @11:57AM (#5926200) Homepage
    We may be moving to an era where disks are primarily an archival medium. Unfortunately, disk manufacturers are moving to an era where disks are less reliable and have shorter warranties. There's a problem here.

    We need archival storage devices that won't lose data unless physically destroyed. We don't have them. Tapes don't hold enough data any more. Disk drives don't have enough shelf life.

    DVD-sized optical media in caddies for protection, maybe.

    (It's annoying that CDs and DVDs went caddyless. Early CD drives used caddies to protect the CDs, but for some idiotic reason, the caddies cost about $12 each. We should have had CDs and DVDs in caddies, with the caddy also being the storage box and the retail packaging for prerecorded media. There's no reason caddies have to be expensive. 3.5" floppies, after all, are in caddies.)

  • ...But such a condition (DB in RAM) will make his product pretty much obsolete.
    The Prevayler Project [prevayler.org] is a RAM-only Java persistence project that works and is so simple not a single bug has been found in the production release.
    3000 times faster than MySQL (9000 times faster than Oracle), even with the database cached entirely in RAM, simply because the JDBC overhead is eliminated.
    The only sticking points I've seen are:
    1. Normal PC boards generally will only take 1GB of RAM. Sure there are th
  • by godofredo ( 198906 ) on Saturday May 10, 2003 @06:39PM (#5928076)
    Modern storage solutions (like EMC) use redundant battery backed ram to buffer writes, greatly reducing perceived write latency. This gives you a lot of the performance gain of a ram only database, and also scales very well to large loads. (in fact, when choosing RAID stripe size you take into account whether writes are buffered; if not, keep stripes small for log files)

    If you know that your data will always fit into available ram then there are a number of performance optimizations that can be done. I'm not sure about ACID becoming "trivial"; You still need most of the same db components: indexes, lock managers, operation journaling, etc. But many of these could be greatly simplified:

    1. Page/Buffer Manager Eliminated. Since no disk IO will be required for the normal running of the db, there will be no need for a page manager. This eliminates complexity such as block prefetch and marking and replacement strategies. In fact, the data will probably not be stored on pages at all. Details such as block checksum, flip flop, log position, page latches etc can all be removed. The values in the rows would be sitting in memory in native language formats rather than packed, making retrieval much faster. There would be no need for block chaining.

    2. More flexible indexing. Since it is not necessary to store data in pages, traditional B-Trees are not absolutely required. Other index structures like AVL trees would be faster and might allow better concurrency. These trees would also be easier to keep balanced... most databases don't clean up indexes after deletes, forcing periodic rebuilding. Other index schemes not generally considered because of poor locality principles could be considered. Note that hash indexes would probably still use linear hashing. (A small sketch of a page-free in-memory index follows this list.)

    3. Lock Manager Simplified. Row level locking (and MVC) are still desired features, but keeping the locks all in memory simplifies implementation. Oracle and InnoDB store lock information in the blocks (associated with transaction) to allow update transactions larger than memory.

    4. Log manager simplified. You will still need journaling capability for rollback, replication, recovery from backup etc. But the implementation of the log need not be traditional. Any structure that maintains information about transactions and contains causal ordering will do. Techniques such as keeping old versions of rows adjacent to current versions, which are unacceptable for disk based databases (ahem, Postgres), could be used.
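
    (Sketch referenced from point 2.) A crude illustration of an in-memory index that ignores pages entirely; this Python sketch uses a plain sorted list via bisect rather than an AVL or T-tree, purely to show the shape of the idea:

      import bisect

      class MemoryIndex:
          """Ordered in-memory index: no pages, no block I/O, just sorted parallel arrays."""
          def __init__(self):
              self.keys = []     # kept sorted
              self.rows = []     # row references, parallel to keys

          def insert(self, key, row):
              i = bisect.bisect_left(self.keys, key)
              self.keys.insert(i, key)
              self.rows.insert(i, row)

          def lookup(self, key):
              i = bisect.bisect_left(self.keys, key)
              if i < len(self.keys) and self.keys[i] == key:
                  return self.rows[i]
              return None

          def range(self, lo, hi):
              i = bisect.bisect_left(self.keys, lo)
              j = bisect.bisect_right(self.keys, hi)
              return self.rows[i:j]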

    Although these may seem like small things, they can add up: less code to run is faster code. A company called TimesTen offered a product that they claimed was 10x faster than Oracle using an all-memory DB. Generally the corporate world doesn't care to split hairs. They want something that works, and they are willing to throw some money and iron at it. That's why battery-backed RAM in the disk controller to buffer writes is probably going to be fine for now.

    A last note: modern databases already know to not bother with indexes when a table is sufficiently small.

    JJ
