
Brian Aker On the Future of Databases

blackbearnh recommends an interview with MySQL Director of Technology Brian Aker that O'Reilly Media is running. Aker talks about the merger of MySQL with Sun, the challenges of designing databases for a SOA world, and what the next decade will bring as far as changes to traditional database architecture. Audio is also available. From the interview: "I think there's two things right now that are pushing the changes... The first thing that's going to push the basic old OLTP transactional database world, which... really hasn't [changed] in some time now — is really a change in the number of cores and the move to solid state disks because a lot of the... concept around database is the idea that you don't have access to enough memory. Your disk is slow, can't do random reads very well, and you maybe have one, maybe eight processors but... you look at some of the upper-end hardware and the many-core stuff,... and you're almost looking at kind of an array of processing that you're doing; you've got access to so many processors. And well the whole story of trying to optimize... around the problem of random I/O being expensive, well that's not that big of a deal when you actually have solid state disks. So that's one whole area I think that will... cause a rethinking in... the standard Jim Gray relational database design."
This discussion has been archived. No new comments can be posted.

  • Leaky abstractions (Score:5, Interesting)

    by yoris ( 776276 ) on Tuesday June 03, 2008 @07:10PM (#23645239)
    Gotta love that link between the hardware limitations and the software concepts that may seem fancy but are essentially only built to get around them. I believe someone once called it "the law of leaky abstractions" - it would be interesting to see what the new limitations would be if you start combining solid-state storage with pervasive multiprocessing, i.e. what can you do with a multi-processor, multi-SSD server that you cannot do with a single-processor, single-hard-drive server?

    I think TFA is right on the money that parallelization and massive use of SSDs could cause some pretty fundamental changes in how we approach database optimization - if I imagine that rack I'm staring at being filled with SSDs and processors instead of nothing but hard drives... locality of data takes on a whole new meaning if you don't require data to be on the same sector of the HD, but rather want certain sets of data to be stored on storage chips located near the same processor chips to avoid overloading your buses.

    Then again, I haven't been in this game that long, so maybe I'm overestimating the impact. Old-timer opinion would be very welcome.
  • Admittedly.... (Score:2, Interesting)

    by Enderandrew ( 866215 ) <enderandrew&gmail,com> on Tuesday June 03, 2008 @07:27PM (#23645429) Homepage Journal
    I haven't read the article yet, but that summary terrifies me. I keep hearing how in the modern age we shouldn't think about optimal programming because people have more resources than they need.

    Databases need to scale to disgustingly large numbers. Memory and disk resources should always be treated as expensive, precious commodities, because they might be plentiful for a simple database on robust hardware, but there are plenty of people out there with massive friggin' databases.

    In corporate America, Oracle and MSSQL sadly are king. MySQL has some interesting advantages, one of them being performance over MSSQL, but if they squander that, what will they be left with? And frankly, I don't think Sun paid a fortune for MySQL just to piss away opportunities at gaining ground in corporate America.
  • Re:Too small (Score:3, Interesting)

    by Anonymous Coward on Tuesday June 03, 2008 @07:33PM (#23645479)
    It's more accurate to say that there will probably always be a tradeoff between slow and fast storage, between permanent and temporary storage, and between expensive and cheap storage.

    In 20 years, I do not know what form slow, or cheap, or permanent storage may take. It may not be spinning magnetized platters. But I do know that in 20 years, every well-written database will have algorithms and data structures to deal with slow storage, permanent storage, and cheap storage.
  • by Enderandrew ( 866215 ) <enderandrew&gmail,com> on Tuesday June 03, 2008 @07:34PM (#23645501) Homepage Journal
    I'm actually reading the article now, and as he talks about designing a database to take multiple cores into consideration, I'm wondering whether the traditional locking approach used in MySQL (and most SQL databases, as far as I know) somewhat kills parallel operations. Wouldn't the InterBase approach work better in a parallel environment?

    Again, I'm sure this is a stupid question, but perhaps someone could clue me in.
  • by njcoder ( 657816 ) on Tuesday June 03, 2008 @08:34PM (#23646071)
    If I were migrating away from Oracle, MS SQL Server wouldn't be my first choice. PostgreSQL would. Given the choice between a free product that is similar to the original and a very different product that I need to pay for, it's a no-brainer. Also take into consideration that for some database applications you're going to need some serious horsepower. You're limited in the number of procs you can have in a Windows system. Last time I checked, once you get past 8 processors Windows doesn't scale as well. Even Linux doesn't do as well as Solaris, AIX or HP-UX past a certain number of procs.

    Oracle's RAC seems to be a better solution than MSSQL's approach. PostgreSQL (and EnterpriseDB) are working on a more RAC-like approach.

    This is a good story about a company that successfully moved from Oracle to PostgreSQL [arnnet.com.au]. Basically, they had two database systems running Oracle: a data warehouse and an OLTP system. They moved their data warehouse over to PostgreSQL running on Solaris 10, then used the licenses they no longer needed for the data warehouse to boost the computing power of the OLTP system.
  • by Anonymous Coward on Tuesday June 03, 2008 @09:02PM (#23646285)
    InnoDB uses MVCC as well. As storage goes, InnoDB is perfectly serviceable. It's just the rest of the DB engine around it that's out of whack.
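
    For the parent's question about locking vs. the InterBase approach: MVCC is the trick that lets readers and writers avoid blocking each other. Here's a toy, purely conceptual sketch (a made-up in-memory version store, nothing like the real InnoDB internals):

      import itertools
      import threading

      class MVCCStore(object):
          """Toy multi-version store: every write appends a new version instead of
          overwriting in place, so readers keep seeing the snapshot they started with."""
          def __init__(self):
              self._versions = {}              # key -> list of (txn_id, value)
              self._clock = itertools.count(1)
              self._lock = threading.Lock()    # protects the version lists only

          def begin(self):
              return next(self._clock)         # snapshot timestamp for a reader

          def read(self, snapshot, key):
              # Newest version written at or before our snapshot wins.
              for txn_id, value in reversed(self._versions.get(key, [])):
                  if txn_id <= snapshot:
                      return value
              return None

          def write(self, key, value):
              # Writers append a new version; no reader is ever blocked.
              with self._lock:
                  self._versions.setdefault(key, []).append((next(self._clock), value))

      store = MVCCStore()
      store.write("balance", 100)
      snap = store.begin()
      store.write("balance", 50)               # concurrent writer
      print(store.read(snap, "balance"))       # still sees 100 - a repeatable read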
  • Re:Admittedly.... (Score:2, Interesting)

    by TheFlamingoKing ( 603674 ) on Tuesday June 03, 2008 @09:21PM (#23646403)
    I wonder if I've been reading Slashdot too long - I can't tell whether this is a troll, a joke, a newbie, or an actual legitimate issue...
  • Locality is the key (Score:5, Interesting)

    by Dave500 ( 107484 ) on Tuesday June 03, 2008 @09:55PM (#23646627)
    As a database engineer for a Wall Street bank, the biggest near-term change we foresee is data locality.

    Given the amount of computing power on hand today, it may surprise many how difficult it is to engineer a system capable of executing more than a few thousand transactions per second per thread.

    Why? Latency. Consider your average SOA application, which makes 4-5 remote service or dataserver calls to execute its task. Each network/rpc/soap/whatever call has a latency cost of anything between one and, at worst, several hundred milliseconds. Let's say for example that the total latency for all the calls necessary is 10 milliseconds. 1000/10 = 100 transactions per thread per second. Oh dear.
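
    (In code, for anyone who wants to play with the numbers - the per-call latencies are made up:)

      # Per-thread throughput ceiling when each transaction makes blocking remote calls.
      call_latencies_ms = [2, 3, 1, 4]          # hypothetical latencies of four service calls
      total_ms = sum(call_latencies_ms)         # 10 ms of waiting per transaction
      print(1000.0 / total_ms)                  # -> 100.0 transactions per thread per second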

    The amount of memory an "average" server ships with today is in the 32-64GB range. Next year it will be in the 64-128GB range. The average size of an OLTP database is 60-80GB.

    So the amount of memory available to the application tier will very soon be greater than the size of the database, warehouses excluded. Moore's law will quickly give the application tier far more memory than it needs to hold the average business's state, exceptions noted.

    The final fact in the puzzle is that for transaction processing, read operations outnumber write operations by roughly 20 to 1. (This will of course vary by system, but that *is* the average.)

    This situation argues strongly in favor of migrating read-only data caches back into the application tier, and only paying for the network hop when writes are done, in the interests of safety. (There is a lot of research into how writes can be done safely asynchronously at the moment, but it's not ready yet IMHO.)

    Challenges exist in terms of efficient data access and manipulation when caches are large, performant garbage collection and upset recovery - but they are all solvable with care.

    It's my opinion that in the near future large data caches in the application tier will become the norm. What has to be worked out is the most effective way of accessing, manipulating and administering that data tier and dealing with all the inevitable caveats of asynchronous data flow.
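
    To make the pattern concrete, here's a rough sketch of what I mean - reads served from an in-process cache, writes still paying the network hop synchronously for safety. The names and the load/store callables are hypothetical; the real products listed below add invalidation, replication, eviction and so on:

      class ReadMostlyCache(object):
          """Sketch of an application-tier cache: reads are local, writes go
          through to the database synchronously and then update the cache."""
          def __init__(self, load_from_db, store_to_db):
              self._load = load_from_db     # callable: key -> value (the slow network hop)
              self._store = store_to_db     # callable: (key, value) -> None
              self._cache = {}

          def get(self, key):
              # Cache hit: no network latency at all.
              if key not in self._cache:
                  self._cache[key] = self._load(key)    # miss: pay the hop once
              return self._cache[key]

          def put(self, key, value):
              # Writes are the ~1-in-20 case, so pay the hop for safety,
              # then keep the local copy coherent for subsequent reads.
              self._store(key, value)
              self._cache[key] = value

      # Usage with stand-in callables in place of a real dataserver:
      backing = {}
      cache = ReadMostlyCache(backing.get, backing.__setitem__)
      cache.put("cust:42", {"name": "ACME"})
      print(cache.get("cust:42"))               # served locally on every later read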

    Some (not complete) examples of implementing this:

    Relational Caches (there are many more):
    http://www.oracle.com/technology/products/coherence/coherencedatagrid/coherence_for_java.html
    http://www.alachisoft.com/ncache/index.html

    Object Caches:
    http://www.ogf.org/OGF21/materials/970/GigaSpaces_DataGrid_OGF_Oct07.ppt
    http://jakarta.apache.org/jcs/
  • by Animats ( 122034 ) on Tuesday June 03, 2008 @11:09PM (#23647135) Homepage

    Until recently, solid state storage devices have been treated as "disks". But they're not disks. They have orders of magnitude less latency.

    For files, this doesn't matter all that much. For databases, it changes everything. Solid state devices need new access mechanisms; I/O based seek/read/write is oriented towards big blocks and long latencies. The optimal access size for solid state storage devices is much smaller, more like the size of a cache line. With smaller, lower latency accesses, you can do more of them, instead of wasting channel bandwidth reading big blocks to get some small index item. It's not RAM, though; these devices usually aren't truly random access.

    It starts to make sense to put more lookup-type functions out in the disk, since getting the data into the CPU has become the bottleneck. Search functions in the disk controller went out of fashion decades ago, but it may be time to bring them back. It may make sense to put some of the lower-level database functions in the disk controller, at the level of "query with key, get back record". Caching at the disk controller definitely makes sense, and it will be more effective if it's for units smaller than traditional "disk blocks".
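
    A sketch of the sort of interface I mean - hypothetical, since no shipping controller exposes this - where the host asks for a record by key instead of pulling whole blocks across the channel to do the lookup itself:

      import io
      import struct

      class RecordStore(object):
          """Toy 'query with key, get back record' device interface. A real controller
          would keep the index on its side and return only the small record, instead
          of shipping big blocks to the host."""
          RECORD_SIZE = 64                      # cache-line-ish payloads, not disk blocks

          def __init__(self, device):
              self._dev = device                # file-like object standing in for the device
              self._index = {}                  # key -> byte offset, held "device-side"

          def put(self, key, record):
              assert len(record) <= self.RECORD_SIZE
              self._dev.seek(0, 2)              # append at the end
              self._index[key] = self._dev.tell()
              self._dev.write(struct.pack("I", len(record)) +
                              record.ljust(self.RECORD_SIZE, b"\0"))

          def get(self, key):
              # The only thing that crosses the channel is the 64-byte record.
              self._dev.seek(self._index[key])
              (length,) = struct.unpack("I", self._dev.read(4))
              return self._dev.read(self.RECORD_SIZE)[:length]

      store = RecordStore(io.BytesIO())
      store.put(b"key1", b"hello record")
      print(store.get(b"key1"))                 # -> b'hello record'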

    This could be the beginning of the end of the UNIX "everything is a stream of bytes" model of data storage. We may see the "database is below the file system" model, which has appeared a few times in mainframes, make it to ordinary servers.

  • by ppanon ( 16583 ) on Tuesday June 03, 2008 @11:31PM (#23647247) Homepage Journal
    Interesting ideas, but it would seem that, once your application tier is spread over multiple servers that don't share a memory space, you are going to have significant distributed cache coherency issues. While I can understand the desire to avoid the marshalling overhead involved in database reads and updates, you're also going to have to reinvent the wheel of distributed concurrency control for each application when it's already been solved in a general way in the clustered database.

    For instance, from the JCS link you provided:
      JCS is not a transactional distribution mechanism. Transactional distributed caches are not scalable. JCS is a cache not a database. The distribution mechanisms provided by JCS can scale into the tens of servers. In a well-designed service oriented architecture, JCS can be used in a high demand service with numerous nodes. This would not be possible if the distribution mechanism were transactional.

    So if you're having to give up transactional integrity to have your distributed cache, I think it's going to have limited applications because it doesn't solve that 1000 transactions per thread problem you indicated. Sure you can work your way around it a little by treating it as extremely optimistic locking to maintain transactional integrity on writes, but it also does limit the accuracy of the cache and for some applications (financial for starters, I would expect) that's going to be an issue.
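
    By "extremely optimistic locking" I mean roughly this (a toy, hypothetical sketch, not any product's API): the cache serves possibly stale reads, and every write carries the version it was based on so the authoritative database can reject lost updates:

      class VersionConflict(Exception):
          pass

      class VersionedTable(object):
          """Toy authoritative store: each row carries a version counter, and a
          write only succeeds if the caller read the current version."""
          def __init__(self):
              self._rows = {}                          # key -> (version, value)

          def read(self, key):
              return self._rows.get(key, (0, None))    # (version, value)

          def write(self, key, expected_version, value):
              current_version, _ = self._rows.get(key, (0, None))
              if current_version != expected_version:
                  # Someone else committed first: the caller must re-read and retry,
                  # which is exactly where a stale application-tier cache hurts.
                  raise VersionConflict(key)
              self._rows[key] = (current_version + 1, value)

      table = VersionedTable()
      version, _ = table.read("acct:7")
      table.write("acct:7", version, 100)              # ok: version 0 -> 1
      try:
          table.write("acct:7", version, 200)          # stale version: rejected
      except VersionConflict:
          print("retry with a fresh read")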
  • by Dave500 ( 107484 ) on Tuesday June 03, 2008 @11:49PM (#23647373)
    Extremely valid point.

    Not to bash Oracle, but the ultimate scalability of their multi-host database clustering solution (RAC) is indeed limited by the amount of communication the distributed lock manager needs to do to ensure transactional isolation as the number of hosts increases. (Caveat for Oracle fans: 80% of requirements fall beneath this threshold, so I understand Oracle's strategy.) An alternative is the "shared nothing" partitioning approach (for example, DB2's DPF), but that has its own drawbacks too.
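
    For anyone who hasn't met shared-nothing partitioning, the core trick is just deterministic routing of each key to exactly one host - a toy sketch, nothing DB2-specific:

      import hashlib

      class ShardedStore(object):
          """Toy shared-nothing layout: each key lives on exactly one node, so there
          is no distributed lock manager - at the price of cross-node queries/joins."""
          def __init__(self, nodes):
              self._nodes = nodes                      # e.g. one dict standing in per host

          def _node_for(self, key):
              digest = hashlib.md5(key.encode()).hexdigest()
              return self._nodes[int(digest, 16) % len(self._nodes)]

          def put(self, key, value):
              self._node_for(key)[key] = value         # single-node operation, local locking only

          def get(self, key):
              return self._node_for(key).get(key)

      nodes = [{}, {}, {}]                             # stand-ins for three independent hosts
      store = ShardedStore(nodes)
      store.put("order:1001", {"total": 250})
      print(store.get("order:1001"))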

    I don't pretend for a second to know all the answers - indeed I suspect that some of them are yet to be invented/utilized effectively by industry.

    My major point is that having distributed application-side data caches will soon become very tempting in terms of the latency involved with accessing data. There are admittedly great challenges involved in doing this safely, in a way which is scalable as you point out, and in providing a productive application interface.

    It will be very interesting over the next few years as we collectively work out the best approach to these requirements. Anybody can be wrong - me of all people - but my bet is that most of these problems will be solved. How they will be is the coolest part :) .
       
  • Re:Dear Slashot (Score:5, Interesting)

    by dave87656 ( 1179347 ) on Wednesday June 04, 2008 @12:48AM (#23647687)
    Okay, I'll bite too ...

    We've been running MySQL using MyISAM since 2002. It delivered acceptable performance until recently, as we've expanded our application and the data volumes have increased. Now we have to reorganize it on a frequent basis (we just back up and restore).

    But we really need to move to a transactional model, so I've done some benchmarking between InnoDB and PostgreSQL. In almost all cases, PostgreSQL was significantly faster. Our application is very transactional, with a lot of writes.

    And from what I've read, PostgreSQL scales well to multiple processors and cores, whereas MySQL does not. I know Falcon is coming, but it was still very alpha at the time I compared - I couldn't get it to run long enough to perform the tests.

    Has anyone else compared Postgres to MySQL/InnoDB?
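
    For what it's worth, my comparison was nothing fancier than this sort of loop run against both servers with the same schema (any DB-API 2.0 driver; the table, row count and connection strings are just placeholders):

      import time

      def bench_writes(conn, rows=10000, batch=100):
          """Time committed inserts through a DB-API 2.0 connection
          (e.g. psycopg2 for PostgreSQL, MySQLdb for MySQL/InnoDB).
          Assumes a table: CREATE TABLE bench (id INTEGER, payload VARCHAR(100))."""
          cur = conn.cursor()
          start = time.time()
          for i in range(rows):
              cur.execute("INSERT INTO bench VALUES (%s, %s)", (i, "x" * 100))
              if i % batch == 0:
                  conn.commit()                 # commit frequency dominates write-heavy loads
          conn.commit()
          return rows / (time.time() - start)   # committed inserts per second

      # e.g.: print(bench_writes(psycopg2.connect("dbname=test")))
      #       print(bench_writes(MySQLdb.connect(db="test")))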
  • Re:Too small (Score:2, Interesting)

    by ZerdZerd ( 1250080 ) on Wednesday June 04, 2008 @09:11AM (#23650575)
    I bought the same batch, but before setting up the RAID I used each drive differently (ran benchmarks, copied files, etc.) - some heavily, some lightly. The probability of them crashing at the same time should be smaller than if all of them got the same wear and tear.
  • Object Databases? (Score:2, Interesting)

    by Grapedrink ( 1298113 ) on Wednesday June 04, 2008 @09:27AM (#23650857)
    Not trying to start a war here, but seriously Databases != RDBMS. It seems like no one knows that object databases have been around a long time too. In the context of the article, many of the points can be applied to all types of databases, but it's so focused on the RDBMS (no shock considering the author).

    There were a multitude of issues in the past with object databases - agendas, performance, complexity, etc. - that put relational databases at the forefront. Hardware and the quality of object databases have more than caught up, so why are object databases still so rarely used?

    One answer why object databases are ignored to a large degree is that people don't like to stray from the norm and tend to implement what they know. Another possibility is many people simply have never even heard of the concept of object databases. Further, in academia we almost exclusively focus on relational databases in most courses. Finally, legacy data is perhaps the biggest hurdle.

    A corollary to the issues above is that there is an entire industry of DBAs and developers who fight learning something new. There are also mega-corps with billions invested in the relational database concept. I don't blame MySQL or some of the things said in the article, because they're just trying to improve, but at the user level it's amazing how much effort goes into adapting the RDBMS to the online world, and the resulting crazy architecture/technologies/code.

    Object-relational mappers are a great example of our unwillingness to leave the RDBMS world (unless you're working with legacy/existing data, of course, but even then, investigate the possibility of migrating). Why do we need ORMs in the first place? They are a product of using relational databases. When I'm programming, I want to work in objects and not bizarre mapping layers, complicated DALs, etc. We spend so much time on mappings and layers to build bridges between the relational and object worlds, at the cost of productivity and performance, simply to continue to hang on to our old RDBMSs.
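
    To illustrate the mapping tax - hand-rolled relational mapping versus just committing the object graph (the object-database lines are a hypothetical API, not any particular product's):

      import sqlite3

      class Person(object):
          def __init__(self, name, friends=None):
              self.name = name
              self.friends = friends or []      # the object graph: plain references

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
          CREATE TABLE friendship (person_id INTEGER, friend_id INTEGER);
      """)

      def save_relational(person, ids=None):
          """The hand-rolled 'ORM': flatten the graph into rows plus a join table."""
          ids = {} if ids is None else ids
          cur = conn.execute("INSERT INTO person (name) VALUES (?)", (person.name,))
          ids[id(person)] = cur.lastrowid
          for friend in person.friends:
              if id(friend) not in ids:
                  save_relational(friend, ids)
              conn.execute("INSERT INTO friendship VALUES (?, ?)",
                           (ids[id(person)], ids[id(friend)]))

      # Object-database route (hypothetical API): persist the graph as-is.
      #   root["alice"] = alice
      #   db.commit()                            # no schema, no join table, no mapping layer

      alice = Person("Alice", [Person("Bob")])
      save_relational(alice)
      print(conn.execute("SELECT COUNT(*) FROM friendship").fetchone())   # -> (1,)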

    I've found that in most cases, object databases are faster for my projects. I've also tried related databases like grid/network databases. There are definitely cases where relational databases are better, so I would choose one over the other on a case-by-case basis. I find that for the average case I've seen, hardware and architecture tip the balance in favor of object databases, because of the way we want to model things as objects anyway. If we look at a popular type of app right now, a social network... why use a relational database? Typically the associations and structures we make are objects and hierarchies or networks, and relational databases are ill-suited to both. Instead, we start to develop hack and wtf schemas, rely too heavily on the app to sort out the data, or introduce object-database-like concepts such as table inheritance. This also forces us to introduce and learn yet another language.

    SQL is a huge discussion in itself. I find SQL brilliant and easy to use, but nonetheless ill-suited for many tasks. Once cursors, user-defined functions, etc. were introduced, the nightmare got worse. As a result, I constantly find procedural and object constructs in client-written SQL instead of set-based constructs. This ends up crippling performance, and instead of fixing the issues, decision makers will just throw more hardware at the problem or ignore it altogether. There's also this myth that somehow SQL creates a way for the layman to query data in the database. This is true to a small degree, but has mutated into something not unlike "human readable" for XML.

    I'm certified in SQL Server and Oracle, and Postgres is my home RDBMS of choice, so certainly I have a lot invested, but if I'm offered something that is better I will gladly abandon all my intellectual and time investments in these systems. I use whatever works the best for the task. After building several apps using Gemstone over the years, I have to cringe every time I return to Oracle or even w
