Database Bigwigs Lead Stealthy Open Source Startup 187

BobB writes "Michael Stonebraker, who cooked up the Ingres and Postgres database management systems, is back with a stealthy startup called Vertica. And not just him, he has recruited former Oracle bigwigs Ray Lane and Jerry Held to give the company a boost before its software leaves beta testing. The promise — a Linux-based system that handles queries 100 times faster than traditional relational database management systems."
  • Partners (Score:5, Informative)

    by stoolpigeon ( 454276 ) * <bittercode@gmail> on Wednesday February 14, 2007 @04:22PM (#18016580) Homepage Journal
    The article mentions that redhat and hp are listed among their partners. i'm not surprised by red hat or informatica (another partner though they aren't mentioned in the article) but i was a little surprised by hp - since they have been trying to get the word out [hp.com] about their own data warehousing and bi stuff. i wonder what that indicates about how they regard this new player.
     
    also interesting is the wikipedia article on Michael Stonebraker [wikipedia.org] if you aren't already familiar with him.
  • by Anonymous Coward
    The article seems to describe the big advantage as being column oriented.

    How does this differ from Kx Systems' kdb (www.kx.com), which IIRC is similar in that way, and is already in use at many if not most major financial institutions (see their customer list)?
    • by georgewilliamherbert ( 211790 ) on Wednesday February 14, 2007 @04:37PM (#18016808)
      KX is primarily in-memory. The competing column-oriented product is primarily Sybase IQ, which has been on the market for a while now.

    • Well, kdb+ is proprietary and expensive. Maybe this product will be Open Source or at the very least kill kdb+ on price? There are not many real players in this market, the more the better IMO. What would be best is a competitive Open Source offering in this space. The Open Source product could steal away most of the market share, or at the very least, really drive down prices! :-)
  • by Anonymous Coward on Wednesday February 14, 2007 @04:23PM (#18016610)
    The question is when will this be ported to a mainstream OS such as Windows?

    • Re: (Score:3, Funny)

      by Mad Merlin ( 837387 )

      The question is when will this be ported to a mainstream OS such as Windows?

      Where by mainstream, you mean useless?

    • No offense, but you must be as dumb as a rock. This is not a "mainstream" application. This is a very business specific application. If the app does what a company needs, not running on MS Windows is _really_ no big deal.

      I have been a senior developer for more than a decade now and have worked at 2 fortune 500 companies and 1 fortune 1000 company. All of the big companies use a multi-OS server setup. While most of the desktops are MS Windows, a lot of the servers are *nix. In fact, all of the reall
  • by varmittang ( 849469 ) on Wednesday February 14, 2007 @04:24PM (#18016616)
    It was LAMP, now it's LAVA. Much cooler name.
  • buzzword enabled (Score:4, Insightful)

    by hey ( 83763 ) on Wednesday February 14, 2007 @04:24PM (#18016620) Journal
    "grid-enabled, column-oriented relational database management system"
    What does that mean?
    If anything.
    • Re:buzzword enabled (Score:5, Informative)

      by c0nst ( 655115 ) on Wednesday February 14, 2007 @04:59PM (#18017032)
      Here you go:
      Stonebraker, Mike; et al. (2005). C-Store: A Column-oriented DBMS [mit.edu] (PDF). Proceedings of the 31st VLDB Conference.
      From the paper:
      Among the many differences in its design are: storage of data by column rather than by row, careful coding and packing of objects into storage including main memory during query processing, storing an overlapping collection of column-oriented projections, rather than the current fare of tables and indexes, a non-traditional implementation of transactions which includes high availability and snapshot isolation for read-only transactions, and the extensive use of bitmap indexes to complement B-tree structures
      :-)
    • by Jherek Carnelian ( 831679 ) on Wednesday February 14, 2007 @05:39PM (#18017472)

      "grid-enabled, column-oriented relational database management system"
      What does that mean?

      Uh, a spreadsheet?
    • Re:buzzword enabled (Score:5, Informative)

      by perfczar ( 1064296 ) on Wednesday February 14, 2007 @05:54PM (#18017616)
      Buzzwords, yes, but they have a little bit of meaning left. Grid-enabled means that it works on a "shared nothing" environment, that you can use a networked cluster of commodity computers if one isn't enough to hold the data, and so on. This is in contrast to using one big huge box (big computer, big storage array, or whatever). Of course many databases are similarly grid-enabled. Column-oriented means that data is stored on disk by column, this makes it fast to process a subset of columns that touch lots of rows, as is typical in data warehouse applications. This is a key architectural difference among databases; Oracle, DB2, etc., are "row stores", while Sybase IQ, Vertica, etc. are "column stores". Note: I work for Vertica Systems
    • Re:buzzword enabled (Score:5, Informative)

      by ChrisA90278 ( 905188 ) on Wednesday February 14, 2007 @05:54PM (#18017618)
      Column oriented means it can read data in from one column on disk without pulling in all the other bytes in the row. Potentially much reduced I/O bandwidth usage, depending on the query. (Kind of like the normal file structure turned sideways.)

      Grid enabled - This means the DBMS can make use of a large distributed group of computers and potentially have access to a huge amount of computing power. The typical DBMS runs on, at best, a multi-processor server. This is kind of like a DBMS server running on a "SETI@home"-type network.

      Going solely by the developer's reputation, this could be a big deal. He is not some random hacker. He is a well-known university professor who has several times in the past led projects that were revolutionary and turned the field around. His ideas are widely used. Still, "100X faster" is a big claim. Lots of smart people have been working on DBMSes for many years; a two-order-of-magnitude improvement is an "I will have to see it to believe it" type of claim.

      I'm using PostgreSQL to handle some telemetry data right now. If my 45 minute run times can be reduced to seconds, I'll be happy.

      • by Virtual_Raider ( 52165 ) on Wednesday February 14, 2007 @09:29PM (#18019562)

        Still "100X faster" is a big claim. Lots of smart people have been working on DMBSes for many years, a two order of magnitude improvement is a "I will have to see it to believe it" type claim

        Oh ye of little faith, here i present thee with The Facts. Or a paper at the very least: One size fits all? a Benchmark [mit.edu]

      > Grid enabled - This means the DBMS can make use of a large distributed group of computers and potentially have access
        > to a huge amount of computing power. The typical DBMS runs on, at best, a multi-processor server. This is kind of like a DBMS
        > server running on a "SETI@home"-type network.

        Or like teradata in around, what? 1992? Informix around 1994? db2 around 1995? Oracle isn't there yet since their grid solution is more about failover than partitioning.

        This is now lower-end functionality i
      • by Kjella ( 173770 ) on Thursday February 15, 2007 @08:18AM (#18022480) Homepage
        Under ideal conditions, I don't have a problem seeing that:

        1. Make up lots of 100-column+ tables
        2. Select one column from each table
        3. If you're IO bound, you should now see about a 100:1 increase

        However, most real data models don't work that way. Usually you put stuff that's useful at the same time in the same table, in which case it probably won't make much of a difference.
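
        As a rough back-of-the-envelope sketch of that 100:1 arithmetic (hypothetical sizes, assuming fixed-width values and a purely I/O-bound scan):

        # Hypothetical 100-column table with fixed-width 8-byte values.
        rows = 10_000_000
        cols = 100
        bytes_per_value = 8

        row_store_bytes = rows * cols * bytes_per_value   # a row store reads every column of every row
        col_store_bytes = rows * 1 * bytes_per_value      # a column store reads only the one column queried

        print(row_store_bytes / col_store_bytes)          # -> 100.0, i.e. roughly a 100:1 reduction in I/O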
    • by bytesex ( 112972 )
      'grid enabled' like a beowulf cluster
      'column oriented' like a table, but then turned on its side.
      'relational database management system' you've got me there. I have no idea.
  • A column oriented relational database? I'd like some more details on how that works. I don't suppose it's just a regular SQL db with Excel's Pivot Tables run on it...

    Seriously, though, the target market for grid-based, high-volume data-warehousing type DBs is a lot smaller than the MySQL crowd. Not as big a deal as it seems, but it'd be nice to have if you needed it.
    • Re: (Score:3, Insightful)

      by stoolpigeon ( 454276 ) *
      smaller in number - but i'm willing to bet much more profitable and growing rapidly. we've been looking at data warehousing options and frankly most of them suck in one way or another. if someone can do it right - they can make a killing.
      • Re: (Score:3, Interesting)

        I've worked in DW for a time, and I can tell you that it's not easy to "get it right" because so far it's not something that can be packaged. You can get the data models and fancy machinery, but you will most definitely need an architect to tailor it to the particular organization because all companies work differently on the inside. And that architect will have a dickens of a time understanding how the company works because the bigger they are, the more likely not even their own employees do. As long as th
    • Depends. Reporting and data warehousing are pretty important; Business Objects / Crystal Reports / etc. all seem to be slower than they could be. If you were to be able to throw in the rows as quickly as in MySQL or Postgres and then report on it with ten times the efficiency, you've got a decent demand in store for you. If, say, Google or Amazon could run with 1/10th the overall servers I have this feeling they would. Just a guess though. It's always possible a new approach to the old problems has result
      • Re: (Score:3, Interesting)

        by stoolpigeon ( 454276 ) *
        info week just ran an article on hp [informationweek.com] getting into data warehousing and bi that had this paragraph pretty early on: Until sitting down with InformationWeek recently, the company has been mum on the initiative--not so much as a peep from its normally talkative marketing team. Indeed, it's an unlikely move into a sector where IBM, Oracle, SAS Institute, and Teradata have years of experience, well regarded products, and loyal customers. Those four vendors--along with Microsoft, which has muscled in on the streng
    • A lot of web sites that started out with small MySQL databases are now using replication. It can be a tough transition if not accounted for in the original development of the site. But if those sites started out with something that's "grid-based" maybe it would be much easier to grow (maybe). I have the feeling the market may be bigger than many people realize, especially if they start with something free.
    • Re:Column oriented? (Score:5, Informative)

      by AKAImBatman ( 238306 ) * <[moc.liamg] [ta] [namtabmiaka]> on Wednesday February 14, 2007 @04:43PM (#18016880) Homepage Journal

      A column oriented relational database? I'd like some more details on how that works.

      http://en.wikipedia.org/wiki/Column-oriented_DBMS [wikipedia.org]

      It's basically an optimization of the current data access patterns. Databases have been row-oriented for decades, because they evolved from fixed width flat files. Once we eliminated COBOL-style accesses to databases, the full row data became less important. It became far more important to be able to scan a column as fast as possible. For example:

      select * from names where lastname LIKE '%son'

      The above query might have an index available to find what it needs. But it's just as likely that the database will need to do a table-scan. Since table-scans involve looking through every record in the database, you can imagine that it would be faster to just load the lastname column rather than loading every row in the database just to discard 90% of that data.
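
      A minimal sketch of that idea (toy in-memory lists standing in for on-disk column files; not any particular engine): the predicate touches only the lastname column, and full rows are assembled only for the matches.

      # Toy column store: one list ("file") per column, aligned by row position.
      firstname = ["Anna",  "Bob",   "Carl",    "Dana"]
      lastname  = ["Olson", "Smith", "Jonsson", "Larson"]
      email     = ["a@x",   "b@x",   "c@x",     "d@x"]

      # Step 1: scan ONLY the lastname column for the LIKE '%son' predicate.
      matching = [i for i, ln in enumerate(lastname) if ln.endswith("son")]

      # Step 2: materialize the full rows just for the matches (the "select *" part).
      for i in matching:
          print(firstname[i], lastname[i], email[i])
      # A row store doing the same table-scan would have read every column of every row.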
    • by georgewilliamherbert ( 211790 ) on Wednesday February 14, 2007 @04:47PM (#18016910)

      A column oriented relational database? I'd like some more details on how that works.

      Column oriented is easy. Imagine a database as a set of tables, each of which has rows of data records, in organized columns (column 1 = "User name", column 2 = "User ID", column 3 = "Favorite slashdot admin", etc).

      Normal row-oriented databases store records which have a row of the data: "User name", "User ID", "Favorite slashdot admin" for user row #12345.

      Column oriented databases store records which have a column of the data: "User name" for user rows 1-100,000; "User ID" for user rows 1-100,000; etc.

      Updates are faster with row-oriented: you access the last record file and append something, or access an intermediate record file and update one "row" across.

      Searches are faster with column-oriented: you access the record file for "Favorite slashdot admin" and look for entries which say "Phred", and then output the list of rows of data which match. Instead of going through the whole database top to bottom for the search, you just search on the one column. If you have 100 columns of data, then you look through 1/100th of the total data in the search. To pull data out, you then have to look at all the column files and index in the right number of records, but that goes relatively quickly.

      Indexes are useful, but column-oriented is more efficient in some ways. You don't have to maintain the indexes, and can just automatically search any column without having indexed it, in a reasonably efficient manner.

      Column-oriented also lets you compress the data on the fly efficiently: all the records are the same data type (string, integer, date, whatever), and lists of the same data type compress well, and uncompress typically far faster than you can pull them off disk, so you can just automatically do it for all the data and save both space and time...
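
      A rough sketch of the two layouts described above, with plain Python lists standing in for the "record files" (illustrative only; the names and values are made up):

      # The same three records, stored two ways.
      records = [
          ("alice", 1, "Phred"),   # (user name, user id, favorite slashdot admin)
          ("bob",   2, "Taco"),
          ("carol", 3, "Phred"),
      ]

      # Row-oriented: one entry per row; appending a new user touches one place.
      row_store = list(records)
      row_store.append(("dave", 4, "Taco"))

      # Column-oriented: one "file" per column; a search reads only the column it needs.
      col_store = {
          "user_name":      [r[0] for r in records],
          "user_id":        [r[1] for r in records],
          "favorite_admin": [r[2] for r in records],
      }
      fans = [i for i, admin in enumerate(col_store["favorite_admin"]) if admin == "Phred"]
      print([col_store["user_name"][i] for i in fans])   # -> ['alice', 'carol']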

      • Re: (Score:3, Insightful)

        by flyingfsck ( 986395 )
        Yup, it is all about making the individual files smaller and more regular. Kinda the opposite of XML.
      • Column-oriented also lets you compress the data on the fly efficiently: all the records are the same data type (string, integer, date, whatever) and lists of same data types compress well, and uncompress typically far faster than you can pull them off disk, so you can just automatically do it for all the data and save both speed and time...

        Given enough spare CPU cycles, yes. LZO compression is probably good for that. In fact, this is part of the theory behind Hans Reiser's claim that Reiser4 would be over
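
        A quick way to see the effect (zlib here is just a stand-in for whatever codec a real engine would use; the numbers are illustrative, not a benchmark):

        import zlib

        # One million rows: a low-cardinality text column plus a 4-byte integer column.
        n = 1_000_000
        statuses = [b"ACTIVE," if i % 2 == 0 else b"CLOSED," for i in range(n)]
        ids = [i.to_bytes(4, "little") for i in range(n)]

        column_layout = b"".join(statuses) + b"".join(ids)           # each column stored contiguously
        row_layout = b"".join(s + i for s, i in zip(statuses, ids))  # the same bytes interleaved per row

        print("columns:", len(zlib.compress(column_layout)))
        print("rows:   ", len(zlib.compress(row_layout)))
        # The contiguous columns usually compress noticeably smaller, since each
        # region holds a single, regular data type.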

      • by swilver ( 617741 )

        Searches are faster with column-oriented: you access the record file for "Favorite slashdot admin" and look for entries which say "Phred", and then output the list of rows of data which match. Instead of going through the whole database top to bottom for the search, you just search on the one column. If you have 100 columns of data, then you look through 1/100th of the total data in the search. To pull data out, you then have to look at all the column files and index in the right number of records, but that

    • the target market for grid-based high volume data-warehousing type dbs are a lot smaller than the MySQL crowd.

      The growth potential for that market is staggering. We've now got desktop computers with enough storage capacity to hold everything a person has written or has ever read, from first grade to grave. We'll be looking for ways to organize these huge attics sometime soon.

  • Awesome (Score:2, Interesting)

    by Fyre2012 ( 762907 )
    This is totally what we need.

    With commodity hardware getting faster and cheaper by the minute, having a system that can handle a higher than average load with optimized software is, imho, a winner.

    I'm sure everyone here can add some anecdotal evidence to how they had a heavy-hardware, database serving machine die on them because of some software bug.
    This is one of the reasons I've been looking forward to ZFS. Hopefully the DB gurus will take the best of what's good about software, drop the legacy c
  • The promise -- a Linux-based system that handles queries 100 times faster than traditional relational database management systems... ...using the power of oxygen!
  • Perfect timing (Score:4, Interesting)

    by defile ( 1059 ) on Wednesday February 14, 2007 @04:31PM (#18016736) Homepage Journal

    Loading a million random records out of a set of one hundred million records is an enormously difficult task for an RDBMS on commodity hardware (e.g. magnetic rotating disks). This is a more common task than you would think. ORM systems backed by an RDBMS, such as Ruby on Rails, Django, Hibernate, have exactly this requirement and will only demand more as these models become more mainstream. Think about what search engines have to do: find millions among billions, all to show a user a dozen.

    These problems are solvable now, but there's a lot of duplication of effort going on that a smart database vendor could solve for us.

    • by symbolic ( 11752 )
      Duplication of effort isn't bad at all....without it, you'll wind up with another Microsoft.
  • by georgewilliamherbert ( 211790 ) on Wednesday February 14, 2007 @04:33PM (#18016774)
    Vertica's website has had all the details about what they're doing for months. They've had a Wikipedia article for a long time.

    This is some new Network World definition of "Stealthy", apparently...
    • Vertica's website has had all the details about what they're doing for months. They've had a Wikipedia article for a long time. This is some new Network World definition of "Stealthy", apparently...

      Network World is a trade rag. To them, anything not advertised is stealthy. Especially since they want to motivate people to think "oh no, I don't want to be stealthy, that means unknown! quick buy some advertising!"

  • Best of luck (Score:5, Insightful)

    by 140Mandak262Jamuna ( 970587 ) on Wednesday February 14, 2007 @04:40PM (#18016836) Journal
    I don't want to rain on their parade. But typically, whenever people start with a spec like "100 times better than what they can do," they assume the incumbents will keep performing at current levels while they take years to develop and mature their new technology. In the real world, the traditional methods improve too, and unless they can maintain a 100x lead continually, the new technology flops.

    What happened to Gallium Arsenide replacing silicon? What happened to solid state memory completely replacing magnetic disks? The technology field is littered with such fiascos.

    • Sybase IQ already shows that class of speedups on lots of datasets. Proof of concept is out there...
    • by PCM2 ( 4486 )

      In the real world, the traditional methods improve too, and unless they can maintain a 100x lead continually, the new technology flops.

      This might be the obvious conclusion if Vertica were targeting the mass market and trying to compete directly with Oracle, SQL Server, or DB2, but they are not. TFA says Vertica is targeted at the data warehousing market, which is a very specific application area that can be better served with niche products than with traditional general-purpose RDBMSs. Base

    • This is a different kind of issue, really, more like the difference between a CPU and a GPU. At the moment, a good GPU has >100x the performance of a good CPU on a certain class of computations. Column stores will clearly never replace row stores for transaction processing for obvious reasons, but (coupled with a few other architectural decisions) they do exhibit >100x the performance of row stores for the kinds of queries seen in data warehouses.

      Also, the two technologies are complementary. The

    • Re: (Score:3, Insightful)

      by einhverfr ( 238914 )
      For certain applications (particularly BI), I think that 100x speedups are practical, but I would not expect it in general OLTP systems.

      Let me give you an example.

      Suppose you have a table with, say, 100 billion rows. You want to create a report which provides aggregated data on a very large subset of a few columns of the table. With a traditional RDBMS, you have to read through every single one of the 100 billion rows to aggregate the data (indexes don't help if you are going to be searching through a sizeable
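
      To make the column-store side of that concrete, a toy illustration (two small lists standing in for two columns of a much wider fact table; the other columns never need to be read):

      from collections import defaultdict

      # Equivalent of: SELECT region, SUM(amount) FROM sales GROUP BY region
      region = ["EU", "US", "EU", "APAC", "US", "EU"]
      amount = [10.0, 25.0,  5.0,  40.0, 15.0, 20.0]

      totals = defaultdict(float)
      for r, a in zip(region, amount):     # touches only these two column arrays
          totals[r] += a
      print(dict(totals))                  # -> {'EU': 35.0, 'US': 40.0, 'APAC': 40.0}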
    • by Ant P. ( 974313 )
      "100 times better" is perfectly feasible, if your average dataset is 100 (arbritary units), your old algorithm is O(n^2) and the new one is O(n).
  • by IflyRC ( 956454 ) on Wednesday February 14, 2007 @04:42PM (#18016860)
    Watch...they'll run into patent problems with patents held by Oracle, Sybase, and MS.
  • Where does it say that Vertica is going to be open source?

    In any case, if people wonder how they get 100x speedups, it's probably related to Stonebraker's previous company called Streambase [streambase.com].
    • There wasn't much information on the web site, but everything is in Wikipedia (look under C-Store, the BSD-licensed open source version). It really is just a column-oriented database.
  • by WindBourne ( 631190 ) on Wednesday February 14, 2007 @04:50PM (#18016934) Journal
    • Re: (Score:3, Interesting)

      by Mad Merlin ( 837387 )
      Look again...

      $ curl -I www.vertica.com
      HTTP/1.1 200 OK
      Date: Wed, 14 Feb 2007 23:00:26 GMT
      Server: Apache/1.3.33 (Unix)
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Expires: Sun, 19 Nov 1978 05:00:00 GMT
      Pragma: no-cache
      X-Powered-By: PHP/4.4.4
      Set-Cookie: PHPSESSID=488de093f5b89a78277a234e1e9886a6; expires=Sat, 10 Mar 2007 02:33:46 GMT; path=/
      Last-Modified: Wed, 14 Feb 2007 23:00:26 GMT
      Content-Type: text/html; charset=utf-8

  • Speculation (Score:5, Informative)

    by cartman ( 18204 ) on Wednesday February 14, 2007 @04:55PM (#18016998)

    I noticed that Stonebraker is the company founder. Stonebraker has contributed extensively to database research over the years.

    He's known for advocating the "shared-nothing" approach to parallel databases. The shared-nothing approach means that nodes in the parallel database don't attempt memory or cache synchronization, and each node has its own commodity disk array. In a shared-nothing parallel database, the data is "partitioned" across servers. So, for example, rows with id's 1-10 would be on the first server, 11-20 on the second server, etc. Executing the SQL query "select * from table where id < 1000" would send requests to multiple commodity servers and then aggregate the results. The optimizer is modified to take into account network bandwidth and latency, etc.

    My guess on what they're doing: they're working on a shared-nothing parallel RDBMS with an in-memory client similar to Oracle TimesTen.

    There are a few drawbacks to the shared-nothing approach: 1) the RDBMS software is more difficult to implement; 2) since the data is partitioned, any transaction that updates tuples on more than one database node requires a two-phase distributed commit, which is much more expensive; and 3) some queries are more expensive because they require transmitting large amounts of data over the network rather than a memory bus, and in rare cases that network overhead cannot be eliminated by the optimizer.

    The advantage, of course, is linear scalability by adding commodity hardware. No more need for $3M+ boxes.
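
    A very simplified sketch of that scatter/gather execution (plain Python lists standing in for each node's local disk array; no real networking or parallelism):

    # Shared-nothing partitioning: each "node" owns a disjoint id range of the table.
    node_a = [{"id": i, "val": i * 10} for i in range(1, 600)]       # ids 1-599
    node_b = [{"id": i, "val": i * 10} for i in range(600, 1200)]    # ids 600-1199
    node_c = [{"id": i, "val": i * 10} for i in range(1200, 1800)]   # ids 1200-1799
    nodes = [node_a, node_b, node_c]

    def node_scan(rows, predicate):
        """What each node runs locally for: select * from table where id < 1000"""
        return [row for row in rows if predicate(row)]

    # Coordinator: fan the query out to every node, then merge the partial results.
    predicate = lambda row: row["id"] < 1000
    result = []
    for partition in nodes:                 # in a real system these scans run in parallel
        result.extend(node_scan(partition, predicate))
    print(len(result))                      # -> 999 rows (ids 1 through 999)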

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Wednesday February 14, 2007 @05:01PM (#18017058)
    ... for a long time.
    Classic RDBMSes are crutches. A forced-upon necessity we have to put up with for our app models to latch on to real-world hardware and its limitations. A historically grown mess with an overhead so huge it's insane. With a database PL and 30+ dialects of it from back in the days when we flew to the moon using a slide rule as the primary means of calculation.
    If what they claim is true, these guys are probably finally ditching the omnipresent, redundant n-fold layers of user and connection management in favour of a lean system that at last does away with the distinction between filesystem, database, and data access layer. Imagine a persistence layer with no SQL, no extra user management, no extra connection layer, no filesystem under it, and native object support for any PL you wish to compile in.
    I tell you, finally ditching classic RDBMSes is *long* overdue; they're basically all the same ancient pile of rubble, from MySQL up to Oracle. If these guys are up to taking on this deed (or part of it), and they get finished when solid state finally relieves our current super-slowpoking spinning metal disks on a broad scale, we'll feel like we're in heaven compared to the shit we still have to put up with today.
    I wish these guys all the best. They appear to have the skills to do it and the authority to emphasise that today's RDBMSes and their underlying concepts are a relic of the past.
    My 2 cents.
    • Imagine a persistence layer with no SQL, no extra user management, no extra connection layer, no filesystem under it, and native object support for any PL you wish to compile in.

      I worked on just such a system, and ended up replacing it with a straightforward RDBMS. The object persistence layer serialised to disk, which offered no benefits over using an RDBMS as the backend data store (which had been in the original design oddly enough). It had to keep everything in memory - which proved impossible when th

  • Given that... (Score:5, Informative)

    by CodeShark ( 17400 ) <{moc.oohay} {ta} {cphtrowslle}> on Wednesday February 14, 2007 @05:26PM (#18017312) Homepage
    MonetDB [monetdb.cwi.nl] is similarly a column-oriented AND open source database, and appears to clean the clock of most of the major commercial and open source databases for huge data set queries (see the benchmarks at axyana.com [axyana.com] for an example), so where is Vertica's market advantage supposed to be?


    By which I am asking: while Vertica is obviously well-researched and well-funded as a startup, MonetDB is well-researched, already benchmarked, and available now. So why would I wait to invest my time, energy, and $$ in a proprietary future product rather than spend them developing market leadership in my chosen corporate area in the present?

    • Re:Given that... (Score:5, Informative)

      by perfczar ( 1064296 ) on Wednesday February 14, 2007 @06:46PM (#18018116)

      Here are a few of the technical reasons one might choose Vertica over Monet; I'll not get into business issues.


      Vertica is designed for large amounts of data, and is optimized for disk based systems. Monet does benchmarks against TPC-H Scale Factor 5 (30 million records, an amount which would fit in main memory) running on Postgres; Vertica does TPC-H Scale factor 1000 (6 billion records) against commercial row stores tuned by people who do such work to make a living.

      Vertica runs on multi-node clusters, allowing the cluster to grow as the amount of data grows, while Monet doesn't scale to multiple machines.

      There are numerous differences in the transaction systems, update architecture, tolerance of hardware failure, and so on, that make Vertica better suited to the enterprise DW market.


      Note: I work for Vertica
      • Thanks for the perspective. That's what I like about /. I can put out a thought or question and mostly get good information back relatively quickly.
  • I wonder how this compares to Netezza [wikipedia.org] (http://en.wikipedia.org/wiki/Netezza).
  • by russryan ( 981552 ) on Wednesday February 14, 2007 @06:09PM (#18017774)
    See http://en.wikipedia.org/wiki/Bigtable [wikipedia.org] for a description of Google's column oriented database.
  • How about a database with the exact same query API (not just "but it's all SQL") as, say, Oracle or MS-SQL, or even Postgres, that allows any number of parallel query servers to work against a single datastore?

    In other words, instead of yet another incompatible database, how about one that we could just switch to from an existing one, that is arbitrarily scalable against shared data. If you're going to get clever and act like you can solve hard problems, why not give people what we need, and not just what y
    • by PCM2 ( 4486 )

      How about a database with the exact same query API (not just "but it's all SQL") as, say, Oracle or MS-SQL, or even Postgres, that allows any number of parallel query servers to work against a single datastore?

      What would be the purpose of that? Performance gains? I/O is going to be your bottleneck there, and it sounds like it would start to clog up sooner, rather than later.

      In other words, instead of yet another incompatible database, how about one that we could just switch to from an existing one, th

      • IO is the bottleneck anyway. The scheme I mentioned reduces the bottlenecks to that single one. And it allows arbitrary scaling with minimal (if any) recoding, just by adding HW.

        If you're going to get snotty and dismissive, why not recognize that the needs of the market, easily/cheaply scalable databases without complex planning in application design, are more important than what this team happens to think it can do better, and don't need a vendor white paper to make clear in a few sentences?
  • by ramakant ( 256472 ) on Wednesday February 14, 2007 @06:37PM (#18018052)
    This looks like it will be a commercial version of C-Store, the column-oriented database developed by Michael Stonebraker and MIT:
    - Web site: http://db.lcs.mit.edu/projects/cstore/ [mit.edu]
    - Wikipedia Entry: http://en.wikipedia.org/wiki/C-Store [wikipedia.org]
    They distribute the source with a fairly liberal license, so this looks like something the open source community could pick up and run with.
  • Is that you do not scale as well to a large number of columns. To access a set of X records with 100 columns, you have 100 asynchronous I/O calls to the separate column stores. I sell analytical software that does just this, and it is not a technical detail that should just be ignored. In some regards the single-file row-oriented system has less I/O overhead. We have come up with some ways to reduce the file system overhead, but while it is small, it is noticeable, more so on systems not designed to h
    • Column-oriented DBs should scale better with more columns if the query doesn't access all the columns (which it rarely does). The DB only needs to keep the columns in memory that are being accessed. This is far better than a row-oriented DB that needs to cycle through the entire table or numerous indexes to get a result set.
  • it's on the front page of slashdot.. how stealthy can it be?
  • by WoTG ( 610710 ) on Wednesday February 14, 2007 @10:54PM (#18020050) Homepage Journal
    I've never heard of column based databases prior to this article. Would I be correct in assuming that you still can work with these using regular SQL?
    • by Jayson ( 2343 )
      One of the benefits of column-oriented DBs is that tables have an ordering, and that ordering can be exploited in queries. SQL doesn't give a good way to exploit it. Column DBs do allow SQL, but they also have other native languages that people tend to use.
    • Yes, still SQL. Column-oriented DBs are meant to optimize SQL reads where you are only using a few columns but the tables have many columns. This doesn't change anything about SQL.

"To take a significant step forward, you must make a series of finite improvements." -- Donald J. Atwood, General Motors

Working...