Zvents Releases Open Source Cluster Database Based on Google

An anonymous reader writes "Local search engine company Zvents has released an open source distributed data storage system based on Google's released design specs. 'The new software, Hypertable, is designed to scale to 1000 nodes, all commodity PCs [...] The Google database design on which Hypertable is based, Bigtable, attracted a lot of developer buzz and a "Best Paper" award from the USENIX Association for "Bigtable: A Distributed Storage System for Structured Data," a 2006 publication from nine Google researchers including Fay Chang, Jeffrey Dean, and Sanjay Ghemawat. Google's Bigtable uses the company's in-house Google File System for storage.'"
  • I'll check back when they get out of alpha.
  • Kitten Nipples (Score:2, Insightful)

    by milsoRgen ( 1016505 )

    ..designed to scale to 1000 nodes, all commodity PCs...
    I'm just curious if anyone has had any experience with these types of systems using commodity PCs. How is performance, and how well does it scale as you increase the number of nodes?
    • I don't really have any first-hand knowledge (outside of network rendering at a pretty small scale), but the concept is definitely sound; it's the same reason why software uses "threads" and why processors now have more than one core...

      As for scaling, it would scale at the same rate as non-commodity computers... if you have 999 computers all of equal performance and then you add another one, you could expect a 0.1% change overall... however it's largely based on what sort of controllers you use, the same as h
    • I was contracted to make the Firebird DB able to work with OpenSSI. Quite frankly, it worked beautifully, and it didn't require that much work. The issue I was faced with was that the storage had to be remote, which wasn't necessarily a problem per se, because nothing ever failed while I was around. Now if the power went out on the storage server and a few nodes at random, I really have no idea what havoc it would have caused... I was told my job was done and they didn't have a need for any sort of fault tol
    • Re: (Score:3, Informative)

      by allenw ( 33234 )
      So, Hypertable runs on top of Hadoop. We don't use Hypertable (or HBase), so I can't comment on those. I can share some of our experiences with Hadoop, though. I think it is safe to say that it scales quite well for the vast majority of people who need it. Let's deep dive for a bit...

      Hadoop keeps all of its file system metadata in memory on a machine called the name node. This includes information about block placement and which files are allocated which blocks. Therefore, the big crunch we've seen is th
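
      To put rough numbers on that memory pressure, here's a back-of-envelope sketch; the per-object byte costs and cluster size are assumptions for illustration, not measured HDFS figures:

      // Back-of-envelope estimate of HDFS name node heap usage.
      #include <cstdint>
      #include <iostream>

      int main() {
          // Assumed metadata cost per object -- illustrative only, not HDFS internals.
          const double kBytesPerFile  = 150.0;
          const double kBytesPerBlock = 150.0;

          const std::uint64_t files  = 100000000ULL;  // hypothetical: 100M files
          const std::uint64_t blocks = 120000000ULL;  // hypothetical: ~1.2 blocks per file

          const double heap_gib =
              (files * kBytesPerFile + blocks * kBytesPerBlock) / (1024.0 * 1024.0 * 1024.0);

          std::cout << "estimated name node heap: " << heap_gib << " GiB\n";
          // Doubling the data set roughly doubles this figure, which is why the
          // name node, not the data nodes, tends to hit the wall first.
      }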
      • Anyone who thinks a machine with 16G of RAM is a _commodity PC_ has to see a doctor. The most important question for scaling is whether you can keep the base requirements within _commodity_ limits as the data set grows. So if 16G of RAM is sufficient _now_, what happens when the data set doubles? Can you just add more 16G nodes, or will you need 32G or 64G nodes? Can you keep the data your engine works with to a certain size? There is a HUGE difference between adding four 16G nodes and a single 64G node. Is the memo
    • I'm not an expert here, but on HPC clusters below 1k nodes I've never had any major problems with channel bonded gigabit and PVFS, however, the simulations we're running are not very I/O intensive.
  • Really, this time, a full fucking beowulf cluster (that runs linux!) is available to /.ers. No. Fucking. Way.

    Alright, I know it's only storage and not processing power, but that was inevitable.
    • Re: (Score:3, Informative)

      by drinkypoo ( 153816 )

      Really, this time, a full fucking beowulf cluster (that runs linux!) is available to /.ers. No. Fucking. Way.

      What?

      There is no particular piece of software that defines a cluster as a Beowulf. Commonly used parallel processing libraries include MPI (Message Passing Interface) and PVM (Parallel Virtual Machine). Both of these permit the programmer to divide a task among a group of networked computers, and recollect the results of processing.

      "Beowulf (computing)." Wikipedia, The Free Encyclopedia. 28 Jan 2008, 12:25 UTC. Wikimedia Foundation, Inc. 9 Feb 2008 <http://en.wikipedia.org/w/index.php?title=Beowulf [wikipedia.org]

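      For anyone who hasn't seen it, here's roughly what that divide-the-work-and-recollect pattern looks like with MPI -- a toy sum with made-up sizes, nothing Beowulf-specific:

      // Minimal MPI example: each node computes a partial result,
      // then rank 0 recollects them with a reduction.
      #include <mpi.h>
      #include <cstdio>

      int main(int argc, char** argv) {
          MPI_Init(&argc, &argv);

          int rank = 0, size = 1;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          // Each node works on its own slice of the problem (here: a trivial sum).
          long long local = 0;
          for (int i = rank; i < 1000000; i += size) local += i;

          long long total = 0;
          MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

          if (rank == 0) std::printf("sum = %lld (computed on %d nodes)\n", total, size);

          MPI_Finalize();
          return 0;
      }

      Build with mpicxx and launch with mpirun -np <nodes>; PVM code follows the same split-the-work-then-gather shape.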
      • Wikipedia lists no less than eight Linux distributions designed specifically for building Beowulf clusters.
        Actually, I'm aware of that. You could say that I overreacted.
  • Project page: http://www.hypertable.org/ [hypertable.org]
    Zvents: http://www.zvents.com/ [zvents.com]
  • how useful is DHT? (Score:4, Insightful)

    by convolvatron ( 176505 ) on Friday February 08, 2008 @07:42PM (#22355818)
    i've been interested in this question for the last few years. how much do people value the ability to use a relational language and transactional consistency, or for most of these uses are these things just historical artifacts?
    • by moderatorrater ( 1095745 ) on Friday February 08, 2008 @07:51PM (#22355910)
      It's useful for ridiculously large data sets, like the entire internet. I know that medium sized stores (overstock, etc) use a relational database, and anything with less data than that is probably going to use a relational database. However, for extremely large data sets and certain repetitive, non-dependent loops (such as, say, looping through every website for a search), this can be useful. At least for now, relational databases are more useful overall, but tools like this have their place, and as data sets grow faster than real computational power, they'll be used more and more.
      • by vicaya ( 838430 )
        Hypertable is not a DHT. DHTs are mostly useful for large numbers of relatively small key-value pairs. Hypertable, like Bigtable, uses a metadata table to track the tablets (ranges, in Hypertable terms). Tables automatically split when they grow, and a master server assigns the split tablets/ranges to appropriate tablet/range servers. Hypertable can be made to support transactions, as it has built-in versioning of data. As a result, Hypertable/Bigtable is more versatile than a DHT.
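
        If it helps, here's a toy sketch of that metadata-table lookup; the names and layout are made up for illustration and are not the actual Hypertable API:

        // Sketch of how a Bigtable/Hypertable-style client finds the server for a row key:
        // the metadata table maps the end row of each range to the server holding it.
        #include <map>
        #include <string>
        #include <iostream>

        struct RangeLocation {
            std::string end_row;   // last row covered by this range
            std::string server;    // tablet/range server currently serving it
        };

        // METADATA: keyed by the end row of each range, kept sorted.
        std::map<std::string, RangeLocation> metadata = {
            {"g", {"g", "rangeserver-1"}},
            {"p", {"p", "rangeserver-2"}},
            {"~", {"~", "rangeserver-3"}},   // final range covers everything up to the max key
        };

        // Route a row key to the first range whose end row is >= the key.
        const RangeLocation& locate(const std::string& row) {
            return metadata.lower_bound(row)->second;
        }

        int main() {
            std::cout << "row 'cat'   -> " << locate("cat").server   << "\n";  // rangeserver-1
            std::cout << "row 'mouse' -> " << locate("mouse").server << "\n";  // rangeserver-2
            // When a range grows too large it splits; the master updates METADATA and
            // reassigns the new ranges, which is exactly the bookkeeping described above.
        }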
    • by ShieldW0lf ( 601553 ) on Friday February 08, 2008 @08:00PM (#22355962) Journal
      i've been interested in this question for the last few years. how much do people value the ability to use a relational language and transactional consistency, or for most of these uses are these things just historical artifacts?

      In the 7 years I've been working in the industry, I've never delivered a single project that I would trust to a non-ACID database. Ever. And I doubt I ever will. If you want something that will generate some marketing material at high speed, and if it fails, who cares, well, use MySQL. If you want to do something that can handle a million pithy comments and if some of them get lost in the shuffle, who cares, well, that's fine too. Use whatever serves fast. If you're running Google, and it doesn't matter if a node drops out because there is no "right" answer to get wrong in the first place as long as you spit out a bunch of links, well, these sorts of non-resilient systems are fine.

      Personally, I've never done projects like that. In my projects, if the data isn't perfect always and forever, it's worse than if it had never been written. Its very existence is a liability, because people will rely on it when they shouldn't, for things that can't get by with "close".

      So yes. Transactional consistency and a solid relational model are pretty much mandatory, and not going anywhere soon. The idea that they might be replaced by technology such as this is laughable.
      • Re: (Score:3, Informative)

        by nguy ( 1207026 )
        So yes. Transactional consistency and a solid relational model are pretty much mandatory, and not going anywhere soon. The idea that they might be replaced by technology such as this is laughable.

        Relational databases don't implement the relational model correctly anyway. As for transactional consistency, you can get that on top of many different kinds of stores (including file systems); relational databases have no monopoly on that.
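
        For example, a plain POSIX filesystem already gives you enough to build atomic, all-or-nothing updates. A minimal sketch of the usual write-then-rename trick, with error handling trimmed and the file name made up:

        // Atomically replace a file's contents: write a temp file, fsync it,
        // then rename() it over the original. rename() is atomic, so readers
        // see either the old version or the new one, never a mix.
        #include <cstdio>
        #include <string>
        #include <fcntl.h>
        #include <unistd.h>

        bool atomic_write(const std::string& path, const std::string& data) {
            std::string tmp = path + ".tmp";
            int fd = ::open(tmp.c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) return false;
            if (::write(fd, data.data(), data.size()) != (ssize_t)data.size()) { ::close(fd); return false; }
            ::fsync(fd);                                        // make the bytes durable first
            ::close(fd);
            return ::rename(tmp.c_str(), path.c_str()) == 0;    // atomic swap
        }

        int main() {
            atomic_write("balance.txt", "42\n");  // either fully applied or not at all
        }

        Layer a write-ahead log on top of the same primitives and you have the core of transactional consistency without a relational engine in sight.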
      • Re: (Score:1, Offtopic)

        by rubycodez ( 864176 )
        corporations constantly put bullshit data into those acid-compliant databases and then believe them forever as if they were true.

        already, we have the Dick-Shrub using such databases to terrorize the populace with expansion planned.
      • In my thirty-plus years in the industry, I have never seen a disk drive which could support transactional storage. The notion that you're going to write data in a manner which is more reliable than the underlying store is laughable. Even if you check the integrity of the underlying record, how do you know that your integrity check actually tested against the data you'll return next time? You don't; all you know is that the odds that you get back something else are negligibly small -- not zero, but low enough t
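
        Checking is still worth doing, of course. Here's a sketch of the usual checksum-on-read approach (the record layout is made up), which makes silent corruption unlikely rather than impossible:

        // Store a CRC alongside each record when writing; verify it on every read.
        // Catches bit rot with high probability, but distinct data can still
        // collide on the same CRC -- "negligibly small", not zero.
        #include <zlib.h>     // crc32()
        #include <cstdint>
        #include <string>
        #include <stdexcept>

        struct Record {
            std::string payload;
            std::uint32_t crc;    // stored next to the payload when written
        };

        Record write_record(const std::string& payload) {
            std::uint32_t crc = ::crc32(0L, reinterpret_cast<const Bytef*>(payload.data()),
                                        payload.size());
            return {payload, crc};
        }

        const std::string& read_record(const Record& r) {
            std::uint32_t crc = ::crc32(0L, reinterpret_cast<const Bytef*>(r.payload.data()),
                                        r.payload.size());
            if (crc != r.crc) throw std::runtime_error("record failed integrity check");
            return r.payload;
        }

        (Link against zlib for crc32.)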
    • for most of these uses are these things just historical artifacts?
      They are not. There are still some places where you can find them in use.
  • The article talks about adapting MySQL to be a front end. I wonder if someone is working on adapting PostgreSQL to be a front end too.
  • by inKubus ( 199753 ) on Friday February 08, 2008 @07:48PM (#22355882) Homepage Journal
    This is a classic column-orientated DBMS, ala Sybase. You use these for data warehousing since they are optimized for read queries and not transactions. Stuff like Google search queries. It also allows you to quickly build cubes of data across a timeline, since you have data in columns instead of rows.

    IE:

    a,b,c,d,e; 1,2,3,4,5; a,b,c,d,e;

    instead of:

    a, 1, a;
    b, 2, b;
    c, 3, c;
    d, 4, d;
    e, 5, e;

    A cube using the time dimension would look like:

    01:01:01; a,b,c,d,e; 1,2,3,4,5; a,b,c,d,e;
    01:01:02; a,b,c,d,e; 1,2,6,4,5; a,b,c,d,e;

    It's pretty difficult to do the same thing with a row-based DBMS. However, you can see that doing an insert is going to be costly. This looks like a pretty good try; I know there were some other projects trying to replicate what BigTable does. And after hearing that IBM story the other day about one computer running the entire internet, I started thinking about Google.
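
    A rough sketch of the difference with made-up data: aggregating one column touches a single contiguous array, while inserting a row has to touch every column:

    // Row store vs. column store for the same tiny table.
    // Data and layout are made up; real systems add compression, indexes, etc.
    #include <vector>
    #include <string>
    #include <numeric>
    #include <iostream>

    struct Row { std::string a; int b; std::string c; };

    int main() {
        // Row-oriented: one vector of whole rows.
        std::vector<Row> rows = {{"a", 1, "a"}, {"b", 2, "b"}, {"c", 3, "c"}};

        // Column-oriented: one contiguous vector per column.
        std::vector<std::string> col_a = {"a", "b", "c"};
        std::vector<int>         col_b = {1, 2, 3};
        std::vector<std::string> col_c = {"a", "b", "c"};

        // Aggregating column b only has to scan col_b -- great for warehouse queries.
        int sum = std::accumulate(col_b.begin(), col_b.end(), 0);
        std::cout << "sum(b) = " << sum << "\n";

        // Inserting a row in the columnar layout touches every column vector,
        // which is why writes are the expensive case mentioned above.
        col_a.push_back("d");
        col_b.push_back(4);
        col_c.push_back("d");
    }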

    More interesting is their distributed file system, which is what makes this really work well.
     
    • Re: (Score:3, Informative)

      You can do all those things in Oracle:

      http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/dimen.htm#i1006266 [oracle.com]

      Distributed filesystem - Oracle RAC (Real Application Clusters) fits the bill.
      • Data warehousing and clustering is also available in MS SQL Server. Mod SQL Server +1 Underrated
      • But it costs something like $25k per processor for RAC, though...
      • I was an Oracle DBA in the past.

        Oracle Dimensions are a logical overlay, they have no impact on how the data is physically organized in segments.

        Neither does Oracle RAC -- it uses the same underlying storage format as regular Oracle.

        You *could* do column-orientation in Oracle with a data cartridge, but that would likely be third party.

        I could see Oracle offering this natively in a future release, maybe 11g r2...
    • A cube using the time dimension looks more like this [timecube.com].
    • "orientated" is not considered proper usage.

      The correct word is "Asiantated".
    • by vicaya ( 838430 )
      Sorry, it's column-oriented but not really "classic". It supports the I and D parts of ACID very well (full ACID is possible but not implemented), thanks to its built-in data versioning. It's optimized both for reads (random, or sorted sequential scans) and for very high random write rates, as you don't have to lock anything. The scalability and fault-tolerance story isn't classic either. BTW, Google search queries do NOT use Bigtable. It's used for storing all the crawl data and input/output for their Map-reduce framework to build sea
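
      A toy sketch of the versioned-cell idea (not Hypertable's actual data structures): writers append new timestamped versions, readers take the latest version at or before their snapshot timestamp, and neither blocks the other:

      // Versioned cell, MVCC-style: timestamp -> value, no locks needed for reads.
      #include <map>
      #include <string>
      #include <cstdint>
      #include <optional>
      #include <iterator>
      #include <iostream>

      class VersionedCell {
      public:
          void put(std::uint64_t ts, std::string value) { versions_[ts] = std::move(value); }

          // Latest version at or before the read timestamp (the reader's snapshot).
          std::optional<std::string> get(std::uint64_t read_ts) const {
              auto it = versions_.upper_bound(read_ts);
              if (it == versions_.begin()) return std::nullopt;
              return std::prev(it)->second;
          }

      private:
          std::map<std::uint64_t, std::string> versions_;  // timestamp -> value
      };

      int main() {
          VersionedCell cell;
          cell.put(10, "old");
          cell.put(20, "new");
          std::cout << *cell.get(15) << "\n";  // "old": a snapshot at ts=15 ignores later writes
          std::cout << *cell.get(25) << "\n";  // "new"
      }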
    • by Duncan3 ( 10537 )
      Pointing out to the kids at Google that their "new" tech is 30 years old, and not even close to interesting to any researcher because it's 30 years old, isn't going to get you anywhere around here.
      • by vicaya ( 838430 )

        The Google systems group guys are definitely not kids. Neither are the Hypertable developers, who have actually built and deployed web-scale search engines themselves. How many people on Slashdot can say that? We're well aware of the literature in the area (RTFP to find out) and continuously learning from peers. Both Bigtable and Hypertable build upon previous solutions to solve real-world web-scale data problems. Many algorithms used in Bigtable/Hypertable appeared in the literature only in the late 90s+, clai

  • by Jurily ( 900488 )
    Can we do a distributed search engine with it? Google@home would be sooo cool.
    • You want to donate your network to google?
      • Yeah, what a wonderful idea, I mean whatcouldpossiblygowrong if Google could access the hard drive of everyone who signed up to it?

        "Please wait while the Index is updated"

        "Please wait while we Upload new entries"

        "Please wait for the FBI to knock on your door"
        • whatcouldpossiblygowrong if Google could access the hard drive of everyone who signed up to it?

          They do that already, it is called Google Desktop.
          • Yeah, but that's not part of a network; other people can't search the contents of your computer (at least that's not an advertised "feature"... lol)
    • Can we do a distributed search engine with it? Google@home would be sooo cool.

      I'm afraid that's been done before [exciteathome.com], and it didn't work out [news.com] so well, and may have always been a bad idea [forbes.com] in the first place.

  • Google 'Forms' (Score:3, Informative)

    by webword ( 82711 ) on Friday February 08, 2008 @10:02PM (#22356874) Homepage
    I think Google Forms [blogspot.com] is more interesting. (Based on Google Spreadsheets.)
  • There's another open source BigTable clone called HBase. It's written in Java, and it also runs on top of the Hadoop Distributed Filesystem, like Hypertable. It has the advantage of being a subproject of Hadoop. For anyone interested in using this kind of database, give HBase a shot. We can definitely use the additional testing. (Full disclosure - I am an HBase developer.)
    • by vicaya ( 838430 )

      Hypertable can run on a variety of DFSes that support a global namespace. Currently it can run on HDFS, KFS (a GFS clone from Kosmix), and any DFS with a POSIX-compliant mount point, including GlusterFS, Lustre, Parallel NFS, GPFS, etc. An S3 DfsBroker could be made easily as well.

      Besides DFS flexibility and not being Java, Hypertable supports access groups (locality groups in Bigtable), unlike HBase, where you have to resort to column-family hacks for read-performance tuning. Hypertable also has more block compressi

  • Wheel: reinvented (Score:3, Insightful)

    by stonecypher ( 118140 ) <stonecypher@gm[ ].com ['ail' in gap]> on Friday February 08, 2008 @10:06PM (#22356898) Homepage Journal
    Mnesia has been able to handle things far in excess of the numbers cited, and with far better control of placement, for more than a decade. So has KDB. Also Coral8. This wouldn't even be on the map if people didn't start drooling the second they heard "based on Google." When they find out it's unstable and in alpha?

    Yawn.
    • by vicaya ( 838430 )

      Sorry, you couldn't be more wrong. Mnesia, KDB, Coral8 and Hypertable/Bigtable are completely different beasts for different purposes. Mnesia is mostly a DHT for key-value pair lookups, while Hypertable/Bigtable support efficient primary-key-sorted range scans. For concurrent read/write/update, Mnesia requires explicit locking. Hypertable/Bigtable don't need explicit locking for that; consistency and isolation are achieved through data versioning. The most interesting feature here is time/history versioni

      • Re: (Score:3, Informative)

        by stonecypher ( 118140 )

        Mnesia is mostly a DHT for key-value pair lookups, while Hypertable/Bigtable support efficient primary-key-sorted range scans.

        Pretty much every database on earth has key-sorted ranges. Please be less of a noob. Go look up index_match_object.

        For concurrent read/write/update, Mnesia requires explicit locking

        No, it doesn't. It offers explicit locking, because it's been proven for decades that without it, you cannot have hard realtime queries, something that mnesia wanted to offer. You don't have to use tha

        • by vicaya ( 838430 )
          Thanks for playing :) Let's see:

          There's a reason Google's moving to Erlang so fast - they're discovering that a lot of the tools they've half-assed reinvented in Python already exist in Erlang in far more flexible fashions. This is nothing more than another map/reduce fiasco - a first generation solution to a problem that the internet adores because it's never seen any solution to the problem, but something which has been far better addressed in real industry for thirty or so years. If google would just

        • If Google would just buy Bluetail already, things would start changing for the better, fast.

          I had thought Bluetail was bought many years ago and absorbed into Nortel ...
  • Over at the ASF a bunch of smart people are building Hadoop and HBase. The latter is the open-source version of BigTable, similar to Hypertable, but written in Java (not C++) and being super actively developed in the open, under the ASF umbrella.

"If it ain't broke, don't fix it." - Bert Lantz

Working...