The 1-Petabyte Barrier Is Crumbling

CurtMonash writes "I had been a database industry analyst for a decade before I found 1-gigabyte databases to write about. Now it is 15 years later, and the 1-petabyte barrier is crumbling. Specifically, we are about to see data warehouses — running on commercial database management systems — that contain over 1 petabyte of actual user data. For example, Greenplum is slated to have two of them within 60 days. Given how close it was a year ago, Teradata may have crossed the 1-petabyte mark by now too. And by the way, Yahoo already has a petabyte+ database running on a home-grown system. Meanwhile, the 100-terabyte mark is almost old hat. Besides the vendors already mentioned above, others with 100+ terabyte databases deployed include Netezza, DATAllegro, Dataupia, and even SAS."
  • by Anonymous Coward on Monday August 25, 2008 @08:39AM (#24735439)

    No porn collection jokes please.

  • by Anonymous Coward

    Oh wait, that was petabyte...

  • by hyperz69 ( 1226464 ) on Monday August 25, 2008 @08:41AM (#24735477)
    I had been a Porn Collector for a decade before I found 1-gigabyte Porn Collections to write about. Now it is 15 years later, and the 1-petabyte barrier is crumbling.
  • by C_Kode ( 102755 ) on Monday August 25, 2008 @08:42AM (#24735489) Journal

    Petabyte DBs are old news to techie porn collectors. They always mix their two favorite subjects into one. Tech + Porn = Petabyte+ Porn Database

  • by BitterOldGUy ( 1330491 ) on Monday August 25, 2008 @08:43AM (#24735497)
    We must protect the children from the petabytes! These petabytes are everywhere trying to have sex with our children!

    I have to find my kid. Last time I saw her, she was with her Uncle Micky while he was having his morning martini.

  • by Anonymous Coward on Monday August 25, 2008 @08:44AM (#24735515)

    There are many towns now with fewer than 50k people that are completely photographed, every street in high res. That has to be well over a petabyte, though I doubt it's all in one location; it must be distributed.

  • by neonux ( 1000992 ) on Monday August 25, 2008 @08:45AM (#24735523) Homepage

    How many Libraries of Congress are necessary to break the 1-petabyte barrier ??

  • No big news here.... (Score:5, Interesting)

    by edwardd ( 127355 ) on Monday August 25, 2008 @08:49AM (#24735577) Journal

    Take a look at almost any large financial firm. The email retention system alone is much larger than a petabyte, and that's just the online media, not counting what's spooled to tape. Due to deficiencies in RDBMS systems, each of the large firms usually develops its own system for managing the archival layer on top of the database.
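
A back-of-envelope check suggests this claim is plausible. A minimal sketch in Python; every figure below is an illustrative assumption, not data from any actual firm:

```python
# Rough sizing of a large financial firm's email archive.
# All numbers are illustrative assumptions, not vendor or firm data.
employees = 200_000        # headcount of a large bank
emails_per_day = 120       # sent + received, per employee
avg_email_bytes = 150_000  # ~150 kB average, attachments amortized in
retention_years = 7        # a typical regulatory retention window

total = employees * emails_per_day * avg_email_bytes * 365 * retention_years
print(f"{total / 1e15:.1f} PB")  # ~9.2 PB before compression or dedup
```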

  • Oh, come on. (Score:5, Interesting)

    by seven of five ( 578993 ) on Monday August 25, 2008 @08:50AM (#24735583)
    Call me old fashioned, but I don't see why anyone but a search engine like google would need anything like a petabyte. You can have only so much useful information about anything. Sounds to me like, fill your garage with sh1t, build a bigger garage.
    • Re:Oh, come on. (Score:5, Insightful)

      by poetmatt ( 793785 ) on Monday August 25, 2008 @08:58AM (#24735649) Journal

      So the fact that movies have gone from 780MB (DVD rips) to 4.8GB (straight-up copies) to 25GB (Blu-ray) has no significance to you?

      Or how about games, which have gone from 1MB to installations upwards of 10GB now (Warhammer is 9-something, IIRC)?

      Not to mention MS's fiasco of an Office XML format, where files take up a ridiculous amount of space compared to OpenOffice (a 10MB .docx vs. 2.9MB in OpenOffice)... it's the level of someone's tech knowledge that determines their space usage.

      I wouldn't mind 3-4 TB, I'd split it off into about 4 partitions or raid stripe and call it a day for a while.

      However consumer use is indicative of business use, so I would expect things to head towards exabyte eventually.

      • Re:Oh, come on. (Score:5, Insightful)

        by seven of five ( 578993 ) on Monday August 25, 2008 @09:25AM (#24735869)
        However consumer use is indicative of business use, so I would expect things to head towards exabyte eventually.

        This is kind of my point. Do companies keep libraries of pr0n, video, music? Sure, if you're a media company you will. But say you're a plumbing distributor. You'll have the usual accounting stuff, and media for marketing, and some BS overhead, but don't tell me it adds up to a TB much less a PB.

        On the other hand, if you have the extra space, it invites the usual waste in the form of archive directories for closed-out years, development junk, etc. Spinning round and round, doing nothing.
        • "This is kind of my point. Do companies keep libraries of pr0n, video, music? Sure, if you're a media company you will. But say you're a plumbing distributor. You'll have the usual accounting stuff, and media for marketing, and some BS overhead, but don't tell me it adds up to a TB much less a PB."

          That's true for small companies but places like Digg and any site that gets a lot of comments would very quickly fill up that TB.

        • by nasor ( 690345 )
          That's exactly what I was thinking. Okay, a hi-def movie is 25 GB - but does some company really have 40k hi-def movies to store?
        • by mcrbids ( 148650 ) on Monday August 25, 2008 @04:23PM (#24741815) Journal

          On the other hand, if you have the extra space, it invites the usual waste in the form of archive directories for closed-out years, development junk, etc. Spinning round and round, doing nothing.

          Yep. That's exactly it. $200 today buys a 1 TB drive. $200 a few years ago bought a 1 GB drive. As the price has fallen, the value of the HDD has risen relative to its cost. Those archive directories and development junk aren't being deleted because they have value. Sure, it's not enough value to justify keeping them around when a 1 GB drive costs $200, but they are worth keeping around when a 1 TB drive costs that much.

          They aren't "doing nothing" - they just aren't doing enough that it's worth keeping it until the price drops enough.

          All of this is making the 1 TB drive considerably more valuable than the 1 GB drive, despite their original purchase price parity. This is long-tail economics at work [wired.com]. As the individual bits become worth less and less, the value of the bits in total continues to rise, resulting in a completely new set of capabilities.

          My DVR is an excellent example of this - it's a thorough change in the way I watch television. Suddenly it's a family event we can all share: when I want to comment, I can just hit pause and share my thought. Nothing's lost; if needed, we can rewind a bit. Suddenly, instead of being annoyed at my daughter for wanting to comment during a televised debate, I'm excited and interested! No more SHUSHing at my family; it's now a much more shared experience.

          The price of nonlinear access media has dropped so incredibly that marginal-value bits (like video) are suddenly cheap enough to make it all possible.
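
For what it's worth, here is the arithmetic behind the parent's two price points, treating both as round numbers:

```python
# $200 once bought ~1 GB; in 2008 it buys ~1 TB (the parent's figures).
dollars, old_gb, new_gb = 200, 1, 1000
print(f"${dollars/old_gb:,.2f}/GB then, ${dollars/new_gb:.2f}/GB now: "
      f"{(dollars/old_gb)/(dollars/new_gb):,.0f}x cheaper per byte")
# $200.00/GB then, $0.20/GB now: 1,000x cheaper per byte
```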

    • Re:Oh, come on. (Score:4, Insightful)

      by AP31R0N ( 723649 ) on Monday August 25, 2008 @08:59AM (#24735665)

      Agreed.

      And i'd also be worried about losing a PB all at once. There are TB drives at my local Best Buy, but that's a lot to lose at once. i'd rather split my files and programs between two or more smaller drives (and have a RAID).

      • This might be going slightly offtopic, but yeah, I've noticed that with the increase in data sizes, backup awareness and redundancy have been percolating down even to home users.

        For example, recently I set up a mirrored drive system for my stepdad for his home photos (which are somewhere in the 200GB range as he is semi-professional) just in case one drive goes out. Also I've been looking at a cheap DVD Autoload backup option. Any ideas there from the Slashdot crowd?

        • by Fweeky ( 41046 )

          Also I've been looking at a cheap DVD Autoload backup option. Any ideas there from the Slashdot crowd?

          Back up 200GB+ of data to DVDs? Are you mad? That's 25-50 discs just for the initial backup, and you probably want twice that to handle discs going bad.

          Get two or three external disks (eSATA ideally; you can run SMART self-tests, get better transfer rates, etc.). Use a decent incremental backup tool to make versioned snapshots to them, rotating the drives periodically; keep one in storage, and ideally one off-site. Faster, less hassle, more robust and more flexible than a pile o' DVDs.
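
For anyone wanting to script that rotate-and-snapshot scheme, here is a minimal sketch using rsync's --link-dest option (unchanged files become hard links, so each snapshot costs only the space of what changed). The paths are hypothetical, and a mature tool like rsnapshot does the same job with less ceremony:

```python
#!/usr/bin/env python3
"""Minimal versioned-snapshot backup to an external disk, via rsync."""
import datetime
import os
import subprocess

SOURCE = "/home/photos/"          # trailing slash: copy directory contents
BACKUP_ROOT = "/mnt/backup_disk"  # the currently attached external disk

def snapshot():
    stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M%S")
    dest = os.path.join(BACKUP_ROOT, stamp)
    latest = os.path.join(BACKUP_ROOT, "latest")
    cmd = ["rsync", "-a", "--delete"]
    if os.path.exists(latest):
        cmd.append("--link-dest=" + latest)  # hard-link unchanged files
    cmd += [SOURCE, dest]
    subprocess.run(cmd, check=True)
    # Atomically repoint "latest" at the snapshot we just made.
    tmp = latest + ".tmp"
    os.symlink(dest, tmp)
    os.replace(tmp, latest)

if __name__ == "__main__":
    snapshot()
```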

    • by VampireByte ( 447578 ) on Monday August 25, 2008 @09:04AM (#24735699) Homepage

      ... but I do wonder if you've ever heard of Sarbanes-Oxley.

    • Science! (Score:5, Informative)

      by edremy ( 36408 ) on Monday August 25, 2008 @09:15AM (#24735791) Journal
      Petabytes are actually pretty common in the sciences. I visited NCAR (National Center for Atmospheric Research [ucar.edu]) in Boulder five years ago, and their main database was in the 2PB region even then. I'm sure it's a lot larger today.

      The LHC will generate several PB of data per year, as will the Large Synoptic Survey Telescope [lsst.org]. These projects aren't all that uncommon.

      • by dargaud ( 518470 )

        The LHC will generate several PB of data per year, as will the Large Synoptic Survey Telescope [lsst.org]. These projects aren't all that uncommon.

        Shit, I'm working on both of those projects. I'd better ask management for a bigger hard drive...

      • The LHC will generate several PB of data per year

        I know 1080p60 takes a lot of space, but I'm not sure I want to see that much hardon's colliding...
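
To put the figure above in perspective, "several PB per year" is a surprisingly modest sustained rate; the 3 PB/year below is an assumed round number, not an official LHC specification:

```python
# "Several PB per year" expressed as a sustained data rate.
# 3 PB/year is an assumed round number for illustration only.
SECONDS_PER_YEAR = 365 * 24 * 3600

pb_per_year = 3
rate_mb_s = pb_per_year * 1e15 / SECONDS_PER_YEAR / 1e6
print(f"{rate_mb_s:.0f} MB/s sustained")  # ~95 MB/s: roughly one 2008-era
# disk's sequential throughput, running nonstop all year
```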

    • by secondhand_Buddah ( 906643 ) <secondhand@buddah.gmail@com> on Monday August 25, 2008 @09:21AM (#24735839) Homepage Journal
      Bill, is that you???
    • by garcia ( 6573 )

      You can have only so much useful information about anything.

      If you have the space available and the tools to utilize the stored data, why not? The more data you keep, the more information you will have available when techniques or routines become available to you to utilize this data.

    • Re:Oh, come on. (Score:4, Insightful)

      by Kjella ( 173770 ) on Monday August 25, 2008 @09:59AM (#24736275) Homepage

      Call me old fashioned, but I don't see why anyone but a search engine like google would need anything like a petabyte. You can have only so much useful information about anything. Sounds to me like, fill your garage with sh1t, build a bigger garage.

      Unfortunately, you gather up a lot of digital stuff fast, and most of the time it's not useful. Take my business mail: it's full of old presentations and random versions of various documents and whatnot. Is it worth cleaning up? No. Is it worth keeping? Well, from time to time clients ask about old things, and it's very useful to have them. I figure 90% of it could be deleted, keeping only final versions and important mails; of what's left, 90% will never be asked for again. So I keep 100% for the maybe 1% that matters. Multiply that by a company with hundreds of thousands of people and you get huge, huge amounts of data. Keeping it is still cheaper than going through it. That goes double for many automated data collection processes: it's cheaper to keep everything until it's all guaranteed useless than to try to sort it out.

    • by abigor ( 540274 )

      a. How on earth would you know? Do you work in a data-intensive industry?

      b. Do you understand what a data warehouse even is?

      c. Data mining is statistically based. The more information that's available to mine, the more accurate the results will be. And by "information", I don't mean some kid's hard drive filled with terrible mp3s and downloaded movies.

      • Re:Oh, come on. (Score:5, Interesting)

        by Alpha830RulZ ( 939527 ) on Monday August 25, 2008 @12:02PM (#24737993)

        Data mining is statistically based. The more information that's available to mine, the more accurate the results will be.

        A minor quibble. I do data mining for a living. With most data sets, we end up sampling them down, because more data ramps up processing time faster than it improves accuracy. For most problems, more data doesn't improve accuracy measurably once the dataset reaches a certain critical mass. Simplistically, you don't need to flip the coin a billion times to figure out that it comes up heads 50% of the time.

        It's a rare problem that we use more than 100,000 records for. They exist, but they're rare.
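
The coin-flip intuition above is just the 1/sqrt(n) law for a sampled proportion; a quick sketch of how fast accuracy plateaus:

```python
# Standard error of an estimated proportion shrinks as 1/sqrt(n),
# so accuracy plateaus long before the data runs out.
import math

for n in (100, 10_000, 1_000_000, 1_000_000_000):
    margin = 1.96 * math.sqrt(0.25 / n)  # 95% margin for a fair coin
    print(f"n={n:>13,}  +/-{margin:.4%}")
# Going from 10^6 to 10^9 samples buys only ~30x more precision
# at 1000x the processing cost.
```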

    • I'm guessing most of these databases are keeping CYA information, most of which will never be used.
  • ... DB design and old data that should be purged. Color me unimpressed.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      ... DB design and old data that should be purged. Color me unimpressed.

      I'm convinced now that, no matter how we try to be selective, HUMANS are pack-rats. THAT I can deal with, since people can be trained to actually throw shit away. The problem is when lawyers get involved. Yes, most of the shit we have today in the corporate world we are FORCED to keep due to some insane lawsuit and the follow-up "fix-it-forever" law that requires us to keep a copy of every damn thing that flows electronically for the next 7 to 70 years.

      Could you almost call it corruption? Yes, I

    • by cefek ( 148764 )

      Imagine having tens of millions, or just millions, of users, all with their records, history, and targeted-ad data. Or a mail provider that stores attachments in a database. Or a file-sharing service like those you and I know. That's plenty of information to manage. Add overhead, and it's easy to overfill even the biggest database.

      Also I agree with you that bad design might be a concern. Of course there's no big database that couldn't get on a "purge" diet.

      Now seems to me we might have a problem w

  • by cjonslashdot ( 904508 ) on Monday August 25, 2008 @08:57AM (#24735641)
    I remember encountering a 1+ petabyte database 10 years ago: it was the database to record and analyze particle accelerator experiment data at CERN. And it was built using a commercial object database - not relational. Oh but wait - the relational vendors have told us that OO databases don't scale....

    That was ten years ago.
    • by dfetter ( 2035 )

      Storing it is one thing. Querying is a very different thing. What happens when somebody wants to find out something not specifically envisioned in the original experiment?

    • Re: (Score:3, Interesting)

      by littlewink ( 996298 )
      You are mistaken. While certainly almost everything (right or wrong) has been said at some time by someone, nobody respectable who knew what they were doing ever claimed that object-oriented databases would not scale.

      In fact, OO and similar databases (CODASYL, network-style, etc.) were used, and continue to be used, very heavily in applications where relational databases do not scale.

  • by Plantain ( 1207762 ) on Monday August 25, 2008 @08:58AM (#24735651)

    Google Maps' database is far bigger...

    A base of 8 tiles, each splitting into four smaller tiles at the next level, in two modes (map/satellite), with 16 zoom levels.

    Each tile is approx. 30kB.

    (((0.03* (8 * (4^16)))/1024)/1024) == 983.04TB right there.

    My calculator doesn't handle numbers big enough for streetview. O_O

    • by Speare ( 84249 ) on Monday August 25, 2008 @09:02AM (#24735689) Homepage Journal

      Google Maps' database is far bigger...

      A base of 8 tiles, with each becoming four more smaller tiles, in two modes (map/satellite), and 16 zoom levels.

      We are sorry, but we don't
      have maps at this zoom
      level for this region.
      Try zooming out for a
      broader look.

      • That's the worst haiku I've ever seen.
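
The tile arithmetic above, redone with Python's arbitrary-precision integers (no calculator overflow); the 8-tile base, quadtree split, and ~30 kB/tile are the poster's assumptions, not published Google figures:

```python
# The poster's formula: deepest zoom level only, one mode.
tb_deepest = 0.03 * (8 * 4**16) / 1024 / 1024
print(f"{tb_deepest:.2f} TB")  # 983.04 TB, matching the parent comment

# Every zoom level (0-16) and both modes, same assumptions:
tiles_all = 2 * sum(8 * 4**z for z in range(17))
print(f"{0.03 * tiles_all / 1024**2:.0f} TB")  # ~2621 TB, i.e. ~2.6 PB
```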

  • by cpu_fusion ( 705735 ) on Monday August 25, 2008 @09:05AM (#24735709)

    ... we'll need an army of Chris Hansens and a mountain of beartraps. God help us.

  • by petes_PoV ( 912422 ) on Monday August 25, 2008 @09:09AM (#24735751)
    or more correctly, restore time.

    Any organisation that wishes to be considered in any way professional knows that the value in its databases has to be protected. That requires the means to recover the data if something bad happens. A hot-mirrored copy is simply not good enough (one corruption would get written to both copies).

    As a consequence, the size of a commercial database is limited by the amount of time the organisation is willing to have it unavailable while it is restored after a disaster, or by the time needed to create and update secure, offline copies.

    Not by intrinsic properties of the database or host architecture.

    • by TheLink ( 130905 )
      Exactly.

      When various Important People are standing behind you making "supportive" noises, while other people are coming by every 5 minutes to ask "Is it fixed yet?", you'll start to realize that restore time is very important, and that disk I/O is pathetic, and tape is overrated.
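
The restore-time constraint is easy to put numbers on; the throughput figures here are assumed round values for 2008-era gear, not measurements:

```python
# How long does restoring a big database take at a given throughput?
def restore_hours(data_tb, mb_per_s):
    return data_tb * 1e6 / mb_per_s / 3600

for size_tb in (1, 100, 1000):    # 1 TB, 100 TB, 1 PB
    for rate in (100, 1000):      # single drive/tape vs. a striped array
        print(f"{size_tb:>5} TB at {rate:>4} MB/s: "
              f"{restore_hours(size_tb, rate):>7.1f} h")
# At 100 MB/s, a petabyte takes ~116 days; even at 1 GB/s it's ~11.6 days.
```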
  • by ivan256 ( 17499 ) on Monday August 25, 2008 @09:17AM (#24735813)

    That is all.

  • by davidwr ( 791652 ) on Monday August 25, 2008 @09:23AM (#24735857) Homepage Journal

    The world will only need 5 large databases.

    None of them will ever need more than 640KB^H^HMB^H^HGB^H^HTB of RAM and 32MB^H^HGB^H^HTB^H^HPB of storage.

  • by captaindomon ( 870655 ) on Monday August 25, 2008 @09:44AM (#24736091)
    WalMart's data warehouse is already 4 petabytes: http://storefrontbacktalk.com/story/080307walmart.php [storefrontbacktalk.com]
    • Re: (Score:2, Funny)

      by Anonymous Coward

      They only needed one petabyte, but the Chinese cut them a deal on 4.

  • IBM Boulder (Score:2, Insightful)

    by Abattoir ( 16282 )

    Is the location of IBM's Managed Storage Services (MSS) division, which deploys SAN for customers in Boulder (including IBM internal) and other locations (over high-speed fibre links) on IBM "Shark" (ESS) and DS6000/DS8000 devices. When I worked at IBM, their marketing materials stated they were managing over 4 petabytes of data for enterprise customers out of that location alone - and that was four years ago! That doesn't count other MSS locations either, nor all the other areas where IBM implements large a

  • How much of that data is marketing information?

    Seriously, is all of that data current and necessary?

    Seems to me that they should prune off and back up old data.

    • by Shados ( 741919 )

      When you're doing automated data projections, using previous years of data to predict the future from trends (so to speak), having 10+ years of data isn't a luxury. And in our field, 10 years of data is often -all- of your data... so, well...

  • by vjmurphy ( 190266 ) on Monday August 25, 2008 @10:05AM (#24736347) Homepage

    I need measurements I can understand, like how many Keanu Reeves' brains is a petabyte? And could he hold it indefinitely, or would his head explode at some point? If the latter, can we get him started on it now?

  • How is this news? (Score:5, Interesting)

    by Dark$ide ( 732508 ) on Monday August 25, 2008 @10:27AM (#24736617) Journal
    We've had petabyte databases on mainframes for a good couple of years. DB2 v9 on zSeries has two new tablespace types that make managing these humungous databases much easier.

    So it may be news for the PC world but it's bordering on ancient history on IBM mainframes.
  • Round numbers are not "barriers", they are just round numbers. The term "barrier" should only be used when there is something special about the number that creates real engineering challenges to overcome.

    Example: the sound barrier. The aerodynamics of a moving airplane are completely different when traveling faster than the speed of sound, than when traveling slower, so it was a real barrier that required engineering effort to overcome.

    Another barrier had to do with fabricating electronic component

    • You have a point.

      But the nice round numbers lead to marketing false alarms, so I think it's noteworthy when hype gives way to reality.

      This also happens to be an area that lends itself to round numbers right now, since 10 terabytes is about the level where Oracle has totally run out of gas, and 100 terabytes used to be the hard limit on Netezza configurations.

      CAM

  • From the Greenplum article mentioned in the summary:

    Most or all of the PostgreSQL data access methods are left intact. The big changes to PostgreSQL lie in the areas of query optimization, planning, and execution. I.e., Greenplum has its own way of breaking up a query into pieces (and of course of seeing that data gets shipped among nodes), but the low-level operators for storage and access are from PostgreSQL.
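
To make the quoted description concrete, here is a toy sketch of the scatter-gather idea: each node computes a partial aggregate over its own rows, and a coordinator merges the partials. This illustrates the general MPP pattern only; it is not Greenplum's actual planner or API:

```python
# Toy scatter-gather aggregation across shards. Conceptual only; a real
# MPP planner (Greenplum's included) also handles data motion between
# nodes for joins, redistribution, etc.
shards = [
    [("widgets", 3), ("gadgets", 5)],     # node 1's rows: (product, qty)
    [("widgets", 7), ("doohickeys", 2)],  # node 2's rows
]

def node_partial(rows):
    """SELECT product, SUM(qty) ... GROUP BY product, run on one node."""
    out = {}
    for product, qty in rows:
        out[product] = out.get(product, 0) + qty
    return out

def coordinator(shards):
    """Merge per-node partial aggregates into the final result."""
    total = {}
    for partial in map(node_partial, shards):
        for product, qty in partial.items():
            total[product] = total.get(product, 0) + qty
    return total

print(coordinator(shards))  # {'widgets': 10, 'gadgets': 5, 'doohickeys': 2}
```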
