The 1-Petabyte Barrier Is Crumbling 217
CurtMonash writes "I had been a database industry analyst for a decade before I found 1-gigabyte databases to write about. Now it is 15 years later, and the 1-petabyte barrier is crumbling. Specifically, we are about to see data warehouses — running on commercial database management systems — that contain over 1 petabyte of actual user data. For example, Greenplum is slated to have two of them within 60 days. Given how close it was a year ago, Teradata may have crossed the 1-petabyte mark by now too. And by the way, Yahoo already has a petabyte+ database running on a home-grown system. Meanwhile, the 100-terabyte mark is almost old hat. Besides the vendors already mentioned above, others with 100+ terabyte databases deployed include Netezza, DATAllegro, Dataupia, and even SAS."
Porn collection (Score:4, Funny)
No porn collection jokes please.
Re: (Score:3, Insightful)
No porn collection jokes please.
+1 Futile
Re: (Score:2)
Won't somebody think of the children.... (Score:2, Funny)
Oh wait, that was petabyte...
Fixed it for you... (Score:5, Funny)
Noob (Score:5, Funny)
Re:Noob (Score:5, Funny)
It has an event horizon and is actively acquiring porn on its own?
I've seen porn collections like that... (Score:2)
...on virus-infested Windows PCs.
Re: (Score:2)
My porn collection has long since achieved infinity.
It has an event horizon and is actively acquiring porn on its own?
<voice series="Futurama" character="Hermes Conrad">
That would be a singularity. Since the universe is infinite, you can have an infinitely large porn collection by using an infinitely large volume rather than creating a singularity.
</voice>
Re: (Score:2)
So... the entire universe may just be an infinite porn collection encoded in matter and energy? Damn it! Where's the key?
Re: (Score:3, Funny)
...event horizon...
Awesome! That's what I'm going to call it now! My "event horizon"!
"Here it comes baby, the point of no return!"
Re: (Score:2)
Petabyte DBs are old news to... (Score:3, Funny)
Petabyte DBs are old news to techie porn collectors. They always mix their two favorite subjects into one. Tech + Porn = Petabyte+ Porn Database
Comment removed (Score:5, Interesting)
Re: (Score:2)
You hit the nail on the head. The technology allows for a richer experience for the user -- hence the ability to collect more useful information to make the customer experience better/faster/stronger/etc.
Re:Petabyte DBs are old news to... (Score:4, Interesting)
"they used to contain basically the address and perhaps logs from calls they made to the call center. Now whole phone conversations are logged as well as faxes and letters that are scanned, together with images and video that is available."
Reminds me of David Brin's The Transparent Society
http://www.davidbrin.com/tschp1.html [davidbrin.com]
http://www.amazon.com/Transparent-Society-Technology-Between-Privacy/dp/0738201448/ [amazon.com]
Re: (Score:2)
If they had the time to listen to you while on hold... why would they put you on hold?
Re: (Score:3, Insightful)
When my unemployment was running out years ago, I took a job at a call center to pay the bills. When I had to ask a co-worker a question, I often would hit Mute instead of Hold after asking them to hold. It was pretty entertaining!
Re: (Score:2)
Re: (Score:2)
If you simply play copyrighted material while on hold and they record it, can they be sued by the Record Industry?
Oh s***! I'm calling my Congressman! (Score:5, Funny)
I have to find my kid. Last time I saw her, she was with her Uncle Micky while he was having his morning martini.
Re: (Score:2)
Re: (Score:2)
Hotlinking FAIL!
Google Street View must be most massive db ever? (Score:3, Interesting)
They have many towns with fewer than 50k people now completely photographed, every street in high res. That has to be well over 1 petabyte, though I doubt it's all in one location; it must be distributed.
Re:Google Street View must be most massive db ever (Score:5, Informative)
I am confused !! (Score:5, Funny)
How many Libraries of Congress are necessary to break the 1-petabyte barrier ??
Re: (Score:2, Informative)
You seem to be trying to calculate in Tebibytes (TiB) and Pebibytes (PiB), which are based on the binary system, rather than Terabytes (TB) and Petabytes (PB), which are base 10.
Although some operating systems incorrectly pair the decimal-based prefixes with binary-based values (i.e. reporting 1 TB = 1024 GB), that is technically wrong. Hard drive manufacturers actually report correctly using the decimal-based values (i.e. 1 TB = 1000 GB).
Also, you still got your maths wrong. 10 TiB = ~0.01 PiB.
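For anyone checking the arithmetic, here is a quick sketch of the two conventions in plain Python (the variable names are just illustrative):

TB, PB = 10**12, 10**15      # terabyte, petabyte (base 10, SI)
TiB, PiB = 2**40, 2**50      # tebibyte, pebibyte (base 2, IEC)

print(10 * TiB / PiB)        # 0.009765625 -> 10 TiB is ~0.01 PiB
print(10 * TB / PB)          # 0.01        -> 10 TB is exactly 0.01 PB
print(TiB / TB)              # ~1.0995     -> the binary unit is ~10% bigger at this scale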
Re: (Score:2)
Yeah, well, like it or not, hard drive manufacturers and data transmission rates use the base 10 SI units.
Re: (Score:2)
Hard drives are formatted with block sizes that are a power of two (e.g., 512 bytes). Thus it is more useful to see how many of them you would have on a filesystem than some power of ten figure that also conveniently inflates the capacity.
The issue being discussed isn't whether they should use base 10 or base 2 values, it's about which SI Prefix names that should be used for reporting the values.
It is an indisputable fact that hard drive manufacturers do currently use base 10 values and the base 10 prefixes. If you think they should use base 2 values, then fine, you may have a valid point. But you would have to take it up with their marketing departments. However, if they did, they would also have to switch to the base 2 prefixes to avoid any confusion.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Further, do they mean the size of the *text* of the books when ASCII-encoded, or do they mean images of every page in the books, and all the media encoded appropriately and "losslessly?"
Even further: RGB filters only? What about reflective inks/bindings, embossed covers? lenticular "hologram" covers?
Re:I am confused !! (Score:4, Informative)
1 Petabyte = 1,000 Terabytes
1 LoC = 10 Terabytes
100 LoC = 1,000 Terabytes
======
100 LoC = 1 Petabyte
Re: (Score:2)
If you think you're confused now, wrap your head around this:
1 64-bit address space = ~18.4 quintillion bytes = ~18,400 petabytes = ~1.8 million Libraries of Congress -- directly addressable by one machine (I wouldn't want the electric bill from that machine).
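Checking that with Python's arbitrary-precision integers (assuming the 10 TB-per-LoC figure used elsewhere in this thread):

PB = 10**15
LOC = 10 * 10**12        # 10 TB per Library of Congress (assumed)

print(2**64)             # 18446744073709551616 (~18.4 quintillion bytes)
print(2**64 / PB)        # ~18447 petabytes, i.e. about 18.4 exabytes
print(2**64 / LOC)       # ~1.84 million Libraries of Congress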
LHC data production (Score:4, Informative)
So when active, the Large Hadron Collider will generate the equivalent of 50 Libraries of Congress of data every second.
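Back-of-envelope, using the 10 TB Library of Congress from upthread: 50 LoC/second works out to about half a petabyte per second, which is in the ballpark of the raw, pre-trigger detector rates usually quoted.

LOC = 10 * 10**12        # bytes, assuming 10 TB per Library of Congress
rate = 50 * LOC          # the claimed rate per second
print(rate / 10**15)     # 0.5 -> half a petabyte of raw data per second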
Re: (Score:2)
The AC obviously codes in Lisp; all those closing parens really add up!
No big news here.... (Score:5, Interesting)
Take a look at almost any large financial firm. The email retention system alone is much larger than a petabyte, and that's just the online media, not including what's spooled to tape. Due to deficiencies in RDBMS systems, each of the large firms usually develops its own system for managing the archival process on top of the database.
Oh, come on. (Score:5, Interesting)
Re:Oh, come on. (Score:5, Insightful)
So the fact that movies have gone from 780 MB (DVD rips) to 4.7 GB (straight-up copies) to 25 GB (Blu-ray) doesn't bear any significance to you?
Or how about games, which have gone from 1 MB to installations that are upwards of 10 GB now (Warhammer, IIRC, is 9-something).
Not to mention MS's fiasco of an Office XML format, where things take up a ridiculous amount of space in comparison to OpenOffice (10 MB .docx vs. 2.9 MB OpenOffice)... it's someone's level of tech knowledge that determines their space usage.
I wouldn't mind 3-4 TB; I'd split it off into about 4 partitions or a RAID stripe and call it a day for a while.
However, consumer use is indicative of business use, so I would expect things to head toward exabytes eventually.
Re:Oh, come on. (Score:5, Insightful)
This is kind of my point. Do companies keep libraries of pr0n, video, music? Sure, if you're a media company you will. But say you're a plumbing distributor. You'll have the usual accounting stuff, and media for marketing, and some BS overhead, but don't tell me it adds up to a TB much less a PB.
On the other hand, if you have the extra space, it invites the usual waste in the form of archive directories for closed-out years, development junk, etc. Spinning round and round, doing nothing.
Re: (Score:2)
"This is kind of my point. Do companies keep libraries of pr0n, video, music? Sure, if you're a media company you will. But say you're a plumbing distributor. You'll have the usual accounting stuff, and media for marketing, and some BS overhead, but don't tell me it adds up to a TB much less a PB."
That's true for small companies, but places like Digg, and any site that gets a lot of comments, would very quickly fill up that TB.
Re: (Score:2)
More long-tail economics! (Score:5, Interesting)
On the other hand, if you have the extra space, it invites the usual waste in the form of archive directories for closed-out years, development junk, etc. Spinning round and round, doing nothing.
Yep. That's exactly it. $200 today buys a 1 TB drive. $200 a few years ago bought a 1 GB drive. As the price has fallen, the value of the HDD has risen relative to its cost. Those archive directories and development junk aren't being deleted because they have value. Sure, it's not enough value to justify keeping them around when a 1 GB drive costs $200, but they are worth keeping around when a 1 TB drive costs that much.
They aren't "doing nothing" - they just weren't doing enough to be worth keeping until the price dropped enough.
All of this makes the 1 TB drive considerably more valuable than the 1 GB drive, despite their original purchase price parity. This is long-tail economics at work [wired.com]. As the individual bits become worth less and less, the total value of the bits continues to rise, resulting in a completely new set of capabilities.
My DVR is an excellent example of this - it's a thorough change in the way that I watch television. Suddenly, it's a family event that we can all share, because when I want to comment, I can just hit pause and share my thought. Nothing's lost; if needed, we can just rewind a bit. Suddenly, instead of being annoyed at my daughter for wanting to comment on a point during a televised debate, I'm excited and interested! No more SHUSHing at my family; it's now a much more shared experience.
The price of nonlinear access media has dropped so incredibly that marginal-value bits (like video) are suddenly cheap enough to make it all possible.
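The arithmetic behind that, as a sketch: the same $200 buys a thousand times the capacity, so the price per gigabyte fell three orders of magnitude.

old_per_gb = 200 / 1         # ~$200/GB when $200 bought a 1 GB drive
new_per_gb = 200 / 1000      # ~$0.20/GB when $200 buys a 1 TB (1000 GB) drive
print(old_per_gb / new_per_gb)   # 1000.0 -> marginal-value bits become worth keeping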
Re:Oh, come on. (Score:4, Insightful)
Agreed.
And I'd also be worried about losing a PB all at once. There are TB drives at my local Best Buy, but that's a lot to lose at once. I'd rather split my files and programs between two or more smaller drives (and have a RAID).
Re: (Score:2)
This might be going slightly offtopic, but yeah, I've noticed that with the increases in data size, an increase in backup awareness and redundancy has been percolating down even to home users.
For example, recently I set up a mirrored drive system for my stepdad for his home photos (which are somewhere in the 200GB range as he is semi-professional) just in case one drive goes out. Also I've been looking at a cheap DVD Autoload backup option. Any ideas there from the Slashdot crowd?
Re: (Score:2)
Also I've been looking at a cheap DVD Autoload backup option. Any ideas there from the Slashdot crowd?
Back up 200 GB+ of data to DVDs? Are you mad? That's 25-50 discs just for the initial backup, and you probably want twice that to handle discs going bad.
Get two or three external disks (eSATA ideally; you can run SMART self-tests, get better transfer rates, etc.). Use a decent incremental backup tool to make versioned snapshots to them, rotating the drives periodically; keep one in storage, and ideally one off-site. Faster, less hassle, more robust and more flexible than a pile-o'-DVDs.
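A minimal sketch of the kind of versioned-snapshot rotation described above, using rsync's --link-dest so unchanged files are hard-linked instead of re-copied (the paths are hypothetical; rsync must be installed):

import datetime, os, subprocess

SRC = "/home/photos/"                     # hypothetical source directory
DEST = "/mnt/backup_disk/snapshots"       # one of the rotated external disks

os.makedirs(DEST, exist_ok=True)
stamp = datetime.date.today().isoformat() # e.g. "2008-09-01"
latest = os.path.join(DEST, "latest")     # symlink to the previous snapshot
target = os.path.join(DEST, stamp)

cmd = ["rsync", "-a", "--delete", SRC, target]
if os.path.exists(latest):
    # Files unchanged since the last snapshot get hard-linked, so each
    # dated directory looks complete but only new data costs space.
    cmd.insert(1, "--link-dest=" + os.path.realpath(latest))
subprocess.run(cmd, check=True)

# Point "latest" at the snapshot we just made.
if os.path.islink(latest):
    os.remove(latest)
os.symlink(target, latest)

Run it after plugging in whichever disk is in rotation; old dated snapshots can be pruned as the drive fills.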
I won't call you old fashioned... (Score:4, Insightful)
... but I do wonder if you've ever heard of Sarbanes-Oxley.
Science! (Score:5, Informative)
The LHC will generate several PB of data per year, as will the Large Synoptic Survey Telescope [lsst.org]. These projects aren't all that uncommon.
Re: (Score:2)
The LHC will generate several PB of data per year, as will the Large Synoptic Survey Telescope [lsst.org]. These projects aren't all that uncommon.
Shit, I'm working on those 2 projects. I'd better ask management for a bigger hard drive...
Re: (Score:2)
The LHC will generate several PB of data per year
I know 1080p60 takes a lot of space, but I'm not sure I want to see that many hardons colliding...
Re:Oh, come on. (Score:4, Funny)
Re: (Score:2)
You can have only so much useful information about anything.
If you have the space available and the tools to utilize the stored data, why not? The more data you keep, the more information you will have available when new techniques or routines for exploiting it come along.
Re:Oh, come on. (Score:4, Insightful)
Call me old fashioned, but I don't see why anyone but a search engine like google would need anything like a petabyte. You can have only so much useful information about anything. Sounds to me like, fill your garage with sh1t, build a bigger garage.
Unfortunately, you gather up a lot of digital stuff fast, and most of the time it's not useful. Take for example my business mail: it's full of old presentations and random versions of various documents and whatnot. Is it worth cleaning up? No. Is it worth keeping? Well, from time to time clients start asking about old things, and it's very useful to have it. I figure 90% of it could be deleted, keeping only final versions and important mails. Of what remains, 90% will never be asked for again, so I keep 100% for the maybe 1% that matters. Make a company with hundreds of thousands of people all like that and you get huge, huge amounts of data. It's still cheaper than going through those huge, huge amounts of data. That goes double for many automated data collection processes - it's cheaper to keep everything until it's all guaranteed useless than to try to sort it out.
Re: (Score:2)
a. How on earth would you know? Do you work in a data-intensive industry?
b. Do you understand what a data warehouse even is?
c. Data mining is statistically based. The more information that's available to mine, the more accurate the results will be. And by "information", I don't mean some kid's hard drive filled with terrible mp3s and downloaded movies.
Re:Oh, come on. (Score:5, Interesting)
Data mining is statistically based. The more information that's available to mine, the more accurate the results will be.
A minor quibble. I do data mining for a living. With most data sets, we end up sampling them down, because more data ramps up processing time faster than it improves accuracy. With most problems, more data doesn't improve accuracy measurably once you've reached a certain critical mass in the dataset. Simplistically, you don't need to flip the coin a billion times to figure out that it comes up heads 50% of the time.
It's a rare problem that we use more than 100,000 records for. They exist, but they're rare.
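The coin-flip version of that, as a sketch (standard library only): the error of the estimate shrinks roughly as 1/sqrt(n), so each extra order of magnitude of data buys less and less accuracy.

import random

random.seed(42)
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    est = heads / n
    print(f"n={n:>9,}: P(heads) estimate = {est:.4f}, error = {abs(est - 0.5):.4f}")
# 10,000x the flips buys only ~100x the accuracy -- hence sampling down.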
Re: (Score:2)
Too Bad Most of that is Due to Poor... (Score:2, Insightful)
... DB design and old data that should be purged. Color me unimpressed.
Re: (Score:2, Interesting)
... DB design and old data that should be purged. Color me unimpressed.
I'm convinced now that no matter how much we try to discriminate about what we keep, HUMANS are pack-rats. THAT I can deal with, as people can be trained to actually throw shit away. The problem is when lawyers get involved in the matter. Yes, most of the shit we have today in the corporate world we are FORCED to keep due to some insane lawsuit and follow-up "fix-it-forever" law that calls for us to keep a copy of every damn thing that flows electronically for the next 7 - 70 years.
Could you almost call it corruption? Yes, I would.
Effect of the scale (Score:2, Insightful)
Imagine having tens of millions, or even just millions, of users - all of them with their records, history, and targeted-ad data. Or some mail provider that stores attachments in a database. Or a file-sharing service like those you and I know. That's plenty of information to manage. Add overhead, and it's easy to overfill even the biggest database.
Also I agree with you that bad design might be a concern. Of course there's no big database that couldn't get on a "purge" diet.
Now seems to me we might have a problem w
OO databases did this ten years ago (Score:5, Interesting)
That was ten years ago.
Re: (Score:2)
Storing it is one thing. Querying is a very different thing. What happens when somebody wants to find out something not specifically envisioned in the original experiment?
Re: (Score:3, Interesting)
In fact, OO and similar (CODASYL, network-style, etc.) databases were used and continue to be used very heavily in applications where relational databases do not scale.
Re: (Score:3, Interesting)
Only problem is, where do you find an OO database with a good index and search implementation that doesn't cost so much that, when you ask the company for a price, they don't even want to reply?
Re: (Score:3, Interesting)
Google Maps is way bigger... (Score:3, Informative)
Google Maps' database is far bigger...
A base of 8 tiles, each becoming four smaller tiles, in two modes (map/satellite), and 16 zoom levels.
Each tile is approx. 30 kB.
(0.03 MB * (8 * 4^16)) / 1024 / 1024 == 983.04 TB right there.
My calculator doesn't handle numbers big enough for streetview. O_O
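Python's integers handle it fine. Re-running the parent's arithmetic under its own assumptions (8 base tiles, 4x per zoom level, ~0.03 MB per tile), plus the factor of two for map/satellite that the formula left out:

tiles_deepest = 8 * 4**16                     # tiles at the deepest zoom level
mb_per_tile = 0.03

tb = tiles_deepest * mb_per_tile / 1024 / 1024
print(f"{tb:,.2f} TB")                        # 983.04 TB, matching the parent

# Summing zoom levels 0..16 only adds a third (the deepest level dominates),
# but map + satellite doubles everything:
all_levels = sum(8 * 4**z for z in range(17))
tb_all = 2 * all_levels * mb_per_tile / 1024 / 1024
print(f"{tb_all:,.2f} TB")                    # ~2,621 TB for both modes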
Re:Google Maps is way bigger... (Score:5, Funny)
Google Maps' database is far bigger...
A base of 8 tiles, each becoming four smaller tiles, in two modes (map/satellite), and 16 zoom levels.
We are sorry, but we don't
have maps at this zoom
level for this region.
Try zooming out for a
broader look.
Re: (Score:2)
That's the worst haiku I've ever seen.
Re: (Score:2)
Re: (Score:2)
Then be surprised.
The Landsat data alone comes close to 1 TB.
And that is just the whole world in the broad 30 m or so array.
(I know, because waaay back, I mirrored part of the NASA WorldWind data.)
This data is in no way fractal in nature.
And just do the math (just to see that your argument is bogus):
A km^2 at level 20 has 4^4 = 256 times as much data as one at level 16.
If you do the math, Central Europe alone is enough to push the world to an average of level 16 (Germany, e.g., is completely covered in airplane imagery
When the petafile barrier crumbles ... (Score:5, Funny)
... we'll need an army of Chris Hansens and a mountain of beartraps. God help us.
the only *real* barrier is backup time (Score:5, Interesting)
Any organisation that wishes to be classed as in any way professional knows that the value in its databases has to be protected. That requires them to have the means to recover the data if something bad happens. A hot-mirrored copy is simply not good enough (one corruption would get written to both copies).
As a consequence, the size of commercial databases is limited by the amount of time the organisation is willing to have them unavailable while they are restored after a disaster, or by the time taken to create/update secure, offline copies.
Not by intrinsic properties of the database or host architecture.
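A rough sense of the numbers -- the throughput figures here are illustrative assumptions, not anyone's benchmark:

PB = 10**15
for label, rate in [("single LTO-4 tape drive, ~120 MB/s", 120e6),
                    ("10 Gbit/s link, ~1.25 GB/s", 1.25e9),
                    ("parallel restore, ~10 GB/s", 10e9)]:
    print(f"{label}: {PB / rate / 3600:,.0f} hours to restore 1 PB")
# ~2,315 h, ~222 h, ~28 h -- restore time, not raw capacity, sets the limit.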
Re: (Score:2)
When various Important People are standing behind you making "supportive" noises, while other people are coming by every 5 minutes to ask "Is it fixed yet?", you'll start to realize that restore time is very important, and that disk I/O is pathetic, and tape is overrated.
s/barrier/arbitrary round number/g (Score:5, Insightful)
That is all.
Re: (Score:2)
The world will only ever need 5 large databases (Score:5, Funny)
The world will only need 5 large databases.
None of them will ever need more than 640KB^H^HMB^H^HGB^H^HTB of RAM and 32MB^H^HGB^H^HTB^H^HPB of storage.
WalMart has a 4 petabyte database already (Score:4, Informative)
Re: (Score:2, Funny)
They only needed one petabyte, but the Chinese cut them a deal on 4.
IBM Boulder (Score:2, Insightful)
Is the location of IBM's Managed Storage Services (MSS) division, which deploys SANs for customers in Boulder (including IBM internal) and other locations (over high-speed fibre links) on IBM "Shark" (ESS) and DS6000/DS8000 devices. When I worked at IBM, their marketing materials stated they were managing over 4 petabytes of data for enterprise customers out of that location alone - and that was four years ago! That doesn't count other MSS locations either, nor all the other areas where IBM implements large-scale storage
Re: (Score:2)
So you want to talk about high levels of storage - IBM has the game covered, considering they invented the HDD.
Actually, this is about databases rather than disks per se. But that's OK since they invented the relational database, too [wikipedia.org].
I wonder (Score:2)
How much of that data is marketing information?
Seriously, is all of that data current and necessary?
Seems to me that they should prune off and back up old data.
Re: (Score:2)
When you're doing automated data projections - using previous years of data to try to predict the future from trends (so to speak) - having 10+ years of data isn't a luxury. And in our field, 10 years of data is often -all- of your data... so, well...
Johnny Mnemonic (Score:5, Funny)
I need measurements I can understand, like how many Keanu Reeves' brains is a petabyte? And could he hold it indefinitely, or would his head explode at some point? If the latter, can we get him started on it now?
Re: (Score:2)
I believe 1 'Keanu' = 64 Kilobytes, but I would have to check the literature...
Re: (Score:2)
Johnny's brain could hold 80GB, or 160GB if he used a "doubler". So a PB is 12.5 times the capacity of Johnny's brain, undoubled.
I should know. ;)
Re: (Score:2)
I should know. ;)
That's a bummer then, since you're off by a factor of 1000. ;)
How is this news? (Score:5, Interesting)
So it may be news for the PC world but it's bordering on ancient history on IBM mainframes.
Re: (Score:2, Informative)
Pet peeve: misuse of "barrier" (Score:2)
Example: the sound barrier. The aerodynamics of a moving airplane are completely different when traveling faster than the speed of sound, than when traveling slower, so it was a real barrier that required engineering effort to overcome.
Another barrier had to do with fabricating electronic components
Re: (Score:2)
You have a point.
But the nice round numbers lead to marketing false alarms, so I think it's noteworthy when hype gives way to reality.
This also happens to be an area that lends itself to round numbers right now, since 10 terabytes is about the level where Oracle has totally run out of gas, and 100 terabytes used to be the hard limit on Netezza configurations.
CAM
Greenplum is based on Postgresql (Score:2)
From the Greenplum article mentioned in the summary:
Re: (Score:2, Flamebait)
Database, not filesystem. Thanks for almost bothering to read the summary, though.
Re: (Score:3, Insightful)
Re: (Score:2)
I could see practical applications (Score:3, Informative)
Okay, I know that the article is referring to databases, but the comments seem to have gone off in the direction of disk storage, so I will take the bait and go off-topic.
Petabyte drives would not really be that impractical for people who like to archive stuff. I just filled up a 300 GB drive and a 750 GB drive with just stuff off the DVR in under a year. While National Geographic HD may be compressed so badly that it barely looks better than SD, and a one-hour show is under 2 GB, try archiving
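For scale, a sketch of that arithmetic (the broadcast bit rates are rough assumptions):

def gb_per_hour(mbps):
    # megabits per second -> decimal gigabytes per hour of recording
    return mbps / 8 * 3600 / 1000

print(gb_per_hour(4))          # ~1.8 GB/h  -- a heavily compressed HD channel
print(gb_per_hour(15))         # ~6.8 GB/h  -- typical full-rate broadcast HD
print(750 / gb_per_hour(15))   # ~111 hours of full-rate HD fills a 750 GB drive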
Re: (Score:2)
Granted, these are extremes, but who would have thought 15 years ago, when we first started hitting the 1 GB barrier, that in 2008 we would have discs used for storing movies with a capacity of 50 GB, and that we would even consider saving stuff at a resolution of 1920x1080 with PCM sound at a bitrate of 4.6 Mbps?
Actually, very many would have. The infamous Moore's "law" was well underway and everything was growing nicely and exponentially. Though what the future needs is the bandwidth revolution; it's not "We can sto
Re: (Score:2)
"Is it likely to explode once it reaches 1 petabyte?"
No, but your head will.
Re: (Score:2)
The same barrier exists at 2 TB, or 2^32 disk sectors.
After that, MS-DOS-style partition tables aren't good enough any more.
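The limit falls straight out of the on-disk format: the MBR partition table stores sector counts in 32-bit fields, and sectors are traditionally 512 bytes.

sectors = 2**32              # largest count a 32-bit LBA field can hold
sector_size = 512            # bytes, the traditional sector size
print(sectors * sector_size)             # 2199023255552 bytes
print(sectors * sector_size / 2**40)     # 2.0 -> exactly 2 TiB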
Re: (Score:2)
I'm assuming, because back in 1992 I remember reading that MCI had a 1 TB (!) database. It was big enough news to make it into PC Week.
Re: (Score:2)
how are you measuring that? Total database size? Raw input data size?
It's true user data. I make a point of that.
Following through to the links re Teradata gives a sense of what kind of back and forth that can engender.
CAM
Re: (Score:2)
The post surprisingly does not mention Aster Data Systems [asterdata.com], which is the data warehouse behind MySpace. When web sites start to store and analyze every single user click, you quickly get into massive amounts of data. It's no surprise that the petabyte barrier is being reached, especially with the density of storage increasing at constant cost.
I met with Aster Data last Thursday, and will be writing about them soon. Aster's MySpace installation is a big database. But it's not petabyte-scale yet.
CAM