Data Storage Hardware

Data Storage Leaders Introduce New Wares 29

louismg writes "Data storage giant EMC announced upgrades to its storage hardware family this morning, claiming performance increases of 25% to 100%, along with increased capacity and disk speeds. This comes two weeks after competitor BlueArc announced Titan, the world's biggest-ever NAS box, which claims 5 Gbps of throughput and 256 terabytes in a single hardware file system. How much is enough, and as IT administrators, what is the answer to today's issues: improved hardware, or software?"
  • by ivan256 ( 17499 ) * on Monday February 09, 2004 @05:11PM (#8230019)
I predict that the storage industry will continue to produce boring incremental improvements on archaic paradigms until somebody comes out with something revolutionary. Yes, that was vague and truly deep. Since you probably didn't read the article, here's the spoiler: it's essentially the same thing the author of the story said. Given the history of the industry, you can bet you'll get old and go grey before something revolutionary comes from one of the established players.

    Something revolutionary is coming soon [revivio.com] though.

    • Adding the dimension of time to data storage, as in the link you provide, is hardly revolutionary (cf. CVS and other version control systems). On the other hand, there are some very interesting developments in distributed file and archival systems.

      Some of this work is happening in the academic community (OceanStore, et al.) and some in the commercial sector (Avamar, Connected, etc.).

      It seems to me that the storage industry is advancing on two main fronts.

      First, hardware is getting better and
      • Adding the dimension of time to data storage, as in the link you provide, is hardly revolutionary (cf. CVS and other version control systems).

        There have long been snapshotting solutions too; the key difference here is that you can go back to any point in time, and that is truly new. With other version control systems you can only go back to the points where you manually told them to checkpoint.

        As for revolutions in indexing and searching storage, I have yet to see something that's not a new take on an old concept. There a
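        To make the point-in-time distinction concrete, here's a toy sketch (hypothetical code, not Revivio's design): a journal of every write can reconstruct the state at any timestamp, while discrete snapshots or checkpoints can only restore the moments you explicitly captured.

        class WriteJournal:
            """Toy continuous-data-protection store: log every write with a timestamp."""
            def __init__(self):
                self.log = []  # (timestamp, key, value) tuples, appended in time order

            def write(self, ts, key, value):
                self.log.append((ts, key, value))

            def state_at(self, ts):
                """Replay the journal up to ts to rebuild the state at that instant."""
                state = {}
                for t, key, value in self.log:
                    if t > ts:
                        break
                    state[key] = value
                return state

        j = WriteJournal()
        j.write(1, "a", "v1")
        j.write(5, "a", "v2")
        print(j.state_at(3))  # {'a': 'v1'} -- any instant, no checkpoint required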
    • Have you seen RAIDn from Inostor/Tandberg Data? Multiple drive redundancy is an interesting development.

      More info [raidn.com]
      • Excuse me, but RAID redundancy through an (n,k) Hamming code (n data bits, k extra bits) is hardly interesting, let alone a development. Most other implementations work with (n,1), so they "innovated" by working with (n,k)? Big deal.

        Oh, and the 8 years of development you hear about when you follow the "RAIDn" link on their website? I pity their shareholders' nerves.
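        For reference, the baseline being dismissed is easy to sketch: in the common (n,1) case, a single XOR parity block rebuilds any one lost data block, and tolerating k simultaneous failures needs k independent check blocks (Reed-Solomon style). A toy illustration, not Inostor's implementation:

        from functools import reduce

        def parity(blocks):
            """XOR equal-length blocks together: the single (n,1) check block."""
            return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

        data = [b"AAAA", b"BBBB", b"CCCC"]  # n = 3 data blocks
        p = parity(data)

        # Lose any one block: XOR of the survivors plus the parity rebuilds it.
        rebuilt = parity([data[0], data[2], p])
        assert rebuilt == data[1]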
  • Seagate, too! (Score:4, Informative)

    by morcheeba ( 260908 ) * on Monday February 09, 2004 @05:31PM (#8230329) Journal
    Also today, Seagate launched a family of server-class 2.5" drives [infoworld.com] sporting 10k rpm and an Ultra320 SCSI or Fibre Channel interface. No details on Seagate's web site [seagate.com] yet, though.
  • by Anonymous Coward
    ...is still broken. My company is finishing up a particularly nasty lawsuit with EMC now over the crap that they "sold" us. I'd advise anyone in a position to make a purchase for their company to consider all the options before going with EMC. Their products are unfinished and unreliable. Ugh.
  • Improved backups.. (Score:4, Insightful)

    by Sri Ramkrishna ( 1856 ) <sriram.ramkrishna@gmail. c o m> on Monday February 09, 2004 @06:07PM (#8230928)
    What they need is improved backups. I don't give a fig about space if I can't back it up. So maybe someone should be looking at how we're supposed to back up or archive all this stuff. Or are we supposed to keep a warehouse of EMCs around? I'd lay a bet that we're going to need far more serious backup infrastructure than what we have today just to keep up.

    sri
    • Companies are still adding 40%/year to their storage, and filling it with what? Mail, Word docs, downloads off the Internet.

      Instead of better backup, we need intelligent agents that figure out which files are duplicates or unneeded old versions and delete them. That makes better use of the storage you have, and makes it easier to find what you need amidst the clutter.
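      The first pass of such an agent is easy to prototype: hash file contents and group identical files. A minimal sketch (SHA-256 over whole files; a real tool would compare sizes first and leave the "unneeded old versions" judgment to a human):

      import collections, hashlib, os

      def find_duplicates(root):
          """Group files under root by content hash; any group > 1 is duplicates."""
          groups = collections.defaultdict(list)
          for dirpath, _, filenames in os.walk(root):
              for name in filenames:
                  path = os.path.join(dirpath, name)
                  h = hashlib.sha256()
                  with open(path, "rb") as f:
                      for chunk in iter(lambda: f.read(1 << 20), b""):
                          h.update(chunk)
                  groups[h.hexdigest()].append(path)
          return [paths for paths in groups.values() if len(paths) > 1]

      for dupes in find_duplicates("/home"):
          print(dupes)  # candidate sets to review, not to auto-delete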

    • I used to work for EMC.... This wasn't my division, but if I recall, their preferred backup strategy is not to keep your EMC boxes in the same warehouse, but to have you buy two machines, keep them in geographically separate locations, and have them mirror each other over a wide area network. They have some pretty tight functionality built in to handle the mirroring in real time... it's features like that which make EMC boxes more than just a bunch of disks.

      It's also a clever way of getting you to spend twice
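      The real-time mirroring being described boils down to synchronous replication: a write isn't acknowledged until both sites have it. A toy sketch of the idea (hypothetical code, not EMC's actual product):

      class MirroredVolume:
          """Toy synchronous mirror across two sites."""
          def __init__(self, local, remote):
              self.local, self.remote = local, remote  # dicts standing in for arrays

          def write(self, blockno, data):
              self.local[blockno] = data
              self.remote[blockno] = data  # in reality, shipped over the WAN first
              return "ack"                 # application proceeds only after both land

      site_a, site_b = {}, {}
      vol = MirroredVolume(site_a, site_b)
      vol.write(0, b"ledger page 1")
      assert site_a == site_b  # lose either building, the data survives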
      • Having the redundant systems is great for protecting against failure or destruction of the devices, but it doesn't really address file corruption or deletion by users. A "snapshot" system may offer some help, but when data retention is an issue, you'll still need to look at long-term backup solutions. As earlier posts have stated, backing up these huge amounts of storage is becoming very difficult.

        My understanding of snapshots may be a bit out of date (latest employer doesn't have storage with this feature) but snapsh
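        For what it's worth, the copy-on-write scheme behind most snapshots is simple to sketch (toy code, no particular vendor's design), and it makes the limitation above obvious: a snapshot pins the current block map, so it protects against deletion and corruption, but it still lives on the same hardware as the live data.

        class COWVolume:
            """Toy copy-on-write volume: snapshots share blocks until a write diverges."""
            def __init__(self):
                self.blocks = {}     # block number -> data
                self.snapshots = []  # each snapshot is a frozen copy of the block map

            def write(self, blockno, data):
                self.blocks[blockno] = data  # live map diverges; snapshots keep old refs

            def snapshot(self):
                self.snapshots.append(dict(self.blocks))  # pin the current map

        v = COWVolume()
        v.write(0, b"payroll v1")
        v.snapshot()
        v.write(0, b"oops, corrupted")
        print(v.snapshots[0][0])  # b'payroll v1' -- recoverable, but on the same box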
  • The price (Score:3, Informative)

    by dtfinch ( 661405 ) * on Monday February 09, 2004 @06:36PM (#8231338) Journal
    BlueArc appears to charge about $100/GB for its storage solutions, and claims that its price is lower than its competitors'. At first this looks like an insanely high price, because my last hard disk cost $0.88/GB. But after some thought about the other hardware involved, I figure I could build an almost equally capable solution for $8-$20/GB, not counting software development costs. Once you add the cost of the room to hold it all, plus the insane electrical and air conditioning costs, $100/GB starts to look fairly reasonable for those who really need what they offer, and need it soon.
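    As a sanity check on those figures, scaled to the 256 TB box from the story (back-of-envelope only, using the prices above):

    capacity_gb = 256 * 1000                  # 256 TB, decimal
    print(100 * capacity_gb)                  # ~$25.6M at the ~$100/GB list price
    print(8 * capacity_gb, 20 * capacity_gb)  # ~$2.0M - $5.1M do-it-yourself
    print(0.88 * capacity_gb)                 # ~$225K in bare desktop drives alone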
  • by Kanasta ( 70274 )
    Why can't I copy a 100MB file from C:\bob to C:\fred at more than about 5MB/s?

    All these claims of speed are theoretical; in practice I get speeds that wouldn't even max out USB 2.0.
    • Re:BS (Score:3, Informative)

      by PurpleFloyd ( 149812 )
      Your problem is caused by Windows (or DOS, if you're even more of a masochist). It tends to move the file in small chunks, so the process goes something like this: read a little bit (maybe a few KB) from disk, copy it to memory, seek the head to the new location, write that tiny amount there, then seek back to the previous location and start over with the next tiny chunk. As a result, your hard drive's heads are in transit more often than they're reading data, and speeds really suffer. Remember th
    • by jbert ( 5149 )
      Because you are copying from a disk to itself.

      All the "max bandwidth" figures you see are for streaming reads, where the disk heads move (relatively) smoothly along logically continguous chunks of disk.

      Compare that to copying from one part of the disk to another. Your 100Mbyte file will be copied in chunks. The sequence of events will go something like this at a low level:

      while( data left to copy )
      {
          move disk heads to offset in file to be read
          read a chunk
          move disk heads to offset in file to be written
          write the chunk
      }
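      The chunk size is the knob here: bigger chunks mean fewer seek round-trips per megabyte moved. A minimal Python sketch of that loop, as a rough illustration rather than what any particular OS does (the chunk_size parameter and paths are made up):

      def copy_file(src, dst, chunk_size=1 << 20):  # 1 MB chunks, not a few KB
          """Copy src to dst in large chunks to amortize head seeks."""
          with open(src, "rb") as fin, open(dst, "wb") as fout:
              while True:
                  chunk = fin.read(chunk_size)  # head seeks to the read offset
                  if not chunk:
                      break
                  fout.write(chunk)             # head seeks to the write offset

      copy_file(r"C:\bob\bigfile", r"C:\fred\bigfile")

      With a few-KB chunk the heads spend most of their time in transit; with megabyte chunks the two seeks per iteration are amortized over far more data.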
      • OK, I guess, but this means there's little point making it faster, because it'll still be bottlenecked by the seeks, and seek times have stayed about the same over the past n years.
      • There is also the extra housekeeping that goes on: clearing bits from the free map, updating the file size in the destination directory entry, etc.
        Things like that also contribute to the performance penalty.
  • While the industry..... and consumers.... spend billions a year on R&D for larger storage devices/solutions and more secure ways to store data without losses, has anyone considered making the data SMALLER? Unlimited hours go into encryption algorithms every year, but most of the people I've seen out there are still using WinZip and other useful but not too impressive compression utils. MP3 made audio better at a smaller (data) cost, MPEG did it for video, etc..... what about the rest of the crap on you
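    Transparent, lossless compression is cheap to experiment with; here's a quick sketch using Python's standard zlib (the file name is made up). Unlike MP3 or MPEG, it has to give back exactly the bytes it was given:

    import zlib

    with open("report.doc", "rb") as f:    # hypothetical document
        raw = f.read()
    packed = zlib.compress(raw, 9)         # maximum compression level
    print(len(raw), len(packed))           # text-heavy formats often shrink severalfold
    assert zlib.decompress(packed) == raw  # lossless round trip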
