
Lustre File System Getting New Community Distro

darthcamaro writes "Oracle acquired a lot of open source tech from Sun that has since been forked — or is in the process of being forked. The open source Lustre high-performance computing file system isn't on the list of forked projects, but it is getting a new, community-driven distro whose backers are trying very hard to say it is not officially a fork. 'Since April of 2010 there has been confusion in the community, and we've seen an impact in the business confidence in Lustre,' Brent Gorda, CEO and president of Whamcloud, told InternetNews.com. 'The community has been asking for leadership, the commitment of a for-profit entity that they can rely on for support and a path forward for the technology.'"
  • by Icyfire0573 ( 719207 ) on Friday January 14, 2011 @05:54PM (#34884600)

    From their website:
    http://wiki.lustre.org/index.php/Main_Page [lustre.org]

    High Performance and Scalability

    For the world's largest and most complex computing environments, the Lustre file system redefines high performance, scaling to tens of thousands of nodes and petabytes of storage with groundbreaking I/O and metadata throughput.

    • by Elbereth ( 58257 )

      Any benchmarks?

      • Obviously, we have internal benchmarks that tend to show that Lustre is good, but I can't talk about specifics on those. What I can do, though, is link to this: http://www.cs.rpi.edu/~chrisc/COURSES/PARALLEL/SPRING-2009/papers/MADbench2-2009.pdf [rpi.edu]

        The stuff that I found most interesting is on page 12. The machines named Jaguar and Franklin are Crays running Lustre. Bassi and Jacquard are both running GPFS. On page 15 they claim that they can make up for the deficiency in Lustre's default settings for shared-file I/O.

        • IOR was used across, um.. I think ~4000 client nodes or something around that. I don't recall exactly. As for 10 PB in size, that's not uncommon in this arena, and in fact there are a lot of sites out there with this much or more storage that don't make much noise publicly.
        • Obviously, we have internal benchmarks that tend to show that Lustre is good, but I can't talk about specifics on those.

          So, uh, why mention them?

          The stuff that I found most interesting is on page 12. The machines named Jaguar and Franklin are Crays running Lustre.

          So you need a Cray to get good performance?

    • But does it run (on) Linux?

    • by CAIMLAS ( 41445 ) on Friday January 14, 2011 @06:06PM (#34884770)

      At a functional level, Lustre (GPL) is to ZFS (CDDL) as CXFS (commercial) is to XFS (GPL) for SGI. They are the upper 'cluster' layer to take advantage of the underlying filesystems' capability. I believe this approach is divergent from that of GFS, due to the upper/lower approach, but I'm not that familiar with clustered filesystems.

      However: Arguably, Lustre on ZFS is a much better option due to ZFS's inherent superiority over XFS. I've liked XFS historically, but ZFS is so drastically superior to anything else out there (in terms of storage management and available capacity and throughput) - all 'out of the box' - that it's a no-brainer to use zvols for things other than direct ZFS POSIX access. (For instance, they make great VM iSCSI targets, or local raw disks for VMs, or..)

      Side note: the Linux zfsonlinux.org port is being successfully used as the base volume manager for Lustre right now, so it is apparently quite capable/stable at that level. (zfsonlinux does not yet have ZFS POSIX support.) Lustre on ZFS, apparently, scales much better than the traditional LVM/RAID/etc. backend methods.
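
      A toy sketch of the layering idea above: the cluster layer only decides where file data lands across object storage servers, and each server's local filesystem (ldiskfs, ZFS) handles durability. All names and the fixed round-robin layout are hypothetical, not Lustre code:

      STRIPE_SIZE = 1 << 20   # 1 MiB stripe units
      STRIPE_COUNT = 4        # number of object storage targets in the layout

      def locate(offset: int) -> tuple[int, int]:
          """Map a logical file offset to (target index, offset within that target's object)."""
          stripe_number = offset // STRIPE_SIZE
          target = stripe_number % STRIPE_COUNT
          # Each target holds every STRIPE_COUNT-th stripe, packed back to back.
          object_offset = (stripe_number // STRIPE_COUNT) * STRIPE_SIZE + offset % STRIPE_SIZE
          return target, object_offset

      for off in (0, 1 << 20, 5 << 20, (5 << 20) + 123):
          target, obj_off = locate(off)
          print(f"logical byte {off:>9} -> target {target}, object offset {obj_off}")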

      • At a functional level, Lustre (GPL) is to ZFS (CDDL) as CXFS (commercial) is to XFS (GPL) for SGI.

        And who says the IT world has too many confusing acronyms?

        • by CAIMLAS ( 41445 )

          I don't suppose "They made (asked) me to do it!" is a legitimate excuse?

          Conversely, if we had long names for everything, we'd soon get confused and have insufficient time to actually work.

      • Lustre on ZFS is a much better option due to ZFS's inherent superiority over XFS.

        I can tell you with a high degree of confidence that ZFS is a poor option for Lustre compared to its traditional backends, Ext3 and Ext4. One simple reason: ZFS has about half the transaction throughput.
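
        Claims like that are easy to probe with a small synchronous-transaction microbenchmark; a minimal sketch (the mount points are placeholders for whatever two filesystems you want to compare):

        import os
        import time

        def fsync_ops_per_sec(directory: str, ops: int = 1000) -> float:
            """Time small create + write + fsync transactions in `directory`."""
            path = os.path.join(directory, "txn_probe.tmp")
            start = time.perf_counter()
            for _ in range(ops):
                fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
                os.write(fd, b"x" * 4096)
                os.fsync(fd)  # force this transaction to stable storage
                os.close(fd)
            os.unlink(path)
            return ops / (time.perf_counter() - start)

        # Hypothetical mount points; substitute real ones.
        for mnt in ("/mnt/ext4", "/mnt/zfs"):
            print(mnt, f"{fsync_ops_per_sec(mnt):.0f} synchronous ops/sec")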

        • by CAIMLAS ( 41445 )

          It does? How do you figure that? You will be somewhat limited compared to other filesystems for 'raw speed', but that's why ZFS has built-in read and write cache functionality (via SSD or ramdisk). Unless we're talking about massive amounts of sustained reads and writes, with no time for the disks to catch up, I suspect that (oh) 32 GB of SSD or so would do the trick for most hosts, or 128 GB for a 'high demand' host participating in Lustre.

          So yeah, if you consider cache, ZFS is going to blow the snot out of anything.
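
          For reference, the cache devices mentioned above are attached with ordinary pool commands; a sketch via Python's subprocess, with pool and device names as placeholders:

          import subprocess

          POOL = "tank"                             # hypothetical pool name
          L2ARC_DEV = "/dev/disk/by-id/ssd-cache0"  # hypothetical read-cache SSD
          SLOG_DEV = "/dev/disk/by-id/ssd-log0"     # hypothetical intent-log SSD

          # L2ARC: a second-level read cache that spills the in-RAM ARC onto SSD.
          subprocess.run(["zpool", "add", POOL, "cache", L2ARC_DEV], check=True)
          # SLOG: a separate ZFS intent log device that absorbs synchronous writes.
          subprocess.run(["zpool", "add", POOL, "log", SLOG_DEV], check=True)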

          • Actually, I'm 100% certain Lustre is NOT using ZFS today. It is actually using ldiskfs for the backing filesystem, which is a modified version of ext4. While work was ongoing to port Lustre over to ZFS, this was not completed.

      • Lustre on ZFS, apparently, scales much better than the traditional LVM/RAID/etc. backend methods.

        By the way, where did you get that idea?

        • Same place he got the idea that ZFS is fast and the default for Lustre, and the same place he thinks Lustre and LVM/RAID are the same type of thing :D

          • by CAIMLAS ( 41445 )

            Yes, because that was obviously what I intended to convey. Thank you for pointing out that your level of reading comprehension is likely very similar to a politician's.

        • by CAIMLAS ( 41445 )

          I'll tell you where I got that idea: experience.

          Managing filesystems in lvm2, on raid cards - all with their own specific commands - is a real pain in the ass when you've got tens of hosts or more per admin, with many different roles and functionality.

          So then you've got to have SNMP set up for each of those hosts (often with different controller cards) to monitor those RAID cards' status (with the shitty RAID console tool which lacks anything resembling documentation). Then you've got to manage LVM, with its…

      • ZFS is slow on Linux, and Lustre runs on ext3/ext4 by default. In fact, Lustre was a big contributor to bringing ext4 into existence through its optimizations to ext3.

        http://en.wikipedia.org/wiki/Ext4 [wikipedia.org]

        Comparing Lustre to LVM/RAID is comparing apples to oranges. One is a network cluster file system; the other is local storage management.

        • by CAIMLAS ( 41445 )

          I didn't compare Lustre to LVM/RAID - I compared ZFS to it.

          What sits on either would be Lustre, obviously. ZFS is significantly superior to RAID + LVM in pretty much every way, barring super-expensive hardware RAID controllers, where the RAID itself holds a slight edge. (Though it should be noted that those RAID controllers would likely provide significant benefit to a ZFS system, too.)

          What I have to wonder is: what kind of storage methods or devices does a 'network cluster file system' use? Here's a guess…
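
          The management-overhead complaint upthread is concrete enough to sketch: one toolchain versus three. All device, pool, and volume names here are hypothetical:

          import subprocess

          DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # hypothetical

          def zfs_way():
              # One tool: redundancy, volume management and filesystem in two commands.
              subprocess.run(["zpool", "create", "tank", "raidz2", *DISKS], check=True)
              subprocess.run(["zfs", "create", "tank/data"], check=True)

          def raid_lvm_way():
              # Three tools: mdadm for RAID, LVM for volumes, mkfs for the filesystem.
              subprocess.run(["mdadm", "--create", "/dev/md0", "--level=6",
                              f"--raid-devices={len(DISKS)}", *DISKS], check=True)
              subprocess.run(["pvcreate", "/dev/md0"], check=True)
              subprocess.run(["vgcreate", "vg0", "/dev/md0"], check=True)
              subprocess.run(["lvcreate", "-n", "data", "-l", "100%FREE", "vg0"], check=True)
              subprocess.run(["mkfs.ext4", "/dev/vg0/data"], check=True)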

  • Ended project (Score:5, Informative)

    by diegocg ( 1680514 ) on Friday January 14, 2011 @06:08PM (#34884786)

    According to insidehpc [insidehpc.com], Oracle has stopped developing Lustre and developers "have reportedly been encouraged to apply for other positions within the company".

    A group of Lustre users already created OpenSFS [opensfs.org] in October 2010 to continue developing Lustre.

    • If necessary, it will be forked. Between OpenSFS and Whamcloud there will always be a home for Lustre. Whamcloud already has contracts with Lawrence Livermore National Lab and Oak Ridge National Lab. Oak Ridge already has the largest Lustre filesystem to date. And there is also DDN, which supplies the hardware for most of the larger Lustre sites and distributes a local copy of Lustre as well. Lustre is more than fine; it's just a little lost finding a home at this time.

  • by Anonymous Coward

    Oracle acquired a lot of open source tech from Sun that has since been forked — or is in the process of being forked.

    Is really:

    Oracle acquired a lot of open source tech from Sun that has since been fucked — or is in the process of being fucked [by Oracle].

  • Whamcloud? Really?
  • by Daniel Phillips ( 238627 ) on Friday January 14, 2011 @08:33PM (#34886120)

    Lose every tie to ZFS. Every. Single. One.

    Right now.

    Like every piece of software Oracle is involved in, ZFS is a big fat patent trap. Not only that, but ZFS is a lot slower than Ext3 and Ext4, and probably Btrfs[1] as well. There is absolutely no benefit to using ZFS as an object storage target, there is only the certainty of legal problems.

    [1] Oracle is involved with Btrfs too, so exercise due caution.

    • by Anonymous Coward

      Unfortunately, Lustre-on-ZFS [zfsonlinux.org] is substantially faster than Lustre on ext3, mainly because ZFS combines the features of an LVM and a filesystem. That eliminates the need to have SAN appliance heads managing the storage and provides some additional data integrity features. It's cheaper too.

      • by Lennie ( 16154 )

        Is it faster because of the ZFS intent log and second-level cache on SSD?

      • Unfortunately, Lustre-on-ZFS [zfsonlinux.org] is substantially faster than Lustre on ext3, mainly because ZFS combines the features of an LVM and a filesystem

        That's bafflegab and incorrect. Or if you disagree, please explain why.

        • And by the way, is your opinion based on benchmarks, or on hype from Sun? I strongly suspect the latter.

          • by dsouth ( 241949 )

            It appears to be based on the linked site:

            "In particular, ZFS’s advanced architecture addresses two of our key performance concerns: random I/O, and small I/O. In a large cluster environment a Lustre I/O server (OSS) can be expected to generate a random I/O workload. There will be 100’s of threads concurrently accessing different files in the back-end file system. For writes ZFS’s copy-on-write transaction model converts this random workload in to a streaming workload which is critical whe

            • It appears to be based on the linked site:

              "In particular, ZFS’s advanced architecture addresses two of our key performance concerns: random I/O, and small I/O. In a large cluster environment a Lustre I/O server (OSS) can be expected to generate a random I/O workload. There will be 100’s of threads concurrently accessing different files in the back-end file system. For writes ZFS’s copy-on-write transaction model converts this random workload in to a streaming workload which is critical when using SATA disks. For small I/O, Lustre can leverage a ZIL placed on separate SSD devices to maximize performance."

              The LLNL ZFS study has been pretty widely publicized in the HPC community. Lustre uses the backing filesystem's API rather than a normal mount. Until now Lustre used ext under the hood for data storage, so the performance improvement from ZFS is relative to ext. ext3/4 may very well outperform ZFS on a workstation or small server, but that's not what Lustre is used for (even their test system is ~900TB).

              Disclaimer: I used to work for LLNL.

              • Disclaimer: I used to work on Ext3. I would classify the above as "hype from Sun". There is a hidden cost to making all the writes linear on spinning media: the reads become nonlinear. This is usually the wrong tradeoff.

                Note that a traditional journal is another way of linearizing writes, in that a write transaction can be considered durably recorded to media as soon as the journal write completes.

                Benchmarks tell the true story, not hype, and on good information and belief the benchmarks say ZFS…
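
                The journal alternative described above, in the same toy style as the copy-on-write sketch earlier: writes are appended sequentially to a journal and count as durable once the journal record lands, then are checkpointed to fixed home locations so reads stay linear. Hypothetical structures, not ext3's jbd:

                class Journal:
                    def __init__(self):
                        self.log = []   # sequential journal area: appends only
                        self.home = {}  # block -> data at its fixed home location

                    def commit(self, block: int, data: bytes) -> None:
                        self.log.append((block, data))  # one sequential append = durable

                    def checkpoint(self) -> None:
                        """Later, apply journaled records in place and trim the journal."""
                        for block, data in self.log:
                            self.home[block] = data  # home addresses never move
                        self.log.clear()

                j = Journal()
                j.commit(907, b"a")
                j.commit(3, b"b")
                j.checkpoint()
                print(sorted(j.home))  # blocks keep fixed addresses, so reads stay linear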

    • If you are comparing ZFS performance on Linux then, yes, it is slower, because ZFS on Linux is not done at the kernel level and thus has a huge performance loss compared to ZFS on Solaris/OpenSolaris. There have been plenty of benchmarks out there showing ZFS's performance besting ext3 and ext4 on identical hardware (with one running OpenSolaris and the others on Linux). It is a shame that Oracle has no intention of continuing its development, the same with Lustre. Two years ago they were talking about…
      • If you are comparing ZFS performance on linux, then, yes, it is slower

        No, I am comparing Ext3/4 on linux to ZFS on Solaris.

      • There have been plenty of benchmarks out there showing ZFS's performance besting EXT3 and EXT4 on identical hardware (with one running OpenSolaris and the others on linux)

        Link please.

    • Under Linux this is so true, but under *BSD the deduplication portion works too, and that is an excellent feature if you are running a huge amount of storage.

    • by TheRaven64 ( 641858 ) on Saturday January 15, 2011 @06:50AM (#34888496) Journal

      ZFS is a big fat patent trap

      Oracle has released the ZFS code under the CDDL. While lots of Linux people hate the license, it has very strong patent retaliation clauses. Oracle explicitly grants you patent licenses for everything required to use ZFS via clause 2.1. All other contributors do via clause 2.2. Anyone asserting patents against ZFS immediately (well, within 60 days) loses this grant and has their (copyright) license terminated as well via clause 6.2.

      Since Sun accepted third-party contributions to ZFS under the OpenSolaris program, if Oracle tried asserting patents against any ZFS distributor then it would immediately have to stop distributing Solaris, and remove all of those contributions before it could start again.

      The ZFS patents are only an issue for a reimplementation of ZFS for Linux, and that's a problem caused by the GPL. Using the FreeBSD or NetBSD ports of ZFS (or even the FUSE port) gives you an explicit grant to the patents.

    • by Anonymous Coward

      Comparing ZFS to any of the ext filesystems is pointless, and utterly misses the point of ZFS.

      Do ext3/4 provide snapshotting?
      Do they provide deduplication?
      Do they perform hash checks to avoid duplicating files in the first place? (a sketch of that idea follows below)
      Do they provide ANY of the dozens of features that set ZFS apart from other filesystems?

      Don't bother, the answer is no.

      And if you're going to disable those features on ZFS, then you have no reason to be using it in the first place, so you're effectively making an apples-to-zebras comparison.
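
      As promised above, a toy sketch of the hash-check idea: a content-addressed block store that keeps one physical copy per unique block. ZFS keys real deduplication on per-block checksums such as SHA-256; everything else here is hypothetical:

      import hashlib

      class DedupStore:
          """Toy content-addressed block store: identical blocks stored once."""
          def __init__(self):
              self.blocks = {}  # digest -> data, one physical copy per content
              self.refs = {}    # digest -> reference count

          def write(self, data: bytes) -> str:
              digest = hashlib.sha256(data).hexdigest()
              if digest not in self.blocks:
                  self.blocks[digest] = data  # first copy: actually stored
              self.refs[digest] = self.refs.get(digest, 0) + 1
              return digest  # the caller keeps this pointer, not the data

      store = DedupStore()
      first = store.write(b"\0" * 4096)
      second = store.write(b"\0" * 4096)  # duplicate block: no new storage used
      assert first == second and len(store.blocks) == 1 and store.refs[first] == 2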
