Data Storage

File System Design part 1, XFS

rchapman writes "Generally, file systems are not considered "sexy." When a young programmer wants to do something really cool, his or her first thought is generally not "Dude, two words... File System." However, I am what is politely termed "different." I find file systems very interesting, and they have seldom been more so than they are right now. Hans Reiser is working on getting Reiser4 integrated into the Linux kernel, the BSDs are working on getting a journaled file system together, and Sun Microsystems recently released a beta of ZFS into OpenSolaris."
This discussion has been archived. No new comments can be posted.

  • Oh, snap. (Score:4, Interesting)

    by cbiffle ( 211614 ) on Wednesday January 25, 2006 @01:30PM (#14559403)
    the BSDs are working on getting a journaled file system together


    Oh, snap. Somebody's not running Soft Updates. :-)

    (Yes, I understand that Soft Updates is not technically metadata journalling as practiced by the Linux people. No, I don't believe there are a significant number of practical situations where the results will differ.)
    • It's not a "Linux people" thing. XFS, for example, is from SGI (but you probably knew that).

      The main difference is, there is no fsck in XFS. None whatsoever. With ext3, or ufs2 with soft updates, you can still type "fsck /dev/whatever" on an unmounted filesystem, and it'll grind through it, but try to fsck an XFS filesystem, and nothing will happen. It's a no-op.
      • Re:Oh, snap. (Score:3, Informative)

        by Anonymous Coward
        The main difference is, there is no fsck in XFS. None whatsoever.

        What the fuck?

        Have you read this [die.net], or even used XFS before, for that matter?
        • by Anonymous Coward
          Don't you mean "What the fsck?" :)
        • Sure, and I've never needed it despite all the crashes my main desktop has had (flaky power).

          I've used XFS for about four years now, on three systems.
      • I'm not sure why, but to repair an XFS filesystem, fsck is a no-go; you need xfs_repair.
  • File system design (Score:5, Informative)

    by Bogtha ( 906264 ) on Wednesday January 25, 2006 @01:35PM (#14559476)

    If you're interested in this, you'll probably also be interested in Practical File System Design with the Be File System [nobius.org] (PDF), by Dominic Giampaolo, the designer of the Be file system. There's also a Slashdot review [slashdot.org] of this book.

  • Blatant error (Score:5, Interesting)

    by lostlogic ( 831646 ) on Wednesday January 25, 2006 @01:42PM (#14559574) Homepage
    Sector size on hard disks is 512 bytes, not 512kbytes. WTF, don't act like an authority and be a dumbass. Imagine the data waste if we actually had 512k physical sectors on disks.

    Also the scaling numbers are completely hokey.
    • He didn't say it very eloquently, but the difference between 512 bytes and 512 kilobytes is pretty significant. 512KB is half a megabyte. Can you imagine if a hard drive sector was .5 megabyte?? You'd end up with tons of wasted space.
    • Re:Blatant error (Score:2, Informative)

      by Intron ( 870560 )
      The author also says disk mfgs are lying when they use K = 1000 bytes, M = 1000000 bytes. This person is a know-nothing.
    • by NixLuver ( 693391 ) <stwhite&kcheretic,com> on Wednesday January 25, 2006 @02:17PM (#14560054) Homepage Journal
      from TFA:
      "There is a minimum size you can write to or read from the disc. This minimum size is called a "sector," and is usually around 512k. So, unless you really like 512k files, it is very likely that you will end up either wasting space or cutting off the end of the file if your file system doesn't deal with this."

      This is clearly not a typo - which is what I was certain I would find when I did RTFA. This guy has a basic, fundamental flaw in his understanding of the very thing he's writing an article about. This is a non-starter, IMO. Combine that with poor sentence structure and bad scansion ... I mean:

      "Note: My ibook has a "30 gig" drive. This is bullshit and I'll tell you why: Drives are defined by the binary definition of mega, kilo and giga. For example, a kilobyte is not 1000 bytes, but actually 1024 bytes. However, your HD manufacturer uses the metric definitions, even up to gigabytes. Now I can see you thinking..."But Wait Mr. Mad Penguin Person...Thats patently ridiculous and means they are lying on the box." Yah... "

      If I'd written something like that, I'd delete it right away and start from scratch.
      • Either way, it's missing the unit, whether it's "b" or "B".

        A few examples:

        512Kb - kilobits
        512KB - kilobytes

        The writer did not include the unit, but used a lowercase letter, so one would assume it reads as:
        512k - kilobits
        512K - kilobytes

        It makes sense, because no one would interpret it in any other way, due to the context. No one, of course, except someone on /. nitpicking and trying to prove that the person who wrote the article is actually ignorant (not sure why people do that; maybe it makes them feel better?).
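        For what it's worth, the numbers being argued over above are easy to check. A minimal sketch in Python, with the figures (one million files, a "30 gig" drive) chosen purely for illustration:

        # Back-of-envelope check of the sector-size and GB-vs-GiB points above.
        # All figures here are illustrative, not taken from the article.
        GIB = 1024 ** 3          # binary gigabyte (GiB)
        GB = 1000 ** 3           # decimal gigabyte, as used on drive packaging

        marketed_bytes = 30 * GB                 # a "30 gig" drive in decimal units
        print(marketed_bytes / GIB)              # ~27.94 binary gigabytes

        # Internal fragmentation: on average each file wastes half a sector.
        # Compare real 512-byte sectors with the article's claimed "512k" sectors
        # for one million small files.
        files = 1_000_000
        for sector in (512, 512 * 1024):
            wasted = files * sector // 2
            print(f"{sector:>8}-byte sectors: ~{wasted / GIB:.2f} GiB lost to slack")

        Half-megabyte sectors would waste space on a scale nobody would tolerate, which is why "512k" in the article only makes sense as a slip for 512 bytes.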
    • Re:Blatant error (Score:2, Insightful)

      by Anonymous Coward
      There are lots of other errors as well. For instance, he asserts that the inode contains the filename (it doesn't). Other things are unclear. He refers to UFS and says it scales to disks of around 1TB, but does not define what he means by UFS (as opposed to FFS). He shows a considerable bias to PC hardware by referring to MBRs. He seems to think that taking something out of a B+-tree is faster than removing something from the front of a linked list. I have no idea why he thinks that Unix at Berkeley was "st
  • You've not played much with Ext3, then, have you? =)
  • by __aaclcg7560 ( 824291 ) on Wednesday January 25, 2006 @01:55PM (#14559749)
    So constructing a compiler from scratch is no longer sexy?
  • division (Score:2, Interesting)

    by newr00tic ( 471568 )
    With everyone and their parrot talking about RAID these days, it would've been fun if some sort of dual array could work as ONE filesystem, where one (or more) redundant set took care of the balancing/tree'ing, etc. (separately), and the other(s) kept the actual files. If there were yet another set (a third), holding the relevant META-information belonging to the files, you would imagine it to be a step forward from what we have now; well, I can, anyway..
    • Re:division (Score:1, Insightful)

      by Anonymous Coward
      RAID is best done at a separate LVM layer so that any FS can be built on top of it. I don't see much advantage in building this into the FS. What advantage is there in putting metadata on separate volumes? You need less reliability or something?
      • XFS and ext3 can have their metadata logs/journals on other physical devices, separate from the actual filesystem blocks.
        Sometimes filesystems are RAID-aware, in that they choose to allocate blocks at the beginning of RAID strides and stuff like that, but that's about as flexible as filesystems get.
      • less reliability? (Score:2, Insightful)

        by newr00tic ( 471568 )
        No, I was thinking more along the lines of when (/if) META-data becomes big, and you'd get further throughput by having it on its own drives, so as to speed things up.

        With all three examples I provided, I tried to "account" for both speed and reliability, even though it's only a vague theory..

        --No wonder (_real_) things keep standing still for fscking 10 years at a time, and only Disney features are implemented; people turn down theories just as snappily as they turn down web designs (50ms, or whatever)..
  • obligatory (Score:5, Insightful)

    by DrSkwid ( 118965 ) on Wednesday January 25, 2006 @02:12PM (#14559987) Journal
    If you like on disk file systems you should read Venti: a new approach to archival storage [bell-labs.com].

    Plan9 [bell-labs.com]'s primary on-disk storage is Fossil [wikipedia.org], which runs in user mode. (Plan9 doesn't have a super user)

    You can run arbitrary programs in Plan9 that present a file/folder directory structure by using the common 9P protocol. All devices look like files and folders and can be manipulated like any other, even at the permission level.

    For instance, I have an image mounter that takes a tga file and presents 1 folder containing 4 files, red, green, blue and alpha.
    I can then use any tool I like to manipulate those files using the file semantics we are all familiar with. I even have a flag that mounts the files as textual rather than binary, i.e.:
    00 00 ff ff
    00 00 ff ff
    ff ff 00 00
    ff ff 00 00

    and I can do image processing with awk!
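
    The textual mode above is what makes the "any tool" claim work: the channel is just a file of whitespace-separated hex bytes. Here is a rough sketch of the same idea in Python; the mount point and file layout are assumptions for illustration, not the commenter's actual 9P service:

    # Invert one colour channel exposed as a textual file of hex bytes.
    # The path below is hypothetical; any text-processing tool (awk included)
    # could do the same, which is the point of the file-based interface.
    channel = "/n/img/red"

    with open(channel) as f:
        pixels = [int(tok, 16) for tok in f.read().split()]

    inverted = [255 - p for p in pixels]

    with open(channel, "w") as f:
        f.write(" ".join(f"{p:02x}" for p in inverted))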

  • NSS for Linux (Score:2, Interesting)

    by marquis111 ( 94760 )
    NSS has been ported to Linux too. That's another modern, industrial-strength filesystem with features sorely needed by Linux.
    • It requires loading binary kernel modules, and you'll have to run it on OES, so that puts it beyond the pale for quite a few people. I've been a bit underwhelmed by it performance-wise, and there were some nasty bugs in the initial release. SP2 may make a difference. I hope so.
  • by Anonymous Coward
    What compels people to make the leap from "I've grasped the basics of a large and complex field" to "I think I'll write an article about it for the Slashdot crowd" via "I'm sure it doesn't matter that I'm not a good writer" and "I think I'll go with a self-satisfied tone"?
    • What compels people to make the leap from "I've grasped the basics of a large and complex field" to "I think I'll write an article about it for the Slashdot crowd" via "I'm sure it doesn't matter that I'm not a good writer" and "I think I'll go with a self-satisfied tone"?

      If he really understood the basics, he'd understand how the concept of "hard link" means the file name is not stored in the inode.

      There's an old maxim (usually attributed to Butler Lampson) that says almost any problem in programmin
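
      The hard-link point is easy to demonstrate on any POSIX system; a minimal sketch (the filenames are made up):

      # Two directory entries (hard links) referring to the same inode show why
      # the filename cannot live in the inode itself.
      import os, tempfile

      d = tempfile.mkdtemp()
      a = os.path.join(d, "a.txt")
      b = os.path.join(d, "b.txt")

      with open(a, "w") as f:
          f.write("hello\n")

      os.link(a, b)                      # second name for the same file

      sa, sb = os.stat(a), os.stat(b)
      print(sa.st_ino == sb.st_ino)      # True: one inode, two names
      print(sa.st_nlink)                 # 2: the inode tracks a link count, not a name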

  • by Anonymous Coward
    From the article:
    Small difference there. It is also a very fast file system, allowing reads of up to 7 GB/sec.

    An assumption which could only be made by a newbie. Maximum throughput of a filesystem is not filesystem architecture dependent, but hardware dependent.
    I could give you 7GB/sec out of a FAT drive, given the proper hardware.
    Several other quotes suggest a bit of 'newbieness' like "B+trees are insanely complex".
    The concept was designed by a human, therefore it is clearly understandable by a human. It'
    • Maximum throughput of a filesystem IS filesystem architecture dependent, and XFS solved that problem in its time. Check your facts.

      Also, imagine this - your filesystem uses some kind of block size, allocating a block requires round-trip through the filesystem (including touching superblocks and modifying list of free blocks).

      What happens when you're trying to write a lot of data to such synchronous filesystem?

      You're bound by round-trip time; no amount of faster hardware would help. Similar situations used
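
      A toy model of the round-trip argument above; the latency and block-size figures are invented for illustration only:

      # If every block write requires a synchronous allocation round-trip,
      # throughput is capped by latency, not by the hardware's bandwidth.
      block_size = 4 * 1024        # bytes written per allocation
      round_trip = 0.2e-3          # seconds per synchronous round-trip (assumed)

      print(block_size / round_trip / 1e6, "MB/s")   # ~20 MB/s no matter how fast the disk is

      # Amortise one round-trip over a large extent (or delay allocation):
      blocks_per_extent = 1024
      print(blocks_per_extent * block_size / round_trip / 1e6, "MB/s")  # ceiling scales with extent size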

  • by JoshDanziger ( 878933 ) on Wednesday January 25, 2006 @02:36PM (#14560293)
    Sorry, this article didn't really teach me anything interesting about filesystems. In general, the article was poorly written. For example, taking two sentences to say: "B+Trees are complex. Let me rephrase that. B+Trees are very, very complex." Readers of all types value their time and don't want to waste it.

    You seemed lost at points, caught between trying to sound like an expert and trying to sound like a grandfather reminiscing about the grand old days of filesystem development. Are you a storyteller or a teacher? Pick one.

    Content-wise, there wasn't really much there for me. You spent a lot of time explaining the problems of a binary tree, but I think that your target audience already understands the time complexity of a binary tree. Then, you gloss over the B+ tree because it's complicated.

    Sorry if I sound harsh. I hope that this comes off as constructive criticism.
    • > Content-wise, there wasn't really much there for me.

      Yeah, I doubt there was anything in it for anyone interested in filesystems.
      And seeing as XFS is my day job, the mistakes were pretty obvious, too.

      One, a b+tree does not make a filesystem.

      Two, in all that talk about b+trees in XFS, he made some basic mistakes. There's only one inode b+tree per AG, there are two free-extent b+trees per AG, and the superblock has no b+trees in it at all. And they are used in many other places in XFS as well.

      Three, there is
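
      For readers wondering why the fanout matters so much: on disk, the cost of a lookup is the number of blocks you have to read, which is the tree height. A quick comparison, with the fanout figure chosen for illustration:

      # Tree height determines disk reads per lookup.
      import math

      entries = 10**9                                        # records indexed
      binary_height = math.ceil(math.log(entries, 2))        # ~30 node visits
      bplus_fanout = 200                                     # keys per block (illustrative)
      bplus_height = math.ceil(math.log(entries, bplus_fanout))  # ~4 block reads

      print(binary_height, bplus_height)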
  • Sorry, but if you really like file systems, maybe you should try learning something about them before deciding to write this kind of article. I've had less than 10h of classes on file system design, and I don't consider myself to know anything about the subject, yet I still got the impression that I knew quite a bit more than you.
  • Here: http://labs.google.com/papers/gfs.html [google.com]. Abstract: "We have designed and implemented the Google File System, a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. (...)" Pretty damn cool stuff, very advanced but perhaps a bit too tuned for Google's needs. See also the papers on their own clustering technology and distrib
