State Of The Filesystem 424

Skeme writes "Have you heard of Plan 9 or Reiser4 but don't know much about them? Are you curious about the improvements free software is making to its filesystems in general? Read my summary of current filesystem developments: namely, what improvements we can expect (a lot), and what Linux and the BSDs can do to improve their filesystems."
This discussion has been archived. No new comments can be posted.

  • Transferring Files (Score:5, Interesting)

    by jhunsake ( 81920 ) on Tuesday July 15, 2003 @06:18AM (#6441149) Journal
    I've always wondered how these filesystems with metadata handle transferring files between different systems. It would suck to have all your MP3 info in filesystem metadata and then lose it all when you transferred to a system without fs metadata. Anyone have any insight?
    • by ThogScully ( 589935 ) <neilsd@neilschelly.com> on Tuesday July 15, 2003 @06:24AM (#6441160) Homepage
      I would think that would be prevented by the filesystem implementation. A filesystem's features are only as good as the implementation. So if you have fs metadata and are transferring it to another filesystem, whatever tool you are using to move the files, whether it be at a BASH prompt or drag and drop in Konqueror, should figure out some way of properly translating data from one system to another without losing anything. Moreover, the tool should be able to do it without knowing any particulars about the actual types of filesystems in use. It's all in the implementation.
      -N
    • by sleeper0 ( 319432 )
      I am skeptical as well; the only reasonable answer seems to be that it would get lost, especially if you are transferring between platforms. Why would you want to use a metadata system that you can't count on anyone else using? It seems to me that current packaged-in file metadata systems are much more practical for transfer, multi-platform use, streaming, etc.

      Other things mentioned in the article like the /etc/passwd manipulation just seem like gimmicks. How many files like that would you by hand suppo
      • by Surak ( 18578 ) * <.surak. .at. .mailblocks.com.> on Tuesday July 15, 2003 @07:18AM (#6441360) Homepage Journal
        Well, remember that Reiser4, for instance, is a TOTALLY DIFFERENT system from the standard POSIX filesystems (Reiser3, Ext2/3, UFS, etc.)

        The idea is that data is stored as in a database, and small files can be handled efficiently.

        Think of it this way: what *is* a filesystem? Really, it's a relational database. We don't tend to think of it as one, because it isn't implemented as one, but that's what the data it represents is: each file and directory, with its associated metadata, is a record, and directories are pointer records to other records that show relationships.

        Anyways, code needs to be implemented to handle this.

        We use metadata NOW. A file's permissions, owner/group info, date and time stamp, and even the file size are metadata. We transfer those between different systems right now. With the correct transport method, this metadata doesn't get lost at all. A tarball can sit on any system no problem. Inside the tarball is all the POSIX metadata. That gets transferred from system to system.

        Of course, some OSes (read: Windows) don't handle Unix-style permissions correctly, so these are translated into rough-equivalent ACLs with systems such as Samba.
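
        For instance, the round trip with stock GNU tar (real options, nothing hypothetical):

        # pack a tree, preserving owners, modes and timestamps (the POSIX metadata)
        tar -cpf project.tar project/
        # inspect the stored metadata without extracting anything
        tar -tvf project.tar
        # unpack elsewhere; -p restores permissions (restoring ownership needs root)
        tar -xpf project.tar -C /mnt/other-disk/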

        • by paul.dunne ( 5922 ) on Tuesday July 15, 2003 @08:53AM (#6442119)
          > each file and directory, with its associated metadata, is a record, and
          > directories are pointer records to other records that show relationships

          Sorry to be snippy, but what you're describing is a *hierarchical* database. But you're right in your main point: an FS like ext2 is an example of a hierarchical database.
          • by thogard ( 43403 )
            POSIX file systems aren't required to be hierarchical; they just sort of look that way. Remember hard links? The namespace is hierarchical, but files and directories can appear in many locations. link (not ln) has been able to create hard links for directories since Unix version 2.
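
            To make that concrete, here is the classic demonstration with ordinary tools (both names must live on the same filesystem):

            $ mkdir -p tree/a tree/b
            $ echo hello > tree/a/file
            $ ln tree/a/file tree/b/file      # a second name for the same file
            $ ls -i tree/a/file tree/b/file   # same inode number: one file, two places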
        • by zatz ( 37585 ) on Tuesday July 15, 2003 @09:31AM (#6442503) Homepage
          A filesystem is nothing like a relational database. I wish people would stop making this comparison, because it's completely misleading and unhelpful. A filesystem is not a set of user-defined tables, each of which contains an unordered set of rows. Queries and joins are not possible. Constraints and null values are not supported. Files within a directory have an inherent order. Files are variable-length and byte-addressable. Duplicate "rows" are not permitted. The principal relationship modeled is hierarchy... ever heard of a hierarchical database?
    • by warrax_666 ( 144623 ) on Tuesday July 15, 2003 @06:28AM (#6441176)
      Anyone have any insight?

      Are you mad? This is slashdot.
    • by groomed ( 202061 ) on Tuesday July 15, 2003 @06:36AM (#6441204)
      Generally you lose the data, unless you wrap it in another format to encapsulate all the information. This is what Macheads did on Classic MacOS: they .hqx'd or .bin'd their files before transferring them to another system. It's not ideal. The alternative, flat streams-of-bytes, is not ideal either (and not true: even in Unix, files have some metadata that doesn't translate very well).

      Hopefully in the future our filesystems and transfer protocols will evolve to have some reasonably broad common ground where metadata is concerned (a development similar to the diminishing need to accommodate DOS 8+3 filenames).
      • by gilesjuk ( 604902 )
        That was a right pain in the butt, having resource and data forks etc.

        At least with AmigaOS they stored similar things in another file (.info file). Much easier to transfer, but you do run the risk of losing the .info file.
    • by axxackall ( 579006 ) on Tuesday July 15, 2003 @06:37AM (#6441209) Homepage Journal
      Metadata is very application-specific, and most filesystems are agnostic about it. Typically it must be handled by another layer on top of the FS.

      Often that layer is a DB - a database. I suggest you try ZODB, the database in Zope [zope.org]; it's very good at handling files as documents, with a lot of unified metadata about files.

      Another good example to study is Subversion [tigris.org], which is a revisioning/versioning metadata-management layer on top of a regular FS.

      You may research and find some software implementing a layer (on top of a regular FS) specially designed to handle MP3 playlists. But again, that would be a layer on top of the FS, not a filesystem by itself.

    • The OS should have a 'list' of what's supported by each FS. When you copy a file from fsa to fsb, the OS (or program) should compare features and let you know (somehow) that something is not (or may not be) going to work right. If you copy something from a regular ext2 system, which is case-sensitive, to an MS-DOS floppy disk, something should remind you. Or the program checks this and looks for problems.

      Remember that not all problems can be detected, so are you willing to live with: 'this may not work
    • by Fastball ( 91927 ) on Tuesday July 15, 2003 @06:53AM (#6441259) Journal
      I agree that metadata in the filesystem is a risky proposition. Just on general principle, I prefer my data inside the file and not left with the filesystem. The MP3 metadata example, to me, is like Windows file extensions on HGH. I remember John Dvorak wrote a piece about Windows file extensions a long while back, and he argued that file types, etc. should be inside the file. A header of sorts. I tended to agree then, and I see filesystem metadata as a bad trend.

      • by hankaholic ( 32239 ) on Tuesday July 15, 2003 @10:25AM (#6443014)
        While there are problems to be solved (backup, for instance), there are many benefits as well.

        Many detractors of the UNIX security model point to the lack of ACLs and fine-grained security.

        The U.S. DoD has contracted Namesys (the ReiserFS guys, led by Hans Reiser) to develop a filesystem upon which security can be built. Reiser's vision is a filesystem which allows users of the filesystem to define what security means in an extensible manner, with plugins.

        The future of ReiserFS includes a plugin architecture which makes it easy to implement NT permissions, for instance, without breaking existing programs or requiring new semantics which wouldn't work across, say, NFS.

        I'm not sure why the author chose to throw Plan 9 into the mix. Plan 9 has some interesting features, but I don't feel that either Plan 9 or ReiserFS was given sufficient attention to allow the reader to understand just _why_ such things are interesting.

        I tried to sum up some interesting Plan 9isms here, but I'd rather not go into it -- reexporting modified views of the filesystems is a complicated thing, and it's hard to justify such complexity in limited space.

        As far as ReiserFS goes, the current and future benefits are many.

        First, speed is king, and ReiserFS takes the crown. Anyone who has waited more than 10 seconds for "ls /usr/share/doc" on ext2 can attest to the fact that ext2 craps out on large directories. Reiser V3 does quite well, and V4 is rumored to do even better.

        Second, space efficiency is nice, and ReiserFS does it better than anything out there. While storage sizes are increasing dramatically, it "feels" wrong to waste a whole block on a trivial file (only a few bytes long, for instance). With ReiserFS, developers don't need to waste time trying to work around the need for efficient small file access -- it's efficient to have many small files, and it's efficient to have many of them in one directory if needed.

        I believe it's been said that in the future, ReiserFS may compress the filenames in some way to try to eliminate even that overhead.

        ReiserFS V4 implements wandering journals, and support for transactions. What does that mean to the layperson? It means that an application which handles important data (for instance, an online purchasing system) can promise that, in the event of a crash, a group of related filesystem operations is guaranteed either to have all completed or to have all failed.

        For instance, a purchasing system makes a note to charge the person's credit card, and to ship the items that were ordered. You can tell ReiserFS programmatically to "create a file in /var/orders/ and place this text in it, as well as a file in /var/ccbilling with the customer's credit info in it, and if the system crashes halfway through, throw it all away, because I don't want to charge for products I'm not shipping and I don't want to ship products for which I'm not billing."

        If the system crashes, you don't have to work to make sure that you have a one-to-one mapping of orders to sets of billing information -- ReiserFS can guarantee you that either all of it got recorded, or none of it got recorded.
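
        Purely as illustration - the real V4 interface went through several drafts, and "fstxn" below is an invented stand-in, not an actual Reiser4 tool - the order example might look like this from a shell:

        fstxn begin                                    # HYPOTHETICAL wrapper around the grouping primitive
        echo "$ORDER_TEXT" > "/var/orders/$ORDER_ID"
        echo "$CC_DETAILS" > "/var/ccbilling/$ORDER_ID"
        fstxn commit                                   # after a crash: both files exist, or neither does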

        So now you've got a fast, reliable filesystem. I'll even let you ignore the fact that ReiserFS will let you implement per-file compression and encryption plugins. Still not impressed? Wondering what all of this crap about namespaces is about?

        The author of this article basically ripped off Hans Reiser's examples from his V4 draft document [namesys.com], such as the /etc/passwd thing. I'm not sure where he got the idea of subusers and I don't much care, as it's not really relevant to the namespace thing.

        What is interesting, but not really mentioned, is that file filters can become part of the filesystem.

        The specific implementation details in this example are products of my imagination,
      • Metadata in files? (Score:4, Interesting)

        by steveha ( 103154 ) on Tuesday July 15, 2003 @11:25AM (#6443643) Homepage
        Just on general principle, I prefer my data inside the file and not left with the filesystem. The MP3 metadata example, to me, is like Windows file extensions on HGH.

        I'm with you -- I like self-contained file formats.

        But I don't think he was proposing that you not use Ogg tags or MP3 tags; he was talking about the filesystem abstracting the tags. If you changed "Stagnation.ogg/album" to the string "Trespass", then the filesystem abstraction layer should update the Ogg "album" tag inside the file to be "Trespass".

        The key benefit here is that you would not need some wacky command-line utility program to let you view and change tags on Ogg files. You could just use the shell. In bash:

        for ii in *.ogg; do echo "Trespass" > "$ii/album"; done

        Note that this same one-liner would work if you were in a directory with MP3 files, and you changed "*.ogg" to "*.mp3". Currently, you need to run vorbiscomment for your Ogg files, and mp3info for MP3 files. (I just checked, and sure enough, they take different arguments to do the same operations.)
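
        (From memory, so check the man pages, but the divergence looks roughly like this:

        vorbiscomment -a -t "ALBUM=Trespass" song.ogg   # append a tag to an Ogg file
        mp3info -l "Trespass" song.mp3                  # the same operation, an entirely different flag

        Two tools, two syntaxes, one concept.)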

        Personally, I'd like to see a standard, portable XML metadata format for legacy systems. People talk about copying a file from a rich metadata filesystem and having new files like .attributes created on the target legacy system; I'd be happier if just one big XML file could be created with the same name as the original file.

        Suppose you back up server //rich onto server //legacy, and then you want to restore some files from //legacy to //rich. If all the metadata were stored in a big XML file, then when you copy the file from //legacy to //rich you restore all the metadata; you wouldn't accidentally slice off attributes by forgetting to copy one or more rich attribute files.

        You could do most of the fancy tricks of the rich metadata filesystem on a legacy filesystem that used the big XML file to store the rich metadata. And as long as the legacy system is just smart enough to look at the main data part of the XML and leave the metadata tags alone, you could still modify the file with sed, awk, perl or whatever, and then copy the big XML file onto your rich metadata filesystem and still not lose any rich metadata.

        Note also that the big XML file could be used to deal with existing rich metadata systems, like resource forks from Macintosh filesystems, or multiple data streams from NTFS files.

        steveha
    • by peatbakke ( 52079 ) <peat@noSpAM.peat.org> on Tuesday July 15, 2003 @06:56AM (#6441271) Homepage
      There are a few ways to solve the problem, but only one of them ensures compatibility across many different platforms:

      The first is the worst: having a separate metadata file accompany each of your files as they move between file systems. This is a logistical nightmare, and requires soooo many tools to be rebuilt that it's not even worth considering.

      The second is the "bundle" concept, as popularly demonstrated in OS X. "iTunes.app" is actually a directory, not a big binary file. The directory is split up into different subdirectories, containing multiple binary formats, languages, etc. To transfer iTunes to another computer, just tar/zip the "iTunes.app" directory, and voila. Easy. Of course, the app will only run on platforms that recognize the bundle format and can execute the binary. This rules out backward compatibility, and requires that all major system vendors agree on the same bundle format -- and that's not gonna happen.

      The third way to do it is to keep metadata within a file (like MP3 tags), and then export it into the filesystem space. For example, an MP3 plugin would pull the data out of "my.mp3" and turn it into "my.mp3/title", "my.mp3/genre", etc. The nice thing about this is that it's platform-independent and backward compatible.

      I'd be keen to see option #3 implemented in a modern file system -- do any current FSes have an API that could support it?
      • by spitzak ( 4019 ) on Tuesday July 15, 2003 @11:13AM (#6443517) Homepage
        Absolutely #3 is the way to go. "metadata" should always be figured out from the contents of the file, the metadata should really be considered a cache of the answers.

        The way it would work is a program would look for a piece of metadata. If it was not there it would run a standard program using popen that would then create the metadata and return it, and also set the metadata on the file if it had permission to. This program would probably work something like "file" does now and would use text configuration files to actually determine how to extract interesting information from all known file types. Most likely this actual implementation of looking up metadata would be wrapped in a libc-level function so it just looks like you ask for the metadata.

        The obvious advantages:

        Files can be transferred by protocols that don't understand metadata.

        Files can be stored on filesystems that don't understand metadata.

        Metadata is never "lost" as long as the real file data is intact.

        Metadata can be stored locally for remote files.

        Metadata can be modified without permission to modify the file.

        Probably much faster because metadata is not calculated until first needed.

        I also think that all files should be treatable as directories, and that the metadata should be presented as being inside this directory (i.e. the metadata "blah" for file "foo" is foo/blah). But this does not mean it has to be stored this way, and does not mean it has to exist when copied.
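
        In shell terms, the lookup described above might sketch out like this (".meta.d" and "extract-meta" are invented names for this sketch; a real system would bury the whole thing in a libc-level call):

        getmeta() {
            local file=$1 key=$2 cache=".meta.d/$file/$key"
            if [ -r "$cache" ]; then
                cat "$cache"                                # cached answer
            else
                mkdir -p ".meta.d/$file"
                extract-meta "$key" "$file" | tee "$cache"  # recompute from the file's contents, file(1)-style
            fi
        }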

    • If all of the metadata is stored in directories, then copying the directories and all of their contained files would naturally include the metadata.

      Mac OS X uses the "file as a directory" strategy, and BeOS used the dynamic file metadata approach. Combining the two is the way to go, and buys you a tremendous amount of new functionality over traditional file systems.
    • You have to answer this question every time you improve or add to the metadata. One recent instance of this was the addition of ACLs. An ancient instance of this was the addition of long (>14 character) filenames.

      The file-as-directory scheme goes a long way toward generalizing this, so that once the common tools like tar can handle it, new metadata introduction will likely not be such a transportability nightmare.

      BTW, ClearCase has had files-as-directories for years, just with a slightly different

    • That's been a problem the Macintosh has had for years. Mac files have two forks, one for data and the other for resources (icons, strings, executable code) and a few other pieces of information, most notably the type and creator (application not user) which allows the Finder to launch the right app for a file without skank like extensions.

      Anyhow, Mac files get royally screwed up when moved to any filesystem that doesn't support all those things (most), so there's a special encoding format that zips them al

    • by n3rd ( 111397 ) on Tuesday July 15, 2003 @07:21AM (#6441375)
      I've always wondered how these filesystems with metadata handle transferring files between different systems

      Usually it doesn't. Take for example VxFS (Veritas File System); it supports attributes like preallocating space for a file or forcing a file to be contiguous. When you move a file that uses these attributes to another type of file system (VxFS to UFS), you lose them, since the target file system doesn't support those attributes.

      Another example is ACLs. If they are in some way proprietary they won't transfer, and sometimes won't work well together. The situation I ran into was an HP-UX NFS server's ACLs not working correctly with a Solaris client. The server used and enforced the ACLs; however, the client couldn't view or modify the ACLs on the files that were stored on the server.

      If the implementation is truly at the file system level there's nothing you can do. However, as stated below, if there is a layer above the file system that handles metadata then you can more than likely keep the metadata intact.
  • But when (Score:3, Informative)

    by infolib ( 618234 ) on Tuesday July 15, 2003 @06:25AM (#6441163)
    will ReiserFS be ready for the 2.6 kernel? Just curious.
  • Why? (Score:3, Insightful)

    by Libor Vanek ( 248963 ) <libor.vanek@gm[ ].com ['ail' in gap]> on Tuesday July 15, 2003 @06:33AM (#6441191) Homepage
    It's really nice. But what does it bring that's new, that we should rewrite 90% of all system tools to use these new features? I find "cp /a/..uid /b/..uid" the same as "chmod /a --reference=/b"...
    • Sorry - I meant "/a/..rwx".
    • Re:Why? (Score:5, Insightful)

      by cperciva ( 102828 ) on Tuesday July 15, 2003 @06:41AM (#6441227) Homepage
      You're missing the point. chmod would still exist as a userland program; it is the kernel call which would be removed.

      To the user, there would be no change; to the userland programmer, there would be no change; to the C library developer, there would be a change (to implement chmod in terms of the existing filesystem operations); and to the kernel developer there would be a change (mostly in the direction of reduced complexity due to a smaller number of necessary functions).
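
      As a sketch of how little would change up top (reusing the article's toy ..rwx naming, which is not a settled interface), the userland chmod could reduce to roughly:

      chmod() {                          # illustrative shell version, not the real coreutils tool
          local mode=$1; shift
          for f in "$@"; do
              echo "$mode" > "$f/..rwx"  # one ordinary write replaces the dedicated syscall
          done
      }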
    • Re:Why? (Score:5, Insightful)

      by IamTheRealMike ( 537420 ) on Tuesday July 15, 2003 @06:43AM (#6441233)
      I think people are distracted by the examples he gave, which make the point clear but are perhaps not representative of what this would be used for in real life.

      GConf was a better example. ATM using GConf is, well, not hard, but you have a lot of extra machinery involved, new APIs to learn and so on. Basically all that machinery does is control the backends and give change notification (it does stuff like schema validation as well).

      It'd be *much* easier to use GConf if in order to read a value, you didn't have to load up the GConf libs (which in turn depend on CORBA), or parse XML files. At the moment that's really the only way to do it, but in most environments/languages it's far easier to manipulate files and directories than it is to talk to a CORBA server or bind APIs into them.

      You also get an increase in efficiency. Parsing XML is kind of kludgy - XML is not a particularly efficient format to store stuff in. It's a good compromise between humans and machines, but both of us have to do lots more work to meet in the middle. The reason it's used, rather than lots of small files, is that otherwise GConf would be too slow. In fact, they are already talking about removing yet more of the files/directories to speed things up, and sticking them all in the same XML file.

      Being able to have a configuration system that truly leveraged the filing system would make a lot of stuff easier, more reliable, and faster (because you can take advantage of filing systems that are really tuned around advanced data structures).
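
      Picture the difference (the path layout here is invented for illustration; GConf exposes nothing like this today):

      cat ~/.config/desktop/font                 # read a key: no CORBA, no XML parser
      echo "Sans 10" > ~/.config/desktop/font    # set a key from any shell or language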

      It won't really impact the way you do things like set file attributes today. Most of the changes would be under the hood. But used well, everything would become easier for the developers, and so more advanced and slicker for the user.

      • Re:Why? (Score:2, Interesting)

        by LordBodak ( 561365 )
        You're right, XML isn't the most efficient way for gconf to store data. But is rewriting the filesystem really the answer?
  • plan 9 is awesome (Score:2, Interesting)

    by Sespindola ( 542253 )

    There has been talk on the kernel mailing list of implementing 9p, the Plan 9 distributed filesystem.

    Now that it is Open Source, I hope it will happen.

    • Re:plan 9 is awesome (Score:3, Informative)

      by F2F ( 11474 )
      9p is a protocol. the file systems in Plan 9 are venti [bell-labs.com] and fossil [bell-labs.com]

      9p in itself is worthy of including in the linux kernel, if only the guys there would do it right (their track record isn't too good with Plan 9 things).

      More about 9p can be found in section 5 of the man pages [bell-labs.com].
  • by grey1 ( 103890 ) on Tuesday July 15, 2003 @06:35AM (#6441199)
    ...and not very general. Interesting for its comments on what's being tried out in R-FS & Plan9 but certainly doesn't manage to be a general summary of what's going on.

    How about the changes coming in 2.6 (like xfs support built in)?

    The article makes some good points, but for me it could have done with rewriting to make it more general, to separate the analysis of filesystem implementation problems from the technical detail, and to include more examples from other file systems.
    • How about the changes coming in 2.6 (like xfs support built in)?

      XFS is a good example of a journalling filesystem. But how about filesystems like Coda, AFS and InterMezzo, a new generation of networked (actually, distributed) filesystems allowing disconnected operation?

  • Good article (Score:5, Informative)

    by IamTheRealMike ( 537420 ) on Tuesday July 15, 2003 @06:37AM (#6441206)
    It doesn't tell you anything you can't already learn by reading up on the articles on Plan 9 and the whitepapers on namesys.com, but it's a well-written, compressed version for those who perhaps don't want to wade through a description of set-theoretic namespaces (whatever they are ;)

    The concept of reducing primitives is a good one, and based on sound mathematical theory. As already pointed out, though, you need some container format that can handle some of these new ideas - things like very small files, files as directories and so on. This is needed because when you transfer files through lossy media like the internet, or older filing systems, you don't want to lose the structure.

    As far as I know, there isn't a container format that can do this. Tar is showing its age already; I wouldn't like to see it hacked yet again. Zip is alright, but you'd need to break compatibility to add in all those extra features, and then it wouldn't be zip anymore. It'd also be inefficient.

    So, what I propose is rather than reinvent the wheel to solve this problem, we add support for "boxing" to the Linux kernel.

    A box is a filing system in a file. We already use them, to some extent - it's been possible to mount ISO images using the loopback filing system for a while. What's needed is to take this to the next level. The first thing is that we need the ability to use files as mount points, not just directories. When files and directories are the same, well, I guess that should be easier.

    The .box file format simply contains a short header with some useful metadata, like maybe a checksum, and the filing system it contains (maybe that isn't needed). The fun part is the UI. What you need is the ability to right click on any dirfile (for want of a better term) and choose the "Box it" option. You'd need a better label. What essentially happens then is that the hierarchy below this point is sucked up and transformed into an ISO containing perhaps a "Reiser4-Lite" filing system. You can forgo the journal and other things that are redundant purely for storage.

    The user has then converted their file or directory into something that can be transferred across the net, on Windows-compatible CDs and so on, without losing the inherent structure of the original.

    At the other end, choosing the "Unbox" option mounts the contents of the box using the loopback FS, mounted at the point of the file. To the user, it is seamless, far easier than zips or tarballs.
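
    With stock tools today, the closest approximation of Box/Unbox is something like:

    mkisofs -R -o Documents.box Documents/   # "Box it": pack the hierarchy into one file (Rock Ridge keeps names and permissions)
    mount -o loop Documents.box /mnt/box     # "Unbox": loopback-mount it (currently needs root, and a directory as the mount point)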

    Of course, there are lots of complications. You have to agree on the format to use inside the box, for one, because the need to have kernel mods and so on makes it more complex than just installing tar.

    I think MacOS has something a little bit similar with disk mountable images (.dmg) files, but the MacOS filing system is rather poor, and I don't know how easy it is for users to create them. Also the OS unfortunately applies some magic to them - for instance Safari will automatically extract the contents of the DMG file then destroy it when you download one (but other stuff does not, oops).

    Anyway. That's one way I can think of to prevent loss of vital structure when transferring across lossy media. There are probably others.

    • Re:Good article (Score:3, Informative)

      by Surak ( 18578 ) *
      A box is a filing system in a file. We already use them, to some extent - it's been possible to mount ISO images using the loopback filing system for a while. What's needed is to take this to the next level. The first thing is that we need the ability to use files as mount points, not just directories. When files and directories are the same, well, I guess that should be easier.

      Actually, although not seamless except from the users point of view, for a more visual representation, think ZIPs and tarballs in
    • The .box file format simply contains a short header with some useful metadata, like maybe a checksum, and the filing system it contains (maybe that isn't needed).

      I've long wanted a transparent "boxing" system like this on a per-filesystem basis, to the extent that we could eradicate and at the same time standardize the metadata format of all files, instead of having different headers to parse for mp3s, avis, mpgs, pdfs, jpgs, etc etc. The biggest issue with a system like this though, is once you
      • Yeah, but it doesn't really need the co-operation of anybody else to be useful. A filing system upgrade would improve things immensely on Linux at any rate, even if it wasn't portable to Windows or MacOS.

        I guess the main problem would be that projects like KDE/GNOME wouldn't be willing to lose portability to forms of Unix that suck at small files, needing layers of abstraction and so on. OTOH that causes problems just with Linux integration; it's just one of the balances people have to make.

    • Re:Good article (Score:3, Interesting)

      by Anonymous Coward
      Actually, ZIP is extensible. You can add new data blocks to the ZIP file, and if you try to unzip it with a decompressor that doesn't understand your extensions, they'll just be ignored. The Be ZIP implementation extends ZIP with support for its native file system attribute metadata, for example.

      TAR isn't so much showing its age as dribbling into a bib in the old folks' home.
    • Re:Good article (Score:2, Informative)

      "for instance Safari will automatically extract the contents of the DMG file then destroy it when you download one (but other stuff does not, oops)"

      This is a feature of the .dmg format known as an Internet-Enabled Disk Image. See: http://developer.apple.com/ue/files/iedi.html

    • Re:Good article (Score:3, Informative)

      by Have Blue ( 616 )
      I think MacOS has something a little bit similar with disk mountable images (.dmg) files, but the MacOS filing system is rather poor, and I don't know how easy it is for users to create them.
      It's *very* easy to create a disk image. Drag a folder onto the Disk Copy app and select a destination to store the image. There's no step 3!
  • by panurge ( 573432 ) on Tuesday July 15, 2003 @06:37AM (#6441207)
    Windows support for metadata has always sucked, recognised by every Mac user who moved to a PC and discovered that you had to tell the system what a file did by appending a clumsy tla to the end, and passing gently over the inconsistencies of the support for long and short filenames.

    If Linux and related systems move to filesystems with really powerful metadata support, presumably the lockin would be much stronger. Moving a directory from Linux to a Windows system may be possible but the programming to do it will become increasingly painful and the risk of data loss will rise. And with mainframe integrity, why would you want to, Mr. customer?

    Apart from the CS issues, is this an attempt to use the embrace, extend weapons of Microsoft against it by turning the Linux filesystem into a full mainframe system, effectively squeezing out Windows servers by a convergence between big tin and small boxes? I guess this is pretty pie in the sky but I'd like to think so.

    • Nothing is stopping Microsoft from implementing support for the same technologies.

      After all, if a few volunteers can implement NTFS support in Linux with no source and no specs, then Microsoft with all its manpower can certainly add support for Reiser4 to Windows when they have the source and specs (and maybe HR would be willing to support them, for the right price).

      Basically, it's a non-issue in my books. It's like people who are "locked in" to Word, because they use advanced features available nowhere

    • by sql*kitten ( 1359 ) * on Tuesday July 15, 2003 @09:39AM (#6442588)
      Windows support for metadata has always sucked, recognised by every Mac user who moved to a PC and discovered that you had to tell the system what a file did by appending a clumsy tla to the end, and passing gently over the inconsistencies of the support for long and short filenames.

      Actually, NTFS support for "metadata" is impressive; you can have 255 streams per file. A stream in NTFS is what Mac users call a fork, but Macs are limited to 2, data and resource. You can happily make every file on NTFS an OLE server too and do away with file extensions altogether, if you want to. Oh, and NTFS has reparse points too - think like a trigger on a database table, but attached to a file. And NTFS has journalled from day 1, whereas Linux filesystems are only just discovering this.

      So why are file extensions still in common use? Largely because people who don't know NTFS come out with statements like "windows support for metadata has always sucked" without bothering to read the documentation, so few apps take advantage of this NTFS feature.
    • Windows support for metadata has always sucked, recognised by every Mac user who moved to a PC and discovered that you had to tell the system what a file did by appending a clumsy tla to the end

      As opposed to Mac OS, where you have to tell the system what a file does by setting an even clumsier 4-character code hidden deep within a file's metadata?

      and passing gently over the inconsistencies of the support for long and short filenames.

      What inconsistencies? Correctly-written modern applications (say, th
  • Really, I think someone should get on with finishing the NTFS filesystem access in the kernel. With people migrating to XP it's really becoming more important that this driver is fixed (how long has it been declared "dangerous" for write use now?!).

    I'd really like to know why this driver has taken so long to complete - is there some information that the developers don't have access to? Some technical reason? What?!
    • by IamTheRealMike ( 537420 ) on Tuesday July 15, 2003 @06:45AM (#6441236)
      I'd really like to know why this driver has taken so long to complete - is there some information that the developers don't have access to? Some technical reason? What?!

      Yes. They don't have access to the NTFS specs. Also, NTFS is a very complex filing system, with many different versions. You don't want to get that wrong. Resizing was a more important goal, and that has been working for many months now.

      Of course, it might not be included in all distros even when completed, due to patents MS holds on the technology.

    • by spaic ( 473208 ) on Tuesday July 15, 2003 @07:33AM (#6441456)
      From the Linux-ntfs FAQ [sourceforge.net]

      3.8 How was the Linux NTFS Driver written?

      Microsoft haven't released any documentation about the internals of NTFS, so we
      had to reverse engineer the filesystem from scratch. The method was roughly:
      1. Look at the volume with a hex editor
      2. Perform some operation, e.g. create a file
      3. Use the hex editor to look for changes
      4. Classify and document the changes
      5. Repeat steps 1-4 forever
      If this sounds like a lot of work, then you probably understand how hard the
      task has been. We now understand pretty much everything about NTFS and we
      have documented it for the benefit of others: http://linux-ntfs.sourceforge.net/ntfs/index.html [sourceforge.net]

      Actually writing the driver was far simpler than gathering the information.
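
      One iteration of that loop, with everyday tools (device and mount point invented; unmount before imaging so the volume is quiescent):

      dd if=/dev/hdb1 of=before.img            # snapshot the test volume
      mount /dev/hdb1 /mnt/ntfs && touch /mnt/ntfs/newfile && umount /mnt/ntfs
      dd if=/dev/hdb1 of=after.img
      diff <(xxd before.img) <(xxd after.img)  # classify and document the changes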
  • no, no, no (Score:4, Insightful)

    by awx ( 169546 ) on Tuesday July 15, 2003 @06:38AM (#6441215)
    if that crock, that bag-on-the-side, that mess is what we have to look forward to, I think I'll switch to BSD.

    I mean, accessing owner data by travelling into a directory then backwards out of it again, like "vi /directory/..owner", is a big ugly crock.
    • Uh, nobody said that'd be the user interface you use. It's simply an implementation detail.
    • Re:no, no, no (Score:5, Insightful)

      by GammaTau ( 636807 ) <jni@iki.fi> on Tuesday July 15, 2003 @07:04AM (#6441297) Homepage Journal

      if that crock, that bag-on-the-side, that mess is what we have to look forward to, I think I'll switch to BSD.

      I mean, accessing owner data by travelling into a directory then backwards out of it again, like "vi /directory/..owner", is a big ugly crock.

      Are you so sure that you would hate it? After reading the PDF, I was thinking of two things:

      1. He suggests that files ought to be used for everything
      2. The old wisdom: "Everything in Unix is a file"

      I don't think it's any more radical than treating network sockets as files. Sure, it might feel a little weird at first, but once you got used to it, the simplicity would outweigh the clumsiness of existing implementations.

      It's also very easy to wrap together a shell script that imitates the existing implementations and put it in /bin/chown or whatever you wish to replace.
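
      For instance (sticking with the article's toy ..uid attribute, and numeric uids only for brevity):

      #!/bin/sh
      # drop-in /bin/chown sketch - illustrative, not the real tool
      uid=$1; shift
      for f in "$@"; do
          echo "$uid" > "$f/..uid"
      done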

  • Why pdfs? (Score:2, Insightful)

    by brrrrrrt ( 628665 )
    I see it happening more and more that people present their summaries, articles and technical papers on the net as pdfs. This is very inconvenient.

    PDFs are nice for printing and publishing on traditional media, because you can be sure they will be in the correct layout, etc., for the printer. But on the web, where people browse between lightweight, easy HTML documents, they're just a nuisance.

    Please, if you must publish a pdf, publish an html version next to it.
    • Most likely LaTeX (or some other TeX variant) was used to typeset the paper. There are things used in this paper that just can't be done in HTML (for instance, footnotes). Personally, I'd rather see MORE stuff in PDF rather than less. I just find them much easier to read - something to do with how Acrobat's font rendering doesn't suck. Also, PDFs are far superior for anything beyond 1000 words or so.
      • Since when can't footnotes be done in HTML? No reason not to have them at the end of the document, and you can provide anchor links from the text to the footnote and back.
    • You've got a point when you're talking about one of those multi-megabyte monsters loaded down with hi-res images, but come on, this thing is like 130kb. Even at 56k speeds, it loads quickly.
  • Questions... (Score:2, Interesting)

    by debilo ( 612116 )
    I do not know much about file systems, so I have a few questions.

    Once we see the GConf example, other possibilities immediately spring to mind. In almost all multimedia formats--such as MP3, MPEG, and OGG--there is a tagging system for storing things such as the author and title. Instead of storing those attributes in the tag--yet another namespace--store it in the file-as-a-directory. I have a file /music/Millencolin/PennybridgePioneers/RightAboutNow.ogg. I let RightAboutNow.ogg/title be "Right About
    • Re:Questions... (Score:3, Insightful)

      by roemcke ( 612429 )
      What I would like to see is something like the Hurd translators, which let you access file content through the FS API. For instance, mp3 tags would be accessed through the files "HappyBirthday.mp3/title" or "HappyBirthday.mp3/artist", but instead of storing the tags in the FS as metadata, the translator stores the tags in the mp3 file using the normal mp3 tag format. This would combine the best features of both worlds. Other natural uses of the translator would be to access tar or xml files.

      Translators shou

  • Security? (Score:4, Insightful)

    by Inode Jones ( 1598 ) on Tuesday July 15, 2003 @06:41AM (#6441228) Homepage
    Before adopting any of these ideas, one must consider the security implications of doing so.

    If we assume that the filesystem is decoupled from the access control layer in the kernel, then one must ensure that any operation that potentially affects security is adequately controlled.

    For example, on systems with POSIX_RESTRICTED_CHOWN, the following ought to be illegal:

    cp foo/..uid bar/..uid

    This can be accomplished by making the ..uid files mode 444. Without POSIX_RESTRICTED_CHOWN, the UID file is mode 644. However, we have now moved a systemwide security feature into the filesystem. If multiple filesystems are configured into one kernel, then they ought to be consistent; otherwise the security model will be flawed.

    As for things such as allowing access to an environment, doesn't that break encapsulation? It means for a certain filename, the filesystem must grovel through a user-space process to find the environment. Also, if a change in some external environment immediately affects some partially-related processes (e.g. daemons started from that shell), then a whole new raft of security holes will come up based on a process' environment or filesystem layout changing unexpectedly.

    Cool ideas, but let's be careful lest we make a steaming pile of Swiss cheese.
    • Swiss Cheese (Score:3, Insightful)

      by Anonymous Coward
      Cool ideas, but let's be careful lest we make a steaming pile of Swiss cheese.

      Evidently you haven't used ReiserFS. It already does this.

      ReiserFS only journals filesystem metadata. Because it uses a balanced B+ tree allocation scheme for file blocks, when the system crashes the last pair of blocks written will often be swapped with respect to their files. For example (this has happened to me and separately to a friend), if you modify /etc/passwd just before the system crashes, you'll be unable to lo
      • Re:Swiss Cheese (Score:3, Interesting)

        by johannesg ( 664142 )
        Organize my MP3 collection with FS metadata and lose it all when I try to move to another FS? What is he thinking? Is he thinking at all?

        AmigaOS had a single meta-data field in its various filesystems, called the "file comment" or "file note". While it was clearly far less powerful than what is proposed here, it was incredibly useful for lots of purposes.

        The problem you describe, losing the meta-data when moving to another filesystem, was non-existent on the Amiga for two reasons: all filesystems imple

      • Re:Swiss Cheese (Score:3, Interesting)

        by sql*kitten ( 1359 ) *
        syslogd wrote a panic message to /var/log/messages as the system went down, and you'll find this message (as well as the rest of its 4k block) in /etc/passwd, while your changed password file may be found at the end of /var/log/messages. This is a feature of ReiserFS, not a bug.

        Care to explain that? How is corrupting one of the files essential to the normal operation of your system not a bug? The whole point of a modern filesystem is to recover cleanly; even fsck can run unattended (mostly) but what you'r
  • by ajs ( 35943 ) <{moc.sja} {ta} {sja}> on Tuesday July 15, 2003 @06:43AM (#6441234) Homepage Journal
    I'm really getting tired of the ever-creeping assertion that transactions are required for [x]. At first x was ACID-compliant relational databases, and such was true because ACID was defined as such. However, then I started to see assertions that relational databases had to be ACID-compliant (mostly from the anti-MySQL camps who were ignoring the long history of highly valuable, non-ACID relational databases).

    Now, in this article, I see the assertion that databases in general require transactions, and thus cannot be supported by a filesystem.

    Worse, the logic is self-refuting, as the article previously states that a filesystem is a database, just a limited one. As it happens, POSIX-type filesystems are quite powerful, and let's not kid ourselves into thinking that they have not served us well for 20-30 years! Yes, changes are coming and I'm frankly quite impressed by Hans Reiser's accomplishment in finally coming up with a balanced-tree-based filesystem. Many have tried and failed where he succeeded.

    That's because his was a great step forward, not because the old UNIX filesystems weren't also. Let's stop trying to re-define terms so that we can explain why the last 20 years were the dark-ages. They simply were not.
    • by afidel ( 530433 ) on Tuesday July 15, 2003 @06:58AM (#6441281)
      Transactions are required for reliable databases and filesystems. If you don't mind occasional corruption then you can throw out transactions; otherwise you need them and you need to eat the cost (memory, access speed, CPU overhead, whatever). Since PCs are generally faster than most people need at the moment, making them more reliable seems like a worthwhile goal.
  • by N_gaAdy ( 688653 ) on Tuesday July 15, 2003 @06:46AM (#6441239)
    I agree that we need a revolution in how filesystems work inside an operating system, but it seems that the arguments made in this paper had a lot of holes.

    For one thing, the case for changing a filesystem should not really be concerned solely with space or metadata. I think security, speed of data retrieval, and self-correcting error handling should be at the center of the new systems.

    The reason speed of data retrieval matters more than data size is that hard drives are growing in capacity much faster than in speed. In five years, we may have 20 terabyte drives, but the access speeds will still be horrible.

    Security and error correction are obvious points that should be implemented at a systemwide level. When these features are systemwide, management becomes much easier for all system users.
  • Sorry, but (Score:5, Insightful)

    by Morth ( 322218 ) on Tuesday July 15, 2003 @06:55AM (#6441268)
    This article seems to be just the author brainstorming, or feeling excited about ReiserFS. It's hardly a "summary of developments in the filesystem". Now if he were asking for opinions on his article it'd be fine, but he's not, so I'll just discard this as another piece of non-news.
  • Negativity (Score:3, Informative)

    by listen ( 20464 ) on Tuesday July 15, 2003 @06:56AM (#6441273)
    I'm surprised at the negativity of some of the comments here, moaning that POSIX semantics are perfect and nothing else can possibly be countenanced...

    Plan9 namespaces and Reiser4 really do bring a lot more to the table in terms of useful expanded semantics and utility than all the POSIX filesystems do. POSIX extended attributes are very limited, and some filesystem implementors seem keen to implement them in the most restricted way possible (e.g. size limitations in ext3).

    The annoying thing with ReiserFS is that the VFS will lag it by a few versions, and very, very few apps will make any use of its special system call. Sigh. We'll be stuck with databases in a file for a long while yet.

    One thing I would like to know about ReiserFS is how attributes are attached to directories. If they are just small files in the "directory" part of a file, what distinguishes them from children of the directory? Or are attributes just banned from dirs? Seems limiting.
  • On the .. syntax (Score:5, Informative)

    by IamTheRealMike ( 537420 ) on Tuesday July 15, 2003 @07:01AM (#6441289)
    Oh, I should probably mention - if you read the whitepapers available on ReiserFS, the "foo/..attr" syntax is just a toy syntax, made up on the spot to demonstrate the idea of files within files.

    Nobody (apart from perhaps this guy) has ever claimed that this syntax will actually ever be used, or needed. There are other possible syntaxes available, and in fact one long term blue sky plan for RFS is to allow many different types of syntax within the same file path, including for instance things that vaguely resemble database queries.

    So, don't get hung up on the syntax given in this article.

    • Re:On the .. syntax (Score:3, Informative)

      by hansreiser ( 6963 )
      No, it is not a toy, it is a style convention.

      In regards to attributes on attributes: some (regular unix plugin) files have relationships to other files that are implemented using stat data plugins, but the stat data files don't have stat data themselves. There are lots of specific plugin implementation details, but it works without infinite recursion problems.
  • by sholden ( 12227 ) on Tuesday July 15, 2003 @07:02AM (#6441293) Homepage
    We had plan9 machines here 10 years ago...

    I don't think any exist anymore, in fact I don't even think the inferno install works anymore.

    But anyway, it isn't a "new advance" anymore.
  • by xyote ( 598794 )
    People keep trying to use file hierarchies as databases. You can do a lot of stuff, but arrays and m-to-n forward and reverse mappings aren't among the things you can do with filesystems. That's why you have databases and XML.
  • No need for LDAP? (Score:5, Interesting)

    by jackalope ( 99754 ) on Tuesday July 15, 2003 @07:10AM (#6441313)
    It seems that the author presumed that the only use of LDAP is to provide passwords for user authentication. While that is a common use of LDAP, it is not the only one.

    It would seem that having a file system that is LDAP aware could be extremely useful. Imagine if your LDAP tree were reflected as a tree in your file system. You wouldn't need to embed LDAP calls in your application, it would just be data in your file system. So looking up an attribute for the current user, or a user, would be as simple as reading a file that holds the value of the attribute.
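
    Concretely, the difference might look like this (the mount layout is invented; the ldapsearch line is today's real syntax):

    ldapsearch -x -b "uid=jdoe,ou=people,dc=example,dc=com" mail   # today: explicit query, extra tooling
    cat /ldap/example.com/people/jdoe/mail                         # LDAP-aware FS: just a read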
    • Imagine if your LDAP tree were reflected as a tree in your file system

      Ack, I dunno about this one. If you have a true file system (even in memory), imagine the resources it would consume: one directory for every container, one file for every attribute at the very least, and perhaps space for its value, and on top of that it would be local to every system. It would also need to be synchronized with the LDAP server on every system. At most you would have an entire local copy of the LDIF(s) which could be *
      • Re:No need for LDAP? (Score:2, Interesting)

        by jackalope ( 99754 )
        One wouldn't need to copy the LDAP data into a file system. Instead, provide a view into the LDAP directory via filesystem calls. In other words, the filesystem driver would translate calls to it into LDAP calls to the LDAP repository, wherever it may be. The only persistent data that would need to be stored for the file system would be how to connect to the LDAP server farm.

        In regards to your second question. The standard getxxbyxx() calls are useful for returning username/uid/gid etc. But if the
  • What I REALLY want (Score:5, Interesting)

    by afidel ( 530433 ) on Tuesday July 15, 2003 @07:16AM (#6441343)
    Is for someone to come up with a real unlimited-snapshotting filesystem for Linux. I don't want to use user-mode hacks (as nice as they are, rsync-style snapshotting isn't reliable enough), or snapshotting that only allows a shadow copy of the entire volume. I want to be able to tell the users that they can just go into ~/.snapshot/time (where time can be hours, days, or weeks in the past) and copy the file they messed up back into their home directory. Basically I want the most useful feature of NetApps without the HUGE markup =) The savings in admin time, both in user interaction and in reduced need to do tape retrievals and file restores, are immense.
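
    That is, the NetApp-style recovery flow (snapshot names vary by configuration):

    ls ~/.snapshot/                          # e.g. hourly.0 ... hourly.5, nightly.0 ...
    cp ~/.snapshot/hourly.2/thesis.tex ~/    # pull back the pre-mistake copy yourself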
    • Load Red Hat 9.0; it has a volume manager designed for exactly that purpose...
    • by IamTheRealMike ( 537420 ) on Tuesday July 15, 2003 @07:54AM (#6441577)
      That kind of thing would benefit from this, especially if integrated into the GUI in a nice way.

      In effect, it'd be similar to what Linux does with memory - empty disk space is useless! What is the point of that? Might as well use it for something. Storing old versions of files is an ideal use for it, and combined with an LVM means that the more space you have, the further back in time you can go. That's far more flexible and useful than storing stuff in a recycle bin.

      Of course, you run into problems straight away if you aren't using lots of small files. Are you really going to store a new copy of that 5meg presentation every time you hit save? Bad idea. You only want to store what has changed.

      But - how? Doing it with text is easy enough, we have diff and patch for that. But what about more complex formats?
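
      For the easy text case, the machinery has existed for decades:

      diff -u report.v1 report.v2 > report.delta   # store only what changed between saves
      patch report.v1 < report.delta               # replay the delta to reconstruct v2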

      Ahhh, now we understand why minimizing primitives is useful in the real world. If the internal structure of a file is split into many small files, each representing a facet of the presentation (each slide, each bullet point on the slide, the font weight of that bullet point, etc.), then suddenly it becomes much simpler to write a generic differential analyzer; we can just snapshot the mini-files that changed.

      As a result, we increase efficiency to a level at which it becomes feasible to keep the undo transaction buffer on disk - just imagine how much hassle we'd save if all apps used this reliably.

      There are still plenty of details missing, of course - some things still wouldn't work well, in particular large BLOBs with no structure, like say images or audio files. I guess if you were smart you could leverage the app's transaction buffer, but still..... that would lead to interop problems.

      Still, there are many unexplored possibilities that appear when you increase the abilities and even efficiency of low level parts of the OS like this.

  • by ddubois ( 470043 ) on Tuesday July 15, 2003 @07:23AM (#6441389) Journal
    I find this very interesting.
    For example, I like the current devfs and procfs (although not perfect). They help me a lot to "debug" new hardware connections. Simple and coherent.

    I also imagine some RDB support. Imagine a "select * from account" where account would be a simple directory. Would be nice to use "find" for the where clause :-)
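
    Something like this, say, with one file per column (layout invented for illustration):

    # select name from account where balance > 0
    for row in account/*/; do
        [ "$(cat "$row/balance")" -gt 0 ] && cat "$row/name"
    done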

    Imagine also implementing OO classes with inheritance using symlinks... And more...

    Yes would be very nice!
    D.
  • I just want... (Score:2, Interesting)

    by Anonymous Coward
    A filesystem that goes wrong properly...

    Imagine your filesystem is a library...

    Imagine you drop a bomb on it...

    Books and pages scattered all over the place... Yet you can still work out which page belongs to which book, and where on the shelves they used to sit..

    Having been caught short by LVM and reiser before (which just couldn't deal with a 46GB gap in the filesystem where a disk used to be), it seems to me that no-one's made a filesystem that breaks properly...

    For me.. speed is not an issue.. nor is
  • Started out good... (Score:3, Interesting)

    by Korgan ( 101803 ) on Tuesday July 15, 2003 @08:02AM (#6441649) Homepage
    The more I read this, the more it reminded me of the marketing version of how Apple would like us to think of Resource Forks.

    Truthfully, there isn't exactly a lot of difference in the concept or the idea. Implementation is vastly different but the idea remains very similar.

    Why do I want to accept this sort of idea any more than I want to accept resource forks? If I copy a file with resource forks from one of my macs to nearly any other OS on the market that's not specifically configured to support them, I lose that information. Why do I want to continue this?

    I use HFS+ because I have to. To get all the functionality I want out of my macs, it's the only real option I have. But for anything other than system-level files that are never likely to be copied to another machine, this is just a waste of time to me.

    Next question. Say I do run this file system on my machines. I build up a heap of data and I'm using "files as directories" to store metadata about those files. How do I back it up? Don't even try to tell me "rebuild tar". Haven't we put tar through enough trying to extend its capabilities? I wouldn't touch a file system with these capabilities without a guaranteed way of backing up ALL the data. Otherwise it's truly just not worth the effort.
  • by irw ( 204684 ) on Tuesday July 15, 2003 @08:04AM (#6441663)
    I wish people with clever ideas to redesign POSIX namespaces would spend ten years in system administration first so they realise what's involved with managing REAL WORKING SYSTEMS.

    Some of the ideas might well lead in useful directions, but some (at least as described in the paper) are plain silly. viz:

    1) with overlayed mounts:

    suppose my home dir is mounted read-write over a read-only system root, and I do not have a "/bin/prog" in my home dir. Consider:

    cp /bin/prog /bin/prog

    First time, it copies the system /bin/prog into my home fs - Counter-intuitive to the path semantics. If I run this a second time it copies my copy of /bin/prog over itself - Inconsistent.

    2) Attributes in the namespace

    We have a rather carefully written setuid chown/chgrp/chmod replacement which can be run by users in an "admin" group, and allows devolution of first-line support tasks to nominated users. It won't touch files whose uid/gid is below 100, so they can only touch non-system files.

    If an attribute (the file uid) is file/..uid and cp is supposed to handle what chown does, the above breaks big-time. We now need a custom cp replacement. Either that or we have to add an ACL for the admin group to every file we want them to manage, which is a great deal of effort and likely to end up inconsistent.

    Contrary to the paper, setuid and PARTICULARLY setgid is NOT going to go away in the real world any time soon, as far as files are concerned. Ports less than 1024 are a different matter and I agree with the document.

    3) Consider the number of file descriptors involved if /etc/passwd becomes a hierarchy of files. Just logging in one user will involve multiple open()-read()-close() operations. Whilst these might be efficiently implementable at fs-level, it is still very inefficient in user space, or will at least require a dramatic rethink of unix tools.
  • by Zog The Undeniable ( 632031 ) on Tuesday July 15, 2003 @08:16AM (#6441765)
    I assume Plan9 is an ironic nod to the "worst film ever". When I develop my new filing system, which will only allow numeric characters in filenames, will delete the MFT every time the computer is rebooted, and will require a new directory for each file added to the system - that FAT16 limit of 512 was FAR too generous - I'm going to call it BattlefieldEarthFS.

"If there isn't a population problem, why is the government putting cancer in the cigarettes?" -- the elder Steptoe, c. 1970

Working...