Linux 2.4.5 Tested With Six Filesystems
Denis Lackovic writes: "we tested kernel 2.4.5 with six filesystems:
ext2, ext3, jfs, reiserfs, vfat, and xfs. You can find out the results here." The tests cover a number of situations, on decent but unremarkable hardware.
Re:Tests are skewed (Score:1)
This test puts essentially no stress on the broken parts of the 2.4.5 VM (which are mostly in the swap and victim page selection areas). The only part of the VM getting stressed here is the readahead and dirty buffer flushing code, both of which work quite well at the moment.
More likely than not, the machine didn't hit swap at all (or very very little; the 2.4 VM will sometimes preemptively swap out very old pages to make more room for the page cache).
vmstat results during the test would tell us more, of course.
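Along those lines, here is a rough sketch of how one might capture vmstat output alongside a copy test (the helper name, paths, and workload are made up for illustration). Nonzero si/so columns in the log would confirm the test actually hits swap:

import subprocess

def run_with_vmstat(workload_cmd, interval=1, logfile="vmstat.log"):
    # Start vmstat sampling in the background; one line per interval.
    with open(logfile, "w") as log:
        vmstat = subprocess.Popen(["vmstat", str(interval)], stdout=log)
        try:
            # Run the workload to completion.
            subprocess.run(workload_cmd, check=True)
        finally:
            vmstat.terminate()
            vmstat.wait()

if __name__ == "__main__":
    # Hypothetical workload: copy a large file, as in the article's tests.
    run_with_vmstat(["cp", "bigfile", "/mnt/test/bigfile"])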
Re:close times (Score:2)
This is the exact opposite of my experience (with XFS and reiser).
Deletes on reiserfs are dangerously fast (no time to ctrl-c that accidental rm -rf). XFS deletes are painfully slow; I've noticed this on both IRIX and Linux.
I've no experience with ext3 or jfs.
close times (Score:3)
xfs and ext3 are pretty fast for writing.
The deletes are a different story: ext3 and reiser are quite a bit slower than jfs and xfs.
So, I come to a different conclusion than the article does. For best all-around performance, use xfs.
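To see where the time actually goes, here is a minimal sketch (path and size are made up) separating the cost of write(), fsync(), and close(); on a filesystem doing delayed writes, the write() itself usually returns fast and the flush cost shows up later:

import os, time

def stamp(label, t0):
    now = time.time()
    print("%-6s %.3fs" % (label, now - t0))
    return now

data = b"x" * (64 * 1024 * 1024)  # 64 MB of soon-to-be-dirty data
fd = os.open("testfile", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
t = time.time()
os.write(fd, data)   # typically fast: the data just lands in the page cache
t = stamp("write", t)
os.fsync(fd)         # force the dirty pages out to disk
t = stamp("fsync", t)
os.close(fd)         # cheap once the data has already been flushed
stamp("close", t)
os.unlink("testfile")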
ext3 stability (Score:4)
Now, which one of all these is considered to be the most stable? All the performance in the world won't matter if the code hasn't matured to a point where files won't be scrambled.
Re:Tests are skewed (Score:3)
I have seen Linux go into heavy swapping when copying a large file. I have seen kswapd processor utilisation go through the roof when copying large files.
So there is stress on the broken parts of the 2.4.5 kernel. Maybe it decides not to swap, but that decision takes an awful lot of time to make.
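One crude way to check that is to sample kswapd's CPU usage while the copy runs; a sketch assuming a procps-style ps (the process is named kswapd0 on newer kernels):

import subprocess, time

# Print kswapd's reported %CPU once a second for 30 seconds.
# ps averages %cpu over the process lifetime, so look for it
# climbing while the copy is in progress.
for _ in range(30):
    out = subprocess.run(["ps", "-C", "kswapd,kswapd0", "-o", "comm=,%cpu="],
                         capture_output=True, text=True).stdout.strip()
    print(out if out else "kswapd not found")
    time.sleep(1)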
Bit Pointless? (Score:2)
When serving an NFS or SMB share, which filesystem performs best? In a shop with digital graphics files >100 MB? In a law office with files <100 KB?
When used as a web server, with lots of static content? With dynamic content using Oracle? mySQL?
It's nice enough, but does it tell us anything other than (again, for example) the number of MFLOPS for a given CPU?
Heck, maybe not even that much.
Fun toys! (Score:2)
1) VFAT sucks. Don't use it ever. No surprise.
2) JFS sucks on Linux. No surprise--it'll take a while and more development than they've put in so far.
3) EXT2 sucks because it isn't a journalling filesystem, although it has quite good performance.
4) XFS doesn't suck, but it's not great.
5) Reiserfs doesn't suck, but it's not much better than XFS.
6) EXT3 doesn't suck, but it's still not out of beta!
Conclusion: I want EXT3 to become stable released code!
Re:close times (Score:2)
I believe you've misrepresented the results. Two sets of tests were performed -- the first with one medium-large file (645 MB), and the second with 10,000 small files (totaling about 550 MB).
In the one-large-file delete, the reiserfs and ext3 filesystems were slower. But in the many-small-file delete, the reiserfs and ext3 filesystems were faster. Similarly, the write tests depend a lot on the number and size of the files being written.
The final word is -- benchmark for your application, not someone else's. And, like the above comment says, none of this matters if your data plonks.
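For anyone who wants to do exactly that, here is a rough harness along the lines of the two test shapes described above (the 645 MB and 10,000-file/~550 MB figures come from the article; everything else is made up):

import os, shutil, time

def timed(label, fn):
    t0 = time.time()
    fn()
    print("%s: %.2fs" % (label, time.time() - t0))

def write_big(path, size_mb=645):
    # One medium-large file, written 1 MB at a time.
    chunk = b"x" * (1024 * 1024)
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)

def write_small(dirpath, count=10000, size_kb=55):
    # ~10,000 files of ~55 KB each, roughly 550 MB total.
    os.makedirs(dirpath, exist_ok=True)
    chunk = b"x" * (size_kb * 1024)
    for i in range(count):
        with open(os.path.join(dirpath, "f%05d" % i), "wb") as f:
            f.write(chunk)

if __name__ == "__main__":
    timed("write one 645 MB file", lambda: write_big("big.dat"))
    timed("delete it", lambda: os.unlink("big.dat"))
    timed("write 10,000 small files", lambda: write_small("smalldir"))
    timed("delete them", lambda: shutil.rmtree("smalldir"))

Run it once per filesystem you care about, and unmount/remount (or at least sync) between phases so the page cache doesn't flatter the numbers.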