Tom's Hardware Looks At WinFS 809
Alizarin Erythrosin writes "Tom's Hardware Guide has an article about the new WinFS file system. The article talks first about some of the problems and advantages with FAT[16|32] and NTFS, then talks briefly about WinFS. Here is the summary: 'Microsoft is breaking new ground with Longhorn, successor to XP. The upcoming WinFS file system will be the first to be context-dependent, and promises to make long search times and wasted memory a thing of the past. Today, THG compares it to FAT and NTFS.' Personally, I still have reservations about using a relational database to keep track of files. Unless they can keep the overhead to a minimum, I can't see it being as efficient as a file system should be."
other FSs are out there (Score:3, Interesting)
Again? (Score:1, Interesting)
db filesystem (Score:5, Interesting)
BeOS used indexing for certain attributes, and it is GREAT. Maybe someone is just sour that linux didn't do it first?
I really don't like this idea.... (Score:2, Interesting)
Re:other FSs are out there (Score:5, Interesting)
Re:db filesystem (Score:5, Interesting)
I gathered that the quote was alluding to the fact that while the BFS did initially use a full relational database backend, it performed very poorly. Be replaced the backend with a more conventional one, but kept the SQL-like interface to it. It increased performance, but just wasn't quite as cool anymore. Maybe now that PCs have increased in power by several magnitudes since Be last tried this, Microsoft may actually be able to pull it off.
nothing new (Score:3, Interesting)
You think? (Score:1, Interesting)
Re:For lots of files... (Score:3, Interesting)
I ran a Mac lab where a lot of the machines had 20MB drives, and that wasn't all that long ago. They used to sell a 10MB drive (I forget how ungodly much it cost) for Apple ][s. Apple DOS 3.3 could only recognize floppy-sized chunks, about 140KB IIRC, so the thing had to be partitioned into something like 80 pseudo-drives. I never saw one physically, but I can imagine what a P.I.T.A. that was.
WinFS is on top of NTFS (Score:5, Interesting)
This was actually confirmed at WinHEC:
"Microsoft has scaled back its 'Big Bang', and its Future Storage initiative will build on, rather than supersede the NTFS file system, when the next version of Windows 'Longhorn' appears in 2005."
"WinFS is not a file system
NTFS will be the only supported file system in Longhorn, from a setup and deployment standpoint, though the OS will, of course, continue to support legacy file systems like FAT and FAT32 for dual-boot and upgrade purposes. The oft-misunderstood Windows Future Storage (WinFS), which will include technology from the "Yukon" release of SQL Server, is not a file system, Mark Myers told me. Instead, WinFS is a service that runs on top of--and requires--NTFS. "WinFS sits on top of NTFS," he said. "It sits on top of the file system. NTFS will be a requirement."
Interestingly, when WinFS is enabled, drive letters are hidden from the end user, though they're still lurking there under the covers for compatibility with legacy applications. This reminds me of when Microsoft added long file name (LFN) support in Windows 95, but kept using short (8.3) file names under the covers so 16-bit applications would still work. Expect this to be the first step toward the wholesale elimination of drive letters in a future Windows version."
Re:Again? (Score:4, Interesting)
FAT16 was limited to 2GB partitions; that affects normal users.
Now, if the database file system works the way I imagine it would, it will be a bad thing for the normal user's more tech-savvy friend.
I have spent years explaining to relatives that the same file name in 2 places is 2 different files.
Now I must spend time explaining that if you browse to Documents, then Taxes, and edit the file blah, it will affect the same file blah showing up under other views as well.
People will be confused by this, I believe. And I also think the techies calling it stupid would benefit from it greatly; I know I would love to organize things with tons of logical ways to browse them.
But I am not some overpaid market researcher so what do I know.
Oh so informative! (Score:5, Interesting)
"There has been much speculation"
Uh huh.
"Win FS is modeled on the file system of the coming SQL server"
Uh huh.
"In its latest build (M4), Longhorn contains few hints of the technology's imminent implementation."
Uh huh. You're saying you don't know anything, yeah, I'm getting that part.
"One of those is more than 20 MB in size and bears the name winfs.exe."
Neat.
"In the end, Win FS will probably emerge as an optional file system beside FAT and NTFS. It's also possible that Win FS will supersede its predecessors, however."
So in the end, it'll be A... but it is also possible it'll be B. I see.
"That would most likely produce problems for multi-boot systems"
An astounding feat of logic Mr. Spock!
This is the most uninformative article I've ever had the displeasure of reading on Tom's Hardware. These people know exactly nothing more about WinFS than any of the rest of us have heard in rumors and vague press releases.
Difference between FAT32 and NTFS (Score:3, Interesting)
NTFS has many more advantages than FATxx. The official list can be found here [microsoft.com]. Granted, this benefits the corporate user more than the home user.
At the very least, NTFS offers a quicker way to hide porn than FAT32.
Re:I really don't like this idea.... (Score:2, Interesting)
With an MS database record they have got to be kidding! I know that Windows CE uses a DB format for storage, but I want to see it under max load with n tasks accessing it and a planet's worth of data to pull from, with a good percentage of it changing. Then crash it and try to restore the mess. What would the resulting speed be? The recovery time?
I guess you will need a 4 to 6 GHz system with an insanely fast HD array and memory up the wazoo!
Instead of revamping the wrapper, why not improve the survivability of data, OS and programs alike? When will they get it into their heads that the OS should not be a swiss army knife with cheap blades that are dull, useless, break, and are hard to open!
I can't remember exactly, but there was something based on something called "tumblers" that was a way to access data. Read about it in an ancient issue of Byte magazine. Had to do with objects and wandering content.
Good idea (Score:4, Interesting)
I don't understand the concerns of the poster regarding performance (at least without evidence of truly dismal performance): no one is forcing anyone to use the FS if they are not satisfied with performance.
For most users, the main bottleneck in storage is their own organizational faculties. I used to be exasperated when users didn't know where they put their files, but once you get past the 100GB mark, it becomes very understandable.
Consider what most people use their massive storage for these days: videos, music, multimedia, games. Not only is this the kind of content that SHOULD be stored in a database, it's the kind of content that is ALREADY being handled through a database because the filesystem is not enough: people are using their media players, P2P programs and other software to handle their files, up to the point they rarely ever interact with the filesystem unless they lost a file.
For most users, the performance penalty is well worth the price.
For those for whom it is not, it doesn't take a genius to realize you can use more than a single filesystem, and perhaps rediscover the joy of proper partition organization: keep the OS and applications separate from your data, and you can use your highly efficient filesystem for the first and your metadata-loaded one for the second.
Better, not best (Score:5, Interesting)
However, the best solution is that used by EROS [eros-os.org], which is for the kernel not to provide a file system at all, but instead provide Orthogonal Persistence.
This is a much simpler layer for applications, since it doesn't require them to explicitly access memory and disk separately. It is also much simpler to recover from, because the entire state of the disk is always known to be coherent with itself at any point in time, without an expensive journal.
In terms of performance - it beats the hell out of explicit disk access systems (Both conventional and database systems) because it performs big continuous reads and writes (that don't move the head much) rather than small writes on metadata and file data that forcibly jump the disk head around.
In EROS then, on top of the Orthogonal Persistence, you can create any arbitrary Objects you want easily - because they're just normal processes with normal memory. Conventional File Systems become useless and objects implemented by processes become a much better and more powerful alternative to files.
A relational database of the user objects is then much more powerful than a string hierarchy, but this is all the user's choice - and not hardcoded into a kernel.
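To make "orthogonal persistence" concrete: the idea is that the whole object graph is checkpointed as one self-consistent unit, rather than exposed through a file API. A toy Python sketch of the concept, with pickle standing in for EROS's kernel-level checkpointing (which it obviously is not; all names are invented for illustration):

```python
import io
import pickle

class PersistentStore:
    """Toy analogue of orthogonal persistence: the entire object graph is
    checkpointed as one unit, so a restore always yields a self-consistent
    state. There is no separate 'file' layer for applications to manage."""

    def __init__(self):
        self.objects = {}  # name -> arbitrary Python object

    def checkpoint(self, buf):
        # One big sequential write of the entire state (cf. EROS's large
        # contiguous checkpoint writes instead of scattered metadata updates).
        pickle.dump(self.objects, buf)

    @staticmethod
    def restore(buf):
        store = PersistentStore()
        store.objects = pickle.load(buf)
        return store

# Simulated crash and recovery: checkpoint to a buffer, then restore from it.
store = PersistentStore()
store.objects["counter"] = 41
store.objects["counter"] += 1

snapshot = io.BytesIO()
store.checkpoint(snapshot)
snapshot.seek(0)
recovered = PersistentStore.restore(snapshot)
```

Since the checkpoint is taken as a single unit, the recovered state can never mix old and new versions of different objects, which is the coherence property the comment describes.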
Truth be told... (Score:5, Interesting)
A filesystem based on a relational database will have some characteristics to which today's filesystems can only aspire:
1. ACID - In every way that the underlying database supports Atomicity, Consistency, Isolation, and Durability [techtarget.com], so now will the filesystem. In so far as the database is robust, the filesystem will be robust. Please spare me the comments about the supposed unreliability of SQL Server. It's certainly more reliable than NTFS, which is itself very good.
2. As an offshoot of the above - Imagine multiple file updates to a filesystem which is transactional! Imagine that transaction failing and being able to just roll back the changes without touching every file in your program! Imagine being able to make file changes programmatically without having to worry about locking, because the engine will do it for you (just handle any exceptions)! Yeah, you could do all that today if you like. But it takes extensive work to make it happen.
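A minimal sketch of the multi-file rollback idea, using SQLite purely as a stand-in for the Yukon engine (an assumption for illustration; the table, column, and function names are all made up):

```python
import sqlite3

# In-memory database standing in for a transactional file store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (path TEXT PRIMARY KEY, contents TEXT)")
db.execute("INSERT INTO files VALUES ('a.txt', 'old a'), ('b.txt', 'old b')")
db.commit()

def update_both(db, ok):
    """Update two 'files' atomically: either both change or neither does."""
    try:
        with db:  # the sqlite3 connection context manager = one transaction
            db.execute("UPDATE files SET contents='new a' WHERE path='a.txt'")
            if not ok:
                raise RuntimeError("simulated failure mid-update")
            db.execute("UPDATE files SET contents='new b' WHERE path='b.txt'")
    except RuntimeError:
        pass  # the 'with db' block already rolled back the partial update

update_both(db, ok=False)
after_failure = [r[0] for r in db.execute(
    "SELECT contents FROM files ORDER BY path")]

update_both(db, ok=True)
after_success = [r[0] for r in db.execute(
    "SELECT contents FROM files ORDER BY path")]
```

After the failed call neither "file" has changed; after the successful one, both have. That is the no-partial-updates property you'd otherwise have to build by hand with lock files and careful cleanup code.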
3. Operational characteristics - We can run queries against databases. We can index them. We can cluster them. We can replicate them. We can access them easily from any development platform you can imagine. Now your filesystem is a database. The possibilities make me shiver!
4. Another offshoot from #3 - Security. Databases are inherently better than filesystems (IMNSHO) at enforcing security and enabling administration of security.
I only have reservations about one issue with the database-as-filesystem idea: recovery. Currently, all the good low-tech filesystem recovery tools are based on the file allocation table sort of scheme. Obviously, databases obsolete this category of tried and true tools. However, good tools already exist that allow recovery of relational databases. It's just a matter of getting easily accessible tools of this sort into the hands of the professionals that need them. It's more of a training issue I guess, but it will still need addressing.
I know many people will have a knee jerk reaction to this idea, and I understand why. But I would encourage people to keep an open mind to this. While there will probably be some issues with the idea, there's so much more that could easily be done with a filesystem on top of a database than could be done easily (or well) with a traditional filesystem.
And for you hard-core naysayers out there, you have to ask yourself this: If this is such a bad idea, then why did Oracle provide this as a feature too? [oracle.com]
Good idea, but ONLY much further out... (Score:4, Interesting)
Mid-Term: the FS finally works, and allows easier retrieval by relevance, author, source, etc. in ways that we can just dream of now. It's the kind of thing we didn't realize we needed until we had it... until it inevitably blows up, as all MS products must do eventually. But when it works, we will be fairly happy to have it... especially end users, most of whom can't figure out a hierarchical file system in the first place.
Far-Term: the FS is finally able to use its relational roots to distribute filesystems over multiple processors in a cluster or over a network. Such a system would support atomic, distributed file updates by threads of processes on differing processors (including HyperThreaded procs). Imagine a virtual filesystem that can span your whole-house network, with a single file system image... in WINDOWS.
So I guess my view is: painful in the near-term, but may be cool to have when they get it right.
Re:Pretty thorough article (Score:4, Interesting)
http://www.cs.wisc.edu/~bolo/shipyard/hpfs.html
Fast != Fast (Score:4, Interesting)
I recently installed a Win2K server that is blindingly fast at finding documents and such... but horridly slow at serving up portions of files, for things like legacy database programs. Three of the customer's applications started running at 1/4 speed.
It got so bad, even after all the "fix win2k speed" patches, that we re-introduced the 200MHz NT4 server to feed the database apps, and the dual-processor 2GHz system just serves up documents!
Good idea, bad implementation (Score:5, Interesting)
A directory tree is a very useful structure, at least to the software. Similar stuff is grouped together, and easily cached. It provides a very clean and simple way of putting data somewhere and getting it back later. This should not lightly cast aside.
So, you want to use a relational database to keep track of files? Go for it, but instead of keeping track of the files themselves, keep track of their paths. Let the filesystem do the efficient storage, and the database do the efficient lookups. The database can be made faster and smaller, the filesystems can remain as fast as they are, and the files are still there even if the database gets corrupted.
Put hooks wherever necessary to update the database when the filesystem changes. For example, put a database in the root of each filesystem. Use a stacked mount to mount that disk, so when interesting things happen, the kernel tells a userspace process that updates the database. Then, make some standard libraries that use the database. Make file browsers that can query it, but pass the path to programs. Make save dialogs that can also save metadata about the file, and open dialogs that can search for it. Use LUFS or FUSE to make directories that correspond to queries.
This is just as effective as what MS is doing [theregister.co.uk], but it's more efficient, it's more compatible, and it doesn't reinvent the wheel.
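The paths-in-a-database approach above can be sketched in a few lines, with SQLite as the index (the helper and column names here are invented; in the real design the hook would live in the kernel or a stacked mount, not in a save function):

```python
import os
import sqlite3
import tempfile

# The metadata index stores paths and attributes, never file contents,
# so a corrupt index costs you lookup speed, not data.
index = sqlite3.connect(":memory:")
index.execute(
    "CREATE TABLE meta (path TEXT PRIMARY KEY, author TEXT, topic TEXT)")

root = tempfile.mkdtemp()

def save_with_metadata(path, data, author, topic):
    """The 'hook': write the real file, then record its path and metadata."""
    with open(path, "w") as f:
        f.write(data)
    index.execute("INSERT OR REPLACE INTO meta VALUES (?, ?, ?)",
                  (path, author, topic))

save_with_metadata(os.path.join(root, "memo.txt"), "hi", "alice", "taxes")
save_with_metadata(os.path.join(root, "notes.txt"), "yo", "bob", "taxes")

# Efficient lookup by attribute; each hit is a plain path any program
# can open, with no database client required.
hits = [row[0] for row in index.execute(
    "SELECT path FROM meta WHERE topic = ? ORDER BY path", ("taxes",))]
```

The files remain ordinary files on the ordinary filesystem; only the lookup goes through the database, which is the division of labor the comment is arguing for.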
Re:WinFS is on top of NTFS (Score:4, Interesting)
The article that i got some of that information from was from The Register: http://www.theregister.co.uk/content/4/30670.html [theregister.co.uk]
Also, there is more information here: http://www.winsupersite.com/showcase/longhorn_pre
New FS (Score:3, Interesting)
New FS = New corruption?
Rus
Re:other FSs are out there (Score:1, Interesting)
Re:I'll reserve judgement (Score:5, Interesting)
First, async means that not all reads and writes are synchronous, which is an incredibly good thing for speed. Try putting your UFS/FFS filesystem into fully sync mode and then talk about performance; I'm willing to bet that UFS/FFS isn't sync by default either. However, calling fsync in the mail server (normally sendmail) on Linux will actually make it sync before returning, so no worries about RFC 1123. It's the SMTP server's job to tell the filesystem to make sure the bits are on the disk. If Linux didn't have the ability to ensure bits were actually on the disk, nobody would use it. That's why in Moshe Bar's series comparing Linux, FreeBSD, and OS X, he always said he recompiled after removing the fsync calls; otherwise you'd just be comparing how fast the disks in each system were.
For goodness' sake, Oracle ships on Linux; if Linux couldn't get the bits on the disk, Oracle would never have ported to it. Not a chance. If Linux tells you the bits are on the disk, they are on the disk, in my experience.
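The fsync contract described here fits in a few lines of Python; `durable_write` is an invented helper name, but `os.fsync` is the real call that blocks until the kernel reports the data has been pushed to the device:

```python
import os
import tempfile

def durable_write(path, data):
    """Write data and don't return until the OS says it has hit the disk.
    This is the contract an SMTP server relies on before acknowledging a
    message: fsync() forces the bytes out of the page cache regardless of
    any async mount options."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)  # block until the kernel has flushed to the device
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "queued-message")
durable_write(path, b"acknowledge only after this returns\n")
```

(A fully durable file creation would also fsync the containing directory so the new name itself survives a crash; that step is omitted here for brevity.)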
I've heard of people losing UFS filesystems while running them under NFS, or losing them due to any number of nefarious VM race conditions. So what? Welcome to the real world: people lose data, buy a tape drive, make backups. Knew a guy who got really good at rebuilding filesystems by using dd on Solaris to recover email for customers.
Oh, and as I recall, async actually affects directories more than files; if you put the sync modifier on the filesystem it only affects directories, not the file data, for ext2/3. In ext3, directory writes are always journaled as I recall, so it shouldn't make much difference.
Now, from what I've heard of Linux and FreeBSD, until the late 2.2.x and early 2.4.x kernels there were certain jobs Linux couldn't do, like running big Usenet news services or really disk-intensive applications; the filesystem buffering was really hard to get right and might cause corruption. The guy who ran a local ISP always said FreeBSD never did that when he was running the Usenet server on it, but Linux did with some regularity.
ext2 hasn't lost any data of mine in my 7 years of using Linux, including running a 120GB Oracle database for the past 30 months. Ext3 has never lost any data since I started using it. I've lost disk drives, I've lost mirrors, I've lost files, but never lost a complete ext2 filesystem unless the disk just stopped spinning. Lost a couple of ReiserFS filesystems after installing Red Hat 7.0. Never tried most of the other journalling filesystems.
Kirby
Re:This article is bullshit (Score:5, Interesting)
It's really funny how they try to compare it with a file system, since they're just looking at NTFS with a layer giving the user an easier time to do certain things.
160,000 Files (Score:2, Interesting)
To search for a specific file often takes much longer.
Personally I look forward to a better, faster file system on Windows. Although I'd still hold off judgement of the new system until it becomes available.
-
Rod
Re:I'd be happy if they just let me use write cach (Score:3, Interesting)
Just "right click, turn on write cache etc" (from a previous post to this) DOESN'T WORK. If you'd care to READ what I originally posted I mention that it indicates write cache is enabled when IT ISN'T. It's pretty obvious whether it is or isn't based on the write performance. When read is 80-90MB/s and write is 1/10th of that, there's a problem. It's called the OS is forcing write-through, i.e. confirm all data to physical disk instead of just write back to the cache.
As for the controller, it's a 3Ware Escalade 7500-4, not one of those POS promise things. The drives are 4 Western Digital 1200JBs in RAID5. My previous Escalade 6400 and 4 75GXPs in RAID0 had the same problem. (this was asked in one of the previous posts)
Re:other FSs are out there (Score:1, Interesting)
Performance is a question of whether they care (Score:5, Interesting)
The difference isn't features - BeFS supported everything HFS+ does, plus arbitrary attributes, journaling, much larger file/filesystem support, and indexing, and it was still faster. Be simply made performance a much higher priority than Apple has so far; fortunately they've hired the BeFS lead developer and perhaps 10.3 will have some surprises.
Another good example is ReiserFS - while some of their choices reflect overall design goals (e.g. targeting large numbers of small files instead of BFS's massive videos) they've largely passed the traditional filesystems in most areas despite having to do more work to keep all of the extra features going.
Microsoft has a number of engineers who do understand performance; the question is simply whether it'll be a significant priority for them to make WinFS fast enough that we'll realistically be able to use it.
Re:I'll reserve judgement (Score:3, Interesting)
Re:For lots of files... (Score:5, Interesting)
One word: FAT. You are making three assumptions here. The first is that the underlying implementation is capable of supporting near-infinite extension without degradation. Invalid for FAT, valid for the FS types mentioned in the grandparent, and the reason for what I said. The second is that the file system will be used as a hierarchy, which is invalid for most end users. The third is a combination of the first and second: that the file system extends without unreasonable degradation to a vast number of files in a single directory, while performing operations (esp. searches) on them quickly. This is invalid for all of these file systems, because of how they store metadata.
Again, you're assuming everyone is you, a technically savvy user. End users don't behave like this. By and large they use meaningful file names in a single directory. If you're looking for a document someone else did, it will be in their single directory, not in a common folder for documents relating to that topic. If you don't know who worked on the document, you need to do a broad search based on keywords.
Which shows how little you've thought about the implementation of this system. You only have to make a change if the file metadata changes. In many file systems you already have to write that change in a different location to changes to the file itself (if you don't, your metadata search time goes out the window). If your "locate" database is a relational database, making a change has trivial overhead.
Actually, this isn't what I was meaning. I was referring to the relationship between the data in the FS and in the locate database (or any other metadata search database), and indicating that WinFS (in theory) takes out the step of building a separate database by using the database as the "index" of the file system. Unfortunately in this incarnation of WinFS (the current implementation) MS will not be implementing it quite in that fashion.
But to answer your point ... Win32 systems have had file change notification in their APIs from day 1 (NT 3.1 / Win95 + have FindFirstChangeNotification; NT 3.51 + have ReadDirectoryChangesW).
And that's pretty much what MS is doing by converging a traditional file system with a metadata view.
Of course, WinFS was intended for client operating systems, not servers. And while NTFS could still be improved, it doesn't make a lot of sense to do so: most high-data-volume applications store their data in structured files, and don't require much from the file system in any place where performance could be significantly improved.
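FindFirstChangeNotification and ReadDirectoryChangesW are the real Win32 calls mentioned above; as a portable stand-in for what an indexer does with those events, here is a crude mtime-polling sketch (all helper names invented, and polling is only an approximation of true kernel notification):

```python
import os
import tempfile

def snapshot(directory):
    """Map each entry to its mtime: a cheap stand-in for the kernel's
    change-notification machinery."""
    return {name: os.stat(os.path.join(directory, name)).st_mtime_ns
            for name in os.listdir(directory)}

def diff(before, after):
    """Names created, deleted, or modified between two snapshots --
    exactly the events a metadata indexer would use to stay current."""
    modified = {n for n in before.keys() & after.keys()
                if before[n] != after[n]}
    return sorted((after.keys() - before.keys())
                  | (before.keys() - after.keys())
                  | modified)

d = tempfile.mkdtemp()
before = snapshot(d)
with open(os.path.join(d, "new.txt"), "w") as f:
    f.write("x")
events = diff(before, snapshot(d))
```

The real APIs avoid the polling cost by having the kernel deliver these events directly, which is why a metadata index layered on NTFS could be kept current without rescanning the disk.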
Re:This is not about performance! (Score:2, Interesting)
Re:db filesystem (Score:3, Interesting)
"You dont need a database when you have our file system." they now can say.
MS is basically putting everything and the kitchen sink into the operating system so that no one can compete with them. A practice they started after they could no longer use hidden system calls to make MS's own applications faster than competitors' and could no longer force others to ship IE.
This way all other software companies go belly up, since they can't offer anything that isn't already part of the operating system.
Re:other FSs are out there (Score:4, Interesting)
The term for that is "non sequitur", and you've just posted a lovely example of one. Let's go back to my post and see what I was replying to (hint: nothing to do with XP at all) - it's the bit you snipped:
As for '(I belive)2Gb', you are referring to the FAT16 installation of NT4. It doesn't apply to WindowsXP.
That's what I was replying to. I was attempting to clarify that the limit (4GB, not 2GB) also applied to an NT install in which you specified NTFS (your post seemed to imply FAT16 only).
I don't think we're disagreeing. I was clarifying a point you made which could imply something which wasn't the case.
Re:Can you say SQL Slammer x 100? (Score:5, Interesting)
Re:other FSs are out there (Score:5, Interesting)
Now, storing the metadata in a database, which is essentially what WinFS and such are doing, is not as clear a benefit. Personally I can imagine that it would be a very practical FS for keeping movies and MP3s on. I don't really see the benefits of running the OS files on that FS, though. A lot of unnecessary overhead. (I don't search for files in my OS partition very often.)
Indeed. It seems like what they are claiming as an improvement, (i.e., faster searching for files), does not appear to help what people actually do most of the time. It is similar to claims of "boots much faster!" that you used to hear about new versions of windows. I would think the thing that would be important to people is data integrity and access efficiency. I know my primary concern is "how safe is my data".
I also question the need to include the overhead of a database frontend to the filesystem. Seems like a catastrophe just waiting to happen.
Also, since the DB is always active, what issues do you have with backups? I'd be concerned about backup and restore issues with this type of filesystem. I haven't seen that addressed at all.
Re:other FSs are out there (Score:3, Interesting)
That's true, but misleading. If soft updates are done right, the only reason to fsck is to reclaim resources (orphaned blocks etc.). It is not necessary to get your filesystem into a usable state, and can therefore be done in the background after you've come up. Journaling filesystems also still need to fsck, it's just faster and it's called a log redo, and that is necessary to make the filesystem usable. I'd say the two are very comparable, and soft updates come out slightly ahead. BTW, I'm one of those guys who writes filesystems, the ones you say are not so dumb. :-P
Exchange File System (Score:2, Interesting)
Re:db filesystem (Score:3, Interesting)
CORRECTION: Now that PCs ... anyone may actually be able to pull it off.
What I don't see addressed here is the added complexity of the bootstrap required to support RDBMS based files. Where are you going to stick this bootstrap? I see a tightly controlled licensing arrangement between motherboard suppliers and MS. "Thou shalt not boot except through WinFS bootstrap code which is licensed to you for this purpose. We will revoke your license to distribute WinFS bootstrap if you make us cry. We will take OUR ball and go home, and you will not be able to sell any PCs to our captive users."
Re:other FSs are out there (Score:3, Interesting)
BTW, cool name. If you ever decide to abandon that identity, let me know. ;-)
I'd have to say it depends. The beauty of soft updates is that they require exactly zero additional writes beyond what you'd be doing anyway; you're just being careful about the order in which you do them. Performance is fine, but this pretty much does nothing to ensure that data is consistent without some sort of sync/flush. With journals the picture is more complicated. Yes, there are additional writes, but they can be overlapped with the writes you're already doing so they often don't impact performance that much. Also, there are usually more opportunities to combine/strategize the metadata writes. Ultimately, the performance ends up being affected very little. As far as data protection, it's a big tradeoff. Most journaling filesystems only journal metadata, so they provide the exact same non-guarantee regarding data that soft updates would. If you want to journal data as well you get a better guarantee but worse performance, and it's rarely done; if you're heading in that direction you might as well go all the way to a log-structured filesystem.
There are certainly ways that either journaling or soft-update filesystems can be tweaked to provide guarantees for data or metadata. In either case, you write to a "clean" set of blocks (never write in place) and take care of the metadata updates in such a way that if the metadata makes it the new data automatically comes along for the ride and if it doesn't then the blocks containing new data get reclaimed. This can be useful in certain cases, but it can also suck massively for performance if you have a lot of sub-block updates.
As you can see, it's an interesting set of tradeoffs. It gets even better when your filesystem is distributed. No matter what, though, I tend to prefer soft updates due to greater storage efficiency and less need for provisioning/tuning.
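The "write to clean blocks, never in place" ordering described above has a well-known userspace cousin: write a complete new copy, make it durable, then publish it with one atomic rename. A Python sketch (the helper name is invented; the guarantee rests on `os.replace` being an atomic rename):

```python
import os
import tempfile

def atomic_replace(path, data):
    """Never write in place: put the new bytes in fresh blocks, make them
    durable, then publish with one atomic rename. A crash at any step
    leaves either the old file or the new one, never a mix."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)          # new data goes to fresh blocks...
        os.fsync(f.fileno())   # ...and is made durable first...
    os.replace(tmp, path)      # ...then one rename makes it visible

path = os.path.join(tempfile.mkdtemp(), "config")
atomic_replace(path, b"version 1")
atomic_replace(path, b"version 2")
```

This is the same principle as the metadata-last ordering in the comment: the pointer (here, the directory entry) is only updated after the data it points to is safely on disk.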
Re:db filesystem ... will never be used by most (Score:5, Interesting)
For most file data, perhaps.
I will use this, and to good effect, as well.
The point to take into consideration is that the context will also change depending on the metadata available. Your view of the aggregate file objects changes, depending on the context. Not to mention that this same metadata will be available, in the same format, to all participating applications. Your apps can have all the same view, if you like.
What this means in concrete terms is that your carefully sorted directory of MP3's can look like a file library in iTunes. There are searchable, sortable columns for Title, Album, bitrate, Cover Art, year, label, and whatever (note I did not say "filename", which is just another attribute under a modern filesystem). This is possible with only the most basic gestures on the part of the user, and is remembered for the next time you visit this same view.
Similarly, a tree of photographs appears in any participating file browser with whatever columns you want (bit depth, format, date taken, date published, ICC info). It's important to consider that you can do this with any arbitrary collection of data, even ones you define yourself (to take the BeFS example, anyway).
So you can take your collection of widgets, define attributes about these widgets, and your file browser applet works the same for the same user in all applications. It should, anyway. This is why we have APIs.
To cite your example, why visually grep through a bunch of thumbnails looking for a particular photo when you can just indicate with a few gestures the "type" of photo you are looking for? I like the iPhoto interface when I'm browsing photographs, but if I want a particular photo of the GF from a rough date, taken at night, I certainly don't want to browse through 1000s of images, especially when some of them can be hard to discern at thumbnail resolutions. I certainly don't want to do this repeatedly when I'm assembling a photo album on a specific subject.
Let the computer do the grunt work of selecting a result set that matches my criteria, and then I can use my human abilities to select the object I want, or refine the search.
Most of us already keep our aggregate file types in associated groups on the filesystem. In most cases, the tree structure of most filesystems is sufficient. All this does is extend the functionality of the filesystem so that you can choose to abstract aggregate file objects and treat them in a myriad of different ways. In the most basic sense, you tell the OS, "look, when I have the Explorer/Finder open on this directory of MP3s, make sure you change the column view so it shows this, this and that. In icon view, make sure that mouse-over pop-ups (if enabled) display this, that and the other. Default sort is alphabetically by artist's last name. I don't want to see the filename, as that doesn't contain any useful information."
That is, you don't have to do anything special to make use of the file attributes in this way. You just tell the ultimate app that all of us use the most (the operating system's file browser) to treat certain directories in a different manner.
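The query-driven photo search described above is easy to mock up. Here's a sketch with SQLite and invented attribute names, where the filename really is just another column:

```python
import sqlite3

# Each row is a file object; the columns are arbitrary user-defined
# attributes, with the filename demoted to just another attribute.
lib = sqlite3.connect(":memory:")
lib.execute("""CREATE TABLE photos (
    filename TEXT, subject TEXT, taken_at_night INTEGER, year INTEGER)""")
lib.executemany("INSERT INTO photos VALUES (?, ?, ?, ?)", [
    ("img_0193.jpg", "gf",    1, 2003),
    ("img_0502.jpg", "beach", 0, 2003),
    ("img_0811.jpg", "gf",    0, 2002),
])

# "A particular photo of the GF from a rough date, taken at night":
# the query does the grunt work; the human picks from the result set.
matches = [row[0] for row in lib.execute(
    "SELECT filename FROM photos "
    "WHERE subject = 'gf' AND taken_at_night = 1 AND year = 2003")]
```

The point is that the selection happens by attribute before any thumbnails are drawn, so the human only ever inspects the handful of candidates that survive the query.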
Re:other FSs are out there (Score:3, Interesting)
Hehehe, I forgot to compliment you for the name of your homepage
Most journaling filesystems only journal metadata, so they provide the exact same non-guarantee regarding data that soft updates would.
I once read a paper about soft updates (quite old; I think it's the paper presenting the idea of soft updates for the first time, at least it reads that way), where they (completely IIRC) talk about an "update daemon" which writes the dirty in-memory metadata blocks to disk at regular intervals. That led me to the conclusion that soft updates lose somewhat more metadata in case of a crash. But OTOH there's a lag in writing to a journal, also.
As you can see, it's an interesting set of tradeoffs.
And as if that weren't enough things to think about, I heard that there are these drives which plainly lie about what has been really written to the platters.
No matter what, though, I tend to prefer soft updates due to greater storage efficiency and less need for provisioning/tuning.
Oh come on, in reality you prefer softupdates because you are a BSD zealot.
Thanks for your explanations, and if I ever decide to sell my slashdot handle and the attached wellness of super positive karma on ebay, I'll make you a special offer