Fatal Weakness With High-Capacity MMC/SD Cards?
"I am working on an embedded project where I am using Secure Digital and/or MultiMediaCards to store data. For convenience in developing and updating, I have decided to use a Windows FAT-type file system. This way I can create them, debug them, and update them on Windows development machines using USB card readers.
Since I have to keep around 25,000 files on the card, and since I'd like to minimize disk fragmentation that would result from large cluster size, I would like to use FAT32 with 512-byte clusters. This is no big deal, and certainly supported on Windows: "format f: /fs:fat32 /a:512". Done and done.
The interesting thing was that I bought four 256MB SD cards (three from SanDisk, one from Lexar Media), and quickly killed three of the four. The SanDisk cards report that track 0 is not readable when I try to format them. Snooping the SD bus shows the card inits OK and allows writes, but returns an error whenever track 0 is read. The Lexar card's failure is a little more subtle: a format looks like it works, but subsequent chkdsks always fail. I'm afraid to repeat this on the fourth card.
SanDisk (after some weeks of running around) will replace my cards, but hasn't addressed the cause of the failure. I'm also still waiting for a reply from Lexar's 2-day-turnaround support, after 7 business days and a reminder email.
My theory goes like this: on FAT32, in the first sector (sector 0), there's a field that gives the sector number of the File System Info Sector (FSInfoSec). Every indication I've seen puts this in sector 1, the second physical sector. This sector contains updated counters of used and free clusters on the device. The 256MB cards have about 499,000 512-byte clusters on them. These flash devices have a lifetime of 300,000 writes per block, so if I copy 25,000 files to fill the card, the FSInfoSec has been updated either 25,000 times or 499,000 times (depending on whether the filesystem updates the counters once per file or once per allocated cluster). If it's the former, I've just eaten up 8% of the lifetime of the card. If the latter, I've killed it before even finishing my write, since a write anywhere also causes a write to sector 1. In the best case, once I update this card 12 times, I have to throw it away.
There is some Microsoft documentation that says the FSInfoSec pointer in sector 0 can be set to 0xffff to indicate it's not used. When I used dskprobe.exe from the Microsoft Windows Resource Kit to patch this pointer, Windows 2000 Professional (with a fresh Windows Update applied) blue screens so frequently when I do a dir or chkdsk on the card that I can't do anything useful before I need to cycle power on my PC.
To test my theory, I replaced the dead Lexar card, and repeated the experiment, this time formatting the card FAT16 (no FSInfoSec anymore), and the minimum supported cluster size of 2K. The bad news is of course that I lose about 26MB on the card to fragmentation since the clusters are so large. The good news is that I can write the disk full as many times as I can put up with, and it never fails.
So there are two conclusions: 1) There's a staggeringly high defect rate in the 256 MB cards (SanDisk denies this) and all my ideas about the large cards ever working well with FAT32 are groundless, or 2) even though FAT16 on a 256MB card is hugely wasteful, it's the only way to get the cards to work for very long at all."
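For reference, the fields the submitter is describing are easy to inspect with a short program. This is a minimal sketch, assuming 512-byte sectors and the standard FAT32 on-disk layout (BPB_FSInfo at offset 48 of sector 0, the free-cluster count at offset 488 of the FSInfo sector); "card.img" is a hypothetical raw dump of the card, not anything from the original post.

    /* Minimal sketch: locate the FAT32 FSInfo sector in a raw dump of the
     * card and read its free-cluster counter. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        FILE *f = fopen("card.img", "rb");
        if (!f) { perror("card.img"); return 1; }

        uint8_t sector[512];

        /* Sector 0 (the boot sector): BPB_FSInfo at offset 48 (2 bytes,
         * little-endian) holds the sector number of the FSInfoSec, normally 1.
         * Per the docs the submitter mentions, 0xFFFF means "not used". */
        if (fread(sector, 1, sizeof sector, f) != sizeof sector) { fclose(f); return 1; }
        uint16_t fsinfo_sec = sector[48] | (sector[49] << 8);
        printf("BPB_FSInfo = %lu\n", (unsigned long)fsinfo_sec);

        if (fsinfo_sec != 0xFFFF) {
            /* FSInfo sector: free-cluster count at offset 488, next-free hint
             * at offset 492.  This is the counter that gets rewritten constantly. */
            fseek(f, (long)fsinfo_sec * 512, SEEK_SET);
            if (fread(sector, 1, sizeof sector, f) != sizeof sector) { fclose(f); return 1; }
            uint32_t free_count = sector[488] | (sector[489] << 8) |
                                  ((uint32_t)sector[490] << 16) |
                                  ((uint32_t)sector[491] << 24);
            printf("FSI_Free_Count = %lu\n", (unsigned long)free_count);
        }

        /* The arithmetic from the post: 300,000-write endurance per block and
         * one FSInfoSec update per file copied means roughly
         * 300000 / 25000 = 12 full-card rewrites before that sector wears out. */
        printf("full-card rewrites before wear-out: %d\n", 300000 / 25000);

        fclose(f);
        return 0;
    }

If BPB_FSInfo really is 1 on these cards, every counter update is another write (and erase) of the same physical block, which is exactly the wear pattern described above.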
Have you tried... (Score:2, Interesting)
Daniel
Re:Have you tried... (Score:2)
Fat32 introduction... (Score:4, Informative)
-Adam
Forgive my Ignorance... (Score:5, Insightful)
Re:Forgive my Ignorance... (Score:2, Interesting)
Good idea for Linux users, though.
Re:Forgive my Ignorance... (Score:1, Interesting)
Daniel
Re:Forgive my Ignorance... (Score:3, Interesting)
As for the loopback trick, that I don't know about. Someone cleverer than me might be able to do this within the Cygwin environment (or some other way?) but I have no idea where to even start...
Re:Forgive my Ignorance... (Score:2, Informative)
Re:Forgive my Ignorance... (Score:1)
Re:Forgive my Ignorance... (Score:2)
Re:Forgive my Ignorance... (Score:1)
Re:Forgive my Ignorance... (Score:2)
Moreover, I know that even without Cygwin there are ways to do this with native DOS tools -- I know this because the first time I installed Linux, the RedHat documentation described how to write a disc image to the floppy drive from a DOS prompt.
Rawrite.exe is a utility that you find on the RH CD. It's not a standard DOS utility.
Re:Forgive my Ignorance... (Score:1)
With Cygwin you still use the Microsoft kernel even if you emulate native Linux routines using the Cygwin library. To my knowledge, there is nothing like loop mounting in Windows, which is why you can't loop mount there (try the mount command with the '-o loop' option on an ISO image file under Linux if you like).
Loop mounting is one of the cool features of Linux that I would miss a lot in Windows.
Re:Forgive my Ignorance... (Score:1)
But it turns out (read my other post) that if you use mkdosfs, and copy via a mount point (e.g., "mount -t vfat
Same problem with CD-RWs (Score:3, Interesting)
Re:Same problem with CD-RWs (Score:1)
Now, there's an interesting idea, though: why use a FAT filesystem if you can use a CD-type filesystem?
Someone correct me please if I'm a babbling idiot.
Re:Same problem with CD-RWs (Score:2)
-Adam
Re:Same problem with CD-RWs (Score:2)
You need to have the DirectCD software installed on Windows to read it because it's not supported in the kernel before Windows XP. When you use DirectCD to format a disk, it creates a minimal ISO9660 filesystem containing the DirectCD-reading software.
I wasn't able to use the UDF filesystem on older CD-ROM hardware, but it should work fine on all the CD-R and DVD-ROM drives.
Re:Same problem with CD-RWs (Score:2, Informative)
It's a filesystem specifically designed for CD-RWs and wouldn't suffer from a problem like the one the poster describes.
However, CD-RWs do have a maximum write lifetime and will eventually degrade. With UDF, though, that will take a long time.
Use a filesystem specific for flash (Score:5, Informative)
A good flash filesystem will ensure that sectors are only erased when absolutely necessary, and will spread the allocation table out across multiple sectors. FAT16 and FAT32 are horrible about this, and will lead to extremely early flash death. So, if you are going to use flash, please treat it like flash: even though it has an IDE interface, it is very different from a standard disk on the other end.
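As a rough illustration of what such a layer does (and FAT doesn't), here is a toy sketch in C of least-worn-block remapping. All of the names and sizes (NUM_PHYSICAL, flash_erase, flash_program, and so on) are invented for the example; a real flash filesystem also has to persist the map and garbage-collect.

    #include <stdint.h>

    #define NUM_PHYSICAL 1024          /* physical erase blocks on the part */
    #define NUM_LOGICAL  1008          /* logical blocks exposed upward     */

    static uint32_t erase_count[NUM_PHYSICAL];   /* wear per physical block */
    static uint8_t  in_use[NUM_PHYSICAL];        /* holds live data?        */
    static int      map[NUM_LOGICAL];            /* logical -> physical     */

    /* Stand-ins for the real flash driver. */
    static void flash_erase(int physical) { (void)physical; }
    static void flash_program(int physical, const uint8_t *data) { (void)physical; (void)data; }

    void ftl_init(void)
    {
        for (int i = 0; i < NUM_LOGICAL; i++)
            map[i] = -1;                         /* nothing mapped yet */
    }

    /* Pick the least-worn block that holds no live data. */
    static int pick_least_worn_free(void)
    {
        int best = -1;
        for (int i = 0; i < NUM_PHYSICAL; i++)
            if (!in_use[i] && (best < 0 || erase_count[i] < erase_count[best]))
                best = i;
        return best;
    }

    /* Rewriting logical block N never hammers the same physical block, so a
     * "hot" sector like the FSInfoSec gets spread over the whole device. */
    int ftl_write(int logical, const uint8_t *data)
    {
        if (logical < 0 || logical >= NUM_LOGICAL)
            return -1;
        int physical = pick_least_worn_free();
        if (physical < 0)
            return -1;                           /* full: a real FTL reclaims space here */
        flash_erase(physical);
        flash_program(physical, data);
        erase_count[physical]++;
        in_use[physical] = 1;
        if (map[logical] >= 0)
            in_use[map[logical]] = 0;            /* old copy becomes reusable */
        map[logical] = physical;
        return 0;
    }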
Re:Use a filesystem specific for flash (Score:2)
Re:Use a filesystem specific for flash (Score:2)
Re:Use a filesystem specific for flash (Score:3, Informative)
~GoRK
Re:Use a filesystem specific for flash (Score:2)
Does anyone actually know what format Pocket PCs use on their cards?
Re:Use a filesystem specific for flash (Score:1)
It's supposed to be the commodity part of the project, where we just buy something that works.
The whole issue of '...if you are going to use flash, please treat it like it is flash', as you suggest, is -- I think -- a clear view of the problem, but the heart of it is that Windows only supports FAT on these cards, so that's what I'm stuck with if I want these to work on Windows without a lot of extra expense in time.
Re:Use a filesystem specific for flash (Score:2)
FAT is utterly ill-suited for use on flash. In addition to the problem noted by the article, the FAT itself and the root directory are both at fixed locations, and will wear out more quickly than the rest of the disk. Not surprisingly (for a file system designed to run on floppy disks), FAT or root directory media failures are basically not recoverable.
The common solution to this problem is to add a flash file system layer (sometimes known as a flash translation layer) underneath FAT. This yields the simplicity of FAT and the wear leveling of a real FFS, but you're likely going to have to pay for it in money if you don't have the time.
Remember to fire the architect who chose FAT on SD/MMC without having a clue how they work. This is not the kind of problem that should be found when you're already testing your code. As you noticed, you're basically forced to lose 10% capacity, or insert a new expensive layer into your product.
Finally, in case you haven't noticed, you should also try to minimize how often you write. Use a suitably sized data buffer, don't flush it too often, and keep files open if you're going to update them many times.
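A sketch of that advice in C, assuming a simple append-only log; the 64 KB threshold and the function names are arbitrary examples, not anything from the poster's project.

    #include <stdio.h>
    #include <string.h>

    #define FLUSH_THRESHOLD (64 * 1024)   /* tune to the card's erase-block size */

    static char   buffer[FLUSH_THRESHOLD];
    static size_t used;
    static FILE  *logfile;

    int log_open(const char *path)
    {
        logfile = fopen(path, "ab");      /* keep the file open between records */
        used = 0;
        return logfile ? 0 : -1;
    }

    /* Buffer records in RAM; the FAT and directory entry only get touched
     * once per big flush instead of once per record. */
    void log_record(const void *rec, size_t len)
    {
        if (len > sizeof buffer) {                 /* oversized record: write it straight through */
            fwrite(rec, 1, len, logfile);
            return;
        }
        if (used + len > sizeof buffer) {          /* flush only when the buffer is full */
            fwrite(buffer, 1, used, logfile);
            fflush(logfile);
            used = 0;
        }
        memcpy(buffer + used, rec, len);
        used += len;
    }

    void log_close(void)                           /* final flush before the card is removed */
    {
        if (used)
            fwrite(buffer, 1, used, logfile);
        fclose(logfile);
    }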
Mixed Media Cards? (Score:1)
I realize that SRAM wouldn't be viable with cards of modern sizes because of the requirement of a battery to power all the memory, but what about a combination system? Part of the blocks are SRAM with a tiny (read: embedded zinc-air or supercapacitor) battery supply, while the data storage blocks are Flash.
I'm not an EE, so I may not be understanding the way allocation tables are stored on a card, but is something like this viable?
Re:Mixed Media Cards? (Score:1)
The other option is the cheaper DRAM, where the memory cells are just a capacitor and a transistor. The problem is that the capacitor doesn't hold its charge indefinitely and needs to be refreshed. It's this need for a refresh that is a pain, because it blocks access by other devices and requires logic circuits.
In a modern PC, cache is typically SRAM, and main memory is DRAM.
As for "The olden days of PDAs" - well, Palms use exactly this for their primary storage.
You're right though, a simple battery-backed SRAM card would be useful for a lot of situations -- especially if an external power supply were available most of the time. (My Palm happily keeps 8 MB of such RAM valid for months at a time off two AA batteries.)
Of course, if it's read-only or infrequently updated, then preparing an image in a file and writing that would be a better solution.
I've seen a flash-based embedded system create a ramdisk to store working files -- it avoids the rewrite-the-flash issue.
Re:Mixed Media Cards? (Score:1)
I'm gazing wistfully at my Newton 1MB SRAM PCMCIA card across my desk
FFS? (Score:2)
Sounds like you're right (Score:4, Informative)
If possible, use one of these techniques. If not, can you consolidate all the files into one file (easy if all files are the same size) and just rewrite portions of that file? That way, the FAT wouldn't need to be updated. Lastly, is there some sort of caching algorithm you can enable that would delay the write to the FAT+directory until (at best) the time the card needs to be removed?
p.s. you're not losing space to fragmentation; it's actually the slack space at the end of each file that's doing it (I think you just used the wrong term describing it).
Re:Sounds like you're right (Score:1)
I think what he's describing is also called "internal fragmentation [uni-frankfurt.de]" in some circles.
Re:Sounds like you're right (Score:1)
Re:Sounds like you're right (Score:1)
Since we'd like the convenience of the separate files, and since we'd like bozo users in the field to be able to update the cards (without us writing or distributing any software to accomplish the update), the FAT16 option is most appealing to me. These considerations aside, file consolidation is the best idea so far, and we've discussed it in the group.
The problem is that I'm not 100% sure that all the varieties of Windows really update the FSInfoSec once for each file rather than once for each cluster added during the file write. This would be stupid, but not out of the question. I spent some time with a free USB snooper I found on sourceforge.org (thanks!), and it looks like the update is once per file on Win2K Pro, but I didn't want to repeat the experiment on '95, '98, ME, XP, and the other flavors of 2K.
It might work to glob all these files into one massive file, and just seek around in it at runtime in the embedded system. That might take some extra work in the filesystem to be efficient, but if it gets back 26MB of wasted space from the cards it's worth considering.
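For what it's worth, a rough sketch of that globbed-file approach in C, assuming fixed-size records; the slot size, count, and function names are made up for the illustration.

    #include <stdio.h>

    #define SLOT_SIZE  8192L
    #define SLOT_COUNT 25000L

    /* Open the preallocated container with "r+b" so its size, and therefore
     * its FAT chain, never changes after the initial format-and-copy. */

    int slot_write(FILE *f, long index, const unsigned char slot[SLOT_SIZE])
    {
        if (index < 0 || index >= SLOT_COUNT)
            return -1;
        if (fseek(f, index * SLOT_SIZE, SEEK_SET) != 0)
            return -1;
        return fwrite(slot, 1, SLOT_SIZE, f) == SLOT_SIZE ? 0 : -1;
    }

    int slot_read(FILE *f, long index, unsigned char slot[SLOT_SIZE])
    {
        if (index < 0 || index >= SLOT_COUNT)
            return -1;
        if (fseek(f, index * SLOT_SIZE, SEEK_SET) != 0)
            return -1;
        return fread(slot, 1, SLOT_SIZE, f) == SLOT_SIZE ? 0 : -1;
    }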
Re:Sounds like you're right (Score:2)
You'll still be rewriting in place. This is bad, because essentially every write is preceded by a slow erase, which will result in horrible write performance. Also, you'll still be wearing out the same spots, just not the FAT. If you perform wear leveling inside that giant file, then you're really just putting the flash translation layer on top of FAT rather than below it.
Lastly, is there some sort of caching algorithm you can enable that would delay the write to the FAT+directory until (at best) the time the card needs to be removed?
Good caching is essential both for performance and wear. Unfortunately, many of these devices don't have physical locks that prevent their sudden removal.
Most Flash cards have wear-leveling controllers (Score:5, Informative)
These microcontrollers are precisely the reason why it is not a good idea to use these formats in devices that can be powered off suddenly. Look here [linux.org] (search down for "asynchronous power fail") for a mention of these problems. Elsewhere on the site (and in the JFFS author's other online comments), more discussion of this problem is available, including the JFFS author's own experiments.
JFFS works with MTD devices, which are flat Flash arrays with no microcontroller (and the JFFS author doesn't plan on supporting ATA-type Flash cards, although it appears others may be working on this). This gives JFFS complete control over journalling, wear-leveling, and error correction. It is able to do these things in a fashion that is robust in the face of asynchronous power failures. The microcontrollers in various Flash cards do not appear to be this sophisticated.
So, 1) it may not be the choice of filesystem that is the problem, 2) there are documented reasons for not using Flash cards in certain types of systems, and 3) JFFS (and JFFS2), even if they support non-MTD devices now, probably cannot safeguard against the problems in microcontroller-based Flash cards.
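To illustrate why that robustness is easier on a raw MTD device, here is a toy C sketch of the log-structured idea: every update is appended as a self-describing record with a checksum, so a write interrupted by power failure just leaves one bad record at the tail instead of a corrupt table. This is only a sketch of the concept, not JFFS's actual on-flash format; the record layout, sizes, and simple checksum are all invented for the example.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define LOG_SIZE 4096
    #define PAYLOAD  16

    struct record {
        uint32_t seq;                 /* monotonically increasing sequence number */
        uint32_t key;                 /* logical block this record replaces       */
        uint8_t  data[PAYLOAD];
        uint32_t checksum;            /* simple byte sum; real code uses a CRC    */
    };

    static uint8_t  flash[LOG_SIZE];  /* stand-in for the raw flash array */
    static size_t   tail;
    static uint32_t next_seq = 1;

    static uint32_t checksum_of(const struct record *r)
    {
        const uint8_t *p = (const uint8_t *)r;
        uint32_t s = 0;
        for (size_t i = 0; i < offsetof(struct record, checksum); i++)
            s += p[i];
        return s;
    }

    int log_append(uint32_t key, const uint8_t data[PAYLOAD])
    {
        if (tail + sizeof(struct record) > LOG_SIZE)
            return -1;                              /* full: real code garbage-collects */
        struct record r = { .seq = next_seq++, .key = key };
        memcpy(r.data, data, PAYLOAD);
        r.checksum = checksum_of(&r);
        memcpy(flash + tail, &r, sizeof r);         /* on real flash this is a program op */
        tail += sizeof r;
        return 0;
    }

    /* "Mount": scan the whole log; the newest record with a valid checksum
     * for a given key wins, and torn records are simply ignored. */
    int log_lookup(uint32_t key, uint8_t out[PAYLOAD])
    {
        uint32_t best_seq = 0;
        for (size_t off = 0; off + sizeof(struct record) <= LOG_SIZE; off += sizeof(struct record)) {
            struct record r;
            memcpy(&r, flash + off, sizeof r);
            if (r.seq == 0 || r.checksum != checksum_of(&r))
                continue;                           /* blank or torn record: skip it */
            if (r.key == key && r.seq > best_seq) {
                best_seq = r.seq;
                memcpy(out, r.data, PAYLOAD);
            }
        }
        return best_seq ? 0 : -1;
    }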
Re:Most Flash cards have wear-leveling controllers (Score:1)
From the "MultiMediaCard Product Manual" (© 2000 SanDisk Corporation):
1.5.3 Endurance
SanDisk MultiMediaCards have an endurance specification for each sector of 300,000 writes... With typical applications the endurance limit is not of any practical concern to the vast majority of users.
1.5.4 Wear Leveling
SanDisk MultiMediaCards do not require or perform a Wear Level operation.
Must you use MMC cards? (Score:1, Informative)
MMC cards are going to be problematic unless you do the wear leveling yourself. I might suggest using Compact Flash instead. As far as I know, all Compact Flash cards do wear leveling, and they use a handy IDE interface, so they should be easy to support.
On a possibly related note, there was a huge thread on LKML a few days ago dealing with flash cards dying. There may be some information you can glean from the archives. The original post was made on February 2, 2003. At any rate, good luck!
AC
Re: (Score:1)
Re:Wrong term (Score:1)
Hey, genius- (Score:1, Troll)
Problem solved.
Yes, I am troll. (Score:2)
Why .zip, .tar, and .gbfs are not file systems (Score:3, Informative)
Why not utilize a TAR based pseudo FS?
I actually did this once. It's called GBFS [pineight.com], and it's designed to hold graphics, text, and audio assets for programs running from ROM on embedded systems such as the Game Boy Advance compact video game system. I would have used GNU tar, but I dropped it when I saw that the header for each file took half a kilobyte and that I could reinvent a better wheel for my purposes.
The problem with doing a general file system in an archive file format such as .tar, .zip, or .gbfs is that you cannot change the size of a file without copying the whole file system to another file. Nevertheless, .zip and .gbfs do work well as read-only file systems.
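As a concrete picture of what such a read-only pack looks like, here is a small C sketch: a table of (name, offset, length) entries up front, followed by the raw file data. The struct layout and field sizes are invented for the illustration and are not the actual GBFS, tar, or zip formats.

    #include <stdint.h>
    #include <string.h>

    struct pack_entry {
        char     name[24];      /* NUL-padded file name                         */
        uint32_t offset;        /* byte offset of this file's data in the image */
        uint32_t length;        /* length of the data in bytes                  */
    };

    struct pack_header {
        char     magic[8];      /* e.g. "PACKFS\0\0" */
        uint32_t entry_count;
        /* followed by entry_count pack_entry records, then the raw data;
         * because every offset is baked in, growing one file means
         * rebuilding the whole image. */
    };

    /* Look up a file in a pack image that is already in memory (loaded from
     * the card, or simply sitting in ROM on an embedded target).  Assumes the
     * image buffer is suitably aligned for the header structs. */
    const uint8_t *pack_find(const uint8_t *image, const char *name, uint32_t *length)
    {
        const struct pack_header *hdr = (const struct pack_header *)image;
        const struct pack_entry  *ent = (const struct pack_entry *)(image + sizeof *hdr);

        for (uint32_t i = 0; i < hdr->entry_count; i++) {
            if (strncmp(ent[i].name, name, sizeof ent[i].name) == 0) {
                *length = ent[i].length;
                return image + ent[i].offset;
            }
        }
        return NULL;            /* not found */
    }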
Surely there are developer resources for this? (Score:2)
I suspect that you are killing the cards because the manufacturers do not expect usage patterns like yours; most purchasers of high-capacity SD cards are using them because they want to keep a relatively small number of large files on them (JPEGs on your camera, MP3s in your player, databases in your PDA), not thousands of tiny files.
Maybe you are outside the design spec, or maybe (especially since the SanDisk and Lexar cards fail differently) the specific cards are bad implementations of the standard in ways that "normal" users won't ever notice.
Re:Surely there are developer resources for this? (Score:1)
I agree that my usage pattern is atypical, but since SanDisk is now marketing their Cruzer [sic] 256MB MMC USB plug as a PC file backup device, I'm guessing they'll see a whole bunch of these cards fail before their 'expected' lifetime.
The least they could do if they realized this were a problem is stick a note in their FAQ about filesystem support. The internet is FULL of Pocket PC and Palm users' complaints about getting these cards to last, perform well, and work cross-platform. Seems like it's not too much to ask for them to stick a couple paragraphs on their web site for the moderately technical user.
Watch out for SanDisk SD cards (Score:4, Informative)
I've used a 128MB Lexar, formatted as FAT16 and things have worked well. Many have used an SD card as main storage [schwag.org] on their Zauruses, formatted as ext2.
Stupid Question... (Score:2)
Re:I don't use FAT at all... (Score:2)
Try mounting it as /tmp and /var. (Any damage is your own responsibility, please.)
heh, use a proper file system (Score:1)
Sandisk media (Score:2)
I did a quick Google search but couldn't find any articles. It's out there, though; I ran into this when I was selecting media for my first digital camera about two years ago. Also I seem to remember certain on-line retailers showing a warning note with Sandisk media about this, ahem, "unusual" definition of megabyte.
Could be unrelated to the thread problem (sounds like it, from some of the other posts about frequent writes wearing the card out), but it might be worth considering.
jvaigl! Why didn't you mention... (Score:2, Informative)
(I work with jvaigl.)