Hardware

Fatal Weakness With High-Capacity MMC/SD Cards?

jvaigl writes "I think I've spotted a pretty fundamental problem with those little Secure Digital and MultiMedia Cards. Since so many people are starting to use them as portable storage and file backup, I thought I'd try running this by the Slashdot readers and either get some corroboration or else have someone knowledgeable point out my misunderstanding." Read on for the trouble -- which has to do with FAT32 formatting and those high-capacity cards -- and the conclusions that jvaigl draws from his experiences.

"I am working on an embedded project where I am using Secure Digital and/or MultiMediaCards to store data. For convenience in developing and updating, I have decided to use a Windows FAT-type file system. This way I can create them, debug them, and update them on Windows development machines using USB card readers.

Since I have to keep around 25,000 files on the card, and since I'd like to minimize the disk fragmentation that would result from a large cluster size, I would like to use FAT32 with 512-byte clusters. This is no big deal, and certainly supported on Windows: "format f: /fs:fat32 /a:512". Done and done.

The interesting thing was that I bought four 256MB SD cards (three from SanDisk, one from Lexar Media), and quickly killed three of the four. The SanDisk cards report that track 0 is not readable when I try to format them. Snooping the SD bus shows the card inits OK and allows writes, but returns an error whenever track 0 is read. The Lexar card's failure is a little more subtle: a format looks like it works, but subsequent chkdsks always fail. I'm afraid to repeat this on the fourth card.

SanDisk (after some weeks of running around) will replace my cards, but hasn't addressed the cause of the failure. I'm also still waiting for a reply from Lexar's 2-day-turnaround support, after seven business days and a reminder email.

My theory goes like this: on FAT32, in the first sector (sector 0), there's a field that gives the sector number of the File System Info Sector (FSInfoSec). Every indication I've seen puts this in sector 1, the second physical sector. This sector contains updated counts of used and free clusters on the device. The 256MB cards have about 499,000 512-byte clusters on them. These flash devices have a lifetime of 300,000 writes per block, so if I copy 25,000 files to fill the card, the FSInfoSec has been updated either 25,000 times or 499,000 times (depending on whether the filesystem updates the counters once per file or once per cluster). If it's the former, I've just eaten up 8% of the lifetime of the card. If it's the latter, I've killed the card before even finishing my writes, since a write anywhere also causes a write to sector 1. In the best case, once I've updated this card 12 times, I have to throw it away.
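(For anyone who wants to check this on their own card: per the FAT32 layout, the FSInfoSec pointer is a two-byte field at byte offset 48 of sector 0, and the counters live near the end of the FSInfo sector itself. A quick way to eyeball both from a Linux box, assuming the card shows up as /dev/sdc (verify the device name first!), is:

    # FSInfo sector number: 16-bit field at byte offset 48 of sector 0 (usually 1)
    dd if=/dev/sdc bs=512 count=1 2>/dev/null | od -A d -t u2 -j 48 -N 2
    # FSInfo counters: structure signature at offset 484, free-cluster count
    # at 488, next-free-cluster hint at 492
    dd if=/dev/sdc bs=512 skip=1 count=1 2>/dev/null | od -A d -t u4 -j 484 -N 12

)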

There is some Microsoft documentation that says the FSInfoSec pointer in sector 0 can be set to 0xffff to indicate it's not used. When I used dskprobe.exe from the Microsoft Windows Resource Kit to patch this pointer, Windows 2000 Professional (with a fresh Windows Update applied) blue screens so frequently when I do a dir or chkdsk on the card that I can't do anything useful before I need to cycle power on my PC.
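(The same patch can be applied without dskprobe from a Linux box. A sketch, again assuming the card is /dev/sdc, so triple-check the device name before running it; this writes two 0xFF bytes into the FSInfoSec pointer at offset 48 of sector 0:

    printf '\377\377' | dd of=/dev/sdc bs=1 seek=48 count=2 conv=notrunc

)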

To test my theory, I replaced the dead Lexar card and repeated the experiment, this time formatting the card FAT16 (no FSInfoSec anymore) with the minimum supported cluster size of 2K. The bad news, of course, is that I lose about 26MB on the card to fragmentation since the clusters are so large. The good news is that I can fill the disk as many times as I can put up with, and it never fails.
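(For the record, the FAT16 format was just the obvious variation on the earlier command, presumably something like "format f: /fs:fat /a:2048"; the exact switches may vary by Windows version.)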

So there are two possible conclusions: 1) there's a staggeringly high defect rate in the 256MB cards (SanDisk denies this) and all my ideas about the large cards ever working well with FAT32 are groundless, or 2) even though FAT16 on a 256MB card is hugely wasteful, it's the only way to get the cards to work for very long at all."

  • Have you tried... (Score:2, Interesting)

    by KDan ( 90353 )
    any other file systems on these (e.g. NTFS)? Or am I asking a stupid question? (Better to ask a stupid question and be stupid for a /. thread than to not ask it and be stupid forever!)

    Daniel
  • by stienman ( 51024 ) <adavis@@@ubasics...com> on Monday February 10, 2003 @05:14PM (#5273711) Homepage Journal
    PJRC has a nice introduction to the FAT32 file system [pjrc.com] on their website. It's aimed at people writing code for microcontrollers to access FAT32 partitions on IDE drives, so it's got the goods.

    -Adam
  • by nathanh ( 1214 ) on Monday February 10, 2003 @05:15PM (#5273719) Homepage
    ... but could you use the loopback device to create an image file and then "dd" the image file to the card? This way the 499,000 writes would be made on the host computer and only the final version written to the card.
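    Something like this, say (an untested sketch; the device name /dev/sdc and the image size are assumptions, so check dmesg before running the final dd):

        dd if=/dev/zero of=card.img bs=1M count=240    # blank image, roughly card-sized
        mkdosfs -F 32 -S 512 -s 1 card.img             # FAT32 with 512-byte clusters
        mount -o loop -t vfat card.img /mnt/img        # FAT/FSInfoSec churn hits the image
        cp -r files/* /mnt/img
        umount /mnt/img
        dd if=card.img of=/dev/sdc bs=1M               # one sequential pass onto the card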
    • He's using Windows on the host machine. No dd or loopback there ;-)

      Good idea for Linux users, though.

      • by KDan ( 90353 )
        That's a neat idea though. And surely those cards' drivers support some way of direct-dumping bytes onto them...

        Daniel
        This is at least partly inaccurate: if you install Cygwin then you can also install dd on a Windows machine. Moreover, I know that even without Cygwin there are ways to do this with native DOS tools -- I know this because the first time I installed Linux, the RedHat documentation described how to write a disc image to the floppy drive from a DOS prompt. I want to say the DOS equivalent was actually something like "dd.exe", but it's been years now, and running "which dd" from a Cygwin shell just gives the POSIX dd that Cygwin itself installed, masking any system version that may also exist.

        As for the loopback trick, that I don't know about. Someone cleverer than me might be able to do this within the Cygwin environment (or some other way?) but I have no idea where to even start...

        • I believe you are talking about rawrite.exe. At least that's what I used to make boot floppies with RH in the past.
        • Moreover, I know that even without Cygwin there are ways to do this with native DOS tools -- I know this because the first time I installed Linux, the RedHat documentation described how to write a disc image to the floppy drive from a DOS prompt.

          Rawrite.exe is a utility that you find on the RH CD. It's not a standard DOS utility.

      • Loop mounting is a feature of the Linux kernel. With Cygwin you still use the Microsoft kernel, even if you emulate native Linux routines using the Cygwin library. To my knowledge there is nothing like loop mounting in Windows, which is why you can't loop-mount there (try the mount command with the '-o loop' option on an ISO image file if you like).

        Loop mounting is one of the cool features of Linux that I would miss a lot in Windows.

    • Yes, that's one thing we discussed.

      But it turns out (read my other post) that if you use mkdosfs, and copy via a mount point (e.g., "mount -t vfat /dev/sdc1 /mntflash") this all works just fine.

  • by andfarm ( 534655 ) on Monday February 10, 2003 @05:15PM (#5273723)
    I haven't had any experience with this myself, but I've heard about some programs that supposedly allow one to use a CD-RW as a dynamically rewritable drive. (Like a floppy, in other words.) I would expect similar problems with such a configuration. Anybody tried this?
    • I've used some programs that claim to do that, but what they really do is queue up the writes and then burn them when you decide to remove the CD (or when you specifically tell them to burn). I haven't used Windows to burn RWs recently, but at the time this was the closest you got to using a CD-RW like a floppy. Also, I don't know all the details of the filesystems used on CDs, but it's ISO9660 and not FAT. I wouldn't imagine it works the same way, though, because you can burn multiple sessions on a CD-R, and that media wouldn't allow you to rewrite parts previously written.
      Now there's an interesting idea, though: why use a FAT filesystem if you can use a CD's type of filesystem?
      Someone correct me please if I'm a babbling idiot.
    • CD-RW discs are not written with FAT32 or similar filesystem formats, so you wouldn't have the same problems. I believe DirectCD and others use a proprietary format, which is why you must have DirectCD installed on any machine you want to access such a CD from. Windows XP comes with this support built in, including writing.

      -Adam
      • DirectCD doesn't use a proprietary format; it uses the UDF filesystem and packet writing. The 2.4 kernel has read support for such discs, but not write support, due to the lack of kernel packet-writing support for CD-Rs/CD-RWs. For more info [trylinux.com].

        You need to have the DirectCD software installed on Windows to read it because UDF isn't natively supported before WinXP. When you use DirectCD to format a disc, it creates a minimal ISO9660 filesystem containing the DirectCD-reading software.

        I wasn't able to use the UDF filesystem on older CD-ROM hardware, but it should work fine on all the CD-R and DVD-ROM drives.

    • There's no problem like that with CD-RWs, because they use a special file system -- UDF.
      It's a filesystem specifically designed for CD-RWs and would not suffer from the problem the poster describes.
      However, CD-RWs do have a maximum write lifetime and will eventually degrade. Using UDF, though, that takes a long time.
  • by Lenolium ( 110977 ) <rawb&kill-9,net> on Monday February 10, 2003 @05:21PM (#5273802) Homepage
    Flash disks tend to have filesystems specifically designed for them, because flash has very different characteristics from traditional drives (ones can change to zeros, but zeros can't change back to ones unless you erase the entire flash sector -- and writes to a flash sector don't matter much, it's the erases that count).
    A good flash filesystem will ensure that sectors are only erased when absolutely necessary, and will spread the allocation table out across multiple sectors. FAT16 and FAT32 are horrible about this, and will lead to extremely early flash death. So if you are going to use flash, please treat it like flash: even though it has an IDE interface, it is very different from a standard disk on the other end.
    • What are some examples of such a splendid, flash-specific filesystem?

    • That's great to say, but do you think people using Pocket PC machines format their SD cards?

      Does anyone actually know what format Pocket PCs put on their cards?
    • The reason we chose SD/MMC cards in the first place was that they're cheap, ahem, 'reliable', widely available, and easy to write to from a user's PC in the field with no hardware or software development effort on our part.

      It's supposed to be the commodity part of the project, where we just buy something that works.

      The whole issue of '...if you are going to use flash, please treat it like it is flash', as you suggest, is -- I think -- a clear view of the problem, but the heart of it is that Windows only supports FAT on these cards, so that's what I'm stuck with if I want these to work on Windows without a lot of extra expense in time.

      • Windows only supports FAT on these cards, so that's what I'm stuck with if I want these to work on Windows without a lot of extra expense in time.

        FAT is utterly ill-suited for use on flash. In addition to the problem noted in the article, the FAT itself and the root directory are both at fixed locations, and will wear out more quickly than the rest of the disk. Not surprisingly (for a file system designed to run on floppy disks), FAT or root-directory media failures are basically not recoverable.

        The common solution to this problem is to add a flash file system layer (sometimes known as a flash translation layer) underneath FAT. This yields the simplicity of FAT and the wear leveling of a real FFS, but you're likely going to have to pay for it in money if you don't have the time.

        Remember to fire the architect who chose FAT on SD/MMC without having a clue how they work. This is not the kind of problem that should be found when you're already testing your code. As you noticed, you're basically forced to lose 10% capacity, or insert a new expensive layer into your product.

        Finally, in case you haven't noticed, you should also try to minimize the number of writes you do. Use a suitably-sized data buffer, don't flush it too often, and keep files open if you're going to update them many times.

  • In the olden days of PDAs, we didn't have Flash RAM. Instead, we used so-called SRAM, which was just RAM that had to be constantly powered, like the RAM in your computer. This was great because you could write to the card a practically infinite number of times -- the downside was that it was super-expensive.

    I realize that SRAM wouldn't be viable in modern-style cards because of the requirement of a battery to power all the memory, but what about a combination system? Some blocks are SRAM with a tiny battery supply (read: embedded zinc-air cell or supercapacitor), while the data storage blocks are Flash.

    I'm not an EE, so I may be not understanding the way allocation tables are stored on a card, but is something like this viable?
    • SRAM, or Static RAM, is still around. It's just that the individual memory cells are bistables formed from two transistors and other bits and pieces. It has the advantage that data is kept valid and accessible as long as power is applied.

      The other option is the cheaper DRAM, where the memory cells are just a capacitor and a transistor. The problem is that the capacitor doesn't hold its charge indefinitely and needs to be refreshed. It's this need for a refresh that is a pain, because it blocks access by other devices and requires extra logic circuits.

      In a modern PC, the cache is typically SRAM, and main memory is DRAM.

      As for "The olden days of PDAs" - well, Palms use exactly this for their primary storage.

      You're right, though: a simple battery-backed SRAM card would be useful for a lot of situations, especially if an external power supply was available most of the time. (My Palm happily keeps 8MB of such RAM valid for months at a time off two AA batteries.)

      Of course, if it's read-only or infrequently updated, then preparing an image in a file and writing that would be a better solution.

      I've seen a flash-based embedded system create a ramdisk to store working files, which avoids the rewrite-the-flash issue.
      • Well, by "olden days" I meant expansion cards, not internal storage :)

        I'm gazing wistfully at my Newton 1MB SRAM PCMCIA card across my desk :)
  • What about with FFS? Is this behaviour only being seen with NTFS and FAT32?
  • by morcheeba ( 260908 ) on Monday February 10, 2003 @05:35PM (#5273935) Journal
    FAT32 is not really suitable for Flash memory, for precisely the too-many-erase/write-cycles reason you've noticed. The usual solution is some sort of wear-leveling algorithm, where blocks are rearranged in physical memory so that they are all erased roughly the same number of times. This can be done with a software translation layer, in the hardware (doubtful on such a small, dumb device, but possible with an IDE interface), or with an alternative file system designed for the purpose (such as JFFS [slashdot.org]).
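    (JFFS wants a raw MTD flash device rather than a block device like an SD card, but on hardware that exposes one, the setup is roughly this; a sketch, with the device names being assumptions:

        flash_eraseall -j /dev/mtd0               # erase, writing JFFS2 cleanmarkers
        mount -t jffs2 /dev/mtdblock0 /mnt/flash  # wear-leveled, log-structured fs

    )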

    If possible, use one of these techniques. If not, can you consolidate all the files into one file (easy if all the files are the same size) and just rewrite portions of that file? That way, the FAT wouldn't need to be updated. Lastly, is there some sort of caching algorithm you can enable that would delay the writes to the FAT and directory until (at best) the time the card needs to be removed?

    p.s. you're not losing space to fragmentation; it's actually the slack space at the end of each file that's doing it (I think you just used the wrong term describing it).
    • I think what he's describing is also called "internal fragmentation [uni-frankfurt.de]" in some circles.

    • Thanks for this idea.

      Since we'd like the convenience of the separate files, and since we'd like bozo users in the field to be able to update the cards (without us writing or distributing any software to accomplish the update), the FAT16 option is most appealing to me. These considerations aside, file consolidation is the best idea so far, and we've discussed it in the group.

      The problem is that I'm not 100% sure that all the varieties of Windows really update the FSInfoSec once for each file rather than once for each cluster added during the file write. That would be stupid, but not out of the question. I spent some time with a free USB snooper I found on sourceforge.org (thanks!), and it looks like the update is once per file on Win2K Pro, but I didn't want to repeat the experiment on '95, '98, ME, XP, and the other flavors of 2K.

      It might work to glob all these files into one massive file, and just seek around in it at runtime in the embedded system. That might take some extra work in the filesystem to be efficient, but if it gets back 26MB of wasted space from the cards it's worth considering.

    • can you consolidate all the files into one file (easy if all files are the same size) and just rewrite portions of that file? That way, the FAT wouldn't need to be updated.

      You'll still be rewriting in place. This is bad, because essentially every write is preceded by a slow erase, which will result in horrible write performance. Also, you'll still be wearing out the same spots, just not the FAT. If you perform wear leveling inside that giant file, then you're really just putting the flash translation layer on top of FAT rather than below it.

      Lastly, is there some sort of caching algorithm you can enable that would delay the write to the FAT+directory until (at best) the time the card needs to be removed?

      Good caching is essential both for performance and wear. Unfortunately, many of these devices don't have physical locks that prevent their sudden removal.

  • by RealTime ( 3392 ) on Monday February 10, 2003 @06:14PM (#5274382)
    This sounds more like a bug in the controllers inside the Flash cards than a problem with the actual choice of filesystem. Most Flash card formats (CompactFlash, MemoryStick, MMC/SD) contain a microcontroller that does wear-leveling and ECC, so logical block zero of the device does not remain physical block zero if that block gets worn out. There are lots of references on the web discussing the microcontrollers in various Flash cards, for example this article [216.239.53.100] (linked via Google cache because the original is a PDF).

    These microcontrollers are precisely the reason why it is not a good idea to use these formats in devices that can be powered off suddenly. Look here (search down for "asynchronous power fail") [linux.org] for a mention of these problems. Elsewhere on the site (and in the JFFS author's other online comments), more discussion of this problem is available, including the JFFS author's own experiments.

    JFFS works with MTD devices, which are flat Flash arrays with no microcontroller (and the JFFS author doesn't plan on supporting ATA-type Flash cards, although it appears others may be working on this). This gives JFFS complete control over journalling, wear-leveling, and error correction. It is able to do these things in a fashion that is robust in the face of asynchronous power failures. The microcontrollers in various Flash cards do not appear to be this sophisticated.

    So, 1) it may not be the choice of filesystem that is the problem, 2) there are documented reasons for not using Flash cards in certain types of systems, and 3) JFFS (and JFFS2), even if they support non-MTD devices now, probably cannot safeguard against the problems in microcontroller-based Flash cards.
    • I suspect they're all the same since most seem to be made by SanDisk, Toshiba, or Panasonic and the others are just 'branded' versions of one of these. With that said...

      From the "MultiMediaCard Product Manual" (© 2000 SanDisk Corporation):

      1.5.3 Endurance

      SanDisk MultiMediaCards have an endurance specification for each sector of 300,000 writes ...With typical applications the endurance limit is not of any practical concern to the vast majority of users.

      1.5.4 Wear Leveling

      SanDisk MultiMediaCards do not require or perform a Wear Level operation.

      • by Anonymous Coward
        If they don't do wear leveling, they are going to be problematic, unless you do the wear leveling yourself.

        I might suggest using CompactFlash instead. As far as I know, all CompactFlash cards do wear leveling, and they use a handy IDE interface, so they should be easy to support.

        On a possibly related note, there was a huge thread on LKML a few days ago dealing with flash cards dying. There may be some information you can glean from the archives. The original post was made on February 2, 2003. At any rate, good luck!

        AC
  • Comment removed based on user account deletion
    • I believe there's internal fragmentation -- which is what you refer to as slack space -- as well as external fragmentation, which you call just fragmentation.
  • Use tar.
    Problem solved.
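    For instance (a sketch; the paths are made up), pack everything into one archive so the card sees a single big file and one directory entry instead of 25,000:

        tar -cf /mnt/card/data.tar files/   # one file, one directory entry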
  • What does the SD Card Association say? What do the manufacturers say when you complain that their cards have failed?

    I suspect that you are killing the cards because the manufacturers do not expect usage patterns like yours; most purchasers of high-capacity SD cards are using them because they want to keep a relatively small number of large files on them (JPEGs in your camera, MP3s in your player, databases on your PDA), not thousands of tiny files.

    Maybe you are outside the design spec, or maybe (especially since the SanDisk and Lexar cards fail differently) the specific cards are bad implementations of the standard, in ways that "normal" users won't ever notice.
    • SanDisk offered to replace the cards (still waiting for the RMA), but ignored my question. Lexar still hasn't replied.

      I agree that my usage pattern is atypical, but since SanDisk is now marketing their Cruzer [sic] 256MB MMC USB plug as a PC file backup device, I'm guessing they'll see a whole bunch of these cards fail before their 'expected' lifetime.

      The least they could do, if they realize this is a problem, is stick a note in their FAQ about filesystem support. The internet is FULL of Pocket PC and Palm users' complaints about getting these cards to last, perform well, and work cross-platform. It seems like it's not too much to ask for them to stick a couple of paragraphs on their web site for the moderately technical user.

      ...Climbs off soapbox... --Jim

  • by zsazsa ( 141679 ) on Monday February 10, 2003 @07:49PM (#5275359) Homepage
    In the Sharp Zaurus Linux PDA community, many have shied away from SanDisk's SD cards. Their 128MB and 256MB cards had many [farplanet.net] problems [google.com] with the Zaurus. Things are supposedly better now, but you still hear about SanDisk SD problems with brand-new cards and the latest Sharp ROM.

    I've used a 128MB Lexar, formatted as FAT16 and things have worked well. Many have used an SD card as main storage [schwag.org] on their Zauruses, formatted as ext2.
  • Is there anything limiting this problem to flash disks? What's preventing the multiple rewriting from damaging a normal hard drive's sectors? I know that the general increase in hard drive failures is normally attributed to pushing the storage envelope, but could this also be a factor?
  • You should use a filesystem that is intended for use with flash devices, such as jffs.
  • SanDisk appears to have a mediocre rep when it comes to camera flash cards. They were nailed for misrepresenting the capacity of their media. After being busted, they pulled that stunt where they conveniently "redefined" what "they" mean by "megabyte".

    I did a quick Google search but couldn't find any articles. It's out there, though; I ran into this when I was selecting media for my first digital camera about two years ago. I also seem to remember certain on-line retailers showing a warning note with SanDisk media about this, ahem, "unusual" definition of megabyte.

    Could be unrelated to the thread problem (sounds like it, from some of the other posts about frequent writes wearing the card out), but it might be worth considering.

  • that when we did the formatting with Linux, it worked? I thought that was one of the funniest parts of all this. When we used mkdosfs to put a FAT32 fs on the cards, it worked (disregard that FAT32 is probably not appropriate -- we discussed this, and I even recommended we try a Linux fs, but since it's an embedded project, jvaigl didn't want to go off and "prove" ext2/3 for this, and I can't blame him).

    (I work with jvaigl.)
