Undelete In Linux

Manuel Arriaga writes "[To the editors: I am not a professional programmer, nor will I ever be one. My income does not depend on my computing/programming skills, and hopefully it never will. So promoting free software I wrote does not help me in any financial way, no matter how indirect. libtrash is free software (GPL2), and I distribute it for free from my website. I have nothing to gain from the increased exposure, except for knowing that I am helping others. And I know slashdot isn't freshmeat... With that out of the way:] I have seen this topic discussed in the LKML multiple times by now, and many more people asking in the newsgroups why "I can't recover my deleted file on GNU/Linux". Here is my answer to that question. libtrash gives Linux a real "trash can". And it has been doing so (with varying degrees of stability) for more than one year now. If you consider it appropriate, make this information public on slashdot."
  • See also... (Score:3, Informative)

    by PDHoss ( 141657 ) on Monday September 30, 2002 @09:35AM (#4358738)
    See this earlier story [] for a similar discussion.

  • by fruey ( 563914 ) on Monday September 30, 2002 @09:39AM (#4358772) Homepage Journal
    Reasons you don't need a recycle bin:
    • Because rm doesn't take -f by default
    • Because delete means delete, not put somewhere until I decide I really don't need it
    • Because you're a Linux user and have a clue
    • Because you're sick of people who restore files from the recycle bin because they think it's some kind of temporary folder
    • Because you don't want anything to do with "recycling", you have /dev/null and you put everything there
    • Because you have a poor machine with less than 4Gb of disk and you need all the space you can get

    I can't believe how many Windows users get caught out when they dual boot my machine into Windows (have to have it for the office because others use my workstation) and find I have disabled the Recycle bin. Haha, more fool them.

    Disclaimer: take with a pinch of salt. If you have sodium issues, take with a pinch of Lo-Salt instead.

    • Yes, you're right. And all the people who contributed to this discussion [] are wrong.

      Nice attitude.
    • by Coplan ( 13643 ) on Monday September 30, 2002 @09:46AM (#4358823) Homepage Journal
      A friend of mine once told me that he would start using Linux when it had a trash can type of thing. His reasoning was that he liked to make sure that his files weren't needed. He'd delete something, then wait about a week or so of regular computer usage before removing it from the "recycle bin". For him, this type of tool is very useful.

      If we want joe-user to use linux, we need silly stuff like this.

      For you and me (and those in the know), we know damn well that you can delete a JPEG without it affecting anything. And if we're in doubt about a file, we know to move it somewhere temporarily. If something breaks, move it back. It's not all that often that you'll be deleting system files (and even then, it's usually configuration files).

      Anyhow, I guess the reality is that a tool like this only needs to be useful to someone. If it is useful to a couple of people, then it's worthy of its existence. It's not like it is a default application. Don't use it if you don't want it. That's the beauty of Open Source: you can do what you want.

      • No, I don't. (Score:5, Insightful)

        by Rui del-Negro ( 531098 ) on Monday September 30, 2002 @10:42AM (#4359259) Homepage
        Technically, you can use a pint mug to drink champagne. But most people prefer to use a champagne glass or a flute.

        Personally, I prefer to simply hit "delete" to move files to a preset temporary directory (which can also remember where those files originally were, and restore them with a couple of clicks) than to have to manually drag them to a directory I created.

        If this kind of "commodity" seems pointless to you, then you probably program by writing machine code with a text editor. ;-)

    • I think a recycle bin is more necessary in Linux than in Windows. In Windows you graphically see exactly which files are being deleted, because everything is graphical.
      In Linux you might do something like rm *a*b*c*.*
      That command can delete anywhere from 0 to all of your files depending on how they are named. And if you accidentally type *a*f*c*.* you might delete the wrong thing. Doesn't happen in Windows, unless you are using Cygwin.
    • by Transient0 ( 175617 ) on Monday September 30, 2002 @10:12AM (#4359003) Homepage
      It is nice to have the cheap sense of superiority that comes with not needing something that someone else needs. Of your reasons, only one seems valid to me:

      >Because you have a poor machine with less than 4Gb of disk and you need all the space you can get

      But still, no matter how long you've been a Linux user, it's still possible to accidentally type "rm core *" rather than "rm core*" and not catch it until half a second after you hit enter and realize that you have irrecoverably destroyed your project (you didn't really want to punish it for segfaulting).
      • But still, no matter how long you've been a Linux user, it's still possible to accidentally type "rm core *" rather than "rm core*" and not catch it until half a second after you hit enter

        I once typed `rm -r logs/old /` instead of `rm -r logs/old/`

        ... as root...

        ... on a production machine ...

        ... that didn't yet have the backup-unit installed (by our colo -- their problem)...

        that was a sucky weekend. (-:

        • Once on an old Ultrix machine I wanted to delete some dot-files, including a subdirectory that started with a dot. So, as root, I typed

          rm -rf .*

          The problem with that was that '.*' included '..' ... so eventually it ascended into the parent directory, and began deleting every file and directory there. That was particularly unfortunate because the parent directory was the root directory!

          Before I realized what it was doing it had wiped out /bin and /etc. And this was our department's file server, so yes I had a sucky weekend too... I couldn't even give the machine a proper shutdown because I'd managed to wipe out that command!
    • by Latent IT ( 121513 ) on Monday September 30, 2002 @10:13AM (#4359007)
      To: Smug Person
      From: Other Smug Person
      Subject: Recycle Bin

      As you may be aware, you feel it necessary to disable a safeguard for your system. We're all aware that you're infallible, and subject to all the perks and benefits that come with that title. However, for the rest of us who realize that mistakes *do* happen...

      In Windows, delete sends things to the recycle bin. Shift+Delete sends it away forever and ever.

      If you're so worried about your disk, change the amount of space the recycle bin can use, you shaved monkey!

      It's partly the attitude of people like you that kills Linux's chances on the desktop. Some people need to use computers, but *don't* work in IT, okay?


      Other Smug Person
    • "Reasons you don't need a recycle bin:"

      Mistakes shouldn't be recoverable? I'll have you know that some of us aren't so adept at Linux that they don't make mistakes. I learned the hard way. Thanks to some newbiness on my part, and lack of a 'recycle bin' on Linux's part, I had to re-upload my entire website.

      You can make fun of me if you like, but you're not going to hurt my feelings over it. Shit happens. A silly mistake shouldn't cost you anything. Anybody who knows anything about good UI design will tell you the same thing. It's fine if you want to turn off the 'recycle bin', I don't care. But to say "You don't need one" is kind of like saying "you don't need a backup parachute".
  • by minus_273 ( 174041 ) <aaaaa.SPAM@yahoo@com> on Monday September 30, 2002 @09:39AM (#4358774) Journal
    why ever would you need a "real" trash can? haven't you used Nautilus and Konqueror? you even have the good fortune of waiting as your file is moved to the ~/gnome.desktop/trash folder!
    • Because libtrash works in Gnome, KDE, OpenStep, the console, and everywhere else - because it uses LD_PRELOAD. A trash can isn't much good if you can't use it most of the time. And libtrash works really well - in fact, it's the ideal replacement for both the Gnome and KDE trash cans.

      Note that some apps (generally network services) don't like preloaded libraries running in their environment. You can easily disable it for such cases, and most of the time you wouldn't want real trashcan support on, say, your mail server anyway :).

      Check libtrash out people - seriously, it rocks. I'm glad the project is getting the exposure it deserves.
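      For the curious, the LD_PRELOAD trick mentioned above can be sketched in a few lines. This is NOT libtrash's actual code - just a minimal, hypothetical interposer whose unlink()/unlinkat() rename files into a local Trash/ directory instead of deleting them, so that a plain `rm` becomes recoverable:

```shell
#!/bin/sh
# Build a tiny shared library that intercepts the libc delete calls,
# then preload it into an ordinary rm. All names here are invented.
mkdir -p Trash
cat > trashstub.c <<'EOF'
#include <stdio.h>
#include <string.h>
#include <libgen.h>

/* Interpose unlink(): rename the file into Trash/ instead of removing it. */
int unlink(const char *path)
{
    char tmp[4096], dest[4200];
    strncpy(tmp, path, sizeof(tmp) - 1);
    tmp[sizeof(tmp) - 1] = '\0';
    snprintf(dest, sizeof(dest), "Trash/%s", basename(tmp));
    return rename(path, dest);          /* same filesystem assumed */
}

/* Modern rm calls unlinkat(); route it through the same wrapper. */
int unlinkat(int dirfd, const char *path, int flags)
{
    (void)dirfd; (void)flags;           /* demo: cwd-relative paths only */
    return unlink(path);
}
EOF
cc -shared -fPIC -o trashstub.c

touch victim.txt
LD_PRELOAD="$PWD/" rm victim.txt   # rm's delete call is intercepted
ls Trash                                         # victim.txt survives here
```

      Because the interposition happens at the C-library level, it works the same in Gnome, KDE, or a bare console - which is presumably why libtrash took this route rather than patching each file manager.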
  • Closer and closer. (Score:3, Informative)

    by FreeLinux ( 555387 ) on Monday September 30, 2002 @09:41AM (#4358788)
    This sounds great but, the web site really doesn't give much information about the specifics of the library. However, it appears that libtrash is effective on the local machine rather than on the file system as a whole. This would mean that files deleted through Samba or NFS would not be recoverable. I could be very wrong about this as there is little detail on the website.

    But, this issue was discussed in a recent Ask Slashdot. I posted this comment [], regarding Novell's Salvage utility. A true file system level undelete utility such as this would be FANTASTIC. Is it possible to adapt libtrash to accomplish this?
    • by jjares ( 141954 )
      This is actually a replacement for unlink(). I just tested, and it works with Samba shares as expected.
    • No! It works good! (Score:5, Informative)

      by fireboy1919 ( 257783 ) <rustyp AT freeshell DOT org> on Monday September 30, 2002 @11:43AM (#4359760) Homepage Journal
      I deleted an important directory in Linux about a year ago and decided that having no trashcan utility was a bad idea.

      For a while I tried a bash utility for it... boy, was that a bad idea - it wouldn't let me delete files with spaces in the names, and occasionally it would totally wig out when given certain input. So I switched to libtrash.

      In answer to your comment, I want you to think about that. What you're suggesting, basically, is that the system works exclusively on local filesystems, which means that it is specifically programmed NOT to interact with the file system itself, but rather with the drivers for local devices. Seems rather far-fetched, don't you think? It works fine on anything you've got mounted.

      Here's how it works: it remaps the glibc function unlink() to move files to a new directory - that being different for each user and dependent upon environment variables (mine's called ~/.trash). Certain directories can be marked as "you can't delete from here." ~/.trash, for example, is one of these directories. There is also an included script to keep the ~/.trash directory below a certain size (I put this script in my cron.hourly).

      There are currently two downsides:
      1) The rule that the cleaning utility uses isn't very good right now. It picks the biggest over the oldest files to delete (at least, that's what it used to do - not really a big change so maybe someone's fixed it by now).
      2) Using it as root can be disastrous during boot-up. Starting it at EXACTLY the right time after boot-up is difficult, and the only way I've heard of doing it is to replace root's profile file after booting, which is a definite risk, because if the power cuts out and you still have that profile in place, the machine probably won't boot (mine wouldn't).
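      A trash-pruning cron script of the kind described above could look something like this. It's only a sketch: the 10 MB limit, the ~/.trash path, and the oldest-first rule are my assumptions, not libtrash's actual defaults (which, per downside #1, apparently prune biggest-first):

```shell
#!/bin/sh
# Prune ~/.trash down to LIMIT_KB by removing the oldest entries first.
TRASH="$HOME/.trash"
LIMIT_KB=10240                       # assumed limit: 10 MB
mkdir -p "$TRASH"
while [ "$(du -sk "$TRASH" | cut -f1)" -gt "$LIMIT_KB" ]; do
    # ls -t lists newest first, so the last line is the oldest entry
    oldest=$(ls -t "$TRASH" | tail -n 1)
    [ -n "$oldest" ] || break        # empty trash: nothing to prune
    rm -rf "$TRASH/$oldest"
done
```

      Dropped into cron.hourly, this keeps the trash bounded while preserving the most recently deleted files the longest.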
  • NOOOO!!!! (Score:4, Funny)

    by painehope ( 580569 ) on Monday September 30, 2002 @09:42AM (#4358793)
    now users can recover their files when i delete them?! that takes all the fun out of linux...
    fuck it, hey, boss, if this patch becomes mainstream, can we move to solaris?

  • by redhotchil ( 44670 ) on Monday September 30, 2002 @09:43AM (#4358802) Homepage Journal
    now we have almost everything we need:

    [x] Trashcan support
    [ ] Easy to use Windowing system
    [ ] Standard software install system
    [ ] Easy to use Windows filesharing
    [ ] Easy support for video files and DVD
    [ ] Desktop company support

    Way to go LINUX!
    • by FreeLinux ( 555387 ) on Monday September 30, 2002 @09:51AM (#4358856)
      [x] Trashcan support
      [X] Easy to use Windowing system - KDE
      [X] Standard software install system - LSB, Red Hat, Mandrake, Suse
      [X] Easy to use Windows filesharing - KDE, Samba
      [ ] Easy support for video files and DVD - No answer
      [X] Desktop company support - Red Hat, The Kompany
      • [X] Easy support for video files and DVD - Xine works for me like a charm :)

      • by oever ( 233119 ) on Monday September 30, 2002 @10:01AM (#4358920) Homepage
        [X] Easy support for video files and DVD - mplayer

        I've installed mplayer on two SuSE 8.0 Linux machines, and it's amazing. You can see DVDs, AVIs, and even look at Microsoft media streams.
        e.g. 'mplayer mms://'

        And how easy do you want it? You can easily make an icon on the desktop that starts mplayer on the dvd currently in the drive.

        So, visit the site and rejoice.
        • I don't consider "Easy support" being mplayer.

          Libraries, different configuration options (configure --with-gui), and configuration files...

          I use it daily, I love it, but I don't think most "everyday users" would think it was easy.
      • by Josuah ( 26407 ) on Monday September 30, 2002 @10:13AM (#4359009) Homepage
        [X] Easy to use Windowing system - KDE

        Um, KDE is really nice and my windowing system/manager of choice under Linux. But it's really not so "easy to use" "all the time" to the degree that Windows and Mac OS are.

        [X] Standard software install system - LSB, Red Hat, Mandrake, Suse

        By listing four things here, you've gone right ahead and said that the software install system is _not_ standard. There is a very different user experience for each distribution's install, enough to make the average user think he is installing a different OS for each one. I know my mom thinks Red Hat is an OS.

        [X] Easy to use Windows filesharing - KDE, Samba

        I can't say Samba is easy to use Windows filesharing. Easy to use Windows filesharing is clicking on a button that says share files and seeing that folder show up in Network Neighborhood. It's not SWAT.
        • By listing four things here, you've gone right ahead and said that the software install system is _not_ standard.

          Then what OS would you recommend that *does* have a standard software installation mechanism? Windows certainly doesn't count - I've used three entirely different installer applications just today...

        • "[X] Standard software install system - LSB, Red Hat, Mandrake, Suse"

          By listing four things here, you've gone right ahead and said that the software install system is _not_ standard.

          I think what the poster meant was that Red Hat, Mandrake and Suse all conform to the Linux Standards Base. Which defines the standard packaging (software install and maintenance) system for Linux. Of course, you can pick your own front end.

          "[X] Easy to use Windows filesharing - KDE, Samba"

          Easy to use Windows filesharing is clicking on a button that says share files and seeing that folder show up in Network Neighborhood. It's not SWAT

          Damn straight, I agree. But KDE does have this ability - look for ksambakonquiplugin (shit name, I know) on []. It's too bad the distros don't ship with it turned on by default.
        • By listing four things here, you've gone right ahead and said that the software install system is _not_ standard. There is a very different user experience for each distribution's install, enough to make the average user think he is installing a different OS for each one. I know my mom thinks Red Hat is an OS.

          For the purpose of this complaint, your mom is basically right. Microsoft doesn't make a package management system that works on multiple corporations' distributions of the OS, so why should Red Hat? Just pretend Red Hat is an OS and your complaint goes away. Just because both kernels are signed "Torvalds" doesn't mean they're the same OS. Heck, Red Hat even changes the kernel anyway.

          I can't say Samba is easy to use Windows filesharing. Easy to use Windows filesharing is clicking on a button that says share files and seeing that folder show up in Network Neighborhood. It's not SWAT.

          Maybe you're describing Mac OS X Windows file sharing, because it's not that easy on any Microsoft OS. Sure, that's all that you're supposed to have to do. But half the time it doesn't work. "Okay, enter this name and password to get my files." "Uh - it's just asking me for a password, no name." That's if you can somehow magically get the computers to see each other.

          You can come back and say "you must have done it wrong, TRACK-YOUR-POSITION", but if there was anything for me to screw up, that just proves it's not as easy as you claim it is.

        • > Easy to use Windows filesharing is clicking on a button that
          > says share files and seeing that folder show up in Network
          > Neighborhood.
          Like one can do in Konqueror-3.1 (and in Mandrake-9.0's Konqueror) ?

          Yes, that box can get a checkmark now.

      • by Fastball ( 91927 ) on Monday September 30, 2002 @10:17AM (#4359039) Journal
        [X] Easy to use Windowing system - KDE

        You mean GNOME, right?

      • [x] Easy support for video files and DVD

        You could do a LOT worse than mplayer []..

      • [ ] Easy support for video files and DVD - No answer

        Take a look at []'s client. Easiest DVD player I've worked with...

        From the site,

        The VideoLAN Server can stream video read from a hard disk, a DVD player, a satellite card or an MPEG 2 compression card, and unicast or multicast it on a network. The VideoLAN Client can read the stream from the network and display it. It can also be used to display video read locally on the computer: DVDs, VCDs, MPEG and DivX files, and from a satellite card. It is multi-platform: Linux, Windows, Mac OS X, BeOS, BSD, Solaris, QNX, iPaq... The VideoLAN Client and Server now have full IPv6 support.

        VideoLAN is free software, and is released under the GNU General Public License.

  • by netphilter ( 549954 ) on Monday September 30, 2002 @09:43AM (#4358803) Homepage Journal
    Come on, recycle bins are no fun at all. Where's the fun in having the files you "delete" stored in a folder until you REALLY want to delete them. It's much more fun to delete files knowing that there's a chance you may need them in the future and have no way of retrieving them (unless you're responsible and back your files up, but then again, what's fun about being responsible?).
  • Are you really sure that you really, really, really want to delete this file?

    Maybe you ought to take a stress pill and think things over.
  • ...a GOOD thing. (Score:5, Insightful)

    by phigga ( 526030 ) < minus berry> on Monday September 30, 2002 @09:45AM (#4358818)
    I'm sure that there will be some die-hard Linux users out there that scoff at the idea of a Linux trashcan.

    That being said, though, I think it's a great idea. It would be a tremendous fail-safe to those just beginning to learn the survival commands (more specifically, rm).

    Plus, to all you old folks out there flipping me the eBird right now... I'll bet 70% of you (yet another made-up statistic) have accidentally lost something by using rm without realizing which directory you were in. Happens to all of us. Wouldn't "sifting through the trash" be so much easier than looking through yesterday's tape backup?

    Well, maybe the tape is penance for screwing up the rm.
    • Does undelete degrade performance? Is the performance hit worth the convenience if it's never used?

      I'm part of a team that runs our network. Specifically, we have two samba servers that store close to 70GB of files for over 60 users. The users cover both ends of the computer literate spectrum. On average I restore a file from tape less than once a month.

      If you feel this type of thing is needed make sure it's an option that can be turned off.

      • I'll answer your subject:

        Not very often. But when I do need it, it's an emergency. In DOS/Windows, even if I deleted for real (not just trashcanned the file) I can bail out of Windows, go to DOS, and use Norton Undelete to recover the file(s) in seconds.

        The lack of an undelete does make me leery of *NIX as a work platform, because dumb typing mistakes DO happen, bad aim with the mouse DOES happen, and most common of all, brain cramps happen. These deleted-file errors are all quickly recoverable in DOS/Windows, without needing to know how good or recent your backups are (got a backup of that project you JUST finished??), let alone root thru them.

        Anyone got a technical explanation of why *NIX doesn't have a true undelete? this fellow's trashcan is a good start, but we've all emptied the trash by habit or reflex when we didn't really want to...

        • by theCoder ( 23772 ) on Monday September 30, 2002 @12:43PM (#4360395) Homepage Journal
          As I understand it, Norton undelete works through a quirk (intentional or not, I don't know) in the FAT filesystem. When you delete a file in FAT, the only thing that happens is the FAT entry for that file is removed. And even then, it's not really removed, only marked as removed (if you're a C programmer, think of it as overwriting the first byte of the char[] holding the file's name with a special marker byte). Norton undelete looks around in the FAT for entries that are marked this way. Since most of the entry's data (and all the file's data) is still intact, it can restore the file easily.

          The trick is that once an entry has been marked, it, and the sectors on the disk the file uses can be used by a different file. So the longer you wait, the less likely you'll be able to recover the file.

          There's no undelete for Linux or UNIX because none of the filesystems support that feature, AFAIK. Also note that Microsoft's NTFS doesn't support that feature either. It's probably just too hard to integrate with everything else (like journaling) they've put in the FS. Personally, I'd like to see undelete put back into the FS layer (that's where it belongs, not the trashcan hack that everyone peddles). What would be really neat would be versioning (like a CVS filesystem) so I could revert to an earlier version of my file. This wouldn't necessarily take more space since the FS could overwrite older data as it needed to (the older versions/deleted files would appear to be free space).
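          The FAT quirk described above can be poked at without a real filesystem. The marker byte FAT writes over the first character of a deleted entry's name is 0xE5; here entry.bin just stands in for the 11-byte 8.3 name field of one directory entry, so no mkfs or mounting is involved:

```shell
#!/bin/sh
# "Delete" a fake FAT directory entry: overwrite only the first byte of
# the 8.3 name with the 0xE5 marker. The rest of the name survives,
# which is exactly what undelete tools scan for.
printf 'README  TXT' > entry.bin                     # the 8.3 name field
printf '\345' | dd of=entry.bin bs=1 count=1 conv=notrunc 2>/dev/null
od -An -c entry.bin   # first byte is now 0xE5; "EADME  TXT" is untouched
```

          An undelete tool then only has to guess the first character of the name back and check that the file's clusters haven't been reused yet.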
    • Re:...a GOOD thing. (Score:5, Informative)

      by oever ( 233119 ) on Monday September 30, 2002 @10:26AM (#4359114) Homepage
      Here's a simple trashcan for you.
      Put this in your ~/.bashrc and type 'source ~/.bashrc'.
      I'm sure somebody can improve upon this by adding more checks, but basic functionality is there.

      function rm () {
          if [ $# -gt 0 ]; then
              if [ ! -e ~/Trash ]; then
                  mkdir ~/Trash
              fi
              mv "$@" ~/Trash
          fi
      }

      TODO:
      - set file permissions for the trash can
      - check that the files can be removed before moving them
      - make sure you have a trashcan for each partition and move files to the trash on the same hard drive when removing them
      - finish the TODO

    • Take a look at the man page for chattr and lsattr. Making a file undeletable has been on the table for a long time.

      But, like most other projects out there, it got to version 0.0.2-beta3-pre4 and stalled.

  • by Anonymous Coward on Monday September 30, 2002 @09:50AM (#4358846)
    Ext2fs-Undeletion-1.html
    UNERASE.txt
    http://www.securit r_Linux.html
  • Oh my. (Score:2, Insightful)

    by Anonymous Coward
    What's wrong with aliasing the rm command to something that simply moves the files in question to a folder called 'trash' and then, optionally, setting up cron to empty the folder's contents periodically?

  • by GreyyGuy ( 91753 ) on Monday September 30, 2002 @09:51AM (#4358855)
    I don't understand why there are so many people saying this is bad or implying that people who use Linux don't need it because they are so good. I must have missed the evolutionary step that made all Linux users so perfect that they never make mistakes. A guard against mistakes is all the Recycle Bin is.

    Sure, some people use it as a temporary folder, but so what? There will always be people who use things other than the way they are intended. If it works for them, so what? If it is so painful for you to contemplate, don't look at it.
    • by Znork ( 31774 ) on Monday September 30, 2002 @10:08AM (#4358976)
      Because a lot of us _mean_ rm when we type rm. Otherwise we would have used mv. Or used Nautilus or some other filemanager that by default puts stuff in the trashcan.

      rm means 'remove'. Not 'move to trash'. Think of it as the 'empty trashcan' command. Would you like a trashcan that moves things to yet another trashcan when you empty it?

      If you're uncertain about whether or not to remove something, don't use rm. You're entirely free to rm /bin/rm if you don't want to use it. Or even mv /bin/rm /tmp if you're uncertain about whether or not to remove it permanently.

      And if you, despite knowing that rm means 'remove', make a mistake, just restore from your backups.
  • README? (Score:3, Insightful)

    by brunes69 ( 86786 ) <> on Monday September 30, 2002 @09:52AM (#4358860) Homepage

    Why is there no README or any other info on your site about this thing? I want to know how it works and how it is different from alias rm='mv ~/.trash', or the KDE trashcan, before I download it. Man, I hate sites like this that expect you to download the package, then untar it, just to read a README file. How hard is it to throw it on your website with a link?

  • by wlugo ( 258046 )
    when I was in college, some people and I did a Linux undelete in the kernel using the ext2 filesystem. The whole procedure is described on []. The problem was we didn't find enough people to support it on later kernels. I think it could be easily ported to ext3.
  • metacommentary! (Score:2, Insightful)

    by fraxas ( 584069 )
    Please ignore the idiots above -- the l1nux-l337 are always a pain in the butt about usability issues. As a response to the ask-slashdot rfe from last week, this works really well.

    As a point of note, those of you complaining about the disclaimer in the article should realize that, if the disclaimer hadn't been there, you would be complaining about how "/. isn't an advertising service, you Window$ Idiot!!111!11!!!11!!"

  • Not a solution (Score:5, Insightful)

    by cpt kangarooski ( 3773 ) on Monday September 30, 2002 @09:59AM (#4358904) Homepage
    While a trash can is nice to have, this doesn't fundamentally address the issue of retrievability of accidentally deleted information. That is, there is still going to be a step where information is going to be classed as unretrievable even when it COULD be retrieved (i.e. when the trash is emptied).

    Clearly users appear to want to be able to correct mistakes that they've made -- perhaps even those that were not immediately apparent as being mistakes at the time -- for as long as possible. A trash is a step in that direction, but simply does not go far enough.

    My proposal is this: first, it should be recognized that when you delete a file, you're really only marking the space where that file was as being available to be overwritten by more data. The original data is still there, but what it consisted of, and where it was, are lost.

    So, let's keep that information in a log so that we can in a very real sense undelete anything that has not yet been overwritten. This log is not especially large, and with modern drive sizes is not a serious concern.

    Then, let's order the overwriting process to favor the maximum preservation of data. So for example this might result in new writes being done to the areas of the oldest deleted files first. Important files might be considered to be worth preserving longer, with importance derived from various factors such as number of accesses, etc. prior to deletion. There's definitely work for some user testing here to determine the optimal method. That's okay.

    If fragmentation is a worry, (bear in mind most people have never heard of it) then defragging software could take into consideration the undelete log and continue to preserve as much of the deleted data as possible when it shifts information around on the disk.

    In any event, the objective is to forestall, for as long as possible, the day when you have to tell a user who wants to undelete a file that it's gone for good. Not just a little longer, which is what the trash solution does, but AS LONG AS POSSIBLE.
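    The proposal above can be mocked up in a few lines: a log records deletions in order, and space is reclaimed oldest-first only when it's actually needed. Everything here (the log format, the function names) is invented for illustration; a real implementation would of course live in the filesystem layer, tracking blocks rather than whole files:

```shell
#!/bin/sh
# Toy deletion log: "deleting" only records the file; real removal is
# deferred until space must be reclaimed, oldest deletion first.
DELETE_LOG="./delete.log"
: > "$DELETE_LOG"

soft_delete() {
    # Append in deletion order, so the head of the log is always the oldest
    printf '%s %s\n' "$(date +%s)" "$1" >> "$DELETE_LOG"
}

reclaim_oldest() {
    # Reclaim (here: really remove) the oldest "deleted" file first
    oldest=$(head -n 1 "$DELETE_LOG" | cut -d' ' -f2-)
    rm -f "$oldest"
    tail -n +2 "$DELETE_LOG" > "$DELETE_LOG.tmp" && mv "$DELETE_LOG.tmp" "$DELETE_LOG"
    echo "$oldest"
}

touch old.txt new.txt
soft_delete old.txt
soft_delete new.txt
reclaim_oldest    # prints "old.txt": the oldest deletion is reclaimed first
```

    The nice property is the one the parent asks for: a file remains undeletable-from-the-log (and thus recoverable) until the disk genuinely needs its space.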
    • Possible solution. (Score:3, Interesting)

      by FreeLinux ( 555387 )
      I think that you are on to the right solution.

      Perhaps the thing to do would be to use two file tables. The first table would be used normally as it is today. It would represent existing files and provide the correct information regarding space usage etc.

      The second table would only be used by the file system and the recovery utility. The second file table would maintain the information of the files that had been marked for deletion, and the file system would consult this table prior to saves so as not to overwrite the files that were marked for deletion.

      When the disk becomes full, the file system should consult the second file table and overwrite the oldest file that had been marked for deletion.

      Also, the recovery utility could consult the second table, listing the files that were marked for deletion but, still reside on the disk. Files selected for recovery could then be added back to the first, primary file table making them again available for the user.

      I'm not sure how Novell does it but, the above method would yield the same behavior as the Novell system.
      • You run into several problems here. First of all, at the current state of computers, the bottleneck in most machines is the hard disk. What we're doing here is adding additional work for the hard disk, thereby slowing down the computer further. Secondly, by continuing to avoid overwriting data and allowing the drive to fill, you further decrease disk performance. Hard drives generally begin to work more slowly when they become more than half filled, with a more severe and noticeable performance hit at around 80% depending on the drive.

        A more viable solution might be to take into account the above suggestions with the added idea of moving the data to the end of the drive during 'deletion' while still marking the space as available; albeit a new class of available which preserves data integrity based upon importance. This saves you from insane fragmentation and lower disk performance, and allows you to continue to maintain data integrity long after deletion. Two tables is, again, twice the work, but a modified table which takes deleted information viability into account would certainly be useful. Issues such as security and performance are still in question, however, as well as how to implement such a table alongside existing file systems in such a way as to not break functionality or lose data. Backwards compatibility and data security are probably the biggest issues, although preserving file permissions solves half of the security problem. Secure deletion must also be a choice for users eliminating sensitive data who don't want it recovered or viewed ever again.

    • Novell Netware's FS worked almost exactly like this. It was a wonderful feature. I don't understand why more implementations have taken this into consideration...
    • I have the solution! And it can be a HUGE moneymaker.

      I propose the e-landfill: an online service. You configure your trashcan to use a daemon process (garbagemand) that automatically ships the contents of the trashcan via a secure protocol (rubbishtruck/garbagetruck, also known as RT/GT) to the e-landfill. There the deleted files can pile up forever, or at least until it is full; then we just open up another landfill!

      Great idea!
  • I was thinking about filer last night, and this morning, we get libtrash. People have always had issues with deleting files. I personally keep a ~/bla/ directory. I move unneeded things there. If I don't need the files after a few months, I trash the directory and recreate it. The concept is still better than an undelete, but I remember deleting some very important files on my first Linux system... like vmlinuz and the /boot directory, because I did not know better.

  • by dfgdfgdfg ( 577386 )
    What we need now is the Gnome (or KDE) panel set LD_PRELOAD so that all application can use libtrash.
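    For reference, LD_PRELOAD just needs to be exported by whatever starts the applications; something like this, assuming a hypothetical install path for libtrash.so:

```shell
# Hypothetical install path; adjust to wherever libtrash.so actually lives.
export LD_PRELOAD=/usr/local/lib/libtrash.so
# Every dynamically linked program started from this shell (or from a
# panel/session manager that exports the variable the same way) now has
# its unlink() calls routed through libtrash.
```

    A panel that exported the variable before spawning applications would give every launched program the trash can for free.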
  • by Myshkin ( 34701 ) on Monday September 30, 2002 @10:04AM (#4358942)
    So, what happens if you send something like your kernel into the recycling bin? Experimenting by randomly moving stuff you don't understand is never a good idea. Sending it to some sort of recycling bin just gives folks a false sense of security and could lead them to completely hosing their entire install.
  • better solution (Score:3, Informative)

    by carpe_noctem ( 457178 ) on Monday September 30, 2002 @10:05AM (#4358954) Homepage Journal
    mkdir ~/trash
    del() { mv "$@" ~/trash/; }
    alias rm="del"
    echo "0 4 1 * * root /bin/rm -rf /home/*/trash/*" >> /etc/crontab

    /me nods

  • Its called backup (Score:3, Insightful)

    by jhines ( 82154 ) <> on Monday September 30, 2002 @10:06AM (#4358964) Homepage
    Don't delete anything, till it has been backed up. You do back up your data, right?
  • Back in like '94 a friend of mine in school wrote an Ext2 undelete program, which of course I can no longer find online... He doesn't have it listed on his webpage any more.

  • Having your users accustomed to "undelete" just makes it that much more harsh when they learn that something deleted from a remote filesystem is irretrievable. "Undelete" creates bad habits.

    • Re:Complacency (Score:3, Insightful)

      by Reziac ( 43301 )
      Actually, I've found it's the other way around. If average users know that every mistake is fatal, they become afraid of making ANY mistakes, and that's when you discover a HD completely filled up with garbage that they didn't dare dispose of.

  • I flipped over to another virtual desktop and found that GNOME has already provided me with a trashcan. (More like Mac than Windows.) I never use it, though.

    If this is a trashcan for command-line rm, I can see how some people might want to use it. Not me, though.

  • by krray ( 605395 ) on Monday September 30, 2002 @10:12AM (#4358999)
    I've been a rabid Linux user from the early days. Today Linux handles DNS, Email, and Web services on my network. It does NOT handle file access for JUST THIS REASON (lack of undelete).

    I'm not worried about *me*. When I delete something I'm fine with it being completely gone. But what about completely clueless network users? As the MIS/IT MGR where I work, having access to "salvage" on the Novell Netware file servers is a wonderful tool for fixing users' mistakes.

    Classic example: last week one user created an Excel spreadsheet to be completed by another user. The second user opened the spreadsheet from Word, modified it, and saved it (as a .XLS file). Excel says it's corrupt (it's a Word document now).

    Getting the inserted table [spreadsheet] from Word back into Excel was next to impossible. Crappy Microsoft programming as usual -- and clueless users to boot. Easiest solution was to salvage the original spreadsheet, instruct the user what NOT to do, and have them re-enter the damn data PROPERLY this time.

    Linux would have left me high and dry. Well, not really, but having to go back to tape backups to simply salvage one file is a pain in the butt.

    I guess Linux will be nothing more than a niche product/market if the "gurus" keep up the attitudes posted here. Wake up and pay attention to what corporate users and admins want and need. Telling me I'm clueless and wrong won't gain more market share (well, for Linux at least) -- I've recently bought another Netware license to cover just this issue for another remote office.
  • Set up CVS on your system (Come on, it's not THAT hard) and use it to archive your important files. THEN back up your CVS archive directory from time to time (You DO have a CD burner, don't you?) I'd also suggest backing up your home directory from time to time as well (You don't run as root, right?)

    The backup thing WILL save your ass eventually and that 1 time is well worth the cost of a CD burner if you don't already have one. Once you get used to CVS or some other version control, you'll wonder how you ever got by without it. Even without the "Oops! I didn't MEAN rm -rf * .txt!" protection it buys you, versioning of your files is damned nice.

  • rm revised! (Score:5, Funny)

    by Fastball ( 91927 ) on Monday September 30, 2002 @10:21AM (#4359071) Journal
    A new command-line parameter has been added to circumvent any and all trash bin implementations.

    -f fuck off with your trash bin, I'm deleting this file.

  • a better solution... (Score:3, Interesting)

    by dutky ( 20510 ) on Monday September 30, 2002 @10:21AM (#4359072) Homepage Journal
    It would be much better if the filesystem were to keep track of the most recent file membership of deleted blocks and ensure that recently deleted blocks would not be immediately reallocated for new files. If the file system did both these things we could simply walk the free block list to recover recently deleted files. The only problem with this scheme is that, in order to be of any real use, you would need to keep a file name in the inode, so that the user could identify the files after undeletion.

    On a similar note, it would be really nice if the filesystem kept a backup of the previous file metadata (specifically, owner, group, and permissions) so that you could "undo" an erroneous chmod or chown.

    The total amount of data that would need to be added to the inode to support this kind of recovery is not all that large: The user, group and permission backups would require only 4 bytes each (total 12 bytes) and the saved filename could make do with as few as 32 bytes (most filenames are much shorter than that, and the saved filename is just a hint to the user). It would be nice to be able to reconstruct the entire file with only the data blocks and the inode, so the inode and data blocks would need to have linkage pointers to associate them together. All told, the inode would need about 64 extra bytes of information and the data blocks would need an extra 8 or 12 bytes of overhead.

    And, yes, I've been thinking about this for a while: it's near the top of my 'to-do' list of neat open-source projects.
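    Pending that kind of filesystem support, the "undo an erroneous chmod/chown" part can at least be approximated in userspace. A toy sketch (not the poster's proposal; names are invented, and it assumes GNU stat):

```shell
# Record owner/group/mode before changing permissions, so it can be undone.
chmod_undoable() {
    mode="$1"; file="$2"
    # GNU stat: %u = uid, %g = gid, %a = octal permission bits
    stat -c '%u %g %a' "$file" > "$file.metabak"
    chmod "$mode" "$file"
}

undo_chmod() {
    file="$1"
    read -r uid gid mode < "$file.metabak"
    chmod "$mode" "$file"          # restore the saved permission bits
    # chown "$uid:$gid" "$file"    # restoring ownership would need root
    rm -f "$file.metabak"
}
```

    A filesystem doing this in the inode would obviously be cleaner: no sidecar files, and ownership restore would work too.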

  • I'm Torn (Score:5, Informative)

    by ReadParse ( 38517 ) <(moc.wocynnuf) (ta) (nhoj)> on Monday September 30, 2002 @10:25AM (#4359104) Homepage
    I know in my heart that there's no need for this on Unix, because you shouldn't run as root AND use rm -rf and THE decide that you shouldn't have done that. There are safeguards in place and, after all, since you're a Linux superuser, you're either good enough that you don't make that kind of mistake or the system isn't important enough for it to really matter.

    Having said that, even though I know how dumb it was, I once accidentally issued `rm -rf /bin`. Funny story, though:

    For some reason or another, I happened into an additional hard disk that I put into my Linux box at work (not a production box). I don't remember how big it was, but it was big enough relative to my primary disk that, when I needed a mount point, I chose /big. That was the first mistake. I have no idea why I felt the need to mount it that close to the root. Although the similarity between "big" and "bin" is obvious in retrospect, it is, after all, retrospect.

    Actually, that wasn't my first mistake. My first mistake was running as root.

    I mounted the disk and played around with it. I suspect that it was my first time playing around with an additional hard disk, so I copied files over and examined "df -k" and so forth, and eventually I guess I decided to unmount it and do it all over again... I probably would have done endless, mindless file copies for the rest of the day, I was so thrilled with it. Hey, I was young.

    This is where it gets embarrassing. Perhaps everybody has some mysterious glitch which adds confusion where there should be none. Yes, I honestly do know the difference between a symlink and a mount... I swear it. But in the very brief period of time that it takes to type a command, I sometimes confuse the two in my mind and try to unmount using the "rm" command. More specifically, "rm -rf".

    I also noticed on that day that we humans have kind of a built-in autocompletion. If you type the first few letters of your last name, you have a tendency to follow through with the rest of it. And that tendency increases dramatically the closer you get to the last letter. The way I noticed this was when I attempted to issue `rm -rf /big` and immediately pressed return (I found that return is also a mysterious part of that autocompletion).

    Just so you know, there are a great many important things in /bin. Among them, all of the shells, chmod/chown, grep, kill, ls (try working without that), mv.... the list goes on and on.

    This story also reminds me of the time I evaluated WS_FTP Server when it first came out. I needed an FTP server so I could go home and work on some files on an NT server. I wanted access to the whole box, so I set up my FTP account's home directory as c:\ -- I had no idea that when I deleted that account it would attempt to delete the user's home directory, even if it was c:\.

    I've never heard a disk thrash like that before or since. And you've never seen anybody turn a box off as quickly as I did when I realized what was going on. Alas, it was too late. Reinstallation and backup restore (yes, I had a backup) commenced immediately. By the way, I've never fully accepted responsibility for that -- I still feel like it should have said "You're about to delete c:\ and all of its subdirectories. Are you sure?" Because I really didn't think it would do that.

    Anyway, my point is that "there, but for the grace of a godlike substance, go you". It's really easy to say we're too good for this, and there's a damn good case that a linux trashcan is not necessary, but for those who want it I think it's a cool piece of code.

    That is all.

    • Hate it when a funny story is screwed up by a blatant typo:

      "you shouldn't run as root AND use rm -rf and THEN decide that you shouldn't have done that."

  • Often, when I clean out my papers, binders, and whatnot, I end up throwing out stuff that I do need. Being able to root through the trash and retrieve it five minutes later when I come to my senses is very convenient.

    Yeah, yeah. I am not leet. :)
  • safedelete (Score:4, Informative)

    by oneeyedman ( 39461 ) on Monday September 30, 2002 @10:33AM (#4359169) Homepage Journal
    After losing eight hours of editing work during a botched backup attempt, I heard about a utility called safedelete. I can't find much on it, but here it is from Ibiblio. Interestingly, the person that told me about this utility (which sets up a trash directory with timed expiration and a system of aliases for rm and related commands) was an old Unix hand, and only secondarily a Linux user. The program works fine in Debian, I can report.

    And I don't get these people saying they are too smart to need an undelete capability. Must be nice!

  • The TCT (Score:4, Informative)

    by schlach ( 228441 ) on Monday September 30, 2002 @11:22AM (#4359558) Journal
    I can't believe no one's mentioned The Coroner's Toolkit. Written by Dan Farmer and Wietse Venema, those crazy kids that wrote SATAN, back in the day. It has all kinds of fun tools for poking around backstage on a *nix box, ostensibly forensics-related work after a machine compromise, but if you accidentally delete something important, you could pretend that someone else broke in and did it. =)

    From the FAQ:

    What the hell is it? The Coroner's Toolkit (TCT) is a collection of tools designed to assist in a forensic examination of a computer. It is primarily designed for Unix systems, but it can [do] some small amount of data collection & analysis from non-Unix disks/media.

    Features: Notable TCT components are the grave-robber tool that captures information, the ils and mactime tools that display access patterns of files dead or alive, the unrm and lazarus tools that recover deleted files, and the findkey tool that recovers cryptographic keys from a running process or from files.

    "Take this object, but beware! It carries a terrible curse!"

    The advantage it has over some recovery options is that it's entirely post-mortem. If you just deleted the boss's laundry-list, you could go download it, build it, and stand a pretty decent chance of recovering your file.

    The disadvantage is that, perhaps like a real autopsy, it's not for the faint of heart...
  • by Ender Ryan ( 79406 ) on Monday September 30, 2002 @11:32AM (#4359656) Journal
    I don't think this is the proper solution. There are a lot of programs that create temp files and unlink them, so something like this is going to clutter up a filesystem really quickly.

    I think undelete should be handled at the application level, i.e. in Konqueror and Nautilus, etc. Maybe alias rm to something else for the command line.

  • Potential gotcha (Score:3, Informative)

    by lpontiac ( 173839 ) on Monday September 30, 2002 @11:37AM (#4359698)

    This appears to work by placing itself ahead of the normal libc when it comes to dynamic library loading. Very neat idea, but it won't work on programs which don't delete files by making calls into the shared library. The most common instance of this will probably be statically linked binaries. On FreeBSD, almost all of /bin (including rm) is statically linked, and it wouldn't surprise me if this was true on a Linux distro or two.

    So be wary of just installing this and playing with rm - you might give yourself a nasty surprise :) You can check whether rm is statically linked by running ldd `which rm`.
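    A quick way to check (the exact wording varies by system, so treat the quoted strings as typical, not guaranteed):

```shell
# A dynamically linked rm lists its shared libraries; for a statically
# linked one, ldd instead reports something like "not a dynamic executable".
ldd "$(which rm)" || true
# file(1) also distinguishes "statically linked" from "dynamically linked":
file "$(which rm)"
```

    If rm turns out to be static, libtrash's LD_PRELOAD trick simply never sees its unlink() calls.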

  • by drwho ( 4190 ) on Monday September 30, 2002 @01:05PM (#4360598) Homepage Journal
    :>\-i yes it looks like line noise or an emoticon, but it's really a shell script. This protects against rm *.

    so cd to all of your really important directories (/, /etc, /bin), and type :>\-i

    what it does is create an empty file named -i

    when the shell expands * the first file it lists is -i, which rm interprets as an option for interactive mode, so you have to confirm each deletion.

    I am the original author of this shell script; consider it GPLd.
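    A demonstration in a scratch directory, in case the mechanics aren't obvious (`: > ./-i` is the same trick, with the leading dash hidden behind `./` instead of a backslash):

```shell
demo=$(mktemp -d) && cd "$demo"
touch important.txt
: > ./-i                 # create the empty decoy file named "-i"
printf '%s\n' *          # the glob lists "-i" before "important.txt"
# so "rm *" expands to "rm -i important.txt": rm takes the decoy as its
# interactive flag and asks for confirmation before deleting anything
```

    It only guards against an unquoted glob in that directory, of course; `rm -rf dir` from one level up sails right past it.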
  • by Salamander ( 33735 ) < minus painter> on Monday September 30, 2002 @01:37PM (#4360931) Homepage Journal

    It shouldn't be all that hard to do this in-kernel, so it doesn't have library-preload dependencies or side effects and catches even stuff that comes into the kernel from unexpected directions. All you need is a dirt-simple filter driver that you push on top of the filesystem to change delete/unlink calls so they move stuff into the trashcan, plus some ioctls to view/empty it.

    Oh, wait, Linux doesn't have filter drivers. For a moment there I forgot we were talking about a "technically superior" OS.

I program, therefore I am.