Programming

Man Deletes His Entire Company With One Line of Bad Code (independent.co.uk) 460

Reader JustAnotherOldGuy writes: Marco Marsala appears to have deleted his entire company with one mistaken piece of code. By accidentally telling his computer to delete everything on his servers, the hosting provider has seemingly removed all trace of his company and the websites he looks after for his customers. Marsala wrote on a Centos help forum, "I run a small hosting provider with more or less 1535 customers and I use Ansible to automate some operations to be run on all servers. Last night I accidentally ran, on all servers, a Bash script with a rm -rf {foo}/{bar} with those variables undefined due to a bug in the code above this line. All servers got deleted and the offsite backups too because the remote storage was mounted just before by the same script (that is a backup maintenance script)." The terse "rm -rf" is so famously destructive that it has become a joke within some computing circles, but not to this guy. Can this finally serve as a textbook example of why you need offsite backups that are physically removed from the systems you're archiving? "rm -rf" would only mark the blocks as free, and if nothing new has been written over them, he should be able to recover nearly all of the data. Something about the story feels weird.
This discussion has been archived. No new comments can be posted.

Man Deletes His Entire Company With One Line of Bad Code

  • Three words (Score:5, Insightful)

    by MPAB ( 1074440 ) on Thursday April 14, 2016 @01:24PM (#51908943)

    Offsite, offline BACKUPS

    • by Nutria ( 679911 )

      Multiple off-site backups. Multiple, rotating off-site backups. Weeks' worth, so even if something happens to the on-site tapes you've still got backups.

      Honestly, WTF is it about the PC/Internet mentality that makes sysadmins soooo stupid? Enterprises figured this out FIFTY YEARS AGO.

      • Re:Three words (Score:5, Insightful)

        by Aighearach ( 97333 ) on Thursday April 14, 2016 @01:51PM (#51909189)

        That's all great, but even a less complete, sloppy backup system would be an improvement here.

        Another thing people don't understand about cloud hosting... you should still have your own self-managed, non-cloud server that holds your images and ideally runs your service during the low-traffic hours. Whatever your lowest-traffic 6 hours of the day are should, in most cases, be traditionally hosted. Cloud is super-duper-awesome-webscale for the peak traffic; no way around that if you have peak traffic hours.

        Personally, I can re-deploy (including the latest database backup) from my dev workstation using a simple rake task.

        Another problem is relying on your hosting company for backups. Never do that. The same fire/earthquake/bash script/volcano that makes the backup necessary would destroy it! Expect the hosting company to have insurance; don't expect them to care if your data gets lost. Especially if it's "user error."

        This has nothing to do with "PC/internet mentality" and everything to do with the latest anti-waterfall, anti-planning, 80% is all that matters mindset. Traditionally, this was easily solved because there was an engineering mindset.

        • In this context, the guy is the cloud provider. His customers, if they're sensible, will have their own backups and so will be able to recover, but they also won't trust his business much if that's their recovery strategy from his incompetence.

          Even with online backups, there's no way that this should happen. The backup system should be taking read-only snapshots at periodic intervals, so even if you rm -rf you'll only delete the live data and be able to revert to the snapshot from an hour ago.
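
          For instance, on ZFS or btrfs that's a one-liner you can put in cron; the dataset/subvolume names below are made up, this is just a sketch:

              # hourly read-only snapshot of the dataset holding customer data
              zfs snapshot tank/customers@$(date +%Y-%m-%d_%H%M)

              # or, on btrfs, a read-only snapshot of a subvolume
              btrfs subvolume snapshot -r /srv/customers /srv/.snapshots/$(date +%Y-%m-%d_%H%M)

          A stray rm -rf on the live tree can't touch those: the snapshots are read-only, and on ZFS they aren't even in the normal directory namespace.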

          • In my experience, most of the customers of small hosting companies are paying for fully managed servers, which includes the backups. Most of the customers won't have any backup other than the code they started with. And they wouldn't know how to make a backup any more than they would know how to shoot a fireball spell out of a chopstick.

            This is compounded by human nature applying "trust" based on the quality of the personal relationship you have. If you have a nice conversation, by the end they really reall

        • Traditionally, this was easily solved because there was an engineering mindset.

          You seem to be implying that data loss was less common in the "Good ole' days", when all sys admins were highly trained engineers. That is almost certainly untrue, and based on false nostalgia. Backups are much easier today, with reliable high-capacity storage, journaling file systems, ubiquitous connectivity, and plenty of off-the-shelf software solutions.

          • Some projects I worked on in the 90s still have tape archives of that data.

            You can easily have a situation where the backup tools have improved, and there is less overall data loss now, but that the mindset now is sloppy and leads to a lot of errors of types that were less common in the past.

            In the past when you did it sloppy, you'd get called out on it; and sometimes it still sucked, because PHB. But when that was the case, it was at least known and accepted that it was technically inferior to not have cor

            • Where did you work where that mentality didn't exist? I worked for quite a few very large organisations back in the day and "put it live, we'll fix the bugs as we go" was the order of the day, usually after 2 years of shambolic waterfall development and ever-changing requirements.

            • In the past when you did it sloppy, you'd get called out on it

              I have been in tech for 30+ years, and I have seen no evidence whatsoever that sys admins were less sloppy in the past, nor do I believe that management was better at "calling them out" when they made mistakes. Backups and reliability in particular are way better today.

              Every generation tends to believe that young'ins are dumber and lazier than they were. They are usually wrong.

      • by lgw ( 121541 )

        I have to disagree here a bit. Not with the idea of doing backups -- everyone should -- but that's looking at only half the problem. It's the right solution for customer data, but not for all the code and other materials that make your web site happen.

        I've seen this problem a lot: all the work product that makes a web presence happen gets done on the hosted server. That's beyond stupid - that's failing to even understand your job.

        All the work that goes into your hosted web site -- your store, you

      • by tnk1 ( 899206 )

        Offsite, tape backups aren't even really all that necessary. You just need a backup that you can't delete with one command from within the system.

        You could use AWS S3, and just use something like Glacier to back up your data. Since it takes like 4 hours for it to be rotated back into being online, you have about the same effect.

        Also, while offsite backups are useful, for a host with 1,535 customers, who are all making changes, even if you have a daily offsite tape backup, you could find yourse

      • Minimums:

        3 Copies
        2 Locations
        2 Formats
        2 Mediums

        Copies: two local, one remote
        Locations: geographically distinct
        Formats: natural, raw, compressed, etc.
        Mediums: SATA, USB, tape, SAN manufacturer, etc.

        By Minimum I mean bare minimum. The reality is, there should be cascading copies being made, and Long Term Archiving able to restore to a set point in time. For Copies you'll need at least three, more likely more versions (date specific). You should separate your copies geographically so that when California gets the big on

    • by flopsquad ( 3518045 ) on Thursday April 14, 2016 @02:08PM (#51909319)

      Offsite, offline BACKUPS

      Would not have helped in this situation. His typo resulted in this command:

      "rm -rf --no-preserve-root --write-zeroes --shred-mbr --exec-all-ssh-hosts --douse-hydrofluoric --high-velocity-eject-removable-media --carpet-bomb-offsite-backup --salt-earth"

      Which, I mean, who hasn't accidentally done that? The keys are like right next to each other.

      • by Billy O'Connor ( 3638647 ) on Thursday April 14, 2016 @02:43PM (#51909639)
        I have this aliased to 'sl'. Keeps me on my toes.
      • Offsite, offline BACKUPS

        Would not have helped in this situation. His typo resulted in this command: "rm -rf --no-preserve-root --write-zeroes --shred-mbr --exec-all-ssh-hosts --douse-hydrofluoric --high-velocity-eject-removable-media --carpet-bomb-offsite-backup --salt-earth" Which, I mean, who hasn't accidentally done that? The keys are like right next to each other.

        Man, I haven't laughed out loud like that in a long time. Thank you for that.

    • Three words (Score:1, Redundant)
      Offsite, offline BACKUPS

      Make them Redundant backups too? Good idea.

    • by hey! ( 33014 )

      Four words: filesystem with automatic snapshots.

      I've never admined a major customer Linux installation myself, but as a developer I've been called in to rescue customers who messed up their databases, and let me tell you, being able to root through the transaction log and undo mistakes like "DELETE FROM foo WHERE conditionThatIsAlwaysTrue" is a lifesaver. Oracle, which is a company I despise for a number of reasons, does a really good job of that.

      The rule for production systems should be "never work withou

  • --no-preserve-root (Score:5, Informative)

    by zopper ( 4044367 ) on Thursday April 14, 2016 @01:26PM (#51908957)
    Does he use --no-preserve-root by default? I think that safeguard has been there for many years. Of course, if his servers are running on something from 2004, then his rm might be without it...
    • by mysidia ( 191772 )
      This prevents the root itself from being deleted, but you can still do rm -rf /* even without --no-preserve-root. There are variations which still accidentally cause a full system deletion, even with this safeguard in place.
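
      A quick illustration of the difference (obviously don't run the second line; the comments describe default GNU coreutils behaviour):

          # --preserve-root has been the default for years; rm refuses "/" itself:
          rm -rf /        # rm: it is dangerous to operate recursively on '/'

          # but the shell expands the glob before rm ever sees it, so rm gets
          # /bin /etc /home ... as separate arguments and happily deletes them:
          rm -rf /*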
  • by anlag ( 1917070 ) on Thursday April 14, 2016 @01:27PM (#51908977)
    I saw the post on ServerFault, and while the original scenario could have happened, the OP's follow-up blunder of reversing the input and output parameters of dd when trying to preserve the disk seemed just a wee bit too unlikely. I looked at the article to see if there was any additional data to suggest this was real, but it seems entirely based on the SF thread. Until corroborated, I'm going to call BS.
    • by crunchygranola ( 1954152 ) on Thursday April 14, 2016 @05:17PM (#51911011)

      My operating theory is that the guy is constructing an alibi. Perhaps he has gotten wind of an investigation and wants to look like a hapless idiot and not someone engaged in destroying evidence.

  • by Anrego ( 830717 ) * on Thursday April 14, 2016 @01:27PM (#51908979)

    This is borderline bait at this point.

    Can this example finally serve as a textbook example of why you need to make offsite backups that are physically removed from the systems you're archiving?

    There are plenty of examples already, and keeping a set of backups physically disconnected from running infrastructure is pretty well established practice, with random software bugs and screw-ups being just one of many reasons. That said, people will continue to have all their backups fully accessible (and destroyable) or just not back things up at all, and things like this will continue to happen.

    Guy can possibly recover the data, but the company is probably still screwed reputation wise.

    • There are plenty of examples already and keeping a set of backups physically disconnected from running infrastructure is pretty well established practice

      This seems to be more of a case for multiple backups instead of online vs. offline backups. The way I read the summary, it looks like the bug occurred after mounting the backup, which could happen in any poorly coded scenario regardless of how securely you keep your offline backups.

      • by Anrego ( 830717 ) *

        Right, at minimum there should be two sets, and both should never be connected at the same time for exactly this kinda reason.
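
        Something as dumb as alternating by week number already gives you that property; a rough sketch (the volume labels and paths are made up):

            # mount only one of two backup disks per run, never both
            week=$(date +%V)                          # ISO week number, e.g. "15"
            if [ $(( 10#$week % 2 )) -eq 0 ]; then label=backup-a; else label=backup-b; fi
            mount "/dev/disk/by-label/$label" /mnt/backup
            rsync -a --delete /srv/ /mnt/backup/srv/
            umount /mnt/backup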

    • There are plenty of examples already and keeping a set of backups physically disconnected from running infrastructure is pretty well established practice

      Pixar circa Toy Story 2 springs to mind.

      https://www.techdirt.com/artic... [techdirt.com]

  • Empathy (Score:5, Funny)

    by The-Ixian ( 168184 ) on Thursday April 14, 2016 @01:28PM (#51908991)

    I have that cold feeling in my stomach just reading this summary. ick.

    I did something similar (though not quite so destructive) nearly 20 years ago when I was first learning Linux.

    In my case I was trying to get rid of all the hidden files in root's (/root) home dir using 'rm -rf .*'

    Guess what that did?

    Yeah, that wasn't a highlight of my career...

    • by c ( 8461 )

      I did something similar (though not quite so destructive) nearly 20 years ago when I was first learning Linux.

      Same here. Thought I was in /tmp, was actually in /, and did an "rm -rf *".

      Fortunately, things were a bit slower back then and glob ordering being what it is I was able to ctrl-C it before it got further than /bin. With rcp being in /usr/bin/, I was able to (carefully) recover from another system.

      • by cruff ( 171569 )

        Fortunately, things were a bit slower back then and glob ordering being what it is I was able to ctrl-C it before it got further than /bin. With rcp being in /usr/bin/, I was able to (carefully) recover from another system.

        I also did that years ago on a Sun 1 system, only got part way through /bin. Recovered the contents of /bin from a release tape. Learned to be a bit careful after that.

    • My turn.

      I had extracted a tarball into my home directory. I was done with it and its contents and wanted to remove them. Knowing a tarball of foobar.tar.gz typically extracts to ./foobar, I typed:

      rm foo[TAB]* -rf

      I expected bash to fill in up to the . in foobar.tar.gz; instead, somehow I hit a space between [TAB] and *, executing the command rm foobar * -rf on my entire home directory (I meant to execute rm foobar* -rf). And this was before I knew how to do data recovery.

      Similar misu

    • Well, I have good news and bad news. The good news is that I've removed all of the hidden files.

  • Fun thing about TRIM (Score:5, Informative)

    by CajunArson ( 465943 ) on Thursday April 14, 2016 @01:28PM (#51908995) Journal

    While this guy was most likely using traditional HDDs where block level recovery is a possibility, for those of you using SSDs that have TRIM properly enabled, don't expect to be able to recover deleted files from the same drive unless you are really really fast.

    TRIM automatically zeros the blocks of deleted files and they are GONE aside from vague sci-fi and probably nonexistent NSA-type forensics.

    • by Rockoon ( 1252108 ) on Thursday April 14, 2016 @02:03PM (#51909281)
      When the OS sends a trim command, it includes information about what the logical sector should look like if an attempt is made to read it again. IIRC the options are zeros, ones, and random.

      Without trim the SSD has to preserve the entire logical block device it's emulating, i.e. if you have a 64GB drive then even if it only has 4KB of "files" on it, the device still has to preserve all 64GB, because it doesn't even know what a file is, let alone that you deleted one.

      With trim the SSD only has to preserve what the OS told it was important to preserve. So instead of preserving 64GB of data it only has to preserve your 4KB of data. Trim marks logical sectors as don't-preserve.

      What the SSD will not do is overwrite trimmed physical sectors just because they were trimmed. In fact, that data could linger there for years even with a high amount of read/write activity, because SSDs only erase entire physical blocks, not just the subsectors within blocks that were trimmed.

      So recovery is not sci-fi. Recovery is a fact. What can't be done is recovering the data via commands that target the logical rather than the physical device.
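
      If you want to know which case you're in before you ever need recovery, you can check whether discard is actually in effect (Linux; the device name is just an example):

          lsblk --discard /dev/sda                     # non-zero DISC-GRAN/DISC-MAX = device supports TRIM
          findmnt -o TARGET,OPTIONS | grep discard     # filesystems mounted with continuous discard
          sudo fstrim -v /                             # or trim on demand / from a periodic timer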
  • A couple of cheap Kimsufi servers from OVH for remote backup, in the EU and in Canada?

    • by SumDog ( 466607 )

      Read the article. He claimed to have off-site backups in other countries, but they were mounted.

      But also read the note under the summary. This whole story is probably bullshit.

      • by tnk1 ( 899206 )

        It probably is bullshit. Who fucking mounts servers in another country to do the backups to directly?

        You archive and compress that shit locally and then move it to the remote server. That prevents your daily backup from taking 48 hours to complete and helps considerably on those data transfer charges.
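
        I.e. something as simple as this (host and paths are made up), pushed one-way so the remote end is never mounted on the production box:

            tar czf "/var/backups/site-$(date +%F).tar.gz" /srv/www /var/lib/mysql-dumps
            rsync -a "/var/backups/site-$(date +%F).tar.gz" backup@offsite.example.net:/archive/

        Even better is having the backup host pull, so production has no credentials to the archive at all.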

        Having a simple archive and transfer via FTP or something alone could have prevented him from deleting the remotes with one command. I have trouble believing he set up a NFS or other remote volume to another

  • by Anonymous Coward

    This has such a smell of BS around it, given the fact that backups are indeed offsite and that a company has more than 1 server, etc. Even my own simple setup consisting of a PC, laptop, tablet, QNAP and some external HDDs and sticks is impossible to delete with 1 script. Total bollocks.

    Wonder if he found incriminating material or has gambling debts; far more plausible.

  • manishs (Score:5, Insightful)

    by Verdatum ( 1257828 ) on Thursday April 14, 2016 @01:40PM (#51909095)
    Manishs, you seem to actually critically read articles before posting them, and you actually provide insight after the summary. What is up with that?
  • Why in hell is he running scripts out of Ansible? Why are those scripts not run first on a QA system that's a block-for-block clone of production? Finally, what idiot thinks that some mounted drives he copies stuff to are a backup system?

    Tape or disk, I do not care; just treat disk as tape, and plenty of backup systems are more than happy to do just that. Rsync is not, nor will it ever be, a backup; snapshots are not a backup; some script some guy wrote that works OK is not a backup. Now they can all help to meet your

    • by jcdr ( 178250 )

      man sync
      [...]
      -b, --backup
                                  With this option, preexisting destination files are renamed as each file is transferred or deleted. You can control where the backup file goes and what (if any) suffix gets appended using the
                                  --backup-dir and --suffix options.
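
      In practice that looks something like this (destination and directory names are illustrative): anything the sync would overwrite or delete gets parked in a dated directory on the target instead of vanishing:

          rsync -a --delete \
                --backup --backup-dir="deleted-$(date +%F)" \
                /srv/www/ backup@host.example.net:/backups/www/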

      • by jcdr ( 178250 )

        Err, please read 'man rsync' of course :-)

        I also used the rsync batch mode to keep the last 6 months of daily backups.

      • That's still not a real backup strategy. Look, all my backups are on mounted disks/arrays. It's great for RTO and can be part of one, but at the end of the day you still need to get that data offline and offsite. So it might be more correct to say that rsync is not a complete backup system.

        I've been down that road too many times; the notion that a single copy will be fine is far, far too prevalent in the hosting and small business segments. Idiocy like a backup drive in a local system, because that shares no failure

  • This was a blatant troll on a forum and now because some idiot millennial wrote an op-ed piece, some idiot (manishs) put it on the /. frontpage?
    Are the admins now supporting the things the moderation system fights on their own site?

    This story is more of an embarrassment than the political vomit I've had to endure, because _this_ story doesn't even qualify as news. E.g., what company did he destroy, exactly? You would think the incredibly obvious lack of facts would be a tipoff to someone.

  • Hobbling the default rm command slightly, and possibly having a second command (oblit or something) for the really nasty stuff, would make sense. Many commands can be unnecessarily destructive, and those destructive commands are too easy to invoke by accident. Possibly requiring a --really and a --reallyreally switch on rm to enable things like rm -rf crossing filesystems would make sense. I did once make a quick hack so that rm -rf would require an environment variable to be set in order to

  • When a friend and I got started with Linux, he wanted to remove his Slackware install from a dual-boot PC. For fun he ran rm -rf / on that install. We had a good laugh as the messages scrolled by of the OS trying and failing to remove files from the CD-ROM. That was until he realized that he had mounted his Windows partition too. It didn't fail to remove files there :-)
     

  • I was working at a small development shop about 15 years ago and I came in one morning to find the main development server not working. Turned out that the previous night a developer on the other project ran "rm -rf" from the root directory on the Sun box and then tried to fix things before giving up and going home. No note, no call to the boss, nothing to indicate what had happened so I had to figure that out when I arrived around 8 AM. Oh, and no backups of their project. I at least had the latest versi

  • Joke or not he's voluntarily entered himself into the timeless database known as Google, viewable with the not-so-secret incantation "google Marco Marsala"
  • by Minupla ( 62455 ) <`moc.liamg' `ta' `alpunim'> on Thursday April 14, 2016 @02:01PM (#51909275) Homepage Journal

    I collect these stories for people who I mentor. Even if they're trolls, they work as cautionary tales, because lots of people have had similar smaller scale disasters (as evidenced by posts in this thread) and it's healthy for mentees to get a taste of what can happen when you (for example) forget to error check your script parameters.

    In a big way it doesn't matter if it's true or not; it could be true, which makes it a teachable moment. I'm sure everyone who reads the story will run a mental checklist to see if they have a script somewhere that could EVER do this. Do they have their backups mounted when they should just be rsyncing, etc.?
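
    For the specific failure in the story, that checklist item is a few lines of bash (variable names just echo the summary; the whole thing is a sketch, not anyone's actual script):

        set -euo pipefail                 # -u: an unset variable is a hard error, not ""

        backup_root="${backup_root:?backup_root is unset}"
        client_dir="${client_dir:?client_dir is unset}"

        target="${backup_root}/${client_dir}"
        [ "$target" != "/" ] || { echo "refusing to rm /" >&2; exit 1; }   # belt and braces
        rm -rf -- "$target"

    With set -u (or the ${var:?} expansions) the script dies loudly instead of quietly turning rm -rf "$foo/$bar" into rm -rf /.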

    Min

  • That's so simple and effective; it works on the local network as well as on remote networks thanks to ssh.

  • Corrections (Score:4, Insightful)

    by ledow ( 319597 ) on Thursday April 14, 2016 @02:08PM (#51909321) Homepage

    Man ALLOWS his entire company to be wiped out in one command.

    Man DESIGNS his entire company to be wiped out in one command.

    Man SETS UP his entire company to be wiped out in one command.

    Hint: I work in schools. Once I had a teacher delete their entire planning folder. Then (and DO NOT ask me why, because I don't understand it either), they emptied that folder from the Recycle Bin. They rang up in a most embarrassed panic.

    And then it was explained that we still had copies of that folder in:

    a) Shadow Copies of the profile on the client.
    b) Network Copies of the profile that they were logged in as (and which fortunately hadn't logged off once they realised what they did).
    c) Shadow Copies of the profile folder on the server.
    d) Copies of the profile folder on all the other servers.
    e) Copies of all the servers on replica servers.
    f) Copies of the server VMs and storage in a primary backup location.
    g) Copies of the server VMs and storage in a secondary backup location.
    h) Copies of the server VMs and storage in a tertiary backup location.
    i) Several off-line and off-site copies of the server VMs and storage.
    j) Random, casual backups all over the place.

    And that's just for the crap that teachers think is important (i.e. a lesson plan they have to write every two weeks and which they can't re-use anyway).

    Fuck knows what this guy was thinking, but there's no way one command ANYWHERE should be able to do that many actions, let alone dangerous actions that you haven't evaluated properly. Honestly, some of those machines don't even TURN ON until the backup window, and even the backup devices have rollback and shadow-copy-like functionality on top of whatever the backup software gives (incrementals, etc.). And several are DELIBERATELY offline for almost their entire lives and have entirely disparate credentials so no one command could ever affect them.

    Not being funny, but we're talking a small school of 400 5-14 year olds here. He actually has more customers than I have users. And you just can't fuck about like that, so if he thinks he can, I honestly have zero sympathy and can only laugh.

  • by ErichTheRed ( 39327 ) on Thursday April 14, 2016 @02:09PM (#51909333)

    I just got put on a project at work as "the systems guy" for a project being built in Azure. This is in support of a reasonably critical system, and the development staff are salivating over the chance to self-deploy code and infrastructure. It sounds like this problem was caused by the first thing I noticed as a risk -- if you don't limit what Azure users can do, it's just like giving them the keys to the data center. And this isn't in an "evil BOFH control freak" sense, this is just the fact that everything in Azure is virtual and easily changed either manually or through automation. So, someone who's having a bad day could easily make a mistake and get rid of things they have permissions on -- it's possible in AWS too.

    It's a really different mindset than even a hosted IaaS service. There, if you do something stupid, at least the physical infrastructure doesn't get rolled up and carried off. Now hopefully you have backups if that happens and can just restore the VMs and storage as needed, but if developers are running the show I would highly doubt it. (In Marco's case, I would imagine this was caused by the classic "run as root, because I'm the boss" issue.)

    So, in summary, all the (good) sysadmins worrying about the cloud taking their jobs need not worry. The rules of designing a safe computing environment have changed, but they haven't gone away entirely! I'd be a little worried if I were a savant-level EMC or Cisco guru right about now, but generalists with good heads on their shoulders are still in demand.

  • Nope, not buying any part of this story, nope. No one is dumb enough to run that without a test. And how were the offsite backups even accessible? Doesn't matter, because everything would be recoverable from the systems he "wiped". No, this is another bullshit story spread around by IT departments. This did not happen.
  • Just a bad brain!
    Can you spell "test"?
    • Can you spell "test"?

      Can you use it in a sentence please? Oh wait no never mind. I think I've got it! D-O-I-T-L-I-V-E???

  • by dentar ( 6540 ) on Thursday April 14, 2016 @02:30PM (#51909523) Homepage Journal

    He admitted it publicly?

  • The likelihood of this happening is slim, but I sometimes wonder if a minor change is really not that bad.
    In this case, change rm to NOT allow /, until -t/--top is added. Then it is allowed.
    With this minor change, it could save noobs from themselves, and it would likely not be used that often in the first place.
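
    A rough sketch of that idea as a shell wrapper rather than a change to rm itself ("--top" is the invented flag from the comment above, not a real rm option):

        rm() {
            local allow_top=0
            local -a args=()
            for a in "$@"; do
                if [ "$a" = "--top" ]; then allow_top=1; else args+=("$a"); fi
            done
            if [ "$allow_top" -eq 0 ]; then
                for a in "${args[@]}"; do
                    [ "$a" = "/" ] && { echo "rm: refusing '/' without --top" >&2; return 1; }
                done
            fi
            command rm "${args[@]}"
        }

    Note it still doesn't catch rm -rf /*, because the shell expands the glob before the wrapper ever runs.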
  • Chain of Mistakes (Score:4, Informative)

    by Greyfox ( 87712 ) on Thursday April 14, 2016 @02:59PM (#51909773) Homepage Journal
    Recently the USPA was talking about stuff that kills skydivers. It's almost never just one mistake. It's a chain of mistakes where one single good decision anywhere in that chain would break the chain and prevent entirely preventable deaths. In the case of this story, if it had actually happened, which it didn't, the decisions made to violate best practices all along the chain (i.e., running your bash scripts as root or as any user ID that has authority to delete anything on the file system, not pushing your backup data to isolated storage, not having numbered sequential backups, etc.) would be so egregious that the story would simply be an example of Darwin at work. The conversation would go "Oh hey, did you hear about that guy who designed his system so badly that he was able to delete the whole fucking thing with one mistyped command? Yeah, the council of sysadmins voted to kill him. Said it was for the good of the species."
  • On a Linux system, root is God(*)

    God is omniscient, omnipresent, and infallible.

    Therefore, when root deletes files, it's never a mistake, and the files should be immediately destroyed forever without question.

    (*) Unlike those heathen Windows systems, where there can be multiple gods, some of whom are more equal than others... and not necessarily in ways that are obvious to casual observers... ;-)

  • Old Saying (Score:5, Interesting)

    by Tablizer ( 95088 ) on Thursday April 14, 2016 @03:02PM (#51909803) Journal

    "To err is human. To really fuck things up, you need a computer."

    I prefer that any bulk or query-based "delete" command ask for confirmation along with basic feedback. Example pseudo-code:

    > delete *:*.*

    You are about to delete 832 folders and 28,435 files.
    Your choices are:
          1 - Proceed with deletion
          2 - List path details about the above folders and files
          3 - Cancel deletion
    Your Choice: __

    (end of example)

    It may be slower and/or more resource intensive, but that's better than mass boo-boos.

    An optional command parameter could switch off verification, but verification should be the default. This is something Unix/Linux gets backward in my opinion: the default should be confirmation mode, not the other way around. In other words, a command switch should be required to switch off confirmation rather than requiring a command switch to turn confirmation on.

    Typical SQL doesn't have a confirmation mode, so I usually do a verification query on the WHERE clause before running the actual:

    -- check
    SELECT count(*) FROM myTable
    WHERE x > 7 AND foo='BAR'

    -- actual, keeping same where-clause
    DELETE FROM myTable
    WHERE x > 7 AND foo='BAR'

    I also often inspect at least some of the actual rows, not just the count. Thus, as a rule of thumb, do random spot-checks of actual data, and a total count before final command execution.
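
    The same count-then-confirm idea translates directly to the shell; a rough sketch (the helper name is made up, nothing standard):

        confirm_rm() {
            local n
            n=$(find "$@" 2>/dev/null | wc -l)
            printf 'You are about to delete %s entries under: %s\n' "$n" "$*"
            read -r -p 'Proceed? [y/N] ' ans
            [ "$ans" = "y" ] && rm -rf -- "$@"
        }

        confirm_rm /tmp/build-artifacts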

    • "This is something Unix/Linux gets backward in my opinion: the default should be confirmation mode, not the other way around."

      1. All Ubuntu versions and derivatives (and I think Centos/RHEL as well) alias rm to "rm -i" out of the box. Drives me crazy; with every install I have to hunt down whether those aliases were defined in .profile, .bash_profile, .bashrc, /etc/profile, /etc/bashrc, or somewhere in /etc/bash/*.

      2. Command-line tools that ask for confirmation suck for scripting. Especially if those prompt

It is easier to write an incorrect program than understand a correct one.
