This would break so many scripts. In such environments, surely it would be preferable to transparently implement "trash can" functionality behind rm? (As one commenter on that site mentioned.) Rather than unlinking, in some circumstances it would transfer the data to a recoverable repository, or rename it - I know `delete` in DOS used to just change the first character of the name to ?, which was hacky but fast, and enabled utilities like `undelete`.
'safe rm', or a version of rm which doesn't actually delete stuff, is pretty easy. The tough bit is this: let's say the user creates loveletter.txt, edits it, gets pissed off, and rm's it. Now they create it again, edit it again, and remove it again.
The simplest implementation fell out of NetApp snapshots, but the next best thing was converting rm to essentially run a directory repository (at the time sccs, but a more modern one would no doubt use git or something), where each 'rm' was a commit and push followed by removal of the file.
Running it on a live system (in the '90s), it slowed the build way down (removing old object files in 'make clean'), but it gave us a way to recreate any state from scratch :-).
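The "each rm is a commit" idea above can be sketched with git in place of sccs. This is a rough illustration, not the original tool: the function name `vrm` is made up, and the real version also pushed to a remote, which is omitted here.

```shell
# Sketch: snapshot-then-remove, so every rm'd state is recoverable.
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name you

# Hypothetical "versioned rm": commit the whole tree, then remove.
vrm() {
    git add -A . && git commit -qm "snapshot before rm: $*"
    rm -- "$@"
}

echo 'int x;' > old.o.c
vrm old.o.c                      # gone from the working tree...
git checkout HEAD -- old.o.c     # ...but any state can be recreated
```

On a real build this is exactly where the slowdown comes from: every `make clean` pays for a full commit.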
In BOFH mode you say "Gee that sucks, next time don't delete files you want to keep."
If they create a new version of a deleted file, why wouldn't that just be treated as an entirely new file? If the rm trashcan altered the filename upon "deletion" (like, say, appending the time stamp of deletion), that would avoid name collisions. Or just do like OS X and append -0, -1, etc.
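A minimal sketch of the timestamp approach, assuming a made-up `trash` command and a `~/.trash` directory (neither is an existing standard): appending the deletion time means repeatedly creating and trashing loveletter.txt never collides.

```shell
# Hypothetical trash-can wrapper; TRASH_DIR defaults to ~/.trash.
TRASH_DIR="${TRASH_DIR:-$HOME/.trash}"

trash() {
    mkdir -p "$TRASH_DIR"
    for f in "$@"; do
        # Append nanosecond timestamp so repeat deletions never collide.
        mv -- "$f" "$TRASH_DIR/$(basename -- "$f").$(date +%s%N)"
    done
}
```

After trashing loveletter.txt twice, the trash directory holds two distinct, independently restorable copies.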
If you're interested in a real-world example, Purdue University did exactly that, at least for a large portion of the systems they ran, back from 1995 to at least 2000. It was ok.
Here's a page explaining the "entomb" system they had.
I made such a wrapper for myself [1] after accidentally removing lots of photographs. It uses LD_PRELOAD trickery to achieve its goal. It's in a very early stage but I'm working on it.
... yeah? Do you have an actual point to make now? All of his suggestions are very fundamental changes that will break a lot of uses of rm. On the surface they seem to handle your typical user's 'rm -rf ...' mistake, but rm's behavior has been an assumed standard for a LONG time. Aliasing it is a bad idea.
It wouldn't necessarily - it just has to be 100% transparent. A simple alias would probably break a lot of scripts, but something very thorough, or an entirely separate tool for users - that's what I'm suggesting would do this. In reality, I was really just thinking aloud - I think rm's fine as it is. To do this while overcoming the shortcomings I mentioned in DOS's simple approach requires a system more complex than the original problem is worth, IMO.
Why, decapitation definitely rids one of headaches, but why not consider less invasive options beforehand?
A file system might implement `rm` so that it does not lose data outright, but keeps a few recently deleted files intact for undeletion. I suppose every journaling FS already includes everything necessary for it. E.g. http://extundelete.sourceforge.net/ uses ext3/ext4 journals. AFAICT NTFS undelete works the same way.
Some file systems have explicit snapshots and other tools that facilitate rolling back changes: xfs, ext3, ext4 all support snapshotting on Linux when run under LVM. It's even better than undelete.
> I suppose every journaling FS already includes everything necessary for it.
Depends. Ext seems to use a F̶I̶F̶O̶ LIFO for the empty block list. It can make recovering data more tricky.
Last time I tried to recover some deleted data, it was about an hour after the delete took place. The recently deleted blocks were the first over-written by normal background tasks and everything I wanted to save was gone.
Every file more than a few hours old was easily recovered, right back to projects I had deleted a year previously.
If you alias rm for some users, you'll train them to think that rm has an undelete feature: bad idea.
Experienced *nix users often work from an ordinary user account and turn to "sudo" only for special cases so they don't accidentally mess things up. When you use "sudo", you know you have to slow down and be careful. So why isn't an undoable file delete the default file deletion command, with rm reserved for special cases where you slow down and make extra sure you really want an irreversible delete?
Automated scripts would use rm as always, but for interactive work, especially on client machines with giant disks such as the typical general purpose user computer these days, there should be a different standard command for deletion that everyone learns long before they learn about rm.
Leaving aside technical arguments against doing this, I don't think it would solve the problem anyway. Users would inevitably decide that they want a real "remove" operation (because yours is now wasting their disk space, etc.), and teach themselves to call their new fancy "really_rm" alias. And then just accidentally delete files with that instead of "rm".
(Anyone ever gotten too used to shift-Delete in Windows? Yeah...)
Instead, automate backups or file system snapshots.
Yes, in both Linux and Windows, I've got used to hitting shift-delete, return, when I want to delete a file. And yes, at times I've done that only to immediately realise I'd got the wrong file.
I think GMail has this right: instead of a confirmation, it does what you asked for, and has an obvious non-modal undo button. Also, because I know things in the trash will be discarded after 30 days, I don't feel any pressure to 'really delete' them myself.
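The 30-day discard policy maps neatly onto a one-liner a cron job could run daily. A sketch, assuming a trash directory layout like the made-up `~/.trash` (the directory name and retention window are assumptions, not a standard):

```shell
# Simulate a trash directory with one fresh and one stale entry.
trash="$(mktemp -d)"
touch "$trash/recent.txt"
touch -d '40 days ago' "$trash/stale.txt"   # GNU touch: backdate mtime

# Purge anything trashed more than 30 days ago; recent files survive.
find "$trash" -type f -mtime +30 -delete
```

Because the purge is automatic, there is no pressure to 'really delete' anything yourself.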
Is this really that common an issue? It seems like anyone who's at a level where they're using rm regularly should know that it's a command you think about before issuing. If they didn't, then that's exactly why you have backups.
Then, if it was in the backups, the short-term problem is solved, and maybe the experience will scare them into being a bit more careful in the future.
If it wasn't in the backups, the most likely reason would seem to me to be that it was created since the last backup (not that I'd know, IANA sysadmin), in which case there wasn't a ridiculous amount of work lost - maybe the experience will make them a bit more careful in the future.
Of course, the above doesn't apply with the sort of user who thinks IT is magic and can do anything, but if they're using rm, you've got bigger problems.
Also, personal opinion, but I hate automatic trash folders - rm is supposed to delete stuff. If I want a recycle bin, I'm happy to mv to a folder I created for the purpose.
I like the solution but I am actually opposed to an out-of-the-box solution. The reason is that precisely what makes Unix such a stable, fast OS is its minimalism. It trusts the user to know what he/she is doing. Once you cross over into taking into account user error you start to add bloat. I've seen this happen recently in Rails 4.
Well-intentioned developers add a bit here and a tad there, and pretty soon you have 400% of what you need to perform your daily tasks, to protect yourself from 0.001% occurrences. Which is actually ironic, because I remember DHH stating that he was vehemently opposed to type-casting. Anyway, I think it is best to leave it up to end-users to build their own solutions and plug-ins for desired functionality, but to leave these types of catch-alls out of the core product.
I think what you're saying is true and I am also seeing the pros and cons. Do we really need a new command on the same level as mv, cp, and rm? We don't want to add bloat but we also want some kind of solution.
I wonder whether the problem is cultural, or whether there just isn't a technical solution we can agree on. mindslight mentioned wipe(1), but I don't think it's aligned with the same goal in mind, and it seems to be tied down with many complications (http://wipe.sourceforge.net/).
I somewhat agree with what you say, but: "to leave it up to end-users to build their own solutions"? Is that optimism, naïvety, or am I overlooking a dose of sarcasm?
Ha, well not sarcasm. I guess I assume that if you are working in Unix you know enough (or can learn) to create a script to handle this for you. Of course that doesn't prevent somebody from running rm, but it does at least put the brunt of the blame on the person who used rm anyway, despite the admins advising against it in favor of the archiving script.
I guess the basic question is: should users be entrusted to permanently delete anything? rm is probably my third or fourth most-used command because it is so fast and easy. It does exactly what it should - get rid of shit you don't need anymore, and very quickly. For example, I use templates to generate web applications, and when I test new templates I will frequently want to get rid of the auto-generated one I just created. Without rm this would be an incredible pain in the ass, or at least take more time and storage than it should. I like keeping the core strictly utilitarian and leaving the layers of safety to be built by bureaucracies after it has shipped.
How about we just don't backup anything? That's the user's job, and if they didn't do it, they know their data is 100% gone, no backsies, when they delete it. So maybe they'll be more careful. Backups are a moral hazard.
But seriously, I don't see an issue here. There are backups for a reason. If it takes too much time to go through the bureaucracy of requesting backed up data, that sounds like the problem that needs to be fixed, not the existence of the "rm" command. And if the delay is there on purpose, it sounds like everything is going according to plan. What's the problem?
What about the argument about losing things between backups? Also, my understanding was that this issue is about using administrator time, not bureaucratic time.
I was just thinking back to the 80's, when newspapers actually printed stories about "rm" and "ls" and unfriendly computers. This was as the GUI came in and cryptic commands were eschewed.
It took us a long time to accept the best of all possible worlds: GUIs for most people, and genuine, expressive shells for a few.
For those few, rm is not a problem, no. In fact, it's a reminder that this isn't Kansas anymore.
One thing the article mentions is to alias rm, but they never mention just performing a chmod to change the user/group permissions on the actual command itself. I can't see why it's impossible to change the permissions on the real rm command, while providing an alternate (or aliased, whatever you prefer) command to move "deleted" files into the Trash/Recycle Bin/What have you.
You can later implement another in company program to perform the actual deletion, with strict warnings, which can execute the real rm command with the appropriate group permissions. Ideally, this should at least deter users from blanket deletion of their file systems, though eventually some will come to abuse the true deletion program, believing they know better than IT. However, this is largely inevitable, and some users will always behave that way, so you need to consider this when discussing any technical measures taken towards something like rm.
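The permission change itself is a one-liner. A sketch of the mechanism on a sandbox copy, since the real thing needs root and a dedicated group (the group name "deleters" and the path are assumptions):

```shell
# Real setup (root): chgrp deleters /bin/rm && chmod 750 /bin/rm,
# so only root and members of "deleters" can execute the true rm.
# Demonstrated here on a sandbox copy, which needs no privileges.
sandbox=$(mktemp -d)
cp /bin/rm "$sandbox/rm.real"     # stand-in for the real /bin/rm
chmod 750 "$sandbox/rm.real"      # owner+group may run it; others cannot

stat -c '%a' "$sandbox/rm.real"   # → 750
```

Everyone outside the group would then get the trash-can wrapper on their PATH instead, with the in-company "true delete" program as the gatekept path to the 750 binary.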
I do remember a case on a mail list that shall go unnamed, where an "experienced" member of the list sarcastically told a newbie that typing "rm -rf /" would "solve her problem". Unfortunately, she dutifully tried it and wondered why her machine wasn't working any more. There was quite a shitstorm on the list after that. The smartass did feel very bad afterward.