Every Linux developer's nightmare scenario is accidentally running `rm -rf /*` instead of the intended `rm -rf ./*`. The difference of a single dot character can mean the difference between deleting temporary files and wiping your entire system. Even with `rm -i` aliases and `--preserve-root` defaults, the danger persists.
Here are concrete technical solutions to implement:
```shell
# Option 1: Safe rm wrapper function
# Note: the shell expands an unquoted /* before rm ever runs, so
# the wrapper inspects each (already-expanded) argument rather
# than looking for a literal "*".
safe_rm() {
    local arg
    for arg in "$@"; do
        # Refuse "/" itself and any direct child of / (e.g. /bin, /etc)
        if [[ "$arg" =~ ^/+$ || "$arg" =~ ^/+[^/]+/?$ ]]; then
            echo "ERROR: Attempt to delete a top-level path detected!" >&2
            return 1
        fi
    done
    command rm "$@"
}
alias rm='safe_rm'
```
```shell
# Option 2: Filesystem-level protection
# chattr +i only protects the rm binary itself from being modified
# or deleted; it does not stop anyone from running it.
sudo chattr +i /bin/rm
# Shadow rm with a no-op; this only takes effect if /usr/local/bin
# precedes /bin in $PATH (run "hash -r" afterwards).
sudo ln -s /bin/true /usr/local/bin/rm
```
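After shadowing `rm`, it is worth confirming which command the shell will actually resolve; the exact paths printed depend on your `$PATH`:

```shell
# Check what "rm" resolves to after installing the shadow:
command -v rm     # first match on $PATH, e.g. /usr/local/bin/rm
type -a rm        # lists aliases, functions, and every PATH hit in order
hash -r           # clear bash's cached lookup so the new entry wins
```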
For production systems, consider a more restrictive approach:
```shell
# Create a restricted deletion environment
export RM_RESTRICTED_PATHS="/usr /lib /bin /etc"
protected_rm() {
    local path
    for path in $RM_RESTRICTED_PATHS; do
        # Substring match: this also triggers on e.g. /usr/local/foo,
        # which is deliberate for a conservative wrapper.
        if [[ "$*" =~ $path ]]; then
            echo "ALERT: Restricted path in rm command!" >&2
            return 1
        fi
    done
    command rm "$@"
}
alias rm='protected_rm'
```
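A quick way to exercise the wrapper on harmless inputs; the definition is repeated here so the snippet is self-contained:

```shell
export RM_RESTRICTED_PATHS="/usr /lib /bin /etc"
protected_rm() {
    local path
    for path in $RM_RESTRICTED_PATHS; do
        if [[ "$*" =~ $path ]]; then
            echo "ALERT: Restricted path in rm command!" >&2
            return 1
        fi
    done
    command rm "$@"
}

# A restricted path anywhere in the arguments is refused:
protected_rm -rf /etc/ssh 2>/dev/null || echo "blocked"   # -> blocked
# Ordinary deletions fall through to the real rm:
tmp=$(mktemp)
protected_rm "$tmp" && echo "ok"                          # -> ok
```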
Here's a comprehensive solution I use on my development machines:
```shell
#!/bin/bash
# ~/.bash_rm_safety
rm_safety_check() {
    # These are regex patterns, not globs: a literal trailing "/*",
    # the bare root directory, and a few critical system trees.
    local dangerous_patterns=(
        "/\*$"
        "^/[[:space:]]*$"
        "/etc"
        "/var"
        "/usr"
    )
    local pattern
    for pattern in "${dangerous_patterns[@]}"; do
        if [[ "$*" =~ $pattern ]]; then
            read -p "DANGER! Pattern '$pattern' detected. Really run? (y/N) " -n 1 -r
            echo
            [[ $REPLY =~ ^[Yy]$ ]] || return 1
        fi
    done
}
# Hook in via a function so "$@" actually reaches the check:
rm() {
    rm_safety_check "$@" && command rm "$@"
}
```
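A caveat on wiring the check in: alias expansion is textual, so with `alias rm='rm_safety_check && command rm'` the typed arguments attach only to the final `command rm` and the check runs with zero arguments, never seeing the paths. A function that forwards `"$@"` avoids this. A stand-alone demonstration using harmless `echo` commands:

```shell
check() { echo "check got $# args"; }

# What the alias form expands to when you type: rm a b c
# (the arguments attach to the last command only)
check && echo a b c     # -> "check got 0 args" then "a b c"

# A function forwards "$@" to the check as well:
wrapped() { check "$@" && echo "$@"; }
wrapped a b c           # -> "check got 3 args" then "a b c"
```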
Sometimes prevention isn't enough - consider these alternatives:
```shell
# Use trash-cli instead of rm
sudo apt install trash-cli
alias rm='trash-put'
```

```shell
# Enable the zsh safe-rm plugin (for oh-my-zsh users, in ~/.zshrc)
plugins+=(safe-rm)
```
Modern filesystems offer additional safeguards:
```shell
# Btrfs: take a read-only snapshot of the root subvolume
sudo btrfs subvolume snapshot -r / /snapshots/$(date +%Y-%m-%d)

# ZFS: opt a dataset into the zfs-auto-snapshot schedule
zfs set com.sun:auto-snapshot=true rpool/ROOT
```
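Snapshots only help if they are taken regularly. A cron entry can automate the Btrfs command above; the schedule and `/snapshots` path are illustrative, and note that `%` must be escaped in cron files:

```shell
# /etc/cron.d/btrfs-snapshot  (illustrative schedule and paths)
0 3 * * * root btrfs subvolume snapshot -r / /snapshots/$(date +\%Y-\%m-\%d)
```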
If prevention fails, know your recovery options:
```shell
# Extundelete for ext3/4 (unmount the filesystem first, or
# recoverable data may be overwritten)
sudo umount /dev/sda1
sudo extundelete /dev/sda1 --restore-all

# Testdisk for partition recovery
sudo testdisk /dev/sda
```
Every Linux user knows the horror story of accidentally running `rm -rf /*` instead of `rm -rf ./*`. This simple typo can wipe out your entire filesystem, and as we've seen in the original question, even common safeguards like `alias rm='rm -i'` and `--preserve-root` don't always prevent disaster.

The problem with `alias rm='rm -i'` is that the later `-f` flag simply overrides `-i`, so `rm -rf` never prompts. `--preserve-root` only protects the root directory itself, not its contents: the shell expands `/*` into root's children before `rm` even runs, and each of those is fair game. When you're deep in coding flow, your brain might autocomplete commands in dangerous ways.
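The mechanics are easy to see with `echo`: the glob is expanded by the shell before the command runs, so `rm` receives a list of concrete paths and never a literal `*`. A harmless demonstration in a throwaway directory:

```shell
# No files are deleted here; echo just shows what rm would receive.
tmp=$(mktemp -d)
mkdir "$tmp/bin" "$tmp/etc"
cd "$tmp"
echo rm -rf ./*    # -> rm -rf ./bin ./etc   (the glob is gone already)
cd / && rm -r "$tmp"
```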
Here are several technical approaches to prevent this disaster:
1. Create a Safe rm Wrapper
Create a shell function that checks for dangerous patterns before executing:
```shell
safe_rm() {
    local arg
    for arg in "$@"; do
        # Block "/", a literal "/*", and direct children of /
        # (an unquoted /* is expanded by the shell before rm runs,
        # so the literal-"/*" check alone is not enough).
        if [[ "$arg" =~ ^/+$ || "$arg" =~ ^/+[*] || "$arg" =~ ^/[^/]+/?$ ]]; then
            echo "ERROR: Attempting to delete root filesystem!" >&2
            return 1
        fi
    done
    command rm "$@"
}
alias rm='safe_rm'
```
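To sanity-check such a wrapper without endangering anything, exercise it on harmless inputs; the definition is repeated here so the snippet stands alone:

```shell
safe_rm() {
    local arg
    for arg in "$@"; do
        # Block "/", a literal "/*", and direct children of /
        if [[ "$arg" =~ ^/+$ || "$arg" =~ ^/+[*] || "$arg" =~ ^/[^/]+/?$ ]]; then
            echo "ERROR: Attempting to delete root filesystem!" >&2
            return 1
        fi
    done
    command rm "$@"
}

safe_rm -rf / 2>/dev/null || echo "blocked"    # -> blocked
tmp=$(mktemp)
safe_rm "$tmp" && echo "removed"               # -> removed
```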
2. Use trash-cli Instead of rm
Install a trash utility that moves files to trash instead of deleting:
```shell
sudo apt-get install trash-cli
alias rm='trash-put'
```
3. Filesystem Protection
For critical systems, consider mounting sensitive directories read-only (this requires them to be on their own partition or bind mount):

```shell
# Example: remount a separate /boot partition read-only
sudo mount -o remount,ro /boot
```
For maximum protection, you could sketch a kernel module that intercepts dangerous unlink calls. Note that this is illustrative only: modern kernels no longer export `sys_call_table`, and the supported way to veto operations like this is an LSM or eBPF hook.

```c
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/syscalls.h>
#include <linux/uaccess.h>
#include <linux/string.h>

MODULE_LICENSE("GPL");

/* Assumed to be resolved elsewhere (e.g. via kallsyms);
 * modern kernels do not export sys_call_table. */
extern unsigned long *sys_call_table;

asmlinkage long (*original_unlink)(const char __user *pathname);

asmlinkage long hooked_unlink(const char __user *pathname) {
    char buf[256];
    long copied = strncpy_from_user(buf, pathname, sizeof(buf) - 1);
    if (copied > 0) {
        buf[copied] = '\0';  /* strncpy_from_user may not NUL-terminate */
        /* The shell expands the glob before the syscall is made, so
         * check for direct children of / rather than a literal glob. */
        if (buf[0] == '/' && buf[1] != '\0' && !strchr(buf + 1, '/')) {
            printk(KERN_ALERT "Blocked dangerous unlink: %s\n", buf);
            return -EPERM;
        }
    }
    return original_unlink(pathname);
}

static int __init protect_init(void) {
    original_unlink = (void *)sys_call_table[__NR_unlink];
    sys_call_table[__NR_unlink] = (unsigned long)hooked_unlink;
    return 0;
}
module_init(protect_init);
```
Beyond technical solutions, consider these workflow improvements:
- Always use absolute paths for system operations
- Implement a "four-eyes" rule: a second person reviews dangerous commands before they run
- Use version control for configuration files
- Consider using containers for development
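The version-control point is cheap to adopt: even a plain git repository over a config directory gives you history and an undo path after a stray `rm`. The directory and file names below are illustrative:

```shell
# Track a config directory in git so an accidental rm is recoverable.
# A temp dir stands in for e.g. ~/.config/myapp to keep the demo safe.
confdir=$(mktemp -d)
echo "port=8080" > "$confdir/app.conf"
git -C "$confdir" init -q
git -C "$confdir" add -A
git -C "$confdir" -c user.email=dev@example.com -c user.name=dev \
    commit -qm "baseline config"
rm "$confdir/app.conf"                    # the simulated accident
git -C "$confdir" checkout -- app.conf    # restore from the last commit
cat "$confdir/app.conf"                   # -> port=8080
```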
If disaster strikes, here are some recovery options:
```shell
# For ext4 filesystems (unmount the damaged filesystem first,
# or recoverable data may be overwritten)
sudo extundelete /dev/sda1 --restore-all

# Using testdisk for partition recovery
sudo testdisk /dev/sda
```