Essential Command-Line Safety Techniques: Practical Tips to Avoid Disaster in Unix/Linux Environments


Before executing any destructive command, implement this three-step verification:

# Step 1: Preview with ls
ls -l target_directory/
# Step 2: Dry-run with echo (for complex commands)
echo rm -rf target_directory/
# Step 3: Execute only after visual confirmation

Create visual distinction between environments using these methods:

# Add color-coded prompts to .bashrc
PS1='\[\e[1;31m\][PROD] \u@\h \[\e[1;34m\]\w\[\e[0m\]\$ '
# For staging
PS1='\[\e[1;33m\][STAGE] \u@\h \[\e[1;34m\]\w\[\e[0m\]\$ '
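
If the same dotfiles are shared across machines, the prompt can also be chosen automatically. A minimal sketch, assuming production and staging hosts follow a prod-*/stage-* naming convention:

# Pick the prompt based on the hostname (the naming convention is an assumption)
case "$(hostname)" in
    prod-*)  PS1='\[\e[1;31m\][PROD] \u@\h \[\e[1;34m\]\w\[\e[0m\]\$ ' ;;
    stage-*) PS1='\[\e[1;33m\][STAGE] \u@\h \[\e[1;34m\]\w\[\e[0m\]\$ ' ;;
    *)       PS1='\u@\h \[\e[1;34m\]\w\[\e[0m\]\$ ' ;;
esac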

Implement this script pattern for safer file operations:

#!/bin/bash
set -e  # Exit on error

TARGET="$1"
CONFIRM="$2"

if [[ "$CONFIRM" != "--yes" ]]; then
    echo "First, verify files to be deleted:"
    ls -lah "$TARGET"
    echo "To confirm deletion, run: $0 $TARGET --yes"
    exit 1
fi

echo "Removing $TARGET..."
rm -rf "$TARGET"
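
For example, assuming the script above is saved as safe-rm.sh (the target path is just a placeholder):

./safe-rm.sh /var/tmp/old-builds          # preview: lists the files and exits
./safe-rm.sh /var/tmp/old-builds --yes    # actually deletes them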

Add this to your SSH config (~/.ssh/config):

Host prod-*
    User root
    StrictHostKeyChecking yes
    VisualHostKey yes
    LogLevel VERBOSE
    # Custom alert sound on connection (requires the SoX 'play' utility)
    PermitLocalCommand yes
    LocalCommand play ~/sounds/warning.wav 2>/dev/null

Create safe aliases for dangerous commands:

alias rm='rm -i'        # prompt before every removal
alias mv='mv -i'        # prompt before overwriting
alias cp='cp -i'        # prompt before overwriting
alias chmod='chmod --preserve-root'   # refuse to act recursively on /
alias chown='chown --preserve-root'   # refuse to act recursively on /
alias shred='shred -v -n 10 -z -u'    # verbose, 10 passes, final zero pass, then remove

Add this safety check to your .bashrc:

# Prevent accidental execution in wrong directories
critical_dirs=("/" "/home" "/etc")
for dir in "${critical_dirs[@]}"; do
    if [[ $(pwd) == "$dir" ]]; then
        echo -e "\033[1;31mWARNING: In critical directory $dir\033[0m"
    fi
done
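
Note that a check placed directly in .bashrc only runs when the shell starts. Hooking it into PROMPT_COMMAND re-runs it before every prompt; a minimal sketch, assuming Bash:

# Warn before every prompt when the shell is sitting in a critical directory
warn_if_critical_dir() {
    local critical_dirs=("/" "/home" "/etc") d
    for d in "${critical_dirs[@]}"; do
        if [[ "$PWD" == "$d" ]]; then
            echo -e "\033[1;31mWARNING: In critical directory $d\033[0m"
        fi
    done
}
PROMPT_COMMAND="warn_if_critical_dir${PROMPT_COMMAND:+; $PROMPT_COMMAND}"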

Implement comprehensive command tracking:

# Append to ~/.bashrc
PROMPT_COMMAND='history -a; logger -t "[CMD][$USER]" "$(history 1 | sed "s/^[ ]*[0-9]*[ ]*//")"'
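
The entries go to syslog, so where they land depends on the distribution's logging setup; something like the following usually finds them:

# Inspect the logged commands (log location varies by distribution)
grep "\[CMD\]" /var/log/syslog     # Debian/Ubuntu-style syslog file
journalctl | grep "\[CMD\]"        # systems that log only to the systemd journal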

For critical operations, use this delay technique:

dangerous_operation() {
    echo "WARNING: This will delete all files matching $1"
    for i in {10..1}; do
        echo -n "$i "
        sleep 1
    done
    echo
    # Actual operation here
    rm "$1"
}

Implement these kernel-level protections:

# Make /boot and /etc read-only during normal operation (requires root).
# A remount only works for separate mount points; on most systems /etc is
# part of the root filesystem, so the /etc line will fail there.
mount -o remount,ro /boot
mount -o remount,ro /etc

# Set immutable flag on critical files
chattr +i /etc/passwd /etc/shadow /etc/sudoers
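
Both protections have to be lifted again for legitimate maintenance; a minimal reversal sketch, run as root:

# Temporarily re-enable writes, do the maintenance, then restore the protections
chattr -i /etc/passwd /etc/shadow /etc/sudoers
mount -o remount,rw /boot
# ... perform the change (package upgrade, user management, etc.) ...
mount -o remount,ro /boot
chattr +i /etc/passwd /etc/shadow /etc/sudoers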

Add verification steps for remote operations:

# Safe SCP wrapper
safe_scp() {
    echo "Source: $1"
    echo "Destination: $2"
    read -p "Verify paths (y/n)? " choice
    case "$choice" in
        y|Y) scp "$1" "$2";;
        *) echo "Aborted";;
    esac
}
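
Example invocation (host and paths are placeholders):

safe_scp ./backup.tar.gz deploy@prod-web01:/var/backups/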

Always verify before executing destructive commands. This simple pattern saved me countless times:

# First step: Preview what would be affected
ls /path/to/files*.log

# Second step: After visual confirmation, execute
rm /path/to/files*.log

Modify your shell prompt to always show critical environment information:

# Add to your .bashrc or .zshrc
export PS1='\[\e[1;31m\][PROD]\[\e[0m\] \u@\h:\w\$ '

# For staging environments:
export PS1='\[\e[1;33m\][STAGE]\[\e[0m\] \u@\h:\w\$ '

Many command-line tools support dry-run mode:

rsync --dry-run -avz /source/ user@remote:/destination/
# Note: scp has no dry-run option; use rsync --dry-run (or -n) to preview transfers
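
A few other everyday tools offer the same kind of preview, for example:

git clean -n            # show what git clean would remove, without removing it
make -n                 # print the commands make would run, without running them
apt-get -s upgrade      # simulate a package upgrade (Debian/Ubuntu)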

When working with absolute paths, use this verification method:

echo "About to operate on: $(pwd)"
sleep 3  # Gives you time to abort if wrong
# Proceed with actual commands

Before running commands on remote servers:

# First establish connection without executing anything
ssh user@server.example.com

# Then verify hostname matches in the prompt
hostname
whoami
pwd

For database operations, implement this workflow:

-- 1. First show what would be affected
SELECT * FROM important_table WHERE condition LIMIT 10;

-- 2. Count affected rows
SELECT COUNT(*) FROM important_table WHERE condition;

-- 3. Perform the actual operation (uncomment once steps 1 and 2 look right)
-- DELETE FROM important_table WHERE condition;
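
Where the database supports transactions (for example PostgreSQL, or MySQL with InnoDB), wrapping the statement gives one more chance to back out; a minimal sketch:

BEGIN;
DELETE FROM important_table WHERE condition;
-- Compare the reported row count with the COUNT(*) from step 2,
-- then run exactly one of the following:
-- COMMIT;    -- keep the change
-- ROLLBACK;  -- undo it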

Before executing complex pipelines:

# Review command before execution
echo "Command to execute:"
echo "find . -name \"*.tmp\" -print0 | xargs -0 rm"

# Only then proceed to run it
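
A safer variant is to run the harmless half of the pipeline first and only add the destructive part once the output looks right:

# Preview: list the files the pipeline would remove
find . -name "*.tmp" -print

# Execute: the same find expression, now feeding rm
find . -name "*.tmp" -print0 | xargs -0 rm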

In scripts, label dangerous sections clearly:

#!/bin/bash

# === DANGER ZONE ===
# This section performs irreversible operations
# Last verified 2023-10-15 by jdoe@example.com

if [[ $CONFIRM_DELETE == "YES" ]]; then
    rm -rf /backups/old/*
fi
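
The guard variable is then set explicitly at invocation time, e.g. (assuming the script is saved as cleanup.sh):

CONFIRM_DELETE=YES ./cleanup.sh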

Add automatic timeouts to prevent accidental long-running operations:

# Aborts unless ENTER is pressed within 5 seconds
read -t 5 -p "Press ENTER within 5 seconds to continue... " || { echo; echo "Timed out, aborting."; exit 1; }