When automating file operations like ls, cp, and mv, proper error handling is crucial. Many scripts fail silently, leaving administrators unaware of issues until they cause bigger problems. Implementing email alerts for both successes and failures creates a robust monitoring system.
We'll use these Bash features:

- || (OR operator) for error handling
- && (AND operator) for success tracking
- $? to check exit status
- mail command for notifications
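To get a feel for these building blocks before wiring in mail, here is a minimal sketch that uses echo in place of a mail invocation (the /tmp paths are illustrative):

```shell
#!/bin/bash
# && runs the right-hand side only if the left side succeeds
mkdir -p /tmp/file_ops_demo && echo "directory ready"

# || runs the right-hand side only if the left side fails
ls /tmp/no_such_dir 2>/dev/null || echo "ls failed"

# $? holds the exit status of the most recent command
ls /tmp/file_ops_demo > /dev/null
echo "exit status: $?"
```

In a real script, each echo on the failure side would become a mail (or webhook) call.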
#!/bin/bash

# Configuration
RECIPIENT="admin@example.com"
SUBJECT_PREFIX="[File Ops]"
TMP_LOG=$(mktemp)

# Function to send email
send_alert() {
    local status=$1
    local message=$2
    echo "$message" | mail -s "$SUBJECT_PREFIX $status" "$RECIPIENT"
    rm -f "$TMP_LOG"
}

# Main operations; output is captured so failures can be mailed with context
{
    ls /target/directory || {
        send_alert "FAILED" "ls command failed on /target/directory"$'\n\n'"$(cat "$TMP_LOG")"
        exit 1
    }

    cp source.txt /destination/ && \
        mv oldfile.txt newfile.txt || {
        send_alert "FAILED" "File operations failed:"$'\n\n'"$(cat "$TMP_LOG")"
        exit 1
    }
} > "$TMP_LOG" 2>&1

# If we reach here, all commands succeeded
send_alert "SUCCESS" "All file operations completed successfully"
For more complex scripts, consider this pattern:
#!/bin/bash

# Enhanced logging function
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> /var/log/file_ops.log
}

# Operation wrapper: run the command, alert and abort on failure.
# "$@" preserves each argument's quoting; checking the command directly
# with "if !" avoids relying on $? after the fact.
safe_operation() {
    if ! "$@" >> "$TMP_LOG" 2>&1; then
        log "ERROR: Command failed: $*"
        send_alert "FAILED" "Command failed: $*"$'\n\n'"$(cat "$TMP_LOG")"
        exit 1
    fi
    log "SUCCESS: $*"
}

# Usage example
safe_operation ls -l /target
safe_operation cp -v source.txt /backups/
safe_operation mv data.tmp data.final
For systems without the mail command:
# Using curl for Slack/webhook notifications
send_slack_alert() {
    local message="$1"
    curl -X POST -H 'Content-type: application/json' \
        --data "{\"text\":\"$message\"}" \
        https://hooks.slack.com/services/your/webhook/url
}
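One caveat with the webhook approach: interpolating $message directly into the JSON body produces invalid JSON if the message contains double quotes or backslashes. A minimal pure-Bash escape sketch (json_escape is an illustrative helper, not a library function; it handles only quotes and backslashes, and jq -n --arg is more robust where jq is available):

```shell
# Escape backslashes and double quotes before embedding text in a JSON string
json_escape() {
    local s=$1
    s=${s//\\/\\\\}   # backslashes first, so added escapes aren't re-escaped
    s=${s//\"/\\\"}   # then double quotes
    printf '%s' "$s"
}

msg='cp "failed" on host1'
printf '{"text":"%s"}\n' "$(json_escape "$msg")"
# → {"text":"cp \"failed\" on host1"}
```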
# Using SSMTP as a mail alternative
send_ssmtp_alert() {
    {
        echo "To: $RECIPIENT"
        echo "Subject: $1"
        echo ""
        echo "$2"
    } | ssmtp "$RECIPIENT"
}
In Bash, every command returns an exit status (0 for success, non-zero for failure). We can check this using $? immediately after command execution:
#!/bin/bash

ls /nonexistent/directory
if [ $? -ne 0 ]; then
    echo "Error: ls command failed"
    # Add email alert logic here
fi
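Since if tests a command's exit status directly, the same check can be written without $? at all, which avoids the risk of an intervening command clobbering $?:

```shell
#!/bin/bash
# Equivalent check: if negates the command's exit status directly
if ! ls /nonexistent/directory 2>/dev/null; then
    echo "Error: ls command failed"
    # Add email alert logic here
fi
```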
For cleaner code, use trap to catch errors throughout the script:
#!/bin/bash

# Error handler
handle_error() {
    local line=$1
    echo "Error occurred on line $line"
    # Send email notification
    echo "Script failed at $(date)" | mail -s "Script Failure" admin@example.com
    exit 1
}

# Set trap for the ERR pseudo-signal: fires whenever a command exits non-zero
trap 'handle_error $LINENO' ERR

# Your operations
ls /some/directory
cp source.txt destination/
mv oldname newname

# Success notification
echo "Script completed successfully at $(date)" | mail -s "Script Success" admin@example.com
Here's a production-ready script with comprehensive error handling:
#!/bin/bash

# Configuration
TO_EMAIL="admin@example.com"
LOG_FILE="/var/log/script_operations.log"

# Log and email function
notify() {
    local message="$1"
    local subject="$2"
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $message" >> "$LOG_FILE"
    echo "$message" | mail -s "$subject" "$TO_EMAIL"
}

# Error handler
error_exit() {
    notify "Script failed at line $1" "CRITICAL: File Operation Failed"
    exit 1
}

trap 'error_exit $LINENO' ERR

# Main operations
notify "Starting file operations" "Script Started"

# Note: no "|| exit 1" here; a bare "exit 1" would bypass the ERR trap
# and its notification, so we let the trap handle any failure.
ls /target/directory
cp -v source.txt /target/destination/ >> "$LOG_FILE" 2>&1
mv -v /target/destination/source.txt /target/destination/renamed.txt >> "$LOG_FILE" 2>&1

# Success
notify "All operations completed successfully" "SUCCESS: File Operations Complete"
For stricter scripts, combine Bash's strict mode with a command wrapper:
# Set strict mode: exit on error, error on unset variables, fail pipelines early
set -euo pipefail

# Function to execute commands with logging. "$@" preserves each argument's
# quoting, unlike collapsing the command into a single string and
# re-expanding it, which would break arguments containing spaces.
safe_exec() {
    if ! "$@" >> "$LOG_FILE" 2>&1; then
        notify "Command failed: $*" "Command Failure"
        return 1
    fi
}

# Usage
safe_exec cp largefile.dat /backup/
safe_exec mv /backup/largefile.dat /backup/archive/
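A quick sketch of why the pipefail flag matters: without it, a pipeline's exit status is that of its last command only, so a failure earlier in the pipeline goes unnoticed:

```shell
#!/bin/bash
# Without pipefail, the pipeline's status comes from the last command only
set +o pipefail
false | true
echo "without pipefail: $?"   # → without pipefail: 0

# With pipefail, any failing stage makes the whole pipeline fail
set -o pipefail
false | true
echo "with pipefail: $?"      # → with pipefail: 1
```

This is exactly the case that bites logging pipelines like `some_command | tee -a "$LOG_FILE"`, where tee succeeds even when the command fails.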