When running batch operations in shell scripts, we often encounter this frustrating behavior:
for dir in *; do
    rsync -av "$dir" remote:destination/
done
Pressing Ctrl+C (which sends SIGINT) terminates only the current rsync process, while the loop continues with the next directory. This is because:
1. The terminal delivers SIGINT to the whole foreground process group, which includes both rsync and the shell running the loop
2. Bash defers acting on the signal until the foreground command finishes; rsync traps SIGINT and exits with its own status code (20), so bash assumes the interrupt was handled and carries on with the next iteration
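You can see the difference with a child process that does not catch SIGINT. In this sketch (an illustration, not part of the original), sleep stands in for rsync, and a single Ctrl+C stops the whole script because bash sees the child actually die of the signal:
#!/bin/bash
# sleep is killed by SIGINT, so bash acts on the interrupt it also received
# and abandons the rest of the loop
for i in 1 2 3; do
    sleep 10
done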
Method 1: Trap SIGINT at script level
#!/bin/bash
trap "exit" INT
for dir in *; do
    rsync -av "$dir" remote:destination/
done
This makes the entire script exit when receiving SIGINT.
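If callers need to tell an interrupt apart from other failures, a slightly longer trap (a sketch, not part of the original) can report the interrupt and exit with the conventional 128+2 status:
#!/bin/bash
# Exit with 130 (128 + 2, SIGINT's signal number) so a calling script can
# recognize that this run was interrupted
trap 'echo "Interrupted, stopping." >&2; exit 130' INT
for dir in *; do
    rsync -av "$dir" remote:destination/
done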
Method 2: Check exit status after each iteration
for dir in *; do
    rsync -av "$dir" remote:destination/
    status=$?
    # 130 (128+2) is the shell convention for "killed by SIGINT"; rsync
    # traps SIGINT itself and exits with 20, so check for both
    if [ "$status" -eq 130 ] || [ "$status" -eq 20 ]; then
        echo "Received interrupt, exiting..." >&2
        exit 1
    fi
done
For more complex scenarios with nested loops:
#!/bin/bash
should_exit=false
trap "should_exit=true" INT
# process_line stands for whatever per-line work the script does
while read -r file && ! $should_exit; do
    while read -r line && ! $should_exit; do
        process_line "$line"
    done < "$file"
done < file_list.txt
Here's a production-ready pattern I frequently use:
#!/bin/bash
set -euo pipefail
trap 'echo "Aborting on interrupt" >&2; exit 1' INT
# Count with the same glob the loop uses, so the total matches what is processed
dirs=(*/)
total_dirs=${#dirs[@]}
processed=0
for dir in "${dirs[@]}"; do
    printf "Processing %s (%d/%d)\n" "$dir" $((++processed)) "$total_dirs"
    rsync -av --progress "$dir" backup-server:/backups/ || {
        status=$?
        # 130 (128+2) is the shell convention for "killed by SIGINT";
        # rsync itself traps SIGINT and exits with 20
        if [ "$status" -eq 130 ] || [ "$status" -eq 20 ]; then
            exit 1
        fi
        echo "Non-interrupt error occurred, continuing..." >&2
    }
done
Key improvements:
- Progress tracking
- Proper error differentiation
- Clean exit on interrupt
- Strict error handling via set -euo pipefail (bash-specific, not POSIX sh)
When running commands inside bash loops, many developers encounter this frustrating behavior: pressing Ctrl+C (which sends SIGINT) only terminates the current command, not the entire loop. This happens because each command in the loop runs as a separate child process, and bash decides how to react to the interrupt based on how that child exits.
Here's what actually happens when you interrupt a loop:
- SIGINT is delivered to the foreground process group, which includes both the running rsync and the bash executing the loop
- rsync catches the signal, cleans up, and exits with its own status code (20)
- Because the child did not actually die from SIGINT, bash assumes the interrupt was handled and continues with the next iteration
Method 1: Trap SIGINT in the Script
The most robust solution is to handle SIGINT explicitly:
#!/bin/bash
trap "exit" INT
for DIR in *; do
    rsync -a "$DIR" example.com:somewhere/
done
Method 2: Check Command Exit Status
Alternatively, break out of the loop whenever the command fails; note that this stops on any rsync error, not only an interrupt:
for DIR in *; do
    rsync -a "$DIR" example.com:somewhere/ || break
done
Method 3: Using a Control Variable
For more complex scenarios, use a control flag:
keep_running=true
trap "keep_running=false" INT
while $keep_running && read -r DIR; do
    rsync -a "$DIR" example.com:somewhere/
done < <(find . -mindepth 1 -maxdepth 1 -type d)
For scripts that are themselves invoked from other scripts or loops, propagate the signal so the caller can react as well:
# On interrupt: restore the default SIGINT handler, then re-send SIGINT to
# this shell so it dies of the signal and its caller can tell
trap 'trap - INT; kill -INT $$' INT
for DIR in *; do
    rsync -a "$DIR" example.com:somewhere/
done
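To see why propagation matters, suppose the script above is saved as sync-dirs.sh somewhere on the PATH and called from another loop (the name and the wrapper below are assumptions for illustration). Because sync-dirs.sh ends up killed by SIGINT instead of exiting normally, the calling bash abandons its own loop after a single Ctrl+C rather than moving on to the next directory:
#!/bin/bash
# No INT trap is needed here: bash stops the loop when a foreground child
# is actually killed by SIGINT, which the propagation in sync-dirs.sh ensures
for workdir in /data/projects /data/archives; do
    (cd "$workdir" && sync-dirs.sh)
done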
Common pitfalls to avoid:
- Not quoting variables in the rsync command (breaks on filenames containing spaces)
- Forgetting to clean up temporary files in signal handlers (see the sketch after this list)
- Overriding SIGINT handling without restoring the previous behavior
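For the last two points, one common arrangement (a sketch, assuming the script writes a temporary file; the names are illustrative) is to put cleanup in an EXIT trap and keep the signal traps minimal, so cleanup runs on every exit path and the script still reports the conventional 128+N status:
#!/bin/bash
tmpfile=$(mktemp)
cleanup() {
    rm -f "$tmpfile"
}
# The EXIT trap fires on every exit path, including the exits triggered by
# the signal traps below; 130 = 128+2 (SIGINT), 143 = 128+15 (SIGTERM)
trap cleanup EXIT
trap 'exit 130' INT
trap 'exit 143' TERM
# ... build the file list in "$tmpfile" and run rsync with it here ...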
Here's a production-tested pattern:
#!/bin/bash
cleanup() {
    echo "Cleaning up..."
    # Add your cleanup code here
    exit 1
}
trap cleanup INT TERM
for SRC in /data/*; do
    DEST="backup-server:/backups/$(basename "$SRC")"
    if ! rsync -az --partial "$SRC" "$DEST"; then
        cleanup
    fi
done