When setting up temporary SSH tunnels for database backups, using a fixed timeout value (like `sleep 600`) creates reliability issues: the tunnel might close prematurely if backups take longer than expected, or it might remain open unnecessarily if the operation completes faster.
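For context, the timeout-based pattern being replaced presumably looks something like this (a sketch reconstructed from the description; the exact flags are assumed):

```bash
# Fragile: the tunnel's lifetime is hard-coded at 600 seconds,
# regardless of how long the backups actually take
ssh -f -L 4444:127.0.0.1:3306 remoteuser@208.77.188.166 sleep 600
```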
Instead of relying on timing, we can leverage process management techniques to maintain and automatically close the tunnel. Here's an improved solution:
```bash
#!/bin/bash

# Create the SSH tunnel in the background and capture its PID.
# Note: no -f here. With -f, ssh forks itself, so $! would hold the PID
# of the already-exited parent rather than the tunnel process.
ssh -N -T -L 4444:127.0.0.1:3306 remoteuser@208.77.188.166 &
SSH_PID=$!

# Function to clean up (kill tunnel) when the script exits
cleanup() {
    echo "Closing SSH tunnel (PID: $SSH_PID)"
    kill "$SSH_PID"
}
trap cleanup EXIT

# Give the tunnel a moment to establish before connecting through it
sleep 2

# Perform database backups
for DB in db1 db2 db3; do
    mysqldump --compress -h 127.0.0.1 -P 4444 -u user -ppassword "$DB" | \
        gzip > "/root/backups/snapshot/${DB}.sql.gz"

    # Verify the dump succeeded (exit status of mysqldump, not gzip)
    if [ "${PIPESTATUS[0]}" -ne 0 ]; then
        echo "Error dumping $DB" >&2
        exit 1
    fi
done
```
- Uses the `-N` flag so no remote command is run (none is needed for port forwarding) and `-T` to disable pseudo-terminal allocation
- Stores the SSH process ID (`$SSH_PID`) for later cleanup
- Sets up a `trap` to automatically close the tunnel when the script exits
- Includes error checking for each dump operation
For more complex scenarios, you might consider using named pipes:
```bash
# Create named pipe
mkfifo /tmp/mysqlpipe

# SSH tunnel whose stdin reads from the pipe; the connection stays
# open for as long as the pipe has a writer
ssh -T -L 4444:127.0.0.1:3306 remoteuser@208.77.188.166 < /tmp/mysqlpipe &
SSH_PID=$!

# Send keepalive data to the pipe
( while true; do echo ""; sleep 60; done ) > /tmp/mysqlpipe &
KEEPALIVE_PID=$!

cleanup() {
    kill "$SSH_PID" "$KEEPALIVE_PID"
    rm /tmp/mysqlpipe
}
trap cleanup EXIT
```
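The backup loop from the first script would then run at this point; when the script exits, the trap kills both the tunnel and the keepalive writer and removes the pipe.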
While implementing this solution:

- Consider using SSH keys instead of password authentication
- Use a dedicated backup user with minimal privileges
- Store credentials securely (consider `~/.my.cnf` for MySQL; see the sketch below)
- Rotate backup files and implement proper retention policies
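A minimal sketch of the `~/.my.cnf` approach, assuming a dedicated `backup` MySQL user (the username and placeholder password are illustrative); with this in place, the `-u user -ppassword` flags can be dropped from the `mysqldump` calls:

```bash
# Hypothetical credentials file; "backup" is an assumed username
cat > ~/.my.cnf <<'EOF'
[mysqldump]
user=backup
password=your-password-here
EOF
chmod 600 ~/.my.cnf   # readable only by the owner
```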
When backing up multiple MySQL databases through an SSH tunnel, we face two critical requirements:
- Maintaining tunnel persistence throughout all backup operations
- Ensuring proper tunnel termination after completion
Instead of relying on `sleep`, we can use process control to manage the tunnel lifecycle:
```bash
#!/bin/bash

# Create the tunnel in the background and capture its PID
# (ssh is run with & rather than -f so that $! refers to the tunnel)
ssh -N -L 4444:127.0.0.1:3306 remoteuser@208.77.188.166 &
TUNNEL_PID=$!

# Function to clean up the tunnel
cleanup() {
    kill "$TUNNEL_PID"
    echo "SSH tunnel terminated"
}
trap cleanup EXIT

# Give the tunnel a moment to establish
sleep 2

# Perform backups
for DB in db1 db2 db3; do
    mysqldump --compress -h 127.0.0.1 -P 4444 -u user -ppassword "$DB" | \
        gzip > "/root/backups/snapshot/${DB}.sql.gz"
done
```
Key improvements in this approach:
- The `-N` flag prevents remote command execution (better security)
- Trapping EXIT ensures the tunnel closes even if the script fails
- Explicit PID capture allows precise process management
For production environments, add error checking:
```bash
#!/bin/bash
set -euo pipefail

# Validate SSH connection first
if ! ssh -q -o BatchMode=yes -o ConnectTimeout=5 remoteuser@208.77.188.166 exit; then
    echo "SSH connection failed" >&2
    exit 1
fi

# Create the tunnel as a direct child so $! is valid (no -f: that
# would fork ssh and leave $! pointing at an already-exited process)
ssh -N -L 4444:127.0.0.1:3306 remoteuser@208.77.188.166 &
TUNNEL_PID=$!

cleanup() {
    if kill -0 "$TUNNEL_PID" 2>/dev/null; then
        kill "$TUNNEL_PID" || true
    fi
}

# Register multiple cleanup triggers
trap cleanup EXIT TERM INT

# Confirm the tunnel survived startup
sleep 2
if ! kill -0 "$TUNNEL_PID" 2>/dev/null; then
    echo "Tunnel creation failed" >&2
    exit 1
fi

# Backup with timestamp
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
for DB in db1 db2 db3; do
    if ! mysqldump --compress -h 127.0.0.1 -P 4444 -u user -ppassword "$DB" | \
        gzip > "/root/backups/snapshot/${DB}_${TIMESTAMP}.sql.gz"; then
        echo "Backup failed for $DB" >&2
        exit 1
    fi
done
```
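Since timestamped snapshots accumulate, pair this with the retention policy mentioned earlier. A minimal sketch, assuming an illustrative 14-day window:

```bash
# Assumption: 14 days is an example retention window; adjust to your policy
find /root/backups/snapshot -name '*.sql.gz' -mtime +14 -delete
```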
For frequent backups, configure SSH ControlMaster in `~/.ssh/config` (the `LocalForward` line is required here so the master connection actually carries the port forward the backup script uses):

```
Host backup-server
    HostName 208.77.188.166
    User remoteuser
    LocalForward 4444 127.0.0.1:3306
    ControlMaster auto
    ControlPath ~/.ssh/backup-%r@%h:%p
    ControlPersist 10m
```
Then modify the script:
```bash
#!/bin/bash

# Establish the master connection (-f is fine here because cleanup
# happens via "ssh -O exit", not by PID)
ssh -f -N backup-server

# Perform backups over the existing connection
for DB in db1 db2 db3; do
    mysqldump --compress -h 127.0.0.1 -P 4444 -u user -ppassword "$DB" | \
        gzip > "/root/backups/snapshot/${DB}.sql.gz"
done

# Close the connection
ssh -O exit backup-server
```
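When backups run on a schedule, you can avoid spawning redundant masters by checking whether one is already alive; `ssh -O check` does exactly that:

```bash
# Reuse an existing master connection if one is alive, otherwise start one
if ! ssh -O check backup-server 2>/dev/null; then
    ssh -f -N backup-server
fi
```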
- For large databases, consider parallel backups using `parallel` or `xargs -P` (see the sketch below)
- The compression level can be adjusted with `gzip -6` (trading speed for size)
- Monitor the tunnel with `lsof -i :4444` for debugging
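A minimal parallel sketch using `xargs -P`, assuming the tunnel is already up and credentials live in `~/.my.cnf` as suggested earlier (the database names here need no extra quoting):

```bash
# Run up to 3 dumps concurrently through the same tunnel
printf '%s\n' db1 db2 db3 | xargs -P 3 -I{} sh -c \
    'mysqldump --compress -h 127.0.0.1 -P 4444 {} | gzip > "/root/backups/snapshot/{}.sql.gz"'
```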