While Amazon RDS provides automated backups and snapshots, there are situations where you need a complete local copy of your database:
- Migration to non-AWS environments
- Compliance requirements for offline storage
- Development/testing with production-like data
- Disaster recovery scenarios beyond AWS
You'll need these tools installed locally:
# For Linux/macOS (via package manager):
sudo apt-get install mysql-client # Debian/Ubuntu
brew install mysql-client # macOS
# Verify mysqldump version matches your RDS version:
mysqldump --version
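Before dumping, it is worth confirming that your local client's major.minor version matches the RDS engine version, since a mismatched client can emit syntax the server-side restore target rejects. A small sketch of extracting the version for comparison (the sample version string below is a stand-in for your actual `mysqldump --version` output):

```shell
# Stand-in for the real output of `mysqldump --version`:
version_line="mysqldump  Ver 8.0.36 for Linux on x86_64 (MySQL Community Server - GPL)"

# Pull out the leading "major.minor" pair for comparison against
# the server version reported by SELECT VERSION() on RDS.
major_minor=$(printf '%s\n' "$version_line" | grep -oE '[0-9]+\.[0-9]+' | head -n 1)
echo "$major_minor"   # → 8.0
```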
The most reliable method is MySQL's native mysqldump utility:
mysqldump \
-h mydb.xxxxx.us-east-1.rds.amazonaws.com \
-u myuser \
-p \
--single-transaction \
--routines \
--triggers \
--events \
--hex-blob \
--quick \
--all-databases > full_backup_$(date +%Y%m%d).sql
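The interactive -p prompt above does not suit unattended runs, and passing the password on the command line leaks it into process listings. One common alternative (a hypothetical option file; adjust values to your environment) is a client config:

```ini
# ~/.my.cnf — restrict with chmod 600. mysqldump and mysql read the
# [client] section automatically, so no -p flag is needed.
[client]
host = mydb.xxxxx.us-east-1.rds.amazonaws.com
user = myuser
password = <your-password>
```

With this in place, the same mysqldump command works without any password arguments.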
Key parameters explained:
- --single-transaction: Creates a consistent backup without locking tables
- --routines: Includes stored procedures and functions
- --triggers: Preserves trigger definitions
- --events: Captures scheduled events
- --hex-blob: Safely handles binary data
For databases over 10GB, consider these approaches:
# Split backup by tables
mysqldump -h [RDS_ENDPOINT] -u [USER] -p [DATABASE] \
--tables table1 table2 > partial_backup.sql
# Compress during backup
mysqldump -h [RDS_ENDPOINT] -u [USER] -p [DATABASE] | gzip > backup.sql.gz
# Parallel dump (using mydumper alternative)
mydumper \
--host=[RDS_ENDPOINT] \
--user=[USER] \
--password=[PASSWORD] \
--outputdir=/backups \
--threads=4 \
--compress \
--build-empty-files
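When using the compressed variant above, gzip itself can verify the archive without decompressing it to disk. A minimal round trip on a throwaway file (paths under /tmp are illustrative):

```shell
# Create a tiny stand-in dump and compress it, as the piped backup would.
printf 'CREATE TABLE t (id INT);\n' > /tmp/demo.sql
gzip -c /tmp/demo.sql > /tmp/demo.sql.gz

# -t tests archive integrity; a truncated transfer fails here.
gzip -t /tmp/demo.sql.gz && echo "gzip OK"

# Spot-check the final statement without extracting the whole file.
gunzip -c /tmp/demo.sql.gz | tail -n 1
```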
Always validate your backups before relying on them:
# Check backup integrity
head -n 5 full_backup.sql # Verify dump header
tail -n 5 full_backup.sql # Check completion marker
# Test restore to local MySQL instance
mysql -u root -p -e "CREATE DATABASE restore_test"
mysql -u root -p restore_test < full_backup.sql
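The completion-marker check can be scripted: a successful mysqldump ends with a "-- Dump completed" comment, and its absence usually means the dump was cut off mid-transfer. A sketch against a synthetic dump file:

```shell
# Synthetic dump standing in for a real backup file.
printf -- '-- MySQL dump 10.13\nCREATE TABLE t (id INT);\n-- Dump completed on 2024-01-01\n' \
  > /tmp/full_backup.sql

# A truncated dump lacks the trailing completion comment.
if tail -n 1 /tmp/full_backup.sql | grep -q 'Dump completed'; then
  echo "backup looks complete"
else
  echo "backup truncated" >&2
fi
```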
For production environments, set up a cron job:
# Sample cron entry (runs daily at 2 AM)
0 2 * * * /usr/bin/mysqldump -h [RDS_ENDPOINT] -u [USER] -p[PASSWORD] \
--all-databases | gzip > /backups/daily/$(date +\%Y\%m\%d).sql.gz
# Retention policy (keep 7 days)
0 3 * * * find /backups/daily/ -type f -mtime +7 -delete
- Always encrypt sensitive backups (adding -pbkdf2 so openssl uses modern key derivation instead of its legacy default):
openssl enc -aes-256-cbc -salt -pbkdf2 -in backup.sql -out backup.sql.enc
- Store credentials securely (use AWS Secrets Manager or environment variables)
- Limit network exposure during transfer (SSH tunnel or VPN)
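The encryption step round-trips cleanly, which is worth verifying once before trusting it for real backups. A self-contained sketch (the passphrase and /tmp paths are placeholders; supply the passphrase from a secret store in practice):

```shell
# Stand-in backup file.
printf 'CREATE TABLE t (id INT);\n' > /tmp/backup.sql

# Encrypt, then decrypt with the same parameters.
openssl enc -aes-256-cbc -salt -pbkdf2 -pass pass:demo-passphrase \
  -in /tmp/backup.sql -out /tmp/backup.sql.enc
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo-passphrase \
  -in /tmp/backup.sql.enc -out /tmp/backup.decrypted.sql

# Byte-for-byte comparison confirms nothing was lost.
cmp /tmp/backup.sql /tmp/backup.decrypted.sql && echo "round-trip OK"
```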
While Amazon RDS provides convenient snapshot functionality, those snapshots remain locked within the AWS ecosystem. For true portability, meaning restoration on any MySQL server of the same major version (5.7 or later), we need standard SQL dump files.
Use this comprehensive command to capture schema, data, routines, and triggers:
mysqldump \
--host=mydb.xxxxx.us-east-1.rds.amazonaws.com \
--port=3306 \
--user=myuser \
--password \
--single-transaction \
--routines \
--triggers \
--events \
--hex-blob \
--quick \
--max_allowed_packet=512M \
--default-character-set=utf8mb4 \
--databases mydatabase > backup_$(date +%F).sql
- --single-transaction: Creates consistent backup without locking tables (InnoDB only)
- --routines: Includes stored procedures and functions
- --hex-blob: Properly handles binary data types
- --quick: Optimizes for large tables
Pipe directly to gzip for space savings:
mysqldump [options] | gzip > backup_$(date +%F).sql.gz
To verify dump integrity, restore it into a scratch database and then run mysqlcheck against that database (mysqlcheck inspects live tables on a running server; it cannot read a dump file directly):
mysql --user=root --password -e "CREATE DATABASE restore_check"
mysql --user=root --password restore_check < backup_file.sql
mysqlcheck --user=root --password restore_check
Restoration command:
mysql --host=target.server.com --user=admin -p < backup_file.sql
Sample cron job for weekly backups (save as /etc/cron.weekly/rds_backup):
#!/bin/bash
mysqldump --host=[RDS_ENDPOINT] --user=[USER] -p[PASSWORD] \
--single-transaction --routines --triggers --all-databases | \
gzip > /backups/mysql/rds_$(date +\%Y\%m\%d).sql.gz
# Rotate backups older than 30 days
find /backups/mysql -name "*.sql.gz" -mtime +30 -delete
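One pitfall in the pipeline above: without `set -o pipefail`, the exit status of `mysqldump ... | gzip` is gzip's, which is 0 even when the dump itself fails, so the cron job silently writes an empty archive. A sketch using `false` as a stand-in for a failing mysqldump:

```shell
#!/bin/bash
# pipefail makes the pipeline's status reflect the first failing command.
set -o pipefail

# "false" stands in for a mysqldump that errors out mid-dump.
false | gzip > /tmp/empty.sql.gz
status=$?

if [ "$status" -ne 0 ]; then
  echo "dump failed (exit $status); keeping previous backup"
fi
```

A real wrapper would write to a temp file and only replace the previous backup on success.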
For selective table backups (note the database name must come before --tables, and connection flags are still required):
mysqldump -h [RDS_ENDPOINT] -u [USER] -p mydb --tables table1 table2 > partial_backup.sql