As a seasoned sysadmin, I've always hated the three-step dance of:
1. Creating a raw SQL dump: mysqldump dbname -u root -p > dbname.sql
2. Compressing it: tar czvf dbname.sql.tgz dbname.sql
3. Cleaning up the intermediate file: rm dbname.sql
Here's how to streamline the process using Unix pipes and proper redirection.
The magic lies in piping mysqldump's output directly through gzip:
mysqldump -u root -p dbname | gzip > dbname.sql.gz
A word of warning: you'll often see attempts to pipe the dump straight into tar for a .tar.gz, but tar archives named files, not a stream — tar czvf - with no file arguments refuses to create an archive (and --files-from - would misread the SQL text as a list of filenames). For a single dump, gzip alone is the right tool, and the resulting .sql.gz restores directly.
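If policy genuinely requires a .tar.gz, you're back to an intermediate file, but the cleanup can at least be automatic. A sketch using mktemp and a trap:
# Dump into a throwaway directory, archive it, and clean up even on failure
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT
mysqldump -u root -p dbname > "$tmpdir/dbname.sql"
tar czvf dbname.sql.tar.gz -C "$tmpdir" dbname.sql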
To directly transfer compressed backups to another server:
mysqldump -u root -p dbname | gzip | ssh user@remotehost "cat > /backup/dbname.sql.gz"
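A variant I find handy stamps the remote file with the date and tests it on arrival (the path and naming scheme are just examples):
remote="/backup/dbname_$(date +%F).sql.gz"
mysqldump -u root -p dbname | gzip | ssh user@remotehost "cat > $remote && gzip -t $remote"
Note that $remote expands locally before ssh runs, which is exactly what we want here.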
The tar-over-SSH variant fails for the same reason tar-from-a-pipe does. If you'd rather burn the compression CPU on the remote host instead, ship the raw stream and let the far end compress it (ssh -C keeps the transfer itself compact):
mysqldump -u root -p dbname | ssh -C user@remotehost "gzip > /backup/dbname.sql.gz"
For large databases, consider adding these optimizations:
mysqldump --single-transaction --quick --skip-lock-tables -u root -p dbname | pigz -9 > dbname.sql.gz
On multi-core systems, pigz parallelizes the compression across cores, --single-transaction takes a consistent InnoDB snapshot without read locks, and --quick streams rows instead of buffering whole tables in memory.
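pigz isn't installed everywhere, so here's a small sketch that falls back to plain gzip (same dump flags as above):
# Prefer pigz when present; otherwise use single-threaded gzip
compress=$(command -v pigz || command -v gzip)
mysqldump --single-transaction --quick -u root -p dbname | "$compress" -9 > dbname.sql.gz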
Create a cron job for scheduled compressed backups:
0 3 * * * /usr/bin/mysqldump -u backupuser -p'password' dbname | gzip > /backups/dbname_$(date +\%Y\%m\%d).sql.gz
Remember to:
- Use a dedicated backup user with limited privileges (typically SELECT, LOCK TABLES, and SHOW VIEW)
- Store credentials securely (see the options-file sketch below)
- Implement proper rotation policies (the wrapper script at the end shows one approach)
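A client options file keeps the password out of the crontab and out of ps output. A minimal sketch, assuming a dedicated backupuser account (the password here is a placeholder):
# ~/.my.cnf on the backup host (chmod 600); mysqldump reads the [mysqldump] group
[mysqldump]
user=backupuser
password=changeme
With this in place, the cron entry above no longer needs -p'password'.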
Always verify your compressed backups:
gunzip -c dbname.sql.gz | head -n 20 # Check first 20 lines
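gzip can also test the archive itself, and mysqldump finishes a successful dump with a completion marker you can look for:
gzip -t dbname.sql.gz                # exits non-zero if the archive is corrupt
gunzip -c dbname.sql.gz | tail -n 1  # should read "-- Dump completed on ..."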
To restore:
gunzip < dbname.sql.gz | mysql -u root -p dbname
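One gotcha: mysql won't create the target database for you. A minimal restore sequence:
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS dbname"
gunzip < dbname.sql.gz | mysql -u root -p dbname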
For production environments, consider these enhancements:
# Add timestamp to filename
mysqldump dbname -u root -p | gzip > dbname_$(date +%Y%m%d_%H%M%S).sql.gz
# Include compression level tuning
mysqldump dbname -u root -p | gzip -9 > dbname.sql.gz
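Compression levels trade CPU for size, and the payoff past the default level is often small for SQL text, so it's worth measuring on your own data. An illustrative comparison that reuses an existing backup:
# Compare compressed sizes per level without touching the database
for lvl in 1 6 9; do
    printf 'level %s: ' "$lvl"
    gunzip -c dbname.sql.gz | gzip -"$lvl" | wc -c
done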
Finally, scripts should verify backup integrity by checking exit status, not just the output:
if mysqldump dbname -u root -p > dbname.sql; then
    gzip -f dbname.sql
else
    echo "Backup failed" >&2
    rm -f dbname.sql    # discard the partial dump
    exit 1
fi
For the piped version, a bare || only sees gzip's exit status; enable bash's pipefail so a failing mysqldump isn't masked by a successful gzip:
set -o pipefail
mysqldump dbname -u root -p | gzip > dbname.sql.gz || { echo "Backup failed" >&2; exit 1; }
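Putting it all together, here's a sketch of a cron-ready wrapper. The database name, backup path, and 14-day retention are illustrative, and credentials are assumed to come from ~/.my.cnf as shown earlier:
#!/usr/bin/env bash
set -euo pipefail
db=dbname
out=/backups/${db}_$(date +%Y%m%d_%H%M%S).sql.gz

# Dump, compress, and verify; -euo pipefail aborts if any stage fails
mysqldump --single-transaction --quick "$db" | gzip > "$out"
gzip -t "$out"

# Simple rotation: remove compressed dumps older than 14 days
find /backups -name "${db}_*.sql.gz" -mtime +14 -delete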