How to Pipe MySQL Dump to S3 Bucket Using s3cmd: A Complete Guide for Developers


When trying to pipe MySQL data directly to S3 using s3cmd, many developers encounter the "Not enough parameters" error. The root cause is that s3cmd's put command expects a filename argument and does not read from stdin unless explicitly told to.

The correct approach is to pass a hyphen (-) as the source, which tells s3cmd to read from stdin:

mysqldump -u root -ppassword --all-databases | gzip -9 | s3cmd put - s3://bucket/sql/databases.sql.gz
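
You can confirm the upload afterwards with a quick listing (bucket path as in the example above):

s3cmd ls s3://bucket/sql/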

Parallel compression with pigz:

mysqldump -u root -ppassword --all-databases | pigz -9 | s3cmd put - s3://bucket/sql/databases.sql.gz

With progress monitoring:

mysqldump -u root -ppassword --all-databases | pv | gzip -9 | s3cmd put - s3://bucket/sql/databases.sql.gz

For production environments, consider these security improvements:

MYSQL_PWD=password mysqldump -u root --all-databases | gzip | s3cmd --access_key=XXX --secret_key=YYY put - s3://bucket/sql/$(date +%Y%m%d).sql.gz

Create a cron job for regular backups (add to crontab -e):

0 3 * * * mysqldump -u root -ppassword --all-databases | gzip -9 | s3cmd put - s3://bucket/sql/db_$(date +\%Y\%m\%d).sql.gz
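
Because a failure anywhere in the pipeline is easy to miss once it runs from cron, you may prefer to point the cron entry at a small wrapper script instead. A minimal sketch, assuming MySQL credentials come from an option file or MYSQL_PWD rather than the command line (script path and bucket are illustrative):

#!/bin/bash
# /usr/local/bin/mysql-s3-backup.sh -- illustrative wrapper for the cron entry above
set -euo pipefail            # fail the job if any stage of the pipeline fails

BUCKET="s3://bucket/sql"     # adjust to your bucket
STAMP=$(date +%Y%m%d)

mysqldump -u root --all-databases | gzip -9 | s3cmd put - "$BUCKET/db_${STAMP}.sql.gz"

The crontab line then becomes 0 3 * * * /usr/local/bin/mysql-s3-backup.sh, and no % escaping is needed because the date is expanded inside the script.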
A few operational checks before relying on this setup:
  • Ensure s3cmd is configured properly (s3cmd --configure)
  • Verify the MySQL user has sufficient privileges (see the sketch after this list)
  • Check available disk space in /tmp (s3cmd may use temporary files)
  • Monitor network bandwidth if dealing with large databases
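
For the privileges check, one option is a dedicated, read-oriented backup user. The exact privilege list depends on your MySQL version and dump options (newer releases, for example, want PROCESS for tablespace information unless you pass --no-tablespaces), so treat this as a rough sketch with an illustrative user name and password:

# Run as an administrative user; adjust the password and privilege list to your setup
mysql -u root -p -e "CREATE USER 'backup'@'localhost' IDENTIFIED BY 'change_me';
  GRANT SELECT, SHOW VIEW, TRIGGER, EVENT, LOCK TABLES, PROCESS ON *.* TO 'backup'@'localhost';"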


The naive version of this pipeline fails because put receives only the S3 URL, treats it as an incomplete argument list, and never looks at stdin:


# This will fail:
mysqldump -u root -ppassword --all-databases | gzip -9 | s3cmd put s3://bucket/sql/databases.sql.gz

Here are three effective approaches to achieve this pipeline:

Method 1: Streaming from stdin

Passing a hyphen (-) as the source tells s3cmd to read the object body from standard input:


mysqldump -u root -ppassword --all-databases | gzip -9 | \
s3cmd put - s3://bucket/sql/databases.sql.gz

Method 2: Named Pipe Approach

For more complex pipelines or when additional processing is needed (note that this variant stages the compressed dump on local disk before uploading it):


mkfifo /tmp/mysqlpipe                                   # create a named pipe
gzip -9 < /tmp/mysqlpipe > /tmp/mysql.gz &              # compress in the background
mysqldump -u root -ppassword --all-databases > /tmp/mysqlpipe
wait                                                    # let gzip finish writing the archive
s3cmd put /tmp/mysql.gz s3://bucket/sql/databases.sql.gz
rm /tmp/mysqlpipe /tmp/mysql.gz

Method 3: Using AWS CLI (Alternative)

If you have the AWS CLI installed, it also accepts a hyphen for streaming from stdin and is a solid alternative:


mysqldump -u root -ppassword --all-databases | gzip -9 | \
aws s3 cp - s3://bucket/sql/databases.sql.gz
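
The streaming works in the other direction too if you ever need to restore. A minimal sketch, assuming the object uploaded above and a MySQL server that can prompt for its password:

# Stream the compressed dump back from S3 and feed it straight into mysql
aws s3 cp s3://bucket/sql/databases.sql.gz - | gunzip | mysql -u root -p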

For large databases, consider these optimizations:


# Use parallel gzip (pigz) for faster compression
mysqldump -u root -ppassword --all-databases | pigz -9 | \
s3cmd put - s3://bucket/sql/databases.sql.gz

# Stream rows without buffering, take a consistent InnoDB snapshot, and tune the upload chunk size
mysqldump --quick --single-transaction -u root -ppassword --all-databases | \
gzip -9 | s3cmd --multipart-chunk-size-mb=15 put - s3://bucket/sql/databases.sql.gz
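
For very large servers, it can also help to upload each database as its own object, so one failed transfer does not invalidate the whole backup. A rough sketch, with password handling as in the examples above and an illustrative exclusion list for the system schemas:

# Dump each database to its own compressed object
for db in $(mysql -u root -ppassword -N -e "SHOW DATABASES" | grep -Ev '^(information_schema|performance_schema|sys)$'); do
  mysqldump -u root -ppassword --single-transaction --databases "$db" | gzip -9 | \
  s3cmd put - "s3://bucket/sql/${db}.sql.gz"
done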

Always secure your credentials:


# Store password in my.cnf or environment variable
MYSQL_PWD=password mysqldump -u root --all-databases | \
gzip -9 | s3cmd put - s3://bucket/sql/$(date +%Y%m%d).sql.gz

# Use IAM roles when running on EC2
MYSQL_PWD=password mysqldump -u root --all-databases | gzip -9 | \
aws s3 cp - s3://bucket/sql/databases.sql.gz --sse AES256
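
The comment above also mentions my.cnf; a minimal sketch of that approach, with an illustrative user name and password (adjust to your setup):

# Create a client option file so the password never appears on the command line
cat > ~/.my.cnf <<'EOF'
[client]
user=backup
password=your_password_here
EOF
chmod 600 ~/.my.cnf

# mysqldump and mysql now pick up credentials from ~/.my.cnf automatically
mysqldump --all-databases | gzip -9 | s3cmd put - s3://bucket/sql/databases.sql.gz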