Performing MySQL backups during production hours requires careful resource management. The standard mysqldump
utility can consume significant CPU and I/O resources, potentially affecting Apache and other database instances running on the same server.
As noted in MySQL forums, simply wrapping mysqldump in nice often fails because:
- The heavy lifting happens inside the mysqld server process, which is unaffected by the client's nice value
- nice only influences the CPU scheduler; it does nothing to prioritize disk I/O
- Table locks can still block production queries
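The first point is easy to demonstrate with any long-running process standing in for mysqldump: nice changes the priority of the process you launch, but not that of an already-running server such as mysqld (the sleep stand-in below is purely illustrative):

```shell
# Launch a "client" at the lowest priority, as nice -n 19 mysqldump would
nice -n 19 sleep 5 &
child=$!

# The launched process really is at nice 19...
ps -o ni= -p "$child" | tr -d ' '    # prints 19

# ...but a pre-existing process (here, the current shell standing in
# for mysqld) keeps its original priority, typically 0:
ps -o ni= -p $$ | tr -d ' '
```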
1. Ionice + Nice Combination
The most effective approach combines both CPU and I/O prioritization:
ionice -c 3 nice -n 19 mysqldump --single-transaction --skip-lock-tables -u user -p dbname > backup.sql
Key parameters:
- ionice -c 3: Sets the idle I/O scheduling class (Linux only)
- nice -n 19: Lowest CPU priority
- --single-transaction: Takes a consistent InnoDB snapshot without locking tables
- --skip-lock-tables: Disables the per-database LOCK TABLES statements mysqldump would otherwise issue
2. mysqldump with Resource Groups (MySQL 8.0+)
For modern MySQL installations:
CREATE RESOURCE GROUP backup_group
  TYPE = USER
  VCPU = 0-3
  THREAD_PRIORITY = 19;
For user resource groups, THREAD_PRIORITY ranges from 0 to 19, with 19 the lowest. Note that SET RESOURCE GROUP backup_group; applies only to the session that issues it; because mysqldump opens its own connection, its server thread must be assigned to the group explicitly rather than by running the statement in a different session first.
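SET RESOURCE GROUP also accepts a FOR &lt;thread_id&gt; clause, so the dump's server thread can be moved into the group from a second session while it runs; a sketch (the admin credentials, and the assumption that backup_user's only connection is the dump, are illustrative):

```shell
# From a second session, while the dump connection is open:
# find the server thread belonging to the backup user...
THREAD_ID=$(mysql -u admin -p'secret' -N -e "
    SELECT THREAD_ID FROM performance_schema.threads
    WHERE PROCESSLIST_USER = 'backup_user' LIMIT 1;")

# ...and assign that existing thread to the low-priority group
mysql -u admin -p'secret' -e \
    "SET RESOURCE GROUP backup_group FOR ${THREAD_ID};"
```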
3. Percona XtraBackup Alternative
For large databases, consider this enterprise-grade solution (the legacy innobackupex wrapper is deprecated; XtraBackup 8.0 uses the xtrabackup binary directly):
xtrabackup --backup --compress --no-lock --throttle=100 --target-dir=/backup/path
Here --throttle caps the number of I/O operation pairs per second.
Always verify backup operations aren't affecting production:
# Check I/O wait
vmstat 1 10
# Monitor MySQL threads (from the mysql client)
SHOW PROCESSLIST;
# Verify disk latency
iostat -x 1
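These checks can also gate the backup automatically: sample vmstat, pull out the iowait column, and defer the dump if the disk is already busy. A sketch (the 10% threshold is an arbitrary example, and "wa" is field 16 on typical Linux procps output; verify against your vmstat header):

```shell
#!/bin/sh
# Defer the backup if current I/O wait is too high
MAX_IOWAIT=10    # arbitrary example threshold, in percent

# The last line of a two-sample run reflects current activity;
# field 16 is "wa" on typical procps vmstat output.
wa=$(vmstat 1 2 | tail -1 | awk '{print $16}')

if [ "$wa" -lt "$MAX_IOWAIT" ]; then
    echo "iowait ${wa}% -- starting backup"
else
    echo "iowait ${wa}% -- deferring backup" >&2
    exit 1
fi
```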
For scheduled low-priority backups:
# /etc/cron.d/mysql-backup
0 2 * * * root /usr/bin/ionice -c3 /usr/bin/nice -n19 /usr/bin/mysqldump --single-transaction -u backup_user -p'password' dbname > /backups/dbname_$(date +\%Y\%m\%d).sql
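If one night's dump occasionally overruns into the next, two backups can end up competing for I/O; wrapping the cron command in flock serializes them (the lock file path, and moving the long command into a script, are illustrative choices):

```shell
# /etc/cron.d/mysql-backup -- allow only one backup at a time.
# flock -n exits immediately if the previous run still holds the lock,
# so an overrunning dump simply turns the next scheduled run into a no-op.
0 2 * * * root /usr/bin/flock -n /var/run/mysql-backup.lock \
    /usr/bin/ionice -c3 /usr/bin/nice -n19 /usr/local/bin/mysql-backup.sh
```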
When performing MySQL database backups on a production server, the default mysqldump operation can consume significant system resources, potentially affecting other critical services such as Apache and concurrent database operations. The challenge intensifies when backups must run during peak hours without degrading system performance.
A common misconception is that the nice command alone suffices:
nice -n 19 mysqldump -u user -p database > backup.sql
This approach falls short because the actual work of reading and serializing table data happens inside the mysqld server process, whose threads are unaffected by the client's nice value. The server continues processing the dump request at normal priority.
1. Using MySQL's Built-in Low-Impact Options
The most effective method uses mysqldump's native flags (note that mysqldump has no --low-priority option, despite what some guides claim):
mysqldump --single-transaction --skip-add-locks \
--quick --compress \
-u username -p database > backup.sql
Key parameters:
- --single-transaction: Creates a consistent snapshot without locking tables
- --skip-add-locks: Omits the LOCK TABLES/UNLOCK TABLES statements written around each table's INSERTs in the dump file (this affects the eventual restore, not the dump itself)
- --quick: Streams rows directly to the output instead of buffering whole tables in client memory
- --compress: Compresses client/server traffic, reducing network bandwidth (deprecated since MySQL 8.0.18 in favor of --compression-algorithms)
2. Process Priority Control with Ionice
For additional disk I/O control on Linux systems:
ionice -c 3 nice -n 19 mysqldump \
--single-transaction -u user -p db > backup.sql
ionice -c 3 assigns the idle I/O class, ensuring the backup only uses disk bandwidth when the system is otherwise idle.
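If the idle I/O class is still not gentle enough (for example on shared storage, where "idle" locally is not idle globally), the dump stream itself can be capped with pv's rate limiter; a sketch assuming pv is installed, with an arbitrary 5 MiB/s cap:

```shell
# -q: quiet, -L 5m: limit throughput to 5 MiB/s
ionice -c 3 nice -n 19 mysqldump --single-transaction \
    -u user -p db | pv -q -L 5m | gzip > backup.sql.gz
```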
3. Advanced: Using MySQL Replication Slaves
For high-traffic environments, consider setting up a replication slave dedicated to backups:
# On slave server:
mysqldump --single-transaction --dump-slave=2 \
-u user -p db > backup.sql
--dump-slave=2 records the master's binary log coordinates as a comment and briefly stops the slave's SQL thread while the dump runs. (--skip-slave-start is a mysqld option, not a mysqldump one, and has been dropped here.)
Always verify your backup's impact using:
# Monitor CPU usage
top -b -n 1 | grep mysqldump
# Monitor I/O wait
iostat -x 1 5
Combine these techniques with smart scheduling:
# Example crontab entry for 2 AM with low priority
0 2 * * * ionice -c 3 nice -n 19 /usr/bin/mysqldump --single-transaction -u backup -p'password' db | gzip > /backups/db_$(date +\%Y\%m\%d).sql.gz
For very large databases, consider mydumper, which offers better performance controls:
mydumper -u user -p password \
-B database -o /backups \
--threads 4 --chunk-filesize 100 \
--compress --less-locking
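Whichever tool produces the dump, a quick post-run sanity check catches empty output files before they are needed; a sketch assuming mydumper's default <db>.<table>.sql naming under /backups:

```shell
# Flag empty or missing per-table dump files (mydumper's default layout)
for f in /backups/database.*.sql*; do
    [ -s "$f" ] || echo "WARNING: empty or missing dump file: $f" >&2
done
```

mydumper's companion myloader restores the same directory in parallel, e.g. myloader -d /backups -B database --threads 4.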