Optimizing Daily Backups for a Large 22GB MySQL MyISAM Database Without Downtime

When dealing with a 22GB MySQL database using MyISAM tables, traditional mysqldump approaches become problematic. The main issues are:

  • Requiring web server downtime (5+ minutes of outage)
  • Extremely slow performance when attempted without downtime
  • Website becoming unresponsive during backup attempts

For production MyISAM databases of this size, consider these approaches:

1. Using mysqlhotcopy for MyISAM

This Perl script is designed specifically for MyISAM tables: it locks the tables and copies the underlying data files directly. Note that mysqlhotcopy was deprecated in MySQL 5.6 and removed in 5.7, so it only applies to older servers:

mysqlhotcopy --user=username --password=password \
--allowold --keepold \
db_name /path/to/backup/directory
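
For a daily run, a cron entry along these lines works (a sketch: the schedule and paths are placeholders, and credentials are better kept in a [client] section of root's ~/.my.cnf, which mysqlhotcopy reads, than on the command line):

# /etc/cron.d/mysql-backup -- run at 03:15, ahead of morning traffic
15 3 * * * root mysqlhotcopy --allowold --keepold db_name /backup/mysql >> /var/log/mysql-backup.log 2>&1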

2. Filesystem Snapshots with LVM

If your server uses LVM, you can create near-instant snapshots. For MyISAM to be consistent on disk, hold FLUSH TABLES WITH READ LOCK while the snapshot is created (see the scripted sketch after this block):

# Create snapshot
lvcreate --size 5G --snapshot --name dbsnap /dev/vg00/mysql

# Mount snapshot
mkdir /mnt/mysql-snapshot
mount /dev/vg00/dbsnap /mnt/mysql-snapshot

# Backup from snapshot
rsync -avz /mnt/mysql-snapshot/ /backup/mysql/

# Cleanup
umount /mnt/mysql-snapshot
lvremove -f /dev/vg00/dbsnap
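
To script the lock around the snapshot, one approach (a minimal sketch, assuming credentials come from ~/.my.cnf; the \! escape runs a shell command from inside the still-locked mysql session):

#!/bin/bash
# Hold FLUSH TABLES WITH READ LOCK while lvcreate runs, then release it.
mysql <<'EOF'
FLUSH TABLES WITH READ LOCK;
\! lvcreate --size 5G --snapshot --name dbsnap /dev/vg00/mysql
UNLOCK TABLES;
EOF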

3. Percona XtraBackup (Works with MyISAM)

While primarily an InnoDB tool, XtraBackup also copies MyISAM files; it takes a brief FLUSH TABLES WITH READ LOCK near the end of the backup to keep them consistent. Do not pass --no-lock on a MyISAM-heavy server: that option is only safe when all tables are InnoDB.

xtrabackup --backup \
--target-dir=/data/backups/ \
--datadir=/var/lib/mysql/ \
--user=backup_user \
--password=backup_password
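
A raw XtraBackup copy is not directly restorable until the prepare phase has run against it; using the same target directory:

# Apply the InnoDB redo log so the copy is consistent (MyISAM files are used as-is)
xtrabackup --prepare --target-dir=/data/backups/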

If you must use mysqldump, drop the options that do nothing here: --single-transaction only yields a consistent dump for transactional engines such as InnoDB, and --skip-extended-insert makes the dump larger and the restore slower. A leaner invocation (skipping table locks trades consistency for availability, and --compress only pays off when dumping over a network connection, so it is omitted here):

mysqldump --quick \
--skip-lock-tables \
--skip-add-locks \
--databases your_db \
--result-file=backup.sql

For a database this size, consider binary-log-based incremental backups:

# Full backup once per week: --master-data=2 records the binlog position as a
# comment, and with MyISAM it takes a global read lock so the dump is consistent
mysqldump --flush-logs --master-data=2 \
--databases your_db > full_backup.sql

# Daily incremental: rotate to a fresh binlog, then archive the closed ones
# (requires log_bin to be enabled in my.cnf)
mysqladmin flush-logs
cp /var/lib/mysql/mysql-bin.00000* /backup/incremental/
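
Restoring to a point in time then means loading the weekly full dump and replaying the archived binlogs in order; a sketch with hypothetical file names:

# Full dump first (it contains CREATE DATABASE/USE statements), then the binlogs
mysql < full_backup.sql
mysqlbinlog /backup/incremental/mysql-bin.000012 /backup/incremental/mysql-bin.000013 | mysql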

Track backup duration and impact with this simple monitoring script:

#!/bin/bash
START_TIME=$(date +%s)

# Your backup command here
mysqldump [options] > backup.sql
STATUS=$?

END_TIME=$(date +%s)
ELAPSED=$((END_TIME - START_TIME))

# Mail the duration and the dump's exit status so silent failures get noticed
echo "Backup exited with status $STATUS after $ELAPSED seconds" | \
mail -s "Backup Report" admin@example.com

When dealing with large MyISAM databases (22GB in this case), traditional mysqldump approaches present significant challenges:

  • Service disruption requiring web server downtime
  • Extended backup windows (5+ minutes of outage)
  • Potential website inaccessibility during backups
  • Severe performance degradation when run without a maintenance window

Here are proven approaches I've implemented successfully for clients with similar requirements:

1. Filesystem Snapshots with LVM

For MyISAM tables, filesystem-level snapshots often provide the best balance of speed and reliability; as in the scripted sketch earlier, hold FLUSH TABLES WITH READ LOCK for the moment the snapshot is created:

# Create LVM snapshot
lvcreate --size 5G --snapshot --name mysql_snap /dev/vg00/mysql

# Mount snapshot read-only (the nouuid option applies to XFS filesystems)
mkdir /mnt/mysql_snap
mount -o nouuid,ro /dev/vg00/mysql_snap /mnt/mysql_snap

# Perform backup from snapshot
rsync -avz /mnt/mysql_snap/ /backup/mysql/

# Cleanup
umount /mnt/mysql_snap
lvremove -f /dev/vg00/mysql_snap

2. Percona XtraBackup for MyISAM

While primarily an InnoDB tool, XtraBackup also copies MyISAM files, taking a brief global read lock near the end of the backup to keep them consistent:

# Install Percona XtraBackup (the 2.4 series is for MySQL 5.x; MySQL 8.0 needs XtraBackup 8.0)
wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb
sudo apt-get update
sudo apt-get install percona-xtrabackup-24

# Run backup
xtrabackup --backup --target-dir=/backup/mysql/ --datadir=/var/lib/mysql/

3. Master-Slave Replication with Hot Backups

For zero-downtime solutions, consider setting up replication and taking backups from the slave (a job sketch follows the config):

# my.cnf on the slave server
[mysqld]
server-id = 2
log_bin = mysql-bin
relay-log = relay-bin
log-slave-updates = 1
read-only = 1
skip-slave-start
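
Once the slave has caught up, a nightly job along these lines (a sketch; paths are placeholders and credentials are assumed to come from ~/.my.cnf) locks and dumps the replica without touching the master:

#!/bin/bash
# Pause only the SQL applier so the data files stop changing, dump, resume.
# No clients query this replica, so the table locks are harmless here.
mysql -e "STOP SLAVE SQL_THREAD;"
mysqldump --lock-all-tables --databases your_db | gzip > /backup/replica_backup.sql.gz
mysql -e "START SLAVE SQL_THREAD;"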

When you must use mysqldump with MyISAM, keep the option list honest: --single-transaction and --no-autocommit matter only for transactional engines, --order-by-primary slows the dump, and --skip-extended-insert and --skip-disable-keys make the output larger and the restore slower. A leaner form:

mysqldump --quick --skip-lock-tables --skip-add-locks \
--databases your_db | gzip > backup.sql.gz
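
Restoring the compressed dump is then a single pipe:

# The dump contains CREATE DATABASE/USE statements, so no db argument is needed
gunzip < backup.sql.gz | mysql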

For growing databases, implement these monitoring practices:

  • Track table and index sizes from information_schema (see the query after this list) so growth is spotted before backups overrun their window
  • Schedule OPTIMIZE TABLE during low-traffic periods
  • Consider partitioning large tables by date ranges
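
A cron-friendly version of that size check (a sketch; -N suppresses the column header row):

# Largest MyISAM tables first, data plus index size in MB
mysql -N -e "SELECT table_schema, table_name,
ROUND((data_length + index_length)/1024/1024) AS total_mb
FROM information_schema.tables
WHERE engine='MyISAM'
ORDER BY total_mb DESC LIMIT 20;"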

While not immediate help, generating the conversion statements now makes future planning easier; the query below only prints ALTER statements, so review them before executing anything:

SELECT CONCAT('ALTER TABLE ', table_schema, '.', table_name, ' ENGINE=InnoDB;') 
FROM information_schema.tables 
WHERE engine='MyISAM' AND table_schema NOT IN ('information_schema','mysql','performance_schema');
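
To capture the output for review (a sketch; the file name is a placeholder):

# Write the generated ALTER statements to a file; do not pipe them into mysql blindly
mysql -N -e "SELECT CONCAT('ALTER TABLE ', table_schema, '.', table_name, ' ENGINE=InnoDB;') FROM information_schema.tables WHERE engine='MyISAM' AND table_schema NOT IN ('information_schema','mysql','performance_schema');" > convert_to_innodb.sql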