When implementing automated backups for PostgreSQL, we need to consider several approaches. The two most common methods are:
- Logical backups using pg_dump/pg_dumpall
- Physical backups using pg_basebackup or filesystem-level snapshots
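For orientation, a minimal invocation of each looks like this (database names, users, and paths are placeholders):

# Logical: one database, custom (compressed) format
pg_dump -U postgres -Fc mydb -f /var/backups/mydb.dump

# Physical: copy of the whole cluster over the replication protocol
pg_basebackup -U replicator -D /var/backups/base -Fp -Xs -P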
For most daily backup scenarios, logical backups with pg_dump provide the right balance between simplicity and reliability.
Here's a robust backup script template that includes error handling, rotation, and notifications:
#!/bin/bash
# Configuration
BACKUP_DIR="/var/backups/postgres"
DATE=$(date +%Y-%m-%d)
DB_NAME="your_database"
DB_USER="postgres"
RETENTION_DAYS=7
LOG_FILE="/var/log/postgres_backup.log"

# Create the backup directory if it does not exist
mkdir -p "$BACKUP_DIR"

# Dump the database in custom format; stderr goes to the log
echo "$(date): Starting backup of $DB_NAME" >> "$LOG_FILE"
if pg_dump -U "$DB_USER" -Fc "$DB_NAME" -f "$BACKUP_DIR/${DB_NAME}_${DATE}.dump" 2>> "$LOG_FILE"; then
    echo "$(date): Backup completed successfully" >> "$LOG_FILE"
    # Remove backups older than the retention window
    find "$BACKUP_DIR" -name "*.dump" -mtime +"$RETENTION_DAYS" -delete
    # Optional: send notification
    echo "PostgreSQL backup completed for $DATE" | mail -s "Backup Success" admin@example.com
else
    echo "$(date): Backup failed" >> "$LOG_FILE"
    echo "PostgreSQL backup failed for $DATE" | mail -s "Backup Failure" admin@example.com
fi
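Note that an unattended pg_dump needs passwordless authentication. If the server is not using local peer auth, the usual route is a ~/.pgpass file (mode 0600) for the account running the job; the credentials below are placeholders:

# ~/.pgpass — format is host:port:database:username:password
localhost:5432:your_database:postgres:s3cret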
For modern Linux systems, consider using systemd timers:
[Unit]
Description=PostgreSQL daily backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
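A timer by itself runs nothing; systemd activates a .service unit with the same base name (e.g. postgres-backup.timer and postgres-backup.service). A minimal companion service might look like this (unit file names and the script path are assumptions):

# /etc/systemd/system/postgres-backup.service
[Unit]
Description=PostgreSQL daily backup

[Service]
Type=oneshot
User=postgres
ExecStart=/usr/local/bin/pg_backup.sh

Activate the schedule with: systemctl enable --now postgres-backup.timer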
Alternatively, the traditional cron approach works well:
0 2 * * * /path/to/backup_script.sh
For production environments with large databases:
- Implement parallel dumping with pg_dump's -j flag (parallel dumps require the directory output format, -Fd); see the example after this list
- Tune compression: the custom and directory formats compress by default, and -Z 6 sets the level explicitly
- Set up WAL archiving for point-in-time recovery
- Implement remote backup storage using rsync or cloud tools
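For instance, a parallel, compressed dump of a hypothetical mydb could look like this (the worker count and paths are illustrative):

pg_dump -U postgres -Fd -j 4 -Z 6 -f /var/backups/postgres/mydb_$(date +%F) mydb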
Always verify your backups. Listing the archive's table of contents is a quick sanity check:
pg_restore --list your_backup_file.dump | head -n 10
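The --list check only proves the archive's table of contents is readable. For stronger assurance, periodically restore into a throwaway database (the database name here is an arbitrary choice):

createdb -U postgres restore_test
pg_restore -U postgres -d restore_test your_backup_file.dump
dropdb -U postgres restore_test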
Set up monitoring for backup completion and disk space. A simple check script:
#!/bin/bash
BACKUP_DIR="/var/backups/postgres"
MIN_SIZE=1024  # 1KB minimum backup size

# Newest dump, if any; silence ls when the glob matches nothing
LATEST=$(ls -t "$BACKUP_DIR"/*.dump 2>/dev/null | head -1)
if [ -f "$LATEST" ]; then
    SIZE=$(stat -c%s "$LATEST")
    if [ "$SIZE" -lt "$MIN_SIZE" ]; then
        echo "Backup file too small: $LATEST" | mail -s "Backup Alert" admin@example.com
    fi
else
    echo "No backup found" | mail -s "Backup Alert" admin@example.com
fi
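The check above only validates the newest dump; a companion disk-space check covers the other common failure mode (the 90% threshold is arbitrary, and --output=pcent assumes GNU df):

#!/bin/bash
# Alert when the backup filesystem crosses 90% usage
USAGE=$(df --output=pcent /var/backups/postgres | tail -1 | tr -dc '0-9')
if [ "$USAGE" -gt 90 ]; then
    echo "Backup disk ${USAGE}% full" | mail -s "Disk Space Alert" admin@example.com
fi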
All of the patterns above build on the same foundation: scheduled pg_dump runs written to safe storage, automated so data safety never depends on manual intervention. Stripped to its simplest form, that is a single cron entry:
# Add to crontab -e
0 3 * * * pg_dump -U postgres -Fc mydb > /backups/mydb_$(date +\%Y-\%m-\%d).dump
Key parameters:
- -Fc: Custom format (compressed)
- $(date +\%Y-\%m-\%d): Daily timestamp (the backslashes are required because cron treats a bare % as a newline)
- /backups/: Secure storage directory
For production systems, a leaner variant with 30-day retention works well as /usr/local/bin/pg_backup.sh:
#!/bin/bash
DATE=$(date +%Y-%m-%d)
BACKUP_DIR="/var/backups/postgres"
DB_NAME="production_db"
LOG_FILE="/var/log/postgres_backup.log"

mkdir -p "$BACKUP_DIR"
echo "[$DATE] Starting backup" >> "$LOG_FILE"
pg_dump -U postgres -Fc "$DB_NAME" > "$BACKUP_DIR/${DB_NAME}_$DATE.dump" 2>> "$LOG_FILE"

# Rotate old backups (keep 30 days)
find "$BACKUP_DIR" -type f -name "*.dump" -mtime +30 -exec rm {} \;
echo "[$DATE] Backup completed" >> "$LOG_FILE"
Make it executable:
chmod +x /usr/local/bin/pg_backup.sh
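Then schedule it; a system crontab entry that runs it nightly as the postgres user might look like this (the time of day is arbitrary):

# /etc/cron.d/pg_backup — the sixth field is the user to run as
0 3 * * * postgres /usr/local/bin/pg_backup.sh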
For critical systems, combine daily dumps with WAL archiving in postgresql.conf:
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /var/lib/postgresql/wal/%f && cp %p /var/lib/postgresql/wal/%f'
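Changing wal_level or archive_mode takes effect only after a server restart, and point-in-time recovery also needs a base backup to replay the archived WAL against; for example (service name and target directory are assumptions):

sudo systemctl restart postgresql
pg_basebackup -U postgres -D /var/backups/postgres/base -Fp -Xs -P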
Always verify backups. This script checks integrity and sends email alerts:
#!/bin/bash
BACKUP_FILE="/backups/latest.dump"
LOG_FILE="/var/log/backup_verify.log"

if pg_restore -l "$BACKUP_FILE" >/dev/null 2>&1; then
    echo "$(date) - Backup verified" >> "$LOG_FILE"
else
    echo "$(date) - Backup corrupted!" >> "$LOG_FILE"
    mail -s "PostgreSQL Backup Failure" admin@example.com < "$LOG_FILE"
fi
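This assumes something maintains /backups/latest.dump as a pointer to the newest dump; one simple way is to update a symlink at the end of the backup script (variable names follow the script above):

# Append to the backup script: repoint latest.dump at the fresh dump
ln -sf "$BACKUP_DIR/${DB_NAME}_$DATE.dump" /backups/latest.dump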
For complex environments, Barman (Backup and Recovery Manager) provides advanced features:
# Install on Debian/Ubuntu
sudo apt-get install barman

# Sample configuration (/etc/barman.conf)
[main]
description = "Main PostgreSQL Server"
conninfo = host=192.168.1.10 user=barman dbname=postgres
backup_method = postgres
parallel_jobs = 2
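Once configured, day-to-day operation is a handful of commands:

# Verify connectivity and prerequisites for the "main" server
barman check main

# Take a backup, then list what is stored
barman backup main
barman list-backup main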
Extend the script to upload to AWS S3 using awscli:
#!/bin/bash
# After local backup completes (no %-escaping needed outside crontab)
aws s3 cp /backups/latest.dump "s3://your-bucket/postgres/$(date +%Y-%m-%d).dump"
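For remote retention, an S3 lifecycle rule on the bucket is usually simpler than scripting deletions: configure it to expire objects under the postgres/ prefix after your retention window.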