Optimizing pg_dump Performance: Reduce Backup Time from 70 Minutes to 5 Minutes in PostgreSQL 13


When dealing with large PostgreSQL databases (30GB+ in this case), the default single-process pg_dump -Fc database_name approach becomes painfully slow. At 70 minutes for a 30GB database, the dump alone becomes an operational bottleneck for the nightly maintenance window.

Here are the most effective optimizations I've tested in production environments:

pg_dump -Fd database_name \
  -j 8 \
  -Z 5 \
  --file=/path/to/dumpdir \
  -h your_host \
  -U postgres

Parallel Dumping (-j): The -j 8 flag dumps up to 8 tables at a time using 8 worker processes (each worker opens its own connection, so the server must allow enough connections). This typically provides a 3-5x speed improvement for large databases, and it requires the directory format (-Fd).
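
As a rough sizing rule (my assumption, not anything pg_dump enforces), the worker count is usually tied to the CPU cores available on the dump host and the number of large tables; a minimal sketch:

# Sketch: derive the worker count from the CPU count, capped at 8
# (the cap, host, and database name are placeholders for illustration)
JOBS=$(nproc)
if [ "$JOBS" -gt 8 ]; then JOBS=8; fi

pg_dump -Fd -j "$JOBS" -Z 5 --file=/path/to/dumpdir \
  -h your_host -U postgres database_name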

Directory Format (-Fd): The directory format is the only archive format that supports parallel dumps, and like the custom format it can later be restored selectively and in parallel with pg_restore.
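
To sanity-check what actually landed in the directory archive, its table of contents can be listed without restoring anything (the path is a placeholder):

# List the archive's table of contents without restoring it
pg_restore -l /path/to/dumpdir | head -20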

Compression Level (-Z): Level 5 provides a good balance between compression ratio and CPU overhead. Testing shows higher levels yield diminishing returns.
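
To find the sweet spot for your own data, a quick comparison loop like the sketch below times a few levels; "big_table", the paths, and the use of GNU time are assumptions to adapt:

# Sketch: time a few compression levels on one representative table
for level in 0 1 5 9; do
  rm -rf /tmp/dump_z$level
  /usr/bin/time -f "Z=$level: %e s elapsed" \
    pg_dump -Fd -Z $level -t big_table -f /tmp/dump_z$level database_name
  du -sh /tmp/dump_z$level
done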

Hardware Considerations:

# On Linux, a deadline-class I/O scheduler can improve sequential read throughput.
# Newer multi-queue kernels expose it as "mq-deadline"; check what is available first:
cat /sys/block/sda/queue/scheduler
echo mq-deadline > /sys/block/sda/queue/scheduler

Network Optimization: When streaming a dump directly to remote storage:

# Use netcat for a fast raw transfer; note that directory-format (-Fd) dumps
# write to a directory and cannot be piped, so stream the custom format instead
pg_dump -Fc -Z 5 dbname | nc -q 1 destination_ip 1234
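
If netcat isn't available on the receiving end, or the transfer needs to be encrypted, two common alternatives are streaming over ssh or syncing a finished parallel dump; the hostnames and paths here are placeholders:

# Stream a custom-format dump over ssh
pg_dump -Fc -Z 5 dbname | ssh backup@destination_ip "cat > /backups/dbname.dump"

# Or keep the parallel directory dump local, then copy the finished directory
pg_dump -Fd -j 8 -Z 5 -f /backups/dbname_dir dbname
rsync -a /backups/dbname_dir/ backup@destination_ip:/backups/dbname_dir/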

To verify improvements, PostgreSQL's built-in statistics views help; note that pg_stat_progress_basebackup (new in version 13) only tracks physical base backups such as the pg_basebackup approach below, not pg_dump:

SELECT * FROM pg_stat_progress_basebackup;
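
For pg_dump itself there is no dedicated progress view in PostgreSQL 13, but its backend sessions (it reports application_name 'pg_dump' by default) can be watched from another terminal:

# Watch pg_dump's worker sessions while the dump is running
psql -c "SELECT pid, state, query_start, left(query, 60) AS current_query
         FROM pg_stat_activity
         WHERE application_name = 'pg_dump';"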

For continuous monitoring, consider setting up Prometheus with PostgreSQL exporter to track backup durations over time.
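
One lightweight way to feed that data in, assuming node_exporter's textfile collector is enabled (the directory below must match --collector.textfile.directory), is to have the backup job emit a metric file:

# Record the last backup duration for Prometheus via the textfile collector
start=$(date +%s)
pg_dump -Fd -j 8 -Z 5 -f /backups/db_$(date +%Y%m%d) database_name
cat > /var/lib/node_exporter/textfile/pg_dump.prom <<EOF
pg_dump_last_duration_seconds $(( $(date +%s) - start ))
pg_dump_last_run_timestamp_seconds $(date +%s)
EOF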

For mission-critical systems where downtime is unacceptable:

# Physical backup with minimal locking
pg_basebackup -D /backup/directory -Ft -z -P

This method copies the whole cluster at the file level and can be even faster, but the trade-off is on the restore side: you recover an entire data directory rather than individual tables or databases, and the backup is tied to the same major version.
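
PostgreSQL 13 also writes a backup_manifest with pg_basebackup, so a physical backup can be verified before it's relied on; this sketch assumes a plain-format (-Fp) backup directory, since tar archives would need to be unpacked first:

# Take a plain-format physical backup and verify it against its manifest
pg_basebackup -D /backup/plain_directory -Fp -P
pg_verifybackup /backup/plain_directory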


Digging deeper into the same scenario (a 30GB+ database where the default pg_dump -Fc run takes roughly 70 minutes), here are further proven techniques that combine client flags, server settings, and storage layout:

# Parallel dump with 8 worker processes
pg_dump -Fd -j 8 -f /backup/dir database_name

# Combine with compression settings
pg_dump -Fd -j 8 -Z 5 -f /backup/dir database_name

# For maximum speed (no compression)
pg_dump -Fd -j 8 -Z 0 -f /backup/dir database_name

Server-side parameters are worth reviewing too, with one caveat: pg_dump is a read-only client, so these settings do little for the dump itself and mostly pay off during the matching parallel pg_restore (index rebuilds in particular). Note that max_worker_processes and wal_level take effect only after a server restart, not a reload:

# In postgresql.conf
maintenance_work_mem = 1GB               # faster index builds during restore
max_worker_processes = 8                 # restart required
max_parallel_maintenance_workers = 4     # parallel CREATE INDEX during restore
wal_level = minimal                      # less WAL during bulk loads, but disables replication; restart required
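
For completeness, the restore that actually benefits from those settings is the parallel pg_restore of the directory dump (database name and path are placeholders):

# Parallel restore; the index builds here are what use maintenance_work_mem
# and max_parallel_maintenance_workers
pg_restore -Fd -j 8 -d database_name /backup/dir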

Storage configuration significantly impacts dump performance:

  • Write the dump itself to a tmpfs/RAM disk (for example, point -f/--file at a directory under /dev/shm) when there is enough memory to hold it; see the mount sketch after this list
  • Ensure dump directory is on fast SSD storage
  • Disable filesystem journaling for dump location
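
One way to get the RAM-disk part is a dedicated tmpfs mount sized for the uncompressed dump; the mount point and the 40G size below are assumptions to adjust for the host's memory.

# Create a RAM-backed mount large enough for the uncompressed dump
mkdir -p /mnt/pgdump_ram
mount -t tmpfs -o size=40G tmpfs /mnt/pgdump_ram

# Dump into it at full speed, then move the result to durable storage
pg_dump -Fd -j 8 -Z 0 -f /mnt/pgdump_ram/db_$(date +%Y%m%d) database_name
mv /mnt/pgdump_ram/db_$(date +%Y%m%d) /backups/

With the pieces above combined, the measured results for the 30GB database were:
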
Method                    Time     File size
Default (-Fc)             70 min   30 GB
Parallel (-Fd -j 8)       25 min   31 GB
Parallel + compression    18 min   12 GB
RAM disk + parallel       5 min    31 GB

Here's a complete bash script implementing these optimizations:

#!/bin/bash
set -euo pipefail

# Stage the dump on a RAM disk (tmpfs) for speed, then move it to durable storage
# (assumes /dev/shm has room for the compressed dump; fall back to local SSD otherwise)
STAGE_DIR=/dev/shm/pg_dump_stage
DEST_DIR=/backups/db_$(date +%Y%m%d)
mkdir -p "$STAGE_DIR"

# Temporarily raise maintenance_work_mem (this mainly helps the eventual parallel
# restore; the dump itself is read-only). max_worker_processes is a restart-only
# parameter, so changing it here with a reload would have no effect.
psql -c "ALTER SYSTEM SET maintenance_work_mem TO '1GB';"
psql -c "SELECT pg_reload_conf();"

# Run the optimized parallel dump with light compression
start_time=$(date +%s)
pg_dump -Fd -j 8 -Z 1 -f "$STAGE_DIR/dump" database_name
end_time=$(date +%s)

# Move the finished dump off the RAM disk
mv "$STAGE_DIR/dump" "$DEST_DIR"

echo "Backup completed in $((end_time - start_time)) seconds"

# Restore the original setting
psql -c "ALTER SYSTEM RESET maintenance_work_mem;"
psql -c "SELECT pg_reload_conf();"
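
To run this nightly, a cron entry is enough; the install path, schedule, and log file below are placeholders:

# /etc/cron.d/pg-nightly-dump: run the optimized dump at 02:00 as the postgres user
0 2 * * * postgres /usr/local/bin/pg_fast_dump.sh >> /var/log/pg_fast_dump.log 2>&1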