When using rsync across gigabit Ethernet (1000BASE-T), the compression decision depends on three key factors:
- CPU performance of both source and destination machines
- File compressibility ratio (text vs binary)
- Actual available network bandwidth
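A quick way to sanity-check the first two factors before picking a flag (a rough sketch; `pv` and the sample path are assumptions, not part of any rsync workflow):
# Compressibility: compare compressed size against the original
$ gzip -c /source/sample.dat | wc -c
$ stat -c %s /source/sample.dat
# Single-core compression throughput (requires pv installed)
$ pv /source/sample.dat | gzip -6 > /dev/null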
Consider these real-world test cases on a Debian 11 system with Intel Xeon E5-2650:
# Scenario 1: Uncompressed transfer
$ time rsync -a /source/large_dataset/ dest-server:/dest/
# Scenario 2: Compressed transfer
$ time rsync -az /source/large_dataset/ dest-server:/dest/
For a 10GB mixed-content dataset (source code + binaries):
| Method | Transfer Size | Time (min:sec) | CPU Utilization |
|---|---|---|---|
| Uncompressed | 10.0 GB | 2:45 | 15% |
| Compressed (-z) | 7.2 GB | 3:10 | 85% |
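Reading the numbers: the uncompressed run moved 10.0 GB in 2:45, about 61 MB/s, while the compressed run delivered the same 10.0 GB of data in 3:10, about 53 MB/s effective (only ~38 MB/s on the wire), so the 85%-busy CPU, not the gigabit link, set the pace.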
Compression becomes beneficial when:
1. Transferring highly compressible files (e.g., text logs):
$ rsync -az /var/log/ remote-backup:/logs/
2. Network bandwidth drops below 600 Mbps (a probe-and-decide sketch follows this list):
# Check with: $ iperf3 -c dest-server
3. Using slower storage (HDD-to-HDD transfers), where disk I/O rather than the CPU is the limiting factor
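Here is a minimal wrapper building on that iperf3 check (a sketch, not a hardened script: dest-server, the paths, and the 600 Mbps cutoff come from the examples above, and jq is assumed to be installed):
#!/bin/bash
# Measure available bandwidth to dest-server, then enable -z only on slow links.
bps=$(iperf3 -c dest-server -J | jq '.end.sum_received.bits_per_second | floor')
if [ "$bps" -lt 600000000 ]; then
    rsync -az /source/large_dataset/ dest-server:/dest/   # < 600 Mbps: compress
else
    rsync -a /source/large_dataset/ dest-server:/dest/    # fast link: skip -z
fi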
For most gigabit LAN transfers:
# Better alternative to -z on a gigabit LAN (skip compression and the delta algorithm):
$ rsync -a --no-compress --inplace --whole-file /src/ dest:/target/
# For WAN or limited bandwidth:
$ rsync -azP --bwlimit=50M user@remote:/backup/ /local/
For power users wanting granular control:
# Custom compression level (1=fast, 9=max):
$ rsync -az --compress-level=3 /src/ dest:/
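If both ends run rsync 3.2 or newer, you can also swap the algorithm itself; zstd typically compresses much faster than the default zlib at a similar ratio:
# Pick the algorithm (rsync >= 3.2 on both sides; implies --compress):
$ rsync -a --compress-choice=zstd --compress-level=3 /src/ dest:/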
# Combine with per-directory parallel transfers (GNU parallel):
$ parallel -j 4 rsync -a {} user@remote:/ ::: dir1 dir2 dir3 dir4
When working with rsync over gigabit Ethernet (1000BASE-T), the -z compression flag poses a genuine trade-off: beneficial for WAN transfers, it can actually degrade performance on a high-speed LAN. Let's examine the key factors:
The decision hinges on your hardware capabilities:
- Modern CPUs (Intel i5/i7/i9, Ryzen 5/7/9): Can handle compression at 2-5 Gbps speeds
- Older CPUs (Pre-2015): May bottleneck at 500 Mbps-1 Gbps compression throughput
- SSD vs HDD: Storage I/O becomes a factor when dealing with large file sets
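A crude way to see which side of that line your hardware falls on (sketch only; the sample path and /dev/sda are placeholders for your own data and disk):
# Single-core zlib-style throughput; gigabit payload tops out around 118 MB/s,
# so anything slower than that means -z throttles the transfer:
$ time gzip -6 -c /source/sample.tar > /dev/null
# Sequential read speed of the source disk (run as root):
$ hdparm -t /dev/sda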
Here's benchmark data from my lab environment (Ryzen 7 5800X, WD Red NAS):
# Uncompressed transfer
$ rsync -avh /source/ user@dest:/target/
Speed: 112 MB/s (896 Mbps)
# Compressed transfer
$ rsync -avhz /source/ user@dest:/target/
Speed: 86 MB/s (688 Mbps)
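To read whole-transfer throughput like this yourself, rsync 3.1+ can print a single cumulative progress line instead of per-file noise:
$ rsync -avh --info=progress2 /source/ user@dest:/target/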
Exceptions where -z might be beneficial:
# Small files with high redundancy
$ rsync -avhz /var/log/ user@dest:/backup/logs/
# Deliberately rate-limited transfer, where the capped link, not the CPU,
# is the bottleneck (500 = 500 KiB/s unless a suffix like M is given)
$ rsync -avhz --bwlimit=500 /source/ user@remote:/target/
For most gigabit LAN scenarios, I recommend:
# Optimal flags for gigabit LAN
$ rsync -avh --no-compress --inplace --partial /source/ user@dest:/target/
# Alternative for high-performance networks
$ rsync -avh --no-compress --whole-file /source/ user@dest:/target/
For power users, consider these SSH optimizations:
$ rsync -e "ssh -T -c aes128-gcm@openssh.com -o Compression=no" \
-avh --no-compress /source/ user@dest:/target/
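The same settings can live in ~/.ssh/config so every rsync-over-ssh run picks them up (the Host alias "dest" is illustrative):
Host dest
    Ciphers aes128-gcm@openssh.com
    Compression no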
Remember to test with your specific workload, using the --progress flag (or --stats for end-of-run totals) to measure actual throughput.