When transferring 100GB+ datasets via rsync to SMB-mounted NAS shares, many admins report unexpectedly slow performance, with incremental backups often taking 12+ hours. The core issue is a mismatch between rsync's change-detection logic, which assumes cheap local metadata access and precise timestamps, and the behavior of CIFS/SMB mounts.
By default, rsync decides whether a file needs transferring with a quick check of size and modification time. Over SMB this misfires badly: CIFS rounds timestamps, so unchanged files get flagged and re-copied in full, and every metadata lookup is a network round trip:
# Dry run to see which files rsync currently thinks have changed
rsync -avn /source/path/ /mnt/nas/destination/
Relax the default comparison behavior with these flags:
rsync -av --size-only --modify-window=2 /source/ /mnt/nas/backup/
Key parameters:
• --size-only: Treat files whose sizes match as unchanged, ignoring timestamps entirely (and never checksumming); this avoids spurious re-copies at the cost of missing edits that leave the size identical
• --modify-window=2: When timestamps are compared (i.e. without --size-only), tolerate up to 2 seconds of skew to absorb SMB/FAT rounding of mtimes versus NTFS's 100ns precision; a quick way to compare the resulting transfer lists is sketched below
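To confirm these flags actually shrink the transfer list, a dry run with and without them can be compared. A minimal sketch, reusing the /source/ and /mnt/nas/backup/ paths from above; the -v line counts include a header and summary, so treat them as a rough proxy only:

# Rough comparison of how many files rsync would touch (paths as in the example above)
default_count=$(rsync -avn /source/ /mnt/nas/backup/ | wc -l)
tuned_count=$(rsync -avn --size-only --modify-window=2 /source/ /mnt/nas/backup/ | wc -l)
echo "default quick check: $default_count lines, tuned: $tuned_count lines"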
Tune both SMB mounts and rsync for better throughput:
# Mount with tuned SMB parameters
mount -t cifs //nas/share /mnt/backup -o username=user,password=pass,vers=3.0,cache=none,dir_mode=0755,file_mode=0644,nobrl
# Parallel per-file rsync jobs, 8 at a time
find /source -type f -print0 | xargs -0 -P8 -I{} rsync -a --relative --size-only --modify-window=2 {} /mnt/backup/
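The mount command above passes the password on the command line, which leaves it visible in shell history and ps output. A common alternative, sketched here with a hypothetical /root/.smbcred path, is a root-owned credentials file referenced from /etc/fstab so the share always mounts with the same options:

# /root/.smbcred, owned by root, chmod 600:
#   username=user
#   password=pass
# /etc/fstab entry (single line), reusing the options from above:
#   //nas/share  /mnt/backup  cifs  credentials=/root/.smbcred,vers=3.0,cache=none,dir_mode=0755,file_mode=0644,nobrl  0  0
mount /mnt/backup    # now picks the settings up from fstab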
For mission-critical backups, consider these architecture changes:
# SSH-based rsync alternative; -z already compresses, so ssh -C is omitted to avoid double compression
rsync -avz -e 'ssh -c aes128-gcm@openssh.com' /source/ user@nas:/backup/
# Tar over netcat for the initial bulk transfer (unencrypted, trusted LAN only)
tar cf - /source | pv | nc -q 1 nas 1234
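The netcat pipe needs a listener started on the NAS first, and once the bulk copy lands it is worth following up with an ordinary rsync pass to repair ownership/timestamps and catch stragglers. A sketch, assuming traditional netcat syntax on the NAS and that the tar stream is unpacked under /backup (so /source ends up at /backup/source, i.e. /mnt/backup/source on the client mount; adjust paths to your layout):

# On the NAS, before starting the sender:
nc -l -p 1234 | tar xf - -C /backup
# Back on the client, preview with -n first, then rerun without it to apply differences
rsync -avn --size-only --modify-window=2 /source/ /mnt/backup/source/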
Use these tools to identify bottlenecks:
# Real-time SMB session and lock view (run on the Samba server, not the client)
smbstatus -L
# Network throughput analysis on the client
nethogs eth0
iftop -i eth0
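If neither nethogs nor iftop is installed, the kernel's per-interface byte counters give a rough throughput log with nothing but the shell. A minimal sketch, assuming the interface is eth0:

# Print approximate MB/s in and out on eth0 once per second (Ctrl-C to stop)
IF=eth0
while true; do
  rx1=$(cat /sys/class/net/$IF/statistics/rx_bytes)
  tx1=$(cat /sys/class/net/$IF/statistics/tx_bytes)
  sleep 1
  rx2=$(cat /sys/class/net/$IF/statistics/rx_bytes)
  tx2=$(cat /sys/class/net/$IF/statistics/tx_bytes)
  awk -v rx=$((rx2-rx1)) -v tx=$((tx2-tx1)) \
      'BEGIN { printf "rx %.1f MB/s  tx %.1f MB/s\n", rx/1e6, tx/1e6 }'
done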
A properly tuned rsync-over-SMB setup typically achieves:
- 10-20x faster incremental backups (minutes instead of hours)
- 50-70% reduction in CPU usage
- Consistent transfer rates matching network bandwidth
When running rsync over SMB mounts for large datasets (100GB+), many developers encounter unexpectedly slow transfer speeds. The fundamental issue stems from how rsync interacts with network-mounted filesystems versus local storage.
# Default rsync behavior that causes slowdown:
rsync -avz /local/path/ /mnt/smb/nas_backup/
The -a (archive) flag does not actually enable --checksum; rsync's default quick check compares only file size and modification time. The slowdown comes from elsewhere: because the SMB mount looks like a local path, rsync implies --whole-file and re-copies in full every file it thinks has changed; CIFS timestamp rounding makes many unchanged files look changed; and -z wastes CPU compressing data that the local receiver immediately decompresses on the same host.
Drop the compression, compare by size only, and make the transfer mode explicit:
rsync -rlptgoD --no-whole-file --size-only /source/ /mnt/smb/destination/
Key flags breakdown:
- --size-only: Treat files whose sizes match as unchanged; no mtime or checksum comparison at all
- -rlptgoD: The individual options behind -a (recursive, links, perms, times, group, owner, devices/specials) spelled out; functionally equivalent to -a, but explicit about what is enabled
- --no-whole-file: Force the delta-transfer algorithm even though the destination looks like a local path; this only pays off when files change partially and reads from the NAS are reasonably fast, otherwise whole-file copies (-W) can be quicker, so it is worth timing both, as in the sketch after this list
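Whether --no-whole-file actually wins depends on how much of each file changes and how quickly the NAS serves reads, so it is worth timing both modes on a representative, already mostly-synced subset. A rough sketch, with hypothetical subset paths:

# For a fair comparison, restore the destination subset to the same state
# before each run (each pass modifies it). -W forces whole-file copies.
time rsync -rlptgoD -W              --size-only /source/subset/ /mnt/smb/destination/subset/
time rsync -rlptgoD --no-whole-file --size-only /source/subset/ /mnt/smb/destination/subset/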
# Mount with optimal SMB settings:
sudo mount -t cifs //nas/share /mnt/smb -o username=user,password=pass,vers=3.0,cache=strict,rsize=65536,wsize=65536
Important mount options:
- vers=3.0: Forces the SMB 3.0 dialect rather than letting the client fall back to older, slower ones
- cache=strict: The standard caching mode; caches file data while following CIFS consistency (oplock/lease) rules, unlike cache=none which disables client-side caching entirely
- rsize/wsize=65536: Pins the read/write chunk size; recent kernels negotiate larger I/O sizes for SMB3 on their own, so only set these if measurement shows a benefit (the effective values can be checked as shown below)
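Whatever you request, the kernel reports the options it actually negotiated in /proc/mounts (and, with the cifs module loaded, more session detail in /proc/fs/cifs/DebugData), which is a quick sanity check that vers, rsize and wsize took effect:

# Effective cifs mount options, including vers=, rsize= and wsize=
grep cifs /proc/mounts
# Per-session detail (dialect, server, credits), if exposed by the cifs module
cat /proc/fs/cifs/DebugData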
For NAS devices supporting SSH, this often outperforms SMB:
rsync -azP --delete -e "ssh -T -c aes128-gcm@openssh.com -o Compression=no -x" \
/local/data/ user@nas:/backup/path/
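For scheduled runs, this SSH variant needs non-interactive authentication. One common setup, sketched here with a hypothetical dedicated key at ~/.ssh/nas_backup, is a key installed on the NAS and referenced explicitly in the rsync -e string:

# One-time setup of a dedicated, passphrase-less backup key (hypothetical path)
ssh-keygen -t ed25519 -f ~/.ssh/nas_backup -N ''
ssh-copy-id -i ~/.ssh/nas_backup.pub user@nas
# Subsequent unattended runs reference the key explicitly
rsync -azP --delete -e "ssh -i ~/.ssh/nas_backup -T -o Compression=no -x" \
    /local/data/ user@nas:/backup/path/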
Use iotop and nethogs to identify bottlenecks:
# Run these in separate terminals; nethogs keeps running, so don't chain other commands after it
nethogs eth0
watch -n 1 'iotop -o -b -n 1'
For massive directory structures, split the workload:
find /source -mindepth 1 -maxdepth 1 -type d -print0 | xargs -0 -P8 -I{} rsync -Raq {} /mnt/smb/dest/
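Splitting by top-level directory skips files that sit directly in /source and leaves no record of jobs that failed mid-run, so finish with a single ordinary pass; once the bulk data is across it is cheap, because almost everything is skipped as unchanged:

# Final catch-all pass: picks up top-level files and anything the parallel jobs missed
# (no trailing slash on the source, so the layout matches the --relative paths used above)
rsync -aq --size-only /source /mnt/smb/dest/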