When working between two Linux machines on a Gigabit network, we'd theoretically expect rsync transfers to approach 125 MB/s (1000 Mbit/s divided by 8 bits per byte). Yet in practice many developers see far lower speeds - in this case just 22 MB/s. Let's analyze why.
# Benchmarking disk writes on target (laptop)
# conv=fdatasync flushes to disk so the page cache doesn't inflate the result
dd if=/dev/zero of=test.img bs=1M count=1024 conv=fdatasync
# Result: 58.8 MB/s - this is our first bottleneck
The laptop's full-disk encrypted XFS filesystem manages writes of only 58.8 MB/s. The source workstation's RAID-5 array with AES-NI acceleration reads at 256 MB/s, so the target is clearly the slower side - though note that 58.8 MB/s still doesn't fully explain the observed 22 MB/s, so something else must be compounding it.
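To separate the page cache from the encryption layer, it helps to rerun the test with direct I/O; a quick sketch (the /dev/mapper/cryptroot name is an assumption - substitute your own mapping):
# Bypass the page cache so only the crypt + disk stack is measured
dd if=/dev/zero of=test.img bs=1M count=1024 oflag=direct
# For comparison, decrypted sequential reads through the mapper device
# (device name is hypothetical)
sudo dd if=/dev/mapper/cryptroot of=/dev/null bs=1M count=1024 iflag=direct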
# Testing raw network throughput
iperf -c destination_host
# Shows 939 Mbit/s (~117 MB/s) - not the limiting factor
The network itself performs well at 939 Mbit/s (117 MB/s), eliminating network hardware as the primary constraint.
The combination of software RAID-5, LVM, and AES-CBC encryption creates significant CPU overhead. While AES-NI helps on the source (FX-8150 CPU), the target laptop likely lacks hardware acceleration.
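You can check both suspicions directly on the laptop. Reasonably recent versions of cryptsetup ship a built-in benchmark that measures cipher throughput in memory, with no disk involved:
# Is the AES-NI instruction set present at all?
grep -o -w -m1 aes /proc/cpuinfo || echo "no AES-NI"
# Raw dm-crypt throughput for the cipher in use
cryptsetup benchmark --cipher aes-cbc-essiv:sha256 --key-size 256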
Try these rsync flags to improve performance:
# --bwlimit is in KiB/s, so 50000 is roughly a 50 MB/s cap to avoid
#   overwhelming the target
# --compress-level=1 keeps compression's CPU cost minimal
# --no-whole-file forces the delta-transfer algorithm (only pays off when
#   files already exist on the target; for fresh copies see --whole-file below)
# --inplace reduces disk writes by updating files in place
rsync -avz --progress \
    --bwlimit=50000 \
    --compress-level=1 \
    --no-whole-file \
    --inplace \
    source/ destination/
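Because so much of the overhead here is encryption, rsync's daemon mode is also worth knowing about: it moves data over plain TCP with no ssh in the path. A minimal sketch - the module name backup and its path are assumptions, and the stream is unencrypted, so use this only on a trusted LAN:
# /etc/rsyncd.conf on the target (daemon listens on TCP 873, usually as root)
[backup]
    path = /srv/backup
    read only = false

# On the target:
rsync --daemon
# On the source - the rsync:// URL means no ssh is involved:
rsync -av --progress source/ rsync://target_host/backup/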
For encrypted XFS on the target:
# Mount with noatime and a larger in-memory log buffer
# (some XFS options, logbsize among them, may only take effect on a
# fresh mount rather than a remount)
mount -o remount,noatime,logbsize=256k /path/to/mount
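To confirm the options actually took effect (a remount can keep the old values for options it cannot change):
# The active options, including logbsize, show up in the fourth field
grep ' /path/to/mount ' /proc/mounts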
When dealing with many small files:
# Use tar over netcat for better performance
tar cf - . | pv | nc -l -p 1234   # On source (start this listener first)
nc source_host 1234 | tar xf -    # On destination
Consider these factors when diagnosing slow rsync transfers:
- Target disk write speeds
- Encryption overhead
- File size distribution (profiled in the sketch after this list)
- Network contention
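A quick way to profile the file-size distribution (the path is a placeholder; GNU find assumed):
# Count files below/above 1 MiB - lots of small files favor tar-over-nc
find /source/dir -type f -printf '%s\n' | \
    awk '{ if ($1 < 1048576) small++; else big++ }
         END { printf "small (<1 MiB): %d  large: %d\n", small, big }'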
When transferring files between two Linux machines (both connected via Gigabit Ethernet) using rsync, throughput measured only 22 MB/s - significantly below the theoretical 125 MB/s limit. Here's the system configuration that was analyzed:
# Laptop (receiving end):
Storage: XFS filesystem with full disk encryption
Cipher: aes-cbc-essiv:sha256 (256-bit)
Write speed: 58.8 MB/s (measured via dd)
# Workstation (source):
Storage: Software RAID-5 (5 HDDs) with LVM and same encryption
CPU: FX-8150 with AES-NI support
Read speed: 256 MB/s (cold cache)
Network: 939 Mbit/s throughput (iperf test)
The encryption overhead appears to be the primary constraint. While the workstation benefits from AES-NI acceleration, the laptop might lack hardware acceleration. The RAID-5 read performance (256 MB/s) and network bandwidth (939 Mbit/s ≈ 117 MB/s) both exceed the observed transfer rate.
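One quick way to gauge how much hardware acceleration matters on each machine is openssl's speed test - the plain form uses a software AES implementation, while -evp takes the AES-NI path when the CPU supports it:
# Software-only AES
openssl speed aes-256-cbc
# Hardware-accelerated path (uses AES-NI when available)
openssl speed -evp aes-256-cbc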
Option 1: Rsync with Compression
rsync -avz --progress source/ destination/
The -z flag compresses data before sending, so less traffic passes through the network and through ssh's encryption. Compression itself costs CPU on both ends, though, so on an already CPU-bound transfer it can make things worse - test against a representative sample first.
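Whether -z pays off depends on how compressible the data actually is; a quick sample check (sample.file is a placeholder):
# A ratio close to 1.0 means compression just burns CPU for nothing
orig=$(stat -c%s sample.file)
comp=$(gzip -1 -c sample.file | wc -c)
echo "ratio: $(echo "scale=2; $comp / $orig" | bc)"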
Option 2: Alternative Transfer Methods
# Using tar over ssh (sometimes faster with small files)
tar -cf - /source | ssh user@host "tar -xf - -C /destination"
# Using nc (netcat) for raw speed
# On receiver:
nc -l 1234 | dd of=output.file bs=1M
# On sender (dd's default 512-byte blocks would throttle throughput):
dd if=input.file bs=1M | nc receiver.ip 1234
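netcat does no integrity checking of its own, so verify the copy afterwards:
# Run on both ends and compare the hashes
sha256sum input.file     # on the sender
sha256sum output.file    # on the receiver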
Consider these mount options for encrypted filesystems:
# In /etc/fstab
# xfs matches the target here; add discard only for SSDs (and note that
# dm-crypt needs allow-discards for it to pass through)
/dev/mapper/cryptroot  /  xfs  defaults,noatime  0 1
For the RAID-5 array, ensure an appropriate chunk size (typically 256K or 512K for large sequential files):
# WARNING: --create builds a NEW array and destroys any existing data
mdadm --create /dev/md0 --level=5 --raid-devices=5 --chunk=256 /dev/sd[b-f]
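To inspect an existing array instead of recreating it, and to align the filesystem to the stripe (with 5 devices in RAID-5, 4 carry data), a sketch:
# Chunk size and health of the existing array
mdadm --detail /dev/md0 | grep 'Chunk Size'
cat /proc/mdstat
# WARNING: mkfs destroys existing data - only for a fresh filesystem.
# su = chunk size, sw = data disks (5 devices - 1 parity = 4)
mkfs.xfs -d su=256k,sw=4 /dev/md0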
These rsync parameters can help optimize throughput:
# --bwlimit=0       no rate limit (the default)
# --size-only       skip files whose size matches - fast, but can miss
#                   changed files of identical size
# --no-compress     spend no CPU on compression
# --block-size      larger delta-transfer blocks; 128 KiB is the usual cap
# --whole-file      skip the delta algorithm entirely (best on fast LANs)
rsync -av --progress \
    --bwlimit=0 \
    --size-only \
    --no-compress \
    --block-size=131072 \
    --whole-file \
    source/ destination/
For transfers over ssh, a cheaper cipher can help; note that -e only matters when one end of the transfer is remote:
rsync -av -e 'ssh -c aes128-gcm@openssh.com' source/ user@destination_host:destination/
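To see which ciphers your OpenSSH build actually offers, and to measure the raw ssh pipe speed independent of any disk (the host name is a placeholder):
# Ciphers supported by the local ssh client
ssh -Q cipher
# Push 1 GiB of zeroes through ssh and discard it; dd reports throughput
dd if=/dev/zero bs=1M count=1024 | \
    ssh -c aes128-gcm@openssh.com user@destination_host 'cat > /dev/null'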