Optimizing Rsync Performance: Solving Slow Transfer Rates in Linux-Windows File Copy Operations



When transferring large datasets (approaching 1TB) across a LAN between Linux and Windows 7 via mounted SMB shares, I observed significant performance differences between file copy methods. Benchmarking an 800MB test file revealed:

cp: 5 minutes 33 seconds
scp: 6 minutes 33 seconds
rsync: 21 minutes 51 seconds
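
For reference, the comparison can be reproduced with the shell's time builtin; the paths below are placeholders for the SMB mount point and are not from the original benchmark:

# Create an 800MB test file and time each copy method
dd if=/dev/urandom of=testfile bs=1M count=800
time cp testfile /mnt/winshare/
time scp testfile user@winhost:/share/    # assumes an SSH server on the Windows box
time rsync -av testfile /mnt/winshare/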

Several factors contribute to rsync's poor performance in this scenario:

  • SMB protocol overhead when the Windows share is mounted on Linux
  • rsync's delta-transfer algorithm performing unnecessary calculations
  • Filesystem metadata operations across heterogeneous systems
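
For context, these numbers assume the Windows share is mounted over CIFS on the Linux side, along these lines (server name, share, and mount point are placeholders):

sudo mkdir -p /mnt/winshare
sudo mount -t cifs //winhost/share /mnt/winshare \
    -o username=winuser,vers=2.1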

Instead of using SMB-mounted shares, implementing rsync in daemon mode provides significant performance improvements. Here's how to set it up:

# On Linux (server side):
sudo apt-get install rsync
sudo tee /etc/rsyncd.conf >/dev/null <<'EOF'
[shared]
    path = /path/to/share
    read only = no
    uid = nobody
    gid = nogroup
EOF
sudo systemctl start rsync    # Debian/Ubuntu name the daemon unit "rsync", not "rsyncd"
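
A quick way to confirm the daemon is up is to list its modules from any client; "linux-server" is a placeholder hostname:

# List exported modules; "shared" should appear in the output
rsync rsync://linux-server/

# Trial pull into a scratch directory
rsync -av rsync://linux-server/shared/ /tmp/rsync-test/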

For Windows clients, DeltaCopy provides an effective rsync implementation:

  1. Install DeltaCopy (Windows rsync client)
  2. Configure the service with proper credentials
  3. Add firewall exceptions for rsync port (873)
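
The firewall exception in step 3 can be added from an elevated command prompt; the rule name is arbitrary:

netsh advfirewall firewall add rule name="rsync" dir=in action=allow protocol=TCP localport=873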

When using DeltaCopy on Windows, non-ASCII filenames can be mangled by the bundled Cygwin DLL; a patched build fixes UTF-8 handling:

# Replace cygwin1.dll with the UTF-8-patched version
# (stop the DeltaCopy service before overwriting the DLL)
wget http://www.okisoft.co.jp/esc/utf8-cygwin/cygwin1.dll
mv cygwin1.dll "C:\Program Files\DeltaCopy\cygwin1.dll"

Security considerations:

  • Always password-protect shares
  • Consider disabling automatic service startup
  • Monitor firewall rules for rsync port (873)
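
For the first point, the rsync daemon supports per-module authentication; a minimal sketch, with a placeholder user and password:

# Append to the module definition in /etc/rsyncd.conf:
#     auth users = backupuser
#     secrets file = /etc/rsyncd.secrets

# Create the secrets file (format is user:password); the daemon
# refuses a world-readable secrets file, so restrict it:
echo "backupuser:s3cret" | sudo tee /etc/rsyncd.secrets
sudo chmod 600 /etc/rsyncd.secrets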

After implementing the rsync daemon solution:

Method               Transfer Rate   Improvement
SMB-mounted rsync    2 MB/s          -
Rsync daemon         8 MB/s          4x faster

For time-sensitive transfers where checksum verification isn't critical, --size-only skips any file whose size already matches on the destination:

rsync -rtv --size-only --progress /source/ user@host:/destination/

For maximum throughput on a reliable network, skip compression and the delta algorithm; neither helps on a fast LAN, and both cost CPU:

rsync -av --whole-file --progress /source/ user@host:/destination/

So why does rsync lag so far behind cp and scp in the same environment? The performance gap demands investigation, especially when working with Windows shares mounted on Linux systems.

The primary culprit in this scenario is the interaction between rsync's delta-transfer algorithm and the SMB/CIFS protocol stack. When rsync operates over a mounted Windows share, several inefficiencies emerge:

  • Excessive metadata operations due to rsync's file comparison checks
  • Protocol translation overhead between rsync's operations and SMB
  • Suboptimal packet sizing for the network conditions
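
One way to make this overhead visible is to count syscalls during a dry run against the mounted share (assuming strace is installed; paths are placeholders):

# Tally the syscalls rsync makes while it only compares files (-n = dry run)
strace -c -f rsync -avn /source/ /mnt/winshare/ 2>&1 | tail -20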

The breakthrough, covered in the setup above, is switching from SMB-mounted transfers to rsync daemon mode. With the daemon listening on TCP 873, the Windows client connects over the native rsync protocol instead of SMB:

# On Windows (using DeltaCopy or similar):
rsync -av --progress rsync://linux-server/shared /destination/path
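
Note that Cygwin-based clients such as DeltaCopy address local Windows drives through /cygdrive paths, so a pull into D:\data would look like this (hostname and module as above):

rsync -av --progress rsync://linux-server/shared/ /cygdrive/d/data/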

Beyond switching protocols, a few flag choices can further improve large transfers:

# Suggested flags for large transfers over the daemon:
rsync -av --progress --whole-file --partial --inplace \
    rsync://server/share /local/path

# On a local network, leave compression off; it costs CPU and rarely
# helps (omit -z, or pass --compress-level=0 explicitly)

# To keep rsync from saturating a shared link, cap the rate (in KB/s):
#   --bwlimit=9000

# For reliable resume after an interrupted transfer:
#   --partial --inplace

When implementing this solution on Windows systems:

  1. Ensure proper firewall configuration for rsync port (873 by default)
  2. For DeltaCopy users, replace the Cygwin DLL with a UTF-8 compatible version
  3. Configure service credentials properly in Windows Services manager

Implementing these changes boosted my transfer speeds from roughly 2 MB/s to 8 MB/s, a 4x improvement that made the TB-scale migration feasible.