Performance Benchmark: Why Does rsync Over SSH Outperform Direct NFSv3 Writes by 2X?


During routine data migration tests between Ubuntu 10.04 (client) and 9.10 (server), I observed a consistent 2X performance difference when comparing:

rsync -av dir /nfs_mount/TEST/        # Slow (direct NFS)
rsync -av dir nfs_server-eth1:/nfs_mount/TEST/  # Fast (SSH)

Our test environment featured:

  • Jumbo frames (MTU 9000) enabled end-to-end (see the verification sketch below)
  • File size distribution: 1KB-15MB compressed files
  • NFSv3 mounts with large read/write block sizes (rsize/wsize=1MB)
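
To rule out a smaller MTU somewhere on the path, jumbo frames can be verified end-to-end from the client before blaming NFS. This is a minimal check; the eth1 interface name is only inferred from the nfs_server-eth1 alias, so adjust it to the client's actual interface:

# Confirm the local MTU and that 9000-byte frames traverse the path unfragmented
# (8972 = 9000 - 20-byte IP header - 8-byte ICMP header)
ip link show eth1 | grep mtu
ping -M do -s 8972 -c 3 nfs_server-eth1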

Additional tests revealed:

# Raw copy operations
cp -r dir /nfs_mount/TEST/            # 1.5X slower than SSH
scp -r dir nfs_server-eth1:/nfs_mount/TEST/ # Fastest overall

# Archive transfer tests
rsync -av dir.tar.gz [...]            # Similar to non-archive
scp -r dir.tar.gz [...]               # No compression benefit

The performance gap persists regardless of:

  • File packaging (individual files vs tar archives)
  • Jumbo frame enablement
  • Transfer tool (rsync/cp/scp)

Key NFS configuration (server exports and client mount options):

# /etc/exports
/nfs_mount *(rw,no_root_squash,no_subtree_check,sync)

# Client mount options
nfs rw,nodev,relatime,vers=3,rsize=1048576,
wsize=1048576,namlen=255,hard,proto=tcp,
timeo=600,retrans=2,sec=sys,mountvers=3,
mountproto=tcp
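
Since the server may silently clamp rsize/wsize, it is worth confirming what the kernel actually negotiated rather than trusting the fstab entry; nfsstat (from the nfs-common package) or /proc/mounts shows the live values:

# Show the options actually negotiated for each NFS mount
nfsstat -m
# Or, without the nfs-common tools:
grep nfs /proc/mounts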

Effective parameter tuning for NFS:

# Try async mode on server
/nfs_mount *(rw,no_root_squash,no_subtree_check,async)

# Client mount adjustments
mount -t nfs -o rsize=32768,wsize=32768,\
noatime,nodiratime,vers=3,tcp,nolock \
nfs_server:/export /mnt/nfs

For maximum throughput:

# Parallel rsync wrapper
parallel -j 8 rsync -a {} nfs_server:dest/ ::: dir/*

# Tar pipe over SSH
tar cf - dir | ssh nfs_server "tar xf - -C /nfs_mount/TEST"

When benchmarking file operations between two Ubuntu systems (client: 10.04, server: 9.10) with identical network configurations (MTU 9000, jumbo frames enabled), we observed a consistent 2x performance difference:

# Slow path (direct NFS)
rsync -av dir /nfs_mount/TEST/  → X MBps
cp -r dir /nfs_mount/TEST/      → 1.1X MBps

# Fast path (SSH tunnel)
rsync -av dir nfs_server-eth1:/nfs_mount/TEST/ → 2X MBps
scp -r dir nfs_server-eth1:/nfs_mount/TEST/    → 2.2X MBps

NFSv3 introduces several layers of overhead that become apparent with small-to-medium files (1KB-15MB in our test):

  • Metadata Operations: Each file create/open/close in NFS requires separate RPC calls (the nfsstat sketch below shows how to count them)
  • Synchronous Writing: The sync export option forces the server to commit every write to disk before replying
  • Write Pipelining: rsync and scp stream data over a single TCP connection, while NFSv3 splits the transfer into individual write RPCs that each wait for a reply
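
A crude way to see the metadata cost is to snapshot the client-side RPC counters around a copy and compare them; this sketch assumes the nfs-common tools are installed and reuses the test directory from above:

# Snapshot NFS client RPC counters, run the copy, then compare the counts
nfsstat -c > /tmp/rpc_before
cp -r dir /nfs_mount/TEST/
nfsstat -c > /tmp/rpc_after
diff /tmp/rpc_before /tmp/rpc_after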

Here are concrete adjustments to improve NFS performance:

1. Server-Side /etc/exports Tweaks

# Replace 'sync' with 'async' for better throughput; note that async lets the
# server acknowledge writes before they reach disk, so a crash can lose recent data
/nfs_mount *(rw,no_root_squash,no_subtree_check,async)
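
After editing /etc/exports, the change has to be applied to the running server; a restart is not required:

# Re-read /etc/exports and apply the new options to active exports
exportfs -ra
# Confirm which options are now in effect
exportfs -v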

2. Client Mount Options Optimization

# Current suboptimal options
mount -t nfs -o rw,nodev,relatime,vers=3,rsize=1048576,wsize=1048576...

# Improved version
mount -t nfs -o rw,nodev,noatime,vers=3,rsize=65536,wsize=65536,async,tcp,nolock \
  nfs_server:/nfs_mount /nfs_mount

3. Alternative Protocol Stack

For time-sensitive operations, bypass the NFS client entirely and let the server write the files locally over SSH:

# Batch small files first
tar cf - dir | ssh nfs_server-eth1 "tar xf - -C /nfs_mount/TEST"

# Parallel transfer alternative
rsync -av --rsh="ssh -T -c aes128-ctr" dir/ nfs_server-eth1:/nfs_mount/TEST/
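
If the SSH path is suspected of being CPU-bound (cipher overhead) rather than network-bound, the raw pipe can be measured independently of any filesystem; a small sketch using the same cipher and host alias as above:

# Measure raw SSH throughput; dd reports MB/s on completion
dd if=/dev/zero bs=1M count=1024 | ssh -c aes128-ctr nfs_server-eth1 'cat > /dev/null'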

The underlying filesystem affects NFS performance. Benchmark different mkfs options:

# XFS often outperforms ext4 for NFS
mkfs.xfs -f -l size=128m -d agcount=32 /dev/sdX
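
It also helps to baseline the server's local disk before tuning NFS, since no export can be faster than the filesystem underneath it. A minimal sketch, run directly on the server, assuming /nfs_mount is the exported directory:

# Local write baseline on the server, bypassing the network entirely
dd if=/dev/zero of=/nfs_mount/local_baseline bs=1M count=1024 conv=fsync
rm /nfs_mount/local_baseline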

To validate changes, use this benchmarking script:

#!/bin/bash
# Write into a unique directory on the NFS mount
TESTDIR=/nfs_mount/benchmark_$(date +%s)
mkdir -p "$TESTDIR"

# Test sequential writes (conv=fsync flushes to disk so the page cache doesn't
# inflate the result; status=progress is not supported by the dd shipped with 10.04)
dd if=/dev/zero of="$TESTDIR/sequential" bs=1M count=1024 conv=fsync

# Test metadata-heavy operations
time (for i in {1..1000}; do touch "$TESTDIR/file$i"; done)

# Cleanup
rm -rf "$TESTDIR"
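
For a like-for-like comparison of the metadata-heavy case, the same loop can be executed on the server over SSH, turning 1000 NFS round trips into a single remote command; the ssh_meta directory name here is just an example:

# Same 1000-file test, executed server-side via SSH
time ssh nfs_server-eth1 'mkdir -p /nfs_mount/TEST/ssh_meta; for i in $(seq 1 1000); do touch /nfs_mount/TEST/ssh_meta/file$i; done; rm -rf /nfs_mount/TEST/ssh_meta'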