Optimizing NFS Performance for Small File Transfers: Benchmarking and Tuning Guide


When transferring data over NFS, small files present challenges that large sequential transfers do not. The dd benchmark shows decent throughput (58MB/s) for large blocks, yet copying 300MB worth of small files takes around 10 minutes. This indicates that metadata operations and protocol overhead, not raw bandwidth, dominate the transfer time.

The primary performance killers for small files in NFS are:

  • Metadata operations (stat, lookup, getattr)
  • Default sync writes behavior
  • Network round-trip latency
  • Filesystem journaling overhead
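
A quick way to confirm that metadata, not bandwidth, is the bottleneck is to time a metadata-only pass against a bandwidth-only read on the same mount. The paths below are placeholders; substitute your own NFS mount point and a large test file.

# Metadata only: walk and stat every file over NFS
time find /mnt/nfs/webroot -type f -exec stat {} + > /dev/null
# Bandwidth only: stream one large file
time dd if=/mnt/nfs/bigfile of=/dev/null bs=1M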

Here are concrete tuning parameters to try on your Openfiler server's /etc/exports:

/export/path client.ip(rw,async,no_subtree_check,no_wdelay,no_root_squash)
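
After editing /etc/exports, re-export and confirm the new options are active. Keep in mind that async trades safety for speed: if the server crashes, writes the client believed were committed can be lost.

exportfs -ra   # re-read /etc/exports without restarting the NFS service
exportfs -v    # verify the export now lists async and no_wdelay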

For the client side, these mount options can help:

mount -t nfs -o rsize=32768,wsize=32768,nosuid,nodev,noatime,nodiratime server:/export /mnt
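
The server can silently negotiate rsize/wsize down, so check what the client actually got after mounting:

nfsstat -m              # effective options for each NFS mount
grep nfs /proc/mounts   # same information straight from the kernel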

When NFS proves inadequate for small files:

# Consider tar archiving for transfers
tar cf - /source/dir | ssh user@dest "cd /target && tar xf -"

# Or use rsync, which streams everything over a single connection
rsync -aH --inplace --no-whole-file /source/ user@dest:/target/
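
If the destination is the NFS mount itself, another option worth testing is to write a single archive through the mount and unpack it locally on the server, turning thousands of small writes into one sequential stream. The paths below are illustrative.

# One large sequential write over NFS instead of ~10,000 small ones
tar cf /mnt/nfs/bundle.tar -C /source/dir .
# Then, on the server:
cd /export/path && tar xf bundle.tar && rm bundle.tar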

Since you're using ext3, these tweaks may help:

tune2fs -o journal_data_writeback /dev/sdX
echo 50 > /proc/sys/vm/dirty_ratio
echo 10 > /proc/sys/vm/dirty_background_ratio
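
The echo commands above are lost on reboot, and ext3 will not switch journalling modes on a live remount, so persist the sysctls and remount the filesystem. The values mirror the ones above; adjust them for your memory size.

# /etc/sysctl.conf
vm.dirty_ratio = 50
vm.dirty_background_ratio = 10

# Apply without rebooting
sysctl -p

# Unmount and mount again (or reboot) so journal_data_writeback takes effect
umount /export/path && mount /export/path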

Use these commands to diagnose performance:

# Server side
nfsstat -o all

# Client side
mountstats /mnt          # or: cat /proc/self/mountstats
iotop -oPa
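
To see which operations dominate, snapshot the client-side counters around a test copy and compare; the copy paths are placeholders.

nfsstat -c > /tmp/nfs-before
cp -r /source/dir /mnt/copy-test
nfsstat -c > /tmp/nfs-after
diff /tmp/nfs-before /tmp/nfs-after   # look for big jumps in lookup, getattr and write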

For production systems handling many small files, consider testing NFSv4 (if available) or evaluating alternative protocols such as SMB3 or WebDAV, which may handle metadata operations more efficiently for your specific workload.


When working with NFS shares, many developers encounter a frustrating scenario: while large file transfers perform reasonably well (as shown by our 58MB/s dd benchmark), operations involving numerous small files (like PHP scripts or JPG images) crawl at unacceptable speeds. Let's dive into practical solutions.

NFS wasn't originally designed for optimal small file performance. Each file operation requires:

  • Metadata lookups
  • Permission checks
  • Network round trips

For a directory with 10,000 small files, this overhead becomes significant.
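
A rough back-of-the-envelope illustrates why: measure the round-trip time to the server and multiply by the number of synchronous operations per file. The 3 ops/file figure is an assumption (lookup, getattr, write); sync writes and directory updates add more.

ping -c 20 nfs-server | tail -1
# At ~0.5 ms average RTT: 10,000 files x 3 round trips x 0.5 ms = ~15 s
# of pure network latency before any file data moves or the disk seeks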

First, check your current NFS mount options:

# Check current mount options
mount | grep nfs

Recommended mount options for small files:

# /etc/fstab example
nfs-server:/share  /mnt/nfs  nfs  rw,async,noatime,nodiratime,rsize=32768,wsize=32768,hard,intr  0  0

On your Openfiler server, consider these adjustments:

# Increase NFS server threads (the default is often only 8)
echo 20 > /proc/fs/nfsd/threads    # or: rpc.nfsd 20

# Adjust TCP parameters
echo "4194304" > /proc/sys/net/core/rmem_default
echo "4194304" > /proc/sys/net/core/wmem_default

For one-time transfers, consider these alternatives:

# Using tar over NFS
tar cf - /source/dir | (cd /dest/dir && tar xvf -)

# Using rsync over ssh (one stream instead of per-file NFS round trips)
rsync -a --inplace --no-whole-file /source/ user@host:/dest/
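
rsync also has a literal batch mode, useful when the same tree must be pushed to more than one identical destination: record the delta once, then replay it without rescanning. The file names below are hypothetical.

# Record the changes once
rsync -a --write-batch=/tmp/php-batch /source/ /mnt/nfs/dest/
# Replay the recorded batch against another identical copy of the tree
rsync -a --read-batch=/tmp/php-batch /other/dest/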

If you control the server storage (illustrative commands follow this list):

  • Consider XFS instead of ext3 for better small file handling
  • Adjust inode size if reformatting is an option
  • Evaluate RAID stripe size alignment
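
All three require reformatting, so they only make sense during a rebuild. Purely as an illustration (device name, inode size, and stripe geometry are hypothetical; derive su/sw from your real RAID layout):

# XFS aligned to a hypothetical RAID array with a 64k stripe unit across 4 data disks
mkfs.xfs -d su=64k,sw=4 /dev/sdX

# Or stay on ext3 with larger inodes so more metadata fits in the inode itself
mkfs.ext3 -I 256 /dev/sdX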

Use these tools to identify bottlenecks:

# Monitor NFS operations
nfsstat -c
nfsstat -s

# Disk I/O monitoring
iostat -x 1
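
Run these on the server while a test copy is in flight; capturing a minute of output makes before/after comparisons easier. High %util combined with low throughput columns usually means seek-bound small I/O rather than a bandwidth limit.

# Capture one minute of extended disk stats during the copy
iostat -x 1 60 > /tmp/iostat-during-copy.txt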

Remember that optimal settings depend on your specific workload. Test changes methodically and monitor the impact.