In NFS version 3 (NFSv3), the default rsize (read size) and wsize (write size) values are typically 32KB (32768 bytes) on Linux when not set explicitly in the mount options, though modern clients and servers negotiate the largest size both support, up to 1MB (1048576 bytes). These values represent the maximum amount of data that can be transferred in a single read or write operation between the NFS client and server.
To check the actual values being used in your current NFS mount, run:

```
cat /proc/mounts | grep nfs
```
Or for more detailed NFS-specific information:

```
nfsstat -m
```
For read-heavy workloads, you might want to test various rsize values. Keep in mind that the client negotiates these sizes with the server at mount time, so requesting more than the server supports silently yields a smaller value; always confirm the effective sizes in /proc/mounts. Here's how to mount with custom sizes:

```
mount -t nfs -o rsize=65536,wsize=65536 {server_ip}:/home/{server_user}/{server_path} /home/{client_user}/{client_path}
```
Or in /etc/fstab:

```
{server_ip}:/home/{server_user}/{server_path} /home/{client_user}/{client_path} nfs rsize=65536,wsize=65536 0 0
```
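After editing /etc/fstab you can apply and verify the entry without rebooting; a minimal sketch, using the same placeholder paths as above:

```
# Mount everything in /etc/fstab that is not mounted yet
mount -a

# Confirm the negotiated rsize/wsize match what was requested
nfsstat -m | grep -E 'rsize|wsize'
```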
Typical values to benchmark for read-heavy operations:
- 8192 (8KB) - Minimum useful value
- 32768 (32KB) - Default
- 65536 (64KB) - Common optimization
- 131072 (128KB) - For high bandwidth networks
- 262144 (256KB) - Upper limit on many older implementations; modern Linux supports up to 1048576 (1MB)
Use tools like dd or iozone to measure performance. Example read test:

```
dd if=/home/{client_user}/{client_path}/testfile of=/dev/null bs=1M count=1000
```
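Client-side page caching can make repeated dd runs look unrealistically fast. Dropping the caches between runs (requires root) forces the reads back over the wire; a minimal sketch:

```
# Flush dirty pages, then drop the page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches

# Re-run the read test against the NFS mount
dd if=/home/{client_user}/{client_path}/testfile of=/dev/null bs=1M count=1000
```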
For more comprehensive testing with iozone:

```
iozone -a -i 0 -i 1 -s 1G -r 64k -f /mnt/nfs/testfile
```

Here -i 0 selects the write test (needed to create the file), -i 1 the read test, -s sets the file size, and -r the record size.
Remember that optimal values depend on:
- Network MTU (1500 bytes standard Ethernet)
- TCP window scaling and socket buffer limits (see the sysctl sketch after this list)
- Server and client hardware capabilities
- Filesystem block sizes on both ends
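On fast or high-latency links, the default TCP socket buffer limits can cap throughput before rsize/wsize ever matter. A hedged sketch of the usual knobs; the 16MB values are illustrative starting points, not recommendations:

```
# Maximum socket buffer sizes the kernel will allow (bytes)
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216

# min / default / max auto-tuning range for TCP buffers
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

# Window scaling is on by default on modern kernels; just verify it
sysctl net.ipv4.tcp_window_scaling
```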
If you experience timeouts or errors with large values, try adjusting these additional parameters:

```
mount -t nfs -o rsize=131072,wsize=131072,timeo=600,retrans=2 {server_ip}:/path /mnt
```

Note that timeo is expressed in tenths of a second (600 = 60 seconds), and retrans sets how many times a request is retried before the client reports "server not responding".
In NFSv3 implementations, the default read (rsize) and write (wsize) transfer sizes typically vary across operating systems and NFS client implementations:
Common defaults across platforms:
- Linux kernels: usually 32KB (32768 bytes)
- FreeBSD: 8KB (8192 bytes)
- Solaris: 32KB default
- AIX: 40KB (40960 bytes)
To verify your current NFS mount parameters:
```
# Check mounted NFS parameters
cat /proc/mounts | grep nfs

# Alternative method using nfsstat
nfsstat -m

# Sample output (truncated):
# 172.16.1.100:/export/data on /mnt/data type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.1.100,mountvers=3,mountport=892,mountproto=tcp,local_lock=none,addr=172.16.1.100)
```
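If you only want the numbers, a small extraction (a sketch, assuming GNU grep and a single NFS mount) saves reading the full option string:

```
# Pull just the rsize/wsize values out of /proc/mounts
grep ' nfs ' /proc/mounts | grep -o 'rsize=[0-9]*\|wsize=[0-9]*'
```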
For read-heavy workloads, consider these benchmarking approaches:
```
# Test sequential read performance with various rsize values.
# Note: rsize/wsize cannot be changed with -o remount on Linux;
# the share must be unmounted and mounted again for each size.
for size in 4096 8192 16384 32768 65536 131072; do
    umount /mnt/nfs
    mount -t nfs -o rsize=$size,wsize=$size ${server_ip}:/path /mnt/nfs
    echo "Testing rsize/wsize=$size"
    dd if=/mnt/nfs/largefile of=/dev/null bs=1M count=1024
done

# Measure IOPS with fio
cat > fio_test.ini <<EOF
[global]
ioengine=libaio
direct=1
runtime=60
size=1G

[readtest]
rw=randread
bs=4k
directory=/mnt/nfs
numjobs=4
EOF
fio fio_test.ini
```
Key factors affecting optimal transfer sizes:
- Network MTU (1500 bytes standard, 9000 for jumbo frames)
- Server and client hardware capabilities
- NFS server configuration, e.g. the nfsd thread count (see the sketch after this list)
- Workload characteristics (sequential vs random access)
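A starved nfsd thread pool on the server can masquerade as a client-side tuning problem. A minimal sketch for checking and raising the thread count on a Linux server; 16 is illustrative, not a recommendation:

```
# Show the current number of nfsd threads (run on the server)
cat /proc/fs/nfsd/threads

# Raise the thread count for the running server
rpc.nfsd 16

# To persist on distributions using /etc/nfs.conf:
#   [nfsd]
#   threads=16
```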
```
# /etc/fstab entry with optimized values for a read-heavy workload
# (intr is accepted for compatibility but has been a no-op since Linux 2.6.25)
{server_ip}:/home/{server_user}/{server_path} /home/{client_user}/{client_path} nfs rw,rsize=65536,wsize=32768,hard,intr,tcp,timeo=600 0 0

# Mount command alternatives
mount -t nfs -o rsize=131072,wsize=65536 ${server_ip}:/path /mnt/nfs
```
Essential tools for ongoing optimization:
```
# Monitor NFS statistics (similar to iostat, but for NFS mounts)
nfsiostat 5

# Client-side statistics
nfsstat -c

# Kernel parameters worth tuning (sysctl)
vm.dirty_ratio = 40
vm.dirty_background_ratio = 10
sunrpc.tcp_slot_table_entries = 128
```
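To make those sysctl settings survive a reboot, place them in a file under /etc/sysctl.d/ (the file name here is just an example) and reload:

```
cat > /etc/sysctl.d/90-nfs-tuning.conf <<EOF
vm.dirty_ratio = 40
vm.dirty_background_ratio = 10
sunrpc.tcp_slot_table_entries = 128
EOF

# Apply all sysctl configuration files immediately
sysctl --system
```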
When experimenting with large transfer sizes:
# Check for "fragmenting" messages in dmesg dmesg | grep -i nfs # Verify network MTU consistency ping -M do -s 8972 ${server_ip} # For 9000 byte MTU testing # Important: Always test with actual workload patterns rather than synthetic benchmarks alone