Optimizing Samba Performance When Sharing NFS Mount Points in Linux Environments


Here's a common infrastructure challenge many sysadmins face: You need to maintain existing Samba access points while migrating backend storage to NFS. The setup looks like this:

Client Machines → Samba Share → NFS Mount → Actual Storage

The slowdown typically occurs due to:

  • Double protocol translation (SMB ↔ NFS)
  • Default mount options causing excessive metadata operations
  • Buffer size mismatches between protocols
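
Before changing anything, it can help to put a rough number on the translation overhead by reading the same large file once straight from the NFS mount on the Samba host and once over SMB from a client. The share name, user, and test file below are placeholders that match the example configuration later in this article:

# On the Samba host: read straight from the NFS mount
dd if=/mnt/data/testfile of=/dev/null bs=1M status=progress

# From a client: fetch the same file over SMB
smbclient //fileserver/shared-data -U someuser -c 'get testfile /dev/null'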

First, optimize your NFS mount options in /etc/fstab:

fileserver:/export/data  /mnt/data  nfs  rw,hard,intr,rsize=65536,wsize=65536,timeo=600,retrans=2,noatime,nodiratime  0 0

Key parameters:

  • rsize/wsize: Request 64K transfer sizes instead of the old 4K-8K defaults (recent clients may already negotiate larger values on their own)
  • noatime/nodiratime: Reduce metadata updates
  • hard: Retry indefinitely instead of returning I/O errors to smbd, which is important for Samba stability
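
Changed rsize/wsize values normally do not take effect on a simple remount, so the export has to be unmounted and mounted again, assuming the share can be taken offline briefly; nfsstat then shows what the client actually negotiated:

# Redo the mount with the new fstab options (stop smbd first if the share is busy)
umount /mnt/data
mount /mnt/data

# Verify the options the kernel actually negotiated
nfsstat -m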

In your smb.conf:

[global]
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536
strict locking = no
use sendfile = yes
aio read size = 1
aio write size = 1

[shared-data]
path = /mnt/data
read only = no
force create mode = 0660
force directory mode = 0770
veto files = /.snapshot/.windows/.mac/.tmp/
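
After editing smb.conf, validate the syntax and reload the running daemons rather than restarting them:

# Check smb.conf for syntax errors and print the effective settings
testparm -s

# Tell all running Samba daemons to re-read their configuration
smbcontrol all reload-config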

Add these to /etc/sysctl.conf:

# NFS client settings
sunrpc.tcp_slot_table_entries = 64
sunrpc.udp_slot_table_entries = 64

# Network buffers
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
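
Apply the values without a reboot and spot-check one of them. Note that on some distributions the sunrpc slot table entries are module parameters rather than sysctls, so they may have to be set under /etc/modprobe.d/ instead:

# Load the new values from /etc/sysctl.conf
sysctl -p

# Confirm a value took effect
sysctl net.core.rmem_max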

If performance remains unsatisfactory:

  1. Direct NFS Access: Configure clients to mount NFS directly where possible
  2. Async Export: Use the 'async' option in /etc/exports when losing the most recently written data on a server crash is an acceptable risk
  3. Local Caching: Add an FS-Cache layer (cachefilesd) in front of the NFS mount on the Samba host (a sketch follows this list)
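
A minimal FS-Cache sketch for option 3, assuming the cachefilesd package is available on the Samba host and its defaults are acceptable:

# Install and start the cache daemon (Debian/Ubuntu shown; adjust for your distro,
# and on Debian-based systems you may also need RUN=yes in /etc/default/cachefilesd)
apt-get install cachefilesd
systemctl enable --now cachefilesd

# Add fsc to the NFS options in /etc/fstab, then redo the mount so reads are cached locally
umount /mnt/data
mount /mnt/data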

Essential commands for troubleshooting:

# NFS statistics
nfsstat -c

# Samba connections
smbstatus

# Real-time network monitoring
iftop -nN -i eth0

# Disk I/O
iotop -oPa

Many of us have walked into similar situations - critical business data being stored on USB drives shared via Samba without proper backups. In my case, compliance-sensitive data was at risk, prompting immediate action to implement a more robust storage solution.

The migration path I implemented consisted of:

# Data migration command
rsync -avz /mnt/usb_drive/ /mnt/raid_array/new_storage/

# NFS export configuration (/etc/exports)
/mnt/raid_array/new_storage 192.168.1.0/24(rw,sync,no_subtree_check)

# Client mount configuration (/etc/fstab)
fileserver:/mnt/raid_array/new_storage /mnt/usb_drive nfs defaults 0 0
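
With the export defined, it still has to be published and mounted on the clients; reusing the old /mnt/usb_drive path keeps the existing Samba share definitions working unchanged:

# Publish the new or changed entries in /etc/exports
exportfs -ra

# Confirm the export is visible from a client
showmount -e fileserver

# Mount it using the fstab entry above
mount /mnt/usb_drive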

The initial tests showed promise, but real-world usage revealed severe performance degradation when files were accessed through Samba shares backed by the NFS mount. Typical symptoms included:

  • File open operations taking 30+ seconds
  • Transfer speeds below 1MB/s
  • Frequent timeouts during file operations

Several factors contribute to this performance issue:

# Common misconfigurations to check
grep -E 'socket options|strict locking' /etc/samba/smb.conf
nfsstat -o all -c  # Check NFS client statistics

The primary culprits are typically:

  1. Double network traversal (Samba → NFS → Storage)
  2. Inconsistent caching between protocols (see the example after this list)
  3. Default mount options not optimized for Samba sharing
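
One way to see items 2 and 3 in action is to watch the metadata counters while a client browses the share; a single directory listing through Samba that generates thousands of getattr/access calls points at attribute caching. If slightly stale metadata is acceptable, the attribute cache timeout can be lengthened with actimeo, shown here purely as an illustration on top of the earlier fstab example:

# Watch metadata-heavy NFS client operations
nfsstat -c | grep -A 2 getattr

# Illustrative /etc/fstab entry with a 60-second attribute cache
fileserver:/export/data  /mnt/data  nfs  rw,hard,actimeo=60,rsize=65536,wsize=65536,noatime,nodiratime  0 0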

After extensive testing, these settings provided the best balance of performance and stability:

# Optimized NFS export settings (/etc/exports)
/mnt/raid_array/new_storage 192.168.1.0/24(rw,sync,no_subtree_check,no_wdelay)

# Improved Samba configuration (/etc/samba/smb.conf)
[shared_data]
   path = /mnt/nfs_mount
   read only = no
   oplocks = yes
   kernel oplocks = no
   strict locking = no
   socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
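   # NOTE: 'write cache size' was removed in Samba 4.12; drop the next line on current releases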
   write cache size = 2097152
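
To confirm the changes actually help, a crude before/after throughput test from a Linux client is usually enough; the server name, share, mount point, and user below are placeholders:

# Mount the Samba share on a test client
mkdir -p /mnt/smbtest
mount -t cifs //fileserver/shared_data /mnt/smbtest -o username=testuser

# Write, then read back, a 1 GB test file and note the reported rates
# (umount/mount between the write and the read to avoid measuring the client cache)
dd if=/dev/zero of=/mnt/smbtest/ddtest bs=1M count=1024 conv=fsync
dd if=/mnt/smbtest/ddtest of=/dev/null bs=1M
rm /mnt/smbtest/ddtest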

For environments where this architecture still underperforms:

  • Move the data onto local disks on the Samba server so smbd serves it directly instead of re-exporting NFS
  • Implement a caching layer (e.g., cachefilesd)
  • Evaluate GlusterFS or Ceph for unified protocol support

Essential commands for ongoing performance management:

# Inspect locked/open files on the Samba shares
smbstatus -L

# Check NFS operations
nfsiostat 2 5  # 2 second intervals, 5 times

# Filesystem monitoring
iotop -oPa

Remember that environmental factors like network infrastructure and client configurations play significant roles in overall performance.
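
When the raw network itself is in doubt, a quick iperf3 run between a client and the Samba host rules bandwidth problems in or out before any further protocol tuning (iperf3 must be installed on both ends):

# On the Samba/NFS host
iperf3 -s

# On a client
iperf3 -c fileserver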