When managing multiple load-balanced web servers, efficient shared storage is crucial for handling user uploads and keeping content synchronized across nodes. The protocol choice significantly impacts performance, reliability, and maintenance overhead.
After extensive benchmarking across our production environments, these are the key observations:
# Sample benchmark results (throughput in MB/s)
Protocol | Seq Read | Rand Read | Small Files
------------|----------|-----------|------------
NFSv4.1 | 1120 | 890 | 420
SMB3.1.1 | 980 | 670 | 380
FUSE+SSHFS | 450 | 310 | 290
For pure Linux environments, NFSv4.1 consistently delivers superior performance:
# /etc/exports configuration example
/share 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
# Client mount options improving reliability
# (timeo is in tenths of a second, so timeo=300 = 30 s; intr has been a no-op since kernel 2.6.25 but is harmless)
mount -t nfs4 -o hard,intr,timeo=300,retrans=3 server:/share /mnt/share
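To survive reboots, the same mount can live in /etc/fstab. A minimal sketch, reusing the hypothetical server:/share export and /mnt/share mountpoint from above (intr is dropped since it is a no-op on modern kernels):
# /etc/fstab entry equivalent to the mount command above
server:/share /mnt/share nfs4 hard,timeo=300,retrans=3,_netdev 0 0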
While slightly slower in our benchmarks, SMB 3.1.1 offers better compatibility if Windows servers might join the environment later:
# smb.conf excerpt for performance tuning
[global]
server multi channel support = yes
socket options = TCP_NODELAY IPTOS_LOWDELAY
aio read size = 1
aio write size = 1
[share]
path = /srv/share
writable = yes
durable handles = yes
kernel share modes = no
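On the Linux web servers, the share can then be mounted with mount.cifs pinned to SMB 3.1.1. A sketch under a few assumptions: a hypothetical fileserver host, a credentials file at /etc/samba/web-creds, and the web server running as www-data:
# Client-side CIFS mount pinned to SMB 3.1.1
mount -t cifs //fileserver/share /mnt/share \
  -o vers=3.1.1,credentials=/etc/samba/web-creds,uid=www-data,gid=www-data,file_mode=0664,dir_mode=0775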
SSHFS can work for smaller deployments but shows limitations at scale:
# SSHFS mount with performance options
sshfs -o allow_other,reconnect,ServerAliveInterval=15,compression=no \
user@fileserver:/remote/path /local/mount
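For persistence, the equivalent /etc/fstab entry uses the fuse.sshfs type. A sketch with the same hypothetical user, host, and paths (the IdentityFile path is an assumption; point it at whichever key the mount should use):
# /etc/fstab entry for the SSHFS mount above
user@fileserver:/remote/path /local/mount fuse.sshfs allow_other,reconnect,ServerAliveInterval=15,IdentityFile=/root/.ssh/id_ed25519,_netdev 0 0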
Common issues and solutions:
- NFS stale file handles: typically caused by files being removed or replaced on the server while clients still hold handles; remount the affected export and avoid replacing open files in place (NFSv4 handles locking in-protocol, so nlockmgr only matters for NFSv3 clients)
- SMB disconnects: Adjust keepalive parameters on both the client and the server
- Permission problems: Use consistent UID/GID mapping across servers (see the idmapd.conf sketch below)
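For the UID/GID point, one approach is to keep the NFSv4 ID-mapping domain identical on every machine via /etc/idmapd.conf and verify the web user resolves to the same numeric IDs on every host. A minimal sketch (example.internal is a placeholder domain):
# /etc/idmapd.conf (Domain must match on the server and all clients)
[General]
Domain = example.internal
# Then confirm on each host that e.g. 'id www-data' reports identical uid/gid values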
Here's our Ansible playbook snippet for automated NFS setup:
- name: Configure NFS server
  hosts: fileservers
  tasks:
    - name: Install NFS server
      apt:
        name: nfs-kernel-server
        state: present
    - name: Create share directory
      file:
        path: /srv/nfs/share
        state: directory
        mode: '0775'
    - name: Configure exports
      lineinfile:
        path: /etc/exports
        line: "/srv/nfs/share {{ nfs_clients }}(rw,sync,no_subtree_check)"
        state: present
      notify: restart nfs
  handlers:
    - name: restart nfs
      service:
        name: nfs-kernel-server
        state: restarted
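A matching client-side play could mount the export on every web node. This is a sketch; it assumes the ansible.posix collection is installed and that a webservers group and an nfs_server variable exist in your inventory:
- name: Mount shared storage on web servers
  hosts: webservers
  tasks:
    - name: Mount the NFS share
      ansible.posix.mount:
        src: "{{ nfs_server }}:/srv/nfs/share"
        path: /mnt/share
        fstype: nfs4
        opts: hard,noatime,_netdev
        state: mounted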
From experience managing similar infrastructures, here's a condensed technical breakdown of the same setup:
# Simple NFS mount example (/etc/fstab entry):
nfs-server:/shared/path /local/mountpoint nfs defaults,hard,intr,noatime 0 0
Protocol | Latency | Throughput | Concurrency |
---|---|---|---|
NFSv4 | Low | High | Excellent |
SMB3 | Medium | High | Good |
FUSE+SFTP | High | Medium | Fair |
For web server environments handling user uploads, NFSv4 with these optimized settings performs best:
# /etc/exports configuration example:
/shared/path web1(rw,sync,no_subtree_check) web2(rw,sync,no_subtree_check)
# Recommended mount options:
mount -t nfs4 -o rw,hard,sync,noatime,nodiratime,vers=4.2 \
nfs-server:/shared/path /mnt/shared
- Use TCP instead of UDP for better reliability
- Set proper rsize/wsize values (typically 32768 or 65536); see the fstab sketch after this list
- Enable noatime/nodiratime to reduce metadata operations
- Consider async mounts if crash recovery is properly handled
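Combining those options, a persistent /etc/fstab entry might look like this (nfs-server and the paths reuse the hypothetical names from the mount example above):
# /etc/fstab entry with the tuning options applied
nfs-server:/shared/path /mnt/shared nfs4 rw,hard,proto=tcp,rsize=65536,wsize=65536,noatime,nodiratime,vers=4.2,_netdev 0 0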
Essential commands for monitoring NFS performance:
# Check NFS statistics
nfsstat -c
nfsstat -m
# Monitor network throughput
iftop -i eth0 -f "port 2049"
# Check disk I/O
iostat -x 2
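If the nfs-utils package on your distribution ships nfsiostat, it adds per-mount latency and IOPS breakdowns to the picture above (the mountpoint is the hypothetical one from earlier):
# Per-mount NFS latency and IOPS, refreshed every 2 seconds
nfsiostat 2 /mnt/shared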