When working with NFS-mounted directories, developers often need granular visibility into the I/O patterns of specific mounts. Standard system monitoring tools like iostat or vmstat provide system-wide statistics but lack mount-point specificity. Here's how to achieve precise NFS client-side monitoring.
The most effective approach combines kernel statistics with custom parsing:
# Method 1: Using /proc/self/mountstats
# Show the stats section for one mount (the "bytes:" counters appear ~20 lines in)
grep -A 30 "/your/mount/point" /proc/self/mountstats
# Sample output parsing:
nfs_io_monitor() {
    local mountpoint=$1
    while true; do
        # Locate the mount's section, then print the cumulative read/write
        # byte counters from its "bytes:" line (fields 2 and 3).
        awk -v mp="$mountpoint" '
            $0 ~ mp {found=1}
            found && $1 == "bytes:" {print "read_bytes=" $2, "write_bytes=" $3; found=0}
        ' /proc/self/mountstats
        sleep 1
    done
}
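With the function loaded, point it at a mount (the path here is illustrative):

nfs_io_monitor /mnt/nfs/share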
For more sophisticated monitoring, consider this Python script:
import time

def monitor_nfs_io(mount_path, interval=1):
    """Print read/write throughput for one NFS mount, sampled from /proc/self/mountstats."""
    prev_stats = {}
    while True:
        with open('/proc/self/mountstats') as f:
            found = False
            for line in f:
                # The "device ... mounted on <path> ..." line opens our mount's section.
                if mount_path in line:
                    found = True
                    continue
                # The "bytes:" line holds cumulative counters; fields 1 and 2
                # are normal read and write bytes (field 0 is the "bytes:" tag).
                if found and line.lstrip().startswith('bytes:'):
                    current = line.split()
                    read_bytes, write_bytes = int(current[1]), int(current[2])
                    if prev_stats:
                        read_diff = read_bytes - prev_stats['read']
                        write_diff = write_bytes - prev_stats['write']
                        print(f"Read: {read_diff/interval/1024/1024:.2f} MB/s")
                        print(f"Write: {write_diff/interval/1024/1024:.2f} MB/s")
                    prev_stats = {'read': read_bytes, 'write': write_bytes}
                    break
        time.sleep(interval)

if __name__ == '__main__':
    monitor_nfs_io('/mnt/nfs/share')  # example mount path; adjust to yours
For RHEL-based systems, the nfs-utils package provides nfsiostat:
# Install if needed
sudo yum install nfs-utils
# Usage for specific mount
nfsiostat 1 /mnt/nfs/share   # the interval comes before the mount point
When implementing continuous monitoring, account for:
- CPU and disk I/O overhead from frequent stats collection
- The impact of the sampling interval on accuracy (longer intervals smooth out bursts)
- The potential need for kernel-based tracing (eBPF) in high-performance environments; see the sketch after this list
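As an illustrative eBPF sketch, a bpftrace one-liner can aggregate per-process NFS read throughput by hooking the return of the kernel's nfs_file_read (the function name varies across kernel versions; list candidates with bpftrace -l 'kprobe:nfs_*'):

# Per-process NFS read bytes, printed once per second
# (the kretprobe target is an assumption; verify it exists on your kernel)
sudo bpftrace -e '
kretprobe:nfs_file_read /retval > 0/ { @read_bytes[comm] = sum(retval); }
interval:s:1 { print(@read_bytes); clear(@read_bytes); }'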
For long-term monitoring, consider integrating with tools like Telegraf, whose nfsclient input parses /proc/self/mountstats directly:

# Telegraf configuration example
[[inputs.nfsclient]]
  interval = "10s"
  ## Restrict collection to specific mounts
  include_mounts = ["/mnt/nfs/share"]
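Before deploying, a dry run confirms the input emits metrics (the config path is illustrative):

telegraf --config /etc/telegraf/telegraf.conf --test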
Unlike local filesystems, NFS introduces network overhead, so per-mount monitoring is especially important for performance tuning and troubleshooting. Several complementary approaches follow.
The nfsiostat tool (nfs-utils on RHEL-based systems, nfs-common on Debian/Ubuntu) provides NFS-specific statistics similar to iostat:
# Install if needed
sudo apt-get install nfs-common   # Debian/Ubuntu
sudo yum install nfs-utils        # RHEL/CentOS
# Basic usage: 1-second interval, 5 samples
nfsiostat 1 5 /mnt/nfs_mount
Output is split into read: and write: sections; key columns include:
- ops/s - operations per second
- kB/s - throughput
- retrans - retransmission count (a hint of network trouble)
- avg RTT (ms) - average round-trip time per operation
For more granular control, parse the per-op statistics in /proc/self/mountstats (these counters are cumulative, so computing rates requires differencing successive samples, as in the Python script above):
#!/bin/bash
NFS_MOUNT="/mnt/nfs_share"

get_nfs_io() {
    awk -v mount="$NFS_MOUNT" '
        # A "device ... mounted on <path> ..." line opens a mount section.
        /^device / {in_section = ($0 ~ mount)}
        # Per-op lines: op, ops count, transmissions, timeouts, bytes sent, bytes received, ...
        in_section && $1 == "READ:"  {read_ops = $2;  read_bytes = $6}
        in_section && $1 == "WRITE:" {write_ops = $2; write_bytes = $5}
        END {
            printf "READ:  %.2f MB total (%d ops)\n", read_bytes/1048576, read_ops
            printf "WRITE: %.2f MB total (%d ops)\n", write_bytes/1048576, write_ops
        }' /proc/self/mountstats
}

while true; do
    get_nfs_io
    sleep 1
done
For production environments, consider node_exporter's mountstats collector, which exposes per-mount NFS counters (disabled by default):
# Enable the mountstats collector in node_exporter
./node_exporter --collector.mountstats
# Sample PromQL queries
rate(node_mountstats_nfs_read_bytes_total[1m]) / 1024 / 1024    # MB/s read
rate(node_mountstats_nfs_write_bytes_total[1m]) / 1024 / 1024   # MB/s written
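A quick check that the counters are being exported (9100 is node_exporter's default port):

curl -s localhost:9100/metrics | grep node_mountstats_nfs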
For deep inspection, use a SystemTap script along these lines (requires the systemtap package plus matching kernel debuginfo/devel packages; the probe points come from the nfs tapset and vary by version, so check availability with stap -L 'nfs.*'):
global reads, writes

probe nfs.read {
    # Per-event trace (chatty); "size" is supplied by the nfs tapset
    printf("NFS READ: %d bytes\n", size)
    reads[execname()] <<< size
}
probe nfs.write {
    printf("NFS WRITE: %d bytes\n", size)
    writes[execname()] <<< size
}
probe timer.s(1) {
    # SystemTap arithmetic is integer-only, so report whole KB/s
    println("\nREAD statistics (KB/s):")
    foreach (proc in reads) {
        printf("%s: %d\n", proc, @sum(reads[proc]) / 1024)
    }
    println("\nWRITE statistics (KB/s):")
    foreach (proc in writes) {
        printf("%s: %d\n", proc, @sum(writes[proc]) / 1024)
    }
    delete reads
    delete writes
}
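Run it with stap (the filename is illustrative):

sudo stap nfs_io_by_process.stp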
- Monitor during peak and off-peak hours for baseline comparison
- Correlate NFS metrics with network statistics (iftop, nethogs); see the example below
- Watch for NFS version differences (v3 vs v4 performance characteristics vary)
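For example, to isolate NFS traffic on the wire (2049 is the standard NFS port; the interface name is illustrative):

sudo iftop -i eth0 -f "port 2049"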