Difference Between System-wide and Process-specific File Descriptors: Analyzing /proc/sys/fs/file-nr vs /proc/$pid/fd Counts


When monitoring file descriptor usage on Linux systems, two primary sources provide different perspectives:

# System-wide allocated descriptors
cat /proc/sys/fs/file-nr
12750   0   753795

This output shows three values: the number of currently allocated file handles, the number of allocated-but-unused handles (always 0 on kernels since 2.6, which free unused handles immediately), and the system-wide maximum, fs.file-max.
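Since the file is a single whitespace-separated line, the three fields can be unpacked into named variables, which keeps monitoring scripts readable (a minimal POSIX sh sketch):

```shell
#!/bin/sh
# file-nr holds: allocated handles, unused handles, system-wide maximum
read allocated unused max < /proc/sys/fs/file-nr
echo "in use: $((allocated - unused)) of $max"
```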

The alternative method counts open FDs per process:

for pid in $(lsof | awk 'NR > 1 { print $2 }' | sort -u); do
    find /proc/"$pid"/fd/ -type l 2>/dev/null
done | wc -l

This returns 11069 in our case, significantly lower than the system-wide count. Several factors explain the gap:

  • Kernel-internal FDs: The system count includes descriptors not visible in /proc
  • Closed-but-not-freed FDs: Some may be in transition states
  • Ephemeral processes: Short-lived processes might not be captured
  • Permission issues: The scanning process may lack access to some /proc entries
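The permission point is easy to quantify: every /proc/PID/fd directory the scanning user cannot read silently drops out of the per-process total. A quick probe (plain sh, no assumptions beyond /proc itself):

```shell
#!/bin/sh
# Count fd directories the current user cannot read; each one hides
# that process's descriptors from any per-process scan
unreadable=0
for fd_dir in /proc/[0-9]*/fd; do
    ls "$fd_dir" > /dev/null 2>&1 || unreadable=$((unreadable + 1))
done
echo "fd directories hidden from this user: $unreadable"
```

Run as root the count is typically 0; run unprivileged it can cover most of the system's processes.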

A more reliable approach queries /proc directly. Note that plain `ls -1` across multiple directories also prints a "dirname:" header and a blank line per directory, so count only the numeric FD entries:

ls -1 /proc/[0-9]*/fd/ 2>/dev/null | grep -c '^[0-9]'

Or alternatively:

find /proc -maxdepth 3 -path '/proc/[0-9]*/fd/*' 2>/dev/null | wc -l

For production monitoring, consider these approaches:

#!/bin/bash
# Track FD usage over time

sys_fds=$(awk '{print $1}' /proc/sys/fs/file-nr)
proc_fds=$(find /proc/[0-9]*/fd -type l 2>/dev/null | wc -l)

echo "$(date '+%Y-%m-%d %H:%M:%S'),$sys_fds,$proc_fds" >> fd_monitor.log

For detailed per-process breakdown:

ps -eo pid= | awk '{print $1}' | xargs -I {} sh -c 'printf "%s " "{}"; ls /proc/{}/fd 2>/dev/null | wc -l' | sort -k2 -n


When monitoring file descriptor usage in Linux systems, we encounter two primary data sources that often show discrepancies:

# System-wide FD allocation
cat /proc/sys/fs/file-nr
12750   0   753795

# Process-level FD count calculation
for pid in $(lsof | awk 'NR > 1 { print $2 }' | sort -u); do
  find /proc/"$pid"/fd/ -type l 2>/dev/null
done | wc -l
11069

The first value in file-nr (12750 in this case) represents the number of file handles the kernel currently has allocated; it is a point-in-time figure, not a cumulative total since boot. This includes:

  • Currently open files
  • Closed files where handles haven't been fully released
  • Kernel-internal file structures
  • Socket descriptors
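The socket contribution is visible per process: each fd symlink's target reveals what kind of object it references, so one process's descriptors can be classified. A sketch against the current shell via `$$` (targets look like `/dev/pts/0`, `socket:[1234]`, or `pipe:[5678]`):

```shell
#!/bin/sh
# Group this shell's descriptors by the kind of object they reference:
# splitting the target on ":" separates "socket"/"pipe" from plain paths
for fd in /proc/$$/fd/*; do
    readlink "$fd"
done | awk -F: '{print $1}' | sort | uniq -c
```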

The process-level enumeration (11069) typically shows lower numbers because:

# Kernel threads have no user-space descriptor table at all:
# /proc/PID/fd exists for them but is empty
ls /proc/2/fd 2>/dev/null | wc -l   # PID 2 is kthreadd on most systems

Key differences stem from:

  • Kernel threads maintaining private FDs
  • Ephemeral descriptors during process creation
  • Namespace isolation in container environments
  • Race conditions during process scanning
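The race-condition factor shows up if the same scan is simply run twice in a row: processes start and exit between the passes, so the two totals usually differ slightly.

```shell
#!/bin/sh
# Two back-to-back scans of the same /proc tree rarely agree exactly
first=$(find /proc/[0-9]*/fd -type l 2>/dev/null | wc -l)
second=$(find /proc/[0-9]*/fd -type l 2>/dev/null | wc -l)
echo "pass 1: $first, pass 2: $second, drift: $((first - second))"
```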

For precise monitoring, consider these alternatives:

# Method 1: handles actually in use = allocated minus unused
awk '{print $1-$2}' /proc/sys/fs/file-nr

# Method 2: comprehensive scan of every process's fd symlinks
find /proc/[0-9]*/fd -type l 2>/dev/null | wc -l

# Method 3: the same calculation via sysctl (output: "fs.file-nr = A U M")
sysctl fs.file-nr | awk '{print $3-$4}'

When facing FD leaks, this diagnostic helps identify the source:

# Compare system allocation vs process usage
sys_total=$(awk '{print $1}' /proc/sys/fs/file-nr)
proc_used=$(find /proc/[0-9]*/fd -type l 2>/dev/null | wc -l)
echo "Kernel overhead: $((sys_total - proc_used))"

High differences (>20%) may indicate:

  • Kernel resource leaks
  • Container runtime issues
  • Zombie processes holding FDs
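One leak signature can be checked without lsof: when an open file is deleted on disk, /proc appends "(deleted)" to the fd symlink's target, so GNU find's -lname test can locate every such descriptor (a sketch, assuming GNU find):

```shell
#!/bin/sh
# Descriptors whose backing file was unlinked but is still held open;
# each hit is a /proc/PID/fd/N path pointing at "... (deleted)"
find /proc/[0-9]*/fd -type l -lname '*(deleted)*' 2>/dev/null | head -10
```

A process that keeps accumulating such entries is holding deleted files open, a classic source of "disk full but du disagrees" alongside FD exhaustion.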

For production systems, implement this check:

#!/bin/bash
# Alert when system-wide FD allocation exceeds 80% of the hard limit
threshold=0.8
max_fd=$(cat /proc/sys/fs/file-max)
used_fd=$(awk '{print $1}' /proc/sys/fs/file-nr)
utilization=$(echo "$used_fd/$max_fd" | bc -l)

if (( $(echo "$utilization > $threshold" | bc -l) )); then
  echo "FD usage critical: $used_fd/$max_fd"
  # Add alerting logic here
fi
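To run the check on a schedule, a cron entry is the usual vehicle (the script path and log location below are hypothetical; adjust them to your setup):

```
# /etc/cron.d/fd-check (hypothetical): run the threshold check every 5 minutes
*/5 * * * * root /usr/local/bin/fd_check.sh >> /var/log/fd_check.log 2>&1
```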