When monitoring file descriptor usage against ulimit values, these two commands report fundamentally different things:
lsof -p 1234 | wc -l # Lists open files + additional resources
ls /proc/1234/fd | wc -l # Counts actual file descriptors
The /proc filesystem provides the most accurate count of allocated file descriptors:
- Counts every FD currently allocated in kernel space
- Includes deleted files still held open (common in temp files)
- Shows pipes/sockets with numeric names
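For instance, the deleted-file case from the list above is easy to reproduce in a shell (a minimal sketch; FD 9 and the /tmp path are arbitrary choices):
# Open FD 9 on a temp file, then delete the file while the descriptor stays open
exec 9> /tmp/fd_demo.$$
rm /tmp/fd_demo.$$
# The /proc entry is still there and its target now ends in "(deleted)"
ls -l /proc/$$/fd/9
# Close the descriptor again
exec 9>&-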
Example monitoring script:
#!/bin/bash
pid=$1
fd_count=$(ls -1 /proc/$pid/fd 2>/dev/null | wc -l)
# Read the target process's own soft limit; $(ulimit -n) would only report this shell's limit
fd_limit=$(awk '/Max open files/ {print $4}' /proc/$pid/limits)
echo "Process $pid has $fd_count FDs (soft limit: $fd_limit)"
lsof provides a superset, including:
- Memory-mapped files (counted but don't consume FDs)
- Current working directory
- Root directory
- Shared libraries
This explains why lsof counts are typically higher:
# Typical output comparison
$ ls /proc/4567/fd | wc -l
42
$ lsof -p 4567 | wc -l
57
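If you want lsof's detail but a number comparable to the /proc count, keep only rows whose FD column is numeric, since cwd, rtd, txt and mem entries are not real descriptors (a sketch that assumes lsof's default column layout, where FD is the fourth field):
# Skip the header line, keep rows with a numeric FD column, and count them
lsof -p 4567 | awk 'NR > 1 && $4 ~ /^[0-9]+/' | wc -l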
Use /proc when:
- Strictly monitoring ulimit consumption
- Debugging "Too many open files" errors (see the one-liner after these lists)
- Writing resource-constrained applications
Use lsof when:
- You need comprehensive file/network resource auditing
- Debugging file handle leaks (it shows filenames)
- Checking what files a process actually uses
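For the "Too many open files" case, a cheap way to watch the live count while you reproduce the error is to poll /proc (PID 1234 is a placeholder):
# Re-read the FD count every 2 seconds
watch -n 2 'ls /proc/1234/fd | wc -l'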
Here's how we track FD usage across multiple processes:
#!/bin/bash
# Find all PIDs for a specific user
pids=$(pgrep -u appuser)
for pid in $pids; do
    # Skip processes that exited between pgrep and now
    [ -r "/proc/$pid/limits" ] || continue
    # Field 4 of the "Max open files" line is the soft limit (what ulimit -n reports)
    fd_limit=$(awk '/Max open files/ {print $4}' /proc/$pid/limits)
    fd_count=$(ls -1 /proc/$pid/fd 2>/dev/null | wc -l)
    utilization=$((100 * fd_count / fd_limit))
    if [ "$utilization" -gt 90 ]; then
        echo "WARNING: PID $pid at ${utilization}% FD capacity ($fd_count/$fd_limit)"
    fi
done
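If you schedule the script above from cron, an entry like this works (the script path, log path, and interval are only examples):
# Check FD utilization for appuser's processes every 5 minutes
*/5 * * * * /usr/local/bin/fd_watch.sh >> /var/log/fd_watch.log 2>&1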
A few caveats to keep in mind:
- Deleted-but-still-open files appear in both views: /proc marks the symlink target with "(deleted)" and lsof marks the NAME column the same way
- Zombie processes have already torn down their FD table, so /proc/<pid>/fd reads as empty (see the check below)
- Network sockets appear differently in each view: /proc shows socket:[inode], while lsof resolves protocols, addresses, and ports
- In containers, results depend on which PID namespace and /proc mount the tools are run from
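To rule out the zombie case before trusting a low count, check the process state first (PID 1234 is a placeholder):
# "Z" means zombie; its FD table is already released, so /proc/1234/fd will be empty
ps -o stat= -p 1234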
When monitoring process file descriptor usage against ulimit -n, two common approaches emerge:
# Method 1: Using lsof
lsof -p <pid> | wc -l
# Method 2: Using procfs
ls /proc/<pid>/fd | wc -l
The fundamental distinction lies in what each method actually counts:
- /proc/<pid>/fd: Shows actual file descriptors allocated to the process (including sockets, pipes, and regular files)
- lsof: Lists all open files including memory-mapped files, shared libraries, and some non-file-descriptor resources
Here's a Python script demonstrating the difference:
import os
import subprocess
pid = os.getpid()
print(f"Testing with PID: {pid}")
# Create some file descriptors (left open on purpose so they show up in both counts)
files = [open(f"/tmp/test_{i}", "w") for i in range(3)]
# Compare counts; note that lsof's output starts with a header row, which wc -l also counts
lsof_count = int(subprocess.check_output(f"lsof -p {pid} | wc -l", shell=True))
fd_count = len(os.listdir(f"/proc/{pid}/fd"))
print(f"lsof count: {lsof_count}")
print(f"/proc count: {fd_count}")
# Clean up the demo files
for f in files:
    f.close()
    os.remove(f.name)
The /proc/<pid>/fd method directly reflects the count against the file descriptor limit (ulimit -n). lsof will typically show higher numbers because:
- It includes memory-mapped files (mmap regions)
- It counts shared libraries loaded by the process
- It may list the same underlying file more than once (e.g. a file that is both open and mapped)
- Its first output line is a column header, which wc -l also counts
For accurate monitoring, be aware of:
# Descriptors still open on files that have since been deleted (unlinked)
ls -l /proc/<pid>/fd | grep '(deleted)'
# Descriptors backed by devices such as terminals or /dev/null
ls -l /proc/<pid>/fd | grep ' /dev/'
Every entry under /proc/<pid>/fd, including both cases above, still counts against ulimit -n.
Here's a robust bash implementation:
#!/bin/bash
pid=$1
if [[ ! -d "/proc/$pid" ]]; then
    echo "Process $pid not found" >&2
    exit 1
fi
fd_count=$(ls /proc/$pid/fd 2>/dev/null | wc -l)
# Field 4 of the "Max open files" line is the soft limit
fd_limit=$(grep "Max open files" /proc/$pid/limits | awk '{print $4}')
echo "FD usage: $fd_count/$fd_limit"
echo "Breakdown:"
# Group by symlink target (the last field), skipping the "total 0" line, so repeats aggregate
ls -l /proc/$pid/fd 2>/dev/null | awk 'NR > 1 {print $NF}' | sort | uniq -c | sort -nr
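Saved as, say, fd_report.sh (the name is arbitrary), it can be run for every process owned by a user:
# Report FD usage for each of appuser's processes
for pid in $(pgrep -u appuser); do ./fd_report.sh "$pid"; done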
For high-frequency monitoring, /proc is significantly faster (a quick way to confirm this on your own system is shown below):
- lsof: runs an external program that walks large parts of /proc and resolves device, socket, and path names
- /proc: a direct kernel interface, read with a single directory listing
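A rough, unscientific comparison on your own machine (PID 1234 is a placeholder; absolute numbers will vary):
time lsof -p 1234 > /dev/null
time ls /proc/1234/fd > /dev/null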