When examining system resources, many Linux administrators encounter this puzzling scenario:
# lsof | wc -l
4399
# ulimit -n
1024
At first glance, this appears impossible - how can more files be open than the limit allows? Let's dive into the technical nuances behind this phenomenon.
The apparent contradiction stems from fundamental differences in what these commands measure (the snippet below compares them side by side):
- ulimit -n shows the per-process soft limit for the current shell session
- lsof lists files opened by all processes system-wide
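A quick side-by-side check makes the distinction concrete (counts are illustrative and will differ on your system):
# Files open in just the current shell - typically a handful
lsof -p $$ | wc -l
# Files open across every process on the system - often thousands
sudo lsof | wc -l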
The kernel maintains several layers of file descriptor limits:
# System-wide maximum (kernel parameter)
cat /proc/sys/fs/file-max
# Per-process hard limit (can be raised by root)
ulimit -Hn
# Per-process soft limit (user-configurable)
ulimit -Sn
A single process cannot exceed its soft limit (1024 in our example), but the entire system can have thousands of files open across all processes.
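You can verify that the limit travels with each process rather than the system by reading /proc; PID 1 is used here only as an arbitrary second process:
# This shell's limit
grep "Max open files" /proc/$$/limits
# Another process may carry a different limit entirely
sudo grep "Max open files" /proc/1/limits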
Consider an Apache server with 10 worker processes, each hitting its 1024 file limit:
# Each worker process
lsof -p [PID] | wc -l # Shows ~1024
# Entire system
lsof | wc -l # Shows ~10240
The aggregate count easily exceeds individual process limits while remaining within system boundaries.
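A rough sketch of that arithmetic, summing per-worker descriptor counts from /proc (the process name apache2 is an assumption; adjust it for your distribution):
total=0
for pid in $(pgrep apache2); do          # process name is an assumption
  n=$(ls /proc/"$pid"/fd 2>/dev/null | wc -l)
  echo "PID $pid: $n fds"
  total=$((total + n))
done
echo "Aggregate across workers: $total"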
Linux tracks open files through several mechanisms:
- File descriptor tables (per-process)
- System-wide open file table
- Inode cache
This multi-layer approach explains why system-wide counts differ from per-process limits: each process has its own descriptor table, but descriptors duplicated via fork() or dup() point at the same entry in the system-wide open file table.
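A minimal demonstration of that sharing: duplicating a descriptor creates two entries in the process's fd table backed by one open file description, visible as identical pos and flags values in /proc:
exec 3</etc/hostname                       # open a file on fd 3
exec 4<&3                                  # duplicate it onto fd 4
cat /proc/$$/fdinfo/3 /proc/$$/fdinfo/4    # identical pos/flags: one shared handle
exec 3<&- 4<&-                             # close both descriptors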
When debugging file descriptor issues:
# Check per-process usage
ls /proc/[PID]/fd | wc -l  # avoid ls -l here: its "total" line skews the count
# Identify heavy users
lsof | awk '{print $1}' | sort | uniq -c | sort -rn
# Monitor system-wide usage
watch -n 1 "cat /proc/sys/fs/file-nr"
Remember that shared libraries and memory-mapped files also appear in lsof output, contributing to the higher count.
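To see those non-descriptor entries, filter on lsof's FD column, where mem marks a memory-mapped file and txt the program text itself:
# Mapped libraries and the executable appear without a numeric fd
lsof -p $$ | awk '$4 == "mem" || $4 == "txt"'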
To adjust these limits system-wide:
# Temporary increase
sysctl -w fs.file-max=100000
# Permanent setting
echo "fs.file-max = 100000" >> /etc/sysctl.conf
# Process hard limit (requires root)
ulimit -Hn 4096
Always consider both application requirements and system capabilities when modifying these values.
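After changing anything, confirm the kernel and the shell actually picked up the new values:
# Verify the kernel setting
sysctl fs.file-max
# Verify the shell's hard and soft limits
ulimit -Hn; ulimit -Sn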
To put this in broader context: it's not uncommon for lsof to report significantly more open files than the ulimit -n value suggests should be possible. This apparent contradiction stems from several key Linux architectural concepts.
1. Per-process vs System-wide Limits:
ulimit -n shows the soft limit for the current shell process, while lsof displays all open files across the entire system.
# Check system-wide file descriptor limit
cat /proc/sys/fs/file-max
# Ceiling for any per-process limit (maximum value of RLIMIT_NOFILE)
cat /proc/sys/fs/nr_open
2. Inherited File Descriptors:
Child processes inherit open file descriptors from parent processes, and lsof lists an inherited descriptor once per process, so the same underlying files inflate the system-wide total.
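A small sketch of inheritance in action: open a descriptor in the parent shell, then observe it from a child process:
exec 9</etc/hostname                  # open fd 9 in the parent shell
bash -c 'ls -l /proc/$$/fd/9'         # the child inherits it and can inspect it
exec 9<&-                             # close it in the parent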
# View all open files for a specific process
ls -l /proc/PID/fd
# Count open files per process
for pid in /proc/[0-9]*; do
echo "${pid##*/} $(ls -1 $pid/fd 2>/dev/null | wc -l)";
done | sort -n -k2
The Linux kernel maintains multiple counters for file handles:
- fs.file-nr - currently allocated file handles
- fs.file-max - maximum allowed file handles
- RLIMIT_NOFILE - per-process limit
# View kernel file handle statistics
cat /proc/sys/fs/file-nr
# Output format: allocated unused maximum
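Those three fields split cleanly into shell variables for scripting; note that on modern kernels the middle (unused) field is typically 0:
read allocated unused maximum < /proc/sys/fs/file-nr
echo "Handles in use: $((allocated - unused)) of $maximum"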
Scenario 1: Multi-process applications
A web server with 10 worker processes, each using 100 file descriptors, would show 1000 open files in lsof while each process respects its 1024 limit.
Scenario 2: Shared libraries
Common libraries opened by multiple processes are counted separately in lsof but share the same inode and physical resources.
# Find duplicate open files
sudo lsof | awk '{print $9}' | sort | uniq -c | sort -nr
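You can also approach it from the file's side; the libc path below is an assumption and varies by distribution:
# Every process mapping this one library gets its own lsof row
sudo lsof /lib/x86_64-linux-gnu/libc.so.6 | head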
To properly diagnose file descriptor usage:
# View system-wide open file count
awk '{print $1}' /proc/sys/fs/file-nr
# View per-process limits
cat /proc/$(pgrep -n your_process)/limits  # -n picks the newest match if several PIDs exist
# Alternative lsof filtering
sudo lsof -u username | wc -l # Per-user count
sudo lsof -p PID | wc -l # Per-process count
When dealing with file descriptor limits:
- Differentiate between soft and hard limits (ulimit -Sn vs ulimit -Hn)
- Set per-user limits in /etc/security/limits.conf (example entries below)
- Monitor for leaks by tracking trends in /proc/sys/fs/file-nr
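For reference, per-user entries in /etc/security/limits.conf look like this (the www-data user and the values are illustrative):
# /etc/security/limits.conf
# <domain>  <type>  <item>   <value>
www-data    soft    nofile   8192
www-data    hard    nofile   16384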
# Temporary increase for testing
ulimit -n 4096
# Persistent system-wide configuration
echo "fs.file-max = 100000" >> /etc/sysctl.conf
sysctl -p