Understanding the Relationship Between ulimit -n and /proc/sys/fs/file-max: System-wide vs Per-user File Descriptor Limits in Linux

In Linux systems, there are actually two distinct layers of file descriptor limitations:

  • ulimit -n: Shows or sets the per-process soft limit on open file descriptors (FDs) for the current shell and its children
  • /proc/sys/fs/file-max: Defines the system-wide maximum number of file handles the kernel will allocate

Your observation is correct - the 1024 default from ulimit -n applies to individual processes, while the 761,408 value in file-max represents the total available FDs across all processes on the system.
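
A quick way to see both layers from a shell (the numbers are only examples; your defaults will differ):

# Per-process limits for the current shell
ulimit -Sn   # soft limit, e.g. 1024
ulimit -Hn   # hard limit the soft limit may be raised to (distro-dependent)
# System-wide ceiling on file handles
cat /proc/sys/fs/file-max   # e.g. 761408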

When examining multiple login sessions for the same user:

# Example showing two SSH sessions for user 'webapp'
$ ulimit -n              # session 1
1024
$ ssh webapp@localhost   # open a second, independent session
$ ulimit -n              # session 2
1024

Each session maintains its own 1024 FD limit. These limits are per-process, not shared or aggregated between sessions (a quick demonstration follows the list below):

  • Session 1 can open up to 1024 files
  • Session 2 can open another 1024 files
  • Combined they could consume 2048 file handles, still well below file-max
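
To convince yourself the limit really is per-process, you can deliberately exhaust descriptors in one interactive session and watch the other stay unaffected. A rough sketch using bash redirections (the FD range and the /dev/null target are arbitrary):

# In session 1: open descriptors until the soft limit is hit
for fd in $(seq 10 1100); do
    eval "exec $fd>/dev/null" 2>/dev/null || { echo "hit the limit at FD $fd"; break; }
done

# In session 2: the quota is independent and untouched
ulimit -n   # still 1024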

Setting high limits has minimal performance impact when unused. However, there are memory implications:

# Rough worst-case kernel memory if every handle were allocated
# (each struct file plus overhead is a few hundred bytes; 512 is a rough estimate)
files_max=$(cat /proc/sys/fs/file-max)
memory_usage=$((files_max * 512 / 1024))
echo "Potential kernel memory usage: ${memory_usage}KB"

For web servers handling many concurrent connections, we typically recommend:

# For Nginx/PHP-FPM setups
echo "fs.file-max = 500000" >> /etc/sysctl.conf
sysctl -p

# For individual processes
echo "www-data soft nofile 50000" >> /etc/security/limits.conf
echo "www-data hard nofile 100000" >> /etc/security/limits.conf

Check actual FD consumption with:

# System-wide usage
cat /proc/sys/fs/file-nr

# Per-process usage
ls -1 /proc/$PID/fd/ | wc -l
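
To see which processes are the biggest consumers, a small loop over /proc works (run as root to see every process; counts may shift while it runs):

# Top FD consumers, one line per PID: count then PID
for p in /proc/[0-9]*; do
    printf '%s %s\n' "$(ls "$p/fd" 2>/dev/null | wc -l)" "${p##*/}"
done | sort -rn | head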

A typical Apache tuning scenario:

# Calculate needed FDs based on:
# MaxClients 400 * 20 FDs per connection = 8,000 minimum
# Plus 1,000 for system files = 9,000 total

# Set system-wide limit (runtime only; add fs.file-max to /etc/sysctl.conf to persist)
sysctl -w fs.file-max=100000

# Set Apache user limits
echo "apache soft nofile 9000" >> /etc/security/limits.conf
echo "apache hard nofile 10000" >> /etc/security/limits.conf

Remember to restart affected services after changing limits.


In Linux systems, file descriptor management operates at two distinct levels:

# System-wide limit (kernel-enforced maximum)
cat /proc/sys/fs/file-max
# Output example: 761408

# Per-process limit (user-level restriction)
ulimit -n
# Output example: 1024

These limits form three layers:

  • /proc/sys/fs/file-max: Absolute system-wide ceiling on allocated file handles
  • /proc/sys/fs/nr_open: Ceiling on how high any single process's nofile limit can be raised (1048576 by default)
  • ulimit -n: Per-session soft limit inherited by child processes (all three can be read as shown below)
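
Reading all three is harmless and a quick way to see where a given system stands:

cat /proc/sys/fs/file-max   # system-wide ceiling on file handles
cat /proc/sys/fs/nr_open    # per-process ceiling for the nofile limit
ulimit -Hn                  # hard limit for this shell (cannot exceed nr_open)
ulimit -Sn                  # soft limit (cannot exceed the hard limit)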

When multiple sessions exist for the same user:

# Session 1 (PID 1234) has 400 open files
# Session 2 (PID 5678) has 600 open files
# Neither hits its 1024 per-process limit; the system-wide count is 1000 of 761408

Raising the limits themselves has minimal overhead; kernel memory is consumed only as file handles are actually allocated. On most systems the memory held by open file structures lives in the 'filp' slab cache:

# Check kernel memory used by file structures (requires root)
grep filp /proc/slabinfo
# Columns include active objects, total objects, and object size in bytes

For high-performance servers handling many connections:

# Permanent system-wide setting
echo "fs.file-max = 2097152" >> /etc/sysctl.conf

# User-level limits in /etc/security/limits.conf
* soft nofile 65535
* hard nofile 131072

# Verify changes (limits.conf is read by pam_limits, so re-login before checking)
sysctl -p
ulimit -n
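
If you ever need per-process limits above the kernel's nr_open ceiling (1048576 by default), raise fs.nr_open first; otherwise the larger nofile value cannot be applied. A hedged sketch with placeholder values:

# nofile hard limits cannot exceed fs.nr_open, so raise it before asking for more
sysctl -w fs.nr_open=2097152
echo "* hard nofile 2000000" >> /etc/security/limits.conf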

Essential commands for tracking actual usage:

# System-wide count
cat /proc/sys/fs/file-nr
# Output format: allocated unused maximum

# Per-process count (replace $PID with the process ID of interest)
ls /proc/$PID/fd | wc -l
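
The three fields of file-nr can also be pulled apart directly in the shell (a minimal sketch):

# allocated, unused, maximum
read allocated unused maximum < /proc/sys/fs/file-nr
echo "$allocated of $maximum file handles currently allocated"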

When encountering "Too many open files" errors despite available capacity:

# Check for file descriptor leaks
lsof -n | awk '{print $2}' | sort | uniq -c | sort -nr | head

# Verify the limits a running process actually inherited (replace $PID)
grep "Max open files" /proc/$PID/limits
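
If a long-running process turns out to have inherited a limit that is too low, util-linux's prlimit can raise it in place without a restart (PID and values are placeholders; changing another process's limits requires root):

# Raise both the soft and hard nofile limits of a running process
prlimit --pid $PID --nofile=65535:65535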