When you increase the nofile limit in /etc/security/limits.conf, you're primarily affecting three things: kernel memory usage, I/O-multiplexing performance, and system-wide file-table headroom.
# Example limits.conf entry
* soft nofile 100000
* hard nofile 100000
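Before raising anything, it helps to see the limits currently in force. A quick check, assuming a Linux shell:

```shell
# Soft and hard nofile limits for the current shell session
ulimit -Sn
ulimit -Hn
# Full limit table for any process (here: this shell)
grep "Max open files" /proc/self/limits
```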
Each open file descriptor consumes kernel memory for:
- struct file (roughly 250-300 bytes on recent 64-bit kernels)
- struct dentry (roughly 200 bytes)
- struct inode (several hundred bytes for the generic part; filesystem-specific inodes such as ext4's are closer to 1KB)
(exact sizes vary with kernel version and build options)
The actual memory usage per file descriptor typically falls in the 1-2KB range once all associated kernel data structures are counted.
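As a back-of-the-envelope check (assuming ~1.5KB per descriptor, which is an estimate, not a measured figure):

```shell
# Rough kernel memory cost of N descriptors at ~1536 bytes each
fds=100000
echo "~$(( fds * 1536 / 1024 / 1024 )) MB"   # ~146 MB
```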
# View a process's memory footprint (note: VmSize is the process's virtual memory, not kernel FD memory)
grep VmSize /proc/$(pidof your_process)/status
# Check current file descriptor count
ls -l /proc/$(pidof your_process)/fd | wc -l
Higher limits affect:
- select()/poll() scan every watched descriptor (O(n) per call; select() is also capped at FD_SETSIZE, normally 1024)
- epoll scales better (O(1) retrieval of ready descriptors)
- System call overhead (each FD operation incurs a user/kernel mode switch)
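One way to check whether a server is epoll-based is to look for eventpoll entries among its descriptors. A sketch assuming Linux /proc, with $$ standing in for the PID you actually want to inspect:

```shell
# Count epoll instances held by a process (replace $$ with the target PID)
ls -l /proc/$$/fd 2>/dev/null | grep -c 'eventpoll'
```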
For most production servers:
# Web servers (Nginx/Apache)
www-data soft nofile 65535
www-data hard nofile 65535
# Database servers (PostgreSQL/MySQL)
postgres soft nofile 65536
postgres hard nofile 65536
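Since limits.conf is applied by pam_limits at login, verify from a freshly started shell; to inspect another account, prefix with sudo (the postgres user here is just an example):

```shell
# For the current user; for another account use e.g.:
#   sudo -iu postgres sh -c 'ulimit -n'
sh -c 'ulimit -n'
```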
Use these commands to track FD consumption:
# System-wide FD usage
cat /proc/sys/fs/file-nr
# Per-user open-file count (note: lsof also lists memory-mapped files and
# per-thread duplicates, so this overcounts plain descriptors)
lsof -u username | wc -l
# Real-time monitoring
watch -n 1 "ls /proc/$(pidof your_process)/fd | wc -l"
Consider higher limits when:
- Running high-concurrency servers (10k+ connections)
- Using connection pools or microservices architectures
- Processing many small files simultaneously
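To find out which processes are actually consuming descriptors, you can walk /proc directly. A sketch assuming Linux (run as root to see every process):

```shell
# Ten processes holding the most file descriptors
for p in /proc/[0-9]*; do
    n=$(ls "$p/fd" 2>/dev/null | wc -l)
    echo "$((n)) $(cat "$p/comm" 2>/dev/null)"
done | sort -rn | head
```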
Here's how to test limits safely:
# Test script: opens descriptors until the soft limit is reached.
# When exec fails, a non-interactive bash exits with "Too many open
# files"; the || break covers interactive shells.
for i in {1..100000}; do
    exec {fd}<> /dev/null || break
    (( i % 1000 == 0 )) && echo "Opened $i descriptors"
done
Also adjust these related kernel parameters (fs.file-max caps open files system-wide; fs.nr_open caps the hard limit any single process may request):
# /etc/sysctl.conf
fs.file-max = 2097152
fs.nr_open = 1048576
Remember that sysctl -p only reloads kernel parameters; limits.conf entries apply to new login sessions:
sysctl -p
ulimit -n # Verify from a fresh login
In Linux systems, every process maintains a table of file descriptors (FDs) that represent open files, sockets, pipes, and other I/O resources. The nofile limit in /etc/security/limits.conf controls the maximum number of these descriptors a process can open simultaneously.
# Example limits.conf entry
username soft nofile 4096
username hard nofile 8192
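Because limits.conf edits only affect processes started after a fresh login, it's worth checking what an already-running process is actually bound by (assumes Linux /proc; $$ stands in for the PID):

```shell
# Limits in force for a running process (replace $$ with a real PID)
grep "Max open files" /proc/$$/limits
```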
Each file descriptor consumes kernel memory (typically 1-2KB) for tracking state. While this seems small, consider:
- A single process with 100,000 FDs could tie up roughly 100-200MB of kernel memory
- The kernel must maintain data structures for each FD in its caches
- System-wide limits exist (visible in /proc/sys/fs/file-max)
Higher limits don't directly impact performance unless the descriptors are actually used. However:
# Check current usage:
cat /proc/sys/fs/file-nr
# Output fields: allocated handles, allocated-but-unused, system maximum
# e.g.: 1024 0 8192 (on modern kernels the middle field is usually 0)
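The same fields can be pulled apart in a script, assuming a Linux /proc:

```shell
# Parse file-nr: allocated handles, allocated-but-unused, system max
read -r alloc unused max < /proc/sys/fs/file-nr
echo "in use: $((alloc - unused)) of $max"
```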
For web servers handling many concurrent connections (Nginx, Apache), we typically see:
worker_processes 4;
worker_rlimit_nofile 30000;
events {
    worker_connections 10000;
}
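A rough sizing rule (my assumption, not a formula from the Nginx docs): a proxying worker can hold up to two descriptors per connection (client side plus upstream side), plus headroom for logs and listen sockets:

```shell
# Minimum worker_rlimit_nofile for a proxying worker, by rule of thumb
conns=10000
echo "worker_rlimit_nofile >= $(( conns * 2 + 512 ))"
```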
For different server roles:
Server Type | Recommended Limit
---|---
Database (MySQL) | 100,000+
Web Server | 30,000-50,000
Application Server | 10,000-20,000
When the limit is hit, applications fail with "Too many open files" (EMFILE) errors. To check how close you are:
# Check per-process FD count:
ls -l /proc/<PID>/fd | wc -l
# System-wide monitoring:
watch -n 1 "cat /proc/sys/fs/file-nr"
For a production web server:
# /etc/security/limits.conf
www-data soft nofile 50000
www-data hard nofile 100000
# /etc/sysctl.conf
fs.file-max = 2097152
fs.nr_open = 1000000
After changes, verify from a fresh login session:
ulimit -n
cat /proc/sys/fs/file-max