Every Unix/Linux process has a file descriptor limit that determines how many files it can have open simultaneously. When a long-running service like a database server or web application hits this limit, you'll typically see errors like "Too many open files". While ulimit can set this for new processes, changing it for already-running ones requires more finesse.
First, identify your process ID and current limits:
# Find your process ID
ps aux | grep [process_name]
# Check current soft and hard limits
cat /proc/[PID]/limits | grep files
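To gauge how much headroom is left, compare the soft limit against the number of descriptors the process currently has open:
# Count descriptors currently in use (sudo may be needed for another user's process)
sudo ls /proc/[PID]/fd | wc -l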
Method 1: Using prlimit (Modern Systems)
The cleanest approach on systems where util-linux ships the prlimit command (added in util-linux 2.21; it relies on the prlimit() syscall introduced in kernel 2.6.36):
sudo prlimit --pid [PID] --nofile=65535:65535
This sets both soft and hard limits to 65535. Verify with:
prlimit --pid [PID] --nofile
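If the hard limit is already high enough and you only need more headroom, the soft limit alone can be raised up to the hard limit; a sketch, assuming your prlimit accepts the form with one side of the colon omitted:
# Raise only the soft limit, leaving the hard limit untouched
sudo prlimit --pid [PID] --nofile=32768: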
Method 2: Using gdb (Universal but Risky)
For older systems without prlimit, attach gdb and have the process call setrlimit() on itself. gdb usually cannot resolve the RLIMIT_NOFILE macro, so pass its numeric value (7 on Linux), and recent gdb versions require the call to be cast to its return type; the exact expression may still need tweaking depending on your gdb version and whether libc debug symbols are installed:
sudo gdb -p [PID] -batch -ex 'call (int)setrlimit(7, {10240, 10240})' -ex 'detach'
Where 10240 is your desired soft/hard limit pair. Attaching a debugger briefly stops the process, so treat this as a last resort on production systems.
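Afterwards, confirm the new values actually took effect:
grep "Max open files" /proc/[PID]/limits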
Method 3: Temporary Workaround with lsof
For immediate relief while you implement a permanent solution:
# Identify leaked file descriptors
lsof -p [PID] | awk '{print $4,$5,$9}' | sort | uniq -c | sort -rn | head
# Close specific descriptors (use with caution)
gdb -p [PID] -batch -ex 'call (int)close([FD_NUMBER])' -ex 'detach'
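Before closing anything, double-check what the descriptor points at; closing one the program still relies on can corrupt its state:
# Inspect a single descriptor before touching it
sudo ls -l /proc/[PID]/fd/[FD_NUMBER]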
To prevent recurrence, adjust system limits:
# /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
# Systemd services ignore limits.conf; use a drop-in override instead
# /etc/systemd/system/[service].service.d/override.conf (sudo systemctl edit [service] creates this for you)
[Service]
LimitNOFILE=infinity
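After creating the override, reload systemd and restart the service so the new limit is applied:
sudo systemctl daemon-reload
sudo systemctl restart [service]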
For a web server hitting limits during traffic spikes:
# Check current worker processes
ps aux | grep nginx | grep worker
# Dynamically adjust each worker
for pid in $(pgrep -f "nginx: worker"); do
sudo prlimit --pid $pid --nofile=32768:32768
done
# Verify changes
for pid in $(pgrep -f "nginx: worker"); do
echo "PID $pid:"; cat /proc/$pid/limits | grep files
done
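For nginx in particular, the raised limit can be made permanent at the application level with the worker_rlimit_nofile directive (assuming you manage nginx.conf directly; the value mirrors the prlimit call above):
# /etc/nginx/nginx.conf
worker_rlimit_nofile 32768;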
When managing long-running processes like database servers (MySQL, PostgreSQL) or web servers (Nginx, Apache), you might encounter the dreaded "Too many open files" error. It occurs when a process exhausts its file descriptor limit. Adjusting these limits system-wide in /etc/security/limits.conf or via ulimit only affects new processes; for processes that are already running, you need the techniques below.
First, let's check the current limits of a running process (replace PID with your process ID):
cat /proc/PID/limits
Sample output might show:
Limit                     Soft Limit           Hard Limit           Units
Max open files            1024                 4096                 files
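The two numbers can be extracted directly if you want to feed them into a script:
# Print just the soft and hard limits for open files
awk '/Max open files/ {print $4, $5}' /proc/PID/limits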
On modern Linux systems (kernel 2.6.36+), use prlimit from util-linux:
sudo prlimit --pid PID --nofile=NEW_SOFT_LIMIT:NEW_HARD_LIMIT
Example to increase a web server's limits (pgrep usually returns several nginx PIDs, the master plus its workers, and prlimit takes one PID at a time, so loop over them):
for pid in $(pgrep nginx); do sudo prlimit --pid "$pid" --nofile=65535:65535; done
Beware of advice to write a new value directly into the proc filesystem: on mainline kernels /proc/PID/limits is read-only, so echoing a number into it with tee simply fails. For systems without the prlimit utility, fall back to the gdb technique from Method 2, or to any other wrapper around the setrlimit()/prlimit() syscalls.
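If the prlimit command is missing but Python 3.4 or newer is installed, the same prlimit() syscall can be reached through Python's resource module; a minimal sketch, run as root with PID as the target:
sudo python3 -c 'import sys, resource; resource.prlimit(int(sys.argv[1]), resource.RLIMIT_NOFILE, (65535, 65535))' PID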
For critical services, consider a monitoring script that automatically adjusts limits:
#!/bin/bash
# Raise the file descriptor limit for a process whose soft limit is still low.
# Run as root so prlimit can raise the hard limit as well.
PID=$(pgrep -o -f "your_process_name")   # oldest matching PID; adjust the pattern
[ -z "$PID" ] && exit 0                  # process not running, nothing to do
CURRENT=$(awk '/Max open files/ {print $4}' /proc/"$PID"/limits)   # current soft limit
THRESHOLD=800
if [ "$CURRENT" -lt "$THRESHOLD" ]; then
    prlimit --pid "$PID" --nofile=65535:65535
    logger "Increased file descriptor limit for PID $PID"
fi
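If you save the script somewhere like /usr/local/bin/raise-fd-limit.sh (the path here is just an example), a root cron entry can run the check every minute:
# crontab -e (as root)
* * * * * /usr/local/bin/raise-fd-limit.sh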
For services managed by systemd, you can set limits permanently in the service file:
[Service]
LimitNOFILE=65535
Then reload with:
sudo systemctl daemon-reload
sudo systemctl restart service_name
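On reasonably recent systemd you can confirm that the restarted service picked up the new value:
# Limit recorded in the unit, and the limit of the running main process
systemctl show service_name -p LimitNOFILE
cat /proc/$(systemctl show service_name -p MainPID --value)/limits | grep files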
For extremely high-load systems, you might also need to raise the kernel's system-wide ceiling on open file handles (fs.file-max caps the whole system, not a single process):
sudo sysctl -w fs.file-max=2097152
echo "fs.file-max = 2097152" | sudo tee -a /etc/sysctl.conf
- Verify changes with cat /proc/PID/limits
- Check system-wide limits with sysctl fs.file-nr
- Monitor file descriptor usage with ls -1 /proc/PID/fd | wc -l
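Putting the last two checks together, a quick health snapshot for a single process might look like this (a small sketch; substitute your own PID):
PID=12345   # example PID
USED=$(sudo ls /proc/$PID/fd | wc -l)
SOFT=$(awk '/Max open files/ {print $4}' /proc/$PID/limits)
echo "PID $PID is using $USED of $SOFT allowed file descriptors"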