Resolving “Too many open files” Error in CentOS 7 with LEMP Stack: Comprehensive System Limits Configuration Guide


When your CentOS 7 server running Nginx, PHP-FPM and MariaDB suddenly starts throwing "Too many open files" errors, adjusting ulimit alone is rarely enough. The real solution requires understanding Linux's multi-layered file descriptor management: kernel-wide limits, per-process limits, and the limits systemd imposes on each service.

Your /proc/sys/fs/file-nr output shows:

45216   0   6520154

This reveals:

  • 45216 - file handles currently allocated
  • 0 - allocated-but-unused handles (always 0 on modern kernels, which free handles immediately)
  • 6520154 - system-wide maximum file handles (fs.file-max)
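These three fields are easy to monitor. A minimal sketch that reads the same file and reports how close the system is to its ceiling (the file_handle_usage helper is hypothetical, not part of any standard tool):

```shell
#!/bin/sh
# Hypothetical helper: report how much of the system-wide file-handle
# ceiling is in use. Arguments mirror /proc/sys/fs/file-nr:
# allocated, allocated-but-unused, maximum.
file_handle_usage() {
    allocated=$1
    max=$3
    # integer percentage of the maximum currently allocated
    echo $(( allocated * 100 / max ))
}

read -r alloc unused max < /proc/sys/fs/file-nr
echo "file handles: ${alloc}/${max} ($(file_handle_usage "$alloc" "$unused" "$max")% used)"
```

With the numbers above, 45216 of 6520154 is well under 1%, so the system-wide ceiling is not the bottleneck here; the per-process limit is.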

The failed nginx restart through systemd points to the real problem: on CentOS 7, systemd-managed services never read the traditional /etc/security/limits.conf (pam_limits applies only to login sessions), so the limit must be set on the unit itself. We need to configure both a systemd drop-in and the kernel limits:

# Create override directory
mkdir -p /etc/systemd/system/nginx.service.d

# Create limit override file
cat > /etc/systemd/system/nginx.service.d/limit.conf <<'EOF'
[Service]
LimitNOFILE=65536
EOF

# Make systemd read the new drop-in
systemctl daemon-reload

Your current settings look good but need these additional tweaks:

1. Kernel-level adjustments

# Add to /etc/sysctl.conf
fs.file-max = 2097152
fs.nr_open = 2097152

# Apply immediately
sysctl -p
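One gotcha worth checking after sysctl -p: fs.nr_open caps the highest value any per-process nofile limit (including systemd's LimitNOFILE) can take. A quick sanity check, assuming the 65536 target used elsewhere in this guide:

```shell
#!/bin/sh
# Sanity check: no process can raise its open-files limit above
# fs.nr_open, so LimitNOFILE / ulimit targets must fit under it.
target=65536
nr_open=$(cat /proc/sys/fs/nr_open)

if [ "$target" -le "$nr_open" ]; then
    echo "OK: nofile target $target fits under fs.nr_open ($nr_open)"
else
    echo "WARNING: raise fs.nr_open before requesting nofile=$target"
fi
```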

2. PHP-FPM Process Management

# In /etc/php-fpm.d/www.conf
pm = dynamic
pm.max_children = 50
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
pm.max_requests = 500

# Important for file descriptor inheritance
rlimit_files = 65536
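pm.max_children is the other half of the file descriptor budget: each child can hold up to rlimit_files descriptors and also consumes memory. A rough sizing sketch (the estimate_max_children helper and the 2 GB / 40 MB figures are illustrative assumptions, not measurements from this server):

```shell
#!/bin/sh
# Illustrative sizing helper: derive pm.max_children from the memory
# reserved for PHP-FPM and the average RSS of one worker process.
estimate_max_children() {
    reserved_mb=$1   # RAM reserved for PHP-FPM
    per_child_mb=$2  # average memory of one php-fpm worker
    echo $(( reserved_mb / per_child_mb ))
}

# Example: 2 GB reserved, ~40 MB per worker
estimate_max_children 2048 40
```

With those assumptions the result is 51, which is in line with the pm.max_children = 50 used above; measure your own workers' RSS before committing to a value.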

3. Nginx Worker Optimization

# In nginx.conf main context
worker_rlimit_nofile 65536;

events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}
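One relationship worth verifying in those nginx numbers: on a LEMP stack nginx proxies every request to PHP-FPM, so each client connection can consume two descriptors (client side plus upstream side). worker_rlimit_nofile should therefore be at least twice worker_connections, plus headroom for logs and listening sockets. A minimal check (check_fd_budget is a hypothetical helper):

```shell
#!/bin/sh
# Hypothetical check: worker_rlimit_nofile must cover two descriptors
# per connection (client + upstream) when nginx is proxying.
check_fd_budget() {
    rlimit=$1
    connections=$2
    if [ "$rlimit" -ge $(( connections * 2 )) ]; then
        echo "ok"
    else
        echo "too low"
    fi
}

check_fd_budget 65536 4096   # the values from the nginx.conf above
```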

After implementing all changes, verify with:

# Check nginx limits
cat /proc/$(pgrep nginx | head -1)/limits | grep 'Max open files'

# Check PHP-FPM limits (for each worker)
ps -ef | grep php-fpm | grep -v grep | awk '{print $2}' | xargs -I {} cat /proc/{}/limits | grep files

# Count open files per user
watch -n 1 "lsof -n | awk '{print \$3}' | sort | uniq -c | sort -nr | head -20"

If issues persist:

# Find which processes are consuming file descriptors
lsof -n | awk '{print $1,$2,$3}' | sort | uniq -c | sort -nr | head

# Check for file descriptor leaks in a PHP-FPM worker
# (pgrep -n picks the newest pid, i.e. a worker rather than the master)
strace -f -p $(pgrep -n php-fpm) -e trace=open,openat,close,fcntl

Remember to restart all services after making changes:

systemctl restart php-fpm nginx mariadb

For production environments, consider these optimized limits:

# /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
nginx soft nofile 65536
nginx hard nofile 65536
root soft nofile 65536
root hard nofile 65536

# System-wide limit (then run `sysctl -p` to apply)
echo "fs.file-max=2097152" >> /etc/sysctl.conf

These settings provide a balanced approach for most LEMP stack configurations while preventing resource exhaustion.


When running a LEMP stack (NGINX + PHP-FPM + MariaDB) on CentOS 7, you might encounter persistent "Too many open files" errors, particularly during service restarts:

systemctl status nginx.service
● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/nginx.service.d
           └─worker_files_limit.conf
   Active: failed (Result: resources)

First, let's verify the actual system limits with these diagnostic commands:

# Check system-wide file handles
cat /proc/sys/fs/file-nr
45216   0   6520154

# Check process-specific limits
ps aux | grep nginx | grep -v grep
nginx      929  0.0  0.2  50880  6028 ?        S    00:25   0:00 nginx: worker process

# Check current open files count
lsof | wc -l
4776
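Note that `lsof | wc -l` overstates real descriptor usage: lsof also lists memory-mapped files and repeats entries per thread. Counting the entries under /proc/&lt;pid&gt;/fd gives the true per-process figure (count_fds is a hypothetical helper):

```shell
#!/bin/sh
# Hypothetical helper: count the real open file descriptors of a
# process. /proc/<pid>/fd holds exactly one entry per open descriptor.
count_fds() {
    ls "/proc/$1/fd" 2>/dev/null | wc -l
}

# e.g. descriptors held by the current shell
count_fds $$

# sum over all nginx workers (assumes nginx is running):
# for pid in $(pgrep nginx); do count_fds "$pid"; done
```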

We need a multi-layer approach addressing systemd, kernel, and application levels:

# 1. System-wide kernel limits (add to /etc/sysctl.conf)
fs.file-max = 2097152
fs.nr_open = 2097152

# 2. Systemd service override for NGINX
mkdir -p /etc/systemd/system/nginx.service.d
echo '[Service]' > /etc/systemd/system/nginx.service.d/override.conf
echo 'LimitNOFILE=65536' >> /etc/systemd/system/nginx.service.d/override.conf

# 3. NGINX main configuration (/etc/nginx/nginx.conf)
worker_rlimit_nofile 40000;
events {
    worker_connections 10000;
    use epoll;
    multi_accept on;
}

For PHP-FPM processes running under NGINX user:

# /etc/php-fpm.d/www.conf
rlimit_files = 65536
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35

After implementing changes, verify with:

# Check current limits for NGINX processes
cat /proc/$(pgrep nginx | head -1)/limits | grep 'Max open files'

# Real-time monitoring
watch -n 1 'lsof -u nginx | wc -l'

# Check the limit a new process under the nginx user would inherit
sudo -u nginx bash -c 'ulimit -n'

If issues persist, investigate file descriptor leaks:

# Check which files are being held open
lsof -u nginx | awk '{print $9}' | sort | uniq -c | sort -nr | head -20

# Alternative by file types
lsof -u nginx | awk '{print $5" "$9}' | sort | uniq -c | sort -nr
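A leak also shows up over time: a leaking worker's descriptor count only ever grows. This sketch (sample_fd_growth is a hypothetical helper) samples one pid twice and prints the delta; a persistently positive delta across repeated runs suggests a leak:

```shell
#!/bin/sh
# Hypothetical helper: sample a process's open-fd count twice and print
# the growth over the interval. Run repeatedly; steady growth = leak.
sample_fd_growth() {
    pid=$1
    interval=${2:-5}
    before=$(ls "/proc/$pid/fd" | wc -l)
    sleep "$interval"
    after=$(ls "/proc/$pid/fd" | wc -l)
    echo $(( after - before ))
}

# e.g. watch the oldest php-fpm process (the master) for 10 seconds:
# sample_fd_growth "$(pgrep -o php-fpm)" 10
```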

For production systems, consider this comprehensive systemd configuration:

# /etc/systemd/system/nginx.service.d/limits.conf
# (use /etc, not /usr/lib, so package updates don't overwrite it)
[Service]
LimitNOFILE=65536
LimitMEMLOCK=infinity
LimitSTACK=infinity
LimitCPU=infinity
LimitDATA=infinity
LimitFSIZE=infinity
LimitRSS=infinity
LimitNPROC=65536

Remember to reload systemd after changes:

systemctl daemon-reload
systemctl restart nginx php-fpm