When running Nginx with high concurrency configurations, many administrators encounter the frustrating message:
8096 worker_connections exceed open file resource limit: 1024
This occurs despite having proper configurations in:
- nginx.conf (with worker_connections set high)
- /etc/security/limits.conf (with increased nofile limits)
- /etc/default/nginx (with ULIMIT settings)
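Before touching any configuration, it helps to estimate how many descriptors Nginx actually needs. A rough sketch, assuming 4 worker processes (a hypothetical value; check worker_processes in your nginx.conf) and doubling per connection because a proxied request also opens an upstream socket:

```shell
worker_connections=8096   # the value from the error message above
worker_processes=4        # assumption: check worker_processes in nginx.conf
per_worker=$((worker_connections * 2))     # x2: proxied requests open an upstream socket
total=$((per_worker * worker_processes))   # what the system-wide limit must accommodate
echo "per worker (worker_rlimit_nofile floor): $per_worker"
echo "system-wide: $total"
```

The default soft limit of 1024 is clearly far below either figure, which is exactly what the error is reporting.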
1. System-wide File Descriptor Limits
First verify current limits with:
# Check global limits
cat /proc/sys/fs/file-max
# Check user limits
ulimit -n
# Check the Nginx master process limits
# (worker limits can differ once worker_rlimit_nofile is set)
cat /proc/$(cat /var/run/nginx.pid)/limits
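The /proc limits file is column-aligned: in the "Max open files" row, the fourth and fifth whitespace-separated fields are the soft and hard limits. A small awk sketch against a sample row (on a live box, point it at the real file, e.g. /proc/$(cat /var/run/nginx.pid)/limits):

```shell
# Sample row mimicking the /proc/<pid>/limits format
sample='Max open files            1024                 4096                 files'
parsed=$(echo "$sample" | awk '/Max open files/ {print "soft:", $4, "hard:", $5}')
echo "$parsed"
```

If the soft value still reads 1024 here, one of the layers below has not taken effect.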
2. Permanent System Configuration
Add these settings to /etc/sysctl.conf:
fs.file-max = 200000
fs.nr_open = 200000
Then apply immediately:
sysctl -p
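The two knobs do different jobs: fs.file-max is the system-wide ceiling on open descriptors, while fs.nr_open caps what any single process may request. On a live box, read them back after sysctl -p with sysctl -n fs.file-max and sysctl -n fs.nr_open. The sketch below just sanity-checks the relationship that matters for the later steps:

```shell
nr_open=200000    # value set above via fs.nr_open
rlimit=200000     # any LimitNOFILE / worker_rlimit_nofile you set later
# A per-process limit above fs.nr_open is silently impossible to reach,
# so keep rlimit <= nr_open.
[ "$rlimit" -le "$nr_open" ] && echo "ok: per-process limit fits under fs.nr_open"
```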
3. Service-Specific Configuration
Create a systemd override with systemctl edit nginx rather than editing /lib/systemd/system/nginx.service directly (package upgrades overwrite that unit file), and add:
[Service]
LimitNOFILE=200000
LimitNPROC=200000
Then reload systemd:
systemctl daemon-reload
systemctl restart nginx
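Once the unit is restarted, systemd itself can confirm the value: systemctl show nginx -p LimitNOFILE prints a single LimitNOFILE=... line. Parsing that line is trivial; the sketch uses a sample string so it runs anywhere:

```shell
# What `systemctl show nginx -p LimitNOFILE` prints on a live host
line='LimitNOFILE=200000'
value="${line#LimitNOFILE=}"   # strip the key, keep the number
echo "systemd-enforced ceiling: $value"
```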
4. Nginx-Specific Optimizations
Add these to nginx.conf (worker_rlimit_nofile belongs in the main context, outside the events block):
worker_rlimit_nofile 200000;
events {
    worker_connections 8096;
    multi_accept on;
    use epoll;
}
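The two directives should be consistent with each other: worker_connections must not exceed worker_rlimit_nofile. A sketch that extracts both from a config and compares them; it writes a sample file so it runs anywhere, but on a real host you would point the awk commands at /etc/nginx/nginx.conf:

```shell
conf="$(mktemp)"   # stand-in for /etc/nginx/nginx.conf
cat > "$conf" <<'EOF'
worker_rlimit_nofile 200000;
events {
    worker_connections 8096;
}
EOF
# gsub on $0 strips the trailing semicolon before fields are re-split
rlimit=$(awk '/worker_rlimit_nofile/ {gsub(";",""); print $2}' "$conf")
conns=$(awk '/worker_connections/ {gsub(";",""); print $2}' "$conf")
[ "$conns" -le "$rlimit" ] && echo "ok: worker_connections ($conns) <= worker_rlimit_nofile ($rlimit)"
```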
After applying all changes:
# Check a worker process's limits (worker_rlimit_nofile applies to workers)
grep 'Max open files' /proc/$(pgrep -f 'nginx: worker' | head -n1)/limits
# Alternative: the limit a fresh shell gets as the nginx user
# (reflects PAM limits, not the running daemon)
sudo -u nginx bash -c 'ulimit -n'
Common pitfalls:
- Forgetting to restart services after config changes
- Mismatch between systemd and init.d configurations
- Not setting both soft and hard limits in limits.conf
- Overlooking PAM session limits
For Debian/Ubuntu systems, also check /etc/pam.d/common-session and add:
session required pam_limits.so
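A quick way to confirm the module is active is to grep the session file for pam_limits. The sketch writes a sample file so it is safe to run anywhere; on a real Debian/Ubuntu box, grep /etc/pam.d/common-session instead:

```shell
f="$(mktemp)"   # stand-in for /etc/pam.d/common-session
printf 'session required pam_limits.so\n' > "$f"
grep -q 'pam_limits.so' "$f" && echo "pam_limits is enabled"
```

Without pam_limits, nothing in /etc/security/limits.conf is applied to login sessions at all.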
When you see the "worker_connections exceed open file resource limit: 1024" error in Nginx, it means your system's file descriptor limit is conflicting with Nginx's configuration. Even though you've set higher limits in limits.conf, several other factors can still enforce the default 1024 limit; in particular, limits.conf only applies to PAM login sessions, so a service started by systemd ignores it and uses the unit's LimitNOFILE instead.
Here's everything you need to check and configure:
# 1. System-wide limits (already done in your case)
/etc/security/limits.conf:
* hard nofile 199680
* soft nofile 65535
nginx hard nofile 199680
nginx soft nofile 65535
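The soft value is what a process starts with; the hard value is the ceiling an unprivileged process may raise itself to (which is exactly what worker_rlimit_nofile does for Nginx workers). A small bash demonstration of that relationship, run in a command substitution subshell so the current shell's limits are untouched:

```shell
hard=$(ulimit -Hn)
# A process may raise its soft limit up to, but not beyond, the hard cap
raised=$(ulimit -Sn "$hard"; ulimit -Sn)
echo "hard cap: $hard, soft after raising: $raised"
```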
# 2. Systemd service override (critical for modern systems)
/etc/systemd/system/nginx.service.d/override.conf:
[Service]
LimitNOFILE=65535
# 3. Nginx startup configuration
/etc/default/nginx:
ULIMIT="-n 65535"
# 4. Kernel-level limits
/etc/sysctl.conf:
fs.file-max = 2097152
On Debian 7 with systemd, you need to create a service override:
sudo mkdir -p /etc/systemd/system/nginx.service.d
sudo nano /etc/systemd/system/nginx.service.d/override.conf
# Add these contents:
[Service]
LimitNOFILE=65535
# Then reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart nginx
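The manual steps above can also be scripted with a heredoc. The sketch writes into a temporary directory so it is runnable anywhere; on the real host, use /etc/systemd/system/nginx.service.d and follow up with daemon-reload and a restart as shown above:

```shell
dir="$(mktemp -d)"   # real path: /etc/systemd/system/nginx.service.d
cat > "$dir/override.conf" <<'EOF'
[Service]
LimitNOFILE=65535
EOF
cat "$dir/override.conf"
```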
After making all changes, verify with:
# Check a worker process's limits (the workers are what serve connections)
grep "Max open files" /proc/$(pgrep -f 'nginx: worker' | head -n1)/limits
# Check the current shell's limit (not the daemon's)
ulimit -n
# Check kernel limits
sysctl fs.file-max
For high-traffic servers, consider this optimized setup:
# /etc/sysctl.conf additions
fs.file-max = 500000
fs.nr_open = 500000
# /etc/security/limits.conf
* soft nofile 100000
* hard nofile 500000
nginx soft nofile 100000
nginx hard nofile 500000
# Nginx configuration
worker_processes auto;
worker_rlimit_nofile 100000;
events {
    worker_connections 40000;
    multi_accept on;
    use epoll;
}
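To see why these numbers fit together, run the arithmetic. This assumes 4 workers (hypothetical; worker_processes auto picks the CPU count) and doubles the per-connection descriptor count for the proxying case:

```shell
worker_processes=4          # assumption; worker_processes auto picks CPU count
worker_connections=40000
max_clients=$((worker_processes * worker_connections))
per_worker_fds=$((worker_connections * 2))   # x2 when each client has an upstream
echo "theoretical max clients: $max_clients"
echo "descriptors per worker: $per_worker_fds (must stay under worker_rlimit_nofile 100000)"
```

With worker_rlimit_nofile at 100000, each worker keeps headroom above its 80000-descriptor worst case, and the fs.file-max of 500000 covers all workers combined.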
If the error persists after all configurations:
- Check whether SELinux or AppArmor is confining Nginx (AppArmor is the more common of the two on Debian)
- Verify the nginx user exists and has correct permissions
- Ensure no other security modules are limiting resources
- Check for typos in configuration files