When testing Nginx's default page with our current configuration (4 CPU cores, 8GB RAM Ubuntu server), we're hitting a hard ceiling at 3,000 concurrent connections despite having ulimit set to 20,000. The server shows minimal CPU and RAM usage, indicating the limitation isn't resource-related but configuration-bound.
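Before changing anything, it helps to confirm the limit the running workers actually inherited, since a shell-level ulimit does not necessarily reach a daemon started as a service. A minimal check along these lines (the process match assumes stock worker naming, and reading another user's /proc entries may require root):
# File-descriptor limit each nginx worker actually has
for pid in $(pgrep -f 'nginx: worker'); do
    grep 'open files' /proc/$pid/limits
done
# Approximate count of connections currently open on port 80
ss -tn 'sport = :80' | wc -l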
Our nginx.conf shows several potential optimization opportunities:
# Current worker setup
worker_processes auto; # Better than fixed number
worker_rlimit_nofile 100000; # Good practice
events {
    worker_connections 5000;
    use epoll;
    multi_accept on; # Should be enabled
    accept_mutex off; # Important for high traffic
}
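With worker_processes auto on this 4-core machine, the theoretical ceiling is 4 workers × 5,000 worker_connections = 20,000 clients, so the 3,000 plateau is coming from another layer. To confirm what nginx actually loaded (nginx -T typically needs root):
sudo nginx -T 2>/dev/null | grep -E 'worker_(processes|connections|rlimit_nofile)'
# Workers actually running
pgrep -c -f 'nginx: worker'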
The current sysctl.conf lacks several crucial parameters for high concurrency:
# Add these to /etc/sysctl.conf
net.ipv4.tcp_max_syn_backlog = 3240000
net.core.netdev_max_backlog = 3240000
# Disable SYN cookies for high performance
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_orphans = 3240000
net.netfilter.nf_conntrack_max = 3240000
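These only take effect after sysctl reloads its configuration; a quick way to apply and spot-check the values above (the conntrack key exists only when the nf_conntrack module is loaded):
sudo sysctl -p
sysctl net.ipv4.tcp_max_syn_backlog net.core.netdev_max_backlog
sysctl net.netfilter.nf_conntrack_max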
The most common reason for connection timeouts at scale is listen queue overflow. The kernel also silently caps the backlog at net.core.somaxconn, so raise that sysctl to at least the same value, then implement this configuration:
server {
    listen 80 backlog=32768 reuseport;
    listen [::]:80 backlog=32768 reuseport;
    ...
}
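To confirm the backlog nginx actually received, raise net.core.somaxconn first and then check the listener with ss, which reports the configured backlog in the Send-Q column for listening sockets (values mirror the config above):
sudo sysctl -w net.core.somaxconn=32768
sudo systemctl restart nginx   # re-create the listeners so the new cap applies
ss -ltn 'sport = :80'          # Send-Q should now show 32768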
To verify improvements, use wrk with this command:
wrk -t12 -c10000 -d60s --timeout 30s http://yourserver
Key parameters:
- -t12: 12 threads (typical for 4-core CPU)
- -c10000: 10,000 concurrent connections
- -d60s: 60 second test duration
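One client-side caveat: the machine running wrk needs file-descriptor headroom of its own, or the socket errors in the report will have nothing to do with nginx. In the same shell that will run the benchmark (the hard limit must permit this):
ulimit -n 20000   # raise the soft fd limit for this session
ulimit -n         # verify before starting wrk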
Here's the full recommended setup for 10K+ connections:
user www-data;
worker_processes auto;
worker_rlimit_nofile 100000;
events {
    worker_connections 32768;
    use epoll;
    multi_accept on;
    accept_mutex off;
}
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 30;
    keepalive_requests 1000;
    reset_timedout_connection on;
    server {
        listen 80 backlog=32768 reuseport;
        server_name _;
        location / {
            root /var/www/html;
            try_files $uri $uri/ =404;
        }
    }
}
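Assuming nginx is managed by systemd (standard on Ubuntu), validate the file and apply it with a reload so existing connections are not dropped:
sudo nginx -t
sudo systemctl reload nginx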
After applying changes, monitor these metrics:
watch -n 1 "cat /proc/net/sockstat && \
echo 'Open files:' && \
lsof -u www-data | wc -l"
Key indicators:
- sockets: used/total in sockstat
- TCP memory usage
- Actual open files count
When your Nginx server hits a ceiling around 3,000 concurrent connections despite proper ulimit settings (20,000 in this case), you need to examine multiple layers of the stack. Your current configuration shows good foundation settings, but let's enhance it systematically.
# In /etc/nginx/nginx.conf
worker_processes auto; # Better than static number
events {
    worker_connections 10000;
    use epoll;
    multi_accept on;
}
http {
    # Existing settings plus:
    keepalive_requests 10000;
    keepalive_timeout 30s;
    client_body_timeout 15s;
    client_header_timeout 15s;

    # Buffer optimizations
    client_body_buffer_size 16k;
    client_header_buffer_size 8k;
    client_max_body_size 8m;
    large_client_header_buffers 4 8k;
}
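To sanity-check the header buffers, send a deliberately large but still-allowed request header and confirm nginx answers 200 rather than 400 "Request Header Or Cookie Too Large" (the URL and header name are placeholders):
# ~7 KB custom header, which should still fit within large_client_header_buffers 4 8k
curl -s -o /dev/null -w '%{http_code}\n' \
    -H "X-Test: $(head -c 7000 /dev/zero | tr '\0' a)" \
    http://yourserver/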
Add these to /etc/sysctl.conf and run sysctl -p:
# Socket options
net.ipv4.tcp_max_syn_backlog = 3240000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_orphans = 3240000
net.ipv4.tcp_orphan_retries = 3
# Memory management
net.ipv4.tcp_mem = 786432 2097152 3145728
net.ipv4.tcp_rmem = 4096 87380 6291456
net.ipv4.tcp_wmem = 4096 16384 4194304
Add these lines to /etc/security/limits.conf:
www-data soft nofile 102400
www-data hard nofile 102400
root soft nofile 102400
root hard nofile 102400
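Note that limits.conf is applied by PAM at login, so on a systemd-managed server the nginx service also needs the limit set on its unit; a drop-in override along these lines (file name and value are illustrative):
sudo mkdir -p /etc/systemd/system/nginx.service.d
sudo tee /etc/systemd/system/nginx.service.d/limits.conf >/dev/null <<'EOF'
[Service]
LimitNOFILE=102400
EOF
sudo systemctl daemon-reload
sudo systemctl restart nginx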
Use wrk for load testing:
wrk -t12 -c10000 -d30s http://yourserver/
Monitor with:
watch -n 1 "echo \"TCP Connections: \"; \
netstat -n | grep -E 'tcp|udp' | wc -l; \
echo \"Nginx Processes: \"; \
ps -ef | grep nginx | wc -l"
- Consider using Nginx's reuseport feature for listen directives
- Implement proper connection queueing with net.core.somaxconn (see the example after this list)
- For HTTP/2 scenarios, adjust http2_max_concurrent_streams
- Monitor with ss -s for socket statistics
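For the queueing and monitoring points above, a minimal example of persisting the accept-queue limit and watching socket state while wrk runs (the value is illustrative and should match the listen backlog):
echo 'net.core.somaxconn = 32768' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
watch -n 1 ss -s   # rolling socket summary during the load test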