Nginx worker_connections: Optimal Configuration and Performance Considerations for High-Traffic Servers


The worker_connections directive in Nginx is a crucial performance parameter that defines how many simultaneous connections each worker process can handle. This includes:

  • Client connections (HTTP/HTTPS requests)
  • Upstream server connections (when acting as reverse proxy)
  • Internal connections for keepalives and other operations

The ideal setting depends on your server's resources and expected traffic patterns. A common rule of thumb is:

worker_connections = (ulimit -n) / worker_processes

For a typical production server with:

ulimit -n = 65536
worker_processes = auto (matches CPU cores)

On an 8-core server, this would suggest:

worker_connections 8192;
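
A quick way to get this starting point on a given host, as a rough sketch (it assumes ulimit -n prints a numeric soft limit and that worker_processes auto resolves to the nproc count):

# Suggested per-worker value from the formula above
echo $(( $(ulimit -n) / $(nproc) ))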

While increasing worker_connections can handle more traffic, consider these trade-offs:

  • Memory Usage: Each connection consumes ~256 bytes (more with SSL)
  • File Descriptors: Must stay under system-wide limits
  • Epoll Efficiency: Linux epoll's cost grows with active events rather than total connections, so the event mechanism itself rarely becomes the bottleneck
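
For the file-descriptor point above, a quick sanity check of the limit a running worker actually has (a sketch assuming a Linux host where pgrep can see the nginx worker processes):

# Effective open-file limit of one nginx worker process
grep "Max open files" /proc/$(pgrep -f "nginx: worker" | head -n1)/limits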

Here's a production-tested configuration for a high-traffic web server:

worker_processes auto;
worker_rlimit_nofile 100000;

events {
    worker_connections 16384;
    multi_accept on;
    use epoll;
}
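
Two notes on this block: multi_accept on tells each worker to accept as many pending connections as possible when it wakes up rather than one at a time, and use epoll explicitly selects the epoll event method that Nginx would normally pick on Linux anyway. worker_rlimit_nofile raises the per-worker open-file limit so the configured worker_connections can actually be reached; when proxying, budget roughly two descriptors per active connection (client side plus upstream side).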

Verify your settings with these commands:

# Check current connections
ss -s

# Monitor file descriptor usage
cat /proc/sys/fs/file-nr

# Test configuration
nginx -T
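
To see how many descriptors each worker actually holds against its limit (illustrative; reading another user's /proc entries typically requires root):

# Open file descriptors per nginx worker
for pid in $(pgrep -f "nginx: worker"); do
    echo "$pid: $(ls /proc/$pid/fd | wc -l) fds"
done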

Remember to adjust sysctl settings for high connection counts:

fs.file-max = 2097152
net.core.somaxconn = 65535
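
To make these survive a reboot, one common approach is a drop-in under /etc/sysctl.d/ (the file name below is arbitrary):

# /etc/sysctl.d/99-nginx-tuning.conf
fs.file-max = 2097152
net.core.somaxconn = 65535

# Load all sysctl configuration without rebooting
sudo sysctl --system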

Consider higher values when:

  • Using HTTP/2 (multiple streams per connection)
  • Handling many slow clients (e.g., mobile apps)
  • Running as a high-traffic proxy or load balancer
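
The proxy case deserves particular attention: each in-flight proxied request can hold two connections, one to the client and one to the upstream, so budget roughly twice the connection slots and descriptors per request. A minimal sketch of upstream keepalive reuse (the backend address is a placeholder):

upstream backend {
    server 10.0.0.1:8080;   # placeholder backend
    keepalive 32;           # idle upstream connections cached per worker
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";  # clear any "Connection: close"
    }
}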

Out of the box, many distributions ship a conservative default:

events {
    worker_connections 1024; # Default value on many systems
}

There's no universal "best" value, but a sensible starting point can be calculated:


# Formula:
max_clients = worker_processes × worker_connections

# Example for 4-core server:
worker_processes auto; # Typically matches CPU cores
worker_connections 4096; # Allows 16,384 total connections (4×4096)

System Limitations:


# Check your system's file descriptor limit:
ulimit -n

# Temporary increase for testing:
ulimit -n 65536
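
To make the higher limit permanent for a systemd-managed nginx, one common approach (unit and path names may differ on your distribution) is a service override created with systemctl edit nginx:

# systemd drop-in, e.g. /etc/systemd/system/nginx.service.d/override.conf
[Service]
LimitNOFILE=65536

# Pick up the override and restart
sudo systemctl daemon-reload
sudo systemctl restart nginx

Alternatively, worker_rlimit_nofile in nginx.conf raises the limit for the worker processes directly.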

Memory Impact: Each connection consumes ~256 bytes (roughly double with SSL). At 10,000 connections:


# Memory estimation:
10000 connections × 0.5KB ≈ 5MB RAM per worker
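
In practice, per-connection memory is dominated by request, proxy, and SSL buffers rather than these core connection structures, so treat the figure above as a lower bound.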

Medium Traffic Server (2CPU/4GB):


worker_processes 2;
events {
    worker_connections 2048;
    use epoll;
    multi_accept on;
}

High Traffic Server (8CPU/32GB):


worker_processes auto;
worker_rlimit_nofile 30000; # main-context directive (not valid inside events)

events {
    worker_connections 8192;
}

Check current connection usage:


# Monitor active connections:
watch -n 1 "netstat -an | grep :80 | wc -l"

# Nginx status module (requires compilation with --with-http_stub_status_module)
location /nginx_status {
    stub_status;
}
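
With that location in place, the counters can be read locally (consider also restricting the endpoint with allow/deny from ngx_http_access_module):

# Query the status endpoint
curl -s http://127.0.0.1/nginx_status

The "Active connections" line should stay comfortably below worker_processes × worker_connections.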

Combine with these directives for optimal performance:


http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    keepalive_requests 100;
}
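
One interaction worth remembering: idle keepalive client connections still occupy a worker_connections slot until keepalive_timeout expires, so very long timeouts reduce the headroom available for new clients.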