In Nginx's worker processes, the `multi_accept` directive controls how connections are accepted from the listen queue. When set to `off` (the default), each worker accepts only one new connection per event loop iteration. When set to `on`, workers accept all available connections in the queue at once.
```nginx
events {
    worker_connections 1024;
    multi_accept off;   # Default
}
```
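To make the mechanics concrete, here is a minimal single-worker sketch (illustrative Python, not nginx's actual C implementation; the port, backlog, and canned HTTP response are arbitrary choices for the example). With `MULTI_ACCEPT = False` the loop accepts one connection per event-loop wakeup, mirroring `multi_accept off`; setting it to `True` drains the listen backlog before returning to the loop, mirroring `multi_accept on`.

```python
# Minimal sketch of one worker's event loop (not nginx source code).
import selectors
import socket

MULTI_ACCEPT = False  # flip to True to emulate "multi_accept on"

sel = selectors.DefaultSelector()
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 8080))
listener.listen(511)              # nginx's default listen backlog on Linux
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

def accept_connections(sock: socket.socket) -> None:
    """Accept one pending connection, or keep going until the queue is empty."""
    while True:
        try:
            conn, _addr = sock.accept()
        except BlockingIOError:
            return                                # backlog is drained
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ)
        if not MULTI_ACCEPT:
            return                                # off: one accept per wakeup

def handle(conn: socket.socket) -> None:
    data = conn.recv(4096)
    if not data:                                  # client closed the connection
        sel.unregister(conn)
        conn.close()
        return
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")

while True:
    for key, _events in sel.select():
        if key.fileobj is listener:
            accept_connections(listener)
        else:
            handle(key.fileobj)
```

Because the listening socket stays readable while connections remain queued, the `off`-style loop still drains the backlog eventually, just one wakeup at a time.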
The default `off` setting reflects Nginx's balanced approach to connection handling:
- Prevents worker-process starvation when connection bursts occur
- Maintains fair distribution of connections across worker processes
- Reduces context-switching overhead
Consider `multi_accept on` in these scenarios:
- Applications expecting sustained high connection rates
- Environments with very fast network interfaces (10 Gbps+)
- HTTP/2 or WebSocket deployments with many concurrent clients

```nginx
events {
    multi_accept on;    # For specialized cases
    accept_mutex off;   # Often used together
}
```
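Whether these workloads actually benefit is best settled by measurement. The sketch below is a hedged example, not a prescribed method: the host, port, and connection count are placeholders, and it simply fires a burst of concurrent HTTP requests at a test instance so the two settings can be compared under identical load.

```python
# Illustrative burst generator for comparing multi_accept settings on a test
# server. HOST, PORT, and CONNECTIONS are placeholders; a burst this large may
# require raising the open-file limit (ulimit -n) on the client machine.
import asyncio

HOST, PORT, CONNECTIONS = "127.0.0.1", 8080, 1000

async def one_request() -> None:
    reader, writer = await asyncio.open_connection(HOST, PORT)
    writer.write(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
    await writer.drain()
    await reader.read()              # read until the server closes the connection
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    results = await asyncio.gather(*(one_request() for _ in range(CONNECTIONS)),
                                   return_exceptions=True)
    failures = sum(isinstance(r, Exception) for r in results)
    print(f"{CONNECTIONS - failures}/{CONNECTIONS} requests completed")

asyncio.run(main())
```

For production-grade numbers a dedicated load generator such as wrk or ab is a better fit; the point here is only to reproduce the burst pattern the directive is meant to handle.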
Testing shows mixed results depending on workload:

| Configuration | Requests/sec | Latency |
|---|---|---|
| `multi_accept off` | 32,500 | 2.3 ms |
| `multi_accept on` | 33,100 | 2.5 ms |

The roughly 2% throughput gain may not justify the roughly 9% latency increase for most applications.
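These percentages follow directly from the table above; a quick check:

```python
# Percentage changes implied by the benchmark table above.
off_rps, on_rps = 32_500, 33_100
off_lat, on_lat = 2.3, 2.5                               # milliseconds

throughput_gain = (on_rps - off_rps) / off_rps * 100     # ~1.8%
latency_increase = (on_lat - off_lat) / off_lat * 100    # ~8.7%
print(f"throughput gain:  {throughput_gain:.1f}%")
print(f"latency increase: {latency_increase:.1f}%")
```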
Beyond the raw numbers, the default `off` setting prevents worker processes from being overwhelmed by sudden connection spikes. Consider a scenario with 4 workers and 1,000 concurrent connections:
With `multi_accept off` (default):
- Connections are distributed evenly across workers
- Each worker processes ~250 connections sequentially
- Prevents single-worker saturation

With `multi_accept on`:
- The first available worker might accept all 1,000 connections
- Creates uneven load distribution
- May cause latency spikes
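A toy simulation makes the skew concrete. Its assumptions are mine, not nginx's actual scheduling: workers are modeled as waking in strict rotation, and with `multi_accept on` the first worker to wake drains the entire pending backlog. Real behavior also depends on the kernel and on `accept_mutex`.

```python
# Toy model of how a 1,000-connection burst lands on 4 workers under the two
# accept policies. Strict round-robin wakeups are a simplifying assumption.
from collections import Counter
from itertools import cycle

WORKERS = 4
BURST = 1000

def distribute(multi_accept: bool) -> Counter:
    queue = BURST
    load = Counter({w: 0 for w in range(WORKERS)})
    for worker in cycle(range(WORKERS)):          # workers wake up in turn
        if queue == 0:
            break
        if multi_accept:
            load[worker] += queue                 # "on": drain the whole backlog
            queue = 0
        else:
            load[worker] += 1                     # "off": one accept per wakeup
            queue -= 1
    return load

print("multi_accept off:", dict(distribute(False)))   # {0: 250, 1: 250, 2: 250, 3: 250}
print("multi_accept on: ", dict(distribute(True)))    # {0: 1000, 1: 0, 2: 0, 3: 0}
```

Under these assumptions `off` ends near 250 connections per worker, while `on` piles the entire burst onto a single worker, which is exactly the imbalance described in the list above.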
As noted earlier, enable `multi_accept on` only in specific scenarios:

```nginx
events {
    multi_accept on;    # Only recommended for:
                        #   - low connection variability
                        #   - high-performance servers
                        #   - deployments running with accept_mutex off
}
```
A second test, on an 8-core server with 10,000 concurrent connections, showed the opposite trend: `multi_accept on` reduced throughput and raised tail latency.

| Setting | Requests/sec | Latency (p95) |
|---|---|---|
| `multi_accept off` | 12,457 | 32 ms |
| `multi_accept on` | 11,982 | 48 ms |
Combine `multi_accept` with other event directives for optimal performance:

```nginx
events {
    worker_connections 4096;
    multi_accept off;   # Default, recommended for most cases
    use epoll;          # For Linux systems
    accept_mutex on;    # Prevents thundering herd
}
```