During recent load testing with ApacheBench (ab), I encountered a curious limitation where the server stopped accepting new connections after approximately 11,000 requests. The netstat
output revealed thousands of connections stuck in TIME_WAIT state:
# netstat --inet -p | grep "localhost:www" | awk '{print $6}' | sort | uniq -c
11651 TIME_WAIT
5 SYN_SENT
Several kernel parameters collectively determine how many TCP connections the system can sustain:
# Key parameters affecting connection limits
sysctl net.ipv4.ip_local_port_range # Default: 32768-60999 (28232 ports)
sysctl net.ipv4.tcp_fin_timeout # Default: 60 seconds
sysctl net.core.somaxconn # Default: 128 (4096 since kernel 5.4)
sysctl net.ipv4.tcp_max_tw_buckets # Default: scales with memory (often 16384 or higher)
The relationship can be approximated as:
max new connections per second ≈ ephemeral_port_count / TIME_WAIT_duration
because each closed client connection keeps its local port reserved for the full TIME_WAIT period before it can be handed out again.
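As a rough sanity check, the bound can be computed from the live sysctl values. The 60-second figure below is the kernel's fixed TIME_WAIT length (TCP_TIMEWAIT_LEN), assumed here rather than read from sysctl:
#!/bin/sh
# Estimate the sustainable rate of new outbound connections to a single destination
read low high < /proc/sys/net/ipv4/ip_local_port_range
ports=$((high - low + 1))
tw=60   # TIME_WAIT length in seconds (compile-time kernel constant)
echo "ephemeral ports: $ports"
echo "approx. new connections/sec: $((ports / tw))"
# With the defaults (32768-60999) this gives 28232 / 60 ≈ 470 connections per second.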
For a high-traffic web server, consider these adjustments in /etc/sysctl.conf:
# Increase available local ports
net.ipv4.ip_local_port_range = 1024 65535
# Shorten the FIN-WAIT-2 timeout (TIME_WAIT itself is fixed at 60 s by the kernel)
net.ipv4.tcp_fin_timeout = 30
# Enable socket reuse
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 0 # Breaks clients behind NAT; removed in kernel 4.12
# Increase connection backlog
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 8192
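These can be applied without a reboot; the commands below assume the stock sysctl utility and the default config path:
# Reload /etc/sysctl.conf and spot-check the values that matter here
sysctl -p /etc/sysctl.conf
sysctl net.ipv4.ip_local_port_range net.ipv4.tcp_fin_timeout net.ipv4.tcp_tw_reuse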
Combine kernel tuning with these Apache directives in httpd.conf:
# MPM Worker Configuration Example
StartServers 4
MinSpareThreads 64
MaxSpareThreads 128
ThreadLimit 256
ThreadsPerChild 64
# ServerLimit x ThreadsPerChild must be >= MaxRequestWorkers (64 x 64 = 4096)
ServerLimit 64
MaxRequestWorkers 4000
MaxConnectionsPerChild 10000
# Enable KeepAlive
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100
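Before relying on these directives, confirm which MPM is actually active and that the file parses; the control binary may be called apachectl, apache2ctl, or httpd depending on the distribution:
apachectl -V | grep -i mpm    # shows the MPM in use (event, worker, or prefork)
apachectl configtest          # syntax check before a graceful restart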
Essential commands for connection analysis:
# Real-time connection tracking
watch -n 1 'ss -s | grep "TCP:"; echo "TIME_WAIT: $(ss -tan state time-wait | tail -n +2 | wc -l)"'
# Kernel parameter verification
sysctl -a | grep -E 'port_range|tw_reuse|fin_timeout|max_tw_buckets'
For extreme performance scenarios:
# Favor latency over throughput (a no-op on recent kernels)
net.ipv4.tcp_low_latency = 1
# Socket buffer sizes: min, default, max (bytes)
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Increase file descriptor limits
echo "* soft nofile 1048576" >> /etc/security/limits.conf
echo "* hard nofile 1048576" >> /etc/security/limits.conf
When increasing connection limits:
- Monitor memory usage (each TCP socket consumes ~3-10KB)
- Implement rate limiting (e.g., iptables or nftables rules; see the sketch after this list)
- Consider using SYN cookies (net.ipv4.tcp_syncookies=1)
- Configure connection tracking timeouts appropriately
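A minimal nftables sketch of the rate-limiting idea, assuming HTTP on port 80 and a ceiling of 200 new connections per second (both values are placeholders to tune for real traffic):
# Drop new connections to port 80 beyond roughly 200/s
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy accept; }'
nft add rule inet filter input tcp dport 80 ct state new limit rate over 200/second counter drop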
When load testing web servers, you'll quickly run into Linux's default TCP connection limits. Running ab -c 5 -n 50000 http://localhost/ reveals the issue: connections stall around 11,000 requests with sockets stuck in TIME_WAIT, as the netstat output shows:
# netstat --inet -p | grep "localhost:www" | sed -e 's/ \+/ /g' | cut -d' ' -f 1-4,6-7 | sort | uniq -c
11651 tcp 0 0 localhost:www TIME_WAIT -
Four critical parameters govern TCP connection limits:
# sysctl -a | grep -E 'net.ipv4.tcp_max_tw_buckets|somaxconn|tw_reuse|file-max'
net.ipv4.tcp_max_tw_buckets = 32768
net.core.somaxconn = 128
net.ipv4.tcp_tw_reuse = 0
fs.file-max = 792956
For production web servers handling 50K+ connections:
# /etc/sysctl.conf optimizations
net.ipv4.tcp_max_tw_buckets = 180000
net.core.somaxconn = 32768
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
fs.file-max = 2097152
net.ipv4.ip_local_port_range = 1024 65535
Combine kernel tuning with web server settings. For Apache:
# httpd.conf
MaxKeepAliveRequests 100
KeepAliveTimeout 5
# MaxRequestWorkers is the Apache 2.4 name for MaxClients
MaxRequestWorkers 500
ServerLimit 500
For Nginx:
# nginx.conf
worker_rlimit_nofile 30000;
events {
    use epoll;
    multi_accept on;
    worker_connections 20000;
}
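After editing, run a quick syntax check and reload (assumes nginx is on the PATH and already running):
nginx -t           # parse the configuration without applying it
nginx -s reload    # signal the running master to re-spawn workers with the new settings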
Create a real-time connection monitor:
#!/bin/bash
watch -n 1 'netstat -n | awk '\''/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'\'''
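netstat is deprecated on most current distributions; the same breakdown works with ss (state is the first column of ss -tan output, so the awk stays almost identical):
# Per-state connection counts using ss instead of netstat
ss -tan | awk 'NR>1 {++s[$1]} END {for (state in s) print state, s[state]}'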
Additional tuning options:
- Increase file descriptors (ulimit -n 100000)
- Enable TCP Fast Open (net.ipv4.tcp_fastopen = 3); see the check after this list
- Adjust TCP keepalive (net.ipv4.tcp_keepalive_time = 300)
- Implement connection pooling in applications
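A quick end-to-end check for TCP Fast Open, assuming curl 7.49+ and a listener on localhost port 80 (the URL is illustrative):
# Enable TFO for both client (1) and server (2) sides, then request twice -
# once a cookie is cached, the second request carries data in the SYN
sysctl -w net.ipv4.tcp_fastopen=3
curl --tcp-fastopen -s -o /dev/null -w '%{time_total}\n' http://localhost/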
For extreme workloads (>100K connections):
- Implement reverse proxy with HAProxy/Nginx (minimal HAProxy sketch below)
- Use multiple IP addresses
- Consider kernel bypass techniques (DPDK, XDP)
- Evaluate cloud load balancers
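A minimal HAProxy sketch for the reverse-proxy option; addresses, ports, and maxconn values are placeholders:
# haproxy.cfg (sketch)
global
    maxconn 100000

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend web_in
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin
    server app1 10.0.0.11:8080 maxconn 20000
    server app2 10.0.0.12:8080 maxconn 20000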