When dealing with HTTP floods and reverse proxy timeouts, the standard Nginx error log only tells part of the story. Here's how to dig deeper when you see those frustrating connect() failed (110: Connection timed out) messages.
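To get a sense of scale before digging in, you can tally which upstreams are timing out (this assumes the default error log path; adjust if yours differs):
# Count connect() timeouts per upstream address
grep 'connect() failed (110' /var/log/nginx/error.log | grep -o 'upstream: "[^"]*"' | sort | uniq -c | sort -rn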
First, rebuild Nginx with debug logging support:
# Reconfigure Nginx with debug-level logging enabled (run from the Nginx source directory)
./configure --with-debug
make
make install
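Before relying on it, confirm the running binary actually has debug support:
# Prints 'with-debug' if the build supports debug-level logging
nginx -V 2>&1 | grep -o with-debug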
Use these commands to monitor connections in real-time:
# Show active connections to upstream
ss -tnp | grep nginx
# Monitor TCP retransmits
watch -n 1 "netstat -s | grep -i retrans"
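For a per-state breakdown of the proxy-to-upstream sockets specifically, ss can filter by destination port (assuming the backend listens on 8080, as in the config below):
# Count upstream-bound connections by TCP state
ss -tan 'dport = :8080' | awk 'NR > 1 {print $1}' | sort | uniq -c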
Add these directives to your nginx.conf for better timeout handling:
upstream backend {
    server backend.example.com:8080;
    keepalive 32;               # idle upstream connections cached per worker
    keepalive_timeout 30s;
    keepalive_requests 100;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # clear the Connection header so keepalive survives
        proxy_connect_timeout 5s;
        proxy_send_timeout 10s;
        proxy_read_timeout 30s;
        proxy_buffer_size 16k;
        proxy_buffers 4 32k;
    }
}
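After any config change, validate and reload gracefully so in-flight requests aren't dropped:
nginx -t && nginx -s reload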
For severe connection storms, adjust these sysctl parameters:
# Increase TCP SYN backlog
echo 4096 > /proc/sys/net/ipv4/tcp_max_syn_backlog
# Enable SYN cookies
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
# Reduce FIN timeout
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
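Writes to /proc do not survive a reboot; to make the settings permanent, drop the same values into a sysctl.d file (the filename here is arbitrary):
cat > /etc/sysctl.d/99-nginx-flood.conf <<'EOF'
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_fin_timeout = 30
EOF
sysctl --system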
Attach strace to Nginx worker processes to see system calls:
# Quote the substitution so strace receives the whole PID list as one argument
strace -p "$(pgrep -f 'nginx: worker')" -s 1024 -tt -o nginx_trace.log
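The trace grows fast; a quick filter on connect() calls shows whether workers are piling up nonblocking connects that never complete:
# Nonblocking connect() attempts show EINPROGRESS; a flood of them with no
# completed handshakes points at the upstream stalling
grep 'connect(' nginx_trace.log | tail -20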
Implement rate limiting to protect your upstream servers:
# limit_req_zone must sit in the http { } context
limit_req_zone $binary_remote_addr zone=flood:10m rate=10r/s;

server {
    location / {
        limit_req zone=flood burst=20 nodelay;
        proxy_pass http://backend;
    }
}
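A quick way to sanity-check the limiter is to hammer it and count response codes (localhost here stands in for your proxy address); requests beyond the burst allowance should come back as 503:
for i in $(seq 1 50); do curl -s -o /dev/null -w '%{http_code}\n' http://localhost/; done | sort | uniq -c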
Use tcpdump to capture traffic between proxy and upstream:
tcpdump -i eth0 -w nginx_debug.pcap port 8080 and host backend.example.com
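When reading the capture back, unanswered or repeated SYNs toward the backend are the smoking gun for connect() timeouts:
# Show bare SYNs (no ACK flag); the same source port SYNing repeatedly means retransmission
tcpdump -nn -r nginx_debug.pcap 'tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0'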
Check if you're hitting file descriptor limits:
watch -n 1 "grep 'open files' /proc/$(pgrep -f 'nginx: worker' | head -1)/limits"
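If the descriptor limit is the bottleneck, raise it both in Nginx and in the service manager (the systemd drop-in path below is one common layout, not a requirement):
# nginx.conf, main context
worker_rlimit_nofile 65535;

# /etc/systemd/system/nginx.service.d/limits.conf
[Service]
LimitNOFILE=65535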
Consider compiling Nginx with additional diagnostic modules:
./configure \
    --add-module=../nginx-debug-conn \
    --with-http_stub_status_module \
    --with-http_realip_module
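With stub_status compiled in, a locked-down status endpoint gives you live connection counters during a flood (the port and path here are arbitrary choices):
server {
    listen 127.0.0.1:8081;
    location /nginx_status {
        stub_status;   # active, reading, writing, and waiting connection counts
        allow 127.0.0.1;
        deny all;
    }
}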
When your Nginx reverse proxy starts throwing connect() failed (110: Connection timed out) errors during HTTP floods, it typically indicates upstream connection bottlenecks. The key symptom (direct backend access works while the proxy fails) points to proxy-specific limitations rather than backend unavailability.
1. Enable Debug-Level Logging:
error_log /var/log/nginx/error.log debug;

events {
    # debug_connection requires an Nginx binary built with --with-debug
    debug_connection [your_ip];
}
2. Monitor TCP Connections:
ss -s | grep -i timewait
# Count connections by TCP state
netstat -ant | awk '{print $6}' | sort | uniq -c
3. Upstream Timeout Adjustments:
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
keepalive_timeout 75s;
4. Connection Pool Optimization:
upstream backend {
    server 10.0.0.1:8080;
    keepalive 32;  # idle upstream connections cached per worker; unrelated to worker_connections
}
5. Rate Limiting:
limit_req_zone $binary_remote_addr zone=flood:10m rate=10r/s;

server {
    limit_req zone=flood burst=20 nodelay;
    # ... other config
}
6. Kernel Tuning:
sysctl -w net.ipv4.tcp_tw_reuse=1
sysctl -w net.ipv4.tcp_fin_timeout=30
sysctl -w net.core.somaxconn=32768
# Raising somaxconn only helps if Nginx's listen backlog is raised too,
# e.g. listen 80 backlog=32768; in nginx.conf
During traffic spikes:
nginx -T | grep -i timeout # Verify current settings
tail -f /var/log/nginx/error.log | grep -i timeout
dstat -tcn --socket
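It's also worth watching the access log for 5xx bursts; this one-liner assumes the default combined log format, where the status code is the ninth field:
tail -f /var/log/nginx/access.log | awk '$9 ~ /^5/'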
For application-level throttling beyond limit_req, a Lua counter can reject abusive clients outright. A minimal sketch, assuming OpenResty (or the lua-nginx-module) and a shared dictionary named limit:
lua_shared_dict limit 10m;   # declared in the http { } context

location / {
    access_by_lua_block {
        -- count requests per client IP in a rolling 1-second window
        local dict = ngx.shared.limit
        local key  = ngx.var.binary_remote_addr
        -- incr with init=0 creates the key; the 1s init_ttl expires the window
        local rate, err = dict:incr(key, 1, 0, 1)
        if rate and rate > 100 then
            return ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)  -- 503
        end
    }
    proxy_pass http://backend;
}