HAProxy's three fundamental timeout parameters form the backbone of connection management:
timeout connect 10s # Maximum time to establish backend connection
timeout client 1m # Maximum client inactivity time
timeout server 1m # Maximum server response time
Timeout tuning is the art of balancing resource conservation against user experience. Consider these technical factors:
- Network topology: Multi-AZ deployments need higher connect timeouts
- Application behavior: Long-polling endpoints require adjusted client timeouts (see the sketch after this list)
- Protocol differences: HTTP/2 connections often need longer timeouts than HTTP/1.1
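To make the long-polling point concrete, here is a minimal sketch (the /poll prefix, the 60-second hold time, and the addresses are assumptions, not values from this article): requests to long-poll endpoints are routed to a backend whose server timeout exceeds the poll hold time.
frontend api_front
    bind :80
    timeout client 70s                      # must exceed the assumed 60s poll hold time
    acl is_long_poll path_beg /poll         # assumed long-polling path prefix
    use_backend long_poll if is_long_poll
    default_backend api

backend api
    timeout server 30s                      # normal request/response traffic
    server app1 10.0.0.10:8080 check

backend long_poll
    timeout server 70s                      # held responses up to ~60s plus buffer
    server app1 10.0.0.10:8080 check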
For a high-traffic API gateway with a microservices backend:
# API Gateway Profile
timeout connect 15s # Accommodates service discovery
timeout client 30s # Matches mobile SDK keepalive
timeout server 45s # Allows for downstream processing
For an e-commerce frontend serving static content:
# Web Frontend Profile
timeout connect 5s # Fast CDN connections
timeout client 2m # Accommodates slow mobile users
timeout server 10s # Static content should respond quickly
Short timeout risks:
- Premature connection drops during network blips
- Increased TCP handshake overhead
- Higher 504 error rates
Long timeout consequences:
- Resource exhaustion during traffic spikes
- Slower failure detection
- Potential DoS vulnerability (a mitigation sketch follows this list)
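One way to bound those consequences, sketched below with assumed values, is to pair generous client/server timeouts with a request-header timeout and a connection cap, as the API gateway profile later in this section also does:
frontend guarded
    bind :80
    maxconn 5000                  # assumed cap on concurrent frontend connections
    timeout http-request 10s      # limits slow-header (slowloris-style) clients
    timeout client 2m             # a long client timeout stays affordable behind the caps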
For dynamic timeout adjustment, HAProxy's timeout keywords only accept static values, but since version 2.4 the server timeout can be overridden per request with http-request set-timeout. A sketch of that approach (the path and values are assumptions):
backend dynamic_timeouts
    timeout connect 10s
    timeout server 30s                              # static default
    acl is_slow_endpoint path_beg /reports          # assumed slow path
    # Raise the server timeout only for requests matching the ACL (HAProxy 2.4+)
    http-request set-timeout server 90s if is_slow_endpoint
When debugging timeout-related problems, query the runtime API over the stats/admin socket (the socket path below is an assumed default):
# Diagnostic commands issued via the stats socket, e.g. with socat
echo "show sess"     | socat stdio /var/run/haproxy.sock   # active sessions and their timers
echo "show errors"   | socat stdio /var/run/haproxy.sock   # captured invalid requests/responses
echo "show activity" | socat stdio /var/run/haproxy.sock   # internal activity counters
Key metrics to monitor (a log-format sketch for surfacing them follows this list):
- Connection attempt timeouts
- Client abort rates
- Backend queue times
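A sketch of how those metrics can be surfaced per request with a custom log line (variable names follow HAProxy's HTTP log-format documentation; the exact field selection is an assumption):
frontend main
    mode http
    # Tq = request receive time, Tw = queue wait, Tc = connect time, Tr = server response,
    # Tt = total; %tsc is the termination state (e.g. cD = client timeout, sC = failed/timed-out connect)
    log-format "%ci:%cp [%tr] %ft %b/%s Tq=%Tq Tw=%Tw Tc=%Tc Tr=%Tr Tt=%Tt %ST %tsc"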
HAProxy's timeout settings directly impact both performance and reliability. The three fundamental timeouts (client, connect, and server) serve as circuit breakers to prevent resource starvation. Leaving them unset creates operational risks, which is why HAProxy generates warnings.
# Minimum recommended configuration
timeout connect 5s # Max time to establish backend connection
timeout client 30s # Max client inactivity period
timeout server 30s # Max server response time
Connect timeout: This should reflect your network's typical TCP handshake duration. For cloud environments, 3-10 seconds accommodates most scenarios. Shorter values may drop legitimate connections during transient latency spikes.
Client/server timeouts: These should consider your application's behavior:
- APIs: Match expected response times plus buffer (e.g., 95th percentile + 30%)
- File transfers: Scale based on file sizes and bandwidth
- Websockets: May require much longer values (hours or days); see the tunnel-timeout sketch after this list
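For the WebSocket case, the relevant knob is timeout tunnel, which takes over once a connection has been upgraded. A sketch with assumed values:
defaults
    timeout connect 5s
    timeout client 30s        # ordinary HTTP traffic
    timeout server 30s
    timeout tunnel 4h         # assumed idle limit for upgraded (WebSocket) connections
    timeout client-fin 30s    # bounds the half-closed state when clients vanish abruptly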
Follow this decision framework:
# Step 1: Baseline measurement
frontend main
    timeout client 1h # Temporary high value for metrics collection
    option httplog    # Full timing fields (Tq/Tw/Tc/Tr/Tt) for analysis
# Step 2: Analyze logs to determine:
# - Average request duration
# - 99th percentile response time
# - Connection establishment patterns
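A possible third step, assuming the analysis showed a 99th-percentile response time of roughly 20s and connect times well under a second, is to derive production values from those measurements (the numbers here are illustrative, not recommendations):
# Step 3: apply values derived from the measurements
defaults
    timeout connect 5s     # comfortably above the observed handshake times
    timeout client  30s    # p99 request duration plus buffer
    timeout server  26s    # ~20s p99 response time + ~30% buffer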
| Timeout | Too Short | Too Long |
|---|---|---|
| connect | Increased connection errors | Slow failure detection |
| client | User experience degradation | Resource exhaustion |
| server | Premature request termination | Backend congestion |
# API Gateway Scenario
timeout http-request 5s # Protect against slow HTTP attacks
timeout connect 3s # Fast fail for unhealthy backends
timeout client 30s # Accommodates mobile latency
timeout server 15s # Enforces SLO for microservices
# Legacy Application Migration
timeout client-fin 1m # Graceful connection closing
timeout tunnel 1h # Support long-running sessions
Symptom: 503 errors during traffic spikes
Solution: Increase the connect timeout while scaling connection pools
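A sketch of that remediation with assumed values: a more forgiving connect timeout, retries with redispatch for transient connect failures, and larger per-server connection pools so requests queue briefly instead of failing.
backend api_servers
    timeout connect 10s                 # raised while the spike is absorbed
    timeout queue 15s                   # how long a request may wait for a free server slot
    retries 3                           # retry transient connection failures
    option redispatch                   # allow a retry to pick a different server
    server app1 10.0.0.11:8080 maxconn 400 check   # assumed per-server pool size
    server app2 10.0.0.12:8080 maxconn 400 check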
Symptom: Client disconnections during large downloads
Solution: Implement tiered timeouts:
frontend downloads
    timeout client 1h
    acl is_large_file path_beg /largefiles/
    use_backend big_files if is_large_file

backend big_files
    timeout server 4h