HTTP Reverse Proxy Keep-Alive Optimization: Client vs. Server Side Configuration in HAProxy, Nginx, and Other Load Balancers



When dealing with satellite connections that introduce ~600 ms of latency, TCP connection reuse becomes crucial. HTTP keep-alive allows multiple requests to be sent over a single TCP connection, eliminating a three-way handshake per request. A common strategy is to keep client-side connections persistent while closing server-side connections after each response:

# HAProxy configuration example
frontend http-in
    mode http
    timeout client 60s
    option http-keep-alive     # keep client-side connections open

backend servers
    mode http
    timeout server 30s
    option http-server-close   # close the server-side connection after each response
                               # (note: "option httpclose" would close BOTH sides)
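The client-side half of this pattern can be demonstrated with Python's standard library: `http.client` keeps the TCP connection open across requests when the server permits it. The local test server, port, and request counts below are purely illustrative stand-ins for a real proxy frontend.

```python
# Sketch: count how many TCP connections a client opens with and without reuse.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

connections_seen = set()

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 responses allow keep-alive

    def do_GET(self):
        # The client's ephemeral source port identifies the TCP connection.
        connections_seen.add(self.client_address[1])
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Keep-alive: one HTTPConnection reuses a single socket for all requests.
conn = http.client.HTTPConnection("127.0.0.1", port)
for _ in range(5):
    conn.request("GET", "/")
    conn.getresponse().read()
conn.close()
reused_count = len(connections_seen)

# Without reuse: a fresh connection (and handshake) per request.
connections_seen.clear()
for _ in range(5):
    c = http.client.HTTPConnection("127.0.0.1", port)
    c.request("GET", "/")
    c.getresponse().read()
    c.close()
fresh_count = len(connections_seen)
server.shutdown()

print(f"connections for 5 keep-alive requests: {reused_count}")
print(f"connections for 5 one-shot requests:   {fresh_count}")
```

On a high-latency link, each of those extra connections in the one-shot case would cost a full round trip before the request could even be sent.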

Nginx implements similar functionality through its keepalive directives. It maintains persistent connections with clients, and by default it already opens a fresh upstream connection for every request:

http {
    keepalive_timeout 65;  # Client-side keep-alive

    upstream backend {
        server 10.0.0.1;
        # No "keepalive" directive: nginx opens a new upstream
        # connection for each request (the default; "keepalive 0"
        # is not a valid value).
    }

    server {
        location / {
            proxy_pass http://backend;
            # To enable upstream keep-alive instead, you would add
            # "keepalive N;" to the upstream block plus:
            #   proxy_http_version 1.1;
            #   proxy_set_header Connection "";
        }
    }
}

This asymmetric connection handling pattern appears across various solutions:

  • AWS ALB: Maintains client keep-alive (default 60s) while creating new backend connections
  • F5 BIG-IP: Configurable through OneConnect profile and HTTP profile settings
  • Traefik: Configurable through serversTransport options such as maxIdleConnsPerHost

For satellite connections with 600 ms RTT, client-side keep-alive pays the TCP handshake cost once instead of once per request:

Scenario              Time for 10 requests
Without keep-alive    10 × (600 ms handshake + 600 ms request) = 12 s
With keep-alive       600 ms handshake + 10 × 600 ms request ≈ 6.6 s
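The arithmetic above can be sketched as a simple model, assuming one RTT for the TCP handshake and one RTT per request/response (TLS, queuing, and transfer time ignored):

```python
# Back-of-envelope model of the latency table above; numbers are
# illustrative, not measurements.
RTT = 0.600    # satellite round-trip time in seconds
REQUESTS = 10

def total_time(requests: int, rtt: float, keep_alive: bool) -> float:
    """Each handshake costs one RTT; each request/response costs one RTT."""
    handshakes = 1 if keep_alive else requests
    return handshakes * rtt + requests * rtt

without_ka = total_time(REQUESTS, RTT, keep_alive=False)
with_ka = total_time(REQUESTS, RTT, keep_alive=True)
print(f"without keep-alive: {without_ka:.1f}s")   # 12.0s
print(f"with keep-alive:    {with_ka:.1f}s")      # 6.6s
print(f"saved:              {without_ka - with_ka:.1f}s")
```

The savings grow linearly with request count: every reused connection avoids one handshake RTT.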

When implementing this pattern, consider:

# HAProxy optimized settings (these tune.* directives belong in the global section)
global
    tune.ssl.default-dh-param 2048  # DH parameter size for TLS key exchange
    tune.bufsize 16384              # per-connection buffer size, in bytes
    tune.http.maxhdr 64             # maximum number of HTTP headers accepted

The memory footprint increases with keep-alive connections, requiring proper tuning of connection pools and timeouts.
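As a rough sizing sketch: assume each connection can hold about two I/O buffers of `tune.bufsize` plus roughly 1 KiB of session state. These are ballpark assumptions for planning, not HAProxy-documented constants.

```python
# Rough memory estimate for idle keep-alive connections.
BUFSIZE = 16384          # matches tune.bufsize above
SESSION_OVERHEAD = 1024  # assumed per-connection bookkeeping, in bytes

def memory_for(connections: int) -> int:
    """Worst-case bytes if every connection holds two full buffers."""
    return connections * (2 * BUFSIZE + SESSION_OVERHEAD)

idle = 10_000
mib = memory_for(idle) / (1024 * 1024)
print(f"{idle} idle keep-alive connections ≈ {mib:.0f} MiB")
```

Even this crude estimate shows why long keep-alive timeouts must be paired with connection limits (e.g. maxconn) on memory-constrained proxies.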


When implementing HTTP reverse proxies, the keep-alive behavior can significantly impact performance, especially in high-latency environments like satellite connections (~600ms). The ability to configure keep-alive separately for client and server connections is a powerful optimization technique.

HAProxy provides explicit control through these directives:


# Enable keep-alive for client connections
defaults
    timeout http-keep-alive 300s   # max idle time waiting for a new request
    option http-keep-alive

# Close server-side connections after each response
backend webservers
    option http-server-close       # "option httpclose" would close both sides
    server s1 192.168.1.1:80

Nginx achieves similar behavior through different mechanisms:


http {
    keepalive_timeout 75s;  # Client-side keep-alive
    keepalive_requests 100; # Max requests per client connection

    upstream backend {
        server 10.0.0.1:80;
        # No "keepalive" directive: each request gets a new
        # upstream connection (nginx's default behavior).
    }
}

For satellite connections (600 ms RTT), enabling client-side keep-alive while disabling server-side keep-alive:

  • Saves one RTT (~600 ms) of TCP handshake per request after the first
  • Avoids repeated TLS handshakes (one to two additional RTTs each) for HTTPS connections
  • Keeps backend connection counts predictable, since the proxy holds no idle server-side connections

Software   Client Keep-Alive         Server Keep-Alive Off        Config Syntax
HAProxy    option http-keep-alive    option http-server-close     Explicit
Nginx      keepalive_timeout         omit upstream "keepalive"    Implicit
Apache     KeepAlive On              ProxySet disablereuse=On     Mixed

When configuring for satellite connections, consider these additional parameters:


# HAProxy tuning for high latency
defaults
    timeout connect 5s   # comfortably covers the ~600 ms handshake plus retries
    timeout client 60s
    timeout server 60s
    retries 3

Verify your configuration with these commands:


# For HAProxy (requires a "stats socket" directive in the global section)
echo "show info" | socat stdio /var/run/haproxy.sock
# For Nginx
nginx -T | grep keepalive

Network packet analysis tools like Wireshark can confirm the actual TCP connection behavior between client-proxy and proxy-server segments.