HAProxy Failover Configuration: Switching Servers Only When Current Server Fails


Many admins need a specific failover behavior where HAProxy only switches traffic when the current active server fails, without automatic fallback when the previously failed server recovers. This creates a stable, predictable traffic pattern that only changes when forced by server unavailability.

HAProxy can achieve this through careful backend server configuration and health checks. Here's a complete example:

backend app_backend
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200

    # Health check parameters (default-server must come before the servers it applies to)
    default-server inter 3s fall 3 rise 2

    # Primary server (SA) with higher weight
    server SA 192.168.1.10:80 check weight 100

    # Backup server (SB) with lower weight and backup flag
    server SB 192.168.1.11:80 check weight 1 backup

    option redispatch 1
    retries 3
    timeout connect 5s
    timeout server 30s
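
The backend on its own receives no traffic; it has to be referenced from a frontend or listen section. A minimal sketch, assuming HAProxy listens on port 80 (the frontend name is illustrative):

frontend app_front
    bind *:80
    default_backend app_backend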

Weight Differential: The primary server (SA) is given a much higher weight (100) than the backup (1). Note that it is the backup flag, not the weight, that keeps SA preferred whenever it is up; weights only matter between servers that are active at the same time.

Backup Flag: The 'backup' parameter on SB ensures it's only used when SA is unavailable.

Health Checks: Configured to detect failures quickly (3s interval; three consecutive failed checks mark a server down, two successful checks bring it back up).

Redispatch: Lets HAProxy retry a failed connection on another server instead of failing the request outright.
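
It is worth testing the failover before relying on it: stop the application on SA (or make /health return an error) and watch the server states flip. One way to watch them, assuming the stats socket used in the verification step further below (the status column position can vary between HAProxy versions):

# Print proxy name, server name and status from the stats CSV
echo "show stat" | socat stdio /var/run/haproxy.sock | awk -F',' '{print $1, $2, $18}'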

For more complex scenarios where you need to maintain session stickiness during failovers:

backend app_backend
    balance roundrobin
    stick-table type ip size 200k expire 30m
    stick on src

    server SA 192.168.1.10:80 check
    server SB 192.168.1.11:80 check backup

    # Clients are moved (and their stick entry updated) only when their current
    # server is down; after SA recovers, existing entries keep pointing to SB
    option redispatch
    # Default behavior: use only the first available backup
    no option allbackups

After implementing, verify the behavior (this assumes a stats socket is configured in the global section, e.g. stats socket /var/run/haproxy.sock level admin):

echo "show servers state" | socat stdio /var/run/haproxy.sock

Monitor the stats page to ensure traffic flows correctly and failovers happen only when intended.

If you need more complex failover logic (like geographic failover or multi-tiered fallbacks), consider:

  • Keepalived with VRRP
  • DNS-based failover solutions
  • Cloud provider load balancers with custom health checks

However, for most single-datacenter scenarios, HAProxy's native capabilities work perfectly.


When implementing high-availability services with HAProxy, a common requirement is to maintain persistent server assignment unless the active server fails. Traditional round-robin or health-check based load balancing doesn't satisfy this specific need where we want:

  • Permanent stickiness to the primary server (SA)
  • Failover only when SA becomes unavailable
  • No automatic fallback when SA recovers
  • Repeat the same behavior for secondary server (SB)

We'll implement this using HAProxy's stick-table combined with custom health checks and server state tracking:

global
    log /dev/log local0
    maxconn 4000
    stats socket /run/haproxy/admin.sock mode 660 level admin

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s

backend app_cluster
    # Cookie plus stick-table persistence: a client keeps the server it was
    # first assigned until that server goes down
    cookie SERVERID insert indirect nocache
    stick-table type string len 32 size 1m expire 30m
    stick on req.cook(SERVERID)

    # Primary server definition
    server SA 192.168.1.10:80 check cookie SA
    # Backup server: only receives traffic while SA is down
    server SB 192.168.1.11:80 check cookie SB backup

# Failover logic: use_backend rules are only valid in frontend/listen sections,
# so the ACL-based selection between explicit per-server backends lives here
frontend app_front
    bind *:80
    acl primary_available nbsrv(primary_server) gt 0
    use_backend backup_server if !primary_available
    default_backend primary_server

backend primary_server
    server SA 192.168.1.10:80 check

backend backup_server
    server SB 192.168.1.11:80 check
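
A quick, hypothetical way to confirm the cookie pinning from a client (replace the hostname with your HAProxy address):

# The Set-Cookie header names the server this client was pinned to
curl -si http://haproxy.example.com/ | grep -i set-cookie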

The magic happens through several coordinated features:

  1. Stick Tables: Maintains client-to-server mapping
  2. Backup Designation: SB starts as backup server
  3. Custom Health Checks: Longer check intervals for a more stable server state (see the sketch after this list)
  4. Manual State Control: Using the stats socket for admin control

For complete control, we can manage server states through HAProxy's runtime API:

# Stop sending automatic health checks to SA
echo "disable health app_cluster/SA" | socat stdio /run/haproxy/admin.sock

# Force server maintenance mode
echo "set server app_cluster/SB state maint" | socat stdio /run/haproxy/admin.sock

# Manual failover trigger: drain SA so new traffic shifts to SB
echo "set server app_cluster/SA state drain" | socat stdio /run/haproxy/admin.sock

Combine this with a watchdog script that monitors your application health beyond simple TCP/HTTP checks.
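
A minimal sketch of such a watchdog, assuming a deeper application health endpoint at /app-health and the admin socket defined above; the names, URL, and intervals are illustrative:

#!/bin/bash
# Hypothetical watchdog: checks the application beyond HAProxy's own checks
# and drains SA when the deeper check fails. No automatic fallback is performed.
SOCKET=/run/haproxy/admin.sock
APP_HEALTH_URL=http://192.168.1.10/app-health

while true; do
    if curl -fsS --max-time 5 "$APP_HEALTH_URL" > /dev/null; then
        : # application looks healthy; leave the server state alone
    else
        # Drain SA so new traffic shifts to SB
        echo "set server app_cluster/SA state drain" | socat stdio "$SOCKET"
    fi
    sleep 10
done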

If you need even more control, consider:

  • Keepalived: For IP-level failover
  • Custom Scripting: Using HAProxy's Lua integration
  • Consul: For service discovery with custom health policies