When working with cloud environments like AWS OpsWorks, we often encounter situations where backend server hostnames are dynamically managed. The traditional HAProxy configuration expects resolvable DNS entries at startup, which becomes problematic when:
- Auto-scaling groups scale down instances
- OpsWorks removes instances from /etc/hosts
- Maintenance windows temporarily take servers offline
Here are three battle-tested approaches we've implemented for resilient HAProxy configurations:
1. DNS Resolution with init-addr
server ops-ca-revealv2e-prod-1 ops-ca-revealv2e-prod-1:443 cookie ops-ca-revealv2e-prod-1 ssl weight 1 maxconn 512 check init-addr none
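With init-addr none and no resolvers section, HAProxy starts the server without an address (effectively down), so an address must be supplied later, for example via the runtime API. A minimal sketch, assuming the backend is named web_backend and the stats socket lives at /var/run/haproxy.sock (both are assumptions, not part of the config above):

```shell
# Build the runtime-API command that assigns an address to the
# addressless server; backend/server names and the IP are placeholders.
BACKEND="web_backend"
SERVER="ops-ca-revealv2e-prod-1"
NEW_ADDR="10.0.1.15"  # hypothetical instance IP
CMD="set server ${BACKEND}/${SERVER} addr ${NEW_ADDR} port 443"
echo "$CMD"
# With HAProxy running and a stats socket configured, send it with:
#   echo "$CMD" | socat stdio /var/run/haproxy.sock
```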
2. Combined DNS and Static IP Fallback
server ops-ca-revealv2e-prod-2 ops-ca-revealv2e-prod-2:443 cookie ops-ca-revealv2e-prod-2 ssl weight 1 maxconn 512 check resolvers mydns resolve-prefer ipv4 init-addr last,libc,none
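The init-addr methods are tried left to right at startup: last (the address recorded in a server-state file), libc (a normal blocking resolver lookup), then none (start the server down with no address). For last to do anything, server state must actually be saved and reloaded; a sketch of the prerequisite global/defaults settings (socket and file paths are assumptions):

```
global
    stats socket /var/run/haproxy.sock mode 600 level admin
    server-state-file /var/lib/haproxy/server-state

defaults
    load-server-state-from-file global
```

Before each reload, dump the current state with: echo "show servers state" | socat stdio /var/run/haproxy.sock > /var/lib/haproxy/server-state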
3. Full Dynamic DNS with a Resolvers Section
For complete dynamic DNS handling, set up a resolvers section:
resolvers mydns
    nameserver dns1 10.0.0.2:53
    nameserver dns2 10.0.0.3:53
    hold valid 10s
    timeout retry 1s
    accepted_payload_size 8192

backend web_backend
    server dynamic-server ${CURRENT_SERVER}:443 resolvers mydns resolve-prefer ipv4 check inter 5s
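HAProxy expands environment variables such as ${CURRENT_SERVER} when it parses the configuration, so the variable must be exported before the config is validated or the process is started. A small sketch (the value is an assumption):

```shell
# Export the variable HAProxy will substitute at parse time, then
# validate the configuration (the haproxy -c step needs haproxy installed):
export CURRENT_SERVER="ops-ca-revealv2e-prod-1"
# haproxy -c -f /etc/haproxy/haproxy.cfg
# The server line is effectively parsed as:
LINE="server dynamic-server ${CURRENT_SERVER}:443 resolvers mydns resolve-prefer ipv4 check inter 5s"
echo "$LINE"
```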
- Always combine both init-addr and resolvers for maximum reliability
- Set appropriate hold valid timers based on your DNS TTLs
- Monitor HAProxy runtime DNS resolution with
echo "show resolvers" | socat stdio /var/run/haproxy.sock
- Consider using AWS Route53 health checks with DNS failover for critical services
# Verify DNS resolution
dig +short ops-ca-revealv2e-prod-2
# Check HAProxy runtime configuration
haproxy -c -f /etc/haproxy/haproxy.cfg
# Test hostname resolution
getent hosts ops-ca-revealv2e-prod-2
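These checks can be wrapped into a small pre-reload guard so a missing /etc/hosts entry is caught before HAProxy restarts; a sketch, with the hostname list taken from the examples above:

```shell
# Print one line per hostname and track whether any failed to resolve;
# getent consults /etc/hosts as well as DNS, matching HAProxy's libc path.
check_host() {
    if getent hosts "$1" >/dev/null 2>&1; then
        echo "ok: $1"
    else
        echo "unresolved: $1"
        return 1
    fi
}

failed=0
for h in ops-ca-revealv2e-prod-1 ops-ca-revealv2e-prod-2; do
    check_host "$h" || failed=1
done
# exit "$failed"   # uncomment when running as a standalone guard script
```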
When working with HAProxy in cloud environments like AWS OpsWorks, we often face situations where backend servers are dynamically registered via hostnames in /etc/hosts. The fundamental issue occurs when HAProxy attempts to validate these hostnames during configuration parsing, especially when some instances are temporarily down.
By default, HAProxy performs DNS resolution during configuration parsing, not at runtime. This creates problems in dynamic environments where:
- Auto-scaling groups spin up/down instances
- OpsWorks updates /etc/hosts asynchronously
- Temporary network issues prevent DNS resolution
Option 1: Using DNS Resolvers with Hold Periods
Modern HAProxy versions (1.8+) support runtime DNS resolution:
resolvers mydns
    nameserver dns1 10.0.0.1:53
    hold valid 10s
backend my_backend
    server ops-ca-revealv2e-prod-1 ops-ca-revealv2e-prod-1:443 resolvers mydns cookie ops-ca-revealv2e-prod-1 ssl weight 1 maxconn 512 check
    server ops-ca-revealv2e-prod-2 ops-ca-revealv2e-prod-2:443 resolvers mydns cookie ops-ca-revealv2e-prod-2 ssl weight 1 maxconn 512 check
Option 2: IP Address Fallback Pattern
For environments where you might know the IP ranges, init-addr can list an explicit fallback address to try after DNS (10.0.1.10 below is a placeholder):
server ops-ca-revealv2e-prod-1 ops-ca-revealv2e-prod-1:443 cookie ops-ca-revealv2e-prod-1 ssl weight 1 maxconn 512 check resolvers mydns init-addr last,libc,10.0.1.10,none
Combine these techniques with health checks:
backend reveal_backend
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200
    # Note: server-template creates numbered slots (reveal-prod1, reveal-prod2)
    # that all resolve the same FQDN; it does not append numbers to the hostname.
    server-template reveal-prod 2 ops-ca-revealv2e-prod-:443 cookie ops-ca-revealv2e-prod- ssl weight 1 maxconn 512 check resolvers mydns init-addr none
Key commands to verify your setup:
# Show DNS resolution status
echo "show resolvers" | socat stdio /var/run/haproxy.sock
# Check server states
echo "show servers state" | socat stdio /var/run/haproxy.sock