When using HAProxy as a load balancer for your application servers, the HAProxy instance itself becomes a critical single point of failure. If it goes down, your entire application becomes unavailable despite having multiple healthy backend servers. This architectural vulnerability needs addressing.
There are several ways to implement HAProxy redundancy:
- Using Keepalived with VRRP (similar to Cisco's HSRP)
- DNS-based failover (less responsive)
- Cloud provider load balancer in front of HAProxy
Of these, Keepalived with VRRP is the most robust solution for on-premises deployments. Here's how to set it up:
```shell
# On both HAProxy servers (Master and Backup)
apt-get install keepalived haproxy
```

```
# /etc/keepalived/keepalived.conf (Master)
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass yourpassword
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
}
```
```
# /etc/keepalived/keepalived.conf (Backup)
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass yourpassword
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
}
```
To ensure both nodes have identical configurations:
```shell
# Set up rsync between nodes
rsync -avz /etc/haproxy/ backup-server:/etc/haproxy/
```

Or use configuration management tools like Ansible, Puppet, or Chef.
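As a sketch of how that rsync step can be made safer, the snippet below validates the config locally before pushing it to the peer. `backup-server` is the placeholder hostname from the rsync example above, and the guard clauses are there so the sketch degrades gracefully on a machine without HAProxy installed:

```shell
# Sketch: only sync the HAProxy config to the peer if it parses locally.
# PEER and the paths are placeholders matching the example above.
PEER=backup-server
CFG=/etc/haproxy/haproxy.cfg
if command -v haproxy >/dev/null 2>&1 && haproxy -c -f "$CFG" >/dev/null 2>&1; then
  rsync -avz /etc/haproxy/ "$PEER":/etc/haproxy/
  ssh "$PEER" 'sudo systemctl reload haproxy'
  result="synced"
else
  result="skipped: haproxy missing or config invalid"
fi
echo "$result"
```

Validating before syncing means a typo on the primary never propagates to the backup and takes down both nodes at once.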
Enhance the solution with proper health checks:
```
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}
```

Then add a `track_script` section to the `vrrp_instance` block on both servers:

```
    track_script {
        chk_haproxy
    }
```
For cloud deployments, consider:
- AWS: Application Load Balancer + HAProxy instances
- GCP: Cloud Load Balancing with managed instance groups
- Azure: Load Balancer with availability sets
When implementing HAProxy as a load balancer for critical infrastructure, a single point of failure becomes unacceptable. Your scenario with 3 backend servers needs protection at the load-balancing layer itself. The solution lies in creating an active-passive HAProxy cluster using Keepalived and VRRP (Virtual Router Redundancy Protocol).
We'll implement two HAProxy servers:
- Primary (master) - handles all traffic normally
- Secondary (backup) - takes over when primary fails
Both servers share a virtual IP (VIP) that clients connect to. The VIP automatically fails over to the backup when issues occur.
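The failover decision is just priority arithmetic: a passing `vrrp_script` adds its weight to a node's configured priority, and the node with the highest effective priority holds the VIP. A back-of-envelope check using the numbers from the configs in this answer (master 101, backup 100, weight 2):

```shell
# VRRP effective-priority arithmetic with the example values below.
master=101; backup=100; weight=2
healthy_master=$((master + weight))   # check passing on the master
healthy_backup=$((backup + weight))   # check passing on the backup
echo "both healthy: master=$healthy_master backup=$healthy_backup"
# When HAProxy dies on the master, its check fails and the bonus is lost:
failed_master=$master
echo "master check failed: master=$failed_master backup=$healthy_backup"
```

With both nodes healthy the master wins (103 vs 102); the moment its health check fails it drops to 101, the backup's 102 wins, and the VIP moves.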
1. Install Required Packages
```shell
# On both servers:
sudo apt-get update
sudo apt-get install -y haproxy keepalived
```
2. Configure HAProxy (identical on both servers)
```
# /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http_front
    bind *:80
    default_backend http_back

backend http_back
    balance roundrobin
    server app1 192.168.1.101:80 check
    server app2 192.168.1.102:80 check
    server app3 192.168.1.103:80 check
```
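After editing `haproxy.cfg` on either node, it's worth confirming the file parses before reloading the service. A guarded sketch (the `-c` flag is HAProxy's built-in configuration check; the guard lets the snippet degrade gracefully where HAProxy isn't installed):

```shell
# Validate the config before (re)loading the service.
CFG=/etc/haproxy/haproxy.cfg
if command -v haproxy >/dev/null 2>&1; then
  haproxy -c -f "$CFG" >/dev/null 2>&1 && check="config valid" || check="config invalid"
else
  check="haproxy not installed; skipping check"
fi
echo "$check"
```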
3. Configure Keepalived (Primary Server)
```
# /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secretpassword
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        chk_haproxy
    }
}
```
4. Configure Keepalived (Secondary Server)
```
# /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    interface eth0
    state BACKUP
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secretpassword
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        chk_haproxy
    }
}
```
To verify the setup works:
- Connect to your application via the VIP (192.168.1.100 in this example).
- Simulate a primary failure with `sudo systemctl stop haproxy` on the master. Within seconds, the backup should take over the VIP.
- Run `ip addr show eth0` on each server to see which one currently holds the VIP.
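That last check can be wrapped into a one-liner you can run on each node during a failover drill (the VIP is the example address from the configs above; the guard handles machines where the address, or the `ip` tool, is absent):

```shell
# Report whether this node currently holds the example VIP.
VIP=192.168.1.100
if ip -4 addr show 2>/dev/null | grep -q "$VIP"; then
  vip_state="this node holds the VIP"
else
  vip_state="VIP not present on this node"
fi
echo "$vip_state"
```

Running it on both nodes before and after stopping haproxy on the master makes the handover easy to observe.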
For production environments, consider adding:
- Health checks for backend servers
- SSL termination at HAProxy
- Centralized logging
- Monitoring for both HAProxy and Keepalived
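For the monitoring point, one low-effort starting place is the admin socket already declared in the `haproxy.cfg` above: HAProxy's `show stat` command returns per-backend CSV you can feed into whatever monitoring you use. A hedged sketch (assumes `socat` is installed; the field numbers for proxy name, server name, and status come from the `show stat` CSV layout):

```shell
# Pull live backend status from the admin socket declared in haproxy.cfg.
SOCK=/run/haproxy/admin.sock
if command -v socat >/dev/null 2>&1 && [ -S "$SOCK" ]; then
  echo "show stat" | socat stdio "$SOCK" | cut -d, -f1,2,18 | head -n 5
  stats="read"
else
  stats="unavailable: socat missing or HAProxy not running"
fi
echo "$stats"
```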