When implementing Keepalived for high availability, I ran into a hard limit: only 20 virtual IP addresses (VIPs) can be assigned within a single vrrp_instance
block. My use case required managing over 100 VIPs (10.200.85.100-10.200.85.200) across two Debian servers (LB01: 10.200.85.1 and LB02: 10.200.85.2) serving as SSL termination points.
vrrp_script chk_apache2 {
    script "killall -0 apache2"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        10.200.85.100
        .
        .   # truncated for brevity
        .
        10.200.85.200
    }
}
The most elegant solution involves using a single VIP as a gateway for routing traffic to other addresses. This approach avoids Keepalived's VIP limitation while maintaining full failover capability.
Configuration Steps
- Reduce VIP declarations to just one primary address:

    virtual_ipaddress {
        10.200.85.100
    }

- Enable IP forwarding on both nodes:

    echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
    sysctl -p

- Add a routing rule on all network devices for each additional VIP, pointing it at the primary address:

    ip route add 10.200.85.200/32 via 10.200.85.100

- Configure a pfSense static route per VIP:

    Interface: LAN
    Destination network: 10.200.85.200/32
    Gateway: 10.200.85.100
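With 100+ VIPs, the routing-rule step means roughly a hundred entries per device. A small script can generate the commands (a sketch; the 10.200.85.x range and gateway address come from this setup, and the output is meant to be reviewed, then applied on each routing device):

```shell
#!/bin/sh
# Emit one "ip route add" command per additional VIP (101-200),
# all pointing at the primary VIP held by keepalived.
gen_routes() {
    gw=10.200.85.100
    for i in $(seq 101 200); do
        echo "ip route add 10.200.85.$i/32 via $gw"
    done
}
gen_routes
```

Piping the output through sh applies it directly; printing it first makes it easy to audit, or to adapt the syntax for non-Linux devices.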
For more granular control, you can create multiple VRRP instances, each managing a subset of VIPs:
vrrp_instance VI_1 {
    virtual_router_id 51
    virtual_ipaddress {
        10.200.85.100
        10.200.85.101
        # 18 more VIPs
    }
}

vrrp_instance VI_2 {
    virtual_router_id 52
    virtual_ipaddress {
        10.200.85.120
        10.200.85.121
        # 18 more VIPs
    }
}
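Writing out five or six such blocks by hand is error-prone; a generator script keeps them consistent (a sketch; the instance names, router IDs, and 20-VIP block size are assumptions matching the example above):

```shell
#!/bin/sh
# Emit one vrrp_instance per block of 20 VIPs covering 10.200.85.100-200.
gen_instances() {
    id=51
    n=1
    for start in $(seq 100 20 200); do
        end=$((start + 19))
        [ "$end" -gt 200 ] && end=200
        printf 'vrrp_instance VI_%s {\n' "$n"
        printf '    virtual_router_id %s\n' "$id"
        printf '    virtual_ipaddress {\n'
        seq -f '        10.200.85.%g' "$start" "$end"
        printf '    }\n}\n'
        id=$((id + 1))
        n=$((n + 1))
    done
}
gen_instances
```

Note that 101 addresses don't divide evenly into blocks of 20, so the last instance ends up holding the single leftover address; the real interface, state, and priority settings still need to be added to each block.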
After implementation, verify the setup works correctly:

# Check that the VIP is assigned on the master
ip addr show eth0

# Test failover (run on the master; the backup should take over)
systemctl stop keepalived
ping 10.200.85.100

# Verify routing to a non-primary VIP
traceroute 10.200.85.200
When dealing with large numbers of VIPs:
- Monitor ARP table size on network devices
- Consider using VRRP unicast mode for large deployments
- Balance VIPs across multiple physical interfaces if possible
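For the ARP-table point, a quick check on a Linux router compares the current neighbour count against the kernel's hard limit (a sketch; gc_thresh3 is the standard Linux sysctl for this, but switches and firewalls each have their own equivalent counters):

```shell
# Count current neighbour (ARP) entries and read the kernel's upper bound;
# if entries approach the limit, the kernel starts dropping neighbours.
entries=$(ip neigh show | wc -l)
limit=$(cat /proc/sys/net/ipv4/neigh/default/gc_thresh3)
echo "neighbour entries: $entries / $limit"
```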
Here's the complete optimized configuration that solved my problem:
# LB01 (master); on LB02 use state BACKUP, a lower priority,
# and swap the unicast addresses
vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101
    unicast_src_ip 10.200.85.1
    unicast_peer {
        10.200.85.2
    }
    virtual_ipaddress {
        10.200.85.100
    }
    track_script {
        chk_apache2
    }
}
When working with Keepalived for high availability setups, many administrators hit the practical limit of 20 virtual IP addresses (VIPs) per vrrp_instance. This becomes particularly problematic in SSL termination scenarios where SNI isn't sufficient and we need dedicated IPs for each certificate.
The limitation stems from the VRRP protocol's design: every advertisement packet carries the instance's complete VIP list, so large lists inflate each advertisement. Piling many VIPs into one instance can cause:
- Increased network overhead
- Potential packet fragmentation
- Reduced failover responsiveness
The most elegant solution involves using a single VIP as a gateway for routing other addresses. Here's how to implement it:
# On both keepalived nodes (master and backup)
ip route add 10.200.85.100/32 dev eth0
ip route add 10.200.85.200/32 via 10.200.85.100
And the corresponding keepalived.conf modification:
vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        10.200.85.100
    }
}
For proper integration with your pfSense firewall:
- Navigate to System > Routing > Static Routes
- Add a route for each VIP pointing to your floating IP
Here's a full working configuration for handling 100+ VIPs:
# /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
}

vrrp_script chk_apache2 {
    script "killall -0 apache2"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        10.200.85.100
    }
    track_script {
        chk_apache2
    }
}
# Post-configuration script /etc/keepalived/post_config.sh
#!/bin/bash
# Route every additional VIP via the primary one; start at 101,
# since routing 10.200.85.100 via itself makes no sense
for i in {101..200}; do
    ip route add 10.200.85.$i/32 via 10.200.85.100
done
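Rather than running post_config.sh by hand, keepalived can invoke it automatically whenever the node becomes master, via the notify_master option inside the vrrp_instance block (a sketch; the script must be executable):

```
vrrp_instance VI_1 {
    # ...existing settings from the configuration above...
    notify_master "/etc/keepalived/post_config.sh"
}
```

Keepalived also provides notify_backup and notify_fault hooks, which can be used for cleanup on the node that loses the master role.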
Key verification steps:
# Check routes
ip route show
# Verify VIP failover
tcpdump -i eth0 vrrp
# Test connectivity to backend VIPs
curl -vk https://10.200.85.150
Remember to enable IP forwarding and non-local binding in sysctl (the latter lets services bind to VIPs the node doesn't currently hold):
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
For extreme cases, consider:
- Multiple vrrp_instances with different virtual_router_ids
- LVS/NAT-based solutions
- Modern alternatives like kube-vip for containerized environments