When rebuilding load balancing infrastructure, DNS presents challenges that HTTP and FTP services do not. HAProxy excels at TCP-based protocols (HTTP, FTP in passive mode), but its lack of general-purpose UDP load balancing is a problem for DNS, where:
- most queries (commonly cited as ~95%) travel over UDP for efficiency
- TCP fallback occurs for truncated responses (traditionally anything over 512 bytes) and for zone transfers (the dig example below shows how to trigger this from a client)
- EDNS0 raises the UDP payload limit, so UDP response sizes vary widely
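To see the fallback in action from a client, cap the advertised EDNS0 buffer and look for the TC (truncated) flag; the VIP 192.168.1.100 and example.com below are placeholders, and the query assumes a DNSSEC-signed zone so the answer is large:

# Small UDP buffer: a large (DNSSEC) answer comes back with the tc flag set
dig @192.168.1.100 example.com DNSKEY +dnssec +bufsize=512 +ignore | grep flags
# The same query over TCP returns the full answer
dig @192.168.1.100 example.com DNSKEY +dnssec +tcp | grep "MSG SIZE"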
A proven production architecture combines HAProxy for TCP services with LVS for UDP/TCP DNS:
# Sample LVS Configuration for DNS (keepalived.conf)
vrrp_instance VI_DNS {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.1.100
    }
}

virtual_server 192.168.1.100 53 {
    delay_loop 6
    lb_algo wlc
    lb_kind DR
    protocol UDP

    real_server 192.168.1.101 53 {
        weight 1
        MISC_CHECK {
            misc_path "/usr/local/bin/dns_healthcheck.sh"
        }
    }

    real_server 192.168.1.102 53 {
        weight 1
        MISC_CHECK {
            misc_path "/usr/local/bin/dns_healthcheck.sh"
        }
    }
}
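Once keepalived loads this configuration, the resulting IPVS service and both real servers can be confirmed from the shell:

# Verify the UDP 53 virtual service and its real servers are present
ipvsadm -Ln | grep -A 2 '192.168.1.100:53'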
Health Checking: DNS requires specialized health checks beyond TCP connectivity:
#!/bin/bash
# dns_healthcheck.sh
response=$(dig +short +time=1 +tries=1 @localhost example.com SOA)
[ -n "$response" ] && exit 0 || exit 1
Session Persistence: While DNS is typically stateless, TCP connections for zone transfers benefit from LVS's persistence engine:
# Separate virtual_server block for TCP 53 (zone transfers, large responses)
virtual_server 192.168.1.100 53 {
    # ... same lb_algo, lb_kind and real_server entries as the UDP block
    persistence_timeout 300
    protocol TCP
}
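To confirm that persistence templates are actually created for zone-transfer clients, inspect the IPVS connection table (output format varies by kernel and ipvsadm version):

# List IPVS connection entries, including persistence templates, for the DNS VIP
ipvsadm -Lnc | grep 192.168.1.100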
Performance Tuning:
- Direct Routing (DR): preferred forwarding method for LVS DNS nodes, since replies go straight back to the client and bypass the load balancer (see the real-server sketch after the sysctl block below)
- Conntrack tuning: shorten the UDP conntrack timeouts (default 30s) for short-lived DNS flows:
# sysctl adjustments for DNS load balancing
net.netfilter.nf_conntrack_udp_timeout = 10
net.netfilter.nf_conntrack_udp_timeout_stream = 30
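Because DR mode delivers packets to the real servers unmodified, each DNS real server must also hold the VIP locally without answering ARP for it; a minimal sketch, assuming the VIP 192.168.1.100 used above:

# On each real server (LVS-DR): bind the VIP to loopback and suppress ARP for it
ip addr add 192.168.1.100/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2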
Synchronize VIP failover between HAProxy and LVS instances using VRRP:
# Keepalived shared configuration
vrrp_sync_group VG1 {
    group {
        VI_HTTP
        VI_DNS
    }
    notify_master "/usr/local/bin/vip_failover.sh master"
    notify_backup "/usr/local/bin/vip_failover.sh backup"
}
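The notify script itself is site-specific and not shown here; a minimal sketch that only logs the transition (optionally flushing stale UDP conntrack entries on promotion, which assumes conntrack-tools is installed) could look like:

#!/bin/bash
# /usr/local/bin/vip_failover.sh - sketch of a VRRP notify handler
STATE="$1"   # "master" or "backup", as passed by keepalived
logger -t vip_failover "keepalived state change: now ${STATE}"
if [ "$STATE" = "master" ]; then
    # Optional: drop stale UDP/53 conntrack entries so flows rebalance quickly
    conntrack -D -p udp --dport 53 2>/dev/null || true
fi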
How the main open-source options compare:

| Solution | UDP Support | TCP Support | DNS Awareness |
|---|---|---|---|
| HAProxy 2.4+ | Limited (experimental) | Excellent | No |
| LVS | Full | Full | Via scripting |
| PowerDNS dnsdist | Full | Full | Native |
Here's an integrated configuration managing both HTTP and DNS services:
# HAProxy frontend for web services
frontend main_http
    bind :80
    mode http
    default_backend web_servers

# LVS configuration snippet for DNS
virtual_server 192.168.1.100 53 {
    delay_loop 10
    lb_algo lblc
    lb_kind DR
    protocol UDP

    real_server 192.168.1.101 53 {
        weight 3
        MISC_CHECK {
            misc_path "/usr/local/bin/dns_healthcheck.sh"
        }
    }
}
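A quick sanity check of the combined stack on the load balancer node, assuming the addresses used above:

# HAProxy should be listening on TCP 80; IPVS should show the UDP 53 service
ss -ltn 'sport = :80'
ipvsadm -Ln -u 192.168.1.100:53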
To recap the core constraint: DNS primarily uses UDP with TCP fallback, and TCP/HTTP-focused load balancers such as HAProxy lack the native UDP support this requires. The rest of this section walks through the fundamentals and the LVS-based approach in more detail.
DNS operates on both UDP (port 53) and TCP (port 53):
# Typical DNS query
dig example.com +notcp # Forces UDP
dig example.com +tcp # Forces TCP connection
UDP handles most queries (small packets), while TCP is used for:
- Zone transfers (AXFR) - see the manual check below
- Responses larger than 512 bytes when the client does not use EDNS0
- Responses that exceed the EDNS0-advertised UDP buffer size
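Zone transfers are a common reason the TCP path must be load balanced as well; a quick manual check against one backend, assuming it permits AXFR from your host:

# AXFR always runs over TCP, so this also exercises the TCP 53 path
dig @192.168.1.101 example.com AXFR +time=5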
For UDP DNS load balancing, LVS remains the most robust open-source solution:
# Basic LVS NAT configuration for DNS
ipvsadm -A -u VIP:53 -s rr
ipvsadm -a -u VIP:53 -r DNS1:53 -m
ipvsadm -a -u VIP:53 -r DNS2:53 -m
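The DR equivalent uses the -g (gatewaying) flag and explicit weights; VIP, DNS1 and DNS2 remain placeholders as above:

# Direct Routing variant of the same UDP 53 service
ipvsadm -A -u VIP:53 -s wlc
ipvsadm -a -u VIP:53 -r DNS1:53 -g -w 1
ipvsadm -a -u VIP:53 -r DNS2:53 -g -w 1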
Key advantages:
- Native UDP support
- Multiple scheduling algorithms (rr, wrr, lc, etc.)
- Direct kernel integration
You can run parallel load balancers:
Network Layout:
Client → [LVS (UDP/TCP DNS)]  → DNS servers
Client → [HAProxy (TCP/HTTP)] → Web/App servers
Configuration example using keepalived for failover:
# keepalived.conf snippet
virtual_server VIP 53 {
    protocol UDP
    real_server DNS1 53 {
        weight 1
        MISC_CHECK {
            misc_path "/usr/local/bin/dns_healthcheck.sh"
        }
    }
}
For pure DNS load balancing, consider:
- PowerDNS Recursor: Built-in load balancing
- dnslb: Lightweight DNS-specific balancer
- BIND Views: For split-horizon DNS
When load balancing DNS, keep in mind:
- UDP queries carry no connection state, so stateless scheduling algorithms are a natural fit
- DNSSEC increases response sizes, so make sure the TCP fallback path is load balanced too
- The EDNS0 buffer size advertised by clients determines when responses fall back to TCP (see the dig checks below)
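Two quick dig checks cover the last two points; the VIP 192.168.1.100 and example.com are placeholders from the earlier examples:

# Show the EDNS0 UDP buffer size the backend advertises through the VIP
dig @192.168.1.100 example.com SOA +edns=0 | grep -i 'udp:'
# Confirm the TCP fallback path works through the load balancer
dig @192.168.1.100 example.com SOA +tcp +short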
Example health check script:
#!/bin/bash
# Succeeds only if the local server returns an actual answer;
# dig exits 0 even for empty or SERVFAIL responses, so check the output instead
response=$(dig +short +time=1 +tries=1 @localhost example.com SOA)
[ -n "$response" ]
Essential metrics to track:
- UDP/TCP query ratio
- Response code distribution
- Packet fragmentation events
Tool integration:
# ipvsadm monitoring
watch -n 1 ipvsadm -ln --stats
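If both a UDP and a TCP virtual service exist for port 53, the IPVS counters also give a rough UDP/TCP query split (the first metric above):

# Compare counters on the UDP and TCP :53 services
ipvsadm -Ln --stats | grep ':53'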