When architecting applications on AWS, Elastic Load Balancing (ELB) is designed with high availability as a core principle. An ELB is not a single point of failure because it is an inherently distributed system spanning multiple Availability Zones (AZs) within a region.
When you create an ELB, it places load balancer nodes in each AZ you enable (an Application Load Balancer requires at least two). Here's what happens under the hood:
- Each AZ gets its own load balancer nodes
- DNS resolution provides multiple IP addresses for the ELB endpoint
- Health checks automatically route traffic away from unhealthy nodes
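The DNS behavior above can be sketched client-side: resolving the ELB's DNS name yields one or more IPs per enabled AZ, and clients rotate through them, spreading traffic across AZs before the load balancer even sees a request. A minimal simulation (the node IPs below are made up for illustration):

```python
import itertools

# Hypothetical ELB node IPs, one per enabled AZ (an ALB returns at
# least one address per enabled AZ when its DNS name is resolved).
elb_node_ips = ["10.0.1.10", "10.0.2.10"]  # us-west-2a, us-west-2b

# Clients typically rotate through the resolved addresses, so requests
# alternate across AZ nodes.
rotation = itertools.cycle(elb_node_ips)
first_four = [next(rotation) for _ in range(4)]
print(first_four)  # alternates between the two AZ nodes
```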
Consider this CloudFormation snippet showing multi-AZ ELB configuration:
Resources:
  MyELB:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Subnets:
        - subnet-12345678  # us-west-2a
        - subnet-87654321  # us-west-2b
      SecurityGroups:
        - sg-12345678
      Scheme: internet-facing
      Type: application
You can verify an ELB's multi-AZ status with the AWS CLI:
aws elbv2 describe-load-balancers --names my-load-balancer \
    --query "LoadBalancers[0].AvailabilityZones"
This returns JSON showing which AZs contain active ELB nodes.
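If you capture that output, a few lines of Python can confirm nodes exist in at least two AZs. The JSON below mimics the shape the CLI returns for that query; the zone names and subnet IDs are placeholders:

```python
import json

# Sample output in the shape returned by `describe-load-balancers`
# with the --query filter above (values are placeholders).
cli_output = json.loads("""
[
  {"ZoneName": "us-west-2a", "SubnetId": "subnet-12345678"},
  {"ZoneName": "us-west-2b", "SubnetId": "subnet-87654321"}
]
""")

zones = {az["ZoneName"] for az in cli_output}
assert len(zones) >= 2, "load balancer is not multi-AZ!"
print(f"Active in {len(zones)} AZs: {sorted(zones)}")
```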
While ELB itself is highly available, consider these architectural patterns for additional resilience:
- Implement DNS failover using Route 53 for cross-region redundancy
- Use weighted routing to distribute traffic across multiple ELBs
- Consider Network Load Balancer for extreme performance requirements
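The weighted-routing pattern above can be illustrated with a quick simulation: Route 53 weighted records hand out each load balancer's DNS name in proportion to its record weight. The two ELB names and the 70/30 split here are arbitrary examples:

```python
import random

random.seed(42)  # deterministic for illustration

# Two hypothetical ELBs behind Route 53 weighted records.
elbs = ["elb-primary.us-west-2", "elb-secondary.us-east-1"]
weights = [70, 30]  # traffic share = weight / sum of weights

picks = random.choices(elbs, weights=weights, k=10_000)
share = picks.count(elbs[0]) / len(picks)
print(f"primary received {share:.1%} of lookups")  # close to 70%
```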
Remember that ALB charges are based on (rates vary by region; check the current AWS pricing page):
- An hourly charge per load balancer (roughly $0.0225 per hour in us-east-1)
- LCU usage (roughly $0.008 per LCU-hour); an LCU meters new connections, active connections, processed bytes, and rule evaluations
Enabling additional AZs does not itself add a per-AZ charge.
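As a rough worked example, a lightly loaded ALB's monthly bill is dominated by the hourly charge. The rates and LCU consumption below are illustrative assumptions, not current published pricing:

```python
# Illustrative ALB cost estimate; the rates below are assumptions --
# always check the AWS pricing page for your region.
hourly_rate = 0.0225     # $/hour per ALB (us-east-1-style rate)
lcu_rate = 0.008         # $/LCU-hour
hours_per_month = 730
avg_lcus = 2.0           # assumed average LCU consumption

base = hourly_rate * hours_per_month
lcu_cost = lcu_rate * avg_lcus * hours_per_month
total = base + lcu_cost
print(f"~${total:.2f}/month (${base:.2f} base + ${lcu_cost:.2f} LCU)")
```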
To dig deeper into how the service avoids being a single point of failure (SPoF): ELB is fully managed, and its nodes run on distributed infrastructure spanning the AZs you enable rather than on any single machine. When you enable multiple AZs for your ELB:
# Example of enabling multiple AZs with the AWS CLI (Classic Load Balancer API)
aws elb create-load-balancer \
    --load-balancer-name my-load-balancer \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --availability-zones us-west-2a us-west-2b
This creates ELB nodes in each specified AZ, with DNS resolution handling the distribution of incoming traffic across these nodes.
AWS implements several protection layers:
- Multiple physical ELB nodes per AZ
- Automatic health checks and traffic rerouting
- DNS-based load balancing across ELB nodes
- Continuous monitoring and auto-recovery
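The health-check layer above can be sketched as a routing loop that only considers nodes whose recent checks passed. The node names, check results, and threshold here are fabricated for illustration (assumed behavior: a node leaves rotation after consecutive failed checks):

```python
# Minimal sketch of health-check-driven traffic rerouting.
UNHEALTHY_THRESHOLD = 2  # consecutive failures before ejection

check_history = {
    "node-us-west-2a": [True, True, True],
    "node-us-west-2b": [True, False, False],  # two failures in a row
}

def is_healthy(history, threshold=UNHEALTHY_THRESHOLD):
    """A node is unhealthy once it fails `threshold` consecutive checks."""
    recent = history[-threshold:]
    return not (len(recent) == threshold and not any(recent))

in_rotation = [n for n, h in check_history.items() if is_healthy(h)]
print(in_rotation)  # only the healthy us-west-2a node keeps traffic
```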
You can verify the ELB's health using CloudWatch metrics:
# Example CloudWatch query for Classic ELB healthy host count
aws cloudwatch get-metric-statistics \
    --namespace AWS/ELB \
    --metric-name HealthyHostCount \
    --dimensions Name=LoadBalancerName,Value=my-load-balancer \
    --start-time 2023-01-01T00:00:00 \
    --end-time 2023-01-02T00:00:00 \
    --period 3600 \
    --statistics Average
Best practices for a resilient ELB deployment:
- Always enable cross-zone load balancing
- Configure health checks with appropriate thresholds
- Use Route 53 for DNS failover if needed
- Monitor ELB metrics and set up alarms
Here's a Terraform configuration for a highly available Classic ELB setup:
resource "aws_elb" "web" {
  name               = "terraform-web-elb"
  availability_zones = ["us-west-2a", "us-west-2b"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:80/"
    interval            = 30
  }

  cross_zone_load_balancing   = true
  connection_draining         = true
  connection_draining_timeout = 400
}
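One sanity check worth doing on health-check settings like those above: the time to pull a failing instance out of rotation is roughly interval × unhealthy_threshold, since the instance must fail that many consecutive checks:

```python
# Values taken from the Terraform health_check block above.
interval = 30            # seconds between checks
unhealthy_threshold = 2  # consecutive failures before marking unhealthy

# Approximate detection time: the instance must fail
# `unhealthy_threshold` checks spaced `interval` seconds apart
# (plus up to one interval of slack and per-check timeouts).
detection_time = unhealthy_threshold * interval
print(f"~{detection_time}s before traffic stops flowing to a dead instance")
```

Tightening `interval` or `unhealthy_threshold` shortens this window at the cost of more check traffic and more sensitivity to transient blips.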