SSL Certificate Deployment Strategies Behind Load Balancers: Single vs. Multiple Certificates

When implementing HTTPS across multiple web servers behind a load balancer, you essentially have three architectural approaches:


// Option 1: SSL termination at LB only
[Client] --HTTPS--> [LB] --HTTP--> [Web Servers]

// Option 2: SSL passthrough with shared cert
[Client] --HTTPS--> [LB] --HTTPS--> [Web Servers (same cert)]

// Option 3: SSL passthrough with unique certs
[Client] --HTTPS--> [LB] --HTTPS--> [Web Servers (unique certs)]

For the same domain across multiple servers, you can use identical certificates on all backend servers when:

  • Using an SSL passthrough configuration
  • Maintaining identical certificate chains on every server
  • Distributing private keys securely
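
One way to confirm the same certificate really is deployed everywhere is to compare fingerprints across backends. A minimal sketch, assuming SSH access; the host names and certificate path are placeholders:

```shell
# cert_fingerprint HOST PATH: print the SHA-256 fingerprint of the
# certificate at PATH on HOST. Identical output on every backend means
# the same certificate is installed everywhere.
cert_fingerprint() {
    ssh "$1" "openssl x509 -in '$2' -noout -fingerprint -sha256"
}

# Usage (compare the output across all backends):
#   for h in web1 web2 web3; do
#       cert_fingerprint "$h" /etc/ssl/certs/domain.pem
#   done
```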

Here's how to implement SSL termination at the load balancer level:


frontend https-in
    bind *:443 ssl crt /etc/ssl/certs/domain.pem
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 192.168.1.10:80 check
    server web2 192.168.1.11:80 check
    server web3 192.168.1.12:80 check
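
One consequence of terminating at the load balancer is that backends see plain HTTP, so applications lose the original client address and scheme. A sketch of how the frontend above could forward them as headers; whether you need this depends on your application:

```
frontend https-in
    bind *:443 ssl crt /etc/ssl/certs/domain.pem
    option forwardfor
    http-request set-header X-Forwarded-Proto https
    default_backend web_servers
```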

Consider unique certificates per server when:

  1. Implementing mutual TLS (mTLS) between LB and backends
  2. Using hostname-based routing to specific servers
  3. Meeting compliance requirements for key isolation
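
For case 1, the load balancer side of an mTLS setup might look like the following sketch; the CA and client-certificate paths are assumptions:

```
backend web_servers
    balance roundrobin
    # Re-encrypt to the backends and verify their certificates against a
    # private CA; present a client certificate so backends can verify the LB
    server web1 192.168.1.10:443 check ssl verify required ca-file /etc/haproxy/ca.pem crt /etc/haproxy/client.pem
    server web2 192.168.1.11:443 check ssl verify required ca-file /etc/haproxy/ca.pem crt /etc/haproxy/client.pem
```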

For large deployments, consider these patterns:


# Issuing a certificate on each server with certbot (note: HTTP-01
# validation must reach each server directly, which is awkward behind a
# load balancer; DNS-01 validation is often more practical):
for server in web{1..5}; do
    ssh "$server" "certbot certonly --standalone -d example.com \
        --non-interactive --agree-tos \
        --email admin@example.com"
done

# Or with centralized certificate distribution:
ansible web_servers -m copy \
    -a "src=/central/certs/ dest=/etc/ssl/certs/"

Approach         CPU Usage   Security   Complexity
LB Termination   Low         Medium     Low
Shared Cert      Medium      High       Medium
Unique Certs     High        Highest    High

If you're concerned about session distribution across backends, implement source-based stickiness at the load balancer rather than relying on TLS session affinity:


# HAProxy stick table configuration (source-IP persistence)
backend web_servers
    stick-table type ip size 200k expire 30m
    stick on src
    # "verify none" skips backend certificate verification; in production,
    # prefer "verify required" with a ca-file
    server web1 192.168.1.10:443 check ssl verify none
    server web2 192.168.1.11:443 check ssl verify none

When deploying multiple web servers (e.g., 5 instances) behind a load balancer like HAProxy serving the same domain, certificate management becomes critical. The fundamental question isn't whether you can reuse certificates (you technically can), but whether you should from security and operational perspectives.

Two primary approaches exist:

  • SSL Termination at the Load Balancer: the LB handles encryption/decryption, and traffic to the backends is plain HTTP
  • End-to-End Encryption: each server holds its own certificate and terminates TLS itself

Option 1: HAProxy with SSL Termination


frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 192.168.1.10:80 check
    server web2 192.168.1.11:80 check

Option 2: End-to-End Encryption with Certificate Replication


# On each web server (Nginx example):
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;
    # ... other configs
}
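
When replicating certificate and key files by hand, it's easy to end up with a mismatched pair. A quick consistency check using openssl; the paths in the usage line are illustrative:

```shell
# cert_key_match CERT KEY: print MATCH if the certificate's public key
# equals the private key's public part, MISMATCH otherwise.
cert_key_match() {
    c=$(openssl x509 -in "$1" -noout -pubkey | openssl sha256)
    k=$(openssl pkey -in "$2" -pubout 2>/dev/null | openssl sha256)
    [ "$c" = "$k" ] && echo "MATCH" || echo "MISMATCH"
}

# Usage:
#   cert_key_match /etc/ssl/certs/example.com.crt /etc/ssl/private/example.com.key
```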

While reusing certificates works technically, consider:

  • Private key exposure risk increases with every server that holds a copy
  • Revocation and re-issuance become more complex after a compromise
  • PCI DSS compliance requirements for e-commerce systems may mandate key isolation

For most production environments, we recommend:

  1. Use SSL termination at the load balancer when possible
  2. If end-to-end encryption is required, generate unique certificates per server
  3. Implement automated certificate rotation using tools like Certbot
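
Automated rotation can still fail silently, so a periodic expiry check is a cheap safety net. A sketch; the certificate path and threshold in the usage line are up to you:

```shell
# check_cert_expiry CERT [DAYS]: warn if CERT expires within DAYS days
# (default 14). openssl's -checkend takes a window in seconds and exits
# non-zero if the certificate expires inside it.
check_cert_expiry() {
    cert="$1"; days="${2:-14}"
    if ! openssl x509 -in "$cert" -noout -checkend $((days * 86400)); then
        echo "WARNING: $cert expires within $days days"
    fi
}

# Usage (e.g. from cron):
#   check_cert_expiry /etc/ssl/certs/example.com.crt 14
```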

For performance-critical deployments using multiple certificates:


# HAProxy automatic OCSP stapling (2.8+) is enabled per certificate in a
# crt-list rather than directly on the bind line:
frontend https-in
    bind *:443 ssl crt-list /etc/haproxy/certs.list
    default_backend web_servers

# /etc/haproxy/certs.list:
# /etc/haproxy/certs/example.com.pem [ocsp-update on]
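
Once stapling is enabled, it can be verified from any client machine. A sketch; the hostname in the usage line is a placeholder:

```shell
# check_ocsp_staple HOST: report whether the server staples an OCSP
# response during the TLS handshake on port 443.
check_ocsp_staple() {
    if echo | openssl s_client -connect "$1:443" -status 2>/dev/null \
            | grep -q "OCSP Response Status: successful"; then
        echo "stapled"
    else
        echo "no staple"
    fi
}

# Usage: check_ocsp_staple example.com
```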