Nginx vs. HAProxy: Optimal Reverse Proxy and Load Balancer Configuration for HTTPS, Caching, and URL-Based Routing


When designing a web infrastructure that requires HTTPS termination, load balancing, caching, and URL-based routing, the choice between placing Nginx or HAProxy at the front becomes critical. Both tools excel in different areas:

  • Nginx: Excellent at SSL/TLS termination, HTTP caching, and compression.
  • HAProxy: Superior for advanced load balancing algorithms and URL-based routing.

For your described use case, the optimal setup is:

Client → Nginx (SSL termination + caching) → HAProxy (load balancing/routing) → Backend Servers

Nginx SSL Termination

server {
    listen 443 ssl;
    server_name example.com;
    
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    
    location / {
        proxy_pass http://haproxy_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
    
    # Enable caching for static content
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        proxy_cache my_cache;
        proxy_pass http://haproxy_backend;
    }
}
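
Both location blocks above reference a cache zone (my_cache) and an upstream (haproxy_backend) that must be declared in the http context. A minimal sketch, assuming HAProxy runs on the same host and listens on port 80; the cache path and sizes are illustrative:

# http-context prerequisites for the server block above
proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m max_size=1g inactive=60m;

upstream haproxy_backend {
    server 127.0.0.1:80;   # assumption: HAProxy bound to port 80 on the same host
}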

HAProxy URL-Based Routing

frontend http_in
    bind *:80
    acl is_upload path_beg /upload
    use_backend upload_servers if is_upload
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 192.168.1.10:80 check
    server web2 192.168.1.11:80 check

backend upload_servers
    server upload1 192.168.1.20:80 check
    server upload2 192.168.1.21:80 check
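
For this fragment to load, HAProxy also needs at least a defaults section setting the mode and timeouts; a minimal sketch (timeout values are illustrative):

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s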

This architecture provides several advantages:

  • Nginx handles SSL/TLS encryption/decryption, reducing CPU load on HAProxy
  • Caching static content at the Nginx level reduces backend load (a cache-tuning sketch follows this list)
  • HAProxy can focus on its strength: efficient load balancing
  • URL-based routing rules are cleaner in HAProxy configuration
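
To make that caching benefit concrete, one option (an addition, not part of the original snippet) is to set explicit cache lifetimes and expose the cache status for debugging; a sketch of the static-content location from the Nginx block above:

# Sketch only: explicit cache lifetimes plus a debug header
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    proxy_cache my_cache;
    proxy_cache_valid 200 302 10m;                      # cache successful responses for 10 minutes
    proxy_cache_valid 404 1m;                           # short lifetime for not-found responses
    add_header X-Cache-Status $upstream_cache_status;   # HIT / MISS / EXPIRED per response
    proxy_pass http://haproxy_backend;
}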

To support a mix of HTTP and HTTPS pages, redirect only the sensitive paths (here, /login) to HTTPS and proxy everything else as-is:

# Nginx configuration for mixed protocol support
server {
    listen 80;
    server_name example.com;
    
    location /login {
        return 301 https://$host$request_uri;
    }
    
    location / {
        proxy_pass http://haproxy_backend;
    }
}
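
Because everything behind this chain is forwarded as plain HTTP, backends cannot tell whether the client originally used HTTP or HTTPS. A common addition (an assumption, not in the original answer) is to forward the scheme from each server block:

location / {
    proxy_pass http://haproxy_backend;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;   # "http" or "https", as seen by Nginx
}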

If you prefer to minimize components, HAProxy has supported native SSL/TLS termination since version 1.5. However, Nginx still offers better caching and compression features, so the choice depends on your specific performance requirements.
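
If you do terminate TLS in HAProxy, the frontend binds with a combined certificate-and-key PEM file; a minimal sketch (the certificate path is an assumption):

frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem   # PEM must contain certificate and key
    mode http
    http-request set-header X-Forwarded-Proto https
    default_backend web_servers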


When designing a web infrastructure with mixed HTTP/HTTPS endpoints, load balancing, and specialized routing needs, we must consider:

  • SSL/TLS termination capabilities
  • Layer 7 routing sophistication
  • Connection pooling efficiency
  • Caching layer implementation

After analyzing the requirements, the optimal topology is:

Clients → Nginx (SSL termination + caching) → HAProxy (load balancing) → Backend servers

1. Nginx SSL Configuration


server {
    # HTTP/2 support; a single listen directive avoids a duplicate-listen error
    listen 443 ssl http2;
    server_name example.com;
    
    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;
    
    location /login {
        proxy_pass http://haproxy_backend;
        proxy_set_header X-Forwarded-Proto https;
    }
    
    # Only /login is served over HTTPS; everything else is sent back to plain HTTP
    location / {
        return 301 http://$host$request_uri;
    }
}
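
The server block above leaves protocol and cipher policy at Nginx defaults; commonly added TLS settings look like the following (values are illustrative, not from the original answer):

# Placed in the same server (or http) block
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;   # reuse TLS sessions to reduce handshake cost
ssl_session_timeout 10m;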

2. HAProxy Load Balancing Configuration


frontend http_front
    bind *:80
    mode http
    
    # Image upload routing
    acl is_upload path_beg /upload
    use_backend upload_servers if is_upload
    
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 10.0.1.1:80 check
    server web2 10.0.1.2:80 check

backend upload_servers
    server upload1 10.0.2.1:80 check
    server upload2 10.0.2.2:80 check
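
The upload backend as written inherits the default balancing algorithm and server timeout; for long-running image uploads, one reasonable variation (an assumption, not part of the original config) is:

backend upload_servers
    balance leastconn      # prefer the server with fewer active, long-lived connections
    timeout server 300s    # give servers more time to process large uploads
    server upload1 10.0.2.1:80 check
    server upload2 10.0.2.2:80 check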

Key tuning parameters for the Nginx-HAProxy chain:


# Nginx worker tuning (main context)
worker_processes auto;

events {
    worker_connections 2048;
}

http {
    keepalive_timeout 65;
    gzip on;
    gzip_types text/plain application/json;

    # Buffer optimizations for proxied responses
    proxy_buffers 16 32k;
    proxy_buffer_size 64k;
}
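
On the HAProxy side of the chain, the rough equivalent knobs (values are illustrative assumptions) sit in the global section:

# HAProxy connection tuning
global
    maxconn 20000    # upper bound on concurrent connections across all frontends
    nbthread 4       # worker threads (available since HAProxy 1.8)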

Essential metrics to track in this architecture (a stats-endpoint sketch follows the list):

  • Nginx: Active connections, SSL handshake times
  • HAProxy: Queue depth, backend response times
  • End-to-end: 95th percentile latency for critical paths
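
One way to expose those numbers (an assumption, not described in the original answers) is HAProxy's built-in stats page together with Nginx's stub_status endpoint:

# HAProxy built-in statistics page
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s

# Nginx connection counters (requires the stub_status module)
location = /nginx_status {
    stub_status;
    allow 127.0.0.1;   # restrict to local monitoring agents
    deny all;
}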

Example instrumentation from a production deployment serving roughly 50K requests per minute (RPM) with this architecture:


# Nginx log format showing SSL termination details (defined in the http context)
log_format ssl_log '$remote_addr - $ssl_protocol/$ssl_cipher '
                   '"$request" $status $body_bytes_sent';
access_log /var/log/nginx/ssl_access.log ssl_log;   # log path is illustrative

# HAProxy health check configuration (extends the web_servers backend shown earlier)
backend web_servers
    option httpchk GET /health
    http-check expect status 200
    server web1 10.0.1.1:80 check
    server web2 10.0.1.2:80 check