How to Implement SSL Passthrough with SNI Routing in Nginx/Apache Reverse Proxy



When you need to host multiple HTTPS services behind a single public IP while keeping certificate management decentralized, traditional reverse proxy SSL termination won't work. Here's how to maintain end-to-end encryption while still leveraging SNI-based routing.

This approach is crucial when:

  • Application teams manage their own certificates
  • Security policies require TLS termination at origin
  • You need to support legacy systems with custom cert chains

Nginx's stream module allows Layer 4 (TCP) passthrough while still reading the SNI hostname from the unencrypted ClientHello:

stream {
    map $ssl_preread_server_name $backend {
        app1.example.com  192.168.1.10:443;
        app2.example.com  192.168.1.20:443;
        default           192.168.1.30:443;
    }

    server {
        listen 443;
        proxy_pass $backend;
        ssl_preread on;
    }
}
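Under the hood, ssl_preread only peeks at the plaintext ClientHello to find the SNI hostname; the TLS session itself is never decrypted. The same extraction can be sketched in Python, generating a real ClientHello in memory with the standard ssl module (the hostname app1.example.com is just an illustration):

```python
import ssl

def extract_sni(client_hello: bytes) -> str:
    """Pull the SNI hostname out of a raw TLS ClientHello record."""
    pos = 5 + 4                                  # TLS record header + handshake header
    pos += 2 + 32                                # legacy client_version + random
    pos += 1 + client_hello[pos]                 # session_id
    pos += 2 + int.from_bytes(client_hello[pos:pos + 2], "big")  # cipher suites
    pos += 1 + client_hello[pos]                 # compression methods
    end = pos + 2 + int.from_bytes(client_hello[pos:pos + 2], "big")
    pos += 2
    while pos < end:
        ext_type = int.from_bytes(client_hello[pos:pos + 2], "big")
        ext_len = int.from_bytes(client_hello[pos + 2:pos + 4], "big")
        pos += 4
        if ext_type == 0:                        # server_name extension
            # skip 2-byte list length + 1-byte name type, read 2-byte name length
            name_len = int.from_bytes(client_hello[pos + 3:pos + 5], "big")
            return client_hello[pos + 5:pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return ""

# Generate a real ClientHello in memory (no network required)
ctx = ssl.create_default_context()
incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="app1.example.com")
try:
    tls.do_handshake()
except ssl.SSLWantReadError:
    pass  # handshake stalls waiting for the server; ClientHello is now buffered

hello = outgoing.read()
print(extract_sni(hello))  # app1.example.com
```

This is exactly the information the proxy routes on, which is why passthrough works without any certificates at the proxy.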

Apache has no equivalent of Nginx's stream passthrough; the closest mod_proxy setup is TLS re-encryption (SSL bridging), which terminates TLS at the proxy and opens a fresh HTTPS connection to the backend:

<VirtualHost *:443>
    ProxyPreserveHost On
    ProxyPass / https://backend.example.com/
    ProxyPassReverse / https://backend.example.com/
    SSLProxyEngine on
    # Disables verification of the backend certificate's hostname;
    # only acceptable for self-signed or mismatched backend certs
    SSLProxyCheckPeerName off
</VirtualHost>

Note that bridging still requires a client-facing certificate at the proxy, so it does not deliver true end-to-end encryption or decentralized certificate management.

Remember that SSL passthrough:

  • Prevents HTTP/2 multiplexing at proxy layer
  • Makes Layer 7 inspection impossible
  • Ties each client connection to one backend connection (no keep-alive pooling at the proxy)

Always benchmark with realistic traffic patterns.

If you need client IP preservation, consider the PROXY protocol with HAProxy. Note that use_backend must name a defined backend, so match the SNI explicitly (and keep a default for clients that send none):

frontend https
    bind :443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend app1 if { req_ssl_sni -i app1.example.com }
    default_backend app_default

backend app1
    mode tcp
    server s1 192.168.1.10:443 send-proxy-v2

backend app_default
    mode tcp
    server s1 192.168.1.30:443
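With send-proxy-v2, the backend must consume a binary PROXY protocol v2 header before the TLS bytes; Nginx and HAProxy do this natively (accept-proxy / proxy_protocol). As a rough illustration of what that header carries, here is a minimal Python parser for the IPv4/TCP case only, with made-up addresses:

```python
import socket
import struct

PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"  # fixed 12-byte PROXY v2 signature

def parse_proxy_v2(data: bytes):
    """Parse a PROXY protocol v2 header (IPv4/TCP only).
    Returns (src_ip, src_port, dst_ip, dst_port, remaining_bytes)."""
    if data[:12] != PP2_SIGNATURE:
        raise ValueError("not a PROXY v2 header")
    ver_cmd, fam, length = struct.unpack("!BBH", data[12:16])
    if ver_cmd != 0x21 or fam != 0x11:  # v2 PROXY command, AF_INET/STREAM
        raise ValueError("unsupported PROXY v2 variant")
    src, dst, sport, dport = struct.unpack("!4s4sHH", data[16:16 + 12])
    return (socket.inet_ntoa(src), sport,
            socket.inet_ntoa(dst), dport,
            data[16 + length:])  # everything after the header (the TLS stream)

# Build a sample header like HAProxy's send-proxy-v2 would emit
header = (PP2_SIGNATURE
          + struct.pack("!BBH", 0x21, 0x11, 12)
          + socket.inet_aton("203.0.113.7") + socket.inet_aton("192.168.1.10")
          + struct.pack("!HH", 55310, 443))
src, sport, dst, dport, rest = parse_proxy_v2(header + b"\x16\x03\x01")
print(src, sport, dst, dport)  # 203.0.113.7 55310 192.168.1.10 443
```

The payload that follows the header (here a truncated TLS record) is untouched, which is how the original client IP survives an otherwise opaque TCP passthrough.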

Common pitfalls include:

  • Old clients not sending SNI (requires default backend)
  • Certificate common name mismatches
  • MTU/fragmentation issues when passthrough traffic crosses VPNs or overlay networks

Always verify routing with openssl s_client, using -connect for the proxy address and -servername to set the SNI hostname under test.


When serving multiple HTTPS applications through a single reverse proxy, traditional SSL termination forces certificate management at the proxy level. However, many infrastructure designs require:

  • End-to-end encryption between client and backend
  • Certificate management decentralization
  • SNI-based routing without decryption

Nginx's stream module enables true SSL passthrough. Here's a working configuration:

stream {
    map $ssl_preread_server_name $backend {
        app1.example.com app1_backend:443;
        app2.example.com app2_backend:443;
        default default_backend:443;
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
        
        # Optional TCP tuning
        proxy_buffer_size 16k;
        # Do not set proxy_ssl here: the client's TLS session already
        # passes through to the backend untouched, and re-wrapping it
        # would break the handshake.
    }
}

For Apache users, true passthrough again isn't available; the name-based virtual hosts below terminate TLS at the proxy (each also needs its own SSLEngine and certificate directives, omitted here) and re-encrypt to the backends:

<VirtualHost *:443>
    ServerName app1.example.com
    SSLProxyEngine On
    ProxyPreserveHost On
    ProxyPass / https://app1_backend/
    ProxyPassReverse / https://app1_backend/
</VirtualHost>

<VirtualHost *:443>
    ServerName app2.example.com
    SSLProxyEngine On
    ProxyPreserveHost On
    ProxyPass / https://app2_backend/
    ProxyPassReverse / https://app2_backend/
</VirtualHost>

A few operational notes:

1. The Nginx stream module doesn't support HTTP-level features (rewrites, header manipulation, caching)
2. Both solutions require:
- OpenSSL 1.1.1+ on proxy and backends for modern TLS
- Working name resolution for each backend (in Nginx, a resolver directive if backends are referenced by hostname)
3. Passthrough overhead is small in practice, but benchmark with your own traffic rather than relying on generic figures

Common troubleshooting checks:

- Certificate errors: ensure each backend presents a valid certificate matching its SNI hostname
- Connection timeouts: verify network paths and firewall rules between proxy and backends
- Protocol mismatches: enforce TLS 1.2+ on the backends if all clients support it
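Since the passthrough proxy never touches the handshake, TLS versions must be enforced on the backends themselves. On an Nginx backend, that might look like the following sketch (certificate paths are hypothetical):

```
server {
    listen 443 ssl;
    ssl_protocols TLSv1.2 TLSv1.3;         # refuse older protocol versions
    ssl_certificate     /etc/ssl/app1.crt; # hypothetical paths
    ssl_certificate_key /etc/ssl/app1.key;
}
```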

# Test command for verifying SNI routing
openssl s_client -connect proxy.example.com:443 -servername app1.example.com -tlsextdebug

For large deployments, consider adding:
- Health checks for backends
- PROXY protocol support
- Rate limiting at TCP layer
- Geo-based routing rules
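For the health-check item, HAProxy can probe passthrough backends with a plain TCP connect check, since a full TLS handshake check would need the backend's trust chain at the proxy. A minimal sketch, reusing the hypothetical backend from earlier:

```
backend app1
    mode tcp
    # "check" performs a TCP connect probe every "inter" period;
    # fall/rise set how many failures/successes flip the server state
    server s1 192.168.1.10:443 check inter 5s fall 3 rise 2 send-proxy-v2
```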