When managing multiple IIS-hosted applications across subdomains with a wildcard SSL certificate, we face an interesting routing challenge. The typical setup involves:
site1.example.com → IISServer01:8081
site2.example.com → IISServer01:8082
site3.example.com → IISServer02:8083
After experimenting with various Nginx configurations, I've identified two viable patterns for SSL termination and routing:
Option 1: Server Blocks with SNI
The most straightforward approach uses separate server blocks with Server Name Indication (SNI):
server {
    listen 443 ssl;
    server_name site1.example.com;

    ssl_certificate     /path/to/wildcard.crt;
    ssl_certificate_key /path/to/wildcard.key;

    location / {
        proxy_pass http://IISServer01:8081;
        include proxy_params;
    }
}

server {
    listen 443 ssl;
    server_name site2.example.com;

    ssl_certificate     /path/to/wildcard.crt;
    ssl_certificate_key /path/to/wildcard.key;

    location / {
        proxy_pass http://IISServer01:8082;
        include proxy_params;
    }
}
Option 2: Single SSL Termination with Internal Routing
A more elegant solution uses a single SSL termination point with internal routing:
# SSL Termination
server {
    listen 443 ssl;
    # Named capture makes the subdomain available as $subdomain
    server_name ~^(www\.)?(?<subdomain>.+)\.example\.com$;

    ssl_certificate     /path/to/wildcard.crt;
    ssl_certificate_key /path/to/wildcard.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
# Internal routing based on Host header
server {
    listen 127.0.0.1:8080;
    server_name site1.example.com;

    location / {
        proxy_pass http://IISServer01:8081;
        include proxy_params;
    }
}

server {
    listen 127.0.0.1:8080;
    server_name site2.example.com;

    location / {
        proxy_pass http://IISServer01:8082;
        include proxy_params;
    }
}
For production environments with HAProxy integration, I recommend the following optimized configuration:
# Main SSL termination
server {
    listen 443 ssl http2;
    server_name ~^(?<subdomain>.+)\.example\.com$;

    ssl_certificate     /etc/ssl/wildcard.example.com.crt;
    ssl_certificate_key /etc/ssl/wildcard.example.com.key;

    # SSL optimizations
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # Proxy settings
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Dynamic backend selection. Because proxy_pass receives a variable,
    # Nginx resolves the backend hostnames at request time, so a resolver
    # directive (or IP addresses / defined upstreams) is needed. Note that a
    # host matching none of the conditions leaves $backend empty and the
    # request fails.
    set $backend "";
    if ($host = "site1.example.com") {
        set $backend "http://IISServer01:8081";
    }
    if ($host = "site2.example.com") {
        set $backend "http://IISServer01:8082";
    }
    if ($host = "site3.example.com") {
        set $backend "http://IISServer02:8083";
    }

    location / {
        proxy_pass $backend;
    }
}
When implementing this configuration, keep the following in mind (a rough sketch of the first three points follows the list):
- Enable keepalive connections to backend servers
- Implement proper SSL session caching
- Consider OCSP stapling for improved SSL performance
- Monitor connection queues under heavy load
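As a rough illustration of those points, here is a minimal sketch; the upstream name iis_site1, the cache sizes, and the resolver address are placeholders rather than values from the original setup:

upstream iis_site1 {
    server IISServer01:8081;
    keepalive 32;                        # pool of idle keepalive connections to the backend
}

server {
    listen 443 ssl;
    server_name site1.example.com;

    ssl_certificate     /etc/ssl/wildcard.example.com.crt;
    ssl_certificate_key /etc/ssl/wildcard.example.com.key;

    # SSL session cache shared across worker processes
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    # OCSP stapling; requires an outbound resolver and the issuer chain
    # being available to Nginx
    ssl_stapling        on;
    ssl_stapling_verify on;
    resolver 192.0.2.53 valid=300s;      # placeholder DNS server

    location / {
        proxy_pass http://iis_site1;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # required so keepalive to the upstream actually works
        proxy_set_header Host $host;
    }
}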
Use these Nginx debugging techniques when troubleshooting:

# Check configuration syntax
nginx -t

# Verify server_name matching
nginx -T | grep server_name

To see which backend a request was routed to, temporarily add response headers inside the location block and inspect them in the browser or with curl:

# Debug proxy headers (Nginx configuration, not shell)
add_header X-Backend $backend;
add_header X-Host $host;
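To confirm that the Host header reaches the intended internal server block, requests can be sent directly from the proxy host (assuming shell access there and the port layout shown above):

# Query the internal routing layer with an explicit Host header
curl -I -H "Host: site2.example.com" http://127.0.0.1:8080/

# Query the public HTTPS listener, resolving the name to the local proxy for the test
curl -kI --resolve site2.example.com:443:127.0.0.1 https://site2.example.com/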
When implementing an Nginx reverse proxy for multiple subdomains using a wildcard SSL certificate, we face several architectural decisions. The primary challenge lies in efficiently routing SSL-terminated traffic to various backend servers while maintaining clean configuration.
Let's examine the two approaches mentioned:
Option 1: Location-based Routing
While functional, this method becomes unwieldy with growing subdomains:
server {
    listen 443 ssl;
    server_name *.example.com;

    ssl_certificate     /path/to/wildcard.crt;
    ssl_certificate_key /path/to/wildcard.key;

    location /site1 {
        proxy_pass http://IISServer01:8081;
        # Additional proxy settings
    }

    location /site2 {
        proxy_pass http://IISServer01:8082;
        # Additional proxy settings
    }
}
Option 2: Host-based Internal Routing
The proposed solution using internal routing proves more scalable. Here's an optimized version of your configuration:
# SSL Termination Layer
server {
    listen 443 ssl;
    server_name *.example.com;

    ssl_certificate     /etc/ssl/wildcard.example.com.crt;
    ssl_certificate_key /etc/ssl/wildcard.example.com.key;

    # Maintain original host header
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Route to internal proxy based on hostname
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
# Internal Routing Layer
server {
    listen 127.0.0.1:8080;
    server_name site1.example.com;

    location / {
        proxy_pass http://IISServer01:8081;
        # Additional backend-specific settings
    }
}

server {
    listen 127.0.0.1:8080;
    server_name site2.example.com;

    location / {
        proxy_pass http://IISServer01:8082;
        # Additional backend-specific settings
    }
}

server {
    listen 127.0.0.1:8080;
    server_name site3.example.com;

    location / {
        proxy_pass http://IISServer02:8083;
        # Additional backend-specific settings
    }
}
- Use 127.0.0.1 for internal communication rather than exposing ports externally
- Maintain all essential headers (Host, X-Forwarded-*) for backend compatibility
- Consider adding health checks for backend servers when preparing for HAProxy integration (a passive-check sketch follows this list)
- Implement proper SSL protocols and ciphers in the termination layer
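Open-source Nginx only does passive health checking, so a minimal sketch of the last two points looks like this; the upstream name and the max_fails/fail_timeout values are illustrative rather than tuned recommendations:

upstream iis_site1_backend {
    # Passive checks: after 3 failed attempts the server is considered
    # unavailable for 30s before Nginx retries it
    server IISServer01:8081 max_fails=3 fail_timeout=30s;
}

# In the SSL termination server block:
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;

The internal routing blocks would then point proxy_pass at http://iis_site1_backend instead of the host and port directly.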
While this double-proxy approach adds minimal latency, the benefits of centralized SSL management outweigh the costs. For high-traffic environments:
- Enable SSL session caching in Nginx
- Consider OCSP stapling for improved SSL handshake performance
- Tune buffer sizes based on expected request/response sizes (illustrated below)
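For the buffer tuning point, these are the relevant directives; the sizes shown are placeholders to be validated against real request and response sizes, not recommendations:

location / {
    proxy_pass http://127.0.0.1:8080;

    proxy_buffer_size        16k;   # buffer for the response headers
    proxy_buffers          8 32k;   # per-connection buffers for the response body
    proxy_busy_buffers_size  64k;   # portion that may be busy sending to the client
}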
For those preferring a single-configuration solution, Nginx's map directive offers another option:
# Defined at the http level
map $host $backend {
    site1.example.com "IISServer01:8081";
    site2.example.com "IISServer01:8082";
    site3.example.com "IISServer02:8083";
}

server {
    listen 443 ssl;
    server_name *.example.com;

    ssl_certificate     /etc/ssl/wildcard.crt;
    ssl_certificate_key /etc/ssl/wildcard.key;

    location / {
        proxy_pass http://$backend;
        # Standard proxy settings
    }
}
This approach reduces configuration duplication but requires careful maintenance of the mapping table. Two practical notes: because proxy_pass receives a variable, Nginx resolves the backend hostname at request time and therefore needs a resolver directive (or IP addresses / defined upstreams), and a host missing from the map yields an empty $backend, so a default entry is worth adding.
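A hedged variant covering both notes; the default target and the resolver address are placeholders rather than part of the original setup:

map $host $backend {
    default            "IISServer01:8081";   # placeholder fallback so unmatched hosts still route somewhere
    site2.example.com  "IISServer01:8082";
    site3.example.com  "IISServer02:8083";
}

# Inside the server block that uses $backend:
resolver 192.0.2.53 valid=300s;   # placeholder: point at DNS that can resolve the backend names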