When configuring nginx to serve websites via HTTPS, many administrators face an unexpected information leak: the server responds to direct IP access with its default certificate, revealing which domains it hosts. While HTTP blocking is straightforward, HTTPS presents unique challenges due to the TLS handshake sequence.
The standard approach for HTTP works perfectly:
server {
    listen 80 default_server;
    server_name _;
    return 444;  # nginx-specific non-standard code: close the connection without sending a response
}
But with HTTPS, the situation changes because:
- SNI (Server Name Indication) is sent in the TLS ClientHello, after the TCP connection is established but before any HTTP data
- Without SNI, the server must choose a certificate before it ever sees a Host header
- Whatever certificate the default server presents is visible to the client, even if the connection is aborted immediately afterwards
For clients supporting SNI (virtually all modern browsers), this configuration works:
server {
    # default_server: IP and unknown-SNI traffic lands here regardless of block order
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl_certificate     /path/to/dummy.crt;
    ssl_certificate_key /path/to/dummy.key;
    server_name _;
    return 444;
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ssl_certificate     /path/to/real.crt;
    ssl_certificate_key /path/to/real.key;
    server_name example.com;
    # Your normal configuration
}
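On nginx 1.19.4 or newer you can skip the dummy certificate entirely: instead of the dummy-cert catch-all above, `ssl_reject_handshake` aborts the handshake for any name that matches no real server block, and clients receive an unrecognized_name TLS alert instead of any certificate:

```nginx
# nginx 1.19.4+: refuse the TLS handshake outright; no certificate files needed
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl_reject_handshake on;
}
```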
For legacy clients without SNI support, you have two options:
Option 1: Firewall Blocking
Stock iptables cannot inspect the TLS SNI field, so this approach needs an out-of-tree match extension that parses the ClientHello (for example, the third-party xt_tls module). Such modules must be compiled and loaded for your kernel and their option syntax varies, so consult the module's own documentation; for most setups, handling the block inside nginx is simpler.
Option 2: Default Certificate Strategy
Configure a generic certificate that doesn't reveal your domain:
openssl req -new -x509 -days 3650 -nodes \
    -out /etc/nginx/dummy.crt \
    -keyout /etc/nginx/dummy.key \
    -subj "/CN=invalid"
The hostname checking approach mentioned in the question:
if ($host != "example.com") {
    return 444;
}
While functional, it has two drawbacks:
- Every request is still processed, and only after the TLS handshake has completed
- Your real certificate is presented to the client before the block takes effect
The separate server block approach is more efficient as nginx selects the appropriate block during the SNI phase.
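The selection step can be sketched outside nginx: the Python standard library's ssl module exposes the same SNI callback a TLS server uses to swap certificates before anything is sent. This is an illustrative toy (hostnames, helper names, and paths are invented here), not nginx's actual code:

```python
# Toy TLS server that picks its certificate from the SNI name during the
# handshake -- the same step at which nginx picks a server block.
import os, socket, ssl, subprocess, tempfile, threading

def make_ctx(cn, tmpdir):
    """Generate a throwaway self-signed cert for `cn` and load it."""
    crt, key = os.path.join(tmpdir, cn + ".crt"), os.path.join(tmpdir, cn + ".key")
    subprocess.run(
        ["openssl", "req", "-x509", "-nodes", "-newkey", "rsa:2048",
         "-days", "1", "-keyout", key, "-out", crt, "-subj", "/CN=" + cn],
        check=True, capture_output=True)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(crt, key)
    return ctx

tmpdir = tempfile.mkdtemp()
real_ctx = make_ctx("example.com", tmpdir)   # the "real" site
dummy_ctx = make_ctx("invalid", tmpdir)      # the catch-all

def pick_context(tls_sock, sni_name, _initial_ctx):
    # Runs mid-handshake, BEFORE any certificate is sent:
    # unknown or missing SNI falls through to the dummy context.
    tls_sock.context = real_ctx if sni_name == "example.com" else dummy_ctx

dummy_ctx.sni_callback = pick_context

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(2)
port = srv.getsockname()[1]

def serve(n):
    for _ in range(n):
        conn, _ = srv.accept()
        try:
            dummy_ctx.wrap_socket(conn, server_side=True).close()
        except ssl.SSLError:
            conn.close()

threading.Thread(target=serve, args=(2,), daemon=True).start()

def served_cn(sni):
    """Connect with the given SNI (None = legacy client) and report the cert subject."""
    cctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    cctx.check_hostname = False
    cctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection(("127.0.0.1", port)) as raw:
        with cctx.wrap_socket(raw, server_hostname=sni) as tls:
            pem = ssl.DER_cert_to_PEM_cert(tls.getpeercert(binary_form=True))
    out = subprocess.run(["openssl", "x509", "-noout", "-subject"],
                         input=pem.encode(), capture_output=True)
    return out.stdout.decode()

with_sni = served_cn("example.com")
without_sni = served_cn(None)
print(with_sni)     # subject contains CN=example.com
print(without_sni)  # no SNI sent: subject contains the placeholder CN=invalid
```

The callback sees the SNI name (or None) before a single certificate byte leaves the server, which is why the separate-server-block approach leaks nothing.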
If you would rather keep a single server block, you can branch on the SNI name with a map. Note that variables in ssl_certificate require nginx 1.15.9 or later, and the cert_0/cert_1 naming is just a convention for pairing each map result with a certificate file:
map $ssl_server_name $valid_domain {
    "example.com" 1;
    default       0;
}
server {
    listen 443 ssl;
    ssl_certificate     /path/to/cert_${valid_domain}.crt;
    ssl_certificate_key /path/to/cert_${valid_domain}.key;
    if ($valid_domain = 0) {
        return 444;
    }
}
This solution maintains security without hardcoding IP addresses while properly handling both SNI and non-SNI clients.
When implementing IP-based access restrictions in Nginx, HTTP requests (port 80) are straightforward to handle, but HTTPS presents unique challenges because certificate negotiation occurs before hostname verification: Nginx must present a certificate before it can inspect the Host header of an HTTPS connection.
The typical catch-all server block with return 444; does close the connection, but only after the TLS handshake has completed, so the certificate is still disclosed:
server {
    listen 443 ssl;
    server_name _;
    return 444;                           # executes AFTER the SSL handshake
    ssl_certificate /path/to/default.crt; # already presented to the client; leaks domain info
}
The proper solution requires two components:
1. Create a Dummy SSL Certificate
Generate a self-signed certificate specifically for IP access blocking:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/nginx/blocked.key \
    -out /etc/nginx/blocked.crt \
    -subj "/CN=invalid"
2. Implement the Blocking Server Block
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name _;
    ssl_certificate     /etc/nginx/blocked.crt;
    ssl_certificate_key /etc/nginx/blocked.key;
    # Immediately close connection
    return 444;
}
For those preferring firewall-level blocking: string-matching the Host header, as sometimes suggested, cannot work on port 443, because the header travels inside the encrypted TLS stream. The only hostname a packet filter can see is the SNI field of the unencrypted ClientHello, and matching on it requires a third-party extension (for example, the out-of-tree xt_tls module) or a filtering proxy in front of nginx; stock iptables has no such match.
The certificate-based solution has minimal overhead since:
- Connection terminates during handshake
- No request processing occurs
- Works with SNI-aware clients (virtually all modern traffic)
For completeness, you might add this to handle legacy clients. Clients that do not send SNI are routed to the default server for the listen socket, so a default_server block carrying the same dummy certificate catches them:
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl_certificate     /etc/nginx/blocked.crt;
    ssl_certificate_key /etc/nginx/blocked.key;
    return 444;
}