Recently, I discovered that my Nginx web server was serving my website's content to unauthorized domains that pointed to my server's IP address. This happened despite the server_name directive being properly configured in my Nginx setup. Even worse, Google had started indexing these unauthorized domains with my content.
Here's the original Nginx server block I was using:
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/nginx/ssl/example.com/ssl-bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com/example.com.key;

    root /var/www/example.com;
    index index.html;
}
By default, if Nginx can't find a server block whose server_name matches the requested domain, it falls back to the first server block that listens on that IP address and port (or to whichever block is marked default_server). This explains why unauthorized domains were getting my content.
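To see why, here is a minimal sketch; the second site name is hypothetical, and certificates are omitted for brevity:

server {
    listen 443 ssl;
    server_name example.com;       # first block on this port: the implicit default
    # ... certificates, root, etc.
}

server {
    listen 443 ssl;
    server_name another-site.net;  # hypothetical second site; never the fallback
    # ... certificates, root, etc.
}

# A request with "Host: attacker.com" falls through to the example.com block.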
I initially tried adding an if statement to check the $host variable:
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/nginx/ssl/example.com/ssl-bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com/example.com.key;

    if ($host != $server_name) {
        return 404;
    }

    root /var/www/example.com;
    index index.html;
}
This approach has several issues:
- Using if in Nginx configuration is generally discouraged (the well-known "if is evil" caveat)
- It doesn't handle all edge cases properly: $server_name holds only the primary (first) name of the block, so legitimate requests for a secondary name would be rejected
- It's not the most efficient solution, since the comparison runs on every request
The most robust solution is to create a default server block that catches all unmatched domains:
server {
    listen 443 ssl default_server;
    server_name _;

    # A certificate is still needed to complete the TLS handshake before
    # the connection can be dropped; a self-signed one is fine here
    ssl_certificate /etc/nginx/ssl/default/ssl-bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/default/default.key;

    return 444;  # close the connection without sending a response
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/nginx/ssl/example.com/ssl-bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com/example.com.key;

    root /var/www/example.com;
    index index.html;
}
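To confirm the catch-all is the block being hit, you can open a TLS handshake with an arbitrary SNI name against your server; the IP below is a placeholder:

# Prints the certificate served for the given SNI name; an unmatched
# name should receive the default server's (self-signed) certificate
openssl s_client -connect 203.0.113.10:443 -servername unauthorized.com </dev/null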
For comprehensive protection, consider these additional steps:
# Prevent processing requests with undefined server names
server {
    listen 80 default_server;
    server_name "";
    return 444;
}
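An empty server_name matches requests that arrive without a Host header at all (HTTP/1.0 clients and many scanners). You can simulate one with netcat; the IP is a placeholder:

# HTTP/1.0 request with no Host header; expect the connection to simply close (444)
printf 'GET / HTTP/1.0\r\n\r\n' | nc 203.0.113.10 80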
# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}
If you need to support multiple domains or subdomains, use this pattern:
server {
    listen 443 ssl;
    server_name ~^(www\.)?(?<domain>.+)$;

    # Use a wildcard certificate or a domain-specific cert; note that
    # variables in ssl_certificate/ssl_certificate_key require nginx 1.15.9+
    ssl_certificate /etc/nginx/ssl/$domain/ssl-bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/$domain/$domain.key;

    if ($host !~* ^(www\.)?example\.com$) {
        return 403;
    }

    root /var/www/$domain;
    index index.html;
}
After making changes:
- Test your configuration with nginx -t
- Reload Nginx: systemctl reload nginx
- Verify using curl: curl -v -H "Host: unauthorized.com" https://your-server-ip
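Note that over HTTPS the server block is chosen by the SNI name sent during the TLS handshake, not just the Host header, so the -H "Host: ..." test above may not exercise the catch-all. curl's --resolve flag sets both; the IP is a placeholder:

# Present unauthorized.com as both SNI and Host while connecting to your IP
curl -vk --resolve unauthorized.com:443:203.0.113.10 https://unauthorized.com/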
Many Nginx administrators encounter a surprising situation where their server responds to requests from domains that shouldn't have access. When another party points their domain to your server's IP address, Nginx might serve your content to their visitors, even when you've explicitly configured server_name.
Nginx follows specific rules when selecting which server block to use:
# This is the problematic default behavior
server {
    listen 443 ssl;
    server_name example.com;
    # ... other config
}
When no matching server_name is found, Nginx uses:
- The first server block with a matching IP/port
- Or the server block marked as default_server
1. Explicit Default Server Block
Create a catch-all server block that rejects invalid domains:
server {
    listen 443 ssl default_server;
    server_name _;

    ssl_reject_handshake on;  # requires nginx 1.19.4+; no certificate needed
    return 444;               # close connection without response
}
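With ssl_reject_handshake, clients whose SNI name doesn't match any server_name are refused during the TLS handshake itself, before any certificate is sent. A test like the following (placeholder IP again) should fail with a TLS alert instead of receiving content:

# Expect the handshake to be rejected (typically an "unrecognized name" alert)
curl -vk --resolve fake.com:443:203.0.113.10 https://fake.com/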
2. Strict Host Verification
For your legitimate domains, use this enhanced configuration:
server {
    listen 443 ssl;
    server_name example.com www.example.com;

    # Strict host verification (note the escaped dots in the regex)
    if ($host !~* ^(example\.com|www\.example\.com)$) {
        return 403;
    }

    ssl_certificate /etc/nginx/ssl/example.com/ssl-bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com/example.com.key;

    root /var/www/example.com;
    index index.html;
}
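If you prefer to avoid if here, the same check can be expressed with a map; this is a sketch of a common alternative pattern, not a required part of the setup:

# In the http context: 1 for allowed hosts, 0 for everything else
map $host $host_allowed {
    default          0;
    example.com      1;
    www.example.com  1;
}

server {
    listen 443 ssl;
    server_name example.com www.example.com;

    # A bare return inside if is one of the few uses the Nginx docs consider safe
    if ($host_allowed = 0) {
        return 403;
    }

    # ... certificates, root, etc. as above
}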
Consider implementing these protections:
HTTPS Enforcement
server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}
Preventing Search Engine Indexing
The goal is to send a noindex header only when a request arrives under a foreign Host name, so crawlers drop the unauthorized copies without affecting your own site (note that X-Robots-Tag "none" is equivalent to "noindex, nofollow", so it must never be sent for your own domains). A map keyed on $host handles this cleanly, since add_header omits the header entirely when its value is an empty string:

# In the http context: noindex for any Host that isn't yours
map $host $robots_tag {
    default          "noindex, nofollow";
    example.com      "";
    www.example.com  "";
}

Then, inside your legitimate server block:

add_header X-Robots-Tag $robots_tag;
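You can verify the header with curl; your own domain should come back clean:

# No X-Robots-Tag expected for the legitimate domain
curl -sI https://example.com/ | grep -i x-robots-tag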
After implementing these changes:
- Test with curl -v -H "Host: fake.com" https://your-server-ip
- Check the Nginx error logs for invalid access attempts
- Use Google Search Console to remove improperly indexed pages