When configuring Nginx as a reverse proxy for thousands of domains (e.g., server1.com, server2.com, etc.), maintaining individual configuration files becomes impractical. The standard approaches either create maintenance nightmares or hit Nginx's configuration limits.
Here are three effective approaches I've used in production environments:
1. The server_name List Method
Nginx can handle surprisingly long server_name lists (tested with 10k+ entries). The syntax is simple, though for very large lists you will likely need to raise server_names_hash_max_size (and possibly server_names_hash_bucket_size) in the http block:
    server {
        listen 80;
        server_name server1.com server2.com server3.com ... server9999.com;

        location / {
            proxy_pass http://backend;
            include proxy_params;
        }
    }
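In practice a config like this is generated rather than written by hand. A minimal sketch of such a generator, assuming a plain-text domains.txt with one domain per line (the filename and template are illustrative, not part of the original setup):

```python
# Sketch: render the server block above from a plain-text domain list.
# "domains.txt" and the template details are assumptions for illustration.

TEMPLATE = """server {{
    listen 80;
    server_name {names};

    location / {{
        proxy_pass http://backend;
        include proxy_params;
    }}
}}
"""

def render_server_block(domains):
    # Deduplicate, normalize case, and sort so regenerated configs
    # diff cleanly under version control.
    names = " ".join(sorted({d.strip().lower() for d in domains if d.strip()}))
    return TEMPLATE.format(names=names)

if __name__ == "__main__":
    with open("domains.txt") as f:
        print(render_server_block(f))
```

Regenerating the file and running nginx -t before reload keeps a 10k-entry list manageable.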
2. External Map File Solution
For better maintainability, use Nginx's map directive with an external file:
    map $host $is_allowed {
        hostnames;
        include /etc/nginx/allowed_domains.map;
        default 0;
    }

    server {
        listen 80;

        if ($is_allowed = 0) { return 403; }

        location / {
            proxy_pass http://backend;
            include proxy_params;
        }
    }
Where allowed_domains.map contains:
    server1.com 1;
    server2.com 1;
    # ... thousands more
    *.example.com 1;
3. Database-Driven Approach
For dynamic environments, consider using Nginx with the Lua (OpenResty) or njs module to query a database:
    server {
        listen 80;

        access_by_lua_block {
            local domains = require "domains_db"
            if not domains.check(ngx.var.host) then
                return ngx.exit(403)
            end
        }

        location / {
            proxy_pass http://backend;
        }
    }
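The domains_db module above is left abstract; what it must perform is a membership check against a table of allowed hosts. A minimal sketch of that lookup logic, written in Python with an in-memory SQLite table for illustration (the table name and schema are assumptions, not from the original setup):

```python
import sqlite3

def make_db():
    # In-memory stand-in for the real domains database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE domains (host TEXT PRIMARY KEY)")
    conn.executemany("INSERT INTO domains VALUES (?)",
                     [("server1.com",), ("server2.com",)])
    return conn

def check(conn, host):
    # Equivalent of domains_db.check(): is this Host allowed?
    row = conn.execute("SELECT 1 FROM domains WHERE host = ?",
                       (host.lower(),)).fetchone()
    return row is not None
```

In the Lua version the same query would typically go through a driver such as lua-resty-mysql or a Redis lookup, with results cached in a shared dictionary so the database is not hit on every request.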
In benchmarks with 10,000 domains:
- Static list: ~0.2ms overhead per request
- Map file: ~0.3ms overhead
- Database lookup: ~2-5ms depending on backend
The static list approach generally performs best for large but fixed domain sets.
When managing large domain lists:
- Generate configurations from CSV files or databases
- Use version control for domain lists
- Implement automated testing for configuration changes
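The generation step above can be sketched as a small script that turns a CSV into the map-file format used earlier. The input filename and its "domain" column are assumptions for illustration:

```python
import csv

def csv_to_map_entries(rows):
    # Emit one "<domain> 1;" line per unique domain, sorted so that
    # regenerated files diff cleanly under version control.
    domains = sorted({row["domain"].strip().lower()
                      for row in rows if row.get("domain")})
    return [f"{d} 1;" for d in domains]

if __name__ == "__main__":
    # "domains.csv" with a "domain" column is an assumed input format.
    with open("domains.csv", newline="") as f:
        for line in csv_to_map_entries(csv.DictReader(f)):
            print(line)
```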
When configuring Nginx as a reverse proxy for thousands of domains, the traditional approach of creating separate config files or listing all domains in a single server_name directive becomes impractical. Performance degradation and maintenance headaches quickly emerge with these methods.
Nginx's map directive provides an elegant solution by allowing domain matching against a list stored in an external file. Here's the most efficient implementation:
    # /etc/nginx/conf.d/domains.conf
    # Use $host rather than $http_host: $host strips any :port suffix,
    # so lookups don't miss when clients send an explicit port.
    map $host $valid_domain {
        include /etc/nginx/domains.list;
        default 0;
    }
    server {
        listen 443 ssl;
        ssl_certificate     /path/to/wildcard.crt;
        ssl_certificate_key /path/to/wildcard.key;

        if ($valid_domain = 0) {
            return 444;
        }

        location / {
            proxy_pass http://backend;
            include proxy_params;
        }
    }
The /etc/nginx/domains.list file should contain one domain per line in map-value syntax:
    server1.com 1;
    server2.com 1;
    server3.com 1;
    # ... thousands more
This approach offers significant advantages:
- Nginx loads the map into memory during startup
- Hostname lookups go through a hash table, so they are effectively O(1)
- Easy maintenance via simple text file updates
- Adding domains only requires a lightweight reload (nginx -s reload) to re-read the list, not a config rewrite
For dynamic domain validation against a database or API:
    server {
        listen 443 ssl;

        access_by_lua_block {
            -- requires "lua_shared_dict domains 10m;" in the http context
            local domains = ngx.shared.domains
            if not domains:get(ngx.var.host) then
                return ngx.exit(ngx.HTTP_FORBIDDEN)
            end
        }

        # ... rest of config
    }
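The block above assumes a shared dictionary has already been declared and populated. A minimal sketch of the supporting http-level config (the zone size and the statically set hosts are illustrative; in practice init_by_lua_block would read them from a file or database):

```nginx
# http context (e.g. /etc/nginx/nginx.conf)
lua_shared_dict domains 10m;   # shared memory zone read by access_by_lua_block

init_by_lua_block {
    -- Preload allowed hosts at startup; illustrative entries only.
    local domains = ngx.shared.domains
    domains:set("server1.com", true)
    domains:set("server2.com", true)
}
```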
When patterns exist in domain names, regex can reduce the list size:
    map $host $valid_domain {
        ~^server\d+\.com$  1;
        ~^client\d+\.net$  1;
        default            0;
    }