When managing thousands of virtual hosts in Nginx, the server_names_hash parameters become critical performance factors. The defaults (server_names_hash_max_size 512 and a server_names_hash_bucket_size matching the CPU cache line, typically 64) are fine for small deployments, but they break down once domain names get long or the domain count climbs into the hundreds, which is exactly what the warnings in your error logs are telling you.
Nginx uses hash tables for rapid server_name lookups. Each bucket holds a short list of server names; the longer that list gets, the slower the lookup. The parameters relate roughly as follows:
# Rule of thumb for the required bucket_size: longest server_name length plus
# ~32 bytes of per-entry overhead, rounded up to the next power of two
bucket_size = next_power_of_two(longest_domain_length + 32)
# max_size: roughly total_domains * 1.5 as a safety margin
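A quick shell check against your existing vhost files gives you the number to plug into that rule of thumb (a sketch assuming GNU grep and that your vhosts live under /etc/nginx/conf.d/; adjust the path to your layout):
# Longest name listed in any server_name directive
LONGEST=$(grep -rhoP '^\s*server_name\s+\K[^;]+' /etc/nginx/conf.d/ \
  | tr ' \t' '\n' | awk '{ if (length > max) max = length } END { print max }')
# Round longest+32 up to the next power of two
BUCKET=32
while [ "$BUCKET" -lt $((LONGEST + 32)) ]; do BUCKET=$((BUCKET * 2)); done
echo "Longest server_name: $LONGEST chars -> suggested bucket_size: $BUCKET"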
For your 200-domain setup (with planned growth):
http {
    server_names_hash_bucket_size 128;    # fits server names up to roughly 96 characters
    server_names_hash_max_size 2048;      # headroom for ~1500 domains at the 1.5x margin

    resolver_timeout 30s;                 # only relevant if you use the resolver directive for upstream DNS
}
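Once those values are in place, a quick test-and-reload cycle confirms the hash warning is gone before any traffic is touched (standard nginx and service commands):
# Re-test the configuration; any remaining hash warning is printed here
nginx -t
# Reload gracefully; existing connections are not dropped
systemctl reload nginx    # or: service nginx reload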
The lengthy reload times indicate two issues:
- Hash table rebuilding overhead
- Configuration parsing latency
Implement these improvements:
# In the main nginx.conf
worker_processes auto;            # one worker per CPU core
worker_rlimit_nofile 100000;      # raise the per-worker limit on open file descriptors

events {
    worker_connections 4096;
    use epoll;                    # efficient event method on Linux
    multi_accept on;
}
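worker_rlimit_nofile applies to the worker processes, so the quickest sanity check is to read a worker's limits straight from /proc (assumes a Linux host with pgrep available):
# Effective open-file limit of one running nginx worker
grep "Max open files" /proc/$(pgrep -f "nginx: worker" | head -n1)/limits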
For 10,000+ domains, consider:
# Catch-all regex approach: one server block instead of thousands of hash entries
server {
    listen 80;
    server_name ~^(?<subdomain>.+)\.domain\.com$;

    location / {
        proxy_pass http://127.0.0.1:8011;
        proxy_set_header Host $host;
    }
}
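The named capture is exposed as the $subdomain variable inside that server block, so you can forward it to the backend, for example with proxy_set_header X-Subdomain $subdomain; (the header name is just an example), or feed it into a map to pick an upstream per tenant.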
Combine this with Lua scripting (OpenResty, or nginx built with ngx_lua and lua-resty-core) to load SSL certificates dynamically per hostname:
server {
    listen 443 ssl;

    # A fallback certificate is typically still required so nginx can start; paths are placeholders
    ssl_certificate     /etc/nginx/ssl/fallback.crt;
    ssl_certificate_key /etc/nginx/ssl/fallback.key;

    ssl_certificate_by_lua_block {
        local ssl = require "ngx.ssl"
        -- Dynamic certificate loading logic here: look up the requested name with
        -- ssl.server_name(), call ssl.clear_certs(), then install the matching
        -- certificate and key via ssl.set_der_cert() / ssl.set_der_priv_key()
    }
}
Add these to your monitoring:
# Check the configured hash directives and surface any "could not build" warnings
nginx -T 2>&1 | grep "server_names_hash"
# Sample log format for tracking
log_format vhost '$host $remote_addr [$time_local] '
                 '"$request" $status $body_bytes_sent';
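Wire the format up with an access_log directive, e.g. access_log /var/log/nginx/vhosts.log vhost; (the path is just an example); per-host request counts can then be tallied straight from that file.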
Stepping back to the underlying mechanics: the defaults are fine for a small setup, but on a multi-tenant platform where every user gets their own subdomain or custom domain they are quickly outgrown. Nginx matches the Host header against its server names through hash tables built at configuration load time, and two directives control how those tables are sized:
server_names_hash_bucket_size 64;    # size of each bucket; aligned to the CPU cache line, in practice a power of two
server_names_hash_max_size 1024;     # upper bound on the overall hash table size
Each virtual host's server_name is hashed into a bucket when the configuration is loaded. When too many names (or a single very long name) land in one bucket, nginx retries the build with a larger table up to max_size; if that still is not enough, it either falls back to a non-optimal hash with a warning or rejects the configuration outright.
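When that happens, nginx -t reports it directly; the message looks roughly like this (wording from memory; the numbers reflect your current settings):
nginx -t
nginx: [emerg] could not build server_names_hash, you should increase server_names_hash_bucket_size: 64
nginx: configuration file /etc/nginx/nginx.conf test failed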
For a system with thousands of domains, I recommend these settings:
server_names_hash_bucket_size 128;
server_names_hash_max_size 8192;
The bucket size should be:
- Minimum: long enough for the longest server_name plus ~32 bytes of per-entry overhead (count wildcard and regex names too)
- Practical choice: the next power of two above that, as in the worked example below
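For example, a hypothetical longest name of shop.european-customer-with-long-name.example-platform.com is 58 characters; 58 + 32 = 90, so 128 is the next power of two and the sensible setting.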
With 2000+ domains, the difference is noticeable:
# Before optimization
nginx -t                                # config test takes 3-5 seconds
service nginx reload                    # reload takes 2-3 seconds to complete (connections are not dropped, but the swap lags)
# After optimization
nginx -t -c /dev/shm/nginx_temp.conf    # test a staged copy of the config from tmpfs before swapping it in
service nginx reload                    # reload completes in under a second
For truly massive deployments (10,000+ domains):
# In the main nginx.conf
server_names_hash_bucket_size 256;
server_names_hash_max_size 32768;

# Unrelated to the server-name hash, but useful at this scale:
# proxy cache on tmpfs with a 128 MB shared-memory key zone
proxy_cache_path /dev/shm/nginx_cache levels=1:2 keys_zone=my_cache:128m;

worker_processes auto;
worker_rlimit_nofile 100000;
Create a small monitoring script to track the domain count against the configured hash parameters:
#!/bin/bash
# Count the individual names listed in server_name directives (adjust the path to your layout)
DOMAIN_COUNT=$(grep -rhE "^\s*server_name\s" /etc/nginx/conf.d/ | tr -d ';' \
  | awk '{ for (i = 2; i <= NF; i++) n++ } END { print n + 0 }')
# Bucket size as nginx actually sees it (strip the trailing semicolon)
BUCKET_SIZE=$(nginx -T 2>&1 | grep -m1 server_names_hash_bucket_size | awk '{print $2}' | tr -d ';')
echo "Domains: $DOMAIN_COUNT | Current bucket size: $BUCKET_SIZE"
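Run it after each onboarding batch (the script name below is arbitrary) and alert once the domain count creeps toward max_size / 1.5, per the safety margin above:
chmod +x check_vhost_hash.sh
./check_vhost_hash.sh
# Example output: Domains: 2137 | Current bucket size: 128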