When using `proxy_pass` with dynamic hostnames (like those pointing to auto-scaling cloud instances or containers), Nginx's default DNS caching behavior can become problematic. The web server resolves the DNS name only once, during startup or configuration reload, then caches the IP indefinitely.
```nginx
# Problematic traditional configuration
location /service {
    proxy_pass http://dynamic-backend.example.com:8080;
    # IP gets cached here!
}
```
Nginx maintains its own DNS resolver cache, separate from the operating system's. For hostnames in `proxy_pass`, this cache:

- Is refreshed only when the configuration is parsed (startup, `nginx -s reload`, or a full restart)
- Never updates while worker processes are serving traffic
- Has no built-in TTL enforcement for most versions
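The same one-shot resolution applies when the hostname sits inside an `upstream` block in open-source Nginx; a minimal illustration (the hostname is a placeholder):

```nginx
upstream backend {
    # Resolved once at configuration load; the IP is then fixed
    # until the next reload or restart
    server dynamic-backend.example.com:8080;
}
```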
The most reliable approach uses Nginx's `resolver` directive combined with a variable in `proxy_pass`:
```nginx
location /service {
    resolver 8.8.8.8 valid=10s;  # Use Google DNS with a 10s cache
    set $backend "http://dynamic-backend.example.com:8080";
    proxy_pass $backend;
}
```
Key components:

- `resolver` specifies the DNS server and cache duration
- The variable (`$backend`) forces Nginx to re-resolve at request time
- `valid=10s` controls the cache duration (adjust as needed)
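One caveat worth flagging before the full example: when `proxy_pass` contains a variable, Nginx skips its usual URI-replacement rules, so a location prefix is no longer stripped automatically. A minimal sketch of handling that yourself (hostname and paths are illustrative):

```nginx
location /service/ {
    resolver 8.8.8.8 valid=10s;
    set $backend "http://dynamic-backend.example.com:8080";
    # Strip the /service/ prefix manually; with a variable,
    # proxy_pass passes the (rewritten) URI through as-is
    rewrite ^/service/(.*)$ /$1 break;
    proxy_pass $backend;
}
```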
Here's a fully configured solution including error handling:
```nginx
http {
    # Global DNS settings (can be overridden in server/location)
    resolver 1.1.1.1 8.8.8.8 valid=30s;  # Cloudflare + Google DNS
    resolver_timeout 5s;

    server {
        listen 80;
        server_name myproxy.example.com;

        location /api {
            set $upstream "http://api-cluster.example.com:8080";
            proxy_pass $upstream;

            # Standard proxy settings
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_connect_timeout 5s;
            proxy_read_timeout 60s;

            # Handle DNS resolution failures
            proxy_next_upstream error timeout invalid_header;
            proxy_next_upstream_timeout 0;
            proxy_next_upstream_tries 3;
        }
    }
}
```
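If your backends are IPv4-only, it can also help to disable AAAA lookups so each re-resolution costs one DNS query instead of two; `ipv6=off` is a standard `resolver` parameter:

```nginx
# Skip AAAA (IPv6) lookups for IPv4-only backends
resolver 1.1.1.1 8.8.8.8 valid=30s ipv6=off;
resolver_timeout 5s;
```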
If you need more control, a couple of alternatives exist outside stock Nginx:

1. Using OpenResty with Lua (a sketch of keeping the shared dictionary populated follows this list):
```nginx
# Requires `lua_shared_dict dns_cache 1m;` in the http block
location / {
    set $upstream "";  # declare the variable before Lua assigns it
    access_by_lua_block {
        ngx.var.upstream = "http://" .. ngx.shared.dns_cache:get("backend") .. ":8080"
    }
    proxy_pass $upstream;
}
```
2. Tengine (an Nginx fork) with its `ngx_http_upstream_dynamic` module, which resolves upstream servers at run time:

```nginx
upstream dynamic_backend {
    dynamic_resolve fallback=stale fail_timeout=30s;
    server dynamic-backend.example.com:8080;
}

location / {
    proxy_pass http://dynamic_backend;
}
```
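For the OpenResty option, something has to keep `dns_cache` populated. A hedged sketch of a background refresher using OpenResty's bundled `resty.dns.resolver` (the key name `backend`, the nameserver, and the 10-second interval are all illustrative):

```nginx
# In the http block: lua_shared_dict dns_cache 1m;
init_worker_by_lua_block {
    local resolver = require "resty.dns.resolver"
    local function refresh(premature)
        if premature then return end
        local r = resolver:new{ nameservers = {"8.8.8.8"} }
        if r then
            local answers = r:query("dynamic-backend.example.com")
            if answers and answers[1] and answers[1].address then
                ngx.shared.dns_cache:set("backend", answers[1].address)
            end
        end
        ngx.timer.at(10, refresh)  -- re-resolve every 10 seconds
    end
    ngx.timer.at(0, refresh)
}
```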
A few additional notes:

- Nginx Plus (the commercial version) has more sophisticated DNS features
- Frequent DNS lookups may run into your DNS provider's rate limits
- Always test with `nginx -T` to verify the configuration
- Consider health checks for critical services
Add these to your log format to track resolution behavior; `$upstream_addr` records the IP each request was actually proxied to, so a stale cache shows up as an address that never changes:

```nginx
log_format proxy_log '$remote_addr - $upstream_addr [$time_local] '
                     '"$request" $status $body_bytes_sent';
access_log /var/log/nginx/proxy.log proxy_log;
```
When working with nginx's `proxy_pass` directive, you might encounter a frustrating behavior: nginx resolves the DNS name only once during startup or configuration reload, then caches the IP indefinitely. This becomes particularly problematic when:
- Working with dynamic DNS entries (like AWS ALB hostnames)
- Using containerized services with frequently changing IPs
- Implementing blue-green deployments
The problem occurs because nginx uses a static resolver that performs DNS lookups only during these events:
- Initial configuration load
- Configuration reload (`nginx -s reload`)
- Worker process restart
For nginx 1.1+ (recommended upgrade path), the proper solution involves:
```nginx
server {
    listen 80;
    server_name example.com;

    # Specify your DNS resolver (Google DNS shown)
    resolver 8.8.8.8 valid=10s;
    resolver_timeout 5s;

    set $backend "http://dynamic-backend.example.com:8888";

    location / {
        proxy_pass $backend;

        # Other proxy settings
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
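If the dynamic backend speaks HTTPS, the same pattern applies, but SNI usually has to be enabled explicitly so the backend's TLS endpoint sees the expected hostname; a sketch using the standard proxy_ssl directives (hostname is a placeholder):

```nginx
location / {
    set $backend "https://dynamic-backend.example.com";
    proxy_pass $backend;
    # Send the backend hostname via SNI and use it when verifying
    # the upstream certificate
    proxy_ssl_server_name on;
    proxy_ssl_name dynamic-backend.example.com;
}
```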
For nginx 0.7.x (as in your case), we need a different approach, since variables in `proxy_pass` weren't fully supported:
```nginx
server {
    listen 80;
    server_name example.com;

    # A resolver is still required for run-time lookups
    resolver 8.8.8.8;
    resolver_timeout 5s;

    location / {
        # Use a rewrite trick to force DNS resolution
        rewrite ^(.*)$ "http://dynamic-backend.example.com:8888$1" break;
        proxy_pass $uri;
        proxy_set_header Host $host;
    }
}
```
Key considerations when implementing DNS resolution changes:

- Always specify `resolver` with a `valid` parameter, which controls the cache duration
- Monitor the performance impact: frequent DNS lookups add latency
- Consider implementing local DNS caching (dnsmasq) for heavy-traffic scenarios, as sketched after this list
- For production, always test with `nginx -t` before reloading
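For the dnsmasq option, Nginx just needs to point at the local cache; a sketch assuming dnsmasq listens on 127.0.0.1:

```nginx
# The local cache absorbs the query volume; keep valid short so
# Nginx still picks up changes quickly
resolver 127.0.0.1 valid=10s;
resolver_timeout 2s;
```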
Here's how we handle AWS ALB hostnames that change during deployments:
```nginx
http {
    resolver 169.254.169.253 valid=5s;  # AWS-provided Route 53 resolver

    server {
        listen 80;
        server_name api.myapp.com;

        set $alb_host "my-alb-123456789.us-west-2.elb.amazonaws.com";

        location / {
            proxy_pass http://$alb_host;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
```
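One trade-off with this variable-based approach: it bypasses named upstream blocks, so `keepalive` connection pooling is unavailable and `proxy_http_version 1.1` alone won't reuse backend connections. On Nginx Plus, the `resolve` parameter re-resolves inside an upstream block while keeping pooling; a sketch (Plus-only feature, names illustrative):

```nginx
resolver 169.254.169.253 valid=5s;

upstream alb_backend {
    zone alb_backend 64k;  # shared memory zone required by resolve
    server my-alb-123456789.us-west-2.elb.amazonaws.com:80 resolve;
    keepalive 16;
}
```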