How to Force Nginx Reverse Proxy Cache Invalidation for Upstream Servers


When running nginx as a reverse proxy in front of other nginx instances (as in your Proxmox container setup), cache invalidation becomes critical for serving fresh static content. The core issue is that nginx's proxy cache never checks the upstream for changed files on its own; it keeps serving a cached response until that entry's configured validity expires.

Your current configuration has several characteristics that explain the stale content issue:

proxy_cache_valid 12h;
expires 30d;
proxy_cache_use_stale error timeout invalid_header updating;

These directives tell nginx to:

  • Consider cached responses valid for 12 hours before going back to the upstream
  • Tell clients they may cache responses for 30 days, so browsers can hold on to stale copies even after the proxy cache is cleared
  • Serve stale cache entries when the upstream errors out, times out, returns an invalid header, or while an entry is being updated

Here are proven approaches to solve this:

1. Cache Bypass with Custom Headers

Modify your container's nginx config to send Cache-Control headers. The proxy honors Cache-Control and Expires from the upstream in preference to its own proxy_cache_valid setting (unless proxy_ignore_headers says otherwise), so this effectively disables caching for that location:

location / {
    root /var/www;
    expires -1;
    add_header Cache-Control "no-store, no-cache, must-revalidate";
    add_header Pragma "no-cache";
    # X-Cache-Status is omitted here: $upstream_cache_status only has a value on the proxy
}
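
To confirm the container is actually emitting these headers, query it directly from the proxy host. This assumes the container is reachable as my-domain.local, as in the proxy_pass lines used elsewhere, and the path is just an example:

curl -sI http://my-domain.local/index.html | grep -iE 'cache-control|pragma|expires'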

2. Selective Cache Purging

Add a purge location to your reverse proxy config. Note that the proxy_cache_purge directive is not part of stock open-source nginx; it comes with nginx Plus or the third-party ngx_cache_purge module:

location ~ /purge(/.*) {
    # The key given here must match proxy_cache_key exactly; the default key is
    # $scheme$proxy_host$request_uri, so set proxy_cache_key "$scheme://$host$request_uri";
    # in the cached location for this to line up.
    proxy_cache_purge cache $scheme://$host$1;
    allow 127.0.0.1;
    allow 192.168.1.0/24;
    deny all;
}

With the location above, a purge is a normal GET to the /purge-prefixed URL from an allowed address:

curl http://my-domain.com/purge/path/to/file

3. Versioned Asset URLs

For static assets, let both the proxy and browsers cache aggressively, and change the URL (for example with a version query string) whenever the content changes, as shown in the HTML snippet after the config below:

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires 1y;
    add_header Cache-Control "public";
    # Serve a local copy if one exists; otherwise fall through to the cached proxy
    try_files $uri $uri/ @proxy;
}

location @proxy {
    proxy_pass http://my-domain.local:80;
    proxy_cache cache;
    proxy_cache_valid 200 302 12h;
    proxy_cache_valid 404 1m;
}
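
The version string itself lives in the HTML that references the asset. Because $request_uri (and therefore the cache key) includes the query string, each new version becomes a separate cache entry on the proxy and a fresh download for browsers. A minimal illustration with a made-up version value:

<link rel="stylesheet" href="/styles.css?v=20240601">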

For a more automated approach using filesystem monitoring:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m 
    inactive=60m use_temp_path=off;

server {
    # ... existing config ...
    
    location / {
        proxy_pass http://my-domain.local:80;
        proxy_cache cache;
        # Explicit key so the /invalidate endpoint below can reproduce it
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_cache_bypass $http_cache_purge;
        proxy_cache_lock on;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_background_update on;
        proxy_cache_revalidate on;
    }

    # Invalidation endpoint (proxy_cache_purge again requires nginx Plus or ngx_cache_purge);
    # "internal" would reject the watcher's HTTP requests, so restrict by address instead
    location /invalidate {
        allow 127.0.0.1;
        allow 192.168.1.0/24;
        deny all;
        proxy_cache_purge cache $arg_key;
    }
}

Combine this with inotifywait on the container, pointing the invalidation requests at the reverse proxy rather than at localhost:

# Strip the /var/www prefix so the key matches proxy_cache_key on the proxy
# (assumes the proxy answers for my-domain.com and is reachable from the container)
inotifywait -m -r -e modify,create,delete --format '%w%f' /var/www | while read -r changed; do
    curl -s "http://my-domain.com/invalidate?key=http://my-domain.com${changed#/var/www}" > /dev/null
done
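
On Debian/Ubuntu-based containers, inotifywait comes from the inotify-tools package. A quick, hedged way to get the watcher running (the script path is hypothetical; a systemd unit would be the more robust choice):

apt-get install -y inotify-tools
nohup /usr/local/bin/cache-watch.sh > /var/log/cache-watch.log 2>&1 &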

Add these headers to monitor cache status:

add_header X-Cache-Status $upstream_cache_status;
# nginx exposes no built-in variable for the computed cache key, so echo the same expression you configured
add_header X-Cache-Key "$scheme://$host$request_uri";
add_header X-Upstream-Last-Modified $upstream_http_last_modified;

This will help you verify whether requests are being served from cache or upstream.
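
A header-only request then shows where a response came from; $upstream_cache_status reports MISS, HIT, EXPIRED, BYPASS, STALE, UPDATING, or REVALIDATED (the path below is just an example):

curl -sI http://my-domain.com/css/site.css | grep -i '^x-cache'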

For your specific Proxmox setup, I recommend:

  1. Keep proxy_cache enabled but reduce the cache duration
  2. Implement versioned URLs for static assets
  3. Set up a purge mechanism for immediate invalidation
  4. Monitor cache behavior with the headers above

Putting those pieces together:

# Updated proxy.conf snippet
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m 
    inactive=1h max_size=700m use_temp_path=off;

# Updated server block
location / {
    proxy_pass http://my-domain.local:80;
    proxy_cache cache;
    proxy_cache_key "$scheme://$host$request_uri";
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;
    proxy_cache_bypass $http_cache_purge;
    add_header X-Cache-Status $upstream_cache_status;
}
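
If the cache ever gets badly out of sync, the blunt fallback you already use, wiping /var/cache/nginx/, still has its place; stop nginx first so the keys zone is rebuilt from the empty directory (assuming a systemd-managed service):

systemctl stop nginx
rm -rf /var/cache/nginx/*
systemctl start nginx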

When implementing an nginx reverse proxy setup where content originates from upstream servers (like Proxmox containers in this case), cache invalidation becomes critical for serving fresh content. The fundamental issue is that nginx's reverse proxy cache doesn't automatically detect file changes on upstream servers.

Your current workarounds (clearing /var/cache/nginx/ or disabling the cache) work because they force nginx to re-fetch content from the upstream, but neither is sustainable in production.

1. Cache Purging with Nginx Plus or Open Source Solutions

nginx Plus ships the proxy_cache_purge directive natively; for open-source nginx, the third-party ngx_cache_purge module provides the same directive:

location ~ /purge(/.*) {
    proxy_cache_purge cache "$scheme://$host$1";
    allow 127.0.0.1;
    allow container_ip;   # replace with your container's address
    deny all;
}
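
With the module loaded, purging a single object is a plain GET to the purge location from an allowed address (the path is only an example):

curl http://my-domain.com/purge/css/site.css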

2. Cache-Control Headers from Upstream

Configure your container's nginx to send explicit Cache-Control headers and validators (ETag, Last-Modified), so that clients and the proxy can revalidate instead of re-downloading:

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires 30d;
    add_header Cache-Control "public, must-revalidate";
    # etag is on by default for static files; stated here for clarity.
    # X-Cache-Status is omitted: $upstream_cache_status only has a value on the proxy.
    etag on;
}

3. Versioned File References

Implement cache busting through file versioning:

# In your HTML templates:
<link rel="stylesheet" href="/styles.css?v=12345678">
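
How the version value is generated is up to your deploy step; one hedged option is to derive it from the file's content hash so it only changes when the file does (the paths and sed pattern here are purely illustrative):

# Compute a short content hash and substitute it into the page at deploy time
v=$(md5sum /var/www/styles.css | cut -c1-8)
sed -i "s/styles\.css?v=[0-9a-f]*/styles.css?v=${v}/" /var/www/index.html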

4. Proxy Cache Bypass

Add conditions that bypass the cache on demand. Because proxy_no_cache is not set, a bypassed request still stores the fresh upstream response, so sending the header effectively refreshes that single URL (see the curl example after the block):

location / {
    proxy_pass http://my-domain.local:80/;
    proxy_cache cache;
    proxy_cache_bypass $http_cache_purge;
    proxy_cache_valid 200 12h;
    proxy_cache_use_stale error timeout invalid_header updating;
}
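
$http_cache_purge corresponds to a request header named Cache-Purge, so refreshing one URL looks like the following; note that anyone who can send that header can force upstream fetches, so consider restricting it (the path is an example):

curl -s -H "Cache-Purge: 1" http://my-domain.com/css/site.css > /dev/null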

Here's a complete example combining these techniques:

# Reverse proxy configuration
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m inactive=60m use_temp_path=off;

server {
    listen 80;
    server_name .my-domain.com;
    
    location / {
        proxy_pass http://my-domain.local:80;
        proxy_cache my_cache;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_cache_valid 200 301 302 12h;
        proxy_cache_bypass $http_cache_purge;
        add_header X-Proxy-Cache $upstream_cache_status;
        
        # Pass through headers that might affect caching
        proxy_set_header If-None-Match $http_if_none_match;
        proxy_set_header If-Modified-Since $http_if_modified_since;
    }
    
    location ~ /purge(/.*) {
        allow 127.0.0.1;
        allow container_ip;   # replace with your container's address
        deny all;
        proxy_cache_purge my_cache "$scheme://$host$1";
    }
}
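
A quick end-to-end check from an allowed address, using the X-Proxy-Cache header added above (the path is just an example):

curl -sI http://my-domain.com/css/site.css | grep -i x-proxy-cache   # MISS: fetched from upstream and stored
curl -sI http://my-domain.com/css/site.css | grep -i x-proxy-cache   # HIT: served from the cache
curl -s http://my-domain.com/purge/css/site.css > /dev/null          # purge the entry
curl -sI http://my-domain.com/css/site.css | grep -i x-proxy-cache   # MISS again after the purge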

Add these headers to monitor cache behavior:

add_header X-Cache-Status $upstream_cache_status;
add_header X-Cache-Key "$scheme://$host$request_uri";

For static content that changes infrequently, consider:

  • Setting longer cache times with proper versioning
  • Implementing a build process that changes file names when content changes
  • Using cache purge API endpoints for critical updates