How to Prevent Nginx from Retrying PUT/POST Requests on Upstream Timeout



When working with Nginx as a load balancer, one critical behavior to understand is how it handles upstream timeouts. By default, Nginx retries failed requests (including timeouts) on the next available upstream server. Since version 1.9.13 it no longer retries methods it considers non-idempotent (POST, LOCK, PATCH) unless you explicitly add the non_idempotent flag to proxy_next_upstream - but PUT and DELETE count as idempotent under the HTTP specification, so they are still retried by default.

The fundamental issue is that retrying a write can apply it twice. POST is non-idempotent by definition, and even PUT - idempotent on paper - can cause duplicate processing when a retry hits a backend that already partially handled the first attempt. That is exactly what we want to avoid in most cases.
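To make the danger concrete, here is a minimal Python sketch (the order store and handler names are hypothetical, not part of Nginx): a retry after a lost response applies the same POST twice, while a keyed PUT ends up in the same state no matter how often it runs.

```python
# Minimal sketch: why retrying a POST duplicates data while a keyed PUT
# does not. The "backend" applies the write but the response is lost
# (simulated timeout), so the caller retries - exactly what a
# proxy-level retry does.

orders = []                 # POST target: append-only collection
profile = {}                # PUT target: keyed resource

def post_order(item):
    orders.append(item)     # side effect happens...
    raise TimeoutError      # ...but the response never reaches the client

def put_profile(key, value):
    profile[key] = value    # same outcome no matter how often it runs
    raise TimeoutError

def with_retry(call, *args, tries=2):
    for _ in range(tries):
        try:
            return call(*args)
        except TimeoutError:
            pass            # retry, as a proxy would

with_retry(post_order, "widget")
with_retry(put_profile, "name", "alice")

print(len(orders))          # 2 - the order was created twice
print(profile)              # {'name': 'alice'} - unchanged by the retry
```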

In your current configuration:

upstream mash {
    ip_hash;
    server 127.0.0.1:8081;
    server 192.168.0.11:8081;
}

Nginx will automatically retry timed-out requests on the next server in the upstream block. This behavior is controlled by:

proxy_next_upstream error timeout;   # the default value
proxy_next_upstream_timeout 0;       # default: no time limit across retries

We need to modify this behavior so that only safe methods (GET, HEAD) are retried. Here's the corrected configuration:

upstream mash {
    ip_hash;
    server 127.0.0.1:8081;
    server 192.168.0.11:8081;
}

server {
    ...
    location / {
        proxy_pass http://mash/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        
        # Critical configuration changes:
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_next_upstream_tries 3;
        proxy_next_upstream_timeout 10s;
        
        # Only retry GET/HEAD requests: hand every other method off to a
        # named location where retries are disabled (proxy_next_upstream
        # does not accept variables, so an internal redirect is used).
        error_page 418 = @no_retry;
        if ($request_method !~ ^(GET|HEAD)$ ) {
            return 418;
        }
    }

    location @no_retry {
        proxy_pass http://mash;
        proxy_next_upstream off;   # fail immediately instead of retrying
    }
}

For more complex scenarios, you can route POST/PUT requests through a named location that disables retries, and use error_page there to return a clear error to the client:

location / {
    proxy_pass http://mash/;
    error_page 418 = @no_retry;
    if ($request_method ~ ^(POST|PUT)$ ) {
        return 418;
    }
}

location @no_retry {
    proxy_pass http://mash;
    proxy_next_upstream off;           # fail on the first upstream error
    proxy_intercept_errors on;
    error_page 502 503 504 = @failed;  # turn gateway errors into a clear 503
}

location @failed {
    return 503 "Service unavailable - non-retryable request failed";
}

To verify your setup is working correctly, you can use this test procedure:

  1. Configure one upstream server to intentionally time out (add a sleep in your application)
  2. Send POST/PUT requests and verify they fail immediately
  3. Send GET requests and verify they are retried on other servers

A few additional tips:

  • Always test timeout behavior with real-world request sizes
  • Consider client-side retry logic for critical POST operations
  • Monitor your 503 errors to adjust timeout thresholds
  • The ip_hash method pins each client to one server, so failovers change which server a client lands on - consider testing with different load balancing methods

When working with Nginx load balancing, a critical distinction exists between HTTP methods. GET requests are safe and idempotent - repeating them causes no side effects. POST is non-idempotent, and even nominally idempotent methods like PUT can lead to duplicate data creation or unintended state changes when retried against a partially failed backend.

Before version 1.9.13, Nginx retried failed requests (including timeouts) on the next available upstream server for all HTTP methods; since then, POST, LOCK and PATCH are excluded unless the non_idempotent flag is set, but PUT and DELETE are still retried by default. Retries become problematic when:

  • Processing payments (duplicate charges)
  • Creating database records (duplicate entries)
  • Uploading files (partial/corrupt data)

The key directive is proxy_next_upstream, which controls when Nginx will attempt the next upstream server. Here's how to modify it:

location / {
    proxy_pass http://mash/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
    proxy_next_upstream_tries 3;
    proxy_next_upstream_timeout 10s;
}

Since version 1.9.13, Nginx natively excludes the methods it considers non-idempotent (POST, LOCK, PATCH) from retries unless proxy_next_upstream includes the non_idempotent flag. For methods Nginx still retries (such as PUT and DELETE), you can combine a map with an internal redirect - note that proxy_next_upstream itself does not accept variables:

map $request_method $non_idempotent {
    default 0;
    POST    1;
    PUT     1;
    PATCH   1;
    DELETE  1;
}

server {
    ...
    location / {
        proxy_pass http://mash/;
        proxy_next_upstream error timeout;
        error_page 418 = @no_retry;
        if ($non_idempotent) {
            return 418;
        }
    }

    location @no_retry {
        proxy_pass http://mash;
        proxy_next_upstream off;
    }
}

For critical operations, consider implementing:

  1. Idempotency keys - Unique identifiers processed only once
  2. Database constraints - Prevent duplicate records at storage layer
  3. Two-phase commits - Verify operation completion before finalizing
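The first technique can be sketched in a few lines. This is a minimal in-memory illustration (the function and key names are made up for the example); a real implementation would persist the keys in a database or cache shared by all backends.

```python
# Minimal sketch of idempotency keys: each key is processed only once,
# so a retry (by the client or a proxy) replays the stored result
# instead of re-running the side effect.

processed = {}          # key -> stored result; a real system persists this

def create_order(idempotency_key, item):
    if idempotency_key in processed:            # retry detected:
        return processed[idempotency_key]       # replay the first result
    order_id = f"order-{len(processed) + 1}"    # the actual side effect
    processed[idempotency_key] = order_id
    return order_id

first = create_order("key-abc", "widget")
retry = create_order("key-abc", "widget")       # e.g. client retried a 503

print(first, retry)          # order-1 order-1 - no duplicate order
print(len(processed))        # 1
```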

Use this curl command to verify behavior for POST requests:

curl -X POST -d "test=data" http://your-nginx-server/endpoint \
  -H "Content-Type: application/x-www-form-urlencoded"

Monitor Nginx logs with:

tail -f /var/log/nginx/error.log | grep "upstream timed out"

Disabling retries for POST/PUT requests means:

  • Fewer duplicate operations
  • Potentially more failed requests visible to clients
  • Requires proper client-side error handling
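A client that pairs those visible failures with an idempotency key can retry safely on its own. The sketch below is a hedged illustration - the send function is a placeholder for the real HTTP call - showing a bounded retry on 503 that reuses one key across attempts so the server can deduplicate the write.

```python
import time
import uuid

def post_with_retry(send, payload, max_tries=3, backoff=0.01):
    """Retry a failed POST safely by reusing one idempotency key.

    `send(payload, key)` stands in for the real HTTP call and returns
    (status, body); reusing the key lets the server deduplicate.
    """
    key = str(uuid.uuid4())                   # one key for all attempts
    for attempt in range(max_tries):
        status, body = send(payload, key)
        if status != 503:
            return status, body
        time.sleep(backoff * (2 ** attempt))  # exponential backoff
    return status, body

# Fake transport for demonstration: fails once with 503, then succeeds.
calls = []
def fake_send(payload, key):
    calls.append(key)
    return (503, "") if len(calls) == 1 else (200, "created")

status, body = post_with_retry(fake_send, {"item": "widget"})
print(status, body)                 # 200 created
print(len(set(calls)))              # 1 - every attempt reused one key
```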