After migrating from a single Apache instance to an nginx load-balanced setup with two Apache backends, we're observing duplicate POST requests for long-running operations like file uploads. The nginx logs show HTTP 499 (client closed connection) followed by a 200 success, while Apache logs reveal two successful 200 responses.
// Nginx logs
[17:17:47 +0200] "POST /upload HTTP/1.1" 499 0
[17:17:52 +0200] "POST /upload HTTP/1.1" 200 5641
// Apache logs
[17:17:37 +0200] "POST /upload HTTP/1.0" 200 9045
[17:17:47 +0200] "POST /upload HTTP/1.0" 200 20687
The issue stems from how slow POST requests interact with nginx's timeout and retry behavior:
- Client sends the POST request to nginx
- Nginx proxies it to one of the Apache backends
- Backend processing outlasts a client or nginx timeout
- The client gives up and closes its connection, which nginx logs as 499
- The request is then replayed: either the client/browser retries, or nginx itself re-sends the timed-out request to the other backend (before 1.9.13 the default proxy_next_upstream "error timeout" applied to POST as well; since then POST is only retried if non_idempotent is set)
Here are the critical nginx settings that need adjustment:
1. Increase Timeout Values
# In your nginx.conf or site configuration
proxy_read_timeout 300s; # Increased from 90s
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
keepalive_timeout 300s;
client_body_timeout 300s;
2. Disable Buffering for Uploads
location /upload {
    proxy_pass http://apache-backend;
    proxy_request_buffering off;   # stream the upload to Apache as it arrives
    proxy_buffering off;           # stream the response back without staging it
}
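One related setting that is easy to miss: nginx caps request bodies at 1 MB by default and rejects larger uploads with 413 before they ever reach the backend. A sketch; the 64m value is illustrative, not from our config:

```nginx
# client_max_body_size defaults to 1m; without raising it,
# 30MB+ uploads fail with "413 Request Entity Too Large"
# before Apache sees them.
client_max_body_size 64m;
```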
3. Configure Proper Retry Logic
# Note: proxy_next_upstream and related directives are not valid inside
# an upstream block; they belong in the http, server, or location context.
upstream apache-backend {
    ip_hash;
    server SERVER1 max_fails=3 fail_timeout=30s;
    server SERVER2 max_fails=3 fail_timeout=30s;
}
location /upload {
    proxy_pass http://apache-backend;
    # Never replay a timed-out POST on the other backend -- that replay is
    # exactly what duplicates the upload. (Also beware that
    # proxy_next_upstream_tries 0 means *unlimited* tries, not none.)
    proxy_next_upstream off;
}
Implement idempotency tokens to prevent duplicate processing:
// PHP example: generate a one-time token when rendering the form
session_start();
$upload_token = bin2hex(random_bytes(16));
$_SESSION['upload_token'] = $upload_token;
// In your form
<input type="hidden" name="upload_token" value="<?php echo $upload_token; ?>">
// Server-side validation: consume the token immediately, so a
// replayed request finds it already gone
if (!isset($_POST['upload_token'], $_SESSION['upload_token'])
    || !hash_equals($_SESSION['upload_token'], $_POST['upload_token'])) {
    http_response_code(409);
    die("Duplicate request detected");
}
unset($_SESSION['upload_token']);
Add these logging directives to track the issue:
log_format proxy_log '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time"';
access_log /var/log/nginx/proxy.log proxy_log;
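With logging in place, a quick scan can pair 499 entries with later 200s for the same URI. A sketch assuming nginx's default combined log format (field 7 is the URI, field 9 the status); the sample entries below are hypothetical:

```shell
# Build a sample access log in combined format (hypothetical client IP)
printf '%s\n' \
  '203.0.113.7 - - [01/Oct/2023:17:17:47 +0200] "POST /upload HTTP/1.1" 499 0' \
  '203.0.113.7 - - [01/Oct/2023:17:17:52 +0200] "POST /upload HTTP/1.1" 200 5641' \
  > access.log

# A 499 followed by a 200 for the same URI is the duplicate signature
awk '$9 == 499 {seen[$7]++}
     $9 == 200 && seen[$7] > 0 {print $7 ": suspected duplicate"}' access.log
# prints: /upload: suspected duplicate
```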
Consider enabling HTTP/2, whose multiplexed streams make stalled-looking connections and client-side reconnects less likely:
server {
    listen 443 ssl http2;
    # ... existing SSL configuration ...
    http2_recv_timeout 300s;  # obsolete since nginx 1.19.7; use client_header_timeout there
}
The issue primarily manifests with:
- Large file uploads (30MB+ in our config)
- Operations with backend processing time >10 seconds
- Requests that trigger complex backend operations (image processing, distribution)
In our investigation, three factors interacted to produce the duplicates:
- Nginx timeout settings: the default proxy_read_timeout of 60s (ours was set to 90s) is easily exceeded by these operations
- Client-side behavior: browsers retry when a connection appears stalled
- Buffering configuration: with request buffering enabled (the default), nginx reads the entire body before forwarding it, so the client waits even longer before seeing any response
In addition to the timeout, buffering, and retry settings shown above, two directives helped with large uploads:
# Larger in-memory buffer before request bodies spill to temp files
client_body_buffer_size 1M;
# Reuse client connections for more requests
keepalive_requests 1000;
For critical upload operations, we implemented these server-side validations (ip_hash keeps each client pinned to one backend, so file-based PHP sessions work here):
// PHP example of duplicate request prevention
session_start();

function is_duplicate_request($request_id) {
    if (!isset($_SESSION['last_upload'])) {
        $_SESSION['last_upload'] = $request_id;
        return false;
    }
    $is_duplicate = ($_SESSION['last_upload'] === $request_id);
    $_SESSION['last_upload'] = $request_id;
    return $is_duplicate;
}

// Caveat: hashing the file contents also rejects a legitimate re-upload
// of an identical file; combine with a form token if that matters.
$upload_id = md5_file($_FILES['file']['tmp_name']);
if (is_duplicate_request($upload_id)) {
    http_response_code(409);
    die('Duplicate upload detected');
}