Understanding Nginx proxy_send_timeout: A Deep Dive into Upstream Request Transmission Timeout

When Nginx acts as a reverse proxy, three critical timeout directives govern its interaction with backend servers:

proxy_connect_timeout 60s;   # time allowed to establish the TCP connection to the upstream
proxy_send_timeout    60s;   # maximum gap between two successive writes of the request to the upstream
proxy_read_timeout    60s;   # maximum gap between two successive reads of the response from the upstream

While most developers understand proxy_connect_timeout (establishing the TCP connection) and proxy_read_timeout (waiting for the response), proxy_send_timeout causes frequent confusion. It measures the interval between two successive write operations while transmitting the request to the upstream server; it is not a limit on how long the whole request may take to send.

Consider a large file upload through Nginx to a backend application:

location /upload {
    proxy_pass http://backend;
    proxy_send_timeout 300s;
    client_max_body_size 100M;
}

Here's what happens behind the scenes:

  1. Nginx establishes the connection (governed by proxy_connect_timeout)
  2. Nginx transmits the request body in chunks (monitored by proxy_send_timeout)
  3. If 300s pass without a successful write to the upstream, Nginx aborts the request and closes the connection
  4. Once the request has been fully transmitted, proxy_read_timeout takes over

Common cases where proxy_send_timeout matters:

  • Large POST/PUT payloads (file uploads, data exports)
  • Slow upstream servers or congested links that stall writes
  • Chunked transfer encoding / streaming uploads (see the sketch below)
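
For the chunked / streaming case in particular, request buffering changes the picture: with proxy_request_buffering off, Nginx forwards the body to the upstream as it arrives from the client, so writes to the backend can stall whenever either side is slow. A minimal sketch (the /stream-upload path and backend upstream name are illustrative):

location /stream-upload {
    proxy_pass http://backend;
    proxy_http_version 1.1;        # needed to pass a chunked request body unbuffered
    proxy_request_buffering off;   # forward the body as it arrives instead of buffering it first
    proxy_send_timeout 300s;       # maximum gap between successive writes to the backend
    client_max_body_size 0;        # illustrative: no size limit for streamed uploads
}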

Best practices for production environments:

# For API gateways
proxy_send_timeout 30s;

# For file upload services  
proxy_send_timeout 600s;

# When using keepalive
proxy_send_timeout 60s;
keepalive_timeout 75s;
keepalive_requests 100;
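
If the goal is to keep idle connections to the upstream alive (rather than client-side keepalive), the upstream block also needs a keepalive pool, and proxied requests must use HTTP/1.1 with the Connection header cleared. A sketch following the nginx documentation (the backend name and pool size are illustrative):

upstream backend {
    server 10.0.0.5:8080;
    keepalive 32;                        # idle connections cached per worker process
}

server {
    location /api/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # clear the "close" header nginx would otherwise send
        proxy_send_timeout 60s;
    }
}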

For troubleshooting, enable debug-level logging with the error_log directive (this requires an nginx binary built with --with-debug):

error_log /var/log/nginx/error.log debug;

Sample error message when proxy_send_timeout triggers:

2023/01/01 12:00:00 [error] 1234#1234: *5678 upstream timed out 
(110: Connection timed out) while sending request to upstream, 
client: 192.168.1.100, server: example.com, request: "POST /upload HTTP/1.1", 
upstream: "http://10.0.0.5:8080/upload", host: "example.com"

For complex scenarios with multiple upstreams:

upstream app_servers {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    location / {
        proxy_pass http://app_servers;
        proxy_send_timeout 45s;
        
        # Buffer the request and response so slow clients don't stall the upstream
        proxy_request_buffering on;   # read the whole client body before forwarding it
        proxy_buffering on;           # buffer the upstream response before relaying it
        proxy_buffer_size 4k;
        proxy_buffers 8 16k;
    }
}
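
Whether a send timeout on one member of app_servers fails over to the other depends on proxy_next_upstream. By default nginx retries on errors and timeouts, but non-idempotent requests such as POST are not retried once they have been sent to a server unless that is explicitly allowed. A sketch of making this behaviour explicit (the values are illustrative):

server {
    location / {
        proxy_pass http://app_servers;
        proxy_send_timeout 45s;

        # error and timeout are the defaults; non_idempotent additionally
        # allows retrying POST/PATCH requests that already reached a server
        proxy_next_upstream error timeout non_idempotent;
        proxy_next_upstream_tries 2;   # stop after two attempts
    }
}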

To recap before looking at the lower-level behaviour: of the three timeout directives, proxy_connect_timeout and proxy_read_timeout are relatively straightforward, while proxy_send_timeout is the one that most often trips developers up.

Unlike proxy_read_timeout, which monitors delays in the response, proxy_send_timeout specifically governs:

  • The interval between two successive write operations while sending the request
  • How long Nginx waits for the upstream socket to become writable again, for example when the backend stops reading and the TCP send buffer fills up
  • How quickly a stalled chunked upload to a slow backend is aborted

Consider this common scenario with file uploads:


location /upload {
    proxy_pass http://backend;
    proxy_send_timeout 300s;
    proxy_read_timeout 30s;
    client_max_body_size 100M;
}

Here, we've configured:

  • Up to 5 minutes (300s) between successive writes while transmitting large files to the backend
  • Up to 30 seconds between reads of the response once the request has been sent
  • A 100MB maximum upload size

At the network level, Nginx arms a timer of proxy_send_timeout whenever it has to wait for the upstream socket to become writable again, so the directive effectively bounds:

  1. The interval between successive successful write() calls on the upstream connection
  2. How long the kernel's TCP send buffer may stay full (the backend not draining it) before Nginx gives up
  3. Stalls caused by network congestion shrinking the TCP send window

When debugging timeout issues:


# Enable debug logging
error_log /var/log/nginx/debug.log debug;

# Sample log output:
# 2023/01/01 12:00:00 [debug] 1234#1234: *1 http upstream send timed out
# 2023/01/01 12:00:00 [debug] 1234#1234: *1 finalize http upstream request: 504
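
Debug logging is very verbose, so it can be scoped: error_log is also valid at the server and location level, which lets you capture debug output only for the path that is timing out (the /upload location is illustrative):

server {
    error_log /var/log/nginx/error.log warn;               # normal logging elsewhere

    location /upload {
        error_log /var/log/nginx/upload-debug.log debug;   # debug output only for uploads
        proxy_pass http://backend;
        proxy_send_timeout 300s;
    }
}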

For different use cases:

Scenario               Recommended value
API proxying           60s
File uploads           300s
Streaming services     very large (e.g. 1d, or 7d as in the WebSocket example below)
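
Put together in one server block, those recommendations might look like this (the location prefixes and the backend upstream name are illustrative):

server {
    location /api/ {
        proxy_pass http://backend;
        proxy_send_timeout 60s;      # API proxying
    }

    location /upload/ {
        proxy_pass http://backend;
        proxy_send_timeout 300s;     # large file uploads
        client_max_body_size 100M;
    }

    location /stream/ {
        proxy_pass http://backend;
        proxy_send_timeout 1d;       # long-lived streaming connections
        proxy_read_timeout 1d;
    }
}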

For WebSocket applications:


location /ws/ {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_send_timeout 7d; # Long-lived connection
    proxy_read_timeout 7d;
}
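
For completeness, the nginx documentation recommends deriving the Connection header from the client's Upgrade header with a map block, so that plain requests through the same location are not forced to upgrade. A sketch (the map goes at the http level; the /ws/ path and backend name are illustrative):

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    location /ws/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_send_timeout 7d;
        proxy_read_timeout 7d;
    }
}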