The key symptom here is that Nginx terminates connections prematurely during large file transfers (specifically JPEG images in this case), while smaller files transfer successfully. The wget output shows multiple 206 Partial Content responses and retries, indicating the connection keeps dropping during transfer.
The main suspicious parameters in the current Nginx configuration are:
keepalive_timeout 0; # Disables keep-alive connections entirely
tcp_nodelay on; # Good for small packets but may affect large transfers
After analyzing similar cases, these are the most likely causes:
- Insufficient proxy buffer settings
- Connection keepalive being disabled (keepalive_timeout 0)
- Potential timeout conflicts between Nginx and upstream Apache
- Missing proxy_temp_path and proxy_max_temp_file_size settings for spooling large responses to disk
Update your Nginx configuration with these parameters:
http {
    proxy_buffering on;
    proxy_buffer_size 16k;
    proxy_buffers 64 16k;
    proxy_busy_buffers_size 64k;
    proxy_temp_path /var/nginx/proxy_temp;
    proxy_max_temp_file_size 1024m;
    keepalive_timeout 65;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay off; # Changed from 'on' for large files
    # Existing configurations...
}
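After editing, validate the configuration and reload Nginx so the new buffer settings take effect; the commands below are the standard ones and assume the nginx binary is in your PATH:
# Check for syntax errors before reloading
nginx -t
# Reload worker processes without dropping active connections
nginx -s reload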
After applying the changes, test with curl to verify the file downloads in full:
curl -v -o /dev/null http://www.site.com/images/theme/front/clean.jpg
Look for a 200 response with a Content-Length matching the file size and a transfer that completes without stalling or reconnecting.
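For a quick numeric check instead of reading the verbose output, curl's --write-out variables report the status code and the bytes actually received (same sample URL as above):
curl -s -o /dev/null -w 'status=%{http_code} bytes=%{size_download} time=%{time_total}s\n' http://www.site.com/images/theme/front/clean.jpg
# A healthy transfer shows status=200 and a byte count equal to the file's size on disk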
If issues persist, consider adding these debug directives temporarily:
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_send_timeout 300;
error_log /var/log/nginx/debug.log debug;
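With debug logging enabled, the entries to look for are the upstream errors Nginx records when Apache times out or closes the connection mid-response; a grep narrows them down (log path matches the error_log directive above):
grep -iE "upstream timed out|prematurely closed" /var/log/nginx/debug.log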
For extremely large files, consider using X-Accel-Redirect:
location /protected_files/ {
    internal;
    alias /path/to/files/;
}
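For X-Accel-Redirect to help, the backend has to emit the header; the internal location above only tells Nginx where the files live on disk. A minimal sketch of the flow, with a hypothetical front-end path and backend address (adjust both to your setup):
location /downloads/ {
    # Hypothetical path and upstream; point proxy_pass at your Apache instance
    proxy_pass http://127.0.0.1:8080;
    # The application replies with, for example:
    #   X-Accel-Redirect: /protected_files/clean.jpg
    # Nginx then discards the proxied body and serves /path/to/files/clean.jpg
    # itself via the internal location, using sendfile.
}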
When serving larger image files (typically above 30KB), Nginx intermittently terminates connections before completing the transfer. This manifests in both command-line tools and browsers, while smaller files transfer normally. The key observations:
$ wget -O /dev/null http://example.com/large.jpg
...
26% [===============> ] 24,291 --.-K/s in 8.7s
2012-07-11 21:37:12 (2.74 KB/s) - Connection closed at byte 24291
The configuration contains several problematic directives that combine to create this behavior:
- keepalive_timeout 0, which disables HTTP keep-alive entirely
- proxy_cache settings without a proper buffering configuration
- Missing proxy_buffering and proxy_buffer_size directives
Here's the corrected server block configuration:
server {
    listen 123.234.123.234:80;
    server_name site.com www.site.com;

    # Buffer and timeout settings
    proxy_buffering on;
    proxy_buffer_size 16k;
    proxy_buffers 64 16k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    keepalive_timeout 65;
    send_timeout 600;

    location / {
        proxy_pass http://123.234.123.234:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Cache settings
        proxy_cache ino;
        proxy_cache_valid 12h;
        proxy_cache_lock on;
        proxy_cache_use_stale error timeout updating;
    }

    location ~* \.(jpg|jpeg|png|gif|ico)$ {
        expires 30d;
        add_header Cache-Control "public";
        try_files $uri @fallback;
    }

    # Named location referenced by try_files above; without it the image
    # location is incomplete. This assumes misses should be proxied to the
    # same Apache backend.
    location @fallback {
        proxy_pass http://123.234.123.234:8080;
    }
}
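To confirm during testing whether an image is served from the proxy cache or fetched from Apache, you can temporarily expose the cache status as a response header; this directive is not part of the original configuration and can be removed afterwards:
# Add inside the server block while debugging
add_header X-Cache-Status $upstream_cache_status;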
After applying these changes, verify the fix with:
ab -n 100 -c 10 http://example.com/large.jpg
curl -v -o /dev/null http://example.com/large.jpg
Key metrics to check:
- Complete file transfer without interruption
- Consistent transfer speeds
- A single HTTP 200 response per request (repeated 206 Partial Content responses mean the client is resuming after dropped connections)
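ab prints aggregate counters that make truncated transfers easy to spot; after a run of the command above, check the failure-related lines (any non-zero value means incomplete or errored responses):
ab -n 100 -c 10 http://example.com/large.jpg | grep -E "Failed requests|Write errors|Non-2xx"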
For high-traffic servers, consider these additional optimizations:
# In http context
proxy_max_temp_file_size 1024m;
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_send_timeout 300;
# Kernel tuning
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
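These sysctl values raise the kernel's socket buffer limits and are not Nginx directives; a minimal way to apply them on a typical Linux host, assuming root access:
# Add the four lines above to /etc/sysctl.conf, then load them without a reboot
sysctl -p
# Or set a single value immediately for a quick test
sysctl -w net.core.rmem_max=16777216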