Fixing Nginx Large File Download Issues: Configuration and Optimization for Multi-GB Files



Recently while setting up a file distribution server, I encountered a peculiar issue where Nginx would simply hang when clients tried to download files larger than 1GB. The connection would establish, but no data would flow. What made this particularly puzzling was that:

  • Smaller files (under 100MB) downloaded perfectly
  • The issue only affected new download attempts
  • Restarting Nginx would temporarily fix one download

After extensive testing, I discovered this wasn't a simple timeout or buffer issue. The key observations were:

# Monitoring active connections during the stall
ss -tnp | grep nginx
ESTAB   0       0          192.168.1.100:80        192.168.1.200:54321

The connection remained in ESTABLISHED state but no data transfer occurred. Neither Nginx nor the client would terminate the connection.
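
For more detail than the basic ss output, the socket's internal TCP state (send queue, window scaling, congestion window) can be inspected directly; a quick sketch, assuming the server listens on port 80:

# Show socket memory and TCP internals (wscale, cwnd, send queue) for port 80
ss -tmin state established '( sport = :80 )'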

The root cause appears to be Nginx's default buffering behavior combined with certain OS-level TCP settings. Here's what actually happens:

  1. Nginx reads the file into memory buffers
  2. Default buffer sizes aren't optimized for multi-GB transfers
  3. TCP window scaling may not negotiate properly (see the packet capture sketch after this list)
  4. The connection deadlocks waiting for buffer space
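
Window scaling is only negotiated during the SYN/SYN-ACK handshake, so capturing the first packets of a download shows whether the wscale option is actually being exchanged. A minimal sketch, assuming the interface is eth0 and the server listens on port 80 (both assumptions):

# Capture only handshake packets; -v prints TCP options such as wscale
tcpdump -nni eth0 -v 'tcp port 80 and tcp[tcpflags] & tcp-syn != 0'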

Here's the complete solution that worked for 5GB+ files:

http {
    # Disable buffering of proxied responses (has no effect on files served directly from disk)
    proxy_buffering off;
    
    # Increase timeouts significantly
    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
    
    # Optimize TCP settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    
    # For HTTP/2 connections
    http2_body_preread_size 1m;
    http2_recv_timeout 600;
}

server {
    location /downloads/ {
        # Specific large file settings
        aio on;
        directio 512;
        output_buffers 4 512k;
        
        # Allow ranged (resumable) requests, capped at 100 ranges per request
        max_ranges 100;
        
        # Disable rate limiting (there is no "limit_conn none"; to remove a
        # connection limit, simply do not set limit_conn in this location)
        limit_rate 0;
    }
}

For optimal performance with very large files, consider these OS tweaks:

# Increase TCP window sizes
sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216

# Adjust file descriptor limits
sysctl -w fs.file-max=100000
ulimit -n 50000
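
Note that sysctl -w changes are lost on reboot and ulimit -n only affects the current shell, not the running Nginx workers. A sketch for making both stick (the sysctl.d path is an assumption; adjust for your distribution):

# Persist the TCP and file-descriptor tuning across reboots
cat <<'EOF' > /etc/sysctl.d/90-nginx-largefiles.conf
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
fs.file-max = 100000
EOF
sysctl --system

# Raise the limit for the worker processes via nginx.conf instead of ulimit
# (top-level directive): worker_rlimit_nofile 50000;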

After implementing these changes, verify with:

# Test with curl showing progress
curl -O http://yourserver/largefile.bin

# Or with wget showing detailed output
wget --verbose --tries=1 http://yourserver/largefile.bin

For production environments, I recommend testing with files of varying sizes (1GB, 5GB, 10GB) to ensure consistent behavior across different network conditions.
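
One way to do that is to generate preallocated test files on the server and time each download from a client; a rough sketch, assuming the files end up served under /downloads/ (the paths are assumptions):

# On the server: create test files of several sizes (fallocate is instant)
for size in 1G 5G 10G; do
    fallocate -l "$size" /var/www/downloads/test_${size}.bin
done

# On a client: time each transfer and discard the data
for size in 1G 5G 10G; do
    curl -o /dev/null -w "${size}: %{size_download} bytes in %{time_total}s (%{speed_download} B/s)\n" \
         "http://yourserver/downloads/test_${size}.bin"
done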

Add these to your Nginx config to monitor large file transfers:

log_format download '$remote_addr - $remote_user [$time_local] '
                   '"$request" $status $body_bytes_sent '
                   '"$http_referer" "$http_user_agent" '
                   '$request_time $bytes_sent $connection';

access_log /var/log/nginx/download.log download;

This provides detailed timing and throughput metrics for troubleshooting.
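
With that format in place, per-transfer throughput can be computed straight from the log. A small awk sketch; it counts fields from the end of each line so that spaces inside the user agent string do not shift the columns:

# Throughput per request: client, URI, MB/s ($request_time and $bytes_sent are
# the 3rd- and 2nd-to-last fields in the format above)
awk '{ rt = $(NF-2); bytes = $(NF-1);
       if (rt > 0) printf "%s %s %.1f MB/s\n", $1, $7, bytes / rt / 1048576 }' \
    /var/log/nginx/download.log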


When attempting to download large files (5GB+) via Nginx, many developers encounter a frustrating scenario where the connection hangs indefinitely during the initial request phase. Wget output typically shows:

$ wget --verbose http://example.net/large.zip -O /dev/null
--2016-12-14 12:52:38--  http://example.net/large.zip
Resolving example.net (example.net)... 1.2.3.4
Connecting to example.net (example.net)|1.2.3.4|:80... connected.
HTTP request sent, awaiting response...
[Connection hangs here]

From troubleshooting multiple cases, we've identified these patterns:

  • Files under 50MB work perfectly
  • Problem occurs across different file types (zip, tar, iso)
  • Restarting Nginx temporarily fixes one download instance
  • No errors appear in access/error logs when logging is enabled
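
When the logs are silent, attaching strace to a worker process during a stalled transfer usually shows what it is blocked on (a read, a sendfile call, or nothing at all); a quick sketch, assuming a single worker:

# Identify a worker and watch its system calls while a download is stalled
pgrep -f 'nginx: worker' | head -n1 | xargs -I{} strace -tt -p {}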

The typical static files configuration that might cause issues:

location ~* ^.+\.(css|js|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
    access_log off;
    log_not_found off;
    expires 7d;
}

Here are the most effective fixes for this issue:

# In your nginx.conf or server block:
proxy_max_temp_file_size 0;   # never spool proxied responses to temporary files on disk
proxy_buffering off;          # stream proxied responses instead of buffering them first
client_max_body_size 0;       # remove the request-body size limit (only matters for uploads)
output_buffers 1 1m;          # one 1 MB buffer for disk reads when sendfile is not used
aio on;                       # asynchronous file I/O
directio 512;                 # O_DIRECT (bypass the page cache) for files of 512 bytes and up
sendfile on;                  # kernel-level file-to-socket copies
sendfile_max_chunk 1m;        # limit each sendfile() call so one transfer cannot hog a worker
keepalive_timeout 300;        # keep idle client connections open for up to 300 seconds

For specific location blocks handling large files:

location /large_files {
    alias /path/to/large/files;
    aio on;
    directio 512;
    output_buffers 1 1m;
    sendfile on;
    sendfile_max_chunk 1m;
}

The key is disabling buffering mechanisms that can choke on large files while optimizing direct I/O operations:

  • directio: Bypasses the OS page cache for files at or above the configured size (512 bytes here, so effectively every file)
  • aio: Enables asynchronous I/O so file reads do not block the worker
  • sendfile_max_chunk: Caps the data sent per sendfile() call so a single transfer cannot monopolize a worker
  • proxy_max_temp_file_size 0: Stops proxied responses from being spooled to temporary files on disk (only relevant when Nginx is proxying an upstream)
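
sendfile and directio also interact: when both are enabled, Nginx serves files smaller than the directio threshold via sendfile and switches to direct I/O (with aio, where enabled) for larger ones, so a higher threshold keeps small files on the fast path. A hedged sketch of that combination (the 4m threshold and the thread-pool aio are choices to adapt, not values from the answer above; aio threads requires an Nginx built with thread support, 1.7.11+):

location /large_files {
    alias /path/to/large/files;
    sendfile on;                # used for files below the directio threshold
    sendfile_max_chunk 1m;
    directio 4m;                # files of 4 MB and above bypass the page cache
    directio_alignment 512;     # filesystem block alignment (XFS often needs 4k)
    aio threads;                # offload blocking reads to a thread pool
    output_buffers 2 1m;
}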

After applying changes, verify with:

nginx -t && service nginx reload
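
If the new directives appear to have no effect, confirm they actually made it into the loaded configuration (stray includes and more specific location blocks overriding them are common culprits):

# Dump the full effective configuration and look for the large-file directives
nginx -T 2>/dev/null | grep -nE 'directio|aio|sendfile|output_buffers'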

Then test with various tools:

# Using curl with progress meter
curl -O http://yourserver/large_file.zip

# Using wget with timeout checks
wget --timeout=60 --tries=3 http://yourserver/large_file.zip
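
Since interrupted multi-GB downloads are normally resumed with range requests, it is also worth confirming that ranged requests still work after the changes; a short sketch:

# Request only the first 10 MB; a healthy setup answers "206 Partial Content"
curl -s -o /dev/null -D - -r 0-10485759 http://yourserver/large_file.zip | head -n1

# Resume a previously interrupted download from where it stopped
wget --continue http://yourserver/large_file.zip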