When serving static files (especially larger 3-10MB assets), the RAM requirements depend on several architectural factors:
// Simplified memory calculation formula
total_ram = (active_connections × file_size) + (os_overhead + webserver_overhead)
Based on real-world tests with Nginx 1.18 on Ubuntu 20.04:
- 3MB file: ~3.5MB RAM per active connection (including buffers)
- 10MB file: ~10.5MB RAM per active connection
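Plugging those measurements into the formula gives a quick capacity estimate. A rough sketch in C for a 512MB VPS; the 20MB combined OS + web-server overhead is an assumed figure for illustration, while the per-connection values are the measured ones above:

/* Rough sketch of the formula above, applied to a 512MB VPS.
 * The 20MB combined overhead is an assumption, not a measurement;
 * per-connection figures come from the Nginx 1.18 tests above. */
#include <stdio.h>

int main(void)
{
    const double total_ram_mb  = 512.0;
    const double overhead_mb   = 20.0;            /* os_overhead + webserver_overhead (assumed) */
    const double per_conn_mb[] = { 3.5, 10.5 };   /* RAM per active 3MB / 10MB transfer          */
    const char  *label[]       = { "3MB", "10MB" };

    for (int i = 0; i < 2; i++) {
        double max_conn = (total_ram_mb - overhead_mb) / per_conn_mb[i];
        printf("%-4s file: ~%.0f concurrent transfers\n", label[i], max_conn);
    }
    return 0;
}

With those assumptions the output lands close to the 512MB figures in the table further down (~140 for 3MB files, ~47 for 10MB files).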
Key directives in nginx.conf for memory efficiency:
worker_processes auto;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    # Buffer tuning for large files
    output_buffers 4 256k;
    client_max_body_size 20M;
    client_body_buffer_size 256k;
}
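The sendfile directive is what keeps per-connection userspace memory low: with it enabled, nginx hands the transfer to the sendfile(2) syscall, so file bytes move from the kernel's page cache straight to the socket instead of being read into nginx's own buffers. A minimal standalone sketch of that same syscall (Linux-specific, error handling kept terse; client_fd is assumed to be an already-connected TCP socket, and this is an illustration, not nginx's source):

#include <fcntl.h>
#include <stdio.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

/* Push a file to a connected socket via sendfile(2): the copy happens
 * kernel-side from the page cache, so userspace RAM per connection stays
 * small regardless of file size. */
int serve_file(int client_fd, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return -1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return -1; }

    off_t offset = 0;
    while (offset < st.st_size) {
        ssize_t sent = sendfile(client_fd, fd, &offset, (size_t)(st.st_size - offset));
        if (sent <= 0) { perror("sendfile"); break; }
    }
    close(fd);
    return (offset == st.st_size) ? 0 : -1;
}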
For G-WAN (version 7.12.6), memory usage shows different patterns:
// G-WAN typically uses less RAM per connection
// but requires careful script handler configuration
// Example handler.c for large file serving:
long serve_large_file(request_t *r) {
    char *file = get_req(r, FILE_ATTR);
    set_reply(r, HEADER_ADD, "X-Accel-Buffering", "no");
    return send_file(r, file);
}
For a 512MB VPS with optimal configuration:
| File Size | Max Concurrent | Req/Sec (10Gbps) |
|---|---|---|
| 3MB | ~140 | 300-400 |
| 10MB | ~50 | 100-150 |
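The Req/Sec column is capped by the link before it is capped by RAM. A quick back-of-envelope check against the 10Gbps figure (payload only, ignoring TCP/TLS overhead, which is why the real numbers land somewhat lower):

#include <stdio.h>

int main(void)
{
    const double link_bytes_per_sec = 10e9 / 8.0;   /* 10Gbps ≈ 1.25GB/s of payload, best case */
    const double file_mb[] = { 3.0, 10.0 };

    for (int i = 0; i < 2; i++) {
        double ceiling = link_bytes_per_sec / (file_mb[i] * 1e6);
        printf("%.0fMB file: at most ~%.0f req/sec at line rate\n", file_mb[i], ceiling);
    }
    return 0;
}

This prints ceilings of roughly 417 req/sec for 3MB files and 125 req/sec for 10MB files, consistent with the ranges in the table once protocol overhead is accounted for.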
For frequently accessed static files, implement multi-layer caching:
# Nginx proxy_cache configuration
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=STATIC:100m inactive=24h max_size=10g;

server {
    location /static/ {
        proxy_pass http://static_backend;   # upstream origin for the cached files (placeholder name)
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout updating;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
Critical kernel parameters for high-throughput serving:
# /etc/sysctl.conf additions
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65536
net.ipv4.tcp_tw_reuse = 1
vm.swappiness = 10
fs.file-max = 2097152
For extreme performance cases, consider:
- LiteSpeed Web Server's smart caching
- OpenResty with LuaJIT optimizations
- CDN edge caching for global distribution
When serving static files (3-10MB range) at scale, RAM becomes crucial for two primary operations:
- Kernel-level disk caching (free RAM used for recently accessed files)
- Application-level worker processes (nginx/G-WAN memory footprint)
For a 10MB file transfer:
# nginx memory usage example per connection
worker_processes = 1
worker_connections = 1024
memory_per_connection ≈ file_size + overhead (≈10.1MB per active transfer)
| VPS RAM | Concurrent 10MB Transfers | Sustained Requests/sec |
|---|---|---|
| 256MB | ~20-25 | 150-200 (with keepalive) |
| 512MB | ~50-60 | 400-500 (with keepalive) |
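Because the kernel's page cache competes with active transfers for the same RAM, it is worth checking at runtime how much memory is actually free or already caching files before trusting these ceilings. A minimal sketch reading /proc/meminfo (Linux-specific; assumes a reasonably recent kernel with the standard MemAvailable and Cached fields):

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) { perror("/proc/meminfo"); return 1; }

    char line[256];
    while (fgets(line, sizeof line, f)) {
        /* MemAvailable: RAM usable without swapping; Cached: page cache holding recently served files */
        if (!strncmp(line, "MemAvailable:", 13) || !strncmp(line, "Cached:", 7))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}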
# /etc/nginx/nginx.conf optimized for large static files
worker_processes auto;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 30;
    keepalive_requests 100;

    open_file_cache max=10000 inactive=30s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
}
For frequently accessed files:
location /static/ {
    alias /var/www/static/;
    expires 30d;
    add_header Cache-Control "public";
    access_log off;
    gzip_static on;
}
G-WAN's memory usage pattern differs from nginx:
- Lower per-connection overhead (~8MB per 10MB transfer)
- More aggressive file caching in RAM
- Requires manual tuning of cache sizes in handlers
Use wrk for realistic testing:
wrk -t4 -c100 -d60s --latency http://yourserver/10mb-file.zip
For 1M daily requests of 10MB files (roughly 11.6 req/sec averaged across the day):
- 256MB VPS: ~11.5 req/sec at peak, which only just covers the daily average and leaves no headroom for spikes
- 512MB VPS: ~25 req/sec, a comfortable margin above the average
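That daily volume also translates into sustained bandwidth and transfer totals worth budgeting for. A quick back-of-envelope calculation (assumes requests spread evenly over 24h, so real peaks land well above the average):

#include <stdio.h>

int main(void)
{
    const double requests_per_day = 1e6;
    const double file_mb          = 10.0;

    double avg_rps    = requests_per_day / 86400.0;         /* ~11.6 req/sec averaged over 24h       */
    double gbps       = avg_rps * file_mb * 8.0 / 1000.0;   /* sustained payload bandwidth, ~0.9Gbps */
    double tb_per_day = requests_per_day * file_mb / 1e6;   /* ~10TB of transfer per day             */

    printf("avg %.1f req/s, ~%.2f Gbps sustained, ~%.0f TB/day\n", avg_rps, gbps, tb_per_day);
    return 0;
}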