When serving static content like images through Nginx, disk I/O can become a significant bottleneck under high traffic. The "Too many open files" error you encountered means the process is hitting its file descriptor limit: each concurrent request consumes descriptors for both the client connection and the file being read from disk.
Nginx can keep its cache entirely in RAM by pointing the proxy_cache_path directive at a tmpfs filesystem such as /dev/shm. Note that proxy_cache only applies to proxied responses, so this approach fits setups where the static files come from an upstream. Here's how to implement it:
http {
    proxy_temp_path /dev/shm/nginx_temp;
    proxy_cache_path /dev/shm/nginx_cache levels=1:2 keys_zone=my_cache:10m inactive=60m use_temp_path=off;

    server {
        location /static/ {
            proxy_cache my_cache;
            proxy_pass http://backend;
            proxy_cache_valid 200 302 60m;
            proxy_cache_valid 404 1m;
        }
    }
}
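To see whether a given response was served from the cache, the $upstream_cache_status variable can be exposed as a response header. A sketch to add inside the location above — the header name X-Cache-Status is a common convention, not a requirement:

```nginx
location /static/ {
    proxy_cache my_cache;
    proxy_pass http://backend;
    # Reports HIT, MISS, EXPIRED, STALE, etc. per response
    add_header X-Cache-Status $upstream_cache_status;
}
```

A `curl -I` against a cached URL should then show the header, and the same variable can be added to your log_format for monitoring.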
For smaller sites where you want specific files always in RAM, mounting a tmpfs filesystem is more straightforward:
# First, create a tmpfs mount point
sudo mkdir -p /var/cache/nginx-ram
sudo mount -t tmpfs -o size=100M tmpfs /var/cache/nginx-ram
# Then configure Nginx to serve from this location
server {
    root /var/cache/nginx-ram;

    location /images/ {
        try_files $uri @fallback;
    }

    location @fallback {
        root /var/www/html;
    }
}
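One caveat with this approach: tmpfs starts empty after every reboot, so the mount has to be populated before Nginx can serve from it. A minimal sketch, assuming illustrative paths and a hypothetical `sync_hot_files` helper you would run at boot and on deploys:

```shell
# sync_hot_files SRC DST: copy frequently requested files into the
# tmpfs mount.  Run at boot and whenever the source tree changes
# (e.g. from cron or a deploy hook).
sync_hot_files() {
    mkdir -p "$2"
    cp -a "$1/." "$2/"   # -a keeps timestamps, so conditional GETs still work
}

# Example (illustrative paths):
#   sync_hot_files /var/www/html/images /var/cache/nginx-ram/images
```

Because of the @fallback location above, a request for a file that has not been synced yet still gets served from disk.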
When implementing RAM caching, consider these additional optimizations:
events {
    worker_connections 4096;
}

http {
    open_file_cache max=2000 inactive=20s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    aio threads;
    sendfile on;
    tcp_nopush on;
}
Use this command to confirm which cache paths are actually configured:
nginx -T | grep cache_path
And monitor hits/misses with the command below — this assumes your log_format includes $upstream_cache_status, which is not logged by default:
tail -f /var/log/nginx/access.log | grep -E 'HIT|MISS|EXPIRED|STALE'
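Once the cache status appears in the access log, a rough hit ratio can be computed from it. A sketch — the awk patterns assume the HIT/MISS token appears literally in each line, so adjust for your log format (`hit_ratio` is an illustrative helper name):

```shell
# hit_ratio LOGFILE: percentage of logged requests that were cache HITs,
# counting only lines marked HIT or MISS.
hit_ratio() {
    awk '/HIT/ { h++ } /MISS/ { m++ }
         END { t = h + m
               if (t) printf "%.1f%%\n", 100 * h / t
               else   print "no data" }' "$1"
}

# Example: hit_ratio /var/log/nginx/access.log
```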
To address the "Too many open files" error directly, raise the file descriptor limits:
# Check current limits
ulimit -n
# Increase limits (add to /etc/security/limits.conf)
nginx soft nofile 65535
nginx hard nofile 65535
# And in the main (top-level) context of /etc/nginx/nginx.conf
worker_rlimit_nofile 65535;
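After raising the limits, it's worth confirming a running worker actually picked them up — limits.conf only applies to new login sessions, and an Nginx started by systemd needs LimitNOFILE= in its unit file instead. On Linux, /proc exposes the effective limit per process (`show_nofile_limit` is an illustrative helper):

```shell
# show_nofile_limit PID: print the effective open-file limit of a
# running process.  For Nginx, use a worker PID, e.g. from:
#   pgrep -f 'nginx: worker'
show_nofile_limit() {
    grep 'Max open files' "/proc/$1/limits"
}

show_nofile_limit $$   # the current shell, as a stand-in for a worker
```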
A fuller configuration keeps all of Nginx's temporary files, as well as the cache itself, in RAM:
http {
    proxy_temp_path /dev/shm/nginx_temp;
    client_body_temp_path /dev/shm/nginx_client_body;
    fastcgi_temp_path /dev/shm/nginx_fastcgi;

    proxy_cache_path /dev/shm/nginx_cache levels=1:2
                     keys_zone=STATIC:10m
                     inactive=24h
                     max_size=1g
                     use_temp_path=off;

    server {
        location /static/ {
            proxy_cache STATIC;
            proxy_cache_valid 200 24h;
            proxy_pass http://backend;
            proxy_cache_use_stale error timeout updating;
            proxy_cache_lock on;
        }
    }
}
This configuration combines three things:
1. Memory-based paths: using /dev/shm (a tmpfs mount backed by shared memory) for all temporary files
2. RAM-resident cache storage: the cached files themselves live under /dev/shm, while keys_zone=STATIC:10m holds the lookup metadata in a shared memory zone
3. Cache validation: controlling how long items stay cached and allowing stale entries to be served during errors and refreshes
For systems where /dev/shm isn't available or suitable:
# Create a tmpfs mount in /etc/fstab
tmpfs /var/cache/nginx tmpfs defaults,size=1G 0 0
# Activate it without rebooting
sudo mount /var/cache/nginx
Then configure Nginx to use this location (max_size is kept below the mount size to leave headroom):
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=CACHE:100m inactive=24h max_size=900m;
After configuration, verify with:
# Check memory usage and the tmpfs mount
free -m
df -h /dev/shm
# Verify cache files are landing in RAM
ls -lh /dev/shm/
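To confirm a directory really is RAM-backed rather than sitting on disk, checking the filesystem type is more direct than eyeballing free. A sketch using GNU coreutils' stat (`fs_type` is an illustrative helper name):

```shell
# fs_type PATH: print the filesystem type backing PATH.
# A RAM-backed cache directory should report "tmpfs".
fs_type() {
    stat -f -c %T "$1"
}

fs_type /dev/shm   # "tmpfs" on most Linux systems
```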
Add these directives to your nginx.conf:
worker_rlimit_nofile 100000;

events {
    worker_connections 4000;
}
Also increase system limits:
# In /etc/security/limits.conf
nginx soft nofile 100000
nginx hard nofile 100000
Finally, tune how eagerly items are cached and how Nginx reuses file handles (note that open_file_cache caches descriptors and metadata, not file contents):
proxy_cache_min_uses 1;
proxy_cache_methods GET HEAD;

open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;