When dealing with high-traffic Django deployments using the Gunicorn/Nginx stack with Unix sockets, the "Resource temporarily unavailable" (EAGAIN/EWOULDBLOCK) error typically means the socket's connection backlog is full or an OS-level resource limit has been hit. It shows up in the Nginx error log when Nginx cannot establish new connections to the Gunicorn socket:
[error] 2388#0: *208027 connect() to unix:/tmp/gunicorn-ourapp.socket failed (11: Resource temporarily unavailable)
From your setup, several parameters need optimization:
- Worker processes calculation:
workers = multiprocessing.cpu_count() * 3 + 1
- Socket connection handling in Nginx
- System-level file descriptor limits (see the check below)
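For the file descriptor limits in particular, check what the running Gunicorn processes are actually allowed, not just the shell's ulimit. A minimal sketch; the user name and the 65536 value are placeholders to adapt:
# Shell limit vs. the limit of a live Gunicorn process
ulimit -n
cat /proc/$(pgrep -f gunicorn | head -1)/limits | grep -i "open files"
# Persistent per-user limits go in /etc/security/limits.conf, e.g.:
#   youruser  soft  nofile  65536
#   youruser  hard  nofile  65536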
Add these parameters to your gunicorn.conf.py:
backlog = 4096 # Double Gunicorn's default of 2048 (capped by net.core.somaxconn)
worker_connections = 1000 # Only affects async worker classes (eventlet/gevent)
keepalive = 5
Then modify your Nginx configuration:
location / {
    proxy_pass http://unix:/tmp/gunicorn-ourapp.socket;
    proxy_socket_keepalive on;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffers 8 32k;
    proxy_buffer_size 64k;
}
For high-traffic servers, consider these kernel parameter adjustments:
# /etc/sysctl.conf
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
fs.file-max = 2097152
Apply them with:
sudo sysctl -p
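You can read the values back to confirm they took effect:
sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog fs.file-max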
Instead of a worker count derived purely from CPU cores, consider a smaller worker pool with threads and periodic worker recycling:
# gunicorn.conf.py
workers = 4 # Base workers
threads = 2 # Per worker
max_requests = 1000
max_requests_jitter = 50
Add this to your supervisor configuration to ensure proper socket cleanup:
[program:gunicorn]
command=/path/to/gunicorn --config /path/to/gunicorn.conf.py app.wsgi
umask=022
user=youruser
autostart=true
autorestart=true
stopsignal=QUIT
stopasgroup=true
killasgroup=true
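Then have Supervisor pick up the changed program definition and confirm the process is up:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status gunicorn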
Install these tools to monitor socket connections:
sudo apt install lsof net-tools
# Monitor active connections:
watch -n 1 "sudo netstat -anp | grep gunicorn"
# Check file descriptor usage:
watch -n 1 "cat /proc/sys/fs/file-nr"
When running Django with Gunicorn and Nginx via Unix sockets under heavy traffic, the "Resource temporarily unavailable" (EAGAIN) error typically indicates one of these scenarios (quick checks for each follow the list):
1. Gunicorn worker exhaustion (all workers busy processing requests)
2. OS-level connection queue overflow
3. File descriptor limits being hit
4. Socket buffer capacity issues
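A rough way to narrow down which of these you are hitting, using stock tools (socket path taken from your config):
# 1. Worker exhaustion: how many Gunicorn processes exist and what are they running?
pgrep -af gunicorn
# 2. Listen queue on the Unix socket (compare Recv-Q against the backlog in Send-Q)
ss -xln | grep gunicorn-ourapp
# 3. System-wide file descriptors in use vs. the limit
cat /proc/sys/fs/file-nr
# 4. Overall socket usage and buffer memory pressure
cat /proc/net/sockstat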
The current worker calculation, workers = multiprocessing.cpu_count() * 3 + 1, might not be optimal for your traffic pattern. Consider these adjustments:
# New gunicorn.conf.py
import multiprocessing
bind = 'unix:/tmp/gunicorn-ourapp.socket'
workers = multiprocessing.cpu_count() * 2 # More conservative scaling
threads = 2 # Add threaded mode for I/O bound workloads
backlog = 2048 # Connection queue (Gunicorn's default; capped by net.core.somaxconn)
timeout = 300 # Reduced from 600
keepalive = 2 # Enable connection reuse
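Gunicorn reloads its configuration and gracefully replaces workers on SIGHUP, so you can apply these changes without dropping in-flight requests. A sketch, assuming you also set a pidfile (the path here is a placeholder):
# In gunicorn.conf.py:  pidfile = '/tmp/gunicorn-ourapp.pid'
kill -HUP "$(cat /tmp/gunicorn-ourapp.pid)"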
The current Nginx config needs several critical enhancements:
upstream gunicorn_socket {
    server unix:/tmp/gunicorn-ourapp.socket fail_timeout=0;
    keepalive 32; # Maintain persistent connections
}

server {
    # ... existing config ...

    location / {
        proxy_pass http://gunicorn_socket;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;

        # Critical timeouts should match Gunicorn
        proxy_read_timeout 300s;
        proxy_connect_timeout 75s;

        # Add these important headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
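After editing, validate the syntax and reload Nginx gracefully:
sudo nginx -t
sudo systemctl reload nginx   # or: sudo nginx -s reload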
Even with your 50,000 open-files limit, you might need deeper OS tuning:
# /etc/sysctl.conf additions
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
net.core.netdev_max_backlog = 4096
fs.file-max = 2097152
# Datagram Unix socket queue (stream sockets like Gunicorn's are governed by somaxconn and backlog)
net.unix.max_dgram_qlen = 4096
Apply with sudo sysctl -p after saving changes.
Use these commands to monitor your bottleneck:
# Check socket connection queue
ss -xlp | grep gunicorn
# Count Gunicorn processes (master + workers)
watch -n 1 'ps -ef | grep gunicorn | grep -v grep | wc -l'
# Check for dropped connections
dmesg | grep -i "tcp drop"
netstat -s | grep -i "listen"
For very high traffic, consider these architectural changes:
# Use multiple socket files with load balancing
upstream gunicorn_cluster {
    least_conn; # Better than round-robin for uneven loads
    server unix:/tmp/gunicorn-ourapp-1.socket;
    server unix:/tmp/gunicorn-ourapp-2.socket;
}
# In gunicorn.conf.py: bind accepts a list, so one instance can listen on both sockets
bind = [f'unix:/tmp/gunicorn-ourapp-{i}.socket' for i in range(1, 3)]
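Binding one instance to both sockets works, but for real load distribution you would more likely run a separate Gunicorn instance per socket and let least_conn balance between them. A rough sketch reusing the paths above (in practice each instance would be its own supervisor program):
# Two independent Gunicorn instances, one socket each
gunicorn --config /path/to/gunicorn.conf.py --bind unix:/tmp/gunicorn-ourapp-1.socket app.wsgi &
gunicorn --config /path/to/gunicorn.conf.py --bind unix:/tmp/gunicorn-ourapp-2.socket app.wsgi &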