Nginx uses a master-worker architecture in which worker processes handle the actual client requests. The `worker_processes` directive in `nginx.conf` controls how many of these workers are spawned. For a single-vCPU VM running Drupal, the optimal configuration differs from that of multi-core systems.
For your 1 vCPU VMware VM, I recommend:
```nginx
worker_processes 1;

events {
    worker_connections 1024;
    # Use epoll on Linux for better performance
    use epoll;
}
```
This single-worker setup avoids context switching overhead while still handling concurrent connections through event-driven architecture.
While you might see recommendations for multiple workers, these apply to:
- Physical multi-core servers
- CPU-bound workloads
- High-traffic scenarios
For a Drupal site on limited resources, test with:
```sh
ab -n 1000 -c 50 http://yoursite.com/
```
If you need to handle more concurrent connections, consider:
```nginx
worker_processes auto;  # matches CPU core count

events {
    worker_connections 4096;
    multi_accept on;
}
```
But remember to adjust kernel limits:
```
# In /etc/sysctl.conf
fs.file-max = 70000
net.core.somaxconn = 4096
```
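Before editing `/etc/sysctl.conf`, it can help to read the current values. The `/proc` paths below are the standard Linux locations; the values you see will vary by distribution:

```sh
# Current kernel limits on this machine (Linux only)
cat /proc/sys/fs/file-max
cat /proc/sys/net/core/somaxconn
```

After editing the file, apply the new values with `sysctl -p`.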
Check worker utilization with:
```sh
watch -n 1 "ps -eo pid,user,pcpu,pmem,args | grep nginx"
```
Look for:
- Even distribution across CPU cores
- Memory usage per worker
- No worker process stuck at 100% CPU
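To get a single total for worker memory, you can sum the RSS column from `ps`. The one-liner below is fed sample numbers in place of real output, so the figures are illustrative only; on a live server you would pipe `ps -C nginx -o rss=` into the same awk program:

```sh
# Sum an RSS column (in KB) with awk; sample values stand in for
# the output of `ps -C nginx -o rss=` on a running server.
printf '2048\n3072\n' | awk '{sum += $1} END {print sum " KB"}'
```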
When configuring Nginx for a Drupal installation, the `worker_processes` directive is crucial for performance. This setting determines how many worker processes Nginx spawns to handle incoming requests; the optimal value depends on your server's CPU architecture and workload characteristics.
Many distributions ship with `worker_processes auto;`, which automatically matches the number of available CPU cores. In virtualized environments like your single-vCPU VMware VM, however, more precise tuning is needed:
```nginx
# Default nginx.conf snippet
worker_processes auto;

events {
    worker_connections 1024;
}
```
For your specific case (1 vCPU VM), here's what you should know:
- Setting worker_processes > 1 creates unnecessary context switching
- Single worker can typically handle thousands of connections
- Memory becomes the limiting factor for concurrent connections
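As a back-of-the-envelope check on that last point, you can estimate a connection ceiling from available memory. Both numbers below are assumptions for illustration, not measurements from your VM:

```sh
# Rough capacity estimate: memory available to Nginx divided by an
# assumed per-connection cost (proxy/fastcgi buffers included).
free_kb=$((1024 * 1024))   # assumption: ~1 GB free for Nginx, in KB
per_conn_kb=256            # assumption: worst-case KB per connection
echo $((free_kb / per_conn_kb))
```

Swap in your own measured values; the point is simply that memory, not CPU, sets the ceiling on a single-worker setup.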
For a Drupal site on a 1 vCPU VM, use this optimized configuration:
```nginx
worker_processes 1;
worker_rlimit_nofile 4096;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

http {
    # Drupal-specific optimizations
    client_max_body_size 64M;
    keepalive_timeout 65;
    # Additional performance tweaks...
}
```
To validate your configuration, use tools like `ab` (ApacheBench) or `wrk`:

```sh
ab -n 1000 -c 100 http://your-drupal-site/
wrk -t4 -c1000 -d30s http://your-drupal-site/
```
Consider these additional optimizations for Drupal:
```nginx
# FastCGI cache settings for Drupal
fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=DRUPAL:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header updating http_500;
fastcgi_cache_valid 200 60m;
```
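Note that `fastcgi_cache_path` only defines the cache; nothing is cached until a `location` block references the zone. A minimal sketch of the PHP handler side (the `fastcgi_pass` socket path is an assumption; adjust it to your PHP-FPM setup):

```nginx
location ~ \.php$ {
    fastcgi_pass unix:/run/php/php-fpm.sock;  # assumption: your PHP-FPM socket
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    fastcgi_cache DRUPAL;                        # use the zone defined above
    add_header X-Cache $upstream_cache_status;   # handy for verifying hits
}
```

The `X-Cache` header makes it easy to confirm `HIT`/`MISS` behavior with `curl -I` while testing.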
Use Nginx's status module to monitor worker performance:
```nginx
location /nginx_status {
    stub_status on;
    access_log off;
    allow 127.0.0.1;
    deny all;
}
```
Then access `http://localhost/nginx_status` to view active connections and worker status.
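The page is plain text, so it is easy to script against. The snippet below parses a hard-coded sample of `stub_status` output (not taken from a live server); on a real box you would replace the sample with `curl -s http://localhost/nginx_status`:

```sh
# Extract the active-connection count from sample stub_status output
status='Active connections: 3
server accepts handled requests
 100 100 250
Reading: 0 Writing: 1 Waiting: 2'
printf '%s\n' "$status" | awk '/Active connections/ {print $3}'
```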
Common pitfalls to avoid:
- Setting `worker_processes` higher than the number of available CPU cores
- Not raising `worker_rlimit_nofile` to match `worker_connections`
- Overlooking kernel-level connection limits (`net.core.somaxconn`)
- Neglecting to keep the PHP-FPM worker configuration in sync
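On that last point, a sketch of matching PHP-FPM pool settings for a 1 vCPU, low-memory VM (the numbers are assumptions; size `pm.max_children` from your own measured per-child memory usage):

```ini
; /etc/php/*/fpm/pool.d/www.conf (path varies by distribution)
pm = dynamic
pm.max_children = 5       ; assumption: ~5 children fit in available RAM
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
```

If Nginx accepts far more connections than PHP-FPM can serve, requests simply queue at the FastCGI socket, so the two configurations need to be tuned together.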
For high-traffic Drupal sites on limited resources, consider:
- Implementing reverse proxy caching
- Using Nginx's microcache functionality
- Offloading static assets to CDN
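The microcache idea is the FastCGI cache shown earlier with a very short validity: even a 1-second cache absorbs bursts of identical requests to hot pages. A sketch, reusing the `DRUPAL` zone defined above:

```nginx
# Microcaching: serve identical requests from cache for 1 second
fastcgi_cache DRUPAL;
fastcgi_cache_valid 200 301 1s;
fastcgi_cache_lock on;          # collapse concurrent misses into one upstream request
fastcgi_cache_use_stale updating;
```

With `fastcgi_cache_lock on`, a burst of simultaneous misses results in a single request to PHP-FPM while the other clients wait for the cached copy.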