When configuring PHP-FPM with Nginx, you essentially have two approaches for handling the FastCGI connection:
# Approach 1: Direct socket passing
location ~ \.php$ {
    fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
}
# Approach 2: Upstream declaration
upstream php {
    server unix:/run/php-fpm/php-fpm.sock;
}
location ~ \.php$ {
    fastcgi_pass php;
}
Direct fastcgi_pass is the simpler choice when:
- You have a single PHP-FPM instance
- Your setup doesn't require load balancing
- You want minimal configuration overhead
Upstream blocks become valuable when:
- You need to implement load balancing across multiple PHP-FPM instances
- You want to define backup servers (with backup parameter)
- You need to tune per-backend failure handling or connection limits (max_fails, fail_timeout, max_conns)
- Your architecture might scale to multiple servers later
In most single-server setups, the performance difference is negligible. However, upstream blocks add a slight indirection layer. Here's a more advanced upstream configuration example:
upstream php_backend {
    server unix:/run/php-fpm/primary.sock weight=5;
    server unix:/run/php-fpm/secondary.sock;
    server 127.0.0.1:9000 backup;
    keepalive 5;
}
For high-traffic sites using Kubernetes:
upstream php {
    least_conn;
    server php-fpm-pod-1:9000;
    server php-fpm-pod-2:9000;
    server php-fpm-pod-3:9000;
    keepalive 30;
}
location ~ [^/]\.php(/|$) {
    fastcgi_pass php;
    fastcgi_keep_conn on;
    # Additional fastcgi params...
}
For local development with Docker:
location ~ \.php$ {
    fastcgi_pass php-fpm-container:9000;
    # Simpler direct approach often sufficient
}
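The container hostname resolves through Docker's embedded DNS when both services share a network. A minimal, hypothetical docker-compose sketch (image tags, paths, and service names are illustrative, not taken from any specific setup):

```yaml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./src:/var/www/html:ro
  php-fpm-container:          # name must match the fastcgi_pass host
    image: php:8.3-fpm-alpine
    volumes:
      - ./src:/var/www/html:ro
```

Compose places both services on a default network, so nginx can reach the FPM container by its service name on port 9000 (FPM's default TCP listen port).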
The upstream method allows for more sophisticated tuning:
upstream php {
    zone backend 64k;
    server unix:/run/php-fpm/php-fpm.sock max_conns=100;
    queue 10 timeout=30s;  # NGINX Plus only
}
This enables connection limiting when the PHP-FPM pool is saturated. Note that max_conns is only enforced across all worker processes when a shared memory zone is defined, and the queue directive is available only in the commercial NGINX Plus.
The fundamental difference lies in architectural organization versus direct implementation. upstream defines a reusable backend server group (commonly used for load balancing or centralized config), while location handles request routing at the HTTP processing level.
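One practical consequence of that split: an upstream is declared once in the http context and can be referenced from any number of server or location blocks, while a direct fastcgi_pass has to be repeated wherever it is used. A sketch (the vhost names are illustrative):

```nginx
# Defined once at http level; shared by every vhost below.
upstream php_shared {
    server unix:/run/php-fpm/php-fpm.sock;
}

server {
    server_name example.com;
    location ~ \.php$ {
        fastcgi_pass php_shared;   # change the backend in one place
        include fastcgi_params;
    }
}

server {
    server_name admin.example.com;
    location ~ \.php$ {
        fastcgi_pass php_shared;
        include fastcgi_params;
    }
}
```

Moving the pool to a new socket or adding a second backend then means editing the upstream block only, not every location.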
# Scenario 1: Simple single-server deployment (location direct)
location ~ \.php$ {
    fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
    include fastcgi_params;
}
# Scenario 2: Multi-server or load-balanced setup (upstream)
upstream php_cluster {
    server unix:/run/php-fpm/primary.sock;
    server unix:/run/php-fpm/secondary.sock weight=2;
    server 127.0.0.1:9000 backup;
}
location ~ \.php$ {
    fastcgi_pass php_cluster;
}
Direct location passing shows marginally better performance (typically 1-3% in benchmarks) for single-server deployments by eliminating the upstream resolution step. However, upstream provides:
- Connection reuse via keepalive pooling
- Health checks (active checks require NGINX Plus; open-source nginx offers passive checks via max_fails/fail_timeout)
- Weighted distribution logic
upstream php_backend {
    zone backend 64k;
    server unix:/run/php-fpm/app1.sock max_fails=3 fail_timeout=5s;
    server unix:/run/php-fpm/app2.sock max_conns=100;
}
location ~ [^/]\.php(/|$) {
    fastcgi_keep_conn on;
    fastcgi_pass php_backend;
    fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    include fastcgi_params;
}
1. Socket permission issues (these occur with either method; the nginx worker user needs read/write access to the socket):
chmod 660 /run/php-fpm/*.sock
chown nginx:php-fpm /run/php-fpm/*.sock
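Since PHP-FPM recreates the socket on every restart, a manual chmod/chown does not persist. The ownership can instead be set permanently in the pool configuration. A sketch, assuming the nginx workers run as user nginx and a Red Hat-style pool file path:

```ini
; /etc/php-fpm.d/www.conf (path and user names vary by distribution)
listen = /run/php-fpm/php-fpm.sock
; Make the socket readable/writable by the nginx workers
listen.owner = nginx
listen.group = nginx
listen.mode = 0660
```

After editing, reload PHP-FPM and the socket will be created with these permissions each time.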
2. Missing fastcgi_param inheritance when switching between methods.
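The reason: fastcgi_param is an array directive in nginx, so a location that declares even one fastcgi_param of its own discards the entire set inherited from the server or http level. The include therefore has to be repeated in each PHP location, whichever passing method you use:

```nginx
server {
    # This include is NOT inherited by the location below,
    # because the location declares its own fastcgi_param.
    include fastcgi_params;

    location ~ \.php$ {
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Without repeating the include here, the other FastCGI
        # variables (REQUEST_METHOD, QUERY_STRING, ...) are missing.
        include fastcgi_params;
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
    }
}
```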