When processing large batches of data through PHP-FPM with Nginx, many developers encounter the frustrating 504 Gateway Timeout error. This typically occurs when script execution exceeds the default FastCGI timeout settings. In my case, processing 500 users takes about 55 seconds; at the same rate, 1000 users would take roughly 110 seconds, well past the default 60-second threshold.
Nginx provides several timeout-related directives for FastCGI:
fastcgi_read_timeout 300; # Default is usually 60s
fastcgi_send_timeout 300;
fastcgi_connect_timeout 75;
The most critical is fastcgi_read_timeout, which defines how long Nginx will wait for a response from the FastCGI server.
You can set this value in multiple locations:
# Option 1: In your server block
server {
    location ~ \.php$ {
        fastcgi_read_timeout 300;
        # Other fastcgi params...
    }
}
# Option 2: In fastcgi_params or fastcgi.conf
fastcgi_read_timeout 300;
# Option 3: In your PHP pool configuration
php_admin_value[max_execution_time] = 300
Here's a comprehensive setup that handles longer-running scripts:
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
    # Timeout configurations
    fastcgi_read_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_connect_timeout 75;
    # Buffer settings
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
}
Beyond timeout settings, consider these improvements:
- Implement chunked processing for large batches (see the sketch after this list)
- Add progress tracking for long-running scripts
- Consider queue systems like RabbitMQ for massive jobs
- Monitor PHP-FPM's pm.max_children setting
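As a rough illustration of chunked processing, here is a minimal, hypothetical sketch: each request handles a fixed-size slice of users and reports where to resume, so no single request runs long enough to hit the FastCGI timeout. The user-fetching query and per-user work are placeholders for your own batch logic.
<?php
// chunked_batch.php - hypothetical sketch: process users in fixed-size chunks
// so that no single HTTP request runs long enough to hit the FastCGI timeout.
$chunkSize = 100;
$offset    = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;

// Placeholder for your own query, e.g. SELECT id FROM users ORDER BY id LIMIT 100 OFFSET ...
$userIds = range($offset + 1, $offset + $chunkSize);

foreach ($userIds as $id) {
    usleep(100000); // stand-in for the real per-user work (~0.1s each)
}

// Report where to resume so a cron job or front-end loop can request the next chunk.
header('Content-Type: application/json');
echo json_encode(['processed' => count($userIds), 'nextOffset' => $offset + $chunkSize]);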
Create a simple test script to verify your timeout settings:
<?php
// test_timeout.php
// sleep() does not count toward PHP's max_execution_time, so this isolates the Nginx timeout
sleep(65); // Should work with a 300s timeout, fail with the default 60s
echo "Completed successfully";
Remember to reload Nginx after changes: sudo systemctl reload nginx
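To verify the change, request the script directly and watch the status code (the hostname below is a placeholder for your own server):
curl -i http://example.com/test_timeout.php
With the default 60-second fastcgi_read_timeout you should see a 504 after about a minute; with the raised timeout the script should return "Completed successfully" after roughly 65 seconds.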
When dealing with computationally intensive PHP scripts that process large datasets (like user batches), the default 60-second FastCGI read timeout often becomes a bottleneck. The 504 Gateway Timeout error occurs when Nginx waits longer than this threshold for a response from PHP-FPM.
There are three strategic locations to configure fastcgi_read_timeout:
# Option 1: In server block (recommended for specific routes)
location ~ \.php$ {
    fastcgi_read_timeout 300s; # 5 minutes
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php8.1-fpm.sock;
}
# Option 2: In fastcgi_params file
fastcgi_read_timeout 300s;
# Option 3: In http block (global setting)
http {
    fastcgi_read_timeout 300s;
}
For batch processing scripts, I recommend this optimized setup:
server {
    listen 80;
    server_name api.example.com;
    location /batch-process {
        fastcgi_read_timeout 600s; # 10 minutes
        fastcgi_send_timeout 600s;
        fastcgi_connect_timeout 75s;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
        fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
    }
}
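Because the location above pins SCRIPT_FILENAME to index.php, every request to /batch-process is handled by that single file. A minimal, hypothetical front controller for this route might look like the following sketch:
<?php
// index.php - hypothetical front controller matching the /batch-process location above
$uri = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

if ($uri === '/batch-process') {
    set_time_limit(600); // keep PHP's own limit in line with the 600s Nginx timeout
    // ... long-running batch work goes here ...
    echo "Batch complete";
    exit;
}

http_response_code(404);
echo "Not found";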
Combine timeout adjustments with these improvements:
- Increase PHP max_execution_time in php.ini
- Adjust the FPM pool setting request_terminate_timeout (see the sketch after this list)
- Implement chunked processing for large batches
- Add Nginx buffering controls: fastcgi_buffers 16 16k; fastcgi_buffer_size 32k;
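Raising the Nginx timeout alone is not enough if PHP terminates the request first. A sketch of the matching PHP-side values, assuming a PHP 8.1 install with Debian/Ubuntu-style paths (adjust to your setup):
; /etc/php/8.1/fpm/php.ini
max_execution_time = 300

; /etc/php/8.1/fpm/pool.d/www.conf
request_terminate_timeout = 300
Reload PHP-FPM after changing these values (e.g. sudo systemctl reload php8.1-fpm).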
After configuration:
sudo nginx -t # Test configuration
sudo systemctl reload nginx
Use ab or siege for load testing with realistic processing times.
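For example, a small ab run against the batch endpoint (URL and request counts are illustrative); note the -s flag, since ab's own default socket timeout is 30 seconds and would give up before a long-running script finishes:
ab -n 10 -c 2 -s 700 http://api.example.com/batch-process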