When deploying Laravel applications to AWS Elastic Beanstalk, one common pain point is getting queue workers to run reliably. Unlike traditional servers where you can SSH in and run `php artisan queue:work`, Elastic Beanstalk's ephemeral instances and permission restrictions require a different approach.
While Redis or Amazon SQS might seem like obvious choices, there are valid reasons to use the database driver:
- Simpler architecture when you're already using RDS
- No additional AWS service costs
- Easier to debug since jobs are stored in your own database (see the query sketch below)
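For example, inspecting the backlog is an ordinary query. A quick sketch using Laravel's query builder (the column names come from the default `queue:table` migration):

```php
use Illuminate\Support\Facades\DB;

// Count waiting jobs and peek at the oldest one still queued.
$pending = DB::table('jobs')->count();
$oldest  = DB::table('jobs')->orderBy('available_at')->first();
```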
The most reliable approach is using Elastic Beanstalk's configuration files. Create a file at `.ebextensions/01_queue_worker.config` with this content; the hook script runs automatically after each deployment and replaces the previous release's worker. (The `appdeploy/post` hooks directory exists on the older Amazon Linux AMI platforms; on Amazon Linux 2 and later, the equivalent is a `.platform/hooks/postdeploy` script.)
```yaml
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/artisan_queue_worker.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      # Stop any worker left over from the previous release.
      if [ -f /var/run/artisan_queue_worker.pid ]; then
        kill $(cat /var/run/artisan_queue_worker.pid) || true
      fi
      cd /var/www/html
      nohup php artisan queue:work database --sleep=3 --tries=3 > /dev/null 2>&1 &
      echo $! > /var/run/artisan_queue_worker.pid
```
For more control over process management, you can configure Supervisor (this assumes Supervisor is already installed on the instance and set up to include files from `/etc/supervisor/conf.d`):
```yaml
files:
  "/etc/supervisor/conf.d/laravel-worker.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      [program:laravel-worker]
      process_name=%(program_name)s_%(process_num)02d
      command=php /var/www/html/artisan queue:work database --sleep=3 --tries=3
      autostart=true
      autorestart=true
      user=webapp
      numprocs=2
      redirect_stderr=true
      stdout_logfile=/var/log/supervisor/laravel-queue.log

container_commands:
  01_restart_supervisor:
    command: /sbin/service supervisor restart
```
To make your queue processing more resilient:
- Implement proper job retries in your job classes
- Monitor the `failed_jobs` table
- Set up CloudWatch alerts for stuck workers (a metric-publishing sketch follows this list)
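One way to feed those alerts, sketched here with the `aws/aws-sdk-php` package (the region, namespace, and metric name are placeholders), is a scheduled task that publishes queue depth as a custom metric:

```php
use Aws\CloudWatch\CloudWatchClient;
use Illuminate\Support\Facades\DB;

// Hypothetical scheduled task: publish queue depth so a CloudWatch alarm
// can fire when the backlog stops draining.
$cloudWatch = new CloudWatchClient([
    'region'  => 'us-east-1', // placeholder region
    'version' => 'latest',
]);

$cloudWatch->putMetricData([
    'Namespace'  => 'MyApp/Queue', // placeholder namespace
    'MetricData' => [[
        'MetricName' => 'PendingJobs',
        'Value'      => (float) DB::table('jobs')->count(),
        'Unit'       => 'Count',
    ]],
]);
```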
When using the database driver with RDS:
- Set proper connection timeouts in `config/database.php`
- Consider increasing RDS connection limits
- Implement connection retry logic in your queue worker (see the sketch after the snippet below)
```php
// In config/database.php, inside the 'connections' array
'mysql' => [
    // ...
    'options' => [
        PDO::ATTR_TIMEOUT    => 60,    // fail fast instead of hanging on a dead connection
        PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
        PDO::ATTR_PERSISTENT => false, // avoid piling up persistent connections against RDS
    ],
],
```
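For the retry-logic item, one hedged approach is to catch query failures inside the job, discard the dead connection, and let the queue's retry mechanism take over. A minimal sketch of a `handle` method inside a queued job class (imports go at the top of the file):

```php
use Illuminate\Database\QueryException;
use Illuminate\Support\Facades\DB;

public function handle()
{
    try {
        // Job logic that touches the database
    } catch (QueryException $e) {
        // Drop the broken connection so the next attempt reconnects,
        // then rethrow so --tries governs the retry.
        DB::reconnect();
        throw $e;
    }
}
```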
As your application grows:
- Monitor queue length with CloudWatch metrics
- Consider multiple queue workers for different queues
- Implement proper queue prioritization (see the dispatch sketch below)
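For prioritization, a sketch: dispatch urgent work to a dedicated queue and have the worker drain it first (`App\Jobs\ProcessExample` is a hypothetical job class):

```php
use App\Jobs\ProcessExample; // hypothetical job class

// Push time-sensitive work onto a dedicated 'high' queue...
ProcessExample::dispatch()->onQueue('high');

// ...then start the worker so it drains 'high' before 'default':
// php artisan queue:work database --queue=high,default
```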
Before any of these workers can process jobs, Laravel itself must be configured to use the database driver (set `QUEUE_CONNECTION=database` in your Elastic Beanstalk environment properties):
```php
// config/queue.php
'default' => env('QUEUE_CONNECTION', 'database'),

'connections' => [
    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'retry_after' => 90,
    ],
],
```
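One caveat worth knowing: `retry_after` must be longer than your longest-running job, or a second worker may pick the job up while the first is still processing it. If a job defines its own timeout, keep it below `retry_after`:

```php
// In a job class: give up after 60 seconds, comfortably inside
// the 90-second retry_after window above. (Value is illustrative.)
public $timeout = 60;
```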
Create the jobs table migration if you haven't already:
```bash
php artisan queue:table
php artisan migrate
```
A lighter-weight alternative to the hook script is a cron-based worker that exits after an hour (`--max-time=3600`) and is wrapped in `flock` so overlapping runs don't stack up. Create a `.ebextensions/01-worker.config` file with these contents:
```yaml
files:
  "/etc/cron.d/artisan_queue_worker":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * webapp /usr/bin/flock -n /tmp/artisan_queue_worker.lock /usr/bin/php /var/app/current/artisan queue:work database --sleep=3 --tries=3 --max-time=3600 > /dev/null 2>&1

commands:
  01_remove_old_cron:
    command: "rm -f /etc/cron.d/*.bak"
  02_restart_crond:
    command: "service crond restart"
```
For more control, you can instead run the workers under Supervisord and ship its configuration with the application (this assumes Supervisord itself is installed, e.g. via `pip install supervisor` in an earlier config file). Create `.ebextensions/02-supervisord.config`:
```yaml
files:
  "/etc/supervisord.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      [supervisord]
      logfile=/var/log/supervisor/supervisord.log
      logfile_maxbytes=50MB
      logfile_backups=10
      loglevel=info
      pidfile=/var/run/supervisord.pid
      nodaemon=false

      [unix_http_server]
      file=/var/run/supervisor.sock

      [rpcinterface:supervisor]
      supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

      [supervisorctl]
      serverurl=unix:///var/run/supervisor.sock

      [program:laravel-worker]
      process_name=%(program_name)s_%(process_num)02d
      command=php /var/app/current/artisan queue:work database --sleep=3 --tries=3
      autostart=true
      autorestart=true
      user=webapp
      numprocs=8
      redirect_stderr=true
      stdout_logfile=/var/app/current/storage/logs/worker.log

container_commands:
  01_create_log_dir:
    command: "mkdir -p /var/log/supervisor"
  02_start_supervisord:
    command: "pgrep -x supervisord || supervisord -c /etc/supervisord.conf"
  03_restart_workers:
    command: "supervisorctl restart laravel-worker:*"
```
Configure the failed jobs table:
```bash
php artisan queue:failed-table
php artisan migrate
```
Then in your job class, declare retry limits and a failure hook (the class name is illustrative):

```php
use Illuminate\Contracts\Queue\ShouldQueue;
use Throwable;

class ProcessExample implements ShouldQueue
{
    public $tries = 3;          // attempts before the job is marked as failed
    public $maxExceptions = 3;  // unhandled exceptions tolerated across attempts

    public function handle()
    {
        // Job logic here
    }

    public function failed(Throwable $exception)
    {
        // Failure handling (notify, clean up, etc.)
    }
}
```
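Dispatching works as with any other driver; with the database connection each dispatch inserts a row into the `jobs` table:

```php
use App\Jobs\ProcessExample;

// Queue the job for the next available worker.
ProcessExample::dispatch();

// Or delay it; available_at in the jobs table is set accordingly.
ProcessExample::dispatch()->delay(now()->addMinutes(5));
```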
Add these routes for monitoring:
Route::get('/queue-monitor', function() {
$pendingJobs = DB::table('jobs')->count();
$failedJobs = DB::table('failed_jobs')->count();
return response()->json([
'pending_jobs' => $pendingJobs,
'failed_jobs' => $failedJobs,
'worker_status' => exec('pgrep -f "queue:work"') ? 'running' : 'stopped'
]);
});
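Since this endpoint exposes operational details, don't leave it public. A minimal sketch using the framework's `auth` middleware (substitute whatever guard or token check you actually use):

```php
Route::get('/queue-monitor', function () {
    // ... monitoring logic from above
})->middleware('auth');
```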