By default, the Linux cron daemon runs each job independently: it forks a separate process for every job as its scheduled time arrives. When multiple jobs are scheduled at the same or overlapping times, they run concurrently; cron does not queue them or wait for one to finish before starting the next.
In your example:
# Example crontab entries
0 8 * * * /path/to/jobA.sh   # Job A at 8:00 AM
5 8 * * * /path/to/jobB.sh   # Job B at 8:05 AM
If Job A takes 12 hours to complete, Job B will still start at 8:05 AM as scheduled; it does not wait for Job A to finish. The cron daemon is not single-threaded with respect to jobs: each job is spawned as its own independent process.
Because jobs already run concurrently, the practical concerns are usually the opposite: launching several commands from a single entry, or preventing a long job from overlapping with other runs. You have several options:

- Run multiple commands from a single crontab entry, backgrounding each with & so the shell does not wait between them:

0 8 * * * /path/to/jobA.sh & /path/to/jobB.sh &

- Use GNU parallel or similar tools:

0 8 * * * parallel ::: "/path/to/jobA.sh" "/path/to/jobB.sh"

- Implement locking mechanisms for critical sections:

#!/bin/bash
# jobA.sh
(
    flock -n 200 || exit 1   # Give up immediately if another instance holds the lock
    # Critical section code here
) 200>/var/lock/jobA.lock
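flock can also wrap the command directly in the crontab entry, which avoids editing the script itself (a sketch using the same illustrative paths as above):

5 8 * * * flock -n /var/lock/jobB.lock /path/to/jobB.sh   # Skip this run if the previous one still holds the lock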
Jobs whose scheduled time passes while the system is down, or that are interrupted by a reboot, are handled differently depending on the scheduler:
- Anacron: better suited for systems that aren't always running, since it catches up on missed jobs (see the sketch after this list)
- Standard cron: Missed jobs are simply skipped and won't run until the next scheduled time
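For reference, anacron is configured with period/delay pairs rather than clock times; a minimal /etc/anacrontab entry might look like this (the job identifier and path are illustrative):

# period(days)  delay(min)  job-identifier  command
1               10          daily-jobA      /path/to/jobA.sh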
To make jobs resilient to reboots:
@reboot /path/to/init_script.sh        # Runs once at boot
*/5 * * * * /path/to/regular_job.sh    # Runs every 5 minutes
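The @reboot entry only runs whatever script you point it at, so the recovery behavior is up to that script. A minimal sketch of what a hypothetical init_script.sh might do, given the lock-file conventions used elsewhere in this answer:

#!/bin/bash
# init_script.sh -- hypothetical boot-time hook for the @reboot entry above.
# A reboot can leave stale lock files behind, blocking the next scheduled run,
# so clear them before the regular schedule resumes.
rm -f /tmp/jobA.lock /var/lock/jobA.lock

# Interrupted work could also be re-launched here, e.g.:
# /path/to/jobA.sh &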
For jobs that take significant time to complete:
#!/bin/bash
# Example of a robust long-running script
LOCKFILE="/tmp/jobA.lock"

# Check for an existing lock and whether that process is still alive
if [ -e "$LOCKFILE" ] && kill -0 "$(cat "$LOCKFILE")" 2>/dev/null; then
    echo "Already running"
    exit
fi

# Create lock file containing our PID
echo $$ > "$LOCKFILE"

# Main job logic here
/path/to/actual_work.sh

# Clean up
rm -f "$LOCKFILE"

Note that there is a small race window between the existence check and the lock creation; the flock pattern shown earlier closes that race.
Consider also implementing:
- Logging mechanisms
- Progress tracking
- Notification systems for failures
- Automatic retries for transient failures
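A minimal sketch combining two of those ideas, logging and automatic retries (the wrapper name, log path, and retry policy are all illustrative):

#!/bin/bash
# retry_wrapper.sh -- hypothetical wrapper adding logging and simple retries
LOG="/var/log/jobA.log"

for attempt in 1 2 3; do
    echo "$(date -Is) attempt ${attempt} starting" >> "$LOG"
    if /path/to/actual_work.sh >> "$LOG" 2>&1; then
        echo "$(date -Is) attempt ${attempt} succeeded" >> "$LOG"
        exit 0
    fi
    sleep 60   # Back off before retrying a transient failure
done

echo "$(date -Is) all attempts failed" >> "$LOG"
# A notification hook (mail, webhook, etc.) could be called here
exit 1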
Essential commands for monitoring:
# View cron logs (system dependent)
grep CRON /var/log/syslog
journalctl -u cron.service

# Check running processes
ps aux | grep 'jobA.sh'

# Verify lock files
ls -l /tmp/*.lock
To recap the underlying mechanics: cron jobs are executed by the cron daemon (crond), which spawns an individual process for each job. The fundamental behavior is:
- Each cron job runs as a separate process
- Jobs scheduled at the same time will run in parallel
- No inherent queuing mechanism exists between jobs
In your example with Job A (8:00 AM, 12-hour duration) and Job B (8:05 AM):
# Example crontab entries
0 8 * * * /path/to/jobA.sh # Job A
5 8 * * * /path/to/jobB.sh # Job B
Job B will execute at 8:05 AM regardless of Job A's status because:
- The cron daemon creates separate processes
- No process dependency exists between jobs
- System resources permitting, both will run concurrently
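While both are running, you can confirm that each is its own process (a quick check using the script names from the example above):

pgrep -af 'job[AB].sh'   # Lists PID and full command line for each matching process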
When a system reboots during cron execution:
- Active cron jobs do not automatically resume
- The cron daemon itself restarts and continues with new schedules
- Interrupted jobs require manual implementation for recovery
For mission-critical jobs, consider these approaches:
#!/bin/bash
# jobA.sh with recovery mechanism
LOCKFILE=/var/lock/jobA.lock
if [ -e "$LOCKFILE" ]; then
    echo "Previous run detected, implementing recovery..."
    # Add recovery logic here
fi
touch "$LOCKFILE"
# Main job logic
rm "$LOCKFILE"
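One caveat if you want that recovery branch to fire after a reboot: on many distributions /var/lock is a tmpfs that is cleared at boot, so a reboot-persistent marker would need to live somewhere like /var/lib instead. With that in place, an @reboot entry (as shown earlier) can re-run the script at boot:

@reboot /path/to/jobA.sh   # A leftover marker from the interrupted run triggers the recovery branch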
For better job management:
# Example systemd service unit
[Unit]
Description=Long Running Job A
After=network.target
[Service]
Type=simple
ExecStart=/path/to/jobA.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
Then schedule with systemd timers instead of cron for better state tracking.
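As a sketch of that approach, a timer paired with the service above (assuming it is installed as jobA.service, so the timer is named jobA.timer) could reproduce the 8:00 AM schedule; Persistent=true also gives anacron-style catch-up after downtime:

# jobA.timer
[Unit]
Description=Run Job A daily at 8:00 AM

[Timer]
OnCalendar=*-*-* 08:00:00
Persistent=true   # If the scheduled time was missed, run once at next boot

[Install]
WantedBy=timers.target

Enable it with systemctl enable --now jobA.timer.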
To track parallel executions:
# Check running cron processes
ps aux | grep cron
# View cron logs (system location may vary)
grep CRON /var/log/syslog
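If you adopt the systemd timer approach, its state can be inspected directly (assuming the jobA.timer/jobA.service names from the sketch above):

systemctl list-timers jobA.timer           # Shows last and next activation
journalctl -u jobA.service --since today   # Logs from today's runs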
Finally, some general recommendations for long-running jobs:
- Implement proper locking mechanisms
- Consider breaking into smaller jobs
- Log job start/end times
- Monitor system resource usage
- Use process supervisors like supervisord for critical jobs
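As a sketch of the supervisor approach, a minimal supervisord program section might look like this (the program name and paths are illustrative):

[program:jobA]
command=/path/to/jobA.sh
autorestart=unexpected           ; Restart only if it exits with a failure code
stdout_logfile=/var/log/jobA.out
stderr_logfile=/var/log/jobA.err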