When dealing with persistent background processes that run indefinitely once started through cron, we need a mechanism to ensure single execution while still maintaining the reliability of cron's scheduling system. The traditional cron approach isn't designed for one-off executions of persistent services.
Here are several robust methods to achieve this, each with different trade-offs:
A lockfile wrapper is the most reliable approach for production systems:
#!/bin/bash
LOCK_FILE=/tmp/my_job.lock
# Note: this test-then-touch has a small race window between the check and
# the touch; flock(1) (shown below) closes it.
if [ ! -f "$LOCK_FILE" ]; then
    touch "$LOCK_FILE"
    /path/to/your/command --persistent-flag &
    # Optionally remove the lock file when the process ends (requires monitoring)
else
    echo "Job already running" >&2
    exit 1
fi
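To schedule this wrapper, point cron at the script rather than at the command itself. For example, to launch it once per boot (the script path is illustrative):

```
@reboot /usr/local/bin/my_job_wrapper.sh
```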
For modern Linux systems with systemd:
# /etc/systemd/system/my-job.service
[Unit]
Description=My one-time job
[Service]
Type=oneshot
ExecStart=/path/to/your/command
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
Then enable and start it with:
sudo systemctl enable --now my-job.service
For systems without systemd, a cron watchdog can start the process whenever it is not running. Note the use of pgrep -x on the process name: pgrep -f with the full path would also match cron's own shell invocation of this line, so the command would never start:
*/5 * * * * pgrep -x your_command >/dev/null || /path/to/your/command >/dev/null 2>&1 &
To ensure execution after reboot while maintaining the single-instance guarantee, flock holds the lock for the life of the command, so keep the command in the foreground; backgrounding it inside the quoted string would release the lock as soon as the wrapper shell exits:
@reboot flock -n /tmp/my_job.lock /path/to/your/command
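To see why flock prevents duplicates, here is a small self-contained sketch (the lock path is illustrative): while one flock invocation holds the lock, a second invocation with -n fails immediately instead of starting a duplicate.

```shell
#!/bin/bash
LOCK=/tmp/flock_demo.lock

flock -n "$LOCK" sleep 3 &      # first instance: acquires the lock and holds it
sleep 1                         # give the first instance time to start

# Second instance: -n makes flock give up instead of blocking
if flock -n "$LOCK" true; then
    STATUS="duplicate started"
else
    STATUS="already running"
fi
echo "$STATUS"
wait                            # let the first instance finish
```

Without -n, the second invocation would block until the lock is released, which is useful for queueing but not for duplicate suppression.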
For an enterprise-grade solution with logging and monitoring:
#!/bin/bash
LOCK_DIR=/tmp/my_job.lock.d
LOG_FILE=/var/log/my_job.log
if mkdir "$LOCK_DIR" 2>/dev/null; then
    echo "$(date): Starting job" >> "$LOG_FILE"
    /path/to/your/command >> "$LOG_FILE" 2>&1 &
    echo "$!" > "$LOCK_DIR/pid"
    # Do NOT remove the lock dir on script exit: the job outlives this script,
    # and an EXIT trap would release the lock while the job is still running.
    echo "$(date): Job started (PID $!)" >> "$LOG_FILE"
elif ! kill -0 "$(cat "$LOCK_DIR/pid" 2>/dev/null)" 2>/dev/null; then
    # Stale lock left by a crashed job: clear it so the next run can start
    rm -rf "$LOCK_DIR"
    echo "$(date): Removed stale lock" >> "$LOG_FILE"
    exit 1
else
    echo "$(date): Job already running" >> "$LOG_FILE"
    exit 1
fi
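The reason this script uses mkdir rather than touch for locking: mkdir is an atomic test-and-set, so two scripts racing to create the lock cannot both succeed. A minimal demonstration (the /tmp path is illustrative):

```shell
#!/bin/bash
DEMO_DIR=/tmp/mkdir_lock_demo.d
rm -rf "$DEMO_DIR"

# First attempt creates the directory and "wins" the lock
mkdir "$DEMO_DIR" 2>/dev/null && FIRST="acquired"

# Second attempt fails because the directory already exists
mkdir "$DEMO_DIR" 2>/dev/null && SECOND="acquired" || SECOND="locked"

echo "$FIRST $SECOND"
rmdir "$DEMO_DIR"
```

With touch, both racing scripts could pass the `[ ! -f ... ]` test before either creates the file; mkdir collapses the test and the set into one kernel operation.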
When dealing with persistent processes (services/daemons) that run indefinitely once launched, traditional cron scheduling becomes problematic. The standard cron syntax lacks native support for single-execution jobs, which creates complications when you need to:
- Initialize a background service on system reboot
- Run a maintenance script exactly once after deployment
- Trigger a long-running process without creating duplicates
Here are three reliable approaches to achieve one-time execution while maintaining process persistence:
1. The Lockfile Technique
The most robust method combines cron with file-based locking:
#!/bin/bash
LOCKFILE=/tmp/myjob.lock
if [ -e "$LOCKFILE" ]; then
    echo "Job already running" >&2
    exit 1
else
    touch "$LOCKFILE"
    /path/to/your/persistent/process &
    # No need to remove the lockfile for persistent processes
fi
Schedule this in crontab with:
@reboot /path/to/above/script.sh
2. Systemd Service Integration
For modern Linux systems, systemd provides better control:
# /etc/systemd/system/one-time.service
[Unit]
Description=One-time service starter
[Service]
Type=simple
ExecStart=/path/to/your/daemon
# Restart the daemon if it crashes (RemainAfterExit only applies to oneshot units)
Restart=on-failure
[Install]
WantedBy=multi-user.target
Then enable with:
sudo systemctl enable one-time.service
3. Temporary Cron Entry
For manual one-time execution, use this bash function:
add_one_time_job() {
    local cmd="$1"
    local tag="one-time-$$"                  # unique marker so the entry can find itself
    local temp_cron
    temp_cron=$(mktemp)
    crontab -l 2>/dev/null > "$temp_cron"    # preserve any existing entries
    # Remove this entry first, then run the command, so it fires exactly once
    # even if the command is long-running. (Note: crontab -r here would have
    # wiped every other job; % is special in crontab lines and needs escaping.)
    echo "* * * * * crontab -l | grep -v $tag | crontab -; $cmd # $tag" >> "$temp_cron"
    crontab "$temp_cron"
    rm "$temp_cron"
}
When implementing these solutions, pay attention to:
- Process monitoring: use supervisord or similar for crash recovery
- Resource limits: configure memory/CPU constraints
- Log rotation: essential for long-running processes
- User permissions: especially when running as system services
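For the log-rotation point above, a minimal logrotate sketch (the file name and log path are assumptions matching the earlier examples; drop it in /etc/logrotate.d/):

```
# /etc/logrotate.d/my-job  (illustrative)
/var/log/my_job.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
```

copytruncate rotates the log without requiring the long-running process to reopen its log file, at the cost of possibly losing a few lines written during the copy.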
Here's a complete deployment-ready solution for a Node.js service:
#!/usr/bin/env bash
# one-time-node.sh
SERVICE_DIR="/opt/node-service"
LOCK_FILE="/var/run/node-service.lock"
LOG_FILE="/var/log/node-service.log"
if [ -f "$LOCK_FILE" ]; then
    echo "$(date): Service already running" >> "$LOG_FILE"
    exit 0
fi
touch "$LOCK_FILE"
cd "$SERVICE_DIR" || exit 1
nohup node server.js >> "$LOG_FILE" 2>&1 &
Cron entry for reboot execution:
@reboot /bin/bash /usr/local/bin/one-time-node.sh
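One caveat with this script: if the Node process crashes, the lock file survives and blocks the next start (on most distributions /var/run is tmpfs, so it is at least cleared at each reboot). A hedged companion cron entry, reusing the watchdog pattern from earlier, can clear stale locks; the bracketed [n] keeps pgrep -f from matching cron's own shell invocation of this line:

```
*/5 * * * * pgrep -f "[n]ode server.js" >/dev/null || rm -f /var/run/node-service.lock
```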