Every sysadmin knows the frustration: you're running a long process over SSH when suddenly the connection drops. Maybe your WiFi hiccuped, the network had issues, or the server terminated the session. The question is: can you recover that session and its running process?
When you SSH into a server and run a command, it typically looks like this:
sshd (parent)
└── bash (your shell)
    └── your_command (child process)
When the SSH connection drops, bash receives SIGHUP (the hangup signal) and, by default, forwards it to its child processes, which then terminate.
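You can see this for yourself with a tiny probe script (the script name and log path here are just illustrative):

#!/usr/bin/env bash
# hup_probe.sh (hypothetical): log SIGHUP instead of dying with it
trap 'echo "$(date): received SIGHUP" >> /tmp/hup_probe.log' HUP
while true; do sleep 1; done

Run it in the foreground over SSH, kill the connection, reconnect, and check /tmp/hup_probe.log. Without the trap, the same script would simply die.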
Option 1: Using nohup
The classic approach for persistent processes:
nohup your_command &
But this only prevents termination; it doesn't let you reattach to the process later.
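In practice you'll also want to capture output and remember the PID so you can check on the job after reconnecting (script and file names are placeholders):

nohup ./long_job.sh > job.log 2>&1 &   # without the redirect, nohup appends to ./nohup.out
echo $! > job.pid                      # save the PID for later
# after reconnecting:
tail -f job.log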
Option 2: Disconnecting Properly
If you suspect the connection may drop, detach the running process from the shell first:
Ctrl+Z # Suspend the current foreground process
bg # Resume it in the background
disown -h %1 # Tell the shell not to send it SIGHUP
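One caveat: the job's stdout and stderr still point at the dying terminal, so redirect them up front if you care about the output. A worked sketch with placeholder names:

./long_task.sh > task.log 2>&1
# press Ctrl+Z, then:
bg %1
disown -h %1
jobs -l        # confirm the job is still listed and note its PID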
Option 3: The Reptyr Magic
reptyr can "steal" a running process and reattach it to your current terminal. Install it:
sudo apt-get install reptyr # Debian/Ubuntu
sudo yum install reptyr # RHEL/CentOS
After reconnecting:
ps aux | grep your_command # Find PID
reptyr PID # Reattach
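On kernels with the Yama LSM enabled (Ubuntu, for example), reptyr may fail with a ptrace permission error; you can temporarily relax the restriction:

echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope   # restore to 1 afterwards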
For critical processes, consider creating a systemd service:
[Unit]
Description=My Persistent Process

[Service]
ExecStart=/path/to/command
Restart=always
User=youruser

[Install]
WantedBy=multi-user.target
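To use it, save the unit under a name of your choosing (my-process.service below is just an example) and enable it:

sudo cp my-process.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now my-process.service
journalctl -u my-process.service -f     # follow its output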
Add these to ~/.ssh/config to make sessions more resilient:
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 5
    TCPKeepAlive yes
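If you can't edit the config file, the same options work one-off on the command line (user@host is a placeholder):

ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=5 user@host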
How to Resume Interrupted SSH Commands Without Screen/Tmux
When an SSH connection drops unexpectedly, any foreground processes initiated through that session typically receive SIGHUP and terminate. This becomes particularly problematic when running long-running operations like database migrations, large file transfers, or compilation jobs.
Linux provides several mechanisms to prevent process termination:
# Method 1: nohup (most basic)
nohup ./long_script.sh &
# Method 2: disown (for already running jobs)
./long_task.sh
^Z # press Ctrl+Z to suspend
bg %1
disown -h %1
# Method 3: setsid (new session)
setsid ./background_process.sh
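Note that setsid detaches the process from the controlling terminal but leaves its file descriptors pointing at it; a more robust invocation (a sketch, with a placeholder log name) redirects everything:

setsid ./background_process.sh > bg.log 2>&1 < /dev/null &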
For critical processes, consider creating transient systemd units:
# Create a transient service (a --scope unit would stay tied to your session and die with it)
systemd-run --user --unit=my-task ./critical_process.py
# Check status later
journalctl --user-unit=my-task
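One caveat with --user units: systemd may tear down your user manager when your last session ends, so enable lingering if the task must outlive all logins; you can also stop the unit the usual way:

loginctl enable-linger "$USER"      # keep user units running after logout
systemctl --user stop my-task       # stop it when done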
For custom applications, implement proper signal handling:
#!/usr/bin/env python3
import signal
import time

def ignore_hup(signum, frame):
    print("Received SIGHUP, continuing execution")

# Install the handler so a hangup no longer kills the process
signal.signal(signal.SIGHUP, ignore_hup)

while True:
    print("Working...")
    time.sleep(5)
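A quick way to test the handler without dropping a real connection, assuming the script above is saved as hup_resistant.py (the log name is illustrative):

python3 hup_resistant.py > worker.log 2>&1 &
kill -HUP $!        # simulate the hangup; the loop keeps running
tail worker.log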
On the server side, adjust these in /etc/ssh/sshd_config:
# Probe the client every 60s; tolerate 5 missed replies before disconnecting
ClientAliveInterval 60
ClientAliveCountMax 5
# Also enable TCP-level keepalives
TCPKeepAlive yes
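After editing, validate the config and reload the daemon (the service is named sshd on RHEL-family systems and ssh on Debian/Ubuntu):

sudo sshd -t                        # syntax-check the config first
sudo systemctl reload sshd          # or: sudo systemctl reload ssh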
If the process is still alive when you reconnect, try these recovery techniques:
# Find orphaned process
ps -efj | grep -i "your_command"
# Reattach to process if possible
reptyr PID_NUMBER
# Alternative: detach it from the dead terminal with gdb
# (the call fails if the process is already a process-group leader)
gdb -p PID_NUMBER
(gdb) call (int)setsid()
(gdb) detach
(gdb) quit
While the question specifies no screen/tmux, in modern environments you might use:
# Emergency tmux session
tmux new -d -s rescue_session
tmux send -t rescue_session "./important_job" ENTER
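The job now runs inside tmux's server process, immune to your SSH session; reattach any time with:

tmux attach -t rescue_session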