How to Keep Processes Running After SSH Disconnection: Preventing Termination of Long-Running Commands



When working with remote servers over SSH, many developers hit this common scenario: you start a long-running process such as rsync, cp, or a database migration, only to have your SSH connection drop unexpectedly. The critical question is: what happens to your process?

By default, when an SSH session terminates (either intentionally or due to network issues), processes started in that session receive a SIGHUP signal, which typically terminates them. The chain of events:

  1. The kernel sends SIGHUP to the controlling process (your login shell) when the terminal goes away
  2. The shell forwards SIGHUP to its jobs before exiting
  3. Stopped jobs in a newly orphaned process group also receive SIGHUP (and SIGCONT) from the kernel
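
You can watch this happen with a throwaway job (a quick experiment; the sleep duration is arbitrary):

# In an SSH session, start a disposable background job
sleep 600 &
echo $!                  # note the PID

# Close the SSH window (or drop the connection), log back in, then:
ps -p <pid-from-above>   # no output means SIGHUP killed the job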

Option 1: Using nohup

The traditional approach is to use the nohup command:

nohup rsync -avz /large_directory user@remote:/backup/ &

Key points about nohup:

  • Redirects output to nohup.out by default
  • Ignores SIGHUP signal
  • Process continues after SSH disconnection
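
In practice, pick your own log file and record the PID so you can check on the job later (the filenames here are only examples):

nohup rsync -avz /large_directory user@remote:/backup/ > transfer.log 2>&1 &
echo $! > rsync.pid    # save the PID for later checks
tail -f transfer.log   # Ctrl+C stops tail, not rsync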

Option 2: Using tmux or screen

Terminal multiplexers provide more flexibility:

# Using tmux
tmux new -s backup_session
rsync -avz /large_directory user@remote:/backup/
# Detach with Ctrl+b then d

Advantages of terminal multiplexers:

  • Persistent sessions you can reattach to
  • Multiple virtual terminals
  • Session logging capabilities
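
To get back into the session after reconnecting (names match the example above):

tmux attach -t backup_session
# screen equivalents: create a named session, detach with Ctrl+a then d
screen -S backup_session
screen -r backup_session   # reattach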

Option 3: The disown Command

For already running processes:

# Start the process with output redirected (disown will not do this for you)
rsync -avz /large_directory user@remote:/backup/ > backup.log 2>&1 &
# Tell the shell not to send SIGHUP to job %1 when it exits
disown -h %1
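
disown only changes the shell's bookkeeping, so verify the job is marked before you log out:

jobs -l        # list jobs with their PIDs
disown -a -h   # alternatively, mark every job at once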

Systemd Service Approach

For critical production processes:

[Unit]
Description=Critical Backup Service

[Service]
Type=simple
ExecStart=/usr/bin/rsync -avz /data user@backup:/storage/
Restart=on-failure
User=backupuser

[Install]
WantedBy=multi-user.target
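
Save the unit as, say, /etc/systemd/system/backup.service (the name is an example), then load and start it:

sudo systemctl daemon-reload
sudo systemctl start backup.service
journalctl -u backup.service -f    # follow the service's output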

Using setsid

Start process in new session:

setsid rsync -avz /large_directory user@remote:/backup/
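
setsid detaches the command from the controlling terminal but does not redirect I/O, so redirect it yourself (the log path is an example):

setsid rsync -avz /large_directory user@remote:/backup/ > /tmp/rsync.log 2>&1 < /dev/null &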

After reconnecting, verify your process status:

ps aux | grep rsync              # nohup / disown / setsid processes
pgrep -a rsync
# For tmux/screen sessions
tmux ls
screen -list
tmux attach -t backup_session    # reattach to the tmux session

For a production database migration that might take hours:

tmux new -s migration
pg_dump -U postgres production_db | psql -U postgres staging_db
# Detach with Ctrl+b then d; the dump keeps running
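
When you reconnect later, pick the session back up:

tmux attach -t migration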

Remember to monitor resource usage for long-running processes:

# The [r]sync pattern keeps grep from matching its own process
watch -n 60 "ps -eo pid,user,pcpu,pmem,cmd | grep [r]sync"

Handling SIGHUP in Your Own Programs

For programs you write yourself, you can catch SIGHUP in a handler and shut down cleanly:


#!/usr/bin/python3
import signal
import sys
import time

def cleanup(signum, frame):
    # Flush buffers, close files, or checkpoint state here
    print(f"Received signal {signum}, shutting down")
    sys.exit(0)

# Replace the default die-on-hangup action with our handler
signal.signal(signal.SIGHUP, cleanup)

while True:
    print("Working...")
    time.sleep(1)
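
To try it, run the script and send it a hangup; the handler fires instead of the default kill (worker.py is whatever you named the script):

python3 worker.py &
kill -HUP $!    # prints the cleanup message and exits cleanly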

Finally, a few tips for transfers over flaky connections (see the sketch below):

  • Keep the SSH link alive: pass -e 'ssh -o ServerAliveInterval=30' to rsync
  • Consider autossh for persistent connections
  • For large transfers, split the work into smaller chunks (for example with tar pipes) so an interrupted run loses less progress
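
A combined sketch of the first and third tips (paths and directory layout are illustrative):

# Keep-alives prevent idle drops during long transfers; --partial keeps
# partially transferred files so a rerun can pick them up
rsync -avz --partial -e 'ssh -o ServerAliveInterval=30' \
    /large_directory/ user@remote:/backup/

# Chunked tar pipe: one subdirectory per connection, so a dropped
# link only costs the chunk in flight
for d in /large_directory/*/; do
    tar cf - "$d" | ssh user@remote "tar xf - -C /backup/"
done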