When automating tasks across Linux machines, repeatedly establishing new SSH connections for individual commands creates significant overhead. Each connection requires:
- Authentication handshake (50-500ms)
- Encryption setup
- Environment initialization
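A quick way to feel this cost, assuming key-based authentication to a reachable host (`user@remote` here is illustrative):

```bash
# Each invocation pays the full handshake; the wall-clock time is
# dominated by connection setup, not the no-op command itself
time ssh user@remote true
time ssh user@remote true
```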
The OpenSSH suite, combined with standard shell tooling, provides several ways to keep a connection persistent:
1. ControlMaster Configuration
Add to ~/.ssh/config:
```
Host *
    ControlMaster auto
    ControlPath ~/.ssh/control:%h:%p:%r
    ControlPersist 1h
```
This creates a reusable socket connection. Test with:
```bash
# First connection creates the master
ssh user@remote
# Subsequent connections reuse its socket
ssh -O check user@remote     # Verify the master is alive
ssh user@remote "uptime"     # Runs over the existing connection
```
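The master socket can also be managed explicitly with OpenSSH's standard `-O` control commands:

```bash
ssh -O check user@remote   # Is a master running for this host?
ssh -O stop user@remote    # Stop accepting new multiplexed sessions
ssh -O exit user@remote    # Tear down the master and remove its socket
```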
2. Named Pipes with SSH
Create a persistent remote shell that reads commands from a FIFO. Note that a writer must hold the pipe open; otherwise the first `echo` closes it and the remote shell exits on EOF:
```bash
mkfifo /tmp/sshpipe
ssh user@remote "bash -s" < /tmp/sshpipe &
exec 3> /tmp/sshpipe   # Hold the write end open so ssh doesn't see EOF
```
Stream commands:
echo "date" > /tmp/sshpipe echo "df -h" > /tmp/sshpipe
3. tmux/screen Session Piping
Create a detached session:
ssh user@remote "tmux new -d -s worker"
Send commands to the session:
ssh user@remote "tmux send-keys -t worker 'ls -l' C-m" ssh user@remote "tmux capture-pane -t worker -p" # Get output
4. SSH Connection Pooling
Bash script example, maintaining one master connection per host (the host names are illustrative):
```bash
#!/bin/bash
# Establish one persistent master connection (-M) per host
hosts=(host1 host2 host3 host4 host5)
for h in "${hosts[@]}"; do
    ssh -MNf -S ~/.ssh/control:%h:%p:%r "user@$h"
done

# Execute a command over the pooled connection for a given host
sshexec() {
    local host="$1" cmd="$2"
    ssh -S ~/.ssh/control:%h:%p:%r "user@$host" "$cmd"
}

sshexec host1 "uname -a"
```
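With the masters in place, fan-out becomes cheap. A sketch running the same command across the whole pool in parallel, using the `hosts` array and `sshexec` function defined above:

```bash
# Run a command on every pooled host concurrently
for h in "${hosts[@]}"; do
    sshexec "$h" "uptime" &
done
wait   # Block until all background jobs finish
```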
Approximate per-command cost of each method:

| Method | Connection time | Memory usage |
|---|---|---|
| New SSH per command | 300ms | 5MB per instance |
| ControlMaster | 5ms | 8MB, shared |
| Named pipe | 2ms | 6MB, persistent |
Security and operational tips:
- Always use SSH keys with passphrases
- Set an appropriate `ControlPersist` timeout (e.g., 10m for sensitive operations)
- Regularly check active connections with `ssh -O check user@remote`
To see how these pieces fit a real workflow, start from the naive pattern most automation scripts use: a new connection for every command:
```bash
ssh user@remote "command1"
ssh user@remote "command2"
ssh user@remote "command3"
```
This approach has three critical drawbacks:
- Authentication latency (especially with 2FA)
- TCP connection setup/teardown overhead
- Environment isolation between commands (demonstrated below)
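The isolation point is easy to demonstrate: nothing set in one invocation survives to the next, because each command runs in a fresh login shell:

```bash
ssh user@remote "cd /tmp && pwd"          # prints /tmp
ssh user@remote "pwd"                     # back in the home directory
ssh user@remote "export FOO=1"            # variable dies with this session
ssh user@remote 'echo "${FOO:-unset}"'    # prints "unset"
```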
The most efficient native solution is again ControlMaster, this time scoped to a single host rather than `Host *`. Add this to your ~/.ssh/config:
```
Host remote-server
    HostName 192.168.1.100
    User automation-user
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```
Now your first connection establishes a master socket:
```bash
ssh -Nf remote-server   # Background master connection
```
Subsequent commands attach to this existing connection:
```bash
ssh remote-server "echo Command1"     # Uses existing socket
ssh remote-server "touch /tmp/file"   # No re-authentication
```
For true command streaming (sending multiple commands through one connection), combine ControlMaster with named pipes:
```bash
# Create command pipe
mkfifo cmd_pipe

# Persistent SSH session reading from the pipe
ssh remote-server < cmd_pipe &

# Hold the write end open so the session outlives individual writers
exec 3> cmd_pipe

# Send commands
echo "date" > cmd_pipe
echo "df -h" > cmd_pipe
```
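Output from the streamed commands arrives on the backgrounded ssh's stdout, so by default it lands in the launching terminal. A sketch redirecting it to a file instead (the log file name is illustrative):

```bash
ssh remote-server < cmd_pipe > session.log 2>&1 &
tail -f session.log   # Watch results as commands are echoed into the pipe
```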
Here's a practical monitoring implementation that streams system metrics:
```bash
#!/bin/bash
# One persistent session acts as a metric dispatcher, fed by a FIFO
mkfifo cmd_pipe
ssh -T remote-server 'bash -s' < cmd_pipe &
exec 3> cmd_pipe   # Keep the pipe open between writes

# Ship the dispatcher loop through the pipe; the remote shell parses it,
# then keeps reading later lines from the same stream as commands
cat > cmd_pipe <<'EOF'
while read -r cmd; do
    case $cmd in
        cpu)  grep 'cpu ' /proc/stat ;;
        mem)  free -m ;;
        disk) df -h / ;;
        *)    echo "Unknown command: $cmd" ;;
    esac
done
EOF

# Send commands
echo "cpu" > cmd_pipe
sleep 2
echo "mem" > cmd_pipe
```
Ensure reliable operation with these practices:
- Use `ServerAliveInterval 60` in your SSH config so dead connections are detected
- Implement connection health checks (see the sketch after this list)
- Set an appropriate `ControlPersist` timeout (e.g., 1h for intermittent tasks)
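As a sketch of the health-check idea, a hypothetical wrapper (`run_remote` is illustrative, not a standard tool) that re-establishes the master before a command if it has died:

```bash
#!/bin/bash
# Run a command over the multiplexed connection, restarting the master if needed
run_remote() {
    local host="$1"; shift
    if ! ssh -O check "$host" 2>/dev/null; then
        ssh -MNf "$host"   # Master died (or never started): bring it back up
    fi
    ssh "$host" "$@"
}

run_remote remote-server uptime
```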