Efficient Ways to Execute Commands Across Multiple Linux Servers Simultaneously


Managing multiple Linux servers often requires executing the same commands across all machines. Manual SSH logins for each server quickly become tedious and error-prone. While cron jobs work for scheduled tasks, they don't help with ad-hoc commands that need immediate execution.

The simplest approach uses SSH with shell loops:

for server in server1 server2 server3; do
    ssh user@$server "your_command_here"
done

For better security and automation, use SSH keys:

# Generate a key pair if you haven't already (ed25519 is preferred over rsa)
ssh-keygen -t ed25519
# Copy public key to all servers
for server in $(cat server_list.txt); do
    ssh-copy-id user@$server
done
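Once keys are distributed, an entry in ~/.ssh/config can set the user and key in one place so the loops above work with bare hostnames. A minimal sketch, with hypothetical host and user names:

```
# ~/.ssh/config
Host server1 server2 server3
    User admin
    IdentityFile ~/.ssh/id_ed25519
    # Fail instead of prompting when run non-interactively from a script
    BatchMode yes
```

With this in place, `ssh server1 "your_command"` needs no user@ prefix, and scripts fail fast instead of hanging on a password prompt.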

Several specialized tools exist for parallel execution:

1. pssh (Parallel SSH)

# Install pssh
sudo apt-get install pssh

# Basic usage: -A prompts once for the password, -i prints each host's output inline
parallel-ssh -h servers.txt -l username -A -i "your_command"

2. Ansible Ad-Hoc Commands

# Install Ansible
sudo apt-get install ansible

# Create inventory file
echo "[servers]
server1
server2
server3" > inventory.ini

# Run command
ansible servers -i inventory.ini -m shell -a "your_command"

For complex scenarios, consider these approaches:

# Using clusterssh (CSSH)
cssh server1 server2 server3

# With tmux pane synchronization (keystrokes are sent to every pane)
tmux new-session -d -s cluster "ssh user@server1"
tmux split-window -t cluster "ssh user@server2"
tmux split-window -t cluster "ssh user@server3"
tmux select-layout -t cluster tiled
tmux set-window-option -t cluster synchronize-panes on
tmux attach -t cluster

When running commands across multiple servers, capture output per host and check exit codes:

# Capture output to separate files
for server in $(cat servers.txt); do
    ssh user@"$server" "your_command" > "$server.log" 2>&1
done

# Check exit status
for server in $(cat servers.txt); do
    if ! ssh user@$server "your_command"; then
        echo "Command failed on $server" >&2
    fi
done

On the security side:

  • Always use SSH keys instead of passwords
  • Limit sudo privileges for remote commands
  • Consider SSH certificates for large deployments
  • Use a tool like Vault for secret management
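To illustrate limiting sudo privileges, a drop-in sudoers fragment can whitelist only the commands your automation actually runs. This is a sketch with a hypothetical user and file name; always edit with visudo:

```
# /etc/sudoers.d/deploy  (hypothetical file; create with: visudo -f /etc/sudoers.d/deploy)
# Allow the deploy user to run only these commands without a password,
# instead of granting blanket root access:
deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart nginx, /usr/bin/apt-get update
```

Scoping NOPASSWD to specific binaries keeps unattended SSH runs working while limiting the blast radius of a compromised key.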

For large numbers of servers:

# Use GNU parallel to cap concurrency (here, 10 jobs at a time)
parallel -j 10 ssh user@{} "your_command" :::: servers.txt

# Or with xargs (-P 10 limits to 10 parallel SSH sessions)
xargs -P 10 -I {} ssh -n user@{} "your_command" < servers.txt

For basic parallel execution across a small number of machines, a simple shell script with SSH can work effectively:


#!/bin/bash
# Define your server list
SERVERS=("server1.example.com" "server2.example.com" "server3.example.com")

# The command to run (all arguments joined into one string)
COMMAND="$*"

# Loop through servers and execute command
for SERVER in "${SERVERS[@]}"; do
    echo "Running on $SERVER"
    ssh "$SERVER" "$COMMAND" &
done
wait
echo "All commands completed"

For more robust solutions, consider these specialized tools:


# pssh (Parallel SSH) example; assumes passwordless sudo on the targets
pssh -h hosts.txt -l username -A -i "sudo systemctl restart nginx"

# Ansible ad-hoc command example
ansible all -i inventory.ini -m shell -a "df -h"

For larger environments, consider these options:

  • Ansible Playbooks for complex multi-step operations
  • SaltStack for real-time parallel execution
  • ClusterSSH for interactive parallel sessions
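As a sketch of the playbook option, a minimal multi-step playbook might look like the following; the file name, group name, and package are hypothetical:

```yaml
# update_nginx.yml (hypothetical) - run with:
#   ansible-playbook -i inventory.ini update_nginx.yml
- hosts: servers
  become: true
  tasks:
    - name: Update the apt cache
      apt:
        update_cache: yes

    - name: Ensure nginx is at the latest version
      apt:
        name: nginx
        state: latest

    - name: Restart nginx
      service:
        name: nginx
        state: restarted
```

Unlike ad-hoc commands, a playbook is idempotent and version-controllable, which matters once the procedure has more than one step.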

Here's a complete script to check and install security updates:


#!/bin/bash
# servers.txt contains one hostname per line.
# Assumes key-based SSH and passwordless sudo on each server, plus the
# unattended-upgrades package for applying security updates only.
while read -r server; do
    echo "Checking $server"
    ssh -n "$server" \
    "sudo apt-get update -qq &&
     apt list --upgradable 2>/dev/null | grep -i security;
     sudo unattended-upgrade" > "$server.log" 2>&1 &
done < servers.txt
wait

When implementing parallel execution:

  • Use SSH keys for password-less authentication
  • Implement proper error handling and logging
  • Rate-limit concurrency for large fleets
  • Test with a dry-run first where the tool supports one (e.g. ansible --check)
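The checklist above can be combined into one pattern: a capped number of concurrent jobs, a log file per host, and a failure summary at the end. This is a sketch; `remote_run` uses a local echo (with one simulated failure) in place of a real SSH call so the structure can be tried without servers, and the host names are hypothetical.

```shell
#!/usr/bin/env bash
# Rate-limited fan-out with per-host logs and an error summary.
set -u

logdir=$(mktemp -d)
hosts=(host1 host2 host3 host4 host5)   # normally read from servers.txt
max_jobs=2                              # rate limit: concurrent sessions

remote_run() {
    local host=$1
    # Real version: ssh -n -o BatchMode=yes "user@$host" "your_command"
    if [[ $host == host3 ]]; then return 1; fi   # simulate one failure
    echo "output from $host"
}

for host in "${hosts[@]}"; do
    # Throttle: wait for a free slot before launching the next job
    while (( $(jobs -rp | wc -l) >= max_jobs )); do sleep 0.2; done
    {
        remote_run "$host" > "$logdir/$host.log" 2>&1
        echo $? > "$logdir/$host.status"    # record exit code per host
    } &
done
wait

failures=0
for host in "${hosts[@]}"; do
    if [[ $(cat "$logdir/$host.status") -ne 0 ]]; then
        echo "command failed on $host" >&2
        failures=$((failures+1))
    fi
done
echo "done: ${#hosts[@]} hosts, $failures failed"
```

Recording exit codes in status files avoids the pitfall that a plain `wait` discards each background job's return value.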