When automating infrastructure deployment or test harnesses, one common pain point is the initial SSH host verification. The default interactive prompt breaks automation flows when connecting to newly provisioned hosts. Here's how to handle this properly in production-grade scripts.
Before bypassing host verification, it's crucial to understand why SSH requires this step:
- Prevents man-in-the-middle attacks
- Verifies server identity consistency
- Creates an audit trail of previously connected hosts
Here are the most robust approaches with their trade-offs:
1. Using StrictHostKeyChecking=no (Not Recommended)
The quick but insecure method:
ssh -o StrictHostKeyChecking=no user@newhost
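If you must fall back on this, it is usually because some other tool invokes ssh under the hood. As a hedged sketch (the repository URL and host are placeholders), the same options can be passed to git through the GIT_SSH_COMMAND environment variable, and adding UserKnownHostsFile=/dev/null at least keeps the untrusted key out of your real known_hosts:
# One-off clone from a freshly provisioned git server (hypothetical URL)
GIT_SSH_COMMAND='ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' \
    git clone git@newhost.example.com:team/repo.git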
2. Pre-populating known_hosts
Best practice for controlled environments:
# Get host key before first connection
ssh-keyscan -H newhost.example.com >> ~/.ssh/known_hosts
# To display the fingerprint for out-of-band verification:
ssh-keyscan -t rsa newhost.example.com | ssh-keygen -lf -
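Because ssh-keyscan appends unconditionally, re-running the same provisioning step can leave duplicate entries. A small idempotency sketch (the hostname is a placeholder): check for an existing entry with ssh-keygen -F first, which, unlike a plain grep, also matches hashed entries:
HOST="newhost.example.com"
if ! ssh-keygen -F "$HOST" > /dev/null; then
    ssh-keyscan -H "$HOST" >> ~/.ssh/known_hosts
fi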
3. Using SSH Config
For recurring automation:
# ~/.ssh/config
Host *
    StrictHostKeyChecking accept-new
    UserKnownHostsFile /path/to/automation_known_hosts
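Applying accept-new to Host * relaxes the policy for every connection. If your automation targets follow a predictable naming scheme (the *.internal.example.com pattern below is an assumption), you can scope the relaxed setting and keep the default prompt everywhere else; ssh uses the first matching value for each option:
# ~/.ssh/config
Host *.internal.example.com
    StrictHostKeyChecking accept-new
    UserKnownHostsFile /path/to/automation_known_hosts

Host *
    StrictHostKeyChecking ask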
For cloud environments where hosts change frequently:
#!/bin/bash
set -euo pipefail

TARGET_HOST="new-vm-$(date +%s).example.com"
TEMP_KNOWN_HOSTS="/tmp/known_hosts.$$"
trap 'rm -f "$TEMP_KNOWN_HOSTS"' EXIT

# Get fresh host keys (hashed entries)
ssh-keyscan -H "$TARGET_HOST" > "$TEMP_KNOWN_HOSTS"

# Verify the key fingerprint matches the expected value.
# ssh-keyscan prints raw keys, so compute fingerprints with ssh-keygen first.
if ssh-keygen -lf "$TEMP_KNOWN_HOSTS" | grep -q "SHA256:expected-pattern"; then
    cat "$TEMP_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    ssh user@"$TARGET_HOST" "deploy-script.sh"
else
    echo "Host key verification failed!" >&2
    exit 1
fi
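Where does the expected fingerprint come from? On cloud-init based images the host key fingerprints are written to the instance's console log at first boot, so they can be fetched out of band. A hedged sketch for AWS (the instance ID is a placeholder and the exact log markers depend on the image):
# Pull the boot console log and show the fingerprint section printed by cloud-init
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text \
    | grep -A 5 "BEGIN SSH HOST KEY FINGERPRINTS"
Beyond that, a few general recommendations: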
- Always verify keys in production (compare against CMDB or cloud metadata)
- Consider using SSH certificates instead of host keys (see the sketch after this list)
- Rotate known_hosts files periodically for ephemeral environments
- Set proper permissions: chmod 600 ~/.ssh/known_hosts
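For the certificate route referenced above, the idea is to sign each host key with a trusted CA and have clients trust the CA rather than track individual keys, which removes the first-connection prompt entirely. A minimal sketch, assuming a CA key at /secure/ca_host_key and a one-year validity (both placeholders):
# On the CA machine: sign the host's public key (once per host)
ssh-keygen -s /secure/ca_host_key -I newhost -h \
    -n newhost.example.com -V +52w /etc/ssh/ssh_host_ed25519_key.pub

# On clients: trust the CA for the whole domain via known_hosts
echo "@cert-authority *.example.com $(cat /secure/ca_host_key.pub)" >> ~/.ssh/known_hosts
The host then serves the resulting ssh_host_ed25519_key-cert.pub via the HostCertificate directive in sshd_config.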
Example Ansible playbook snippet:
- name: Ensure host is in known_hosts
  known_hosts:
    path: /etc/ssh/ssh_known_hosts
    name: "{{ inventory_hostname }}"
    key: "{{ lookup('file', host_key_file) }}"
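The host_key_file variable is simply a file containing the host's public key in known_hosts format. A hedged way to produce it and hand it to the playbook (the playbook name and paths are assumptions) is to scan the key in a prior step and pass the path with -e:
# Collect the key out of band, then pass it to the playbook
ssh-keyscan -t ed25519 newhost.example.com > /tmp/newhost.pub
ansible-playbook site.yml -e host_key_file=/tmp/newhost.pub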
For reference, the prompt that interrupts first-time connections to a freshly provisioned host looks like this:
The authenticity of host '[hostname] ([IP address])' can't be established.
RSA key fingerprint is [key fingerprint].
Are you sure you want to continue connecting (yes/no)?
This interactive prompt breaks automation workflows when connecting to newly provisioned VMs or containers with fresh hostnames/IPs.
The simplest approach, shown earlier, is to disable host key verification entirely:
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null user@host
However, this completely disables the verification checks and is not recommended for production; with UserKnownHostsFile=/dev/null no key is ever recorded, so even a changed key will never be flagged.
A better approach is to pre-populate the known_hosts file before connection attempts:
# Get host key (adjust for your key type)
ssh-keyscan -t rsa hostname >> ~/.ssh/known_hosts
# Or for multiple hosts:
ssh-keyscan -t rsa host1 host2 host3 >> ~/.ssh/known_hosts
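If your inventory already lives in a file of hostnames, one per line (hosts.txt is a placeholder), ssh-keyscan can read it directly with -f instead of listing hosts on the command line:
# hosts.txt: one hostname or address per line
ssh-keyscan -t rsa -f hosts.txt >> ~/.ssh/known_hosts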
Create a wrapper script or configuration:
#!/bin/bash
# auto_ssh.sh - cache the host key on first use, then connect
HOST="$1"

# ssh-keygen -F also matches hashed known_hosts entries, unlike a plain grep
if ! ssh-keygen -F "$HOST" > /dev/null; then
    ssh-keyscan -t rsa "$HOST" >> ~/.ssh/known_hosts 2>/dev/null
fi

ssh user@"$HOST"
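Usage is then a drop-in replacement for a plain ssh invocation:
./auto_ssh.sh newhost.example.com
Note that trusting whatever key the first scan returns is still trust-on-first-use; it removes the prompt, not the underlying risk.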
For frequent use, add to ~/.ssh/config:
Host *
    StrictHostKeyChecking accept-new
    UserKnownHostsFile ~/.ssh/known_hosts
The accept-new option (OpenSSH 7.6+) adds new hosts automatically while still protecting against changed keys.
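The same policy can also be set per invocation, which is convenient in CI jobs where you don't control ~/.ssh/config:
ssh -o StrictHostKeyChecking=accept-new user@newhost.example.com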
For production environments:
- Pre-bake known_hosts entries in your VM images (see the sketch after this list)
- Use certificate-based authentication
- Maintain a central repository of host keys
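For the pre-baking approach, a minimal sketch of an image-build step (the source path and tooling are assumptions; the same line fits a Packer shell provisioner or a Dockerfile RUN instruction) is to install a vetted known_hosts system-wide so every user on the image inherits it:
# During image build: ship a curated known_hosts for all users
install -m 644 -o root -g root files/known_hosts /etc/ssh/ssh_known_hosts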