In high-velocity OpenStack environments where engineers spin up 15+ VMs daily, recurring SSH host key verification failures become a major productivity drain. The core issue arises when:

- IP addresses are recycled across VM instances
- New VMs generate fresh host keys on first boot
- Users' `known_hosts` files retain the old fingerprints
Completely disabling host key checking (`StrictHostKeyChecking no`) creates security vulnerabilities by opening the door to man-in-the-middle (MITM) attacks. The manual alternative (`ssh-keygen -R`) becomes tedious when performed multiple times daily.
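One lightweight middle ground is a small shell wrapper that clears any stale fingerprint before you connect. This is a sketch, not a standard tool: the `refresh_host_key` name and the optional file argument are illustrative.

```shell
#!/bin/bash
# refresh_host_key HOST [KNOWN_HOSTS_FILE]
# Remove any stale entry for HOST so the next ssh connection records the
# VM's new key instead of aborting with a verification failure.
refresh_host_key() {
    local host="$1"
    local file="${2:-$HOME/.ssh/known_hosts}"
    # "|| true" ignores the error when the file does not exist yet
    ssh-keygen -R "$host" -f "$file" >/dev/null 2>&1 || true
}
```

Typical use: `refresh_host_key 192.168.100.7 && ssh -o StrictHostKeyChecking=accept-new ubuntu@192.168.100.7`.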
For internal OpenStack clouds where you control the infrastructure, consider these approaches:
1. Subnet-Based Relaxation

Because ssh uses the first value it obtains for each option, place this block before any generic `Host *` entry:

```
# ~/.ssh/config
Host 192.168.100.*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    LogLevel ERROR
```
2. Host Key Pinning via Cloud-Init

Pre-generate host keys and inject them during provisioning, so every rebuild presents the same fingerprint (note that cloud-init requires the header to be exactly `#cloud-config`, with no space):

```yaml
#cloud-config
ssh_keys:
  rsa_private: |
    -----BEGIN RSA PRIVATE KEY-----
    YOUR-KEY-HERE
    -----END RSA PRIVATE KEY-----
  rsa_public: ssh-rsa YOUR-PUBLIC-KEY
```
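Because every VM now boots with the same pinned key, clients can trust the whole subnet with a single entry: the OpenSSH `known_hosts` format accepts `*` wildcards in the hostname field. A sketch, with illustrative file names and subnet (the first `ssh-keygen` call merely stands in for the pre-generated cloud-init key):

```shell
# For demonstration only: generate a key pair standing in for the
# pre-generated host key injected via cloud-init.
ssh-keygen -q -t ed25519 -N "" -f pinned_host_key

# One wildcard line covers every VM in the subnet
# (append to ~/.ssh/known_hosts in practice):
echo "192.168.100.* $(cat pinned_host_key.pub)" >> known_hosts.openstack
```

With this in place, recycled IPs inside the subnet never trigger the changed-key warning, because the key genuinely has not changed.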
3. Dynamic known_hosts Management

A script-based solution that rebuilds `known_hosts` entries from the OpenStack server list (the deprecated `nova` client is replaced with the `openstack` CLI throughout):

```bash
#!/bin/bash
# refresh_known_hosts.sh — remove stale fingerprints for current VMs,
# then re-scan their fresh host keys.

# Extract each VM's IP from the Networks column (e.g. "private=192.168.100.7")
openstack server list -f value -c Networks | \
    grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | \
    xargs -I{} ssh-keygen -R {} 2>/dev/null

openstack server list -f value -c Networks | \
    grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | \
    xargs -I{} ssh-keyscan -H {} >> ~/.ssh/known_hosts
```
For large teams, implement a centralized solution:

- Deploy HashiCorp Vault's SSH secrets engine (OTP or certificate-authority mode)
- Issue short-lived credentials (1-8 hour validity)
- Automate credential rotation through your CI/CD pipeline

Example Vault SSH setup (one-time-password mode):

```bash
vault secrets enable ssh
vault write ssh/roles/openssh \
    key_type=otp \
    default_user=ubuntu \
    cidr_list=192.168.100.0/24
```

Clients then request a credential and connect in one step with `vault ssh -role openssh -mode otp ubuntu@<vm-ip>`.
When relaxing security controls:

- Maintain comprehensive logging (`LogLevel VERBOSE` in `sshd_config`)
- Implement network-level controls (VPN, jump hosts)
- Regularly rotate master host keys
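The logging recommendation above translates to a one-line change on the VM image (a fragment, not a complete `sshd_config`):

```
# /etc/ssh/sshd_config
# VERBOSE logs the fingerprint of the key used for each login, preserving
# an audit trail even when client-side checking is relaxed.
LogLevel VERBOSE
```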
In OpenStack environments where VMs are frequently created and destroyed (15+ instances per user daily), we face persistent SSH host key verification failures. When an IP address gets reassigned to a new VM, users encounter:
```
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
[...]
Host key verification failed.
```
While `ssh-keygen -R ipAddress` works for individual cases, it becomes untenable when:
- Users manage multiple VMs simultaneously
- IP addresses cycle rapidly in DHCP pools
- Automation scripts break due to verification prompts
Add these directives to `~/.ssh/config` for your OpenStack subnets:

```
Host 192.168.1.* 10.0.0.*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    LogLevel ERROR
```
Or implement a more granular approach using `Match` criteria (`accept-new`, available since OpenSSH 7.6, records keys for hosts it has never seen but still rejects changed keys):

```
Match host *.openstack.internal user dev-*
    StrictHostKeyChecking accept-new
    UserKnownHostsFile ~/.ssh/known_hosts.openstack
```
To mitigate risks when relaxing host key checks:
- Combine with network-level protections (VPC peering, security groups)
- Use certificate-based authentication instead of passwords
- Implement host key pinning for critical infrastructure
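The pinning bullet above can be generalized: instead of pinning individual keys, run a small SSH certificate authority and trust it once. A minimal sketch under stated assumptions: the file names, the `vm01` identity, and the subnet are illustrative, and the second key generation merely stands in for the host key that already exists on a real VM.

```shell
# Create the CA key pair (keep the private half offline).
ssh-keygen -q -t ed25519 -N "" -f host_ca

# Stand-in for the VM's existing host key.
ssh-keygen -q -t ed25519 -N "" -f ssh_host_ed25519_key

# Sign the host key: -h marks it as a host certificate,
# -n restricts it to the listed principal(s).
ssh-keygen -q -s host_ca -I vm01 -h -n 192.168.100.10 ssh_host_ed25519_key.pub

# Clients trust the CA once instead of per-host fingerprints
# (append to ~/.ssh/known_hosts in practice).
echo "@cert-authority 192.168.100.* $(cat host_ca.pub)" >> known_hosts.example
```

The VM then serves the signed certificate via the `HostCertificate` directive in its `sshd_config`; any VM presenting a valid CA-signed certificate is accepted, so recycled IPs stop triggering warnings.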
For Ansible users, add this to your inventory file:

```ini
[openstack_vms:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
```
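Equivalently, host key checking can be disabled once in `ansible.cfg` rather than per inventory group:

```ini
# ansible.cfg
[defaults]
host_key_checking = False
```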
For Terraform provisioners, pin the expected host key on the connection:

```hcl
provisioner "remote-exec" {
  # remote-exec requires inline, script, or scripts; placeholder command shown
  inline = ["echo connected"]

  connection {
    type     = "ssh"
    user     = "ubuntu"
    host     = self.access_ip_v4
    host_key = tls_private_key.vm_key.public_key_openssh
  }
}
```
Create a script to maintain a dedicated, regularly rebuilt known_hosts file (pair it with `UserKnownHostsFile ~/.ssh/known_hosts.openstack` in your ssh config):

```bash
#!/bin/bash
SSH_DIR="$HOME/.ssh"
TEMP_HOSTS=$(mktemp)

# Get current OpenStack VM IPs and scan their host keys
openstack server list -f value -c Networks | \
    awk -F'=' '{print $2}' | \
    xargs -I{} ssh-keyscan -H {} >> "$TEMP_HOSTS"

# Atomically replace the dedicated known_hosts file
mv "$TEMP_HOSTS" "$SSH_DIR/known_hosts.openstack"
chmod 600 "$SSH_DIR/known_hosts.openstack"
```
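The rebuild script above is a natural fit for a user crontab; the interval and install path are illustrative:

```
# Refresh the OpenStack known_hosts file every 15 minutes
*/15 * * * * $HOME/bin/refresh_known_hosts.sh
```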