When attempting to SSH into your AWS EC2 instance, you may encounter the dreaded "Host key verification failed" error. This typically happens when the host key stored in your ~/.ssh/known_hosts file doesn't match the key the server is presenting.
In AWS EC2 scenarios, this commonly occurs when:
1. The instance was stopped and started (without an Elastic IP, its public IP changes, so the old known_hosts entry no longer matches)
2. You're connecting to a different instance at the same IP
3. The Elastic IP was reassigned to another instance
4. The instance was rebuilt from an AMI
The fastest way to resolve this is to remove the conflicting entry from your known_hosts file:
ssh-keygen -f "/home/ubuntu/.ssh/known_hosts" -R "46.137.253.231"
For more reliable SSH connections to AWS instances:
# Use the instance's public DNS instead of IP
ssh -i ~/.ssh/your-key.pem ubuntu@ec2-xx-xx-xx-xx.ap-southeast-1.compute.amazonaws.com
# Or configure SSH config for persistent connections
Host aws-sg-rails
    HostName ec2-xx-xx-xx-xx.ap-southeast-1.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/your-key.pem
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
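With that block saved in ~/.ssh/config, connecting is just a matter of using the alias:
# Connect using the alias defined above
ssh aws-sg-rails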
While the quick fix works, these approaches have security implications. For production environments:
# Verify the instance fingerprint first:
ssh-keyscan -t rsa ec2-xx-xx-xx-xx.ap-southeast-1.compute.amazonaws.com
# Then manually add to known_hosts:
echo "ec2-xx-xx-xx-xx... ssh-rsa AAAAB3Nza..." >> ~/.ssh/known_hosts
For CI/CD pipelines or frequent deployments, consider this bash script:
#!/bin/bash
IP="46.137.253.231"
TMP_FILE=$(mktemp)
# Remove old entry
ssh-keygen -R "$IP" -f ~/.ssh/known_hosts
# Get the new host key (hashed)
ssh-keyscan -H "$IP" > "$TMP_FILE"
# Verify the fingerprint (manual step recommended; in unattended runs,
# compare against a fingerprint you recorded ahead of time instead)
cat "$TMP_FILE"
read -rp "Trust this key and add it to known_hosts? [y/N] " ANSWER
# Only append once verified
if [ "$ANSWER" = "y" ]; then
  cat "$TMP_FILE" >> ~/.ssh/known_hosts
fi
rm "$TMP_FILE"
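Raw keyscan output is awkward to compare by eye; if you add the line below just before the verification step, ssh-keygen prints the same keys as short fingerprints, which are easier to check against the console output or a record you kept earlier:
# Print human-readable fingerprints of the keys just fetched
ssh-keygen -lf "$TMP_FILE"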
When you see the "Host key verification failed" error during SSH connection attempts to your EC2 instance, it typically means one of these scenarios:
1. The instance's host key has genuinely changed (common when reinstalling the OS)
2. You're connecting to a different machine than expected (possible MITM attack)
3. The known_hosts file contains outdated or incorrect entries
The error message conveniently gives you the exact command to fix this:
ssh-keygen -f "/home/ubuntu/.ssh/known_hosts" -R 46.137.253.231
This removes the old host key from your known_hosts file. After running this, your next SSH attempt will prompt you to verify and accept the new host key.
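That prompt looks roughly like this (the key type and exact wording depend on your server and OpenSSH version); compare the fingerprint against a trusted record before typing yes:
The authenticity of host 'ec2-xx-xx-xx-xx... (46.137.253.231)' can't be established.
ED25519 key fingerprint is SHA256:...
Are you sure you want to continue connecting (yes/no/[fingerprint])?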
In AWS environments, this frequently occurs when:
- You terminate and recreate an instance with the same Elastic IP
- The instance is relaunched from a new or updated AMI, so fresh host keys are generated on first boot
- You restore from a snapshot to a different instance
If the basic solution doesn't work, try these:
# Verify connectivity to the instance
nc -zv 46.137.253.231 22
# Check instance status in AWS CLI
aws ec2 describe-instances --instance-ids i-1234567890abcdef0 --region ap-southeast-1
# Alternative SSH command with strict checking disabled
ssh -o StrictHostKeyChecking=no -i st.pem ubuntu@46.137.253.231
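If the IP itself is in doubt (for example after a stop/start), it's worth confirming the instance's current public address first; the instance ID is the same placeholder used above and the --query path is just one way to slice the output:
# Confirm the instance's current public IP before retrying
aws ec2 describe-instances --instance-ids i-1234567890abcdef0 --region ap-southeast-1 \
  --query 'Reservations[0].Instances[0].PublicIpAddress' --output text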
For automated scripts where host key verification isn't practical, you can:
# Add to your ~/.ssh/config
Host 46.137.253.231
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
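On OpenSSH 7.6 or newer, accept-new is a middle ground: it automatically accepts keys from hosts you've never connected to, but still refuses keys that have changed:
Host 46.137.253.231
    StrictHostKeyChecking accept-new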
Or for better security, pre-populate known hosts:
# First get the new host key
ssh-keyscan 46.137.253.231 >> ~/.ssh/known_hosts
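Keep in mind that known_hosts entries are matched against whatever name or address you pass to ssh, so if you sometimes connect by DNS name and sometimes by IP, scan both (ssh-keyscan accepts multiple hosts in one call):
# Record entries for both the DNS name and the IP in one pass
ssh-keyscan ec2-xx-xx-xx-xx.ap-southeast-1.compute.amazonaws.com 46.137.253.231 >> ~/.ssh/known_hosts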
If you still can't connect after these steps, check the following (a couple of AWS CLI commands for this are sketched after the list):
- Verify the security group allows inbound SSH (port 22)
- Confirm the instance has a public IP or proper Elastic IP association
- Check the instance's system logs in AWS console
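The first two checks can be done from the AWS CLI; the instance ID is the same placeholder used above, and the --query path is just one convenient slice of the output:
# Reachability/status checks reported by AWS
aws ec2 describe-instance-status --instance-ids i-1234567890abcdef0 --region ap-southeast-1
# Which security groups are attached (then confirm they allow inbound TCP 22)
aws ec2 describe-instances --instance-ids i-1234567890abcdef0 --region ap-southeast-1 \
  --query 'Reservations[0].Instances[0].SecurityGroups'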