We've all been there - that moment when you realize you've just locked yourself out of a production server. In this case, it happened during a routine security hardening procedure on an Ubuntu 8.10 EC2 instance: disabling root SSH access before setting up proper sudo privileges is like changing your front door lock while leaving the keys inside. The sequence was:
# The fateful commands:
sudo nano /etc/ssh/sshd_config
# Changed: PermitRootLogin no
sudo service ssh restart
# Then... the accidental terminal closure
Three critical mistakes compounded the problem:
- Root SSH access was disabled before any sudo-capable user existed
- No root password had ever been set
- The only open terminal was closed before any backup access path was in place
AWS provides several recovery options for these scenarios:
Method 1: Using EC2 Instance Connect
If set up on the instance (recent official Ubuntu AMIs ship it by default), Instance Connect doesn't so much bypass SSH restrictions as push a short-lived public key for a given OS user - and since only root login was disabled, the ubuntu user still works:
aws ec2-instance-connect send-ssh-public-key \
--instance-id i-1234567890abcdef0 \
--availability-zone us-east-1a \
--instance-os-user ubuntu \
--ssh-public-key file://~/.ssh/id_rsa.pub
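The pushed key is only valid for about 60 seconds, so connect immediately afterwards (the hostname below is a placeholder for your instance's public DNS name or IP):
ssh -i ~/.ssh/id_rsa ubuntu@ec2-203-0-113-25.compute-1.amazonaws.com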
Method 2: Stop/Start with User Data
This nuclear option rewrites the SSH configuration at boot. One caveat: cloud-init runs plain user-data scripts only on an instance's first boot, so for a stop/start recovery prefix the script so it runs on every boot:
#cloud-boothook
#!/bin/bash
sed -i 's/^PermitRootLogin no/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
systemctl restart ssh   # on Ubuntu the unit is "ssh", not "sshd"
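User data can only be changed while the instance is stopped, so the rough CLI sequence looks like this (fix-ssh.sh stands for the script above; modify-instance-attribute expects the value base64-encoded, and -w0 assumes GNU base64):
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
aws ec2 wait instance-stopped --instance-ids i-1234567890abcdef0
base64 -w0 fix-ssh.sh > fix-ssh.b64
aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 \
    --attribute userData --value file://fix-ssh.b64
aws ec2 start-instances --instance-ids i-1234567890abcdef0
Keep in mind that a stop/start moves the instance to new hardware and changes its public IP unless you're using an Elastic IP.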
Always follow this safe hardening sequence:
# 1. First create backup user
sudo adduser rescue
sudo usermod -aG sudo rescue
# 2. Test new user access
ssh rescue@your-instance
sudo -i
# 3. Only THEN disable root
sudo nano /etc/ssh/sshd_config
# PermitRootLogin no
Method 3: Session Manager (SSM)
The most elegant option if the instance has the SSM Agent installed (preinstalled on Amazon Linux and on recent official Ubuntu AMIs) and an instance role that allows Systems Manager; your local AWS CLI also needs the Session Manager plugin:
# Start an interactive shell session - no SSH involved at all
aws ssm start-session --target i-1234567890abcdef0
# Once connected, become root and fix sshd_config
sudo -i
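If you're not sure whether the agent is actually registered, a quick sanity check (this also confirms the instance's IAM role allows SSM):
aws ssm describe-instance-information \
    --filters "Key=InstanceIds,Values=i-1234567890abcdef0"
If the instance shows up in the output, start-session should work.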
Method 4: EC2 Serial Console
If the agent isn't installed, you can fall back to the EC2 serial console (available on Nitro-based instance types).
First enable serial console access in your account (one-time setup), then confirm it took effect:
aws ec2 enable-serial-console-access
aws ec2 get-serial-console-access-status
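Connecting from the CLI means pushing a one-time key to the serial console service and then SSHing to the regional serial console endpoint (region us-east-1 assumed, to match the examples above):
aws ec2-instance-connect send-serial-console-ssh-public-key \
    --instance-id i-1234567890abcdef0 \
    --ssh-public-key file://~/.ssh/id_rsa.pub
ssh -i ~/.ssh/id_rsa i-1234567890abcdef0.port0@serial-console.ec2-instance-connect.us-east-1.aws
One catch for this particular lockout: the serial console drops you at an OS login prompt, which requires a user with a password - and no root password was ever set here. You may need to combine this with the user-data method to set one first.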
Method 5: Detach and mount the root volume
If you need to modify the SSH config directly, stop the locked-out instance, detach its root EBS volume, and attach it to a healthy helper instance. Then, on the helper:
# Identify the newly attached volume's partition
lsblk
# Mount it (the device name can differ - e.g. /dev/nvme1n1p1 on Nitro instances)
sudo mkdir -p /mnt/recovery
sudo mount /dev/xvdf1 /mnt/recovery
# Edit the sshd_config
sudo nano /mnt/recovery/etc/ssh/sshd_config
# Change back to PermitRootLogin yes (or, better, prohibit-password)
Afterwards unmount, detach, reattach the volume to the original instance as its root device, and start it back up.
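For reference, the volume shuffle from the CLI looks roughly like this (the volume ID and helper instance ID are placeholders, and the root device name on reattach - /dev/sda1 here - must match what the AMI expects; check with aws ec2 describe-instances):
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
aws ec2 detach-volume --volume-id vol-0abc123def456789a
aws ec2 attach-volume --volume-id vol-0abc123def456789a \
    --instance-id i-0fedcba9876543210 --device /dev/sdf
# ...fix sshd_config on the helper as shown above, then reverse the steps:
aws ec2 detach-volume --volume-id vol-0abc123def456789a
aws ec2 attach-volume --volume-id vol-0abc123def456789a \
    --instance-id i-1234567890abcdef0 --device /dev/sda1
aws ec2 start-instances --instance-ids i-1234567890abcdef0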
Whichever method gets you back in, follow a safe hardening sequence afterwards - and always test sudo access before disabling root:
# 1. Create a backup user with full sudo
sudo adduser rescue
sudo usermod -aG sudo rescue
# 2. Give the new user your SSH key (EC2 images disable password auth by default)
sudo mkdir -p /home/rescue/.ssh
sudo cp ~/.ssh/authorized_keys /home/rescue/.ssh/
sudo chown -R rescue:rescue /home/rescue/.ssh
sudo chmod 700 /home/rescue/.ssh
sudo chmod 600 /home/rescue/.ssh/authorized_keys
# 3. Verify access from a SECOND terminal, keeping the current session open
ssh rescue@your-instance
sudo -i
# 4. Only THEN disable root
sudo nano /etc/ssh/sshd_config
# PermitRootLogin no
sudo sshd -t    # validate the config before restarting
sudo service ssh restart
The key lesson? Always maintain multiple access paths when modifying authentication systems - cloud instances require different recovery strategies than physical servers.