Any internet-facing SSH server will experience constant brute force attempts. My monitoring data shows 300-500 daily login attempts across multiple servers, with patterns suggesting botnet activity. This isn't personal targeting - it's automated scanning of all reachable IPs.
1. Key-Based Authentication Only
Disable password authentication completely in /etc/ssh/sshd_config
:
PasswordAuthentication no
PubkeyAuthentication yes
2. Implement Fail2Ban
Automatically block IPs after repeated failures. Sample jail.local configuration:
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
bantime = 1h
Port Knocking (sequence-based port opening):
# Example using knockd
[options]
logfile = /var/log/knockd.log
[openSSH]
sequence = 7000,8000,9000
seq_timeout = 10
command = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
GeoIP Filtering (iptables example):
# Block all except US and Canada
iptables -A INPUT -p tcp --dport 22 -m geoip ! --src-cc US,CA -j DROP
Configure real-time alerts for multiple failed attempts. Sample Bash script:
#!/bin/bash
tail -n 0 -f /var/log/auth.log | grep --line-buffered "Failed password" | \
while read line; do
echo "$(date) - $line" >> /var/log/ssh_attempts.log
# Add custom alert logic here
done &
While constant attempts are normal, watch for:
- Sudden spikes in volume
- Repeated attempts from the same IP over days
- Attempts targeting non-root users
These may indicate targeted attacks rather than automated scanning.
Seeing hundreds of failed SSH login attempts per day is unfortunately the new normal in server administration. Automated bots constantly scan the internet for exposed SSH ports (default TCP/22) and attempt common username/password combinations. From my experience managing 50+ production servers, I typically see 300-800 brute force attempts daily on each machine.
# Sample auth.log entries showing brute force patterns
May 15 03:12:42 server1 sshd[1234]: Failed password for root from 185.143.223.61 port 48234 ssh2
May 15 03:12:45 server1 sshd[1235]: Failed password for root from 185.143.223.61 port 48234 ssh2
May 15 03:12:48 server1 sshd[1236]: Failed password for admin from 185.143.223.61 port 48234 ssh2
While strong passwords help, each attempt consumes server resources. During peak attack periods, I've seen servers spend 15% CPU time just processing authentication requests. More critically, one successful breach through a weak password can compromise your entire infrastructure.
1. Port Knocking (Advanced Technique)
This hides your SSH port until a predefined sequence of connection attempts unlocks it:
# Example knockd configuration (/etc/knockd.conf)
[options]
UseSyslog
[openSSH]
sequence = 7000,8000,9000
seq_timeout = 5
command = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
tcpflags = syn
[closeSSH]
sequence = 9000,8000,7000
seq_timeout = 5
command = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
tcpflags = syn
2. Fail2Ban Implementation
The most effective solution I've deployed across all servers:
# Install on Debian/Ubuntu
sudo apt install fail2ban
# Custom SSH jail configuration (/etc/fail2ban/jail.local)
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
findtime = 600
bantime = 86400
ignoreip = 127.0.0.1/8 ::1
3. SSH Hardening
Essential sshd_config modifications:
# /etc/ssh/sshd_config
Port 22222 # Change from default 22
Protocol 2
PermitRootLogin no
MaxAuthTries 3
LoginGraceTime 60
PasswordAuthentication no # Use keys only
AllowUsers specific_username
Set up automated monitoring for suspicious activity:
# Simple alert script checking auth.log
#!/bin/bash
ATTEMPTS=$(grep "Failed password" /var/log/auth.log | wc -l)
if [ $ATTEMPTS -gt 50 ]; then
echo "High SSH attack volume detected: $ATTEMPTS attempts" | mail -s "SSH Alert" admin@example.com
fi
For enterprise environments, consider integrating with SIEM solutions like Wazuh or OSSEC for real-time threat detection.