When a supposedly idle Ubuntu 12.04 server suddenly consumes 24GB of outgoing bandwidth (12x its daily limit), we're clearly dealing with either misconfiguration or malicious activity. Here's how I approached troubleshooting:
First, let's check real-time traffic with iftop (install if needed):
sudo apt-get install iftop
sudo iftop -nNP
This shows live connections with ports and IPs. Look for unusual destinations or spikes.
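If you need a record rather than a live screen, iftop also has a text mode that prints a single snapshot and exits (a minimal sketch; the interface name and output path are assumptions, adjust them to your setup):
# One 30-second text-mode snapshot, saved for later review
sudo iftop -i eth0 -nNP -t -s 30 > /tmp/iftop_snapshot.txt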
For historical data, vnstat is invaluable:
sudo apt-get install vnstat
vnstat -d # daily summary
vnstat -h # hourly breakdown
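Once the database has been collecting for a while, vnStat's top-10 view is a quick way to spot which day the spike actually happened (available as -t/--top10 in the vnStat 1.x that 12.04 ships):
vnstat -t # top 10 traffic days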
When basic tools don't reveal the culprit, tcpdump becomes essential:
sudo tcpdump -i eth0 -w traffic.pcap -s 0
# Analyze later with:
tcpdump -r traffic.pcap -n | awk '{print $3}' | sort | uniq -c | sort -n
The capture stores full packets; the awk pipeline then counts how often each source address/port appears, so the busiest endpoints end up at the bottom of the sorted output.
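Since the problem here is outbound traffic, a variant that tallies destination addresses is often more telling. This is a sketch that assumes IPv4 and tcpdump's default one-line output, where field 5 is the destination host.port:
# Count the 20 busiest destination IPs in the capture
tcpdump -nn -r traffic.pcap | awk '{print $5}' | cut -d. -f1-4 | sort | uniq -c | sort -rn | head -20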
Sometimes the issue stems from a specific process:
sudo apt-get install nethogs
sudo nethogs eth0
This shows bandwidth per process in real-time.
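If your nethogs build supports it, trace mode (-t) prints the same per-process figures as plain text, which is handy for logging over time (the output path is just an example):
# Log per-process bandwidth continuously until interrupted
sudo nethogs -t eth0 > /var/log/nethogs_trace.log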
In my experience with Ubuntu servers, frequent bandwidth hogs include:
1. Unattended Updates
Check for automatic updates running amok:
ps aux | grep -i 'apt\|update'
sudo cat /var/log/apt/history.log
2. Backup Processes
Verify cron jobs and backup configurations:
crontab -l
ls -la /etc/cron*
3. Compromised Server
For potential malware or unauthorized access:
sudo netstat -tulnp
sudo lsof -i :22 # check SSH connections
sudo last # check login history
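For the compromised-server case, two useful follow-ups are checking what binary an unfamiliar listener actually runs and verifying packaged files against their recorded checksums. A sketch, assuming you take the PID from the netstat output above and install debsums first:
# Show the on-disk binary behind a suspicious PID (<PID> is a placeholder)
sudo ls -l /proc/<PID>/exe
# Verify installed package files against their recorded checksums
sudo apt-get install debsums
sudo debsums -s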
To prevent future surprises, implement this basic monitoring script:
#!/bin/bash
# Daily outbound-traffic check. Relies on vnStat 1.x (the version 12.04 ships):
# in its --dumpdb output, "d;0;..." is today's record and field 5 is TX in MiB.
DATE=$(date +%Y-%m-%d)
LIMIT_MIB=2048 # 2GB daily limit, expressed in MiB
TX_MIB=$(vnstat -i eth0 --dumpdb | awk -F';' '/^d;0;/ {print $5}')
TX_MIB=${TX_MIB:-0} # no data yet -> treat as zero
if [ "$TX_MIB" -gt "$LIMIT_MIB" ]; then
    echo "Bandwidth alert! Sent ${TX_MIB} MiB on $DATE" | mail -s "Bandwidth Warning" admin@example.com
    # Grab a 10-second text-mode iftop snapshot for later inspection
    iftop -nNP -t -s 10 > /var/log/bandwidth_alert_$DATE.log 2>&1
fi
Make the script executable and add it to root's crontab for a daily check.
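A minimal crontab entry could look like this; the path is hypothetical, so point it at wherever you save the script:
# Run the bandwidth check every night at 23:55
55 23 * * * /usr/local/bin/bandwidth-check.sh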
For long-term bandwidth management:
- Implement QoS rules using tc
- Set up proper monitoring (Nagios, Zabbix, or Prometheus)
- Consider setting up bandwidth limits per service (a tc sketch follows this list)
- Keep the system updated (Ubuntu 12.04 is ancient!)
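As a sketch of per-service limiting with tc (the 512kbit cap, port 80 match, and interface are all assumptions for illustration): outbound traffic to port 80 is steered into a throttled HTB class while everything else uses a wide-open default class.
# Root HTB qdisc; unmatched traffic goes to class 1:20
sudo tc qdisc add dev eth0 root handle 1: htb default 20
# Throttled class for the targeted service, plus an unrestricted default class
sudo tc class add dev eth0 parent 1: classid 1:10 htb rate 512kbit
sudo tc class add dev eth0 parent 1: classid 1:20 htb rate 100mbit
# Match outbound packets with destination port 80 and send them to the throttled class
sudo tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 80 0xffff flowid 1:10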
I recently faced a head-scratcher where an Ubuntu 12.04 server had supposedly pushed 24GB of outbound traffic despite a 2GB/day limit, and the admin could show he hadn't accessed the server during the spike. Here's how we forensically identified the culprit.
Start with basic CLI tools before diving deep:
# Real-time interface monitoring
iftop -i eth0 -P -n -B
# Historical data from syslog (only if something on the box logs interface counters there)
grep "ETH0 TX" /var/log/syslog | awk '{sum+=$5} END {print sum/1024/1024 " MB"}'
# Per-process network usage (requires nethogs install)
sudo nethogs eth0
When standard tools don't reveal enough:
# Capture 1000 packets with full payload (-s 0) for later inspection
sudo tcpdump -i eth0 -w traffic.pcap -c 1000 -s 0
# Filter by destination IP after capture
tcpdump -r traffic.pcap -nn 'dst host 192.168.1.100'
# Export HTTP requests specifically
tshark -r traffic.pcap -Y "http.request" -T fields -e http.host -e http.request.uri
In our scenario, tcpdump revealed:
18:32:45.123 IP server.example.com.3421 > pool.xmr.5555: UDP, length 1432
18:32:45.456 IP server.example.com.3421 > pool.xmr.5555: UDP, length 1432
[repeating every 5 seconds]
Combined with ps auxf, we found a compromised cron job running:
/usr/bin/.sshd -o pool.monero.hashvault.pro:5555 -u wallet...
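A few follow-up commands can show how the miner gets launched and preserve the binary for analysis. A sketch; <PID> is a placeholder taken from your own ps auxf output, and the quarantine directory is just an example:
# Find which crontab (system or per-user) starts the fake .sshd
sudo grep -r "\.sshd" /etc/cron* /var/spool/cron 2>/dev/null
# Stop the process
sudo kill <PID>
# Keep the binary for forensics instead of deleting it outright
sudo mkdir -p /root/quarantine && sudo mv /usr/bin/.sshd /root/quarantine/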
While investigating, implement emergency controls:
# Install wondershaper
sudo apt-get install wondershaper
# Limit eth0 to 1Mbit/s up and down
sudo wondershaper eth0 1024 1024
# Alternative with tc
sudo tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
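Once the investigation is over, remember to remove the throttle again; for the tc variant that is simply the command below (wondershaper has its own clear option, whose exact syntax varies by version):
sudo tc qdisc del dev eth0 root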
Set up vnStat for historical tracking:
sudo apt-get install vnstat
sudo vnstat -u -i eth0
# Generate daily reports
vnstat -d -i eth0
# Live monitoring dashboard
vnstat -l -i eth0
- Audit all cron jobs:
sudo ls -la /etc/cron*
- Check for unauthorized SSH keys (a find sketch follows this list):
~/.ssh/authorized_keys
- Review which services are listening outside the loopback interface:
sudo ss -tulnp | grep -v 127.0.0.1
- Monitor outbound connections:
sudo lsof -i -P -n | grep ESTABLISHED
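For the SSH-key item above, a quick way to enumerate every authorized_keys file on the box is a find across the home directories (a sketch; adjust the depth and paths to your layout):
# List and review all authorized_keys files for root and regular users
sudo find /root /home -maxdepth 3 -name authorized_keys -exec ls -l {} \; -exec cat {} \;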