Simple Remote Linux Server CPU Monitoring: Lightweight Graphing Solutions for Developers



When you just need basic CPU usage graphs for a Linux server, enterprise solutions like Cacti, Nagios, or Zabbix feel unnecessarily complex. These tools require significant setup, database configuration, and often include features you'll never use for this simple task.

For quick CPU monitoring over a week, consider these lightweight alternatives:


# Basic CPU collection with sar (sysstat package)
sudo apt install sysstat
sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
sudo systemctl restart sysstat
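
Once sysstat has been collecting for a while, sadf (bundled with sysstat) can export the stored samples in a machine-readable form. A minimal sketch; with no data file argument sadf reads today's daily file:

# Export today's CPU samples as semicolon-separated values for graphing
sadf -d -- -u > cpu_today.csv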

Alternatively, skip sar and collect samples to a CSV with a small shell loop, then visualize them with matplotlib:


#!/bin/bash
# collect_cpu.sh
while true; do
    echo "$(date +%s),$(top -bn1 | grep "Cpu(s)" | awk '{print $2 + $4}')" >> cpu_usage.csv
    sleep 300  # 5-minute intervals
done
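
The loop has to keep running for the whole week, so start it detached from your terminal. A sketch using nohup, where collect_cpu.sh is the script above:

chmod +x collect_cpu.sh
nohup ./collect_cpu.sh >/dev/null 2>&1 &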

# Python visualization script (requires pandas and matplotlib)
import pandas as pd
import matplotlib.pyplot as plt

# Load the epoch-timestamped samples written by collect_cpu.sh
df = pd.read_csv('cpu_usage.csv', names=['timestamp', 'usage'])
df['datetime'] = pd.to_datetime(df['timestamp'], unit='s')

# Plot usage over time and save to PNG
plt.plot(df['datetime'], df['usage'])
plt.title('Weekly CPU Usage')
plt.ylabel('CPU %')
plt.xticks(rotation=45)
plt.tight_layout()
plt.savefig('cpu_usage.png')

For remote servers, use SSH to collect data:


# Remote data collection
ssh user@remote-server "top -bn1 | grep 'Cpu(s)' | awk '{print \$2 + \$4}'"
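
To build the same CSV from your workstation, you can wrap that ssh call in the collection loop. A sketch: user@remote-server is a placeholder, key-based SSH authentication is assumed so the loop runs unattended, and the CSV is written locally:

# Poll the remote host every 5 minutes and append to a local CSV
while true; do
    usage=$(ssh user@remote-server "top -bn1 | grep 'Cpu(s)' | awk '{print \$2 + \$4}'")
    echo "$(date +%s),${usage}" >> remote_cpu_usage.csv
    sleep 300
done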

For a slightly more full-featured but still simple option:


# One-line Netdata install
bash <(curl -Ss https://my-netdata.io/kickstart.sh)

Access the web interface at http://your-server:19999 for real-time graphs that persist for several days.
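
If the dashboard does not load, check that the agent is actually running. A quick sanity check, assuming a systemd host:

# Confirm the Netdata agent is up and answering on its default port
sudo systemctl status netdata
curl -s http://localhost:19999 >/dev/null && echo "Netdata responding"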

If your needs grow beyond basic CPU monitoring (alerting, historical data beyond a month, multiple servers), then tools like Prometheus + Grafana become worth the setup effort. But for a single server's weekly CPU usage, the simpler methods above will serve you better.


Many sysadmins reach for heavyweight solutions like Cacti, Nagios, or Zabbix when they just need simple CPU monitoring. These tools often require complex configurations, database backends, and extensive setup - all for what could be a simple line graph of CPU utilization over time.

For basic CPU monitoring, we can leverage tools already present on most Linux systems:

# Collect 1-second CPU samples for 60 seconds
sar -u 1 60 > cpu_usage.log

This uses sysstat's sar utility to capture CPU metrics. To make this persistent:

# Install sysstat if needed (Debian/Ubuntu)
sudo apt-get install sysstat

# Enable data collection (the ENABLED flag is Debian/Ubuntu-specific;
# on RHEL/CentOS the service just needs enabling)
sudo sed -i 's/^ENABLED=.*/ENABLED="true"/' /etc/default/sysstat
sudo systemctl enable sysstat
sudo systemctl start sysstat
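
Once enabled, sysstat's sa1 job samples every 10 minutes by default, and you can confirm data is accumulating with plain sar:

# With no arguments, sar -u prints today's collected CPU history
sar -u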

For graphing, we can use Python with matplotlib:

import matplotlib.pyplot as plt
import pandas as pd

# Read sar output: skip the kernel/hostname banner (pandas drops the
# blank line itself), so the "%user"/"%system" header row names the columns
df = pd.read_csv('cpu_usage.log', sep=r'\s+', skiprows=1)

# Drop the trailing "Average:" summary row that sar appends
df = df[df.iloc[:, 0] != 'Average:']

# Plot CPU usage
plt.figure(figsize=(12, 6))
plt.plot(df['%user'].astype(float), label='User CPU')
plt.plot(df['%system'].astype(float), label='System CPU')
plt.title('CPU Usage Over Time')
plt.ylabel('CPU %')
plt.legend()
plt.savefig('cpu_usage.png')

To collect data from remote servers without installing additional monitoring agents (sar itself must already be present on the remote host):

#!/bin/bash
# Remote CPU monitoring script
REMOTE_HOST="example.com"
SSH_USER="monitor"
OUTPUT_FILE="remote_cpu.log"

# Collect 5-minute samples for a week (2016 samples)
ssh ${SSH_USER}@${REMOTE_HOST} "sar -u 300 2016" > ${OUTPUT_FILE}
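
A dropped SSH connection would kill the week-long capture, so run the script detached. A sketch, where remote_cpu.sh is a hypothetical name for the script above:

nohup ./remote_cpu.sh >/dev/null 2>&1 &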

For those wanting slightly more features without complexity:

  • Netdata: Single-command install with real-time web dashboard
  • Prometheus + Node Exporter: More scalable but still simple setup
  • Glances: Python-based monitoring with web interface (see the example below)
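
As an example of how lightweight these options are, Glances can be installed from PyPI and served over HTTP in two commands. A sketch; the web UI's default port is 61208:

# Install Glances with its web UI dependencies and start the server
pip install 'glances[web]'
glances -w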

For week-long data retention, consider these approaches:

# Compress old daily files (data lives in /var/log/sysstat on Debian/Ubuntu,
# /var/log/sa on RHEL/CentOS -- adjust the path to match your distro)
find /var/log/sa/ -name "sa[0-9]*" -mtime +7 -exec gzip {} \;
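
sysstat also has its own retention setting, which may be simpler than hand-rolled compression. A sketch, assuming the Debian config path (the file is /etc/sysconfig/sysstat on RHEL/CentOS):

# Keep 28 days of daily files instead of the default (typically 7)
sudo sed -i 's/^HISTORY=.*/HISTORY=28/' /etc/sysstat/sysstat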

# Daily log rotation: save the following as /etc/logrotate.d/sysstat
/var/log/sysstat/* {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
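
Before relying on the new rule, logrotate's debug mode will show what it would do without touching any files:

# Dry-run the sysstat rotation rule
sudo logrotate -d /etc/logrotate.d/sysstat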