When working with Monit across multiple servers in academic or lab environments, the commercial M/Monit interface becomes prohibitively expensive. After evaluating heavier monitoring solutions such as Nagios and Zabbix, I found that none matched the lightweight simplicity of Monit's approach to process monitoring and recovery.
The Ruby-based monittr (GitHub: karmi/monittr) offers a basic web interface, but it lacks some critical features. A sample configuration:
# Sample monittr configuration
Monittr.configure do |config|
  config.monitrc = '/etc/monitrc'
  config.port = 3000
  config.auth = { username: 'admin', password: 'secret' }
end
Monit provides an HTTP interface that can be queried directly. Here's a Python script to aggregate status from multiple servers:
import requests
from xml.etree import ElementTree as ET

servers = [
    {'url': 'http://server1:2812', 'auth': ('admin', 'monit')},
    {'url': 'http://server2:2812', 'auth': ('admin', 'monit')}
]

def get_status(server):
    try:
        # Request the XML report; without format=xml Monit returns HTML
        response = requests.get(f"{server['url']}/_status?format=xml",
                                auth=server['auth'], timeout=5)
        response.raise_for_status()
        return ET.fromstring(response.content)
    except Exception as e:
        print(f"Error querying {server['url']}: {e}")
        return None

for server in servers:
    status = get_status(server)
    if status is not None:
        print(f"Status from {server['url']}:")
        for service in status.findall('service'):
            print(f"  {service.find('name').text}: {service.find('status').text}")
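In the XML report, the service's status element is a numeric error flag (0 generally means the service is healthy; non-zero values encode failure conditions). A small helper can translate that into something actionable; the XML fragment below is an illustrative sample in the shape Monit returns, not output from a real server:

```python
from xml.etree import ElementTree as ET

# Illustrative fragment in the shape of Monit's _status?format=xml output;
# a real report carries many more fields per service.
SAMPLE_XML = """
<monit>
  <service type="3"><name>nginx</name><status>0</status></service>
  <service type="3"><name>mysql</name><status>512</status></service>
</monit>
"""

def failing_services(xml_text):
    """Return names of services whose numeric status flag is non-zero."""
    root = ET.fromstring(xml_text)
    return [s.find('name').text
            for s in root.findall('service')
            if int(s.find('status').text) != 0]

print(failing_services(SAMPLE_XML))  # ['mysql']
```

Feeding the aggregated reports through a filter like this is the natural first step toward alerting on top of the raw dashboard.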
The Django community is developing django-monit, which shows promise for multi-server management:
# settings.py configuration
MONIT_SERVERS = [
    {
        'NAME': 'Web Server',
        'URL': 'http://webserver:2812',
        'USERNAME': 'admin',
        'PASSWORD': 'monit'
    }
]
# views.py example
from django.conf import settings
from django.shortcuts import render
from monit.collector import get_monit_status

def dashboard(request):
    statuses = []
    for server in settings.MONIT_SERVERS:
        statuses.append(get_monit_status(server))
    return render(request, 'monit/dashboard.html', {'statuses': statuses})
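One caveat with a view like this: it polls every Monit instance on every page load. A tiny time-based cache keeps the dashboard responsive without hammering the servers. This is a framework-agnostic stdlib sketch; the class and its defaults are my own, not part of django-monit:

```python
import time

class TTLCache:
    """Cache a single computed value for ttl seconds."""
    def __init__(self, ttl=30):
        self.ttl = ttl
        self._value = None
        self._stamp = 0.0  # monotonic timestamp of last refresh

    def get(self, compute):
        # Recompute only when the cached copy has expired
        if time.monotonic() - self._stamp > self.ttl:
            self._value = compute()
            self._stamp = time.monotonic()
        return self._value

# Usage sketch: wrap the expensive poll of all Monit servers
status_cache = TTLCache(ttl=30)
statuses = status_cache.get(lambda: ['poll results would go here'])
```

In the Django view above, the lambda would call get_monit_status for each configured server; concurrent page loads then share one poll per 30-second window.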
For those already using Prometheus, the monit_exporter bridges the gap:
# monit_exporter configuration
exporters:
  - name: "web_cluster"
    url: "http://web1:2812/_status"
    username: "admin"
    password: "monit"
    interval: "30s"
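If you'd rather not run a separate exporter binary, the Prometheus text exposition format is simple enough to emit yourself. A minimal sketch, stdlib only; the metric name monit_service_ok and the function are my own choices, not part of monit_exporter:

```python
def to_prometheus(instance, services):
    """Render {service_name: monit_status_code} as Prometheus exposition lines.

    monit_service_ok is 1 when the Monit status flag is 0, else 0.
    """
    lines = ['# TYPE monit_service_ok gauge']
    for name, status in sorted(services.items()):
        ok = 1 if status == 0 else 0
        lines.append(
            f'monit_service_ok{{instance="{instance}",service="{name}"}} {ok}'
        )
    return '\n'.join(lines) + '\n'

print(to_prometheus('web1:2812', {'nginx': 0, 'mysql': 512}))
```

Serve the string from any tiny HTTP handler (http.server works) and point a Prometheus scrape job at it.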
For quick visualization, integrate with existing tools:
# Grafana dashboard query example
SELECT
  $__timeEpoch(time),
  value
FROM
  monit_metrics
WHERE
  $__timeFilter(time) AND
  instance =~ /$server/ AND
  metric =~ /$service/
ORDER BY
  time
When implementing any solution, ensure proper security measures:
- Always use HTTPS for Monit's web interface
- Implement IP whitelisting for Monit ports
- Use strong, unique credentials for each server
- Consider SSH tunneling for remote access
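The SSH tunneling point deserves a concrete shape: rather than exposing port 2812 at all, forward it over SSH and query localhost. The helper below just assembles the ssh argv; the function name, the local port, and the defaults are illustrative choices of mine:

```python
import subprocess

def tunnel_command(host, user, local_port=12812, remote_port=2812):
    """Build an ssh argv forwarding local_port to Monit on the remote host.

    -N runs no remote command; -L binds local_port to the server's loopback.
    """
    return [
        'ssh', '-N',
        '-L', f'{local_port}:localhost:{remote_port}',
        f'{user}@{host}',
    ]

cmd = tunnel_command('server1.lab.edu', 'admin')
print(' '.join(cmd))
# While the tunnel is up, Monit is reachable at http://localhost:12812
# subprocess.Popen(cmd)  # uncomment to actually open the tunnel
```

With a tunnel per server, the aggregation scripts above only ever talk to localhost, and Monit's port never needs to be opened on the network.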
When working with Monit in distributed environments, the lack of centralized visualization becomes apparent. The commercial M/Monit solution provides excellent dashboards, but for academic or small-scale deployments, we need FOSS alternatives that can:
- Aggregate status from multiple Monit instances
- Present unified health dashboards
- Maintain lightweight resource usage
From my recent evaluations, here are the most promising approaches:
# Example monittr configuration (Ruby)
Monittr.configure do |config|
  config.servers = [
    { host: 'server1.lab.edu', port: 2812 },
    { host: 'server2.lab.edu', port: 2812, ssl: true }
  ]
  config.interval = 60 # seconds
end
For complete control, we can build a simple aggregator using Monit's built-in HTTP interface:
import requests
from xml.etree import ElementTree

def get_monit_status(host, port=2812, auth=('admin', 'monit')):
    url = f"http://{host}:{port}/_status?format=xml"
    response = requests.get(url, auth=auth, timeout=5)
    response.raise_for_status()
    return ElementTree.fromstring(response.content)

# Usage example:
server_status = get_monit_status('10.0.1.15')
cpu_usage = server_status.find('.//cpu/system').text
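Querying a whole lab serially gets slow once you have more than a handful of hosts; a thread pool fetches them all in parallel. The fetch function is injected as a parameter so this sketch can be exercised without live servers:

```python
from concurrent.futures import ThreadPoolExecutor

def poll_all(hosts, fetch, max_workers=8):
    """Fetch status for every host concurrently; failed hosts map to None."""
    def safe(host):
        try:
            return host, fetch(host)
        except Exception:
            return host, None

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(safe, hosts))

# Usage with the real fetcher defined above:
#   results = poll_all(['10.0.1.15', '10.0.1.16'], get_monit_status)
```

Mapping unreachable hosts to None instead of raising keeps one dead server from taking the whole dashboard down with it.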
The Django community is developing django-monit-collector (currently in alpha). Its configuration lives in settings.py:
# settings.py configuration
MONIT_COLLECTOR_SETTINGS = {
    'SERVERS': [
        {'NAME': 'Web Server', 'HOST': 'web1', 'PORT': 2812},
        {'NAME': 'DB Server', 'HOST': 'db1', 'PORT': 2812}
    ],
    'POLL_INTERVAL': 300,
    'STORAGE_BACKEND': 'django_monit_collector.backends.sqlite'
}
For minimal overhead, consider this Node.js implementation:
const monit = require('node-monit');

const servers = [
  { host: '192.168.1.10', user: 'monit', pass: 'secret' },
  { host: '192.168.1.11', user: 'monit', pass: 'secret' }
];

async function checkAll() {
  const results = await Promise.all(
    servers.map(s => monit.getStatus(s))
  );
  return results.map(r => ({
    host: r.host,
    services: r.services.filter(s => s.status !== 'Running')
  }));
}
When rolling your own solution, account for:
- Authentication security (use HTTPS where possible)
- Data retention policies
- Alert threshold configuration
- Cross-platform service naming consistency
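The data-retention point is easy to overlook until the status database balloons. A pruning sketch against sqlite; the table and column names are illustrative, not from any of the projects above:

```python
import sqlite3
import time

def prune_old_samples(conn, max_age_days=30):
    """Delete status samples older than max_age_days; returns rows removed."""
    cutoff = time.time() - max_age_days * 86400
    cur = conn.execute(
        'DELETE FROM monit_samples WHERE collected_at < ?', (cutoff,)
    )
    conn.commit()
    return cur.rowcount

# Usage sketch, run from a daily cron job:
#   conn = sqlite3.connect('/var/lib/monit-dashboard/status.db')
#   removed = prune_old_samples(conn, max_age_days=30)
```

Running this on the same schedule as the poller keeps storage bounded regardless of how many servers you add.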