Log files are treasure troves of information, silently recording every significant event in your system. When critical errors occur, manually checking logs after the fact isn't practical. We need automated monitoring that notifies us the moment specific patterns appear.
Here's a simple solution using standard Linux tools:
```bash
#!/bin/bash
# Monitor /var/log/app/error.log for "CRITICAL" entries
tail -F /var/log/app/error.log | \
while IFS= read -r LINE
do
    if echo "$LINE" | grep -q "CRITICAL"; then
        echo "$LINE" | mail -s "CRITICAL Error Detected" admin@example.com
    fi
done
```
For more robust monitoring, install swatch:
```bash
sudo apt install swatch
```
Create a config file `~/.swatchrc`:
```
watchfor /CRITICAL|FATAL/
    echo
    mail=admin@example.com,subject=Application_Alert
```
Then run it:
```bash
swatch --config-file=~/.swatchrc --tail-file=/var/log/app/error.log
```
For system-wide monitoring, configure syslog-ng:
```
filter f_critical { match("CRITICAL"); };

destination d_mail {
    program("/usr/bin/mail -s 'Critical Error' admin@example.com");
};

log { source(s_src); filter(f_critical); destination(d_mail); };
```
For more complex pattern matching, a Python script using the watchdog library can watch the log directory:
```python
import smtplib
import time

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class LogHandler(FileSystemEventHandler):
    def on_modified(self, event):
        # Note: this re-reads the whole file on every change; a production
        # version should remember the file offset and process only new lines.
        with open(event.src_path) as f:
            for line in f:
                if "CRITICAL" in line:
                    send_email(line)

def send_email(message):
    server = smtplib.SMTP('smtp.example.com', 587)
    server.starttls()
    server.login("user", "password")
    server.sendmail("alert@example.com", "admin@example.com", message)
    server.quit()

observer = Observer()
observer.schedule(LogHandler(), path='/var/log/app/')
observer.start()
try:
    while True:        # keep the main thread alive while the observer runs
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```
A production version should also handle:

- Rate limiting to prevent email floods
- Log rotation handling
- Secure email transmission
- Multiple recipient support
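Rate limiting, for instance, can be sketched in a few lines of shell. The window length and alert limit below are illustrative values, not taken from the scripts above:

```shell
#!/bin/bash
# Sketch: allow at most MAX_PER_WINDOW alerts per WINDOW_SECS window.
WINDOW_SECS=300
MAX_PER_WINDOW=3
window_start=$(date +%s)
sent_in_window=0

# should_alert: returns 0 (send) or 1 (suppress) for the current moment.
should_alert() {
    local now
    now=$(date +%s)
    if [ $((now - window_start)) -ge "$WINDOW_SECS" ]; then
        window_start=$now          # new window: reset the counter
        sent_in_window=0
    fi
    if [ "$sent_in_window" -lt "$MAX_PER_WINDOW" ]; then
        sent_in_window=$((sent_in_window + 1))
        return 0
    fi
    return 1
}

# Demo: the first three alerts pass, the fourth is suppressed.
for i in 1 2 3 4; do
    if should_alert; then
        echo "alert $i sent"
    else
        echo "alert $i suppressed"
    fi
done
```

The `mail` command in the monitoring loop would simply be guarded by `if should_alert; then ... fi`.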
When managing Linux systems, automated monitoring of log files for critical events is a common operational requirement. Many applications write errors to log files without built-in alerting mechanisms, leaving administrators to implement their own solutions.
Here's a simple yet effective approach using standard Linux tools:
```bash
#!/bin/bash
LOG_FILE="/var/log/application/error.log"
ALERT_EMAIL="admin@example.com"
SEARCH_STRING="CRITICAL ERROR"

tail -F "$LOG_FILE" | grep --line-buffered "$SEARCH_STRING" | while IFS= read -r line
do
    echo "$line" | mail -s "Alert: $SEARCH_STRING detected" "$ALERT_EMAIL"
done
```
This script requires the mailutils package for email functionality. Key components:

- `tail -F`: follows the file and keeps tracking it across log rotation
- `grep --line-buffered`: forces line-by-line output instead of block buffering
- Pipeline to `mail`: sends each matched line via email
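Before pointing the pipeline at a live log, the filter stage can be exercised on sample lines; the log content below is made up for illustration:

```shell
#!/bin/bash
# Dry-run the filter stage: feed sample log lines through the same
# grep used by the monitor, without tail or mail.
SEARCH_STRING="CRITICAL ERROR"
printf '%s\n' \
    "2024-05-01 10:00:00 INFO startup complete" \
    "2024-05-01 10:00:05 CRITICAL ERROR db connection lost" \
    "2024-05-01 10:00:09 WARN retrying" |
    grep --line-buffered "$SEARCH_STRING"
# Prints only the CRITICAL ERROR line
```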
For more sophisticated monitoring, `swatch` (Simple Log Watcher) provides additional features:
```bash
# Install swatch
sudo apt-get install swatch

# Configuration file (~/.swatchrc)
watchfor /ERROR/
    throttle 10:00
    exec /usr/bin/mail -s "Error Alert" admin@example.com
```
Benefits include:
- Pattern matching with regular expressions
- Throttling to prevent alert storms
- Multiple action types
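swatch's throttling, suppressing repeats of the same message inside a time window, can also be approximated in plain bash. This is a sketch of the idea, not swatch's actual implementation, and the 600-second window is an assumed value:

```shell
#!/bin/bash
# Approximate swatch-style throttling: alert on a message only if the
# same message has not triggered an alert within the last THROTTLE_SECS.
# Requires bash 4+ for associative arrays.
THROTTLE_SECS=600
declare -A last_sent

throttle_ok() {
    local msg=$1 now last
    now=$(date +%s)
    last=${last_sent[$msg]:-0}
    if [ $((now - last)) -ge "$THROTTLE_SECS" ]; then
        last_sent[$msg]=$now    # remember when this message last alerted
        return 0
    fi
    return 1
}

throttle_ok "disk full"  && echo "first occurrence: alert"
throttle_ok "disk full"  || echo "repeat within window: throttled"
throttle_ok "oom killer" && echo "different message: alert"
```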
For systems using systemd, create a dedicated service:
```ini
# /etc/systemd/system/log-watcher.service
[Unit]
Description=Log file watcher service

[Service]
ExecStart=/usr/local/bin/log-watcher.sh
Restart=always

[Install]
WantedBy=multi-user.target
```
For production environments, consider these enhancements:
```bash
#!/bin/bash
# Sample advanced monitoring script
# Usage: log-watcher.sh <logfile> <alert-email> <search-string>
LOG_FILE="$1"
ALERT_EMAIL="$2"
SEARCH_STRING="$3"
MAX_ALERTS=5
ALERT_COUNT=0

tail -F "$LOG_FILE" | grep --line-buffered "$SEARCH_STRING" | while IFS= read -r line
do
    if [ "$ALERT_COUNT" -lt "$MAX_ALERTS" ]; then
        echo "$(date): $line" >> /var/log/alert_history.log
        echo "$line" | mail -s "URGENT: $SEARCH_STRING" "$ALERT_EMAIL"
        ALERT_COUNT=$((ALERT_COUNT+1))
    fi
done
```
For enterprise environments, consider these alternatives:
- Logcheck (packaged for Debian/Ubuntu systems)
- Fail2ban (primarily for security logs)
- Splunk or the ELK stack (Elasticsearch, Logstash, Kibana) for comprehensive monitoring