When managing hundreds of Linux servers (RHEL/CentOS), the default Logwatch configuration generates excessive email traffic containing:
- HTTP error logs already monitored by Splunk
- Disk space metrics covered by Nagios thresholds
- Authentication attempts logged in SIEM systems
# Sample default Logwatch output snippet
-------------------------- httpd Begin --------------------------
404 Errors:
/favicon.ico: 12 Time(s)
/wp-admin.php: 8 Time(s)
--------------------------- httpd End ---------------------------
The packaged defaults live in /usr/share/logwatch/default.conf/logwatch.conf, but package updates overwrite that tree. Copy the file to /etc/logwatch/conf/logwatch.conf (which takes precedence) and set these key parameters:
# Critical settings for enterprise environments
Detail = Low # Low/Med/High; keep Low for fleet-wide mail
MailTo = admins@domain.com
Range = yesterday
Service = "-exim" # Disable specific services
Service = "-http-error" # Skip HTTP errors
Service = "-zz-network" # Disable minor network services
For granular control, create per-service override files under /etc/logwatch/conf/services/:
# /etc/logwatch/conf/services/http.conf
# Skip the routine 404 noise shown in the sample above
LogFile = http
*RemoveHeaders
*Remove = "HTTP/1\.[01]\" 404"
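The override can be previewed without waiting for cron, using only documented flags:
# Render only the http section to stdout
logwatch --service http --range yesterday --detail Low --output stdout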
Modify the daily cron job (/etc/cron.daily/0logwatch on RHEL/CentOS) so mail goes out only when the selected services actually report something:
#!/bin/bash
# Mail only when some service section actually made it into the report
TMP=$(mktemp)
/usr/sbin/logwatch --output stdout --format text --range yesterday \
    --service sshd --service pam_unix > "$TMP"
# Service sections are bracketed by "Begin"/"End" markers; no marker, no mail
if grep -q "Begin" "$TMP"; then
    mail -s "CRITICAL $(hostname) log alerts" admins@domain.com < "$TMP"
fi
rm -f "$TMP"
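Run the job once by hand to confirm it stays quiet on a clean day (run-parts only executes it if the file is executable):
chmod 755 /etc/cron.daily/0logwatch
/etc/cron.daily/0logwatch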
Create logfile-group filters in /etc/logwatch/conf/logfiles/ to prune lines before any service script sees them:
# /etc/logwatch/conf/logfiles/secure.conf
# RHEL writes auth events to /var/log/secure (auth.log is the Debian name)
LogFile = secure
*ApplyStdDate
# Keep failures; routine "session opened/closed" lines fall out of scope
*OnlyContains = "authentication failure|invalid user"
Test configurations before deployment:
# Dry-run to a file instead of sending mail
logwatch --output file --filename /tmp/logwatch.test \
         --range yesterday --service all --detail High
At fleet scale, this default verbosity turns into alert fatigue: the daily cron-driven mail (/etc/cron.daily/0logwatch) is dominated by low-value data points like:
# Default noisy sections
Memory statistics
Disk space usage
HTTP 404 errors
SSH login attempts
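Before disabling anything, it helps to see which sections actually dominate the report; every service section header contains the word "Begin":
# Rough triage: name the sections present in yesterday's report
logwatch --range yesterday --detail Med --output stdout | grep "Begin"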
Putting the earlier settings together, a consolidated main configuration for surgical filtering looks like this:
# /etc/logwatch/conf/logwatch.conf
MailTo = admin-team@example.com
MailFrom = logwatch@example.com # conf files do not expand $HOSTNAME; use a literal address
Range = yesterday
Detail = Low # raise to Med or High on critical systems
Service = "-zz-network" # Disable network stats
Service = "-zz-sys" # Disable system stats
Service = "-eximstats" # Disable email stats
Create custom service filters in /etc/logwatch/conf/services/ to focus on actionable alerts:
# /etc/logwatch/conf/services/http.conf
# Silence 404/304 noise; surface only server-side 5xx errors
*Remove = "HTTP/1\.[01]\" 404"
*Remove = "HTTP/1\.[01]\" 304"
*OnlyContains = "HTTP/1\.[01]\" 5[0-9][0-9]"
# (the *Remove lines are subsumed by *OnlyContains; shown for illustration)
# /etc/logwatch/conf/services/sshd.conf
# Drop successful logins; keep only failures
*Remove = "Accepted publickey for"
*OnlyContains = "Failed password for|authentication failure"
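Both overrides can be exercised in isolation before the next cron run:
# Preview only the filtered services
logwatch --service http --service sshd --range yesterday --output stdout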
Implement threshold-based reporting with a custom service script (Logwatch feeds the matching log lines to the script on stdin):
#!/bin/bash
# /etc/logwatch/scripts/services/custom-ssh
# Reads the secure log's lines from stdin; flag source IPs with >5 failures
# (lines end "... from <ip> port <n> ssh2", so the IP is field NF-3)
grep "Failed password" | \
awk '{count[$(NF-3)]++}
     END {for (ip in count) if (count[ip] > 5)
              print "SSH brute force:", ip, count[ip]}'
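The script only runs if Logwatch knows which logfile group feeds it and the file is executable; a minimal companion sketch, assuming the stock "secure" logfile group and the hypothetical custom-ssh name above:
# /etc/logwatch/conf/services/custom-ssh.conf
Title = "SSH brute-force threshold"
LogFile = secure

chmod 755 /etc/logwatch/scripts/services/custom-ssh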
For multi-server environments, deploy a consolidated reporting solution:
#!/bin/bash
# /usr/local/bin/aggregate-logwatch
# Pull each host's report over SSH into one digest, then mail it
REPORT=/var/log/cluster-report.log
: > "$REPORT"
while read -r host; do
    echo "===== $host =====" >> "$REPORT"
    ssh "$host" "/usr/sbin/logwatch --output stdout --format text --range yesterday" >> "$REPORT"
done < /etc/cluster-hosts
mail -s "Cluster Logwatch digest $(date +%F)" admins@domain.com < "$REPORT"
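Schedule the aggregator after the per-host cron.daily jobs have had time to finish; a sample root crontab entry (the time is arbitrary):
# m  h  dom mon dow  command
30 6 * * * /usr/local/bin/aggregate-logwatch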
Finally, pipe critical findings into existing alert systems. Logwatch has no built-in alert-hook directive, so post-process its stdout output; nagios-notify here stands for whatever site-local notifier you already have:
# Forward the postfix report to Nagios only when it has 3+ lines of findings
REPORT=$(/usr/sbin/logwatch --service postfix --output stdout --range yesterday)
[ "$(printf '%s\n' "$REPORT" | grep -c .)" -ge 3 ] && \
    printf '%s\n' "$REPORT" | /usr/local/bin/nagios-notify --service=postfix-alerts
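If no nagios-notify wrapper exists at your site, an NSCA passive check is one conventional substitute; a sketch assuming the send_nsca client is installed and an NSCA daemon listens at nagios.example.com:
# Fields: host, service, state (1 = WARNING), message; tab-separated
printf "%s\t%s\t%s\t%s\n" "$(hostname)" "postfix-alerts" 1 "logwatch reported findings" | \
    send_nsca -H nagios.example.com -c /etc/nagios/send_nsca.cfg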