Automated Website Uptime Monitoring Script with Email/SMS Alerts for Linux Servers



As a solo developer managing multiple client websites on a CentOS VPS, I know how quickly a service interruption turns into a nightmare. Just last week I discovered my Apache httpd service had silently failed - not the ideal way to start your day when clients begin calling about broken websites.

While comprehensive monitoring solutions like Nagios or Zabbix exist, they are often overkill for a small-scale operation. What we really need is a lightweight, cron-based solution that:

  • Tests website availability
  • Provides rapid notification
  • Requires minimal maintenance

Here's a robust monitoring script I've implemented that checks website availability and sends alerts:

#!/bin/bash

# Configuration
URL="https://yourdomain.com"
EMAIL="admin@yourdomain.com"
TIMEOUT=30
LOG_FILE="/var/log/website_monitor.log"

# Check website
# Note: --head sends a HEAD request; a few applications only answer GET,
# so drop --head if you see false alarms
if curl --output /dev/null --silent --head --fail --max-time "$TIMEOUT" "$URL"; then
    echo "$(date) - $URL is UP" >> "$LOG_FILE"
else
    echo "$(date) - $URL is DOWN" >> "$LOG_FILE"
    # Send email alert
    echo "$URL is not responding after $TIMEOUT seconds" | mail -s "URGENT: Website Down Alert" "$EMAIL"
    # Optional SMS via the Twilio API
    # curl -X POST "https://api.twilio.com/2010-04-01/Accounts/YOUR_ACCOUNT_SID/Messages.json" \
    # --data-urlencode "Body=Website $URL is down" \
    # --data-urlencode "From=+1234567890" \
    # --data-urlencode "To=+0987654321" \
    # -u YOUR_ACCOUNT_SID:YOUR_AUTH_TOKEN
fi

To run this script every 5 minutes:

# Edit crontab
crontab -e

# Add this line
*/5 * * * * /path/to/website_monitor.sh
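
Make the script executable before relying on the cron entry, or the job will fail with little to show for it:

chmod +x /path/to/website_monitor.sh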

For monitoring multiple client sites:

#!/bin/bash

# Array of URLs to monitor
URLS=("https://client1.com" "https://client2.com" "https://client3.com")
EMAIL="admin@yourdomain.com"
TIMEOUT=30
LOG_FILE="/var/log/multi_site_monitor.log"

for URL in "${URLS[@]}"; do
    if ! curl --output /dev/null --silent --head --fail --max-time "$TIMEOUT" "$URL"; then
        echo "$(date) - $URL is DOWN" >> "$LOG_FILE"
        echo "$URL is not responding after $TIMEOUT seconds" | mail -s "URGENT: $URL Down" "$EMAIL"
    fi
done
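
If the client list changes often, the URLs can live in a plain file instead of the script itself. A small sketch, assuming a hypothetical /etc/monitor/urls.txt with one URL per line (mapfile requires bash 4+, which CentOS 7 ships):

# Read monitored URLs from a file instead of hardcoding the array
mapfile -t URLS < /etc/monitor/urls.txt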

For automatic recovery attempts:

# Add this inside the failure branch, after the curl check fails
# (requires root, or sudo rights for systemctl)
systemctl restart httpd.service
echo "$(date) - Attempted httpd restart" >> "$LOG_FILE"

When implementing monitoring scripts:

  • Store sensitive credentials in separate config files with restricted permissions (see the sketch after this list)
  • Implement rate limiting on alerts to prevent notification storms
  • Consider using a dedicated monitoring user with limited privileges
  • Regularly review log files to prevent disk space issues
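
For the credentials point, one common pattern (the file path and variable names here are just an example) is to keep settings out of the script and source them at runtime:

# /etc/website-monitor.conf -- lock it down: chmod 600, owned by the monitoring user
EMAIL="admin@yourdomain.com"
TWILIO_ACCOUNT_SID="YOUR_ACCOUNT_SID"
TWILIO_AUTH_TOKEN="YOUR_AUTH_TOKEN"

Then replace the inline configuration block in the script with:

# Load settings from the protected config file
. /etc/website-monitor.conf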

I've felt the same pain with unexpected httpd crashes on my own CentOS VPS; waking up to client calls about downtime is never pleasant. Here's a fuller version of the same lightweight shell-script approach, this time adding SMS alerts and running from a separate machine so the monitor survives when the server itself goes down.

We need a solution that:

  • Periodically checks website availability
  • Detects failures within a reasonable timeout (30 seconds)
  • Sends alerts via both email and SMS
  • Runs as a cron job from a stable local machine

Here's the complete monitoring script I've been using successfully:

#!/bin/bash

# Configuration
URL="https://yourdomain.com"
TIMEOUT=30
EMAIL="your@email.com"
# Mail-to-SMS gateway address, e.g. 5551234567@vtext.com for Verizon
# (check your carrier's gateway domain)
SMS="your_phone@carrier.sms.gateway"
LOG="/var/log/webmonitor.log"

# Check function with timeout
check_site() {
    # curl prints the HTTP status code; on connection failure or timeout it prints 000
    response=$(curl -s -o /dev/null -w "%{http_code}" --max-time "$TIMEOUT" "$URL")

    if [ $? -ne 0 ] || [ "$response" != "200" ]; then
        echo "[$(date)] Website DOWN! HTTP code: $response" >> "$LOG"
        send_alert
        # Attempt automatic restart on the remote server (uncomment if desired)
        # ssh user@server "sudo systemctl restart httpd"
        return 1
    else
        echo "[$(date)] Website UP" >> "$LOG"
        return 0
    fi
}

# Alert function
send_alert() {
    # Email alert
    echo "Website $URL is not responding!" | mail -s "URGENT: Website Down Alert" "$EMAIL"

    # SMS alert (requires mail-to-SMS gateway)
    echo "Website $URL is down! Check server immediately." | mail -s "Website Alert" "$SMS"
}

# Main execution
check_site
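
One caveat: run from cron every five minutes, this will keep re-alerting for as long as the site stays down. A minimal throttle sketch, assuming a marker file under /tmp, suppresses repeats within an hour; call send_alert_throttled instead of send_alert, and rm -f the marker in the UP branch so alerts re-arm after recovery:

THROTTLE_FILE="/tmp/webmonitor_alerted"
THROTTLE_SECONDS=3600

send_alert_throttled() {
    # Skip if an alert already went out within the throttle window
    if [ -f "$THROTTLE_FILE" ]; then
        local age=$(( $(date +%s) - $(stat -c %Y "$THROTTLE_FILE") ))
        [ "$age" -lt "$THROTTLE_SECONDS" ] && return 0
    fi
    touch "$THROTTLE_FILE"
    send_alert
}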

Prerequisites:

  • curl installed on monitoring machine
  • mailutils (Debian/Ubuntu) or mailx (CentOS/RHEL) providing the mail command
  • Properly configured mail server or SMTP relay
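
A quick sanity check of the mail prerequisite before trusting the alerts (on CentOS, look in /var/log/maillog if nothing arrives):

echo "Monitor test body" | mail -s "Monitor test" admin@yourdomain.com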

Cron Job Configuration:

# Run every 5 minutes
*/5 * * * * /path/to/website_monitor.sh

For those wanting more robust monitoring:

#!/bin/bash

# Enhanced version with multiple URL checks and retries
URLS=("https://site1.com" "https://site2.com")
EMAIL="admin@yourdomain.com"
TIMEOUT=30
MAX_RETRIES=2
RETRY_DELAY=10
LOG="/var/log/webmonitor.log"

check_url() {
    local url=$1
    local response
    for ((i = 1; i <= MAX_RETRIES; i++)); do
        response=$(curl -s -o /dev/null -w "%{http_code}" --max-time "$TIMEOUT" "$url")
        if [ $? -eq 0 ] && [ "$response" == "200" ]; then
            return 0
        fi
        sleep "$RETRY_DELAY"
    done
    return 1
}

send_alert() {
    echo "Website $1 is not responding!" | mail -s "URGENT: $1 Down" "$EMAIL"
}

for url in "${URLS[@]}"; do
    if ! check_url "$url"; then
        echo "[$(date)] $url failed after $MAX_RETRIES attempts" >> "$LOG"
        send_alert "$url"
    fi
done

For developers preferring modern notification channels:

# Slack webhook integration
SLACK_WEBHOOK="https://hooks.slack.com/services/..."

send_slack_alert() {
    message="{\"text\":\"? Website $1 is down!\"}"
    curl -X POST -H 'Content-type: application/json' --data "$message" $SLACK_WEBHOOK
}

# Telegram bot integration
TELEGRAM_BOT_TOKEN="your_token"
TELEGRAM_CHAT_ID="your_chat_id"

send_telegram_alert() {
    message="Website $1 is down!"
    curl -s -X POST "https://api.telegram.org/bot$TELEGRAM_BOT_TOKEN/sendMessage" \
        -d "chat_id=$TELEGRAM_CHAT_ID" -d "text=$message"
}
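
Either function drops straight into the enhanced script's failure branch, alongside or in place of the mail-based send_alert:

if ! check_url "$url"; then
    send_slack_alert "$url"
    send_telegram_alert "$url"
fi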

When deploying this in production:

  • Rotate logs to prevent disk space issues
  • Implement alert throttling to prevent notification storms
  • Monitor the monitor (ensure the cron job keeps running)
  • Consider adding SSL certificate expiration checks (see the sketch below)
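
For the last point, here's a small sketch using openssl (it assumes the openssl client is installed and EMAIL is defined as above); x509 -checkend returns non-zero if the certificate expires within the given number of seconds:

# Warn when a certificate expires within 14 days (1209600 seconds)
check_cert() {
    local host=$1
    if ! echo | openssl s_client -servername "$host" -connect "$host:443" 2>/dev/null \
        | openssl x509 -noout -checkend 1209600 >/dev/null; then
        echo "Certificate for $host expires within 14 days" | mail -s "Cert expiry: $host" "$EMAIL"
    fi
}

check_cert "client1.com"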