When monitoring public HTTPS websites through bash scripts, we need solutions that handle:
- Certificate validation
- HTTP status code analysis
- Connection timeout handling
- Header inspection capabilities
Here's a production-ready script example with comprehensive error handling:
```bash
#!/bin/bash

check_https() {
    local url=$1
    local timeout=${2:-10}
    local response

    # -sSLI: silent, show errors, follow redirects, HEAD request only
    # --fail: exit non-zero on HTTP errors (>= 400)
    if response=$(curl -sSLI \
        --connect-timeout "$timeout" \
        --max-time "$timeout" \
        --fail \
        -w "%{http_code}" \
        -o /dev/null \
        "$url" 2>&1); then
        echo "SUCCESS: $url is accessible (HTTP $response)"
        return 0
    else
        echo "ERROR: $url failed - $response"
        return 1
    fi
}

# Usage example:
check_https "https://example.com" 5
```
| Flag | Purpose |
|---|---|
| `-sSLI` | Silent (`-s`), show errors (`-S`), follow redirects (`-L`), HEAD request / headers only (`-I`) |
| `--connect-timeout` | Connection establishment timeout (seconds) |
| `--max-time` | Maximum total operation time (seconds) |
| `--fail` | Exit non-zero (code 22) on HTTP errors >= 400 |
| `-w "%{http_code}"` | Print the response status code |
| `-o /dev/null` | Discard the response body |
### Using wget
```bash
# --spider: check availability without downloading the body
# note: --no-check-certificate skips TLS validation; drop it if certificate
# errors should fail the check
wget --spider --timeout=10 --tries=2 --no-check-certificate https://example.com 2>&1
```
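Grepping wget's human-readable output is brittle; a check can key off wget's documented exit codes instead (0 = success, 4 = network failure, 5 = SSL verification failure, 8 = server returned an error response). A minimal sketch:

```bash
if wget --spider --quiet --timeout=10 --tries=2 https://example.com; then
    echo "UP"
else
    echo "DOWN (wget exit code $?)"  # e.g. 8 means the server answered with an error status
fi
```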
### Using httpie (Modern Alternative)
```bash
# --check-status: exit non-zero on 4xx/5xx instead of always exiting 0
http --check-status --timeout=5 HEAD https://example.com
```
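With `--check-status`, httpie maps failures to distinct documented exit codes (2 = request timed out, 4 = 4xx response, 5 = 5xx response), so a script can report why a check failed. A minimal sketch, assuming a recent httpie with `--quiet`:

```bash
http --check-status --timeout=5 --quiet HEAD https://example.com
case $? in
    0) echo "UP" ;;
    2) echo "DOWN: request timed out" ;;
    4) echo "DOWN: client error (4xx)" ;;
    5) echo "DOWN: server error (5xx)" ;;
    *) echo "DOWN: other failure" ;;
esac
```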
### Using openssl for Certificate Verification
```bash
# prints the certificate's notBefore/notAfter validity window
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -dates
```
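The same pipeline can drive an expiry warning. A minimal sketch, assuming GNU date for the `-d` parsing; the 14-day threshold is an arbitrary example:

```bash
expiry=$(echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
    | openssl x509 -noout -enddate | cut -d= -f2)
expiry_epoch=$(date -d "$expiry" +%s)
days_left=$(( (expiry_epoch - $(date +%s)) / 86400 ))
(( days_left < 14 )) && echo "WARNING: certificate expires in $days_left days"
```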
This extended version of the curl check adds email alerts and logging:
```bash
#!/bin/bash

LOG_FILE="/var/log/website_monitor.log"
ALERT_EMAIL="admin@example.com"
SITES=("https://example.com" "https://api.example.com")

for site in "${SITES[@]}"; do
    if ! curl -sSLI --connect-timeout 10 --max-time 15 --fail "$site" >/dev/null 2>&1; then
        echo "[$(date)] CRITICAL: $site is down" >> "$LOG_FILE"
        mail -s "ALERT: $site is down" "$ALERT_EMAIL" <<< "Check $site immediately"
    else
        echo "[$(date)] OK: $site is up" >> "$LOG_FILE"
    fi
done
```
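To run the monitor unattended, a crontab entry like this executes it every five minutes (the script path is a placeholder):

```bash
*/5 * * * * /path/to/website_monitor.sh
```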
- Certificate errors: add the `--insecure` flag temporarily for testing (never in production)
- Slow responses: adjust timeout values based on baseline performance
- Redirect chains: use `--location-trusted` when credentials must be re-sent to the redirect target
When monitoring web services, verifying HTTPS availability is crucial for production environments. The challenge lies in properly handling SSL/TLS while capturing meaningful status information.
The most robust method uses curl with a few essential flags:
```bash
#!/bin/bash

check_https() {
    # take the target URL as an argument, defaulting to https://example.com
    local url=${1:-https://example.com}
    local response
    response=$(curl -s -o /dev/null -w "%{http_code}" -m 10 "$url")
    if [[ "$response" == "200" ]]; then
        echo "HTTPS service is UP"
        return 0
    else
        echo "HTTPS service returned $response"
        return 1
    fi
}
```
- `-s`: Silent mode (no progress meter)
- `-o /dev/null`: Discards the actual content
- `-w "%{http_code}"`: Outputs just the status code
- `-m 10`: 10-second timeout
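The same invocation works as a quick one-liner; it prints only the status code, with `000` indicating a connection-level failure (DNS, TCP, or TLS):

```bash
curl -s -o /dev/null -w "%{http_code}" -m 10 https://example.com
# prints e.g. 200 or 503; 000 means no HTTP response was received at all
```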
wget method:
```bash
# note: matching on the "200 OK" text is brittle; wget's exit code is more reliable
wget --spider --tries=1 --timeout=10 --no-check-certificate https://example.com 2>&1 | grep "200 OK"
```
openssl s_client for deeper checks:
```bash
echo | openssl s_client -connect example.com:443 2>&1 | grep -A 2 "Certificate chain"
```
For enterprise monitoring, consider adding retry logic with backoff:
```bash
#!/bin/bash

endpoint="https://example.com"
max_retries=3
timeout=15

for ((i=1; i<=max_retries; i++)); do
    http_code=$(curl -s -o /dev/null -w "%{http_code}" -m "$timeout" "$endpoint")
    case $http_code in
        200)
            echo "Success: $endpoint"
            exit 0
            ;;
        000)
            echo "Connection failed (attempt $i/$max_retries)"
            ;;
        *)
            echo "HTTP $http_code received (attempt $i/$max_retries)"
            ;;
    esac
    sleep $((2 ** i))  # Exponential backoff: 2s, 4s, 8s
done

exit 1
```
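The 0/1 exit codes integrate cleanly with cron or systemd timers; if the script feeds a Nagios-compatible monitor, map failures to exit code 2, which those systems read as CRITICAL.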
- Always handle SSL certificate validation properly in production
- Set timeout values deliberately so scripts cannot hang
- Factor DNS resolution time into your timeout calculations
- For load-balanced services, check each endpoint individually (see the sketch below)
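curl covers the last two points directly. The `-w` timing variables below are standard write-out variables, and `--resolve` pins a hostname to one backend address; the IP `192.0.2.10` is a placeholder for illustration:

```bash
# break down where time goes: DNS lookup, TCP connect, TLS handshake, total
curl -s -o /dev/null \
    -w 'dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s total=%{time_total}s\n' \
    https://example.com

# pin the hostname to one backend behind a load balancer so each node can be
# checked in turn while keeping SNI and the Host header intact
curl -s -o /dev/null -w '%{http_code}\n' \
    --resolve example.com:443:192.0.2.10 \
    https://example.com
```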