WebLogic logs use a distinct timestamp format that makes temporal filtering challenging. A typical entry starts with a `####<timestamp>` header line, with the message content following:

```
####<Sep 21, 2018 1:56:20 PM EDT> <Info> <...>
Here comes the actual log message content
```
We need three key elements for robust log monitoring:
- Timestamp extraction and comparison
- Continuous monitoring with retry logic
- Pattern matching for the target string
Here's a production-ready script combining all requirements:
```bash
#!/bin/bash

TARGET_STRING="ERROR_WE_CARE_ABOUT"
LOG_FILE="/path/to/weblogic.log"
MAX_ATTEMPTS=10
ATTEMPT=0
FOUND=false

# Entry header pattern, capturing the timestamp between the angle brackets.
# Keeping it in a variable avoids quoting pitfalls inside [[ =~ ]].
HEADER_RE='^####<([^>]+)>'

while [ $ATTEMPT -lt $MAX_ATTEMPTS ] && [ "$FOUND" = false ]; do
    NOW=$(date +%s)
    TEN_MIN_AGO=$((NOW - 600))
    current_entry=""

    # Process log entries from the last 10 minutes
    while IFS= read -r line; do
        if [[ "$line" =~ $HEADER_RE ]]; then
            log_date=$(date -d "${BASH_REMATCH[1]}" +%s 2>/dev/null)
            if [ -n "$log_date" ] && [ "$log_date" -ge "$TEN_MIN_AGO" ]; then
                current_entry="$line"
            else
                # Reset so continuation lines of an old entry can't match
                current_entry=""
            fi
            continue
        fi
        if [ -n "$current_entry" ] && [[ "$line" == *"$TARGET_STRING"* ]]; then
            echo "Found matching entry:"
            echo "$current_entry"
            echo "$line"
            FOUND=true
            break
        fi
    done < "$LOG_FILE"

    if [ "$FOUND" = false ]; then
        ((ATTEMPT++))
        if [ $ATTEMPT -lt $MAX_ATTEMPTS ]; then
            echo "Pattern not found, retrying in 60 seconds..."
            sleep 60
        fi
    fi
done

if [ "$FOUND" = false ]; then
    echo "Error: Target string not found after $MAX_ATTEMPTS attempts"
    exit 1
fi
```
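A quick usage sketch (the script path here is a placeholder); the exit status is the contract, so callers only need to check `$?`:

```bash
# Example invocation (hypothetical path); exit status reports the outcome:
# 0 = pattern found within the window, 1 = gave up after MAX_ATTEMPTS
/opt/scripts/watch_weblogic.sh
echo "watcher exit status: $?"
```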
For high-volume logs, consider these optimized approaches:
```bash
# Using awk for better performance
awk -v target="$TARGET_STRING" -v threshold="$(date -d '10 minutes ago' +%s)" '
BEGIN { found = 0 }
/^####</ {
    # Pull the timestamp out of the ####<...> header
    split($0, parts, /[<>]/)
    cmd = "date -d \"" parts[2] "\" +%s 2>/dev/null"
    timestamp = 0
    cmd | getline timestamp
    close(cmd)               # avoid leaking one command pipe per entry
    valid = (timestamp >= threshold)
    next
}
valid && $0 ~ target { print; found = 1; exit }
END { exit !found }
' "$LOG_FILE"
```
When dealing with EDT/EST timestamps, note that GNU date recognizes the common US zone abbreviations directly, and that epoch output (`+%s`) is already UTC-based, so the string can usually be parsed as-is:

```bash
# GNU date parses the EDT abbreviation; %s output is timezone-independent
LOG_TIME="Sep 21, 2018 1:56:20 PM EDT"
UTC_SECONDS=$(date -d "$LOG_TIME" +%s)
```
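Zone abbreviations are ambiguous (CST alone can mean US Central, China, or Cuba time), so where robustness matters you can strip the abbreviation and let GNU date resolve a named zone instead; a sketch:

```bash
LOG_TIME="Sep 21, 2018 1:56:20 PM EDT"
# Drop the trailing abbreviation and resolve via the named zone;
# America/New_York switches between EST and EDT automatically
UTC_SECONDS=$(date -d "TZ=\"America/New_York\" ${LOG_TIME% *}" +%s)
echo "$UTC_SECONDS"   # 1537552580 == 2018-09-21 17:56:20 UTC
```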
Further hardening ideas:

- Add log rotation checks using `ls -l` or inode comparison (see the sketch below)
- Implement signal trapping for clean script termination
- Consider using `tail -n` with a large line count as an initial optimization
- Add monitoring for script execution timeouts
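For the rotation check, a minimal sketch based on inode comparison (GNU `stat` assumed; the loop body is illustrative):

```bash
# If the inode behind $LOG_FILE changes between passes, the file was
# rotated: the path now points at a new file, and any open handle or
# cached position refers to the old one.
LAST_INODE=$(stat -c %i "$LOG_FILE")
while true; do
    sleep 60
    CUR_INODE=$(stat -c %i "$LOG_FILE" 2>/dev/null)
    if [ "$CUR_INODE" != "$LAST_INODE" ]; then
        echo "Log rotated; restarting the scan on the new file"
        LAST_INODE=$CUR_INODE
    fi
    # ... run the search pass against $LOG_FILE here ...
done
```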
WebLogic logs with their `####<timestamp>` prefix format pose a particular challenge when you need to:
- Filter only recent entries (e.g., last 10 minutes)
- Search for specific patterns within that timeframe
- Implement retry logic when matches aren't found
Here's a robust approach combining date calculations and text processing:
```bash
#!/bin/bash

# Configuration
LOG_FILE="/path/to/weblogic.log"
SEARCH_STRING="ERROR"
RETRY_INTERVAL=60
MAX_RETRIES=5

# Retry loop
for ((retry = 1; retry <= MAX_RETRIES; retry++)); do
    # Threshold as epoch seconds, recomputed each attempt; passing %s avoids
    # round-tripping a %Z-formatted timestamp through date -d, which is
    # unreliable for many zone abbreviations
    THRESHOLD_TS=$(date -d "10 minutes ago" +%s)

    # Extract and search entries from the last 10 minutes
    awk -v search="$SEARCH_STRING" -v threshold_ts="$THRESHOLD_TS" '
    /^####</ {
        # Extract the timestamp between ####< and the first >
        ts_str = substr($0, 6, index($0, ">") - 6)
        # Convert to epoch seconds via an external date call
        cmd = "date -d \"" ts_str "\" +%s 2>/dev/null"
        ts = 0
        cmd | getline ts
        close(cmd)
        valid_entry = (ts >= threshold_ts)
        # No "next" here: a single-line entry can match on the header itself
    }
    valid_entry && $0 ~ search {
        print
        found = 1
    }
    END { exit !found }
    ' "$LOG_FILE"

    # awk exits 0 when at least one matching line was printed
    if [ $? -eq 0 ]; then
        exit 0
    fi

    # Wait before retrying
    if [ $retry -lt $MAX_RETRIES ]; then
        sleep $RETRY_INTERVAL
    fi
done

exit 1
```
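Since the script communicates only through its exit status, wiring it into a larger job is straightforward; a sketch (the script name and the alerting step are placeholders):

```bash
if /opt/scripts/check_recent_errors.sh; then
    echo "Match found within the last 10 minutes"
else
    echo "No match found after all retries; escalate" >&2
    # e.g. call your alerting tool here
fi
```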
This solution improves upon simpler approaches by:

- Properly handling WebLogic's timestamp format
- Using `date` for accurate epoch-based time comparisons
- Implementing configurable retry logic
- Scanning the log in a single awk pass instead of a line-by-line shell loop
For frequently updated logs, consider limiting the scan to the tail of the file before applying the time filter:

```bash
tail -n 10000 "$LOG_FILE" | awk -v search="$SEARCH_STRING" -v threshold_ts="$THRESHOLD_TS" '
    # Same awk logic as above
'
```
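For a truly continuous watch, `tail -F` (which survives rotation) can replace the fixed-size tail; starting at `-n 0` means every line seen is new, so the time filter becomes unnecessary. A sketch:

```bash
# Follow the live log and exit as soon as the pattern appears in a new line
tail -F -n 0 "$LOG_FILE" | awk -v search="$SEARCH_STRING" '
    $0 ~ search { print; exit 0 }
'
```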
When dealing with massive log files:
- First use `grep` to find potential matches, then apply the time filtering
- Consider using `tac` (reverse cat) to process from the end backward, stopping at the first entry older than the window (see the sketch below)
- For production systems, implement log rotation policies
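A sketch of the `tac` approach: scanning newest-first lets the script stop at the first header older than the window instead of reading the whole file. Note that under `tac` an entry's continuation lines arrive before their `####<` header, so they have to be buffered:

```bash
tac "$LOG_FILE" | awk -v search="$SEARCH_STRING" \
    -v threshold_ts="$(date -d '10 minutes ago' +%s)" '
/^####</ {
    ts_str = substr($0, 6, index($0, ">") - 6)
    cmd = "date -d \"" ts_str "\" +%s 2>/dev/null"
    ts = 0; cmd | getline ts; close(cmd)
    if (ts == 0) { buf = ""; next }   # unparseable header: skip the entry
    if (ts < threshold_ts) exit       # older than the window: stop reading
    # Check the header plus the buffered continuation lines of this entry
    if ($0 ~ search || buf ~ search) { print $0 buf; found = 1 }
    buf = ""
    next
}
{ buf = "\n" $0 buf }   # prepend to keep the entry lines in original order
END { exit !found }
'
```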