When working with AWS CloudWatch Logs across multiple EC2 instances, developers often need real-time visibility into consolidated logs. The native CloudWatch console has limitations:
- The "Search Events" function doesn't support continuous tailing
- The jump-to-end button remains disabled for log groups
- The AWS CLI's `get-log-events` command requires individual stream specification
Here are three effective approaches to tail CloudWatch logs across multiple streams:
1. Using AWS CLI with Shell Scripting
```bash
#!/bin/bash
LOG_GROUP_NAME="/your/log/group"

# Get all log stream names in the group
STREAMS=$(aws logs describe-log-streams \
  --log-group-name "$LOG_GROUP_NAME" \
  --query 'logStreams[].logStreamName' \
  --output text)

# Tail each stream in a background process
for stream in $STREAMS; do
  aws logs tail "$LOG_GROUP_NAME" \
    --log-stream-names "$stream" \
    --follow &
done

# Block until interrupted (Ctrl-C stops all background tails)
wait
```
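Note that AWS CLI v2's `aws logs tail` already follows every stream in a log group with a single command, so the per-stream loop above is mainly useful when you need per-stream control (the group name is a placeholder):

```shell
aws logs tail /your/log/group --follow --since 10m
```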
2. CloudWatch Logs Insights for Advanced Querying
For structured log analysis:
```
fields @timestamp, @message
| sort @timestamp desc
| limit 20
```
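The same query can also be run programmatically. `start_query` and `get_query_results` are the actual Boto3 Logs Insights APIs; the one-hour time window below is an illustrative assumption, not a requirement:

```python
import time
from datetime import datetime, timedelta

def query_window(minutes=60, now=None):
    """Return (start, end) as epoch seconds covering the last `minutes` minutes."""
    now = now or datetime.now()
    end = int(now.timestamp())
    return end - minutes * 60, end

def run_insights_query(log_group_name, query_string):
    import boto3  # imported here so the helper above stays dependency-free
    client = boto3.client('logs')
    start, end = query_window(minutes=60)
    query_id = client.start_query(
        logGroupName=log_group_name,
        startTime=start,
        endTime=end,
        queryString=query_string)['queryId']
    # Insights queries are asynchronous: poll until the query finishes
    while True:
        resp = client.get_query_results(queryId=query_id)
        if resp['status'] in ('Complete', 'Failed', 'Cancelled'):
            return resp['results']
        time.sleep(1)
```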
3. Third-Party Tools Integration
Consider these alternatives:
- AWS CLI v2's `logs tail` command with the `--follow` flag
- cw (a CloudWatch CLI tool) with `cw tail -f`
- Fluentd or Logstash for pipeline processing
For production environments, implement this Python solution using boto3:
```python
import time

import boto3

client = boto3.client('logs')

def tail_log_group(log_group_name):
    # Seed a forward-paging token for each stream from its newest event.
    # (get_log_events has no eventId field; nextForwardToken is the
    # cursor to use for paging.)
    streams = client.describe_log_streams(
        logGroupName=log_group_name)['logStreams']
    tokens = {}
    for stream in streams:
        name = stream['logStreamName']
        resp = client.get_log_events(
            logGroupName=log_group_name,
            logStreamName=name,
            limit=1)
        tokens[name] = resp['nextForwardToken']

    while True:
        for name, token in tokens.items():
            # Page forward from the last token; get_log_events returns
            # the same token back when there are no new events.
            resp = client.get_log_events(
                logGroupName=log_group_name,
                logStreamName=name,
                nextToken=token)
            for event in resp['events']:
                print(f"[{name}] {event['message']}")
            tokens[name] = resp['nextForwardToken']
        time.sleep(5)
```
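The snippet above only sees the streams returned by a single `describe_log_streams` call (at most 50). A boto3 paginator, a standard boto3 feature, handles the `nextToken` bookkeeping when a group has more:

```python
def list_stream_names(log_group_name):
    import boto3  # imported here so the sketch stands alone
    client = boto3.client('logs')
    names = []
    # The paginator follows nextToken across pages automatically
    paginator = client.get_paginator('describe_log_streams')
    for page in paginator.paginate(logGroupName=log_group_name):
        names.extend(s['logStreamName'] for s in page['logStreams'])
    return names
```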
Operational considerations:
- Monitor your API request rates to avoid throttling
- Adjust timeouts based on your log volume
- Consider log retention settings when designing your solution
- Use IAM policies to restrict log access appropriately
An alternative Boto3 approach avoids per-stream token bookkeeping entirely: poll the most recently active streams and filter events by timestamp.
```python
import time
from datetime import datetime, timedelta

import boto3

def tail_log_group(log_group_name, follow=True, poll_seconds=5):
    client = boto3.client('logs')
    # Start five minutes back; CloudWatch timestamps are epoch milliseconds
    last_event_time = int((datetime.now() - timedelta(minutes=5)).timestamp() * 1000)
    while follow:
        # Check the most recently active streams first
        streams = client.describe_log_streams(
            logGroupName=log_group_name,
            orderBy='LastEventTime',
            descending=True
        )
        for stream in streams['logStreams']:
            events = client.get_log_events(
                logGroupName=log_group_name,
                logStreamName=stream['logStreamName'],
                startTime=last_event_time,
                startFromHead=True  # read oldest-first so ordering is sane
            )
            for event in events['events']:
                print(f"[{stream['logStreamName']}] {event['message']}")
                last_event_time = max(last_event_time, event['timestamp'] + 1)
        time.sleep(poll_seconds)  # avoid hammering the API between polls
```
For production environments, consider these more robust options:
- CloudWatch Logs Insights: run queries like:

  ```
  fields @timestamp, @message | sort @timestamp desc | limit 50
  ```

- Third-party tools: `awslogs` (a Python package) provides tail functionality:

  ```shell
  awslogs get /var/log/syslog --watch --no-group --no-stream
  ```
When implementing log tailing:
- Implement proper error handling for throttling (use exponential backoff)
- Cache log stream names to reduce API calls
- Consider using CloudWatch Logs Subscription Filters for high-volume scenarios
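As a concrete sketch of the backoff advice above (the retry parameters are illustrative, not prescriptive):

```python
import random

def backoff_delays(base=0.5, cap=30.0, retries=5):
    """Yield exponentially growing delays with full jitter, capped at `cap` seconds."""
    for attempt in range(retries):
        yield random.uniform(0, min(cap, base * 2 ** attempt))

def call_with_backoff(fn, *args, **kwargs):
    import time
    from botocore.exceptions import ClientError  # ships with boto3
    for delay in backoff_delays():
        try:
            return fn(*args, **kwargs)
        except ClientError as err:
            # Retry only on throttling; re-raise everything else
            if err.response['Error']['Code'] != 'ThrottlingException':
                raise
            time.sleep(delay)
    raise RuntimeError('still throttled after retries')
```

Usage would look like `call_with_backoff(client.get_log_events, logGroupName=..., logStreamName=...)`, wrapping the polling calls in either solution above.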