How to Enable and Analyze MySQL Slow Query Logs in AWS RDS for Performance Optimization



To enable slow query logging for your RDS MySQL instance, you'll need to modify your DB parameter group. Unlike self-managed MySQL servers, you can't directly edit the my.cnf file in RDS. Here's how to do it:

# Using AWS CLI
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-db-param-group \
    --parameters "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate" \
                "ParameterName=long_query_time,ParameterValue=2,ApplyMethod=immediate" \
                "ParameterName=log_output,ParameterValue=FILE,ApplyMethod=immediate"

RDS stores slow query logs differently than standard MySQL installations. You have several options to access them:

  • AWS Console: Navigate to RDS → Databases → Select your instance → Logs & events tab
  • AWS CLI: aws rds describe-db-log-files --db-instance-identifier your-instance-id
  • Download logs: aws rds download-db-log-file-portion --db-instance-identifier your-instance-id --log-file-name slowquery/mysql-slowquery.log --output text > slowquery.log
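
Note that download-db-log-file-portion returns at most about 1 MB per call; for larger logs you have to follow the pagination marker. A hedged boto3 sketch (the instance identifier and log file name are placeholders, and `download_full_log` is just an illustrative helper):

```python
def download_full_log(rds, instance_id, log_name):
    """Download an entire RDS log file, following pagination markers."""
    marker = "0"
    chunks = []
    while True:
        resp = rds.download_db_log_file_portion(
            DBInstanceIdentifier=instance_id,
            LogFileName=log_name,
            Marker=marker,
        )
        chunks.append(resp.get("LogFileData") or "")
        # AdditionalDataPending tells us whether another page exists
        if not resp.get("AdditionalDataPending"):
            break
        marker = resp["Marker"]
    return "".join(chunks)

# Usage (requires boto3 and AWS credentials):
# import boto3
# rds = boto3.client("rds")
# text = download_full_log(rds, "your-instance-id",
#                          "slowquery/mysql-slowquery.log")
```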

For serious performance analysis, I recommend using Percona's pt-query-digest tool:

# Install the toolkit
sudo apt-get install percona-toolkit

# Analyze your slow query log
pt-query-digest slowquery.log > analysis.txt

# Sample output format:
# Rank Query ID           Response time  Calls R/Call  V/M   Item
# ==== ================== ============== ===== ======= ===== ===============
#    1 0x12345ABCDEF      112.4568 68.8%   234  0.4806 0.21 SELECT customers

When reviewing slow queries, pay special attention to:

# Example of a problematic query
SELECT * FROM orders 
WHERE customer_id IN (SELECT id FROM customers WHERE status = 'active')
ORDER BY created_at DESC
LIMIT 1000;

# What makes it slow:
1. IN subquery (older MySQL versions execute it as a dependent subquery, re-evaluating it per outer row)
2. No index on the status column
3. Large result set that must be sorted (filesort)
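
A common fix is to rewrite the IN subquery as a JOIN and index the filter and sort columns. The sketch below uses SQLite purely to illustrate the rewritten query shape (table and column names follow the example above; the indexes shown are suggestions, not measured fixes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER,
                         created_at TEXT);
    -- Index the filter column so finding active customers no longer scans
    CREATE INDEX idx_customers_status ON customers (status);
    -- Composite index helps satisfy both the join and the ORDER BY
    CREATE INDEX idx_orders_cust_created ON orders (customer_id, created_at);

    INSERT INTO customers VALUES (1, 'active'), (2, 'inactive');
    INSERT INTO orders VALUES (10, 1, '2024-01-01'), (11, 2, '2024-01-02');
""")

# Rewritten query: JOIN instead of IN (SELECT ...)
rows = conn.execute("""
    SELECT o.*
    FROM orders AS o
    JOIN customers AS c ON c.id = o.customer_id
    WHERE c.status = 'active'
    ORDER BY o.created_at DESC
    LIMIT 1000
""").fetchall()
print(rows)  # only orders belonging to active customers
```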

For production systems, consider setting up automated monitoring:

#!/bin/bash
# Weekly slow query analysis script

LOG_FILE="/tmp/slowquery_$(date +%Y%m%d).log"
ANALYSIS_FILE="/tmp/query_analysis_$(date +%Y%m%d).txt"

# Download latest slow queries
aws rds download-db-log-file-portion \
    --db-instance-identifier prod-db-1 \
    --log-file-name slowquery/mysql-slowquery.log \
    --output text > "$LOG_FILE"

# Analyze and email results
pt-query-digest "$LOG_FILE" > "$ANALYSIS_FILE"
mail -s "Weekly Slow Query Report" dba@example.com < "$ANALYSIS_FILE"

For MySQL 5.6+ on RDS, Performance Schema offers real-time monitoring:

-- Performance Schema cannot be toggled at runtime on RDS; enable it
-- in the DB parameter group (performance_schema = 1). It is a static
-- parameter, so the change requires an instance reboot.

-- Find slow queries
SELECT digest_text, count_star, avg_timer_wait/1000000000 as avg_ms
FROM performance_schema.events_statements_summary_by_digest
ORDER BY avg_timer_wait DESC LIMIT 10;
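
Performance Schema timer columns are reported in picoseconds, which is why the query above divides by 10^9 to get milliseconds. A tiny helper if you post-process these values outside SQL (the function name is illustrative):

```python
def picoseconds_to_ms(timer_wait):
    """Convert a Performance Schema timer value (picoseconds) to milliseconds."""
    return timer_wait / 1_000_000_000

# e.g., an avg_timer_wait of 480,600,000,000 ps is 480.6 ms
print(picoseconds_to_ms(480_600_000_000))
```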

Don't forget to leverage AWS's built-in tools:

# Enable enhanced monitoring
aws rds modify-db-instance \
    --db-instance-identifier your-instance \
    --monitoring-interval 60 \
    --monitoring-role-arn arn:aws:iam::123456789012:role/rds-monitoring-role

When troubleshooting database performance issues in AWS RDS MySQL, the slow query log is one of your most valuable tools. Unlike self-managed MySQL, you can't read the log files directly off the server's filesystem; RDS exposes them through the console, CLI, and API instead.

First, you'll need to enable slow query logging through RDS parameter groups:

1. Navigate to AWS RDS Console
2. Select "Parameter groups" in the left menu
3. Create or modify a DB parameter group
4. Set these parameters:
   - slow_query_log = 1
   - long_query_time = [your threshold in seconds, e.g., 2]
   - log_output = FILE
5. Apply the parameter group to your RDS instance (may require reboot)
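
If you script this instead of clicking through the console, the same steps map onto a boto3 `modify_db_parameter_group` call. A hedged sketch (the parameter group name is a placeholder, and `build_slow_log_parameters` is an illustrative helper; these three parameters are dynamic, so ApplyMethod=immediate takes effect without a reboot):

```python
def build_slow_log_parameters(threshold_seconds=2):
    """Build the Parameters list for modify_db_parameter_group."""
    values = {
        "slow_query_log": "1",
        "long_query_time": str(threshold_seconds),
        "log_output": "FILE",
    }
    return [
        {"ParameterName": name, "ParameterValue": value,
         "ApplyMethod": "immediate"}
        for name, value in values.items()
    ]

# Usage (requires boto3 and AWS credentials):
# import boto3
# rds = boto3.client("rds")
# rds.modify_db_parameter_group(
#     DBParameterGroupName="my-db-param-group",
#     Parameters=build_slow_log_parameters(threshold_seconds=2),
# )
```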

After enabling, you can access logs through:

1. AWS RDS Console → Databases → Select your instance
2. Navigate to "Logs & events" tab
3. Find and download the "slowquery/mysql-slowquery.log" file

For automation or frequent access, use the AWS CLI:

# List available logs
aws rds describe-db-log-files \
  --db-instance-identifier your-instance-name \
  --filename-contains slowquery

# Download a specific log file
aws rds download-db-log-file-portion \
  --db-instance-identifier your-instance-name \
  --log-file-name slowquery/mysql-slowquery.log.2023-12-31 \
  --starting-token 0 \
  --output text > slowquery.log
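
RDS rotates the slow query log periodically, so describe-db-log-files usually returns several dated files; when automating, you typically want the most recently written one. A small sketch against the describe_db_log_files response shape (`newest_slow_log` is an illustrative helper):

```python
def newest_slow_log(describe_response):
    """Return the name of the most recently written slow query log file."""
    files = describe_response.get("DescribeDBLogFiles", [])
    if not files:
        return None
    # LastWritten is a POSIX timestamp in milliseconds
    return max(files, key=lambda f: f["LastWritten"])["LogFileName"]

# Usage (requires boto3 and AWS credentials):
# import boto3
# rds = boto3.client("rds")
# resp = rds.describe_db_log_files(
#     DBInstanceIdentifier="your-instance-name",
#     FilenameContains="slowquery",
# )
# print(newest_slow_log(resp))
```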

After obtaining the log file, use these tools for analysis:

# Using mysqldumpslow (included with MySQL client tools)
# -s t sorts entries by total query time
mysqldumpslow -s t slowquery.log > sorted_slow_queries.txt

# Using pt-query-digest (Percona Toolkit)
pt-query-digest slowquery.log > analysis_report.txt

When examining slow queries, pay special attention to:

  • Queries without proper indexes (look for type = ALL, a full table scan, in EXPLAIN output)
  • N+1 query problems (many similar queries in succession)
  • Queries with high "Rows_examined" to "Rows_sent" ratio
  • Queries with temporary tables or filesort operations
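
The examined-to-sent ratio in the list above can be pulled straight from the slow log's header lines. A minimal parser sketch, assuming the standard `# Query_time: ... Rows_sent: ... Rows_examined: ...` header format (`examined_to_sent_ratios` is an illustrative helper):

```python
import re

HEADER = re.compile(
    r"# Query_time: (?P<qt>[\d.]+)\s+Lock_time: [\d.]+"
    r"\s+Rows_sent: (?P<sent>\d+)\s+Rows_examined: (?P<examined>\d+)"
)

def examined_to_sent_ratios(log_text):
    """Yield (query_time, rows_examined / rows_sent) per slow log entry."""
    for m in HEADER.finditer(log_text):
        sent = int(m.group("sent"))
        examined = int(m.group("examined"))
        # A huge ratio means the query reads far more rows than it returns
        ratio = examined / sent if sent else float("inf")
        yield float(m.group("qt")), ratio

sample = "# Query_time: 3.2  Lock_time: 0.0001 Rows_sent: 10  Rows_examined: 500000\n"
for qt, ratio in examined_to_sent_ratios(sample):
    print(qt, ratio)  # 3.2 50000.0
```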

For more detailed analysis without logs, enable Performance Schema:

UPDATE performance_schema.setup_consumers SET ENABLED = 'YES' 
WHERE NAME LIKE '%events_statements%';

SELECT * FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC LIMIT 10;

For production systems, consider setting up automated monitoring:

# Sample Lambda function to process slow logs
import boto3

def lambda_handler(event, context):
    rds = boto3.client('rds')
    logs = rds.describe_db_log_files(
        DBInstanceIdentifier='your-instance',
        FilenameContains='slowquery'
    )

    # Process and alert on new slow queries.
    # process_logs is a placeholder; the implementation depends on
    # your alerting system (SNS topic, Slack webhook, email, etc.)
    process_logs(logs)
    return {'logFiles': len(logs.get('DescribeDBLogFiles', []))}