Debugging Apache httpd Memory Leaks on EC2: How to Fix High RAM Usage in Prefork MPM


When monitoring our t2.small EC2 instance running Amazon Linux (CentOS-based), we noticed Apache httpd processes consuming 90-100% of the 1.7GB of available memory. Restarting httpd temporarily resolves the issue, but usage climbs back up within hours. This pattern suggests one of three causes (a quick confirmation check follows the list):
1. Genuine memory leak in Apache/modules
2. Misconfigured MPM prefork settings
3. Unbounded process growth due to traffic patterns
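
Before tuning anything, it is worth confirming that httpd really is the consumer. A minimal check with standard procps tools (nothing here is Apache-specific):

free -m                          # overall memory picture
ps aux --sort=-rss | head -15    # largest processes by resident set size

If the top entries are all httpd children whose RSS keeps growing between checks, continue with the MPM analysis below.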


First, let's examine the current prefork configuration. Run:

apachectl -V | grep MPM
httpd -V | grep SERVER_CONFIG_FILE
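
On a stock Amazon Linux build the output looks something like this (exact paths vary by build):

Server MPM:     Prefork
 -D SERVER_CONFIG_FILE="conf/httpd.conf"

The config file path is relative to the HTTPD_ROOT define, normally /etc/httpd.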


Then check your current prefork settings:

grep -i "StartServers\|MinSpareServers\|MaxSpareServers\|ServerLimit\|MaxClients\|MaxRequestsPerChild" /etc/httpd/conf/httpd.conf


Typical problematic output might show:

StartServers        8
MinSpareServers     5
MaxSpareServers     20
ServerLimit         256
MaxClients          256
MaxRequestsPerChild 4000



The arithmetic explains the symptom: each prefork child (especially with mod_php loaded) commonly holds 25-50MB resident, so a 256-worker ceiling could demand several gigabytes on a box with 1.7GB. For a 1.7GB EC2 instance, these settings would be more appropriate:

<IfModule prefork.c>
StartServers        2
MinSpareServers     2
MaxSpareServers     5
ServerLimit         50
MaxClients          50
MaxRequestsPerChild 1000
</IfModule>
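
After editing, check the syntax and apply the change with a graceful restart so in-flight requests are not dropped:

httpd -t
service httpd graceful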



Create a monitoring script (/usr/local/bin/apache-memcheck):

#!/bin/bash
# Sum resident set size (RSS, column 8 of ps -ly output) across all httpd
# processes; NR>1 skips the ps header line so it doesn't skew the count.
ps -ylC httpd --sort=rss | awk 'NR>1 {sum+=$8; ++n} END {printf "Total RAM: %dMB\nAvg RAM per process: %dMB\nProcesses: %d\n", sum/1024, (sum/n)/1024, n}'


Make it executable and run hourly via cron:

chmod +x /usr/local/bin/apache-memcheck
echo "0 * * * * root /usr/local/bin/apache-memcheck >> /var/log/apache-mem.log" > /etc/cron.d/apache-mem



If memory still leaks after config changes:

1. Check for problematic modules:

httpd -M | grep -E 'php|python|perl'
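
If mod_php appears here, note that every prefork child embeds a full PHP interpreter, so PHP's memory_limit effectively sets the ceiling each child can grow to:

php -i | grep -i memory_limit

Moving PHP out of the children (php-fpm over FastCGI) is the structural fix when mod_php is the culprit.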


2. Enable mod_status for real-time monitoring. On Amazon Linux/CentOS it ships inside the httpd package itself (there is no separate mod_status yum package); confirm it is loaded:

httpd -M | grep status


Add to httpd.conf:

<Location /server-status>
    SetHandler server-status
    Require ip 127.0.0.1
</Location>
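
After a graceful reload, confirm the handler answers locally:

curl -s http://127.0.0.1/server-status | head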


3. Consider switching to the worker MPM if nothing in the stack requires prefork (mod_php does). On Amazon Linux the httpd 2.2 package already ships a worker binary, so there is no need to swap packages; select it in /etc/sysconfig/httpd:

HTTPD=/usr/sbin/httpd.worker

Then restart httpd and confirm with httpd.worker -V | grep MPM.



For long-term stability on resource-constrained instances:

1. Implement process recycling:

MaxRequestsPerChild 300


2. Add a memory-based restart watchdog. Note that mod_rewrite cannot do this: RewriteCond evaluates per-request variables and has no view of system memory, and setting an environment variable does not restart anything. The working approach is an external cron job that checks overall memory use and restarts httpd past a threshold; a complete script is shown later in this post.


3. Shield the httpd parent from the OOM killer. The knob is per-process, so write to the parent's PID (taken from its pidfile) after startup, e.g. from /etc/rc.local:

echo 'echo -17 > /proc/$(cat /var/run/httpd/httpd.pid)/oom_adj' >> /etc/rc.local

On kernels where oom_adj is deprecated, use oom_score_adj (range -1000 to 1000) instead. Use this sparingly: on a memory-starved box, protecting httpd just points the OOM killer at everything else.


To recap the symptom: on this EC2 instance running Amazon Linux (CentOS-based), memory consumption climbs steadily until it reaches 90-100% utilization, and it only resets after restarting the httpd service. That points to one of the following:

  • Incorrect MPM prefork configuration thresholds
  • A memory leak in custom modules/scripts
  • Unbounded process growth driven by KeepAlive settings

First, verify your current MPM prefork settings in /etc/httpd/conf/httpd.conf:

StartServers            5
MinSpareServers         5
MaxSpareServers         10
ServerLimit             256
MaxRequestWorkers       150
MaxConnectionsPerChild  1000

The key parameters for memory control are:

  • MaxRequestWorkers: should be calculated as (Total RAM - safety margin) / average Apache process size; a quick way to compute this on the box is sketched after this list
  • MaxConnectionsPerChild: Critical for preventing gradual memory leaks
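
A minimal sketch of that calculation in shell, assuming roughly 400MB reserved for the OS and other services (the margin is an assumption; adjust it to your stack):

#!/bin/bash
# Average resident size of the current httpd workers, in KB (NR>1 skips the ps header)
AVG_KB=$(ps -ylC httpd | awk 'NR>1 {sum+=$8; ++n} END {print int(sum/n)}')
# Total system RAM in KB
TOTAL_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
# Workers that fit after reserving ~400MB (assumed margin) for everything else
echo "Suggested MaxRequestWorkers: $(( (TOTAL_KB - 400*1024) / AVG_KB ))"

On a 1.7GB instance with ~60MB children this lands near 20, which is where the settings below come from.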

Run these commands to gather forensic data:

# Real-time memory usage per Apache process: PID, RSS (KB), command
ps -ylC httpd --sort=rss | awk 'NR>1 {print $3, $8, $13}'

# Track memory growth over time
watch -n 5 "pmap -x $(pgrep -o httpd) | tail -1"

# List loaded modules (each one adds to the per-process footprint)
apachectl -t -D DUMP_MODULES

For a 1.7GB EC2 instance, try these optimized settings:


StartServers            3
MinSpareServers         3
MaxSpareServers         5
ServerLimit             25
MaxRequestWorkers       20
MaxConnectionsPerChild  500

Additional tuning recommendations:

# Reduce KeepAlive impact
KeepAlive On
KeepAliveTimeout 2
MaxKeepAliveRequests 50

# Disable unused modules by commenting out their LoadModule lines, e.g.:
#LoadModule expires_module modules/mod_expires.so
#LoadModule deflate_module modules/mod_deflate.so

Create a watchdog script that restarts Apache whenever overall memory use crosses a threshold (scheduling it from cron is shown after the script):

#!/bin/bash
THRESHOLD=85
CURRENT=$(free | awk '/^Mem/ {printf "%.0f", $3/$2*100}')

if [ "$CURRENT" -gt "$THRESHOLD" ]; then
    systemctl restart httpd
    echo "$(date) - Memory usage $CURRENT% - Apache restarted" >> /var/log/apache_mem.log
fi
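
Save it as something like /usr/local/bin/apache-memwatch (the path and the five-minute interval below are assumptions, not fixed requirements), make it executable, and schedule it:

chmod +x /usr/local/bin/apache-memwatch
echo "*/5 * * * * root /usr/local/bin/apache-memwatch" > /etc/cron.d/apache-memwatch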

For persistent leaks, attach gdb to a running process and inspect its heap (swap pgrep -o, which returns the parent, for a specific child PID to examine a leaking worker). Note that malloc_info is a glibc function, not a gdb command, so it must be invoked with call:

gdb -p $(pgrep -o httpd)
(gdb) info proc mappings
(gdb) call (int) malloc_info(0, stdout)

Consider using Apache's mod_status with extended statistics:

ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Require host localhost
</Location>
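
With ExtendedStatus on, the ?auto view gives machine-readable output (a standard mod_status feature) that is convenient for scripting:

curl -s 'http://localhost/server-status?auto' | grep -E 'BusyWorkers|IdleWorkers|Total Accesses'

Watching BusyWorkers against the MaxRequestWorkers ceiling shows whether the limits above are sized correctly.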