Optimizing Linux OOM-Killer Behavior: vm.overcommit_memory Configuration for Apache Web Servers


When dealing with memory management on Linux servers running critical services like Apache, the OOM-killer's behavior can be particularly disruptive. In your case with CentOS 5.4 (kernel 2.6.16.33-xenU), the sporadic OOM events despite having adequate memory (512MB RAM + 1GB swap) suggest an overcommitment issue rather than a genuine memory shortage.

Linux's memory overcommit policy has three modes controlled by vm.overcommit_memory:

0: Heuristic overcommit (default)
1: Always overcommit
2: Strict overcommit based on vm.overcommit_ratio

The default behavior (mode 0) uses a heuristic that refuses only obviously excessive requests, so the kernel can grant far more address space than RAM and swap can actually back; the OOM-killer is then invoked when processes try to use that memory.
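
You can confirm which mode a box is running and trial a change at runtime before persisting it; for example (run as root; the sysctl -w change reverts at reboot unless written to /etc/sysctl.conf):

# Show the current overcommit mode and ratio
cat /proc/sys/vm/overcommit_memory /proc/sys/vm/overcommit_ratio

# Compare how much address space is already committed against the theoretical limit
grep -E 'CommitLimit|Committed_AS' /proc/meminfo

# Switch to strict accounting temporarily
sysctl -w vm.overcommit_memory=2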

For your Apache server hosting 10 low-traffic sites, consider this configuration:

# /etc/sysctl.conf
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
vm.swappiness = 10

Key implications:

  • vm.overcommit_memory=2: Enforces strict accounting using the formula: CommitLimit = Swap + (RAM * overcommit_ratio/100)
  • vm.overcommit_ratio=80: Limits committed memory to swap plus 80% of RAM, i.e. 1024MB + ~410MB ≈ 1.4GB of commit space in your case (verified below)
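
To apply and sanity-check the change, something along these lines should do (a minimal sketch; CommitLimit in /proc/meminfo should land near the ~1.4GB computed above):

# Load the new values from /etc/sysctl.conf
sysctl -p

# CommitLimit should now be roughly SwapTotal + 80% of MemTotal
grep -E 'MemTotal|SwapTotal|CommitLimit|Committed_AS' /proc/meminfo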

Combine these kernel settings with Apache tuning (these are prefork MPM directives):

# httpd.conf or apache2.conf
StartServers 2
MinSpareServers 2
MaxSpareServers 5
MaxClients 50
MaxRequestsPerChild 10000
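
A reasonable MaxClients is roughly the memory you can spare for Apache divided by the average per-worker footprint. A rough sizing check, assuming prefork workers named httpd as on CentOS (RSS includes shared pages, so treat the result as an upper bound on the true per-worker cost):

# Average resident size of the current Apache workers, in KB
ps -C httpd -o rss= | awk '{sum += $1; n++} END {if (n) printf "workers: %d  avg RSS: %.0f KB\n", n, sum/n}'

On a 512MB machine with perhaps 350-400MB left for Apache, MaxClients 50 only holds if each worker stays under roughly 7-8MB; heavier workers mean a lower ceiling.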

Implement these checks to validate your configuration:

# Check current memory commitments
$ grep -i commit /proc/meminfo

# Monitor OOM events
$ dmesg | grep -i "oom\|kill"

# Real-time memory monitoring
$ watch -n 5 free -m
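
If you would rather get an early warning than a post-mortem, a small check along these lines could run from cron; the 90% threshold and the commit-check syslog tag are arbitrary choices for illustration:

# Warn via syslog when committed address space nears the enforced limit
awk '/CommitLimit/ {limit = $2} /Committed_AS/ {used = $2}
     END {if (limit && used > 0.9 * limit)
            printf "Committed_AS %d kB is over 90%% of CommitLimit %d kB\n", used, limit}' \
    /proc/meminfo | logger -t commit-check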

If strict overcommit proves too restrictive, consider:

  1. Tuning OOM-killer priorities for critical processes
  2. Implementing cgroups for memory isolation
  3. Upgrading to a newer kernel with better memory management

For critical processes, you can lower their OOM score so the killer picks something else first. Note that /proc/<pid>/oom_score_adj only exists on kernels 2.6.36 and later; on the older kernel in question (2.6.16.33-xenU), write -17 to /proc/<pid>/oom_adj instead. Either way the value is per PID, so reapply it after Apache restarts (an init-script hook is the usual place):

# Protect Apache processes from the OOM killer
# (on kernels older than 2.6.36, use: echo -17 > /proc/$pid/oom_adj)
for pid in $(pgrep httpd); do
    echo -1000 > /proc/$pid/oom_score_adj
done

In production environments, I recommend:

  • Testing changes in staging first
  • Implementing gradual rollout with monitoring
  • Considering containerization for better memory isolation
  • Regularly reviewing memory usage patterns

To step back and recap the underlying problem: on memory-constrained Linux servers, particularly small VPS instances, the Out-of-Memory (OOM) killer can become a frustrating source of instability. The scenario described above - a server with seemingly adequate memory (512MB RAM + 1GB swap) becoming unresponsive roughly once a month - is a classic case where the memory overcommit settings, not the workload, need adjustment.

Linux's default memory allocation strategy uses optimistic memory overcommit (vm.overcommit_memory=0), which means the kernel will:

  • Allow most memory allocation requests to succeed
  • Only perform serious checking when physical memory is exhausted
  • Rely on the OOM killer to terminate processes when overcommit becomes unsustainable

The recommended configuration changes:

# Add to /etc/sysctl.conf
vm.overcommit_memory = 2
vm.overcommit_ratio = 80

This changes the kernel's behavior to:

  1. Refuse any allocation that would push total committed memory past the commit limit (vm.overcommit_memory=2)
  2. Derive that limit as swap space plus 80% of physical RAM (vm.overcommit_ratio=80); a quick way to preview the limit for other ratios is shown below
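
To preview what a candidate ratio would allow before committing to it, a throwaway calculation against /proc/meminfo is enough (ratio=80 here mirrors the value above):

# Commit limit a given overcommit_ratio would produce, in kB
awk -v ratio=80 '/MemTotal/ {ram = $2} /SwapTotal/ {swap = $2}
     END {printf "CommitLimit at %d%%: %d kB\n", ratio, swap + ram * ratio / 100}' /proc/meminfo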

For an Apache web server hosting multiple low-traffic sites, this approach trades a small risk of refused allocations for predictable behavior. Check the current commitment and settings before and after the change:

# Calculate current memory commitment
grep Commit /proc/meminfo
# Check current overcommit settings
sysctl vm.overcommit_memory vm.overcommit_ratio

Some processes might fail to allocate memory under strict accounting when they would have succeeded under the default heuristic. A refused allocation does not invoke the OOM killer - the call simply returns ENOMEM to the process - so watch the application logs as well as the kernel log:

# Apache reports refused allocations in its own error log (typically "Cannot allocate memory")
grep -i "cannot allocate memory" /var/log/httpd/error_log*
# Check OOM killer activity
grep -i "out of memory\|oom-killer" /var/log/messages*

Consider these additional measures for comprehensive memory management:

# Adjust swappiness (sysctl.conf syntax; already included in the settings above)
vm.swappiness = 10
# Limit per-process virtual memory from a shell or Apache's init script
ulimit -v [KBYTES]
# Use cgroups for memory limits (memory controller required; fuller sketch below)
echo [PID] > /sys/fs/cgroup/memory/groupname/tasks
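
The cgroup line above assumes a group already exists. A fuller sketch, assuming a kernel with the cgroup v1 memory controller (2.6.24 or later, so not the stock CentOS 5 kernel) mounted at /sys/fs/cgroup/memory, and using a hypothetical group name of "apache":

# Create a memory cgroup and cap it at 256MB (value chosen for illustration)
mkdir -p /sys/fs/cgroup/memory/apache
echo $((256 * 1024 * 1024)) > /sys/fs/cgroup/memory/apache/memory.limit_in_bytes

# Move the running Apache workers into the group
for pid in $(pgrep httpd); do
    echo $pid > /sys/fs/cgroup/memory/apache/tasks
done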

After applying the settings, monitor performance with:

# Continuous memory monitoring
watch -n 5 free -m
# Apache memory usage per process
ps -ylC httpd --sort=rss