When the OOM killer activates even though the system appears to have ample free memory, you're likely dealing with one of Linux's more counterintuitive memory management scenarios. In this case, two suspicious processes report impossible CPU usage values:
USER    PID   %CPU      %MEM  VSZ      RSS    TTY  STAT  START  TIME          COMMAND
root    1399  60702042  0.2   482288   1868   ?    Sl    Feb21  21114574:24   /sbin/rsyslogd
mysql   2022  60730428  5.1   1606028  38760  ?    Sl    Feb21  21096396:49   /usr/libexec/mysqld
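Before trusting those numbers, it is worth cross-checking ps against the raw counters the kernel exposes. A minimal sketch, assuming the process name in /proc/<pid>/stat contains no spaces (which would shift the field positions):

# Compare ps's accumulated CPU time with utime/stime from /proc/<pid>/stat (fields 14 and 15, in clock ticks)
pid=1399                                   # rsyslogd PID from the ps output above
clk_tck=$(getconf CLK_TCK)                 # clock ticks per second, usually 100
read -r utime stime < <(awk '{print $14, $15}' /proc/$pid/stat)
echo "accumulated CPU time: $(( (utime + stime) / clk_tck )) seconds"

If that figure disagrees wildly with what ps reports, the accounting data itself is suspect.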
The vm.* sysctl settings reveal several aggressive memory policies that might contribute to the issue:
vm.overcommit_memory = 1
vm.swappiness = 100
vm.drop_caches = 3
vm.min_free_kbytes = 3518
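The min_free_kbytes value of 3518 (about 3.4 MB) is what the recommendations later in this article raise to 65536. The per-zone watermarks derived from it show how much headroom the allocator keeps before reclaim kicks in; values are in pages, typically 4 kB each:

# Per-zone min/low/high watermarks versus the configured reserve
grep -E '^Node|^ +(min|low|high) ' /proc/zoneinfo
cat /proc/sys/vm/min_free_kbytes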
The kernel logs show the critical moment when memory allocation failed:
Feb 21 17:12:51 host kernel: Out of memory: kill process 2053 (mysqld_safe) score 891049 or a child
Feb 21 17:12:51 host kernel: Killed process 2266 (mysqld) vsz:1540232kB, anon-rss:4692kB, file-rss:128kB
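The score of 891049 comes from the kernel's badness heuristic. Checking how the surviving candidates currently rank makes it clear who gets picked next (a quick sketch; on older kernels the tunable counterpart of oom_score is /proc/<pid>/oom_adj rather than oom_score_adj):

# Show how the kernel currently ranks the main suspects; higher oom_score = killed first
for pid in $(pgrep 'mysqld|rsyslogd'); do
    printf '%-7s %-12s oom_score=%s\n' "$pid" "$(ps -o comm= -p "$pid")" "$(cat /proc/$pid/oom_score)"
done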
Here's a diagnostic script to monitor memory pressure in real time:
#!/bin/bash
watch -n 1 '
  echo -e "Memory Stats:\n--------------";
  free -m;
  echo -e "\nMemory Pressure:\n--------------";
  cat /proc/pressure/memory;
  echo -e "\nTop Memory Consumers:\n--------------";
  ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head -n 5;
  echo -e "\nSwap Usage:\n--------------";
  swapon --show;
'
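Note that /proc/pressure/memory requires PSI support (kernel 4.20+ built with CONFIG_PSI), which an older host like this one may lack. A fallback sketch that leans on swap activity instead:

if [ -r /proc/pressure/memory ]; then
    cat /proc/pressure/memory
else
    vmstat 1 5 | awk 'NR < 3 || $7 + $8 > 0'   # si/so columns: pages swapped in/out per second
fi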
For a MySQL server with 768MB RAM, consider these adjustments:
# Add to /etc/sysctl.conf
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
vm.swappiness = 60

# MySQL-specific optimizations (these go in my.cnf under [mysqld], not in sysctl.conf)
innodb_buffer_pool_size = 128M
innodb_log_file_size = 32M
innodb_flush_method = O_DIRECT
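One way to apply and verify both sets of changes (a sketch; the mysqld service name and the use of systemctl are assumptions, adjust for your distribution):

sysctl -p                                                  # load the new vm.* values from /etc/sysctl.conf
systemctl restart mysqld                                   # or: service mysqld restart on SysV-init systems
sysctl vm.overcommit_memory vm.overcommit_ratio vm.swappiness
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'"   # should report 134217728 (128M)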
Configure this alert rule for early OOM detection:
groups:
  - name: memory.rules
    rules:
      - alert: PotentialOOM
        expr: (node_memory_MemFree_bytes + node_memory_Cached_bytes + node_memory_Buffers_bytes) / node_memory_MemTotal_bytes < 0.1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Low available memory (instance {{ $labels.instance }})"
          description: "Available memory is only {{ $value | humanizePercentage }} of total memory"
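Assuming the rule is saved somewhere like /etc/prometheus/rules/memory.rules.yml (a hypothetical path), validate it and reload Prometheus before relying on it:

promtool check rules /etc/prometheus/rules/memory.rules.yml
curl -X POST http://localhost:9090/-/reload     # only works if Prometheus runs with --web.enable-lifecycle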
When the Linux OOM killer starts terminating processes despite having visible free RAM, the cause is usually one of the scenarios described above: broken kernel accounting, overly aggressive VM tuning, or an oversized MySQL configuration. Start by confirming the symptoms:
# Common symptoms to check:
dmesg | grep -i oom              # OOM killer activity still in the kernel ring buffer
grep -i oom /var/log/messages    # older OOM events that have rotated out of dmesg
free -h                          # headline free/used/cache numbers
vmstat 1 5                       # watch the si/so columns for active swapping
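The "free" column alone is easy to misread. MemAvailable (kernels 3.14+) is the kernel's own estimate of how much memory can be handed out without swapping, so comparing it with MemFree shows whether the visible free RAM is actually allocatable:

awk '/^(MemTotal|MemFree|MemAvailable|Buffers|Cached):/ {printf "%-14s %7.0f MB\n", $1, $2/1024}' /proc/meminfo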
The process metrics showing impossible CPU utilization (60702042%) immediately suggest kernel accounting corruption, which explains why the OOM killer misbehaves: it is working with corrupted data. Key areas to investigate:
# Check for memory leaks/corruption:
grep -E 'MemTotal|MemFree|Buffers|Cached|Swap' /proc/meminfo   # headline memory counters
grep -i swap /proc/$(pidof mysqld)/smaps                       # per-mapping swap usage for mysqld
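The smaps check prints one Swap: line per mapping; summing them gives the total of mysqld's address space that has already been pushed to swap (requires root, and assumes a single mysqld process):

awk '/^Swap:/ {kb += $2} END {printf "mysqld pages in swap: %d kB\n", kb}' /proc/$(pidof mysqld)/smaps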
Setting swappiness to 100 tells the kernel to push pages to swap as aggressively as possible. Combine that with vm.drop_caches = 3 (which purges the page cache plus dentries and inodes every time it is applied, for example on each sysctl -p) and you create artificial memory pressure:
# Recommended temporary settings:
sysctl -w vm.swappiness=60                 # stop aggressively preferring swap over resident pages
sysctl -w vm.drop_caches=1                 # one-shot page-cache drop; writing here is an action, not a persistent tunable
echo 0 > /proc/sys/vm/overcommit_memory    # heuristic overcommit instead of the current "always overcommit" (1)
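These runtime changes only affect the running kernel and are lost at reboot, so read the values back and note what still needs to move into /etc/sysctl.conf (and whether a stray vm.drop_caches line is already sitting there):

sysctl vm.swappiness vm.overcommit_memory                             # confirm the runtime values took effect
grep -nE 'vm\.(swappiness|overcommit|drop_caches)' /etc/sysctl.conf   # anything listed here is reapplied at boot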
The OOM killer targeted MySQL, which suggests its configuration is oversized for the host. For a 768MB system, these my.cnf adjustments help:
[mysqld]
key_buffer_size = 32M
table_open_cache = 64
query_cache_size = 16M
tmp_table_size = 16M
max_connections = 30
innodb_buffer_pool_size = 64M
innodb_log_file_size = 16M
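As a sanity check that this configuration actually fits in 768MB, a back-of-the-envelope worst case can be computed from the running server: global buffers plus the per-connection buffers multiplied by max_connections. This formula is a rough sketch, not an exact peak:

mysql -N -e "SHOW VARIABLES WHERE Variable_name IN ('key_buffer_size','innodb_buffer_pool_size','innodb_log_buffer_size','sort_buffer_size','read_buffer_size','read_rnd_buffer_size','join_buffer_size','tmp_table_size','max_connections')" |
awk '{v[$1] = $2}
END {
  g = v["key_buffer_size"] + v["innodb_buffer_pool_size"] + v["innodb_log_buffer_size"];
  p = v["sort_buffer_size"] + v["read_buffer_size"] + v["read_rnd_buffer_size"] + v["join_buffer_size"] + v["tmp_table_size"];
  printf "rough worst case: %.0f MB global + %.1f MB/conn * %d conns = %.0f MB\n", g/2^20, p/2^20, v["max_connections"], (g + p * v["max_connections"])/2^20;
}'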
For systems showing accounting corruption, these steps often help:
# Flush dirty pages to disk, then free reclaimable slab objects (dentries and inodes):
sync
echo 2 > /proc/sys/vm/drop_caches
# Reboot automatically rather than running on with corrupted kernel state after an oops:
sysctl -w kernel.panic_on_oops=1
sysctl -w kernel.panic=5
Add these to /etc/sysctl.conf for production systems:
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
vm.swappiness = 60
vm.dirty_ratio = 10
vm.dirty_background_ratio = 5
vm.min_free_kbytes = 65536
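The min_free_kbytes value of 65536 deliberately holds back 64 MB for atomic allocations; it is worth confirming what fraction of this particular box that reserve represents (a quick arithmetic check against the suggested value above):

awk '/^MemTotal:/ {printf "65536 kB reserved = %.1f%% of %d kB total RAM\n", 65536 * 100 / $2, $2}' /proc/meminfo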
Remember to monitor with atop or htop after implementing changes.