Diagnosing Unexplained High Memory Usage in Linux: When Processes Don’t Show Memory Allocation


During LVM stress testing on a Debian server, I ran into a puzzling scenario: memory usage spiked dramatically without any single process appearing to claim it. Standard monitoring tools like htop and top showed no obvious culprits, yet free -m reported nearly all of the 32 GB of RAM consumed, with roughly 28 GB still counted as used even after subtracting buffers and cache.

# free -m output showing discrepancy
             total       used       free     shared    buffers     cached
Mem:         32153      31958        194          0         52       3830
-/+ buffers/cache:      28075       4077
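
As a quick sanity check, it helps to confirm that user-space processes really cannot account for the usage. Summing resident set sizes is approximate (shared pages are counted more than once), but it is more than enough to show the gap:

# Sum the RSS of all processes and compare it with the "used" figure above
ps -eo rss= | awk '{ sum += $1 } END { printf "total process RSS: %.1f GB\n", sum / 1024 / 1024 }'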

When standard process monitors fail to reveal where the memory went, we need to examine kernel-level memory accounting. The Linux kernel manages memory through several mechanisms that never appear in user-space process listings:

# Check slab memory usage
grep -E 'SReclaimable|SUnreclaim' /proc/meminfo
sudo cat /proc/slabinfo
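
If most of the missing memory sits in the slab allocator, a rough total can be derived from /proc/slabinfo by multiplying object counts by object sizes; this ignores per-slab overhead, so treat it as an estimate:

# Estimate total memory held in slab objects (num_objs is column 3, objsize is column 4)
sudo awk 'NR > 2 { sum += $3 * $4 } END { printf "approx. slab object memory: %.0f MB\n", sum / 1048576 }' /proc/slabinfo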

# Check kernel module memory footprint (module size in bytes is the second column)
lsmod | sort -k2 -rn | head -20

Given the LVM testing context, the device mapper (dm) is a prime suspect: its targets, snapshots in particular, allocate kernel memory for mapping tables and copy-on-write exception tracking, none of which is attributed to a user-space process. dmsetup shows which mappings are active and what state they are in:

# Check device mapper memory usage
sudo dmsetup status --target snapshot
sudo dmsetup table

# Monitor kernel memory allocations in real-time
sudo perf stat -e 'kmem:*' -a sleep 10
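
Device-mapper targets keep much of their state in dedicated slab caches, so watching those caches during a test run ties any growth to dm directly. The cache names below are typical examples and vary by kernel version and loaded targets:

# List dm-related slab caches (names differ between kernels and dm targets)
sudo grep -E 'dm_|dm-|kcopyd' /proc/slabinfo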

When conventional tools fail, we need specialized memory diagnostics:

# Trigger a kmemleak scan (requires a kernel built with CONFIG_DEBUG_KMEMLEAK and debugfs mounted)
echo scan | sudo tee /sys/kernel/debug/kmemleak
sudo cat /sys/kernel/debug/kmemleak
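
If /sys/kernel/debug/kmemleak does not exist, the running kernel most likely was not built with kmemleak support; many distribution kernels leave it disabled, so it is worth checking before relying on it (the config path below is the usual Debian location):

# Verify that the running kernel was built with kmemleak support
grep CONFIG_DEBUG_KMEMLEAK /boot/config-$(uname -r)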

# Check for memory fragmentation
cat /proc/buddyinfo
cat /proc/pagetypeinfo
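
The raw buddyinfo counts take some squinting. One rough way to spot fragmentation is to count how many higher-order blocks each zone still has free (column 5 is order 0, so order 4 starts at column 9):

# Count free blocks of order >= 4 per zone; consistently low numbers indicate fragmentation
awk '{ high = 0; for (i = 9; i <= NF; i++) high += $i; printf "%s %s %s %s: %d free blocks of order >= 4\n", $1, $2, $3, $4, high }' /proc/buddyinfo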

When facing such memory leaks, consider these mitigation techniques:

# Drop reclaimable slab objects such as dentries and inodes (temporary relief at best)
echo 2 | sudo tee /proc/sys/vm/drop_caches

# Check for OOM killer activity
dmesg | grep -i 'oom'

# Monitor kernel allocations over time
sudo slabtop -o
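
To see whether dropping caches actually buys anything back, compare the slab figures in /proc/meminfo before and after the write. If SUnreclaim barely moves, the leak sits in unreclaimable kernel memory and dropping caches will not help:

# Compare slab usage before and after dropping reclaimable slab objects
grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo
sync
echo 2 | sudo tee /proc/sys/vm/drop_caches
grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo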

The most effective long-term fix usually involves kernel debugging tools and, in the worst case, patching the offending kernel module. For LVM/DM-specific issues, it is worth testing a newer kernel or a different device-mapper target (for example, thin-provisioned snapshots instead of classic copy-on-write snapshots).


In my case, the smoking gun was that stopping the LVM operations returned memory to normal levels. Device mapper (DM) and the LVM tools can consume memory through:

  • Kernel space allocations not attributed to user processes
  • Memory-mapped device operations
  • Metadata caching in kernel slab allocator
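
One way to pin this down is to sample unreclaimable slab memory around a single operation. The volume group and logical volume names below are placeholders; substitute whatever the stress test actually uses:

# Sample unreclaimable slab before and after one snapshot cycle (vg0/lv0 are hypothetical names)
grep SUnreclaim /proc/meminfo
sudo lvcreate -s -L 1G -n test_snap vg0/lv0
grep SUnreclaim /proc/meminfo
sudo lvremove -y vg0/test_snap
grep SUnreclaim /proc/meminfo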

Two details are worth adding to the checks above: the device-mapper dependency tree shows which mappings (snapshots in particular) are still active, and kmemleak is only reachable once debugfs is mounted:

# Show the device mapper dependency tree
sudo dmsetup ls --tree

# Mount debugfs if /sys/kernel/debug is empty
sudo mount -t debugfs none /sys/kernel/debug

When facing similar issues:

  1. First isolate the triggering operation (LVM commands in this case)
  2. Monitor kernel memory allocations during operations
  3. Check for slab memory growth using /proc/slabinfo or the Slab fields in /proc/meminfo (a minimal watch loop is sketched after this list)
  4. Consider temporary kernel parameter adjustments for diagnostics:
# Temporarily reduce dirty memory ratios to rule out writeback caching effects
echo 5 | sudo tee /proc/sys/vm/dirty_background_ratio
echo 10 | sudo tee /proc/sys/vm/dirty_ratio
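
For step 3, a minimal watch loop is enough; the 30-second interval below is arbitrary:

# Log slab growth while the stress test runs (Ctrl-C to stop)
while true; do
    date '+%T'
    grep -E '^(Slab|SReclaimable|SUnreclaim)' /proc/meminfo
    sleep 30
done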

For persistent issues, kernel instrumentation tools like systemtap or perf can trace memory allocation calls back to their origins.
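
As a concrete starting point, the kmem tracepoints let perf attribute kmalloc activity to call sites system-wide; a short capture while the stress test runs is usually enough:

# Record kmalloc events with call graphs for 30 seconds, then summarize by caller
sudo perf record -e kmem:kmalloc -a -g -- sleep 30
sudo perf report --stdio | head -50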