When you notice kswapd consuming 100% CPU, it indicates the kernel is working hard to reclaim memory, typically by swapping anonymous pages to disk and evicting page cache. kswapd wakes when a zone's free memory falls below its "low" watermark (derived from vm.min_free_kbytes) and scans until the "high" watermark is restored; /proc/sys/vm/swappiness does not set those thresholds, it only biases how much of the reclaim falls on anonymous pages versus the page cache.
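The watermarks themselves are visible per zone in /proc/zoneinfo; as a rough sketch (exact field layout varies a little across kernel versions), this shows how close free pages are to the low watermark:
# Show free pages vs. the min/low/high watermarks for each zone
awk '/^Node/ || /pages free/ || $1 ~ /^(min|low|high)$/' /proc/zoneinfo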
Use this comprehensive approach to trace the root cause:
# 1. Check current memory pressure
free -h
grep -E 'MemFree|MemAvailable|SwapCached' /proc/meminfo
# 2. Monitor swap activity in real-time
vmstat 1 5
# 3. Identify processes with highest memory usage
ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head -n 15
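Note that %mem only counts resident pages; to see which processes actually hold swapped-out memory, one option is to read the VmSwap field from /proc (a minimal sketch; VmSwap appears in /proc/<pid>/status on kernels 2.6.34 and later):
# Top 10 swap users by VmSwap (in kB), read from /proc/<pid>/status
for f in /proc/[0-9]*/status; do
    awk '/^Name:/ {name=$2} /^VmSwap:/ {print $2, name}' "$f"
done | sort -rn | head -n 10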
For deeper analysis, combine these tools:
# Trace page reclaim activity (the sysctl is stat_refresh; writing to it
# forces the per-CPU counters to sync, and it needs root)
echo 1 | sudo tee /proc/sys/vm/stat_refresh > /dev/null
grep -E 'pgscan|pgsteal' /proc/vmstat
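Comparing pgsteal to pgscan gives a rough reclaim efficiency: a low ratio means kswapd is scanning many pages to reclaim few, a classic thrashing symptom. A minimal sketch (counter names vary slightly by kernel version; the prefix match below covers the common variants):
# Rough kswapd reclaim efficiency: pages stolen / pages scanned
awk '/^pgscan_kswapd/ {scan += $2} /^pgsteal_kswapd/ {steal += $2}
     END {if (scan) printf "reclaim efficiency: %.1f%%\n", steal/scan*100}' /proc/vmstat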
# Use perf to record vmscan tracepoints system-wide for 10 seconds
sudo perf record -e 'vmscan:*' -a -- sleep 10
sudo perf script
When we encountered this issue with a MySQL server:
# Found MySQL consuming 80% of RAM
ps aux | grep '[m]ysql'   # bracket trick keeps grep from matching itself
# Confirmed with detailed memory mapping
pmap -x $(pgrep mysqld) | sort -nk3 | tail -10
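For reference, one way to check the current buffer pool size from the shell before tuning it (assumes a local mysql client with credentials configured):
# Show the current InnoDB buffer pool size in GiB
mysql -e 'SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gib;'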
# Solution was to adjust innodb_buffer_pool_size
# and add continuous swap monitoring:
Create this shell script for regular checks:
#!/bin/bash
# Log the top memory consumers whenever swap usage crosses a threshold
SWAP_THRESHOLD=80

while true; do
    # Guard against division by zero on systems with no swap configured
    swap_usage=$(free | awk '/Swap/ { if ($2 > 0) printf "%.0f", $3/$2*100; else print 0 }')
    if [ "$swap_usage" -ge "$SWAP_THRESHOLD" ]; then
        echo "High swap detected at $(date)" >> /var/log/swap_alert.log
        ps -eo pid,comm,%mem --sort=-%mem | head -n 10 >> /var/log/swap_alert.log
    fi
    sleep 300
done
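One simple way to keep it running in the background (the script name here is just a placeholder; a systemd service would be more robust):
# Run the monitor detached so it survives the shell session
chmod +x swap_monitor.sh
nohup ./swap_monitor.sh > /dev/null 2>&1 &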
Consider adjusting these values in /etc/sysctl.conf:
vm.swappiness = 10             # prefer reclaiming page cache over swapping
vm.vfs_cache_pressure = 50     # hold on to dentry/inode caches longer
vm.dirty_background_ratio = 5  # start background writeback earlier
vm.dirty_ratio = 10            # throttle writers sooner under dirty-page pressure
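The settings can then be applied without a reboot:
# Load the new values from /etc/sysctl.conf and spot-check one
sudo sysctl -p
sysctl vm.swappiness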
When kswapd (the kernel's background page reclaim daemon) shows sustained high CPU usage (like 100%), it indicates the system is under memory pressure: kswapd is scanning for reclaimable pages, swapping anonymous memory to disk and evicting page cache. The challenge is tracing which specific process(es) are causing this memory pressure.
Here's a systematic approach to identify the root cause:
1. Check Overall Memory Status
# Show memory summary
free -h
# Detailed memory info
cat /proc/meminfo
# Swappiness value
cat /proc/sys/vm/swappiness
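MemAvailable is the number worth watching, since MemFree understates what the kernel can free on demand. A quick sketch of it as a percentage:
# Percentage of memory the kernel estimates is available without swapping
awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2} END {printf "available: %.1f%%\n", a/t*100}' /proc/meminfo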
2. Identify Memory-Hungry Processes
# Top memory consumers
ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head -n 10
# Alternative with RSS
ps -eo pid,cmd,rss --sort=-rss | head -n 5
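If smem is installed (it reports proportional set size and per-process swap from /proc), sorting by the swap column gives a more direct view of who is actually being paged out:
# Top swap consumers with proportional memory accounting (requires smem)
sudo smem -s swap -r -k | head -n 10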
3. Trace Page Faults and Swapping Activity
# Show major page faults (pages that had to be read back from disk or swap)
ps -eo pid,cmd,min_flt,maj_flt --sort=-maj_flt | head -n 5
# Monitor swap activity in real-time
vmstat 1 5
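In vmstat output, the si/so columns are swap-in/swap-out rates in kB/s; sustained nonzero values confirm active swapping. A small sketch (uses gawk's strftime) that timestamps only the samples where swapping occurs:
# Print a timestamp whenever a sample shows swap activity (Ctrl-C to stop)
vmstat 1 | awk 'NR > 2 && ($7 > 0 || $8 > 0) {print strftime("%T"), "si=" $7, "so=" $8; fflush()}'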
Using perf for Deep Analysis
# Sample kswapd stack traces (pgrep -d, comma-joins kswapd0, kswapd1, ...)
sudo perf record -g -p $(pgrep -d, kswapd) -o kswapd_perf.data -- sleep 30
sudo perf report -i kswapd_perf.data
# Trace memory pressure events
sudo perf stat -e 'vmscan:*' -a -- sleep 10
Examining Kernel OOM Killer Logs
# Check kernel messages for OOM events
dmesg | grep -i oom
journalctl -k --grep=oom
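Even when no OOM kill has fired yet, you can see which processes the kernel would target first; a rough sketch reading each process's badness score from /proc:
# Rank processes by the kernel's OOM badness score
for p in /proc/[0-9]*; do
    printf '%s %s %s\n' "$(cat "$p/oom_score" 2>/dev/null)" "${p##*/}" "$(cat "$p/comm" 2>/dev/null)"
done | sort -rn | head -n 10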
In one production incident, we traced kswapd activity to a Java application with a memory leak. The smoking gun was:
# Showed massive RSS and growing maj_flt
ps -p 12345 -o pid,cmd,rss,maj_flt
# perf revealed kswapd was reclaiming pages for this PID
sudo perf record -e 'kmem:mm_page_alloc' -a -g -- sleep 30
Mitigation Strategies
- Adjust swappiness:
sudo sysctl vm.swappiness=10
- Set memory limits with cgroups (see the systemd-run sketch after this list)
- Monitor memory pressure (PSI, available on kernels 4.20+):
cat /proc/pressure/memory
- Consider adding swap space if legitimately needed
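For the cgroup option above, systemd-run is a convenient front end on cgroup-v2 systems; a sketch, where some_command is a placeholder for the workload to cap (property names per systemd.resource-control(5)):
# Launch a command in a transient scope capped at 2 GiB RAM and 1 GiB swap
sudo systemd-run --scope -p MemoryMax=2G -p MemorySwapMax=1G -- some_command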
For more visual analysis:
# Install and run
sudo apt install sysstat
sar -r 1 3 # Memory utilization statistics
# Or use atop for comprehensive monitoring
sudo atop -m