How to Diagnose and Fix Memory Leaks on a Linux Server Running a Liquidsoap+Icecast and HTTPD+MySQL Stack

Memory leaks can be particularly insidious in long-running media streaming servers. The gradual memory consumption often goes unnoticed until critical failure occurs. Here's how to systematically identify the culprit in your Linux environment.

While top gives a basic overview, we need more surgical instruments:


# Install essential monitoring tools
sudo apt-get install htop smem sysstat

# Real-time memory monitoring
sudo htop --sort-key=PERCENT_MEM
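
The sysstat package installed above also records memory history, which helps confirm whether usage grows steadily or in sudden steps. A quick check (the history file only exists if the sysstat collector is enabled):

# Live memory samples: five readings, one per second
sar -r 1 5

# Today's recorded history
sar -r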

For deeper analysis, Linux provides several specialized tools:


# Process-specific memory usage
sudo pmap -x $(pidof liquidsoap)

# Track memory allocations (valgrind slows execution dramatically — run it
# against a test instance, and pass your actual script; the path is a placeholder)
sudo valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes \
    --verbose --log-file=valgrind-out.txt /usr/bin/liquidsoap /path/to/your.liq
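
Valgrind's verbose log buries the key lines; once the test run finishes, pull out the definite leaks:

# Show definite leaks with a few lines of each allocation stack
grep -A6 "definitely lost" valgrind-out.txt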

When dealing with streaming stacks, focus on these potential leak areas (quick checks for each follow the list):

  • Icecast: Connection pooling issues
  • Liquidsoap: Unreleased audio buffers
  • MySQL: Query cache fragmentation
  • HTTPD: Keep-alive connections
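
As a rough sketch, one quick check per component — the admin credentials, ports, and URLs are placeholders for your own setup, and the query cache check applies only to MySQL 5.7/MariaDB (the query cache was removed in MySQL 8.0):

# Icecast: watch client/source counts for steady growth
curl -s -u admin:PASSWORD http://localhost:8000/admin/stats | grep -E "<(clients|sources)>"

# MySQL: query cache state (5.7/MariaDB only)
mysql -uroot -p -e "SHOW STATUS LIKE 'Qcache%';"

# HTTPD: worker and keep-alive state (requires mod_status)
curl -s "http://localhost/server-status?auto" | grep -E "Workers|ConnsAsyncKeepAlive"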

Implement these emergency measures while diagnosing:


# Clear page cache, dentries and inodes (non-destructive, but expect a brief I/O hit as caches refill)
echo 3 | sudo tee /proc/sys/vm/drop_caches

# Cap memory per service (MemoryHigh throttles, MemoryMax OOM-kills; MemoryHigh requires cgroup v2)
sudo systemctl set-property httpd.service MemoryHigh=2G MemoryMax=3G
sudo systemctl set-property liquidsoap.service MemoryHigh=3G MemoryMax=4G
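
set-property writes a persistent drop-in; add --runtime if you want the caps to last only until the next reboot. Confirm they took effect with:

# Show the effective limits
systemctl show httpd.service -p MemoryHigh -p MemoryMax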

Create a simple watchdog script to detect memory trends:


#!/bin/bash
THRESHOLD=85
CHECK_INTERVAL=300

while true; do
    MEM_USAGE=$(free | awk '/Mem:/ {printf("%d", $3/$2*100)}')
    if [ "$MEM_USAGE" -gt "$THRESHOLD" ]; then
        echo "$(date) - Memory usage $MEM_USAGE%" >> /var/log/mem_watchdog.log
        # Insert your mitigation commands here
    fi
    sleep $CHECK_INTERVAL
done
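
One way to keep the watchdog running after you log out (the file name and install path are just examples):

sudo install -m 755 mem_watchdog.sh /usr/local/bin/mem_watchdog.sh
sudo nohup /usr/local/bin/mem_watchdog.sh >/dev/null 2>&1 &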

For Liquidsoap, disabling server components you don't use reduces the memory surface:


# In liquidsoap.liq (Liquidsoap 2.x syntax)
settings.server.telnet := false            # disable the telnet command server if unused
settings.harbor.bind_addrs := ["0.0.0.0"]  # bind harbor listeners only where needed

When your 8GB Linux server gradually consumes all available memory despite stable traffic (50 concurrent users, 2000 daily visits), you're likely facing one of three scenarios (the /proc/meminfo check after the list helps tell them apart):

  • Application memory leaks (most probable in this case)
  • Kernel slab fragmentation
  • Filesystem cache not being properly released
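
A quick /proc/meminfo snapshot separates the suspects: AnonPages tracks application heap, Slab/SUnreclaim tracks kernel slabs, and Cached is reclaimable filesystem cache:

grep -E "^(MemAvailable|Cached|AnonPages|Slab|SReclaimable|SUnreclaim)" /proc/meminfo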

Basic top output won't reveal proportional or kernel-side memory usage; these tools will:

# Install investigation toolkit
sudo apt install smem numactl linux-tools-common

# Proportional set size (PSS): shared pages split fairly across processes, unlike plain RSS
smem -t -k -P "liquidsoap|icecast|httpd|mysqld"

# Per-process mapped totals (remove the tail to see every mapping)
sudo pmap -x $(pgrep liquidsoap) | tail -n 1

# Kernel slab anomalies
sudo slabtop -o | head -20

Add this to your liquidsoap script for runtime monitoring:

# Log a memory report every 5 minutes (Liquidsoap 2.x API)
def memory_debug() =
  mem_stats = process.read("ps -p #{getpid()} -o rss=,vsz=,pmem=")
  log("Memory report: #{mem_stats}")
end

# Start the monitoring thread
thread.run(every=300., memory_debug)
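
If Liquidsoap runs under systemd, the reports land in the journal; the unit name here is an assumption:

journalctl -u liquidsoap -f | grep "Memory report"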

Run these SQL commands during peak usage:

-- Use the x$ view for raw byte counts (the formatted view returns strings)
SELECT * FROM sys.x$memory_global_by_current_bytes
WHERE current_alloc > 100*1024*1024; -- Show allocations >100MB

SELECT event_name, CURRENT_NUMBER_OF_BYTES_USED/1024/1024 AS size_mb
FROM performance_schema.memory_summary_global_by_event_name
ORDER BY size_mb DESC LIMIT 10;
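
If both queries return nothing, memory instrumentation is likely disabled — the default on MySQL 5.7, while 8.0 enables it out of the box. It can be switched on at runtime, though only allocations made after the change are counted:

-- Enable memory instrumentation at runtime
UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES'
WHERE NAME LIKE 'memory/%';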

Try these before rebooting:

# MySQL cache flush (existing connections survive; flush-tables may briefly block queries)
mysqladmin -uroot -p flush-hosts flush-tables flush-privileges

# Gentle Liquidsoap reload — only if your script installs a USR1 handler; otherwise use systemctl restart
sudo systemctl kill -s USR1 liquidsoap.service

Add this cron job (sudo crontab -e):

# Hourly memory watchdog: alert and restart Liquidsoap when available memory drops below 500 MB
# (cron entries must be a single line; $7 of free -m is the "available" column on modern procps)
0 * * * * [ "$(free -m | awk '/Mem:/ {print $7}')" -lt 500 ] && alert-script.sh && systemctl restart liquidsoap

For persistent leaks, capture allocation patterns:

sudo perf record -e syscalls:sys_enter_brk -a -g -- sleep 60
sudo perf script | awk '/liquidsoap/ && /brk/ {print $0}' | head -50
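
brk covers only part of the allocator's traffic; glibc hands large allocations to mmap, so capture those the same way:

sudo perf record -e syscalls:sys_enter_mmap -a -g -- sleep 60
sudo perf script | awk '/liquidsoap/ && /mmap/ {print $0}' | head -50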