When MariaDB crashes with "Out of memory" errors despite having available RAM, we're typically dealing with memory fragmentation or improper buffer pool configuration. The key error patterns we see in your logs:
InnoDB: mmap(137756672 bytes) failed; errno 12
InnoDB: Fatal error: cannot allocate memory for the buffer pool
What's particularly interesting is that your free -m output shows available memory, but MariaDB can't allocate contiguous blocks. This suggests one of the following:
- Kernel memory fragmentation
- Overcommit memory settings being too restrictive
- Multiple memory-intensive processes competing for resources
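Before touching the config, it is worth confirming which of these applies. A couple of read-only checks (nothing here is specific to MariaDB, just standard /proc files):
cat /proc/sys/vm/overcommit_memory                  # 2 means strict accounting - the mode most likely to refuse a large mmap()
grep -E 'CommitLimit|Committed_AS' /proc/meminfo    # how close the system already is to its commit limit
cat /proc/buddyinfo                                 # free pages per allocation order, a rough view of fragmentation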
First, let's optimize your my.cnf settings for a 1GB RAM VPS:
[mysqld]
innodb_buffer_pool_size = 128M # Keep at or below 128M on a 1GB VPS; raising it only makes the failed mmap() larger
innodb_log_file_size = 64M # On 5.5, shut down cleanly and remove the old ib_logfile* before changing this
innodb_log_buffer_size = 16M
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_file_per_table = ON
innodb_open_files = 600
key_buffer_size = 32M
query_cache_size = 0 # Disable the query cache to save memory
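After restarting MariaDB you can confirm the server actually picked these up (adjust credentials as needed; the buffer pool size is reported in bytes):
mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';"
mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'innodb_flush_method';"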
Add these to /etc/sysctl.conf and run sysctl -p:
vm.overcommit_memory = 0 # heuristic overcommit; strict accounting (2) with a low ratio is exactly what makes large mmap() calls fail with errno 12
vm.overcommit_ratio = 80 # only consulted when overcommit_memory = 2; harmless to leave in place
vm.swappiness = 10
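sysctl -p prints the values it applied; you can also read them back at any time to make sure nothing reverted after a reboot:
sysctl vm.overcommit_memory vm.overcommit_ratio vm.swappiness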
Create a simple monitoring script (check_mariadb_mem.sh):
#!/bin/bash
# Restart MariaDB when overall memory usage crosses the threshold (requires bc)
MEM_THRESHOLD=90   # percent of total RAM in use
CURRENT_MEM=$(free | awk '/Mem:/ {print $3/$2 * 100.0}')
if (( $(echo "$CURRENT_MEM > $MEM_THRESHOLD" | bc -l) )); then
    echo "High memory usage detected: ${CURRENT_MEM}%"
    systemctl restart mariadb
    echo "$(date): MariaDB restarted due to high memory" >> /var/log/mariadb_restarts.log
fi
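The script runs once and exits, so it needs to be scheduled. A crontab entry (added with crontab -e as root) running it every five minutes would look like this; the install path is only an example:
*/5 * * * * /usr/local/bin/check_mariadb_mem.sh >> /var/log/mariadb_mem_check.log 2>&1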
For more control, consider running MariaDB in a Docker container with memory limits:
docker run --name mariadb \
-e MYSQL_ROOT_PASSWORD=yourpassword \
--memory=800m \
--memory-swap=1800m \
-v /mydata/mariadb:/var/lib/mysql \
-d mariadb:5.5
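Once the container is up, a quick check that MariaDB stays inside the limit:
docker stats mariadb --no-stream                       # current memory use vs. the 800m limit
docker inspect -f '{{.HostConfig.Memory}}' mariadb     # confirms the configured limit in bytes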
When dealing with MariaDB crashes showing "Out of memory" errors followed by InnoDB initialization failures, we're typically facing one of these scenarios:
# Key error patterns to watch for:
1. Failed buffer pool allocation (mmap failure with errno 12)
2. InnoDB heap disabled warnings
3. Storage engine registration failures
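A quick way to pull these patterns out of the error log (the path is an assumption - /var/log/mysql/error.log on Debian/Ubuntu, /var/log/mariadb/mariadb.log on RHEL-family systems):
grep -iE 'mmap.*failed|cannot allocate memory|out of memory|heap is disabled' /var/log/mysql/error.log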
The free -m output reveals a critical detail masked by the "free" memory numbers:
# Actual available memory calculation:
Total RAM: 994MB
"Free" showing: 71MB
But real available: 508MB (after buffers/cache)
This discrepancy is why naive monitoring misreads the box: the headline "free" column ignores reclaimable cache, so roughly 508MB is actually usable. It also means the failed allocations are less about a genuine shortage of RAM and more about how the kernel accounts for them - which is what the overcommit settings above address.
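To read the real figure directly instead of the headline "free" column (MemAvailable needs kernel 3.14+; the second line assumes the older free output format shown above):
awk '/MemAvailable/ {printf "%.0f MB really available\n", $2/1024}' /proc/meminfo
free -m | awk '/buffers\/cache/ {print $4 " MB really available"}'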
The current my.cnf settings need several adjustments for stable operation:
[mysqld]
# Current problematic settings:
innodb_buffer_pool_size = 128M # Too aggressive for this VM
innodb_log_buffer_size = 8M # Could be optimized
# Recommended adjustments:
innodb_buffer_pool_size = 64M
innodb_log_buffer_size = 4M
innodb_flush_neighbors = 0 # Important for SSD
innodb_read_io_threads = 4
innodb_write_io_threads = 4
Immediate diagnostic commands to run:
# Check memory fragmentation:
cat /proc/buddyinfo
# Verify process limits (use -nx so pgrep returns a single, exact match):
cat /proc/$(pgrep -nx mysqld)/limits | grep -iE 'memory|address space'
# Monitor allocation attempts (-f follows all threads; detach with Ctrl+C):
strace -f -e trace=mmap,munmap -p $(pgrep -nx mysqld)
For VPS environments with limited resources, consider these additional optimizations:
# Reduce connection overhead
max_connections = 30
thread_cache_size = 4
# Optimize table handling
table_open_cache = 200
table_definition_cache = 100
# Memory-safe query processing
tmp_table_size = 16M
max_heap_table_size = 16M
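As a sanity check that these figures fit into 1GB, here is a rough worst-case budget; the per-session buffer sizes are assumed MariaDB defaults, not values taken from your config:
# Global buffers:   64M buffer pool + 32M key buffer + 4M log buffer            ~ 100 MB
# Per connection:   ~3 MB (sort/read/join buffers + thread stack) x 30 conns    ~  90 MB
# Temp tables:      16M in-memory tmp tables on a few busy sessions             ~  50 MB
# Worst case total:                                                              ~ 240 MB
That leaves several hundred MB of headroom for the OS, filesystem cache, and anything else running on the VPS.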
Implement these monitoring solutions to prevent future crashes:
#!/bin/bash
# Simple monitoring script: alert and restart MariaDB when available memory drops too low
ALERT_THRESHOLD=80   # MB of available memory
while true; do
    # Older procps prints a "-/+ buffers/cache" row; newer versions expose MemAvailable instead
    AVAIL_MEM=$(free -m | awk '/buffers\/cache/ {print $NF}')
    [ -z "$AVAIL_MEM" ] && AVAIL_MEM=$(awk '/MemAvailable/ {printf "%d", $2/1024}' /proc/meminfo)
    if [ "$AVAIL_MEM" -lt "$ALERT_THRESHOLD" ]; then
        echo "WARNING: Low available memory - ${AVAIL_MEM} MB" | mail -s "MariaDB Memory Alert" admin@example.com
        systemctl restart mariadb
    fi
    sleep 300
done
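Because this version loops forever, run it as a service rather than from cron. A minimal systemd unit is sketched below; the unit name and script path are assumptions, so adjust them to wherever you install the script:
# /etc/systemd/system/mariadb-memwatch.service
[Unit]
Description=MariaDB memory watchdog

[Service]
ExecStart=/usr/local/bin/mariadb-memwatch.sh
Restart=always

[Install]
WantedBy=multi-user.target
Enable it with systemctl daemon-reload followed by systemctl enable --now mariadb-memwatch.service.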
When configuration tuning isn't enough, consider these architectural changes:
- Migrate to MySQL 5.7+ with better memory management
- Implement connection pooling at application level
- Offload read operations to read replicas
- Upgrade to a VPS with dedicated RAM resources