While MySQLTuner reports a theoretical maximum memory usage of 1.5GB, real-world observation shows memory consumption ballooning to 25GB+ over time. This discrepancy stems from several MyISAM-specific behaviors that standard monitoring tools often miss.
Your current key_buffer_size (1GB) may look adequate for 5.3GB of MyISAM indexes, but a key-buffer write hit rate of only 5% points to severe inefficiency:
# Current configuration
key_buffer_size = 1G
Consider raising the key buffer and tuning how it manages blocks (note: segmented key caches, `key_cache_segments`, are a MariaDB feature and not available in MySQL 5.7):
# Improved key buffer configuration
key_buffer_size = 2G
key_cache_block_size = 128K
key_cache_division_limit = 60   # enable midpoint insertion to protect hot index blocks
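On stock MySQL, multiple named key caches can serve a similar purpose: hot indexes get a dedicated cache that full scans of other tables cannot evict. A sketch (the cache name `hot_cache` and the table name `orders` are placeholders):

```sql
-- Create a second, named key cache at runtime (256 MB)
SET GLOBAL hot_cache.key_buffer_size = 268435456;

-- Assign the table's indexes to it, then preload them sequentially
CACHE INDEX orders IN hot_cache;
LOAD INDEX INTO CACHE orders;
```

Index assignments do not survive a restart, so they are typically replayed from an init_file.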
The 0% table cache hit rate with 1M opened tables reveals a critical issue. MySQL 5.7 has known MyISAM table cache scalability problems (Bug #49177). Immediate adjustments needed:
table_open_cache = 2000
table_definition_cache = 2000
open_files_limit = 65535
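open_files_limit only takes effect if the operating system actually grants mysqld that many descriptors. A quick sanity check (a sketch; assumes a single mysqld process):

```shell
#!/bin/sh
# Compare the requested open_files_limit against what the OS grants mysqld.
PID=$(pgrep -o mysqld 2>/dev/null)
if [ -n "$PID" ]; then
    grep 'Max open files' "/proc/$PID/limits"
else
    echo "mysqld not running; current shell limit: $(ulimit -n)"
fi
```

If the granted limit is below 65535, raise it via systemd's LimitNOFILE or /etc/security/limits.conf before trusting the my.cnf value.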
Despite 32MB allocated to the query cache, your system shows 386,310 prunes per day - indicating severe cache churn. For read-heavy MyISAM workloads:
query_cache_type = 2           # DEMAND mode: cache only queries that ask for it
query_cache_size = 64M
query_cache_min_res_unit = 2K
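With query_cache_type = 2 (DEMAND), nothing is cached unless the statement opts in, which stops the churn from uncacheable queries. A sketch (table and column names are placeholders):

```sql
-- Cached only because SQL_CACHE is requested explicitly
SELECT SQL_CACHE customer_id, total FROM orders WHERE status = 'open';

-- Never cached under DEMAND mode
SELECT customer_id, total FROM orders WHERE status = 'open';
```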
The 57,882 joins without indexes represent memory-hungry operations. Implement these session-level controls:
SET SESSION join_buffer_size = 2097152;   -- 2M; SET does not accept the "2M" suffix
SET SESSION max_join_size = 1000000;
SET SESSION optimizer_switch = 'index_condition_pushdown=off';
This shell script helps track real memory usage beyond standard tools:
#!/bin/bash
# Track mysqld resident memory and key cache efficiency every 5 minutes.
while true; do
    DATE=$(date '+%Y-%m-%d %H:%M:%S')
    MYISAM_MEM=$(ps -eo rss,cmd | grep mysqld | grep -v grep | head -n1 | awk '{print $1/1024/1024}')
    # SHOW STATUS returns Key_read_requests before Key_reads (alphabetical),
    # so $1 = requests and $2 = reads; efficiency = 100 - reads/requests * 100
    CACHE_EFFICIENCY=$(mysql -e "SHOW STATUS LIKE 'Key_read%'" \
        | awk '/Key_read_requests|Key_reads/ {print $2}' | xargs \
        | awk '{print 100 - ($2 / $1 * 100)}')
    echo "$DATE - mysqld RSS: ${MYISAM_MEM}GB | Key Cache Eff: ${CACHE_EFFICIENCY}%"
    sleep 300
done
For MyISAM-specific tuning, add these to your my.cnf:
myisam_sort_buffer_size = 64M
bulk_insert_buffer_size = 32M
concurrent_insert = 2
delay_key_write = ALL   # faster writes, but indexes can be left corrupt after a crash
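delay_key_write = ALL trades crash safety for speed: changed key blocks are flushed only when a table is closed, so a crash can leave indexes needing repair. Before backups or planned shutdowns, force the delayed writes out (a sketch; the table name is a placeholder):

```sql
-- Flush delayed key writes and close all tables before a backup
FLUSH TABLES;

-- Or target a single table
FLUSH TABLES orders;
```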
When memory exhaustion leads to crashes, implement this recovery handler in /etc/mysql/conf.d/recovery.cnf:
[mysqld]
myisam_recover_options = BACKUP,FORCE
myisam_use_mmap = 0
myisam_max_sort_file_size = 10G
While MySQLTuner reports a maximum possible memory usage of 1.5GB, our monitoring shows MySQL gradually consuming all available RAM (25GB+) within 48 hours. The discrepancy suggests one or more of the following:
- Unaccounted memory allocations by MyISAM storage engine
- Memory fragmentation issues
- Kernel-level caching not reflected in process statistics
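The third possibility is easy to rule in or out: the kernel page cache (which holds MyISAM data files, since only indexes live in the key buffer) shows up as used RAM but not in mysqld's RSS. A sketch that separates the two on Linux:

```shell
#!/bin/sh
# Kernel page cache vs. mysqld resident memory, both in MB.
PAGE_CACHE_MB=$(awk '/^Cached:/ {print int($2/1024)}' /proc/meminfo)
MYSQLD_RSS_MB=$(ps -eo rss,comm | awk '$2 == "mysqld" {sum += $1} END {print int(sum/1024)}')
echo "page cache: ${PAGE_CACHE_MB} MB, mysqld RSS: ${MYSQLD_RSS_MB} MB"
```

If the page cache accounts for most of the "missing" memory, the box is healthy: that memory is reclaimable and the 25GB figure is not a leak.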
Your configuration reveals several MyISAM-specific optimization opportunities:
# Current problematic settings
key_buffer_size = 1G
join_buffer_size = 512K
read_buffer_size = 256K
sort_buffer_size = 512K
Execute these while memory usage is climbing:
SHOW STATUS LIKE 'Key%';
SHOW STATUS LIKE 'Created%';
SHOW STATUS LIKE 'Threads%';
SHOW GLOBAL STATUS LIKE 'Handler%';  -- SHOW ENGINE ... STATUS only supports InnoDB in 5.7
SHOW VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL STATUS LIKE 'Open%tables';
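To turn the Created% counters into something actionable, compute the share of implicit temporary tables that spilled to disk. A sketch (the two arguments are Created_tmp_disk_tables and Created_tmp_tables taken from the SHOW STATUS output above):

```shell
#!/bin/sh
# Percentage of temporary tables created on disk rather than in memory.
tmp_disk_pct() {
    awk -v d="$1" -v t="$2" 'BEGIN { if (t > 0) printf "%.2f\n", d / t * 100; else print "0.00" }'
}

tmp_disk_pct 100 400   # → 25.00
```

Values well above ~25% usually mean tmp_table_size/max_heap_table_size are too small for the workload, or queries select BLOB/TEXT columns that can never use in-memory temp tables.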
Your MySQLTuner output shows alarming table cache metrics:
- Table cache hit rate: 0% (416 open / 1M opened)
- This indicates severe cache thrashing - MySQL constantly opens/closes table definitions
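The hit rate MySQLTuner reports is simply currently open tables divided by tables ever opened; recomputing it makes the thrashing concrete. A sketch using the figures above:

```shell
#!/bin/sh
# Table cache hit rate: currently open tables / tables ever opened, in percent.
table_cache_hit_rate() {
    awk -v open="$1" -v opened="$2" 'BEGIN { printf "%.2f\n", open / opened * 100 }'
}

table_cache_hit_rate 416 1000000   # → 0.04
```

Anything under a few percent means table_open_cache is far too small for the number of distinct tables the workload touches.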
Here's an optimized my.cnf section for MyISAM-heavy workloads:
[mysqld]
# MyISAM Optimization
key_buffer_size = 4G
myisam_sort_buffer_size = 256M
concurrent_insert = 2
bulk_insert_buffer_size = 64M
# Table Cache
table_open_cache = 2000
table_definition_cache = 1400
open_files_limit = 4000
# Connection Buffers
thread_cache_size = 32
tmp_table_size = 64M
max_heap_table_size = 64M
# Query Cache (controversial)
query_cache_size = 0 # Disable for high-write systems
query_cache_type = 0
# Per-thread Buffers (DANGER ZONE)
read_buffer_size = 128K
read_rnd_buffer_size = 256K
sort_buffer_size = 256K
join_buffer_size = 256K
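The "danger zone" comment is about multiplication: each of those buffers can be allocated per connection, so worst-case usage is the global buffers plus max_connections times the per-thread sum. A sketch of the arithmetic (500 connections is an assumed figure, not from your config):

```shell
#!/bin/sh
# Worst-case per-thread memory in MB: per-connection buffer sum (KB) x connections.
worst_case_mb() {
    awk -v kb="$1" -v conns="$2" 'BEGIN { printf "%.1f\n", kb * conns / 1024 }'
}

# read 128K + read_rnd 256K + sort 256K + join 256K = 896K per connection
worst_case_mb 896 500   # → 437.5
```

This is why per-thread buffers stay small in my.cnf and get raised per-session only for the queries that need them.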
Create a cron job with this bash script to track memory trends:
#!/bin/bash
LOG_FILE="/var/log/mysql_memory.log"
echo "$(date) - MySQL Memory Snapshot" >> $LOG_FILE
# MySQL 5.7 has no 'Memory%' status variables; use the sys schema instead
# (requires performance_schema memory instruments to be enabled)
mysql -e "SELECT * FROM sys.memory_global_by_current_bytes LIMIT 10;" >> $LOG_FILE
echo "OS Report:" >> $LOG_FILE
ps -eo pid,user,pmem,cmd | grep mysqld | grep -v grep >> $LOG_FILE
free -m >> $LOG_FILE
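Saved as, say, /usr/local/bin/mysql_mem_snapshot.sh (the path is a placeholder) and made executable, the script can be scheduled every five minutes:

```shell
# /etc/cron.d/mysql-memory — run the snapshot script every 5 minutes as root
*/5 * * * * root /usr/local/bin/mysql_mem_snapshot.sh
```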
For systems that must stay online, implement an automated restart trigger:
#!/bin/bash
THRESHOLD=85 # Percentage of RAM usage
CURRENT=$(free | awk '/Mem:/ {print $3/$2 * 100}')
if (( $(echo "$CURRENT > $THRESHOLD" | bc -l) )); then
    systemctl restart mysql
    echo "$(date) - MySQL restarted due to memory pressure" >> /var/log/mysql_restart.log
fi
For long-term stability, consider these architectural changes:
- Convert frequently accessed tables to InnoDB with strict buffer pool sizing
- Implement partitioning for large MyISAM tables
- Move to a dedicated database server if possible
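The first recommendation is the one with lasting impact: InnoDB's memory ceiling is set explicitly, so usage stops drifting. As a sketch, a single table can be converted like this (the table name is a placeholder; the rebuild locks out writes in 5.7, so schedule it during low traffic):

```sql
-- Convert a hot table to InnoDB; verify with SHOW TABLE STATUS afterwards
ALTER TABLE orders ENGINE = InnoDB;
```

Pair the conversion with an explicit innodb_buffer_pool_size in my.cnf so the new engine's memory is bounded from day one.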