When dealing with slow LAMP stack performance, we need systematic profiling at each layer:
1. Client → Web Server (Apache/Nginx)
2. Web Server → PHP-FPM/FastCGI
3. PHP → Database (MySQL)
4. Database → Storage
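Before diving into any single layer, curl's -w timers give a quick first attribution of where the time goes; the deltas map roughly onto the layers above. A minimal sketch (the timing values below are hypothetical sample output, and the localhost URL is a placeholder):

```shell
#!/bin/sh
# Live measurement (run against your own URL):
#   curl -s -o /dev/null -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' http://localhost/wiki/Main_Page
# Hypothetical sample output, and the per-layer arithmetic:
sample='dns=0.004 connect=0.005 ttfb=0.812 total=0.890'
echo "$sample" | awk '{
  for (i = 1; i <= NF; i++) { split($i, kv, "="); t[kv[1]] = kv[2] }
  # ttfb - connect ~ web server + PHP + database time
  printf "server-side: %.3fs\n", t["ttfb"] - t["connect"]
  # total - ttfb ~ response transfer to the client
  printf "transfer:    %.3fs\n", t["total"] - t["ttfb"]
}'
```

If server-side time dominates (as in this sample), the bottleneck sits in layers 2-4 and the profiling below is where to look next.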
First, let's examine Apache's performance with mod_status:
# Enable mod_status in the Apache config
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    # Restrict access: replace example.com with your admin host,
    # or use "Require local" for server-only access
    Require host example.com
</Location>
Key metrics to watch:
- Requests currently being processed (BusyWorkers)
- Idle workers (IdleWorkers)
- CPU load (CPULoad)
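The ?auto variant of the status page returns these same metrics in machine-readable form (CPULoad requires ExtendedStatus On), which makes them easy to script against. A sketch using hypothetical sample values:

```shell
#!/bin/sh
# In production, replace the sample with:
#   status=$(curl -s http://localhost/server-status?auto)
status='Total Accesses: 1234
CPULoad: .0312
BusyWorkers: 5
IdleWorkers: 20'
busy=$(printf '%s\n' "$status" | awk -F': ' '/^BusyWorkers/ {print $2}')
idle=$(printf '%s\n' "$status" | awk -F': ' '/^IdleWorkers/ {print $2}')
echo "busy=$busy idle=$idle"
# A shrinking idle pool means the worker limit is close to exhaustion
if [ "$idle" -lt 2 ]; then
  echo "WARNING: Apache worker pool nearly exhausted"
fi
```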
For PHP profiling, XHProf provides detailed call graphs:
# Install the XHProf extension (shell)
pecl install xhprof
// Sample profiling code
xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);
// Your application code
// ...
$xhprof_data = xhprof_disable();
file_put_contents('/tmp/xhprof.json', json_encode($xhprof_data));
Enable slow query logging:
# In my.cnf
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1
log_queries_not_using_indexes = 1
Analyze with pt-query-digest:
pt-query-digest /var/log/mysql/mysql-slow.log > slow-query-analysis.txt
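To see roughly what pt-query-digest automates, here is a rough awk sketch that pairs each Query_time header with its statement and keeps only the slow ones (the log excerpt is a hypothetical sample in the standard slow-log format):

```shell
#!/bin/sh
# Hypothetical slow-log excerpt; in practice read /var/log/mysql/mysql-slow.log
log='# Query_time: 4.720000  Lock_time: 0.000120 Rows_sent: 1  Rows_examined: 950000
SELECT * FROM page WHERE page_title LIKE "%Home%";
# Query_time: 0.350000  Lock_time: 0.000080 Rows_sent: 20  Rows_examined: 4000
SELECT * FROM revision WHERE rev_page = 42;'
printf '%s\n' "$log" | awk '
  /^# Query_time:/ { qt = $3; next }        # remember the timing line
  /^SELECT|^UPDATE|^INSERT|^DELETE/ {       # pair it with the statement
    if (qt >= 1) printf "%ss  %s\n", qt, $0
    qt = ""
  }'
```

pt-query-digest goes much further (fingerprinting, percentiles, grouping), so this is only a fallback when the tool is unavailable.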
For full-stack tracing, consider using Blackfire:
# Installation
wget -O - https://packagecloud.io/gpg.key | apt-key add -
echo "deb http://packages.blackfire.io/debian any main" | tee /etc/apt/sources.list.d/blackfire.list
apt update && apt install blackfire-php blackfire-agent
Common MediaWiki optimizations:
# LocalSettings.php optimizations
$wgMainCacheType = CACHE_MEMCACHED;
$wgMemCachedServers = ['127.0.0.1:11211'];
$wgParserCacheType = CACHE_MEMCACHED; // $wgEnableParserCache was removed in modern MediaWiki
$wgCachePages = true;
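Once memcached is in place, it's worth confirming that MediaWiki actually gets cache hits. memcached's stats command reports get_hits and get_misses counters; a sketch of the hit-rate arithmetic using hypothetical sample counters (live values come from `printf 'stats\nquit\n' | nc 127.0.0.1 11211`):

```shell
#!/bin/sh
# Hypothetical sample of memcached "stats" output; live data via:
#   printf 'stats\nquit\n' | nc 127.0.0.1 11211
stats='STAT get_hits 9500
STAT get_misses 500'
printf '%s\n' "$stats" | awk '
  $2 == "get_hits"   { h = $3 }
  $2 == "get_misses" { m = $3 }
  END { printf "cache hit rate: %.1f%%\n", 100 * h / (h + m) }'
```

A low hit rate usually points at an undersized memcached instance or cache keys being invalidated too aggressively.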
Create a simple monitoring script:
#!/bin/bash
# Monitor LAMP components
apache_status=$(systemctl is-active apache2)          # "active", "inactive", or "failed"
mysql_queries=$(mysqladmin status | awk '{print $6}') # value of the "Questions" counter
php_processes=$(pgrep -c php-fpm)
echo "Apache: $apache_status"
echo "MySQL Queries: $mysql_queries"
echo "PHP Processes: $php_processes"
Stepping back: when a MediaWiki instance, or any PHP application, turns sluggish, the first step is always to identify which component is the bottleneck, since every layer of the LAMP stack can contribute. The sections below work through each layer in more depth.
Begin with system-wide monitoring to identify resource constraints:
# Check overall CPU usage
top -c
# Check memory usage
free -m
# Disk I/O monitoring
iostat -x 1
# Network traffic
iftop
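The quickest sanity check from those numbers is load average relative to core count: sustained load above the number of cores means requests are queuing for CPU. A small Linux-specific sketch (reads /proc/loadavg):

```shell
#!/bin/sh
# Compare the 1-minute load average against the number of online CPUs (Linux)
load=$(cut -d' ' -f1 /proc/loadavg)
cpus=$(getconf _NPROCESSORS_ONLN)
awk -v l="$load" -v c="$cpus" 'BEGIN {
  printf "load per core: %.2f\n", l / c
  if (l > c) print "CPU is saturated; expect request queuing"
}'
```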
For Apache-specific analysis, enable mod_status and use these tools:
# In httpd.conf:
ExtendedStatus On
<Location /server-status>
SetHandler server-status
Require local
</Location>
# Real-time monitoring
apachetop -f /var/log/apache2/access.log
Key metrics to watch:
- Requests per second
- CPU usage per request
- Keep-alive utilization
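If apachetop isn't installed, requests per second can be derived straight from the access-log timestamps. A sketch over a hypothetical combined-log excerpt (in practice, pipe in `tail -n 10000 /var/log/apache2/access.log`):

```shell
#!/bin/sh
# Hypothetical combined-format access-log sample
log='127.0.0.1 - - [10/Oct/2024:13:55:36 +0000] "GET /wiki/Main_Page HTTP/1.1" 200 5316
127.0.0.1 - - [10/Oct/2024:13:55:36 +0000] "GET /wiki/Special:RecentChanges HTTP/1.1" 200 4102
127.0.0.1 - - [10/Oct/2024:13:55:37 +0000] "POST /wiki/api.php HTTP/1.1" 200 312'
# Count requests per second of timestamp; busiest seconds first
printf '%s\n' "$log" | awk -F'[][]' '{ split($2, a, " "); n[a[1]]++ }
  END { for (s in n) print n[s], s }' | sort -rn
```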
The slow query log configuration shown earlier applies here unchanged. Besides pt-query-digest, the mysqldumpslow tool that ships with MySQL gives a quick summary sorted by query time:
mysqldumpslow -s t /var/log/mysql/mysql-slow.log
Use EXPLAIN for problematic queries:
EXPLAIN SELECT * FROM wiki_pages WHERE page_title = 'Home';
XHProf (installed via pecl as shown earlier) must also be enabled in php.ini before it can collect function-level data:
# In php.ini:
[xhprof]
extension=xhprof.so
xhprof.output_dir="/tmp/xhprof"
Sample profiling code:
<?php
xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);
// Your application code here
$xhprof_data = xhprof_disable();
include_once "/path/to/xhprof_lib/utils/xhprof_lib.php";
include_once "/path/to/xhprof_lib/utils/xhprof_runs.php";
$xhprof_runs = new XHProfRuns_Default();
$run_id = $xhprof_runs->save_run($xhprof_data, "xhprof_test");
?>
Further PHP and Apache tuning that benefits MediaWiki includes:
; Enable OPcache in php.ini
opcache.enable=1
opcache.memory_consumption=128
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=4000
opcache.revalidate_freq=60
# Apache KeepAlive settings
KeepAlive On
KeepAliveTimeout 2
MaxKeepAliveRequests 100
Run load tests to simulate traffic:
siege -c50 -t1M http://yoursite/wiki/Main_Page
Monitor how performance degrades under load to identify scaling limits.
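Siege's summary is easy to capture per run and track across load levels; a sketch that pulls the headline numbers out of a hypothetical report:

```shell
#!/bin/sh
# Hypothetical siege summary; a live run prints this after the test
report='Transactions:               2642 hits
Availability:              100.00 %
Response time:              0.11 secs
Transaction rate:          44.63 trans/sec'
rate=$(printf '%s\n' "$report" | awk '/^Transaction rate/ {print $3}')
avail=$(printf '%s\n' "$report" | awk '/^Availability/ {print $2}')
echo "throughput=${rate}req/s availability=${avail}%"
```

Watching transaction rate and availability together as concurrency rises makes the scaling limit explicit: throughput plateaus first, then availability drops.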
Create a performance baseline by:
- Recording key metrics before changes
- Implementing one optimization at a time
- Measuring the impact of each change
- Documenting results for future reference
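The baseline itself can be as simple as a timestamped CSV that each tuning round appends to. A minimal Linux sketch (the load-average column is illustrative; add whichever metrics your stack exposes, and the CSV path is an assumption):

```shell
#!/bin/sh
# Append one timestamped baseline row per run (Linux: reads /proc/loadavg)
csv="${BASELINE_CSV:-/tmp/lamp-baseline.csv}"
[ -f "$csv" ] || echo "timestamp,load1,note" > "$csv"
load=$(cut -d' ' -f1 /proc/loadavg)
echo "$(date +%s),$load,${1:-no-note}" >> "$csv"
tail -n 1 "$csv"
```

Run it once before and once after each change (e.g. `sh baseline.sh enabled-opcache`) so every optimization has a measured before/after pair.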