When I recently benchmarked identical Drupal installations across production (separate web/db servers) and development (localhost) environments, the results were staggering:
```
// Production (100Mbps LAN)
291 queries: 320.33ms (homepage)
517 queries: 999.81ms (content page)

// Development (localhost)
316 queries: 46.28ms
586 queries: 79.09ms
```
Ping tests show 0.16ms round-trip latency between the servers. For 517 queries, that theoretically adds ~82ms of pure latency: significant, but nowhere near the observed 920ms difference. Clearly, we're dealing with more than just network hops.
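That back-of-envelope estimate is easy to check; the figures below are the ones measured above:

```python
# Sanity-check the latency arithmetic quoted above.
rtt_ms = 0.16        # measured ping round-trip time between web and db servers
queries = 517        # queries issued by the content page

min_network_ms = rtt_ms * queries      # at least one round trip per query
observed_gap_ms = 999.81 - 79.09       # production time minus localhost time

print(f"Minimum network cost: {min_network_ms:.2f}ms")   # ~82.72ms
print(f"Observed gap:         {observed_gap_ms:.2f}ms")  # ~920.72ms
print(f"Unexplained:          {observed_gap_ms - min_network_ms:.2f}ms")
```

Over 800ms remains unexplained by raw latency alone, which is what points at protocol overhead and server-side effects.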
After extensive profiling, several key factors emerged:
```
// Common performance killers in networked MySQL
1. Protocol overhead (each query requires multiple network packets)
2. Non-optimized TCP/IP stack settings
3. Disk-based temporary tables (48% in our case)
4. Connection pooling inefficiencies
5. Uncompressed large-object transfers
```
Default OS TCP settings often hurt database performance. Try these sysctl optimizations on both servers:
```
# /etc/sysctl.conf
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
```
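After editing the file, `sysctl -p /etc/sysctl.conf` loads the values without a reboot. A small Linux-only sketch to confirm the kernel actually took them, reading `/proc/sys` (which mirrors the sysctl key tree):

```python
# Verify current kernel values for the sysctl keys set above (Linux only).
from pathlib import Path

def sysctl(key: str) -> str:
    """Return the current value of a sysctl key via its /proc/sys path."""
    return Path("/proc/sys", key.replace(".", "/")).read_text().strip()

for key in ("net.ipv4.tcp_window_scaling", "net.core.rmem_max"):
    path = Path("/proc/sys", key.replace(".", "/"))
    if path.exists():   # skip silently on non-Linux systems
        print(key, "=", sysctl(key))
```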
These settings proved crucial for networked performance:
```
[mysqld]
skip-name-resolve
max_allowed_packet=256M
net_buffer_length=16K
thread_cache_size=16
query_cache_size=0  # Controversial but works for our write-heavy Drupal DB
```
Drupal's default connection handling can be inefficient. Consider implementing persistent connections:
```php
// settings.php
$databases['default']['default']['pconnect'] = TRUE;
$databases['default']['default']['init_commands'] = array(
  'isolation_level' => 'SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED',
);
```
Our temporary-table problem (48% of them created on disk) required special attention. While TEXT/BLOB columns limit the in-memory options, we improved performance by:
```sql
ALTER TABLE cache_page MODIFY body LONGTEXT;
SET GLOBAL tmp_table_size = 256*1024*1024;
SET GLOBAL max_heap_table_size = 256*1024*1024;
```
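You can recompute the disk-spill ratio yourself from the `Created_tmp_tables` and `Created_tmp_disk_tables` counters that `SHOW GLOBAL STATUS` exposes. A minimal sketch of the arithmetic, with illustrative counter values (not taken from our servers):

```python
def disk_tmp_ratio(created_tmp_tables: int, created_tmp_disk_tables: int) -> float:
    """Fraction of implicit temporary tables that spilled to disk."""
    if created_tmp_tables == 0:
        return 0.0
    return created_tmp_disk_tables / created_tmp_tables

# Illustrative counter values producing the 48% figure discussed above.
print(f"{disk_tmp_ratio(10_000, 4_800):.0%}")  # 48%
```

Re-check the ratio after raising `tmp_table_size`/`max_heap_table_size`; if it barely moves, TEXT/BLOB columns in the result sets are forcing the spill regardless of the limits.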
Use this script to identify specific bottlenecks (the query shown is a stand-in; substitute one representative of your workload):

```php
<?php
// Time 1000 simple queries against the remote database.
$start = microtime(true);
for ($i = 0; $i < 1000; $i++) {
  // Representative query; use one from your own workload.
  db_query("SELECT title FROM {node} WHERE nid = :nid", [':nid' => rand(1, 10000)]);
}
$network_time = microtime(true) - $start;

// Compare with local benchmark
$start = microtime(true);
for ($i = 0; $i < 1000; $i++) {
  // Local operation with equivalent complexity
}
$baseline = microtime(true) - $start;

$db_overhead = $network_time - $baseline;
echo "Database overhead per query: " . ($db_overhead / 1000) . "ms\n";
?>
```
- Upgrade to 1Gbps networking (surprisingly, this alone gave us a 30% improvement)
- Implement MySQL query caching at the application layer
- Consider UNIX sockets if the web and database servers share a physical host
- Profile with `SHOW PROFILE FOR QUERY <query_id>` during peak loads
In distributed systems, the database often becomes the bottleneck, and the gap between local and networked setups can be dramatic. Here's what I found in my environment:
```
// Production (separate servers)
Homepage: 291 queries in 320.33ms
Content page: 517 queries in 999.81ms

// Development (same machine)
Homepage: 316 queries in 46.28ms
Content page: 586 queries in 79.09ms
```
While ping shows 0.16ms latency, several other network factors impact MySQL performance:
- TCP connection overhead
- Packet fragmentation
- Network stack processing
- NIC interrupt coalescing settings
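The first item, raw TCP connection-setup cost, is easy to measure directly. A sketch in Python; the host and port below are placeholders for your own database server:

```python
import socket
import time

def connect_time_ms(host: str, port: int, attempts: int = 20) -> float:
    """Average TCP connection-setup time in milliseconds."""
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection is closed on exiting the with-block
        total += time.perf_counter() - start
    return total / attempts * 1000.0

# Example with a placeholder host/port:
# print(f"avg connect: {connect_time_ms('db.example.com', 3306):.2f}ms")
```

If this number is a meaningful fraction of your per-query time, persistent connections pay off quickly.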
The temporary tables issue (48% on disk) suggests memory allocation problems. Try these settings:
```
# In my.cnf
[mysqld]
tmp_table_size=256M
max_heap_table_size=256M
max_allowed_packet=64M
net_buffer_length=16K
max_connections=500
thread_cache_size=100
query_cache_size=0  # Disabled for Drupal
```
Each query requires at least one full network round trip. Here's how to verify the overhead:
```sql
mysql> SHOW SESSION STATUS LIKE 'Handler%';
mysql> SHOW STATUS LIKE 'Bytes%';
```
Connection Pooling: Reuse database connections instead of opening a new one per request:
```php
// PHP example with PDO
$options = [
    PDO::ATTR_PERSISTENT => true,
    PDO::ATTR_TIMEOUT => 30,
    PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => true,
];
$db = new PDO($dsn, $user, $pass, $options);
```
Query Batching: Combine multiple operations into a single statement:

```sql
-- Instead of multiple single-row inserts, send one statement:
INSERT INTO table VALUES (1,'a'), (2,'b'), (3,'c');
```
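When the rows come from application data, the batched statement can be generated rather than hand-written. A sketch with `%s`-style placeholders as used by DB-API drivers such as PyMySQL (the table and column names are illustrative):

```python
def batched_insert(table, columns, rows):
    """Build one multi-row INSERT with %s placeholders for a DB-API driver.

    Only the row values are parameterized; table and column names are
    interpolated directly and must come from trusted code, not user input.
    """
    row_tpl = "(" + ", ".join(["%s"] * len(columns)) + ")"
    sql = (f"INSERT INTO {table} ({', '.join(columns)}) "
           f"VALUES {', '.join([row_tpl] * len(rows))}")
    params = [value for row in rows for value in row]
    return sql, params

sql, params = batched_insert("t", ["id", "val"], [(1, "a"), (2, "b"), (3, "c")])
print(sql)     # INSERT INTO t (id, val) VALUES (%s, %s), (%s, %s), (%s, %s)
print(params)  # [1, 'a', 2, 'b', 3, 'c']
```

One round trip instead of three, which is exactly the overhead the benchmarks above are paying per query.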
For Rackspace environments, consider these adjustments:
- Enable jumbo frames (MTU 9000) if supported
- Configure TCP window scaling
- Use MySQL compressed protocol with --compress
For Drupal sites, implement these modules and configurations:
```php
// settings.php
$conf['cache'] = 2;
$conf['page_compression'] = 1;
$conf['preprocess_css'] = 1;
$conf['preprocess_js'] = 1;
```
Set up continuous monitoring with these tools:
```shell
# Percona Toolkit
pt-query-digest /var/log/mysql-slow.log
pt-mysql-summary

# Network monitoring
iftop -n -N -P
tcptrack -i eth0
```
For high-traffic Drupal sites, consider:
- MySQL read replicas
- ProxySQL for query routing
- Local Redis caching
- Edge caching with Varnish