When preparing database performance demonstrations or testing monitoring systems, creating authentic slow queries can be surprisingly difficult with clean datasets. Here are several effective methods I've used in production environments and training sessions.
The most reliable method is MySQL's built-in SLEEP() function; note that it is evaluated once per returned row:
-- Simple sleep: 100 rows x 1.5 s each is roughly 150 seconds in total
SELECT SLEEP(1.5) FROM your_large_table LIMIT 100;
-- Combine with actual query
SELECT *, SLEEP(0.1) FROM orders
WHERE order_date BETWEEN '2020-01-01' AND '2023-12-31';
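Because SLEEP() fires once per row, you can dial in an exact total delay with a derived numbers table instead of depending on real table sizes (a minimal sketch):
-- Exactly 10 x 0.5 s = 5 seconds, regardless of your data
SELECT SLEEP(0.5)
FROM (SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5
      UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 10) AS n;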
For more complex scenarios, create a true Cartesian product by joining tables without any join condition:
-- The result has table1 x table2 x table3 rows; COUNT(*) keeps the server
-- grinding without flooding the client with output
SELECT COUNT(*) FROM table1, table2, table3;
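If three large tables are not at hand, information_schema can supply the rows; self-joining its columns table blows up quickly even on a fresh install (a sketch that needs no test data at all):
-- A few thousand rows squared is millions of combinations to count
SELECT COUNT(*)
FROM information_schema.columns a, information_schema.columns b;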
Intentionally degrade your schema performance:
-- Remove critical indexes
ALTER TABLE customers DROP INDEX idx_email;
-- Bloat rows with oversized text (MEDIUMTEXT avoids TEXT's 64 KB cap,
-- which REPEAT(..., 100) can easily exceed)
ALTER TABLE products ADD COLUMN description2 MEDIUMTEXT CHARACTER SET utf8mb4;
UPDATE products SET description2 = REPEAT(description, 100);
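To confirm the degradation took effect, EXPLAIN should now report a full scan (type: ALL) where idx_email previously gave a ref lookup (the email value is illustrative):
-- type = ALL in the output means every row is read
EXPLAIN SELECT * FROM customers WHERE email = 'test@example.com';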
Combine MySQL with system resource constraints:
# Limit MySQL CPU usage
cpulimit -e mysqld -l 20 -b
# Reduce available memory: ulimit is a shell builtin, so it cannot be run
# via sudo or applied to an already-running mysqld; set it in the shell
# that launches the server
ulimit -v 500000    # then start mysqld from this same shell
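A purely in-MySQL alternative is shrinking the InnoDB buffer pool so reads spill to disk; innodb_buffer_pool_size is dynamic as of MySQL 5.7.5, and the server rounds the value to a chunk-size multiple (128 MB below is just a sketch):
-- Note the current size first so you can restore it afterwards
SELECT @@innodb_buffer_pool_size;
SET GLOBAL innodb_buffer_pool_size = 134217728;  -- 128 MB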
Create a simple bash script to generate concurrent load:
#!/bin/bash
# Fire 50 concurrent slow queries; -p takes its password with no space,
# and * must be qualified (t.*) when it follows another select expression
for i in {1..50}
do
  mysql -u user -ppassword db_name \
    -e "SELECT SLEEP(1.2), t.* FROM large_table t WHERE non_indexed_column LIKE '%$RANDOM%'" &
done
wait
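While the script runs, you can watch the pile-up from a second session:
-- Each query stuck in SLEEP() shows State = 'User sleep'
SHOW FULL PROCESSLIST;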
Temporarily shrink MySQL's working buffers (GLOBAL changes only apply to new connections, so reconnect before testing):
SET GLOBAL sort_buffer_size = 32768;      -- documented minimum (32 KB)
SET GLOBAL join_buffer_size = 128;        -- documented minimum
SET GLOBAL max_heap_table_size = 1048576;
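With the buffers minimized, any sort on an unindexed column spills into multi-pass merging on disk; a climbing Sort_merge_passes counter confirms it (table and column names are illustrative):
-- A tiny sort_buffer_size forces repeated merge passes
SELECT * FROM orders ORDER BY non_indexed_column LIMIT 10000;
SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';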
Mimic common application patterns that cause problems:
-- Unindexed IN-subquery (the set-based cousin of the ORM N+1 pattern)
SELECT * FROM users WHERE account_id IN
(SELECT account_id FROM accounts WHERE status = 'pending');
-- Over-fetching (SELECT *) plus a non-sargable predicate: wrapping
-- order_date in YEAR() prevents any index on it from being used
SELECT * FROM order_history WHERE user_id = 123
AND YEAR(order_date) = 2023;
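For contrast, the well-behaved version of that last query names its columns and uses a sargable date range, so an index on (user_id, order_date) could serve it (order_id is an assumed column name):
-- Range predicate instead of YEAR(): an index stays usable
SELECT order_id, order_date FROM order_history
WHERE user_id = 123
AND order_date >= '2023-01-01' AND order_date < '2024-01-01';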
Remember to reset all changes after your demonstration to avoid impacting production performance.
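A reset sketch matching the examples above (adjust names to your schema; the buffer values are MySQL 8.0's documented defaults, so verify yours first):
-- Undo the schema damage
ALTER TABLE customers ADD INDEX idx_email (email);
ALTER TABLE products DROP COLUMN description2;
-- Restore the buffers (MySQL 8.0 defaults)
SET GLOBAL sort_buffer_size = 262144;
SET GLOBAL join_buffer_size = 262144;
SET GLOBAL max_heap_table_size = 16777216;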
Beyond the basics above, a few more techniques help when a clean test database or fresh installation refuses to misbehave:
-- Intentional full table scan: RAND() is re-evaluated for every row,
-- so no index can be used and the result is never cached
SELECT * FROM large_table WHERE non_indexed_column = RAND();
-- Heavy multi-way join filtered on an unindexed column
SELECT a.*, b.* FROM table_a a, table_b b, table_c c
WHERE a.id = b.id AND b.id = c.id AND c.non_indexed > 0;
-- Sleep function injection
SELECT SLEEP(1.5) FROM dual;
-- Heavy mathematical operations
SELECT EXP(LOG(SIN(RADIANS(column)) + 1)) FROM large_table;
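Even indexed columns can be made slow with an index hint; IGNORE INDEX tells the optimizer to pretend the index does not exist (idx_name and the column are placeholders):
-- Forces a full scan despite an existing index
SELECT * FROM large_table IGNORE INDEX (idx_name)
WHERE indexed_column = 42;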
Combine these with OS-level resource constraints:
# Drop the page cache so MySQL must read from disk (Linux; the redirect
# would run in your own shell, not root's, hence tee)
sync; echo 1 | sudo tee /proc/sys/vm/drop_caches
# Limit MySQL CPU usage
cpulimit -l 50 -p $(pgrep mysqld)
# I/O throttling
ionice -c 3 -p $(pgrep mysqld)
This simple bash script generates random slow queries:
#!/bin/bash
# Continuously pick a random table and column, then hit it with an
# unindexable predicate (RAND() in the WHERE clause forces a full scan);
# -N suppresses the column-name header row in mysql's output
DB_USER="demo"
DB_PASS="password"
DB_NAME="test_db"
while true; do
  RAND_TABLE=$(mysql -u$DB_USER -p$DB_PASS $DB_NAME -N -e "SHOW TABLES" | shuf -n 1)
  RAND_COLUMN=$(mysql -u$DB_USER -p$DB_PASS $DB_NAME -N -e "SHOW COLUMNS FROM $RAND_TABLE" | awk '{print $1}' | shuf -n 1)
  mysql -u$DB_USER -p$DB_PASS $DB_NAME -e "SELECT * FROM $RAND_TABLE WHERE $RAND_COLUMN = RAND() LIMIT 1000" &
  sleep 0.5
done
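To confirm the script is actually tripping your threshold, watch the Slow_queries counter climb between samples:
-- Incremented once per statement exceeding long_query_time
SHOW GLOBAL STATUS LIKE 'Slow_queries';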
Ensure your slow query log is properly configured in my.cnf:
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 0.5
log_queries_not_using_indexes = 1
log_slow_admin_statements = 1
log_slow_slave_statements = 1
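All of these settings are dynamic, so you can also flip them at runtime without a restart (existing sessions keep their old long_query_time until they reconnect):
-- Runtime equivalent of the my.cnf block above
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0.5;
SET GLOBAL log_queries_not_using_indexes = 1;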
The BENCHMARK() function can help create CPU-intensive queries. ENCODE() was removed in MySQL 8.0, so hash functions are the portable choice:
SELECT BENCHMARK(5000000, SHA2('hello', 512));
-- Combine with other operations: BENCHMARK() always returns 0, so
-- comparing with = 0 lets LIMIT stop the scan after 100 rows
SELECT user.* FROM users user
WHERE BENCHMARK(10000000, MD5(user.email)) = 0
LIMIT 100;
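Since BENCHMARK() measures pure expression evaluation, calibrate the iteration count on your own hardware and scale it to the duration you want:
-- Time one run, then scale the count to hit a target duration
SELECT BENCHMARK(1000000, SHA2('hello', 256));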