When developing or testing I/O-bound applications, simulating realistic disk latency becomes crucial. Unlike network latency, which can be easily controlled via tc, disk I/O manipulation requires different approaches. The challenge lies in increasing iowait without affecting CPU usage, which standard benchmarking tools like bonnie++ don't directly address.
The most effective method involves Linux's device mapper, specifically the delay target provided by the dm-delay kernel module:
# Create a delayed device mapping
echo "0 blockdev --getsize /dev/sdX delay /dev/sdX 0 500" | dmsetup create delayed-disk
This creates a virtual device that adds a 500ms delay to all I/O operations on /dev/sdX. The last parameter is the delay in milliseconds; adjust it to control latency.
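The resulting mapping appears under /dev/mapper/ and can be inspected or torn down with dmsetup when you're done:
# Show the active table for the delayed device
dmsetup table delayed-disk
# Remove the mapping when finished
dmsetup remove delayed-disk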
For more granular control, consider these methods:
1. Using the cgroups v2 I/O controller
# Enable the io controller for child cgroups, then create one
echo +io > /sys/fs/cgroup/cgroup.subtree_control
mkdir /sys/fs/cgroup/slowio
# Set a 500ms latency target on device 8:0 (the key is "target" and takes microseconds)
echo "8:0 target=500000" > /sys/fs/cgroup/slowio/io.latency
2. FUSE-based Filesystem
Create a simple FUSE filesystem wrapper that adds artificial delays:
import time
from fuse import Operations  # fusepy

class DelayedFS(Operations):
    def __init__(self, original, delay=0.5):
        self.original = original  # underlying Operations implementation
        self.delay = delay        # artificial delay in seconds

    def read(self, path, size, offset, fh):
        time.sleep(self.delay)   # inject latency before the real read
        return self.original.read(path, size, offset, fh)

    # Implement the other required methods (getattr, open, readdir, ...)
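Assuming the class above lives in a script that mounts it with fusepy's FUSE() helper (delayed_fs.py and the paths below are illustrative names, not part of the original), exercising the delay might look like:
# Mount the pass-through wrapper over an existing directory
python3 delayed_fs.py /data /mnt/slow &
# Each read through the mount should now take at least ~0.5s
dd if=/mnt/slow/testfile of=/dev/null bs=4k count=5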
To measure the effects of your latency injection:
# Monitor per-device latency (iostat refreshes itself every second)
iostat -xmt 1
Key metrics to observe:
- %util: should increase as latency rises
- await: directly reflects your artificial delay
- svctm: shows the actual service time (deprecated and removed in recent sysstat releases)
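Since the goal is to raise iowait specifically, it's also worth watching the CPU-side view; the wa column of vmstat (or mpstat's %iowait) should climb as the injected delay takes effect:
# System-wide iowait: watch the "wa" column
vmstat 1
# Per-CPU %iowait breakdown
mpstat -P ALL 1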
For complex scenarios, combine multiple methods:
# Stack device mapper with cgroups
dmsetup create slow-disk --table "0 $(blockdev --getsz /dev/sdX) delay /dev/sdX 0 200"
echo "8:0 target=300000" > /sys/fs/cgroup/slowio/io.latency
This stacks a fixed 200ms device-mapper delay with a 300ms cgroup latency target, allowing separate control of the different delay components.
Real-world disk latency is often unpredictable, which is a problem when benchmarking system performance or testing application behavior under disk pressure. Developers need controlled ways to simulate:
- Consistent artificial delay for repeatable tests
- Various latency scenarios without physical hardware changes
- Specific iowait conditions to validate application resilience
The Linux kernel provides several mechanisms to influence disk behavior:
# Disable DMA (forces slower PIO mode; legacy IDE/PATA drives only)
hdparm -d0 /dev/sdX
# Set acoustic management to quietest (slower seeks indirectly increase latency)
hdparm -M 128 /dev/sdX
# Disable write cache (adds flush delays)
hdparm -W0 /dev/sdX
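These hdparm settings typically don't survive a power cycle, but it's still good practice to restore them explicitly after testing:
# Restore defaults after testing
hdparm -d1 /dev/sdX    # re-enable DMA
hdparm -M 254 /dev/sdX # fastest acoustic setting
hdparm -W1 /dev/sdX    # re-enable write cache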
For precise control, create a virtual device with artificial latency:
# Create delayed device (500ms reads, 200ms writes)
echo "0 $(blockdev --getsz /dev/sdX) delay /dev/sdX 0 500 /dev/sdX 0 200" | dmsetup create delayed-disk
# Verify creation
lsblk | grep delayed
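The mapped device behaves like any block device, so you can put a filesystem on it for application-level testing (the mount point below is arbitrary):
# Format and mount the delayed device (destroys any existing data on it)
mkfs.ext4 /dev/mapper/delayed-disk
mount /dev/mapper/delayed-disk /mnt/slow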
cgroups v2 offers more granular control:
# Create an I/O-limited cgroup (requires the io controller enabled)
mkdir /sys/fs/cgroup/slowio
# Throttle device 8:0 to 1 MiB/s and 100 IOPS in each direction
echo "8:0 rbps=1048576 wbps=1048576 riops=100 wiops=100" > /sys/fs/cgroup/slowio/io.max
# Add a process to the cgroup
echo $PID > /sys/fs/cgroup/slowio/cgroup.procs
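Reading the interface files back is a quick sanity check that the limits are in place and being enforced:
# Confirm the configured limits
cat /sys/fs/cgroup/slowio/io.max
# Per-device usage counters for this cgroup
cat /sys/fs/cgroup/slowio/io.stat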
Combine these techniques to measure application impact:
#!/bin/bash
# Set up a 1 GiB backing file attached to a loop device
DELAY_DEV=$(mktemp -d)/latency.img
dd if=/dev/zero of="$DELAY_DEV" bs=1M count=1024
LOOP_DEV=$(sudo losetup -f --show "$DELAY_DEV")
# Apply a 300ms delay to all I/O on the loop device
echo "0 $(sudo blockdev --getsz "$LOOP_DEV") delay $LOOP_DEV 0 300" |
    sudo dmsetup create test-delay
# Run benchmark
sudo fio --name=test --filename=/dev/mapper/test-delay \
--rw=randread --size=100m --time_based --runtime=60s \
--ioengine=libaio --iodepth=16 --direct=1
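Cleanup after a run removes the mapping, detaches the loop device, and deletes the backing file:
# Tear everything down after the benchmark
sudo dmsetup remove test-delay
sudo losetup -d "$LOOP_DEV"
rm -f "$DELAY_DEV"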
For userspace control, consider writing a FUSE filesystem with artificial delays:
#define FUSE_USE_VERSION 31
#include <fuse3/fuse.h>
#include <unistd.h>

static int delayed_read(const char *path, char *buf, size_t size,
                        off_t offset, struct fuse_file_info *fi) {
    usleep(500000); /* 500ms delay before servicing the read */
    /* Actual read implementation here (e.g. pread() on a backing file) */
    return size;
}

static struct fuse_operations ops = {
    .read = delayed_read,
};

int main(int argc, char *argv[]) {
    return fuse_main(argc, argv, &ops, NULL);
}
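Building against libfuse3 and mounting in the foreground for easy debugging (delayfs.c and the mount point are illustrative names):
# Compile and mount (-f keeps it in the foreground)
gcc -Wall delayfs.c $(pkg-config fuse3 --cflags --libs) -o delayfs
./delayfs -f /mnt/slowfs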