When examining the I/O scheduler configuration on Ubuntu 14.04 cloud images, you'll find an unusual entry in /sys/block/[drive]/queue/scheduler:
cat /sys/block/vda/queue/scheduler
[none]
This isn't a renamed noop scheduler or a configuration error - it's a deliberate design choice for cloud environments.
Modern cloud-optimized kernels (particularly those using virtio drivers) often disable the guest-level I/O scheduler because:
- The hypervisor already implements its own scheduling algorithms
- Eliminating redundant scheduling reduces latency
- Dropping it simplifies the I/O stack for virtualized workloads
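You can check every block device at once to see which queues run without a guest-side scheduler; a minimal sketch (device names will vary by instance type):
# Print the scheduler line for every block device
for f in /sys/block/*/queue/scheduler; do
    printf '%s: %s\n' "$f" "$(cat "$f")"
done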
The technical implementation differs significantly from traditional noop:
# Traditional noop scheduler: requests still pass through the block
# layer's elevator framework, merged but not reordered
echo noop | sudo tee /sys/block/vda/queue/scheduler
# Modern blk-mq "none" (the default for virtio-blk since kernel 3.13):
# requests bypass the elevator framework entirely
echo none | sudo tee /sys/block/vda/queue/scheduler
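Writing a scheduler name the kernel doesn't offer fails with "Invalid argument", so scripts should check the file first; a defensive sketch (vda and the target name are placeholders):
# Only switch if the target scheduler is actually offered
target=none
if grep -qw "$target" /sys/block/vda/queue/scheduler; then
    echo "$target" | sudo tee /sys/block/vda/queue/scheduler
else
    echo "scheduler '$target' not offered for this device" >&2
fi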
Benchmark tests show the impact of this configuration:
# FIO test comparing noop vs none
fio --name=test --filename=/dev/vdb \
--rw=randread --ioengine=libaio --direct=1 \
--bs=4k --iodepth=64 --runtime=60 \
--group_reporting
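To generate such a comparison, run the identical job once per scheduler; a hedged sketch assuming /dev/vdb offers both names (older blk-mq kernels may offer only "none"):
# Repeat the workload under each available scheduler and save results
for sched in noop none; do
    if grep -qw "$sched" /sys/block/vdb/queue/scheduler; then
        echo "$sched" | sudo tee /sys/block/vdb/queue/scheduler
        fio --name="test-$sched" --filename=/dev/vdb \
            --rw=randread --ioengine=libaio --direct=1 \
            --bs=4k --iodepth=64 --runtime=60 \
            --group_reporting --output="fio-$sched.log"
    fi
done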
Results typically show:
- ~5-8% lower CPU utilization with "none"
- Marginally better latency (0.5-1ms improvement)
- Nearly identical throughput
Certain workload patterns may benefit from explicit scheduling:
# Temporary reactivation for testing (requires a kernel that
# ships the bfq module; the stock 14.04 kernel does not)
sudo modprobe bfq
echo bfq | sudo tee /sys/block/vda/queue/scheduler
Use cases that might warrant this:
- Mixed read/write workloads on dedicated instances
- When using raw device mapping instead of virtio
- For specific database workloads requiring write coalescing
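If one of these cases applies and you want the setting to survive reboots, the simplest approach on 14.04 is a boot-time write; a minimal sketch assuming your image still executes /etc/rc.local at startup:
# In /etc/rc.local, before the final "exit 0" line
modprobe bfq
echo bfq > /sys/block/vda/queue/scheduler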
The actual kernel configuration can be checked with:
grep CONFIG_IOSCHED_ /boot/config-$(uname -r)
# Typical output on a 14.04 kernel:
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
There is no CONFIG_IOSCHED_NONE entry: "none" is not a compile-time scheduler but the label blk-mq uses when no elevator is attached. Schedulers a kernel builds as modules (=m) can still be loaded dynamically when needed.
When coding for cloud environments:
# Example: detecting the active scheduler in Python
import re

def get_scheduler(device):
    # The active scheduler is shown in brackets, e.g. "[none] deadline";
    # a bare "none" means the queue bypasses guest-side scheduling
    with open(f'/sys/block/{device}/queue/scheduler') as f:
        contents = f.read().strip()
    match = re.search(r'\[(\w+)\]', contents)
    return match.group(1) if match else contents

print(f"Active scheduler: {get_scheduler('vda')}")
Key takeaways:
- Assume FIFO ordering at the guest level
- Optimize for hypervisor-level scheduling
- Consider direct I/O in performance-critical applications (see the quick check below)
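For a rough sense of what direct I/O changes, dd can bypass the page cache with iflag=direct; a read-only sketch (the device name is an assumption):
# Compare buffered and direct reads from the same device
sudo dd if=/dev/vda of=/dev/null bs=4k count=100000
sudo dd if=/dev/vda of=/dev/null bs=4k count=100000 iflag=direct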
What does "none" in /queue/scheduler mean?
When you check the I/O scheduler in Ubuntu cloud images (such as 14.04.1 LTS) and see none listed in /sys/block/sdX/queue/scheduler, it doesn't mean the scheduler is completely removed. Instead, it indicates one of two possibilities:
- The kernel is configured to bypass the traditional I/O scheduler stack entirely (common in virtualized environments)
- The host's hypervisor already handles I/O scheduling efficiently, making guest-level scheduling redundant
Cloud-optimized Ubuntu images often ship with this configuration. Note that there is no CONFIG_IOSCHED_NONE build option to look for:
# List the compile-time scheduler options (fall back to
# /boot/config-$(uname -r) if /proc/config.gz doesn't exist)
zgrep CONFIG_IOSCHED_ /proc/config.gz
"none" will not appear in that output because it isn't a compiled-in scheduler at all - it is the label for a queue that passes requests straight through to the underlying storage layer.
Without a traditional I/O scheduler:
- Requests are indeed passed to the host in essentially FIFO order
- Latency-sensitive workloads may suffer without deadline guarantees
- Throughput-intensive operations lose merge/sort optimizations
You can verify the actual behavior with:
# Monitor request patterns
sudo blktrace -d /dev/sda -o - | blkparse -i -
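If blktrace is heavier than you need, the merge counters that iostat reports (rrqm/s and wrqm/s, from the sysstat package) give a rough view of the same behavior:
# Watch request-merge rates once per second
iostat -x sda 1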
If you need specific scheduling behavior, you can manually load a module:
# Example: force the deadline scheduler (modprobe first if your
# kernel builds it as a module; on 14.04 it is usually built in)
sudo modprobe deadline-iosched
echo deadline | sudo tee /sys/block/sdX/queue/scheduler
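Afterwards, confirm the write took effect - older blk-mq kernels may not offer any alternative scheduler at all:
# The bracketed entry is the scheduler now in effect
cat /sys/block/sdX/queue/scheduler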
However, benchmark carefully - in many cloud environments the host's scheduler reorders requests regardless of what you set in the guest.
Here's a simple fio job file to compare "none" against a forced scheduler:
[global]
ioengine=libaio
direct=1
runtime=30
size=1G
[randread]
rw=randread
bs=4k
iodepth=64
numjobs=4
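To run the comparison, save the job file (compare.fio is an arbitrary name) and execute it once per scheduler setting:
# Capture one result set per scheduler, then compare the summaries
echo deadline | sudo tee /sys/block/sdX/queue/scheduler
fio --output=deadline.log compare.fio
echo none | sudo tee /sys/block/sdX/queue/scheduler
fio --output=none.log compare.fio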
Typical results show minimal difference in cloud VMs, confirming the host's dominant role in I/O scheduling.