In Linux system monitoring, iowait represents CPU time during which the CPU sits idle while waiting for outstanding I/O to complete. The key technical detail from the kernel's perspective is that it tracks time when both of the following hold (a /proc/stat sampling sketch follows this list):
- The CPU has no runnable tasks
- At least one outstanding disk I/O request exists
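You can watch this counter directly: the aggregate cpu line in /proc/stat exposes iowait as its fifth value field (in clock ticks). Below is a minimal Python sketch, assuming a standard /proc layout, that samples the line twice and reports the iowait share of the interval, roughly what vmstat's wa column shows.
import time

def cpu_times():
    """Return the aggregate 'cpu' fields from /proc/stat as a list of ints."""
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("cpu "):
                return [int(x) for x in line.split()[1:]]
    raise RuntimeError("no aggregate cpu line in /proc/stat")

before = cpu_times()
time.sleep(1)
after = cpu_times()

deltas = [b - a for a, b in zip(before, after)]
total = sum(deltas) or 1
# Field order: user nice system idle iowait irq softirq steal ...
print(f"iowait over the last second: {100.0 * deltas[4] / total:.1f}%")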
Network operations follow a different path in the kernel compared to disk I/O. When examining the Linux kernel source (particularly kernel/sched/cputime.c), we see:
/*
 * Account for idle time.
 * @cputime: the CPU time spent in idle wait
 */
void account_idle_time(u64 cputime)
{
        u64 *cpustat = kcpustat_this_cpu->cpustat;
        struct rq *rq = this_rq();

        if (atomic_read(&rq->nr_iowait) > 0)
                cpustat[CPUTIME_IOWAIT] += cputime;
        else
                cpustat[CPUTIME_IDLE] += cputime;
}
This shows that iowait only accrues while nr_iowait is positive. That counter is bumped for tasks that go to sleep through io_schedule(), the wait primitive used by the block layer, which is why it reflects block device operations rather than network waits.
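These per-runqueue nr_iowait counts are also summed into the procs_blocked line of /proc/stat, which is roughly what vmstat reports in its b column. A small sketch for watching it, assuming a standard /proc layout:
import time

def procs_blocked():
    """Read procs_blocked from /proc/stat; it is the sum of nr_iowait over all CPUs."""
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("procs_blocked"):
                return int(line.split()[1])
    return 0

# Poll once per second; run a disk-heavy command (e.g. the dd test below)
# in another terminal and watch the count rise.
for _ in range(10):
    print("tasks currently blocked on I/O:", procs_blocked())
    time.sleep(1)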
We can demonstrate this with two test cases:
Disk I/O Test:
# dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct
Monitoring with vmstat 1 will show iowait spikes (the wa column) during this operation.
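If you prefer to drive the disk test from a script, a rough Python equivalent is to write a file and fsync it so the process genuinely blocks in the block layer. This is a sketch assuming /tmp lives on a real disk rather than tmpfs:
import os

# Hypothetical test file; point this at a real disk, not tmpfs.
PATH = "/tmp/iowait_test"

buf = b"\0" * (1 << 20)           # 1 MiB of zeroes
with open(PATH, "wb") as f:
    for _ in range(1024):         # ~1 GiB total
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())      # block until the data reaches the device

os.unlink(PATH)
print("done; check the wa column in vmstat while this runs")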
Network I/O Test:
import socket
s = socket.create_connection(("example.com", 80))
s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
data = s.recv(4096) # Blocks on network
This network wait time will show as regular CPU idle time, not iowait.
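A quick way to see the difference from userspace is to inspect the blocked task's state: a task sleeping in the block layer via io_schedule() normally shows up as D (uninterruptible sleep), whereas a task parked in recv() is in an ordinary S (interruptible) sleep. A minimal sketch, assuming example.com is reachable and a standard /proc layout:
import subprocess
import time

# Child blocks in recv() on a connection where no reply will arrive quickly.
child = subprocess.Popen([
    "python3", "-c",
    "import socket; s = socket.create_connection(('example.com', 80)); s.recv(4096)",
])
time.sleep(2)  # give it time to connect and block

with open(f"/proc/{child.pid}/stat") as f:
    # Format: pid (comm) state ...; comm may contain spaces, so split after ')'
    state = f.read().rsplit(")", 1)[1].split()[0]

print("state while blocked on the network:", state)  # expect 'S', not 'D'
child.kill()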
The Linux kernel's Documentation/filesystems/proc.txt clarifies:
"iowait: Time waiting for I/O to complete. This counts as idle time from the kernel's perspective."
Notably missing is any mention of network operations, which are handled through different kernel subsystems (network stack vs block layer).
When diagnosing system performance:
- High iowait → Disk subsystem bottleneck
- High idle with slow network → Network latency issues
Tools like iotop for disks and iftop for networks reflect this architectural separation.
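The per-process numbers behind tools like iotop come from /proc/<pid>/io (this requires task I/O accounting in the kernel and, for other users' processes, elevated privileges). A minimal sketch that prints a process's cumulative storage traffic:
import sys

def proc_io(pid="self"):
    """Parse /proc/<pid>/io into a dict of counters (bytes and syscall counts)."""
    stats = {}
    with open(f"/proc/{pid}/io") as f:
        for line in f:
            key, value = line.split(":")
            stats[key] = int(value)
    return stats

pid = sys.argv[1] if len(sys.argv) > 1 else "self"
io = proc_io(pid)
# read_bytes/write_bytes count actual block-device traffic,
# unlike rchar/wchar which include cached and non-disk I/O.
print(f"pid {pid}: read {io['read_bytes']} bytes, wrote {io['write_bytes']} bytes from/to storage")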
The Linux kernel treats these differently because (see the per-task sketch after this list):
- A task blocked on disk I/O sleeps through io_schedule(), which marks it as in_iowait and bumps the runqueue's nr_iowait counter
- A task blocked on a socket sleeps on ordinary wait queues in the network stack, which never touch nr_iowait
- Memory pressure behaviors also differ (socket buffers vs page cache)
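One visible consequence of this split is that the kernel keeps a per-task counter of accumulated block I/O delay, with no equivalent for socket waits. The sketch below reads field 42 (delayacct_blkio_ticks) of /proc/<pid>/stat; it assumes delay accounting is enabled (recent kernels may need the delayacct boot parameter), otherwise the value simply stays 0:
import sys

def blkio_delay_ticks(pid):
    """Field 42 of /proc/<pid>/stat: cumulative block I/O delay in clock ticks."""
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().rsplit(")", 1)[1].split()
    # fields[0] is field 3 of the file (state), so field 42 is fields[39]
    return int(fields[39])

pid = sys.argv[1] if len(sys.argv) > 1 else "self"
print(f"pid {pid} has spent {blkio_delay_ticks(pid)} ticks delayed on block I/O")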
This distinction becomes important when tuning systems for specific workloads.
Consider a PostgreSQL server under two different workloads:
# Disk-bound workload shows in iowait
vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff   cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 3012344 102384 2981232    0    0  1024     0  123  456 10  5 70 15

# Network-bound workload doesn't
 0  0      0 3012344 102384 2981232    0    0     0   512 1200 2500 25 15 60  0
The second case shows high system time (network processing) but no iowait despite waiting on client queries.
In Linux performance monitoring, iowait (shown in tools like top, vmstat, or /proc/stat) represents the percentage of CPU time spent waiting for I/O operations to complete while the CPU is otherwise idle. The key technical definition from the kernel perspective:
/* Simplified from account_process_tick() in kernel/sched/cputime.c */
if (user_tick)
        account_user_time(p, cputime);
else if ((p != this_rq()->idle) || (irq_count() != HARDIRQ_OFFSET))
        account_system_time(p, HARDIRQ_OFFSET, cputime);
else
        account_idle_time(cputime);     /* idle vs iowait split happens in here */
Network operations typically don't count toward iowait because (a short demonstration follows this list):
- Network sockets use different kernel subsystems (network stack vs block layer)
- A task waiting on a socket sleeps on an ordinary wait queue rather than through io_schedule(), so nr_iowait never increases
- The kernel treats network (socket) buffers differently from disk buffers in the page cache
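To confirm the second point, compare the aggregate iowait counter before and after a blocking network read. The sketch below assumes example.com is reachable and uses a socket timeout so the wait is bounded; on an otherwise quiet machine the idle field grows while iowait stays essentially flat:
import socket

def idle_and_iowait():
    """Return (idle, iowait) ticks from the aggregate cpu line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = f.readline().split()
    return int(fields[4]), int(fields[5])

idle0, iowait0 = idle_and_iowait()

# Connect but never send a request, so recv() just waits on the network.
s = socket.create_connection(("example.com", 80))
s.settimeout(5)
try:
    s.recv(4096)
except socket.timeout:
    pass
s.close()

idle1, iowait1 = idle_and_iowait()
print("idle ticks gained:  ", idle1 - idle0)
print("iowait ticks gained:", iowait1 - iowait0)   # expect ~0 on a quiet machine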
You can also check this behavior with standard tools:
# Measure iowait during disk operations
dd if=/dev/zero of=/tmp/test bs=1M count=1000 &
vmstat 1
# Measure during network operations
curl http://example.com/largefile &
vmstat 1
When debugging performance issues:
- High iowait suggests disk/storage bottlenecks
- Network latency won't appear in iowait metrics
- Use ss -ti or nstat for network analysis
For network-bound applications, monitor these instead:
nstat -az | grep -E 'TcpExt|IpExt'
cat /proc/net/snmp
cat /proc/net/netstat
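Those files are plain text and easy to post-process. As a rough example, assuming the usual two-line Tcp: header/value layout of /proc/net/snmp, the sketch below pulls out segment and retransmission counters, which say far more about network trouble than any CPU column:
def tcp_counters():
    """Parse the Tcp: header/value pair from /proc/net/snmp into a dict."""
    with open("/proc/net/snmp") as f:
        lines = [l.split() for l in f if l.startswith("Tcp:")]
    header, values = lines[0][1:], lines[1][1:]
    return dict(zip(header, (int(v) for v in values)))

tcp = tcp_counters()
out = tcp["OutSegs"]
retrans = tcp["RetransSegs"]
print(f"TCP segments sent: {out}, retransmitted: {retrans} "
      f"({100.0 * retrans / max(out, 1):.2f}%)")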
Newer kernels with eBPF tracepoint support allow finer-grained network monitoring via bpftrace:
bpftrace -e 'tracepoint:net:* { @[probe] = count(); }'