How to Diagnose and Fix jbd2/dm-0-8 High I/O Wait Issues on Linux Systems


When your Linux system shows jbd2/dm-0-8 consuming most of the disk bandwidth in iotop or driving up %wa in top, you're dealing with the journaling thread for an ext4 filesystem on an LVM/device-mapper volume. The kernel thread is named jbd2/&lt;device&gt;-&lt;journal inode&gt;: in jbd2/dm-0-8, dm-0 is the device-mapper device and 8 is the journal's inode number (inode 8 on a standard ext4 filesystem).

The journaling block device (jbd2) is crucial for ext4 filesystem consistency. During heavy write operations, especially with:

  • Database operations
  • Virtual machine disk activity
  • Log file writing
  • Package manager transactions

the journaling thread can become a bottleneck. A sequential read rate of only about 35.71 MB/sec in your benchmarks also suggests hardware limitations are exacerbating the issue.
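
If you are not sure which logical volume hides behind dm-0, the device-mapper name is exposed through sysfs (the paths below assume the standard /sys layout):

# Resolve dm-0 to its device-mapper / LVM name
$ cat /sys/block/dm-0/dm/name
# Or list all mapper devices and their dm-X kernel names
$ ls -l /dev/mapper/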

First confirm the source of I/O pressure:

# Check which filesystem sits on dm-0 (KNAME shows the kernel device name)
$ lsblk -o KNAME,NAME,MOUNTPOINT,FSTYPE | grep dm-0
# Monitor I/O wait in real-time
$ vmstat 1
# Identify heavy writers
$ sudo iotop -o
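
The jbd2 thread only commits journal transactions generated by other processes, so it also helps to find the real writers. A minimal sketch using pidstat from the sysstat package (interval and count are arbitrary):

# Per-process disk I/O, one-second samples, ten iterations
$ pidstat -d 1 10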

Option 1: Tune Journaling Parameters

# Temporarily raise the journal commit interval (the default is 5 seconds)
$ sudo mount -o remount,commit=15 /dev/mapper/your-volume
# Make it persistent by adding commit=15 to this filesystem's options in /etc/fstab,
# e.g. defaults,commit=15

Option 2: Filesystem Optimization

# Remount with relaxed data journaling (note: most kernels refuse to change the
# data= mode on a live remount, so this usually has to go into /etc/fstab
# followed by a reboot)
$ mount -o remount,data=writeback /dev/mapper/your-volume
# For new, non-critical filesystems you can omit the journal entirely:
$ mkfs.ext4 -O ^has_journal /dev/your_device
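
Because of the remount restriction, another way to apply data=writeback is to store it in the superblock as a default mount option so it is picked up on the next mount. This is a sketch using tune2fs; the device path is a placeholder:

# Record data=writeback as a default mount option in the superblock
$ sudo tune2fs -o journal_data_writeback /dev/mapper/your-volume
# Verify (takes effect on the next mount/reboot)
$ sudo tune2fs -l /dev/mapper/your-volume | grep -i "mount options"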

Option 3: Hardware-Level Improvements

# Check the active scheduler (shown in brackets); modern multi-queue kernels offer
# none/mq-deadline/bfq instead of the legacy noop/deadline/cfq
$ cat /sys/block/sda/queue/scheduler
# Change the scheduler temporarily (use a name that appears in the list above)
$ echo mq-deadline > /sys/block/sda/queue/scheduler
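
Scheduler changes made via /sys do not survive a reboot. One common way to make them persistent is a udev rule along the following lines; the file name and device match are illustrative and should be adapted to your hardware:

# /etc/udev/rules.d/60-iosched.rules
# Use mq-deadline on all non-rotational (SSD) sd* devices
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"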

Only disable journaling if:

  • This is a non-critical filesystem
  • You have alternative data protection (RAID, backups)
  • The performance impact is truly unacceptable

Disable with extreme caution:

# Convert an existing filesystem (it must be unmounted or mounted read-only)
$ tune2fs -O ^has_journal /dev/your_device
# Force a full filesystem check afterwards
$ e2fsck -f /dev/your_device
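
If you change your mind later, a journal can be added back with tune2fs; as a rough sketch (again, ideally with the filesystem unmounted):

# Re-create the journal and re-check the filesystem
$ tune2fs -O has_journal /dev/your_device
$ e2fsck -f /dev/your_device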

Implement continuous monitoring:

# Simple monitoring loop: print extended stats for dm-0 every ~35 seconds
# (grep the device line rather than hard-coded awk columns, since iostat's
# column layout differs between sysstat versions)
while true; do
  echo "--- $(date) ---"
  iostat -xdm 1 5 | grep -E '^(Device|dm-0)'
  sleep 30
done
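
If the sysstat package is installed, sar can give you similar periodic device statistics without a hand-rolled loop (interval and count below are arbitrary; -p pretty-prints device names):

# Ten samples of block-device statistics, 30 seconds apart
$ sar -dp 30 10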

To recap the symptom: the system slows down with high I/O wait, and iotop shows jbd2/dm-0-8 consuming most of the disk bandwidth. That is the journaling thread of the ext4 filesystem on the LVM volume, and the rest of this guide digs a little deeper into why it becomes so busy and how to tune around it.

The jbd2 process is part of ext4's journaling mechanism. It becomes particularly active during:

  • Heavy filesystem operations (like large file deletions)
  • Filesystem checks or repairs
  • Improper shutdown recovery
  • Metadata-intensive workloads

# Typical output showing the issue:
$ iotop -o
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
  253 be/3 root        0.00 B/s   15.69 M/s  0.00 % 98.99 % jbd2/dm-0-8

Your Bonnie++ results show extremely slow writes (0.01 MB/s), which confirms the I/O bottleneck, while the hdparm test showing 35.71 MB/sec for raw reads confirms the hardware itself is capable of far better than 0.01 MB/s.
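
For reference, the raw-read figure quoted above is the kind of number hdparm's buffered-read test produces; if you want to re-run it, something like the following works (the device name is an example, and it should be the underlying disk rather than the dm device):

# Cached (-T) and buffered (-t) read timings
$ sudo hdparm -tT /dev/sda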

First, check current mount options:

$ mount | grep ' / '
/dev/mapper/vg-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)

Check journal size:

$ sudo dumpe2fs -h /dev/mapper/vg-root | grep Journal
Journal inode:            8
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             128M
Journal length:           32768
Journal sequence:         0x0000061e
Journal start:            1
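
A 128M journal is a common default. For very metadata-heavy workloads it can be recreated larger, but only with the filesystem unmounted (for vg-root that means a rescue/live environment), and only if you accept a brief window without a journal. A sketch, with an illustrative size:

# Remove the existing journal, check, then add a bigger one
$ tune2fs -O ^has_journal /dev/mapper/vg-root
$ e2fsck -f /dev/mapper/vg-root
$ tune2fs -j -J size=512 /dev/mapper/vg-root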

1. Temporary Relief (for emergency situations):

# Lower the kernel's dirty page thresholds
echo 10 > /proc/sys/vm/dirty_ratio
echo 5 > /proc/sys/vm/dirty_background_ratio
echo 1000 > /proc/sys/vm/dirty_expire_centisecs
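
It is worth noting what you are changing and making the values stick across reboots; the sysctl.d file name below is just a convention:

# Current values (typical defaults are 20 / 10 / 3000)
sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_expire_centisecs
# Persist across reboots, then reload
echo "vm.dirty_ratio = 10" | sudo tee -a /etc/sysctl.d/99-writeback.conf
echo "vm.dirty_background_ratio = 5" | sudo tee -a /etc/sysctl.d/99-writeback.conf
echo "vm.dirty_expire_centisecs = 1000" | sudo tee -a /etc/sysctl.d/99-writeback.conf
sudo sysctl --system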

2. Permanent Configuration Changes:

  • Add data=writeback to your ext4 mount options (in /etc/fstab) if you can tolerate slightly less strict journaling of file data (for the root filesystem, see the note after the example below)
  • Consider adding commit=60 to batch metadata writes every 60 seconds
# Example fstab modification:
/dev/mapper/vg-root / ext4 defaults,data=writeback,commit=60 0 1
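
A caveat for the root filesystem: because the kernel will not switch data= modes on a live remount, the fstab entry alone often does not take effect for /. Two common workarounds are passing the option on the kernel command line via rootflags, or storing it in the superblock with tune2fs -o journal_data_writeback as shown earlier. The GRUB snippet below assumes a Debian/Ubuntu-style setup (update-grub); adapt it for your distribution:

# In /etc/default/grub, append rootflags to any existing kernel command line
# options, then run update-grub and reboot
GRUB_CMDLINE_LINUX="rootflags=data=writeback"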

3. Alternative Approaches:

  • Schedule heavy disk operations during off-peak hours
  • Consider XFS for workloads with many small files
  • Upgrade to SSDs if still using spinning disks

In extreme cases, you might need to:

# Disable journaling completely (NOT RECOMMENDED for production systems);
# the filesystem must be unmounted, so for vg-root run this from a rescue/live system
tune2fs -O ^has_journal /dev/mapper/vg-root
e2fsck -f /dev/mapper/vg-root

Remember to always back up important data before making filesystem modifications.