LVM Mirroring vs RAID1: Performance, Reliability and Practical Considerations for Linux System Administrators


When implementing disk redundancy on Linux systems, administrators often weigh LVM's native mirroring against traditional RAID1 managed through mdadm. The choice involves several technical considerations:


# RAID1 creation example
mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sda1 /dev/sdb1

# LVM mirror creation example
pvcreate /dev/sda1 /dev/sdb1
vgcreate vg0 /dev/sda1 /dev/sdb1
lvcreate -L 100G -m1 -n lv_mirror vg0

Benchmark tests consistently show RAID1 outperforming LVM mirroring in read operations, for the reasons below (a quick way to observe this follows the list):

  • RAID1's ability to perform parallel reads from both disks
  • The legacy LVM 'mirror' segment type reading from only one leg at a time (the newer 'raid1' segment type reuses the md kernel code and closes this gap)
  • RAID1's more optimized kernel-level implementation
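
A rough way to observe the difference while a read workload runs (assuming sysstat's iostat is available) is to check whether both member disks are servicing reads:


# Drive reads against the array, then watch per-disk read throughput
dd if=/dev/md0 of=/dev/null bs=1M count=4096 &
iostat -x 2 5 /dev/sda /dev/sdb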

LVM mirroring requires careful configuration for power failure resilience:


# Critical LVM configuration for reliability, in /etc/lvm/lvm.conf
# (each setting belongs in the section shown):
activation {
    mirror_log_fault_policy = "allocate"
}
devices {
    write_cache_state = 0
}
global {
    use_lvmetad = 0    # only on versions that ship lvmetad
}
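
Recent LVM2 releases ship lvmconfig (older ones use lvm dumpconfig) to confirm the values the tools actually see:


lvmconfig activation/mirror_log_fault_policy
lvmconfig devices/write_cache_state
lvmconfig global/use_lvmetad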

Without these settings, power interruptions can lead to:

  • Incomplete mirror synchronization
  • Metadata corruption
  • Requirement for manual recovery
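
After an unclean shutdown, check the mirror's synchronization state before trusting it; Cpy%Sync should read 100.00:


lvs -a -o name,copy_percent,devices vg0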

While early LVM implementations recommended a separate log disk, modern versions offer alternatives:


# Core mirror log types in LVM (the -L sizes are illustrative):
lvcreate -L 10G -m1 --mirrorlog core -n lv_core vg0              # in-memory log
lvcreate -L 10G -m1 --mirrorlog disk -n lv_disk vg0              # disk-based log (default)
lvcreate -L 10G -m1 --mirrorlog mirrored -n lv_mirroredlog vg0   # mirrored log
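
The log type mainly affects restart behavior: a core log keeps no on-disk state, so the mirror is fully resynchronized on every activation; a disk log records which regions are in sync and survives reboots; a mirrored log additionally protects that record against the loss of the log device, at the cost of extra log writes.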

For different use cases:


# High-performance database server (RAID1 preferred):
mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sd[a-b]1
mkfs.xfs /dev/md0

# Flexible storage pool with snapshots (LVM preferred):
pvcreate /dev/sd[c-d]1
vgcreate vg_data /dev/sd[c-d]1
lvcreate -L 1T -m1 -n lv_primary --mirrorlog core vg_data

RAID1 recovery typically involves:


mdadm --manage /dev/md0 --fail /dev/sda1     # mark the failing member faulty
mdadm --manage /dev/md0 --remove /dev/sda1   # detach it from the array
mdadm --manage /dev/md0 --add /dev/sdc1      # add the replacement; rebuild starts automatically
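
Rebuild progress after the --add can be followed through the md status interfaces:


cat /proc/mdstat          # recovery percentage and estimated finish time
mdadm --detail /dev/md0   # per-member state and rebuild status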

LVM mirror recovery process:


lvconvert --repair vg0/lv_mirror               # replace the failed leg from available PVs
vgreduce --removemissing vg0                   # drop the missing PV from the volume group
pvcreate /dev/sdc1                             # prepare the replacement disk
vgextend vg0 /dev/sdc1                         # add it to the volume group
lvconvert --mirrors 1 vg0/lv_mirror /dev/sdc1  # re-establish the second mirror leg
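
Resynchronization of the new leg can then be followed until Cpy%Sync reaches 100.00:


watch -n 5 'lvs -a -o name,copy_percent vg0'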

LVM mirroring offers unique capabilities:

  • Ability to temporarily split off a mirror leg for backups (see the sketch after this list)
  • Ability to mirror individual logical volumes, with legs placed on differently sized disks
  • Integration with other LVM features like thin provisioning
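
A minimal sketch of the mirror-split backup workflow (the volume and mount point names are illustrative; the split leg stops receiving writes, and redundancy must be re-established afterwards):


# Detach one leg as a standalone volume
lvconvert --splitmirrors 1 --name lv_backup vg0/lv_mirror
mount -o ro /dev/vg0/lv_backup /mnt/backup
# ... run the backup, then discard the split copy ...
umount /mnt/backup
lvremove -y vg0/lv_backup
# Re-add a second leg to restore redundancy
lvconvert -m1 vg0/lv_mirror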

However, it lacks RAID1's:

  • Periodic scrubbing via the md check/repair sync actions to detect mismatched copies
  • Automatic bad block management
  • Bootloader compatibility on some distributions

To summarize, the comparison between LVM mirroring and traditional RAID1 comes down to three critical aspects:

  • Read Performance: RAID1 typically offers better read performance through parallel reads from multiple disks
  • Write Safety: LVM mirroring requires careful cache configuration to prevent data loss during power failures
  • Disk Requirements: LVM mirroring often needs a third disk for logging in standard configurations

Here's a simple test script to compare read performance between the two approaches:


#!/bin/bash
# Crude sequential-read comparison; hdparm -t measures buffered device
# reads without filesystem overhead

# RAID1 test
hdparm -t /dev/md0

# LVM mirror test
hdparm -t /dev/vg0/lv_mirror

Typical results show RAID1 delivering 10-30% better read speeds due to its ability to distribute read operations across multiple disks.
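
Note that hdparm -t issues a single sequential stream, which understates RAID1's advantage, since md balances reads mainly across concurrent readers. A more telling comparison (a sketch, assuming fio is installed) uses several parallel random readers; repeat it against /dev/vg0/lv_mirror:


fio --name=randread --filename=/dev/md0 --rw=randread --bs=4k \
    --direct=1 --ioengine=libaio --numjobs=4 --runtime=30 \
    --time_based --group_reporting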

Standard RAID1 Setup:


mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0

LVM Mirroring with Log:


pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1
vgcreate vg0 /dev/sda1 /dev/sdb1 /dev/sdc1
lvcreate -m1 -L 50G -n lv_mirror vg0 /dev/sda1 /dev/sdb1 /dev/sdc1  # third PV carries the mirror log

To make LVM mirroring safer, disable the drives' volatile write caches (or use drives with power-loss protection):


for disk in /dev/sd[a-c]; do
    hdparm -W0 "$disk"   # -W0 turns off the drive's volatile write cache
done
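
hdparm settings do not persist across reboots, so reapply them at boot. One way (a sketch; the hdparm path and the device match are assumptions to adapt) is a udev rule:


# /etc/udev/rules.d/69-disable-write-cache.rules
ACTION=="add", KERNEL=="sd[a-c]", RUN+="/usr/sbin/hdparm -W 0 /dev/%k"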

If you're limited to two disks, consider this workaround using a mirrored log:


lvcreate -m1 --mirrorlog mirrored -L 50G -n lv_mirror vg0 /dev/sda1 /dev/sdb1
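
Here the log is mirrored across the same two disks as the data, so a single-disk failure leaves a usable log without dedicating a third device, at the price of extra log writes to both disks.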

Choose RAID1 when:

  • Maximum read performance is critical
  • You need simple, well-documented recovery procedures
  • You are working with hardware that handles write caching properly

Choose LVM mirroring when:

  • You need flexibility in volume management
  • You can dedicate a third disk to the mirror log
  • You are willing to implement the additional safety measures described above