LVM Mirror vs MDADM Mirror: Performance, Logging, and Best Practices for Xen Disk Storage



When creating mirrored volumes in LVM, the --mirrorlog option plays a crucial role in maintaining consistency. The log stores metadata about which parts of the mirror are synchronized. There are three log types:

# Example LVM mirror creation with different log types
lvcreate --type mirror -m 1 -L 10G -n lv_mirror --mirrorlog core vg_name /dev/sda1 /dev/sdb1
lvcreate --type mirror -m 1 -L 10G -n lv_mirror --mirrorlog disk vg_name /dev/sda1 /dev/sdb1 /dev/sdc1
lvcreate --type mirror -m 1 -L 10G -n lv_mirror --mirrorlog mirrored vg_name /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

Using --mirrorlog core keeps the log in memory, which avoids the extra log device and its I/O, but since no on-disk record of synchronized regions survives a crash, the entire mirror must be resynchronized afterwards. The disk-based log (the default) persists that record across reboots but requires a small additional device to hold it (/dev/sdc1 in the example above); a mirrored log additionally mirrors the log itself across two devices.

For Xen disk storage, both approaches have trade-offs:

  • LVM-only stack: Simpler management but may have higher overhead due to layered operations
  • MDADM+LVM: Potentially better performance but more complex to manage

To compare the two stacks, run an identical workload against each, for example:

# Sample benchmark command
fio --filename=/dev/vg_name/lv_mirror --rw=randrw --bs=4k --ioengine=libaio --iodepth=64 --runtime=60 --name=test
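If you ask fio for machine-readable output (--output-format=json --output=result.json), pulling out the numbers to compare between the two stacks is straightforward. A minimal sketch, assuming the standard layout of fio's JSON report (per-job bandwidth in KiB/s under jobs[].read.bw and jobs[].write.bw); summarize_fio is just an illustrative helper, not part of fio:

```python
import json

def summarize_fio(report: dict) -> dict:
    """Reduce a fio JSON report to read/write bandwidth and IOPS per job."""
    summary = {}
    for job in report["jobs"]:
        summary[job["jobname"]] = {
            "read_bw_kib": job["read"]["bw"],
            "read_iops": job["read"]["iops"],
            "write_bw_kib": job["write"]["bw"],
            "write_iops": job["write"]["iops"],
        }
    return summary

# Trimmed-down example report (real fio output has many more fields);
# in practice you would load it with json.load(open("result.json")).
report = {"jobs": [{"jobname": "test",
                    "read": {"bw": 51200, "iops": 12800.0},
                    "write": {"bw": 49800, "iops": 12450.0}}]}
print(summarize_fio(report)["test"]["read_iops"])
```

Run the same job file against both the pure-LVM mirror and the MDADM-backed LV and diff the two summaries.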

Based on production experience:

  1. For critical systems, use MDADM RAID1 + LVM: md is mature and its monitoring tooling (/proc/mdstat, mdadm --detail) is richer
  2. For flexibility in resizing, LVM mirroring alone may be preferable
  3. Always test both configurations with your specific workload

Example Xen configuration using LVM mirror:

disk = [
    'phy:/dev/vg_name/lv_mirror,xvda,w',
    'file:/path/to/other/storage,xvdb,w'
]
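If you provision many domUs from the same volume group, generating these disk stanzas from a list keeps the configs consistent. A small illustrative sketch (xen_disk_lines is a hypothetical helper, not a Xen tool):

```python
def xen_disk_lines(volumes):
    """Build a Xen 'disk =' stanza from (backend_path, target_dev, mode) tuples."""
    entries = ["    '{0},{1},{2}',".format(path, dev, mode)
               for path, dev, mode in volumes]
    return "disk = [\n" + "\n".join(entries) + "\n]"

print(xen_disk_lines([
    ("phy:/dev/vg_name/lv_mirror", "xvda", "w"),
    ("file:/path/to/other/storage", "xvdb", "w"),
]))
```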

For optimal performance with LVM mirrors:

# Tune parameters for better performance
# --writemostly/--writebehind apply to raid1-type LVs (lvcreate --type raid1)
lvchange --writemostly /dev/sdb1 --writebehind 1024 vg_name/lv_mirror
# stripe_cache_size only affects MD RAID4/5/6 arrays; it has no effect on RAID1
echo 2048 > /sys/block/mdX/md/stripe_cache_size

Remember to monitor both approaches:

# Monitoring commands
lvs -a -o +devices,segtype
mdadm --detail /dev/mdX
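For automated monitoring it helps to parse /proc/mdstat rather than eyeball it. A sketch that flags degraded arrays by reading the [expected/active] device counter that mdstat prints per array (degraded_arrays is an illustrative helper; feed it the contents of /proc/mdstat):

```python
import re

def degraded_arrays(mdstat_text: str):
    """Return names of md arrays whose active-device count is below expected.

    /proc/mdstat prints a '[n/m]' counter per array, where n is the
    expected and m the currently active number of devices.
    """
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        header = re.match(r"^(md\d+)\s*:", line)
        if header:
            current = header.group(1)
            continue
        counts = re.search(r"\[(\d+)/(\d+)\]", line)
        if current and counts:
            expected, active = int(counts.group(1)), int(counts.group(2))
            if active < expected:
                degraded.append(current)
            current = None
    return degraded

sample = """\
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      1953382400 blocks super 1.2 [2/1] [U_]
"""
print(degraded_arrays(sample))  # -> ['md0']
```

In production you would read the real file (open("/proc/mdstat").read()) and alert when the list is non-empty.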

When creating mirrored volumes in LVM, the mirror log plays a crucial role in tracking which parts of the mirror are in sync. The log contains:

  • Dirty region maps (tracking unsynchronized blocks)
  • Recovery information after crashes
  • Metadata about synchronization status
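The dirty-region map is essentially a bitmap with one bit per region of the LV, so its size follows directly from the LV size and the region size. A rough sketch, assuming lvcreate's common 512 KiB default --regionsize (check mirror_region_size in your lvm.conf; mirror_log_regions is a hypothetical helper):

```python
def mirror_log_regions(lv_size_bytes: int, region_size_bytes: int = 512 * 1024) -> int:
    """Number of regions the dirty-region log must track (one bit each).

    512 KiB is assumed as the default --regionsize for illustration;
    the actual value is configurable in lvm.conf.
    """
    return -(-lv_size_bytes // region_size_bytes)  # ceiling division

# A 10 GiB mirror with a 512 KiB region size needs 20480 tracked regions:
print(mirror_log_regions(10 * 1024**3))  # -> 20480
```

Larger regions mean a smaller log and less log I/O, but more data to resync when a region is marked dirty.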

Using --mirrorlog core stores this information in memory instead of a separate disk:

lvcreate -m1 --mirrorlog core -L10G -n lv_mirror vg_name /dev/sda1 /dev/sdb1

Log type        Pros                            Cons
Core (memory)   No extra disk needed, faster    Risk of data loss on crash
Disk            More reliable recovery          Requires separate partition

For Xen domU storage, I recommend this hybrid approach:

# Create MDADM RAID1
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# LVM on top
pvcreate /dev/md0
vgcreate vg_xen /dev/md0

Benchmark results show:

  • MDADM+LVM: ~5% overhead
  • Pure LVM mirror: ~8-12% overhead
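The overhead figures above are just (1 - mirrored/baseline) against an un-mirrored run of the same fio job; computing it explicitly avoids mental arithmetic errors when comparing runs (mirror_overhead_pct is an illustrative helper, and the IOPS numbers below are made up):

```python
def mirror_overhead_pct(baseline_iops: float, mirrored_iops: float) -> float:
    """Percentage of throughput lost relative to the un-mirrored baseline."""
    return (1.0 - mirrored_iops / baseline_iops) * 100.0

# E.g. 12800 IOPS on a plain LV vs 12160 IOPS on the mirror:
print(round(mirror_overhead_pct(12800, 12160), 1))  # -> 5.0
```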

For optimal Xen storage:

# Create thin pool on mirrored storage
lvcreate -L 100G -n thin_pool vg_xen
lvconvert --type thin-pool vg_xen/thin_pool

# Create thin volume for domU
lvcreate -V 20G -T vg_xen/thin_pool -n domU1
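Thin volumes let you provision more virtual capacity than the pool physically has, so it's worth tracking the overcommit ratio as you add domUs (thin_overcommit_ratio is an illustrative helper; lvs -o data_percent gives the real-time pool usage):

```python
def thin_overcommit_ratio(pool_size_gib: float, virtual_sizes_gib) -> float:
    """Ratio of total provisioned virtual size to physical pool size.

    A ratio above 1.0 means the pool is overcommitted and can fill up
    before the domUs think their disks are full.
    """
    return sum(virtual_sizes_gib) / pool_size_gib

# Five 20 GiB domUs on the 100 GiB pool above: fully committed, not over
print(thin_overcommit_ratio(100, [20] * 5))  # -> 1.0
```

Once the ratio goes above 1.0, set up monitoring on the pool's data_percent, because an exhausted thin pool will error out writes for every domU on it.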

Essential monitoring commands:

# Check MDADM status
cat /proc/mdstat

# Check LVM mirror status
lvs -a -o +devices,segtype