Data Recovery in Non-RAID LVM: What Happens When a Single Disk Fails?


When you use LVM (Logical Volume Manager) without any RAID configuration, data is laid out fundamentally differently from a traditional RAID array. In its default linear mode, LVM simply concatenates physical disks into larger logical volumes, with no redundancy of any kind.

If a single disk fails in a non-RAID LVM setup:

  • The logical volume becomes unusable as a whole, not just the portion on the failed disk; how much data can ultimately be salvaged depends on which extents lived on the lost drive
  • LVM refuses to activate the volume group because a physical volume (identified by its UUID) is missing; the metadata itself usually survives, since by default every PV carries a copy of it
  • The volume group stays unavailable until you manually intervene

Here's what a healthy setup looks like from LVM's perspective:


# Example of checking LVM status before failure
$ sudo vgdisplay
  --- Volume group ---
  VG Name               my_media_vg
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  8
  VG Access             read/write

After a disk failure, any attempt to access the volume group would result in errors:


$ sudo vgdisplay
  Couldn't find device with uuid 'ABCD-1234-5678-EFGH'.
  Cannot process volume group my_media_vg

While complete recovery is challenging, partial recovery might be possible with:

  1. vgcfgrestore to restore the volume group metadata from an automatic backup
  2. Manual reconstruction using pvcreate --uuid and vgcfgrestore (see the sketch below)
  3. Specialized data recovery tools for the underlying filesystem
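
A minimal sketch of option 2, assuming the automatic metadata backup in /etc/lvm/backup is intact and that /dev/sde is the replacement disk (both paths are assumptions; the UUID is the one from the error above):


# Recreate a stand-in PV carrying the missing disk's UUID (this restores
# LVM's bookkeeping only, NOT the data that was on the failed disk)
$ sudo pvcreate --uuid 'ABCD-1234-5678-EFGH' \
    --restorefile /etc/lvm/backup/my_media_vg /dev/sde

# Restore the volume group metadata from the backup file
$ sudo vgcfgrestore -f /etc/lvm/backup/my_media_vg my_media_vg

# Activate what remains; extents that lived on the failed disk will read as errors
$ sudo vgchange -ay --activationmode partial my_media_vg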

If you must use LVM without RAID, consider these alternatives:


# Option 1: Separate mount points per disk
$ sudo mkdir /mnt/disk{1,2,3}
$ sudo mount /dev/sdb1 /mnt/disk1
$ sudo mount /dev/sdc1 /mnt/disk2

# Option 2: Use mergerfs for pooling without LVM
$ sudo mergerfs -o defaults,allow_other /mnt/disk1:/mnt/disk2 /media/pool
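
To make the pool persistent across reboots, mergerfs also supports a fuse.mergerfs fstab entry (branch paths as in the example above):


# /etc/fstab entry for the same pool
/mnt/disk1:/mnt/disk2  /media/pool  fuse.mergerfs  defaults,allow_other  0 0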

For a media server where expandability is important but redundancy isn't critical, consider this hybrid approach:


# Create individual filesystems on each disk
$ for disk in /dev/sd{b,c,d}; do
    sudo mkfs.ext4 "$disk"
    sudo mkdir -p "/mnt/${disk##*/}"
    sudo mount "$disk" "/mnt/${disk##*/}"
  done

# Use symbolic links to create a unified view
# (mount points match the loop above: /mnt/sdb, /mnt/sdc, ...)
$ ln -s /mnt/sdb/movies /media/movies
$ ln -s /mnt/sdc/music /media/music
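
If an application refuses to follow symlinks, a bind mount gives the same unified view (same hypothetical paths as above):


# Bind-mount a per-disk directory into the shared media tree
$ sudo mount --bind /mnt/sdb/movies /media/movies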

To understand the recovery prospects, it helps to look at how LVM distributes data across disks. In its basic configuration, LVM doesn't provide redundancy - it simply aggregates storage space from multiple physical volumes (PVs) into a single logical volume (LV).

# Example LVM setup commands
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate media_vg /dev/sdb /dev/sdc /dev/sdd
lvcreate -n media_lv -l 100%FREE media_vg

In a non-RAID LVM setup:

  • Metadata usually survives: By default, every PV in the volume group carries a copy of the VG metadata, so losing one disk rarely destroys the metadata itself (you can verify this with the command below).
  • Data loss follows extent allocation: Whatever extents were allocated on the failed disk are gone; extents on the surviving disks may be partially recoverable at the filesystem level.
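
A quick way to confirm which PVs hold metadata copies (a sketch; column names per recent lvm2):

# Count metadata areas on each physical volume
pvs -o pv_name,vg_name,pv_mda_count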

Recovery potential depends on several factors:

# Checking PV allocation
pvs -o+pv_pe_count,pv_pe_alloc_count
# Examining LV segments
lvs -o+segtype,seg_pe_ranges

Even without RAID, you can take these precautions:

  1. Metadata backups (automated in the sketch after this list):
    vgcfgbackup -f /backup/vg_backup media_vg
    
  2. Filesystem choice matters: XFS and Btrfs offer better recovery tools than ext4.
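
A sketch for automating those metadata backups (the destination path and schedule are assumptions; keep the copies off the LVM pool itself):

# /etc/cron.d/lvm-meta-backup -- daily LVM metadata backup (sketch)
0 3 * * *  root  vgcfgbackup -f /backup/lvm/media_vg-meta media_vg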

If a failed disk must be cut out so the volume group can come back in a reduced state:

# 1. Identify the failed disk
dmesg | grep -i error
# 2. Remove the missing PV from the VG
#    (add --force if logical volumes had extents on it; those extents are lost)
vgreduce --removemissing media_vg
# 3. Replace the hardware
# 4. Add the new disk
pvcreate /dev/sde
vgextend media_vg /dev/sde
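
Before remounting, it's worth a dry-run filesystem check on the reduced volume (assuming the XFS filesystem suggested above; xfs_repair requires it to be unmounted):

# Reactivate the VG, then check without modifying anything
vgchange -ay media_vg
xfs_repair -n /dev/mapper/media_vg-media_lv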

For media servers where redundancy isn't critical but availability is:

  • Understand allocation policy: --alloc anywhere merely relaxes LVM's allocation restrictions and doesn't spread data for you; with the default linear layout, extents fill one disk before spilling onto the next, which at least keeps a failure's damage localized to whole files rather than slices of everything
  • Implement regular filesystem-level integrity checks (xfs_repair needs the filesystem unmounted; xfs_scrub can check it online), as scheduled below:
    xfs_repair -n /dev/mapper/media_vg-media_lv
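
One way to schedule that check (a sketch; the mount point and timing are assumptions):

# /etc/cron.d/media-scrub -- weekly online XFS check (sketch)
# xfs_scrub -n inspects a mounted filesystem without repairing it
0 4 * * 0  root  xfs_scrub -n /media 2>&1 | logger -t media-scrub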