When working with Logical Volume Management (LVM) in Linux, you might encounter a situation where your logical volumes become inactive during system boot. This typically manifests as errors like "Volume group not found" during early boot, or as inactive volumes when you inspect the system from an emergency shell such as BusyBox.
First, let's verify the current state of your volumes from a recovery environment:
# vgscan
# vgchange -ay
# lvdisplay
If you're seeing inactive volumes even after attempting activation, we need to investigate further. Common causes include:
- Incorrect initramfs configuration
- Missing LVM utilities in early boot
- Device UUID changes (a quick check is shown after this list)
- Kernel parameter issues
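For the device UUID check mentioned above, compare what blkid reports against the UUIDs referenced in /etc/fstab (and in /etc/default/grub if you refer to the root device by UUID there); a stale UUID left over from a disk or partition change is enough to keep the root logical volume from mounting at boot:
# blkid
# grep -i uuid /etc/fstab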
The most common solution involves regenerating your initramfs image:
# update-initramfs -u -k all
# update-grub
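Before rebooting, it can be worth confirming that the rebuilt image actually contains the LVM userspace and device-mapper pieces. lsinitramfs ships with initramfs-tools on Debian/Ubuntu; the image path below assumes the usual Debian naming:
# lsinitramfs /boot/initrd.img-$(uname -r) | grep -Ei 'lvm|dm-'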
For specific distributions like CentOS/RHEL:
# dracut --force
# grub2-mkconfig -o /boot/grub2/grub.cfg
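On dracut-based systems the equivalent sanity check is lsinitrd, which lists the contents of the initramfs for the running kernel when called without arguments:
# lsinitrd | grep -i lvm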
Sometimes you need to modify the GRUB configuration to ensure proper LVM detection:
# Edit /etc/default/grub:
GRUB_CMDLINE_LINUX="rd.lvm.vg=your_volume_group_name"
# Then update GRUB:
# update-grub
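After updating GRUB, you can confirm the parameter actually made it onto the kernel command line. The path below is the Debian/Ubuntu default; on RHEL-family systems the generated file is typically /boot/grub2/grub.cfg:
# grep rd.lvm /boot/grub/grub.cfg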
When stuck in emergency mode, follow these steps:
# vgscan --mknodes
# vgchange -ay
# mount /dev/mapper/your_volume_group-root /mnt
# mount --bind /dev /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys /mnt/sys
# chroot /mnt
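Once inside the chroot, the usual follow-up is to rebuild the initramfs and GRUB configuration from there, then unwind the mounts before rebooting. The commands below assume a Debian-style chroot; substitute dracut and grub2-mkconfig on RHEL-family systems:
# update-initramfs -u -k all
# update-grub
# exit
# umount /mnt/sys /mnt/proc /mnt/dev /mnt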
To prevent future occurrences, ensure your system has proper LVM configuration:
# Check /etc/lvm/lvm.conf for:
activation {
    volume_list = ["your_volume_group"]
    auto_activation_volume_list = ["your_volume_group"]
}
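Note that volume_list is restrictive: if it is set and your volume group is not listed, LVM will refuse to activate it, so an incomplete list can cause exactly the problem described here. To see the values LVM is actually using, lvmconfig prints the currently set configuration; if nothing is printed, the setting is not defined:
# lvmconfig activation/volume_list
# lvmconfig activation/auto_activation_volume_list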
For deeper investigation, boot with these parameters to see detailed LVM operations:
rd.lvm.lv=your_volume_group/root rd.lvm=1 rd.debug
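These parameters can be added for a single boot without editing any files: at the GRUB menu, press 'e' on the boot entry, append them to the end of the line beginning with "linux" (or "linuxefi"), and boot with Ctrl-x. The line below is purely illustrative; the kernel version and root device will differ on your system, and rd.debug makes the boot console very verbose:
linux /vmlinuz-5.10.0 root=/dev/mapper/your_volume_group-root ro rd.lvm.lv=your_volume_group/root rd.lvm=1 rd.debug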
Examine dmesg output for LVM-related messages during boot:
# dmesg | grep -i lvm
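On systemd-based systems the journal is often more useful than the kernel ring buffer, since early messages can scroll out of dmesg; -b limits output to the current boot and -b -1 to the previous one (the latter requires persistent journaling):
# journalctl -b | grep -i lvm
# journalctl -b -1 | grep -i lvm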
During a recent system maintenance session with kernel upgrades, I encountered a perplexing issue where my logical volumes remained inactive after reboot. The system would drop to a busybox shell with errors like "Volume group not found" and "file not found" when attempting manual mounts.
Key symptoms included:
# vgscan
Reading all physical volumes. This may take a while...
No volume groups found
# lvdisplay
No volume groups found
The device mapper showed no active volumes despite the physical volumes being present.
From the busybox shell, these commands helped regain access:
# vgchange -ay
Volume group "vg00" successfully activated
Logical volume "root" successfully activated
# mkdir /mnt/root
# mount /dev/mapper/vg00-root /mnt/root
However, this was just a temporary solution that needed to be repeated after each reboot.
The issue stemmed from LVM's device filtering configuration. The system wasn't scanning the correct devices during early boot. Examining the initramfs revealed missing device nodes:
# lsinitramfs /boot/initrd.img-$(uname -r) | grep lvm
To fix this permanently, we need to:
- Check the LVM device filter in /etc/lvm/lvm.conf, which gets copied into the initramfs; it must not exclude your physical volumes:
# grep filter /etc/lvm/lvm.conf
filter = [ "a|.*|" ]
- Rebuild initramfs:
# update-initramfs -u -k all
- Verify the GRUB configuration contains the rd.lvm.vg parameter:
# grep lvm /etc/default/grub
GRUB_CMDLINE_LINUX="rd.lvm.vg=vg00"
For systems using encrypted LVM, additional steps may be required:
# dracut --force --add lvm --add crypt
Or for Debian-based systems:
# dpkg-reconfigure cryptsetup-initramfs
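It is also worth confirming that /etc/crypttab refers to the encrypted device by a stable identifier (a UUID rather than a bare /dev/sdX name) and that the rebuilt image contains both the crypt and LVM pieces. The listing command below assumes a Debian-style initramfs; use lsinitrd on dracut systems:
# cat /etc/crypttab
# lsinitramfs /boot/initrd.img-$(uname -r) | grep -Ei 'cryptsetup|lvm'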
To avoid future issues:
- Always test kernel updates in a VM first
- Maintain backup initramfs images (see the example after this list)
- Document custom LVM configurations
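For the backup images, a simple approach is to copy the current initramfs aside before a risky kernel or LVM change; the filename below follows the Debian naming scheme. If a rebuilt image misbehaves, the copy can be restored from a rescue environment by copying it back:
# cp /boot/initrd.img-$(uname -r) /boot/initrd.img-$(uname -r).bak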
This issue typically occurs when the initramfs fails to properly initialize LVM components during early boot. The solution involves ensuring proper device filtering and rebuilding the initramfs with correct LVM modules.