When working with KVM virtualization, you might encounter situations where you need to access the filesystem of a guest VM directly from the host machine. This becomes particularly challenging when the guest uses LVM (Logical Volume Manager) for its storage configuration.
The key issue appears when you try to mount the second partition, which contains the LVM volumes. While the first partition (typically /boot) mounts without trouble, the LVM partition requires additional steps. Running kpartx against the guest's disk image creates device mappings for both partitions:
# kpartx -av /dev/VolGroup00/kvm101_img
add map kvm101_img1 : 0 208782 linear /dev/VolGroup00/kvm101_img 63
add map kvm101_img2 : 0 125612235 linear /dev/VolGroup00/kvm101_img 208845
Here's the complete process to access LVM partitions from a KVM guest image:
# Activate partition mappings
kpartx -av /dev/VolGroup00/kvm101_img
# Scan for LVM volumes
pvscan
vgscan
lvscan
# Activate the volume group (may need to use --refresh)
vgchange -ay VolGroup00
# Verify the logical volumes are now visible
ls -l /dev/mapper/
# Mount the guest's root filesystem (adjust the volume name to match your layout)
mount /dev/mapper/VolGroup00-root /mnt
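A quick sanity check confirms you are looking at the guest's root filesystem:
# Verify the mount and peek at the guest's tree
df -h /mnt
ls /mnt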
If you encounter errors like "device-mapper: reload ioctl failed", try these additional steps:
# Deactivate any existing mappings
vgchange -an VolGroup00
# Remove stale device mappings (caution: this tries to remove every unused
# device-mapper mapping on the host, not just this image's)
dmsetup remove_all
# Then retry the activation process
vgchange -ay VolGroup00
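If activation still fails, inspecting the device-mapper state can show which mapping is stale; the device name below is from this example:
# Show all device-mapper devices and their dependency tree
dmsetup ls --tree
# Show the state of one specific mapping
dmsetup info kvm101_img2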
For more complex scenarios, consider using libguestfs tools:
# Install guestfish if needed
yum install libguestfs-tools # or apt-get equivalent
# Mount and explore the VM image
guestfish --ro -a /dev/VolGroup00/kvm101_img
# Inside guestfish shell:
> run
> list-filesystems
> mount /dev/VolGroup00/root /
> ls /etc
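If you only need to pull a few files out of the guest, the same package ships non-interactive helpers such as virt-cat and virt-copy-out; the /tmp/guest-etc destination below is just an example:
# Print a single file from the guest image
virt-cat -a /dev/VolGroup00/kvm101_img /etc/fstab
# Copy a directory tree out of the guest
mkdir -p /tmp/guest-etc
virt-copy-out -a /dev/VolGroup00/kvm101_img /etc /tmp/guest-etc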
Let's now walk through the whole procedure end to end on the same image. Direct access of this kind typically comes up during recovery, forensic analysis, or when you need to modify system files without booting the VM.
Recall the kpartx output above: it shows two partitions, a small boot partition (kvm101_img1) and a larger LVM partition (kvm101_img2).
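The sizes in that output are counts of 512-byte sectors, so a quick shell calculation confirms the layout:
# 208782 sectors * 512 bytes, in MiB (the boot partition)
echo $((208782 * 512 / 1024 / 1024))    # prints 101
# 125612235 sectors * 512 bytes, in GiB (the LVM partition)
echo $((125612235 * 512 / 1024 / 1024 / 1024))    # prints 59
The roughly 60 GiB LVM partition matches the 55g root plus 4g swap volumes shown in the lvs output below.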
The boot partition is straightforward to mount, since it is a plain ext3/ext4 filesystem rather than an LVM physical volume:
# mkdir -p /mnt/boot
# mount /dev/mapper/kvm101_img1 /mnt/boot
The main challenge comes with the LVM partition. Here's how to properly scan and activate it:
# vgscan
# vgchange -ay
# lvs
  LV   VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root VolGroup00 -wi-a----- 55.00g
  swap VolGroup00 -wi-a-----  4.00g
Once the LVM volumes are activated, you can mount them normally:
# mkdir -p /mnt/root
# mount /dev/VolGroup00/root /mnt/root
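For recovery or forensic work it is safer to mount read-only, so nothing in the image can be modified:
# mount -o ro /dev/VolGroup00/root /mnt/root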
Here's a complete script that handles the entire process:
#!/bin/bash
set -euo pipefail

VM_IMAGE="/dev/VolGroup00/kvm101_img"
MOUNT_POINT="/mnt/vm_root"

# Create partition mappings for the guest image
kpartx -av "$VM_IMAGE"

# Scan for and activate LVM volumes
vgscan
vgchange -ay

# Create the mount point and mount the guest's root LV
mkdir -p "$MOUNT_POINT"
mount /dev/VolGroup00/root "$MOUNT_POINT"

# Optionally mount the boot partition inside it
mkdir -p "$MOUNT_POINT/boot"
mount /dev/mapper/kvm101_img1 "$MOUNT_POINT/boot"
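Save the script under any name you like (mount_vm.sh here) and run it as root:
chmod +x mount_vm.sh
sudo ./mount_vm.sh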
If you encounter errors, consider these solutions:
- Ensure the LVM2 tools and kpartx are installed on the host
- Check for volume group name conflicts between host and guest (use vgrename if needed)
- For damaged systems, try activating volumes with vgchange -ay --partial
When finished, properly unmount and deactivate:
umount /mnt/vm_root/boot
umount /mnt/vm_root
vgchange -an VolGroup00
kpartx -d /dev/VolGroup00/kvm101_img
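To confirm the cleanup worked, check that no mappings for the image remain; both commands should print nothing:
ls /dev/mapper/ | grep kvm101
dmsetup ls | grep kvm101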