In Xen environments, we enjoyed direct LVM volume passthrough using phy: device mapping, where guests interacted with pre-formatted block devices exactly as they existed on the host. This gave us two critical advantages: the host could manage guest storage with ordinary LVM tools (snapshots, resizing), and there was no image-file layer adding overhead. A typical configuration:
# Xen disk configuration example
disk = [
'phy:/dev/vg1/guest1-swap,sda1,w',
'phy:/dev/vg1/guest1-disk,sda2,w',
'phy:/dev/vg1/guest1-tmp,sda3,w'
]
KVM/QEMU traditionally wraps everything in image files (even on LVM), which breaks this workflow. Let's explore how to regain those capabilities.
For raw LVM access in KVM, we have several approaches. The simplest maps a single LV to the guest as a whole virtio disk:
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/vg1/guest1-root'/>
<target dev='vda' bus='virtio'/>
</disk>
Method 1: Libvirt XML Configuration
Add a disk entry for each volume. Unlike Xen's phy: notation, libvirt cannot map a volume to a partition inside an existing guest disk (targets like sda1 are invalid); each LV appears in the guest as its own disk:
<devices>
<disk type='block' device='disk'>
<source dev='/dev/vg1/guest1-swap'/>
<target dev='sda'/>
</disk>
<disk type='block' device='disk'>
<source dev='/dev/vg1/guest1-root'/>
<target dev='sdb'/>
</disk>
</devices>
With direct LVM passthrough, mounting becomes straightforward:
# Create read-only snapshot (classic snapshots need a size for the CoW area)
lvcreate -s -L 1G -n guest-snap -p r /dev/vg1/guest1-root
# Mount snapshot
mount -o ro,noload /dev/vg1/guest-snap /mnt/guest_backup
# Backup operations here...
rsync -a /mnt/guest_backup/ /backup/storage/
# Cleanup
umount /mnt/guest_backup
lvremove -f /dev/vg1/guest-snap
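The snapshot, mount, rsync, and cleanup steps above lend themselves to a small script so the snapshot is always unmounted and removed even when the backup fails part-way. A sketch, reusing the example names (vg1, guest1-root, /backup/storage/) and assuming a 1G CoW area; DRY_RUN=1 prints the commands instead of running them:

```shell
# Sketch: snapshot -> mount -> rsync -> cleanup, with command preview support.
# run() echoes instead of executing when DRY_RUN=1 (run as root otherwise).
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

backup_guest_lv() {
  vg=$1; lv=$2; dest=$3
  snap="${lv}-snap"; mnt=/mnt/guest_backup
  run lvcreate -s -L 1G -n "$snap" -p r "/dev/$vg/$lv"  # 1G CoW area; adjust to expected write rate
  run mkdir -p "$mnt"
  run mount -o ro,noload "/dev/$vg/$snap" "$mnt"
  run rsync -a "$mnt/" "$dest"
  # Cleanup runs even if rsync failed (no set -e in this function)
  run umount "$mnt"
  run lvremove -f "/dev/$vg/$snap"
}

# Preview the commands without touching the system:
#   DRY_RUN=1 backup_guest_lv vg1 guest1-root /backup/storage/
```

The DRY_RUN switch is worth keeping in any script that manipulates LVs as root: you can review the exact command sequence before trusting it.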
For live expansion without guest downtime:
# On host
lvextend -L+10G /dev/vg1/guest1-root
# Inside guest (via virsh console or SSH), once the guest sees the new size
resize2fs /dev/sda2
Note that a virtio disk does not grow inside the guest on its own; the host must announce the new size (virsh blockresize for libvirt guests, the block_resize monitor command for plain QEMU).
For libvirt-managed guests, use this sequence:
# Extend the LV first
lvresize -L +5G /dev/vg1/guest-disk
# Notify libvirt about the new size
virsh blockresize --domain vm1 --path /dev/vg1/guest-disk --size 15G
# Then, inside the guest, grow the partition if the filesystem lives in one
growpart /dev/vda 1    # growpart is in cloud-utils; parted resizepart also works
# ...and finally resize the filesystem
resize2fs /dev/vda1
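One pitfall in the sequence above: lvresize with -L +5G is relative, while virsh blockresize expects the new absolute size. A small wrapper that takes the absolute size once and passes it to both commands avoids mismatches. This is a sketch; the domain and LV names are the examples from above, and DRY_RUN=1 previews the commands:

```shell
# Sketch: resize the host LV and the libvirt block device to the same ABSOLUTE size.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

grow_guest_disk() {
  dom=$1    # libvirt domain, e.g. vm1
  dev=$2    # backing LV, e.g. /dev/vg1/guest-disk
  size=$3   # new absolute size, e.g. 15G
  run lvresize -L "$size" "$dev"
  run virsh blockresize --domain "$dom" --path "$dev" --size "$size"
}

# Preview: DRY_RUN=1 grow_guest_disk vm1 /dev/vg1/guest-disk 15G
```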
For thin provisioning support:
<disk type='block' device='disk'>
<driver name='qemu' type='raw' discard='unmap'/>
<source dev='/dev/vg1/guest-thin'/>
<target dev='vda' bus='virtio'/>
</disk>
This configuration enables TRIM/DISCARD passthrough, so blocks freed in the guest can be returned to the thin pool.
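Note that discard='unmap' only exposes the capability: space is reclaimed only when the guest actually issues discards, either periodically (fstrim, e.g. via fstrim.timer) or continuously via the discard mount option. A guest fstab entry for the continuous variant might look like this (device name and mount point are illustrative):

```
/dev/vda1  /  ext4  defaults,discard  0  1
```

Periodic fstrim is usually preferred over the discard mount option, since continuous discards can add latency on busy filesystems.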
Permission Problems: Ensure proper SELinux contexts. Note that chcon is a one-off relabel; restorecon resets labels to the policy defaults and will undo it, so register the context first if it should survive a relabel:
# One-off relabel
chcon -t svirt_image_t /dev/vg1/guest*
# Persistent: record the pattern, then (re)apply policy labels
semanage fcontext -a -t svirt_image_t '/dev/vg1/guest.*'
restorecon -Rv /dev/vg1/
Cache Modes: For better performance with direct LVM:
<driver name='qemu' type='raw' cache='none' io='native'/>
Multi-path Considerations: When using SAN storage:
<disk type='block' device='disk'>
<source dev='/dev/disk/by-id/scsi-3600508b4000de6b00000000000013a4'/>
...
</disk>
Method 2: Direct QEMU Command Line
For usage without libvirt, pass the volume straight to qemu-system-x86_64:
qemu-system-x86_64 \
  -drive file=/dev/vg1/guest1-disk,format=raw,if=virtio
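The if=virtio shorthand expands to an explicit backend/frontend pair; spelling it out makes room for tuning options such as the cache and AIO mode. A sketch of the equivalent long form (the id lvdisk0 is an arbitrary label; aio=native requires the O_DIRECT-based cache=none mode):

```shell
qemu-system-x86_64 \
  -drive file=/dev/vg1/guest1-disk,format=raw,if=none,id=lvdisk0,cache=none,aio=native \
  -device virtio-blk-pci,drive=lvdisk0
```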
When using direct LVM attachment, standard Linux tools remain available for host-side management:
# Create read-only snapshot (classic snapshots need a size for the CoW area)
lvcreate -s -L 1G -n guest1-backup -p r /dev/vg1/guest1-disk
# Mount snapshot
mkdir /mnt/guest-backup
mount -o ro,noload /dev/vg1/guest1-backup /mnt/guest-backup
The complete workflow for live expansion:
# On host:
lvextend -L+10G /dev/vg1/guest1-disk
# Announce the new size to the guest; virtio-blk has no SCSI-style rescan,
# so use virsh blockresize (or QEMU's block_resize) as shown earlier
# In guest OS:
parted /dev/vda resizepart 2 100%
resize2fs /dev/vda2
Direct LVM attachment offers several advantages:
- Bypasses filesystem overhead when using raw volumes
- Enables use of LVM caching
- Allows direct access to storage features like TRIM/DISCARD
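The LVM caching point deserves a concrete example: because the guest sees only a plain block device, the host can transparently attach an SSD-backed dm-cache volume to its LV. A sketch; the VG, LV, and fast-device names are placeholders, the fast PV must already belong to the VG, and DRY_RUN=1 previews the commands:

```shell
# Sketch: front a guest LV with a dm-cache volume on a fast PV (names are placeholders).
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

cache_guest_lv() {
  vg=$1; lv=$2; fast_pv=$3; size=$4
  # Allocate the cache volume on the fast device, inside the same VG
  run lvcreate -L "$size" -n "${lv}-cache" "$vg" "$fast_pv"
  # Attach it as a cache to the guest's LV; the guest needs no changes
  run lvconvert --type cache --cachevol "${lv}-cache" "$vg/$lv"
}

# Preview: DRY_RUN=1 cache_guest_lv vg1 guest1-disk /dev/nvme0n1p1 20G
```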
Important security measures when using this configuration:
# Set proper SELinux context:
chcon -t svirt_image_t /dev/vg1/guest1-*

Recommended libvirt configuration:
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source dev='/dev/vg1/guest1-disk'>
<seclabel model='dac' relabel='no'/>
</source>
<target dev='vda' bus='virtio'/>
</disk>
Frequent challenges and solutions:
# Permission denied errors:
usermod -a -G disk qemu
# LVM locking conflicts:
lvchange -a y --sysinit /dev/vg1/guest1-disk
# Cache coherency:
sync
echo 3 > /proc/sys/vm/drop_caches