How to Force Remove a Stuck Logical Volume in Linux (LVM Removal Error Fix)



When working with LVM (Logical Volume Manager) in Linux, you might encounter a situation where a logical volume refuses to be removed despite all attempts. The key error messages look like this:

# lvremove /dev/my-volumes/volume-1
Can't remove open logical volume "volume-1"

# lvchange -an -v /dev/my-volumes/volume-1
LV my-volumes/volume-1 in use: not deactivating

You may also see I/O errors such as read failed after 0 of 4096 at 0, which suggest the device is in a failed state or has hardware problems. The "in use" message means some process or kernel subsystem still holds the volume open, even if nothing is actively using it.
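
Before digging in, it helps to confirm whether the read errors are persistent. A minimal check, reading the first block of the same volume directly:

# dd if=/dev/my-volumes/volume-1 of=/dev/null bs=4096 count=1

If this fails with the same Input/output error, the problem sits below LVM, at the device or hardware level.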

Step 1: Verify Active Handles

First check what's holding the volume open:

# lsof +f -- /dev/my-volumes/volume-1
# fuser -v /dev/my-volumes/volume-1
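
Note that lsof and fuser only see userspace opens. The volume can also be held open inside the kernel, for example by a snapshot, a device-mapper layer stacked on top, swap, or a loop device. A quick way to check for kernel-level holders (assuming the LV resolves to /dev/dm-1 on your system):

# readlink -f /dev/my-volumes/volume-1   # Find the underlying dm-N node
# ls /sys/block/dm-1/holders/            # Any entry here is a kernel holder
# swapon --show                          # Rule out the LV being used as swap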

Step 2: Force Deactivation

When normal deactivation fails, inspect the device-mapper state first, then remove the mapping:

# dmsetup info -c /dev/my-volumes/volume-1
# dmsetup table
# dmsetup remove /dev/my-volumes/volume-1
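
If dmsetup info reports a non-zero open count, something still references the device. The count can be pulled out directly with the columns output (field names per current lvm2 releases; older versions may differ):

# dmsetup info -c -o name,open my--volumes-volume--1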

Step 3: Nuclear Option (DANGER ZONE)

If the volume is truly stuck and not critical, use the device mapper directly:

# dmsetup remove --force /dev/mapper/my--volumes-volume--1
# lvremove --force /dev/my-volumes/volume-1
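
Note the name translation above: under /dev/mapper, hyphens inside a VG or LV name are doubled and the VG and LV parts are joined with a single hyphen, so my-volumes/volume-1 becomes my--volumes-volume--1. If you are unsure of the exact mapper name, list the devices first:

# dmsetup ls | grep volume   # Confirm the exact mapper name before removing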

For particularly stubborn cases:

  • Reboot into single-user mode
  • Use lvremove --force --force (double force flag)
  • Manually restore a known-good copy of the LVM metadata with vgcfgrestore (see the sketch after this list)
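
For the metadata route, vgcfgrestore rolls a volume group back to an archived configuration rather than clearing anything. A sketch (the archive filename below is hypothetical; take the real one from the --list output on your system):

# vgcfgrestore --list my-volumes   # Show archived metadata versions
# vgcfgrestore -f /etc/lvm/archive/my-volumes_00042-1234567890.vg my-volumes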

To avoid this situation:

# Properly unmount before removal:
umount /mnt/volume
lvchange -an /dev/my-volumes/volume-1
lvremove /dev/my-volumes/volume-1

Finally, because the I/O errors may indicate failing hardware, check the underlying disk:

# dmesg | grep -i error
# smartctl -a /dev/sdX
# pvck -v /dev/sdX
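
Replace /dev/sdX with the disk that actually backs the logical volume. If you are not sure which disk that is, LVM can report the mapping:

# lvs -o lv_name,vg_name,devices my-volumes   # Map the LV to its physical device(s)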

If the volume still refuses to go away, here is a more methodical second pass over the same problem. The removal fails because the system still considers the volume to be in use, even when it appears inactive, and I/O errors such as /dev/dm-1: read failed after 0 of 4096 at 0: Input/output error point to possible filesystem corruption or device-mapping issues.

Method 1: Force Deactivation First

Before attempting removal, ensure the volume is properly deactivated:

# lvchange -an /dev/my-volumes/volume-1
# lvremove /dev/my-volumes/volume-1
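
Before retrying the removal, it is worth confirming that the deactivation actually took effect, for example:

# lvs -o lv_name,lv_active my-volumes/volume-1   # Should no longer report active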

Method 2: Using dmsetup for Stubborn Cases

When standard methods fail, check device mapper status:

# dmsetup info /dev/my-volumes/volume-1
# dmsetup remove /dev/my-volumes/volume-1
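
Before reaching for --force, recent dmsetup releases offer two gentler options worth trying first (check dmsetup --help to confirm your version supports them):

# dmsetup remove --retry my--volumes-volume--1      # Retry for a few seconds
# dmsetup remove --deferred my--volumes-volume--1   # Remove once the last opener closes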

Method 3: The Nuclear Option (Force Removal)

For completely stuck volumes where filesystem access is impossible:

# lvremove --force --force /dev/my-volumes/volume-1

The double --force flag is significant: the first bypasses the usual safety checks, and the second forces removal even if the volume is still considered open.
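
Because this path is destructive, a dry run first is cheap insurance. LVM's --test flag goes through the motions without changing any metadata:

# lvremove --test --force --force /dev/my-volumes/volume-1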

If you're still encountering problems:

# lsof /dev/my-volumes/volume-1     # Check for processes using the volume
# lsblk                              # Verify device tree
# pvscan; vgscan; lvscan            # Rescan devices and refresh LVM's view

As general precautions:

  • Always unmount filesystems before LV operations
  • Stop any services accessing the volume
  • Consider using --test flag for risky operations
  • Maintain regular LVM metadata backups with vgcfgbackup (see the sketch after this list)
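
For the last point, a backup takes seconds and is what makes the vgcfgrestore route above viable. A minimal sketch:

# vgcfgbackup my-volumes                        # Writes to /etc/lvm/backup/my-volumes
# vgcfgbackup -f /root/my-volumes.vg my-volumes # Or back up to an explicit file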

Here's how I recently resolved a similar issue on a production server:

# umount /mnt/data                  # First attempt unmount
# fuser -vm /dev/mapper/vg-data     # Identify stubborn processes
# kill -9 1234 5678                 # Terminate offenders
# dmsetup remove /dev/mapper/vg-data # Clear device mapping
# lvremove --force --force /dev/vg/data # Final removal
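
Afterwards, verify that both LVM and the device mapper agree the volume is gone:

# lvs vg                             # The data LV should no longer be listed
# dmsetup ls | grep vg-data          # No output means the mapping is cleared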