When attempting to create a filesystem on /dev/sdb in Ubuntu, you might encounter the frustrating error: "/dev/sdb is apparently in use by the system; will not make a filesystem here". This typically occurs even when the drive isn't mounted or actively used by any visible processes.
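For context, the message usually appears when mkfs is pointed at the raw disk, for example (assuming mkfs.ext4 here; the exact mkfs variant doesn't matter):
# Attempting to format the whole disk
mkfs.ext4 /dev/sdb
# mke2fs refuses with:
# /dev/sdb is apparently in use by the system; will not make a filesystem here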
The system might be holding onto the device for various reasons:
- LVM might have a stale reference
- mdadm (software RAID) could be holding the device
- The drive might be part of an inactive array
- Some background process might have a file handle open
Before proceeding with solutions, let's verify the device status:
# Check mounted filesystems
mount | grep sdb
# Verify swap usage
cat /proc/swaps
# Check for LVM involvement
pvdisplay
vgdisplay
lvdisplay
# Look for RAID arrays
cat /proc/mdstat
mdadm --examine /dev/sdb
# Check for open file handles
lsof | grep sdb
Here are several approaches to resolve this issue:
Method 1: Wipe the Drive Signature
Sometimes, old signatures can confuse the system:
# Wipe filesystem signatures
wipefs -a /dev/sdb
# Alternatively use dd
dd if=/dev/zero of=/dev/sdb bs=512 count=1
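After wiping, it can also help to have the kernel re-read the (now empty) partition table; a minimal sketch using blockdev from util-linux:
# Ask the kernel to re-read the partition table
blockdev --rereadpt /dev/sdb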
Method 2: Reinitialize the Partition Table
Completely recreate the partition structure:
# Start fdisk
fdisk /dev/sdb
# Inside fdisk:
# d (delete all partitions)
# n (create new partition)
# w (write changes)
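If you prefer a non-interactive route, the same result can be scripted with parted (a sketch that creates a single partition spanning the disk; it will destroy any existing data):
# Scripted alternative to interactive fdisk
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary ext4 0% 100%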
Method 3: Force the Filesystem Creation
If you're certain the device isn't in use:
mkfs.ext4 -F /dev/sdb
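You can then confirm that the filesystem signature was actually written:
# Show the new filesystem type and UUID
blkid /dev/sdb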
Method 4: Check for LVM Metadata
Remove any existing LVM information:
# Force-wipe the LVM physical volume label from the disk
pvremove -ff /dev/sdb
# If a volume group still references the disk, remove it too (substitute the real group name)
vgremove <volume_group_name>
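If you're unsure which volume group (if any) references the disk, list the mapping first; <volume_group_name> above is just a placeholder:
# Show which volume group each physical volume belongs to
pvs -o pv_name,vg_name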
To avoid similar issues in the future:
- Always unmount devices before modification
- Check dmesg for kernel messages about the device
- Consider using lsblk for a clearer device hierarchy view (a quick check is shown below)
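For example, a minimal pre-flight check before touching the disk might look like this:
# Recent kernel messages mentioning the disk
dmesg | grep -i sdb
# Device tree with filesystems and mount points
lsblk -f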
If the problem persists, try these advanced steps:
# Check kernel device mapper
dmsetup ls
# Caution: remove_all tears down every device-mapper device on the system, not just ones related to sdb
dmsetup remove_all
# Re-scan SCSI devices
echo 1 > /sys/class/block/sdb/device/rescan
# Check udev rules
udevadm info /dev/sdb
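Another quick check is the holders directory in sysfs; if it is non-empty, device-mapper or an md array still has a reference to the disk:
# List kernel holders of the device (empty output means nothing is holding it)
ls /sys/block/sdb/holders/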
Looking at the scenario in more detail: when creating a filesystem on a secondary drive in Ubuntu Server 11.10, the system insists the device is in use even though all conventional checks (mount status, swap usage, etc.) suggest otherwise. Let's dig deeper into this frustrating scenario.
This protection mechanism exists for good reason. The kernel might have the device open through:
# Check for open file handles
lsof | grep /dev/sdb
# Or inspect the full block-device hierarchy:
lsblk -o NAME,MAJ:MIN,RM,SIZE,RO,FSTYPE,MOUNTPOINT,LABEL,UUID
The most likely culprits include:
- LVM volume groups referencing the disk
- MDADM (software RAID) arrays
- Pending disk operations in the kernel
- Residual partition table signatures
Step 1: Wipe All Existing Signatures
Use wipefs to completely erase all filesystem signatures:
wipefs -a /dev/sdb
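Running wipefs again with no options simply lists whatever signatures remain, which is a handy way to verify the wipe:
# List any remaining signatures (should print nothing)
wipefs /dev/sdb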
Step 2: Verify RAID Configuration
For software RAID (mdadm) setups:
# Check active RAID arrays
cat /proc/mdstat
# Or examine the disk's md superblock directly:
mdadm --examine /dev/sdb
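If an array does reference the disk, stop it and clear the md superblock before retrying mkfs. A sketch, assuming the array is /dev/md0 (substitute the name shown in /proc/mdstat):
# Stop the array, then clear the RAID superblock on the member disk
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdb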
Step 3: Force the Filesystem Creation
If absolutely certain the disk isn't in use:
mkfs -t ext4 -F /dev/sdb
Checking Kernel Device Mappings
dmsetup ls
dmsetup info
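If a stale mapping clearly belongs to this disk, it can be removed individually instead of using remove_all (mapping_name is a placeholder for whatever dmsetup ls reported):
# Remove a single stale device-mapper entry
dmsetup remove mapping_name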
Full Device Reset Procedure
For stubborn cases where the kernel maintains references:
# Remove the disk from the SCSI subsystem entirely
echo 1 > /sys/block/sdb/device/delete
# Rescan the SCSI bus to re-detect the disk (the host number may differ on your system)
echo "- - -" > /sys/class/scsi_host/host0/scan
For future operations:
- Always unmount filesystems before modifying
- Stop any services using the disk
- Check for LVM volumes with vgdisplay
- Consider using partprobe after partition changes (see the example below)
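For example, after repartitioning you can notify the kernel without a reboot (partprobe ships with parted):
# Tell the kernel to re-read the partition table
partprobe /dev/sdb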