When replacing a failed disk in a RAID array and reinstalling GRUB, you might encounter these cryptic warnings:
grub-install: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
/usr/sbin/grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
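Before changing anything, you can ask GRUB directly how it resolves the boot path; this is a quick way to see whether it drags an LVM abstraction into the picture (assumes /boot lives on the affected array):
# What device and abstraction layer does GRUB see behind /boot?
grub-probe --target=device /boot
grub-probe --target=abstraction /boot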
This typically occurs when:
- GRUB is looking for LVM volumes that no longer exist in the same configuration
- The RAID metadata wasn't properly synchronized after disk replacement
- GRUB's device.map file contains stale entries
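The last cause is easy to rule out by eye, since the map is a plain text file (path assumes a standard Debian/Ubuntu layout):
# Stale (hdN) -> /dev/sdX mappings for removed disks show up here
cat /boot/grub/device.map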
Here's the complete remediation process I've used successfully:
# First, ensure the new disk is properly synced in RAID
mdadm --manage /dev/mdX --add /dev/sdXN
mdadm --wait /dev/mdX
# Regenerate GRUB configuration with explicit device mapping
grub-mkdevicemap --no-floppy
update-initramfs -u
update-grub
# Force reinstall GRUB to both disks
grub-install --recheck --no-floppy /dev/sda
grub-install --recheck --no-floppy /dev/sdb
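As a quick sanity check that both disks actually received the new boot code (a minimal sketch, assuming an MBR/BIOS layout; run as root):
# The first 446 bytes of the MBR hold GRUB's boot code; identical
# output on both disks means the reinstall reached each of them
cmp <(dd if=/dev/sda bs=446 count=1 2>/dev/null) \
    <(dd if=/dev/sdb bs=446 count=1 2>/dev/null) && echo "boot code matches"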
If warnings continue after these steps:
# Check for residual LVM configurations
pvscan
vgscan
lvscan
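# If pvscan still reports a physical volume on the freshly replaced disk,
# that label is leftover metadata from the disk's previous life. It can be
# wiped, but ONLY on the new disk, and only after double-checking the
# device name (illustrative placeholder /dev/sdXN -- uncomment deliberately):
# wipefs --all /dev/sdXN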
# Verify RAID status
cat /proc/mdstat
mdadm --detail /dev/mdX
# Build the core image from an explicit module list, omitting the lvm module
grub-install --modules="ext2 part_msdos raid" /dev/sdX
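Note that module names vary by GRUB release; on newer GRUB 2 versions the raid module was split into mdraid09 and mdraid1x, so the equivalent call would look like this (adjust to whatever ls /boot/grub/*/mdraid* reports on your system):
# mdraid1x handles 1.x-format superblocks (as in the arrays above),
# mdraid09 the old 0.90 format
grub-install --modules="ext2 part_msdos mdraid1x" /dev/sdX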
The warnings disappeared after reboot because:
- RAID resynchronization completed in background
- GRUB's runtime environment properly detected the new disk layout
- The temporary device mapping inconsistencies were resolved during initramfs loading
For production systems, always verify bootability by testing the boot process from each disk independently before considering the issue resolved.
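One low-risk way to run that per-disk boot test without pulling cables is to point a throwaway VM at each disk in turn; a sketch, assuming QEMU is installed and run as root (-snapshot diverts all writes to a temporary file, so the real disk is untouched):
# Boot each disk alone in a scratch VM; watch for the GRUB menu
qemu-system-x86_64 -m 1024 -snapshot -drive file=/dev/sda,format=raw
qemu-system-x86_64 -m 1024 -snapshot -drive file=/dev/sdb,format=raw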
When replacing a failed HDD in a RAID array and reinstalling GRUB, you might encounter these cryptic warnings about missing physical volumes. The messages appear during both grub-install and update-grub operations, though curiously the system still functions after reboot.
The core warning pattern:
grub-install: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image.
indicates that grub-probe failed to resolve an LVM physical volume while scanning the disks during installation, usually because of stale LVM or RAID metadata left behind by the replaced disk.
# Sample RAID status check
mdadm --detail /dev/md0
cat /proc/mdstat
# Output shows healthy RAID1 arrays:
# md0 : active raid1 sdb1[3] sda1[2]
# 8387572 blocks super 1.2 [2/2] [UU]
When running:
grub-install /dev/sdb
the installer attempts to build a core image with all necessary modules. The warnings suggest it's failing to properly map RAID components during this process.
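To watch what the installer actually probes while building that core image, run it verbosely; the grep filter below is just illustrative:
grub-install --verbose /dev/sdb 2>&1 | grep -iE 'raid|lvm|physical volume'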
1. Regenerate device maps:
grub-mkdevicemap --no-floppy
2. Cleanly reinstall the GRUB packages (this can also be done non-interactively; see the debconf sketch after step 3):
apt-get purge grub-pc grub-common
apt-get install grub-pc grub-common
grub-install /dev/sdb
update-grub
3. Check RAID assembly:
mdadm --examine /dev/sd[ab]1
mdadm --assemble --scan --verbose
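On Debian-family systems, step 2 can be preseeded so the package reinstall targets both disks without prompting (a sketch; newer grub-pc packages expect stable /dev/disk/by-id/... paths for install_devices rather than /dev/sdX names):
echo "grub-pc grub-pc/install_devices multiselect /dev/sda, /dev/sdb" \
    | debconf-set-selections
dpkg-reconfigure -f noninteractive grub-pc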
The warnings disappear post-reboot because:
- The RAID sync completes fully
- udev rules regenerate proper device mappings
- GRUB's runtime environment differs from installation environment
For future RAID maintenance:
# Before replacing disks:
mdadm --manage /dev/mdX --fail /dev/sdY1
mdadm --manage /dev/mdX --remove /dev/sdY1
# After physical replacement:
mdadm --manage /dev/mdX --add /dev/sdZ1
Always verify with:
mdadm --detail /dev/mdX | grep -i rebuild
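To follow the rebuild live rather than re-running that check by hand, and to make the new disk bootable on its own once the array is clean:
# Refresh the sync progress every few seconds (Ctrl-C to exit)
watch -n 5 cat /proc/mdstat
# A replaced disk carries no boot code until you put it there
grub-install /dev/sdZ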