When your Linux servers crash periodically without leaving core dumps in /var/crash, it becomes a detective game. Let's examine why kdump fails to save crash information despite a seemingly correct configuration.
# Error messages seen during crash sequence
Saving to the local filesystem UUID=e7abcdeb-1987-4c69-a867-fabdceffghi2
Usage: fsck.ext4 [-panyrcdfvtDFV] [-b superblock] [-B blocksize]
mount: can't find /mnt in /etc/fstab
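Before digging into the configuration itself, it is worth confirming that kdump is actually armed on the running system: a capture kernel has to be loaded into the reserved memory region before any dump can happen. On Scientific Linux 6 a quick sanity check looks like this:
# Is the kdump service running, and is a crash kernel loaded?
service kdump status
cat /sys/kernel/kexec_crash_loaded   # 1 = capture kernel loaded, 0 = nothing to dump with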
Yes, having root on LVM matters significantly. The kdump initramfs needs special handling for LVM volumes. Try adding these lines to /etc/kdump.conf:
ext4 /dev/mapper/vg00-lv_root
lvm2 vg00
The error suggests kdump cannot mount the target filesystem. Verify your fstab entries match the actual disk configuration:
# Check current mount points
lsblk -f
cat /proc/mounts
# Sample working fstab entry
UUID=e7abcdeb-1987-4c69-a867-fabdceffghi2 /var/crash ext4 defaults 0 0
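The UUID in that fstab line must match what the device itself reports; a stale UUID produces exactly the mount failure shown in the crash log. A simple cross-check, using the /dev/mapper path from this setup:
# Compare the UUID the device reports with the one in /etc/fstab
blkid /dev/mapper/vg00-lv_root
grep crash /etc/fstab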
Rebuild your kdump initramfs with proper modules. For Scientific Linux 6.5:
# Rebuild with LVM and filesystem support
mkdumprd -f /boot/initramfs-$(uname -r)kdump.img $(uname -r)
# Verify included modules
lsinitrd /boot/initramfs-$(uname -r)kdump.img | grep -E 'ext|lvm'
When standard methods fail, enable more verbose debugging in kdump (check the kdump.conf man page shipped with your kexec-tools to confirm which of the following directives it supports):
# Add to /etc/kdump.conf
debug_mem_level 7
log_level 7
default shell
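If log levels alone are not enough, RHEL6 kdump.conf also supports kdump_pre and kdump_post hooks that run a script inside the capture environment before and after the dump attempt. The sketch below is an illustration only: the script path is hypothetical, and the exact arguments handed to the hook (commonly the dump's exit status in $1) should be verified against the kdump.conf man page on your system.
# In /etc/kdump.conf: run a diagnostic script after the dump attempt
kdump_post /var/crash/scripts/kdump-post.sh
# Example /var/crash/scripts/kdump-post.sh (hypothetical path; make it executable):
#!/bin/sh
# Report the dump result ($1 is assumed to be the dump exit status) to the console
echo "kdump_post: dump finished with status $1" > /dev/console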
Here's a proven kdump.conf setup for LVM systems:
# /etc/kdump.conf
path /var/crash
core_collector makedumpfile -c --message-level 1 -d 31
ext4 /dev/mapper/vg00-lv_root
lvm2 vg00
options root=/dev/mapper/vg00-lv_root ro
extra_bins /usr/bin/bash
extra_modules ext4 mbcache jbd2 dm_mod
After making changes, always validate:
service kdump restart
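# WARNING: the next command deliberately crashes the kernel to exercise kdump -- save your work first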
echo c > /proc/sysrq-trigger
Check dmesg after reboot for kdump attempts:
dmesg | grep -i kdump
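If nothing shows up at all, also confirm that memory was actually reserved for the capture kernel at boot; without a crashkernel= reservation the kdump service has nothing to load:
# Confirm the crashkernel= reservation the running kernel booted with
grep -o 'crashkernel=[^ ]*' /proc/cmdline
dmesg | grep -i crashkernel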
When your Scientific Linux 6.5 systems crash with kdump configured, you're observing several critical symptoms:
1. No crash dumps in /var/crash despite correct configuration
2. An fsck.ext4 usage message appearing during the crash dump attempt
3. "mount: can't find /mnt in /etc/fstab" error when debugging
4. Successful remote dumping via SSH (which proves basic kdump functionality works)
Yes, having your root filesystem on LVM matters significantly. The issue stems from how kdump handles LVM volumes in the initramfs environment during crash recovery. When the kernel crashes:
1. The kdump kernel boots with a minimal initramfs
2. It needs to mount the root filesystem to save the vmcore
3. LVM volumes require special handling in this minimal environment (see the check below)
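In practice, that "special handling" means the capture initramfs must carry the device-mapper kernel modules and the LVM userspace tools so it can activate vg00 before mounting the dump target. A quick way to check what the current image contains (lsinitrd comes with dracut on Scientific Linux 6):
# Look for device-mapper and LVM support inside the kdump initramfs
lsinitrd /boot/initramfs-$(uname -r)kdump.img | grep -E 'dm-mod|dm_mod|lvm'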
Your troubleshooting has revealed important clues:
# Current kdump.conf (problematic version)
path /var/crash
core_collector makedumpfile -c --message-level 1 -d 31
And working remote configuration:
# Working remote configuration
path vmcore
ssh user@hostb.example.org
sshkey /root/.ssh/kdump_id_rsa
The core issue is that the kdump initramfs isn't properly configured to handle LVM. Here's how to fix it:
# First, ensure lvm2 packages are installed for kdump
yum install lvm2 -y
# Then modify /etc/kdump.conf to explicitly specify LVM handling:
path /var/crash
core_collector makedumpfile -c --message-level 1 -d 31
lvm2
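Before rebuilding, it is also worth confirming that the LVM userspace tools are really installed and that the volume group is visible from the running system; if vgs cannot see vg00 here, the capture environment has no chance either (vg00 is the volume group name used throughout this setup, so substitute your own):
# Confirm the LVM tools and the volume group itself
rpm -q lvm2
vgs vg00
lvs vg00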
After making these changes, rebuild the kdump initramfs:
# For RHEL6/Scientific Linux 6:
mkdumprd -f /boot/initramfs-$(uname -r)kdump.img $(uname -r)
# Restart so the freshly built image is the one loaded into the crash kernel slot
service kdump restart
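A quick timestamp comparison confirms that the image really was regenerated after the configuration change:
# The kdump image should be newer than /etc/kdump.conf
ls -l /etc/kdump.conf /boot/initramfs-$(uname -r)kdump.img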
To confirm the fix works:
# 1. Check kdump status
service kdump status
# 2. Verify initramfs contains LVM modules
lsinitrd /boot/initramfs-$(uname -r)kdump.img | grep lvm
# 3. Test crash dump generation
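# WARNING: this deliberately panics the machine -- only run it in a maintenance window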
echo c > /proc/sysrq-trigger
If the LVM solution doesn't work, you can specify an explicit mount point:
# Modify /etc/kdump.conf
ext4 /dev/mapper/vg00-lv_root
path /var/crash
options ro,norecovery
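Whatever device path ends up on the ext4 line, verify that it actually exists and that the logical volume is active before rebuilding; a typo here reproduces the same mount failure seen in the crash log:
# Confirm the mapped device exists and the logical volume is active
ls -l /dev/mapper/vg00-lv_root
lvs -o lv_name,lv_attr,vg_name vg00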
For deeper investigation when issues persist:
# 1. Increase verbosity in kdump.conf
core_collector makedumpfile -c --message-level 7 -d 31
# 2. Capture serial console output during crash (see the sketch below)
# 3. Check /var/log/messages after reboot for kdump errors
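On point 2: the capture kernel will only print to a serial console if it is told about one. On RHEL6 kexec-tools the capture kernel's command line is normally assembled from /etc/sysconfig/kdump; the variable below ships in that file, but check your copy, append to its existing value rather than replacing it, and adjust the port and speed to your hardware:
# /etc/sysconfig/kdump -- add a serial console to the capture kernel's command line
KDUMP_COMMANDLINE_APPEND="<existing options> console=ttyS0,115200"
# Then reload the crash kernel so the new command line takes effect
service kdump restart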