When working with SAN storage and ext3 filesystems, a common frustration occurs when a temporary SAN issue triggers the kernel to remount the filesystem read-only. The real challenge begins when the SAN connection is restored but the filesystem stubbornly remains read-only despite all paths being healthy.
The standard mount -o remount,rw approach fails:
# This won't work after SAN recovery
mount -o remount,rw /mnt/foo
mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only
The kernel maintains an internal write-protect flag after detecting I/O errors, which isn't automatically cleared when paths are restored.
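To see which layer is holding things read-only, compare the block device's write-protect bit with the mount options. A minimal sketch, assuming the example names from above; mount_is_ro is an illustrative helper (not a standard tool) that reads /proc/mounts-style text on stdin:

```shell
# Check the filesystem-level state: did the kernel remount this mount
# point read-only? mount_is_ro is a hypothetical helper; feed it
# /proc/mounts on stdin and the mount point as $1.
mount_is_ro() {
    awk -v m="$1" '
        $2 == m { n = split($4, o, ","); for (i = 1; i <= n; i++) if (o[i] == "ro") found = 1 }
        END { exit found ? 0 : 1 }'
}

# Example usage on a live system:
#   mount_is_ro /mnt/foo < /proc/mounts && echo "fs remounted read-only"
# Contrast with the block layer's own flag:
#   blockdev --getro /dev/mapper/mpath0   # prints 1 if write-protected
```

If blockdev reports 0 but the mount options show ro, the problem is filesystem-level state rather than the device itself.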
Here's the complete step-by-step method to regain write access:
# 1. First verify multipath recovery
multipath -ll
# 2. Unmount the filesystem completely
umount /mnt/foo
# 3. Remove the journal to discard its stale error state (critical step)
tune2fs -f -O ^has_journal /dev/mapper/mpath0
# 4. Recreate the journal
tune2fs -j /dev/mapper/mpath0
# 5. Force a filesystem check
e2fsck -f /dev/mapper/mpath0
# 6. Finally remount
mount -o rw /dev/mapper/mpath0 /mnt/foo
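The six steps can be strung together into a small recovery script. This is a sketch only: recover_rw is an illustrative name, not a standard tool, and it assumes the filesystem can be unmounted; the device and mount point are the examples from the text.

```shell
#!/bin/sh
# Sketch of the unmount-based recovery above. Stops at the first
# failing step; /dev/mapper/mpath0 and /mnt/foo are example names.
recover_rw() {
    dev=$1; mnt=$2
    multipath -ll                     || return 1  # 1. paths recovered? (inspect output too)
    umount "$mnt"                     || return 1  # 2. unmount completely
    tune2fs -f -O ^has_journal "$dev" || return 1  # 3. drop the stale journal
    tune2fs -j "$dev"                 || return 1  # 4. recreate it
    e2fsck -fy "$dev"                              # 5. force a full check
    [ $? -le 1 ]                      || return 1  #    (e2fsck exit 0 and 1 both mean OK)
    mount -o rw "$dev" "$mnt"                      # 6. remount read-write
}

# recover_rw /dev/mapper/mpath0 /mnt/foo
```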
For systems where unmounting isn't possible, try this lower-impact method:
# 1. Take the underlying path devices offline (sdb and sdc are examples; list yours with multipath -ll)
echo "offline" > /sys/block/sdb/device/state
echo "offline" > /sys/block/sdc/device/state
# 2. Then bring them back online
echo "running" > /sys/block/sdb/device/state
echo "running" > /sys/block/sdc/device/state
# 3. Reset the multipath device
multipath -r /dev/mapper/mpath0
# 4. Now attempt remount
mount -o remount,rw /mnt/foo
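The offline/online cycle can be wrapped to handle any list of path devices. A sketch: cycle_paths is a hypothetical helper, and SYSFS is parameterized only so the sysfs writes can be exercised without real hardware.

```shell
# Sketch: take the listed SCSI path devices offline and bring them back,
# then reload the multipath map. sdb/sdc are the example path names;
# SYSFS defaults to /sys.
: "${SYSFS:=/sys}"

cycle_paths() {
    map=$1; shift
    for d in "$@"; do
        echo offline > "$SYSFS/block/$d/device/state"   # fail the path
    done
    for d in "$@"; do
        echo running > "$SYSFS/block/$d/device/state"   # restore the path
    done
    multipath -r "/dev/mapper/$map"                     # rescan the map
}

# cycle_paths mpath0 sdb sdc
```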
To minimize occurrence of this issue:
- Configure multipath with no_path_retry queue in /etc/multipath.conf
- Increase the SCSI command timeout via rules in /etc/udev/rules.d
- Consider moving to ext4 with the errors=continue mount option (keeps the filesystem writable through errors, at some risk to consistency)
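For the multipath setting, an illustrative /etc/multipath.conf fragment (values are examples, not vendor recommendations):

```
defaults {
    # Queue I/O while no paths are available instead of failing it up to
    # the filesystem (failed I/O is what triggers the read-only remount).
    no_path_retry    queue
}
```

Note that queueing forever can leave processes hung if the SAN never returns; a numeric no_path_retry (the number of polling intervals to retry before failing) is a common compromise.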
For particularly stubborn cases, this nuclear option often works:
# 1. Flush device mapper
dmsetup remove --force /dev/mapper/mpath0
# 2. Reload multipath
multipath -r
# 3. Verify new device appears
multipath -ll
# 4. Proceed with standard recovery steps
If the filesystem is still read-only after all of the above, lingering error state in the block device layer and the filesystem journal may need to be cleared layer by layer:
Step 1: Verify Storage Path Recovery
First confirm multipath has fully recovered:
multipath -ll
# Should show all paths as [active][ready], not [failed][faulty]
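The health check can be scripted by scanning the output for failed paths. A sketch: paths_healthy is a hypothetical helper that reads multipath -ll output on stdin, so it can be tested without a real array.

```shell
# Sketch: succeed only if no path line mentions failed or faulty.
paths_healthy() {
    ! grep -Eq 'failed|faulty'
}

# multipath -ll | paths_healthy && echo "all paths up"
```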
Step 2: Reset Device Mapper Flags
Tell the map to stop queueing I/O, then cycle it through suspend/resume to flush stale state (the fail_if_no_path message takes no argument; suspend briefly blocks I/O on the map):
dmsetup message mpath0 0 "fail_if_no_path"
dmsetup suspend mpath0
dmsetup resume mpath0
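To confirm the suspend/resume cycle left the map live, check its state. map_is_active is an illustrative helper that parses dmsetup info output from stdin:

```shell
# Sketch: succeed only if the map's State line reads ACTIVE.
map_is_active() {
    grep -q '^State: *ACTIVE'
}

# dmsetup info mpath0 | map_is_active || echo "map still suspended"
```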
Step 3: Force Journal Recovery
ext3 keeps its own error state in the superblock and journal, which must be cleared. Note that tune2fs refuses to remove the journal from a mounted filesystem, so unmount first if at all possible:
tune2fs -f -O ^has_journal /dev/mapper/mpath0   # Temporarily remove the journal
tune2fs -j /dev/mapper/mpath0                   # Recreate it
e2fsck -yf /dev/mapper/mpath0                   # Force a full filesystem check
Step 4: Final Remount
Now the remount should succeed:
mount -o remount,rw /mnt/foo
When the above doesn't work, try a lazy unmount cycle (processes that still hold files open will keep using the old read-only instance until they close them):
umount -l /mnt/foo
mount /dev/mapper/mpath0 /mnt/foo
To make the system more resilient against transient path loss, raise the SCSI command timeout on each path device so short outages don't escalate into I/O errors (the default is typically 30 seconds):
echo 60 > /sys/block/sdb/device/timeout
echo 60 > /sys/block/sdc/device/timeout
For particularly stubborn cases, you may need to:
- Stop all processes accessing the device (fuser -km /mnt/foo)
- Unmount completely
- Run dmsetup remove mpath0
- Let multipathd recreate the device
- Remount
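The checklist above can be sketched as one script. Illustrative only: rebuild_and_remount is not a standard tool, and mpath0 / /mnt/foo are the running examples.

```shell
#!/bin/sh
# Hedged sketch of the full teardown-and-rebuild: stop users, unmount,
# drop the map, let multipath recreate it, and remount.
rebuild_and_remount() {
    map=$1; mnt=$2
    fuser -km "$mnt"                  # kill processes using the mount
    umount "$mnt"           || return 1
    dmsetup remove "$map"   || return 1
    multipath -r                      # rebuild maps from healthy paths
    multipath -ll | grep -q "$map" || return 1   # did the map come back?
    mount "/dev/mapper/$map" "$mnt"
}

# rebuild_and_remount mpath0 /mnt/foo
```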