When examining the blkid output, I noticed something peculiar - all three EBS volumes (/dev/xvde1, /dev/xvdf1, and /dev/xvdg1) share identical UUIDs and labels:
UUID="f5bd1ae0-85b5-4686-85ff-ed8deb328c92" TYPE="xfs" LABEL="/"
This is problematic because Linux uses UUIDs to uniquely identify filesystems. Having multiple volumes with identical UUIDs can cause mounting issues, especially when trying to mount more than one of them simultaneously.
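You can see the symptom directly when mounting a second volume that carries an already-mounted UUID; the XFS driver in the kernel rejects it. The mount point below is just an example and the exact wording varies by kernel and util-linux version:
sudo mount /dev/xvdf1 /mnt/xvdf1
# mount: /mnt/xvdf1: wrong fs type, bad option, bad superblock on /dev/xvdf1, ...
dmesg | tail -1
# XFS (xvdf1): Filesystem has duplicate UUID f5bd1ae0-85b5-4686-85ff-ed8deb328c92 - can't mount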
Before proceeding with fixes, let's verify the filesystem health using xfs_repair:
sudo umount /dev/xvdf1 # if mounted
sudo xfs_repair -n /dev/xvdf1
# Sample output:
# Phase 1 - find and verify superblock...
# Phase 2 - using internal log
# Phase 3 - for each AG...
# Phase 4 - check for duplicate blocks...
# Phase 5 - rebuild AG headers and trees...
# Phase 6 - check inode connectivity...
# Phase 7 - verify and correct link counts...
# done
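The -n flag makes this a read-only dry run. If it reports problems, run the repair for real while the device is still unmounted:
sudo xfs_repair /dev/xvdf1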
The safest approach is to generate new UUIDs for the conflicting volumes:
# Unmount the volume first
sudo umount /dev/xvdf1
# Assign new UUID
sudo xfs_admin -U generate /dev/xvdf1
# Verify new UUID
sudo blkid /dev/xvdf1
# Mount with new UUID
sudo mount /dev/xvdf1 /mnt/xvdf1
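If the volume is referenced by UUID in /etc/fstab, update that entry to the UUID that blkid now reports, otherwise the mount will fail on the next boot. The UUID and mount point below are placeholders:
# /etc/fstab
UUID=<new-uuid-from-blkid>  /mnt/xvdf1  xfs  defaults,nofail  0  2
# Check that the entry parses and mounts cleanly
sudo mount -a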
If you cannot change the UUID (e.g., in a production environment where other systems reference it), the XFS nouuid mount option tells the kernel to skip the duplicate-UUID check:
# Method 1: Mount with the nouuid option
sudo mount -o nouuid /dev/xvdf1 /mnt/xvdf1
# Method 2: If the volume sits behind device-mapper (e.g., LVM), use the mapper path - nouuid is still required
sudo mount -o nouuid /dev/mapper/xvdf1 /mnt/xvdf1
# Method 3: Spell out the filesystem type explicitly as well
sudo mount -t xfs -o defaults,nouuid /dev/xvdf1 /mnt/xvdf1
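To make the nouuid workaround survive a reboot, put it in the options column of the volume's /etc/fstab entry; the device path and mount point below are just the example values from above:
/dev/xvdf1  /mnt/xvdf1  xfs  defaults,nouuid,nofail  0  2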
To avoid this issue with future EBS volumes:
# When creating new XFS filesystems:
sudo mkfs.xfs -f -m uuid=$(uuidgen) /dev/xvdf1
# For an existing filesystem, set a custom UUID with xfs_admin instead (no reformat needed):
sudo xfs_admin -U your-custom-uuid-here /dev/xvdf1
# Verify using:
sudo xfs_admin -u /dev/xvdf1
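A quick way to spot duplicate filesystem UUIDs across all attached block devices is to list every UUID and print only the repeated ones:
# Any UUID that appears more than once is printed
sudo blkid -s UUID -o value | sort | uniq -d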
Essential commands for diagnosing mount issues:
# Check kernel messages
dmesg | grep xfs
# View mounted filesystems and their options
mount | grep xfs
# Inspect the superblock (read-only)
sudo xfs_db -r -c sb -c print /dev/xvdf1
# Detailed XFS information
xfs_info /mnt/xvdf1
Looking at your blkid output, I spotted something unusual - all three volumes have identical UUIDs:
/dev/xvdf1: UUID="f5bd1ae0-85b5-4686-85ff-ed8deb328c92"
/dev/xvdg1: UUID="f5bd1ae0-85b5-4686-85ff-ed8deb328c92"
/dev/xvde1: UUID="f5bd1ae0-85b5-4686-85ff-ed8deb328c92"
This is the root cause of your mounting issues. Filesystem UUIDs must be unique on a Linux system, so when you try to mount a second volume whose UUID is already in use, the kernel refuses it.
These volumes were likely created from the same snapshot or AMI. AWS doesn't automatically regenerate UUIDs when you create volumes from snapshots. The xfs_admin utility shows us the underlying issue:
sudo xfs_admin -u /dev/xvdf1
UUID = f5bd1ae0-85b5-4686-85ff-ed8deb328c92
sudo xfs_admin -u /dev/xvdg1
UUID = f5bd1ae0-85b5-4686-85ff-ed8deb328c92
For XFS filesystems, we need to use xfs_admin to assign new UUIDs:
sudo umount /dev/xvdf1 # Ensure it's unmounted first
sudo xfs_admin -U generate /dev/xvdf1
sudo mount /dev/xvdf1 /home/ec2-user/xvdf1
# Repeat for xvdg1
sudo xfs_admin -U generate /dev/xvdg1
sudo mount /dev/xvdg1 /home/ec2-user/xvdg1
If the above doesn't work (perhaps due to filesystem corruption), you can recreate the filesystem:
sudo mkfs.xfs -f /dev/xvdf1
sudo mount /dev/xvdf1 /mnt/xvdf1
Warning: This will erase all data on the volume!
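Before a destructive step like this, it is prudent to snapshot the volume first; the volume ID below is a placeholder for your own:
# Take an EBS snapshot as a backup (placeholder volume ID)
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "backup of xvdf1 before mkfs"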
When creating multiple volumes from the same snapshot:
- Attach the first volume, but regenerate its UUID before mounting it (xfs_admin -U requires the filesystem to be unmounted)
- Mount it and confirm the new UUID with blkid
- Only then attach the next volume and repeat the same steps, as in the sketch below
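As a minimal sketch of that workflow, assuming the duplicate volumes appear as /dev/xvdf1 and /dev/xvdg1 and should be mounted under /home/ec2-user:
for dev in xvdf1 xvdg1; do
  sudo umount /dev/$dev 2>/dev/null        # ignore the error if it was never mounted
  sudo xfs_admin -U generate /dev/$dev     # assign a fresh random UUID
  sudo mkdir -p /home/ec2-user/$dev        # create the mount point if missing
  sudo mount /dev/$dev /home/ec2-user/$dev
done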
After fixing, verify with:
lsblk -f
NAME   FSTYPE LABEL UUID                                 MOUNTPOINT
xvdf1  xfs          b3e3a1c1-5a9d-4e7b-9f02-cd4d7a8e3c1a /home/ec2-user/xvdf1
xvdg1  xfs          7d2e8b3a-1f4c-4a6d-b5e9-f3a7b8c1d2e0 /home/ec2-user/xvdg1
Notice the UUIDs are now different and mounts succeed.