When you attach a new EBS volume to an EC2 instance, it arrives as raw block storage with no filesystem on it. That's why
mount -t ext3 /dev/sdf /testName
fails with "wrong fs type": the device doesn't yet contain a recognizable filesystem signature.
First, verify the device name (the kernel often remaps the console's /dev/sdf to /dev/xvdf):
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 10G 0 disk # This is our new volume
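If you're not sure which name the kernel actually gave the attached volume, a quick probe over the usual candidates helps. A minimal sketch (the candidate paths are the common ones, not guaranteed for your instance type):

```shell
# Probe common device names; the console's /dev/sdf often appears as /dev/xvdf,
# or as /dev/nvme1n1 on Nitro instance types.
for dev in /dev/sdf /dev/xvdf /dev/nvme1n1; do
  if [ -b "$dev" ]; then
    echo "found block device: $dev"
  fi
done
```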
Now create a filesystem. Use ext4 rather than ext3, since it's more modern and better supported:
sudo mkfs -t ext4 /dev/xvdf
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 2621440 4k blocks...
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Create mount point and mount:
sudo mkdir /data
sudo mount /dev/xvdf /data
Verify the mount succeeded:
df -h /data
Filesystem Size Used Avail Use% Mounted on
/dev/xvdf 9.8G 37M 9.2G 1% /data
Add to /etc/fstab for automatic mounting after reboots:
sudo sh -c "echo '/dev/xvdf /data ext4 defaults,nofail 0 2' >> /etc/fstab"
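Before trusting that entry across a reboot, it's worth validating it in place: `mount -a` re-mounts everything listed in fstab, so a bad line surfaces immediately. A sketch (the field-count helper is mine, and the device commands are guarded so the snippet is harmless anywhere):

```shell
# A well-formed fstab line has exactly six whitespace-separated fields.
check_fstab_fields() { set -- $1; [ "$#" -eq 6 ]; }
check_fstab_fields '/dev/xvdf /data ext4 defaults,nofail 0 2' && echo "fields ok"

# Then exercise the entry itself (guarded: only runs if the device exists).
if [ -b /dev/xvdf ]; then
  sudo umount /data
  sudo mount -a        # an error here means the fstab line is bad
  findmnt /data        # confirms /data came back via fstab
fi
```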
If the device isn't appearing:
sudo fdisk -l
sudo file -s /dev/xvdf # Check if filesystem exists
When dealing with NVMe volumes (common in newer instance types):
ls /dev/nvme*
sudo mkfs -t ext4 /dev/nvme1n1
sudo mount /dev/nvme1n1 /data
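Matching /dev/nvme* devices back to console volume IDs can be confusing. On Nitro instances, as far as I know, the NVMe serial reported by lsblk is the EBS volume ID with the hyphen removed, so you can match them up; the conversion helper below is hypothetical:

```shell
# Convert a console volume ID (vol-0abc...) into the serial lsblk reports.
vol_to_serial() { printf '%s\n' "$1" | tr -d '-'; }
vol_to_serial vol-0abc123def456789   # -> vol0abc123def456789

# List devices with their serials and match by eye:
if command -v lsblk >/dev/null 2>&1; then
  lsblk -o NAME,SIZE,SERIAL
fi
```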
After expanding the volume in the AWS console, grow the partition (if there is one) and then the filesystem:
sudo growpart /dev/xvdf 1      # only needed if the filesystem sits on a partition like /dev/xvdf1
sudo resize2fs /dev/xvdf1      # use resize2fs /dev/xvdf if you formatted the whole device
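Which grow command you need depends on the filesystem: resize2fs handles ext2/3/4, while XFS grows through its mount point with xfs_growfs. A small dispatcher sketch (the function name is mine):

```shell
# Pick the right grow tool for a given filesystem type.
grow_cmd() {
  case "$1" in
    ext2|ext3|ext4) echo "resize2fs" ;;
    xfs)            echo "xfs_growfs" ;;
    *)              echo "unknown" ;;
  esac
}
grow_cmd ext4   # -> resize2fs
grow_cmd xfs    # -> xfs_growfs
# In practice: fs=$(lsblk -no FSTYPE /dev/xvdf1); then run the matching tool.
```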
When you encounter the "wrong fs type" error while trying to mount an EBS volume in Linux, it typically means one of two things:
- The volume hasn't been formatted with a filesystem
- You're specifying the wrong filesystem type in your mount command
First, let's verify the volume's status and filesystem:
lsblk
sudo file -s /dev/xvdf
If the output shows "/dev/xvdf: data", this confirms the volume has no filesystem. Note that AWS sometimes maps /dev/sdf to /dev/xvdf in modern Linux instances.
For a new volume, we need to create a filesystem. Here's how to format it as ext4 (recommended over ext3 for AWS volumes):
sudo mkfs -t ext4 /dev/xvdf
For optimal performance on EBS, consider initializing the inode tables and journal up front at mkfs time, rather than lazily in the background after the first mount:
sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/xvdf
Now we can properly mount the formatted volume:
sudo mkdir /data
sudo mount /dev/xvdf /data
Verify the mount was successful:
df -h
lsblk
To ensure the volume mounts automatically after reboots, add this entry to /etc/fstab:
sudo bash -c "echo '/dev/xvdf /data ext4 defaults,nofail 0 2' >> /etc/fstab"
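Since device names like /dev/xvdf can vary across instance types and reboots, a UUID-based fstab entry is more robust than a device-path one. A sketch (the line-building helper is mine, and the device commands are guarded):

```shell
# Build the fstab line from the filesystem UUID rather than the device name.
make_fstab_line() { printf 'UUID=%s %s %s defaults,nofail 0 2\n' "$1" "$2" "$3"; }
make_fstab_line "abcd1234-ef56-7890-abcd-ef1234567890" /data ext4

# On the instance itself (only runs if the device exists):
if [ -b /dev/xvdf ]; then
  uuid=$(sudo blkid -s UUID -o value /dev/xvdf)
  make_fstab_line "$uuid" /data ext4 | sudo tee -a /etc/fstab
fi
```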
For proper file access, adjust ownership (substitute your distro's default user if it isn't ec2-user):
sudo chown -R ec2-user:ec2-user /data
If you're attaching a volume with existing data, first identify its filesystem type:
sudo blkid /dev/xvdf
Then mount using the correct filesystem type:
sudo mount -t xfs /dev/xvdf /data # Example for XFS volumes
If the volume still isn't visible or won't mount:
- After attaching, wait 5-10 seconds for the volume to initialize
- Check dmesg for attachment errors:
dmesg | grep xvdf
- Verify volume attachment in AWS console
Remember that device naming (/dev/sdf vs /dev/xvdf) can vary based on instance type and Linux distribution. Always verify with lsblk
before proceeding.
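Putting the diagnosis together: the key branch is whether `file -s` reports bare "data" (no filesystem, so format it) or a filesystem signature (so mount it as-is). A sketch of that decision; the parsing helper is mine:

```shell
# Decide, from `file -s` output, whether a device still needs mkfs.
needs_mkfs() {
  case "$1" in
    *": data")  return 0 ;;   # no signature: raw device, format it first
    *)          return 1 ;;   # recognizable filesystem: mount it instead
  esac
}
needs_mkfs "/dev/xvdf: data" && echo "run mkfs first"
needs_mkfs "/dev/xvdf: Linux rev 1.0 ext4 filesystem data" || echo "mount it"
```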