The core issue manifests when an EC2 instance with an attached 500GB EBS volume only shows 8GB available storage space. Diagnostic commands reveal conflicting information:
[root@ip-10-244-134-250 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 8.0G 1.3G 6.7G 16% /
Yet lower-level tools correctly identify the full capacity:
[root@ip-10-244-134-250 ~]# fdisk -l
Disk /dev/xvda1: 536.9 GB, 536870912000 bytes
This discrepancy occurs because the EBS volume was expanded after the original filesystem was created. The underlying block device recognizes the new size, but the partition and filesystem were never extended to use the additional space. Many stock AMIs ship with an 8GB root filesystem by default, which is where the 8GB figure comes from.
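A quick way to see where the space is "stuck" is lsblk, which prints the size of the disk and of each partition side by side; compare those numbers with the df output (illustrative invocation, exact device names vary):
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/xvda
# If the partition already shows the full size, only the filesystem needs resizing;
# if the partition is still small, grow the partition first, then the filesystem.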
First, verify the filesystem type (ext4/xfs):
df -Th | grep /dev/xvda1
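If the grep comes back empty, or you prefer a block-device view, lsblk can report the filesystem type as well:
lsblk -f /dev/xvda1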
Then grow the partition; this step is the same for both filesystem types:
# Grow partition 1 of /dev/xvda to fill the volume
sudo growpart /dev/xvda 1
For ext4 filesystems, resize the filesystem next:
sudo resize2fs /dev/xvda1
For XFS filesystems (xfs_growfs operates on the mounted filesystem and takes the mount point):
sudo xfs_growfs /
To prevent this in future deployments, add these commands to your user-data script:
#!/bin/bash
growpart /dev/xvda 1
resize2fs /dev/xvda1
# For XFS: xfs_growfs /
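If you launch instances with the AWS CLI, the script above can be passed as user data at launch time. A sketch, assuming the script is saved as resize.sh; the AMI ID and instance type below are placeholders:
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --user-data file://resize.sh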
After resizing, confirm the changes:
lsblk
df -h
Both commands should now consistently report the full 500GB capacity.
A few additional notes:
- Always snapshot EBS volumes before resizing operations (a CLI example follows this list)
- This process works for both SSD (gp2/gp3) and magnetic volumes
- For Windows instances, use Disk Management instead
- Some older AMIs may require a reboot after resizing
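Taking the snapshot is a one-liner with the AWS CLI; the volume ID below is a placeholder:
aws ec2 create-snapshot --volume-id vol-123456 --description "pre-resize backup"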
If the growpart command is not found, install the package that provides it:
sudo yum install cloud-utils-growpart -y    # RHEL/CentOS/Amazon Linux
sudo apt-get install cloud-guest-utils -y   # Ubuntu/Debian
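Once growpart is available, you can preview the change without applying it (the --dry-run flag is supported by recent cloud-utils versions; treat this as an optional sanity check):
sudo growpart --dry-run /dev/xvda 1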
For volumes encrypted inside the instance with LUKS (native EBS encryption is transparent and needs no extra step), first resize the volume via the AWS console, grow the underlying partition if the LUKS container sits on one, then resize the mapping and the filesystem inside it:
sudo cryptsetup resize /dev/mapper/crypt1
sudo resize2fs /dev/mapper/crypt1
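If the filesystem inside the LUKS container is XFS rather than ext4, swap the last command for xfs_growfs against its mount point (shown here as /data, an assumed example):
sudo cryptsetup resize /dev/mapper/crypt1
sudo xfs_growfs /data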
When working with AWS EC2 instances, you might encounter a situation where your EBS volume shows the correct size in fdisk -l, but df -h reports a much smaller filesystem. In this case:
# fdisk shows correct size:
Disk /dev/xvda1: 536.9 GB
# But df shows only 8GB:
/dev/xvda1 8.0G 1.3G 6.7G 16% /
This discrepancy occurs because AWS provides storage at multiple levels, each of which can be inspected separately (see the commands after this list):
- EBS volume size (500GB in your case)
- Partition table configuration
- File system allocation
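Each level can be checked with its own tool; the volume ID below is a placeholder:
aws ec2 describe-volumes --volume-ids vol-123456 --query "Volumes[0].Size"   # EBS volume size in GiB
sudo parted /dev/xvda print                                                  # partition table
df -h /                                                                      # filesystem as seen by the OS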
Here's how to make your EC2 instance recognize the full EBS volume capacity:
# 1. Verify the current partition table
sudo fdisk -l /dev/xvda
# 2. Install growpart if not available
sudo yum install cloud-utils-growpart -y # For Amazon Linux
sudo apt-get install cloud-guest-utils -y # For Ubuntu
# 3. Expand the partition to use all available space
sudo growpart /dev/xvda 1
# 4. For ext2/ext3/ext4 filesystems:
sudo resize2fs /dev/xvda1
# 5. For xfs filesystems:
sudo xfs_growfs /
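On Nitro-based instance types the same volume shows up as an NVMe device, so the device names change while the steps stay identical (a sketch; check lsblk for the actual names on your instance):
sudo growpart /dev/nvme0n1 1
sudo resize2fs /dev/nvme0n1p1      # ext4
# or, for an XFS root filesystem:
sudo xfs_growfs /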
For production environments, consider using this script:
#!/bin/bash
DEVICE="/dev/xvda"
PARTITION="1"
# Expand partition
growpart "$DEVICE" "$PARTITION"
# Detect filesystem type (-n suppresses lsblk's header line)
FS_TYPE=$(lsblk -no FSTYPE "${DEVICE}${PARTITION}")
# Resize filesystem based on type
case "$FS_TYPE" in
    ext2|ext3|ext4)
        resize2fs "${DEVICE}${PARTITION}"
        ;;
    xfs)
        xfs_growfs /
        ;;
    *)
        echo "Unsupported filesystem: $FS_TYPE"
        exit 1
        ;;
esac
echo "Filesystem resized successfully"
After resizing, confirm the changes took effect:
# Check partition size
sudo fdisk -l /dev/xvda
# Verify filesystem usage
df -h
# Check block device information
lsblk
Common mistakes to watch for:
- Assuming the root volume must be detached and reattached; most current instance types support online resizing, though some older setups may need a stop/start
- Forgetting to take a snapshot before resizing
- Running the wrong resize tool for the filesystem (e.g. resize2fs on XFS, or xfs_growfs on ext4)
- Overlooking that Nitro/NVMe instance types use different device names (/dev/nvme0n1 instead of /dev/xvda); see the lsblk example after this list
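On NVMe instances, lsblk with the SERIAL column is a convenient way to match a device to its EBS volume ID before resizing (column availability can vary with the lsblk version):
lsblk -o NAME,SIZE,SERIAL,MOUNTPOINT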
For complex cases, creating a new volume might be simpler:
# Create new 500GB volume
aws ec2 create-volume --availability-zone us-east-1a --size 500
# Attach to instance
aws ec2 attach-volume --volume-id vol-123456 --instance-id i-123456 --device /dev/sdf
# Inside instance:
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir /data
sudo mount /dev/xvdf /data
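If you go this route, you would normally also add an fstab entry so the mount survives reboots; a sketch using the filesystem UUID (the mount point /data matches the example above):
echo "UUID=$(sudo blkid -s UUID -o value /dev/xvdf) /data ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab
sudo mount -a    # confirm the entry works without rebooting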