How to Reduce AWS EBS Root Volume Size: A Step-by-Step Guide for Ubuntu HVM Instances


While expanding EBS volumes is straightforward through the AWS console, reducing root volume size presents unique challenges: AWS doesn't provide a direct method to shrink EBS volumes, so the process requires manual intervention. This guide focuses specifically on Ubuntu HVM instances, providing detailed steps to safely reduce your root volume size. Before you begin, make sure you have:

  • A running Ubuntu HVM EC2 instance
  • AWS CLI configured with appropriate permissions (a quick credential check follows this list)
  • Basic Linux command line knowledge
  • Enough free space on the volume to accommodate the shrinking process
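
To confirm the CLI is actually usable before you start, a quick check of the active credentials and configuration looks like this (assumes the default profile):

aws sts get-caller-identity
aws configure list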

First, determine your current volume size and desired target size:

df -h
lsblk
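
The target size must comfortably exceed the space currently in use on the root filesystem. One quick way to see used space in whole gigabytes, assuming / is the root mount and GNU coreutils df:

df -BG --output=source,size,used,avail /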

Then create a new EBS volume through AWS CLI:

aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 20 \
    --volume-type gp3 \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=shrunk-root-volume}]'
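
If you are scripting the process, it can help to capture the new volume's ID and wait until the volume is available before attaching it; a sketch (NEW_VOL is just a shell variable chosen here):

# Create the smaller volume and store its ID
NEW_VOL=$(aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 20 \
    --volume-type gp3 \
    --query 'VolumeId' --output text)

# Block until the volume can be attached
aws ec2 wait volume-available --volume-ids "$NEW_VOL"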

Create a temporary instance to perform the volume operations:

aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --key-name MyKeyPair \
    --security-group-ids sg-903004f8 \
    --subnet-id subnet-6e7f829e
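
Before the original root volume can be attached to this temporary instance, the source instance has to be stopped and its root volume detached; a sketch using the example IDs from this guide:

# Stop the source instance so its root volume can be detached
aws ec2 stop-instances --instance-ids i-0123456789abcdef
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef

# Detach the original root volume and wait for it to become available
aws ec2 detach-volume --volume-id vol-1234567890abcdef0
aws ec2 wait volume-available --volume-ids vol-1234567890abcdef0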

With the source instance stopped and its root volume detached (see the sketch above), attach both the original and new volumes to the temporary instance:

aws ec2 attach-volume \
    --volume-id vol-1234567890abcdef0 \
    --instance-id i-01474ef662b89480 \
    --device /dev/sdf

aws ec2 attach-volume \
    --volume-id vol-0987654321fedcba \
    --instance-id i-01474ef662b89480 \
    --device /dev/sdg

SSH into your temporary instance and verify the volumes:

lsblk
sudo file -s /dev/xvdf1
sudo e2fsck -f /dev/xvdf1
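
Note that the device names chosen in the CLI rarely match what the OS reports: on Xen-based instance types /dev/sdf usually shows up as /dev/xvdf, while on Nitro-based instances EBS volumes appear as NVMe devices such as /dev/nvme1n1. Listing sizes and filesystems makes it easier to tell the two attached volumes apart:

lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT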

Shrink the filesystem to its minimum size:

sudo resize2fs -M /dev/xvdf1

Calculate the required block count:

sudo dumpe2fs /dev/xvdf1 | grep 'Block count'
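
To connect that number to the dd command below, multiply the block count by the filesystem block size (typically 4 KiB for ext4) and divide by the 16 MiB dd block size. A worked example with hypothetical numbers:

# 327,680 blocks x 4 KiB = 1,280 MiB of data to copy
# 1,280 MiB / 16 MiB per dd block = 80, hence count=80 below
echo $(( 327680 * 4 / 1024 / 16 ))    # prints 80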

Use dd to copy the data. Choose count so that count multiplied by the 16 MiB block size covers at least the data reported by dumpe2fs (block count times block size); copying slightly more than needed is harmless. The count=80 here corresponds to the worked example above:

sudo dd if=/dev/xvdf1 bs=16M of=/dev/xvdg count=80 status=progress
sudo e2fsck -f /dev/xvdg
sudo resize2fs /dev/xvdg

Detach the new volume from the temporary instance and attach it to your original (still stopped) instance at its registered root device name, which for Ubuntu HVM AMIs is typically /dev/sda1 rather than /dev/xvda:

aws ec2 detach-volume --volume-id vol-0987654321fedcba
aws ec2 attach-volume \
    --volume-id vol-0987654321fedcba \
    --instance-id i-0123456789abcdef \
    --device /dev/sda1
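
The device name used when re-attaching has to match the root device name registered on the instance; you can confirm it before attaching (instance ID is the placeholder used above):

aws ec2 describe-instances --instance-ids i-0123456789abcdef \
    --query 'Reservations[0].Instances[0].RootDeviceName' --output text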

Start the original instance and verify that it boots correctly and reports the new, smaller volume:

df -h
lsblk

Don't forget to terminate your temporary instance and delete the old volume when you're confident everything works.
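
The cleanup itself is two calls; note that the old root volume can only be deleted once it is no longer attached, which is the case after the temporary instance is gone (IDs are the placeholders used earlier):

# Remove the temporary helper instance
aws ec2 terminate-instances --instance-ids i-01474ef662b89480

# Delete the original, oversized root volume once it is detached
aws ec2 delete-volume --volume-id vol-1234567890abcdef0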

For those who prefer a more AWS-native approach, consider using AWS Backup:

aws backup start-restore-job \
    --recovery-point-arn arn:aws:ec2:us-east-1::snapshot/snap-0123456789abcdef \
    --metadata file://metadata.json \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole

Here metadata.json specifies the restore parameters, including the volume size. Note, however, that an EBS volume restored from a snapshot cannot be smaller than the snapshot itself, so this route only produces a smaller root volume if the snapshot was taken after the filesystem and volume had already been shrunk.

A few final considerations:

  • Always create snapshots before attempting volume modifications (a sketch follows this list)
  • Test the process in a non-production environment first
  • Account for future growth when selecting your new volume size
  • Some applications may require configuration updates after volume changes
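
For the snapshot safety net mentioned above, a minimal sketch using the example source volume ID from earlier:

# Snapshot the original root volume before touching it
SNAP_ID=$(aws ec2 create-snapshot \
    --volume-id vol-1234567890abcdef0 \
    --description "pre-shrink safety snapshot" \
    --query 'SnapshotId' --output text)

# Wait until the snapshot has finished before proceeding
aws ec2 wait snapshot-completed --snapshot-ids "$SNAP_ID"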

To recap: because AWS offers no native shrink operation, reducing an EBS volume always comes down to copying the data onto a smaller volume. The condensed, script-style walkthrough below covers the same procedure for Ubuntu HVM instances and adds troubleshooting notes from production use. It assumes:

  • A running EC2 instance (Ubuntu 18.04/20.04 LTS used in this example)
  • AWS CLI configured with proper IAM permissions
  • Basic familiarity with Linux filesystem commands
  • Current volume snapshot (critical safety measure)

The condensed command sequence:

# 1. Create a new smaller volume (e.g., reducing from 100GB to 30GB)
aws ec2 create-volume --availability-zone us-east-1a \
--volume-type gp3 --size 30 --tag-specifications \
'ResourceType=volume,Tags=[{Key=Name,Value=shrunk-root}]'

# 2. Attach both volumes to a temporary instance (placeholder IDs)
aws ec2 attach-volume --volume-id vol-123456 --device /dev/sdf \
--instance-id i-abcdefg    # original root volume
aws ec2 attach-volume --volume-id vol-654321 --device /dev/sdg \
--instance-id i-abcdefg    # new, smaller volume

# 3. Filesystem verification and shrinking (on the attached original
#    volume, /dev/xvdf1, not the helper instance's own root)
sudo e2fsck -f /dev/xvdf1
sudo resize2fs -M -p /dev/xvdf1

# 4. Calculate block count for the dd operation (again on /dev/xvdf1)
BLOCK_COUNT=$(sudo dumpe2fs -h /dev/xvdf1 | grep 'Block count' | awk '{print $3}')
echo "Required blocks: $BLOCK_COUNT"

# 5. Data transfer with progress monitoring; one 16 MiB dd block holds
#    4096 filesystem blocks (at the usual 4 KiB ext4 block size), so
#    dividing by 4000 and adding 1 rounds up with a small safety margin
sudo dd if=/dev/xvdf1 bs=16M of=/dev/xvdg status=progress \
count=$((BLOCK_COUNT/4000 + 1))

After copying, you must:

  1. Verify filesystem integrity: sudo e2fsck -f /dev/xvdg
  2. Expand the filesystem to fill the new volume: sudo resize2fs /dev/xvdg
  3. Update /etc/fstab if device names or UUIDs changed (see the sketch after this list)
  4. Test the new volume before terminating the original
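
For step 3, a sketch of checking whether the copied volume's /etc/fstab still points at a valid device, assuming the new volume is /dev/xvdg and not currently mounted:

# UUID of the copied filesystem
sudo blkid /dev/xvdg

# Compare against what the copied fstab expects
sudo mount /dev/xvdg /mnt
grep -v '^[[:space:]]*#' /mnt/etc/fstab
sudo umount /mnt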

From production experience, common issues and their fixes:

  • LVM volumes: Shrink the filesystem first, then reduce the logical volume (lvreduce) and the physical volume (pvresize); a sketch follows this list
  • Boot failures: Keep the original volume detached but available for 48 hours, and confirm the new volume carries the partition layout and bootloader the instance expects
  • Permission errors: Use sudo consistently and verify AWS IAM policies
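
For the LVM case, a minimal sketch of that shrink order, with hypothetical volume group, logical volume, and device names (ubuntu-vg/root on /dev/xvdf3) and headroom left between each layer:

# 1. Shrink the filesystem first (must be unmounted and checked clean)
sudo e2fsck -f /dev/ubuntu-vg/root
sudo resize2fs /dev/ubuntu-vg/root 25G

# 2. Shrink the logical volume, keeping it larger than the filesystem
sudo lvreduce -L 26G /dev/ubuntu-vg/root

# 3. Shrink the physical volume, keeping it larger than the allocated extents
sudo pvresize --setphysicalvolumesize 28G /dev/xvdf3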

For comparison, here's an rsync-based approach:

# On temporary instance:
sudo mkfs.ext4 /dev/xvdg
sudo mkdir /mnt/{old,new}
sudo mount /dev/xvda1 /mnt/old
sudo mount /dev/xvdg /mnt/new
sudo rsync -aAXv /mnt/old/ /mnt/new/

Note: Because mkfs creates a fresh filesystem with a new UUID, this approach also requires installing GRUB on the new volume and updating the UUID references in /etc/fstab before it will boot, but it allows better progress tracking than dd.

After booting from the new volume, verify the result:

  • Run df -h to confirm proper sizing
  • Check system logs: journalctl -b
  • Validate critical services start properly
  • Monitor CloudWatch metrics for disk performance