AWS EC2 “No Space Left on Device” Error: Fixing Inode Exhaustion on Root Partition


The df -h output shows plenty of available disk space (3.5GB free), but mktemp fails with "No space left on device". The real issue becomes clear when we check inodes with df -i - the root partition has exhausted all 524,288 inodes (100% usage).
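
The quickest way to confirm this pattern on your own instance:

# Space looks fine...
df -h /

# ...but the IUse% column is at 100%
df -i /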

Small EC2 instance types (especially t2/t3 series with default 8GB volumes) often hit this because:

  • The default ext4 format allocates one inode per 16 KB of space (the mke2fs default, which the stock AMIs inherit); the arithmetic below shows where the 524,288 figure comes from
  • Applications creating thousands of small files (like Docker, CI/CD systems, or temp files)
  • Log rotation failures leaving deleted-but-still-open files holding their inodes
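
As a quick sanity check on that ratio, an 8 GiB root volume at one inode per 16 KiB works out to exactly the figure reported by df -i:

# 8 GiB / 16 KiB per inode = 524,288 inodes
echo $(( 8 * 1024 * 1024 * 1024 / 16384 ))   # prints 524288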

First, identify what's consuming inodes:

# Find directories with most files
sudo find / -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n

# Check the filesystem's total and free inode counts
sudo tune2fs -l /dev/xvda1 | grep -i inode
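
If the top-level count points at one busy tree such as /var, the same idea works one level down (a quick sketch; adjust the starting path to whatever the first command flagged):

# Count files per subdirectory of /var
for d in /var/*; do
  echo "$(sudo find "$d" -xdev -type f | wc -l) $d"
done | sort -n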

Common offenders I've found:

# Docker container log files (lists the JSON log path for every container, including stopped ones)
docker ps -aq | xargs docker inspect --format='{{.LogPath}}'

# PHP sessions
ls -la /var/lib/php/sessions/

# Systemd journal logs
journalctl --disk-usage
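
Before deleting anything, it helps to put numbers on each suspect (paths vary slightly by distro and Docker storage driver):

# Count files under each candidate path
sudo find /var/lib/docker -xdev -type f | wc -l
sudo find /var/lib/php/sessions -xdev -type f | wc -l
sudo find /var/log -xdev -type f | wc -l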

For emergency recovery when you can't even run basic commands:

# Clear systemd logs (if using journald)
sudo journalctl --vacuum-size=200M

# Remove PHP sessions older than 24h
sudo find /var/lib/php/sessions/ -type f -mtime +0 -delete

# Remove stopped containers, unused networks, build cache and unused images
# (note: -a also deletes images not referenced by any container)
docker system prune -a -f
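
If inode usage barely moves after these steps, look for deleted files that running processes still hold open (the failed-log-rotation case mentioned earlier); their inodes are only released once the owning process is restarted:

# List open files whose on-disk link count is zero (deleted but still held open)
# (install lsof first if it's not already present)
sudo lsof +L1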

1. Filesystem Resizing (if using EBS):

# Check current inode count
sudo dumpe2fs -h /dev/xvda1 | grep -i inode

# After resizing the volume via AWS console:
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
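
Growing an ext4 filesystem adds inodes in proportion to the added space, so this raises the inode ceiling as well as the byte capacity. On Nitro/NVMe instance types the device names will look like /dev/nvme0n1 and /dev/nvme0n1p1 instead. Verify afterwards:

# Both the inode count and IFree should have grown
df -i /
sudo dumpe2fs -h /dev/xvda1 | grep -i "inode count"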

2. Preventative Configuration:

# For new volumes, specify higher inode ratio:
sudo mkfs.ext4 -i 8192 /dev/xvdf

# Mount /tmp as tmpfs so temp files live in RAM and never consume
# root-filesystem inodes (cleared automatically on reboot) - add to /etc/fstab:
tmpfs /tmp tmpfs defaults,noatime,nosuid,size=512M 0 0
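
A quick sanity check for both changes (this assumes /dev/xvdf is the new volume):

# -i 8192 should give roughly double the inodes of the 16384 default
sudo tune2fs -l /dev/xvdf | grep -E "Inode count|Block count"

# After the fstab entry, mount /tmp and confirm it is tmpfs
# (existing files under /tmp are hidden until the next reboot)
sudo mount /tmp
df -hT /tmp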

3. Monitoring Setup (CloudWatch example):

#!/bin/bash
# Publish root-filesystem inode usage (%) as a custom CloudWatch metric
INODES=$(df -i / | awk 'NR==2 {print $5}' | tr -d '%')

# The hostname is not the instance ID - fetch it from instance metadata
# (IMDSv2-only instances additionally need a session token)
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

aws cloudwatch put-metric-data \
  --namespace "Custom" \
  --metric-name "InodeUsage" \
  --dimensions "InstanceId=$INSTANCE_ID" \
  --value "$INODES" \
  --unit "Percent"

For extreme cases where you need to create space just to operate:

# Mount RAM as a temporary workspace
sudo mkdir -p /mnt/tmpwork
sudo mount -t tmpfs -o size=512M tmpfs /mnt/tmpwork
export TMPDIR=/mnt/tmpwork
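
Once cleanup frees inodes on the root filesystem, point temp files back and release the RAM:

unset TMPDIR
sudo umount /mnt/tmpwork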

To recap the underlying issue: many EC2 users encounter this confusing scenario where df -h shows available disk space, yet operations fail with "No space left on device". The key lies in checking inodes - the metadata structures that track files in Unix filesystems.

# Check disk space (shows available)
df -h

# Check inode usage (reveals the real problem)
df -i

Each file consumes one inode. When you see 100% inode usage but only 55% disk usage (the sketch after this list reproduces the effect), it means:

  • Your filesystem contains an enormous number of small files
  • Docker containers, log files, or cache files might be the culprits
  • Even with free space, you can't create new files
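
If you want to see the effect in isolation, a small loopback filesystem reproduces it in under a minute (a throwaway sketch; paths and sizes are arbitrary, and it should only be run on a scratch instance):

# Build a tiny test filesystem in a regular file
dd if=/dev/zero of=/var/tmp/inode-demo.img bs=1M count=64
mkfs.ext4 -F /var/tmp/inode-demo.img
sudo mkdir -p /mnt/inode-demo
sudo mount -o loop /var/tmp/inode-demo.img /mnt/inode-demo
sudo chown "$(id -u)" /mnt/inode-demo

# Empty files consume inodes but almost no data blocks
i=0; while touch "/mnt/inode-demo/f$i" 2>/dev/null; do i=$((i+1)); done

df -h /mnt/inode-demo   # still mostly free space
df -i /mnt/inode-demo   # IUse% at 100%

# Clean up
sudo umount /mnt/inode-demo && rm /var/tmp/inode-demo.img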

Here's how to fix this in production:

# Find directories consuming most inodes
sudo find / -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n

# Clean up common offenders
# 1. Docker artifacts
docker system prune -af

# 2. Package manager cache
sudo apt-get clean
sudo yum clean all

# 3. Temp files
sudo rm -rf /tmp/*
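
After each cleanup pass, re-check that inodes are actually being freed:

df -i /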

For long-term management:

  • Monitor inodes with CloudWatch: the CloudWatch agent can collect inode metrics (disk_inodes_free, disk_inodes_used), or publish a custom metric like the script shown earlier
  • When provisioning: the inode count is fixed when the filesystem is created, based on volume size and the mkfs ratio. For small-file workloads, consider:
# Format with a custom inode count (ext4)
sudo mkfs.ext4 -N 2000000 /dev/xvdf
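
Then mount the new volume and move the small-file workload (Docker data, caches, session files) onto it; the mount point and options here are examples:

# Mount the freshly formatted volume
sudo mkdir -p /data
sudo mount /dev/xvdf /data

# Persist the mount across reboots
echo '/dev/xvdf /data ext4 defaults,noatime 0 2' | sudo tee -a /etc/fstab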

If you can't immediately clean up, temporarily redirect writes:

# Create an alternative temp location
sudo mkdir -p /mnt/temp
sudo chmod 1777 /mnt/temp

# Redirect system temp
export TMPDIR=/mnt/temp
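
Note that export only affects the current shell; for a long-running service, set the variable in a systemd drop-in instead (myapp.service is a placeholder for your actual unit):

# Add an override for the affected service (opens an editor)
sudo systemctl edit myapp.service

# In the editor, add:
#   [Service]
#   Environment=TMPDIR=/mnt/temp

sudo systemctl restart myapp.service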