How to Fix “No space left on device” Error: Resolving 100% Full /dev/xvda1 on EC2 Linux Instance


When your EC2 instance throws a "No space left on device" error and df -h shows /dev/xvda1 at 100% usage, you're dealing with a classic disk space issue. This is particularly common when running database servers like MongoDB or applications with log files.

First, let's verify the actual disk usage:

# Check overall disk usage
df -h

# Size of each top-level directory
sudo du -sh /*

# Ten largest directories on the root filesystem
sudo du -xh --max-depth=1 / | sort -hr | head -n 10

From experience, these are the most frequent space hogs:

  • MongoDB journal files (in /var/lib/mongodb)
  • Node.js application logs
  • System log files (/var/log)
  • APT cache (/var/cache/apt/archives)
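
A quick way to size up the usual suspects in one pass (paths assume a default Ubuntu/Debian layout):

sudo du -sh /var/lib/mongodb /var/log /var/cache/apt/archives 2>/dev/null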

If MongoDB is your main suspect:

# Check MongoDB storage usage (run against your database, not the default 'test')
mongo yourDatabase --eval "db.stats()"

# Compact a collection to reclaim space
mongo yourDatabase --eval "db.runCommand({compact: 'yourCollectionName'})"

# Repair the database (if needed); run the repair as the mongodb
# user so the data files keep the correct ownership
sudo systemctl stop mongod
sudo -u mongodb mongod --repair --dbpath /var/lib/mongodb
sudo systemctl start mongod
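
To see which collections actually occupy the disk before compacting, something like this works in the legacy mongo shell (yourDatabase is a placeholder, as above):

# Print on-disk storage size per collection
mongo yourDatabase --quiet --eval '
db.getCollectionNames().forEach(function(c) {
    var s = db.getCollection(c).stats();
    print(c + ": " + (s.storageSize / 1024 / 1024).toFixed(1) + " MB on disk");
})'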

For system and application logs:

# Reclaim journal space and empty existing .log files (truncate is destructive!)
sudo journalctl --vacuum-size=100M
sudo find /var/log -type f -name "*.log" -exec truncate -s 0 {} \;

# Set up log rotation
sudo nano /etc/logrotate.d/your_application
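
Once the config is in place, a dry run shows what logrotate would do without touching anything (-d is its debug/no-op mode):

sudo logrotate -d /etc/logrotate.d/your_application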

If you consistently run out of space, consider expanding the volume:

# AWS CLI command to modify volume size
aws ec2 modify-volume --volume-id vol-xxxxxxxx --size 20

# Then on the instance:
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
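
Note that resize2fs only handles ext2/3/4 filesystems. If the root filesystem is XFS (the default on Amazon Linux 2, for instance), grow it with xfs_growfs instead:

# Grow the XFS filesystem mounted at / to fill the partition
sudo xfs_growfs -d /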

Create a simple monitoring script to prevent future issues:

#!/bin/bash
THRESHOLD=90
# Use% column of the root filesystem (second line of df's output)
CURRENT=$(df / | awk 'NR==2 {print $5}' | tr -d '%')

if [ "$CURRENT" -gt "$THRESHOLD" ]; then
    echo "Disk space alert on $(hostname) at $(date)" | \
    mail -s "Disk Space Alert: ${CURRENT}% used" admin@example.com
fi
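
Save it somewhere like /path/to/disk_monitor.sh and make it executable:

chmod +x /path/to/disk_monitor.sh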

Add this to cron with crontab -e:

0 * * * * /path/to/disk_monitor.sh

Beyond monitoring, a few habits prevent the problem from coming back:

  • Set up proper log rotation for all applications
  • Regularly clean package caches (sudo apt-get clean)
  • Consider moving large data to separate EBS volumes (see the sketch after this list)
  • Implement proper database maintenance routines
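
For moving data off the root volume, here is a minimal sketch for MongoDB, assuming a freshly attached EBS volume that shows up as /dev/xvdf (device names vary):

# Format and mount the new volume
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir -p /data/mongodb
sudo mount /dev/xvdf /data/mongodb
echo '/dev/xvdf /data/mongodb ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab

# Move the data while mongod is stopped, preserving ownership
sudo systemctl stop mongod
sudo rsync -a /var/lib/mongodb/ /data/mongodb/
sudo chown mongodb:mongodb /data/mongodb

# Point storage.dbPath in /etc/mongod.conf at /data/mongodb, then:
sudo systemctl start mongod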

/dev/xvda1 is typically the root partition on AWS EC2 instances that use Xen-based virtualization. When it shows 100% usage, the system literally has no space left to write files, which explains the MongoDB and Node.js errors.

First, verify the actual disk usage with these commands:

# Check overall disk usage
df -h

# Find largest directories (run from root)
sudo du -sh /* | sort -h

# Alternative: list the 20 largest directories under /
sudo du -xh / | sort -h | tail -20

From my experience managing hundreds of EC2 instances, these are frequent offenders:

# 1. MongoDB log files (check default location)
ls -lh /var/log/mongodb/

# 2. Node.js application logs
find /var/log/ -name "*.log" -type f -size +100M

# 3. System journal logs (especially with debug logging)
journalctl --disk-usage
sudo journalctl --vacuum-size=100M
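
To make that cap persist across reboots, you can also set a size limit in journald's own configuration (SystemMaxUse is a standard journald.conf option):

# /etc/systemd/journald.conf (excerpt)
[Journal]
SystemMaxUse=200M

# Apply without rebooting
sudo systemctl restart systemd-journald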

For MongoDB: rotate the logs manually or enable log rotation in /etc/mongod.conf:

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
  logRotate: reopen
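
With logRotate: reopen, mongod only reopens its log file when told to, so pair the setting with a logrotate rule that sends SIGUSR1 after rotation. A minimal sketch, assuming the default Ubuntu package paths:

/var/log/mongodb/mongod.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    postrotate
        /bin/kill -USR1 $(pidof mongod) 2>/dev/null || true
    endscript
}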

For Node.js applications: implement log rotation with logrotate. The copytruncate option below lets the app keep writing to the same file descriptor, at the small risk of losing a few lines during the copy:

/var/log/node-app/*.log {
    daily
    missingok
    rotate 7
    compress
    delaycompress
    notifempty
    copytruncate
}

When standard cleanup isn't enough, try these:

# Remove old kernel versions (common space hog)
sudo apt-get autoremove --purge

# Clean package cache
sudo apt-get clean

# Find large files for review (this only lists them; delete with care!)
sudo find / -type f -size +100M -exec ls -lh {} \;
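
One subtle case worth checking: a file deleted while a process still holds it open keeps consuming space (df counts it, du does not) until that process restarts:

# List open-but-deleted files that still hold disk space
sudo lsof +L1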

To avoid recurrence:

# Set up monitoring alerts
aws cloudwatch put-metric-alarm \
    --alarm-name "EC2-DiskSpace-Alarm" \
    --metric-name "DiskSpaceUtilization" \
    --namespace "System/Linux" \
    --statistic "Average" \
    --period 300 \
    --threshold 80 \
    --comparison-operator "GreaterThanThreshold" \
    --evaluation-periods 2 \
    --alarm-actions "arn:aws:sns:us-east-1:1234567890:DiskSpace-Alert"
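
Note that the System/Linux namespace is only populated if the legacy CloudWatch monitoring scripts run on the instance; newer setups use the unified CloudWatch agent, which reports disk_used_percent instead. A sketch of the relevant agent config section (file location varies by install method):

{
  "metrics": {
    "metrics_collected": {
      "disk": {
        "measurement": ["used_percent"],
        "resources": ["/"]
      }
    }
  }
}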

For AWS EBS volumes, you can resize without downtime:

1. Take a snapshot (safety first)
2. Modify volume size in AWS Console
3. On Linux instance:
   sudo growpart /dev/xvda 1
   sudo resize2fs /dev/xvda1
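
Afterwards, it's worth confirming that both the partition and the filesystem actually grew:

lsblk      # the partition should now fill the volume
df -h /    # the filesystem should report the new size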