When executing the rsync command:
rsync -zr --compress-level=9 --delete /var/www/mywebsite/current/web/js login@192.168.1.4:/srv/data2_http
You encounter the frustrating error:
rsync: write failed on "/srv/data2_http/js/8814c77.js": No space left on device (28)
The initial verification shows sufficient space:
df -h
/dev/xvdb 202G 168G 25G 88% /srv/data2_http
df -i
/dev/xvdb 13434880 2152940 11281940 17% /srv/data2_http
Both storage space and inodes appear available, yet rsync fails.
Several factors could be causing this behavior:
- Filesystem quota restrictions
- Permission issues on target directory
- Special filesystem configurations (e.g., LVM thin provisioning)
- Disk reservation settings
- Filesystem corruption
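As a first pass, the block, inode, and write checks can be bundled into one quick triage function. This is a sketch assuming GNU coreutils df (for the --output option); it is demonstrated against /tmp here, and in practice you would point it at the failing mount (e.g. /srv/data2_http):

```shell
#!/bin/sh
# Quick triage for "No space left on device" despite free space.
# Assumes GNU coreutils df; takes the mount point as an argument.
check_mount() {
    mount="${1:-/tmp}"
    blocks=$(df --output=pcent "$mount" | tail -1 | tr -dc '0-9')
    inodes=$(df --output=ipcent "$mount" | tail -1 | tr -dc '0-9')
    echo "blocks: ${blocks}% inodes: ${inodes}%"
    # A touch/rm round trip confirms the filesystem accepts new files at all
    t="$mount/.writetest.$$"
    if touch "$t" 2>/dev/null; then
        echo "write: OK"; rm -f "$t"
    else
        echo "write: FAILED"
    fi
}
check_mount /tmp
```

If this prints "write: OK" on the destination mount but rsync still fails, the problem is more likely quotas, reservations, or rsync's temporary files rather than the filesystem itself.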
1. Verify Filesystem Health:
sudo touch /srv/data2_http/testfile
sudo rm /srv/data2_http/testfile
This basic write test helps confirm general filesystem functionality.
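Note that touch only allocates an inode and zero data blocks. Writing a file roughly the size of the one rsync choked on is a closer match to what the receiver actually does. A sketch, using /tmp as a stand-in for the real destination mount:

```shell
# Write a 3 MB file of zeros, roughly the size of a bundled JS asset.
# Replace /tmp with the destination mount (e.g. /srv/data2_http) to
# exercise the filesystem that is actually failing.
dd if=/dev/zero of=/tmp/.sizetest bs=1M count=3 2>/dev/null \
    && echo "3MB write OK"
rm -f /tmp/.sizetest
```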
2. Check User Quotas:
quota -vs
3. Examine Disk Reservations:
tune2fs -l /dev/xvdb | grep -i "reserved block"
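ext* filesystems reserve a percentage of blocks for root (5% by default), and df excludes them from the Avail column; that is why 168G used plus 25G available falls roughly 9G short of the 202G size in the df output above. For a filesystem of this size, the default reservation works out to approximately:

```shell
# Default ext4 reservation: 5% of the filesystem size.
# Figures mirror the df output shown earlier in this question.
size_gb=202
reserved_pct=5
echo "$(( size_gb * reserved_pct / 100 ))G reserved"   # -> 10G reserved
```

If the reservation itself turns out to be the bottleneck for non-root writes, tune2fs -m 1 /dev/xvdb lowers it to 1% (run as root, on the destination).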
4. Test with Different rsync Options:
rsync -rv --progress --partial /var/www/mywebsite/current/web/js login@192.168.1.4:/srv/data2_http
For persistent issues, consider:
# Try without compression
rsync -r --delete /var/www/mywebsite/current/web/js login@192.168.1.4:/srv/data2_http
# Use alternative transfer methods
tar czf - /var/www/mywebsite/current/web/js | ssh login@192.168.1.4 "tar xzf - -C /srv/data2_http"
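Before pointing that tar pipeline at the remote host, it can be rehearsed locally with two throwaway directories (no ssh involved); the file name below is just an example:

```shell
# Archive one directory and extract into another through a pipe,
# mimicking the tar | ssh | tar transfer without the network leg.
src=$(mktemp -d); dst=$(mktemp -d)
echo 'console.log("hi");' > "$src/app.js"
tar czf - -C "$src" . | tar xzf - -C "$dst"
ls "$dst/app.js"
rm -rf "$src" "$dst"
```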
For XFS filesystems:
xfs_info /srv/data2_http
For ext4:
dumpe2fs -h /dev/xvdb | grep -i block
After resolving the issue, verify with:
rsync --dry-run -avz /var/www/mywebsite/current/web/js login@192.168.1.4:/srv/data2_http
You're trying to rsync a relatively small folder (2.4MB) to a destination with 25GB available space, yet rsync fails with:
rsync: write failed on "/srv/data2_http/js/8814c77.js": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(322) [receiver=3.0.9]
When disk space appears available but rsync fails, these are the most likely causes:
- Inode exhaustion (check with df -i)
- Filesystem quota restrictions
- Permission issues on destination
- Filesystem corruption
From your df -i output, we can see inodes are only 17% used, so that's not the issue here.
1. Check for Hidden Quotas
Some hosting providers (such as AWS or Gandi) may enforce quotas that df does not show. Verify with:
quota -v
# Or for specific user:
quota -v username
2. Verify Filesystem Health
Check for filesystem errors on the destination (fsck -n is read-only, but its results on a mounted filesystem can include false positives; a reliable check requires unmounting first):
sudo fsck -n /dev/xvdb
3. Alternative rsync Approaches
Try these rsync variants to isolate the problem:
Basic test with verbose output:
rsync -rvv --dry-run /var/www/mywebsite/current/web/js login@192.168.1.4:/srv/data2_http
Alternative with different compression:
rsync -zr --compress-level=1 --partial /var/www/mywebsite/current/web/js login@192.168.1.4:/srv/data2_http
If the issue persists, try these advanced techniques:
# 1. Check for SELinux context issues
ls -Z /srv/data2_http
# 2. Test with different temporary directory
rsync --temp-dir=/tmp/rsync_temp ...
# 3. Check system logs for clues
journalctl -xe
dmesg | tail -50
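When scanning those logs, filtering for space-related kernel messages narrows things down quickly. The sample log line below is fabricated for illustration; on the real host you would pipe dmesg or journalctl -k through the same grep:

```shell
# Count ENOSPC-style kernel messages; demonstrated on made-up sample
# lines (real usage: dmesg | grep -iE 'no space|enospc|out of space').
printf '%s\n' \
  'EXT4-fs warning (device xvdb): ext4_da_write_begin: out of space' \
  'usb 1-1: new high-speed USB device number 2' \
  | grep -icE 'no space|enospc|out of space'
```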
To avoid such issues in future deployments:
- Implement monitoring for both disk space and inodes
- Set up alerts when usage exceeds 80% threshold
- Consider using the --inplace rsync option for large file transfers
- Regularly verify filesystem integrity
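The monitoring advice above can be sketched as a cron-friendly shell function. This is a minimal example assuming GNU coreutils df, with the 80% threshold from the list; the mount point and alert delivery are placeholders to adapt:

```shell
# Alert when either block or inode usage on a mount crosses a threshold.
usage_alert() {
    mount="$1"; limit="${2:-80}"
    blocks=$(df --output=pcent "$mount" | tail -1 | tr -dc '0-9')
    inodes=$(df --output=ipcent "$mount" | tail -1 | tr -dc '0-9')
    if [ "$blocks" -gt "$limit" ] || [ "$inodes" -gt "$limit" ]; then
        echo "ALERT $mount blocks=${blocks}% inodes=${inodes}%"
    else
        echo "OK $mount blocks=${blocks}% inodes=${inodes}%"
    fi
}
# Example: usage_alert /srv/data2_http 80  (wire the ALERT line to mail)
usage_alert /tmp
```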