How to Clean Docker Logs to Free Up Disk Space on GCE: JSON Log Rotation & Truncation Guide


When running Docker on Google Compute Engine (GCE), many developers run out of space on the root filesystem even though their images live on a separate, larger volume. The hidden culprit is often Docker's JSON log files accumulating in /var/lib/docker/containers/.

Each container stores its logs in a JSON file at:

/var/lib/docker/containers/[container-id]/[container-id]-json.log

To quickly find all log files consuming space:

sudo find /var/lib/docker/containers/ -name "*.log" -exec ls -lh {} \;
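
To see the combined footprint of all container logs at once (assuming the default json-file driver), you can sum them with du; the glob runs inside a root shell because the directory is not readable by normal users:

# Sum every container's JSON log; the last line is the grand total
sudo sh -c 'du -ch /var/lib/docker/containers/*/*-json.log | tail -n 1'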

For immediate space recovery, you can:

# View log file sizes
sudo du -ha /var/lib/docker/containers/ | grep -P "^\d+(\.\d+)?[MG]"

# Truncate specific log file
sudo truncate -s 0 /var/lib/docker/containers/[container-id]/[container-id]-json.log
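
If you know the container by name rather than ID, you can resolve its log path first (the container name "web" below is only an example):

# Look up the JSON log path for a container and empty it in place
LOG_PATH=$(docker inspect --format='{{.LogPath}}' web)
sudo truncate -s 0 "$LOG_PATH"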

For long-term management, configure Docker's logging driver in /etc/docker/daemon.json:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "compress": "true"
  }
}

Then restart Docker:

sudo systemctl restart docker
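
Note that daemon-level log options only apply to containers created after the restart; existing containers keep their previous log configuration until they are recreated. The same options can also be set per container at run time (nginx here is just an example image):

docker run -d \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  nginx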

As an extra safety net, you can also truncate oversized logs on a schedule. Save the following script as /usr/local/bin/clean_docker_logs.sh:

#!/bin/bash
# Truncate any Docker container log file larger than MAX_SIZE.
LOG_DIR="/var/lib/docker/containers/"
MAX_SIZE="100M"

# -print0 with read -d '' handles file names containing spaces safely
find "$LOG_DIR" -name "*.log" -type f -size +"$MAX_SIZE" -print0 | while IFS= read -r -d '' file; do
    echo "Cleaning $file"
    truncate -s 0 "$file"
done

Make it executable and add to cron:

sudo chmod +x /usr/local/bin/clean_docker_logs.sh
sudo crontab -e
# Add: 0 3 * * * /usr/local/bin/clean_docker_logs.sh
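
If you want a record of what the job truncates, redirect its output in the crontab entry (the log path is just an example):

# Append the script's output to a log file for later review
0 3 * * * /usr/local/bin/clean_docker_logs.sh >> /var/log/clean_docker_logs.log 2>&1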

Alternatively, let logrotate manage the files. Create /etc/logrotate.d/docker (copytruncate matters here because Docker keeps the log file open while the container runs):

/var/lib/docker/containers/*/*.log {
  daily
  rotate 7
  compress
  delaycompress
  missingok
  copytruncate
}
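
To confirm the rule works without waiting for the nightly run, logrotate can be dry-run in debug mode and then forced:

# Show what would happen without rotating anything
sudo logrotate -d /etc/logrotate.d/docker

# Force an immediate rotation using this rule
sudo logrotate -f /etc/logrotate.d/docker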

After implementation, monitor disk space with:

watch -n 60 df -h /
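
If Docker's data directory sits on its own disk, keep an eye on that mount point as well:

# Disk usage of the Docker data directory and its containers (log) subdirectory
df -h /var/lib/docker
sudo du -sh /var/lib/docker/containers/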

And check log file sizes periodically:

sudo find /var/lib/docker/containers/ -name "*.log" -exec ls -lh {} \; | sort -k5 -hr

These logs can grow rapidly for verbose applications. You can verify the exact log path of a specific container with:

docker inspect --format='{{.LogPath}}' [container-name]
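
To see every running container's name next to the size of its log, a short loop over docker ps works (a sketch, assuming the json-file driver):

# Print "size  container-name" for each running container
docker ps -q | while read -r id; do
  log=$(docker inspect --format='{{.LogPath}}' "$id")
  name=$(docker inspect --format='{{.Name}}' "$id")
  printf '%s\t%s\n' "$(sudo du -h "$log" | cut -f1)" "$name"
done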

To truncate logs for running containers without stopping them:

# Find log files larger than 100 MB
sudo find /var/lib/docker/containers/ -name "*.log" -size +100M -exec ls -lh {} \;

# Empty all JSON logs in place (the files stay open, so containers keep writing to them);
# the glob must expand inside a root shell because the directory is not world-readable
sudo sh -c 'truncate -s 0 /var/lib/docker/containers/*/*-json.log'

For high-volume logging environments, consider switching to an alternative logging driver:

# Syslog driver
docker run --log-driver=syslog --log-opt syslog-address=udp://1.2.3.4:1111 nginx

# Journald driver
docker run --log-driver=journald nginx
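
With the journald driver, a container's output can then be read back through journalctl by container name (the name "web" is only an example):

# The journald driver tags entries with CONTAINER_NAME and CONTAINER_ID fields
journalctl CONTAINER_NAME=web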

On Google Compute Engine, ensure your Docker directory is mounted on the larger persistent disk:

# Check current mount points
df -h

# Move Docker directory if needed
sudo systemctl stop docker
sudo mv /var/lib/docker /mnt/disks/[YOUR_DISK]/
sudo ln -s /mnt/disks/[YOUR_DISK]/docker /var/lib/docker
sudo systemctl start docker
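
Instead of a symlink, you can also point Docker at the new location with the data-root setting in /etc/docker/daemon.json (the path below is an example mount point):

{
  "data-root": "/mnt/disks/data/docker"
}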

If you prefer a single crontab entry to the script above, add this line to /etc/crontab instead:

# Truncate any container log over 100 MB every night at 03:00
0 3 * * * root find /var/lib/docker/containers/ -name "*.log" -size +100M -exec truncate -s 0 {} \;

If containers fail to start after log changes:

# Check Docker daemon logs
journalctl -u docker.service

# Verify container log configuration
docker inspect --format='{{.HostConfig.LogConfig}}' [container-name]
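
For machine-readable output, the same inspection can be wrapped in the json template function:

# Print the effective log configuration as JSON
docker inspect --format='{{json .HostConfig.LogConfig}}' [container-name]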