Root Cause Analysis: Docker 'No Space Left on Device' and OverlayFS
Quick Fix Summary
TL;DR: Run 'docker system prune -a --volumes' to reclaim space immediately. Note that --volumes also deletes anonymous volumes, so drop that flag if any volume data matters.
This error occurs when Docker's OverlayFS (overlay2) storage driver exhausts either disk space or inodes on the filesystem backing /var/lib/docker. Because image layers consist of many small files, a plain 'df -h' can still report free space while the inode table is exhausted, so the layered filesystem hides the real constraint.
Recovery Steps
Step 1: Diagnose the Real Constraint
Determine whether the constraint is disk space or inode exhaustion: 'df -h' can report plenty of free space while the filesystem backing /var/lib/docker has run out of inodes.
df -h /var/lib/docker
df -i /var/lib/docker
docker system df
Step 2: Emergency Cleanup of Unused Objects
Remove all stopped containers, unused networks, unused images (with -a this goes beyond dangling images), and the build cache. Use the --volumes flag cautiously, as it also deletes anonymous volumes not referenced by any container.
docker system prune -a --volumes --force
docker builder prune --all --force
Step 3: Target Specific Space Hogs
Identify and remove large containers, images, or volumes individually when full cleanup isn't possible.
docker ps -s --format "table {{.Names}}\t{{.Size}}"
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}" --filter "dangling=false"
docker volume ls --format "table {{.Name}}\t{{.Driver}}"
Step 4: Clean Container Logs (Major Culprit)
Container JSON logs in /var/lib/docker/containers can consume gigabytes. Use log rotation or truncate existing logs.
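For new containers, log rotation can also be set per container at run time rather than daemon-wide; a minimal sketch, with nginx:alpine standing in for whatever image you actually run:
docker run -d \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  nginx:alpine   # illustrative image, not part of the original steps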
find /var/lib/docker/containers -name "*.log" -type f -size +100M -exec truncate -s 0 {} \;
echo '{"log-driver": "json-file", "log-opts": {"max-size": "10m", "max-file": "3"}}' > /etc/docker/daemon.json && \
systemctl restart docker
Step 5: Configure Docker Daemon for Prevention
Set global limits on disk usage and enable log rotation in daemon.json to prevent recurrence. Note that overlay2.size only takes effect when /var/lib/docker sits on xfs mounted with the pquota option, and overlay2.override_kernel_check is only needed on old kernels (newer Docker Engine releases have dropped it), so omit whichever option does not apply to your hosts.
cat > /etc/docker/daemon.json << EOF
{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true",
    "overlay2.size=100G"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
systemctl restart docker
Step 6: Monitor with Proactive Script
Implement monitoring of Docker disk and inode usage. The script below prints a critical message and runs a time-filtered prune when either value crosses the threshold; wire its output into cron mail or your alerting system for automated alerts.
#!/bin/bash
# Alert and prune when Docker's disk or inode usage crosses the threshold.
THRESHOLD=90
# -P keeps df output on a single line even for long device names.
DISK_USAGE=$(df -P /var/lib/docker | awk 'NR==2 {print $5}' | sed 's/%//')
INODE_USAGE=$(df -iP /var/lib/docker | awk 'NR==2 {print $5}' | sed 's/%//')
if [ "$DISK_USAGE" -gt "$THRESHOLD" ] || [ "$INODE_USAGE" -gt "$THRESHOLD" ]; then
    echo "CRITICAL: Docker storage usage high - Disk: ${DISK_USAGE}%, Inodes: ${INODE_USAGE}%"
    docker system prune -a --filter "until=24h" --force
fi
Architect's Pro Tip
"OverlayFS 'no space' errors often mean inode exhaustion, not disk space. Check 'df -i' first. Millions of small files in image layers consume inodes silently."
Frequently Asked Questions
Why does 'df' show free space but Docker fails with 'no space left'?
Docker uses OverlayFS which creates copy-on-write layers. Each file operation can create new inodes, exhausting them while disk space appears available. Always check both 'df -h' and 'df -i'.
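To see where the inodes are going, you can count entries per layer directory. A minimal sketch, assuming the default data-root /var/lib/docker with the overlay2 driver (run as root):
for layer in /var/lib/docker/overlay2/*/; do
  printf '%8d  %s\n' "$(find "$layer" | wc -l)" "$layer"   # file count approximates inode use
done | sort -rn | head -10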
Is it safe to delete files directly from /var/lib/docker?
Never delete files manually from /var/lib/docker. Doing so desynchronizes Docker's metadata from what is actually on disk and can corrupt the installation. Always use 'docker system prune', 'docker volume rm', or 'docker image rm' commands instead.
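As an illustration of the safe path, these commands remove objects through Docker rather than the filesystem (the volume name is hypothetical):
docker container prune --force      # stopped containers only
docker image prune --force          # dangling images only
docker volume rm my-unused-volume   # a specific volume you have confirmed is unused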
How do I prevent this error in production?
Implement four layers of defense: 1) Docker daemon storage limits, 2) log rotation configuration, 3) a cron job running 'docker system prune --filter until=24h', and 4) monitoring of both disk space and inode usage.
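For layer 3, a minimal cron sketch (assuming docker lives at /usr/bin/docker, root's crontab is used, and pruning unused objects created more than 24 hours ago is acceptable in your environment):
# Run nightly at 03:00 and append output to a log for auditing.
0 3 * * * /usr/bin/docker system prune --force --filter "until=24h" >> /var/log/docker-prune.log 2>&1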