
Docker Build: Fix OOM Killer Termination During Multi-Stage Builds

Quick Fix Summary

TL;DR: Increase the Docker daemon's memory limit and retry the build.

Exit code 137 (SIGKILL) indicates the Linux OOM Killer terminated a Docker build process because it exceeded available memory, often during resource-intensive stages like compilation or dependency installation.
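The 137 itself is just the shell's "128 + signal number" convention; a quick sketch to confirm the arithmetic (here `sleep` stands in for the build process and `kill -9` for the OOM Killer):

```shell
# Exit codes above 128 mean "terminated by signal (code - 128)".
sleep 60 &
pid=$!
kill -9 "$pid"          # simulate the OOM Killer's SIGKILL
wait "$pid"
status=$?
# 137 - 128 = 9 = SIGKILL
echo "exit status: $status, signal: $((status - 128))"
# prints: exit status: 137, signal: 9
```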

Diagnosis & Causes

  • Insufficient memory allocated to the Docker daemon.
  • A build stage (e.g., compiling a large application) consumes more memory than is available.

Recovery Steps

    Step 1: Verify OOM Killer Activity

    Check system and Docker logs to confirm the OOM Killer was the cause of the SIGKILL.

    bash
    sudo journalctl -k --grep="killed process"
    sudo dmesg -T | grep -i "killed process"
    docker system events --since '5m' --filter 'event=die' --format '{{.Actor.Attributes.exitCode}} {{.Actor.Attributes.name}}'
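The kernel's OOM messages follow a recognizable shape, so the victim's PID and name can be pulled out mechanically. A sketch against a sample line (the exact wording varies by kernel version):

```shell
# Sample of what dmesg/journalctl print on an OOM kill (format varies by kernel):
line='Out of memory: Killed process 21437 (cc1plus) total-vm:9821412kB, anon-rss:7643220kB'

# Extract the PID and the command name:
pid=$(printf '%s\n' "$line" | sed -n 's/.*Killed process \([0-9]*\).*/\1/p')
name=$(printf '%s\n' "$line" | sed -n 's/.*(\([^)]*\)).*/\1/p')
echo "OOM victim: $name (pid $pid)"   # prints: OOM victim: cc1plus (pid 21437)
```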

    Step 2: Increase Docker Daemon Memory Limit (Docker Desktop)

    In Docker Desktop, the daemon runs inside a lightweight VM with its own memory cap, independent of host RAM. Raise the limit under Settings → Resources → Memory; note that daemon.json does not support a "memory" key.

    bash
    # After raising the limit in Settings → Resources → Memory,
    # verify the memory now visible to the daemon (reported in bytes):
    docker info --format '{{.MemTotal}}'

    Step 3: Increase Docker Daemon Memory Limit (Linux Daemon)

    On a native Linux Docker installation, configure the daemon's cgroup memory limits via systemd.

    bash
    sudo systemctl edit docker.service
    # In the editor, add a drop-in with limits sized to your host, for example:
    [Service]
    MemoryHigh=10G
    MemoryMax=12G
    # Save, then reload units and restart the daemon:
    sudo systemctl daemon-reload
    sudo systemctl restart docker
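After the restart, `systemctl show docker -p MemoryMax` reports the effective limit in bytes; a small sketch of converting that back to gibibytes (the sample string below mirrors what the command prints for `MemoryMax=12G`):

```shell
# On the host you would capture: out=$(systemctl show docker -p MemoryMax)
# Sample of that output for MemoryMax=12G (12 * 1024^3 bytes):
out='MemoryMax=12884901888'

bytes=${out#MemoryMax=}                  # strip the "MemoryMax=" prefix
gib=$((bytes / 1024 / 1024 / 1024))      # bytes -> GiB
echo "effective MemoryMax: ${gib}G"      # prints: effective MemoryMax: 12G
```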

    Step 4: Optimize Dockerfile for Memory Usage

    Modify the Dockerfile to reduce peak memory consumption during the build.

    dockerfile
    # 1. Install and clean up in the same RUN so the apt cache never persists in a layer.
    RUN apt-get update && \
        apt-get install -y --no-install-recommends \
          package1 \
          package2 && \
        rm -rf /var/lib/apt/lists/*
    # 2. Use .dockerignore to exclude unnecessary files from the build context.
    # 3. Order instructions from least to most frequently changed to maximize cache reuse.
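For point 2 above, a `.dockerignore` that keeps heavyweight, build-irrelevant paths out of the context might look like this (entries are illustrative; tailor them to your project):

```text
# .dockerignore — everything listed here is never sent to the daemon
.git
node_modules
target
dist
*.log
.env
```

A smaller context means less memory spent receiving and solving it, and a faster upload to the daemon.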

    Step 5: Use BuildKit and Set Build-Time Memory Limit

    Enable BuildKit and pass --memory to cap memory for build-time containers. Note that only the classic builder is guaranteed to enforce this flag; BuildKit may ignore it, so verify the cap with docker stats (Step 7) and fall back to the daemon-level limits from Step 3 if it has no effect.

    bash
    DOCKER_BUILDKIT=1 docker build --memory 4g -t myapp:latest .

    Step 6: Implement Build Stage Memory Control

    For multi-stage builds, ensure heavy operations (compilation) happen in an early stage with sufficient resources, and copy only artifacts to final stages.

    dockerfile
    # Example structure focusing memory use in 'builder' stage.
    FROM golang:1.21 AS builder
    WORKDIR /app
    COPY . .
    # Memory-intensive compilation happens here
    RUN go build -o myapp .
    FROM alpine:latest
    WORKDIR /root/
    # Copy only the binary, not source or build cache.
    COPY --from=builder /app/myapp .
    CMD ["./myapp"]

    Step 7: Monitor Build Memory in Real-Time

    Use system monitoring tools during the build to identify the exact stage causing the OOM.

    bash
    # Terminal 1: run the build in the foreground.
    docker build -t myapp:debug .
    # Terminal 2: watch per-container stats (BuildKit worker containers show up here too).
    docker stats
    # Terminal 3: watch system memory.
    watch -n 1 "free -h"

    Architect's Pro Tip

    "The OOM Killer often strikes during the 'solving' phase of BuildKit, which analyzes the Dockerfile graph. A large number of layers, complex COPY commands with big contexts, or a multi-platform build (--platform) can drastically increase memory pressure during this phase, not just during RUN commands."
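One quick way to see how much the context contributes is to measure the tar stream the client would upload, with and without an exclusion, much as .dockerignore would apply it. A sketch with a synthetic directory (names and sizes are made up for illustration):

```shell
# Build a throwaway context with 1 MiB of dead weight in node_modules:
ctx=$(mktemp -d)
mkdir -p "$ctx/src" "$ctx/node_modules"
head -c 1048576 /dev/zero > "$ctx/node_modules/blob.bin"
echo 'console.log("hi")' > "$ctx/src/main.js"

# Size of the context as shipped, with and without the exclusion:
full=$(tar -C "$ctx" -cf - . | wc -c)
trimmed=$(tar -C "$ctx" --exclude='node_modules' -cf - . | wc -c)
echo "full: $full bytes, without node_modules: $trimmed bytes"
rm -rf "$ctx"
```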

    Frequently Asked Questions

    I increased my system RAM, but the build still fails with OOM. Why?

    Docker has its own configurable memory limit, independent of system RAM. You must increase the Docker daemon's memory allocation (Steps 2 & 3). Also, check if you are using Docker Desktop, as it runs in a VM with its own resource limits.

    Can I just disable the OOM Killer for the Docker daemon?

    No. Disabling the OOM Killer system-wide is dangerous and not recommended. The correct approach is to allocate adequate resources to Docker (daemon/VM memory limits, the build --memory flag) and optimize your build process to stay within them.

    Why does this happen more in CI/CD pipelines (like Jenkins/GitLab CI)?

    CI runners are often VMs or containers with strict, shared resource limits. A single memory-intensive build can consume the entire runner's allocation. Always configure your CI job's resource requests (memory) to match your build's needs and use the docker build --memory flag.
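In GitLab CI, for example, the cap can be written into the job itself (an illustrative snippet: the image tags and the 4g value are assumptions to adapt, and runner-level memory limits live in the runner's config.toml, managed by the admin):

```yaml
build-image:
  image: docker:27
  services:
    - docker:27-dind
  script:
    # Cap build memory so one job cannot exhaust the shared runner.
    - docker build --memory 4g -t myapp:latest .
```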
