
Docker Build SIGKILL 137: Fixing OOM Killer Terminations During Multi-Stage Image Builds

Quick Fix Summary

TL;DR

Increase the Docker daemon's memory limit and rebuild with the --memory flag.

Exit code 137 (SIGKILL) indicates the Docker build process was killed by the Linux Out-of-Memory (OOM) Killer due to excessive memory consumption, typically during resource-intensive stages like compiling large dependencies.
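As a quick confirmation, the shell exit status after a failed build shows the signal arithmetic (137 = 128 + 9, i.e. SIGKILL); the image tag below is just a placeholder.

bash
# Rebuild and print the exit status if the build fails.
docker build -t myapp:latest . || echo "build exited with status $?"
# 137 = 128 + 9: the build process received SIGKILL, typically from the OOM Killer.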

Diagnosis & Causes

  • Insufficient memory allocated to the Docker daemon or build container.
  • A build stage (e.g., compiling Go/Java/Rust) consumes more memory than available.

Recovery Steps

    Step 1: Verify OOM Killer Activity

    Check system and Docker logs to confirm the OOM Killer terminated the process.

    bash
    sudo journalctl -k | grep -i 'killed process'
    sudo dmesg -T | grep -E -i 'killed process|oom|out of memory'
    docker system events --since 5m --filter 'event=die' --format '{{.Status}}'
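    With the classic (non-BuildKit) builder, the intermediate container that died is often still listed and can be inspected directly; <container_id> below is a placeholder for the ID reported by docker ps.

    bash
    # List recently exited containers, then check whether the kernel OOM-killed one
    # (replace <container_id> with the ID from the first command).
    docker ps --last 5 --format '{{.ID}} {{.Status}}'
    docker inspect --format 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}' <container_id>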

    Step 2: Increase Docker Daemon Memory Limit (Docker Desktop)

    Permanently allocate more RAM to the Docker engine via settings.

    bash
    # Open Docker Desktop -> Settings -> Resources -> Memory slider.
    # Increase limit (e.g., from 2GB to 4-8GB). Apply & Restart.
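    After the restart, a quick check confirms how much memory the engine (the Docker Desktop VM, not the host) can actually see:

    bash
    # Total memory visible to the Docker engine, in bytes.
    docker info --format '{{.MemTotal}}'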

    Step 3: Limit Build Container Memory with --memory

    Use the --memory flag to set an explicit, enforced limit on the build container so a runaway build is contained instead of exhausting the host; setting --memory-swap to the same value also prevents the build from spilling into swap.

    bash
    docker build --memory 4g --memory-swap 4g -t myapp:latest .
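    To pick a sensible limit, one option is to run the memory-hungry stage on its own and watch its usage; the stage name build and the ./compile-my-app script below are placeholders for whatever your Dockerfile actually defines.

    bash
    # Build only up to the expensive stage, then run it under an explicit limit.
    docker build --target build -t myapp-build .
    docker run --rm --memory 4g --memory-swap 4g myapp-build ./compile-my-app
    # In a second terminal, watch container memory usage while the compile runs.
    docker stats --no-stream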

    Step 4: Optimize Dockerfile for Multi-Stage Builds

    Reduce intermediate layer memory footprint by splitting operations and cleaning up in the same RUN instruction.

    dockerfile
    # BAD: Creates large intermediate layers.
    RUN apt-get update && \
      apt-get install -y huge-package ...
    RUN ./compile-my-app # Uses lots of RAM
    RUN rm -rf /var/lib/apt/lists/* # Cleanup too late
    # GOOD: Cleanup in the same layer.
    RUN apt-get update && \
        apt-get install -y huge-package && \
        ./compile-my-app && \
        apt-get purge -y --auto-remove huge-package && \
        rm -rf /var/lib/apt/lists/*
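    To confirm the consolidated RUN instruction keeps layers small, compare per-layer sizes before and after; myapp:latest is the tag used elsewhere in this guide.

    bash
    # Show the size contributed by each layer of the built image.
    docker history --format '{{.Size}}\t{{.CreatedBy}}' myapp:latest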

    Step 5: Use BuildKit and --mount=type=cache

    Leverage BuildKit's cache mounts to avoid re-downloading dependencies, reducing memory pressure on subsequent builds.

    dockerfile
    # syntax=docker/dockerfile:1
    FROM alpine AS base
    RUN --mount=type=cache,target=/var/cache/apk \
        ln -s /var/cache/apk /etc/apk/cache && \
        apk update && \
        apk add go gcc musl-dev
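    Cache mounts require BuildKit, which is the default builder in current Docker releases; on older engines it can be enabled per invocation, or builds can go through buildx.

    bash
    # Enable BuildKit explicitly on older Docker versions, or use buildx directly.
    DOCKER_BUILDKIT=1 docker build -t myapp:latest .
    docker buildx build -t myapp:latest .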

    Step 6: Adjust BuildKit Resources (for CI/CLI)

    If you build with docker buildx or a standalone buildkitd, lowering the daemon's build parallelism reduces peak memory usage because fewer stages run concurrently.

    toml
    # Create or edit /etc/buildkit/buildkitd.toml
    [worker.oci]
      max-parallelism = 1

    bash
    # Restart the BuildKit daemon so the new setting takes effect.
    sudo systemctl restart buildkit
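    To put a hard memory cap on the daemon itself, a systemd resource property can be applied; this assumes buildkitd runs as a systemd service named buildkit.service on a cgroup v2 host.

    bash
    # Cap the BuildKit daemon's cgroup at 4 GiB; systemd persists this as a drop-in.
    sudo systemctl set-property buildkit.service MemoryMax=4G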

    Architect's Pro Tip

    "The OOM Killer often targets the *buildkitd* or *runc* process, not your application binary. If builds fail intermittently on a shared host (like a CI agent), it's likely due to memory contention from other jobs. Isolate build agents or implement resource quotas."

    Frequently Asked Questions

    I increased Docker Desktop memory to 8GB but still get error 137. Why?

    Your build might be exceeding the per-container limit. Use `docker build --memory 8g` to apply the limit directly. Also, ensure no other memory-intensive processes are running on the host.

    Can I disable the OOM Killer for Docker?

    Not recommended in production. Disabling it can cause the entire host to become unstable. The correct fix is to properly allocate resources and optimize your build process.
