
How to Fix Linux ENOMEM: Out of Memory Error

Quick Fix Summary

TL;DR

Kill the top memory-consuming process and increase swap space immediately.

ENOMEM ("Error: NO MEMory", errno 12) occurs when the Linux kernel cannot allocate requested memory. This is a critical system-level failure that stalls allocating processes and can crash applications.
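For reference, ENOMEM has a fixed numeric value on Linux; a quick sketch (assuming `python3` is available) shows the errno and message an application would see:

```shell
# Print ENOMEM's errno value and its strerror message:
python3 -c 'import errno, os; print(errno.ENOMEM, os.strerror(errno.ENOMEM))'
# → 12 Cannot allocate memory
```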

Diagnosis & Causes

  • Physical RAM and swap space are completely exhausted.
  • A single process or container has a memory leak.
  • Kernel overcommit settings are too restrictive.
  • cgroup memory limit has been reached.
  • Fragmented memory prevents large contiguous allocations.
Recovery Steps


    Step 1: Immediate Triage - Identify the Culprit

    Find which process is consuming the most memory to target your recovery actions.

    bash
    free -h
    top -o %MEM
    ps aux --sort=-%mem | head -20

    Step 2: Emergency Memory Free-Up

    Clear page caches, dentries, and inodes. This is non-destructive (only clean, reclaimable caches are dropped) and can free significant memory instantly, at the cost of a temporary performance dip while caches refill.

    bash
    sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
    # Note: "sudo echo 3 > ..." would fail; the redirect runs unprivileged.

    Step 3: Manage the OOM Killer & Critical Processes

    Check if the OOM killer intervened and manually terminate the problematic process if needed.

    bash
    dmesg -T | grep -i "killed process"
    # Kill the top memory user (NR==2 skips the ps header row):
    pid=$(ps aux --sort=-%mem | awk 'NR==2{print $2}')
    kill "$pid"        # SIGTERM first, for a clean shutdown
    # kill -9 "$pid"   # last resort if the process ignores SIGTERM

    Step 4: Create or Expand Swap Space (Temporary Fix)

    Add emergency swap to prevent immediate crashes while you find the root cause. This creates a 2GB swap file (if your filesystem cannot use a fallocate'd file as swap, e.g. btrfs, create it with dd instead).

    bash
    sudo fallocate -l 2G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    sudo swapon --show

    Step 5: Analyze Kernel Memory Parameters

    Check overcommit and swappiness settings, which control how aggressively the kernel grants allocations (overcommit_memory: 0 = heuristic, the default; 1 = always allow; 2 = strict accounting).

    bash
    cat /proc/sys/vm/overcommit_memory
    cat /proc/sys/vm/overcommit_ratio
    cat /proc/sys/vm/swappiness

    Step 6: Investigate cgroup Limits (Containers/Kubernetes)

    If running containers, the ENOMEM might be a cgroup limit, not a host OOM.

    bash
    # cgroup v1:
    cat /sys/fs/cgroup/memory/memory.usage_in_bytes
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes
    # cgroup v2 (most modern distros):
    cat /sys/fs/cgroup/memory.current
    cat /sys/fs/cgroup/memory.max
    # For a Docker container:
    docker stats --no-stream

    Step 7: Enable Persistent Swap & Tune Sysctl (Long-term)

    Make the swap file permanent and adjust kernel parameters to be more permissive (if appropriate).

    bash
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
    # Always allow overcommit (mode 1; often recommended for fork-heavy workloads such as Redis):
    echo 'vm.overcommit_memory = 1' | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p

    Architect's Pro Tip

    "ENOMEM often hits *before* the OOM killer triggers. Monitor '/proc/meminfo' for 'CommitLimit' and 'Committed_AS'; if Committed_AS nears CommitLimit, ENOMEM is imminent."
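The tip above can be turned into a minimal check script; this is a sketch assuming a standard Linux `/proc/meminfo`, and the 90% warning threshold is an arbitrary choice:

```shell
# Warn when Committed_AS approaches CommitLimit (an ENOMEM risk indicator).
commit_limit=$(awk '/^CommitLimit:/  {print $2}' /proc/meminfo)
committed=$(awk '/^Committed_AS:/ {print $2}' /proc/meminfo)
pct=$(( committed * 100 / commit_limit ))
echo "Committed_AS is ${pct}% of CommitLimit"
if [ "$pct" -ge 90 ]; then
    echo "WARNING: commit headroom nearly exhausted; ENOMEM is likely"
fi
```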

    Frequently Asked Questions

    What's the difference between ENOMEM and the OOM Killer?

    ENOMEM is an error returned to an application when a memory request fails. The OOM Killer is a last-resort kernel process that terminates applications *after* the system is out of memory to free some up.
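One way to see the difference first-hand is to cap a subshell's address space with `ulimit`, so an allocation fails with an ENOMEM-style error while the host has plenty of free memory and the OOM killer never runs. The ~100 MB cap and the use of `python3` here are illustrative assumptions:

```shell
# A 500 MB allocation cannot fit under a ~100 MB address-space cap,
# so it fails inside the subshell without involving the OOM killer.
(
    ulimit -v 100000    # cap virtual memory at ~100 MB (value in kB)
    python3 -c 'x = bytearray(500 * 1024 * 1024)' 2>/dev/null \
        && echo "allocation succeeded" \
        || echo "allocation failed (ENOMEM territory)"
)
```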

    Is it safe to set `vm.overcommit_memory = 1`?

    For most application servers (Java, databases, fork-heavy services such as Redis), yes: it prevents spurious ENOMEM from the overcommit heuristic. If you need strict allocation guarantees instead (e.g. scientific computing), use mode 2 (strict accounting) rather than the default heuristic mode 0.
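Mode 2's ceiling can be recomputed by hand, which makes the trade-off concrete; this sketch ignores hugepages and `vm.overcommit_kbytes` for simplicity:

```shell
# Under strict accounting: CommitLimit = swap + RAM * overcommit_ratio / 100
ram_kb=$(awk '/^MemTotal:/  {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
ratio=$(cat /proc/sys/vm/overcommit_ratio)
echo "computed CommitLimit: $(( swap_kb + ram_kb * ratio / 100 )) kB"
grep -E '^(CommitLimit|Committed_AS):' /proc/meminfo
```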

    My container has free memory on the host but gets ENOMEM. Why?

    This is almost always a cgroup memory limit. Check your container runtime (Docker `--memory`, Kubernetes `resources.limits.memory`) or the container's cgroup files: `/sys/fs/cgroup/memory/` on cgroup v1, `memory.current` and `memory.max` on cgroup v2.
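A small sketch that reads usage and limit from whichever cgroup layout is present (the paths assume you are inspecting from inside the container; on cgroup v2 the literal string `max` means unlimited):

```shell
# Read memory usage/limit from cgroup v2 or v1, whichever exists.
if [ -f /sys/fs/cgroup/memory.max ]; then                        # cgroup v2
    usage=$(cat /sys/fs/cgroup/memory.current)
    limit=$(cat /sys/fs/cgroup/memory.max)
elif [ -f /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then    # cgroup v1
    usage=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
    limit=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
else
    usage="unknown"; limit="unknown"            # not in a memory cgroup
fi
echo "memory usage: $usage, limit: $limit"
```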
