How to Fix Kubernetes FailedMount: Volume Mount Error
Quick Fix Summary
TL;DR: Check PersistentVolumeClaim binding, verify node storage availability, and ensure correct StorageClass configuration.
A FailedMount error occurs when the kubelet on a node cannot mount a volume into a pod's filesystem (attach failures surface separately as FailedAttachVolume). It prevents the pod from starting and is a critical, production-blocking issue.
Diagnosis & Recovery Steps
Step 1: Diagnose the Pod and PVC State
First, gather detailed status from the affected pod and its associated PersistentVolumeClaim to identify the specific failure reason.
kubectl describe pod <pod-name> -n <namespace>
kubectl get pvc <pvc-name> -n <namespace> -o yaml
kubectl get events -n <namespace> --field-selector involvedObject.name=<pod-name>
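The events output is usually the fastest way to narrow things down. As a sketch, filtering by reason and severity cuts out the noise (`reason` and `type` are standard Event field selectors):
# Show only mount-related warnings in the namespace, and all warnings sorted newest-last:
kubectl get events -n <namespace> --field-selector reason=FailedMount
kubectl get events -n <namespace> --field-selector type=Warning --sort-by=.lastTimestamp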
Step 2: Verify PersistentVolume (PV) Binding
Ensure the PVC is correctly bound to an available PV. An unbound PVC ('Pending' status) is a primary cause of FailedMount.
kubectl get pv
kubectl get pvc -A | grep -v Bound
# If PVC is Pending, check StorageClass:
kubectl get storageclass
kubectl describe storageclass <storageclass-name>
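If the claim spec itself is in doubt, it can help to compare it against a minimal claim that is known to bind. A sketch, assuming a StorageClass named `standard` exists in the cluster (adjust the name, size, and access mode for your environment):
# Minimal test claim for dynamic provisioning (illustrative values):
kubectl apply -n <namespace> -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: failedmount-test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
EOF
# If this test claim binds while the original stays Pending, the problem is in the original claim's spec.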
Step 3: Check Node Storage & Kubelet Status
SSH into the node hosting the pod (from the describe output) and verify kubelet logs and local storage health. The error often originates here.
# Find the node:
kubectl get pod <pod-name> -n <namespace> -o wide
# On the node, check kubelet logs for mount errors:
sudo journalctl -u kubelet --since "5 minutes ago" | grep -i mount
# Check local disk space:
df -h
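It is also worth confirming whether the volume ever reached the kubelet's pod directory. A sketch, assuming the default kubelet data directory /var/lib/kubelet (the pod UID comes from `kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.uid}'`):
# List the volumes the kubelet has set up for this pod:
sudo ls /var/lib/kubelet/pods/<pod-uid>/volumes/
# Check whether the backing device or share is actually mounted:
mount | grep /var/lib/kubelet/pods/<pod-uid>
# Inode exhaustion can break volume setup even when df -h looks healthy:
df -i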
Step 4: Validate Volume Configuration & Secrets
For cloud or CSI volumes, ensure the required secrets (for provider credentials) exist and are correctly referenced in the StorageClass or PV.
# Check for secrets referenced by the StorageClass or PV:
kubectl get secret -n <namespace>
kubectl describe pv <pv-name> | grep -A5 -B5 Secret
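For CSI volumes, the secret reference usually lives in the StorageClass parameters (commonly keys such as `csi.storage.k8s.io/provisioner-secret-name` and `csi.storage.k8s.io/node-publish-secret-name`, depending on the driver). A sketch of how to trace it:
# Look for secret references in the StorageClass parameters:
kubectl get storageclass <storageclass-name> -o yaml | grep -i secret
# Confirm the referenced secret exists and carries the expected keys (values stay base64-encoded):
kubectl get secret <secret-name> -n <secret-namespace> -o jsonpath='{.data}'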
Step 5: Inspect CSI Driver or In-Tree Plugin
If using a CSI driver (e.g., the AWS EBS or Azure Disk CSI drivers), ensure its pods are running and healthy, typically in the kube-system namespace.
kubectl get pods -n kube-system | grep -i csi
kubectl logs -n kube-system -l app=<csi-driver-name> --tail=50
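Beyond the pods themselves, the driver has to be registered with the cluster and with the specific node. A sketch of the registration checks (the node plugin pod and container names vary by driver):
# Is the driver registered cluster-wide and on the affected node?
kubectl get csidrivers
kubectl get csinode <node-name> -o yaml
# Mount-phase errors usually land in the node plugin (DaemonSet) rather than the controller:
kubectl logs -n kube-system <csi-node-pod-name> -c <node-plugin-container> --tail=50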
Step 6: Address Security Policies & Permissions
Check for SELinux denials on the node, or for an overly restrictive Pod Security Admission (PSA) level or legacy PodSecurityPolicy (PSP, removed in Kubernetes 1.25) blocking volume mounts.
# Check for SELinux AVC denials on the node:
sudo ausearch -m avc -ts recent
# Check Pod Security Admission labels on the namespace:
kubectl get ns <namespace> -o jsonpath='{.metadata.labels}'
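If the namespace enforces a restrictive level, the pod's own securityContext (fsGroup, seLinuxOptions) is often the missing piece. A sketch of two non-destructive checks; the server-side dry run reports which existing pods would violate a stricter level without changing anything:
# Show the security context the pod is actually running with (fsGroup, seLinuxOptions, etc.):
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.securityContext}'
# Dry-run a stricter enforce level to see what would break, without applying it:
kubectl label --dry-run=server --overwrite ns <namespace> pod-security.kubernetes.io/enforce=restricted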
Step 7: Force Pod Restart on a Different Node
As an immediate recovery tactic, cordon the faulty node and delete the pod to let the controller schedule it elsewhere.
kubectl cordon <node-name>
kubectl delete pod <pod-name> -n <namespace>
# Monitor new pod scheduling:
kubectl get pod <pod-name> -n <namespace> -w
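Two follow-ups worth noting: controller-managed pods come back under a new name, and the node should be uncordoned once the underlying issue is fixed. A sketch (the `app=<app-label>` selector is illustrative; use whatever labels your workload carries):
# Controller-managed pods get a new name; watch by label instead of the old pod name:
kubectl get pods -n <namespace> -l app=<app-label> -w
# Re-enable scheduling once the node problem is resolved:
kubectl uncordon <node-name>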
"For dynamic provisioning failures, the kube-controller-manager logs often contain the root cause before the error propagates to the kubelet. Check them with `kubectl logs -n kube-system -l component=kube-controller-manager --tail=100`."
Frequently Asked Questions
My PVC status is 'Pending' forever. What's wrong?
A 'Pending' PVC usually indicates that no StorageClass is specified and no default class exists, that the specified StorageClass doesn't exist, or that the storage provider (e.g., due to a cloud quota) cannot provision the volume. Run `kubectl describe pvc <name>` and check the events for the exact reason.
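If the cause is a missing default StorageClass, one class can be marked as the cluster default (the annotation below is the standard one; apply it to a class that actually fits your workloads):
# "(default)" appears next to the default class, if any:
kubectl get storageclass
# Mark a class as the cluster default:
kubectl patch storageclass <storageclass-name> -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'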
The pod works on Node A but fails with FailedMount on Node B. Why?
This points to a node-specific issue. Node B likely lacks the necessary storage driver, has a different OS/kernel version, has full disk, or has a network/firewall issue preventing communication with the storage backend (e.g., NFS server, cloud API).
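A quick way to compare the two nodes is to check driver registration and system details side by side, and to test reachability of the storage backend from the failing node. A sketch (the port 2049 check assumes an NFS backend and that `nc` is installed on the node):
# Is the CSI driver registered on both nodes?
kubectl get csinode <node-a> <node-b>
# Compare kernel, OS image, and kubelet version:
kubectl describe node <node-b> | grep -iE 'kernel version|os image|kubelet version'
# From Node B, test connectivity to the NFS backend (port 2049):
nc -vz <nfs-server> 2049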
How do I distinguish between an 'Attach' error and a 'Mount' error?
Check the detailed pod events (`kubectl describe pod`). 'FailedAttachVolume' indicates the cloud/CSI controller couldn't attach the disk to the node VM. 'FailedMount' means the kubelet on the node failed to mount the already-attached disk into the pod's filesystem.
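The attach phase is also visible as a first-class API object: each attachment is recorded as a VolumeAttachment, so its status shows whether the controller side ever succeeded.
# One object per volume/node attachment; ATTACHED=false or error events point at the attach phase:
kubectl get volumeattachments
kubectl describe volumeattachment <attachment-name>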