When running simple container images like Ubuntu or Alpine in Minikube, you might encounter the dreaded CrashLoopBackOff state, accompanied by "Error syncing pod" messages. This occurs even with basic commands:
kubectl run ubuntu --image=ubuntu
The pod appears to start but immediately terminates with status Completed, triggering Kubernetes' restart policy and creating the loop.
Unlike server applications like Nginx that run as persistent processes, Ubuntu's base image executes /bin/bash as its default command, which exits immediately when not run interactively. The container lifecycle looks like:
1. Container starts (State: Running)
2. Default command completes (State: Terminated, Exit Code: 0)
3. Kubernetes restarts container (State: Waiting, Reason: CrashLoopBackOff)
4. Cycle repeats
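You can reproduce step 2 locally without a cluster. This is a small sketch, not a kubectl command: bash started the way the kubelet starts it (no TTY, no stdin) finds nothing to do and exits cleanly with code 0, which Kubernetes then treats as a container that needs restarting.

```shell
# Run bash non-interactively with no input, as the kubelet would.
bash </dev/null
# bash reaches end-of-input and exits successfully.
echo "bash exited with code $?"   # prints: bash exited with code 0
```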
Here are three working approaches to keep your Ubuntu container running:
Method 1: Run an Infinite Loop
kubectl run ubuntu --image=ubuntu --command -- /bin/sh -c "while true; do sleep 10; done"
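You can sanity-check the loop itself on any local shell before handing it to Kubernetes. Capped with timeout, the loop never exits on its own; status 124 means timeout had to kill it, i.e. the process would have run indefinitely:

```shell
# The keep-alive loop from Method 1, limited to 1 second by timeout.
timeout 1 sh -c 'while true; do sleep 10; done'
# 124 = timeout killed the still-running process.
echo "exit status: $?"   # prints: exit status: 124
```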
Method 2: Create a Deployment with Custom Command
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command: ["/bin/bash"]
        args: ["-c", "tail -f /dev/null"]
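Why tail -f /dev/null? tail follows a file that never grows, so it blocks forever while using almost no CPU. A quick local check (again, 124 means the command had to be killed rather than exiting on its own):

```shell
# tail follows /dev/null, which never produces data, so it blocks.
timeout 1 tail -f /dev/null
echo "exit status: $?"   # prints: exit status: 124
```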
Method 3: Attach Interactive Shell
For debugging purposes, you can force an interactive session:
kubectl run -i --tty ubuntu --image=ubuntu --restart=Never -- bash
# Remember to delete afterward:
kubectl delete pod ubuntu
When troubleshooting, always check:
kubectl describe pod [pod-name] - shows the termination reason
kubectl logs [pod-name] --previous - views logs from the crashed instance
kubectl get events --sort-by=.metadata.creationTimestamp - chronological event log
If problems persist in Minikube:
# Reset the cluster (warning: destructive)
minikube delete
minikube start
# Check resource allocation (takes effect on the next minikube start)
minikube config set memory 4096
minikube config set cpus 2
Remember that base OS images aren't designed to run as services - they're meant as building blocks. For production, always use proper process managers or define appropriate commands in your Kubernetes manifests.
To recap: running a basic container image like Ubuntu or Alpine in Minikube with a command such as:
kubectl run ubuntu --image=ubuntu
puts the pod into a CrashLoopBackOff state with these key indicators:
- Container status shows "Reason: Completed" with Exit Code 0
- Events log shows "Back-off restarting failed container"
- Frequent "Error syncing pod" messages
The fundamental issue is how Kubernetes runs containers. An interactive docker run -it ubuntu keeps bash alive by attaching a terminal, but Kubernetes runs containers non-interactively and expects them to:
- Run a persistent process (like Nginx does)
- Explicitly declare a command to keep running
- Not exit immediately (which Ubuntu/Alpine do by default)
Method 1: Force the container to stay alive
kubectl run ubuntu --image=ubuntu --command -- sleep infinity
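One caveat worth knowing: sleep infinity relies on the sleep implementation accepting the word "infinity". GNU coreutils (the Ubuntu image) does; some minimal images may not, in which case fall back to a loop. A local sketch of the behavior:

```shell
# GNU sleep parses "infinity" and would block forever;
# timeout caps it at 1 second and reports 124 (had to kill it).
timeout 1 sleep infinity
echo "exit status: $?"   # prints: exit status: 124
```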
Method 2: Create a proper Deployment manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command: ["/bin/bash", "-c", "--"]
        args: ["while true; do sleep 30; done;"]
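The "--" in the command array simply ends bash's option parsing; the first element of args then becomes the command string for -c. You can see the equivalent invocation on a local shell:

```shell
# Everything after -- is treated as an operand, so the quoted
# string becomes the -c command string rather than an option.
bash -c -- 'echo container stays alive'   # prints: container stays alive
```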
Nginx containers inherently run a persistent web server process, making them Kubernetes-friendly out of the box. Compare these behaviors:
# Nginx (works)
kubectl run nginx --image=nginx
# Ubuntu (requires adjustment)
kubectl run ubuntu --image=ubuntu --command -- tail -f /dev/null
When facing similar issues:
- Check container logs:
kubectl logs -p POD_NAME
- Inspect events:
kubectl get events --sort-by=.metadata.creationTimestamp
- Test with interactive shell:
kubectl run -i --tty debug --image=ubuntu --restart=Never -- bash
For Minikube environments:
- Verify VM resources:
minikube ssh -- free -m
- Check Docker driver compatibility
- Consider newer Minikube versions (v0.24.1 is quite old)