When running kubectl get nodes or any other Kubernetes command, the error You must be logged in to the server (Unauthorized) indicates an authentication failure: your kubectl client cannot authenticate with the Kubernetes API server.
First, verify these basic items:
kubectl config view # Check your current context and credentials
kubectl config current-context # Verify the active context
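If the full view is noisy, you can cut it down to just the entries the active context uses:
# Show only the cluster, user, and context currently in effect
kubectl config view --minify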
1. Expired or Invalid Credentials
Kubernetes clusters using token-based authentication may have expired credentials. Check your kubeconfig file:
cat ~/.kube/config
Look for the users: section and verify the credentials. For clusters using client certificates, check the expiration:
openssl x509 -in /path/to/cert.crt -noout -dates
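In many kubeconfigs the certificate is embedded as base64 under client-certificate-data rather than referenced by a file path. As a sketch, assuming the first user entry holds your certificate:
# Decode the embedded client certificate and print its validity window
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d | openssl x509 -noout -dates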
2. Incorrect Context Configuration
Your current context might point to a different cluster or user. To list all contexts:
kubectl config get-contexts
To switch contexts:
kubectl config use-context your-correct-context
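If you're unsure what a context actually points at, you can query it directly; a sketch using the placeholder context name from above:
# Show the cluster and user a specific context resolves to
kubectl config view -o jsonpath='{.contexts[?(@.name=="your-correct-context")].context}'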
3. API Server Accessibility Issues
Verify you can reach the API server:
curl -k https://your-api-server:6443
If this returns 401 Unauthorized, the API server is reachable and the problem is purely authentication; a timeout or connection refused points to a network or endpoint problem instead.
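To separate those two failure modes in one step, reduce the response to its status code; a sketch using the same placeholder address:
# 401/403 means reachable but unauthenticated; a timeout means a network problem
curl -k -s -o /dev/null -w '%{http_code}\n' https://your-api-server:6443/version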
Using Verbose Output
While kubectl doesn't have a -vv flag, you can enable more verbose HTTP logging:
kubectl get nodes -v=6 # Shows HTTP requests
kubectl get nodes -v=8 # Shows HTTP headers (sensitive info!)
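This verbose output is written to stderr, so redirect that stream if you want to keep it for inspection:
# Capture verbose client logs to a file (they may contain sensitive headers)
kubectl get nodes -v=8 2> kubectl-debug.log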
Direct API Server Access
Try accessing the API directly with credentials from your kubeconfig. Two caveats: the kubectl get secrets step below only works while at least one credential still authenticates, and on Kubernetes 1.24+ service account token secrets are no longer created automatically, so the lookup may come back empty:
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}" | base64 --decode)
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
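To check the outcome at a glance, reduce the response to its status code, reusing the variables above:
# 200 means the token authenticates; 401 means it was rejected
curl -s -o /dev/null -w '%{http_code}\n' --header "Authorization: Bearer $TOKEN" --insecure $APISERVER/api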
To avoid future authentication issues:
- Regularly rotate credentials before they expire
- Use kubectl config set-credentials to update credentials
- On certificate-based clusters, monitor expiry with kubeadm certs check-expiration (kubeadm alpha certs check-expiration on versions before 1.20); see the renewal sketch after this list
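On kubeadm clusters, expired control-plane certificates can be renewed in place. A minimal sketch, assuming you run it on a control-plane node:
# Inspect expiry, then renew all control-plane certificates
sudo kubeadm certs check-expiration
sudo kubeadm certs renew all
# The control-plane static pods must be restarted to pick up the new certificates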
If you still can't resolve the issue:
- Check the Kubernetes control plane component logs (kube-apiserver, kube-controller-manager); see the sketch after this list
- Verify RBAC rules haven't changed (note that RBAC problems usually surface as 403 Forbidden rather than 401 Unauthorized)
- Check if the cluster's authentication webhooks are functioning
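How you reach those logs depends on the runtime; on a kubeadm cluster with containerd, you can read them from the control-plane node itself, which works even while kubectl is locked out. A sketch under that assumption:
# Tail the kube-apiserver container logs directly on the node, bypassing the API
sudo crictl ps --name kube-apiserver -q | xargs sudo crictl logs --tail=100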
When executing kubectl get nodes or other Kubernetes commands, the Unauthorized error typically indicates an authentication failure between your client and the Kubernetes API server. If the error appeared even though you haven't changed any infrastructure, credential expiration or configuration drift is the likely cause.
First, verify your current authentication context:
kubectl config view
kubectl config current-context
Inspect your kubeconfig file for potential issues:
yq eval '.users[].user' ~/.kube/config
Or, if yq isn't available:
grep -A5 "user:" ~/.kube/config
Try these diagnostic commands to identify the root cause:
# Check if the embedded client certificate is expired
# (the kubeconfig is YAML, so decode the base64 certificate before passing it to openssl)
grep 'client-certificate-data' ~/.kube/config | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate
# Verify API server connectivity
curl -k https://<your-cluster-ip>:6443
If using certificate authentication, first confirm whether the client certificate has actually expired:
# Extract and decode client cert
grep 'client-certificate-data' ~/.kube/config | awk '{print $2}' | base64 -d > client.crt
# Check cert expiration
openssl x509 -in client.crt -noout -dates
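With the certificate extracted, you can also pull out its private key and test authentication with curl alone, taking kubectl out of the equation; a sketch reusing the placeholder API server address from above:
# Extract the private key that pairs with client.crt
grep 'client-key-data' ~/.kube/config | awk '{print $2}' | base64 -d > client.key
# A JSON APIVersions response means the certificate still authenticates
curl -k --cert client.crt --key client.key https://<api-server>:6443/api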
When certificates fail, try these authentication alternatives:
# Use token authentication
kubectl --token=<your-token> get nodes
# Or basic auth, if configured (removed from Kubernetes in v1.19, so legacy clusters only)
kubectl --username=admin --password=secret get nodes
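If you don't have a token at hand, Kubernetes 1.24+ can mint a short-lived one with kubectl create token, though that itself requires some working credential, such as the admin kubeconfig on a control-plane node. A sketch under that assumption:
# Mint a short-lived token for the default service account
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf -n default create token default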
Verify direct communication with the API server:
# Get the API server endpoint from the default kubeconfig
kubectl config view -o jsonpath='{.clusters[*].cluster.server}'
# Test connectivity with verbose output
curl -v -k https://<api-server>:6443/healthz
If all else fails, regenerate your kubeconfig:
# For EKS clusters
aws eks --region <region> update-kubeconfig --name <cluster-name>
# For GKE clusters (use --region instead for regional clusters)
gcloud container clusters get-credentials <cluster-name> --zone <zone>
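After regenerating, confirm the new context is active and the credentials actually work:
kubectl config current-context
kubectl get nodes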
Different Kubernetes distributions require specific approaches:
# Minikube
minikube update-context
# kubeadm (run on a control-plane node)
mkdir -p ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
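Alternatively, run a one-off command against admin.conf without copying it (root access is needed to read the file):
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes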