When your nginx-ingress returns a 400 Bad Request error while trying to access Kubernetes Dashboard, the upstream connection reset (error 104) indicates a protocol mismatch between your ingress controller and the dashboard service. The key evidence is in the nginx logs showing HTTP protocol being used against an HTTPS backend.
Kubernetes Dashboard by default runs with HTTPS on port 8443, but your current configuration has several conflicting elements:
```
upstream: "http://10.42.0.2:8443/"   # notice the HTTP protocol here
servicePort: 443                     # while the dashboard listens on 8443
backend-protocol: "HTTPS"            # annotation says HTTPS
```
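For context, the upstream Dashboard 2.x manifests define a Service that maps port 443 to the pod's 8443, which is why the ingress backend should reference 443. A representative Service definition (verify your own, since custom installs may differ) looks like:

```yaml
# Representative kubernetes-dashboard Service from the upstream 2.x manifests;
# confirm yours with:
#   kubectl -n kubernetes-dashboard get svc kubernetes-dashboard -o yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
    - port: 443         # what the Ingress backend references
      targetPort: 8443  # what the dashboard container actually listens on
```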
Here's the corrected ingress manifest that works with Kubernetes Dashboard 2.x+:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "kubernetes-dashboard/kubernetes-dashboard-certs"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "off"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_server_name on;
      proxy_ssl_name $proxy_host;
spec:
  tls:
    - hosts:
        - kube.example.com
      secretName: dashboard-tls
  rules:
    - host: kube.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
```
1. The service port must match your actual dashboard Service port (443 in most cases, mapped to container port 8443)
2. The `proxy-ssl-*` annotations are required because the dashboard uses self-signed certs by default
3. Using `proxy_ssl_server_name on` prevents SNI-related handshake issues
Check these components if you still encounter errors:
```shell
# Verify service endpoints
kubectl -n kubernetes-dashboard get endpoints kubernetes-dashboard

# Check that the service selector matches the dashboard pods
kubectl -n kubernetes-dashboard describe svc kubernetes-dashboard

# Inspect dashboard pod logs
kubectl -n kubernetes-dashboard logs -l k8s-app=kubernetes-dashboard

# Test connectivity manually from inside the cluster
kubectl -n kubernetes-dashboard run -ti --rm test \
  --image=curlimages/curl -- /bin/sh
# ...then, inside the test pod:
curl -vk https://kubernetes-dashboard.kubernetes-dashboard.svc:443
```
Remember that exposing the dashboard externally requires proper security measures:
- Enable authentication (OIDC recommended for production)
- Implement network policies to restrict access
- Consider using a VPN instead of public exposure
- Regularly rotate certificates
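As a sketch of the network-policy point above, a minimal NetworkPolicy could allow traffic to the dashboard pods only from the ingress controller's namespace. The `ingress-nginx` namespace name and its `kubernetes.io/metadata.name` label (set automatically on Kubernetes 1.21+) are assumptions; adjust to your cluster:

```yaml
# Assumes the ingress controller runs in the "ingress-nginx" namespace
# and your cluster's CNI enforces NetworkPolicy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dashboard-allow-ingress-only
  namespace: kubernetes-dashboard
spec:
  podSelector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8443   # the dashboard container port
```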
When trying to access Kubernetes Dashboard through an nginx-ingress controller, you might encounter a 400 Bad Request error with connection reset messages in the logs. Here's what the error typically looks like:
```
2020/08/28 01:25:58 [error] 2609#2609: *795 readv() failed (104: Connection reset by peer) while reading upstream,
client: 10.0.0.25, server: kube.example.com, request: "GET / HTTP/1.1",
upstream: "http://10.42.0.2:8443/", host: "kube.example.com"
```
The main issue stems from a protocol mismatch between the ingress and the Kubernetes Dashboard service: the dashboard expects HTTPS connections, but the ingress is proxying to the upstream over plain HTTP.
Here's the corrected ingress configuration that solves this problem:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_verify off;
      proxy_ssl_server_name on;
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "kubernetes-dashboard/kubernetes-dashboard-certs"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "off"
spec:
  tls:
    - hosts:
        - kube.example.com
      secretName: dashboard-tls
  rules:
    - host: kube.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
```
Several critical annotations make this work:
```yaml
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/proxy-ssl-secret: "kubernetes-dashboard/kubernetes-dashboard-certs"
nginx.ingress.kubernetes.io/proxy-ssl-verify: "off"
```
If you're still facing issues, consider these checks:
```shell
# Verify the dashboard pods are running
kubectl -n kubernetes-dashboard get pods

# Check service endpoints
kubectl -n kubernetes-dashboard get endpoints kubernetes-dashboard

# Examine the nginx ingress controller logs
kubectl logs -n ingress-nginx deploy/nginx-ingress-controller
```
While we're disabling SSL verification for simplicity in this example, for production environments you should:
- Properly configure SSL certificates
- Enable client certificate verification if needed
- Consider adding authentication at the ingress level
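For the last point, nginx-ingress supports HTTP basic auth in front of the backend via annotations. A minimal sketch, assuming a secret named `dashboard-basic-auth` created beforehand with `htpasswd` (the secret name is an example, not from your setup):

```yaml
# Create the secret first (htpasswd comes from apache2-utils / httpd-tools):
#   htpasswd -c auth admin
#   kubectl -n kubernetes-dashboard create secret generic dashboard-basic-auth --from-file=auth
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: dashboard-basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
```

This only gates access at the edge; you still authenticate to the dashboard itself (token or kubeconfig) once past the prompt.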