When implementing multiple TLS-enabled hosts in Kubernetes Ingress, developers often encounter persistent 503 "Service Temporarily Unavailable" errors. The configuration appears correct at first glance, but the backend services become unreachable through HTTPS while HTTP continues to work.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: multi-tls-ingress
spec:
  tls:
  - hosts:
    - secure.example.com
    secretName: tls-secret
  rules:
  - host: secure.example.com
    http:
      paths:
      - backend:
          serviceName: webapp-service
          servicePort: 80
The root cause often lies in the interaction between AWS ELB proxy protocol and the nginx ingress controller: if nginx is configured to expect the proxy protocol header but the ELB never sends it (or vice versa), connections break before they ever reach the backend. The generated nginx.conf shows proxy protocol in use:
set_real_ip_from 0.0.0.0/0;
real_ip_header proxy_protocol;
real_ip_recursive on;
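To confirm what the controller actually rendered, read the generated config from inside the pod (namespace and pod name are placeholders, matching the log commands later in this article):
# Look for proxy protocol directives in the live nginx.conf
kubectl exec -n ingress-nginx [pod-name] -- \
  grep -n proxy_protocol /etc/nginx/nginx.conf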
Three configuration changes are essential:
- Service annotations so the ELB forwards raw TCP with a proxy protocol header
- A correctly created TLS secret in the Ingress namespace
- The use-proxy-protocol setting in the ingress controller's ConfigMap
First, annotate the ingress controller's LoadBalancer Service so the ELB forwards TCP with a proxy protocol header:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:ACCOUNT:certificate/ID
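The TLS secret from the checklist must also exist as a standard kubernetes.io/tls secret in the same namespace as the Ingress. A typical creation command (file names and namespace are placeholders):
kubectl create secret tls tls-secret \
  --cert=server.crt --key=server.key \
  --namespace=[ingress-namespace]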
Next, make sure the ingress controller's arguments enable SSL passthrough and the expected listening ports:
spec:
  containers:
  - name: nginx-ingress-controller
    args:
    - /nginx-ingress-controller
    - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
    - --configmap=$(POD_NAMESPACE)/nginx-configuration
    - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
    - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
    - --annotations-prefix=nginx.ingress.kubernetes.io
    - --enable-ssl-passthrough
    - --http-port=80
    - --https-port=443
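Proxy protocol itself is enabled in the controller's ConfigMap rather than as a command-line flag; use-proxy-protocol is the documented ingress-nginx ConfigMap key. The ConfigMap name below matches the --configmap argument above, and the ingress-nginx namespace is an assumption:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # Tells nginx to parse the proxy protocol header the ELB now sends
  use-proxy-protocol: "true"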
Verify the configuration with these commands:
# Confirm hosts, addresses, and backends were picked up
kubectl get ing -o wide
kubectl describe ing multi-tls-ingress
# Compare HTTPS and plain-HTTP behaviour directly
curl -v https://secure.example.com
curl -v http://secure.example.com
If issues persist, examine:
- nginx controller logs:
kubectl logs -n ingress-nginx [pod-name]
- Backend pod logs
- AWS ELB health checks
- DNS propagation status
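A classic symptom of a proxy protocol mismatch is "broken header" errors in the controller logs. You can also probe the mismatch directly by bypassing the ELB and sending (or not sending) the proxy protocol header yourself; curl 7.60+ supports this, and the node address and port are placeholders:
# Without a proxy protocol header (fails if nginx expects one)
curl -vk https://[node-ip]:[https-nodeport]
# With a proxy protocol header (curl 7.60+)
curl -vk --haproxy-protocol https://[node-ip]:[https-nodeport]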
When working with Kubernetes clusters (particularly those provisioned via kops on AWS) that host multiple websites with mixed HTTP/HTTPS configurations, developers often encounter persistent 503 errors. The issue typically manifests when:
- Multiple TLS hosts are defined in a single Ingress resource
- The cluster fronts the ingress controller with an AWS Classic ELB using proxy protocol (ALBs do not support proxy protocol)
- WordPress or similar CMS applications are deployed
Looking at the provided setup, several potential trouble spots emerge:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-rules
spec:
  tls:
  - hosts:
    - site1.com
    secretName: site1-tls-secret
  - hosts:
    - www.site1.com
    secretName: site1-tls-secret
  rules:
  - host: site1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site1
          servicePort: 80
The root causes typically fall into these categories:
1. TLS Secret Issues
- Verify the secret exists in the correct namespace:
kubectl get secret site1-tls-secret --namespace=your-namespace
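Beyond existence, check that the secret holds the certificate you expect; this decodes tls.crt and prints its subject and expiry:
kubectl get secret site1-tls-secret --namespace=your-namespace \
  -o jsonpath='{.data.tls\.crt}' | base64 -d | \
  openssl x509 -noout -subject -enddate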
2. Ingress Controller Configuration
The nginx-ingress deployment itself is a likely culprit: 0.9.0-beta.11 is a long-outdated pre-release image, and upgrading it (see Solution 2 below) should be the first step:
containers:
- name: nginx-ingress
  image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
  args:
  - /nginx-ingress-controller
  - --default-backend-service=$(POD_NAMESPACE)/echoheaders-default
Solution 1: Separate Ingress Resources
Splitting TLS hosts and plain-HTTP hosts into separate resources keeps certificate handling isolated, so a misconfigured secret on one host cannot break routing for the others:
# HTTPS Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - site1.com
    - www.site1.com
    secretName: site1-tls-secret
  rules:
  - host: site1.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: site1
            port:
              number: 80
# HTTP Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: http-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: blog.site2.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: site2
            port:
              number: 80
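After applying both resources, confirm each one is admitted and receives an address (file names are placeholders):
kubectl apply -f secure-ingress.yaml -f http-ingress.yaml
kubectl get ingress secure-ingress http-ingress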
Solution 2: Update Ingress Controller
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-controller
        image: k8s.gcr.io/ingress-nginx/controller:v1.0.0
        args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
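After updating, verify the rollout finished and the new image is running (this assumes the Deployment lives in the ingress-nginx namespace):
kubectl rollout status deployment/nginx-ingress-controller -n ingress-nginx
kubectl -n ingress-nginx get pods \
  -o jsonpath='{.items[*].spec.containers[*].image}'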
For AWS environments, these annotations on the ingress controller's LoadBalancer Service are crucial (note this variant terminates TLS at the ELB and re-encrypts to the backend over HTTPS, unlike the TCP-plus-proxy-protocol setup shown earlier):
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:account:certificate/id
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
Useful commands for troubleshooting:
# Check ingress controller logs
kubectl logs -n ingress-nginx deployment/nginx-ingress-controller
# Verify endpoints; <none> means the Service selector matches no ready pods
kubectl get endpoints site1
# Describe ingress resource
kubectl describe ingress my-rules
# Check service status
kubectl get svc -n ingress-nginx
For production environments, consider:
- Using cert-manager for automatic certificate management
- Implementing proper readiness/liveness probes (see the sketch after this list)
- Setting resource requests/limits for the ingress controller
- Enabling access logs with proper verbosity
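For the probe and resource items above, a minimal sketch for the controller container; /healthz on port 10254 is the controller's built-in health endpoint, and the resource figures are illustrative starting points, not recommendations:
containers:
- name: nginx-ingress-controller
  image: k8s.gcr.io/ingress-nginx/controller:v1.0.0
  # The controller exposes /healthz on 10254 for exactly this purpose
  livenessProbe:
    httpGet:
      path: /healthz
      port: 10254
    initialDelaySeconds: 10
    periodSeconds: 10
  readinessProbe:
    httpGet:
      path: /healthz
      port: 10254
    periodSeconds: 10
  resources:
    requests:
      cpu: 100m
      memory: 180Mi
    limits:
      cpu: 500m
      memory: 512Mi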