In Kubernetes, networking involves two primary IP address ranges:
- Pod IPs: Assigned from the `--cluster-cidr` range (e.g., 10.20.0.0/14)
- Service IPs: Assigned from the `--service-cluster-ip-range` (e.g., 10.23.240.0/20)
From the GCP cluster example:

```bash
gcloud container clusters describe cluster0 | grep -i cidr
clusterIpv4Cidr: 10.20.0.0/14      # Pod network range
servicesIpv4Cidr: 10.23.240.0/20   # Service cluster IP range
```
We can observe the usable host ranges (derived programmatically below):
- Pod IP range: 10.20.0.1 - 10.23.255.254
- Service IP range: 10.23.240.1 - 10.23.255.254
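These boundaries follow directly from the CIDR masks. A minimal sketch with Python's standard `ipaddress` module, using the CIDRs from the gcloud output above:

```python
import ipaddress

pod_net = ipaddress.ip_network("10.20.0.0/14")
svc_net = ipaddress.ip_network("10.23.240.0/20")

for name, net in [("Pod", pod_net), ("Service", svc_net)]:
    # Usable host range excludes the network and broadcast addresses
    first = net.network_address + 1
    last = net.broadcast_address - 1
    print(f"{name} IP range: {first} - {last} ({net.num_addresses} addresses)")
```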
While the example shows the service range nested inside the pod range, this isn't a Kubernetes requirement. The two ranges (checked programmatically below):
- Can be nested, with the service range carved out of and reserved within the pod CIDR, as in this GCP example
- Can be completely separate
- Must not conflict with node IPs
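The same `ipaddress` module can verify these relationships. In this sketch the node subnet (10.0.0.0/24) is a hypothetical value chosen for illustration:

```python
import ipaddress

pod_net = ipaddress.ip_network("10.20.0.0/14")
svc_net = ipaddress.ip_network("10.23.240.0/20")
node_net = ipaddress.ip_network("10.0.0.0/24")  # hypothetical node subnet

print(svc_net.subnet_of(pod_net))   # True: service range is nested in the pod range
print(node_net.overlaps(pod_net))   # False: node subnet stays clear of pod IPs
print(node_net.overlaps(svc_net))   # False: and clear of service IPs
```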
When debugging networking issues, consider:

```bash
# Check service IP assignments
kubectl get svc --all-namespaces -o wide

# Check pod IP assignments
kubectl get pods --all-namespaces -o wide
```
Kubernetes uses different network abstractions:
- Pod-to-Pod communication uses the cluster CIDR
- Services use virtual IPs (VIPs) managed by kube-proxy
- The VIP translation can be implemented via iptables, IPVS, or other kube-proxy modes
When creating a cluster, you can specify separate ranges:
kubeadm init \
--pod-network-cidr=10.20.0.0/14 \
--service-cidr=10.96.0.0/12
This shows completely non-overlapping ranges, which is equally valid.
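Note that a /12 service range is far roomier than the /20 in the GKE example. A quick capacity comparison, as a sketch using the CIDRs above:

```python
import ipaddress

for cidr in ("10.23.240.0/20", "10.96.0.0/12"):
    net = ipaddress.ip_network(cidr)
    # Subtract the network and broadcast addresses
    print(f"{cidr}: ~{net.num_addresses - 2} assignable service IPs")
```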
If you encounter IP conflicts:

```bash
# Check for IP conflicts
kubectl describe nodes | grep -i address
ip route show

# Verify kube-proxy configuration
kubectl -n kube-system get cm kube-proxy -o yaml
```
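That check can be automated. Here is a sketch that pulls node addresses via kubectl and flags any that fall inside either cluster range (it assumes kubectl is configured for this cluster and reuses the example CIDRs):

```python
import ipaddress
import json
import subprocess

pod_net = ipaddress.ip_network("10.20.0.0/14")
svc_net = ipaddress.ip_network("10.23.240.0/20")

# Fetch node objects as JSON via kubectl
nodes = json.loads(subprocess.check_output(
    ["kubectl", "get", "nodes", "-o", "json"]))

for node in nodes["items"]:
    for addr in node["status"]["addresses"]:
        if addr["type"] != "InternalIP":
            continue
        ip = ipaddress.ip_address(addr["address"])
        if ip in pod_net or ip in svc_net:
            print(f"CONFLICT: node {node['metadata']['name']} has IP {ip} "
                  f"inside a cluster CIDR")
```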
As the gcloud output above shows, the pod network range (10.20.0.0/14) contains the service IP range (10.23.240.0/20) in this configuration. This isn't mandatory, but it is frequently observed in cloud provider implementations because it offers:
- Simplified routing table management
- Efficient utilization of private IP space
- Consistent network policy application
When a pod communicates with a Service:

```bash
# Inside a pod
curl http://my-service.namespace.svc.cluster.local

# Network path:
# Pod (10.20.1.5) → iptables NAT rules → Service IP (10.23.240.10) → Endpoint (10.20.2.3)
```
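Conceptually, the NAT step is a lookup from a service VIP to one of its backing pod endpoints. A toy model in Python (not actual kube-proxy code; the second endpoint 10.20.5.7 is a hypothetical addition):

```python
import random

# Toy model of the DNAT step: each service VIP maps to its pod endpoints,
# and each new connection is steered to one of them.
service_endpoints = {
    "10.23.240.10": ["10.20.2.3", "10.20.5.7"],
}

def dnat(service_vip: str) -> str:
    """Pick a backend endpoint for a new connection to a service VIP."""
    return random.choice(service_endpoints[service_vip])

print(f"10.20.1.5 -> 10.23.240.10 -> {dnat('10.23.240.10')}")
```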
You can explicitly separate the ranges during cluster creation:

```bash
gcloud container clusters create cluster1 \
  --cluster-ipv4-cidr=10.10.0.0/16 \
  --services-ipv4-cidr=192.168.0.0/24
```
Diagnostic commands when facing Service-Pod communication issues:

```bash
# Check kube-proxy rules (run on a node)
iptables -t nat -L KUBE-SERVICES

# Verify endpoints
kubectl get endpoints my-service

# Network trace from pod to service
kubectl exec -it test-pod -- traceroute 10.23.240.10
```
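Finally, you can cross-check actual assignments against the configured CIDRs. A sketch assuming kubectl access and the example ranges (host-network pods legitimately carry node IPs, so treat hits as a starting point, not proof of misconfiguration):

```python
import ipaddress
import json
import subprocess

pod_net = ipaddress.ip_network("10.20.0.0/14")
svc_net = ipaddress.ip_network("10.23.240.0/20")

def kubectl_json(*args):
    """Run kubectl and parse its JSON output."""
    return json.loads(subprocess.check_output(["kubectl", *args, "-o", "json"]))

# Every pod IP should fall inside the pod CIDR
for pod in kubectl_json("get", "pods", "--all-namespaces")["items"]:
    ip = pod["status"].get("podIP")
    if ip and ipaddress.ip_address(ip) not in pod_net:
        print(f"Pod {pod['metadata']['name']} has out-of-range IP {ip}")

# Every ClusterIP should fall inside the service CIDR
for svc in kubectl_json("get", "svc", "--all-namespaces")["items"]:
    ip = svc["spec"].get("clusterIP")
    if ip and ip != "None" and ipaddress.ip_address(ip) not in svc_net:
        print(f"Service {svc['metadata']['name']} has out-of-range IP {ip}")
```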