How to Verify Max Pod Capacity Configuration on Kubernetes Nodes: A Technical Guide for Cluster Operators



When configuring Kubernetes clusters with RKE, validating the max-pods setting is crucial for proper node capacity planning. Here are several reliable methods to verify the configuration:

kubectl describe node <node-name> | grep -i "pods"
# Example output (Capacity, Allocatable, then currently scheduled pods):
#   pods:                         200
#   pods:                         200
# Non-terminated Pods:            (13 in total)

SSH into the node and check the kubelet configuration:

ps aux | grep kubelet | grep max-pods
# Should show: --max-pods=200 in the running process

Query the node's allocatable resources through the Kubernetes API:

kubectl get node <node-name> -o jsonpath='{.status.allocatable.pods}'

If the value doesn't match your RKE configuration (200 in this case), check:

  • Ensure the node was drained and rke up re-run after the configuration change, so the kubelet container was recreated with the new arguments
  • Verify there are no conflicting settings in /var/lib/kubelet/config.yaml
  • Check for namespace ResourceQuotas or admission controllers that enforce their own pod limits (a quick check follows this list)
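
Node-level max-pods is separate from namespace-level quotas: a ResourceQuota with a hard "pods" count can block scheduling well before the node limit is reached. A quick way to check for that (the quota name and namespace are placeholders):

# List any namespace quotas, then look for a hard "pods" entry
kubectl get resourcequota --all-namespaces
kubectl describe resourcequota <quota-name> -n <namespace> | grep -i pods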

Here's a complete verification script for cluster operators:

#!/bin/bash
# Usage: ./verify-max-pods.sh <node-name>
NODE="$1"
EXPECTED=200

if [ -z "$NODE" ]; then
    echo "Usage: $0 <node-name>" >&2
    exit 1
fi

MAX_PODS=$(kubectl get node "$NODE" -o jsonpath='{.status.allocatable.pods}')

if [ "$MAX_PODS" = "$EXPECTED" ]; then
    echo "✅ Configuration correct: Max pods set to $EXPECTED"
else
    echo "❌ Configuration mismatch: Found ${MAX_PODS:-unknown} pods (expected $EXPECTED)"
    echo "Debug steps:"
    echo "1. Check RKE cluster.yml for the max-pods setting"
    echo "2. Verify the kubelet container arguments"
    echo "3. Ensure the node was recycled after the config change"
fi
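
Assuming the script is saved as verify-max-pods.sh (the filename is illustrative), run it against a single node or loop over every node in the cluster:

# Single node
./verify-max-pods.sh worker-1

# All nodes
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
    ./verify-max-pods.sh "$node"
done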

When deploying a Kubernetes cluster with Rancher Kubernetes Engine (RKE), the max-pods setting in the kubelet configuration determines the maximum number of pods that can run on a single node. This is particularly important for:

  • Resource allocation planning (a quick sizing check follows this list)
  • Cluster capacity management
  • Preventing node over-subscription
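
One sizing detail behind the first point: many CNI setups assign pod IPs from a per-node CIDR, so that block needs at least as many addresses as the pod limit. Kubernetes defaults max-pods to 110; a /24 per-node block yields 254 usable addresses, which still comfortably covers 200. A quick way to see what a node was assigned, assuming your CNI allocates from the node's spec.podCIDR:

# Per-node pod CIDR; a /24 provides 254 usable pod IPs
kubectl get node <node-name> -o jsonpath='{.spec.podCIDR}{"\n"}'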

To recap, here are three reliable ways to check the active max-pods setting on your nodes:

# Method 1: Check the kubelet configuration file (run on the node, if a config file is in use)
grep maxPods /var/lib/kubelet/config.yaml

# Method 2: Query kubelet directly (requires SSH access)
ps aux | grep kubelet | grep max-pods

# Method 3: Check node allocatable resources (Kubernetes API)
kubectl get node <node-name> -o jsonpath='{.status.allocatable.pods}'
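
To compare the active value across every node at once (handy on larger clusters), the same API data can be pulled into a single table:

# Allocatable pod capacity for all nodes in one view
kubectl get nodes -o custom-columns=NAME:.metadata.name,MAXPODS:.status.allocatable.pods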

For RKE clusters where you've configured max-pods through the cluster.yml:

services:
  kubelet:
    extra_args:
      max-pods: "200"
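
If this value is changed on an existing cluster, re-run rke up from the workstation that holds the cluster state so RKE recreates the kubelet containers with the new argument (assuming the default cluster.yml filename):

rke up --config cluster.yml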

After cluster deployment, verify the configuration was applied:

# Check the actual kubelet arguments (RKE runs kubelet as a Docker container, not a systemd unit)
ssh <node> "docker inspect kubelet | grep max-pods"

# Expected output should include:
"--max-pods=200",

If your max-pods setting isn't being applied:

# 1. Check for conflicting settings in the kubelet config file (if present)
ssh <node> "sudo grep -i maxPods /var/lib/kubelet/config.yaml"

# 2. Re-apply the cluster configuration so RKE recreates the kubelet container
rke up --config cluster.yml

# 3. Check the kubelet container logs for configuration errors
ssh <node> "docker logs kubelet 2>&1 | grep -i max-pods"

To see how the configured capacity translates into what the scheduler can actually use, compare the node's capacity and allocatable pod counts:

# Capacity vs. allocatable pod count for the node
kubectl get node <node-name> \
  -o jsonpath='{.status.capacity.pods}{"\n"}{.status.allocatable.pods}{"\n"}'

Remember that actual usable pods will be slightly less than max-pods due to system daemons and overhead.
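
To see how much of that budget is already consumed, count the pods currently scheduled on a node and compare the result with the allocatable figure:

# Pods currently assigned to the node, filtered by the scheduler's node binding
kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name> --no-headers | wc -l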