While K3s is typically deployed in multi-node configurations for production environments, running it as a single-node cluster (combining control plane and worker roles) is fully supported. This approach is particularly useful for:
- Local development environments
- CI/CD pipeline testing
- Edge computing prototypes
- Resource-constrained learning environments
Here's the most straightforward installation method:
curl -sfL https://get.k3s.io | sh -
This automatically:
- Installs k3s as both control plane and worker
- Creates a kubeconfig file at /etc/rancher/k3s/k3s.yaml
- Sets up the containerd runtime
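Once the script finishes, you can talk to the cluster either through the bundled kubectl wrapper or by pointing a standalone kubectl at the generated kubeconfig (paths below are the K3s defaults):

```shell
# Option 1: use the kubectl that ships inside the k3s binary
sudo k3s kubectl get nodes

# Option 2: point an existing kubectl at the K3s kubeconfig
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
```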
For customized deployments, consider these flags:
curl -sfL https://get.k3s.io | sh -s - \
    --disable traefik \
    --write-kubeconfig-mode 644 \
    --node-name single-node \
    --cluster-init
Note that --cluster-init switches the datastore from the default SQLite to embedded etcd, which is what makes etcd snapshots available later.
After installation, check cluster status:
kubectl get nodes -o wide
kubectl get pods -A
For local persistent volumes, K3s bundles the local-path-provisioner and already creates a local-path StorageClass by default. If you disabled it (e.g. with --disable local-storage), you can recreate the StorageClass manually:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
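Because the binding mode is WaitForFirstConsumer, a claim against this class stays Pending until a pod actually uses it. A minimal PVC sketch (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi        # illustrative size
```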
When using MetalLB for LoadBalancer services, first disable the Klipper ServiceLB that K3s bundles (install K3s with --disable servicelb) so the two controllers don't compete, then install MetalLB:
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb \
    --namespace metallb-system \
    --create-namespace
Since MetalLB 0.13 the configInline Helm values are no longer supported; address pools are declared as custom resources instead:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default
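With an address pool in place, a Service of type LoadBalancer receives an external IP from that range. A sketch with illustrative names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-lb           # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: demo             # illustrative pod label
  ports:
    - port: 80
      targetPort: 8080
```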
Common issues and solutions:
# Reset cluster if needed
/usr/local/bin/k3s-uninstall.sh
# Check logs
journalctl -u k3s -f
# Save a snapshot of the embedded etcd datastore
# (only applies when K3s was started with --cluster-init)
k3s etcd-snapshot save
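To roll back to a saved snapshot, K3s provides a cluster-reset flow; the snapshot filename below is a placeholder you'd replace with a file from the default snapshot directory:

```shell
# Stop K3s, restore from an earlier snapshot, then start it again
sudo systemctl stop k3s
sudo k3s server \
    --cluster-reset \
    --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/<snapshot-name>
sudo systemctl start k3s
```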
Yes, you absolutely can run both K3s control plane and worker components on a single physical machine. While this architecture isn't recommended for production environments due to the single point of failure, it's perfectly valid for development, testing, or edge computing scenarios where resource constraints exist.
There are two primary approaches to achieve this:
- Default Installation: K3s automatically combines control plane and worker roles when installed on a single node.
- Explicit Configuration: Using flags to manually specify the node's dual role.
# Basic single-node installation
curl -sfL https://get.k3s.io | sh -
# Alternative: reserve the server for control-plane duties only.
# This is only useful if agent nodes will join later; on a true
# single-node cluster the NoExecute taint keeps ordinary workloads off
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-taint CriticalAddonsOnly=true:NoExecute" sh -
When running in single-node mode, be aware of these operational aspects:
- Resource contention between control-plane components and your workloads
- Automatic taint management for system-critical pods
- Datastore choice: K3s defaults to embedded SQLite; embedded etcd (and etcd snapshots) require starting the server with --cluster-init
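The first two points are easy to inspect with standard kubectl queries (nothing K3s-specific; the node name defaults to the hostname):

```shell
# Show any taints currently applied to the node
kubectl get nodes -o jsonpath='{.items[*].spec.taints}'

# Compare node capacity against current pod requests/limits
kubectl describe node "$(hostname)" | grep -A8 "Allocated resources"
```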
Here's a more advanced setup with custom configurations:
# Install with customized parameters
curl -sfL https://get.k3s.io | \
    INSTALL_K3S_EXEC="server \
    --disable traefik \
    --node-taint node-role.kubernetes.io/master=true:NoSchedule \
    --kubelet-arg 'max-pods=100'" sh -
Be careful with the NoSchedule taint shown here: on a single-node cluster it blocks regular pods from scheduling unless they tolerate it, so it only makes sense if agent nodes will join later.
If you encounter problems, check these areas:
- Port conflicts (6443 for the API server, 8472/udp for Flannel VXLAN, 10250 for the kubelet)
- SELinux/AppArmor restrictions
- Resource exhaustion (memory, CPU, or file descriptors)
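Each of these areas can be checked quickly from the host (standard Linux tooling; adjust for your distribution):

```shell
# Port conflicts: is anything already bound to the K3s ports?
sudo ss -tlnp | grep -E ':6443|:10250'
sudo ss -ulnp | grep ':8472'

# SELinux / AppArmor status
getenforce 2>/dev/null
sudo aa-status --enabled 2>/dev/null && echo "AppArmor enabled"

# Resource pressure: memory and open-file limits
free -h
ulimit -n
```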
After installation, verify your setup with:
kubectl get nodes -o wide
kubectl get pods -A -o wide
journalctl -u k3s -f
While running a single-node cluster in production is technically possible, apply these mitigations if you do:
- Implement regular etcd snapshots
- Configure proper monitoring and alerting
- Plan for quick disaster recovery procedures
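The first mitigation can be automated with K3s' built-in snapshot scheduling flags; this sketch assumes embedded etcd, and the cron expression and retention count are example values:

```shell
# Snapshot every 6 hours, keep the last 10
# (requires --cluster-init, i.e. the embedded etcd datastore)
curl -sfL https://get.k3s.io | sh -s - server \
    --cluster-init \
    --etcd-snapshot-schedule-cron "0 */6 * * *" \
    --etcd-snapshot-retention 10
```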