Getting Started with K3s
K3s is a lightweight Kubernetes distribution optimized for small-scale clusters, edge computing, and IoT devices.
Compared with a full upstream Kubernetes install, K3s starts faster, consumes fewer resources, and is far simpler to set up.
This guide covers:
- Installing K3s on a single-node cluster
- Deploying a simple application
- Configuring K3s with multiple nodes
1. Why Use K3s Instead of Kubernetes?
K3s provides a fully compliant Kubernetes environment with a smaller footprint.
It replaces etcd with an embedded SQLite datastore by default (etcd and external databases remain options for high-availability setups), which significantly reduces resource requirements.
K3s vs. Kubernetes
| Feature | K3s | Kubernetes |
|---|---|---|
| Size | ~50 MB binary | 300 MB+ of components |
| Datastore | SQLite (default) | etcd (default) |
| Requirements | 512 MB RAM, 1 vCPU | 2 GB+ RAM, 2 vCPUs |
| Installation | Single command | Multi-step setup |
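Before installing, you can quickly confirm a host meets the modest minimums in the table above. This sketch reads /proc/meminfo and uses nproc, both standard on Linux:

```shell
# Quick sanity check against K3s' minimums
# (roughly 512 MB of RAM and 1 vCPU for a server node).
awk '/MemTotal/ {printf "Total RAM (MB): %d\n", $2/1024}' /proc/meminfo
echo "vCPUs: $(nproc)"
```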
2. Installing K3s (Single-Node Cluster)
Step 1: Install K3s
On any Linux-based machine, run:
curl -sfL https://get.k3s.io | sh -
Verify installation:
k3s --version
Step 2: Check Cluster Status
kubectl get nodes
K3s bundles kubectl (also reachable as k3s kubectl) and symlinks it into your PATH, so standard Kubernetes commands work right away.
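By default, K3s writes the admin kubeconfig to /etc/rancher/k3s/k3s.yaml, which is owned by root, so kubectl may need sudo at first. A common sketch for running kubectl as an unprivileged user is to copy that file (paths below are the K3s defaults; adjust to taste):

```shell
# Copy the root-owned kubeconfig so kubectl works without sudo.
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config
export KUBECONFIG=~/.kube/config
kubectl get nodes
```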
3. Deploying a Simple Application
K3s supports standard Kubernetes YAML manifests.
Step 1: Create a Deployment
nano app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo-container
          image: nginx
          ports:
            - containerPort: 80
Step 2: Apply the Deployment
kubectl apply -f app-deployment.yaml
Step 3: Verify the Running Pods
kubectl get pods
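To confirm the Deployment actually serves traffic, not just that Pods exist, one approach (assuming the demo-app Deployment from Step 1) is to wait for the rollout to finish and then port-forward to nginx:

```shell
# Block until both replicas are available, then probe nginx locally.
kubectl rollout status deployment/demo-app --timeout=120s
kubectl port-forward deployment/demo-app 8080:80 &
PF_PID=$!
sleep 2
curl -s -o /dev/null -w 'HTTP status: %{http_code}\n' http://localhost:8080
kill "$PF_PID"
```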
4. Configuring K3s for Multiple Nodes
K3s can run in multi-node mode, with one node as the server (control plane) and others as agents (worker nodes).
Step 1: Get Node Token
On the server node, run:
cat /var/lib/rancher/k3s/server/node-token
Step 2: Join a Worker Node
On the worker node, run:
curl -sfL https://get.k3s.io | K3S_URL=https://<MASTER_IP>:6443 K3S_TOKEN=<NODE_TOKEN> sh -
Verify:
kubectl get nodes
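As an alternative to passing K3S_URL and K3S_TOKEN inline, K3s also reads settings from /etc/rancher/k3s/config.yaml, where keys mirror the CLI flags. A minimal sketch for a worker node, using the same placeholders as above (the role=worker label is an illustrative addition, not required):

```yaml
# /etc/rancher/k3s/config.yaml on the worker node
server: https://<MASTER_IP>:6443
token: <NODE_TOKEN>
node-label:
  - "role=worker"
```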
5. Exposing Services with K3s’ Built-in Load Balancer
K3s ships with Traefik as the default Ingress controller, and with a built-in service load balancer (ServiceLB) that satisfies Services of type LoadBalancer, which is what this section uses.
Step 1: Deploy a Service
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app: demo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
Apply:
kubectl apply -f demo-service.yaml
Check:
kubectl get svc
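On a single-node or small cluster, ServiceLB advertises the node's own IP as the Service's external address. A quick smoke test, assuming the demo-service above (the jsonpath may return an empty string while the address is still pending):

```shell
# Read the external IP ServiceLB assigned, then hit nginx through it.
EXTERNAL_IP=$(kubectl get svc demo-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -I "http://${EXTERNAL_IP}"
```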
6. Conclusion
- K3s is a lightweight Kubernetes distribution, ideal for edge devices and resource-limited environments.
- Setup is quick, with a single command installation.
- Multi-node clusters can be easily configured using tokens.
- The built-in ServiceLB load balancer and Traefik Ingress controller simplify service exposure.
K3s provides the power of Kubernetes in a simplified, efficient package, making it a great choice for smaller deployments.