K3s Kubernetes Cluster on Ubuntu 22.04 - Complete Guide
Version 1.0 | October 2025
Author: Cynthia Vázquez
Status: Production Ready
- Introduction
- Prerequisites
- Master Node Installation
- Worker Nodes Installation
- Troubleshooting
- Firewall Configuration
- SSH Configuration
- File Transfer Between Nodes
- Container Management
- References
- Appendix: Quick Command Reference
This manual provides step-by-step instructions for installing and configuring a K3s Kubernetes cluster on Ubuntu 22.04 with one master node and two worker nodes.
- 3 Ubuntu 22.04 servers
- Network connectivity between all nodes
- Root or sudo access on all nodes
- Basic knowledge of Linux commands
- Master Node: k3s-master
- Worker Node 1: k3s-worker-01
- Worker Node 2: k3s-worker-02
Connect to your master server and run:
curl -sfL https://get.k3s.io | sh -

Check that K3s is running:
sudo systemctl status k3s

Check the node status:
sudo kubectl get nodes

You should see the master node in "Ready" status.
Step 1: Search for existing KUBECONFIG configurations
grep -r "KUBECONFIG" ~/

If you find a line like export KUBECONFIG=/etc/rancher/k3s/k3s.yaml in files such as ~/.bashrc, remove it or comment it out.
Step 2: Edit the configuration file
nano ~/.bashrc

Comment out or remove the line:
# export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

Step 3: Apply changes
source ~/.bashrc

Step 4: Configure kubectl properly
Create the kube directory and copy the config:
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(whoami):$(whoami) ~/.kube/config
chmod 600 ~/.kube/config

Step 5: Set KUBECONFIG permanently
echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc
source ~/.bashrc

Step 6: Test kubectl access
kubectl get nodes

You should now see your nodes without using sudo.
This method is cleaner and avoids permission issues.
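The checks above can be bundled into one helper. This is a minimal sketch: the `~/.kube/config` path and the 600 mode come from Step 4 of this guide; the function name and structure are illustrative.

```shell
# Sanity-check the kubeconfig set up in the steps above.
check_kubeconfig() {
    cfg="${1:-$HOME/.kube/config}"
    [ -f "$cfg" ] || { echo "missing: $cfg"; return 1; }
    mode=$(stat -c '%a' "$cfg")   # numeric permissions, e.g. 600
    [ "$mode" = "600" ] || { echo "bad mode: $mode (expected 600)"; return 1; }
    echo "ok"
}
```

Run it after Step 6; if it prints anything other than "ok", repeat the corresponding step.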
Step 1: Create a new admin group
# Create the k8s-admins group
sudo groupadd k8s-admins
# Add your current user to the group
sudo usermod -aG k8s-admins $(whoami)

Step 2: Log out and log back in
⚠️ IMPORTANT: You must log out and log back in for group changes to take effect.
Verify group membership:
groups

You should see k8s-admins in the list.
Step 3: Reinstall K3s with proper permissions
curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="--write-kubeconfig-mode 644 \
--write-kubeconfig-group k8s-admins" sh -

Explanation:
- --write-kubeconfig-mode 644: sets read/write for the owner and read for the group
- --write-kubeconfig-group k8s-admins: sets the group owner to k8s-admins
Step 4: Configure kubectl environment
# Create .kube directory
mkdir -p ~/.kube
# Copy configuration (no sudo needed this time!)
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
# Ensure you own the file
sudo chown $(whoami):$(whoami) ~/.kube/config

Step 5: Test kubectl access
kubectl get nodes

Save this token for joining worker nodes:
sudo cat /var/lib/rancher/k3s/server/node-token

Example output:
K10abcdef1234567890::server:abcdef1234567890abcdef12
💾 Save this token securely - you'll need it for worker nodes.
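A quick shape check can catch a truncated copy-paste before you run the join command on a worker. This sketch only checks the `<id>::server:<secret>` layout shown in the example output above; it does not verify the token against the master.

```shell
# Return 0 if the string looks like a K3s node token, 1 otherwise.
valid_k3s_token() {
    case "$1" in
        K1*::server:?*) return 0 ;;
        *)              return 1 ;;
    esac
}
```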
Connect to worker-01 and run:
curl -sfL https://get.k3s.io | \
K3S_URL=https://MASTER_IP:6443 \
K3S_TOKEN=YOUR_TOKEN_HERE sh -

Replace:
- MASTER_IP: your master node's IP address
- YOUR_TOKEN_HERE: the token from step 3.4
Connect to worker-02 and run the same command:
curl -sfL https://get.k3s.io | \
K3S_URL=https://MASTER_IP:6443 \
K3S_TOKEN=YOUR_TOKEN_HERE sh -

From the master node, check all nodes:
kubectl get nodes

Expected output:
NAME STATUS ROLES AGE VERSION
k3s-master Ready control-plane,master 10m v1.28.x+k3s1
k3s-worker-01 Ready <none> 5m v1.28.x+k3s1
k3s-worker-02 Ready <none> 3m v1.28.x+k3s1
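Newly joined workers can take a minute to reach Ready. The sketch below counts Ready nodes in `kubectl get nodes` output (read from stdin so it is easy to test); the threshold of 3 in the example loop matches this guide's cluster size.

```shell
# Count the nodes whose STATUS column reads "Ready".
count_ready_nodes() {
    awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# Example wait loop (run on the master):
# until [ "$(kubectl get nodes | count_ready_nodes)" -ge 3 ]; do sleep 5; done
```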
If kubectl get nodes only shows the master node, the worker might be pointing to localhost instead of the master IP.
sudo journalctl -u k3s-agent.service --no-pager

Common error message:
Error starting load balancer: listen tcp 127.0.0.1:6444:
address already in use
Step 1: Find the process using port 6444
sudo lsof -i :6444

Or, if lsof is not installed:
sudo netstat -tulpn | grep :6444

Step 2: Kill the process
sudo kill -9 <PID>

If the problem persists, perform a clean reinstall:
Step 1: Stop all K3s services
sudo systemctl stop k3s k3s-agent

Step 2: Uninstall existing K3s
sudo /usr/local/bin/k3s-uninstall.sh
sudo /usr/local/bin/k3s-agent-uninstall.sh

Step 3: Remove all residual files
sudo rm -rf /var/lib/rancher/k3s/ \
/etc/rancher/k3s/ \
/var/lib/rancher/ \
/etc/systemd/system/k3s*.service

Step 4: Reload systemd
sudo systemctl daemon-reload

Step 5: Verify port 6444 is free
sudo ss -tulpn | grep :6444

The command should return nothing.
Step 6: Reinstall with correct master IP
curl -sfL https://get.k3s.io | \
K3S_URL=https://MASTER_IP:6443 \
K3S_TOKEN=TOKEN_FROM_MASTER sh -

Step 7: Verify service status
sudo systemctl status k3s-agent.service

Step 8: Check from master node
kubectl get nodes

If the cluster is not responding and continuously restarting, check the logs to identify the problem.
View the K3s service logs:
sudo journalctl -u k3s -f

If you see an error like:
level=error msg="controller-manager exited: unknown flag:
--pod-eviction-timeout"
This means the kube-controller-manager, an essential Kubernetes component integrated by K3s, is receiving an argument (--pod-eviction-timeout) that it doesn't recognize.
This error typically occurs due to:
- Version mismatch: in recent versions of Kubernetes (and K3s), the --pod-eviction-timeout flag was removed or replaced
- Legacy configuration: a previous installation or manual configuration file is passing this deprecated flag
Step 1: Check if the K3s config file exists
ls /etc/rancher/k3s/config.yaml

Step 2: If the file exists, edit it
sudo nano /etc/rancher/k3s/config.yaml

Step 3: Look for and remove any line containing --pod-eviction-timeout
The flag might be located under:
- A controller-manager-args: section
- Directly as an argument
Step 4: Delete the problematic line and save the file
Step 5: Restart K3s service
sudo systemctl restart k3s

Step 6: Verify the service is running properly
sudo systemctl status k3s
kubectl get nodes

If the config.yaml file doesn't exist or doesn't contain the flag:
- The flag might be embedded in the systemd service (uncommon with clean K3s installation)
- Consider a complete reinstallation following the steps in Section 5.1.3
📝 Note: If you find other unfamiliar configurations in config.yaml, they might be remnants from a previous failed installation. Review them carefully or remove the entire file and reinstall K3s.
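Before editing by hand, it can help to search the whole config tree for the offending flag. This is a sketch: on a real node you would point it at /etc/rancher/k3s; the directory argument just makes it easy to try elsewhere first.

```shell
# List every file/line mentioning the removed pod-eviction-timeout flag.
find_deprecated_flag() {
    grep -rn 'pod-eviction-timeout' "$1" 2>/dev/null
}

# e.g. find_deprecated_flag /etc/rancher/k3s
```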
Configure UFW firewall on all nodes with the necessary ports.
Reference Guide:
Master node:
sudo ufw allow 6443/tcp       # Kubernetes API
sudo ufw allow 10250/tcp      # Kubelet metrics
sudo ufw allow 8472/udp       # Flannel VXLAN
sudo ufw allow 2379:2380/tcp  # etcd
sudo ufw allow 22/tcp         # SSH
sudo ufw enable

Worker nodes:
sudo ufw allow 10250/tcp      # Kubelet metrics
sudo ufw allow 8472/udp       # Flannel VXLAN
sudo ufw allow 22/tcp         # SSH
sudo ufw enable

Verify the rules on each node:
sudo ufw status verbose
⚠️ Important: If you accidentally disable port 22 (SSH), you'll need physical access to the server or web console (iDRAC, iLO, etc.) to re-enable it.
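To reduce the risk of the lockout described above, the rules can be printed for review before anything is applied. This sketch just echoes the commands from the reference list (SSH first, so it cannot be skipped mid-run); piping the output to `sudo sh` is one possible way to apply them.

```shell
# Print the ufw commands for review (or `print_ufw_rules | sudo sh`).
print_ufw_rules() {
    cat <<'EOF'
ufw allow 22/tcp
ufw allow 6443/tcp
ufw allow 10250/tcp
ufw allow 8472/udp
ufw allow 2379:2380/tcp
ufw --force enable
EOF
}
```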
Test connectivity between nodes using ping:
# From master to worker-01
ping WORKER_01_IP
# From master to worker-02
ping WORKER_02_IP
# From worker-01 to master
ping MASTER_IP

All nodes should respond successfully.
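The pings above can be wrapped in one loop; this sketch reports each node and returns non-zero if any is unreachable. The IP arguments are placeholders for your own addresses.

```shell
# Ping each argument once; report reachability and fail if any is down.
check_nodes() {
    failed=0
    for ip in "$@"; do
        if ping -c 1 -W 2 "$ip" >/dev/null 2>&1; then
            echo "$ip reachable"
        else
            echo "$ip UNREACHABLE"
            failed=1
        fi
    done
    return "$failed"
}

# e.g. check_nodes MASTER_IP WORKER_01_IP WORKER_02_IP
```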
Set up SSH key authentication between all three nodes for secure, password-less access.
Steps:
- Generate SSH key on the master node (if not already created):
ssh-keygen -t rsa -b 4096
- Copy the SSH key to the worker nodes:
ssh-copy-id user@WORKER_01_IP
ssh-copy-id user@WORKER_02_IP
- Test the SSH connection:
ssh user@WORKER_01_IP
ssh user@WORKER_02_IP

You should now be able to connect without entering a password.
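With key-based access in place, one command can be run on every worker in a single pass. This is a sketch; the user@host arguments are the placeholders used throughout this guide.

```shell
# Run one command on each host given as an argument, over SSH.
run_on_workers() {
    cmd="$1"; shift
    for host in "$@"; do
        echo "== $host =="
        ssh "$host" "$cmd"
    done
}

# Example:
# run_on_workers 'sudo systemctl status k3s-agent --no-pager' \
#     user@WORKER_01_IP user@WORKER_02_IP
```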
Confirm that all nodes can communicate securely via SSH.
Transfer files from your local machine to servers or between servers using scp.
Syntax:
scp -r /local/path/folder user@DESTINATION_IP:/remote/path/

Example: transfer test scripts from your local machine to the master node:
scp -r /local/path/test-scripts \
admkube01@192.168.1.10:/home/admkube01/If you're developing scripts locally in VSCode:
- Create your test scripts locally
- Use scp command to transfer to the server
- Execute on the server
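When the same scripts need to reach more than one node, the scp step above can be repeated in a loop. A minimal sketch; the source path and user@host targets are placeholders.

```shell
# Copy one local directory to each destination given as an argument.
deploy_dir() {
    src="$1"; shift
    for dest in "$@"; do
        scp -r "$src" "$dest"
    done
}

# Example:
# deploy_dir ./test-scripts \
#     admkube01@192.168.1.10:/home/admkube01/
```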
Reference Guide:
Use the appropriate tool depending on your needs:
- For cluster operations:
kubectl [command]          # Run from the master node
- For container-level debugging:
sudo k3s ctr [command]     # K3s container runtime
sudo k3s crictl [command]  # CRI debugging tool

Run these commands on each node as needed.
# Get all nodes
kubectl get nodes
# Get all pods
kubectl get pods -A
# Get deployments
kubectl get deployments
# Describe a pod
kubectl describe pod <pod-name>
# View logs
kubectl logs <pod-name>

# List all containers
sudo k3s crictl ps
# List all images
sudo k3s crictl images
# Inspect a container
sudo k3s crictl inspect <container-id>

# Install K3s
curl -sfL https://get.k3s.io | sh -
# Get node token
sudo cat /var/lib/rancher/k3s/server/node-token
# Configure kubectl
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(whoami):$(whoami) ~/.kube/config

# Install K3s agent
curl -sfL https://get.k3s.io | \
K3S_URL=https://MASTER_IP:6443 \
K3S_TOKEN=YOUR_TOKEN sh -
# Check agent status
sudo systemctl status k3s-agent

# Check nodes (from master)
kubectl get nodes
# Check pods
kubectl get pods -A
# Check services
kubectl get services -A

# View agent logs
sudo journalctl -u k3s-agent.service --no-pager
# Check port usage
sudo ss -tulpn | grep :6444
# Restart services
sudo systemctl restart k3s-agent

Document Version: 1.0
Last Updated: October 2025
Status: Production Ready