
How to Install Kubernetes Cluster on Ubuntu 22.04 | Self Hosted | K8S

kubernetes Dec 5, 2022

Minimum requirements:

  • Master node(s): Ubuntu 22.04, 2 CPUs, 2 GB RAM
  • Worker node(s): Ubuntu 22.04, 2 CPUs, 2 GB RAM

Kubernetes (k8s) will "eat" up around 1 GB of RAM on each machine once installed, so keep that in mind.
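To check that a node actually meets these minimums, you can look at the CPU count and memory before installing:

```shell
# Number of available CPU cores (should be >= 2)
nproc

# Memory and swap usage in human-readable units
free -h
```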

  • Prep Stage:

Setup hostname on master node

sudo hostnamectl set-hostname k8smaster.example.dev

Setup hostname on worker nodes

sudo hostnamectl set-hostname k8sworker1.example.dev
sudo hostnamectl set-hostname k8sworker2.example.dev

Instead of k8sworker1.example.dev, use your own domain (it can be local or a real FQDN).

Edit the /etc/hosts file on each node (workers and master), e.g. with nano /etc/hosts:

ip_addr_here   k8smaster.example.dev k8smaster
ip_addr_here   k8sworker1.example.dev k8sworker1
ip_addr_here   k8sworker2.example.dev k8sworker2
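After editing /etc/hosts, it is worth confirming from each node that the names resolve (using the example hostnames from above):

```shell
# Each name should print the IP you placed in /etc/hosts
for h in k8smaster.example.dev k8sworker1.example.dev k8sworker2.example.dev; do
  getent hosts "$h" || echo "WARNING: $h does not resolve - check /etc/hosts"
done
```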

Also, if your nodes (servers) are behind a firewall, check which ports need to be open in the Kubernetes "Ports and Protocols" documentation.
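If ufw is your firewall, opening the required ports can be sketched like this (port numbers are from the upstream Kubernetes documentation; adjust to your environment):

```shell
# On the master (control plane) node
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd server client API
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10257/tcp       # kube-controller-manager
sudo ufw allow 10259/tcp       # kube-scheduler

# On the worker nodes
sudo ufw allow 10250/tcp         # kubelet API
sudo ufw allow 30000:32767/tcp   # NodePort Services
```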


You can go step by step, or simply do everything with the bash script below.

Install Script (Just Copy/Paste on each server)

#!/bin/bash

# Disable swap
echo "disabling swap"
echo "remove any swap file manually"
sleep 1

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Load kernel modules on all nodes


sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

sudo modprobe overlay

sudo modprobe br_netfilter

# Kubernetes kernel parameters

sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

# Install containerd run time

sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates

# Docker Repo

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

sudo apt update
sudo apt install -y containerd.io

containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

sudo systemctl restart containerd
sudo systemctl enable containerd

# Kubernetes Repo

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

# Install Kubernetes components Kubectl, kubeadm & kubelet

sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sleep 1
# Hold packages (Kubernetes upgrades should be controlled)
sudo apt-mark hold kubelet kubeadm kubectl

echo "done"
echo ""
echo "run to enable control plane"
echo ""
echo "sudo kubeadm init --control-plane-endpoint=k8smaster.example.dev"
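After the script finishes on a node, a quick sanity check that the prep steps took effect might look like this:

```shell
# Swap should be off: this prints nothing when swap is disabled
swapon --show

# Both kernel modules should be listed
lsmod | grep -E 'overlay|br_netfilter' || echo "modules not loaded"

# Should print 1 (IP forwarding enabled)
cat /proc/sys/net/ipv4/ip_forward

# containerd should report "active"
systemctl is-active containerd || echo "containerd not running"
```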

The common setup is now done on all nodes - almost.

On the master server (of your choice), run the following to initialize the master node - aka the control plane:

sudo kubeadm init --control-plane-endpoint=k8smaster.example.dev

Wait for it to finish.

It will print out a join command for the worker nodes to join your cluster.

Copy that command and run it with sudo on all worker nodes (2 in this example).
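If you lose the printed join command, it can be regenerated on the master at any time:

```shell
# Prints a fresh "kubeadm join ..." line with a new token
sudo kubeadm token create --print-join-command
```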

Run the next commands so you can run kubectl commands as a non-root user.
Only on the master node.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check with commands:

kubectl cluster-info
kubectl get nodes

  • Place all manifests in one place (optional)

Create a workspace / folder for your Kubernetes manifests (a good idea is to put it in a git repo)

sudo mkdir /kubernetes && sudo chown $USER:$USER /kubernetes && cd /kubernetes

  • Install the Calico Pod Network Add-on (needed for internal Kubernetes networking and for the nodes to become "ready")

curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O
kubectl apply -f calico.yaml

Check with command:

kubectl get pods -n kube-system
kubectl -n kube-system describe pod calico-kube-controllers-<randomstrings>-<randomnum>

Check to see if the nodes are ready.

kubectl get nodes

Sometimes there is an issue with containerd socket permissions:

stat /run/containerd/containerd.sock

A simple fix (if it happens) is below; afterwards reinstall Calico (kubectl delete -f calico.yaml && kubectl apply -f calico.yaml):

sudo chmod 666 /run/containerd/containerd.sock

  • Install MetalLB - Load Balancer for a self-hosted Kubernetes cluster

curl https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml -O
kubectl apply -f metallb-native.yaml

  • MetalLB config

Create metallb-config.yaml

nano metallb-config.yaml

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  # Your VM IP range, or whatever range you like if you are willing to use
  # SSH tunneling from your proxy server (sshuttle does the trick)
  addresses:
  - 192.168.1.100-192.168.1.250
---
# MetalLB v0.13+ also needs an L2Advertisement so the pool is announced
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool

kubectl apply -f metallb-config.yaml
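To confirm MetalLB is running and picked up the configuration, you can list its pods and address pools:

```shell
kubectl -n metallb-system get pods
kubectl -n metallb-system get ipaddresspools.metallb.io
```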
  • MetalLB (Load Balancer for self-hosted k8s)

Read more about MetalLB in the official documentation at metallb.universe.tf


  • You can test your kubernetes cluster with this simple nginx manifest

Create nginx.yaml

nano nginx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
    svc: nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.150
#  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80

kubectl apply -f nginx.yaml
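Once applied, the service should receive an external IP from the MetalLB pool; assuming the 192.168.1.150 address from the manifest above, you can verify with:

```shell
# EXTERNAL-IP should show the MetalLB-assigned address
kubectl get svc nginx-service

# The default nginx welcome page should come back
curl http://192.168.1.150
```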

Enjoy!
