Running Kubernetes on Bluvalt

The purpose of this tutorial is to explain how to run a production-grade Kubernetes cluster on top of a Bluvalt Virtual Data Center. We will create three VMs: one master and two workers. To find out more about Virtual Servers click here.

What is Kubernetes

Kubernetes is an open-source orchestration system for automating deployment, scaling, and management of containerized applications. Read More

Prerequisites

This tutorial assumes the following:

  • You are subscribed to “Virtual Data Center” through Bluvalt cloud
  • You have created a network inside your VDC that has a subnet and a router to Public_Network
  • Open the required ports as follows (a CLI sketch for creating these rules appears after this list):

For master’s security group:

Port Range      Remote IP Prefix
6443            0.0.0.0/0
2379            Open to your network CIDR
2380            Open to your network CIDR
10250           Open to your network CIDR
10257           Open to your network CIDR
10259           Open to your network CIDR

For workers’ security group:

Port Range      Remote IP Prefix
30000 - 32767   0.0.0.0/0
10250           Open to your network CIDR
  • You have created three Ubuntu virtual machines
    • master
    • worker1
    • worker2
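
Bluvalt VDC is OpenStack-based, so if you prefer the command line over the dashboard, the port rules above can also be created with the OpenStack client. A minimal sketch for two of the master's rules, assuming a security group named master-sg and a network CIDR of 10.0.0.0/24 (both hypothetical placeholders, adjust to your environment):

# Allow the Kubernetes API server port from anywhere
openstack security group rule create --protocol tcp --dst-port 6443 --remote-ip 0.0.0.0/0 master-sg

# Allow etcd traffic only from your network CIDR
openstack security group rule create --protocol tcp --dst-port 2379:2380 --remote-ip 10.0.0.0/24 master-sg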

Prepare the nodes

  1. Access your master and worker nodes and do the following:

You need to disable swap on all nodes with the following command:

sudo swapoff -a
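
Note that swapoff -a only disables swap until the next reboot. To keep swap disabled permanently, you can also comment out any swap entries in /etc/fstab; a minimal sketch:

# Comment out swap entries so swap stays off after a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab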

Now we will add our hosts' IPs to the /etc/hosts file. Note that sudo echo ... >> /etc/hosts would fail, because the redirection runs in your unprivileged shell, so we use tee instead:

echo -e "<IP_ADDRESS> master\n<IP_ADDRESS> worker1\n<IP_ADDRESS> worker2" | sudo tee -a /etc/hosts

Replace each <IP_ADDRESS> with the private IP address of the corresponding node.
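
To confirm the entries are in place, you can look one of the names up; a quick check:

# Resolve worker1 through /etc/hosts
getent hosts worker1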

Install containerd on master and workers

In order for pods in the cluster to run, we need to install a container runtime called containerd. It originated as part of Docker but is now a standalone project that Kubernetes uses through the Container Runtime Interface (CRI). For more information click here.

  1. First, enable IPv4 forwarding and let iptables see bridged traffic:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
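
Before moving on, you can verify that the kernel modules are loaded and the sysctl values took effect; a quick check:

# Confirm the kernel modules are loaded
lsmod | grep -E 'overlay|br_netfilter'

# Both values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
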
  2. Update packages:
sudo apt update
  3. Install containerd:

There are multiple options for installing containerd; we are going to install it from the Ubuntu apt repository.

sudo apt-get -y install containerd
  4. Configure containerd:

Create a directory for the containerd configuration:

sudo mkdir -p /etc/containerd

Generate the default configuration file for containerd:

containerd config default | sudo tee /etc/containerd/config.toml

Change the value of SystemdCgroup in /etc/containerd/config.toml from false to true:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
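
You can confirm the change took effect; a quick check:

# Should now print: SystemdCgroup = true
grep SystemdCgroup /etc/containerd/config.toml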

Restart containerd:

sudo systemctl restart containerd
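
Containerd should now be up; you can verify the service state with systemd:

# Check that the containerd service is active and running
sudo systemctl status containerd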

Install Kubernetes components on master and workers

Here we are going to install the Kubernetes components kubeadm, kubelet, and kubectl on all nodes. For more information click here.

  1. Update packages:
sudo apt update
  2. Install required packages:
sudo apt-get install -y apt-transport-https ca-certificates curl
  3. Download the required key:
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
  4. Add the Kubernetes repository:
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
  5. Update packages:
sudo apt update
  6. Install kubeadm, kubelet and kubectl:
sudo apt-get install -y kubelet=1.22.3-00 kubeadm=1.22.3-00 kubectl=1.22.3-00

If you want to install a specific version of Kubernetes, you can specify it like this: sudo apt-get install -y kubelet=1.22.3-00 kubeadm=1.22.3-00 kubectl=1.22.3-00. To see all available Kubernetes versions, run: apt-cache madison kubeadm

  7. Mark Kubernetes components on hold:

This will prevent apt from automatically upgrading these components.

sudo apt-mark hold kubelet kubeadm kubectl
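
You can confirm the packages are held back; a quick check:

# List packages currently marked on hold
apt-mark showhold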

Initializing the master node only

This command will initialize the control plane on the master node and print the join command, including a token, for the workers.

  1. Initialize the master node:
sudo kubeadm init
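
If your master has more than one network interface, you may want kubeadm to advertise the API server on the private IP explicitly; a hedged sketch, where <MASTER_PRIVATE_IP> is a placeholder for your master's private address:

# Optional: pin the advertised API server address to the private network
sudo kubeadm init --apiserver-advertise-address=<MASTER_PRIVATE_IP>
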
  2. Copy the kubeconfig to your home directory:

To run kubectl without sudo and without pointing every command at a config file, copy the admin config into your home directory:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
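
kubectl should now be able to reach the cluster; a quick check:

# Print the control plane endpoint to confirm API access
kubectl cluster-info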

Join the worker nodes to the cluster

From the master node, copy the join command with the token that resulted from sudo kubeadm init. If for any reason you missed it, you can generate it again with the following command on the master node:

kubeadm token create --print-join-command

Copy the join command, then run it on each worker node with sudo. The resulting join command and token output will look something like this:

ubuntu@master:~$ kubeadm token create --print-join-command
kubeadm join 192.168.4.6:6443 --token 1owh58.efuwklgo9sa77019 --discovery-token-ca-cert-hash sha256:78d2be8b11e2fda8438484848985984477b477c61c35ed813b5835
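
On each worker, run the pasted command with sudo; using the example output above, it would look like this:

# Run on worker1 and worker2 (token and hash come from your own output)
sudo kubeadm join 192.168.4.6:6443 --token 1owh58.efuwklgo9sa77019 --discovery-token-ca-cert-hash sha256:78d2be8b11e2fda8438484848985984477b477c61c35ed813b5835

Note that tokens created this way expire after 24 hours by default, so generate a fresh one if you join a worker later.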

Interact with the cluster

From the master node, check the joined worker nodes with the following command:

kubectl get nodes

The result will look something like this:

ubuntu@master:~$ kubectl get nodes
NAME      STATUS     ROLES                  AGE   VERSION
master    NotReady   control-plane,master   11h   v1.22.3
worker1   NotReady   <none>                 10h   v1.22.3
worker2   NotReady   <none>                 10h   v1.22.3

As you can see, the node status is NotReady, and that is because we haven't installed a networking add-on yet.

Install a network add-on on master

We are going to install the Cilium network add-on to provide pod networking so that all nodes can reach each other. For more information click here.

  1. Install Cilium CLI:
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
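
The CLI should now be on your PATH; a quick check:

# Print the installed cilium CLI version
cilium version
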
  2. Install Cilium:
cilium install

You can check Cilium status with the command cilium status, and the result will look something like this:

ubuntu@master:~$ cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet         cilium             Desired: 3, Ready: 3/3, Available: 3/3
Containers:       cilium             Running: 3
                  cilium-operator    Running: 1
Cluster Pods:     2/2 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.12.2@sha256:986f8b04cfdb35cf714701e58e35da0ee63da2b8a048ab596ccb49de58d5ba36: 3
                  cilium-operator    quay.io/cilium/operator-generic:v1.12.2@sha256:00508f78dae5412161fa40ee30069c2802aef20f7bdd20e91423103ba8c0df6e: 1

Now if you check the nodes again, they will be in the Ready state:

ubuntu@master:~$ kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
master    Ready    control-plane,master   11h   v1.22.3
worker1   Ready    <none>                 10h   v1.22.3
worker2   Ready    <none>                 10h   v1.22.3
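
Optionally, the Cilium CLI ships a built-in connectivity test that deploys test workloads and verifies pod-to-pod traffic across nodes (it can take a few minutes and creates a test namespace):

# Run Cilium's end-to-end connectivity check
cilium connectivity test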