Create a single-master Kubernetes 1.16.2 Cluster with Kubeadm

Overview

This blog article will help you install a single-master Kubernetes 1.16.2 cluster using kubeadm on Debian 9+ or Ubuntu 18.04+. This guide deploys three (3) servers, one master node and two worker nodes; however, you can deploy as many servers as you need. This guide uses containerd 1.3.0 as the container runtime (CRI) and Flannel as the CNI plugin.

Prerequisites

I will cover the steps for the operating systems mentioned below. Minimum server requirements:

  • Debian 9+, Ubuntu 18.04+ tested
  • 2 GB or more of RAM per node (any less leaves little room for your apps)
  • 2 CPUs or more on the Master node
  • Full network connectivity between all nodes in the cluster (public or private network is fine)
  • Root access may be required

Setup Environment

Hostname     Role                     IP Address
master-01    master / control plane   10.240.0.3
worker-01    worker node              10.240.0.4
worker-02    worker node              10.240.0.5

Disable Swap

Swap must be disabled; kubeadm's preflight checks will fail otherwise. Turn swap off now and remove its fstab entry so it stays off across reboots.

sudo swapoff -a
sudo sed -i.bak '/swap/d' /etc/fstab
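
Verify that no swap devices remain active; the command should produce no output:

sudo swapon --show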

Disable Firewall

Stop and disable the host firewall to avoid blocking cluster traffic; Kubernetes manages its own iptables rules for inbound and outbound traffic. If firewalld is installed:

sudo systemctl status firewalld
sudo systemctl stop firewalld 
sudo systemctl disable firewalld
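
Note that on Ubuntu the default firewall front end is usually ufw rather than firewalld; if that is your setup, disable it instead:

sudo ufw status
sudo ufw disable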

Disable SELinux

If SELinux is enabled, we need to disable it or set it to permissive mode; I'm setting it to permissive mode here. Note that Debian and Ubuntu ship with AppArmor by default, so this step usually applies only if you have installed and enabled SELinux yourself.

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
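
If the SELinux utilities are installed, confirm the current mode; it should report Permissive or Disabled:

getenforce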

Configure IPTables to receive bridged network traffic

Configure sysctl so that bridged traffic is passed through iptables and IP forwarding is enabled:

sudo bash -c 'cat <<EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF'

Load system configuration files

Apply the configuration changes made above. Note that the net.bridge.* keys require the br_netfilter kernel module, which is loaded in the containerd prerequisites below; if sysctl reports those keys as missing, re-run this command after loading the module.

sudo sysctl --system
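
To verify the settings took effect (run this after br_netfilter has been loaded in the next section), each key should report 1:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables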

Install Containerd with Release Tarball

Containerd Prerequisites

sudo bash -c 'cat <<EOF > /etc/modules-load.d/containerd.conf 
overlay
br_netfilter
EOF'

sudo modprobe overlay
sudo modprobe br_netfilter
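
Confirm that both kernel modules are loaded:

lsmod | grep -E 'overlay|br_netfilter'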

Install Updates and Containerd 1.3.0

Update the package index and install libseccomp2, which is required by containerd. Then download the cri-containerd release tarball and unpack it onto the root filesystem.

sudo apt-get update
sudo apt-get install libseccomp2
wget https://storage.googleapis.com/cri-containerd-release/cri-containerd-1.3.0.linux-amd64.tar.gz
sudo tar --no-overwrite-dir -C / -xzf cri-containerd-1.3.0.linux-amd64.tar.gz
sudo systemctl enable containerd && sudo systemctl start containerd

Generate containerd configuration

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

Reload and restart containerd

sudo systemctl daemon-reload
sudo systemctl restart containerd
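
Verify that containerd is running and that its CRI endpoint responds; the crictl client should have been installed by the cri-containerd tarball:

sudo systemctl status containerd
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version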

Install Kubeadm, Kubelet and Kubectl

Add the Kubernetes apt repository, then install the 1.16.2 packages and hold them so routine upgrades do not move the cluster version unexpectedly:

sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo bash -c 'cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF'
sudo apt-get update
sudo apt-get install -y kubelet=1.16.2-00 kubeadm=1.16.2-00 kubectl=1.16.2-00
sudo apt-mark hold kubelet kubeadm kubectl
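
Confirm the expected versions were installed:

kubeadm version -o short
kubectl version --client --short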

Enable and Start Kubelet

sudo systemctl enable kubelet  
sudo systemctl start kubelet

Configure kubelet to use the containerd runtime

The drop-in below points the kubelet at the containerd socket (it does not change the cgroup driver). Create it on every node, control plane and workers alike:

sudo vim /etc/systemd/system/kubelet.service.d/0-containerd.conf

[Service]                                                 
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"

Restart the kubelet to pick up the configuration

sudo systemctl daemon-reload
sudo systemctl restart kubelet
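
Check that the drop-in is being picked up; the Drop-In section of the status output should list 0-containerd.conf. Until kubeadm init or join has run, it is normal for the kubelet to sit in a restart loop waiting for its configuration:

sudo systemctl status kubelet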

Initialize kubeadm on master

Note: this step is for the master node only.

sudo kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address 10.240.0.3

# To start using your cluster, log in as a regular user and run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
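
At this point the control-plane node should be registered, although it will report NotReady until the pod network is deployed in a later step:

kubectl get nodes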

Join Worker Nodes to Cluster

Note: perform this as root on every worker node you want to join to the cluster.

# Run the command that was output by kubeadm init
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

# If you do not have the token, you can get it by running the following command on the control-plane node
kubeadm token list

# Tokens expire after 24 hours. If the current token has expired, create a new one:
kubeadm token create
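
# Alternatively, print a ready-to-use join command (token plus CA cert hash) in one step:
kubeadm token create --print-join-command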

Deploy Pod network – Flannel

For Flannel to work correctly, --pod-network-cidr=10.244.0.0/16 must be passed to kubeadm init, as done above. See the Kubernetes docs for other CNI configurations.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
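
Once the Flannel DaemonSet pods are running on every node, the nodes should transition to Ready:

kubectl get pods -n kube-system -l app=flannel
kubectl get nodes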

Test the Cluster

Run a few checks to confirm the cluster is healthy.

larry@master-01:~$ kubectl get pods --namespace kube-system --output wide
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE    
coredns-5644d7b6d9-85trw            1/1     Running   0          48m   10.244.1.2   worker-01 
coredns-5644d7b6d9-lgzwb            1/1     Running   0          48m   10.244.2.2   worker-02 
etcd-master-01                      1/1     Running   0          47m   10.240.0.3   master-01 
kube-apiserver-master-01            1/1     Running   0          47m   10.240.0.3   master-01 
kube-controller-manager-master-01   1/1     Running   0          47m   10.240.0.3   master-01 
kube-flannel-ds-amd64-58h27         1/1     Running   0          43m   10.240.0.5   worker-02 
kube-flannel-ds-amd64-94hgj         1/1     Running   0          43m   10.240.0.3   master-01 
kube-flannel-ds-amd64-h6rl4         1/1     Running   0          43m   10.240.0.4   worker-01 
kube-proxy-9flw7                    1/1     Running   0          45m   10.240.0.4   worker-01 
kube-proxy-hbj2g                    1/1     Running   0          48m   10.240.0.3   master-01 
kube-proxy-n49zq                    1/1     Running   0          45m   10.240.0.5   worker-02 
kube-scheduler-master-01            1/1     Running   0          47m   10.240.0.3   master-01 
larry@master-01:~$ kubectl cluster-info
Kubernetes master is running at https://10.240.0.3:6443
KubeDNS is running at https://10.240.0.3:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Verify the services

larry@master-01:~$ kubectl get services -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   41m
larry@master-01:~$
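
As a final smoke test, deploy a throwaway workload and confirm a pod is scheduled on a worker node; nginx is used here purely as an example image:

kubectl create deployment nginx --image=nginx
kubectl get pods --output wide
kubectl delete deployment nginx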