
How to Install a Kubernetes Cluster on CentOS

Kubernetes is a free, open-source container orchestration engine for automating the deployment, scaling, and management of containerized applications. Google originally designed Kubernetes; the Cloud Native Computing Foundation now maintains the project.
Kubernetes automates routine container-administration tasks and includes built-in commands for deploying applications, rolling out changes to your applications, scaling your applications up and down to suit changing needs, monitoring your applications, and more, making applications easier to manage.
In this article you will learn how to install a Kubernetes cluster on CentOS.

Prerequisites:
RAM: 2 GB or more on each machine
CPUs: 2 cores or more
Full network connectivity between all machines in the cluster
A fully qualified domain name on all nodes
Swap disabled: you MUST disable swap in order for the kubelet to work properly

The cluster consists of two node roles:
1. Master node
2. Worker node
CNI: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
cgroupDriver: systemd

Optional: if you need to change cgroupDriver after installation, edit the kubelet ConfigMap:

[root@linuxpcfix ~]#kubectl edit cm kubelet-config -n kube-system
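Because the kubelet is configured with cgroupDriver: systemd, Docker must use the same cgroup driver, or the kubelet will fail to start pods. A minimal sketch of pointing Docker at the systemd driver via its standard daemon.json config file (the restart is required for the change to take effect):

```shell
# Configure Docker to use the systemd cgroup driver, matching the
# kubelet's cgroupDriver: systemd setting, then restart Docker.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
```

You can confirm the active driver afterwards with `docker info | grep -i cgroup`.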

Step 1: Configure /etc/hosts on all cluster nodes. Add an entry for every node, replacing the placeholder IP addresses with your own:

<master-ip>   master.linuxpcfix.com   master
<worker1-ip>  worker1.linuxpcfix.com  worker1
<worker2-ip>  worker2.linuxpcfix.com  worker2
Step 2: Install Docker on all nodes, master and workers
You need to install Docker on all nodes, but before installing Docker we have to add the Docker yum repository.

[root@linuxpcfix ~]#yum install -y yum-utils
[root@linuxpcfix ~]#yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

After adding the Docker repository, execute the following command to install Docker on all cluster nodes.

[root@linuxpcfix ~]#yum install docker-ce docker-ce-cli containerd.io

Once the required Docker packages are installed, start and enable the docker service.

[root@linuxpcfix ~]#systemctl start docker
[root@linuxpcfix ~]#systemctl enable docker

Step 3: Disable SELinux on all master and worker nodes

[root@linuxpcfix ~]#setenforce 0
[root@linuxpcfix ~]#sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Step 4: Disable firewalld on all cluster nodes.

[root@linuxpcfix ~]#systemctl disable firewalld
[root@linuxpcfix ~]#systemctl stop firewalld
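Disabling firewalld entirely is the simplest option, but on a production host you may prefer to keep it running and open only the ports Kubernetes needs. A dry-run sketch that prints the firewall-cmd invocations for the usual v1.21-era control-plane ports (API server 6443, etcd 2379-2380, kubelet 10250, scheduler 10251, controller-manager 10252); drop the echo to actually apply them, then run firewall-cmd --reload:

```shell
# Dry-run: print the firewall-cmd calls that would open the standard
# Kubernetes control-plane ports; remove 'echo' to apply for real.
for port in 6443 2379-2380 10250 10251 10252; do
  echo firewall-cmd --permanent --add-port="${port}/tcp"
done
```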

Step 5: Disable swap on all cluster nodes
We need to disable swap on the master node as well as both worker nodes, because the kubelet does not support running with swap enabled.
Run the following command on the master and on both worker nodes:

[root@linuxpcfix ~]#swapoff -a
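Note that swapoff -a only disables swap until the next reboot. To make the change persistent, also comment out the swap entry in /etc/fstab; a minimal sketch using sed (the .bak suffix keeps a backup of the original file):

```shell
# Comment out any active swap line in /etc/fstab so swap
# stays disabled after a reboot; keeps a .bak backup.
sed -i.bak '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab
```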

Step 6: Enable the use of iptables on all cluster nodes
Enable bridged traffic to pass through iptables, which avoids routing errors. Set the following runtime parameters:

[root@linuxpcfix ~]#bash -c 'echo "net.bridge.bridge-nf-call-ip6tables = 1" > /etc/sysctl.d/k8s.conf'
[root@linuxpcfix ~]#bash -c 'echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/k8s.conf'
[root@linuxpcfix ~]#sysctl --system
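These net.bridge.* sysctls only exist when the br_netfilter kernel module is loaded, so loading it now and registering it to load at boot is a commonly needed companion step; a sketch:

```shell
# Load br_netfilter now and have it load on every boot, so the
# net.bridge.bridge-nf-call-* sysctls above are actually available.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf
sysctl --system
```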

Step 7: Add the Kubernetes repo on all cluster nodes.
Open the file /etc/yum.repos.d/kubernetes.repo:

[root@linuxpcfix ~]#vi /etc/yum.repos.d/kubernetes.repo

Append the following lines.
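The article omits the repository definition itself. The sketch below follows the repo layout upstream Kubernetes documented for this era (the legacy packages.cloud.google.com yum repo, since retired in favor of pkgs.k8s.io), so treat the URLs as an assumption to verify against the current installation docs:

```shell
# Write the (legacy, Google-hosted) Kubernetes yum repo definition;
# newer installs should use the pkgs.k8s.io community repo instead.
cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```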


Step 8: Install the required Kubernetes packages on all nodes

[root@linuxpcfix ~]#yum install -y kubeadm kubelet kubectl

Step 9: Enable and start the kubelet on the master node as well as both worker nodes

[root@linuxpcfix ~]#systemctl enable kubelet
[root@linuxpcfix ~]#systemctl start kubelet

Step 10: Initialize the Kubernetes cluster, only on the master node
--apiserver-advertise-address is the same IP address we assigned to the master in /etc/hosts; --pod-network-cidr is the pod network CIDR range that pods in the cluster will use. Replace the placeholders with your own values:

[root@linuxpcfix ~]#kubeadm init --apiserver-advertise-address=<master-ip> --pod-network-cidr=<pod-cidr>

Kindly note down the kubeadm join command, which looks something like the following; it is used to join the worker nodes to the cluster. If you lose it, you can regenerate it later on the master with kubeadm token create --print-join-command.

kubeadm join --token sfvd2z.8h8kfa0u9vcn4trf \
--discovery-token-ca-cert-hash sha256:cs130nmb47f3a9bfa5b880dcf7096faef05d25505f4c099732b65376b0e14557g

Step 11: Copy the kubeconfig file to the current user's home directory, only on the master.
To communicate with the Kubernetes cluster and use the kubectl command, you need the kubeconfig file.
Run the following commands to copy the kubeconfig file into the user's home directory:

[root@linuxpcfix ~]#mkdir -p $HOME/.kube
[root@linuxpcfix ~]#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@linuxpcfix ~]#chown $(id -u):$(id -g) $HOME/.kube/config

Step 12: Apply the CNI from kube-flannel.yml, on the master node only; there is no need to perform this on the worker nodes.
Once the master node of the Kubernetes cluster is up and its services are running, we need to set up the container network so pods can communicate with each other across the cluster.
Set up the CNI (Container Network Interface) configuration from flannel:

[root@linuxpcfix ~]#wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Note: if you are performing the installation on a virtual machine and the VM has multiple network interfaces, check your Ethernet interfaces first.
Identify which interface holds the IP address you used for the Kubernetes master.
If eth1 is assigned the master node IP on your virtual machine, perform the following steps; otherwise you can skip them.
Now we have to append an extra argument for eth1 in kube-flannel.yml.

[root@linuxpcfix ~]#vi kube-flannel.yml

Search for "flanneld" in the downloaded kube-flannel.yml file and add --iface=eth1 to its args section:

args:
- --ip-masq
- --kube-subnet-mgr
- --iface=eth1
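Instead of editing the file by hand, the same change can be scripted; a sketch using sed, assuming the args list contains the stock --kube-subnet-mgr entry as in the upstream manifest:

```shell
# Insert '- --iface=eth1' on a new line directly after the
# '- --kube-subnet-mgr' arg, preserving the original indentation.
sed -i 's/^\( *\)- --kube-subnet-mgr/&\n\1- --iface=eth1/' kube-flannel.yml
```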

Apply the flannel configuration

[root@linuxpcfix ~]#kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Step 13: Join the worker nodes to the master; run this only on the worker nodes
Now we need to run the join command noted earlier on each worker node:

[root@linuxpcfix ~]#kubeadm join --token sfvd2z.8h8kfa0u9vcn4trf --discovery-token-ca-cert-hash sha256:cs130nmb47f3a9bfa5b880dcf7096faef05d25505f4c099732b65376b0e14557g

Step 14: Check the Kubernetes cluster node status
Check the node status on the master:

[root@linuxpcfix ~]#kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
master    Ready    control-plane,master   26m   v1.21.2
worker1   Ready    <none>                 63s   v1.21.2
worker2   Ready    <none>                 43s   v1.21.2

If you see output like the above, you have successfully installed a Kubernetes cluster with 1 master and 2 worker nodes.


I am the founder and webmaster of www.linuxpcfix.com, working as a Sr. Linux Administrator (with expertise in Linux/Unix and cloud servers), and have been in the industry for more than 14 years.
