Hi friends, hope you are doing great. In this post, I will walk through the installation of a Kubernetes cluster using kubeadm.
I am going to use a 3-node cluster (1 master node and 2 worker nodes). If you want to read about master and worker nodes, you can read about them here.
I will be installing Kubernetes v1.14, and in a later post I will show you how to upgrade your Kubernetes cluster to a higher version. So without wasting much time, let's jump into the installation process.
Cluster Details:
Master node  | 10.0.1.101
Worker node1 | 10.0.1.102
Worker node2 | 10.0.1.103
OS           | Ubuntu 18.04.4
System Requirements:
- 2 GiB or more of RAM per machine/node.
- At least 2 CPUs on the machine that you use as a control-plane node.
- Full network connectivity among all the nodes/machines in the cluster (a quick sanity check is shown below).
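You can quickly sanity-check these requirements on each machine with standard Linux tools:

# CPU count (the master needs at least 2) and available memory (2 GiB or more).
nproc
free -h

# Connectivity from this node to the others (adjust the IPs to your setup).
ping -c 2 10.0.1.102
ping -c 2 10.0.1.103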
To set up a Kubernetes cluster using kubeadm, we need to perform the below actions:
- Install Docker on all the nodes.
- Install Kubeadm, Kubectl, Kubelet on all the nodes.
- Bootstrap the cluster on the Master node.
- Join the worker nodes to the cluster.
- Set up cluster networking with flannel.
Now, we will follow the above actions one by one.
Install Docker on all the nodes.
Execute the below commands on all 3 nodes to install Docker.
Note: We are going to install the Community Edition of Docker (docker-ce).
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
sudo apt-mark hold docker-ce
In the above steps, we are adding the Docker repository, installing Docker, and putting the docker-ce package on hold.
hold marks a package as held back, which prevents the package from being automatically installed, upgraded, or removed.
apt-mark changes whether a package has been marked as being automatically installed.
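To confirm the hold took effect, apt-mark can also list the held packages:

# Lists packages currently on hold; docker-ce should appear here.
sudo apt-mark showhold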
After installing Docker, we need to verify that it is running on the server. To verify this, you can execute the below command:
sudo systemctl status docker
I checked on the master node (10.0.1.101) and confirmed that Docker is running fine. We need to verify the Docker status on every node, so repeat this check on all 3 nodes.
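If you have SSH access from one machine to the others, a small loop saves some typing. This is just a convenience sketch; it assumes the cloud_user account and the node IPs from the table above:

# Print the Docker service state ("active") for every node.
for ip in 10.0.1.101 10.0.1.102 10.0.1.103; do
  echo -n "$ip: "
  ssh cloud_user@"$ip" systemctl is-active docker
done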
Now we will install Kubeadm, Kubectl and Kubelet.
Install Kubeadm, Kubectl, Kubelet on all the nodes.
Execute the below commands on all the nodes to install these tools.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet=1.14.5-00 kubeadm=1.14.5-00 kubectl=1.14.5-00
sudo apt-mark hold kubelet kubeadm kubectl
In the above steps, we are adding the Kubernetes repository, installing kubeadm, kubelet, and kubectl from it, and holding them at the installed version.
kubeadm is a tool built to provide kubeadm init and kubeadm join as best-practice “fast paths” for creating Kubernetes clusters. kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design, it cares only about bootstrapping, not about provisioning machines. Refer to the official Kubernetes page for more details.
kubectl is a Kubernetes command-line tool which allows you to run commands against Kubernetes clusters. For configuration, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag. For more details, you can refer here.
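For example, if your admin kubeconfig lives somewhere other than $HOME/.kube/config, either of these works (the path below is just a placeholder):

# Point kubectl at a specific kubeconfig for this shell session.
export KUBECONFIG=$HOME/clusters/lab/config

# Or pass the file for a single command.
kubectl --kubeconfig=$HOME/clusters/lab/config get nodes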
kubelet is the primary “node agent” that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider. The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object that describes a pod. For more details on this, refer here.
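To make the PodSpec idea concrete, here is a minimal pod manifest, written as a shell heredoc; the pod and image names are arbitrary examples:

# A minimal PodSpec: one pod running a single nginx container.
cat << 'EOF' > nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF
# Once the cluster is up, it can be created with: kubectl apply -f nginx-pod.yaml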
After installing kubeadm, kubectl, and kubelet, we will perform the next task, which is “Bootstrapping the cluster on the Master node“.
Bootstrap the cluster on the Master node.
Execute the below command to bootstrap the cluster.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Note: This command should be run only on the master node.
There are multiple ways to bootstrap your cluster, but kubeadm init is the simplest way to do it.
It creates certificates and places them at /etc/kubernetes/pki. It also creates manifests and configuration files for kube-apiserver, kube-scheduler, and kube-controller-manager. After this, it starts the kubelet, and when the control plane components (kube-apiserver, etcd, kube-scheduler, kube-controller-manager, and cloud-controller-manager) are ready, it runs the kube-proxy DaemonSet and the DNS (CoreDNS) Deployment. To read about the components of Kubernetes, you can refer here.
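Once kubeadm init finishes, you can peek at what it generated on the master; these are the standard kubeadm locations mentioned above:

# Certificates created by kubeadm.
sudo ls /etc/kubernetes/pki

# Static pod manifests for the control-plane components.
sudo ls /etc/kubernetes/manifests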
For your reference, I have pasted the kubeadm init output here. You can go through it to understand what happens when kubeadm init gets executed.
cloud_user@ip-10-0-1-101:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
I1031 08:18:56.378860    2455 version.go:240] remote version is much newer: v1.19.3; falling back to: stable-1.14
[init] Using Kubernetes version: v1.14.10
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ip-10-0-1-101 localhost] and IPs [10.0.1.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ip-10-0-1-101 localhost] and IPs [10.0.1.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ip-10-0-1-101 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.1.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.002372 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node ip-10-0-1-101 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ip-10-0-1-101 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 9bocgj.l5vmdh0q3102anhm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.1.101:6443 --token 9bocgj.l5vmdh0q3102anhm \
    --discovery-token-ca-cert-hash sha256:e644180680b24be11a6628074544438ed45940299d492a02b10a784fb7427c5d
cloud_user@ip-10-0-1-101:~$
When you are done with the kubeadm init steps, we need to set up the local kubeconfig. We will do this on the master node for now. The commands for setting up the kubeconfig are already given in the kubeadm init output; I am pasting them here once again for your convenience.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
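With the kubeconfig in place, a quick check that kubectl can talk to the API server:

# Should print the API server and DNS endpoints of the cluster.
kubectl cluster-info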
Also take note that the kubeadm init command printed a long kubeadm join command on the screen. We will need this kubeadm join command to join the worker nodes to the cluster, and we will use it in our next step. Now the next step is to join the cluster.
Join the worker nodes to the cluster.
Execute the below command on each worker node to join it to the cluster.
sudo kubeadm join $ip:6443 --token $token --discovery-token-ca-cert-hash $hash
Replace ip, token, and hash with the values you obtained from the output of kubeadm init.
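If you have lost the join command, or the bootstrap token has expired (tokens are short-lived by default), you can print a fresh one from the master node:

# Creates a new bootstrap token and prints the complete kubeadm join command.
sudo kubeadm token create --print-join-command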
At worker node1:
cloud_user@ip-10-0-1-102:~$ sudo kubeadm join 10.0.1.101:6443 --token 9bocgj.l5vmdh0q3102anhm \
> --discovery-token-ca-cert-hash sha256:e644180680b24be11a6628074544438ed45940299d492a02b10a784fb7427c5d
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

cloud_user@ip-10-0-1-102:~$
At worker node2:
cloud_user@ip-10-0-1-103:~$ sudo kubeadm join 10.0.1.101:6443 --token 9bocgj.l5vmdh0q3102anhm \
> --discovery-token-ca-cert-hash sha256:e644180680b24be11a6628074544438ed45940299d492a02b10a784fb7427c5d
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

cloud_user@ip-10-0-1-103:~$
Now run the below command from the master node to see whether your worker nodes have joined the cluster.
kubectl get nodes
cloud_user@ip-10-0-1-101:~$ kubectl get nodes
NAME            STATUS     ROLES    AGE     VERSION
ip-10-0-1-101   NotReady   master   12m     v1.14.5
ip-10-0-1-102   NotReady   <none>   2m29s   v1.14.5
ip-10-0-1-103   NotReady   <none>   2m22s   v1.14.5
cloud_user@ip-10-0-1-101:~$
All the nodes have joined the cluster, but you will have noticed that they are in the “NotReady” state. This is expected: we now need to install Flannel to bring the nodes to the “Ready” state.
Flannel is a very simple overlay network that satisfies the Kubernetes requirements. It is a simple and easy way to configure a layer 3 network fabric designed for Kubernetes. For more details, refer here.
Set up cluster networking with flannel.
For flannel to install and work correctly, we need to enable bridged IPv4 traffic to iptables chains on all the nodes. Execute the below commands to enable it.
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf sudo sysctl -p
After enabling the iptables bridge setting, set up flannel from the master node. To do this, run the below command on the master node.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
cloud_user@ip-10-0-1-101:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
cloud_user@ip-10-0-1-101:~$
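If the nodes do not flip to Ready right away, you can watch the flannel pods (one kube-flannel-ds pod per node, in the kube-system namespace) come up:

# Each node should show a Running kube-flannel-ds-* pod.
kubectl get pods -n kube-system -o wide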
We can see that flannel is installed. Now, if we check the node status, we will be able to see that the nodes are in the “Ready” state.
cloud_user@ip-10-0-1-101:~$ kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
ip-10-0-1-101   Ready    master   13m     v1.14.5
ip-10-0-1-102   Ready    <none>   4m16s   v1.14.5
ip-10-0-1-103   Ready    <none>   4m9s    v1.14.5
cloud_user@ip-10-0-1-101:~$
We can see that our nodes are in the “Ready” state, which tells us that they are healthy and ready to accept pods.
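As an optional smoke test (not strictly part of the setup, just a quick way to confirm scheduling works), you can run a throwaway nginx deployment:

# Create a test deployment and see the pod get scheduled on a worker node.
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide

# Clean up once satisfied.
kubectl delete deployment nginx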
I hope you enjoyed reading this post and setting up a Kubernetes cluster using kubeadm. If you have any questions, feel free to leave a comment below; I will be happy to answer them.
In my next post, we will learn about upgrading the Kubernetes cluster.
My name is Shashank Shekhar. I am a DevOps Engineer, currently working at one of the best companies in India. I have around 5 years of experience in Linux server administration and DevOps tools.
I love working in Linux environments and learning new things.