How to Install Kubernetes on CentOS 7

What is Kubernetes?
Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon a decade and a half of experience that Google has with running production workloads at scale, combined with best-of-breed ideas and practices from the community.

In this tutorial I will show you how to create a Kubernetes cluster with 1 master and 2 worker nodes.

Note: This guide is written for CentOS 7 and assumes a non-root user.
Commands that require elevated privileges are prefixed with sudo.

Servers
I have created three CentOS 7 VMs with the minimal installation.
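
For reference, the hostnames and IP addresses used throughout this tutorial are k8s-master (10.10.45.2), k8s-worker01 (10.10.45.11) and k8s-worker02 (10.10.45.12); yours will of course differ. If you don't have DNS entries for these names, one option is to add them to /etc/hosts on every VM:

[kubernetes@k8s-master ~]$ sudo bash -c 'cat <<EOF >> /etc/hosts
10.10.45.2    k8s-master
10.10.45.11   k8s-worker01
10.10.45.12   k8s-worker02
EOF'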

Installation of Kubernetes Nodes
We start with the installation of the Kubernetes master.
First we need to set SELinux to permissive mode and disable firewalld.

[kubernetes@k8s-master ~]$ sudo setenforce 0
[kubernetes@k8s-master ~]$ sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux
[kubernetes@k8s-master ~]$ sudo systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
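
Disabling the firewalld service only takes effect at the next boot, so if you don't plan to reboot right away you can also stop it for the current session:

[kubernetes@k8s-master ~]$ sudo systemctl stop firewalld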

Add the Kubernetes repository to the system.

[kubernetes@k8s-master ~]$ sudo bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF'

Now that we have added the repository, we can install kubeadm, kubelet, kubectl and Docker.

[kubernetes@k8s-master ~]$ sudo yum install kubeadm kubelet kubectl docker -y

Make sure that Docker and the kubelet are started at boot.

[kubernetes@k8s-master ~]$ sudo systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[kubernetes@k8s-master ~]$ sudo systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
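
Enabling the services only makes them start at boot; Docker also needs to be running before we initialize the cluster, so start it now if it isn't already:

[kubernetes@k8s-master ~]$ sudo systemctl start docker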

Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config.

[kubernetes@k8s-master ~]$ sudo bash -c 'cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'
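
To apply these settings without waiting for a reboot, reload the sysctl configuration:

[kubernetes@k8s-master ~]$ sudo sysctl --system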

The last step of the Kubernetes installation is to disable swap.
All Kubernetes master and worker nodes are expected to have swap disabled.
This is the deployment recommended by the Kubernetes community.

[kubernetes@k8s-master ~]$ sudo swapoff -a
[kubernetes@k8s-master ~]$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Reboot the system to make sure swap stays disabled.
Perform these same steps on the worker nodes as well.
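
After the reboot you can verify that swap is really off; the Swap line in the output of free should show 0:

[kubernetes@k8s-master ~]$ free -h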

Configuration of the Kubernetes Master
Before we initialize our Kubernetes cluster, we need to choose a CIDR range to be used for pod IP addresses. Since I’m using the “Calico” network plugin, I will be using its default CIDR, 192.168.0.0/16.
Note: The CIDR may be different if you decide to use another network plugin.
Keep in mind that your values may differ from those below.
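
As an illustration only (we stick with Calico in this tutorial): if you were to use the Flannel plugin instead, its default manifest expects the 10.244.0.0/16 range, so the init command would become:

[kubernetes@k8s-master ~]$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16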

[kubernetes@k8s-master ~]$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.10.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.45.2]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master] and IPs [10.10.45.2]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 36.567609 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-master as master by adding a label and a taint
[markmaster] Master k8s-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 06lm6r.l1bsmie4djirdo98
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.10.45.2:6443 --token 06lm6r.l1bsmie4djirdo98 --discovery-token-ca-cert-hash sha256:24847401465e8ef3239740db9576b4dfc11d41224cbed0483e9405edcb992f82

If the initialization is successful, the init command prints the kubeadm join command with the token that can be used to add the worker nodes to the cluster. Make a note of this command.
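
If you lose this output, recent kubeadm versions can print a fresh join command (including a new token) for you:

[kubernetes@k8s-master ~]$ sudo kubeadm token create --print-join-command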

If you want to run kubectl as a non-root user, you need to execute the following commands:

[kubernetes@k8s-master ~]$ mkdir -p $HOME/.kube
[kubernetes@k8s-master ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[kubernetes@k8s-master ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
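
A quick way to confirm that kubectl can reach the API server is:

[kubernetes@k8s-master ~]$ kubectl cluster-info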

Use the kubectl command to check whether Kubernetes is running correctly. This could take a few minutes.

[kubernetes@k8s-master ~]$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE       IP           NODE
kube-system   etcd-k8s-master                      1/1       Running   0          3s        10.10.45.2   k8s-master
kube-system   kube-apiserver-k8s-master            1/1       Running   0          3s        10.10.45.2   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          3s        10.10.45.2   k8s-master
kube-system   kube-dns-86f4d74b45-r5wn8            0/3       Pending   0          8m        <none>       <none>
kube-system   kube-proxy-zvfvz                     1/1       Running   0          8m        10.10.45.2   k8s-master
kube-system   kube-scheduler-k8s-master            1/1       Running   0          3s        10.10.45.2   k8s-master

As you can see, all the containers are up and running with the exception of kube-dns.
The reason for this is that we haven’t installed the network plugin yet.

Now that everything is working we can add the workers to the cluster.
Log in on both worker nodes and execute the “kubeadm join” command from the cluster initialization.

[kubernetes@k8s-worker01 ~]$ sudo kubeadm join 10.10.45.2:6443 --token 06lm6r.l1bsmie4djirdo98 --discovery-token-ca-cert-hash sha256:24847401465e8ef3239740db9576b4dfc11d41224cbed0483e9405edcb992f82
[preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "10.10.45.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.10.45.2:6443"
[discovery] Requesting info from "https://10.10.45.2:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.10.45.2:6443"
[discovery] Successfully established connection with API Server "10.10.45.2:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Check whether both nodes have been added to the cluster. This could take a few minutes.

[kubernetes@k8s-master ~]$ kubectl get nodes
NAME           STATUS     ROLES     AGE       VERSION
k8s-master     Ready      master    34m       v1.10.2
k8s-worker01   Ready      <none>    2m        v1.10.2
k8s-worker02   Ready      <none>    2m        v1.10.2

Let’s install the network plugin, in our case Calico.
First we install the Calico CNI (Container Network Interface) plugin.

[kubernetes@k8s-master ~]$ kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
configmap "calico-config" created
daemonset.extensions "calico-etcd" created
service "calico-etcd" created
daemonset.extensions "calico-node" created
deployment.extensions "calico-kube-controllers" created
clusterrolebinding.rbac.authorization.k8s.io "calico-cni-plugin" created
clusterrole.rbac.authorization.k8s.io "calico-cni-plugin" created
serviceaccount "calico-cni-plugin" created
clusterrolebinding.rbac.authorization.k8s.io "calico-kube-controllers" created
clusterrole.rbac.authorization.k8s.io "calico-kube-controllers" created
serviceaccount "calico-kube-controllers" created

We also deploy the calicoctl pod, which we will use to manage Calico from the command line.

[kubernetes@k8s-master ~]$ kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/calicoctl.yaml
pod "calicoctl" created

Check whether Calico is installed and working. This could take a few minutes.

[kubernetes@k8s-master ~]$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE       IP                NODE
kube-system   calico-etcd-4hnc2                          1/1       Running   0          26m       10.10.45.2        k8s-master
kube-system   calico-kube-controllers-5d74847676-s4gkd   1/1       Running   0          26m       10.10.45.2        k8s-master
kube-system   calico-node-5xrtk                          2/2       Running   0          26m       10.10.45.2        k8s-master
kube-system   calico-node-bnv4n                          2/2       Running   0          2m        10.10.45.12       k8s-worker02
kube-system   calico-node-rstrk                          2/2       Running   0          4m        10.10.45.11       k8s-worker01
kube-system   calicoctl                                  1/1       Running   0          25m       10.10.45.11       k8s-worker01
kube-system   etcd-k8s-master                            1/1       Running   0          25m       10.10.45.2        k8s-master
kube-system   kube-apiserver-k8s-master                  1/1       Running   0          25m       10.10.45.2        k8s-master
kube-system   kube-controller-manager-k8s-master         1/1       Running   0          25m       10.10.45.2        k8s-master
kube-system   kube-dns-86f4d74b45-r5wn8                  3/3       Running   0          35m       192.168.235.193   k8s-master
kube-system   kube-proxy-49njp                           1/1       Running   0          4m        10.10.45.11       k8s-worker01
kube-system   kube-proxy-nbpgm                           1/1       Running   0          2m        10.10.45.12       k8s-worker02
kube-system   kube-proxy-zvfvz                           1/1       Running   0          35m       10.10.45.2        k8s-master
kube-system   kube-scheduler-k8s-master                  1/1       Running   0          25m       10.10.45.2        k8s-master
[kubernetes@k8s-master ~]$ kubectl exec -ti -n kube-system calicoctl -- /calicoctl get profiles -o wide
NAME              LABELS
kns.default       map[]
kns.kube-public   map[]
kns.kube-system   map[]
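
If you like, you can also use the same calicoctl pod to confirm that all three hosts are registered with Calico (the exact output depends on your Calico version):

[kubernetes@k8s-master ~]$ kubectl exec -ti -n kube-system calicoctl -- /calicoctl get nodes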

As we can see from the output, there are some additional system containers running on the master and worker nodes.
Now that Kubernetes is installed, let’s spin up our first container.

First Container
Now it’s time to deploy our first container. In this tutorial I’ll be using NGINX.

[kubernetes@k8s-master ~]$ kubectl create deployment nginx --image=nginx
deployment.extensions "nginx" created
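
Before exposing it, you can check that the deployment has actually scheduled a pod; kubectl create deployment labels it with app=nginx:

[kubernetes@k8s-master ~]$ kubectl get pods -l app=nginx -o wide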

Next we make the NGINX container available to the network with a NodePort service.

[kubernetes@k8s-master ~]$ kubectl create service nodeport nginx --tcp=80:80
service "nginx" created

Check whether the NGINX service is listening.

[kubernetes@k8s-master ~]$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        44m
nginx        NodePort    10.109.70.251   <none>        80:32637/TCP   2m

Note: Port 32637 was assigned automatically by the create service command.
Keep in mind that your value may differ.
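
If you don't want to read the port from the table, you can also extract the assigned NodePort directly, for example:

[kubernetes@k8s-master ~]$ kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'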

Let’s test with a curl command whether we can display the NGINX index.html page.

[kubernetes@k8s-master ~]$ curl k8s-worker01:32637
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

If you see the same output as above, congratulations, your NGINX container has been deployed on your Kubernetes cluster.

If you open up a web browser to http://<IP_or_DNS>:<NODEPORT> (where IP_or_DNS is one of your nodes and NODEPORT is the port assigned during the create service command), you should see the NGINX welcome page!

If you have some suggestions or tips, just leave a comment below.

Thanks for reading.
