Kubernetes Volume Plugin for Nutanix ABS

So today I’ll show you how to install and use the Kubernetes Volume Plugin for Nutanix ABS (Acropolis Block Services).

I already have my setup up and running, as described in my previous post, How to Install Kubernetes on CentOS 7. If you haven’t seen it yet, please check it out.

The only thing I’ve added to my setup is a single-node Nutanix Community Edition cluster.
If you want to know how to run Nutanix CE, check out this website: https://www.nutanix.com/products/community-edition/

So let’s get started!

Prerequisites
I’ll start by creating a local Nutanix user.
Log in to the Nutanix cluster, open Local User Management, and create a new user.

Now that the user is created, let’s hop onto both of our worker nodes and install the iSCSI initiator utilities.

[kubernetes@k8s-worker01 ntnx]$ sudo yum install iscsi-initiator-utils -y

Once the iSCSI utils are installed, we need to make sure that the InitiatorName is the same on all worker nodes, because this name will be used to connect to the Nutanix cluster. The command below replaces the default generated name.

[kubernetes@k8s-worker01 ntnx]$ sudo sed -i 's/InitiatorName=iqn.1994-05.*/InitiatorName=iqn.1994-05.com.nutanix:k8s-worker/g' /etc/iscsi/initiatorname.iscsi
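If you want a dry run of that sed expression before touching the real file, you can test it on a scratch copy first (the default-style sample name below is made up):

```shell
# scratch copy with a made-up default-style initiator name
echo 'InitiatorName=iqn.1994-05.com.redhat:abc123' > /tmp/initiatorname.iscsi

# same substitution as on the workers, applied to the scratch copy
sed -i 's/InitiatorName=iqn.1994-05.*/InitiatorName=iqn.1994-05.com.nutanix:k8s-worker/g' /tmp/initiatorname.iscsi

cat /tmp/initiatorname.iscsi
```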

Because I had connectivity issues between the workers and the Nutanix cluster (it didn’t accept the InitiatorName after I changed it), I rebooted both worker nodes.

Now that the worker nodes are rebooting, let’s log in to our master node.

Create a base64 hash of the Nutanix login credentials; we’ll use it to connect Kubernetes to the Nutanix cluster.
Note: Keep in mind that your values may differ from those below.

[kubernetes@k8s-master ntnx]$ echo -n "k8s4ntnx:Kubernetes/4u" | base64
azhzNG50bng6S3ViZXJuZXRlcy80dQ==
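As a sanity check, you can decode the hash to confirm it round-trips back to the original credentials (the credentials below are the demo values from above; use your own):

```shell
# demo credentials from this post; substitute your own
creds="k8s4ntnx:Kubernetes/4u"

# printf '%s' avoids a trailing newline sneaking into the hash
hash=$(printf '%s' "$creds" | base64)
echo "$hash"

# decoding should give back exactly the original string
printf '%s' "$hash" | base64 -d
```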

Copy the hash to your clipboard or save it somewhere else.

Installation
To install the volume plugin, we first need to download it.
You can use wget, or you can download the file right here.

[kubernetes@k8s-master ntnx]$ wget http://10.10.40.59/wp-content/uploads/2018/05/k8s_4_ntnx_abs.yml

Next, we need to edit three sections within the plugin file.

[kubernetes@k8s-master ntnx]$ nano k8s_4_ntnx_abs.yml

The first section is ### StorageClass ###.
For both the gold and silver classes, change the following attributes.
Please don’t touch user, password, or secretName unless you know what you’re doing.
* prism_ip_address
* data_service_ip_address
* storage_container

The following attributes are optional:
* fsType
* chapAuthEnabled
* iscsiSecretName
* defaultIqn
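For reference, after editing, the gold StorageClass might look roughly like the sketch below. The IP addresses and container name are placeholders for your own environment, and the provisioner value is illustrative — keep whatever the downloaded file ships with:

```yaml
### StorageClass ###
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
provisioner: nutanix.com/abs              # illustrative; keep the value from the file
parameters:
  prism_ip_address: "10.10.40.10"         # placeholder: your Prism IP
  data_service_ip_address: "10.10.40.11"  # placeholder: your Data Services IP
  storage_container: "default-container"  # placeholder: your storage container
  # leave user, password and secretName untouched
```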

The second section is ### Secret Credentials ###.
Replace the hash “AABBCCDDEEFFGG==” with the one we created earlier.
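As a sketch of what that section boils down to (the Secret name matches the kubectl output later in this post; the data key name is illustrative):

```yaml
### Secret Credentials ###
apiVersion: v1
kind: Secret
metadata:
  name: ntnx-abs-secret
type: Opaque
data:
  # replace only the value: the base64 hash created earlier
  key: azhzNG50bng6S3ViZXJuZXRlcy80dQ==
```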

The last section is ### iSCSI CHAP Authentication ###.
If you use iSCSI CHAP, you need to add your base64 password here. In my case I’ll delete these lines, since I’m not using iSCSI CHAP.

Now that we’ve edited the plugin file, we can install the Kubernetes Volume Plugin for Nutanix ABS.

[kubernetes@k8s-master ntnx]$ kubectl create -f k8s_4_ntnx_abs.yml
serviceaccount "ntnx-abs-serviceaccount" created
clusterrole.rbac.authorization.k8s.io "ntnx-abs-clusterrole" created
clusterrolebinding.rbac.authorization.k8s.io "ntnx-abs-clusterrole-binding" created
deployment.extensions "ntnx-abs" created
storageclass.storage.k8s.io "gold" created
storageclass.storage.k8s.io "silver" created
secret "ntnx-abs-secret" created

Check if the plugin is installed and working; this could take a few minutes.

[kubernetes@k8s-master ntnx]$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE       IP                NODE
default       ntnx-abs-d9687cdb8-5wh8s                   1/1       Running   0          2m        10.10.45.12       k8s-worker02

Next, we need to create a PersistentVolumeClaim to test whether our plugin is working.
Create a file with the following content and apply it.

[kubernetes@k8s-master ntnx]$ cat <<EOF > ntnx-pvc-demo.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ntnx-pvc-demo
spec:
  storageClassName: gold
  resources:
    requests:
      storage: 1Gi
  accessModes:
    - ReadWriteMany
EOF
[kubernetes@k8s-master ntnx]$ kubectl create -f ntnx-pvc-demo.yml
persistentvolumeclaim "ntnx-pvc-demo" created

Verify that the PVC is created and automatically bound.

[kubernetes@k8s-master ntnx]$ kubectl get pvc ntnx-pvc-demo
NAME            STATUS    VOLUME                                                                               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ntnx-pvc-demo   Bound     e5e7a3521cc40006cf270afd0333bb5d4f58a26323935c501334031aadbffa52-nutanix-k8-volume   1Gi        RWX            gold           14s

Log in to the Nutanix cluster to see if the volume group has been created.

Yes it is!

Delete the PVC and check whether the volume is deleted in both Kubernetes and Nutanix.

[kubernetes@k8s-master ntnx]$ kubectl delete pvc ntnx-pvc-demo
persistentvolumeclaim "ntnx-pvc-demo" deleted
[kubernetes@k8s-master ntnx]$ kubectl get pvc
No resources found.

And it’s gone…..

NGINX with Persistent Storage
Now that we know we can create a PVC on Nutanix, let’s deploy NGINX with persistent storage,
so that when the NGINX pod is deleted and recreated, the content still exists.

Create two new files with the following content and apply both.

[kubernetes@k8s-master ntnx]$ cat <<EOF > ntnx-pvc-nginx.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ntnx-pvc-nginx
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: silver
EOF
[kubernetes@k8s-master ntnx]$ kubectl create -f ntnx-pvc-nginx.yml
persistentvolumeclaim "ntnx-pvc-nginx" created

[kubernetes@k8s-master ntnx]$ cat <<EOF > nginx01.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx01
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: launcher.gcr.io/google/nginx1
        volumeMounts:
          - mountPath: /usr/share/nginx/html
            name: pvc-nginx01
      volumes:
        - name: pvc-nginx01
          persistentVolumeClaim:
            claimName: ntnx-pvc-nginx
EOF
[kubernetes@k8s-master ntnx]$ kubectl create -f nginx01.yml
deployment.extensions "nginx01" created

Verify that both are up and running; this could take a few minutes.

[kubernetes@k8s-master ntnx]$ kubectl get pvc
NAME             STATUS    VOLUME                                                                               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ntnx-pvc-nginx   Bound     e4b14049d8cbe57854b3ece3ca11949ff68e1637c6179e6ae66c12a7720099b5-nutanix-k8-volume   3Gi        RWO            silver         3m
[kubernetes@k8s-master ntnx]$ kubectl get pods
NAME                       READY     STATUS    RESTARTS   AGE
nginx01-754cd6c757-q7sk7   1/1       Running   0          1m
ntnx-abs-d9687cdb8-5wh8s   1/1       Running   0          46m

Get inside the NGINX container.

[kubernetes@k8s-master ~]$ kubectl exec -it nginx01-754cd6c757-q7sk7 -- /bin/bash

Create a simple HTML file

root@nginx01-754cd6c757-q7sk7:/# cat <<EOF > /usr/share/nginx/html/index.html
YEAH!!
The Kubernetes volume plugin for Nutanix works!!
EOF

Exit the container

root@nginx01-754cd6c757-q7sk7:/# exit
exit

Expose the NGINX container to the network.

[kubernetes@k8s-master ~]$ kubectl create service nodeport nginx --tcp=80:80
service "nginx" created

Check if the NGINX service is listening.

[kubernetes@k8s-master ~]$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        44m
nginx        NodePort    10.109.70.251   <none>        80:32180/TCP   2m
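The PORT(S) column reads servicePort:nodePort/protocol, so 32180 is the port NGINX answers on at every worker node. If you ever need to script against it, the node port can be peeled out of that string with plain shell parameter expansion (the sample value below is copied from the output above):

```shell
# sample PORT(S) value from the kubectl output above
ports="80:32180/TCP"

nodeport=${ports#*:}    # strip the service port ("80:")
nodeport=${nodeport%/*} # strip the protocol ("/TCP")
echo "$nodeport"
```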

Let’s use curl to test whether we can display the NGINX index.html page we created.

[kubernetes@k8s-master ~]$ curl k8s-worker01:32180
YEAH!!
The Kubernetes volume plugin for Nutanix works!!

If you have the same output, we can delete the NGINX pod to check if our persistent storage works.

[kubernetes@k8s-master ~]$ kubectl delete pod nginx01-754cd6c757-q7sk7

If we execute the get pods command quickly enough, we can see that our first NGINX container is Terminating while a new one is being created.

[kubernetes@k8s-master ~]$ kubectl get pods
NAME                       READY     STATUS              RESTARTS   AGE
nginx01-754cd6c757-skgct   0/1       ContainerCreating   0          2s
nginx01-754cd6c757-q7sk7   1/1       Terminating         0          5m
ntnx-abs-d9687cdb8-7mhvb   1/1       Running             0          2h

After a couple of seconds the new NGINX container is up and running and should serve the same index.html. Use curl once again to verify.

[kubernetes@k8s-master ~]$ curl k8s-worker01:32180
YEAH!!
The Kubernetes volume plugin for Nutanix works!!

If you see the same output as above (or your own text), congratulations: your NGINX container has been deployed with the help of the Kubernetes Volume Plugin for Nutanix ABS.

If you have some suggestions or tips, just leave a comment below.

Thanks for reading.
