Deploying Kubernetes VMware Cloud Storage

By Style Sync

Are you looking for a straightforward way to deploy a Kubernetes VMware CSI driver? How complex is this task, and how long will it take you to deploy it successfully? Is there a quick way to do it?

If you have already tried to deploy a Kubernetes VMware CSI driver, I'm sure you already know how complicated this task can be; but if you are looking for an easier and faster way, then this blog is for you. I have broken the process into smaller pieces to help you grasp the many steps needed to complete the installation. Each task is critical to the success of the next one, and the next, and so on; therefore, I have included verification checks after each step to confirm the task was completed correctly.

Note: It took me several weeks to successfully deploy Kubernetes VMware Cloud Native Storage in my environment. I read and jumped between many articles and blogs, and made many mistakes, before arriving at the workable solution you see here. This blog highlights the steps I took, with some modifications, to deploy the K8s VMware CPI and CSI in my environment. I am incredibly pleased to be able to share it with you.

Without further delay, let’s get our hands dirty, starting with a review of the lab I used:

Infrastructure Prerequisites

To follow the steps on this blog, you will need the following:

    • VMware vCenter – v7.0.1 build 17491160
    • VMware vSphere Cluster
    • ESXi – v7.0.1 build 17551050
    • iSCSI Datastore

With the infrastructure above, I deployed three virtual machines: one used as the K8s Master node, and two configured as K8s Worker nodes. You can learn how to deploy the K8s cluster by reading my previous blog post. To keep all the information in one blog, I will quickly highlight the important steps needed to make the integration work successfully.

We will start with:

Docker Deployment

After you set up and deploy the VMs as described in the previous blog, we need to deploy Docker on all the nodes. To do that, follow these steps:

    • Deploy Docker: sudo apt install -y docker.io
    • Set up the daemon parameters:
sudo tee /etc/docker/daemon.json >/dev/null <<EOF
 {
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
    • Create the systemd drop-in directory: sudo mkdir -p /etc/systemd/system/docker.service.d
    • Reload systemd: sudo systemctl daemon-reload
    • Restart the Docker service: sudo systemctl restart docker
    • Enable the Docker service: sudo systemctl enable docker.service
    • Check that Cgroup driver is working: sudo docker info | egrep "Server Version|Cgroup Driver"
    • Expect output similar to the below:
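The version line is illustrative and will vary with your distribution's docker.io package; the important line is the cgroup driver, which must read systemd:

Server Version: 19.03.8
Cgroup Driver: systemd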

Kubernetes Cluster Deployment

Completing the previous steps successfully sets us up for the deployment of the Kubernetes cluster. Start by installing the kubeadm packages on all nodes; my previous blog post covers this in detail, and a minimal sketch follows:
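This sketch assumes an Ubuntu node and the upstream Kubernetes apt repository as it existed at the time of writing; treat it as a reminder rather than the exact steps from my previous post:

# Kubernetes requires swap to be disabled on every node
sudo swapoff -a
# Add the Kubernetes apt repository and install the kubeadm packages
sudo apt update && sudo apt install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update && sudo apt install -y kubelet kubeadm kubectl
# Hold the packages so a routine apt upgrade cannot change the cluster version
sudo apt-mark hold kubelet kubeadm kubectl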

Now we are ready to create the Kubernetes cluster by running the steps below (run these on the Master node only):

    • Start the deployment: sudo kubeadm init
    • To start using the cluster, you must run the following steps (output from the deployment):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
    • and then run the below commands to configure kubectl auto-completion:
echo 'source <(kubectl completion bash)' >>~/.bashrc
source ~/.bashrc
    • Use the kubeadm join command printed at the end of the init output to join all the worker nodes to the cluster; its general shape is shown below.
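kubeadm init generates the join command for you; the line below only illustrates its shape, with placeholder values that must come from your own output:

sudo kubeadm join <master IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>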

The last step is to apply a network add-on to the cluster. In my environment, and after some testing, I am comfortable using the Calico add-on:

    • curl https://docs.projectcalico.org/manifests/calico.yaml -O
    • kubectl apply -f calico.yaml
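Before moving on, I suggest confirming that the Calico pods reach the Running state and that every node reports Ready:

kubectl get pods -n kube-system
kubectl get nodes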

Deploy the VMware Cloud Provider Interface

At this stage, our Kubernetes cluster is up and running and ready to host the VMware Cloud Provider Interface (CPI). To deploy the CPI successfully, we must run through the following steps. Let's get started:

    • Set disk.EnableUUID=1 on each node VM using the govc command. To install govc, run the following steps:
wget https://github.com/vmware/govmomi/releases/download/v0.20.0/govc_linux_amd64.gz
gunzip govc_linux_amd64.gz
mv govc_linux_amd64 govc
sudo chown root govc
sudo chmod ug+r+x govc
sudo mv govc /usr/local/bin/.
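# Optional sanity check (my addition, not part of the original steps):
# govc should now be on the PATH and able to report its version
govc version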
    • After the installation has completed, set the following environment variables to connect to your vCenter:
export GOVC_INSECURE=1

export GOVC_URL='<username@domain>:<password>@<vCenter FQDN or IP>'
      • Get a list of the VMs in your VMware datacenter by running:

govc ls -k /<Datacenter>/vm

    • From the list of VMs, identify all the K8s nodes and run the following command for each node (a loop version is sketched below):
      • For example: govc vm.change -vm "/OmnitraDataCenter/vm/K8Master" -e="disk.enableUUID=1" -k
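# To save some typing, the change can be applied in a loop; the worker VM
# names below are illustrative placeholders, so substitute your own:
for vm in K8Master K8Worker1 K8Worker2; do
  govc vm.change -vm "/OmnitraDataCenter/vm/$vm" -e="disk.enableUUID=1" -k
done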
    • Taint the nodes. First, check which taints the nodes already carry using the following command:
kubectl describe nodes | egrep "Taints:|Name:"
    • We expect to see the following response for the Master node:
node-role.kubernetes.io/master:NoSchedule
    • If the Master node does not have the above taint, use the following command to add it:

kubectl taint nodes <Node Name> node-role.kubernetes.io/master:NoSchedule

    • The Worker nodes should carry the following taint:
node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
    • If the Worker node does not have the above taint, you can add it using the following command:
kubectl taint nodes <Node Name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
    • At this point, running the taint check in my environment shows the Master node with the node-role.kubernetes.io/master:NoSchedule taint and both Worker nodes with the node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule taint. Before running through the installation of the CPI, make sure this is what you see in your environment too.

    • The last preparation step is to create the vsphere.conf file with the following parameters.

Note: this file must be placed under /etc/kubernetes:

[Global]
user = "administrator@os.org"
password = "P@ssw0rd"
port = "443"
insecure-flag = "1"
[VirtualCenter "oasisvcsa.oasis.org"]
datacenters = "OmnitraDataCenter"
[Workspace]
server = "oasisvcsa.oasis.org"
datacenter = "OmnitraDataCenter"
default-datastore = "SYNDataStore"
[Disk]
scsicontrollertype = pvscsi
 
    • Create a config map by running the following command from /etc/kubernetes, so the file is found (see the check below):
      kubectl create configmap cloud-config --from-file=vsphere.conf --namespace=kube-system
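A quick way to confirm the config map exists (my suggested check):

kubectl get configmap cloud-config --namespace=kube-system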
       
    • With the preparation work completed, we are ready to run through the installation of the CPI. Use the following commands:
      kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
      
      kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
      
      kubectl apply -f https://github.com/kubernetes/cloud-provider-vsphere/raw/master/manifests/controller-manager/vsphere-cloud-controller-manager-ds.yaml
      
    • Let's ensure that the CPI is deployed and running without issues by running through the following commands:
    • Check the pods: kubectl get pods -n kube-system
    • You can also use the following commands to get more information about the status of the CPI pod. The pod name suffix (vrtlh here) is unique to my environment, so substitute the name returned by the previous command. Make sure there are no errors:
      • kubectl describe pod vsphere-cloud-controller-manager-vrtlh -n kube-system
      • kubectl logs vsphere-cloud-controller-manager-vrtlh -n kube-system
      • Make sure the providerID is set on all the nodes: kubectl describe nodes | grep "ProviderID"
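Each node should report a provider ID in the vsphere:// format, similar to the below; the UUID is unique to each VM:

ProviderID: vsphere://<VM UUID>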

 

Deploy VMware CSI Driver

Deploying the CPI is not enough for a complete integration between Kubernetes and VMware; we must run through the following final steps to finish the job.

    • Create a VMware credential file to connect to the vCenter. The following example, csi-vsphere.conf, is a configuration file to be used with block storage:
[Global]
cluster-id = "<cluster name>"

[VirtualCenter "<vcenter IP/FQDN>"]
insecure-flag = "true"
user = "<username>"
password = "<password>"
port = "443"
datacenters = "OmnitraDataCenter"
    • After creating the above file, you are ready to run the following command to create the secret file:
kubectl create secret generic vsphere-config-secret --from-file=csi-vsphere.conf --namespace=kube-system
    • Verify that the secret was created successfully:
kubectl get secret vsphere-config-secret --namespace=kube-system
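Expect output similar to the below (the age will reflect your own timing):

NAME                    TYPE     DATA   AGE
vsphere-config-secret   Opaque   1      30s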
    • That’s all the preparation necessary before starting the deployment.

Run the following commands to begin the deployment:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/rbac/vsphere-csi-controller-rbac.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/deploy/vsphere-csi-controller-deployment.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/deploy/vsphere-csi-node-ds.yaml

    • Verify that the CSI driver was installed successfully by running the following command (expected output is shown below):
      kubectl get deployment --namespace=kube-system
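On a healthy install, the controller deployment reports ready, similar to the sample below (the age will differ, and the replica count depends on the manifest version):

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
vsphere-csi-controller   1/1     1            1           2m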
    • You can also run the following commands to check whether there are any events on the controller:

Note: run the first command to get the controller pod name; the suffix shown here is from my environment, so substitute your own:

kubectl get pods --all-namespaces

kubectl describe pods vsphere-csi-controller-6f7484d584-nt5c4 -n kube-system

    • To review the logs, run the following command:

kubectl logs vsphere-csi-controller-6f7484d584-nt5c4 -n kube-system vsphere-csi-controller

Testing

After the previous steps, it is time to put our deployment to the test. To do this, we will use the following .yaml file to create a storage class and then claim space from it. Once the file is applied successfully, we expect to see the storage created on the vCenter, ready for any Kubernetes application to consume. As in my previous blog post, I like to combine all the tasks within one .yaml file. Below is a file called storageclass.yaml; it contains the storage class followed by the claim:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-block
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # Optional
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true  # Optional: only applicable to vSphere 7.0U1 and above
parameters:
  datastoreurl: "ds:///vmfs/volumes/5ea13a2d-b7d00874-d097-1c697a027fe1/"  # Optional parameter
  csi.storage.k8s.io/fstype: "ext4"  # Optional parameter
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-block
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: sc-block
After we create the above file, we can run the following command:
 
kubectl apply -f storageclass.yaml
 
To confirm the apply command ran successfully and as expected, we can check that a volume was created on vCenter under Container Volumes (Cluster > Monitor > Cloud Native Storage).
You can also run the following commands to confirm that the apply command was successful; sample output follows below:
 
kubectl get sc
 
kubectl get pvc
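If everything worked, the two commands return output similar to the below; names, ages, and the generated volume name will differ in your environment:

NAME                 PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
sc-block (default)   csi.vsphere.vmware.com   Delete          Immediate           true                   1m

NAME        STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-block   Bound    pvc-<uuid>   5Gi        RWO            sc-block       1m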
Summary

The steps we have just gone through will help you deploy and get started with the Kubernetes VMware CPI/CSI driver. As I mentioned at the beginning of this blog, these steps were very complex and difficult to master at first; but after my first success, I have been able to re-deploy the Kubernetes cluster and integrate it with VMware by following these steps in under twenty minutes. As you will have noticed, the success of the deployment depends heavily on correct preparation. I encourage you to always run the check commands I provided after each step, to prevent errors in one step from causing problems in the next.
Finally, if you are looking for a quick way to deploy the CPI/CSI, try the steps in this blog, share your feedback with us, and share the blog with others.
