Deploying Tanzu on VDS Networking-Part 3: Create Namespace & Deploy TKG Cluster

In the last post of this series, I demonstrated the deployment of the supervisor cluster, which acts as the base for deploying Tanzu Kubernetes Grid (TKG) clusters. In this post, we will learn how to create a namespace and deploy a TKG cluster, and once the TKG cluster is up and running, how to deploy a sample application on top of it.

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Let’s get started.

Create Namespace

A namespace enables a vSphere administrator to control the resources that are available for developers to provision TKG clusters. Using namespaces, a vSphere administrator stops developers from consuming more resources than they are assigned.

To create a namespace, navigate to Menu > Workload Management and click Create Namespace.

  • Select the cluster where the namespace will be created and provide a name for the namespace.  
  • Also, select the workload network for your namespace. 

Once the namespace is created, we need to assign resource limits/quotas to it.

First, we have to assign permissions for the namespace. If you have AD/LDAP integrated with vCenter and have created users/groups in AD/LDAP, then you can control which user/group will have access to this namespace. 

Since this is a lab deployment, I provided access to the default SSO administrator.

To configure resource (compute & storage) limits, select the namespace, click on the Configure tab, and go to the Resource Limits page.

Click on the Edit button and specify the limit. 

Download Kubernetes CLI Tools

The Kubernetes CLI Tools help you configure & manage TKG clusters. To download the CLI tools, connect to https://<control-plane-lb-ip>/.

This is the load balancer IP that the supervisor cluster control plane gets post-deployment.

You can download the tools for Linux, Windows, and Mac OS by changing the OS selection as shown in the screenshot below.

In my lab, I downloaded the tools for Windows and added the bin directory from the extracted bundle to the PATH environment variable so that I can run the kubectl command from anywhere within the OS.
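For reference, a minimal sketch of the PATH change on Windows; the C:\vsphere-plugin\bin path is just my assumption for wherever you extracted the CLI tools bundle:

setx PATH "%PATH%;C:\vsphere-plugin\bin"

Open a new command prompt afterwards, and kubectl and kubectl-vsphere should resolve from any directory.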

Validate Control Plane (Supervisor Cluster)

Before attempting to deploy TKG clusters, it’s important to verify a few details of the deployed control plane.

1: Connect to the supervisor cluster.

Note: When prompted for credentials, provide the credentials of the user who was assigned permissions on the namespace.
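A sketch of the login command, using the kubectl-vsphere plugin from the CLI tools bundle; in my lab I logged in with the default SSO administrator, so replace the server address and username to match your environment:

kubectl vsphere login --server=<control-plane-lb-ip> --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify

The --insecure-skip-tls-verify flag skips certificate validation, which is acceptable for a lab with self-signed certificates.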

2: Switch context to your namespace
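The context name matches the namespace name; a minimal sketch, assuming a namespace called demo-namespace:

kubectl config get-contexts
kubectl config use-context demo-namespace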

3: Get Control Plane Info

Before deploying the TKG cluster, ensure that the control plane nodes are in a Ready state and that the storage policy we specified during workload enablement appears as a storage class.

Also, ensure that the virtual machine images imported into the subscribed content library are fully synchronized.
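The checks above map to a handful of kubectl commands; a quick sketch (names and counts will differ in your environment):

# Control plane nodes should report a Ready status
kubectl get nodes

# The storage policy assigned to the namespace should show up as a storage class
kubectl get storageclass

# VM images synced from the subscribed content library
kubectl get virtualmachineimages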

Deploy & Configure TKG Cluster

Deployment of the TKG cluster is done via a manifest file in YAML format.  The manifest file contains the following information: 

  • The number of control plane nodes.
  • The number of worker nodes.
  • The size (VM class) of the control plane and worker nodes.
  • The storage class to use.
  • The VM image to be used for the nodes.

Once the manifest file is populated with the above info, we can invoke the kubectl command to deploy the TKG cluster.

In my lab, I deployed a sample TKG cluster with 3 control plane nodes and 3 worker nodes using the below manifest file.
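A minimal sketch of such a manifest, using the run.tanzu.vmware.com/v1alpha1 API; the cluster name, namespace, VM class, storage class, and Kubernetes version below are placeholders from my lab, so substitute the values that exist in your environment:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01          # placeholder cluster name
  namespace: demo-namespace     # the vSphere namespace created earlier
spec:
  distribution:
    version: v1.18.5            # must match an image synced in the content library
  topology:
    controlPlane:
      count: 3                  # number of control plane nodes
      class: best-effort-small  # VM class (size) for the nodes
      storageClass: tanzu-storage-policy
    workers:
      count: 3                  # number of worker nodes
      class: best-effort-small
      storageClass: tanzu-storage-policy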

Command to deploy TKG cluster: kubectl apply -f <manifest-file-name>

TKG deployment takes roughly 20-30 minutes, and during deployment you will see the control plane and worker nodes getting deployed & configured.

Monitor TKG Deployment

The below commands can be used to monitor a TKG deployment and verify that the cluster is up and ready to service workloads.
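A sketch of the commands I run in the supervisor namespace context while the deployment progresses:

# High-level cluster status (phase, node counts)
kubectl get tanzukubernetescluster

# Cluster API machine objects backing the nodes
kubectl get machines

# The node VMs being provisioned on vSphere
kubectl get virtualmachines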

The command kubectl describe tanzukubernetescluster provides a lot of info about a TKG cluster, such as node/VM status, the cluster API endpoint, the Pods CIDR block, etc.

And that’s it for this post. In future posts of this series, I will demonstrate how vSphere with Tanzu works with NSX-T and NSX Advanced Load Balancer.

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing 🙂
