Deploying Tanzu on VDS Networking-Part 2: Configure Workload Management

In the first post of this series, I talked about the vSphere with Tanzu architecture and explained the deployment & configuration of the HA Proxy appliance, which acts as a load balancer for the Supervisor Cluster and TKG guest clusters.

In this post, I will walk through the steps of enabling Workload Management, which deploys & configures the Supervisor Cluster.

This series of blogs will cover the following topics:

1: HA Proxy Deployment & Configuration

2: Enable Workload Management

3: How to Deploy Applications in TKG Cluster

Note: Before enabling Workload Management, make sure you have a vSphere with Tanzu license available. If you don’t have one, you can register for the product to get a 60-day evaluation license.

Prerequisites for Enabling Workload Management

  • A vSphere cluster created, with both DRS and HA enabled on the cluster.
  • A vSphere Distributed Switch created, with all ESXi hosts added to the VDS.
  • A storage policy created for the Supervisor Cluster VMs. You can also use the vSAN Default Storage Policy in POC/lab environments.
  • A Subscribed Content Library for importing the virtual machine images used when deploying TKG clusters. The subscription URL for the content library is https://wp-content.vmware.com/v2/latest/lib.json, so your vCenter Server must be able to reach the internet to pull the images from the VMware repository (a quick reachability check is sketched after this list).
  • A static IP address range for the TKG guest cluster. The range belongs to the subnet chosen for the Workload Network.
  • A static IP address range for the load balancer VIPs. The range should match what you specified during HA Proxy deployment (on the Load Balancer configuration page).
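A broken subscription URL usually only surfaces later as a failed library sync, so it is worth confirming reachability up front. Below is a minimal Python sketch that simply fetches the library index; run it from a machine that has the same outbound access as your vCenter Server (the 30-second timeout is arbitrary).

    import json
    import urllib.request

    # Subscription URL for the TKG virtual machine images (from the prerequisites above).
    LIB_URL = "https://wp-content.vmware.com/v2/latest/lib.json"

    # Fetch the content library index; success means the URL is reachable over HTTPS.
    with urllib.request.urlopen(LIB_URL, timeout=30) as resp:
        index = json.load(resp)
        print("HTTP status:", resp.status)

    print("Top-level keys in the library index:", list(index.keys()))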

Once the above prerequisites are satisfied, we are ready to deploy the Supervisor Cluster.

Enable Workload Management

Log in to the vCenter Web Client, navigate to Menu > Workload Management, and click Get Started.

Select the network backing type as “vCenter Server Network” and hit Next. 

Select the cluster where you want to enable Workload Management and hit Next. 

Select the control plane size. For lab environments, Tiny size works just fine. 

Select the storage policy that you have created for Workload Management. I am using the default vSAN policy. 

Click on View Datastore to associate the policy with a datastore. 

On the load balancer page:

  • Enter the hostname/FQDN of the HA Proxy appliance.
  • Select the type as HA Proxy.
  • The Data Plane API address is the HA Proxy management address; set the port to 5556.
  • Enter the credentials with which the Supervisor Cluster will authenticate to HA Proxy. These were specified during HA Proxy deployment.
  • The IP Address Ranges field is very important; a mistake here can break your deployment. The range of IP addresses comes from the CIDR you specified in the LB configuration of HA Proxy.

Note: In my deployment, I used 172.18.90.80/29; using a CIDR calculator, I worked out the IP range and entered it here (a small sketch that does the same calculation follows this list). If you have forgotten which CIDR you used for the HA Proxy LB configuration, SSH to the HA Proxy appliance and check the anyip-routes.cfg file.

  • The Server Certificate Authority thumbprint can be obtained by connecting to the HA Proxy appliance over SSH and checking the /etc/haproxy/ca.crt file.
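If you would rather script this than use a CIDR tool, the small sketch below (Python, using the /29 from my lab and a placeholder management IP) expands the CIDR into the start-to-end range the wizard expects, and also confirms that the Data Plane API is listening on port 5556 before you fill in the form.

    import ipaddress
    import socket

    # Placeholder management address for the HA Proxy appliance; the /29 is the
    # load balancer CIDR from my lab. Replace both with your own values.
    HAPROXY_MGMT_IP = "172.16.10.60"
    DATAPLANE_PORT = 5556
    LB_CIDR = "172.18.90.80/29"

    # Expand the CIDR into the start-end range for the IP Address Ranges field.
    vip_net = ipaddress.ip_network(LB_CIDR)
    vips = list(vip_net)
    print(f"{LB_CIDR} -> enter the range {vips[0]}-{vips[-1]} ({vip_net.num_addresses} addresses)")

    # Cheap pre-flight check that the Data Plane API is reachable on port 5556.
    with socket.create_connection((HAPROXY_MGMT_IP, DATAPLANE_PORT), timeout=5):
        print(f"Data Plane API reachable at {HAPROXY_MGMT_IP}:{DATAPLANE_PORT}")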

Punch in all the gathered info and hit Next. 

Configure Management Network: Select the port group that carries management traffic in your SDDC and punch in the IP address, subnet & gateway details.

Note: Make sure that the address you specify as the Starting IP Address is followed by 4 more consecutive free IPs (5 consecutive addresses in total).
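A quick way to sanity-check this is to list the block the wizard will reserve: the starting IP plus the next four addresses (three control plane VMs, a floating IP, and one spare). The starting address below is only an example.

    import ipaddress

    # Example starting IP for the management network page; use your own value.
    start_ip = ipaddress.ip_address("172.16.10.50")

    # The wizard reserves the starting IP plus the next four consecutive addresses.
    reserved = [start_ip + offset for offset in range(5)]
    print("Make sure these five addresses are free:", ", ".join(str(ip) for ip in reserved))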

The network topology for Workload Management looks as shown in the image below. It gives a good idea of how the various constructs are connected to each other.

Configure Workload Network: The Supervisor Cluster and the TKG guest clusters communicate with each other over the Workload Network.

  • Don’t change the Service CIDR; it is an internal, non-routable network.
  • Punch in the DNS server address.

Click the Add button to configure the workload network settings.

  • Provide a name for the workload network. The default name that shows up here is network-1; change it to something more meaningful.
  • Select the port group that will carry the workload network traffic.
  • Punch in the IP address range, subnet mask, and gateway address for the workload network.

You can add more than one workload network for your TKG clusters. This is how it looks post-configuration. Hit Next to continue.
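If your HA Proxy was deployed with the default (two-NIC) configuration, the workload node IPs and the VIP range share the same network, so it is worth checking that the two ranges do not overlap. The CIDRs below are placeholders; plug in the ranges you actually plan to use.

    import ipaddress

    # Placeholder ranges; replace with your own workload node range and the
    # VIP range handed to HA Proxy earlier in the wizard.
    workload_nodes = ipaddress.ip_network("172.18.90.0/26")
    lb_vips = ipaddress.ip_network("172.18.90.80/29")

    if workload_nodes.overlaps(lb_vips):
        print("Ranges overlap - pick a different workload node range or VIP range.")
    else:
        print("No overlap between the workload node range and the load balancer VIP range.")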

Select the content library by clicking Add and choosing the subscribed content library you created earlier.

Hit Finish to kick off the Workload Management enablement.

It takes roughly 30 minutes for the configuration to finish. Be patient 🙂

During the configuration phase, you will see the automated deployment and configuration of the Supervisor Cluster VMs.

Once the cluster configuration completes, the Config Status will report Running and you will see the Control Plane VIP, to which you can connect to download the utilities for provisioning and managing TKG guest clusters.
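If you want a quick way to confirm the control plane VIP is up before browsing to it, the sketch below simply completes a TLS handshake against port 443. The VIP is a placeholder; the supervisor presents a self-signed certificate at this stage, so verification is skipped for this one-off check only.

    import socket
    import ssl

    # Placeholder; use the Control Plane address shown under Workload Management.
    CONTROL_PLANE_VIP = "172.18.90.80"

    # Skip certificate verification for this one-off reachability check.
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE

    with socket.create_connection((CONTROL_PLANE_VIP, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=CONTROL_PLANE_VIP) as tls:
            print("TLS handshake with the control plane VIP succeeded:", tls.version())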

And that’s it for this post. In part 3 of this series, I will cover how to deploy applications in a TKG cluster.

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing 🙂
