vSphere with Tanzu Leveraging NSX ALB-Part-1: Avi Controller Deployment & Configuration

With the release of vSphere 7.0 U2, VMware introduced support for the Avi Load Balancer (NSX Advanced Load Balancer) in vSphere with Tanzu, so fully supported load balancing is now available for Kubernetes. Prior to vSphere 7.0 U2, HAProxy was the only supported load balancer when vSphere with Tanzu had to be deployed on vSphere Distributed Switch (vDS) based networking.

HAProxy is not recommended for production workloads as it has its own limitations. NSX ALB is a next-generation load-balancing solution, and its integration with vSphere with Tanzu enables customers to run production workloads in Kubernetes clusters.

When vSphere with Tanzu is enabled with NSX ALB, the ALB Controller VM has access to the Supervisor Cluster, the Tanzu Kubernetes Grid (TKG) clusters, and the applications/services that are deployed on top of the TKG clusters.

The diagram below shows the high-level topology of NSX ALB and vSphere with Tanzu.

In this post, I will cover the steps of deploying & configuring NSX ALB for vSphere with Tanzu.

  • Download the NSX ALB (Avi Networks) installer from here
  • Review Avi Networks official docs from here
  • Review VMware official docs from here

Networking Pre-requisites

For a production setup, you need 3 subnets:

  • One for the Avi Controller/SE management. The Supervisor Cluster will also connect to this network.
  • One for Avi SE VIP for load balancing virtual services that get created when vSphere with Tanzu is deployed.
  • One for the applications/services that will be deployed on top of the TKG cluster. 

This is what my lab topology looks like. 

In my lab, I have the below subnets/IP pools configured.

NSX ALB Deployment

Once you have downloaded the Avi Controller OVA, deploy it using the standard VMware OVA deployment procedure.

Deployment is pretty simple; you just need to punch in the management port group and IP address details.

Once the OVA is deployed and the Controller VM boots up, access the UI by typing https://<avi-controller-fqdn>/
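The controller services take a few minutes to initialize after first boot, so the UI may not respond right away. Below is a minimal Python sketch that polls the portal until it answers; the FQDN is a placeholder for my lab, and the check simply waits for an HTTP 200 from the login page.

```python
import time

import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

CONTROLLER = "https://avi-controller.lab.local/"  # hypothetical FQDN for this lab

# Poll the portal until the controller web service answers
# (self-signed certificate at this point, hence verify=False).
while True:
    try:
        resp = requests.get(CONTROLLER, verify=False, timeout=5)
        if resp.status_code == 200:
            print("Controller portal is up, proceed with the initial setup wizard.")
            break
    except requests.exceptions.ConnectionError:
        pass
    time.sleep(30)
```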

Configure a password and email account for the admin user and click on Create Account. 

Provide DNS Servers, DNS Search Domain, and Backup Passphrase to be used for the controller VM. 

Scroll down on the same page to enter NTP details. Hit Next to continue.

If you have an email server configured in your environment, enter the details of the server here and hit Next. For a lab/POC, it can be left at the default, as shown in the screenshot below.

Note: You can also set it to None if you wish.

Select VMware as Orchestrator.

  • Provide the vCenter Server information. This is the vCenter where vSphere with Tanzu will be deployed. 
  • Make sure the Avi Controller gets write access to the vCenter Server by toggling the Permissions button.
  • Select None for SDN Integration.

Select the Datacenter where Avi Service Engines will be deployed.

For Service Engine management IP addresses, I prefer static IPs. You can select DHCP if you have a DHCP server running on your Management Network. 

Select the management network port group and specify the IP address details. Configure an IP address pool, which will be consumed by the Service Engines when virtual services are configured.

Select No for multi-tenant support.

And that’s it for the initial configuration. You will be redirected to the Applications Dashboard page. 

NSX ALB Configuration

Assign Avi Controller License

If you have an Avi Enterprise license, you can assign it by navigating to the Administration > Settings > Licensing page and clicking on Apply Key.

Punch in your license key and hit Save.

Configure the IPAM and DNS Profiles

IPAM and DNS profiles are configured so that the Service Engines know which IP pool and DNS configuration to use.

IPAM/DNS profiles are created under Templates > Profiles > IPAM/DNS Profiles. 

First, let’s create an IPAM profile. 

  • Provide a name for the profile and select type as Avi Vantage IPAM.
  • Select the Cloud for which we are creating the IPAM profile and choose the network that will be used by the Avi Service Engine datapath. 

Similarly, create a DNS profile and enter your domain name in the profile. You can leave the other values at their defaults.
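If you prefer to automate this step, the same two profiles can be created through the Avi REST API. Below is a minimal sketch using the Avi Python SDK (pip install avisdk); the controller FQDN, credentials, API version, network name (VLAN-1800-SE-VIP), and domain name are placeholders for my lab, and field names may vary slightly between Avi releases.

```python
import json

from avi.sdk.avi_api import ApiSession

# Lab placeholders - adjust to your environment.
api = ApiSession.get_session("avi-controller.lab.local", "admin", "VMware1!",
                             tenant="admin", api_version="20.1.5")

# The SE VIP/datapath port group as discovered by the default vCenter cloud.
vip_network = api.get_object_by_name("network", "VLAN-1800-SE-VIP")

ipam_profile = {
    "name": "tanzu-ipam",
    "type": "IPAMDNS_TYPE_INTERNAL",  # "Avi Vantage IPAM" in the UI
    "internal_profile": {
        "usable_networks": [{"nw_ref": vip_network["url"]}],
    },
}
print(api.post("ipamdnsproviderprofile", data=json.dumps(ipam_profile)).json())

dns_profile = {
    "name": "tanzu-dns",
    "type": "IPAMDNS_TYPE_INTERNAL_DNS",
    "internal_profile": {
        "dns_service_domain": [{"domain_name": "tanzu.lab"}],
    },
}
print(api.post("ipamdnsproviderprofile", data=json.dumps(dns_profile)).json())
```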

Associate the newly created IPAM/DNS profiles with the default cloud (vCenter) by navigating to Infrastructure > Clouds and editing the default cloud.

Under IPAM/DNS Profiles, select the profiles that we created earlier and hit Save.
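The same association can also be scripted; a minimal sketch, assuming the profiles from the previous step and the same lab placeholders:

```python
import json

from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("avi-controller.lab.local", "admin", "VMware1!",
                             tenant="admin", api_version="20.1.5")

cloud = api.get_object_by_name("cloud", "Default-Cloud")
ipam = api.get_object_by_name("ipamdnsproviderprofile", "tanzu-ipam")
dns = api.get_object_by_name("ipamdnsproviderprofile", "tanzu-dns")

# Point the vCenter cloud at the IPAM and DNS profiles created above.
cloud["ipam_provider_ref"] = ipam["url"]
cloud["dns_provider_ref"] = dns["url"]

resp = api.put("cloud/%s" % cloud["uuid"], data=json.dumps(cloud))
print(resp.status_code)
```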


Configure Controller Certificate

The default certificate that gets created when the Avi Controller is deployed doesn't have the hostname/FQDN of the controller node, so we need to replace it with either a self-signed certificate or a CA-signed certificate.

To create a self-signed certificate for the controller node, navigate to Templates > Security > SSL/TLS Certificates and click on Create Controller Certificate. 

Punch in relevant details as per your environment and hit Save.

Once the certificate is created, we need to assign it to the controller node. This is done under Administration > Settings > Access Settings by clicking on the Edit button.

Remove the default certificates under SSL/TLS Certificate.

From the dropdown menu, select the certificate that you created in the previous step. 

Refresh the web page to verify the newly minted controller certificate is now configured. 
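For reference, assigning the certificate to the controller portal can also be done via the API; a minimal sketch, assuming a certificate named avi-controller-cert (a lab placeholder) and the same SDK session as in the earlier sketches:

```python
import json

from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("avi-controller.lab.local", "admin", "VMware1!",
                             tenant="admin", api_version="20.1.5")

# Look up the certificate created in the UI (name is a lab placeholder).
cert = api.get_object_by_name("sslkeyandcertificate", "avi-controller-cert")

# The portal certificate is referenced from the system configuration object.
sysconfig = api.get("systemconfiguration").json()
sysconfig["portal_configuration"]["sslkeyandcertificate_refs"] = [cert["url"]]

resp = api.put("systemconfiguration", data=json.dumps(sysconfig))
print(resp.status_code)
```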

Configure a Service Engine Group

You can control SE placement, resource allocation, and similar settings by configuring a Service Engine Group.

To create/edit an SE group, navigate to Infrastructure > Service Engine Group and edit the Default-Group.

Navigate to the Advanced tab and specify the SE prefix and folder where SEs will be placed post-deployment. 
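The same settings can be applied through the API; a minimal sketch, assuming the Default-Group and lab placeholder values for the prefix and folder (field names assumed from the 20.1.x object model):

```python
import json

from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("avi-controller.lab.local", "admin", "VMware1!",
                             tenant="admin", api_version="20.1.5")

seg = api.get_object_by_name("serviceenginegroup", "Default-Group")

# Prefix for SE VM names and the vCenter folder where SEs land post-deployment
# (values are lab placeholders).
seg["se_name_prefix"] = "tanzu"
seg["vcenter_folder"] = "AviSE"

resp = api.put("serviceenginegroup/%s" % seg["uuid"], data=json.dumps(seg))
print(resp.status_code)
```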

Configure Service Engine Virtual IP Network

We need to define an IP pool on the SE VIP network so that VIPs can be assigned to the Service Engines when virtual services are created.

To specify a VIP IP Pool, navigate to Infrastructure > Networks and edit the port group that you have created for Avi SE VIP/Datapath. 

Click on the Add Subnet button.

Specify the subnet for Avi SE VIP and checkmark “Use Static IP Address for VIPs and SE”.

Click on Add Static IP Address Pool and specify a pool of IP addresses that belongs to the SE VIP network.

Note: Ensure DHCP Enabled and Exclude Discovered Subnets are unchecked for Virtual Service Placement.

Hit Save to continue.
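The equivalent API call updates the network object; a minimal sketch, assuming a VIP port group named VLAN-1800-SE-VIP and placeholder subnet/range values (the static-range schema differs between Avi releases, so treat the field names as assumptions):

```python
import json

from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("avi-controller.lab.local", "admin", "VMware1!",
                             tenant="admin", api_version="20.1.5")

# SE VIP/datapath port group (lab placeholder name).
net = api.get_object_by_name("network", "VLAN-1800-SE-VIP")

# Define the VIP subnet and a static range usable for both VIPs and SEs.
# Subnet and range values are lab placeholders.
net["dhcp_enabled"] = False
net["exclude_discovered_subnets"] = False
net["configured_subnets"] = [{
    "prefix": {"ip_addr": {"addr": "172.16.80.0", "type": "V4"}, "mask": 24},
    "static_ip_ranges": [{
        "type": "STATIC_IPS_FOR_VIP_AND_SE",
        "range": {
            "begin": {"addr": "172.16.80.100", "type": "V4"},
            "end": {"addr": "172.16.80.200", "type": "V4"},
        },
    }],
}]

resp = api.put("network/%s" % net["uuid"], data=json.dumps(net))
print(resp.status_code)
```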

Configure Static Routes

Since the Tanzu workload network is on a different subnet from the SE VIP network, we need to configure a static route so that the SE VMs know how to reach the workload subnet.

To create static routes, navigate to Infrastructure > Routing > Static Route and click on Create.

First, specify the Tanzu workload subnet as the Gateway Subnet and the gateway of the SE VIP network as the Next Hop.
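Via the API, the static route is added to the VRF context of the vCenter cloud; a minimal sketch with placeholder subnets (172.16.90.0/24 as the workload network and 172.16.80.1 as the SE VIP network gateway):

```python
import json

from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("avi-controller.lab.local", "admin", "VMware1!",
                             tenant="admin", api_version="20.1.5")

# Static routes live on the VRF context ("global" for the default vCenter cloud).
vrf = api.get_object_by_name("vrfcontext", "global")

# Route to the Tanzu workload subnet via the SE VIP network gateway
# (addresses are lab placeholders).
vrf.setdefault("static_routes", []).append({
    "route_id": "1",
    "prefix": {"ip_addr": {"addr": "172.16.90.0", "type": "V4"}, "mask": 24},
    "next_hop": {"addr": "172.16.80.1", "type": "V4"},
})

resp = api.put("vrfcontext/%s" % vrf["uuid"], data=json.dumps(vrf))
print(resp.status_code)
```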

And that concludes this post. In the next post of this series, I will demonstrate the deployment of the Supervisor Cluster. 

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing 🙂
