Container Service Extension 4.0 on VCD 10.x – Part 3: Service Provider Configuration

The first two posts in this series covered CSE architecture and NSX ALB deployment/configuration. This post focuses on the steps taken by a service provider to set up a CSE deployment.

You can read the previous posts in this series by clicking on the links provided below.

1: CSE Introduction & Architecture

2: NSX Advanced Load Balancer Configuration & VCD Integration

Before proceeding, it is assumed that the Service Provider has completed the following configurations in VCD:

  • vCenter is registered in VCD.
  • NSX-T is registered in VCD.
  • A Geneve-backed network pool is created in VCD.
  • Provider VDC has been created. 

The service provider workflow for CSE deployment includes the following tasks:

  1. Import the Tier-0 gateway/VRF that was created for CSE in NSX-T.
  2. Create an organization in VCD. This is a Service Provider managed organization that hosts the Container Service Extension server and any other extensions in the future. It is known as a Service/Solutions organization; a hedged API sketch of this step follows the list.
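As an illustration of step 2, the Solutions organization can also be created through the VCD CloudAPI rather than the provider portal. This is a minimal sketch; the VCD URL, bearer token, org names, and API version header are placeholders to adapt to your environment:

```shell
# Hedged sketch: create the Service/Solutions organization via the VCD CloudAPI.
# vcd.example.com, ${VCD_TOKEN}, and the org names are placeholders.
curl -sk -X POST "https://vcd.example.com/cloudapi/1.0.0/orgs" \
  -H "Accept: application/json;version=37.0" \
  -H "Authorization: Bearer ${VCD_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "solutions-org",
        "displayName": "Solutions Org",
        "description": "Provider-managed org hosting the CSE server",
        "isEnabled": true
      }'
```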

Container Service Extension 4.0 on VCD 10.x – Part 2: NSX Advanced Load Balancer Configuration

In part 1 of this blog series, I discussed the Container Service Extension 4.0 platform architecture and gave a high-level overview of a production-grade deployment. This blog post is focused on configuring NSX Advanced Load Balancer and integrating it with VCD.

I will not go through every step of the deployment and configuration, as I have already written an article on that topic in the past. Instead, I will discuss the configuration steps I took to deploy the topology shown below.

Let me quickly go over the NSX-T networking setup before getting into the NSX ALB configuration.

I have deployed a new edge cluster on a dedicated vSphere cluster for traffic separation. This edge cluster resides in my compute/workload domain. The NSX-T manager managing the edges is deployed in my management domain. 

On the left side of the architecture, you can see one Tier-0 gateway, with VRFs carved out for NSX ALB and CSE networking.

Container Service Extension 4.0 on VCD 10.x – Part 1: Introduction & Architecture

Introduction

VMware Container Service Extension (CSE) is an extension to VMware Cloud Director that enables cloud providers to offer Kubernetes-as-a-Service to their tenants. CSE helps tenants quickly deploy Tanzu Kubernetes Grid clusters in their virtual data centers, in just a few clicks, directly from the tenant portal. By using CSE, customers can also use Tanzu products and services such as Tanzu Mission Control to manage their clusters.

CSE has come a long way, and with each release the product keeps getting better. Folks who have worked on older versions of CSE know how painful the setup process was and how many manual steps it involved. With CSE 4.0, the provider workflow is simplified, and the installation can be done in less than 30 minutes. Kudos to the CSE engineering team.

CSE 4.0 Benefits

I want to list a few benefits that CSE 4.0 offers before getting into the architecture.

Quick Tip- Cleanup Failed Tasks from SDDC Manager Dashboard in VCF

Tasks in VCF might fail because one or more subtasks within the primary task have failed. Some of these tasks are not retriable and remain in a lingering state in the SDDC Manager dashboard.

The command provided in this blog post will help you clear such tasks from the dashboard.

Step 1: Fetch the failed task ID from the SDDC Manager interface.

Click on the failed task and notice the URL change in the browser. The task ID is displayed in the URL itself.

Make a note of the task ID.

Alternatively, you can run the below API call directly from the SDDC Manager VM.
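A hedged example of such a call is shown below; depending on the VCF release, the tasks API may require a bearer token obtained from /v1/tokens:

```shell
# Hedged example: list all tasks from the SDDC Manager VM.
# Look for entries with a failed status and note their id fields.
curl -s http://localhost/v1/tasks | json_pp
```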

The output of this API call is a list of all tasks; filter for the failed ones to get the task ID.

Step 2: Delete the failed task

Execute the below API call to delete the failed task from the SDDC Manager dashboard.
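A hedged example is below, using the internal registrations endpoint commonly documented for this cleanup; <task_id> is a placeholder for the ID noted in step 1:

```shell
# Hedged example: remove the failed task from the SDDC Manager dashboard.
# Run on the SDDC Manager VM; <task_id> comes from step 1.
curl -X DELETE http://localhost/tasks/registrations/<task_id>
```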

Tip and Tricks for VCF Lab Deployment

In this post, I’ll go over a few tips/tricks that you may use throughout your VCF lab deployment to get the most out of this fantastic tool.

Tip 1: Bring down lab resource utilization

Most of us, I believe, run VCF as a nested lab, and because VCF requires a lot of computing power, this is one area we all struggle with. Because of the limited resources available, a full-fledged deployment is not always practical. In my experience, the NSX-T nodes are the most problematic component: VCF deploys several NSX-T nodes, and each node requires a lot of resources.

You can limit the number of NSX-T nodes in both the management and workload domains by following the below instructions:

Step 1: SSH into SDDC Manager using the vcf user and switch to root user by running the command: su - root

Step 2: Modify application-prod.properties
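A hedged sketch of the edit is shown below; the file path and property name are assumptions for VCF 4.x and may differ between releases, so verify them against your build:

```shell
# Hedged sketch: cap the NSX-T manager cluster at a single node.
# The path and property name below are assumptions; verify for your VCF version.
vi /opt/vmware/vcf/domainmanager/config/application-prod.properties
# add or modify the following line:
#   nsxt.manager.cluster.size=1

# restart the domain manager service so the change takes effect
systemctl restart domainmanager
```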

Quick Tip: Cleanup Unused Image Bundles in VCF

I recently downloaded the image bundles for vRealize components while working in my newly deployed VCF 4.4 environment, not realizing that SDDC Manager does not orchestrate the deployment of any vRealize Suite component except vRealize Suite Lifecycle Manager. I came across a useful out-of-the-box SDDC Manager feature while looking for a way to clean out the unneeded image bundles.

The process outlined in this post will assist you in clearing out any partially downloaded image bundles or unnecessary bundles that SDDC Manager is not currently using. 

Step 1: SSH into SDDC Manager using the vcf user and switch to root user by running the command: su - root

Step 2: Grab the unwanted image bundle ID from the UI

Step 3: Run the following command to clean up the unwanted bundle:
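A hedged example is shown below, assuming the bundle cleanup script shipped with SDDC Manager's LCM component:

```shell
# Hedged example: invoke the LCM bundle cleanup script against the unwanted bundle.
python /opt/vmware/vcf/lcm/lcm-app/bin/bundle_cleanup.py bundle_id
```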

where bundle_id is the ID of the unwanted bundle.

Example:
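The bundle ID below is hypothetical and shown for illustration only:

```shell
# Hypothetical bundle ID for illustration; substitute the ID from step 2.
python /opt/vmware/vcf/lcm/lcm-app/bin/bundle_cleanup.py 8e0b304e-xxxx-xxxx-xxxx-000000000001
```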


Quick Tip: Deploy VCF Management Domain with Single NSX-T Node

This article will show you how to set up a VCF Management domain with just one NSX-T manager. When there is a resource constraint, such as in a lab environment, this suggestion will be useful for lowering the management domain footprint.

The below steps outline the process of deploying an SDDC with one NSX-T node.

Step 1: Fill in all the parameters in the VCF configuration workbook spreadsheet.

Step 2: Transfer the spreadsheet to the Cloud Builder VM using WinSCP or a similar utility.

Step 3: Use the following command to convert the spreadsheet to JSON format:
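A hedged example using the SOS utility on the Cloud Builder VM is shown below; the jsongenerator options are my assumption for VCF 4.x, so check them against your release:

```shell
# Hedged example: convert the configuration workbook to JSON with the SOS utility.
sudo /opt/vmware/sddc-support/sos --jsongenerator \
  --jsongenerator-input /home/admin/VCF-4.4.xlsx \
  --jsongenerator-design vcf-ems
```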

where VCF-4.4.xlsx is the name of my spreadsheet; change the file name to reflect your environment.

Layer 7 Ingress in vSphere with Tanzu using NSX ALB

Introduction

vSphere with Tanzu currently doesn't provide the AKO orchestration feature out of the box. By this I mean that you can't automate the deployment of AKO pods based on cluster labels. No AkoDeploymentConfig is created when you enable workload management on a vSphere cluster, so there is nothing running in your supervisor cluster to watch cluster labels and trigger automated AKO installation in the workload clusters.

However, this does not preclude you from using NSX ALB to provide layer-7 ingress for your workload clusters. AKO installation in a vSphere with Tanzu environment is done via Helm charts and is a completely self-managed solution: you will be in charge of maintaining the AKO lifecycle.
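As a sketch of such a self-managed install, the AKO Helm deployment looks roughly like the following; the controller IP and credentials are placeholders, and the chart values should be verified against the AKO 1.6.x documentation:

```shell
# Hedged sketch: self-managed AKO installation in a workload cluster via Helm.
# The controller IP and credentials are placeholders for illustration.
kubectl create namespace avi-system

helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm install ako/ako --generate-name --version 1.6.2 \
  --namespace avi-system \
  -f values.yaml \
  --set ControllerSettings.controllerHost=192.168.100.10 \
  --set avicredentials.username=admin \
  --set avicredentials.password='VMware1!'
```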

My Lab Setup

My lab’s bill of materials is shown below.

Component               Version
NSX ALB (Enterprise)    20.1.7
AKO                     1.6.2
vSphere                 7.0 U3c
Helm                    3.7.4

The current setup of the NSX ALB is shown in the table below.

Configuring L7 Ingress with NSX Advanced Load Balancer

NSX Advanced Load Balancer provides L4 and L7 load balancing using a Kubernetes operator (AKO) that integrates with the Kubernetes API to manage the lifecycle of load balancing and ingress resources for workloads. AKO runs as a pod in Tanzu Kubernetes clusters and provides ingress controller and load balancing functionality. AKO stays in sync with the required Kubernetes objects and calls the NSX ALB Controller APIs to deploy Ingresses and Services and place them on the Service Engines.

In this post, I will discuss implementing ingress for a sample application and see NSX ALB in action.

What is Kubernetes Ingress?

As per Kubernetes documentation:

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
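As a minimal illustration, the manifest below defines an Ingress for a hypothetical hello Service; avi-lb is the IngressClass that AKO registers, and the hostname is a placeholder:

```shell
# Hedged example: a minimal Ingress for a hypothetical "hello" Service.
# AKO watches this object and programs a virtual service on the Service Engines.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  ingressClassName: avi-lb
  rules:
  - host: hello.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 80
EOF
```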

How do I implement NSX ALB as an ingress controller?

If you have deployed AKO via Helm, the below parameters in the values.yaml file control how AKO handles ingress.
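A hedged sketch of those settings, expressed here as Helm --set flags; the parameter names reflect the AKO 1.6.x chart and should be verified against your chart version (the release name is a placeholder):

```shell
# Hedged sketch: make AKO the default ingress controller via its chart values.
# Parameter names per the AKO 1.6.x chart; <ako-release> is a placeholder.
helm upgrade <ako-release> ako/ako --version 1.6.2 \
  --namespace avi-system \
  --reuse-values \
  --set L7Settings.defaultIngController=true \
  --set L7Settings.serviceType=ClusterIP
```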

How to make NSX ALB 21.1.3 work with TKGm 1.5.1

To test TKGm 1.5.1 against the latest version of NSX ALB, I upgraded my ALB deployment to 21.1.3. The deployment of the TKG management and workload clusters went smoothly.

However, when I deployed a sample load balancer application that uses a dedicated SEG and VIP network, the service was waiting for an external IP assignment. 
