How to Force Delete a Stale Virtual Service in NSX ALB

Recently, I ran into an interesting problem in my lab where I couldn’t get rid of an unused Virtual Service in NSX ALB. The delete attempt kept failing with the error: “VS cannot be deleted! It is being referred to by SystemConfiguration object.”

I tried deleting the VS via the API as well, and it returned the same error.

To figure out where this VS was being referenced, I looked through the pool members and other settings in NSX ALB, but I couldn’t find anything in particular. Internet searches were also not very helpful.

I then checked this issue in internal tools and got a hint: I needed to remove the VS reference from the system configuration through the API first. Read More
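For reference, the fix looks roughly like this with the NSX ALB (Avi) REST API. This is a minimal sketch, assuming the stale VS is referenced in the dns_virtualservice_refs field of the SystemConfiguration object (a common case when a DNS VS is registered there); the exact field holding the reference may differ in your environment, and all placeholders in angle brackets are mine.

# Fetch the system configuration and look for the VS reference
curl -sk -u admin:<password> -H "X-Avi-Version: <api-version>" \
  https://<controller-ip>/api/systemconfiguration

# Remove the offending reference via PATCH (the Avi API supports
# add/delete/replace semantics on list fields), then retry the VS delete
curl -sk -u admin:<password> -H "X-Avi-Version: <api-version>" \
  -H "Content-Type: application/json" -X PATCH \
  -d '{"delete": {"dns_virtualservice_refs": ["<vs-ref-url>"]}}' \
  https://<controller-ip>/api/systemconfiguration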

How to Force Delete a Stale Logical Segment in NSX 3.x

I ran into a problem recently while disabling NSX in my lab: I couldn’t remove a logical segment. The segment was previously attached to NSX Edge virtual machines and still had a stale port, even after the Edge virtual machines were deleted.

Any attempt to delete the segment through the UI/API resulted in the error: “Segment has 1 VMs or VIFs attached. Disconnect all VMs and VIFs before deleting a segment.”

GUI Error

API Error

So how do I delete the stale segment in this case? The answer is through the API, but before deleting the segment, you must delete the stale VIFs.

Follow the procedure given below to delete stale VIFs.

1: List Logical Ports associated with the Stale Segment

This API call lists the logical ports. You can pass the stale segment’s UUID in the command to restrict the output to that segment. Read More
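The excerpt doesn’t show the actual calls, so here is a minimal sketch of the general approach using the standard NSX-T 3.x Manager API (the endpoints are real NSX-T APIs, but the placeholders are mine; verify against your NSX version before running the delete):

# List logical ports, filtered on the stale segment's UUID
curl -sk -u admin:<password> \
  "https://<nsx-manager>/api/v1/logical-ports?logical_switch_id=<segment-uuid>"

# Force-delete a stale port; detach=true detaches the (stale) VIF first
curl -sk -u admin:<password> -X DELETE \
  "https://<nsx-manager>/api/v1/logical-ports/<logical-port-id>?detach=true"

Once the stale ports/VIFs are gone, the segment delete should go through.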

Quick Tip: Configure NSX Manager GUI Idle Timeout

The NSX Manager web UI has a default timeout of 1800 seconds, i.e., the UI times out after 30 minutes of inactivity. This is reasonable for a production deployment, but since security is less of a concern in a lab, you might want to raise it. On top of that, it is annoying to get kicked out of the UI after 30 minutes of idle session.

In this post, I will demonstrate how you can change the timeout value.

Run the command get service http to see the currently configured value:
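On the NSX Manager CLI, the output looks something like this (trimmed; the exact fields vary by NSX version, so treat this as an illustrative sketch):

nsx-manager> get service http
Service name: http
Service state: running
Session timeout: 1800
...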

Run the command set service http session-timeout 0 to disable the session timeout entirely. Read More

Securing TKG Workloads with Antrea and NSX-Part 2: Enable Antrea Integration with NSX

In the first part of this series of blog posts, I talked about how VMware Container Networking with Antrea addresses current business problems associated with a Kubernetes CNI deployment. I also discussed the benefits that VMware NSX offers when Antrea is integrated with NSX.

In this post, I will discuss how to enable the integration between Antrea and NSX. 

Antrea-NSX Integration Architecture

The diagram below, taken from the VMware blogs, shows the high-level architecture of the Antrea-NSX integration.

The following excerpt from the VMware blogs summarizes the above architecture pretty well.

Antrea NSX Adapter is a new component introduced to the standard Antrea cluster to make the integration possible. This component communicates with K8s API and Antrea Controller and connects to the NSX-T APIs. When a NSX-T admin defines a new policy via NSX APIs or UI, the policies are replicated to all the clusters as applicable. These policies will be received by the adapter which in turn will create appropriate CRDs using K8s APIs.

Read More
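As a preview of what enabling the integration involves: the first step is giving the cluster an identity that NSX trusts. A minimal sketch, assuming a self-signed certificate and the standard NSX-T principal-identity API (the name antrea-cluster1 and all angle-bracket values are placeholders):

# Generate a certificate/key pair for the cluster's NSX principal identity
openssl req -new -newkey rsa:2048 -nodes -x509 -days 365 \
  -keyout antrea-cluster1-private.key -out antrea-cluster1.crt \
  -subj "/CN=antrea-cluster1"

# Register the certificate as a principal identity in NSX
curl -sk -u admin:<password> -X POST -H "Content-Type: application/json" \
  -d '{"name": "antrea-cluster1", "node_id": "antrea-cluster1", "role": "enterprise_admin", "certificate_pem": "<contents-of-antrea-cluster1.crt>"}' \
  "https://<nsx-manager>/api/v1/trust-management/principal-identities/with-certificate"

The cluster-side piece is then wired up through the bootstrap config and interworking manifests that ship with VMware Container Networking with Antrea.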

Securing TKG Workloads with Antrea and NSX-Part 1: Introduction

What is a Container Network Interface

Container Network Interface (CNI) is a framework for dynamically configuring networking resources in a Kubernetes cluster. CNI can integrate smoothly with the kubelet to enable the use of an overlay or underlay network to automatically configure the network between pods. Kubernetes uses CNI as an interface between network providers and Kubernetes pod networking.

A wide variety of CNIs (Antrea, Calico, etc.) can be used in a Kubernetes cluster. For more information on the supported CNIs, please read this article.

Business Challenges with Current K8s Networking Solutions

The top business challenges associated with current CNI solutions can be categorized as below:

  • Community support lacks predefined SLAs: Enterprises benefit from collaborative engineering and receive the latest innovations from open-source projects. However, it is a challenge for any enterprise to rely solely on community support to run its operations because community support is a best effort and cannot provide a predefined service-level agreement (SLA).
Read More

Quick Tip: How to Reset NSX ALB Controller for a Fresh Configuration

NSX ALB controllers are frequently redeployed in lab environments to test and retest setups. Redeploying a controller normally takes only a few minutes, but in a slow environment it can take 20-25 minutes. This handy tip can save you some quality time.

To reset a controller node to the default settings, log in to the node over SSH and run the following command.

Read More

TKG Multi-Site Global Load Balancing using Avi Multi-Cluster Kubernetes Operator (AMKO)

Overview

Load balancing in Tanzu Kubernetes Grid (when installed with NSX ALB) is accomplished by leveraging Avi Kubernetes operator (AKO), which delivers L4+L7 load balancing to the Kubernetes API endpoint and the applications deployed in Tanzu Kubernetes clusters. AKO runs as a pod in Tanzu Kubernetes clusters and serves as an Ingress controller and load balancer.

The Global Server Load Balancing (GSLB) function of NSX ALB enables load-balancing for globally distributed applications/workloads (usually, different data centers and public clouds). GSLB offers efficient traffic distribution across widely scattered application servers. This enables an organization to run several sites in either Active-Active (load balancing and disaster recovery) or Active-Standby (DR) mode.

With the growing footprint of containerized workloads in datacenters, organizations are deploying them across multi-cluster/multi-site environments, which creates the need for a way to load-balance applications globally.

To meet this requirement, NSX ALB provides AMKO (Avi Multi-Cluster Kubernetes Operator), a Kubernetes operator that facilitates application delivery across multiple clusters. Read More
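AMKO is installed with Helm into a designated cluster, alongside AKO. A minimal install sketch, assuming the public AKO chart repository and a values file describing your GSLB leader and member clusters (gslb-values.yaml is a placeholder name):

# Add the chart repository that hosts both AKO and AMKO
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm repo update

# Install AMKO into the avi-system namespace
helm install amko ako/amko --namespace avi-system -f gslb-values.yaml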

Container Service Extension 4.0 on VCD 10.x – Part 2: NSX Advanced Load Balancer Configuration

In part 1 of this blog series, I discussed Container Service Extension 4.0 platform architecture and a high-level overview of a production-grade deployment. This blog post is focused on configuring NSX Advanced Load Balancer and integrating it with VCD. 

I will not go through every step of the deployment and configuration, as I have already written an article on the same topic in the past. Instead, I will discuss the configuration steps I took to deploy the topology shown below.

Let me quickly go over the NSX-T networking setup before getting into the NSX ALB configuration.

I have deployed a new edge cluster on a dedicated vSphere cluster for traffic separation. This edge cluster resides in my compute/workload domain. The NSX-T manager managing the edges is deployed in my management domain. 

On the left side of the architecture, you can see I have one Tier-0 gateway, and VRFs carved out for NSX ALB and CSE networking. Read More

Layer 7 Ingress in vSphere with Tanzu using NSX ALB

Introduction

vSphere with Tanzu currently doesn’t provide the AKO orchestration feature out of the box, meaning you can’t automate the deployment of AKO pods based on cluster labels. No AkoDeploymentConfig is created when you enable workload management on a vSphere cluster, so nothing runs in your supervisor cluster to watch cluster labels and decide on an automated AKO installation in the workload clusters.

However, this does not preclude you from using NSX ALB to provide layer-7 ingress for your workload clusters. AKO installation in a vSphere with Tanzu environment is done via helm charts and is a completely self-managed solution. You will be in charge of maintaining the AKO life cycle.
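A minimal sketch of such a self-managed install, assuming the public AKO chart repository; every value shown is a placeholder or example, not a prescription for your environment:

# Add the repo hosting the AKO chart and install AKO into the workload cluster
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm repo update

kubectl create namespace avi-system

helm install ako ako/ako --version 1.6.2 --namespace avi-system \
  --set ControllerSettings.controllerHost=<alb-controller-ip> \
  --set ControllerSettings.controllerVersion=20.1.7 \
  --set ControllerSettings.cloudName=<cloud-name> \
  --set ControllerSettings.serviceEngineGroupName=<seg-name> \
  --set AKOSettings.clusterName=<unique-cluster-name> \
  --set avicredentials.username=admin \
  --set avicredentials.password=<password>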

My Lab Setup

My lab’s bill of materials is shown below.

Component               Version
NSX ALB (Enterprise)    20.1.7
AKO                     1.6.2
vSphere                 7.0 U3c
Helm                    3.7.4

The current setup of the NSX ALB is shown in the table below. Read More

Configuring L7 Ingress with NSX Advanced Load Balancer

NSX Advanced Load Balancer provides L4+L7 load balancing using a Kubernetes operator (AKO) that integrates with the Kubernetes API to manage the lifecycle of load-balancing and ingress resources for workloads. AKO runs as a pod in Tanzu Kubernetes clusters and provides Ingress controller and load-balancing functionality. AKO remains in sync with the required Kubernetes objects and calls the NSX ALB Controller APIs to deploy the Ingresses and Services and place them on the Service Engines.

In this post, I will discuss implementing ingress control for a sample application and will see NSX ALB in action.

What is Kubernetes Ingress?

As per Kubernetes documentation:

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
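For illustration, here is a minimal Ingress that AKO can program on NSX ALB (the hostname, service name, and port are placeholders; avi-lb is AKO’s default IngressClass, adjust if yours differs):

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: avi-lb
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc
            port:
              number: 80
EOF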

How do I implement NSX ALB as an ingress controller?

If you have deployed AKO via Helm, the below parameters in the values.yaml… Read More
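The excerpt cuts off before listing the parameters, so purely as a hypothetical example, these are the kinds of L7 knobs AKO’s values.yaml exposes (shown here as Helm --set flags; verify the keys against your AKO version):

# Make AKO the default ingress controller and choose how pod traffic
# reaches the Service Engines (ClusterIP, NodePort, or NodePortLocal)
helm upgrade ako ako/ako --namespace avi-system --reuse-values \
  --set L7Settings.defaultIngController=true \
  --set L7Settings.serviceType=ClusterIP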