Install Tanzu Mission Control Self-Managed on vSphere with Tanzu on vSphere 7

Welcome to the Tanzu Mission Control Self-Managed series. So far in this series, I have covered the installation prerequisites and how to configure them, and I demonstrated the TMC-SM installation procedure on the TKGm platform. If you have not been following along, you can read the earlier posts of this series using the links below:

1: TMC Self-Managed – Introduction & Architecture

2: Configure DNS for TMC Self-Managed

3: Configure OIDC Compliant Identity Provider (Okta)

4: Install Cluster Issuer for TLS Certificates

5: Prepare Harbor Registry

6: Install Tanzu Mission Control Self-Managed on TKGm

The installation procedure for TMC Self-Managed on a vSphere with Tanzu (aka TKGS) Kubernetes platform is a bit different, and this post covers the required steps. Let’s get started.

I have used the following bill of materials (BOM) in my lab:

Software Component        Version
vSphere Namespace         1.24.9
VMware vSphere ESXi       7.0 U3n
VMware vCenter (VCSA)     7.0 U3n
VMware vSAN               7.0 U3n
NSX ALB                   22.1.3

Make sure the following are already configured in your environment before attempting the installation:

1: DNS is configured. Read More
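As a rough sketch of how the first connection looks on vSphere with Tanzu, you log in to the Supervisor with the vSphere plugin for kubectl and then switch to the target workload cluster context. The server address, vSphere Namespace, and cluster name below are placeholders, not values taken from this post:

# Log in to the Supervisor and fetch the context of the target workload cluster
# (server address, vSphere Namespace, and cluster name are placeholders)
kubectl vsphere login --server=192.168.10.2 \
  --vsphere-username administrator@vsphere.local \
  --tanzu-kubernetes-cluster-namespace tmc-ns \
  --tanzu-kubernetes-cluster-name tmc-sm-wld \
  --insecure-skip-tls-verify

# Switch to the workload cluster context and confirm the nodes are Ready
kubectl config use-context tmc-sm-wld
kubectl get nodes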

Install Tanzu Mission Control Self-Managed on TKGm

This is the sixth blog post of the TMC Self-Managed blog series. In the previous post of this series, I showed how to configure the final prerequisite (Harbor registry) of the installation. If you are following along with me, you are now ready for the installation.

If you have landed on this post directly, I encourage you to read the previous blog posts of this series using the links below:

1: TMC Self-Managed – Introduction & Architecture

2: Configure DNS for TMC Self-Managed

3: Configure OIDC Compliant Identity Provider (Okta)

4: Install Cluster Issuer for TLS Certificates

5: Prepare Harbor Registry

This blog post is focused on installing TMC Self-Managed on Tanzu Kubernetes Grid multi-cloud (TKGm). I will cover the installation procedure for TKGS (vSphere with Tanzu) in a separate post.

I have used the following bill of materials (BOM) in my lab:

Software Component        Version
Tanzu Kubernetes Grid     2.1.0
VMware vSphere ESXi       7.0 U3n
VMware vCenter (VCSA)     7.0 U3n
VMware vSAN               7.0 U3n
NSX Advanced LB           22.1.3

Step 1: Connect to the workload cluster where TMC Self-Managed will be installed. Read More
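For context, a minimal sketch of that first step with the Tanzu CLI looks like the following; the cluster name is a placeholder:

# Retrieve the admin kubeconfig of the workload cluster (cluster name is a placeholder)
tanzu cluster kubeconfig get tmc-sm-wld --admin

# Switch to the admin context and verify the cluster is reachable
kubectl config use-context tmc-sm-wld-admin@tmc-sm-wld
kubectl get nodes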

Tanzu Mission Control Self-Managed – Part 5: Prepare Harbor Registry

Here we are at the fifth post in our blog series. In this post, I’ll discuss how to download the TMC Self-Managed artifacts and stage them in a Harbor registry for the installation.

If you have landed on this post directly, I encourage you to read the previous blog posts of this series using the links below:

1: TMC Self-Managed – Introduction & Architecture

2: Configure DNS for TMC Self-Managed

3: Configure OIDC Compliant Identity Provider (Okta)

4: Install Cluster Issuer for TLS Certificates

Tanzu/Kubernetes supports a wide variety of image registry solutions, including JFrog, Docker Hub, Amazon Elastic Container Registry, and VMware Harbor, for storing the application images that you deploy on workload clusters. However, at the time of writing this post, TMC Self-Managed supports only the Harbor image registry. The Harbor registry must meet the following requirements:

  • A minimum storage of 20 GB is recommended for Harbor.
  • Authenticated registries are not supported.
Read More
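As a sketch of what staging the artifacts looks like, the TMC Self-Managed download bundles a small tmc-sm CLI that pushes all required images into a Harbor project. The bundle filename, project path, and credentials below are placeholders:

# Extract the downloaded bundle (filename is a placeholder for the version you download)
mkdir tmc-sm && tar -xf tmc-self-managed-1.0.0.tar -C tmc-sm && cd tmc-sm

# Push the bundled images to a Harbor project
./tmc-sm push-images harbor \
  --project harbor.example.com/tmc \
  --username admin \
  --password 'REDACTED'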

Tanzu Mission Control Self-Managed – Part 4: Install Cert-Manager and Cluster Issuer for TLS certificates

Welcome to Part 4 of the Tanzu Mission Control Self-Managed series. In this post, I’ll show you how to use cert-manager and a cluster issuer to automatically issue TLS certificates.

If you have landed on this post directly, I encourage you to read the previous blog posts of this series using the links below:

1: TMC Self-Managed – Introduction & Architecture

2: Configure DNS for TMC Self-Managed

3: Configure OIDC Compliant Identity Provider (Okta)

TMC Self-Managed uses cert-manager for its certificates. In a lab or POC environment, you can use cert-manager with a cluster issuer to create a self-signed certificate for the installation. In my lab, I have installed cert-manager as a Tanzu package on the workload cluster where TMC Self-Managed will be installed.
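For the lab/POC path, the self-signed option needs nothing more than a ClusterIssuer with a selfSigned stanza. A minimal sketch is below; the issuer name local-issuer is an example of my choosing, so use whatever name you plan to reference during the TMC Self-Managed installation:

# Create a self-signed ClusterIssuer for lab/POC certificates
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: local-issuer
spec:
  selfSigned: {}
EOF

# Confirm the issuer reports Ready
kubectl get clusterissuer local-issuer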

In an airgap environment, you can follow the instructions outlined in Add a Package Repository and Install cert-manager in the TKG product documentation to install cert-manager. Read More

Tanzu Mission Control Self-Managed – Part 3: Configure Identity Provider

Welcome to Part 3 of the TMC Self-Managed series. Part 1 concentrated on a general introduction to TMC Self-Managed, while Part 2 dived into the DNS configuration. If you missed the previous entries in this series, you can read them using the links below.

1: TMC Self-Managed – Introduction & Architecture

2: Configure DNS for TMC Self-Managed

Tanzu Mission Control Self-Managed manages user authentication using Pinniped Supervisor as the identity broker and requires an existing OIDC-compliant identity provider (IDP). Examples of OIDC-compliant IDPs are Okta, Keycloak, VMware Workspace One, etc. The Pinniped Supervisor expects the Issuer URL, client ID, and client secret to integrate with your IDP.

Note: This post demonstrates configuring Okta as an IDP. Although Okta is a SaaS service and is reachable over the internet, the intent is to show how to configure an upstream IDP for authentication. In an airgap environment, you may use any IDP that doesn’t require an internet connection. Read More
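Before wiring anything into Pinniped, it is worth confirming that the issuer URL actually serves an OIDC discovery document, since that is what the Supervisor consumes alongside the client ID and client secret. The Okta issuer URL below is a placeholder:

# Sanity check: the issuer URL must expose an OIDC discovery document
# (issuer URL is a placeholder for your Okta org/authorization server)
curl -s https://dev-123456.okta.com/.well-known/openid-configuration | head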

Tanzu Mission Control Self-Managed – Part 2: Configure DNS

In the first post of this series, I covered a basic introduction to TMC Self-Managed, the general architecture, and the requirements your environment needs to meet before installing it. Before getting your hands dirty with the installation, this post covers configuring DNS zones and records.

To enable correct traffic flow and access to the various TMC endpoints, TMC Self-Managed needs a DNS zone to hold the DNS records. To ensure name resolution between the objects that are deployed during the TMC Self-Managed installation, create the following A records in your DNS domain. You can create a new DNS zone or leverage an existing one.

  • alertmanager.<my-tmc-dns-zone>
  • auth.<my-tmc-dns-zone>
  • blob.<my-tmc-dns-zone>
  • console.s3.<my-tmc-dns-zone>
  • gts-rest.<my-tmc-dns-zone>
  • gts.<my-tmc-dns-zone>
  • landing.<my-tmc-dns-zone>
  • pinniped-supervisor.<my-tmc-dns-zone>
  • prometheus.<my-tmc-dns-zone>
  • s3.<my-tmc-dns-zone>
  • tmc-local.s3.<my-tmc-dns-zone>
  • tmc.<my-tmc-dns-zone>

To simplify the installation procedure, you can also use a wildcard DNS entry as shown below (for POCs only).

Record Type    Record Name              Value
A              *.<my-tmc-dns-zone>      load balancer IP
A              <my-tmc-dns-zone>        load balancer IP

The IP address for the above records must point to the external IP of the contour-envoy service that is deployed during the installation. Read More
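Once the installer has created that service, you can read the external IP straight from the cluster and confirm your records resolve to it. A quick sketch, assuming the default tmc-local installation namespace:

# Read the external IP assigned to the Envoy service by the load balancer
# (assumes TMC Self-Managed is installed in the tmc-local namespace)
kubectl get svc contour-envoy -n tmc-local \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Confirm the DNS records resolve to that IP
nslookup tmc.<my-tmc-dns-zone>
nslookup pinniped-supervisor.<my-tmc-dns-zone>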

Tanzu Mission Control Self-Managed – Part 1: Introduction & Architecture

Introduction

VMware Tanzu Mission Control is a SaaS offering available through VMware Cloud Services and provides:

  • A centralized platform to deploy and manage Kubernetes clusters across multiple clouds.
  • The ability to attach existing Kubernetes clusters to the TMC portal for centralized operations and management.
  • A policy engine that automates access control and security policies across a fleet of clusters.
  • Security management across multiple clusters.
  • Centralized authentication and authorization, with federated identity from multiple sources.

TMC SaaS cannot be used in certain environments because of compliance or data governance requirements. Industries such as banking, healthcare, and defence usually run workloads in an air-gapped environment (dark site). Imagine running a large number of Kubernetes clusters without any single pane of glass to manage day-1 and day-2 operations across the clusters. VMware understood this pain and introduced Tanzu Mission Control Self-Managed (TMC-SM) as an installable product that you can deploy in your own environment.

TMC Self-Managed can be installed in data centers, sovereign clouds, and service-provider environments. Read More

Quick Tip: How to Reset NSX ALB Controller for a Fresh Configuration

In lab environments, NSX ALB controllers are frequently redeployed to test and retest a setup. Redeploying an NSX ALB controller usually takes only a few minutes, but in a slow environment it can take up to 20-25 minutes. This handy tip can save you some quality time.

To reset a controller node to the default settings, log in to the node over SSH and run the following command.

Read More

TKG Multi-Site Global Load Balancing using Avi Multi-Cluster Kubernetes Operator (AMKO)

Overview

Load balancing in Tanzu Kubernetes Grid (when installed with NSX ALB) is accomplished by leveraging the Avi Kubernetes Operator (AKO), which delivers L4 and L7 load balancing to the Kubernetes API endpoint and the applications deployed in Tanzu Kubernetes clusters. AKO runs as a pod in Tanzu Kubernetes clusters and serves as an Ingress controller and load balancer.
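If you want to confirm AKO is healthy before layering GSLB on top, a quick check such as the one below is usually enough; it assumes the avi-system namespace that TKG uses for AKO by default:

# AKO runs as a single-replica StatefulSet in the avi-system namespace
kubectl get pods -n avi-system

# Tail the AKO logs to confirm it is syncing objects to the NSX ALB controller
kubectl logs ako-0 -n avi-system --tail=20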

The Global Server Load Balancing (GSLB) function of NSX ALB enables load-balancing for globally distributed applications/workloads (usually, different data centers and public clouds). GSLB offers efficient traffic distribution across widely scattered application servers. This enables an organization to run several sites in either Active-Active (load balancing and disaster recovery) or Active-Standby (DR) mode.

With the growing footprint of containerized workloads in data centers, organizations are deploying these workloads across multi-cluster and multi-site environments, which creates the need for a technique to load-balance applications globally.

To meet this requirement, NSX ALB provides AMKO (Avi Multi-Cluster Kubernetes Operator), a Kubernetes operator that facilitates application delivery across multiple clusters. Read More

Container Service Extension 4.0 on VCD 10.x – Part 4: Tenant Operations

In the previous post in this series, I discussed the CSE configuration options that a service provider can use to provide Kubernetes-as-a-service to their tenants. In this post, I’ll go over how tenants can use the Container Service Extension plugin to deploy Kubernetes clusters in a self-service manner.

If you haven’t read the previous posts in this series, you can do so by clicking on the links provided below.

1: CSE Introduction & Architecture

2: NSX Advanced Load Balancer Configuration & VCD Integration

3: Container Service Extension Configuration by Service Provider

Log in to the tenant’s org to deploy a Kubernetes cluster. The user should be assigned the “Kubernetes Cluster Author” role. To launch the cluster creation wizard, navigate to Home > More > Kubernetes Container Clusters and click the New button.

Select the Kubernetes runtime for the cluster. CSE 4.0 supports only the Tanzu Kubernetes Grid runtime.

Choose the Kubernetes version and give the Kubernetes cluster a name. Read More