Simplify Your Application Deployments with VCD Content Hub

Introduction

Over the last few years, VCD has evolved into a true developer-ready cloud. To start with, VCD enabled Service Providers to offer multi-tenant, multi-cluster Kubernetes-as-a-Service through Container Service Extension, and more recently it added integration with Tanzu Mission Control to simplify Kubernetes management and provide visibility across environments through a single pane of glass.

Software as a Service (SaaS) has emerged as a game-changer, offering a flexible and scalable approach to software delivery that aligns with the demands of modern businesses. To cater to this need, VCD integrates with the App Launchpad service, which offers tenants a self-service portal with a user-friendly interface to deploy and manage applications on top of the infrastructure provisioned through the VCD portal.

The main challenge with App Launchpad was the need for administrators to handle catalog items individually, resulting in increased overhead.

Integrate VMware Cloud Director 10.5.x with OKTA IDP

Introduction to OIDC & OAuth 2.0

OpenID Connect (OIDC) is an identity authentication protocol that extends open authorization (OAuth) 2.0 to standardize the process for authenticating and authorizing users. The OAuth 2.0 protocol enables a third-party application (called a client) to access resources from a resource server (such as an API) on behalf of a user (referred to as a resource owner). The user provides the client with a limited access token, which it can use to request resources from the resource server.

The OAuth 2.0 protocol provides security through scoped access tokens, and OIDC adds user authentication and single sign-on (SSO) functionality on top of it. The ID token issued by the authorization server asserts the identity of the user, while the access token reflects the scopes the user has consented to.
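To make the token distinction concrete, here is a minimal Python sketch (not from the original post) that decodes the payload segment of an OIDC ID token to show the identity claims it carries. The token value is a placeholder, and in practice the signature must be verified with a proper JWT library before the claims are trusted.

```python
import base64
import json

# Placeholder ID token in the usual header.payload.signature form;
# a real token is returned by the IDP after the user authenticates.
id_token = "<header>.<payload>.<signature>"

def decode_segment(segment: str) -> dict:
    # JWT segments are base64url-encoded without padding, so re-pad before decoding.
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

claims = decode_segment(id_token.split(".")[1])

# Typical OIDC identity claims: issuer, subject (user ID), audience, and expiry.
print(claims.get("iss"), claims.get("sub"), claims.get("aud"), claims.get("exp"))
```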

VMware Cloud Director can be integrated with an external OIDC provider to import users and groups created in the upstream IDP. The Service Provider imports these users and groups into VCD and associates them with appropriate roles.
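As a rough illustration of where this configuration is exposed, the sketch below reads an organization's OAuth/OIDC settings through the VCD API with Python. The endpoint path, API version, and response fields are assumptions on my part; verify them against the API reference for your VCD 10.5.x build and the full post.

```python
import requests

VCD_HOST = "vcd.example.com"   # hypothetical VCD endpoint
ORG_ID = "<org-id>"            # placeholder organization identifier
API_TOKEN = "<bearer-token>"   # obtained from a prior login/session call

headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    # JSON responses from the legacy /api endpoints; the version value is an assumption.
    "Accept": "application/*+json;version=38.0",
}

# Read the org's OAuth/OIDC settings (issuer, client ID, scopes, claim mappings).
# The path follows the legacy admin API layout and may differ between releases.
resp = requests.get(
    f"https://{VCD_HOST}/api/admin/org/{ORG_ID}/settings/oauth",
    headers=headers,
    verify=False,  # lab only; use proper certificate validation in production
)
resp.raise_for_status()
print(resp.json())
```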

How to Delete MQTT Enabled App Launchpad in VCD

Starting with VCD 10.2 and App Launchpad 2.0.0.1, it is possible to deploy App Launchpad using MQTT for communication with VCD.

VCD 10.5 introduced a new feature called Content Hub as a replacement for App Launchpad. Service providers running VCD 10.5.x are encouraged to provide container/VM applications to tenants by integrating Content Hub with VMware Marketplace and Helm repositories.

In this post, I will demonstrate how you can delete the MQTT-enabled App Launchpad extension from VCD.

Step 1: List Installed Extensions

The GET call returns a JSON response listing all installed extensions and their IDs. From the extensions list, filter out the ID of the App Launchpad extension.
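As a minimal sketch of what Step 1 looks like in practice, the Python snippet below lists the registered API extensions and filters for App Launchpad. The endpoint path, API version header, and response shape are assumptions; confirm them against the API reference for your VCD release and the full post.

```python
import requests

VCD_HOST = "vcd.example.com"   # hypothetical VCD endpoint
API_TOKEN = "<bearer-token>"   # obtained from a prior login/session call

headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    # CloudAPI version for VCD 10.5.x (assumption; adjust to your build).
    "Accept": "application/json;version=38.0",
}

# List the registered API extensions; the exact path may differ per VCD version.
resp = requests.get(
    f"https://{VCD_HOST}/cloudapi/extensions/api",
    headers=headers,
    verify=False,  # lab only
)
resp.raise_for_status()

for ext in resp.json().get("values", []):
    # Pick out the App Launchpad entry and note its ID for the delete call.
    if "launchpad" in ext.get("name", "").lower():
        print(ext.get("id"), ext.get("name"), ext.get("enabled"))
```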

How to Delete Legacy App Launchpad (AMQP Enabled) from VCD

App Launchpad is a VMware Cloud Director service extension that service providers can use to create and publish catalogs of deployment-ready applications. Tenant users can then deploy the applications with a single click.

  • App Launchpad supports applications from the Bitnami applications catalog that are available in the VMware Cloud Marketplace. 
  • You can create catalogs of your custom, in-house applications and configure App Launchpad to work with these catalogs.

Older versions of App Launchpad (<= 2.0) use AMQP to communicate with VCD. Starting with App Launchpad 2.0.0.1, the MQTT protocol is also supported.

If you are using AMQP for App Launchpad and running a version later than 2.0.0, you can reconfigure App Launchpad to use the MQTT protocol.

You must first delete the AMQP-enabled App Launchpad extension before you can reconfigure it to use MQTT.

Step 1: Find App Launchpad Extension ID

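As an illustrative sketch of this step, the Python snippet below queries the legacy extensibility API for registered extension services and prints the App Launchpad entry. The query type, API version, and response shape are assumptions; check them against the API reference for your VCD version and the full post.

```python
import requests

VCD_HOST = "vcd.example.com"   # hypothetical VCD endpoint
API_TOKEN = "<bearer-token>"

headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    # JSON responses from the legacy /api endpoints; the version value is an assumption.
    "Accept": "application/*+json;version=37.0",
}

# Query the registered extension services and look for App Launchpad.
resp = requests.get(
    f"https://{VCD_HOST}/api/query",
    params={"type": "adminService", "format": "records"},
    headers=headers,
    verify=False,  # lab only
)
resp.raise_for_status()

for record in resp.json().get("record", []):
    if "launchpad" in record.get("name", "").lower():
        # The href/ID printed here is what the subsequent delete call needs.
        print(record.get("name"), record.get("href"))
```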

Delete AMQP Broker Settings in VCD

Older versions of VMware Cloud Director used the AMQP protocol to exchange messages (such as system notifications and other updates) with other VCD cells. Starting with VCD 10.1, MQTT replaced AMQP. To learn more about how VCD uses MQTT, see the product documentation.

If you have an environment that still uses AMQP (e.g., a VCD upgraded from version <= 10.1) and you want to replace it with MQTT, you must first delete the AMQP broker settings from VCD. Unfortunately, it is not currently possible to delete these settings from the GUI; it must be done through the API.

In the VCD GUI, you see only two options for the AMQP broker: Edit settings and Test AMQP config. There is no delete option.

In this post, I will show which APIs you need to call to delete the AMQP broker settings.

Step 1: Get AMQP Broker Configuration

Note: The below APIs are applicable to VCD 10.5.1. If you are running an older version of VCD, check the supported API versions that you can use.
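As a minimal sketch of Step 1, the Python snippet below reads the current AMQP broker configuration. The settings path and API version are assumptions based on the legacy admin extension API; verify them against the VCD 10.5.1 API reference before running anything against a production cell.

```python
import requests

VCD_HOST = "vcd.example.com"   # hypothetical VCD endpoint
API_TOKEN = "<bearer-token>"

headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    # API version 38.1 is assumed to correspond to VCD 10.5.1; adjust for your build.
    "Accept": "application/*+json;version=38.1",
}

# Retrieve the current AMQP broker configuration (host, port, exchange, vHost, etc.).
resp = requests.get(
    f"https://{VCD_HOST}/api/admin/extension/settings/amqp",
    headers=headers,
    verify=False,  # lab only
)
resp.raise_for_status()
print(resp.json())
```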

TKG Cluster Deployment Gotchas with Node Health Check in CSE 4.2

Recently, I upgraded Container Service Extension to 4.2.0 in my lab and was trying to deploy a TKG 2.4.0 cluster with node health check enabled. The deployment got stuck after deploying one control plane node and one worker node, and the cluster went into an error state.

Clicking on the Events tab showed the following error:

I checked the CSE log file and the capvcd logs on the ephemeral VM (before it got deleted) and found no error that made sense to me.

I contacted CSE Engineering to discuss this issue and opened a bug for further analysis of the logs.

Root Cause

CSE Engineering debugged the logs and found that it was a bug in that product version. Here is a summary of the analysis done by Engineering.


VCD (10.5) Service Crashing Continuously in CSE Environment

After updating my lab’s Container Service Extension to version 4.2.0, I observed that the VMware Cloud Director service was frequently crashing. Restarting the cell service did not help much, as the VCD user interface (UI) died again after five minutes. The cell.log was throwing the below exception:

You will find similar log entries in the cell-runtime.log file.


Container Service Extension 4.0 on VCD 10.x – Part 4: Tenant Operations

In the previous post in this series, I discussed the CSE configuration options that a service provider can use to provide Kubernetes-as-a-service to their tenants. In this post, I’ll go over how tenants can use the Container Service Extension plugin for Kubernetes cluster deployment in a self-service manner.

If you haven’t read the previous posts in this series, you can do so by clicking on the links provided below.

1: CSE Introduction & Architecture

2: NSX Advanced Load Balancer Configuration & VCD Integration

3: Container Service Extension Configuration by Service Provider

Log in to the tenant’s org to deploy a Kubernetes cluster. The user should be assigned the “Kubernetes Cluster Author” role. To launch the cluster creation wizard, navigate to Home > More > Kubernetes Container Clusters and click the New button.

Select the Kubernetes runtime for the cluster. CSE 4.0 only supports the Tanzu Kubernetes Grid runtime.

Choose the Kubernetes version and give the Kubernetes cluster a name.

Container Service Extension 4.0 on VCD 10.x – Part 3: Service Provider Configuration

The first two posts in this series covered CSE architecture and NSX ALB deployment/configuration. This post focuses on the steps taken by a service provider to set up a CSE deployment.

You can read the previous posts in this series by clicking on the links provided below.

1: CSE Introduction & Architecture

2: NSX Advanced Load Balancer Configuration & VCD Integration

At this point, it is assumed that the Service Provider has completed the following configurations in VCD:

  • vCenter is registered in VCD.
  • NSX-T is registered in VCD.
  • A Geneve-backed network pool is created in VCD.
  • Provider VDC has been created. 

The service provider workflow for CSE deployment includes the following tasks:

  1. Import the Tier-0 gateway/VRF that was created for CSE in NSX-T.
  2. Create an organization in VCD. This is a Service Provider-managed organization that hosts the Container Service Extension server and any other extensions in the future. It is known as a Service/Solutions organization (a minimal API sketch for this step follows below).
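For the organization-creation task above, here is a rough Python sketch of what the API call might look like. The CloudAPI path, API version, and payload fields are assumptions; in practice the org can also be created from the provider portal, and the exact schema should be taken from the API reference for your VCD version.

```python
import requests

VCD_HOST = "vcd.example.com"   # hypothetical VCD endpoint
API_TOKEN = "<provider-bearer-token>"

headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    # CloudAPI version for VCD 10.4.x-era builds (assumption; adjust to your release).
    "Accept": "application/json;version=37.2",
    "Content-Type": "application/json",
}

# Minimal payload for a Service/Solutions organization; field names are assumptions.
org_payload = {
    "name": "cse-solutions-org",
    "displayName": "CSE Solutions Org",
    "description": "Provider-managed org hosting the CSE server",
    "isEnabled": True,
}

resp = requests.post(
    f"https://{VCD_HOST}/cloudapi/1.0.0/orgs",
    headers=headers,
    json=org_payload,
    verify=False,  # lab only
)
resp.raise_for_status()
print(resp.json().get("id"))   # the new org's identifier, used in later configuration steps
```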

Container Service Extension 4.0 on VCD 10.x – Part 2: NSX Advanced Load Balancer Configuration

In part 1 of this blog series, I discussed the Container Service Extension 4.0 platform architecture and gave a high-level overview of a production-grade deployment. This blog post is focused on configuring NSX Advanced Load Balancer and integrating it with VCD.

I will not go through every step of the deployment and configuration, as I have already written an article on the same topic in the past. Instead, I will discuss the configuration steps I took to deploy the topology shown below.

Let me quickly go over the NSX-T networking setup before getting into the NSX ALB configuration.

I have deployed a new edge cluster on a dedicated vSphere cluster for traffic separation. This edge cluster resides in my compute/workload domain. The NSX-T manager managing the edges is deployed in my management domain. 

On the left side of the architecture diagram, you can see that I have one Tier-0 gateway, with VRFs carved out for NSX ALB and CSE networking.