Deleting Stubborn Interconnect Configuration in HCX

I had a working HCX setup in my lab. While making some modifications to it, I tried deleting the interconnect networking configuration on the HCX Cloud side. The deletion kept failing with the error below:

hcx-pool-delete error.JPG

Let me first explain how I landed in this situation. 

I was deleting the interconnect appliances from my on-prem side to demo to my peers how the interconnects are deployed via the HCX plugin in the vSphere Web Client. During the demo I did not notice that the site pairing between my on-prem HCX and the cloud-side HCX was broken (a certificate mismatch caused by a vCenter upgrade on the cloud side). Read More

Creating HCX Multi Site Service-Mesh for Hybrid Mobility

This is in continuation of my last post, where I discussed what the Service Mesh feature of HCX is and how it works. In this post we will learn how to create a service mesh.

As discussed earlier, we need to have compute/network profiles created on both the on-prem and cloud sides.

The compute profile describes the infrastructure at the source and destination site and provides the placement details (Resource Pool, Datastore) where the virtual appliances should be placed during deployment and the networks to which they should connect. Read More

What is HCX Multi-Site Services Mesh

Recently I upgraded the HCX appliances in my lab and saw a new tab named “Multi Site Services Mesh” appearing in both the cloud-side and enterprise-side UIs, and I was curious to learn about this new feature.

What is HCX Multi Site Services Mesh?

As we know, in order to start consuming HCX we need the interconnect appliances (CGW, L2C and WAN Opt) deployed on both the on-prem and cloud sides. Before starting the deployment of the appliances, the interconnect configuration should already be in place on the cloud side. Read More

Managing HCX Migration via Powershell

HCX supports 3 methods for migrating VMs to the cloud:

  • Cold Migration
  • HCX Cross-Cloud vMotion
  • HCX Bulk Migration

To know more about these migration methods, please read this post.

HCX migrations can be scheduled from the HCX UI using the vSphere Client, or they can be automated using the HCX API. In the last post of this series, I demonstrated a few PowerCLI commands that we can use with an HCX system.

API/PowerCLI is the obvious choice when you think of automation. Automation not only reduces the amount of user input required in the UI but also reduces the chance of human error. Read More
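As a quick sketch of what such a scripted migration can look like: the cmdlet names below come from the VMware.VimAutomation.Hcx PowerCLI module, but the HCX server name, credentials, VM name, and target container/datastore/network names are placeholders from my lab, not values from this post.

```powershell
# Connect to the on-prem HCX Enterprise manager (hostname/credentials are placeholders)
Connect-HCXServer -Server hcx-ent.lab.local -Username administrator@vsphere.local

$srcSite = Get-HCXSite -Source
$dstSite = Get-HCXSite -Destination

# Map the VM's source network to a destination network (network names are placeholders)
$netMap = New-HCXNetworkMapping `
    -SourceNetwork (Get-HCXNetwork -Site $srcSite -Name "VM-Network") `
    -DestinationNetwork (Get-HCXNetwork -Site $dstSite -Name "Cloud-Network")

# Build a Bulk migration spec for a single VM and submit it
$migration = New-HCXMigration -VM (Get-HCXVM -Name "web01") `
    -MigrationType Bulk -SourceSite $srcSite -DestinationSite $dstSite `
    -TargetComputeContainer (Get-HCXContainer -Site $dstSite -Name "Workloads") `
    -TargetDatastore (Get-HCXDatastore -Site $dstSite -Name "vsanDatastore") `
    -NetworkMapping $netMap
Start-HCXMigration -Migration $migration

# Monitor the progress of submitted migrations
Get-HCXMigration | Select-Object VM, State
```

Wrapping the `New-HCXMigration`/`Start-HCXMigration` pair in a loop over a CSV of VM names is where the real time savings over the UI come from.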

Getting Started With HCX PowerCli Module

With the release of PowerCLI 11.2, support for many new VMware products was introduced, and VMware HCX is one such product. The PowerCLI module for HCX is named “VMware.VimAutomation.HCX” and it currently has 20 cmdlets to manage HCX.

You can use Windows PowerShell to install/upgrade your PowerCLI to v11.2 using the commands below:
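The post's own command snippet sits behind the Read More link; as a sketch, installing or upgrading PowerCLI from the PowerShell Gallery typically looks like this:

```powershell
# Install PowerCLI 11.2+ from the PowerShell Gallery for the current user
Install-Module -Name VMware.PowerCLI -Scope CurrentUser -MinimumVersion 11.2.0

# Or, if an older PowerCLI is already installed from the Gallery, upgrade it in place
Update-Module -Name VMware.PowerCLI

# Confirm the installed version
Get-Module -Name VMware.PowerCLI -ListAvailable | Select-Object Name, Version
```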

1: Once the necessary module is installed, we can use Get-Command to examine the cmdlets that are available for HCX.

Get-Command -Module VMware.VimAutomation.HCX Read More

Exploring HCX API

VMware Hybrid Cloud Extension is a powerful product for datacenter migration/replacement (on-prem to on-prem, or from on-prem to cloud) and disaster recovery. VMware HCX supports 3 major clouds at the moment, namely VMware Cloud on AWS, OVH Cloud and IBM Cloud.

Although the HCX interface for workload migration is very simple, and even first-timers can migrate workloads without much difficulty, it is always good to know about the API offering of any product so that you can automate things via scripting.

The HCX API allows customers to automate all aspects of HCX, including the HCX VAMI UI for initial configuration as well as consuming the HCX services that are exposed in the vSphere UI. Read More
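For a flavor of how the API is consumed: authentication works by POSTing credentials to a sessions endpoint and reusing the returned token on later calls. The hostname and credentials below are placeholders, and the endpoint paths are as I recall them from the HCX API documentation, not taken from this post.

```shell
# Authenticate to HCX Manager; the session token comes back in the
# x-hm-authorization response header (-D - dumps the response headers)
curl -sk -D - -o /dev/null \
  -X POST 'https://hcx.lab.local/hybridity/api/sessions' \
  -H 'Content-Type: application/json' \
  -d '{"username":"administrator@vsphere.local","password":"<password>"}'

# Reuse the token on subsequent calls, e.g. to query migrations
curl -sk -X POST 'https://hcx.lab.local/hybridity/api/migrations?action=query' \
  -H 'x-hm-authorization: <token-from-above>' \
  -H 'Content-Type: application/json' -d '{}'
```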

Upgrading Clustered vRLI Deployment

In this post I will walk through the steps of upgrading a clustered vRLI deployment. Before preparing for the upgrade, make sure to read the VMware documentation for the supported upgrade path.

One very important consideration before you start upgrading vRLI:

Upgrading vRealize Log Insight must be done from the master node’s FQDN. Upgrading using the Integrated Load Balancer IP address is not supported.

To start the vRLI upgrade, log in to the web interface of the master node, navigate to Administration > Cluster, and click the Upgrade Cluster button. Read More

Configuring AD Authentication in vRealize Log Insight

vRealize Log Insight supports 3 authentication methods:

  • Local authentication.
  • VMware Identity Manager authentication.
  • Active Directory authentication.

You can use more than one method in the same deployment, and users then select the type of authentication to use at login.

To add AD authentication to vRLI, log in to the web interface and navigate to the Administration > Authentication page.


Switch to the Active Directory tab and toggle the “Enable Active Directory support” button.


Specify your domain-related details and hit the Test Connection button to check whether vRLI is able to talk to AD. Hit the Save button if the test is successful. Read More

Scaling Up Standalone vRealize Log Insight Deployment

vRealize Log Insight can be deployed as a standalone or as a clustered solution. In a clustered deployment the first node is the master node and the remaining nodes are termed worker nodes. The process of scaling up is pretty straightforward, and in this post I will walk through the steps of doing so.

A few things you should consider before expanding a vRealize Log Insight deployment:

  • vRealize Log Insight does not support WAN clustering (also called geo-clustering or remote clustering). All nodes in the cluster should be deployed in the same Layer 2 LAN.
  • Configure a minimum of three nodes in a vRealize Log Insight cluster. A 2-node cluster is not supported.
  • Verify that the versions of the vRealize Log Insight master and worker nodes are the same. Do not add an older-version vRealize Log Insight worker to a newer-version vRealize Log Insight master node.
  • External load balancers are not supported for vRealize Log Insight clusters. You need to use the vRealize Log Insight integrated load balancer (ILB).

Let’s jump into lab to see the process in action.

In this post I am not including the deployment steps for my first vRLI instance, as it's a straightforward process. If you are still interested in seeing the deployment steps, you can follow this old post of mine. Read More

Distributed vRA Automated Upgrade via vRLCM

In this post I will walk through the steps of upgrading a distributed vRA 7.4 environment to v7.5. This is a continuation of my earlier post, where I deployed vRA 7.4 via vRLCM.

Upgrade Prerequisites

This post assumes that you have met all the prerequisites of the vRA upgrade mentioned in this document.

Important: If you are upgrading a distributed environment, make sure you have disabled the secondary members of the pool and removed all monitors for the pool members during the upgrade process.

To upgrade a vRA deployment, log in to vRLCM, navigate to Home > Environments, and click View Details. Read More

Cancelling Request in vRealize Suite Lifecycle Manager via API

vRLCM is a great tool, but the one shortcoming still present in v2.0 is the inability to cancel a running task via the GUI. I faced this situation when I was trying to add a remote collector node to an existing vROps deployment and the task kept running for more than 4 hours.

While searching the internet for how to stop/cancel/delete a request in vRLCM, I came across this thread on the VMware Code website, where it was mentioned that it's not possible from the GUI and we need to use the REST API.

The steps below show how to use the vRLCM API. Read More

vRA Distributed Install using vRealize Suite Lifecycle Manger

In the first post of this series, I talked briefly about what vRealize Suite Lifecycle Manager is and its capabilities. I also covered the installation and initial configuration settings of the appliance.

In this post I will walk through the steps of deploying a vRA 7.4 distributed install in an automated fashion using vRLCM.

Before trying vRLCM, I did a vRA distributed install manually because I wanted to understand the flow of a distributed install. If you are new to this topic, I would suggest reading the posts below before you start using vRLCM to automate deployments: Read More

Installing & Configuring vRealize Suite Life Cycle Manager 2.0

vRealize Suite Life Cycle Manager 2.0 was released in September 2018 and with this release a lot of new features were added. Please refer to this post to learn What’s new in vRLCM 2.0.

What is vRealize Suite Lifecycle Manager?

vRealize Suite Lifecycle Manager automates install, configuration, upgrade, patching, configuration management, drift remediation and health monitoring from within a single pane of glass, and automates Day 0 to Day 2 operations of the entire vRealize Suite, enabling a simplified operational experience for customers. Read More

vRA 7.4 Distributed Install: Part 4: vRA Distributed Install

In the last post of this series, I talked about how to configure an NSX-based load balancer for the vRA environment. In this post I will walk through the vRA appliance deployment.

If you are not following along with this series, I recommend reading the earlier posts from the links below:

1: Introduction & Reference Architecture

2: Lab Setup

3: Load Balancer Configuration

Download the vRA 7.4 appliance and deploy 2 instances of the vRA VM.

Once both appliances boot up, connect to the VAMI of the first appliance by typing https://<vra1-fqdn>:5480/ in a browser. Read More

vRA 7.4 Distributed Install: Part 3: Load Balancer Configuration

In the last post of this series, I talked about my lab setup. In this post I will walk through the load balancer configuration that needs to be in place to support the distributed install.

If you are not following along with this series, I recommend reading the earlier posts from the links below:

1: Introduction & Reference Architecture

2: Lab Setup

Although it’s not mandatory to have the load balancer configured when kicking off the distributed install, as it can be configured after the vRA deployment, it is recommended to have it in place before attempting the install. Read More