vRA Distributed Install using vRealize Suite Lifecycle Manager

By | 11/02/2019

In the first post of this series, I talked briefly about what vRealize Suite Lifecycle Manager is and its capabilities. I also covered the installation and initial configuration of the appliance.

In this post I will walk through the steps of deploying a vRA 7.4 distributed install in an automated fashion using vRLCM.

Before trying vRLCM, I did a vRA distributed install manually because I wanted to understand the flow of a distributed install. If you are new to this topic, then I would suggest reading the posts below before you start using vRLCM to automate deployments: read more

Installing & Configuring vRealize Suite Life Cycle Manager 2.0

By | 11/02/2019

vRealize Suite Life Cycle Manager 2.0 was released in September 2018, and with this release a lot of new features were added. Please refer to this post to learn what’s new in vRLCM 2.0.

What is vRealize Suite Lifecycle Manager?

vRealize Suite Lifecycle Manager automates installation, configuration, upgrade, patching, configuration management, drift remediation, and health monitoring from within a single pane of glass. It automates Day 0 to Day 2 operations of the entire vRealize Suite, enabling a simplified operational experience for customers. read more

vRA 7.4 Distributed Install: Part 4: vRA Distributed Install

By | 10/02/2019

In the last post of this series, I talked about how to configure an NSX-based load balancer for the vRA environment. In this post I will walk through the vRA appliance deployment.

If you are not following along with this series, then I recommend reading the earlier posts from the links below:

1: Introduction & Reference Architecture

2: Lab Setup

3: Load Balancer Configuration

Download the vRA 7.4 appliance and deploy two instances of the vRA VM.

Once both appliances boot up, connect to the VAMI of the first appliance by typing https://<vra1-fqdn>:5480/ read more

vRA 7.4 Distributed Install: Part 3: Load Balancer Configuration

By | 10/02/2019

In the last post of this series, I talked about my lab setup. In this post I will walk through the load balancer configuration that needs to be in place to support the distributed install.

If you are not following along with this series, then I recommend reading the earlier posts from the links below:

1: Introduction & Reference Architecture

2: Lab Setup

Although it’s not mandatory to have the load balancer configured when kicking off the distributed install, as we can configure it post vRA deployment, it is recommended to configure it before attempting the install. read more

vRA 7.4 Distributed Install: Part 2-Lab Setup

By | 09/02/2019

In the last post of this series, I gave a high-level overview of the vRA distributed installation. In this post I will be discussing my lab setup.

Management Cluster

In my management cluster I have vSphere 6.5 installed, and vCenter is deployed with an embedded PSC. I have a total of 5 hosts in the management cluster.


Host Details:


VM/Appliance Details:

  • 2x vRealize Automation 7.4 Appliances
  • 2x Windows Servers for IaaS Web
  • 2x Windows Servers for the Management Service (Active / Passive)
  • 2x Windows Servers for the DEMs/Agents


Windows Template Specifications

I deployed each of the Windows VMs using a template that was configured as follows:

1: Static IP set and joined to the Windows domain.  read more

vRA 7.4 Distributed Install: Part 1-Introduction

By | 09/02/2019

vRA 7.x brought a lot of enhancements with it, and one of the major enhancements was the simplicity of deploying the setup, which was very complex up to version 6.x.

The second major enhancement was cutting the overall footprint of vRA. For a vRA 6.x implementation, we needed at least 8 VAs to form the core services (excluding the IaaS components). This limitation no longer exists with a 7.x implementation.

Now a single pair of vRA VAs forms the core services. In a distributed install, the load-balanced VAs deliver vRA’s framework services, Identity Manager, database, vRO, and RabbitMQ. All these services are clustered and sit behind a single load balancer VIP and a single SSL cert. read more

AHV Networking: Part 4: Configuring OVS For Best Performance

By | 25/01/2019

There is no dedicated storage network needed with Nutanix, as AHV leverages the data network as the backplane for storage. In AHV-based deployments, the CVM, hypervisor, and guest VMs connect to the physical network via Open vSwitch (OVS).

An instance of OVS is present on each AHV host, and all instances of OVS in a cluster form a single logical switch (somewhat similar to the VMware vDS concept).

In a default AHV installation, all the interfaces present in the NX node are grouped together in a single bond called br0-up. A typical NX node ships with 2×10 GbE and 2×1 GbE interfaces.  read more
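For best performance, the usual recommendation is to keep only the 10 GbE uplinks in the bond so storage traffic never falls back to the 1 GbE interfaces. A minimal sketch of how that can be done with `manage_ovs` from a CVM is shown below; the bridge and bond names (`br0`, `br0-up`) are the AHV defaults and may differ in your environment, so verify with `show_uplinks` first.

```shell
# Run from a CVM (assumes default bridge br0 / bond br0-up).
# 1: Check which NICs are currently part of the bond on every host.
allssh manage_ovs show_uplinks

# 2: Keep only the 10 GbE interfaces in the bond (drops the 1 GbE NICs).
#    Run on each CVM, or prefix with allssh to apply cluster-wide.
manage_ovs --bridge_name br0 --interfaces 10g update_uplinks
```

These commands change live network configuration, so run them on one host at a time in a production cluster.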

AHV Networking: Part 3: Change OVS Bond Mode

By | 25/01/2019

In the last post of the AHV Networking series, we learnt the basics of the various bond modes that are available with OVS in AHV. In this post we will learn how to change the bond mode configuration.

Let’s jump into the lab and start firing some commands.

1: Verify current bond mode.

SSH to any of the AHV hosts in the cluster and run the command: ovs-appctl bond/show

This command shows the currently configured bond mode and the member interfaces present in the bond.


Alternatively, you can connect to a CVM and run the command allssh ssh root@ ovs-vsctl show to fetch more information about a bond.  read more
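Putting the verification commands together with the change itself, a sketch of the workflow looks like this; the bond name `br0-up` is the AHV default and may differ in your cluster:

```shell
# On an AHV host: show the current bond mode and member interfaces.
ovs-appctl bond/show br0-up

# Switch the bond to balance-slb
# (other valid values: active-backup, balance-tcp).
ovs-vsctl set port br0-up bond_mode=balance-slb

# Confirm the change took effect.
ovs-appctl bond/show br0-up
```

Check the Nutanix AHV networking documentation for the per-mode prerequisites (balance-tcp, for example, also requires LACP on the physical switch).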

AHV Networking: Part 2: Understanding OVS Bond Mode

By | 25/01/2019

In the last post of this series we learnt a few basics of AHV networking. In this post we will learn about network load balancing in Nutanix.

Nutanix networking is based on OVS, and networks are configured via Prism/ACLI. OVS supports 3 bond modes for network load balancing.

1: Active/Backup

By default, the bond mode is Active/Backup when AHV is installed. In this mode, VM traffic is sent over only one of the physical uplinks; all other uplinks are passive and become active only when the active uplink fails. read more

AHV Networking: Part 1: Basics

By | 10/01/2019

AHV Networking Overview

AHV uses Open vSwitch (OVS) to connect the CVM, the hypervisor, and guest VMs to each other and to the physical network on each node. When we install AHV, an instance of OVS is created on that host, and all instances of OVS across a cluster combine to form a single logical switch.

What is Open vSwitch (OVS)

OVS is an open-source software switch implemented in the Linux kernel and designed to work in a multiserver virtualization environment. OVS is a layer-2 learning switch that maintains a MAC address table. read more

Unregistering a Cluster from Prism Central

By | 29/12/2018

Once a cluster has been registered to Prism Central, unregistering it via the Prism UI is no longer available. This option was removed to reduce the risk of accidentally unregistering a cluster, because several features require Prism Central to run your clusters.

If a cluster is unregistered from Prism Central, not only will these features not be available but the configuration for them may also be erased.

Unregistering a cluster can be done via the CLI. Please follow the steps below to remove a cluster from PC. read more
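As a preview of the CLI procedure, the sketch below shows the general shape of the commands, run from a CVM in the cluster being unregistered. The PC IP and password are placeholders; check the current Nutanix documentation before running this, as the exact flags can vary between AOS versions.

```shell
# On a CVM of the cluster to be unregistered:
# confirm the current registration state.
ncli multicluster get-cluster-state

# Remove the cluster from Prism Central
# (placeholders: <pc-ip>, <password>).
ncli multicluster remove-from-multicluster \
    external-ip-address-or-svm-ips=<pc-ip> \
    username=admin password=<password> force=true
```

Remember that configuration for PC-dependent features may be erased by this operation, so take stock of those features first.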

Scaling Out Prism Central on AHV

By | 29/12/2018

In an earlier post we learnt how to deploy Prism Central on AHV using the 1-Click deployment. To keep things simple, I deployed only one PC VM, as I wanted to test how PC scale-out works.

If Prism Central is deployed as a single VM, we can expand it to three VMs. PC scale-out helps in increasing capacity and resiliency.

Note: Prism Central scaling is supported on AHV and ESXi clusters only.

To scale out a Prism Central instance, log in to Prism Central and select the “Prism Central Management” option from the gear icon. read more

Prism Central Upgrade Steps

By | 29/12/2018

In the last post I demonstrated the Prism Central 1-Click installation process. In this post I will walk through the 1-Click upgrade process.

Before you start your Prism Central upgrade, there are a few pre-requisites that need to be met. The screenshot below, taken from the Prism Central Admin guide, lists the requirements.

[Screenshot: Prism Central upgrade pre-requisites]

To upgrade Prism Central to a higher version, log in to Prism Central, click the gear icon, and select “Upgrade Prism Central”.


If your Prism Central has access to the internet, you can download the upgrade directly from the support portal.  read more

Prism Central Deployment on AHV

By | 29/12/2018

What is Prism Central?

Prism Central is software that provides centralized infrastructure management, one-click simplicity, and intelligent operations. Prism Central runs as a separate instance composed of either a single VM or a set of three VMs.

What does Prism Central provide?

  • Manage multiple clusters from a single pane of glass.
  • Single sign-on for all registered clusters.
  • Entity Explorer to search various items.
  • Global alerts and notifications.
  • Multi-cluster analytics dashboard.
  • Dashboard customization.
  • Capacity forecasting and planning.

Prism Central is a must-have tool for every Nutanix administrator with a multi-cluster Nutanix environment. I am not going to explain every feature of Prism Central here; I will write a separate blog post on that. In this post I will walk through the installation procedure for Prism Central.   read more

My HCI Lab with Nutanix Community Edition-Part 4: Deploy Multi Node Cluster

By | 22/12/2018

In an earlier post of this series, we learnt how to deploy a single-node cluster. In this post we will learn how to deploy a multi-node cluster using Community Edition.

If you are not following along with this series, then I recommend reading the earlier posts from the links below:

1: Nutanix Community Edition Introduction

2: Lab Setup

3: Deploy Single Node Cluster

During the lab setup, I created a template VM for faster deployment of CE VMs. To create a multi-node cluster, we need to deploy at least 3 VMs, and during deployment we need to make sure not to select “Create single-node cluster”. read more
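Once the three CE VMs are imaged and the single-node option has been skipped on each, the cluster is formed from one of the CVMs. A minimal sketch, with the CVM IPs as placeholders:

```shell
# SSH to any one CVM as the nutanix user, then create the cluster
# from the three CVM IPs (placeholders below).
cluster -s <cvm1-ip>,<cvm2-ip>,<cvm3-ip> create

# After creation, verify that all services are up.
cluster status
```

Cluster creation takes several minutes; wait for `cluster status` to report all services as up on every CVM before logging in to Prism.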