AHV Networking: Part 4: Configuring OVS For Best Performance

Nutanix requires no dedicated storage network, as AHV uses the data network as the backplane for storage. In AHV-based deployments, the CVM, the hypervisor, and guest VMs connect to the physical network via Open vSwitch (OVS).

An instance of OVS is present on each AHV host, and all OVS instances in a cluster form a single logical switch (somewhat similar to the VMware vDS concept).

In a default AHV installation, all the interfaces present in the NX node are grouped together in a single bond called br0-up. A typical NX node ships with two 10 GbE and two 1 GbE interfaces.

The following diagram illustrates the networking configuration of a single host immediately after imaging.

ahv-default-nw.png

Although the above configuration works just fine in most cases, as a best practice Nutanix recommends separating the 10 GbE and 1 GbE interfaces into separate bonds to ensure that CVM and user VM traffic always traverses the fastest possible link.
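The split itself can be done from any CVM with the manage_ovs utility. A minimal sketch, assuming the default bridge br0 and bond br0-up, that a second bridge (br1 here) has already been created on each host for the 1 GbE NICs, and noting that flag names can vary slightly between AOS versions:

```shell
# Run from any CVM; allssh applies the change on every CVM in the cluster.
# Keep only the 10 GbE NICs in the default bond br0-up on bridge br0:
allssh manage_ovs --bridge_name br0 --bond_name br0-up --interfaces 10g update_uplinks

# Move the 1 GbE NICs to a separate bridge (br1 must already exist on each host):
allssh manage_ovs --bridge_name br1 --bond_name br1-up --interfaces 1g update_uplinks
```

Always check the current AHV Networking best practices guide before running this on a production cluster, since uplink changes can briefly interrupt connectivity.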

AHV Networking: Part 3: Change OVS Bond Mode

In the last post of the AHV Networking series, we learnt the basics of the various bond modes available with OVS in AHV. In this post we will learn how to change the bond mode configuration.

Let's jump into the lab and start firing some commands.

1: Verify current bond mode.

SSH to any of the AHV hosts in the cluster and run the command: ovs-appctl bond/show

This command shows the currently configured bond mode and the member interfaces present in the bond.

ahv-nw-2

Alternatively, you can connect to a CVM and run the command allssh ssh root@192.168.5.1 ovs-vsctl show to fetch more information about the bond.

ahv-nw-3.PNG

2: Change Bond Mode from Active/Backup to Balance-SLB

To change the bond from active-backup to balance-slb, we can use the allssh command to update the bond configuration on all AHV hosts in the cluster in one shot. The command to do this is: allssh ssh root@192.168.5.1 ovs-vsctl set port br0-up bond_mode=balance-slb

ahv-nw-4.PNG

3: Verify that the bond mode has changed

Command: allssh ssh root@192.168.5.1 ovs-vsctl show

ahv-nw-5.PNG

4: Change Rebalance Interval

In balance-slb mode, the default rebalance interval is 10 seconds, which is too short.
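The excerpt is cut short here, but the interval change itself is a one-liner. A hedged sketch, reusing the allssh pattern from step 2 and assuming the bond is named br0-up; the OVS option takes milliseconds, and 30 seconds is a commonly recommended value:

```shell
# Raise the SLB rebalance interval from 10 s to 30 s on every host
allssh ssh root@192.168.5.1 ovs-vsctl set port br0-up other_config:bond-rebalance-interval=30000
```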

AHV Networking: Part 2: Understanding OVS Bond Mode

In the last post of this series we learnt a few basics of AHV networking. In this post we will learn about network load balancing in Nutanix.

Nutanix networking is based on OVS, and networks are configured via Prism/aCLI. OVS supports three bond modes for network load balancing.

1: Active/Backup

By default, the bond is in active-backup mode when AHV is installed. In this mode, VM traffic is sent over only one physical uplink; all other uplinks are passive and become active only when the active uplink fails.

A typical NX node has two 10 GbE and two 1 GbE NICs, all of them aggregated together in a bond called bond0 (older AOS) or br0-up (newer AOS).

In this mode, the maximum throughput of all VMs running on a Nutanix node is limited to 10 Gbps. Active-backup mode is the easiest to configure, and no additional configuration is needed on the upstream switches.
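If you ever need to set this mode explicitly (for example, to revert from another mode), it is a single OVS command. A sketch, assuming the bond is named br0-up as on newer AOS:

```shell
# Run on an AHV host: force the bond back to active-backup
ovs-vsctl set port br0-up bond_mode=active-backup

# Check which uplink is currently carrying traffic
ovs-appctl bond/show br0-up
```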

AHV Networking: Part 1: Basics

AHV Networking Overview

AHV uses Open vSwitch (OVS) to connect the CVM, the hypervisor, and guest VMs to each other and to the physical network on each node. When we install AHV, an instance of OVS is created on that host, and all instances of OVS across a cluster combine to form a single logical switch.

What is Open vSwitch (OVS)?

OVS is an open source software switch implemented in the Linux kernel and designed to work in a multiserver virtualization environment. OVS is a layer-2 learning switch that maintains a MAC address table.

OVS has virtual ports to which the hypervisor host, CVMs, and user VMs connect. To learn more about OVS, please refer to this article.

The diagram below shows a high-level overview of the OVS architecture in general.

ovs arch.PNG

Before moving forward in this article, let's revisit a few important terms related to OVS.

  • Bridges: A bridge is a virtual switch that manages traffic between physical and virtual network interfaces.
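To make the bridge concept concrete, here is a generic (non-Nutanix) OVS sketch; the names br-test, eth1, and vnet0 are placeholders for illustration:

```shell
# Create a bridge and attach a physical uplink and a VM tap interface
ovs-vsctl add-br br-test
ovs-vsctl add-port br-test eth1    # physical NIC (uplink)
ovs-vsctl add-port br-test vnet0   # VM virtual interface
ovs-vsctl show                     # inspect the resulting topology
```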

Unregistering a Cluster from Prism Central

Once a cluster has been registered to Prism Central, unregistering it via the Prism UI is no longer possible. This option was removed to reduce the risk of accidentally unregistering a cluster, because several features require Prism Central to manage your clusters.

If a cluster is unregistered from Prism Central, not only will these features be unavailable, but their configuration may also be erased.

Unregistering a cluster can be done via the CLI. Please follow the steps below to remove a cluster from PC.

Note: The steps below assume that you have not configured Nutanix Calm, the Self-Service Portal, or micro-segmentation in your Prism Central. If any of these are configured, please follow KB 4944 for the unregistration process.

1: Log on to any Controller VM of the registered cluster and verify the cluster is healthy by running the command: cluster status

2: Enable the hidden “remove-from-multicluster” option in nCLI by running the command: ncli -h true

cls-unreg-1

3: Unregister the cluster from Prism Central by running command: 

ncli> multicluster remove-from-multicluster external-ip-address-or-svm-ips=pc-name-or-ip username=pc-username password=pc-password force=true

cls-unreg-2.PNG

Cluster unregistration takes a minute or so.
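To confirm the cluster is no longer registered, you can query the registration state from a CVM; a sketch, assuming a recent AOS release where this nCLI command is available:

```shell
# Should no longer list the Prism Central instance after unregistration
ncli multicluster get-cluster-state
```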

Scaling Out Prism Central on AHV

In an earlier post we learnt how to deploy Prism Central on AHV using 1-Click deployment. To keep things simple, I deployed only one PC VM, as I wanted to test how PC scale-out works.

If Prism Central is deployed as a single VM, we can expand it to three VMs. PC scale-out increases both capacity and resiliency.

Note: Prism Central scaling is supported on AHV and ESXi clusters only.

To scale out a Prism Central instance, log in to Prism Central and select the “Prism Central Management” option from the gear icon.

pc-expand-1.PNG

Click on the Scale Out PC button.

pc-expand-2

Note: Scale-out is a one-way process; once PC is expanded to 3 VMs, you can’t revert it back to 1 VM.

Once you have read the warning below carefully, hit Continue to proceed.

pc-expand-3

First, you have to provide a VIP for the PC VMs, as they will be clustered once deployed.

Prism Central Upgrade Steps

In the last post I demonstrated the Prism Central 1-Click installation process. In this post I will walk through the 1-Click upgrade process.

Before you start your Prism Central upgrade, there are a few prerequisites that need to be met. The screenshot below, taken from the Prism Central Admin Guide, lists the requirements.

upgrade pre-req.PNG

To upgrade Prism Central to a higher version, log in to Prism Central, click the gear icon, and select “Upgrade Prism Central”.

pc-upgrade-1.PNG

If your Prism Central has access to the internet, you can download the upgrade directly from the support portal.

If Prism Central is not connected to the internet, you need to provide the upgrade binaries manually. Upgrade files are available here.

pc-upgrade-2

Once the upgrade binaries have been downloaded and staged, hit the Upgrade button to start the upgrade.

pc-upgrade-4

Click Yes to proceed.

pc-upgrade-5

Sit back and relax; the Prism Central upgrade takes at least 15 minutes.

pc-upgrade-7.PNG

If you are curious what’s happening behind the scenes, you can monitor the backend tasks from the Tasks page.

Prism Central Deployment on AHV

What is Prism Central?

Prism Central is software that provides centralized infrastructure management, one-click simplicity, and intelligent operations. It runs as a separate instance composed of either a single VM or a set of three VMs.

What does Prism Central provide?

  • Manage multiple clusters from a single pane of glass.
  • Single sign-on for all registered clusters.
  • Entity Explorer to search various items.
  • Global alerts and notifications.
  • Multi-cluster analytics dashboard.
  • Dashboard customization.
  • Capacity forecast and planning.

Prism Central is a must-have tool for every Nutanix administrator with a multi-cluster Nutanix environment. In this post I am not going to explain each feature of Prism Central; I will write a separate blog post on that. Here I will walk through the installation procedure for Prism Central.

Prism Central can be deployed directly from Prism Element. You can use the one-click deploy method or the manual (imaging service) method. In this post I will demonstrate the one-click method.

My HCI Lab with Nutanix Community Edition-Part 4: Deploy Multi Node Cluster

In an earlier post of this series, we learnt how to deploy a single-node cluster. In this post we will learn how to deploy a multi-node cluster using Community Edition.

If you are not following along with this series, I recommend reading the earlier posts from the links below:

1: Nutanix Community Edition Introduction

2: Lab Setup

3: Deploy Single Node Cluster

During lab setup, I created a template VM for faster deployments of CE VMs. To create a multi-node cluster, we need to deploy at least 3 VMs, and during deployment we need to make sure not to select “Create single-node cluster”.

mn-1.PNG

Once all 3 VMs boot up, connect to any one of the CVMs and check the cluster state by running the command: cluster status. You will see a message that the cluster is currently unconfigured.

mn-2.PNG

To create the cluster, we need to run the command: cluster -s <cvm1 IP>,<cvm2 IP>,<cvm3 IP> create

This command triggers cluster creation and starts the necessary services.
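Once creation finishes, it is worth confirming that all services came up on every CVM; for example:

```shell
# Run from any CVM after cluster creation completes
cluster status

# Quick way to spot any service that is not yet UP
cluster status | grep -v UP
```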

How to Change CVM Memory in Nutanix CE Platform

In the last post of this series, I covered the installation of a Nutanix CE single-node cluster. In this post I will walk through the steps to reduce CVM memory.

By default, when you deploy Nutanix CE, the CVM is configured with 16 GB RAM. You can verify this by logging in to Prism and navigating to Home > VM view.

cvm-mem-1.PNG

Or you can SSH to the AHV host and run the command: virsh dominfo <cvm name>

Now suppose you allocated 20 GB RAM to the VM where Nutanix CE is installed; the CVM will consume 16 GB of it, leaving only 4 GB for the AHV host. For lab purposes, we can reduce the CVM memory to 12 GB or 8 GB.

Follow the steps below to change the CVM memory.

1: Connect to the CVM via SSH and stop the cluster by running the command: cluster stop

cvm-mem-4.PNG

Wait for a clean shutdown of the cluster.

cvm-mem-5.PNG

2: Once the cluster is stopped, connect to the AHV host via SSH and run the command virsh list --all to fetch the CVM name.

cvm-mem-2

Additionally, you can run the command virsh dominfo <cvm name> to see details of the CVM.

cvm-mem-3

3: Stop the CVM by running the command: virsh shutdown <cvm name>

cvm-mem-6.PNG

4: Reduce the CVM memory by running the commands below:

  • virsh setmem <cvm name> 12G --config
  • virsh setmaxmem <cvm name> 12G --config

cvm-mem-7.PNG

Verify that the new memory settings have been applied.
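The excerpt ends here; to complete the change you still need to bring everything back up. A sketch, following the same placeholder convention used in the steps above:

```shell
# On the AHV host: start the CVM with its new memory size
virsh start <cvm name>

# Then SSH back into the CVM and restart cluster services
cluster start
```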