Prepare Host Clusters for NSX

By | 24/06/2018

In this post, I will cover the following topics from Objective 1.2 of the VCAP-NV Deploy exam:

  • Prepare vSphere Distributed Switching for NSX
  • Prepare a cluster for NSX
  • Add/Remove Hosts from cluster
  • Configure the appropriate teaming policy for a given implementation
  • Configure VXLAN Transport parameters according to a deployment plan

Let's get started.

                              Prepare vSphere Distributed Switching for NSX

NSX works only with the vSphere Distributed Switch (VDS), not with standard switches. Before you deploy NSX and start configuring things, make sure the VDS is fully configured and that port groups, uplinks, etc. have been migrated from the VSS to the VDS.

One of the most important requirements for NSX is a minimum MTU of 1600 bytes on the VDS. So before you start adding hosts to the VDS, make sure the appropriate MTU is already configured on it.

The 1600-byte requirement comes from the original Ethernet frame being wrapped (encapsulated) with additional VXLAN, UDP and IP headers, which increases its size; the result is called a VXLAN encapsulated frame.
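As a sanity check, the encapsulation overhead can be verified with vmkping once VXLAN has been configured (covered later in this post). A minimal sketch, assuming a hypothetical remote VTEP IP of 192.168.150.52:

```shell
# Send a non-fragmentable ping over the dedicated VXLAN TCP/IP stack.
# 1572-byte ICMP payload + 8-byte ICMP header + 20-byte IP header = 1600 bytes.
# If this fails while smaller sizes succeed, the MTU is misconfigured
# somewhere in the transport path.
vmkping ++netstack=vxlan -d -s 1572 192.168.150.52
```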

To verify/configure the MTU on the vDS, select the vDS from the list, navigate to Manage > Settings > Properties and hit the Edit button.

[Screenshot: vds-mtu.PNG]

Under the ‘Advanced’ tab, change the MTU to 1600 and hit OK.

[Screenshot: vds-mtu-2]
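The same setting can be double-checked from any ESXi host that is already attached to the vDS; a quick hedged sketch (switch names will differ in your environment):

```shell
# List distributed switches known to this host and filter for the MTU value
esxcli network vswitch dvs vmware list | grep -E 'Name:|MTU:'
```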

                                                     Prepare a cluster for NSX

Before you start consuming NSX services, ESXi hosts have to be prepared for NSX. During host preparation, NSX installs the necessary VIBs (kernel modules) on the hosts; these modules provide the NSX data plane functions such as VXLAN, distributed routing and the distributed firewall.

Before doing host preparation, ensure the following prerequisites have been met:

  • NSX Manager and NSX Controllers are deployed.
  • All ESXi hosts in the cluster have been added to the vDS.
  • Forward and reverse name resolution works and all DNS records are in place.
  • vCenter has been registered with NSX Manager.

To prepare ESXi hosts for NSX, click Networking & Security, navigate to Installation > Host Preparation, select the appropriate cluster and click the gear button to start the installation of the NSX VIBs.

[Screenshot: hostprep-1]

NSX will start pushing the required VIBs to the hosts.

[Screenshot: hostprep-2]

Make sure the VIBs are successfully installed on each host and the overall cluster/host status is green.

[Screenshot: hostprep-3]
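You can also confirm from the host's command line that the VIBs landed. A hedged sketch; the VIB names vary by NSX version (esx-vxlan/esx-vsip on older releases, a consolidated esx-nsxv on 6.3.3 and later):

```shell
# List installed VIBs and filter for the NSX kernel modules
esxcli software vib list | grep -E 'esx-vxlan|esx-vsip|esx-nsxv'
```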

                                        Add/Remove Hosts from cluster

When you add a new host to a cluster that is prepared for NSX, the VIBs are automatically pushed to the new host. The best way to add an ESXi host to an existing NSX-prepared cluster is:

  1. Add the host to vCenter, but not to the NSX-prepared cluster yet. Keep the host in maintenance mode.
  2. Add the new host to the same vDS that the other hosts of the NSX-prepared cluster are joined to.
  3. Move the new ESXi host into the cluster and wait for the VIB installation to complete.
  4. Once the VIB installation has completed, take the host out of maintenance mode.
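Steps 1 and 4 can be driven from the host's own CLI as well; a minimal sketch (the vDS join and cluster move in steps 2-3 still happen in vCenter):

```shell
# Step 1: keep the new host in maintenance mode while it is staged
esxcli system maintenanceMode set --enable true

# ...add the host to the vDS and move it into the prepared cluster in vCenter,
# then verify the NSX VIBs were pushed automatically:
esxcli software vib list | grep esx-

# Step 4: return the host to service once the VIBs are in place
esxcli system maintenanceMode set --enable false
```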

Remove Hosts from Cluster

To remove a host from an NSX-prepared cluster, follow the steps below:

  • Place the host in maintenance mode.
  • Drag and drop the host out of the cluster. This removes the NSX VIBs from the host.
  • Reboot the ESXi host.
  • Take the host out of maintenance mode.

Sometimes NSX fails to remove a VIB from an ESXi host, and in that situation we have to remove the VIBs manually via the command line. To do so, connect to the host over SSH and execute the commands below:

For NSX 6.2

For NSX 6.3 and later
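Assuming the commonly documented NSX-V VIB names (always verify what is actually installed with `esxcli software vib list` first), the removal commands look like this:

```shell
# NSX 6.2.x (and 6.3.x prior to 6.3.3): separate VIBs
esxcli software vib remove -n esx-vxlan -n esx-vsip

# NSX 6.3.3 and later: a single consolidated VIB
esxcli software vib remove -n esx-nsxv

# A reboot is required for the removal to take effect
reboot
```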

               Configure the appropriate teaming policy for a given implementation

A NIC teaming policy determines how network traffic is load balanced across the host uplinks. With a vDS, we have 5 NIC teaming policies that can be used.

While configuring VXLAN on ESXi hosts, we have to select a teaming policy for the VTEPs. The default "failover" policy creates only one VTEP per host, and all VXLAN traffic is sent over a single physical uplink. Other teaming policies are available as well, which load balance the traffic by sending it out via multiple uplinks.

To use NIC teaming for the VTEPs, ESXi hosts must have a minimum of two uplinks. These uplinks may be connected to the same switch or split across two switches (for redundancy).

The image below, taken from the NSX Reference Design Guide, gives a rough idea of which teaming policies can be used with NSX:

[Image: nsx9 – teaming policy options, NSX Reference Design Guide]

Multi-VTEP simply means having more than one VTEP kernel interface. In a multi-VTEP deployment, the VTEPs have a 1:1 mapping with the physical uplinks of the vSwitch; that means each VTEP sends/receives traffic on a specific pNIC.
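On a prepared host you can see how many VTEPs were created for the chosen teaming policy; a hedged sketch (these esxcli namespaces exist only after host preparation and VXLAN configuration):

```shell
# VMkernel interfaces bound to the dedicated VXLAN TCP/IP stack --
# with a multi-VTEP teaming policy, expect one vmk per physical uplink
esxcli network ip interface list --netstack=vxlan

# Summary of the VXLAN configuration on the vDS as seen by this host
esxcli network vswitch dvs vmware vxlan list
```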

This article explains the VTEP teaming policies beautifully with various examples.

           Configure VXLAN Transport parameters according to a deployment plan

I am not going to cover the VXLAN configuration steps here, as I have previously written an article on the same topic.

And that’s it for this post.

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing. Be sociable :)

Category: NSX