In the last post of this series we saw how to prepare ESXi hosts and clusters for NSX. In this post we will talk a little about VXLAN, its benefits, and how to configure VXLAN on ESXi hosts.

If you have missed earlier posts of this series you can read them from here:

1: Introduction to VMware NSX

2: Installing and Configuring NSX Manager

3: Deploying NSX Controllers

4: Preparing ESXi Hosts and Clusters

Let's start our discussion with what VXLAN is.

Virtual Extensible LAN (VXLAN) is an encapsulation protocol for running an overlay network on existing Layer 3 infrastructure. An overlay network is a virtual network that is built on top of existing network Layer 2 and Layer 3 technologies to support elastic compute architectures.

In VXLAN the original layer 2 frame is encapsulated in a User Datagram Protocol (UDP) packet and delivered over a transport network. This technology provides the ability to extend layer 2 networks across layer 3 boundaries and consume capacity across clusters.
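To make the encapsulation concrete, here is a small Python sketch, purely illustrative and not part of any NSX tooling, that builds and parses the 8-byte VXLAN header defined in RFC 7348 (a flags byte whose "I" bit marks the VNI as valid, reserved bytes, and a 24-bit VXLAN Network Identifier). The original Ethernet frame would follow this header inside the UDP payload.

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags(1) + reserved(3) + VNI(3) + reserved(1)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from the second 32-bit word of the header."""
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8

hdr = build_vxlan_header(5001)
print(len(hdr), parse_vni(hdr))  # 8 5001
```

The full on-wire packet is outer Ethernet + outer IP + UDP + this header + the original frame, which is why VXLAN is often described as "MAC-in-UDP" encapsulation.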

Why should we use VXLAN?

VXLAN enables you to create a logical network for your virtual machines across different networks. You can create a layer 2 network on top of your layer 3 networks.

If you come from a networking background, you know that traditional VLAN identifiers are 12 bits long, which limits a network to 4094 usable VLANs. If you are a cloud service provider, you are likely to hit this limit in the near future as your environment grows.

The primary goal of VXLAN is to extend the virtual LAN (VLAN) address space by adding a 24-bit segment ID and increasing the number of available IDs to 16 million. The VXLAN segment ID in each frame differentiates individual logical networks so millions of isolated Layer 2 VXLAN networks can co-exist on a common Layer 3 infrastructure. As with VLANs, only virtual machines (VMs) within the same logical network can communicate with each other.
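The arithmetic behind those numbers is simple enough to check directly (VLAN IDs 0 and 4095 are reserved, hence 4094 usable VLANs):

```python
# VLAN: 12-bit ID; IDs 0 and 4095 are reserved, leaving 4094 usable VLANs.
usable_vlans = 2**12 - 2

# VXLAN: 24-bit segment ID (VNI), roughly 16 million logical networks.
vxlan_segments = 2**24

print(usable_vlans, vxlan_segments)  # 4094 16777216
```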

VXLAN Benefits

  1. You can theoretically create as many as 16 million VXLANs in an administrative domain.
  2. You can enable migration of virtual machines between servers that exist in separate Layer 2 domains by tunneling the traffic over Layer 3 networks. This functionality allows you to dynamically allocate resources within or between data centers without being constrained by Layer 2 boundaries or being forced to create large or geographically stretched Layer 2 domains.

Now that you have some idea about VXLAN, let's jump into the lab and see how to configure it on ESXi hosts.

VXLAN can be configured by logging in to the vSphere Web Client and navigating to Networking & Security > Installation > Host Preparation.

You will see that the VXLAN status is "Not Configured". Click it, and a wizard will open to configure the VXLAN settings.


Provide the following info to configure the VXLAN Networking.

  • Switch – Select the vDS from the drop-down to which the new VXLAN VMkernel interface will be attached.
  • VLAN – Enter the VLAN ID to use for the VXLAN VMkernel interface. If you are not using any VLAN in your environment, enter "0"; traffic will then pass untagged.
  • MTU – The recommended minimum MTU value is 1600, which allows for the overhead incurred by VXLAN encapsulation.
  • VMKNic IP Addressing – You can specify either an IP pool or DHCP for IP addressing.


If you have not created an IP pool yet, you can do so by selecting "New IP Pool", which launches a wizard to create a new pool.

Provide a name for the pool and define the netmask, gateway, DNS servers, and so on, along with the range of IP addresses that this pool will hand out.
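Before typing the values into the wizard, it can be worth sanity-checking that they are consistent. The sketch below uses Python's standard `ipaddress` module with hypothetical pool values; substitute your own environment's addressing:

```python
import ipaddress

# Hypothetical pool values -- replace with your environment's addressing.
network = ipaddress.ip_network("192.168.110.0/24")
gateway = ipaddress.ip_address("192.168.110.1")
pool_start = ipaddress.ip_address("192.168.110.51")
pool_end = ipaddress.ip_address("192.168.110.60")

# The gateway and the whole pool range must sit inside the subnet.
assert gateway in network
assert pool_start in network and pool_end in network
assert pool_start <= pool_end

pool_size = int(pool_end) - int(pool_start) + 1
print(pool_size)  # 10
```

Size the pool for at least one address per VTEP VMkernel interface the cluster will create (more than one per host if your teaming policy creates multiple VTEPs).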


Once you hit OK, you will return to the original window. Select the pool which you have just created.

The next setting is the VMKNic teaming policy. This option defines the teaming policy used for bonding the physical NICs of the VTEP port group. The number of VTEPs created changes depending on the policy you select.

A very good read on the various teaming policies available in NSX can be found here.

I have left the teaming policy at the default, "Fail Over". Hit OK once you are done supplying all the info.


Now you will see that VXLAN preparation has started on your cluster.


Within a minute or so you will see the VXLAN status change from Busy to Configured.


The VXLAN configuration creates a new VMkernel port on each host in the cluster to act as the VXLAN Tunnel Endpoint (VTEP). You can verify this by selecting a host and navigating to Manage > Networking > VMkernel Adapters.


You can verify that the IPs allocated to these VMkernel interfaces come from your defined pool by navigating to the Logical Network Preparation tab > VXLAN Transport.


Next we need to configure the segment ID pool that identifies VXLAN networks:

On the Logical Network Preparation tab, click Segment ID, then click Edit to open the Segment ID pool dialog box and configure the ID pool.


Enter the segment ID pool range and click OK to complete.

Note: VMware NSX VNI IDs start from 5000.
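Since NSX reserves VNIs below 5000 and the VNI field is 24 bits wide, a proposed segment ID pool can be sanity-checked with a one-liner. A minimal sketch (the helper function name is my own, not an NSX API):

```python
# NSX for vSphere reserves VNIs below 5000; a segment ID pool must fall
# inside 5000..16777215 (the 24-bit VNI space).
VNI_MIN, VNI_MAX = 5000, 2**24 - 1

def valid_segment_pool(start: int, end: int) -> bool:
    """Check that a proposed segment ID pool stays inside the allowed range."""
    return VNI_MIN <= start <= end <= VNI_MAX

print(valid_segment_pool(5000, 5999))  # True
print(valid_segment_pool(4000, 5999))  # False: starts below 5000
```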


The next task is to configure a global transport zone:

A transport zone specifies the hosts and clusters that are associated with logical switches created in the zone. Hosts in a transport zone are automatically added to the logical switches that you create.

On the Logical Network Preparation tab, click Transport Zones, then click the green plus sign to open the New Transport Zone dialog box.


Provide a name for the transport zone and select the replication mode that suits your environment.

A little note on replication modes:

Unicast mode has no physical network requirements apart from the MTU; all replication is done by the VTEPs themselves. In NSX, unicast is the default replication mode. The trade-off is higher overhead on the source VTEP and UTEP.

Multicast mode offloads replication to the physical network. The VTEP never consults the NSX Controller instance; as soon as it receives broadcast, unknown unicast, or multicast traffic, it sends a single copy to the segment's multicast group and the physical network replicates it to all member VTEPs. Multicast therefore has the lowest overhead on the source VTEP, but it requires IGMP and multicast routing in the physical network.

Hybrid mode combines the two: local replication is offloaded to the physical network using L2 multicast (IGMP), while unicast is used between segments. It is not the default mode of operation in NSX for vSphere, but it is important at larger scale, since the configuration overhead and complexity of L2 IGMP is significantly lower than that of full multicast routing.


After clicking OK, you will see the newly created transport zone.


That's it for configuring VXLAN on ESXi hosts. In the next post of this series we will learn about logical switching.

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing. Be sociable :)

Posted in: NSX.
Last Modified: February 12, 2017