In the last post of this series we discussed uplink profiles. In this post we will learn about transport zones and their types.
If you are not following along with this series, I recommend reading the earlier posts from the links below:
What is transport zone in NSX-T?
As per the VMware documentation:
A transport zone is a container that defines the potential reach of transport nodes. Transport nodes are hypervisor hosts and NSX Edges that will participate in an NSX-T overlay.
What this means is that if two or more ESXi hosts configured as transport nodes participate in the same transport zone, then VMs on these different hosts can communicate with each other over the overlay network. Segregation of traffic is achieved through the N-VDS.
What is N-VDS?
N-VDS stands for NSX-Managed Virtual Distributed Switch. The main function of the N-VDS is to forward the traffic of the components (VMs, ESG, DLR, etc.) running on transport nodes; it forms the data plane of the transport nodes. When you add a transport node to a transport zone, the N-VDS associated with that transport zone is installed on the transport node.
On ESXi hosts, the N-VDS is implemented via the nsx-vswitch module. This module is installed and loaded into the kernel when you configure the ESXi hosts as fabric nodes.
```
[root@esxi-08:~] esxcfg-module -l | grep nsx-vswitch
nsx-vswitch              3     344
```
The pNICs that you attach to an N-VDS must not be shared with any other switch, such as a vSS, a vDS, or another N-VDS.
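Before assigning pNICs to the N-VDS, it can help to check which ones are still free. A minimal sketch from an SSH session on the host; the hostname and NIC names are from my lab and will differ in yours:

```shell
# List all physical NICs on the host (name, driver, link state, speed)
esxcli network nic list

# Show existing vSwitches and the uplinks they currently claim;
# any vmnic not listed as an uplink here is free for the N-VDS
esxcfg-vswitch -l
```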
Let's get back to transport zones and discuss the types of transport zones.
- Overlay Transport Zone: This transport zone can be used by both host transport nodes and NSX Edges.
- VLAN Transport Zone: This is used mainly by NSX Edge uplinks; the VLAN N-VDS is installed on the edge when the edge is added to a VLAN transport zone.
Before creating a transport zone, we need to create an IP pool containing a range of IP addresses that will be assigned to the Tunnel Endpoints (TEPs). TEPs identify the transport nodes on the overlay network. Remember VTEPs in NSX-v?
To create an IP pool navigate to Home > Inventory > Groups > IP Pools and click on the + button to create a new IP Pool.
Provide a name and a range of IPs for the TEPs. Hit the Add button to complete the wizard.
The next thing you need is an uplink profile, which we already created in the last post of this series.
To create a new transport zone, navigate to Home > Fabric > Transport Zones and click the + button to add a new transport zone.
Provide a name for the transport zone and the N-VDS.
N-VDS Mode: There are two modes for the N-VDS: Standard and Enhanced Datapath.
An enhanced datapath N-VDS has the performance capabilities to support NFV workloads, and it supports both VLAN and overlay networks.
Select the traffic type as Overlay and hit the Add button to complete the transport zone creation wizard.
Make sure the status of the transport zone reports as Up after creation.
Now we need to add transport nodes to the transport zone. I have not deployed any NSX-T Edge yet, so the only transport nodes I have at the moment are my ESXi hosts.
To add transport nodes, navigate to Fabric > Nodes > Transport Nodes and click on + Add button.
Under the General tab, provide a name for the host and select the ESXi host from the drop-down for the Node option.
Select the transport zone where you want to add this host and move it to the 'Selected' column. Do not hit the Add button yet.
Switch to the N-VDS tab.
- N-VDS Name: select the N-VDS which you created during transport zone creation.
- NIOC Profile: I selected the default profile available within NSX-T.
- Uplink Profile: the one you created earlier.
- IP Assignment: select the IP pool you created earlier.
- Physical NICs: this is where you do the actual mapping of a physical NIC to the uplink identifier defined in your uplink profile.
Repeat the process to add all hosts as transport nodes. Make sure that the configuration state reads Success and the status reads Up.
Now if we SSH into one of the ESXi hosts, we will see that there is a new internal switch, which allows all the VMs on the hosts that are part of the transport zone to communicate over the overlay network. To verify the existence of the new switch, run the command: esxcfg-vswitch -l
Also a new vmkernel port is added on the hosts.
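Both of these checks can be run from the same SSH session. The commands below are a sketch from my lab; the switch and vmkernel interface names on your hosts may differ:

```shell
# List all virtual switches; after the host becomes a transport node,
# an N-VDS (named as per your transport zone configuration) appears here
esxcfg-vswitch -l

# List vmkernel interfaces; a new vmk with an IP address from the
# TEP pool is created for overlay (tunnel endpoint) traffic
esxcfg-vmknic -l
```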
Now if you try to ping the tunnel endpoint of the second transport node (esxi-09 in my case) over the vxlan netstack, the ping should succeed if the overlay network is set up correctly.
```
[root@esxi-08:~] vmkping ++netstack=vxlan 192.168.109.232
PING 192.168.109.232 (192.168.109.232): 56 data bytes
64 bytes from 192.168.109.232: icmp_seq=0 ttl=64 time=0.850 ms
64 bytes from 192.168.109.232: icmp_seq=1 ttl=64 time=0.381 ms
64 bytes from 192.168.109.232: icmp_seq=2 ttl=64 time=0.408 ms

--- 192.168.109.232 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.381/0.546/0.850 ms
```
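Beyond basic reachability, it is worth confirming that the underlay can carry full-size encapsulated frames, since NSX-T requires an MTU of at least 1600 on the overlay network. A sketch using the TEP IP from my lab; the -d flag sets don't-fragment and -s sets the ICMP payload size:

```shell
# Ping the remote TEP with don't-fragment set and an oversized payload so
# the resulting packet approaches the required 1600-byte MTU; if this fails
# while a plain vmkping succeeds, the underlay MTU is too small
vmkping ++netstack=vxlan -d -s 1572 192.168.109.232
```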
And there you go. We have now tested VXLAN connectivity between the two transport nodes. It's time to move on to logical switching, which I will cover in the next post of this series.
I hope you enjoyed reading this post. Feel free to share it on social media if you found it worth sharing. Be sociable!