Layer 2 Bridging With NSX-T

In this post, I will discuss the Layer 2 Bridging functionality of NSX-T, along with use cases and design considerations to keep in mind when planning to implement this feature. Let’s get started.

NSX-T supports L2 bridging between overlay logical segments and VLAN-backed networks. When workloads connected to an NSX-T overlay segment require L2 connectivity to VLAN-connected workloads, or need to reach a physical device (such as a physical gateway, load balancer, or firewall), an NSX-T Layer 2 bridge can be leveraged. 

Use Cases of Layer 2 Bridging

A few of the use cases that come to mind are: 

1: Migrating workloads connected to VLAN port groups to NSX-T overlay segments: Customers looking to migrate workloads from legacy vSphere infrastructure to an SDDC can leverage Layer 2 Bridging to move their workloads seamlessly.

Some of the challenges associated with such migrations are re-IPing workloads and migrating firewall rules, NAT rules, etc. There are situations where a re-IP is not feasible because of application dependencies. With L2 bridging, you don’t have to worry about any of this, since workloads keep their existing IP addresses during the move.

2: Migrate from NSX-V to NSX-T: NSX-V is slowly being phased out, but there are a lot of customers who are still using it in their infrastructure. It’s possible to migrate workloads connected to NSX-V logical switches to NSX-T overlay segments. 

3: Leveraging NSX-T Gateway Firewall: VLAN-backed workloads can leverage the NSX security services by having the traffic routed over a T1 or T0 Gateway.

Before configuring the L2 bridge, I want to walk through my lab topology, which is shown below.

The left-hand side of the diagram shows the VLAN-based networks. There are three VLANs connected to the top-of-rack (ToR) switch: VLAN 201 hosts the App VMs, VLAN 202 the Web VMs, and VLAN 203 the Database VMs. The subnets corresponding to these VLANs are marked in the diagram as well.

The right-hand side of the diagram shows the NSX-T based overlay networks. The subnets of the overlay segments match those of the VLAN-based networks; the only difference between the two is the default gateway IP. For the VLAN-backed networks it’s .1, while for the overlay segments it’s .253.

Once the L2 Bridge is configured, I will demonstrate the migration of the three-tier application to the NSX-T environment without making any configuration changes to the App, Web & DB virtual machines.

Now it’s time to jump into the lab and see things in action.

Step 1: Create Trunk Portgroup in vSphere

Create a new trunk port group in vSphere, allowing the VLANs that you are planning to bridge with the overlay. 

Set Promiscuous mode and Forged transmits on this port group to Accept.

Set the teaming policy for this port group to explicit failover order, choosing one uplink as active and the other as standby.
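
If you prefer to script the port group creation, here is a minimal sketch using govc; the DSwitch and Bridge-Trunk names are placeholders from my lab, and I am assuming a govc build that supports the -vlan-mode and -vlan-range flags. The security and teaming settings described above can then be adjusted in the vSphere Client.

# govc dvs.portgroup.add -dvs DSwitch -type earlyBinding -vlan-mode trunking -vlan-range 201-203 Bridge-Trunk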

Step 2: Enable Reverse Path Filter on ESXi host

Run the following command to enable the reverse path filter on the ESXi hosts where the Edge VMs are running:

# esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1

Note: After running the above command, disable and re-enable Promiscuous mode on the trunk port group so that the reverse filter becomes active.
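
To confirm that the setting took effect, you can read it back on the same host:

# esxcli system settings advanced list -o /Net/ReversePathFwdCheckPromisc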

Step 3: Create VLAN Transport Zone for Bridging

Create a new VLAN transport zone for edge bridging. This transport zone will be added to the Edge VMs in the next step. 
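
If you prefer the API, a transport zone of type VLAN can also be created through the NSX-T Manager API. Below is a minimal sketch; the manager address and the Bridge-VLAN-TZ name are placeholders, and the exact payload may vary slightly between NSX-T versions:

# curl -k -u admin -X POST https://<nsx-manager>/api/v1/transport-zones \
    -H "Content-Type: application/json" \
    -d '{"display_name": "Bridge-VLAN-TZ", "transport_type": "VLAN"}'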

Step 4: Associate Edge Nodes with Bridge TZ

Bridging between the physical and virtual networks is performed by the Edge nodes, so we must add the edges to the bridge transport zone we just created.

To associate the bridge transport zone with the Edge VMs, edit the Edge VM properties under System > Configuration > Fabric > Nodes > Edge Transport Nodes.

Click on Add Switch to add a new N-VDS on the edge.

Provide a name for the N-VDS and add the Bridge TZ here.

For Uplink Profile, select nsx-edge-single-uplink-profile, and under uplink mapping, map the edge uplink to the trunk port group created in Step-1.
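
After saving, you can verify that the edge transport node now carries the additional N-VDS and the bridge transport zone by pulling its configuration from the API (the manager address and node UUID are placeholders):

# curl -k -u admin https://<nsx-manager>/api/v1/transport-nodes/<edge-node-uuid>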

Step 5: Create Edge Bridge Profile

The Edge bridge profile is the configuration item that ties the Edge bridge to the overlay segments that will take part in bridging.

To create an Edge Bridge Profile, navigate to Networking > Connectivity > Segments > Edge Bridge Profiles and click on Add Edge Bridge profile.

Provide a name for the profile and specify the primary and backup edge nodes. Make sure failover is set to Preemptive. 

Also, select the edge cluster with which this profile will be associated. 
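
For reference, the same profile can be defined through the NSX-T Policy API. The sketch below assumes the default site and enforcement point and uses placeholder edge cluster and node IDs, so treat it as illustrative rather than exact:

# curl -k -u admin -X PATCH https://<nsx-manager>/policy/api/v1/infra/sites/default/enforcement-points/default/edge-bridge-profiles/app-bridge-profile \
    -H "Content-Type: application/json" \
    -d '{"display_name": "app-bridge-profile", "failover_mode": "PREEMPTIVE",
         "edge_paths": ["/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>/edge-nodes/<primary-node-id>",
                        "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>/edge-nodes/<backup-node-id>"]}'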

Step 6: Create Overlay Segments for Bridging

Create the necessary overlay segments with the same subnets that exist for the VLAN-connected workloads. Make sure the default gateway IP chosen for each subnet is different from the gateway of the corresponding VLAN-backed port group. 
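
As an example, an App-LS segment matching my topology could be created with the Policy API as shown below; the Tier-1 path and overlay transport zone path are placeholders for my environment:

# curl -k -u admin -X PATCH https://<nsx-manager>/policy/api/v1/infra/segments/App-LS \
    -H "Content-Type: application/json" \
    -d '{"display_name": "App-LS",
         "connectivity_path": "/infra/tier-1s/<tier1-gw-id>",
         "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-id>",
         "subnets": [{"gateway_address": "172.16.21.253/24"}]}'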

Step 7: Associate Overlay Segment with the Edge Bridge Profile

To associate an overlay segment with a bridge profile, edit the settings of the segment and click on Set under Edge Bridges.

Click on Add Edge Bridge and select the profile created in Step-5. Also, select the bridge transport zone which we created earlier. 

Specify the VNI-to-VLAN mapping: this is where we map an overlay segment to the VLAN that exists on the physical network. Since my App-LS segment corresponds to VLAN 201, I have entered 201 under VLAN. 
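
The equivalent Policy API call for my App-LS segment would look roughly like this (the bridge profile path, VLAN transport zone path, and IDs are placeholders):

# curl -k -u admin -X PATCH https://<nsx-manager>/policy/api/v1/infra/segments/App-LS \
    -H "Content-Type: application/json" \
    -d '{"bridge_profiles": [{
           "bridge_profile_path": "/infra/sites/default/enforcement-points/default/edge-bridge-profiles/app-bridge-profile",
           "vlan_transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<bridge-vlan-tz-id>",
           "vlan_ids": ["201"]}]}'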

And that’s it for the bridge configuration. Now it’s time to test the setup.

Test L2 Bridge

In vSphere, I have two VMs, L2-Bridge-Test01 and L2-Bridge-Test02, connected to the VLAN-201 and App-LS (overlay) networks respectively.  

From the L2-Bridge-Test01 VM, I pinged 172.16.21.253, which is the default gateway of the overlay segment, and the ping test passed. 

I then pinged the IP of the L2-Bridge-Test02 VM and the ping test passed again.

From the L2-Bridge-Test02 VM, I pinged 172.16.21.1, which is the default gateway of the VLAN network and lives on my top-of-rack switch. The ping was successful. 

The ping to L2-Bridge-Test01 was successful as well.

As the next step of testing, the L2-Bridge-Test01 VM was migrated from the VLAN-201 network to the App-LS network while running a continuous ping. 

Not a single ping was missed, although latency did drop after icmp_seq 73 (when the cutover happened).

Once a VM has been migrated from the VLAN network to the overlay network, we need to change its default gateway to point to the IP configured as the gateway of the overlay segment (.253 in my lab). 
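
On a Linux VM, for example, the gateway change could look like the commands below (the addressing is from my lab and purely illustrative, and the change should also be made persistent in the guest’s network configuration); a Windows VM would need the equivalent change in its network adapter settings.

# ip route replace default via 172.16.21.253
# ip route show default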

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing 🙂
