How to Force Delete a Stale Logical Segment in NSX 3.x

I ran into a problem recently while disabling NSX in my lab: I couldn’t remove a logical segment. The segment had previously been attached to NSX Edge virtual machines and still had a stale port, even after the Edge virtual machines were deleted.

Any attempt to delete the segment through the UI or API resulted in the error “Segment has 1 VMs or VIFs attached. Disconnect all VMs and VIFs before deleting a segment.”

[Screenshot: GUI error]

[Screenshot: API error]

So how do I delete the stale segments in this case? The answer is through the API, but before deleting the segments, you must delete the stale VIFs.

Follow the procedure given below to delete stale VIFs.

1: List Logical Ports associated with the Stale Segment

This API call will list all logical segments. You can pass the segment UUID in the command to list a specific (stale) segment.
Read More
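
The API call referenced above is not shown in this excerpt. As a hedged sketch of the overall workflow, listing the ports attached to the stale segment and then force-deleting the stale port/VIF with the NSX-T Manager API could look roughly like this; the manager hostname, credentials, and UUIDs are placeholders, not values from the original post:

    # List logical ports, filtered to the stale segment (UUID is a placeholder)
    curl -k -u admin:'VMware1!' \
      "https://nsxmgr.lab.local/api/v1/logical-ports?logical_switch_id=<stale-segment-uuid>"

    # Force delete the stale port even though a VIF attachment is still recorded
    curl -k -u admin:'VMware1!' -X DELETE \
      "https://nsxmgr.lab.local/api/v1/logical-ports/<stale-port-uuid>?detach=true"

    # The segment itself can then be deleted from the UI or API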

Quick Tip: Configure NSX Manager GUI Idle Timeout

The NSX Manager web UI has a default timeout of 1800 seconds, i.e., the UI will time out after 30 minutes of inactivity. This timeout is reasonable for a production deployment, but since security is less of a concern in a lab, you might want to increase it a bit. On top of that, it is annoying to get kicked out of the UI after 30 minutes of an idle session.

In this post, I will demonstrate how you can change the timeout value.

Run the command get service http to see the currently configured value:

Run the command set service http session-timeout 0 to fully remove the session timeout.
Read More
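
Putting the two commands together, a session on the NSX Manager CLI would look roughly like the sketch below; the prompt and output formatting are illustrative rather than captured from a live system, and a value of 0 disables the timeout entirely:

    nsxmgr> get service http
      Service name:      http
      Service state:     running
      Session timeout:   1800

    nsxmgr> set service http session-timeout 0

    nsxmgr> get service http
      Service name:      http
      Service state:     running
      Session timeout:   0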

NSX-T Routing With OSPF

Introduction

NSX-T 3.1.1 introduced support for the OSPFv2 routing protocol on Tier-0 gateways. This was one of the most awaited features for quite some time, and it removes one of the major hindrances that had been stopping customers from migrating to NSX-T.

There are lots of customers who are still running NSX-V in their environment, with OSPF as the routing protocol used in their infrastructure. Now that NSX-T supports OSPF, these customers can do a greenfield deployment of NSX-T and move workloads from NSX-V to NSX-T using the L2 bridge, without many changes to their physical network.

Since this feature is pretty new, it will be interesting to see how soon customers adopt this in their environment. 

Disclaimer: This post is inspired by an original blog post written by Peter Milchov.

Before jumping into the lab, let’s revisit some important facts associated with OSPF support.

  • NSX-T 3.1.1 supports OSPFv2 only.
Read More
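
To give a flavour of what the configuration looks like, here is a minimal, hedged sketch of enabling OSPF on a Tier-0 gateway through the NSX-T policy API. The gateway ID, locale-services ID, manager hostname, and credentials are placeholders, and the exact paths and payloads should be verified against the NSX-T 3.1.1 API guide:

    # Enable OSPF on the Tier-0 gateway (IDs and hostname are placeholders)
    curl -k -u admin:'VMware1!' -X PATCH \
      -H "Content-Type: application/json" \
      "https://nsxmgr.lab.local/policy/api/v1/infra/tier-0s/T0-GW-01/locale-services/default/ospf" \
      -d '{"enabled": true, "ecmp": true, "graceful_restart_mode": "HELPER_ONLY"}'

    # Define area 0 for the Tier-0 OSPF instance
    curl -k -u admin:'VMware1!' -X PATCH \
      -H "Content-Type: application/json" \
      "https://nsxmgr.lab.local/policy/api/v1/infra/tier-0s/T0-GW-01/locale-services/default/ospf/areas/area-0" \
      -d '{"area_id": "0.0.0.0", "area_type": "NORMAL"}'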

Layer 2 Bridging With NSX-T

In this post, I will be talking about the Layer 2 Bridging functionality of NSX-T and discussing the use cases and design considerations to keep in mind when planning to implement this feature. Let’s get started.

NSX-T supports L2 bridging between overlay logical segments and VLAN-backed networks. An NSX-T Layer 2 bridge can be leveraged when workloads connected to an NSX-T overlay segment require L2 connectivity to VLAN-connected workloads, or need to reach a physical device (such as a physical GW, LB, or firewall).
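
For orientation only, attaching an overlay segment to an edge bridge profile through the policy API might look something like the sketch below. The segment name, bridge profile path, VLAN transport zone path, VLAN ID, manager hostname, and credentials are all assumptions on my part and should be checked against the NSX-T documentation:

    # Attach the overlay segment to an edge bridge profile for VLAN 100 (all paths/IDs are placeholders)
    curl -k -u admin:'VMware1!' -X PATCH \
      -H "Content-Type: application/json" \
      "https://nsxmgr.lab.local/policy/api/v1/infra/segments/App-Overlay-Segment" \
      -d '{
            "bridge_profiles": [
              {
                "bridge_profile_path": "/infra/sites/default/enforcement-points/default/edge-bridge-profiles/EBP-01",
                "vlan_transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<vlan-tz-uuid>",
                "vlan_ids": ["100"]
              }
            ]
          }'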

Use Cases of Layer 2 Bridging

A few of the use cases that come to the top of my mind are: 

1: Migrating workloads connected to VLAN port groups to NSX-T overlay segments: Customers who are looking to migrate workloads from legacy vSphere infrastructure to an SDDC can leverage Layer 2 Bridging to seamlessly move their workloads.

When planning for such migrations, some of the associated challenges are re-IPing workloads, migrating firewall rules, NAT rules, etc.
Read More

NSX-T VRF Lite with VCD 10.2

The VMware Cloud Director 10.2 release introduced key features in the networking and security areas and bridged the gap between VCD and NSX-T integration. In this release of VCD, the following NSX-T enhancements were added:

  • VRF Lite support
  • Distributed Firewall
  • Cross-VDC networking
  • NSX Advanced Load Balancer (Avi) integration

These improvements will help partners expand their network and security services with VMware Cloud Director and NSX-T.

In this post, I will be talking about tenant networking using NSX-T VRF Lite.

One of the key components in VCD networking is the external network, which provides uplink connectivity to tenant virtual machines so that they can talk to the outside world (Internet, VPN, etc.). External networks can be either:

  • Shared: Allowing multiple tenant edge gateways to use the same external network.
  • Dedicated: A one-to-one relationship between the external network and the NSX-T edge gateway; no other edge gateway can connect to the external network.

Dedicating an external network to an edge gateway provides tenants with additional edge gateway services, such as Route Advertisement management and BGP configuration.
Read More
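
On the NSX-T side, VRF Lite means carving a VRF gateway out of a parent Tier-0 before it is consumed by VCD as a dedicated external network. As a hedged sketch, creating such a VRF gateway through the policy API could look like this; the gateway names, parent Tier-0 path, manager hostname, and credentials are placeholders of my own:

    # Create a VRF Tier-0 linked to the parent Tier-0 gateway (names/paths are placeholders)
    curl -k -u admin:'VMware1!' -X PATCH \
      -H "Content-Type: application/json" \
      "https://nsxmgr.lab.local/policy/api/v1/infra/tier-0s/T0-VRF-Tenant1" \
      -d '{
            "display_name": "T0-VRF-Tenant1",
            "vrf_config": { "tier0_path": "/infra/tier-0s/T0-GW-01" }
          }'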

Getting Started With NSX ALB: Part-3-NSX-T Integration

In the previous post of this series, I discussed Avi controller deployment and basic configuration. It’s time to integrate NSX-T with NSX ALB. High-level steps of NSX-T integration can be summarized as below:

  • Create a Content Library in vCenter
  • Deploy a Tier-1 gateway for Avi Management.
  • Create Logical Segments in NSX-T for Avi SE VMs.
  • Create credentials for NSX-T and vCenter in Avi.
  • Register NSX-T with Avi Controller.
  • Create an IPAM profile. 

Let’s get started.

Create a Content Library in vCenter

Deployment of Avi Service Engine VMs is done automatically by the Avi Controller when we create a virtual service. For this to work, an empty content library needs to be created in the vCenter Server, as the controller pushes the Avi SE OVA into the content library and then deploys the SE VMs.
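
If you prefer the command line over the vSphere Client, an empty content library could also be created with govmomi’s govc. Note that the tool, the datastore name, and the library name below are my own assumptions, not values from the original post:

    # Assumes govc is installed and GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD point at vCenter
    # Create an empty content library on a datastore for the Avi SE OVA
    govc library.create -ds=vsanDatastore avi-se-library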

Deploy Tier-1 gateway for Avi Management

You can use an existing Tier-1 gateway or deploy a new (dedicated) one for Avi management.
Read More

Load Balancing VMware Cloud Director with NSX-T

Recently I tested NSX-T 3.1 integration with VCD 10.2 in my lab and blogged about it. That was a simple single-cell deployment, as I was just testing the integration. Later I scaled my lab to a three-node VCD cell deployment and used the NSX-T load balancer feature to test load balancing of the VCD cells.

In order to use the NSX-T load balancer, we can deploy VCD cells in two different ways:

  • Deploy VCD cells on overlay segments connected to T1 gateway and configure LB straight away (easy method).
  • Deploy VCD cells on VLAN backed portgroups and load balance them via a dedicated T1 gateway.

In this post, I will demonstrate the second method. Before jumping into the lab, let me show you what is already there in my infrastructure.

In my lab, NSX-T follows the VDS + N-VDS architecture. The management SDDC where the VCD cells are deployed has a VDS named ‘Cloud-VDS’, and I have a dedicated distributed port group named ‘VCD-Mgmt’, backed by VLAN 1800, to which all my VCD cells are connected.
Read More
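
To make the end goal concrete, the NSX-T side of the second method eventually boils down to a load balancer pool and virtual server fronting the cells. A rough, hedged sketch of the pool definition through the policy API is shown below; the object names, cell IPs, manager hostname, and payload details are my assumptions, not values from the post:

    # Create a server pool containing the three VCD cells (IPs are placeholders)
    curl -k -u admin:'VMware1!' -X PATCH \
      -H "Content-Type: application/json" \
      "https://nsxmgr.lab.local/policy/api/v1/infra/lb-pools/vcd-cells-pool" \
      -d '{
            "display_name": "vcd-cells-pool",
            "algorithm": "LEAST_CONNECTION",
            "members": [
              { "display_name": "vcd-cell-01", "ip_address": "192.168.18.11", "port": "443" },
              { "display_name": "vcd-cell-02", "ip_address": "192.168.18.12", "port": "443" },
              { "display_name": "vcd-cell-03", "ip_address": "192.168.18.13", "port": "443" }
            ]
          }'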

NSX-T integration with VMware Cloud Director 10

VMware Cloud Director relies on the NSX network virtualization platform to provide on-demand creation and management of networks and networking services. NSX-V has been a building block of VCD infrastructure for quite a long time. With the release of NSX-T Data Center, VMware made it clear that NSX-T is the future of software-defined networking, and as a result customers slowly started migrating from NSX-V to NSX-T.

NSX-T 2.3 was the first version of NSX-T that VCD (9.5) supported, but the integration was very basic: a lot of functionality was not available, which stopped customers from using NSX-T to its full extent with VCD. NSX-T 2.5 added more functionality in terms of VCD integration, but it was still lacking some features.

With the release of NSX-T 3.0, the game has changed: NSX-T is much more tightly coupled with VCD, and customers can leverage almost all NSX-T functionality with VCD.
Read More

NSX-T Federation-Part 4: Configure Stretched Networking

Welcome to the fourth part of the NSX Federation series. In the last post, I talked about configuring the local and global NSX-T managers to enable federation. In this post, I will show how we can leverage that setup to configure stretched networking across sites.

If you have missed the earlier posts of this series, you can read them using the below links:

1: NSX-T Federation-Introduction & Architecture

2: NSX-T Federation-Lab Setup

3: Configure Federation

NSX-T Federation Topology

Before diving into the lab, I want to do a quick recap of the lab topology that I will be building in this post.

The following components in my lab are already built out:

1: Cross Link Router: This router is responsible for facilitating communication between Site-A & Site-B SDDC/NSX.

  • Site-A ToR01/02 are forming BGP neighborship with Cross Link Router and advertising necessary subnets to enable inter-site communication.
  • Site-B ToR01/02 are also BGP peering with the Cross Link Router and advertising subnets. 
Read More

NSX-T Federation-Part 3: Configure Federation

Welcome to the third post of the NSX Federation series. In part 1 of this series, I discussed the architecture of the NSX-T federation, and part 2 was focussed on my lab walkthrough.

In this post, I will show how to configure federation in NSX-T.

If you have missed the earlier posts of this series, you can read them using the below links:

1: NSX-T Federation-Introduction & Architecture

2: NSX-T Federation-Lab Setup

Let’s get started.

Federation Prerequisites

Before attempting to deploy and configure federation, you have to ensure that the following prerequisites are in place:

  • There must be a latency of 150 ms or less between sites.
  • Global Manager supports only Policy Mode. The federation does not support Manager Mode.
  • The Global Manager and all Local Managers must have NSX-T 3.0 installed.
  • NSX-T Edge clusters at each site must be configured with RTEP IPs.
  • Intra-location tunnel endpoints (TEP) and inter-location tunnel endpoints (RTEP) must use separate VLANs.
Read More
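
As a quick sanity check against the first prerequisite, you could measure the round-trip time between the two sites’ management networks before enabling federation; the IP below is a placeholder for a remote-site NSX Manager or RTEP address:

    # Verify inter-site latency is well under the 150 ms federation limit (IP is a placeholder)
    ping -c 10 172.16.20.10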