As we discussed in the first post of this series, NSX-T was born to meet the demands of containerized workloads, multi-hypervisor environments, and multi-cloud deployments.
The key strength of NSX-T is that it provides seamless connectivity and security services for all types of endpoints, including virtual machines, containers, and bare-metal servers. It doesn’t really matter where these endpoints live: in your on-prem datacenter, a remote office, or the cloud.
In this post we will look at what the NSX-T architecture looks like.
Like NSX-V, NSX-T consists of a management plane, a control plane, and a data plane. Let’s discuss each plane individually.
- NSX-T uses in-kernel modules on ESXi and KVM hypervisors to construct the data plane.
- Since NSX-T is decoupled from vSphere, it doesn’t rely on the vSphere vSwitch for network connectivity. Instead, the NSX-T data plane introduces its own host switch called the N-VDS (NSX-Managed Virtual Distributed Switch).
- All create, read, update and delete operations are performed via the NSX-T Manager.
- The data plane offers features such as logical switching, logical routing, the distributed firewall (DFW), NAT, and DHCP.
- The NSX-T control plane is formed by the Central Control Plane (CCP) plus the Local Control Plane (LCP) that runs on the hypervisors (ESXi/KVM).
- CCP controller nodes are deployed as VMs that can run on an ESXi or KVM host.
- Like NSX-V, the controllers in NSX-T are responsible for slicing the logical switching and logical routing workload across the controller cluster.
- NSX Manager, which is deployed from an OVA file, forms the management plane of NSX-T.
- The management plane handles authentication, monitoring and inventory collection from the compute managers.
- NSX-T Manager can be integrated with various Cloud Management Platforms (CMPs) via REST APIs.
- Although vCenter is decoupled from the NSX-T management plane, you can still add a vCenter Server as a compute manager to leverage vSphere features.
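To make the REST integration point above concrete, here is a minimal Python sketch of how a CMP (or a script) might build an authenticated GET request against the NSX Manager API. The manager hostname and credentials are made-up lab values; `/api/v1/logical-switches` is the management-plane endpoint for listing logical switches, and basic authentication is one of the supported schemes:

```python
import base64

# Hypothetical lab values -- replace with your own NSX Manager and credentials.
NSX_MANAGER = "nsx-manager.lab.local"
USERNAME = "admin"
PASSWORD = "VMware1!"

def build_request(resource):
    """Build the URL and headers for a GET against the NSX-T Manager REST API.

    All CRUD operations on the management plane go through the NSX Manager,
    which exposes them over REST under /api/v1/ on the appliance.
    """
    url = f"https://{NSX_MANAGER}/api/v1/{resource}"
    token = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Accept": "application/json",
    }
    return url, headers

# Example: a CMP listing logical switches (send with requests/curl over HTTPS).
url, headers = build_request("logical-switches")
print(url)
```

The same pattern works for any other resource under `/api/v1/` — only the `resource` path changes.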
The diagram below shows a high-level view of the NSX-T architecture.
And that’s it for this post. In the next post of this series we will deploy NSX-T in the lab.