In the last 2 posts of this series we covered what NSX is and how to install and configure NSX Manager.
If you have missed the earlier posts of this series, you can read them from the links below:
In this post we will be talking about NSX controllers. Before diving into the lab, we will first discuss a little bit of theory about NSX controllers and their importance.
NSX controllers are the control plane for NSX. They are deployed in a cluster arrangement: as you add more controllers you gain performance and high availability, so that if you lose one of them you do not lose control functionality. They are important; if you lose enough of them, things stop working.
NSX controllers store the following tables:
1: MAC Table
2: ARP Table
3: VTEP Table
Each vSphere host has one or more VMkernel interfaces (VTEPs) for VXLAN functionality. The NSX controllers keep the tables above because your NSX deployment may be spread over a large data center, or even multiple data centers.
Your vSphere hosts and clusters may sit on multiple layer 3 networks, while NSX overlays those routed networks with what looks like a single layer 2 adjacent VLAN to the VMs. So when a VM or host does an ARP lookup or a MAC address lookup, or needs to know the IP of the VTEP interface on another host, it would normally send out a broadcast.
But we don't want to broadcast that stuff all over the place, so the controllers keep these tables instead; when a host needs one of those records, the controller sends it a copy. This greatly reduces the broadcast traffic we want to get rid of.
NSX controllers considerations:
1: Deployed in odd numbers
Controllers form a cluster and use a voting quorum. They should be deployed in odd numbers so they stay resilient. The minimum you can deploy is 1, but a single controller is not resilient and is not supported by VMware for production. (You can still choose to deploy 1 controller in a lab environment to test things.)
A 3-node NSX controller cluster lets you tolerate the failure of 1 node, but if 2 go down things stop working, because the cluster wants a voting majority. The idea is that if there is a split brain, or the network segments so that 2 controllers end up in one partition and the third in another, the side with 2 controllers knows it holds the majority (since the cluster started with 3 nodes) and can institute changes.
If you have only 2 nodes and they split into different partitions, neither side holds a majority, so neither can push any changes. The maximum currently supported is 5.
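The majority math above can be sketched in a few lines. This is just my own illustration of the quorum rule, not anything taken from NSX itself:

```python
def majority(cluster_size: int) -> int:
    """Votes needed for quorum: more than half of the original cluster size."""
    return cluster_size // 2 + 1

def tolerated_failures(cluster_size: int) -> int:
    """Nodes that can fail while a voting majority still survives."""
    return cluster_size - majority(cluster_size)

for n in (1, 2, 3, 5):
    print(f"{n} node(s): majority={majority(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

Note how a 2-node cluster tolerates zero failures, which is exactly why even numbers buy you nothing here.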
2: Not in data path
Controllers sit in the control plane, not in the data path, so VM traffic keeps flowing even if a controller fails. But that doesn't mean you can let all of them fail: if you have a 3-node cluster and one node fails, either fix it or deploy a new node so that a voting majority is always available.
3: Work is striped across the controllers using the concept of slices.
Controllers scale for both performance and availability. The slicing method is used to spread the workload: every job is divided into slices, which are then spread across the available nodes. When a new controller is added or an existing one fails, these slices are redistributed.
This is easy to understand from the two pictures below.
In the first picture there are 3 controllers, and each has been assigned a portion of the workload.
Now controller 3 has failed, and its workload is being shifted to the 2 remaining controllers.
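To make the slicing idea concrete, here is a rough sketch of spreading slices round-robin across the live controllers and redistributing them when one fails. The controller names and the round-robin scheme are my own simplification, not the actual NSX algorithm:

```python
def assign_slices(num_slices: int, controllers: list[str]) -> dict[str, list[int]]:
    """Round-robin the job's slices across the currently live controllers."""
    assignment = {c: [] for c in controllers}
    for s in range(num_slices):
        assignment[controllers[s % len(controllers)]].append(s)
    return assignment

# All 3 controllers healthy: 6 slices, 2 per controller.
before = assign_slices(6, ["controller-1", "controller-2", "controller-3"])
# controller-3 fails: the same 6 slices are redistributed over the survivors.
after = assign_slices(6, ["controller-1", "controller-2"])
print(before)
print(after)
```

The point of the sketch is simply that no slice is ever orphaned: losing a node changes who owns each slice, not whether it is owned.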
NSX Controller Cluster Functions
NSX controllers mainly perform these 2 functions:
1: VXLAN functions
2: Distributed Router
An election is held to choose a master for each role. When a controller fails, a new election is held to promote another controller to master.
Controllers are deployed by NSX Manager; you don't need any OVA/OVF files to deploy them. Each controller is deployed with 4 GB RAM and 4 vCPUs by default.
We have talked enough of theory now. It’s time to jump into lab and see some action.
To deploy a new controller, navigate to the Installation section under Networking & Security and click on the green '+' button.
Type a name for the controller and select the cluster/resource pool and datastore where you want to deploy it. Also, in the Connected To option, select the same layer 2 portgroup in which your NSX Manager resides.
For IP Pool click on select
If you have not created any IP pool for the NSX controllers yet, do so by clicking on New IP Pool
Provide a name for the IP pool along with the Gateway, DNS, and Prefix Length (netmask). Also define a pool of IPs under Static IP Pool; this can be a continuous IP range, or comma-separated entries if the IPs are not contiguous. Hit OK once you are done filling in the relevant entries.
Select the newly created pool and hit OK to continue.
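As an aside, a static pool string like the one above expands into individual addresses in a predictable way. The sketch below is my own illustration of that expansion (the function name and the exact range format are assumptions, not the NSX parser):

```python
import ipaddress

def expand_pool(pool: str) -> list[str]:
    """Expand a pool string like '192.168.1.10-192.168.1.12,192.168.1.20'
    into a flat list of individual IPv4 addresses."""
    addresses = []
    for part in pool.split(","):
        part = part.strip()
        if "-" in part:
            # A dash-separated range: enumerate every address inclusive.
            start, end = (ipaddress.IPv4Address(p) for p in part.split("-"))
            addresses.extend(str(ipaddress.IPv4Address(a))
                             for a in range(int(start), int(end) + 1))
        else:
            # A single address entry.
            addresses.append(str(ipaddress.IPv4Address(part)))
    return addresses

print(expand_pool("192.168.1.10-192.168.1.12,192.168.1.20"))
# ['192.168.1.10', '192.168.1.11', '192.168.1.12', '192.168.1.20']
```

Each controller you deploy will consume one address from this pool, so make sure the pool holds at least as many addresses as controllers you plan to run.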
Finally, type in the password for accessing the controllers over SSH and hit OK to finish the wizard.
You will see that NSX Manager is now deploying a new controller for you. Also, in the Recent Tasks pane you will see a task triggered for deploying an OVF template.
Once the controller deployment is successful, you will see the status as connected. Now you can deploy additional controllers.
Again click on green ‘+’ button to spin up a new controller. Provide a name and necessary info and hit OK.
Note: the password option only appears for the first NSX controller node. The 2nd and 3rd nodes use the same password, so the password field is not shown for them.
I have deployed 3 controllers in my lab and all show as connected.
If you switch to the Hosts and Clusters view in vCenter, you will see 3 VMs deployed, corresponding to the 3 controllers.
Now let's have a look at the controller cluster status.
NSX Control Cluster Status
Once you SSH into a controller, you can start asking it some questions.
For example, you can ask for the cluster status (how are you doing today, buddy?):
# show control-cluster status
You will see output with the controller essentially saying: yes, I am fine. My join status is complete, I am connected to the cluster majority, and I can be safely restarted. Here are my cluster ID and UUID. I am happy, activated, and ready to go.
NSX Control Cluster Connection Status
To verify the controller nodes' intra-cluster communication connection status, we can use the command below:
# show control-cluster connections
The master controller listens on port 2878 (you can see "Y" in the "listening" column). The other controller nodes will have a dash (-) in the "listening" column for port 2878.
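If you want to double-check from outside which node is accepting connections on 2878, a plain TCP probe is enough. This is a generic socket check of my own, not an NSX tool, and the controller IPs in the comment are hypothetical placeholders:

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage against your own controller IPs (placeholders shown):
# for ip in ("192.168.1.21", "192.168.1.22", "192.168.1.23"):
#     print(ip, "master (listening on 2878)" if is_listening(ip, 2878) else "-")
```

Only the node currently holding the master role should answer; the others will refuse the connection, matching the dash you see in the CLI output.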
NSX Control Cluster Role Status
Next is to query the controller roles. You can ask each controller: are you the one who is in charge here? Are you the master of anything?
Below command can be used to query the role status
# show control-cluster roles
Below is the output from my 3 NSX controller nodes. Each controller node can be the master for a different role.
At this point we have NSX Manager installed and the NSX controllers deployed, so our management plane and control plane are established. We are ready for host preparation, which we will cover in the next post.
I hope you enjoyed reading this post. Feel free to share it on social media if you think it is worth sharing. Be sociable!