Configuring and Managing VMkernel TCP/IP Stacks

While working through the VCAP6-Deploy blueprint, I stumbled upon the topic of TCP/IP stack configuration. I have seen this option many times in my lab while configuring networking and had a basic idea of the purpose of a custom TCP/IP stack. Surprisingly, this feature has been around since vSphere 5.5, but I never noticed it.

I decided to explore TCP/IP stacks in my lab and share my experience through this write-up. I will start with the very basics.

What is a VMkernel TCP/IP stack and why use it?

The purpose of a TCP/IP stack configuration on a vSphere host is to set up the networking parameters that allow communication between the hosts themselves, the virtual machines, other virtual appliances and, last but not least, the network storage.

We all know that, as per networking best practices, it is good to have dedicated VMkernel adapters for different types of traffic such as management, vMotion, vSAN, iSCSI, vSphere Replication and so on. TCP/IP stacks create “traffic profiles” which can be used in conjunction with VMkernel adapter configuration to achieve separation of duties.
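As a quick check, you can see which stack each of your existing VMkernel adapters is bound to from the ESXi CLI (the vmk names and fields shown below are illustrative; the real output contains more fields per adapter):

```shell
[root@esxi05:~] esxcli network ip interface list
vmk0
   Name: vmk0
   ...
   Netstack Instance: defaultTcpipStack
```

The `Netstack Instance` field is what ties a VMkernel adapter to a particular TCP/IP stack.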

Benefits of a Separate TCP/IP Stack

Before vSphere 5.5, there was a single TCP/IP stack which was shared by all types of system traffic. Although there was nothing wrong with this configuration, a problem arises when you want to segment traffic using multiple subnets/VLANs in your environment: with a single TCP/IP stack, all traffic shares one default gateway.

In vSphere 5.5, VMware introduced the capability of creating a custom TCP/IP stack via the command line, with the limitation that only certain types of traffic could make use of the custom stack. Custom TCP/IP stacks can be used to handle the network traffic of other applications and services, which may require separate DNS and default gateway configurations. With a custom TCP/IP stack, you get the following benefits:

  • A separate memory heap.
  • A dedicated ARP table.
  • A dedicated routing table, which helps avoid routing table conflicts that might appear when many features use a common TCP/IP stack.
  • Traffic isolation to improve network security.
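The dedicated routing table is easy to see for yourself: each stack keeps its own routes, which you can list per stack with the `-N`/`--netstack` option (the stack name "VR-Traffic" here matches the example created later in this post):

```shell
# Routes held by the default TCP/IP stack
esxcli network ip route ipv4 list

# Routes held by a custom stack -- an independent table with its own default route
esxcli network ip route ipv4 list -N "VR-Traffic"
```

Because the tables are independent, each stack can point at a different default gateway without the features on other stacks ever seeing it.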

Default TCP/IP stacks available in vSphere

In vSphere 6, VMware introduced two new TCP/IP stacks in addition to the default one. We now have the following TCP/IP stacks available by default:

Default TCP/IP stack: This stack handles management traffic between ESXi hosts and the vCenter Server. Other traffic such as vMotion, NFS/iSCSI storage, HA and vSphere FT can also use this stack. It shares a single default gateway between all configured network services.

vMotion TCP/IP stack: This stack is optimized for handling vMotion traffic. By creating a VMkernel port on the vMotion TCP/IP stack, you can isolate vMotion traffic to this stack. Using this stack completely disables vMotion traffic on the default TCP/IP stack.

The vMotion TCP/IP stack allows vMotion to occur at Layer 3, both within a single vCenter Server and from vCenter to vCenter.
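Binding a VMkernel adapter to the vMotion stack is done at creation time with the `--netstack` option. A sketch of the commands, assuming a port group named "vMotion-PG" already exists and using placeholder addressing (the built-in stack key is `vmotion`, which you can confirm with `esxcli network ip netstack list`):

```shell
# Create a VMkernel adapter bound to the built-in vMotion stack
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-PG --netstack=vmotion

# Give the new adapter a static IP (addresses are examples only)
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.30.10 --netmask=255.255.255.0 --type=static
```

Note that a VMkernel adapter cannot be moved between stacks after creation; to change stacks you remove the adapter and recreate it on the desired stack.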

Provisioning TCP/IP stack: The provisioning TCP/IP stack is used for cold VM migration, cloning and snapshot traffic. In the case of a long-distance vMotion, NFC (Network File Copy) traffic can be configured to use the provisioning TCP/IP stack.

A common use case for a dedicated provisioning TCP/IP stack is VDI environments, or any environment where VM snapshots are used frequently.
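A VMkernel adapter is bound to the provisioning stack the same way as the vMotion example, just with a different stack key (the built-in key is `vSphereProvisioning`; the port group name below is an example):

```shell
# Create a VMkernel adapter on the built-in provisioning stack
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=Provisioning-PG --netstack=vSphereProvisioning
```

Once a VMkernel port exists on this stack, cold migration, cloning and snapshot traffic move off the default stack onto it.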

How to create a custom TCP/IP stack

With vSphere 6, a custom TCP/IP stack cannot be created in the Web Client interface, and we have to rely on the ESXi CLI for this. However, once a custom stack has been created from the command line, you can edit its properties from the Web Client.

To create a new TCP/IP stack, SSH to the ESXi host and use the below command:

# esxcli network ip netstack add -N "Name_of_Stack"

For example:

[root@esxi05:~] esxcli network ip netstack add -N "VR-Traffic"

Get runtime info of the newly created stack:

[root@esxi05:~] esxcli network ip netstack get -N "VR-Traffic"
 Key: VR-Traffic
 Name: VR-Traffic
 Enabled: true
 Max Connections: 11000
 Current Max Connections: 11000
 Congestion Control Algorithm: newreno
 IPv6 Enabled: true
 Current IPv6 Enabled: false
 State: 4660

List all TCP/IP stacks:

[root@esxi05:~] esxcli network ip netstack list
 Key: defaultTcpipStack
 Name: defaultTcpipStack
 State: 4660
 Key: VR-Traffic
 Name: VR-Traffic
 State: 4660

Once the custom stack is created, you can modify its properties by logging into the Web Client and navigating to ESXi host > Manage > Networking > TCP/IP configuration.


Modify the newly created stack by selecting it and clicking the pencil button to edit it, and define the following:

DNS Configuration. If a VMkernel adapter is configured to use DHCP, the custom TCP/IP stack can be set to use the DNS settings that the DHCP server hands out. Alternatively, a static set of two DNS servers can be set, as well as the search domains.

Default Gateway.

Congestion Control Algorithm. The algorithm specified here affects vSphere’s response when congestion is suspected on the network stack. There are two options: New Reno (the default) and CUBIC.

Max Number of Connections. The default is set to 11,000.
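The same properties can also be set from the CLI against the stack we created earlier. A few examples (the DNS server, search domain and gateway addresses are placeholders for your environment):

```shell
# Point the custom stack at its own DNS server and search domain
esxcli network ip dns server add -N "VR-Traffic" --server=192.168.10.53
esxcli network ip dns search add -N "VR-Traffic" --domain=lab.local

# Set a default gateway for the custom stack only
esxcli network ip route ipv4 add -N "VR-Traffic" --network=default --gateway=192.168.10.1

# Change the congestion control algorithm and connection limit
esxcli network ip netstack set -N "VR-Traffic" --ccalgo=cubic --maxconnections=11000
```

The `-N`/`--netstack` option is what scopes each change to the custom stack; without it, the commands act on the default stack.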



There is a very good video available on YouTube on this topic.

And that’s it for this post. I hope you found it informative. Feel free to share it on social media if it is worth sharing. Be sociable 🙂
