There is no dedicated storage network needed with Nutanix, as AHV leverages the data network as the backplane for storage. In AHV-based deployments, the CVM, the hypervisor, and guest VMs connect to the physical network via Open vSwitch (OVS).
An instance of OVS runs on each AHV host, and all OVS instances in a cluster form a single logical switch (somewhat similar to the VMware vDS concept).
In a default AHV installation, all the interfaces present in the NX node are grouped together in a single bond called br0-up. A typical NX node ships with 2×10 GbE and 2×1 GbE interfaces.
The following diagram illustrates the networking configuration of a single host immediately after imaging.
Although the above configuration works just fine in most cases, as a best practice Nutanix recommends separating the 10 GbE and 1 GbE interfaces into separate bonds to ensure that CVM and user VM traffic always traverses the fastest possible link.
Consider a scenario where you have a mix of workloads (prod + dev/test) running in the same cluster and all VMs use your 10 GbE NICs to send traffic to the physical network. You probably don't want your test/dev VMs to choke your 10 GbE NICs, so it makes sense to keep them separated from your production workload traffic.
To move test/dev VM traffic to the slower 1 GbE interfaces, we need to create a new bridge, decouple the 1 GbE interfaces from the default bridge/bond, and associate them with the new bridge.
The diagram below shows the Nutanix-recommended OVS configuration.
Let's jump into the lab and see how to do this via the command line, as no such option is available in the GUI.
1: Identify your 1 GbE and 10 GbE NICs
Connect to a CVM and run the command allssh "manage_ovs show_interfaces" to check which interfaces are 1 GbE and which are 10 GbE.
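The output lists each NIC with its configured mode, link state, and speed. As a rough illustration, here is how you could filter such output for the 10 GbE ports with awk — the sample data below is hypothetical, and the actual column layout may differ between AOS versions:

```shell
# Hypothetical sample of "manage_ovs show_interfaces" output;
# the real column layout may differ between AOS versions.
cat <<'EOF' > /tmp/ifaces.txt
interface  mode   link  speed
eth0       1000   True  1000
eth1       1000   True  1000
eth2       10000  True  10000
eth3       10000  True  10000
EOF

# Print the names of the interfaces whose mode is 10000 (10 GbE),
# one per line
awk '$2 == 10000 {print $1}' /tmp/ifaces.txt
```

In this sample, eth2 and eth3 would be the 10 GbE uplinks that stay on br0.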
2: Create a new bridge
To create a new bridge, run the command: allssh "ssh root@192.168.5.1 ovs-vsctl add-br <bridge-name>" (192.168.5.1 is the AHV host's internal address, reachable from the local CVM).
3: Verify presence of the new bridge on all AHV hosts
Command: allssh "ssh root@192.168.5.1 ovs-vsctl show | grep <bridge-name>"
4: Update the default bridge to use only the 10 GbE interfaces
Command: allssh "manage_ovs --bridge_name br0 --interfaces 10g update_uplinks"
5: Update the new bridge to use the 1 GbE interfaces
Command: allssh "manage_ovs --bridge_name br1 --interfaces 1g update_uplinks"
6: Verify that the 1 GbE links have been removed from the default bridge
Command: allssh "manage_ovs --bridge_name br0 show_uplinks"
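Putting steps 2 through 6 together, the whole procedure can be run from a single CVM. The script below is only a sketch: it assumes the default br0 layout, the standard internal host address 192.168.5.1, and a new bridge named br1, and it is meant to be run inside a Nutanix cluster, not on a standalone machine.

```shell
#!/usr/bin/env bash
# Sketch: move the 1 GbE uplinks to a new bridge, run from one CVM.
# Assumes default br0 layout; verify interface speeds first with
# "manage_ovs show_interfaces".
set -euo pipefail

BRIDGE=br1   # name of the new bridge for the 1 GbE uplinks

# Create the bridge on every AHV host
# (192.168.5.1 is each host's internal address as seen from its CVM)
allssh "ssh root@192.168.5.1 ovs-vsctl add-br ${BRIDGE}"

# Keep only the 10 GbE uplinks on the default bridge
allssh "manage_ovs --bridge_name br0 --interfaces 10g update_uplinks"

# Attach the 1 GbE uplinks to the new bridge
allssh "manage_ovs --bridge_name ${BRIDGE} --interfaces 1g update_uplinks"

# Confirm br0 now carries only the 10 GbE links
allssh "manage_ovs --bridge_name br0 show_uplinks"
```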
Now we have separated the slower 1 GbE interfaces from the default bridge. The only caveat in this configuration is that any new network created in Prism will attach to bridge br0 by default. To associate new networks with bridge br1, we need to use the Acropolis command line (aCLI).
Launch aCLI by typing "acli" on any of the CVMs, and use the command below to create a new network and associate it with the newly created bridge.
net.create <network-name> vswitch_name=<bridge-name> vlan=<vlan-id>
Then run net.list to verify that the newly created network is available. Now, if you log in to Prism, you will see an option to select this network when attaching a NIC to a VM.
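For example, to put a dev/test network on the new bridge, the aCLI session might look like this — the network name "devtest-net" and VLAN 101 are hypothetical values, chosen only for illustration:

```shell
# Run from any CVM; devtest-net, br1, and VLAN 101 are example values
acli net.create devtest-net vswitch_name=br1 vlan=101

# Confirm the new network appears in the list
acli net.list
```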
To learn more about this topic, see the video below by Jason Burns from Nutanix.
And that’s it for this post.
I hope you enjoyed reading this post. Feel free to share it on social media if you found it worth sharing.