AHV Networking: Part 1: Basics

AHV Networking Overview

AHV uses Open vSwitch (OVS) to connect the CVM, the hypervisor, and guest VMs to each other and to the physical network on each node. When AHV is installed, an instance of OVS is created on that host, and all the OVS instances across a cluster combine to form a single logical switch.

What is Open vSwitch (OVS)

OVS is an open source software switch implemented in the Linux kernel and designed to work in a multiserver virtualization environment. OVS is a layer-2 learning switch that maintains a MAC address table.

OVS provides virtual ports to which the hypervisor host, the CVMs, and the user VMs connect. To learn more about OVS, please refer to this article.

The diagram below shows a high-level overview of the general OVS architecture.

ovs arch.PNG

Before moving forward in this article, let's revisit a few important terms related to OVS.

  • Bridges: A bridge is a virtual switch that manages traffic between physical and virtual network interfaces.
  • Ports: Ports are logical constructs created in a bridge that represent connectivity to the virtual switch.
  • Bonds: A bond aggregates physical interfaces together so they can share network traffic.
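On a host with Open vSwitch installed, these three kinds of objects can be inspected directly with the standard OVS tools. This is a sketch only: it is guarded so it does nothing on a machine without OVS, and the bridge name br0 is the AHV default discussed later in this post.

```shell
# Inspect OVS bridges, ports, and the learned MAC table on an AHV host.
# Guarded so the snippet is a harmless no-op where OVS is not installed.
if command -v ovs-vsctl >/dev/null 2>&1; then
  ovs-vsctl list-br            # bridges, e.g. br0
  ovs-vsctl list-ports br0     # ports on br0, including the bond br0-up
  ovs-appctl fdb/show br0      # MAC address table maintained by the bridge
fi
```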

AHV Network Architecture

For any new AHV installation, this is how the OVS architecture looks.

AHV NW Arch.png

virbr0 is an internal switch that connects the CVM with the hypervisor, and all storage traffic passes through this internal vSwitch. The interface towards the hypervisor (virbr0) has the IP address 192.168.5.1, and the CVM interface (eth1) gets 192.168.5.254; both sit on the internal 192.168.5.0/24 network.



The host and the CVM also connect to bridge br0, through interface br0 and interface eth0 respectively, and these interfaces carry the IP addresses that you specify during installation.
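On a live cluster the internal path can be checked quickly from the CVM. The sketch below assumes the standard internal addressing (192.168.5.1 on the hypervisor side of virbr0) and is guarded so it does nothing on an ordinary Linux machine without that layout.

```shell
# From the CVM: show the internal interface and probe the hypervisor end
# of virbr0. Guarded so this is a no-op outside a Nutanix CVM.
if command -v ip >/dev/null 2>&1 && ip link show eth1 >/dev/null 2>&1; then
  ip addr show eth1                    # CVM internal interface
  ping -c 1 -W 2 192.168.5.1 || true   # AHV host side of virbr0
fi
```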

Note: Interface br0 and eth0 connects the AHV host and the CVM to physical network.



By default, all the physical interfaces present in an AHV host are bonded together in the bond br0-up.

Note: In older versions of AOS, the default bond is named bond0 instead of br0-up.


OVS bonds allow for several load-balancing modes, including active-backup, balance-slb, and balance-tcp. During AHV installation there is no option to define the bond mode, so it defaults to active-backup.
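Although the installer offers no choice of mode, the configured value can be read back from OVS at any time. A sketch, again guarded for machines without OVS, assuming the default bond name br0-up:

```shell
# Read the bond mode configured on port br0-up (defaults to active-backup).
# No-op where Open vSwitch is not installed.
if command -v ovs-vsctl >/dev/null 2>&1; then
  ovs-vsctl get port br0-up bond_mode
fi
```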

In later posts of this series we will learn how to change the load-balancing mode for a bond, and also how to create new bridges and move interfaces into them.

By default, any network you create from the Prism UI is attached to bridge br0. You can list the networks using the acli command net.list.

net list.PNG

Checking AHV Host Network Configuration in Prism

To view the networking configuration for any host, log in to Prism and switch to the Network view. Select the "Host" option in the search bar and click on any of the hosts.


It will show you the current configuration for the host. 

In this diagram we can see that there is one bridge and one bond, and all four interfaces of the host are currently connected to the bond br0-up.


View AHV Host Network Configuration via CLI

You can also check an AHV host's configuration via the CLI. I have listed a few examples below.

1: Host Physical NIC Status: To verify the uplinks attached to the default bridge, connect to the CVM via SSH and execute the command: manage_ovs --bridge_name br0 show_uplinks
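The bond name, mode, and member interfaces all appear in the show_uplinks output. As a quick scripting sketch, the member list can be pulled out with awk; the capture below is illustrative only (real output comes from the command above, and its exact format can vary by AOS version):

```shell
# Illustrative show_uplinks capture (not real cluster output).
sample='Bridge: br0
Bond: br0-up
bond_mode: active-backup
interfaces: eth3 eth2 eth1 eth0
lacp: off'

# Print just the member-interface list.
printf '%s\n' "$sample" | awk -F': ' '/^interfaces/ {print $2}'
# prints: eth3 eth2 eth1 eth0
```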

show uplinks.PNG

2: Verify Interfaces: The command manage_ovs show_interfaces shows the NIC speed and mode, along with the link state of every interface.

From the diagram below we can conclude that eth0 and eth1 are the 1G interfaces, and eth2 and eth3 are the 10G interfaces.
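To act on that information in a script, the 10G members can be filtered out of the interface listing. The table below is illustrative (the real column layout of manage_ovs show_interfaces may differ slightly by AOS version):

```shell
# Illustrative show_interfaces capture: name, link state, speed in Mbps.
sample='interface  link  speed
eth0       True    1000
eth1       True    1000
eth2       True   10000
eth3       True   10000'

# Print only the 10G interfaces (speed column == 10000).
printf '%s\n' "$sample" | awk '$3 == 10000 {print $1}'
# prints eth2 and eth3, one per line
```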

show interfaces.PNG

3: OVS Bond Status: We can verify the current load-balancing policy for a bond and see which interfaces are currently part of it.

$ ssh root@192.168.5.1 "ovs-appctl bond/show br0-up"
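In the bond/show output for an active-backup bond, only one member carries traffic at a time. The snippet below pulls the active member out of an illustrative capture (real output comes from the ovs-appctl command above):

```shell
# Illustrative "ovs-appctl bond/show br0-up" capture (not real output).
sample='---- br0-up ----
bond_mode: active-backup
lacp_status: off

slave eth2: enabled
  active slave

slave eth3: enabled'

# Remember each slave name; print it when the "active slave" marker follows.
printf '%s\n' "$sample" | awk '/^slave/ {iface=$2; sub(":","",iface)} /active slave/ {print iface}'
# prints: eth2
```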

bond status.PNG

4: VM Networks: To verify the VMs connected to a given network, use the acli command: net.list_vms <nw-name>

vm networks.PNG

And that’s it for this post.

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing :)