Nutanix AHV Networking Advanced Features

Nutanix AHV: OVS Networking Advanced Features

Nutanix AHV uses Open vSwitch (OVS) to deliver advanced networking features and services such as NFV and service chaining while maintaining high throughput. Each guest VM connects to the virtual switch through a vNIC known as a TAP interface, and each TAP interface can be one of three types: kNormalNic, kDirectNic, or kNetworkFunctionNic. Each type serves a different purpose and suits different use cases.

Let's explore the Nutanix AHV advanced networking features by virtual interface (vNIC) type:

  • kNormalNic
  • kDirectNic
  • kNetworkFunctionNic

Nutanix AHV: kNormalNIC

kNormalNic: kNormalNic is a TAP interface connected to OVS. All network I/O is processed by the OVS database and the OVS kernel datapath before being forwarded to the AHV uplink. These interfaces connect to br0.local.

br0.local: the local bridge is where UVMs (user VMs) are plugged in through TAP devices. It also has a downstream patch port that connects to the mux bridge (br.mx).
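
To see which guest TAP devices are plugged into the local bridge, you can list its ports from any CVM. A minimal sketch, assuming bridge chaining is enabled (bridge names can vary by AOS release):

hostssh "ovs-vsctl list-ports br0.local"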

[Figure: Nutanix AHV service chaining packet flow]

Read also: Nutanix AHV: VLAN Tagging on Guest VM

Nutanix AHV: kDirectNIC

kDirectNic: kDirectNic is also a TAP interface connected to the OVS bridge; however, its I/O functions are passed through directly to the AHV uplink, bypassing the bridge chain.

kDirectNic is typically used in Windows or Linux VM cluster setups, where clustered VMs communicate with each other through a virtual IP; kNormalNic is not suitable in this case.
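
To attach a pass-through vNIC of this type, a command along these lines should work from any CVM (a sketch; the network name is a placeholder, and type=kDirectNic follows the same pattern as the kNetworkFunctionNic examples later in this post):

acli vm.nic_create <vm name> network=<network name> type=kDirectNic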

Guest VM is using kDirectNic: If only a few particular VMs are affected by Flow (that is, a security policy is not blocking traffic to protected VMs), check what type of NIC those VMs are using, as security policies are only applied to VMs with kNormalNic. By default, all newly created VMs are assigned kNormalNic.

To check NIC type, run the following command on any CVM:

acli vm.nic_get <vm name> | egrep "mac_addr:| type:"

Sample output for VM with kDirectNic:

nutanix@CVM:~$ acli vm.nic_get test_vm |egrep "mac_addr:| type:"
  mac_addr: "50:6b:8d:c8:c0:db"
  type: "kDirectNic"

To change NIC type, run the following command:

acli vm.nic_update <vm name> <NIC MAC address> type=kNormalNic
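
For example, using the VM and MAC address from the sample output above, switching test_vm back to kNormalNic looks like this:

acli vm.nic_update test_vm 50:6b:8d:c8:c0:db type=kNormalNic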

Note: User VMs configured with kDirectNic cannot be protected by Nutanix Flow Security policies.
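
If you are unsure which VMs use kDirectNic, a quick sweep over all VMs from any CVM could look like the following sketch (it assumes VM names contain no spaces; adjust the parsing for your environment):

# Print the NIC MAC and type for every VM on the cluster.
for vm in $(acli vm.list | awk 'NR>1 {print $1}'); do
  echo "== $vm =="
  acli vm.nic_get "$vm" | egrep "mac_addr:| type:"
done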

Read also: Nutanix AHV: Windows VM Kernel Memory dump

Nutanix AHV: kNetworkFunctionNIC

kNetworkFunctionNic: This is a TAP interface that connects to the network function bridge (br.nf). Nutanix AHV uses kNetworkFunctionNic for service chaining, which is configured through the Nutanix Flow security policy software embedded in Nutanix Prism Central.

kNetworkFunctionNic is useful for deploying firewall, packet capture, or intrusion detection VMs that intercept or mirror traffic to and from VMs on AHV and direct that traffic through, or to, another VM also running on AHV.

Traffic Direction Options

  • Flow Security Policies (VM to VM) – Requires Flow Licenses
  • Directing an Entire AHV Network (VM to VM) – PC Required. No Flow Required
  • Directing a Single VM NIC (VM to VM) – PC Required. No Flow Required

kNetworkFunctionNIC Commands in Nutanix AHV

<acropolis> vm.nic_create vendor-1 type=kNetworkFunctionNic network_function_nic_type=kIngress
<acropolis> vm.nic_create vendor-1 type=kNetworkFunctionNic network_function_nic_type=kEgress
# Replace the above two steps with the following for tap mode:
<acropolis> vm.nic_create vendor-1 type=kNetworkFunctionNic network_function_nic_type=kTap
#Tie the NFV VM to a single host so it is guaranteed to capture traffic on this AHV host only.

The NFV VM referred to here is a special VM that runs on every AHV host in the cluster. You provision this VM, mark it as an agent VM, and then add it to a network function chain. The VM can run any OS supported on AHV, and you can choose whether to attach a single interface as a tap or multiple interfaces inline.
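
As a sketch of those two provisioning steps, using the vendor-1 VM from the commands above (the host IP is a placeholder):

<acropolis> vm.update vendor-1 agent_vm=true
<acropolis> vm.affinity_set vendor-1 host_list=<host-ip>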

Read also: Nutanix AHV: Deploy Windows Server 2008 VM


This NFV VM can receive, inspect, and capture traffic in tap mode. In inline mode it can perform these functions and also decide whether to reject or forward the traffic. In the example diagram above, imagine that VM as a Palo Alto Networks VM-Series firewall; I've also used the Snort IDS in my own lab.

With this type of NFV configured in a network function chain, you can only capture traffic sent or received by VMs running on AHV. You cannot capture traffic sent by physical hosts, nor feed ERSPAN-type traffic from external sources into the NFV VM.

Read more: How to create service chain using REST API

Nutanix AHV Service Chaining

Nutanix AHV service chaining allows us to intercept all traffic and forward it to a packet processor (an NFV, appliance, virtual appliance, etc.) that functions transparently as part of the network path.

Common uses for service chaining:

  • Firewall (e.g. Palo Alto, etc.)
  • Load balancer (e.g. F5, Netscaler, etc.)
  • IDS/IPS/network monitors (e.g. packet capture)

Within service chaining there are two types of packet processor:

[Figure: Nutanix service chaining packet processors]
  • Inline packet processor
    • Intercepts packets inline as they flow through OVS
    • Can modify and allow/deny packets
    • Common uses: firewalls and load balancers
  • Tap packet processor
    • Inspects packets as they flow, can only read as it’s a tap into the packet flow
    • Common uses: IDS/IPS/network monitor

Read also: Nutanix AHV Cluster size Maximums

Any service chaining is done after the Flow microsegmentation rules are applied and before the packet leaves the local OVS. This occurs in the network function bridge (br.nf):

br.nf: the network function bridge is where network function VMs are connected to OVS through TAP devices. Packets are redirected to and from these VMs through the ports corresponding to those TAP devices.

br.mx and br.dmx: the mux and demux bridges combine and split traffic between the multiple local bridges and the single bridge chain, and between the bridge chain and the multiple uplink bridges. VLAN tags are converted into network IDs as they enter the bridge chain from either side, and converted back into the original VLAN tags as they leave the bridge chain.
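
To see the full bridge chain on each host, you can list all OVS bridges from any CVM. A sketch (the mux, demux, and network function bridges appear only when bridge chaining is enabled, and names can vary by AOS release):

hostssh "ovs-vsctl list-br"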

[Figure: Nutanix service chaining flow]

NOTE: It is possible to string together multiple NFVs / packet processors in a single chain.

Read more: https://next.nutanix.com/blog-40/what-s-new-in-ahv-networking-part-4-24521

Enable Service Chaining In Nutanix AHV

Bridge / service chaining on a Nutanix AHV host is disabled: If bridge chaining is disabled, security policies cannot be applied to any VMs running on that AHV host. Read more: Nutanix AHV Core Architecture

Run the following command on any CVM to check the bridge chaining status:

hostssh cat /root/acropolis_ovs_config

Sample output if bridge chaining is disabled:

nutanix@CVM:~$ hostssh cat /root/acropolis_ovs_config 
============= xx.xx.xx.1 ============ 
[DEFAULT] 
disable_bridge_chain = True 

Sample output if bridge chaining is enabled:

nutanix@CVM:~$ hostssh cat /root/acropolis_ovs_config 
============= xx.xx.xx.1 ============ 
[DEFAULT] 
disable_bridge_chain = False

If you receive an error message saying that the "/root/acropolis_ovs_config" file is not found, the settings are at their defaults and bridge chaining is enabled.

Run the following command to enable bridge chaining on all hosts (this is safe to run live since no bond is being recreated):

allssh "manage_ovs enable_bridge_chain"

After that, reapply the security policy and verify that it works as expected.

Read also: Nutanix AHV Node Failed to detect Network card NICs after Replacement

Conclusion

The Nutanix AHV hypervisor supports many advanced OVS features that add flexibility and harden the security of both the Nutanix infrastructure and guest VMs within the AHV networking stack.

Thanks for being with HyperHCI Tech Blog. Stay tuned for the latest and trending technical posts!