Chapter 7. Performance

Red Hat OpenStack Platform 10 director configures the Compute nodes to enforce resource partitioning and fine tuning to achieve line rate performance for the guest VNFs. The key performance factors in the NFV use case are throughput, latency and jitter.

DPDK-accelerated OVS enables high-performance packet switching between physical NICs and virtual machines. OVS 2.5 with DPDK 2.2 adds support for vhost-user multiqueue, allowing scalable performance. OVS-DPDK provides line rate performance for guest VNFs.
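
As a sketch of how vhost-user multiqueue is typically enabled (the image name, interface name, and queue count below are placeholders), you flag the guest image as multiqueue-capable and then activate the queues inside the guest:

    # Flag the guest image as multiqueue-capable (image name is a placeholder).
    $ openstack image set --property hw_vif_multiqueue_enabled=true rhel-guest-image

    # Inside the guest, enable the required number of combined queues on the interface.
    $ ethtool -L eth0 combined 4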

SR-IOV networking provides enhanced performance characteristics, including improved throughput for specific networks and virtual machines.

Other important features for performance tuning include huge pages, NUMA alignment, host isolation and CPU pinning. VNF flavors require huge pages for better performance. Host isolation and CPU pinning improve NFV performance and prevent spurious packet loss.
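
For illustration, a VNF flavor can request these features through extra specs. This is a minimal sketch, and the flavor name vnf.small is a placeholder:

    # Request huge pages, pinned CPUs, and NUMA alignment for a VNF flavor
    # (the flavor name is a placeholder).
    $ openstack flavor set vnf.small \
        --property hw:mem_page_size=large \
        --property hw:cpu_policy=dedicated \
        --property hw:numa_nodes=1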

For more details on these features and performance tuning for NFV, see NFV Tuning for Performance.

7.1. Configuring RX/TX queue size

You can experience packet loss at high packet rates, above 3.5 Mpps, for many reasons, such as:

  • a network interrupt
  • a system management interrupt (SMI)
  • packet processing latency in the virtual network function (VNF)

To prevent packet loss, increase the queue size from the default of 256 to a maximum of 1024.

Prerequisites

  • To configure RX, ensure that you have libvirt v2.3 or later and QEMU v2.7 or later.
  • To configure TX, ensure that you have libvirt v3.7 or later and QEMU v2.10 or later.
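
You can confirm the installed versions on the Compute node before you begin, for example:

    # Query the libvirt and QEMU versions
    # (package names can vary between releases).
    $ rpm -q libvirt qemu-kvm-rhev
    $ libvirtd --version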

Procedure

  • To increase the RX and TX queue size, include the following lines in the parameter_defaults: section for the relevant director role. The following example uses the ComputeOvsDpdk role:

    parameter_defaults:
      ComputeOvsDpdkParameters:
        NovaLibvirtRxQueueSize: 1024
        NovaLibvirtTxQueueSize: 1024
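
    After you set the parameters, include the environment file that contains them in the overcloud deployment command. This is a minimal sketch; the environment file path is a placeholder:

    # Redeploy the overcloud to apply the new queue sizes
    # (the environment file path is a placeholder).
    $ openstack overcloud deploy --templates \
        -e /home/stack/templates/network-environment.yaml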

Testing

  • You can observe the values for RX queue size and TX queue size in the nova.conf file:

    [libvirt]
    rx_queue_size=1024
    tx_queue_size=1024
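
    For example, you can confirm these settings on the Compute node with grep:

    # Check the configured queue sizes in nova.conf
    $ grep -E 'rx_queue_size|tx_queue_size' /etc/nova/nova.conf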
  • You can check the values for RX queue size and TX queue size in the VM instance XML file generated by libvirt on the compute host.

    <devices>
       <interface type='vhostuser'>
         <mac address='56:48:4f:4d:5e:6f'/>
         <source type='unix' path='/tmp/vhost-user1' mode='server'/>
         <model type='virtio'/>
         <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024'/>
         <address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
       </interface>
    </devices>

    To verify the values for RX queue size and TX queue size, use the following command on a KVM host:

    $ virsh dumpxml <vm name> | grep queue_size
  • You can check for improved performance, such as 3.8 Mpps per core at zero frame loss.