Assigning vhostuser ports to PMD on NUMA node 1 fails if NUMA node 0 is fully isolated in Red Hat OpenStack Platform 10

Solution In Progress

Issue

Assume a hypervisor with 2 NUMA nodes, where all PMD core_ids on NUMA node 0 are isolated via

ovs-vsctl set interface $i options:n_rxq=$n other_config:pmd-rxq-affinity="$a"

The instance spans 2 NUMA nodes.
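For illustration, the loop above with the variables expanded to the concrete values used later in this case (the interface names and core_ids are specific to this environment; adjust them to yours):

```shell
# Pin 4 rx queues of each vhostuser interface to core_ids 4/48/5/49,
# all of which sit on NUMA node 0 in this environment (see lscpu below).
for i in vhueb090a0a-6f vhu4fb66f90-e3 vhue66ecbf9-09 ; do
    ovs-vsctl set interface $i options:n_rxq=4 \
        other_config:pmd-rxq-affinity="0:4,1:48,2:5,3:49"
done
```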

In specific situations, qemu-kvm / Open vSwitch try to assign the vhostuser ports only to PMDs on NUMA 1, which leads to failed assignment of the vhostuser interfaces and no instance connectivity at all. While NUMA affinity may explain this behaviour, the question is why it ultimately fails outright instead of falling back to PMDs on the other NUMA node. Wouldn't reduced performance be preferable to a failure?

Problem:

[root@overcloud-compute-0 heat-admin]# virsh dumpxml 9 | grep pin
    <vcpupin vcpu='0' cpuset='19'/>
    <vcpupin vcpu='1' cpuset='63'/>
    <vcpupin vcpu='2' cpuset='57'/>
    <vcpupin vcpu='3' cpuset='13'/>
    <vcpupin vcpu='4' cpuset='56'/>
    <vcpupin vcpu='5' cpuset='12'/>
    <vcpupin vcpu='6' cpuset='58'/>
    <vcpupin vcpu='7' cpuset='14'/>
    <vcpupin vcpu='8' cpuset='83'/>
    <vcpupin vcpu='9' cpuset='39'/>
    <vcpupin vcpu='10' cpuset='40'/>
    <vcpupin vcpu='11' cpuset='84'/>
    <vcpupin vcpu='12' cpuset='42'/>
    <vcpupin vcpu='13' cpuset='86'/>
    <vcpupin vcpu='14' cpuset='33'/>
    <vcpupin vcpu='15' cpuset='77'/>
    <emulatorpin cpuset='12-14,19,33,39-40,42,56-58,63,77,83-84,86'/>
[root@overcloud-compute-0 heat-admin]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                88
On-line CPU(s) list:   0-87
Thread(s) per core:    2
Core(s) per socket:    22
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
Stepping:              1
CPU MHz:               2802.507
CPU max MHz:           3600.0000
CPU min MHz:           1200.0000
BogoMIPS:              4399.72
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              56320K
NUMA node0 CPU(s):     0-21,44-65
NUMA node1 CPU(s):     22-43,66-87
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
[root@overcloud-compute-0 heat-admin]# tail /var/log/openvswitch/ovs-vswitchd.log
2017-10-24T17:25:25.982Z|00107|dpdk(vhost_thread1)|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/var/run/openvswitch/vhue66ecbf9-09'changed to 'enabled'
2017-10-24T17:25:25.982Z|00108|dpdk(vhost_thread1)|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/var/run/openvswitch/vhu4fb66f90-e3'changed to 'enabled'
2017-10-24T17:25:25.984Z|00109|dpdk(vhost_thread1)|INFO|vHost Device '/var/run/openvswitch/vhueb090a0a-6f' has been added on numa node 0
2017-10-24T17:25:26.014Z|00110|dpdk(vhost_thread1)|INFO|vHost Device '/var/run/openvswitch/vhue66ecbf9-09' has been added on numa node 0
2017-10-24T17:25:26.014Z|00111|dpdk(vhost_thread1)|INFO|vHost Device '/var/run/openvswitch/vhu4fb66f90-e3' has been added on numa node 0
2017-10-24T17:25:26.478Z|00289|dpif_netdev|INFO|Created 12 pmd threads on numa node 1
2017-10-24T17:25:26.490Z|00290|dpif_netdev|INFO|Created 4 pmd threads on numa node 0
2017-10-24T17:25:26.490Z|00291|dpif_netdev|WARN|There's no available pmd thread on numa node 0
2017-10-24T17:25:26.490Z|00292|dpif_netdev|WARN|There's no available pmd thread on numa node 0
2017-10-24T17:25:26.490Z|00293|dpif_netdev|WARN|There's no available pmd thread on numa node 0
[root@overcloud-compute-0 heat-admin]# virsh dumpxml 9 | grep vhu
      <source type='unix' path='/var/run/openvswitch/vhueb090a0a-6f' mode='client'/>
      <source type='unix' path='/var/run/openvswitch/vhu4fb66f90-e3' mode='client'/>
      <source type='unix' path='/var/run/openvswitch/vhue66ecbf9-09' mode='client'/>
[root@overcloud-compute-0 heat-admin]# ovs-appctl dpif-netdev/pmd-rxq-show | egrep 'vhueb090a0a-6f|vhu4fb66f90-e3|vhue66ecbf9-09'
[root@overcloud-compute-0 heat-admin]#
[root@overcloud-compute-0 heat-admin]# for i in `echo 'vhueb090a0a-6f|vhu4fb66f90-e3|vhue66ecbf9-09' | tr '|' ' '` ; do ovs-vsctl set interface $i options:n_rxq=4 other_config:pmd-rxq-affinity="0:4,1:48,2:5,3:49" ; done
[root@overcloud-compute-0 heat-admin]# ovs-appctl dpif-netdev/pmd-rxq-show | egrep 'vhueb090a0a-6f|vhu4fb66f90-e3|vhue66ecbf9-09'
        port: vhu4fb66f90-e3    queue-id: 1
        port: vhue66ecbf9-09    queue-id: 1
        port: vhueb090a0a-6f    queue-id: 1
        port: vhu4fb66f90-e3    queue-id: 0
        port: vhue66ecbf9-09    queue-id: 0
        port: vhueb090a0a-6f    queue-id: 0
        port: vhu4fb66f90-e3    queue-id: 3
        port: vhue66ecbf9-09    queue-id: 3
        port: vhueb090a0a-6f    queue-id: 3
        port: vhu4fb66f90-e3    queue-id: 2
        port: vhue66ecbf9-09    queue-id: 2
        port: vhueb090a0a-6f    queue-id: 2
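The pinned core_ids 4, 48, 5 and 49 all fall inside the NUMA node 0 CPU list (0-21,44-65) reported by lscpu above. A minimal sketch to check such membership (the core_in_list helper is hypothetical, written only for this example; the range strings are in the lscpu NUMA cpulist format):

```shell
# core_in_list CORE RANGES: succeed if CORE lies within a comma-separated
# list of inclusive ranges such as "0-21,44-65".
core_in_list() {
    core=$1
    for r in $(echo "$2" | tr ',' ' '); do
        lo=${r%-*}; hi=${r#*-}
        [ "$core" -ge "$lo" ] && [ "$core" -le "$hi" ] && return 0
    done
    return 1
}

# The cores used in pmd-rxq-affinity above are all local to NUMA node 0:
for c in 4 48 5 49; do
    core_in_list "$c" "0-21,44-65" && echo "core $c is on NUMA node 0"
done
```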

http://docs.openvswitch.org/en/latest/howto/dpdk/#port-rxq-assigment-to-pmd-threads

If there are no non-isolated PMD threads, non-pinned RX queues will not be polled. Also, if provided core_id is not available (ex. this core_id not in pmd-cpu-mask), RX queue will not be polled by any PMD thread.
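Given that constraint, one way out on affected versions is to leave at least one PMD thread per NUMA node non-isolated, for example by clearing the pinning from the vhostuser interfaces so their queues can again be distributed across any PMD in pmd-cpu-mask. A sketch, reusing the interface names from this case:

```shell
# Remove the rxq pinning so these queues fall back to the default
# distribution across non-isolated PMD threads in pmd-cpu-mask.
for i in vhueb090a0a-6f vhu4fb66f90-e3 vhue66ecbf9-09 ; do
    ovs-vsctl remove interface $i other_config pmd-rxq-affinity
done
```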

Environment

Red Hat OpenStack Platform 10
Open vSwitch before 2.8
