OVS-DPDK End to End Troubleshooting Guide

Table of Contents

Preface
1. Preliminary Checks
2. Validating an OVS-DPDK Deployment
   2.1. Confirming OpenStack
      2.1.1. Show the Network Agents
      2.1.2. Show the Hosts in the Compute Service
   2.2. Confirming Compute Node OVS Configuration
   2.3. Confirming OVS for Instance Configuration
   2.4. Other Helpful Commands
   2.5. Simple Compute Node CPU Partitioning and Memory Checks
      2.5.1. Detecting CPUs
      2.5.2. Detecting PMD Threads
      2.5.3. Detecting NUMA Nodes
      2.5.4. Detecting Isolated CPUs
      2.5.5. Detecting CPUs Dedicated to Nova Instances
      2.5.6. Confirming Huge Pages Configuration
   2.6. Causes for Packet Drops
      2.6.1. OVS-DPDK Too Slow to Drain Physical NICs
      2.6.2. VM Too Slow to Drain vhost-user
      2.6.3. OVS-DPDK Too Slow to Drain vhost-user
      2.6.4. Packet Loss on Egress Physical Interface
3. NFV Command Cheatsheet
   3.1. UNIX Sockets
   3.2. IP
   3.3. OVS
   3.4. IRQ
   3.5. Processes
   3.6. KVM
   3.7. CPU
   3.8. NUMA
   3.9. Memory
   3.10. PCI
   3.11. Tuned
   3.12. Profiling Process
   3.13. Block I/O
   3.14. Real Time
   3.15. Security
   3.16. Juniper Contrail vRouter
   3.17. Containers
   3.18. OpenStack
4. High Packet Loss in the TX Queue of the Instance's Tap Interface
   4.1. Symptom
   4.2. Diagnosis
      4.2.1. Workaround
      4.2.2. Diagnostic Steps
   4.3. Solution
5. TX Drops on Instance VHU Interfaces with Open vSwitch DPDK
   5.1. Symptom
      5.1.1. Explanation for Packet Drops
      5.1.2. Explanation for Other Drops
      5.1.3. Increasing the TX and RX Queue Lengths for DPDK
   5.2. Diagnosis
   5.3. Solution
6. Interpreting the Output of the pmd-stats-show Command in Open vSwitch with DPDK
   6.1. Symptom
   6.2. Diagnosis
   6.3. Solution
      6.3.1. Idle PMD
      6.3.2. PMD under load test with packet drop
      6.3.3. PMD under load test with 50% of mpps capacity
      6.3.4. Hit vs. miss vs. lost
7. Attaching and Detaching SR-IOV Ports in Nova
   7.1. Symptom
   7.2. Diagnosis
   7.3. Solution
8. Configure and Test LACP Bonding with Open vSwitch DPDK
   8.1. Configuring the Switch Ports for LACP
   8.2. Configuring Linux Kernel Bonding for LACP as a Baseline
   8.3. Configuring OVS DPDK Bonding for LACP
      8.3.1. Prepare Open vSwitch
      8.3.2. Configure LACP Bond
      8.3.3. Enabling/Disabling Ports from OVS
9. Deploying Different Bond Modes with OVS DPDK
   9.1. Solution
10. Receiving the "Could not open network device dpdk0 (No such device)" Message in ovs-vsctl show
   10.1. Symptom
   10.2. Diagnosis
   10.3. Solution
11. Insufficient Free Host Memory Pages Available to Allocate Guest RAM with Open vSwitch DPDK
   11.1. Symptom
   11.2. Diagnosis
      11.2.1. Diagnostic Steps
   11.3. Solution
12. Troubleshoot OVS DPDK PMD CPU Usage with perf and Collect and Send the Troubleshooting Data
   12.1. Diagnosis
      12.1.1. PMD Threads
      12.1.2. Additional Data
      12.1.3. Open vSwitch Logs
13. Using virsh emulatorpin in Virtual Environments with NFV
   13.1. Symptom
   13.2. Solution
      13.2.1. qemu-kvm Emulator Threads
      13.2.2. Default Behavior for Emulator Thread Pinning
      13.2.3. About the Impact of isolcpus on Emulator Thread Scheduling
      13.2.4. Optimal Location of Emulator Threads
         13.2.4.1. Optimal Placement of Emulator Threads with DPDK Networking Within the Instance and netdev Datapath in Open vSwitch
         13.2.4.2. Optimal Placement of Emulator Threads with DPDK Networking Within the Instance and System Datapath in Open vSwitch
         13.2.4.3. Optimal Placement of Emulator Threads with Kernel Networking Within the Instance and netdev Datapath in Open vSwitch
   13.3. Diagnosis
      13.3.1. The Demonstration Environment
      13.3.2. How Emulatorpin Works
Legal Notice

Chapter 1. Preliminary Checks

This guide assumes that you are familiar with the planning and deployment procedures in the following documents:

Planning your OVS-DPDK deployment
Configuring an OVS-DPDK Deployment
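Before working through the deeper diagnostics in this guide, it can help to script a first pass over the basics that the validation chapter covers: the Open vSwitch and DPDK state, the OpenStack agents and compute hosts, and the huge pages configuration. The sketch below is an assumption drawn from the section titles above, not the guide's own procedure; the `check` helper and the particular command selection are illustrative, and you should adapt them to your deployment.

```shell
#!/bin/sh
# Hedged first-pass sanity script for an OVS-DPDK compute node.
# Each command is skipped cleanly if the relevant tool is not installed,
# so the script can run on any host without failing hard.

check() {
    # Run a command only if its binary exists; report failures without aborting.
    command -v "$1" >/dev/null 2>&1 || { echo "skip: $1 not installed"; return 0; }
    "$@" || echo "warn: '$*' returned non-zero"
}

# Confirm Open vSwitch is reachable and DPDK support is initialized.
check ovs-vsctl show
check ovs-vsctl get Open_vSwitch . dpdk_initialized

# Confirm the control plane sees its network agents and compute hosts
# (compare sections 2.1.1 and 2.1.2).
check openstack network agent list
check openstack compute service list

# Confirm huge pages are configured on the compute node (section 2.5.6).
check grep -i hugepages /proc/meminfo
```

Anything this script flags, such as `dpdk_initialized` reporting `false` or `HugePages_Total: 0`, points you at the corresponding validation section in Chapter 2 before the symptom-driven chapters become relevant.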