Chapter 3. NFV Command Cheatsheet
This chapter contains many of the most commonly used commands for Red Hat OpenStack Platform 10 system observability.
Some of the commands below may not be available by default. To install the required tools for a given node, run the following command:
$ sudo yum install tuna qemu-kvm-tools perf kernel-tools dmidecode
3.1. UNIX Sockets
Use these commands to display the ports and UNIX domain sockets that processes are using.
Action | Command |
---|---|
Show all TCP and UDP sockets in all states (LISTEN, ESTABLISHED, CLOSE_WAIT, etc.) without hostname lookup | # lsof -ni |
Show all TCP sockets in all states (LISTEN, ESTABLISHED, CLOSE_WAIT, etc.) without hostname lookup | # lsof -nit |
Show all UDP sockets in all states (LISTEN, ESTABLISHED, CLOSE_WAIT, etc.) without hostname lookup | # lsof -niu |
Show all TCP and UDP sockets in all states (LISTEN, ESTABLISHED, CLOSE_WAIT, etc.) without hostname lookup for IPv4 | # lsof -ni4 |
Show all TCP and UDP sockets in all states (LISTEN, ESTABLISHED, CLOSE_WAIT, etc.) without hostname lookup for IPv6 | # lsof -ni6 |
Show all sockets in any state (LISTEN, ESTABLISHED, CLOSE_WAIT, etc.) without hostname lookup for a given port | # lsof -ni :4789 |
Show all sockets in LISTEN state without hostname lookup | # ss -ln |
Show all sockets in LISTEN state without hostname lookup for IPv4 | # ss -ln4 |
Show all sockets in LISTEN state without hostname lookup for IPv6 | # ss -ln6 |
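For example, to find which process owns a listening port, the two tools can be combined as follows (the port number 6640 is only an illustration; substitute the port you are investigating):
```
# Confirm something is listening on the port, without name lookups
ss -lnt | grep ':6640'
# Map the listening socket to a PID and process name
lsof -ni :6640
```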
3.2. IP
Use these commands to show IP L2 and L3 configuration, drivers, PCI buses, and network statistics.
Action | Command |
---|---|
Show all L2 (both physical and virtual) interfaces and their statistics | # ip -s link show |
Show all L3 interfaces and their statistics | # ip -s addr show |
Show default (main) IP routing table | # ip route show |
Show routes in a given routing table | # ip route show table external |
Show all routing policy rules and the routing tables they reference | # ip rule show |
Show the route used to reach a given destination | # ip route get 1.1.1.1 |
Show all Linux network namespaces | # ip netns show |
Run a shell inside a given network namespace | # ip netns exec ns0 bash |
Show detailed network interface counters of a given interface | # tail /sys/class/net/ens6/statistics/* |
Show detailed bonding information of a given bond device | # cat /proc/net/bonding/bond1 |
Show global network interface counter view | # cat /proc/net/dev |
Show physical connection type (TP, FIBRE, etc.) and the supported and negotiated link speeds for a given network interface | # ethtool ens6 |
Show Linux driver, driver version, firmware, and PCIe BUS ID of a given network interface | # ethtool -i ens6 |
Show default, enabled, and disabled hardware offloads for a given network interface | # ethtool -k ens6 |
Show MQ (multiqueue) configuration for a given network interface | # ethtool -l ens6 |
Change MQ setup for both RX and TX for a given network interface | # ethtool -L ens6 combined 8 |
Change MQ setup only for TX for a given network interface | # ethtool -L ens6 tx 8 |
Show queue size for a given network interface | # ethtool -g ens6 |
Change RX queue size for a given network interface | # ethtool -G ens6 rx 4096 |
Show kernel softnet statistics, including internal drop counters (for more information, see Monitoring network data processing) | # cat /proc/net/softnet_stat |
Show quick important network device info (Interface name, MAC, NUMA, PCIe slot, firmware, kernel driver) | # biosdevname -d |
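As an illustration, the per-interface counters under /sys can be printed with their names in one pass (ens6 is an example interface name):
```
# Print every statistics counter of an interface together with its name
for f in /sys/class/net/ens6/statistics/*; do
    printf '%-24s %s\n' "$(basename "$f")" "$(cat "$f")"
done
```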
3.3. OVS
Use these commands to show Open vSwitch related information.
Action | Command |
---|---|
OVS DPDK human readable statistics | |
Show OVS basic info (version, dpdk enabled, PMD cores, lcore, ODL bridge mapping, balancing, auto-balancing etc) | # ovs-vsctl list Open_vSwitch |
Show OVS global switching view | # ovs-vsctl show |
Show OVS all detailed interfaces | # ovs-vsctl list interface |
Show OVS details for one interface (link speed, MAC, status, stats, etc) | # ovs-vsctl list interface dpdk0 |
Show OVS counters for a given interface | # ovs-vsctl get interface dpdk0 statistics |
Show OVS all detailed ports | # ovs-vsctl list port |
Show OVS details for one port (link speed, MAC, status, stats, etc) | # ovs-vsctl list port vhu3gf0442-00 |
Show OVS details for one bridge (datapath type, multicast snooping, stp status etc) | # ovs-vsctl list bridge br-int |
Show OVS log status | # ovs-appctl vlog/list |
Change all OVS logging to debug | # ovs-appctl vlog/set dbg |
Change one specific OVS subsystem to debug mode for the file log output | # ovs-appctl vlog/set file:backtrace:dbg |
Disable all OVS logs | # ovs-appctl vlog/set off |
Change all OVS subsystems to debug for file log output only | # ovs-appctl vlog/set file:dbg |
Show all OVS advanced commands | # ovs-appctl list-commands |
Show all OVS bonds | # ovs-appctl bond/list |
Show details about a specific OVS bond (status, bond mode, forwarding mode, LACP status, bond members, bond member status, link status) | # ovs-appctl bond/show bond1 |
Show advanced LACP information for members, bond and partner switch | # ovs-appctl lacp/show |
Show OVS interface counters | # ovs-appctl dpctl/show -s |
Show OVS interface counters highlighting differences between iterations | # watch -d -n1 "ovs-appctl dpctl/show -s|grep -A4 -E '(dpdk|dpdkvhostuser)'|grep -v '\-\-'" |
Show OVS mempool info for a given port | # ovs-appctl netdev-dpdk/get-mempool-info dpdk0 |
Show PMD performance statistics | # ovs-appctl dpif-netdev/pmd-stats-show |
Show PMD performance statistics in a consistent way | # ovs-appctl dpif-netdev/pmd-stats-clear && sleep 60s && ovs-appctl dpif-netdev/pmd-stats-show |
Show DPDK interface statistics human readable | # ovs-vsctl get interface dpdk0 statistics|sed -e "s/,/\n/g" -e "s/[\",\{,\}, ]//g" -e "s/=/ =⇒ /g" |
Show OVS mapping between ports/queue and PMD threads | # ovs-appctl dpif-netdev/pmd-rxq-show |
Trigger OVS PMD rebalance (based on PMD cycles utilization) | # ovs-appctl dpif-netdev/pmd-rxq-rebalance |
Create affinity between an OVS port and a specific PMD (disabling the PMD from any balancing) | # ovs-vsctl set interface dpdk other_config:pmd-rxq-affinity="0:2,1:4" |
(OVS 2.11+ and FDP18.09) Set PMD balancing based on cycles | # ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-assign=cycles |
(OVS 2.11+ and FDP18.09) Set PMD balancing in round robin | # ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-assign=roundrobin |
Set the number of queues on OVS-DPDK physical ports | # ovs-vsctl set interface dpdk options:n_rxq=2 |
Set the queue sizes on OVS-DPDK physical ports | # ovs-vsctl set Interface dpdk0 options:n_rxq_desc=4096 # ovs-vsctl set Interface dpdk0 options:n_txq_desc=4096 |
Show OVS MAC address table (used for action=normal) | # ovs-appctl fdb/show br-provider |
Set OVS vSwitch MAC Address table aging time (default 300s) | # ovs-vsctl set bridge br-provider other_config:mac-aging-time=900 |
Set OVS vSwitch MAC address table size (default 2048 entries) | # ovs-vsctl set bridge br-provider other_config:mac-table-size=204800 |
Show OVS datapath flows (kernel space) | # ovs-dpctl dump-flows -m |
Show OVS datapath flows (dpdk) | # ovs-appctl dpif/dump-flows -m br-provider |
Show mapping between datapath flows port number and port name | # ovs-dpctl show |
Show OVS OpenFlow rules in a given bridge | # ovs-ofctl dump-flows br-provider |
Show mapping between OpenFlow flows port number and port name | # ovs-ofctl show br-provider |
(OVS 2.11+) - Enable auto-rebalance | # ovs-vsctl set Open_vSwitch . other_config:pmd-auto-lb="true" |
(OVS 2.11+) - Change auto-rebalance interval to a different value (default 1 minute) | # ovs-vsctl set Open_vSwitch . other_config:pmd-auto-lb-rebalance-intvl="5" |
Detailed OVS internal configs | # man ovs-vswitchd.conf.db |
Download the ovs-tcpdump utility | # curl -O -L <URL of the ovs-tcpdump.in script in the Open vSwitch repository> |
To perform a packet capture from a DPDK interface | # ovs-tcpdump.py --db-sock unix:/var/run/openvswitch/db.sock -i <bond/vhu> <tcpdump standard arguments such as -v -nn -e -w <path/to/file>> |
(OVS 2.10+) Detailed PMD performance stats | # ovs-appctl dpif-netdev/pmd-perf-show |
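For example, a consistent PMD measurement followed by a quick check for drops on a DPDK port can be scripted as follows (dpdk0 is an example port name):
```
# Reset PMD counters, sample for 60 seconds, then report
ovs-appctl dpif-netdev/pmd-stats-clear
sleep 60
ovs-appctl dpif-netdev/pmd-stats-show
# List only the drop-related counters of a DPDK port
ovs-vsctl get interface dpdk0 statistics | tr ',' '\n' | grep -i drop
```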
3.4. IRQ
Use these commands to show Interrupt Request Line (IRQ) software and hardware interrupts.
Action | Command |
---|---|
Show SoftIRQ balancing per CPU executed by the ksoftirqd workers | # cat /proc/softirqs | less -S |
Show SoftIRQ balancing per CPU executed by the ksoftirqd workers every second | # watch -n1 -d -t "cat /proc/softirqs" |
Show hardware and software interrupts (NMI, LOC, TLB, RSE, PIN, NPI, PIW) balancing per CPU | # cat /proc/interrupts | less -S |
Show hardware and software interrupts (NMI, LOC, TLB, RSE, PIN, NPI, PIW) balancing per CPU every second | # watch -n1 -d -t "cat /proc/interrupts" |
Show Timer interrupts | # cat /proc/interrupts | grep -E "LOC|CPU" | less -S |
Show Timer interrupts every second | # watch -n1 -d -t "cat /proc/interrupts | grep -E 'LOC|CPU'" |
Show default IRQ CPU affinity | # cat /proc/irq/default_smp_affinity |
Show IRQ affinity for a given IRQ (CPUMask) | # cat /proc/irq/89/smp_affinity |
Show IRQ affinity for a given IRQ (DEC) | # cat /proc/irq/89/smp_affinity_list |
Set IRQ affinity for a given IRQ (CPUMask) | # echo -n 1000 > /proc/irq/89/smp_affinity |
Set IRQ affinity for a given IRQ (DEC) | # echo -n 12 > /proc/irq/89/smp_affinity_list |
Show hardware interrupts CPU affinity | # tuna --show_irqs |
Set IRQ affinity for a given IRQ (DEC, supporting ranges, e.g. 0-4 means from 0 to 4) | # tuna --irqs=<IRQ> --cpus=<CPU> --move |
Show IRQ CPU utilization distribution | # mpstat -I CPU | less -S |
Show IRQ CPU utilization distribution for a given CPU | # mpstat -I CPU -P 4 | less -S |
Show SoftIRQ CPU utilization distribution | # mpstat -I SCPU | less -S |
Show SoftIRQ CPU utilization distribution for a given CPU | # mpstat -I SCPU -P 4 | less -S |
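As a worked example, the mask and list interfaces are equivalent: pinning an IRQ to CPU 12 uses either the hexadecimal mask 1000 (bit 12 set) or the CPU number directly (IRQ 89 is an example):
```
# Compute the CPU mask for CPU 12: 1 << 12 = 0x1000
printf '%x\n' $((1 << 12))
# Pin IRQ 89 to CPU 12 using the mask form ...
echo -n 1000 > /proc/irq/89/smp_affinity
# ... or, equivalently, using the CPU list form, then verify
echo -n 12 > /proc/irq/89/smp_affinity_list
cat /proc/irq/89/smp_affinity_list
```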
3.5. Processes
Use these commands to show processes and threads in Linux, Process Scheduler, and CPU Affinity.
Action | Command |
---|---|
Show CPU usage and CPU affinity for a given process name, including all of its threads | # pidstat -p $(pidof qemu-kvm) -t |
Show CPU usage and CPU affinity for a given process name, including all of its threads, every 10 seconds for 30 iterations | # pidstat -p $(pidof qemu-kvm) -t 10 30 |
Show page faults and memory utilization for a given process name, including all of its threads | # pidstat -p $(pidof qemu-kvm) -t -r |
Show I/O statistics for a given process name, including all of its threads | # pidstat -p $(pidof qemu-kvm) -t -d |
Show the PID, all thread IDs with the process name, and the CPU time for a given process name | # ps -T -C qemu-kvm |
Show real-time performance statistics for a given process and all of its threads | # top -H -p $(pidof qemu-kvm) |
Show all system threads with process scheduler type, priority, command, CPU Affinity, and Context Switching information | # tuna --show_threads |
Set for a given PID RealTime (FIFO) scheduling with highest priority | # tuna --threads=<PID> --priority=FIFO:99 |
Show PMD and CPU threads rescheduling activities | # watch -n1 -d "grep -E 'pmd|CPU' /proc/sched_debug" |
Browse scheduler internal operation statistics | # less /proc/sched_debug |
Show comprehensive process statistics and affinity view | # top |
Show all system processes and their CPU affinity | # ps -eF |
Show all system processes and threads, their state (sleeping or running) and, when sleeping, the kernel function they are waiting in | # ps -elfL |
Show CPU Affinity for a given PID | # taskset --pid $(pidof qemu-kvm) |
Set a CPU Affinity for a given PID | # taskset --pid --cpu-list 0-9,20-29 $(pidof <Process>) |
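For example, to list the CPU affinity of every thread of a qemu-kvm instance (this sketch assumes a single qemu-kvm process; adjust the process name as needed):
```
# Print the current CPU affinity of each thread of one qemu-kvm process
pid=$(pidof -s qemu-kvm)
for task in /proc/$pid/task/*; do
    taskset --pid --cpu-list "$(basename "$task")"
done
```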
3.6. KVM
Use these commands to show Kernel-based Virtual Machine (KVM) related domain statistics.
Action | Command |
---|---|
Show real-time KVM hypervisor statistics (VMExit, VMEntry, vCPU wakeup, context switching, timer, Halt Pool, vIRQ) | # kvm_stat |
Show deep KVM hypervisor statistics | # kvm_stat --once |
Show real-time KVM hypervisor statistics for a given guest (VMExit, VMEntry, vCPU wakeup, context switching, timer, Halt Pool, vIRQ) | # kvm_stat --guest=<VM name> |
Show deep KVM hypervisor statistics for a given guest | # kvm_stat --once --guest=<VM name> |
Show KVM profiling trap statistics | # perf kvm stat live |
Show KVM profiling statistics | # perf kvm top |
Show vCPU Pinning for a given VM | # virsh vcpupin <Domain name/ID> |
Show QEMU Emulator Thread for a given VM | # virsh emulatorpin <Domain name/ID> |
Show NUMA Pinning for a given VM | # virsh numatune <Domain name/ID> |
Show memory statistics for a given VM | # virsh dommemstat <Domain name/ID> |
Show vCPU statistics for a given VM | # virsh vcpuinfo <Domain name/ID> |
Show all vNIC for a given VM | # virsh domiflist <Domain name/ID> |
Show vNIC statistics for a given VM (does not work with DPDK VHU) | # virsh domifstat <Domain name/ID> <vNIC> |
Show all vDisk for a given VM | # virsh domblklist <Domain name/ID> |
Show vDisk statistics for a given VM | # virsh domblkstat <Domain name/ID> <vDisk> |
Show all statistics for a given VM | # virsh domstats <Domain name/ID> |
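For example, vCPU, emulator thread, and NUMA pinning can be reviewed for every running domain in one pass:
```
# Show pinning details for all running libvirt domains
for dom in $(virsh list --name); do
    echo "== $dom =="
    virsh vcpupin "$dom"
    virsh emulatorpin "$dom"
    virsh numatune "$dom"
done
```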
3.7. CPU
Use these commands to show CPU utilization, process CPU distribution, frequency, and SMI.
Action | Command |
---|---|
Show CPU usage and CPU affinity for a given process name, including all of its threads | # pidstat -p $(pidof qemu-kvm) -t |
Show virtual memory, I/O, and CPU statistics | # vmstat 1 |
Show detailed CPU usage aggregated | # mpstat |
Show detailed CPU usage distribution | # mpstat -P ALL |
Show detailed CPU usage distribution for given CPUs (comma-separated list; ranges are not supported) | # mpstat -P 2,3,4,5 |
Show detailed CPU usage distribution for given CPUs every 10 seconds for 30 iterations | # mpstat -P 2,3,4,5 10 30 |
Show hardware limits and frequency policy for a given CPU | # cpupower -c 24 frequency-info |
Show current CPU frequency information | # cpupower -c all frequency-info|grep -E "current CPU frequency|analyzing CPU" |
Show frequency and CPU % C-States stats for all CPU(s) | # cpupower monitor |
Show real-time frequency and CPU % C-States stats for all CPUs highlighting any variation | # watch -n1 -d "cpupower monitor" |
Show more detailed frequency and CPU % C-state statistics for all CPUs, including SMI counts (useful for RT) | # turbostat --interval 1 |
Show more detailed frequency and CPU % C-state statistics for a given CPU, including SMI counts (useful for RT) | # turbostat --interval 1 --cpu 4 |
Show CPU details and ISA supported | # lscpu |
Specific to Intel CPUs: display very low-level details about CPU usage, IPC, CPU execution %, L3 and L2 cache hits, misses and misses per instruction, temperature, memory channel usage, and QPI/UPI usage | # git clone <Processor Counter Monitor (PCM) repository> && make && ./pcm.x |
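The current frequency of every online CPU can also be read directly from sysfs (this assumes a cpufreq driver is loaded):
```
# Print the current frequency (kHz) of each CPU, prefixed with its sysfs path
grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq
```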
3.8. NUMA
Use these commands to show Non-Uniform Memory Access (NUMA) statistics and process distribution.
Action | Command |
---|---|
Show hardware NUMA topology | # numactl -H |
Show NUMA statistics | # numastat -n |
Show meminfo-like system-wide memory usage | # numastat -m |
Show NUMA memory details and balancing for a given process name | # numastat qemu-kvm |
Show statistics for a given NUMA node | # cat /sys/devices/system/node/node<NUMA node number>/numastat |
Show the NUMA topology, with NUMA nodes and PCI devices, in a very clear way | # lstopo --physical |
Generate a graph (SVG format) of the physical NUMA topology with related devices | # lstopo --physical --output-format svg > topology.svg |
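For example, to check which NUMA node a NIC is attached to and compare it with the host CPU layout (ens6 is an example interface name):
```
# NUMA node of the NIC's PCI device (-1 means no NUMA locality reported)
cat /sys/class/net/ens6/device/numa_node
# CPU-to-node mapping to compare against
numactl -H | grep cpus
```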
3.9. Memory
Use these commands to show memory statistics, huge pages, DPC, physical DIMM, and frequency.
Action | Command |
---|---|
Show meminfo-like system-wide memory usage | # numastat -m |
Show virtual memory, I/O, and CPU statistics | # vmstat 1 |
Show global memory information | # cat /proc/meminfo |
Show the total number of 2MB huge pages for a given NUMA node | # cat /sys/devices/system/node/node<NUMA node number>/hugepages/hugepages-2048kB/nr_hugepages |
Show the total number of 1GB huge pages for a given NUMA node | # cat /sys/devices/system/node/node<NUMA node number>/hugepages/hugepages-1048576kB/nr_hugepages |
Show the number of free 2MB huge pages for a given NUMA node | # cat /sys/devices/system/node/node<NUMA node number>/hugepages/hugepages-2048kB/free_hugepages |
Show the number of free 1GB huge pages for a given NUMA node | # cat /sys/devices/system/node/node<NUMA node number>/hugepages/hugepages-1048576kB/free_hugepages |
Allocate 100x 2MB huge pages at runtime on NUMA node 0 (change the node number as needed) | # echo 100 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages |
Allocate 100x 1GB huge pages at runtime on NUMA node 0 (change the node number as needed) | # echo 100 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages |
Show real-time SLAB information | # slabtop |
Show detailed SLAB information | # cat /proc/slabinfo |
Show total installed memory DIMM | # dmidecode -t memory | grep Locator |
Show installed memory DIMM Speed | # dmidecode -t memory | grep Speed |
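For example, to allocate four 1GB huge pages on NUMA node 0 at runtime and verify the result (the node number and page count are examples):
```
# Request four 1GB huge pages on node 0, then check totals and free pages
echo 4 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages
```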
3.10. PCI
Use these commands to show PCI statistics, PCI details, and PCI driver override.
Action | Command |
---|---|
Show detailed PCI device information in system | # lspci -vvvnn |
Show PCI tree view | # lspci -vnnt |
Show PCI device NUMA information | # lspci -vmm |
Show PCIe max link speed for a given device | # lspci -s 81:00.0 -vv | grep LnkCap |
Show PCIe link speed status for a given device | # lspci -s 81:00.0 -vv | grep LnkSta |
Show PCI device and kernel driver | # driverctl list-devices |
Show PCI device driver override (typical for DPDK and SR-IOV interfaces) | # driverctl list-overrides |
Set different kernel driver for PCI device (reboot persistent) | # driverctl set-override 0000:81:00.0 vfio-pci |
Unset overridden kernel driver for PCI device (if device is in use the command will hang) | # driverctl unset-override 0000:81:00.0 |
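For example, to bind a NIC to vfio-pci and confirm that the override took effect (the PCI address 0000:81:00.0 is an example):
```
# Override the kernel driver, list overrides, and verify the driver in use
driverctl set-override 0000:81:00.0 vfio-pci
driverctl list-overrides
lspci -k -s 81:00.0
```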
3.11. Tuned
Use these commands to show tuned profiles, verification, and logs.
Action | Command |
---|---|
Show tuned current enabled profile and description | # tuned-adm profile_info |
Show tuned available profiles and current enabled profiles | # tuned-adm list |
Enable a specific tuned profile | # tuned-adm profile realtime-virtual-host |
Verify current enabled profile | # tuned-adm verify |
Show the tuned log | # less /var/log/tuned/tuned.log |
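A minimal sketch of isolating cores with the cpu-partitioning profile (this assumes the tuned-profiles-cpu-partitioning package is installed; the core range is an example):
```
# Declare the cores to isolate, apply the profile, and verify it
echo "isolated_cores=2-19" >> /etc/tuned/cpu-partitioning-variables.conf
tuned-adm profile cpu-partitioning
tuned-adm verify
```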
3.12. Profiling Process
Use these commands to show CPU profiling, process profiling, and KVM profiling.
Section | Action | Command |
---|---|---|
Process | Profiling on specific PID | # perf record -F 99 -p PID |
Process | Profiling on specific PID for 30 seconds | # perf record -F 99 -p PID sleep 30 |
Process | Profiling real-time on specific PID | # perf top -F 99 -p PID |
CPU | Profiling on specific CPU Core list for 30 seconds for any events | # perf record -F 99 -g -C <CPU Core(s)> -- sleep 30s |
CPU | Profiling real-time on specific CPU Core list for any events | # perf top -F 99 -g -C <CPU Core(s)> |
Context Switching | Profiling on specific CPU Core list for 30 seconds and looking only for Context Switching | # perf record -F 99 -g -e sched:sched_switch -C <CPU Core(s)> -- sleep 30 |
KVM | Profiling KVM guest for a given time | # perf kvm stat record sleep 30s |
Cache | Profiling on specific CPU Core list for 5 seconds looking for the cache efficiency | # perf stat -C <CPU Core(s)> -B -e cache-references,cache-misses,cycles,instructions,branches,faults,migrations sleep 5 |
Report | Analyze perf profiling | # perf report |
Report | Report perf profiling in stdout | # perf report --stdio |
Report | Report KVM profiling in stdout | # perf kvm stat report |
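For example, a typical workflow is to record one thread for a fixed interval and then read the report (the PID is a placeholder):
```
# Sample the thread at 99 Hz with call graphs for 30 seconds, then report
perf record -F 99 -g -p <PID> -- sleep 30
perf report --stdio
```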
3.13. Block I/O
Use these commands to show storage I/O distribution and I/O profiling.
Action | Command |
---|---|
Show I/O details for all system devices | # iostat |
Show advanced I/O details for all system devices | # iostat -x |
Show advanced I/O details for all system devices every 10 seconds for 30 iterations | # iostat -x 10 30 |
Generate advanced I/O profiling for a given block device | # blktrace -d /dev/sda -w 10 && blkparse -i sda.* -d sda.bin |
Report blktrace profiling | # btt -i sda.bin |
3.14. Real Time
Use these commands to run Real Time related tests and to show SMI and latency information.
Action | Command |
---|---|
Identify whether any SMIs are blocking normal RT kernel execution, using the defined latency threshold | # hwlatdetect --duration=3600 --threshold=25 |
Verify maximum scheduling latency for a given duration, with a number of additional options | # cyclictest --duration=3600 --mlockall --priority=99 --nanosleep --interval=200 --histogram=5000 --histfile=./output --threads --numa --notrace |
3.15. Security
Use these commands to verify speculative executions and the GRUB boot parameter.
Action | Command |
---|---|
Check all current Speculative execution security status | See: Spectre & Meltdown vulnerability/mitigation checker for Linux & BSD. |
GRUB parameter to disable all Speculative Execution remediation | spectre_v2=off spec_store_bypass_disable=off pti=off l1tf=off kvm-intel.vmentry_l1d_flush=never |
Verify CVE-2017-5753 (Spectre variant 1) status | # cat /sys/devices/system/cpu/vulnerabilities/spectre_v1 |
Verify IBPB and Retpoline (CVE-2017-5715 Spectre variant 2) status | # cat /sys/devices/system/cpu/vulnerabilities/spectre_v2 |
Verify KPTI (CVE-2017-5754 Meltdown) status | # cat /sys/devices/system/cpu/vulnerabilities/meltdown |
Verify Spectre-NG (CVE-2018-3639 Spectre Variant 4) status | # cat /sys/devices/system/cpu/vulnerabilities/spec_store_bypass |
Verify Foreshadow (CVE-2018-3615, Spectre variant 5, also known as L1TF) status | # cat /sys/devices/system/cpu/vulnerabilities/l1tf |
Verify Foreshadow VMEntry L1 cache effect | # cat /sys/module/kvm_intel/parameters/vmentry_l1d_flush |
Verify SMT status | # cat /sys/devices/system/cpu/smt/control |
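All of the vulnerability status files can also be read in one pass:
```
# Print the status of every reported CPU vulnerability, prefixed with its file name
grep . /sys/devices/system/cpu/vulnerabilities/*
```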
3.16. Juniper Contrail vRouter
Use these commands to show vRouter VIF, MPLS, nexthop, VRF, VRF routes, flows, and dump information.
Action | Command |
---|---|
vRouter Kernel space human readable statistics | |
vRouter DPDK human readable statistics | |
To perform a packet capture from a DPDK interface (do not use grep after vifdump) | # vifdump vif0/234 <tcpdump standard arguments such as -v -nn -e -w <path/to/file>> |
Display all vRouter interfaces and sub-interfaces statistics and details | # vif --list |
Display vRouter statistics and details for a given interface | # vif --list --get 234 |
Display vRouter packet rates for all interfaces and sub-interfaces | # vif --list --rate |
Display vRouter packet rate for a given interface | # vif --list --rate --get 234 |
Display vRouter packet drop statistics for a given interface | # vif --list --get 234 --get-drop-stats |
Display vRouter flows | # flow -l |
Display real-time vRouter flow actions | # flow -r |
Display vRouter packet statistics for a given VRF (you can find VRF number from vif --list) | # vrfstats --get 0 |
Display vRouter packet statistics for all VRF | # vrfstats --dump |
Display vRouter routing table for a given VRF (you can find the VRF number from vif --list) | # rt --dump 0 |
Display vRouter IPv4 routing table for a given VRF (you can find the VRF number from vif --list) | # rt --dump 0 --family inet |
Display vRouter IPv6 routing table for a given VRF (you can find the VRF number from vif --list) | # rt --dump 0 --family inet6 |
Display vRouter forwarding table for a given VRF (you can find the VRF number from vif --list) | # rt --dump 0 --family bridge |
Display vRouter route target in a given VRF for a given address | # rt --get 0.0.0.0/0 --vrf 0 --family inet |
Display vRouter drop statistics | # dropstats |
Display vRouter drop statistics for a given DPDK core | # dropstats --core 11 |
Display vRouter MPLS labels | # mpls --dump |
Display vRouter nexthop for a given one (can be found from mpls --dump output) | # nh --get 21 |
Display all vRouter nexthops | # nh --list |
Display all vRouter VXLAN VNID | # vxlan --dump |
Display vRouter agent status (supervisor, XMPP connection, vRouter agent, etc.) | # contrail-status |
Restart vRouter (and all Contrail local compute node components) | # systemctl restart supervisor-vrouter |
3.17. OpenStack
Use these OpenStack commands to show which compute node each VM runs on.
Action | Command |
---|---|
Show list of all VMs on their compute nodes sorted by compute nodes | $ nova list --fields name,OS-EXT-SRV-ATTR:host --sort host |
Show list of all VMs on their compute nodes sorted by vm name | $ nova list --fields name,OS-EXT-SRV-ATTR:host |
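For example, to list only the VMs running on one compute node (the host name is an example; admin credentials are required):
```
# List the VMs hosted on a single compute node
nova list --all-tenants --host compute-0.localdomain --fields name,OS-EXT-SRV-ATTR:host
```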