Chapter 2. Hardware requirements

This section describes the hardware details necessary for NFV.

You can use the Red Hat Technologies Ecosystem to check for a list of certified hardware, software, cloud providers, and components by choosing the category and then selecting the product version.

For a complete list of the certified hardware for Red Hat OpenStack Platform, see Red Hat OpenStack Platform certified hardware.

2.1. Tested NICs

For a list of tested NICs for NFV, see Network Adapter Support. (Customer Portal login required.)

2.2. Discovering your NUMA node topology

When you plan your deployment, you need to understand the NUMA topology of your Compute node to partition the CPU and memory resources for optimum performance. To determine the NUMA information, you can:

  • Enable hardware introspection to retrieve this information from bare-metal nodes.
  • Log on to each bare-metal node to collect the information manually, as shown in the sketch after the following note.
Note

You must install and configure the undercloud before you can retrieve NUMA information through hardware introspection. See the Director Installation and Usage Guide for details.
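
To collect the NUMA information manually, you can run commands like the following on each bare-metal node. This is a minimal sketch: it assumes the numactl package is installed, and ens2f0 is only an example interface name.

# CPU-to-NUMA-node mapping and per-node memory
lscpu | grep -i numa
numactl --hardware

# NUMA node that a specific NIC is attached to (-1 means no specific node)
cat /sys/class/net/ens2f0/device/numa_node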

Retrieving Hardware Introspection Details

The Bare Metal service hardware-inspection extras (inspection_extras) parameter is enabled by default to retrieve hardware details. You can use these hardware details to configure your overcloud. For details on the inspection_extras parameter in the undercloud.conf file, see Configuring the Director.
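
For reference, a minimal undercloud.conf excerpt with the parameter set explicitly might look like the following; the snippet is illustrative, not a complete configuration:

[DEFAULT]
# Collect extra hardware details, including NUMA topology, during introspection
inspection_extras = true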

For example, the numa_topology collector is part of these hardware inspection extras and includes the following information for each NUMA node:

  • RAM (in kilobytes)
  • Physical CPU cores and their sibling threads
  • NICs associated with the NUMA node

Retrieve this information with the openstack baremetal introspection data save <UUID> | jq .numa_topology command, where <UUID> is the UUID of the bare-metal node.

The following example shows the retrieved NUMA information for a bare-metal node:

{
  "cpus": [
    {
      "cpu": 1,
      "thread_siblings": [
        1,
        17
      ],
      "numa_node": 0
    },
    {
      "cpu": 2,
      "thread_siblings": [
        10,
        26
      ],
      "numa_node": 1
    },
    {
      "cpu": 0,
      "thread_siblings": [
        0,
        16
      ],
      "numa_node": 0
    },
    {
      "cpu": 5,
      "thread_siblings": [
        13,
        29
      ],
      "numa_node": 1
    },
    {
      "cpu": 7,
      "thread_siblings": [
        15,
        31
      ],
      "numa_node": 1
    },
    {
      "cpu": 7,
      "thread_siblings": [
        7,
        23
      ],
      "numa_node": 0
    },
    {
      "cpu": 1,
      "thread_siblings": [
        9,
        25
      ],
      "numa_node": 1
    },
    {
      "cpu": 6,
      "thread_siblings": [
        6,
        22
      ],
      "numa_node": 0
    },
    {
      "cpu": 3,
      "thread_siblings": [
        11,
        27
      ],
      "numa_node": 1
    },
    {
      "cpu": 5,
      "thread_siblings": [
        5,
        21
      ],
      "numa_node": 0
    },
    {
      "cpu": 4,
      "thread_siblings": [
        12,
        28
      ],
      "numa_node": 1
    },
    {
      "cpu": 4,
      "thread_siblings": [
        4,
        20
      ],
      "numa_node": 0
    },
    {
      "cpu": 0,
      "thread_siblings": [
        8,
        24
      ],
      "numa_node": 1
    },
    {
      "cpu": 6,
      "thread_siblings": [
        14,
        30
      ],
      "numa_node": 1
    },
    {
      "cpu": 3,
      "thread_siblings": [
        3,
        19
      ],
      "numa_node": 0
    },
    {
      "cpu": 2,
      "thread_siblings": [
        2,
        18
      ],
      "numa_node": 0
    }
  ],
  "ram": [
    {
      "size_kb": 66980172,
      "numa_node": 0
    },
    {
      "size_kb": 67108864,
      "numa_node": 1
    }
  ],
  "nics": [
    {
      "name": "ens3f1",
      "numa_node": 1
    },
    {
      "name": "ens3f0",
      "numa_node": 1
    },
    {
      "name": "ens2f0",
      "numa_node": 0
    },
    {
      "name": "ens2f1",
      "numa_node": 0
    },
    {
      "name": "ens1f1",
      "numa_node": 0
    },
    {
      "name": "ens1f0",
      "numa_node": 0
    },
    {
      "name": "eno4",
      "numa_node": 0
    },
    {
      "name": "eno1",
      "numa_node": 0
    },
    {
      "name": "eno3",
      "numa_node": 0
    },
    {
      "name": "eno2",
      "numa_node": 0
    }
  ]
}
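
When you plan CPU partitioning, it can help to condense this output into one line per NUMA node. The following jq filter is a sketch, not part of the product tooling; replace <UUID> with the UUID of the bare-metal node:

openstack baremetal introspection data save <UUID> \
  | jq -r '.numa_topology.cpus
           | group_by(.numa_node)[]
           | "NUMA node \(.[0].numa_node): host threads \([.[].thread_siblings[]] | sort | map(tostring) | join(","))"'

For the example output above, this prints two lines: one listing the host threads local to NUMA node 0 and one for NUMA node 1.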

2.3. Review BIOS Settings

The following listing describes the required BIOS settings for NFV:

  • C3 Power State - Disabled.
  • C6 Power State - Disabled.
  • MLC Streamer - Enabled.
  • MLC Spatial Prefetcher - Enabled.
  • DCU Data Prefetcher - Enabled.
  • DCA - Enabled.
  • CPU Power and Performance - Performance.
  • Memory RAS and Performance Config → NUMA Optimized - Enabled.
  • Turbo Boost - Disabled.
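
These settings can only be changed in the BIOS/UEFI setup, and the exact menu names vary by vendor, but you can sanity-check some of them from a booted RHEL node. The following commands are a sketch that assumes an Intel CPU using the intel_pstate driver; the sysfs path differs on other platforms:

# Show the idle (C-state) configuration that the kernel sees
cpupower idle-info

# 1 means Turbo Boost is disabled when the intel_pstate driver is in use
cat /sys/devices/system/cpu/intel_pstate/no_turbo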

2.4. Network Adapter Fast Datapath Feature Support Matrix

Red Hat supports accelerated and non-accelerated data paths such as SR-IOV, OVS (kernel), DPDK, OVS-DPDK, OVS Offload, and others. The FDP (Fast Datapath Production) channel is included as part of the layered Red Hat products that use it, including Red Hat Virtualization, Red Hat OpenStack Platform, and Red Hat OpenShift Container Platform.
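
FDP content is delivered through its own repository. As a sketch, on a registered RHEL 7 Server host you would typically enable it as follows; the repository ID shown is an assumption for RHEL 7 Server and can differ for other variants, so confirm it from the --list output first:

# Confirm the Fast Datapath repository ID available to this system
subscription-manager repos --list | grep -i fast-datapath

# Enable it (repository ID assumed for RHEL 7 Server)
subscription-manager repos --enable=rhel-7-fast-datapath-rpms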

  • Red Hat Virtualization has limited support for SR-IOV and OVS (kernel) and includes the latest release of FDP when it reaches general availability (GA).
  • OpenShift Container Platform currently supports only OVS (kernel) and includes the latest release of FDP when it is GA.
  • Red Hat OpenStack Platform has limited support for SR-IOV, OVS (kernel), DPDK, OVS-DPDK, and OVS Offload. Where applicable, there may be a delay of several weeks before some versions of OpenStack Platform include the latest release of FDP.

FDP Version | Released | RHEL | OVS      | DPDK
18.04       | Apr 2018 | 7.5  | 2.9.0-19 | 17.11
18.06       | Jun 2018 | 7.5  | 2.9.0-47 | 17.11
18.08       | Aug 2018 | 7.5  | 2.9.0-54 | 17.11

2.4.1. Network Adapter support

Consult the following tables for information on network features supported in Red Hat Fast Datapath packages. See the Legend at the end of this section for the meaning of the table values.

The network card vendor normally collaborates with Red Hat and tests the cards against the Fast Datapath features. DPDK for guest applications is limited to the packet-forwarding libraries and to hardware support for poll mode drivers (PMDs).

In these tables, DPDK refers to the DPDK included in the RHEL Extras repository, which is often used to enable DPDK within a guest VM; it is not part of the Fast Datapath repository. Contact Red Hat for further information, such as support for additional NICs or features.
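
As an illustration only, the following sketch enables DPDK inside a RHEL 7 guest from the Extras repository and checks how NICs are bound; package and tool names are assumptions and can vary between releases (the bind tool may be installed as dpdk-devbind or dpdk-devbind.py):

# Install the DPDK libraries and helper tools from the RHEL Extras repository
yum install -y dpdk dpdk-tools

# Load the vfio-pci module and show which driver each NIC is currently bound to
modprobe vfio-pci
dpdk-devbind --status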

Amazon

Model(s)                             | Driver  | SR-IOV | OVS | OVS-DPDK | OVS-Offload | DPDK
ENA 10/25G                           | ena     | 7.4    | x   | x        | x           | x
Intel 82598, 82599, X520, X540, X550 | ixgbevf | 7.4    | x   | x        | x           | 7.4

Broadcom

Model(s)                                             | Driver | SR-IOV   | OVS   | OVS-DPDK | OVS-Offload | DPDK
NX1 NetXtreme-C, NetXtreme-E, StrataGX, 573xx, 574xx | bnxt   | 7.5      | 18.04 | 18.04    | x           | 7.5
Emulex 10G                                           | be2net | 7.2 [TP] | x     | x        | x           | x
Broadcom 5719/5720                                   | tg3    | x        | x     | x        | x           | x

Chelsio

Model(s)                               | Driver         | SR-IOV   | OVS | OVS-DPDK | OVS-Offload | DPDK
Terminator 5/6, T420, T422, T520, T580 | cxgb4, cxgb4vf | 7.2 [TP] | x   | x        | x           | x

Cisco

Model(s) | Driver | SR-IOV | OVS   | OVS-DPDK | OVS-Offload | DPDK
VIC 1340 | enic   | 7.5    | 18.06 | 18.06    | x           | 7.5

HPE

Model(s)               | Driver      | SR-IOV | OVS   | OVS-DPDK | OVS-Offload | DPDK
562 SFP+               | i40e/i40evf | 7.3    | 18.04 | 18.04    | x           | 7.4
6810C 25/50G           | qede        | 7.5    | 18.06 | 18.06    | x           | 7.5
631 10/25G             | bnxt        | 7.5    | 18.04 | 18.04    | x           | 7.5
640 SFP128 10/25G      | mlx5        | 7.5    | 18.04 | 18.04    | 18.04 [TP]  | 7.5
661 25G (Intel XXV710) | i40e/i40evf | 7.3    | 18.04 | 18.04    | x           | 7.4

Intel

Model(s)                                            | Driver        | SR-IOV | OVS   | OVS-DPDK | OVS-Offload | DPDK
82575, 82576, 82580, I210, I211, I350, I354, DH89xx | igb/igbvf     | 7.2    | 18.04 | x        | x           | x
82598, 82599, X520, X540, X550                      | ixgbe/ixgbevf | 7.2    | 18.04 | 18.04    | x           | 7.3
X710, XL710, XXV710, X722                           | i40e/i40evf   | 7.3    | 18.04 | 18.04    | x           | 7.4
FM10420                                             | fm10k         | x      | x     | x        | x           | x

Marvell / Cavium

Model(s)           | Driver   | SR-IOV   | OVS   | OVS-DPDK | OVS-Offload | DPDK
QLE8100            | qlge     | x        | x     | x        | x           | x
QLE8200, QLE3200   | qlcnic   | 7.3 [TP] | x     | x        | x           | x
NetXtreme II 1G    | bnx2     | x        | x     | x        | x           | x
NetXtreme II 10G   | bnx2x    | 7.3 [TP] | x     | x        | x           | x
FastLinQ QL4xxxx   | qede     | 7.5      | 18.06 | 18.06    | x           | 7.5
LiquidIO II CN23xx | liquidio | 7.5 [TP] | x     | x        | x           | x

Mellanox

Model(s)                              | Driver  | SR-IOV   | OVS   | OVS-DPDK | OVS-Offload | DPDK
ConnectX-3, ConnectX-3 Pro [1]        | mlx4_en | 7.3 [TP] | 18.04 | 18.04    | x           | 7.5
ConnectX-4, ConnectX-4 Lx, ConnectX-5 | mlx5    | 7.5      | 18.04 | 18.04    | 18.04 [TP]  | 7.5

Microsoft

Model(s)                 | Driver              | SR-IOV | OVS | OVS-DPDK | OVS-Offload | DPDK
VM Net Adapter w/Hyper-V | hv_netvsc + ixgbevf | 7.4    | x   | x        | x           | 7.4
VM Net Adapter w/Hyper-V | hv_netvsc + mlx4_en | 7.5    | x   | x        | x           | x
VM Net Adapter w/Azure   | hv_netvsc + mlx4_en | 7.4    | x   | x        | x           | x

Netronome

Model(s)         | Driver | SR-IOV | OVS   | OVS-DPDK | OVS-Offload | DPDK
Agilio CX Series | nfp    | 7.5    | 18.04 | 18.04    | 18.06 [TP]  | 7.5

Solarflare

Model(s)                           | Driver      | SR-IOV   | OVS | OVS-DPDK | OVS-Offload | DPDK
SFN5xxx, SFN6xxx, SFN7xxx, SFN8xxx | sfc/sfc_efx | 7.2 [TP] | x   | x        | x           | x

VMware

Model(s) | Driver           | SR-IOV | OVS | OVS-DPDK | OVS-Offload | DPDK
VMXNET3  | vmxnet3, ixgbevf | 7.4    | x   | x        | x           | x

Legend

7.x = The RHEL 7 update release in which the feature was introduced

18.xx = The FDP release in which the feature was introduced

TP = The feature is in Technology Preview and is not for use in production environments

x = The feature is not supported or not present

[1] When using SR-IOV on a dual-port ConnectX-3 or ConnectX-3 Pro, only the VFs from a single port are available for use.