Chapter 2. Hardware

Two key requirements Telcos place on any application are performance and high availability (HA). In the multi-layered world of NFV, HA cannot be achieved at the infrastructure layer alone. It must span every aspect of the design: hardware support, layer 2 and layer 3 best practices in the underlay network, the OpenStack layer, and, last but not least, the application layer. ETSI NFV (ETSI GS NFV-REL 001 V1.1.1 (2015-01)) defines “Service Availability” rather than speaking in terms of five 9s: “At a minimum, the Service Availability requirements for NFV should be the same as those for legacy systems (for the same service)”. This refers to the end-to-end service (VNFs and infrastructure components).

Red Hat recommends a full HA deployment of Red Hat OpenStack Platform, deployed with Red Hat OpenStack Platform director.
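
As an illustration, a director-driven HA deployment matching the node counts in Table 1 might be launched as follows. This is a minimal sketch run from the undercloud; the custom environment file name (network-environment.yaml) is an assumption rather than part of the validated configuration:

    # Deploy a full HA overcloud: 3 controllers, 3 Ceph OSD nodes, 5 computes
    # (run on the undercloud as the stack user; environment file paths are examples)
    openstack overcloud deploy --templates \
      --control-scale 3 \
      --compute-scale 5 \
      --ceph-storage-scale 3 \
      -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
      -e ~/templates/network-environment.yaml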

The lab used to validate this architecture consisted of:

  • 12 Servers
  • 2 Switches

This is shown in Table 1:

Role                      Servers
Controllers + Ceph Mon    3 for full HA deployment
Ceph OSDs                 3 for full HA deployment
Compute Nodes             5 used in validation lab
Undercloud Server         1

Table 1: NFV validation lab components for HA deployment using Red Hat OpenStack director

2.1. Servers

While both blade servers and rack mount servers are used to deploy virtual mobile networks and NFV in general, we chose rack mount servers because they offer more flexibility:

  • Plenty of drive bays for storage, which we used for Ceph OSDs
  • Direct access to physical Network Interface Cards (NICs)
  • A dedicated management port (IPMI), which some telcos prefer so they can shut down an entire server if it is compromised by a security attack (see the sketch after this list)
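
For example, an operator can power a compromised node off out-of-band through its IPMI interface. This is a sketch; the BMC address and credentials are placeholders:

    # Check the power state of the server through its BMC (address and credentials are placeholders)
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P <password> chassis power status
    # Force the server off in response to a suspected compromise
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P <password> chassis power off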

2.2. Server Specifications

Physical CPU models are a moving target; Intel and other vendors constantly release newer, faster CPUs. For validating this architecture, the following server hardware specifications were used (a NUMA placement check for the dataplane ports follows the list):

  • 2x Intel Xeon E5-2650 v4 processors (12 cores each)
  • 128 GB of DRAM
  • Intel X540 10GbE NICs, assigned as follows:

    • NIC 0: IPMI
    • NIC 1: External OOB (Not used for OpenStack)
    • NIC 2: PXE/Provisioning
    • NIC 3: Network Isolation Bond
    • NIC 4: Network Isolation Bond
    • NIC 5: Dataplane Port (SR-IOV or DPDK)
    • NIC 6: Dataplane Port (SR-IOV or DPDK)
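
Because the dataplane ports carry performance-sensitive traffic, it is worth confirming which NUMA node each NIC sits on in this dual-socket layout, so VNF vCPUs can be pinned to the same socket. A quick check, with a hypothetical interface name standing in for the actual dataplane port:

    # Show the socket/NUMA layout of the dual-socket host
    lscpu | grep -i numa
    # NUMA node the dataplane NIC is attached to (interface name is an example)
    cat /sys/class/net/ens2f0/device/numa_node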

Note that fast datapath features such as SR-IOV and OVS-DPDK require NICs that support them. Intel maintains a list of NICs that support SR-IOV at http://www.intel.com/content/www/us/en/support/network-and-i-o/ethernet-products/000005722.html.
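
On a running host, SR-IOV support and virtual function (VF) creation can be verified through sysfs; the interface name below is an example:

    # Maximum number of VFs the NIC supports (0 or a missing file means no SR-IOV)
    cat /sys/class/net/ens2f0/device/sriov_totalvfs
    # Create 8 VFs on the port (the value must not exceed sriov_totalvfs)
    echo 8 > /sys/class/net/ens2f0/device/sriov_numvfs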

NICs with DPDK support are listed at http://dpdk.org/doc/nics.
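
The dpdk-devbind.py utility shipped with DPDK reports which ports are available to DPDK and can rebind a port to a userspace driver; the PCI address below is an example:

    # Show which NICs are bound to kernel drivers vs. DPDK-compatible drivers
    dpdk-devbind.py --status
    # Bind a dataplane port to vfio-pci for use by OVS-DPDK (PCI address is an example)
    modprobe vfio-pci
    dpdk-devbind.py --bind=vfio-pci 0000:04:00.0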