Chapter 2. Hardware

When you deploy Red Hat OpenStack Platform with distributed compute nodes, your control plane stays at the hub. Compute nodes at the hub site are optional. At edge sites, you can have the following:

  • Compute nodes
  • Hyperconverged nodes with both Compute services and Ceph storage

2.1. Limitations to consider

  • Network latency: You must balance the latency, as measured in round-trip time (RTT), against the expected number of concurrent API operations to maintain acceptable performance. Maximum TCP/IP throughput is inversely proportional to RTT. You can mitigate some issues caused by high-latency, high-bandwidth connections by tuning kernel TCP parameters. However, contact Red Hat Support if cross-site communication latency exceeds 100 ms.
  • Network dropouts: If an edge site temporarily loses its connection, then no OpenStack control plane API or CLI operations can be executed at the impacted edge site for the duration of the outage. For example, Compute nodes at that edge site cannot create a snapshot of an instance, issue an auth token, or delete an image. General OpenStack control plane API and CLI operations remain available during the outage and continue to serve any other edge sites that have a working connection.
  • Image type: You must use raw images when deploying a DCN architecture with Ceph storage.
  • Image sizing:

    • Overcloud node images: Overcloud node images are downloaded from the central undercloud node. These potentially large files are transferred across all necessary networks from the central site to the edge site during provisioning.
    • Instance images: If block storage is not deployed at the edge, then glance images traverse the WAN on first use. The images are copied or cached locally on the target edge nodes for all subsequent use. There is no size limit for glance images; transfer times vary with available bandwidth and network latency.

      When block storage is deployed at the edge, the image is copied over the WAN asynchronously for faster boot times at the edge.

  • Provider networks: This is the recommended networking approach for DCN deployments. If you use provider networks at remote sites, consider that neutron does not place any limits or checks on where you can attach available networks. For example, if you use a provider network only in edge site A, you must ensure that you do not attempt to attach to that provider network in edge site B, because there are no validation checks on the provider network when binding it to a Compute node.
  • Site-specific networks: A limitation in DCN networking arises if you use networks that are specific to a certain site. When you deploy centralized neutron controllers with Compute nodes, there are no triggers in neutron to identify a certain Compute node as a remote node. Consequently, the Compute nodes receive a list of other Compute nodes and automatically form tunnels between each other; the tunnels are formed from edge to edge through the central site. If you use VXLAN or Geneve, every Compute node at every site forms a tunnel with every other Compute node and Controller node, whether or not they are actually local or remote. This is not an issue if you use the same neutron networks everywhere. When you use VLANs, neutron expects that all Compute nodes have the same bridge mappings, and that all VLANs are available at every site.
  • Additional sites: If you need to expand from a central site to additional remote sites, you can use the openstack CLI on the undercloud to add new network segments and subnets.
  • Autonomy: Edge sites might have specific autonomy requirements. These vary depending on your use case.
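
The network latency limitation above follows from basic TCP behavior: a single connection's throughput is bounded by roughly the window size divided by the RTT, which is why throughput is inversely proportional to RTT and why window tuning helps. The following sketch illustrates the relationship; the window and RTT values are illustrative assumptions, not Red Hat recommendations:

```python
# Approximate single-connection TCP throughput: bounded by window / RTT.
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on one TCP stream's throughput, in megabits per second."""
    rtt_s = rtt_ms / 1000.0
    return (window_bytes * 8) / rtt_s / 1_000_000

# With a 64 KiB window, throughput collapses as RTT grows:
lan = max_tcp_throughput_mbps(64 * 1024, rtt_ms=1)    # ~524 Mb/s
wan = max_tcp_throughput_mbps(64 * 1024, rtt_ms=100)  # ~5.2 Mb/s

# Tuning kernel TCP parameters to allow larger windows mitigates high RTT:
tuned = max_tcp_throughput_mbps(4 * 1024 * 1024, rtt_ms=100)  # ~335 Mb/s

print(round(lan, 1), round(wan, 1), round(tuned, 1))
```

Note that this bound applies per connection; many concurrent API operations multiply the effect of the latency, which is why both RTT and concurrency must be balanced.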

2.2. Networking

When designing the network for distributed compute node architecture, be aware of the supported technologies and constraints:

ML2/OVS and routed provider network technologies are supported at the edge. For more information on routed provider networks, see Deploying routed provider networks.

The following fast datapaths for NFV are supported at the edge as Technology Preview features, and are therefore not fully supported by Red Hat. Use these features only for testing; do not deploy them in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

  • SR-IOV
  • DPDK
  • TC/Flower offload

Fast datapaths at the edge require ML2/OVS.

  • If you deploy distributed storage, you must balance latency, as measured in round-trip time (RTT), against the expected number of concurrent operations to maintain acceptable performance. You can mitigate some issues caused by high-latency, high-bandwidth connections by tuning kernel TCP parameters. However, contact Red Hat Support if cross-site communication latency exceeds 100 ms.
  • If edge servers are not preprovisioned, you must configure DHCP relay for introspection and provisioning on routed segments.

Routing must be configured either on the cloud or within the networking infrastructure that connects each edge site to the hub. You should implement a networking design that allocates an L3 subnet for each Red Hat OpenStack Platform cluster network (external, internal API, and so on), unique to each site.

2.2.1. Routing between edge sites

If you need full mesh connectivity between both the central location and edge sites, as well as routing between the edge sites themselves, you must design a complex solution either on the network infrastructure or on the Red Hat OpenStack Platform nodes.

The following approaches satisfy full mesh connectivity between every logical site, both central and edge, for control plane signaling. Tenant (overlay) networks are terminated on site.

There are two ways to create full mesh connectivity between the central and edge sites:

Push complexity to the hardware network infrastructure

Allocate a supernet for each network function and allocate a block for each edge and leaf, then use dynamic routing on your network infrastructure to advertise and route each locally connected block.

The benefit of this procedure is that it requires only a single route on each OpenStack node per network function interface to reach the corresponding interfaces at local or remote edge sites.
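
This supernet scheme can be sketched with Python's ipaddress module. The 10.x supernets and the leaf numbering are assumptions chosen to match the addressing examples used in this section:

```python
import ipaddress

# One 16-bit supernet per network function (example values only).
SUPERNETS = {
    "internal_api":      ipaddress.ip_network("10.20.0.0/16"),
    "storage_front_end": ipaddress.ip_network("10.40.0.0/16"),
    "storage_back_end":  ipaddress.ip_network("10.50.0.0/16"),
}

def leaf_block(function: str, pod: int) -> ipaddress.IPv4Network:
    """Carve the /24 for one edge site (pod) out of the function's /16."""
    supernet = SUPERNETS[function]
    # The pod number becomes the third octet: 10.20.<pod>.0/24, and so on.
    return list(supernet.subnets(new_prefix=24))[pod]

def static_route(function: str, pod: int) -> str:
    """One summarized route per function: supernet > local .1 gateway."""
    supernet = SUPERNETS[function]
    gateway = leaf_block(function, pod)[1]  # first usable host, the .1
    return f"{supernet} > {gateway}"

print(leaf_block("internal_api", 2))    # 10.20.2.0/24
print(static_route("internal_api", 2))  # 10.20.0.0/16 > 10.20.2.1
```

Because each node only needs the summarized supernet route per function, adding a new edge site requires no route changes on existing nodes.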

  1. Reserve 16-bit address blocks for OpenStack endpoints, for example:

    Internal API: 10.20.0.0/16
    Storage Front-End: 10.40.0.0/16
    Storage Back-End: 10.50.0.0/16
  2. Use smaller blocks from these supernets to allocate addresses for each edge site or leaf. You can summarize a smaller block by using the format 10.xx.[pod#].0/24, where xx identifies the network function and [pod#] identifies the site.

    For example, the following could be used for a site designated as leaf 2:

    Internal API: 10.20.2.0/24
    Storage Mgmt: 10.50.2.0/24
  3. Define common static routes for the function summaries. Consider the following example:

    Provisioning: 10.10.0.0/16 > 10.10.[pod#].1
    Internal API: 10.20.0.0/16 > 10.20.[pod#].1
    Storage Front-End: 10.40.0.0/16 > 10.40.[pod#].1
    Storage Back-End: 10.50.0.0/16 > 10.50.[pod#].1

Push complexity to the Red Hat OpenStack Platform cluster

  1. Allocate a route per edge site for each network function (internal API, overlay, storage, and so on) in the network_data.yaml file for the cluster:

    - name: InternalApi
      name_lower: internal_api
      vip: true
      ip_subnet: ''
      allocation_pools: [{'start': '', 'end': ''}]
      gateway_ip: ''
      vlan: 0
      subnets:
        internal_api_leaf1:
          ip_subnet: ''
          allocation_pools: [{'start': '', 'end': ''}]
          vlan: 0
          gateway_ip: ''
        internal_api_leaf2:
          ip_subnet: ''
          allocation_pools: [{'start': '', 'end': ''}]
          vlan: 0
          gateway_ip: ''

This method allows you to more easily configure summarized static routing on the network infrastructure. Use dynamic routing on the networking infrastructure to further simplify configuration.
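
As a sketch of what summarized static routing buys you with this method: if each leaf's subnet for a given network function is carved from one contiguous block, the network infrastructure needs only the single covering route rather than one route per leaf. The subnet values below are illustrative assumptions, not values from a real deployment:

```python
import ipaddress

# Per-leaf internal API subnets, as they might appear in network_data.yaml
# (illustrative values; real deployments use site-specific addressing).
leaf_subnets = [
    ipaddress.ip_network("172.16.1.0/24"),
    ipaddress.ip_network("172.16.2.0/24"),
    ipaddress.ip_network("172.16.3.0/24"),
]

# Widen one bit at a time until a single prefix covers every leaf subnet.
summary = leaf_subnets[0]
while not all(net.subnet_of(summary) for net in leaf_subnets):
    summary = summary.supernet()

print(summary)  # one summarized route instead of one per leaf
```

With dynamic routing on the infrastructure, even this summary route does not need to be maintained by hand; each locally connected block is advertised automatically.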