Chapter 1. Understanding DCN

Note

An upgrade from Red Hat OpenStack Platform (RHOSP) 16.2 to RHOSP 17.1 is not supported for Distributed Compute Node (DCN) deployments.

Distributed compute node (DCN) architecture is designed for edge use cases: compute and storage nodes are deployed at remote locations while sharing a common centralized control plane. DCN architecture allows you to position workloads strategically closer to your operational needs for higher performance.

The central location can consist of any role, but at a minimum requires three controllers. Compute nodes can exist at the edge, as well as at the central location.

DCN architecture is a hub-and-spoke routed network deployment. DCN is comparable to a spine and leaf deployment for routed provisioning and control plane networking with Red Hat OpenStack Platform director; a provisioning sketch follows the list below.

  • The hub is the central site with core routers and a datacenter gateway (DC-GW).
  • The spoke is the remote edge, or leaf.
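As an illustration of the routed provisioning comparison, the provisioning network is split into one routed subnet per location in undercloud.conf. The following is a minimal sketch; the subnet names and address ranges are assumptions for illustration, while subnets, local_subnet, and the per-subnet options are standard director configuration:

    # undercloud.conf (excerpt, illustrative values)
    [DEFAULT]
    subnets = leaf0,dcn0
    local_subnet = leaf0

    [leaf0]
    # Provisioning subnet at the hub (central location).
    cidr = 192.168.10.0/24
    dhcp_start = 192.168.10.10
    dhcp_end = 192.168.10.90
    gateway = 192.168.10.1
    masquerade = false

    [dcn0]
    # Provisioning subnet at a spoke (edge location), routed through the DC-GW.
    cidr = 192.168.20.0/24
    dhcp_start = 192.168.20.10
    dhcp_end = 192.168.20.90
    gateway = 192.168.20.1
    masquerade = false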

Edge locations do not have controllers, making them architecturally different from traditional deployments of Red Hat OpenStack Platform (a verification sketch follows the figure below):

  • Control plane services run remotely, at the central location.
  • Pacemaker is not installed.
  • The Block Storage service (cinder) runs in active/active mode.
  • Etcd is deployed as a distributed lock manager (DLM).
[Figure: High-level DCN architecture]
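Because control plane services for every site run at the central location, you can inspect all sites from one place. The following is a minimal verification sketch, run with credentials for the shared control plane; the centralrc file name is an assumption for illustration:

    # Load credentials for the shared control plane (file name is an assumption):
    $ source ~/centralrc

    # Compute services from the central and edge sites all register with the
    # central control plane, grouped by per-site availability zones:
    $ openstack compute service list --service nova-compute

    # Block Storage services, including any active/active edge volume services:
    $ openstack volume service list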

1.1. Required software for distributed compute node architecture

The following table shows the software and minimum versions required to deploy Red Hat OpenStack Platform in a distributed compute node (DCN) architecture:

Platform                       Version   Optional
-------------------------------------------------
Red Hat Enterprise Linux       8         No
Red Hat OpenStack Platform     16.1      No
Red Hat Ceph Storage           4         Yes

1.2. Multistack design

When you deploy Red Hat OpenStack Platform (RHOSP) with a DCN design, you use the multiple stack deployment and management capabilities of RHOSP director to deploy each site as a distinct stack.

Managing a DCN architecture as a single stack is unsupported, unless the deployment is an upgrade from Red Hat OpenStack Platform 13. There is no supported method to split an existing stack; however, you can add stacks to a pre-existing deployment. For more information, see Section A.3, “Migrating to a multistack deployment”.

The central location is a traditional stack deployment of RHOSP; however, you are not required to deploy Compute nodes or Red Hat Ceph Storage with the central stack.
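A minimal sketch of the multistack workflow follows. The stack names (central, dcn0) and file paths are assumptions for illustration; the --stack option and the openstack overcloud export command are the standard director tooling for this workflow:

    # Deploy the central location as its own stack:
    $ openstack overcloud deploy \
        --templates \
        --stack central \
        -e ~/central/site-environment.yaml

    # Export data from the central stack for edge stacks to consume:
    $ openstack overcloud export \
        --stack central \
        --output-file ~/central-export.yaml

    # Deploy an edge location as a distinct stack, passing the exported data:
    $ openstack overcloud deploy \
        --templates \
        --stack dcn0 \
        -e ~/central-export.yaml \
        -e ~/dcn0/site-environment.yaml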

With DCN, you deploy each location as a distinct availability zone (AZ).
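For example, the AZ for each site can be set with heat parameters in that site's environment files. The following is a minimal sketch; the dcn0 zone name and file name are assumptions, while NovaComputeAvailabilityZone and CinderStorageAvailabilityZone are standard heat parameters:

    # dcn0/site-environment.yaml (illustrative file name)
    parameter_defaults:
      # Compute and Block Storage services at this site register in their own AZ:
      NovaComputeAvailabilityZone: dcn0
      CinderStorageAvailabilityZone: dcn0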

1.3. DCN storage

You can deploy each edge site either without storage or with Ceph on hyperconverged nodes. The storage you deploy is dedicated to the site where you deploy it.

DCN architecture uses Glance multistore. For edge sites deployed without storage, additional tooling is available so that you can cache and store images in the Compute service (nova) cache. Caching glance images in nova provides faster boot times for instances by avoiding the download of images across a WAN link. For more information, see Chapter 10, Precaching glance images into nova.
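As an illustration of glance multistore, the central stack can define one additional image store for each edge site that has Ceph storage. The following is a minimal sketch; the dcn0 names and description strings are assumptions, while GlanceBackend, GlanceStoreDescription, and GlanceMultistoreConfig are standard heat parameters:

    # central/glance-multistore.yaml (illustrative file name)
    parameter_defaults:
      # Default store at the central site:
      GlanceBackend: rbd
      GlanceStoreDescription: 'central rbd glance store'
      GlanceMultistoreConfig:
        # One additional store per edge site that has storage:
        dcn0:
          GlanceBackend: rbd
          GlanceStoreDescription: 'dcn0 rbd glance store'
          CephClusterName: dcn0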

1.4. DCN edge

With Distributed Compute Node architecture, the central location is deployed with the controller nodes that manage the edge locations. When you then deploy an edge location, you deploy only Compute nodes, making edge sites architecturally different from traditional deployments of Red Hat OpenStack Platform. At edge locations (see the sketch after this list):

  • Control plane services run remotely at the central location.
  • Pacemaker does not run at DCN sites.
  • The Block Storage service (cinder) runs in active/active mode.
  • Etcd is deployed as a distributed lock manager (DLM).
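Because edge stacks have no controllers, their roles files contain only compute-style roles. The following is a minimal sketch for a storage-less edge site; the file names are assumptions for illustration, while the openstack overcloud roles generate command and the DistributedCompute role ship with director:

    # Generate a roles file that contains only the DCN compute role:
    $ openstack overcloud roles generate DistributedCompute \
        -o ~/dcn0/dcn0_roles.yaml

    # Use the roles file when deploying the edge stack:
    $ openstack overcloud deploy \
        --templates \
        --stack dcn0 \
        --roles-file ~/dcn0/dcn0_roles.yaml \
        -e ~/central-export.yaml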