Chapter 1. Understanding DCN

Distributed compute node (DCN) architecture is a hub-and-spoke routed network deployment. It is comparable to a spine-and-leaf deployment for routed provisioning and control plane networking with Red Hat OpenStack Platform director. The hub is the central primary site with core routers and a datacenter gateway (DC-GW). The spoke is the remote edge, or leaf, site. The hub site can include any role, and requires a minimum of three controllers.

Compute nodes can exist at the edge, as well as at the primary hub site.

1.1. Designing edge sites with DCN

You must deploy multiple stacks as part of a distributed compute node (DCN) architecture. Managing a DCN architecture as a single stack is unsupported, unless the deployment is an upgrade from Red Hat OpenStack Platform 13. There is no supported method to split an existing stack; however, you can add stacks to a preexisting deployment. See Section A.2, “Migrating to a multistack deployment” for details.

Image service (glance) multi-store is supported with distributed edge architecture. With this feature, you can have an image pool at every distributed edge site, and copy images between the hub and edge sites as needed.
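As an illustrative sketch of the multi-store copy workflow, the following commands create an image in the central store and then copy it to an edge store. The store names central and dcn1, the image file name, and the output path are placeholders; substitute the stores defined in your own deployment.

```shell
# Store names "central" and "dcn1" are examples; use the stores
# configured in your deployment.

# Create the image in the central store using the image import flow.
glance image-create-via-import \
    --disk-format qcow2 \
    --container-format bare \
    --name cirros \
    --file cirros.qcow2 \
    --import-method glance-direct \
    --stores central

# Copy the existing image to the edge store; <image-id> is the ID
# returned by the previous command.
glance image-import <image-id> \
    --stores dcn1 \
    --import-method copy-image
```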

1.2. Roles at the edge

If you are not deploying block storage at the edge, follow Section 3.12, “Deploying the central controllers without edge storage”. Without block storage at the edge:

  • The Object Storage service (swift) is used as the Image service (glance) back end.
  • Compute nodes at the edge can only cache images.
  • Volume services, such as the Block Storage service (cinder), are not available at edge sites.

If you plan to deploy storage at the edge, you must also deploy block storage at the central location. Follow Chapter 5, Deploying storage at the edge. With block storage at the edge:

  • Ceph RBD is used as the Image service (glance) back end.
  • Images can be stored at edge sites.
  • The Block Storage service (cinder) is available at all sites through the Ceph RBD driver.
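For example, an environment file fragment that selects Ceph RBD as the Image service back end might look like the following. This is a minimal sketch; the description string is a placeholder, and your deployment guide lists the full set of parameters required for each site.

```yaml
# Sketch of a heat environment file fragment; values are examples only.
parameter_defaults:
  # Use Ceph RBD as the Image service (glance) back end at this site.
  GlanceBackend: rbd
  GlanceStoreDescription: 'Central site glance store'
```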

The roles required for your deployment differ based on whether you deploy block storage at the edge:

  • Without block storage at the edge:

    DistributedCompute
    This role includes the GlanceApiEdge service, so that Image services are consumed at the local edge site rather than at the central hub location. Deploy the first three nodes at the edge site with the DistributedCompute role before deploying the DistributedComputeScaleOut role.
    DistributedComputeScaleOut
    This role includes the HAproxyEdge service, which enables nodes deployed with the DistributedComputeScaleOut role to proxy requests for Image services to nodes that provide that service at the edge site. After you deploy three nodes with the DistributedCompute role, you can use the DistributedComputeScaleOut role to scale compute resources. There is no minimum number of hosts required to deploy with the DistributedComputeScaleOut role.
  • With block storage at the edge:

    DistributedComputeHCI

    This role includes the following:

    • Default compute services
    • Block Storage (cinder) volume service
    • Ceph Mon
    • Ceph Mgr
    • Ceph OSD
    • GlanceApiEdge
    • Etcd

      This role enables a hyperconverged deployment at the edge. You must use exactly three nodes when using the DistributedComputeHCI role.

    DistributedComputeHCIScaleOut
    This role includes the Ceph OSD service, which allows storage capacity to scale with compute when you add more nodes to the edge. This role also includes the HAproxyEdge service to redirect image download requests to the GlanceApiEdge nodes at the edge site.
    DistributedComputeScaleOut
    If you want to scale compute resources at the edge without adding storage, you can use the DistributedComputeScaleOut role.
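To generate a roles file containing the roles described above, you can use the openstack overcloud roles generate command. The commands below are a sketch; the output paths are example values, and the exact role combination depends on your site design.

```shell
# Without block storage at the edge (output path is an example):
openstack overcloud roles generate \
    DistributedCompute DistributedComputeScaleOut \
    -o ~/dcn/edge_roles_data.yaml

# With block storage at the edge:
openstack overcloud roles generate \
    DistributedComputeHCI DistributedComputeHCIScaleOut \
    -o ~/dcn/edge_roles_data.yaml
```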