Chapter 4. Deploying the edge without storage

You can deploy a distributed compute node (DCN) cluster without Block Storage at edge sites if you use the Object Storage service (swift) as a back end for the Image service (glance) at the central location. A site deployed without block storage cannot later be updated to include block storage, because the two architectures use different role and networking profiles.

Important

You cannot use Ceph as a back end for the Image service (glance) at the central location if you do not also use it at the edge. The following procedure uses LVM as the back end for the Block Storage service (cinder), which is not supported for production. You must deploy a certified block storage solution as a back end for the Block Storage service.

4.1. Deploying the central controllers without edge storage

Deploy the central controller cluster in a similar way to a typical overcloud deployment. This cluster does not require any Compute nodes, so you can set ComputeCount to 0 to override the default of 1. The central controller has particular storage and Oslo configuration requirements. Use the following procedure to address these requirements.

Procedure

The following procedure outlines the steps for the initial deployment of the central location.

Note

The following steps detail the deployment commands and environment files associated with an example DCN deployment without glance multistore. These steps do not include unrelated, but necessary, aspects of configuration, such as networking.

  1. In the home directory, create directories for each stack that you plan to deploy, and a dcn-common directory for files that are shared between sites:

    mkdir /home/stack/central
    mkdir /home/stack/dcn0
    mkdir /home/stack/dcn1
    mkdir /home/stack/dcn-common
  2. Create a file called central/overrides.yaml with settings similar to the following:

    parameter_defaults:
      NtpServer:
        - 0.pool.ntp.org
        - 1.pool.ntp.org
      ControllerCount: 3
      ComputeCount: 0
      OvercloudControlFlavor: baremetal
      OvercloudComputeFlavor: baremetal
      ControllerSchedulerHints:
        'capabilities:node': '0-controller-%index%'
      GlanceBackend: swift
    • ControllerCount: 3 deploys three Controller nodes. These nodes use swift as the Image service back end, use LVM as the Block Storage back end, and host the control plane services for the edge Compute nodes.
    • ComputeCount: 0 is an optional parameter that prevents Compute nodes from being deployed alongside the central Controller nodes.
    • GlanceBackend: swift uses Object Storage (swift) as the Image service (glance) back end.

      The resulting configuration interacts with the distributed compute nodes (DCNs) in the following ways:

    • The Image service on the DCN creates a cached copy of the image it receives from the central Object Storage back end. The Image service uses HTTP to copy the image from Object Storage to the local disk cache.

      Note

      The central Controller node must be able to connect to the distributed compute node (DCN) site. The central Controller node can use a routed layer 3 connection.
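
      If you need to bound the size of that local disk cache, tripleo-heat-templates exposes a cache size parameter. The following is a minimal sketch, assuming the GlanceImageCacheMaxSize parameter; verify the parameter name against your release before you use it:

      parameter_defaults:
        # Assumed parameter name; caps the Image service disk cache. Value is in bytes (10 GB here).
        GlanceImageCacheMaxSize: 10737418240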

  3. Generate roles for the central location using roles appropriate for your environment:

    openstack overcloud roles generate Controller \
    -o ~/central/control_plane_roles.yaml
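
     If your environment uses additional roles, list them in the same command. For example, the following sketch assumes the stock Networker role that ships with tripleo-heat-templates:

    openstack overcloud roles generate Controller Networker \
    -o ~/central/control_plane_roles.yaml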
  4. Generate an environment file ~/central/central-images-env.yaml:

    openstack tripleo container image prepare \
    -e containers.yaml \
    --output-env-file ~/central/central-images-env.yaml
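
     The containers.yaml file that this command reads contains the ContainerImagePrepare parameter. The following minimal sketch uses placeholder values; replace the namespace and tag with values for your registry and release:

    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: true
        set:
          namespace: registry.example.com/rhosp   # placeholder registry namespace
          name_prefix: openstack-
          tag: latest                             # replace with your release tag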
  5. Configure the naming conventions for your site in the site-name.yaml environment file. The Compute (nova) availability zone and the Block Storage (cinder) availability zone must match:

    cat > /home/stack/central/site-name.yaml << EOF
    parameter_defaults:
        NovaComputeAvailabilityZone: central
        ControllerExtraConfig:
            nova::availability_zone::default_schedule_zone: central
        NovaCrossAZAttach: false
        CinderStorageAvailabilityZone: central
    EOF
  6. Deploy the central Controller node. For example, you can use a deploy.sh file with the following contents:

    #!/bin/bash
    
    source ~/stackrc
    time openstack overcloud deploy \
    --stack central \
    --templates /usr/share/openstack-tripleo-heat-templates/ \
    -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml \
    -e ~/central/central-images-env.yaml \
    -e ~/central/overrides.yaml \
    -e ~/central/site-name.yaml
Note

You must include heat templates for the configuration of networking in your openstack overcloud deploy command. Designing for edge architecture requires spine and leaf networking. See Spine Leaf Networking for more details.
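
After the deployment completes, you can confirm the result from the undercloud. The following minimal check assumes the stack name central that is used in this procedure:

    source ~/stackrc
    # The central stack should report CREATE_COMPLETE.
    openstack stack list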

4.2. Deploying edge nodes without storage

You can deploy edge compute nodes that will use the central location as the control plane. This procedure shows how to add a new DCN stack to your deployment and reuse the configuration from the existing heat stack to create new environment files. The first heat stack deploys an overcloud within a centralized datacenter. Create additional heat stacks to deploy Compute nodes to a remote location.

4.2.1. Configuring the distributed compute node environment files

Procedure

  1. Generate the configuration files that the DCN sites require:

    openstack overcloud export \
    --config-download-dir /var/lib/mistral/central \
    --stack central --output-file ~/dcn-common/central-export.yaml
Important

This procedure creates a new central-export.yaml environment file and uses the passwords in the plan-environment.yaml file from the overcloud. The central-export.yaml file contains sensitive security data. To improve security, you can remove the file when you no longer require it.

Important

When you specify the directory for the --config-download-dir option, use the central hub Ansible configuration that director creates in /var/lib/mistral during deployment. Do not use Ansible configuration that you manually generate with the openstack overcloud config download command. The manually generated configuration lacks certain files that are created only during a deployment operation.
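
To confirm that the directory you pass to the --config-download-dir option was generated by a deployment, list its contents. The following quick check assumes the default path that director uses for the central stack:

    # Expect Ansible configuration that director created during the central deployment.
    ls /var/lib/mistral/central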

4.2.2. Deploying the Compute nodes to the DCN site

This procedure uses the Compute role to deploy Compute nodes to an availability zone (AZ) named dcn0. In a distributed compute node (DCN) context, this role is used for sites without storage.

Procedure

  1. Review the overrides for the distributed compute node (DCN) site in dcn0/overrides.yaml:

    parameter_defaults:
      ComputeCount: 3
      ComputeFlavor: baremetal
      ComputeSchedulerHints:
        'capabilities:node': '0-compute-%index%'
      NovaCrossAZAttach: false
  2. Create a new file called site-name.yaml in the ~/dcn0 directory with the following contents:

    resource_registry:
      OS::TripleO::Services::NovaAZConfig: /usr/share/openstack-tripleo-heat-templates/deployment/nova/nova-az-config.yaml
    parameter_defaults:
      NovaComputeAvailabilityZone: dcn0
      RootStackName: dcn0
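
     The image prepare command in the next step reads ~/dcn0/roles_data.yaml. If you have not yet generated roles for this site, the following sketch creates the file, mirroring the roles generation for the central location and assuming the stock Compute role:

    openstack overcloud roles generate Compute \
    -o ~/dcn0/roles_data.yaml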
  3. Retrieve the container images for the DCN site:

    openstack tripleo container image prepare \
    --environment-directory dcn0 \
    -r ~/dcn0/roles_data.yaml \
    -e ~/dcn-common/central-export.yaml \
    -e ~/containers-prepare-parameter.yaml \
    --output-env-file ~/dcn0/dcn0-images-env.yaml
  4. Run the deploy.sh deployment script for dcn0:

    #!/bin/bash
    STACK=dcn0
    source ~/stackrc
    time openstack overcloud deploy \
         --stack $STACK \
         --templates /usr/share/openstack-tripleo-heat-templates/ \
         -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml \
         -e ~/dcn-common/central-export.yaml \
         -e ~/dcn0/dcn0-images-env.yaml \
         -e ~/dcn0/site-name.yaml \
         -e ~/dcn0/overrides.yaml

    If you deploy additional edge sites that require edits to the network_data.yaml file, you must execute a stack update at the central location.

Note

You must include heat templates for the configuration of networking in your openstack overcloud deploy command. Designing for edge architecture requires spine and leaf networking. See Spine Leaf Networking for more details.
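
After the dcn0 stack completes, you can verify that the new Compute nodes registered in the dcn0 availability zone. The following minimal check assumes the credentials file that director writes for the central stack, ~/centralrc in this example:

    source ~/centralrc
    # The nova-compute services should report dcn0 in the Zone column.
    openstack compute service list
    openstack availability zone list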