Chapter 6. Deploy the edge without storage

You can deploy a distributed compute node (DCN) cluster without block storage at edge sites if you use the Object Storage service (swift) as a back end for the Image service (glance) at the central location. A site deployed without block storage cannot later be updated to include block storage, because the two architectures use different role and networking profiles.
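For example, a minimal sketch of an environment file for the central stack that sets the Object Storage service as the Image service back end. The file name is illustrative; include it with `-e` in the central stack deploy command:

```yaml
# central-glance-swift.yaml (illustrative file name)
# Configure the Image service (glance) at the central location to use
# the Object Storage service (swift) as its back end.
parameter_defaults:
  GlanceBackend: swift
```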

Important

The following procedure uses lvm as the back end for the Block Storage service (cinder), which is not supported for production. You must deploy a certified block storage solution as a back end for the Block Storage service.

6.1. Deploying edge nodes without storage

You can deploy edge Compute nodes that use the central location as the control plane. This procedure shows how to add a new DCN stack to your deployment and reuse the configuration from the existing heat stack to create new environment files. The first heat stack deploys the overcloud in a centralized datacenter. You then create additional heat stacks to deploy Compute nodes at remote locations.

6.1.1. Configuring the distributed compute node environment files

This procedure creates a new central-export.yaml environment file and uses the passwords in the plan-environment.yaml file from the overcloud. The central-export.yaml file contains sensitive security data. To improve security, you can remove the file when you no longer require it.
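For example, once every edge site that needs the file has been deployed, you can overwrite and then delete it. The following is a minimal sketch that assumes the `shred` utility from coreutils is available:

```shell
# Overwrite and then delete the exported environment file after
# all edge-site deployments that require it are complete.
EXPORT_FILE=~/dcn-common/central-export.yaml
if [ -f "$EXPORT_FILE" ]; then
  shred --remove "$EXPORT_FILE"
fi
```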

When you specify the directory for the --config-download-dir option, use the central hub Ansible configuration that director creates in /var/lib/mistral during deployment. Do not use Ansible configuration that you manually generate with the openstack overcloud config download command. The manually generated configuration lacks certain files that are created only during a deployment operation.

You must upload images to the central location before copying them to edge sites; a copy of each image must exist in the Image service (glance) at the central location.

You must use the RBD storage driver for the Image, Compute, and Block Storage services.

Procedure

  1. Generate the configuration files that the DCN sites require:

    openstack overcloud export \
    --config-download-dir /var/lib/mistral/central \
    --stack central --output-file ~/dcn-common/central-export.yaml
  2. Generate a roles file for the edge location, using roles appropriate for your environment:

    openstack overcloud roles generate Compute -o ~/dcn0/dcn0_roles.yaml
Note

If you use ML2/OVS for the networking overlay, edit the roles file that you created to include the NeutronDhcpAgent and NeutronMetadataAgent services:

...
    - OS::TripleO::Services::MySQLClient
    - OS::TripleO::Services::NeutronBgpVpnBagpipe
+   - OS::TripleO::Services::NeutronDhcpAgent
+   - OS::TripleO::Services::NeutronMetadataAgent
    - OS::TripleO::Services::NeutronLinuxbridgeAgent
    - OS::TripleO::Services::NeutronVppAgent
    - OS::TripleO::Services::NovaAZConfig
    - OS::TripleO::Services::NovaCompute

...

For more information, see Preparing for a routed provider network.

6.1.2. Deploying the Compute nodes to the DCN site

This procedure uses the Compute role to deploy Compute nodes to an availability zone (AZ) named dcn0. In a distributed compute node (DCN) context, this role is used for sites without block storage.

Procedure

  1. Review the overrides for the distributed compute node (DCN) site in the dcn0/overrides.yaml file:

    parameter_defaults:
      ComputeCount: 3
      ComputeFlavor: baremetal
      ComputeSchedulerHints:
        'capabilities:node': '0-compute-%index%'
      NovaAZAttach: false
  2. Create a new file called site-name.yaml in the ~/dcn0 directory with the following contents:

    resource_registry:
      OS::TripleO::Services::NovaAZConfig: /usr/share/openstack-tripleo-heat-templates/deployment/nova/nova-az-config.yaml
    parameter_defaults:
      NovaComputeAvailabilityZone: dcn0
      RootStackName: dcn0
  3. Retrieve the container images for the DCN site:

    sudo openstack tripleo container image prepare \
    --environment-directory dcn0 \
    -r ~/dcn0/dcn0_roles.yaml \
    -e ~/dcn-common/central-export.yaml \
    -e ~/containers-prepare-parameter.yaml \
    --output-env-file ~/dcn0/dcn0-images-env.yaml
  4. Run the deploy.sh deployment script for dcn0:

    #!/bin/bash
    STACK=dcn0
    source ~/stackrc
    time openstack overcloud deploy \
         --stack $STACK \
         --templates /usr/share/openstack-tripleo-heat-templates/ \
         -r ~/dcn0/dcn0_roles.yaml \
         -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml \
         -e ~/dcn-common/central-export.yaml \
         -e ~/dcn0/dcn0-images-env.yaml \
         -e ~/dcn0/site-name.yaml \
         -e ~/dcn0/overrides.yaml

    If you deploy additional edge sites that require edits to the network_data.yaml file, you must execute a stack update at the central location.
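If you do add routed subnets for new edge sites, the network_data.yaml entries follow the spine-leaf subnets format. The following is an illustrative sketch only; the subnet names, CIDRs, and VLAN IDs are assumptions that you must replace with values from your own network design:

```yaml
- name: InternalApi
  name_lower: internal_api
  vip: true
  subnets:
    internal_api_subnet:          # existing subnet at the central location
      ip_subnet: 172.16.2.0/24
      allocation_pools:
        - start: 172.16.2.4
          end: 172.16.2.250
      vlan: 20
    internal_api_dcn0_subnet:     # new leaf subnet for the dcn0 site
      ip_subnet: 172.16.12.0/24
      allocation_pools:
        - start: 172.16.12.4
          end: 172.16.12.250
      vlan: 30
```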

Note

You must include the heat templates that configure networking in your openstack overcloud deploy command. Edge architectures require a spine-leaf network design. For more information, see Spine Leaf Networking.