Chapter 5. Installing the central location
When you deploy the central location for distributed compute node (DCN) architecture, you can deploy the cluster:
- With or without Compute nodes
- With or without Red Hat Ceph Storage
If you deploy Red Hat OpenStack Platform without Red Hat Ceph Storage at the central location, you cannot deploy any of your edge sites with Red Hat Ceph Storage. Additionally, you cannot add Red Hat Ceph Storage to the central location later by redeploying.
5.1. Deploying the central controllers without edge storage
You can deploy a distributed compute node cluster without Block Storage at edge sites if you use the Object Storage service (swift) as a back end for the Image service (glance) at the central location. A site deployed without Block Storage cannot be updated later to include Block Storage because each architecture uses different role and networking profiles.
Important: The following procedure uses lvm as the back end for the Block Storage service (cinder), which is not supported for production. You must deploy a certified block storage solution as a back end for cinder.
Deploy the central controller cluster in a similar way to a typical overcloud deployment. This cluster does not require any Compute nodes, so you can set the Compute count to 0 to override the default of 1. The central controller has particular storage and Oslo configuration requirements. Use the following procedure to address these requirements.
Procedure
The following procedure outlines the steps for the initial deployment of the central location.
The following steps detail the deployment commands and environment files associated with an example DCN deployment without glance multistore. These steps do not include unrelated, but necessary, aspects of configuration, such as networking.
In the home directory, create directories for each stack that you plan to deploy.
mkdir /home/stack/central
mkdir /home/stack/dcn0
mkdir /home/stack/dcn1
Create a file called central/overrides.yaml with settings similar to the following:
parameter_defaults:
  NtpServer:
    - 0.pool.ntp.org
    - 1.pool.ntp.org
  ControllerCount: 3
  ComputeCount: 0
  OvercloudControllerFlavor: baremetal
  OvercloudComputeFlavor: baremetal
  ControllerSchedulerHints:
    'capabilities:node': '0-controller-%index%'
  GlanceBackend: swift
- ControllerCount: 3 specifies that three nodes are deployed. These nodes use swift for glance and lvm for cinder, and they host the control-plane services for the edge compute nodes.
- ComputeCount: 0 is an optional parameter that prevents Compute nodes from being deployed with the central Controller nodes.
- GlanceBackend: swift uses Object Storage (swift) as the Image service (glance) back end.
The resulting configuration interacts with the distributed compute nodes (DCNs) in the following ways:
The Image service on the DCN creates a cached copy of the image it receives from the central Object Storage back end. The Image service uses HTTP to copy the image from Object Storage to the local disk cache.
Note: The central Controller node must be able to connect to the distributed compute node (DCN) site. The central Controller node can use a routed layer 3 connection.
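The cache behavior at the edge can be tuned through standard TripleO parameters when you later deploy the edge stacks, such as dcn0. The following values are only an illustrative sketch for a hypothetical edge site; adjust the cache size to the local disk capacity:
parameter_defaults:
  GlanceCacheEnabled: true                # enable the local Image service cache at the edge site
  GlanceImageCacheMaxSize: 10737418240    # maximum cache size in bytes (10 GB in this sketch)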
Generate roles for the central location using roles appropriate for your environment:
openstack overcloud roles generate Controller \
  -o ~/central/control_plane_roles.yaml
Generate an environment file ~/central/central-images-env.yaml:
sudo openstack tripleo container image prepare \
  -e containers.yaml \
  --output-env-file ~/central/central-images-env.yaml
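The containers.yaml file that you pass with -e is assumed to already exist and to define the ContainerImagePrepare parameter for your registry. The following is a rough, hypothetical sketch only; the namespace and tag depend on your registry and RHOSP release:
parameter_defaults:
  ContainerImagePrepare:
    - push_destination: true
      set:
        namespace: registry.redhat.io/rhosp-rhel8   # placeholder registry namespace
        name_prefix: openstack-
        tag: 16.1                                   # placeholder tag; match your RHOSP release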
Configure the naming conventions for your site in the site-name.yaml environment file. The Nova availability zone and the Cinder storage availability zone must match:
cat > /home/stack/central/site-name.yaml << EOF
parameter_defaults:
  NovaComputeAvailabilityZone: central
  ControllerExtraConfig:
    nova::availability_zone::default_schedule_zone: central
  NovaCrossAZAttach: false
  CinderStorageAvailabilityZone: central
EOF
Deploy the central Controller node. For example, you can use a deploy.sh file with the following contents:
#!/bin/bash
source ~/stackrc
time openstack overcloud deploy \
  --stack central \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml \
  -e ~/central/central-images-env.yaml \
  -e ~/central/overrides.yaml \
  -e ~/central/site-name.yaml
You must include heat templates for the configuration of networking in your openstack overcloud deploy command. Designing for edge architecture requires spine and leaf networking. See Spine Leaf Networking for more details.
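For example, a deployment that uses network isolation with custom spine-leaf network definitions typically adds arguments similar to the following to the deploy command. The network_data.yaml and network-environment.yaml file names are placeholders for templates that you create for your own network layout:
openstack overcloud deploy \
  --stack central \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  -n ~/central/network_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/central/network-environment.yaml \
  ...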
5.2. Deploying the central site with storage
To deploy the Image service with multiple stores and Ceph Storage as the back end, complete the following steps:
Prerequisites
- Hardware for a Ceph cluster at the central location and in each availability zone, or in each geographic location where storage services are required.
- Hardware for three Image Service servers at the central location and in each availability zone, or in each geographic location where storage services are required.
The following is an example deployment of two or more stacks:
- One stack at the central location called central.
- One stack at an edge site called dcn0.
- Additional stacks deployed similarly to dcn0, such as dcn1, dcn2, and so on.
Procedure
The following procedure outlines the steps for the initial deployment of the central location.
The following steps detail the deployment commands and environment files associated with an example DCN deployment that uses the Image service with multiple stores. These steps do not include unrelated, but necessary, aspects of configuration, such as networking.
In the home directory, create directories for each stack that you plan to deploy.
mkdir /home/stack/central
mkdir /home/stack/dcn0
mkdir /home/stack/dcn1
Set the name of the Ceph cluster, as well as the configuration parameters appropriate for your available hardware. For more information, see Configuring Ceph with Custom Config Settings:
cat > /home/stack/central/ceph.yaml << EOF
parameter_defaults:
  CephClusterName: central
  CephAnsibleDisksConfig:
    osd_scenario: lvm
    osd_objectstore: bluestore
    devices:
      - /dev/sda
      - /dev/sdb
  CephPoolDefaultSize: 3
  CephPoolDefaultPgNum: 128
EOF
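The devices list must correspond to disks that are present and unused on the Ceph nodes. Assuming shell access to each node, you can confirm which block devices a node exposes with a standard command such as:
lsblk -d -o NAME,SIZE,TYPE,ROTA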
Generate roles for the central location using roles appropriate for your environment:
openstack overcloud roles generate Compute Controller CephStorage \
  -o ~/central/central_roles.yaml

cat > /home/stack/central/role-counts.yaml << EOF
parameter_defaults:
  ControllerCount: 3
  ComputeCount: 2
  CephStorageCount: 3
EOF
Generate an environment file ~/central/central-images-env.yaml:
sudo openstack tripleo container image prepare \
  -e containers.yaml \
  --output-env-file ~/central/central-images-env.yaml
Configure the naming conventions for your site in the site-name.yaml environment file. The Nova availability zone and the Cinder storage availability zone must match:
cat > /home/stack/central/site-name.yaml << EOF
parameter_defaults:
  NovaComputeAvailabilityZone: central
  ControllerExtraConfig:
    nova::availability_zone::default_schedule_zone: central
  NovaCrossAZAttach: false
  CinderStorageAvailabilityZone: central
  GlanceBackendID: central
EOF
Configure a glance.yaml template with contents similar to the following:
parameter_defaults:
  GlanceEnabledImportMethods: web-download,copy-image
  GlanceBackend: rbd
  GlanceStoreDescription: 'central rbd glance store'
  GlanceBackendID: central
  CephClusterName: central
After you prepare all of the other templates, deploy the central stack:
openstack overcloud deploy \
  --stack central \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  -r ~/central/central_roles.yaml \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml \
  -e ~/central/central-images-env.yaml \
  -e ~/central/role-counts.yaml \
  -e ~/central/site-name.yaml \
  -e ~/central/ceph.yaml \
  -e ~/central/glance.yaml
You must include heat templates for the configuration of networking in your openstack overcloud deploy command. Designing for edge architecture requires spine and leaf networking. See Spine Leaf Networking for more details.
The ceph-ansible.yaml file is configured with the following parameters:
- NovaEnableRbdBackend: true
- GlanceBackend: rbd
When you use these settings together, heat configures the glance.conf parameter image_import_plugins with the value image_conversion, which automates the conversion of QCOW2 images with commands such as glance image-create-via-import --disk-format qcow2.
This is optimal for Ceph RBD. If you want to disable image conversion, use the GlanceImageImportPlugin parameter:
parameter_defaults:
  GlanceImageImportPlugin: []
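As a usage sketch, an import that exercises the conversion plugin might look like the following. The image name and URL are placeholders, and the flags shown are standard glance client options; confirm them against the client version in your environment:
glance image-create-via-import \
  --disk-format qcow2 \
  --container-format bare \
  --name cirros-example \
  --uri http://example.com/cirros.qcow2 \
  --import-method web-download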
5.3. Integrating external Ceph
You can deploy the central location of a distributed compute node (DCN) architecture and integrate a pre-deployed Red Hat Ceph Storage solution.
Prerequisites
- Hardware for a Ceph cluster at the central location and in each availability zone, or in each geographic location where storage services are required.
- Hardware for three Image Service servers at the central location and in each availability zone, or in each geographic location where storage services are required.
The following is an example deployment of two or more stacks:
- One stack at the central location called central.
- One stack at an edge site called dcn0.
- Additional stacks deployed similarly to dcn0, such as dcn1, dcn2, and so on.
You can install the central location so that it is integrated with a pre-existing Red Hat Ceph Storage solution by following the process documented in Integrating an Overcloud with an Existing Red Hat Ceph Cluster. There are no special requirements for integrating Red Hat Ceph Storage with the central site of a DCN deployment; however, you must still complete the DCN-specific steps before you deploy the overcloud:
In the home directory, create directories for each stack that you plan to deploy. Use these directories to separate the templates designed for their respective sites.
mkdir /home/stack/central
mkdir /home/stack/dcn0
mkdir /home/stack/dcn1
Use roles that RHOSP director manages to generate roles for the central location. When you integrate with external Ceph, do not include the Ceph roles. Set the node counts for the central location:
cat > /home/stack/central/role-counts.yaml << EOF
parameter_defaults:
  ControllerCount: 3
  ComputeCount: 2
EOF
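The deploy command at the end of this procedure references a roles file with -r ~/central/central_roles.yaml. If you have not yet generated that file, a command similar to the following, mirroring the earlier procedures but without the CephStorage role, is one way to create it. Treat it as a sketch and adjust the roles to your environment:
openstack overcloud roles generate Compute Controller \
  -o ~/central/central_roles.yaml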
Generate an environment file ~/central/central-images-env.yaml:
sudo openstack tripleo container image prepare \
  -e containers.yaml \
  --output-env-file ~/central/central-images-env.yaml
Configure the naming conventions for your site in the site-name.yaml environment file. The Compute (nova) availability zone and the Block Storage (cinder) availability zone must match:
cat > /home/stack/central/site-name.yaml << EOF
parameter_defaults:
  NovaComputeAvailabilityZone: central
  ControllerExtraConfig:
    nova::availability_zone::default_schedule_zone: central
  NovaCrossAZAttach: false
  CinderStorageAvailabilityZone: central
  GlanceBackendID: central
EOF
Configure a glance.yaml template with contents similar to the following:
parameter_defaults:
  GlanceEnabledImportMethods: web-download,copy-image
  GlanceBackend: rbd
  GlanceStoreDescription: 'central rbd glance store'
  GlanceBackendID: central
  CephClusterName: central
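The deploy command in the next step passes a ~/central/ceph.yaml environment file. When you integrate with an existing Ceph cluster, that file typically carries the connection details for the external cluster rather than the OSD layout used in the director-deployed case. The parameter names below are the standard TripleO parameters for external Ceph integration, but the values are placeholders; take the real values from your Ceph cluster:
parameter_defaults:
  CephClusterName: central
  CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'    # placeholder; use the FSID reported by `ceph fsid`
  CephClientKey: 'AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw=='  # placeholder key for the OpenStack client user
  CephExternalMonHost: '172.16.1.7, 172.16.1.8, 172.16.1.9'  # placeholder monitor IP addresses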
When Ceph is deployed without Red Hat OpenStack Platform director, do not use the ceph-ansible.yaml environment file. Use the ceph-ansible-external.yaml environment file instead:
openstack overcloud deploy \
  --stack central \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  -r ~/central/central_roles.yaml \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml \
  -e ~/central/central-images-env.yaml \
  -e ~/central/role-counts.yaml \
  -e ~/central/site-name.yaml \
  -e ~/central/ceph.yaml \
  -e ~/central/glance.yaml