Red Hat Training

A Red Hat training course is available for Red Hat OpenStack Platform

Chapter 7. Creating the Overcloud

The final stage in creating your OpenStack environment is to run the openstack overcloud deploy command. Before running this command, familiarize yourself with its key options and how to include custom environment files. This chapter discusses the openstack overcloud deploy command and the options associated with it.

Warning

Do not run openstack overcloud deploy as a background process. The Overcloud creation might hang in mid-deployment if started as a background process.

7.1. Setting Overcloud Parameters

The following table lists the additional parameters available for the openstack overcloud deploy command.

Table 7.1. Deployment Parameters

--templates [TEMPLATES]
    The directory containing the Heat templates to deploy. If blank, the command uses the default template location at /usr/share/openstack-tripleo-heat-templates/.
    Example: ~/templates/my-overcloud

--stack STACK
    The name of the stack to create or update.
    Example: overcloud

-t [TIMEOUT], --timeout [TIMEOUT]
    Deployment timeout in minutes.
    Example: 240

--control-scale [CONTROL_SCALE]
    The number of Controller nodes to scale out.
    Example: 3

--compute-scale [COMPUTE_SCALE]
    The number of Compute nodes to scale out.
    Example: 3

--ceph-storage-scale [CEPH_STORAGE_SCALE]
    The number of Ceph Storage nodes to scale out.
    Example: 3

--block-storage-scale [BLOCK_STORAGE_SCALE]
    The number of Cinder nodes to scale out.
    Example: 3

--swift-storage-scale [SWIFT_STORAGE_SCALE]
    The number of Swift nodes to scale out.
    Example: 3

--control-flavor [CONTROL_FLAVOR]
    The flavor to use for Controller nodes.
    Example: control

--compute-flavor [COMPUTE_FLAVOR]
    The flavor to use for Compute nodes.
    Example: compute

--ceph-storage-flavor [CEPH_STORAGE_FLAVOR]
    The flavor to use for Ceph Storage nodes.
    Example: ceph-storage

--block-storage-flavor [BLOCK_STORAGE_FLAVOR]
    The flavor to use for Cinder nodes.
    Example: cinder-storage

--swift-storage-flavor [SWIFT_STORAGE_FLAVOR]
    The flavor to use for Swift storage nodes.
    Example: swift-storage

--neutron-flat-networks [NEUTRON_FLAT_NETWORKS]
    (DEPRECATED) Defines the flat networks to configure in neutron plugins. Defaults to "datacentre" to permit external network creation.
    Example: datacentre

--neutron-physical-bridge [NEUTRON_PHYSICAL_BRIDGE]
    (DEPRECATED) An Open vSwitch bridge to create on each hypervisor. Defaults to "br-ex". Typically, this should not need to be changed.
    Example: br-ex

--neutron-bridge-mappings [NEUTRON_BRIDGE_MAPPINGS]
    (DEPRECATED) The logical to physical bridge mappings to use. Defaults to mapping the external bridge on hosts (br-ex) to a physical name (datacentre). You would use this for the default floating network.
    Example: datacentre:br-ex

--neutron-public-interface [NEUTRON_PUBLIC_INTERFACE]
    (DEPRECATED) Defines the interface to bridge onto br-ex for network nodes.
    Example: nic1, eth0

--neutron-network-type [NEUTRON_NETWORK_TYPE]
    (DEPRECATED) The tenant network type for Neutron.
    Example: gre or vxlan

--neutron-tunnel-types [NEUTRON_TUNNEL_TYPES]
    (DEPRECATED) The tunnel types for the Neutron tenant network. To specify multiple values, use a comma-separated string.
    Example: vxlan or gre,vxlan

--neutron-tunnel-id-ranges [NEUTRON_TUNNEL_ID_RANGES]
    (DEPRECATED) Ranges of GRE tunnel IDs to make available for tenant network allocation.
    Example: 1:1000

--neutron-vni-ranges [NEUTRON_VNI_RANGES]
    (DEPRECATED) Ranges of VXLAN VNI IDs to make available for tenant network allocation.
    Example: 1:1000

--neutron-disable-tunneling
    (DEPRECATED) Disables tunneling if you aim to use a VLAN segmented network or flat network with Neutron.

--neutron-network-vlan-ranges [NEUTRON_NETWORK_VLAN_RANGES]
    (DEPRECATED) The Neutron ML2 and Open vSwitch VLAN mapping range to support. Defaults to permitting any VLAN on the datacentre physical network.
    Example: datacentre:1:1000

--neutron-mechanism-drivers [NEUTRON_MECHANISM_DRIVERS]
    (DEPRECATED) The mechanism drivers for the neutron tenant network. Defaults to "openvswitch". To specify multiple values, use a comma-separated string.
    Example: openvswitch,l2population

--libvirt-type [LIBVIRT_TYPE]
    Virtualization type to use for hypervisors.
    Example: kvm, qemu

--ntp-server [NTP_SERVER]
    Network Time Protocol (NTP) server to use to synchronize time. You can also specify multiple NTP servers in a comma-separated list, for example: --ntp-server 0.centos.pool.org,1.centos.pool.org. For a high availability cluster deployment, it is essential that your Controllers consistently refer to the same time source. Note that a typical environment might already have a designated NTP time source with established practices.
    Example: pool.ntp.org

--no-proxy [NO_PROXY]
    Defines custom values for the environment variable no_proxy, which excludes certain domain extensions from proxy communication.

--overcloud-ssh-user OVERCLOUD_SSH_USER
    Defines the SSH user to access the Overcloud nodes. Normally SSH access occurs through the heat-admin user.
    Example: ocuser

-e [EXTRA HEAT TEMPLATE], --extra-template [EXTRA HEAT TEMPLATE]
    Extra environment files to pass to the Overcloud deployment. Can be specified more than once. Note that the order of environment files passed to the openstack overcloud deploy command is important: parameters from each sequential environment file override the same parameters from earlier environment files.
    Example: -e ~/templates/my-config.yaml

--environment-directory
    The directory containing environment files to include in the deployment. The command processes these environment files in numerical, then alphabetical order.
    Example: --environment-directory ~/templates

--validation-errors-fatal
    The Overcloud creation process performs a set of pre-deployment checks. This option exits if any errors occur from the pre-deployment checks. It is advisable to use this option, as any errors can cause your deployment to fail.

--validation-warnings-fatal
    The Overcloud creation process performs a set of pre-deployment checks. This option exits if any non-critical warnings occur from the pre-deployment checks.

--dry-run
    Performs a validation check on the Overcloud but does not actually create the Overcloud.

--force-postconfig
    Forces the Overcloud post-deployment configuration.
    Example: --force-postconfig

--answers-file ANSWERS_FILE
    Path to a YAML file with arguments and parameters.
    Example: --answers-file ~/answers.yaml

--rhel-reg
    Registers Overcloud nodes to the Customer Portal or Satellite 6.

--reg-method
    Registration method to use for the Overcloud nodes.
    Example: satellite for Red Hat Satellite 6 or Red Hat Satellite 5, portal for the Customer Portal

--reg-org [REG_ORG]
    Organization to use for registration.

--reg-force
    Registers the system even if it is already registered.

--reg-sat-url [REG_SAT_URL]
    The base URL of the Satellite server to register Overcloud nodes. Use the Satellite's HTTP URL and not the HTTPS URL for this parameter. For example, use http://satellite.example.com and not https://satellite.example.com. The Overcloud creation process uses this URL to determine whether the server is a Red Hat Satellite 5 or Red Hat Satellite 6 server. If a Red Hat Satellite 6 server, the Overcloud obtains the katello-ca-consumer-latest.noarch.rpm file, registers with subscription-manager, and installs katello-agent. If a Red Hat Satellite 5 server, the Overcloud obtains the RHN-ORG-TRUSTED-SSL-CERT file and registers with rhnreg_ks.

--reg-activation-key [REG_ACTIVATION_KEY]
    Activation key to use for registration.

Note

Run the following command for a full list of options:

$ openstack help overcloud deploy

7.2. Including Environment Files in Overcloud Creation

The -e option includes an environment file to customize your Overcloud. You can include as many environment files as necessary. However, the order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Use the following list as an example of the environment file order:

  • Any network isolation files, including the initialization file (environments/network-isolation.yaml) from the heat template collection and then your custom NIC configuration file. See Section 6.2, “Isolating Networks” for more information on network isolation.
  • Any external load balancing environment files.
  • Any storage environment files such as Ceph Storage, NFS, iSCSI, etc.
  • Any environment files for Red Hat CDN or Satellite registration.
  • Any other custom environment files.

Any environment files added to the Overcloud using the -e option become part of your Overcloud’s stack definition.
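This override behaviour can be sketched with plain key/value files. The following is only a hedged simulation: real environment files are Heat YAML merged by Heat itself, and the file names and the NtpServer parameter here are hypothetical stand-ins.

```shell
# Simulate how a parameter from a later -e file overrides an earlier one,
# as in: openstack overcloud deploy ... -e first.yaml -e second.yaml
tmpdir=$(mktemp -d)
printf 'NtpServer: pool.ntp.org\n'      > "$tmpdir/first.yaml"
printf 'NtpServer: 0.centos.pool.org\n' > "$tmpdir/second.yaml"

# Process the files in the order they would be passed; the last
# non-empty value read wins.
value=""
for f in "$tmpdir/first.yaml" "$tmpdir/second.yaml"; do
  v=$(sed -n 's/^NtpServer: //p' "$f")
  [ -n "$v" ] && value="$v"
done
echo "$value"    # prints the value from second.yaml
rm -rf "$tmpdir"
```

The same principle is why the recommended ordering above matters: a later storage or registration file can silently override a parameter set by an earlier network file.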

Likewise, you can add a whole directory containing environment files using the --environment-directory option. The deployment command processes the environment files in this directory in numerical, then alphabetical order. If using this method, it is recommended to use filenames with a numerical prefix to order how they are processed. For example:

$ ls -1 ~/templates
10-network-isolation.yaml
20-network-environment.yaml
30-storage-environment.yaml
40-rhel-registration.yaml
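Because this processing order matches a plain lexical sort of the file names, you can preview it locally before deploying. This sketch uses a temporary directory with the same hypothetical file names as the listing above:

```shell
# Preview the order in which --environment-directory would process files:
# a numerical prefix makes the intended order explicit.
tmpdir=$(mktemp -d)
touch "$tmpdir/20-network-environment.yaml" \
      "$tmpdir/40-rhel-registration.yaml" \
      "$tmpdir/10-network-isolation.yaml" \
      "$tmpdir/30-storage-environment.yaml"
ls -1 "$tmpdir"    # lists the files in 10, 20, 30, 40 order
rm -rf "$tmpdir"
```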

The director requires these environment files for re-deployment and post-deployment functions in Chapter 8, Performing Tasks after Overcloud Creation. Failure to include these files can result in damage to your Overcloud.

If you aim to later modify the Overcloud configuration, you should:

  1. Modify parameters in the custom environment files and Heat templates
  2. Run the openstack overcloud deploy command again with the same environment files

Do not edit the Overcloud configuration directly, because the director overrides any manual configuration when it updates the Overcloud stack.

Important

Save the original deployment command for later use and modification. For example, save your deployment command in a script file called deploy-overcloud.sh:

#!/bin/bash
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  -e ~/templates/storage-environment.yaml \
  -t 150 \
  --control-scale 3 \
  --compute-scale 3 \
  --ceph-storage-scale 3 \
  --swift-storage-scale 0 \
  --block-storage-scale 0 \
  --compute-flavor compute \
  --control-flavor control \
  --ceph-storage-flavor ceph-storage \
  --swift-storage-flavor swift-storage \
  --block-storage-flavor block-storage \
  --ntp-server pool.ntp.org \
  --libvirt-type qemu

This retains the Overcloud deployment command’s parameters and environment files for future use, such as Overcloud modifications and scaling. You can then edit and rerun this script to suit future customizations to the Overcloud.

7.3. Overcloud Creation Example

The following command is an example of how to start the Overcloud creation with custom environment files included:

$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  -e ~/templates/storage-environment.yaml \
  --control-scale 3 \
  --compute-scale 3 \
  --ceph-storage-scale 3 \
  --control-flavor control \
  --compute-flavor compute \
  --ceph-storage-flavor ceph-storage \
  --ntp-server pool.ntp.org

This command contains the following additional options:

  • --templates - Creates the Overcloud using the Heat template collection in /usr/share/openstack-tripleo-heat-templates.
  • -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml - The -e option adds an additional environment file to the Overcloud deployment. In this case, it is an environment file that initializes network isolation configuration.
  • -e ~/templates/network-environment.yaml - The -e option adds an additional environment file to the Overcloud deployment. In this case, it is the network environment file from Section 6.2.2, “Creating a Network Environment File”.
  • -e ~/templates/storage-environment.yaml - The -e option adds an additional environment file to the Overcloud deployment. In this case, it is a custom environment file that initializes our storage configuration.
  • --control-scale 3 - Scale the Controller nodes to three.
  • --compute-scale 3 - Scale the Compute nodes to three.
  • --ceph-storage-scale 3 - Scale the Ceph Storage nodes to three.
  • --control-flavor control - Use a specific flavor for the Controller nodes.
  • --compute-flavor compute - Use a specific flavor for the Compute nodes.
  • --ceph-storage-flavor ceph-storage - Use a specific flavor for the Ceph Storage nodes.
  • --ntp-server pool.ntp.org - Use an NTP server for time synchronization. This is useful for keeping the Controller node cluster in synchronization.

7.4. Monitoring the Overcloud Creation

The Overcloud creation process begins and the director provisions your nodes. This process takes some time to complete. To view the status of the Overcloud creation, open a separate terminal as the stack user and run:

$ source ~/stackrc                # Initializes the stack user to use the CLI commands
$ heat stack-list --show-nested

The heat stack-list --show-nested command shows the current stage of the Overcloud creation.

7.5. Accessing the Overcloud

The director generates a script to configure and help authenticate interactions with your Overcloud from the director host. The director saves this file, overcloudrc, in your stack user’s home directory. Run the following command to use this file:

$ source ~/overcloudrc

This loads the necessary environment variables to interact with your Overcloud from the director host’s CLI. To return to interacting with the director’s host, run the following command:

$ source ~/stackrc

Each node in the Overcloud also contains a user called heat-admin. The stack user has SSH access to this user on each node. To access a node over SSH, find the IP address of the desired node:

$ nova list

Then connect to the node using the heat-admin user and the node’s IP address:

$ ssh heat-admin@192.0.2.23
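If you script this lookup, the ctlplane address can be extracted from the nova list table output. The following sketch works on a sample row; the instance ID and IP address are illustrative, not from a real deployment:

```shell
# A sample row as printed by `nova list` (illustrative values only).
row='| 9cc5... | overcloud-controller-0 | ACTIVE | Running | ctlplane=192.0.2.23 |'

# Pull the ctlplane IP address out of the row.
ip=$(printf '%s\n' "$row" | sed -n 's/.*ctlplane=\([0-9.]*\).*/\1/p')
echo "$ip"    # 192.0.2.23

# Then connect as heat-admin (shown here as a comment, not executed):
# ssh heat-admin@"$ip"
```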

7.6. Completing the Overcloud Creation

This concludes the creation of the Overcloud. For post-creation functions, see Chapter 8, Performing Tasks after Overcloud Creation.