Chapter 4. Configuring the Overcloud

This section runs through the process of creating an Overcloud that uses the external load balancer. This includes registering the nodes, configuring the network, and configuring the options required for the Overcloud creation command.

4.1. Setting up your Environment

This section uses a cut-down version of the process from Chapter 5. Configuring Basic Overcloud Requirements in the Red Hat OpenStack Platform 8 Director Installation and Usage guide.

Use the following workflow to set up your environment:

  • Create a node definition template and register blank nodes in the director.
  • Inspect hardware of all nodes.
  • Manually tag nodes into roles.
  • Create flavors and tag them into roles.

4.1.1. Initializing the Stack User

Log into the director host as the stack user and run the following command to initialize your director configuration:

$ source ~/stackrc

This sets up environment variables containing authentication details to access the director’s CLI tools.

4.1.2. Registering Nodes

A node definition template (instackenv.json) is a JSON-format file that contains the hardware and power management details for registering nodes. For example:
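A minimal sketch of this template for a single node managed over IPMI (the MAC address, credentials, power management address, and hardware sizes below are placeholders, not values from this guide; pm_type would differ for other power drivers):

```json
{
    "nodes": [
        {
            "mac": ["bb:bb:bb:bb:bb:bb"],
            "cpu": "4",
            "memory": "6144",
            "disk": "40",
            "arch": "x86_64",
            "pm_type": "pxe_ipmitool",
            "pm_user": "admin",
            "pm_password": "p@55w0rd!",
            "pm_addr": "192.0.2.205"
        }
    ]
}
```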


After creating the template, save the file to the stack user’s home directory (/home/stack/instackenv.json), then import it into the director. Use the following command to accomplish this:

$ openstack baremetal import --json ~/instackenv.json

This imports the template and registers each node from the template into the director.

Assign the kernel and ramdisk images to all nodes:

$ openstack baremetal configure boot

The nodes are now registered and configured in the director.

4.1.3. Inspecting the Hardware of Nodes

After registering the nodes, inspect the hardware attributes of each node with the following command:

$ openstack baremetal introspection bulk start

Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes.
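To follow the introspection while it runs, one common approach is to tail the director's introspection service logs in a separate terminal (this assumes the default service names on an OSP 8 director host; they may differ on your system):

```shell
$ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -u openstack-ironic-conductor -f
```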

4.1.4. Manually Tagging the Nodes

After registering and inspecting the hardware of each node, tag them into specific profiles. These profile tags match our nodes to flavors, and in turn the flavors are assigned to a deployment role.

Retrieve a list of your nodes to identify their UUIDs:

$ ironic node-list

To manually tag a node to a specific profile, add a profile option to the properties/capabilities parameter for each node. For example, to tag three nodes to use a controller profile and one node to use a compute profile, use the following commands:

$ ironic node-update 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 5e3b2f50-fcd9-4404-b0a2-59d79924b38e add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 add properties/capabilities='profile:compute,boot_option:local'

The addition of the profile:compute and profile:control options tags the nodes into their respective profiles.
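The workflow in Section 4.1 also calls for creating flavors and tagging them into roles, which is not shown as commands here. A sketch of how this is typically done (the flavor sizes are placeholder assumptions; the capabilities properties mirror the profile tags applied to the nodes above):

```shell
$ openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 control
$ openstack flavor set --property "cpu_arch"="x86_64" \
    --property "capabilities:boot_option"="local" \
    --property "capabilities:profile"="control" control
$ openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 compute
$ openstack flavor set --property "cpu_arch"="x86_64" \
    --property "capabilities:boot_option"="local" \
    --property "capabilities:profile"="compute" compute
```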

4.2. Configuring the Network

This section examines the network configuration for the Overcloud. This includes isolating services so that each uses a specific network for its traffic, and configuring the Overcloud with our load balancing options.

4.2.1. Isolating the Network

The director provides methods to configure isolated overcloud networks. This means the Overcloud environment separates network traffic types into different networks, which in turn assigns network traffic to specific network interfaces or bonds. After configuring isolated networks, the director configures the OpenStack services to use the isolated networks. If no isolated networks are configured, all services run on the Provisioning network.

First, the Overcloud requires a set of network interface templates. You customize these templates to configure the node interfaces on a per role basis. These templates are standard Heat templates in YAML format. The director contains a set of example templates to get you started:

  • /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans - Directory containing templates for single NIC with VLANs configuration on a per role basis.
  • /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans - Directory containing templates for bonded NIC configuration on a per role basis.

For more information on network interface configuration, see 6.2. Isolating Networks in the Red Hat OpenStack Platform 8 Director Installation and Usage guide.

Next, create a network environment file. This file is a Heat environment file that describes the Overcloud’s network environment and points to the network interface configuration templates. This file also defines the subnets and VLANs for our network along with IP address ranges. You customize these values for the local environment.

This scenario uses the following network environment file saved as /home/stack/network-environment.yaml:

resource_registry:
  OS::TripleO::BlockStorage::Net::SoftwareConfig: /home/stack/templates/my-overcloud/network/config/bond-with-vlans/cinder-storage.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/my-overcloud/network/config/bond-with-vlans/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/my-overcloud/network/config/bond-with-vlans/controller.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/templates/my-overcloud/network/config/bond-with-vlans/swift-storage.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/my-overcloud/network/config/bond-with-vlans/ceph-storage.yaml

parameter_defaults:
  TenantNetCidr: ""
  InternalApiAllocationPools: [{'start': '', 'end': ''}]
  TenantAllocationPools: [{'start': '', 'end': ''}]
  StorageAllocationPools: [{'start': '', 'end': ''}]
  StorageMgmtAllocationPools: [{'start': '', 'end': ''}]
  # Leave room for floating IPs in the External allocation pool
  ExternalAllocationPools: [{'start': '', 'end': ''}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: ""
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: ""
  # The IP address of the EC2 metadata server. Generally the IP of the Undercloud
  EC2MetadataIp: ""
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["",""]
  InternalApiNetworkVlanID: 201
  StorageNetworkVlanID: 202
  StorageMgmtNetworkVlanID: 203
  TenantNetworkVlanID: 204
  ExternalNetworkVlanID: 100
  # Set to "br-ex" if using floating IPs on native VLAN on bridge br-ex
  NeutronExternalNetworkBridge: "''"
  # Customize bonding options if required
  BondInterfaceOvsOptions:
    "bond_mode=balance-tcp lacp=active other-config:lacp-fallback-ab=true"

For more information on network environment configuration, see 6.2. Isolating Networks in the Red Hat OpenStack Platform 8 Director Installation and Usage guide.

Ensure the director host has access to the Internal API network so that it can connect to the keystone_admin_ssh VIP.

4.2.2. Configuring Load Balancing Options

The director provides a method for creating an Overcloud where an external load balancer hosts the virtual IPs instead of HAProxy managing them internally. This configuration assumes a number of virtual IPs are configured on the external load balancer, one per isolated network, plus one for the Redis service, before the Overcloud deployment starts. Some of the virtual IPs can be identical if the Overcloud node NIC configuration permits it.

You configured the external load balancer using the settings from the previous chapter. These settings include the IPs that the director assigns to the Overcloud nodes and uses for service configuration.

The following is an example Heat environment file (external-lb.yaml) that contains the Overcloud configuration for using the external load balancer:

parameter_defaults:
  # The VIP that the balancer holds on the ControlPlane.
  ControlFixedIPs: [{'ip_address':''}]
  # The VIPs that the balancer holds for each network. These are the addresses previously bound in the load balancing configuration.
  PublicVirtualFixedIPs: [{'ip_address':''}]
  InternalApiVirtualFixedIPs: [{'ip_address':''}]
  StorageVirtualFixedIPs: [{'ip_address':''}]
  StorageMgmtVirtualFixedIPs: [{'ip_address':''}]
  # The VIP that the balancer holds, on the InternalApi, for the Redis service.
  RedisVirtualFixedIPs: [{'ip_address':''}]
  # IP assignments for the Overcloud Controller nodes. Ensure these IPs are from the respective allocation pools defined in the network environment file.
  ControllerIPs:
    external: ['']
    internal_api: ['']
    storage: ['']
    storage_mgmt: ['']
    tenant: ['']
    # CIDRs
    external_cidr: "24"
    internal_api_cidr: "24"
    storage_cidr: "24"
    storage_mgmt_cidr: "24"
    tenant_cidr: "24"
  RedisPassword: p@55w0rd!
  ServiceNetMap:
    NeutronTenantNetwork: tenant
    CeilometerApiNetwork: internal_api
    MongoDbNetwork: internal_api
    CinderApiNetwork: internal_api
    CinderIscsiNetwork: storage
    GlanceApiNetwork: storage
    GlanceRegistryNetwork: internal_api
    KeystoneAdminApiNetwork: internal_api
    KeystonePublicApiNetwork: internal_api
    NeutronApiNetwork: internal_api
    HeatApiNetwork: internal_api
    NovaApiNetwork: internal_api
    NovaMetadataNetwork: internal_api
    NovaVncProxyNetwork: internal_api
    SwiftMgmtNetwork: storage_mgmt
    SwiftProxyNetwork: storage
    HorizonNetwork: internal_api
    MemcachedNetwork: internal_api
    RabbitMqNetwork: internal_api
    RedisNetwork: internal_api
    MysqlNetwork: internal_api
    CephClusterNetwork: storage_mgmt
    CephPublicNetwork: storage
    ControllerHostnameResolveNetwork: internal_api
    ComputeHostnameResolveNetwork: internal_api
    BlockStorageHostnameResolveNetwork: internal_api
    ObjectStorageHostnameResolveNetwork: internal_api
    CephStorageHostnameResolveNetwork: storage

The parameter_defaults section contains the VIP and IP assignments for each network on OpenStack. These settings must match the IP configuration for each service on the load balancer. This section also defines an administrative password for the Redis service (RedisPassword) and the ServiceNetMap parameter, which maps each OpenStack service to a specific network. The load balancing configuration requires this service remapping.

4.3. Configuring SSL for Load Balancing

The Overcloud uses unencrypted endpoints for its services by default. This means the Overcloud configuration requires an additional environment file to enable SSL/TLS for its endpoints.


Ensure your external load balancer has a copy of your SSL certificate and key installed.

Copy the enable-tls.yaml environment file from the Heat template collection:

$ cp -r /usr/share/openstack-tripleo-heat-templates/environments/enable-tls.yaml ~/templates/.

Edit this file and perform the following:

  • Remove the SSLCertificate, SSLIntermediateCertificate, and SSLKey parameters from the parameter_defaults section.
  • Remove the resource_registry section completely.
  • All that should remain is the EndpointMap parameter in parameter_defaults. EndpointMap contains a mapping of the services using HTTPS and HTTP communication. If using DNS for SSL communication, leave this section with the defaults. However, if using an IP address for the SSL certificate’s common name, replace all instances of CLOUDNAME with IP_ADDRESS. Use the following command to accomplish this:

    $ sed -i 's/CLOUDNAME/IP_ADDRESS/' ~/templates/enable-tls.yaml

    Do not substitute IP_ADDRESS or CLOUDNAME for actual values. Heat replaces these variables with the appropriate value during the Overcloud creation.

If using a self-signed certificate or the certificate signer is not in the default trust store on the Overcloud image, inject the certificate into the Overcloud image. Copy the inject-trust-anchor.yaml environment file from the Heat template collection:

$ cp -r /usr/share/openstack-tripleo-heat-templates/environments/inject-trust-anchor.yaml ~/templates/.

Edit this file and make the following changes to its parameters:


Copy the contents of the root certificate authority file into the SSLRootCertificate parameter. For example:

  SSLRootCertificate: |
    -----BEGIN CERTIFICATE-----
    [certificate authority contents]
    -----END CERTIFICATE-----

The certificate authority contents require the same indentation level for all new lines.


Change the resource URL for OS::TripleO::NodeTLSCAData: to an absolute URL:

     OS::TripleO::NodeTLSCAData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/ca-inject.yaml

If using a DNS hostname to access the Overcloud through SSL/TLS, create a new environment file (~/templates/cloudname.yaml) to define the hostname of the Overcloud’s endpoints. Use the following parameters:

  • CloudName - The DNS hostname for the Overcloud endpoints.
  • DnsServers - A list of DNS servers to use. The configured DNS servers must contain an entry for the configured CloudName that matches the IP address of the Public API.

The following is an example of the contents for this file:
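A minimal sketch of such contents (the hostname and DNS server address are placeholder assumptions; substitute your own values):

```yaml
parameter_defaults:
  CloudName: overcloud.example.com
  DnsServers: ["10.0.0.1"]
```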


The deployment command (openstack overcloud deploy) in Section 4.4, “Creating the Overcloud” uses the -e option to add environment files. Add the environment files from this section in the following order:

  • The environment file to enable SSL/TLS (enable-tls.yaml)
  • The environment file to set the DNS hostname (cloudname.yaml)
  • The environment file to inject the root certificate authority (inject-trust-anchor.yaml)

For example:

$ openstack overcloud deploy --templates [...] -e /home/stack/templates/enable-tls.yaml -e ~/templates/cloudname.yaml -e ~/templates/inject-trust-anchor.yaml

4.4. Creating the Overcloud

The creation of an Overcloud that uses an external load balancer requires additional arguments to the openstack overcloud deploy command. For example:

$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/network-environment.yaml  -e /usr/share/openstack-tripleo-heat-templates/environments/external-loadbalancer-vip.yaml -e ~/external-lb.yaml --control-scale 3 --compute-scale 1 --control-flavor control --compute-flavor compute [ADDITIONAL OPTIONS]

The above command uses the following options:

  • --templates - Creates the Overcloud from the default Heat template collection.
  • -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml - Adds an additional environment file to the Overcloud deployment. In this case, it is an environment file that initializes network isolation configuration.
  • -e ~/network-environment.yaml - Adds an additional environment file to the Overcloud deployment. In this case, it is the network environment file created previously.
  • -e /usr/share/openstack-tripleo-heat-templates/environments/external-loadbalancer-vip.yaml - Adds an additional environment file to the Overcloud deployment. In this case, it is an environment file that initializes the external load balancing configuration. Note that you should include this environment file after the network configuration files.
  • -e ~/external-lb.yaml - Adds an additional environment file to the Overcloud deployment. In this case, it is the environment file containing our external load balancer configuration. Note that you should include this environment file after the network configuration files.
  • --control-scale 3 - Scale the Controller nodes to three.
  • --compute-scale 1 - Scale the Compute nodes to one.
  • --control-flavor control - Use a specific flavor for the Controller nodes.
  • --compute-flavor compute - Use a specific flavor for the Compute nodes.

For a full list of options, run:

$ openstack help overcloud deploy

See also 7.1. Setting Overcloud Parameters in the Red Hat OpenStack Platform 8 Director Installation and Usage guide for parameter examples.

The Overcloud creation process begins and the director provisions your nodes. This process takes some time to complete. To view the status of the Overcloud creation, open a separate terminal as the stack user and run:

$ source ~/stackrc
$ heat stack-list --show-nested

4.5. Accessing the Overcloud

The director generates a script to configure and help authenticate interactions with your Overcloud from the director host. The director saves this file, overcloudrc, in your stack user’s home directory. Run the following command to use this file:

$ source ~/overcloudrc

This loads the necessary environment variables to interact with your Overcloud from the director host’s CLI. To return to interacting with the director’s host, run the following command:

$ source ~/stackrc

4.6. Completing the Overcloud Configuration

This concludes the creation of an Overcloud that uses an external load balancer.

For fencing the high availability cluster, see 8.6. Fencing the Controller Nodes in the Red Hat OpenStack Platform 8 Director Installation and Usage guide.

For post-creation functions, see Chapter 8. Performing Tasks after Overcloud Creation in the Red Hat OpenStack Platform 8 Director Installation and Usage guide.