Chapter 8. Configuring a Basic Overcloud using Pre-Provisioned Nodes

This chapter provides the basic configuration steps for using pre-provisioned nodes to configure an OpenStack Platform environment. This scenario differs from the standard overcloud creation scenarios in multiple ways:

  • You can provision nodes using an external tool and let the director control the overcloud configuration only.
  • You can use nodes without relying on the director’s provisioning methods. This is useful if creating an overcloud without power management control or using networks with DHCP/PXE boot restrictions.
  • The director does not use OpenStack Compute (nova), OpenStack Bare Metal (ironic), or OpenStack Image (glance) for managing nodes.
  • Pre-provisioned nodes can use a custom partitioning layout.

This scenario provides basic configuration with no custom features. However, you can add advanced configuration options to this basic overcloud and customize it to your specifications using the instructions in the Advanced Overcloud Customization guide.

Important

Mixing pre-provisioned nodes with director-provisioned nodes in an overcloud is not supported.

Requirements

  • The director node created in Chapter 4, Installing the undercloud.
  • A set of bare metal machines for your nodes. The number of nodes required depends on the type of overcloud you intend to create (see Section 3.1, “Planning Node Deployment Roles” for information on overcloud roles). These machines also must comply with the requirements set for each node type. For these requirements, see Section 2.4, “Overcloud Requirements”. These nodes require Red Hat Enterprise Linux 7.5 or later installed as the host operating system. Red Hat recommends using the latest version available.
  • One network connection for managing the pre-provisioned nodes. This scenario requires uninterrupted SSH access to the nodes for orchestration agent configuration.
  • One network connection for the Control Plane network. There are two main scenarios for this network:

    • Using the Provisioning Network as the Control Plane, which is the default scenario. This network is usually a layer-3 (L3) routable network connection from the pre-provisioned nodes to the director. The examples for this scenario use the following IP address assignments:

      Table 8.1. Provisioning Network IP Assignments

      Node Name      IP Address
      Director       192.168.24.1
      Controller 0   192.168.24.2
      Compute 0      192.168.24.3

    • Using a separate network. In situations where the director’s Provisioning network is a private non-routable network, you can define IP addresses for the nodes from any subnet and communicate with the director over the Public API endpoint. There are certain caveats to this scenario, which this chapter examines later in Section 8.6, “Using a Separate Network for Overcloud Nodes”.
  • All other network types in this example also use the Control Plane network for OpenStack services. However, you can create additional networks for other network traffic types.
  • If any nodes use Pacemaker resources, the service user hacluster and the service group haclient must have a UID/GID of 189. This is due to CVE-2018-16877. If you installed Pacemaker together with the operating system, the installation creates these IDs automatically. If the ID values are set incorrectly, follow the steps in the article OpenStack minor update / fast-forward upgrade can fail on the controller nodes at pacemaker step with "Could not evaluate: backup_cib" to change the ID values.
  • To prevent some services from binding to an incorrect IP address and causing deployment failures, make sure that the /etc/hosts file does not include the node-name=127.0.0.1 mapping. One way to verify both this mapping and the Pacemaker ID values is sketched after this list.
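
The following is a minimal pre-deployment check that you can run on each node to verify the Pacemaker ID values and the /etc/hosts mapping described in the requirements above:

[root@controller-0 ~]# id hacluster           # expect uid=189(hacluster) gid=189(haclient)
[root@controller-0 ~]# getent group haclient  # expect haclient:x:189:
[root@controller-0 ~]# grep 127.0.0.1 /etc/hosts  # the node's own hostname must not map to 127.0.0.1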

8.1. Creating a User for Configuring Nodes

At a later stage in this process, the director requires SSH access to the overcloud nodes as the stack user.

  1. On each overcloud node, create the user named stack and set a password on each node. For example, use the following on the Controller node:

    [root@controller-0 ~]# useradd stack
    [root@controller-0 ~]# passwd stack  # specify a password
  2. Disable password requirements for this user when using sudo:

    [root@controller-0 ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
    [root@controller-0 ~]# chmod 0440 /etc/sudoers.d/stack
  3. Once you have created and configured the stack user on all pre-provisioned nodes, copy the stack user’s public SSH key from the director node to each overcloud node (a loop that covers all nodes is sketched after this procedure). For example, to copy the director’s public SSH key to the Controller node:

    [stack@director ~]$ ssh-copy-id stack@192.168.24.2
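
If you have several nodes, you can repeat the key copy in a loop. The following is a minimal sketch that assumes the node IP addresses from Table 8.1:

[stack@director ~]$ for NODE in 192.168.24.2 192.168.24.3; do
    ssh-copy-id stack@$NODE
done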

8.2. Registering the Operating System for Nodes

Each node requires access to a Red Hat subscription.

Important

Standalone Ceph nodes are an exception and do not require a Red Hat OpenStack Platform subscription. For standalone Ceph nodes, the director requires newer Ansible packages. Enable the rhel-7-server-openstack-13-deployment-tools-rpms repository on all Ceph nodes without active Red Hat OpenStack Platform subscriptions to obtain Red Hat OpenStack Platform-compatible deployment tools.

The following procedure shows how to register each node to the Red Hat Content Delivery Network. Perform these steps on each node:

  1. Run the registration command and enter your Customer Portal user name and password when prompted:

    [root@controller-0 ~]# sudo subscription-manager register
  2. Find the entitlement pool ID for Red Hat OpenStack Platform 13:

    [root@controller-0 ~]# sudo subscription-manager list --available --all --matches="Red Hat OpenStack"
  3. Use the pool ID located in the previous step to attach the Red Hat OpenStack Platform 13 entitlements:

    [root@controller-0 ~]# sudo subscription-manager attach --pool=pool_id
  4. Disable all default repositories:

    [root@controller-0 ~]# sudo subscription-manager repos --disable=*
  5. Enable the required Red Hat Enterprise Linux repositories.

    1. For x86_64 systems, run:

      [root@controller-0 ~]# sudo subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-ha-for-rhel-7-server-rpms --enable=rhel-7-server-openstack-13-rpms --enable=rhel-7-server-rhceph-3-osd-rpms --enable=rhel-7-server-rhceph-3-mon-rpms --enable=rhel-7-server-rhceph-3-tools-rpms
    2. For POWER systems, run:

      [root@controller-0 ~]# sudo subscription-manager repos --enable=rhel-7-for-power-le-rpms --enable=rhel-7-server-openstack-13-for-power-le-rpms
    Important

    Only enable the repositories listed in Section 2.5, “Repository Requirements”. Additional repositories can cause package and software conflicts. Do not enable any additional repositories.

  6. Update your system to ensure you have the latest base system packages:

    [root@controller-0 ~]# sudo yum update -y
    [root@controller-0 ~]# sudo reboot

The node is now ready to use for your overcloud.

8.3. Installing the User Agent on Nodes

Each pre-provisioned node uses the OpenStack Orchestration (heat) agent to communicate with the director. The agent on each node polls the director and obtains metadata tailored to each node. This metadata allows the agent to configure each node.

Install the initial packages for the orchestration agent on each node:

[root@controller-0 ~]# sudo yum -y install python-heat-agent*
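
To confirm that the agent components are present before continuing, you can query the installed packages. This is a quick check only; the os-collect-config package is normally pulled in as a dependency of the agent packages:

[root@controller-0 ~]# rpm -qa | grep -E 'python-heat-agent|os-collect-config'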

8.4. Configuring SSL/TLS Access to the Director

If the director uses SSL/TLS, the pre-provisioned nodes require the certificate authority file used to sign the director’s SSL/TLS certificates. If using your own certificate authority, perform the following on each overcloud node:

  1. Copy the certificate authority file to the /etc/pki/ca-trust/source/anchors/ directory on each pre-provisioned node.
  2. Run the following command on each overcloud node:

    [root@controller-0 ~]#  sudo update-ca-trust extract

This ensures the overcloud nodes can access the director’s Public API over SSL/TLS.
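
Optionally, you can confirm that the certificate authority is now trusted by sending a request to the director’s Public API. This is a sketch only; it assumes director.example.com is your undercloud_public_host and 13000 is the SSL-enabled Identity service port in your environment:

[root@controller-0 ~]# curl https://director.example.com:13000/
# A JSON response with no certificate error indicates that the certificate authority file is trusted.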

8.5. Configuring Networking for the Control Plane

The pre-provisioned overcloud nodes obtain metadata from the director using standard HTTP requests. This means all overcloud nodes require L3 access to either:

  • The director’s Control Plane network, which is the subnet defined with the network_cidr parameter in your undercloud.conf file. The nodes require either direct access to this subnet or routable access to it.
  • The director’s Public API endpoint, specified as the undercloud_public_host parameter from your undercloud.conf file. This option is available if either you do not have an L3 route to the Control Plane or you aim to use SSL/TLS communication when polling the director for metadata. See Section 8.6, “Using a Separate Network for Overcloud Nodes” for additional steps for configuring your overcloud nodes to use the Public API endpoint.

The director uses a Control Plane network to manage and configure a standard overcloud. For an overcloud with pre-provisioned nodes, your network configuration might require some modification to accommodate how the director communicates with the pre-provisioned nodes.

Using Network Isolation

Network isolation allows you to group services to use specific networks, including the Control Plane. The Advanced Overcloud Customization guide contains multiple network isolation strategies. You can also define specific IP addresses for nodes on the Control Plane. For more information on network isolation and predictable node placement strategies, see the Advanced Overcloud Customization guide.

Note

If using network isolation, make sure your NIC templates do not include the NIC used for undercloud access. These templates can reconfigure the NIC, which can lead to connectivity and configuration problems during deployment.

Assigning IP Addresses

If not using network isolation, you can use a single Control Plane network to manage all services. This requires manual configuration of the Control Plane NIC on each node to use an IP address within the Control Plane network range. If using the director’s Provisioning network as the Control Plane, make sure the chosen overcloud IP addresses fall outside of the DHCP ranges for both provisioning (dhcp_start and dhcp_end) and introspection (inspection_iprange).
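
For example, you might assign a static Control Plane IP address to the Controller node with NetworkManager. This is a minimal sketch that assumes a connection named eth1 manages the Control Plane NIC and uses the address from Table 8.1:

[root@controller-0 ~]# nmcli connection modify eth1 ipv4.method manual ipv4.addresses 192.168.24.2/24
[root@controller-0 ~]# nmcli connection up eth1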

During standard overcloud creation, the director creates OpenStack Networking (neutron) ports to automatically assign IP addresses to the overcloud nodes on the Provisioning / Control Plane network. However, this can cause the director to assign IP addresses that differ from the ones manually configured for each node. In this situation, use a predictable IP address strategy to force the director to use the pre-provisioned IP assignments on the Control Plane.

An example of a predictable IP strategy is to use an environment file (ctlplane-assignments.yaml) with the following IP assignments:

resource_registry:
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml

parameter_defaults:
  DeployedServerPortMap:
    controller-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.2
      subnets:
        - cidr: 24
    compute-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.3
      subnets:
        - cidr: 24

In this example, the OS::TripleO::DeployedServer::ControlPlanePort resource passes a set of parameters to the director and defines the IP assignments of our pre-provisioned nodes. The DeployedServerPortMap parameter defines the IP addresses and subnet CIDRs that correspond to each overcloud node. The mapping defines:

  1. The name of the assignment, which follows the format <node_hostname>-<network> where the <node_hostname> value matches the short hostname for the node and <network> matches the lowercase name of the network. For example: controller-0-ctlplane for controller-0.example.com and compute-0-ctlplane for compute-0.example.com.
  2. The IP assignments, which use the following parameter patterns:

    • fixed_ips/ip_address - Defines the fixed IP addresses for the control plane. Use multiple ip_address parameters in a list to define multiple IP addresses.
    • subnets/cidr - Defines the CIDR value for the subnet.

A later step in this chapter uses the resulting environment file (ctlplane-assignments.yaml) as part of the openstack overcloud deploy command.
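
For example, the deployment command might include the file with the -e option. This is a sketch only; /home/stack/templates/ is an assumed location for the environment file:

(undercloud) $ openstack overcloud deploy \
  [other arguments] \
  -e /home/stack/templates/ctlplane-assignments.yaml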

8.6. Using a Separate Network for Overcloud Nodes

By default, the director uses the Provisioning network as the overcloud Control Plane. However, if this network is isolated and non-routable, nodes cannot communicate with the director’s Internal API during configuration. In this situation, you might need to define a separate network for the nodes and configure them to communicate with the director over the Public API.

This scenario has several additional requirements, which the rest of this section describes.

The examples in this section use IP address assignments that differ from the main scenario:

Table 8.2. Provisioning Network IP Assignments

Node Name                  IP Address or FQDN
Director (Internal API)    192.168.24.1 (Provisioning Network and Control Plane)
Director (Public API)      10.1.1.1 / director.example.com
Overcloud Virtual IP       192.168.100.1
Controller 0               192.168.100.2
Compute 0                  192.168.100.3

The following sections provide additional configuration for situations that require a separate network for overcloud nodes.

Orchestration Configuration

With SSL/TLS communication enabled on the undercloud, the director provides a Public API endpoint for most services. However, OpenStack Orchestration (heat) uses the internal endpoint as a default provider for metadata. This means the undercloud requires some modification so overcloud nodes can access OpenStack Orchestration on public endpoints. This modification involves changing some Puppet hieradata on the director.

The hieradata_override parameter in your undercloud.conf file allows you to specify additional Puppet hieradata for the undercloud configuration. Use the following steps to modify the hieradata relevant to OpenStack Orchestration:

  1. If you are not using a hieradata_override file already, create a new one. This example uses one located at /home/stack/hieradata.yaml.
  2. Include the following hieradata in /home/stack/hieradata.yaml:

    heat_clients_endpoint_type: public
    heat::engine::default_deployment_signal_transport: TEMP_URL_SIGNAL

    This changes the endpoint type from the default internal to public and changes the signaling method to use TempURLs from OpenStack Object Storage (swift).

  3. In your undercloud.conf, set the hieradata_override parameter to the path of the hieradata file:

    hieradata_override = /home/stack/hieradata.yaml
  4. Rerun the openstack undercloud install command to implement the new configuration options.

This switches the orchestration metadata server to use URLs on the director’s Public API.
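
After the undercloud installation completes, you can confirm that the Orchestration service exposes a public endpoint. This is a quick check using the OpenStack client; the service is registered as heat in the Identity service catalog:

[stack@director ~]$ source ~/stackrc
(undercloud) $ openstack endpoint list --service heat --interface public -c URL -f value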

IP Address Assignments

The method for IP assignments is similar to Section 8.5, “Configuring Networking for the Control Plane”. However, since the Control Plane is not routable from the deployed servers, you use the DeployedServerPortMap parameter to assign IP addresses from your chosen overcloud node subnet, including the virtual IP address to access the Control Plane. The following is a modified version of the ctlplane-assignments.yaml environment file from Section 8.5, “Configuring Networking for the Control Plane” that accommodates this network architecture:

resource_registry:
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  OS::TripleO::Network::Ports::ControlPlaneVipPort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml 1

parameter_defaults:
  NeutronPublicInterface: eth1
  EC2MetadataIp: 192.168.100.1 2
  ControlPlaneDefaultRoute: 192.168.100.1
  DeployedServerPortMap:
    control_virtual_ip:
      fixed_ips:
        - ip_address: 192.168.100.1
      subnets:
        - cidr: 24
    controller-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.100.2
      subnets:
        - cidr: 24
    compute-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.100.3
      subnets:
        - cidr: 24
1
The RedisVipPort resource is mapped to network/ports/noop.yaml. This mapping is because the default Redis VIP address comes from the Control Plane. In this situation, we use a noop to disable this Control Plane mapping.
2
The EC2MetadataIp and ControlPlaneDefaultRoute parameters are set to the value of the Control Plane virtual IP address. The default NIC configuration templates require these parameters and you must set them to use a pingable IP address to pass the validations performed during deployment. Alternatively, customize the NIC configuration so they do not require these parameters.

8.7. Configuring Ceph Storage for Pre-Provisioned Nodes

When using ceph-ansible and servers that are already deployed, you must run commands, such as the following, from the undercloud before deployment:

export OVERCLOUD_HOSTS="192.168.1.8 192.168.1.42"

bash /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/enable-ssh-admin.sh

Using the example export command, set the OVERCLOUD_HOSTS variable to the IP addresses of the overcloud hosts intended to be used as Ceph clients (such as the Compute, Block Storage, Image, File System, Telemetry services, and so forth). The enable-ssh-admin.sh script configures a user on the overcloud nodes that Ansible uses to configure Ceph clients.

8.8. Creating the Overcloud with Pre-Provisioned Nodes

The overcloud deployment uses the standard CLI methods from Section 6.11, “Creating the Overcloud with the CLI Tools”. For pre-provisioned nodes, the deployment command requires some additional options and environment files from the core Heat template collection:

  • --disable-validations - Disables basic CLI validations for services not used with pre-provisioned infrastructure. Without this option, the deployment fails.
  • environments/deployed-server-environment.yaml - Main environment file for creating and configuring pre-provisioned infrastructure. This environment file substitutes the OS::Nova::Server resources with OS::Heat::DeployedServer resources.
  • environments/deployed-server-bootstrap-environment-rhel.yaml - Environment file to execute a bootstrap script on the pre-provisioned servers. This script installs additional packages and provides basic configuration for overcloud nodes.
  • environments/deployed-server-pacemaker-environment.yaml - Environment file for Pacemaker configuration on pre-provisioned Controller nodes. The namespace for the resources registered in this file uses the Controller role name from deployed-server/deployed-server-roles-data.yaml, which is ControllerDeployedServer by default.
  • deployed-server/deployed-server-roles-data.yaml - An example custom roles file. This file replicates the default roles_data.yaml but also includes the disable_constraints: True parameter for each role. This parameter disables orchestration constraints in the generated role templates. These constraints are for services not used with pre-provisioned infrastructure.

    If using your own custom roles file, make sure to include the disable_constraints: True parameter with each role. For example:

    - name: ControllerDeployedServer
      disable_constraints: True
      CountDefault: 1
      ServicesDefault:
        - OS::TripleO::Services::CACerts
        - OS::TripleO::Services::CephMon
        - OS::TripleO::Services::CephExternal
        - OS::TripleO::Services::CephRgw
        ...

The following is an example overcloud deployment command with the environment files specific to the pre-provisioned architecture:

$ source ~/stackrc
(undercloud) $ openstack overcloud deploy \
  [other arguments] \
  --disable-validations \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-bootstrap-environment-rhel.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-pacemaker-environment.yaml \
  -r /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-server-roles-data.yaml

This begins the overcloud configuration. However, the deployment stack pauses when the overcloud node resources enter the CREATE_IN_PROGRESS stage:

2017-01-14 13:25:13Z [overcloud.Compute.0.Compute]: CREATE_IN_PROGRESS  state changed
2017-01-14 13:25:14Z [overcloud.Controller.0.Controller]: CREATE_IN_PROGRESS  state changed

This pause is due to the director waiting for the orchestration agent on the overcloud nodes to poll the metadata server. The next section shows how to configure nodes to start polling the metadata server.

8.9. Polling the Metadata Server

The deployment is now in progress but paused at a CREATE_IN_PROGRESS stage. The next step is to configure the orchestration agent on the overcloud nodes to poll the metadata server on the director. There are two ways to accomplish this:

Important

Only use automatic configuration for the initial deployment. Do not use automatic configuration if scaling up your nodes.

Automatic Configuration

The director’s core Heat template collection contains a script that performs automatic configuration of the Heat agent on the overcloud nodes. The script requires you to source the stackrc file as the stack user to authenticate with the director and query the orchestration service:

[stack@director ~]$ source ~/stackrc

In addition, the script requires environment variables that define the node roles and their IP addresses. These environment variables are:

OVERCLOUD_ROLES
A space-separated list of roles to configure. These roles correlate to roles defined in your roles data file.
[ROLE]_hosts
Each role requires an environment variable with a space-separated list of IP addresses for nodes in the role.

The following commands demonstrate how to set these environment variables:

(undercloud) $ export OVERCLOUD_ROLES="ControllerDeployedServer ComputeDeployedServer"
(undercloud) $ export ControllerDeployedServer_hosts="192.168.100.2"
(undercloud) $ export ComputeDeployedServer_hosts="192.168.100.3"

Run the script to configure the orchestration agent on each overcloud node:

(undercloud) $ /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/get-occ-config.sh
Note

The script accesses the pre-provisioned nodes over SSH using the same user executing the script. In this case, the script authenticates with the stack user.

The script accomplishes the following:

  • Queries the director’s orchestration services for the metadata URL for each node.
  • Accesses the node and configures the agent on each node with its specific metadata URL.
  • Restarts the orchestration agent service.

Once the script completes, the overcloud nodes start polling the orchestration service on the director. The stack deployment continues.

Manual Configuration

If you prefer to manually configure the orchestration agent on the pre-provisioned nodes, use the following command to query the orchestration service on the director for each node’s metadata URL:

[stack@director ~]$ source ~/stackrc
(undercloud) $ for STACK in $(openstack stack resource list -n5 --filter name=deployed-server -c stack_name -f value overcloud) ; do
    STACKID=$(echo $STACK | cut -d '-' -f2,4 --output-delimiter " ")
    echo "== Metadata URL for $STACKID =="
    openstack stack resource metadata $STACK deployed-server | jq -r '.["os-collect-config"].request.metadata_url'
    echo
done

This displays the stack name and metadata URL for each node:

== Metadata URL for ControllerDeployedServer 0 ==
http://192.168.24.1:8080/v1/AUTH_6fce4e6019264a5b8283e7125f05b764/ov-edServer-ts6lr4tm5p44-deployed-server-td42md2tap4g/43d302fa-d4c2-40df-b3ac-624d6075ef27?temp_url_sig=58313e577a93de8f8d2367f8ce92dd7be7aac3a1&temp_url_expires=2147483586

== Metadata URL for ComputeDeployedServer 0 ==
http://192.168.24.1:8080/v1/AUTH_6fce4e6019264a5b8283e7125f05b764/ov-edServer-wdpk7upmz3eh-deployed-server-ghv7ptfikz2j/0a43e94b-fe02-427b-9bfe-71d2b7bb3126?temp_url_sig=8a50d8ed6502969f0063e79bb32592f4203a136e&temp_url_expires=2147483586

On each overcloud node:

  1. Remove the existing os-collect-config.conf template. This ensures the agent does not override our manual changes:

    $ sudo /bin/rm -f /usr/libexec/os-apply-config/templates/etc/os-collect-config.conf
  2. Configure the /etc/os-collect-config.conf file to use the corresponding metadata URL. For example, the Controller node uses the following:

    [DEFAULT]
    collectors=request
    command=os-refresh-config
    polling_interval=30
    
    [request]
    metadata_url=http://192.168.24.1:8080/v1/AUTH_6fce4e6019264a5b8283e7125f05b764/ov-edServer-ts6lr4tm5p44-deployed-server-td42md2tap4g/43d302fa-d4c2-40df-b3ac-624d6075ef27?temp_url_sig=58313e577a93de8f8d2367f8ce92dd7be7aac3a1&temp_url_expires=2147483586
  3. Save the file.
  4. Restart the os-collect-config service:

    [stack@controller ~]$ sudo systemctl restart os-collect-config

After you configure and restart the orchestration agents, they poll the director’s orchestration service for the overcloud configuration. The deployment stack continues its creation and the stack for each node eventually changes to CREATE_COMPLETE.
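
To confirm that an agent is polling successfully, you can watch its logs on the node. This is a minimal check that uses the os-collect-config service unit mentioned in the previous step:

[stack@controller ~]$ sudo journalctl -u os-collect-config -f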

8.10. Monitoring the Overcloud Creation

The overcloud configuration process begins. This process takes some time to complete. To view the status of the overcloud creation, open a separate terminal as the stack user and run:

[stack@director ~]$ source ~/stackrc
(undercloud) $ heat stack-list --show-nested

The heat stack-list --show-nested command shows the current stage of the overcloud creation.

8.11. Accessing the Overcloud

The director generates a script to configure and help authenticate interactions with your overcloud from the director host. The director saves this file, overcloudrc, in your stack user’s home directory. Run the following command to use this file:

(undercloud) $ source ~/overcloudrc

This loads the necessary environment variables to interact with your overcloud from the director host’s CLI. The command prompt changes to indicate this:

(overcloud) $

To return to interacting with the director’s host, run the following command:

(overcloud) $ source ~/stackrc
(undercloud) $

8.12. Scaling Pre-Provisioned Nodes

The process for scaling pre-provisioned nodes is similar to the standard scaling procedures in Chapter 12, Scaling overcloud nodes. However, the process for adding new pre-provisioned nodes differs since pre-provisioned nodes do not use the standard registration and management process from OpenStack Bare Metal (ironic) and OpenStack Compute (nova).

Scaling Up Pre-Provisioned Nodes

When scaling up the overcloud with pre-provisioned nodes, you need to configure the orchestration agent on each node to correspond to the director’s node count.

The general process for scaling up pre-provisioned nodes includes the following steps:

  1. Prepare the new pre-provisioned nodes according to the Requirements.
  2. Scale up the nodes. See Chapter 12, Scaling overcloud nodes for these instructions.
  3. After executing the deployment command, wait until the director creates the new node resources. Manually configure the pre-provisioned nodes to poll the director’s orchestration server metadata URL as per the instructions in Section 8.9, “Polling the Metadata Server”.

Scaling Down Pre-Provisioned Nodes

When scaling down the overcloud with pre-provisioned nodes, follow the scale down instructions as normal as shown in Chapter 12, Scaling overcloud nodes.

In most scaling operations, you must obtain the UUID value of the node to pass to openstack overcloud node delete. To obtain this UUID, list the resources for the specific role:

$ openstack stack resource list overcloud -c physical_resource_id -c stack_name -n5 --filter type=OS::TripleO::<RoleName>Server

Replace <RoleName> in the above command with the actual name of the role that you are scaling down. For example, for the ComputeDeployedServer role:

$ openstack stack resource list overcloud -c physical_resource_id -c stack_name -n5 --filter type=OS::TripleO::ComputeDeployedServerServer

Use the stack_name column in the command output to identify the UUID associated with each node. The stack_name includes the integer value of the index of the node in the Heat resource group. For example, in the following sample output:

+------------------------------------+----------------------------------+
| physical_resource_id               | stack_name                       |
+------------------------------------+----------------------------------+
| 294d4e4d-66a6-4e4e-9a8b-           | overcloud-ComputeDeployedServer- |
| 03ec80beda41                       | no7yfgnh3z7e-1-ytfqdeclwvcg      |
| d8de016d-                          | overcloud-ComputeDeployedServer- |
| 8ff9-4f29-bc63-21884619abe5        | no7yfgnh3z7e-0-p4vb3meacxwn      |
| 8c59f7b1-2675-42a9-ae2c-           | overcloud-ComputeDeployedServer- |
| 2de4a066f2a9                       | no7yfgnh3z7e-2-mmmaayxqnf3o      |
+------------------------------------+----------------------------------+

The indices 0, 1, or 2 in the stack_name column correspond to the node order in the Heat resource group. Pass the corresponding UUID value from the physical_resource_id column to the openstack overcloud node delete command.
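
For example, to remove the node with index 1 from the sample output above, the command might take the following form. This is a sketch only; confirm the stack name and UUID in your own environment:

$ openstack overcloud node delete --stack overcloud 294d4e4d-66a6-4e4e-9a8b-03ec80beda41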

Once you have removed overcloud nodes from the stack, power off these nodes. Under a standard deployment, the bare metal services on the director control this function. However, with pre-provisioned nodes, you should either manually shut down these nodes or use the power management control for each physical system. If you do not power off the nodes after removing them from the stack, they might remain operational and reconnect as part of the overcloud environment.

After powering down the removed nodes, reprovision them back to a base operating system configuration so that they do not unintentionally join the overcloud in the future.

Note

Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The scale down process only removes the node from the overcloud stack and does not uninstall any packages.

8.13. Removing a Pre-Provisioned Overcloud

Removing an entire overcloud that uses pre-provisioned nodes uses the same procedure as a standard overcloud. See Section 9.12, “Removing the Overcloud” for more details.

After removing the overcloud, power off all nodes and reprovision them back to a base operating system configuration.

Note

Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The removal process only deletes the overcloud stack and does not uninstall any packages.

8.14. Completing the Overcloud Creation

This concludes the creation of the overcloud using pre-provisioned nodes. For post-creation functions, see Chapter 9, Performing Tasks after Overcloud Creation.