Chapter 9. Configuring a basic overcloud with pre-provisioned nodes
This chapter contains basic configuration procedures that you can use to configure a Red Hat OpenStack Platform (RHOSP) environment with pre-provisioned nodes. This scenario differs from the standard overcloud creation scenarios in several ways:
- You can provision nodes with an external tool and let the director control the overcloud configuration only.
- You can use nodes without relying on the director provisioning methods. This is useful if you want to create an overcloud without power management control, or use networks with DHCP/PXE boot restrictions.
- The director does not use OpenStack Compute (nova), OpenStack Bare Metal (ironic), or OpenStack Image (glance) to manage nodes.
- Pre-provisioned nodes can use a custom partitioning layout that does not rely on the QCOW2 overcloud-full image.
This scenario includes only basic configuration with no custom features. However, you can add advanced configuration options to this basic overcloud and customize it to your specifications with the instructions in the Advanced Overcloud Customization guide.
You cannot combine pre-provisioned nodes with director-provisioned nodes.
9.1. Pre-provisioned node requirements
Before you begin deploying an overcloud with pre-provisioned nodes, ensure that the following configuration is present in your environment:
- The director node that you created in Chapter 4, Installing director on the undercloud.
- A set of bare metal machines for your nodes. The number of nodes required depends on the type of overcloud you intend to create. These machines must comply with the requirements set for each node type. These nodes require Red Hat Enterprise Linux 8.2 installed as the host operating system. Red Hat recommends using the latest version available.
- One network connection for managing the pre-provisioned nodes. This scenario requires uninterrupted SSH access to the nodes for orchestration agent configuration.
- One network connection for the Control Plane network. There are two main scenarios for this network:
- Using the Provisioning Network as the Control Plane, which is the default scenario. This network is usually a layer-3 (L3) routable network connection from the pre-provisioned nodes to director. The examples for this scenario use the following IP address assignments:
Table 9.1. Provisioning Network IP assignments

| Node name | IP address |
| Director | 192.168.24.1 |
| Controller 0 | 192.168.24.2 |
| Compute 0 | 192.168.24.3 |
- Using a separate network. In situations where the director’s Provisioning network is a private non-routable network, you can define IP addresses for nodes from any subnet and communicate with director over the Public API endpoint. For more information about the requirements for this scenario, see Section 9.6, “Using a separate network for pre-provisioned nodes”.
- All other network types in this example also use the Control Plane network for OpenStack services. However, you can create additional networks for other network traffic types.
If any nodes use Pacemaker resources, the service user hacluster and the service group haclient must have a UID/GID of 189. This is due to CVE-2018-16877. If you installed Pacemaker together with the operating system, the installation creates these IDs automatically. If the ID values are set incorrectly, follow the steps in the article OpenStack minor update / fast-forward upgrade can fail on the controller nodes at pacemaker step with "Could not evaluate: backup_cib" to change the ID values.
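If you are not sure whether the IDs are correct, you can check the existing entries on each node. This is a minimal check; the hacluster user and haclient group exist only after the Pacemaker packages are installed:

[root@controller-0 ~]# getent passwd hacluster
[root@controller-0 ~]# getent group haclient

Both entries should report an ID of 189.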
To prevent some services from binding to an incorrect IP address and causing deployment failures, make sure that the /etc/hosts file does not include an entry that maps the node host name to 127.0.0.1.
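As a quick sanity check on each node, you can list the loopback entries in the /etc/hosts file and confirm that the node host name does not appear on them:

[root@controller-0 ~]# grep -E '^127\.0\.0\.1|^::1' /etc/hosts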
9.2. Creating a user on pre-provisioned nodes
When you configure an overcloud with pre-provisioned nodes, director requires SSH access to the overcloud nodes. On the pre-provisioned nodes, you must create a user with SSH key authentication and configure passwordless sudo access for that user. After you create a user on pre-provisioned nodes, you can use the --overcloud-ssh-user and --overcloud-ssh-key options with the openstack overcloud deploy command to create an overcloud with pre-provisioned nodes.
By default, the values for the overcloud SSH user and overcloud SSH key are the stack user and ~/.ssh/id_rsa. To create the stack user, complete the following steps.
On each overcloud node, create the stack user and set a password. For example, run the following commands on the Controller node:

[root@controller-0 ~]# useradd stack
[root@controller-0 ~]# passwd stack  # specify a password
Disable password requirements for this user when using sudo:

[root@controller-0 ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
[root@controller-0 ~]# chmod 0440 /etc/sudoers.d/stack
After you create and configure the stack user on all pre-provisioned nodes, copy the stack user’s public SSH key from the director node to each overcloud node. For example, to copy the director’s public SSH key to the Controller node, run the following command:

[stack@director ~]$ ssh-copy-id stack@192.168.24.2
To copy your SSH keys, you might have to temporarily set PasswordAuthentication Yes in the SSH configuration of each overcloud node. After you copy the SSH keys, set PasswordAuthentication No and use the SSH keys to authenticate in the future.
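Before you continue, you can confirm that key-based SSH access and passwordless sudo work from the director node. This sketch uses the Controller node IP address from Table 9.1; substitute the address of each of your nodes:

[stack@director ~]$ ssh stack@192.168.24.2 "sudo -n whoami"

The command should return root without prompting for a password.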
9.3. Registering the operating system for pre-provisioned nodes
Each node requires access to a Red Hat subscription. Complete the following steps on each node to register your nodes with the Red Hat Content Delivery Network.
Enable only the repositories listed. Additional repositories can cause package and software conflicts. Do not enable any additional repositories.
Run the registration command and enter your Customer Portal user name and password when prompted:
[heat-admin@controller-0 ~]$ sudo subscription-manager register
Find the entitlement pool for Red Hat OpenStack Platform 16.1:
[heat-admin@controller-0 ~]$ sudo subscription-manager list --available --all --matches="Red Hat OpenStack"
Use the pool ID located in the previous step to attach the Red Hat OpenStack Platform 16.1 entitlements:
[heat-admin@controller-0 ~]$ sudo subscription-manager attach --pool=pool_id
Disable all default repositories:
[heat-admin@controller-0 ~]$ sudo subscription-manager repos --disable=*
Enable the required Red Hat Enterprise Linux repositories.
For x86_64 systems, run the following command:
[heat-admin@controller-0 ~]$ sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhel-8-for-x86_64-highavailability-eus-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=openstack-16.1-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=advanced-virt-for-rhel-8-x86_64-rpms
For POWER systems, run the following command:
[heat-admin@controller-0 ~]$ sudo subscription-manager repos --enable=rhel-8-for-ppc64le-baseos-rpms --enable=rhel-8-for-ppc64le-appstream-rpms --enable=rhel-8-for-ppc64le-highavailability-rpms --enable=ansible-2.8-for-rhel-8-ppc64le-rpms --enable=openstack-16-for-rhel-8-ppc64le-rpms --enable=fast-datapath-for-rhel-8-ppc64le-rpms
Set the container-tools repository module to version 2.0:

[heat-admin@controller-0 ~]$ sudo dnf module disable -y container-tools:rhel8
[heat-admin@controller-0 ~]$ sudo dnf module enable -y container-tools:2.0
If the overcloud uses Ceph Storage nodes, enable the relevant Ceph Storage repositories:
[heat-admin@cephstorage-0 ~]$ sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-highavailability-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=openstack-16.1-deployment-tools-for-rhel-8-x86_64-rpms
Lock each node to Red Hat Enterprise Linux 8.2 before you run the system update:
[heat-admin@controller-0 ~]$ sudo subscription-manager release --set=8.2
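You can confirm that the release lock is set by using the --show option:

[heat-admin@controller-0 ~]$ sudo subscription-manager release --show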
Update your system to ensure you have the latest base system packages:

[heat-admin@controller-0 ~]$ sudo dnf update -y
[heat-admin@controller-0 ~]$ sudo reboot
The node is now ready to use for your overcloud.
9.4. Configuring SSL/TLS access to director
If the director uses SSL/TLS, the pre-provisioned nodes require the certificate authority file used to sign the director’s SSL/TLS certificates. If you use your own certificate authority, perform the following actions on each overcloud node.
Copy the certificate authority file to the /etc/pki/ca-trust/source/anchors/ directory on each pre-provisioned node.
Run the following command on each overcloud node:
[root@controller-0 ~]# sudo update-ca-trust extract
These steps ensure that the overcloud nodes can access the director’s Public API over SSL/TLS.
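To confirm that the certificate authority is now part of the system trust store, you can search the trust list for the subject name of your certificate authority. The CA name in this sketch is a placeholder; replace it with the name of your own certificate authority:

[root@controller-0 ~]# trust list | grep -i "Example Local CA"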
9.5. Configuring networking for the control plane
The pre-provisioned overcloud nodes obtain metadata from director using standard HTTP requests. This means that all overcloud nodes require L3 access to one of the following:
- The director Control Plane network, which is the subnet that you define with the network_cidr parameter in your undercloud.conf file. The overcloud nodes require either direct access to this subnet or routable access to the subnet.
- The director Public API endpoint, which you specify with the undercloud_public_host parameter in your undercloud.conf file. This option is available if you do not have an L3 route to the Control Plane or if you want to use SSL/TLS communication. For more information about configuring your overcloud nodes to use the Public API endpoint, see Section 9.6, “Using a separate network for pre-provisioned nodes”.
Director uses the Control Plane network to manage and configure a standard overcloud. For an overcloud with pre-provisioned nodes, your network configuration might require some modification to accommodate communication between the director and the pre-provisioned nodes.
Using network isolation
You can use network isolation to group services to use specific networks, including the Control Plane. There are multiple network isolation strategies in the Advanced Overcloud Customization guide. You can also define specific IP addresses for nodes on the Control Plane. For more information about isolating networks and creating predictable node placement strategies, see the relevant sections in the Advanced Overcloud Customization guide.
If you use network isolation, ensure that your NIC templates do not include the NIC used for undercloud access. These templates can reconfigure the NIC, which introduces connectivity and configuration problems during deployment.
Assigning IP addresses
If you do not use network isolation, you can use a single Control Plane network to manage all services. This requires manual configuration of the Control Plane NIC on each node to use an IP address within the Control Plane network range. If you are using the director Provisioning network as the Control Plane, ensure that the overcloud IP addresses that you choose are outside of the DHCP ranges for both provisioning (dhcp_start and dhcp_end) and introspection (inspection_iprange).
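To check the ranges that your undercloud currently uses, you can search the undercloud.conf file on the director node. This sketch assumes the file is in the stack user's home directory, which is the default location; depending on your configuration, these parameters might appear in a named subnet section such as [ctlplane-subnet]:

[stack@director ~]$ grep -E 'dhcp_start|dhcp_end|inspection_iprange' ~/undercloud.conf

Choose overcloud node IP addresses that fall outside both ranges that this command returns.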
During standard overcloud creation, director creates OpenStack Networking (neutron) ports and automatically assigns IP addresses to the overcloud nodes on the Provisioning / Control Plane network. However, this can cause director to assign different IP addresses from the ones that you configure manually for each node. In this situation, use a predictable IP address strategy to force director to use the pre-provisioned IP assignments on the Control Plane.
For example, you can use an environment file, ctlplane-assignments.yaml, with the following IP assignments to implement a predictable IP strategy:
resource_registry:
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml

parameter_defaults:
  DeployedServerPortMap:
    controller-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.2
      subnets:
        - cidr: 192.168.24.0/24
      network:
        tags:
          - 192.168.24.0/24
    compute-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.3
      subnets:
        - cidr: 192.168.24.0/24
      network:
        tags:
          - 192.168.24.0/24
In this example, the OS::TripleO::DeployedServer::ControlPlanePort resource passes a set of parameters to director and defines the IP assignments of your pre-provisioned nodes. Use the DeployedServerPortMap parameter to define the IP addresses and subnet CIDRs that correspond to each overcloud node. The mapping defines the following attributes:
- The name of the assignment, which follows the format <node_hostname>-<network>, where the <node_hostname> value matches the short host name for the node, and <network> matches the lowercase name of the network. For example: controller-0-ctlplane for controller-0, and compute-0-ctlplane for compute-0.
- The IP assignments, which use the following parameter patterns:
  - fixed_ips/ip_address - Defines the fixed IP addresses for the control plane. Use multiple ip_address parameters in a list to define multiple IP addresses.
  - subnets/cidr - Defines the CIDR value for the subnet.
A later section in this chapter uses the resulting environment file (ctlplane-assignments.yaml) as part of the openstack overcloud deploy command.
9.6. Using a separate network for pre-provisioned nodes
By default, director uses the Provisioning network as the overcloud Control Plane. However, if this network is isolated and non-routable, nodes cannot communicate with the director Internal API during configuration. In this situation, you might need to define a separate network for the nodes and configure them to communicate with the director over the Public API.
There are several requirements for this scenario:
- The overcloud nodes must accommodate the basic network configuration from Section 9.5, “Configuring networking for the control plane”.
- You must enable SSL/TLS on the director for Public API endpoint usage. For more information, see Section 4.2, “Director configuration parameters” and Chapter 20, Configuring custom SSL/TLS certificates.
- You must define an accessible fully qualified domain name (FQDN) for director. This FQDN must resolve to a routable IP address for the director. Use the undercloud_public_host parameter in the undercloud.conf file to set this FQDN.
The examples in this section use IP address assignments that differ from the main scenario:
Table 9.2. Provisioning network IP assignments
| Node Name | IP address or FQDN |
| Director (Internal API) | 192.168.24.1 (Provisioning Network and Control Plane) |
| Director (Public API) | 10.1.1.1 / director.example.com |
| Overcloud Virtual IP | 192.168.100.1 |
The following sections provide additional configuration for situations that require a separate network for overcloud nodes.
IP address assignments
The method for IP assignments is similar to Section 9.5, “Configuring networking for the control plane”. However, since the Control Plane is not routable from the deployed servers, you must use the DeployedServerPortMap parameter to assign IP addresses from your chosen overcloud node subnet, including the virtual IP address to access the Control Plane. The following example is a modified version of the ctlplane-assignments.yaml environment file from Section 9.5, “Configuring networking for the control plane” that accommodates this network architecture:
resource_registry:
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  OS::TripleO::Network::Ports::ControlPlaneVipPort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Network::Ports::OVNDBsVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml 1

parameter_defaults:
  NeutronPublicInterface: eth1
  DeployedServerPortMap:
    control_virtual_ip:
      fixed_ips:
        - ip_address: 192.168.100.1
      subnets:
        - cidr: 24
    controller-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.100.2
      subnets:
        - cidr: 24
    compute-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.100.3
      subnets:
        - cidr: 24
1 The RedisVipPort and OVNDBsVipPort resources are mapped to network/ports/noop.yaml. This mapping is necessary because the default Redis and OVNDBs VIP addresses come from the Control Plane. In this situation, use a noop to disable this Control Plane mapping.
9.7. Mapping pre-provisioned node hostnames
When you configure pre-provisioned nodes, you must map heat-based hostnames to their actual hostnames so that ansible-playbook can reach a resolvable host. Use the HostnameMap parameter to map these values.
Create an environment file, for example hostname-map.yaml, and include the HostnameMap parameter and the hostname mappings. Use the following syntax:
parameter_defaults:
  HostnameMap:
    [HEAT HOSTNAME]: [ACTUAL HOSTNAME]
    [HEAT HOSTNAME]: [ACTUAL HOSTNAME]
The [HEAT HOSTNAME] value usually conforms to the convention [STACK NAME]-[ROLE]-[INDEX]. For example:
parameter_defaults:
  HostnameMap:
    overcloud-controller-0: controller-00-rack01
    overcloud-controller-1: controller-01-rack02
    overcloud-controller-2: controller-02-rack03
    overcloud-novacompute-0: compute-00-rack01
    overcloud-novacompute-1: compute-01-rack01
    overcloud-novacompute-2: compute-02-rack01
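Because ansible-playbook must reach the mapped names, it can help to confirm that each actual hostname resolves from the undercloud before you deploy. This sketch uses two of the example hostnames from the mapping above:

[stack@director ~]$ for host in controller-00-rack01 compute-00-rack01; do getent hosts $host; done

Each line of output should show the IP address that you expect for the node.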
9.8. Mapping network interfaces to aliases
In Red Hat OpenStack Platform 16.1, overcloud network interface mapping does not happen automatically on pre-provisioned nodes. Instead, you must define the mapping manually in the /etc/os-net-config/mapping.yaml file on each pre-provisioned node.
- Log in to each pre-provisioned node.
- Create the /etc/os-net-config/mapping.yaml file and include the details of your interface mapping:
interface_mapping:
  nic1: em1
  nic2: em2
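To identify the interface names to use on the right side of the mapping, you can list the physical interfaces on each node. The em1 and em2 names in the example above are illustrative; use the names that this command reports for your hardware:

[root@controller-0 ~]# ip -o link show | awk -F': ' '{print $2}'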
9.9. Configuring Ceph Storage for pre-provisioned nodes
Complete the following steps on the undercloud host to configure ceph-ansible for nodes that are already deployed.
On the undercloud host, create an environment variable, OVERCLOUD_HOSTS, and set the variable to a space-separated list of IP addresses of the overcloud hosts that you want to use as Ceph clients:

export OVERCLOUD_HOSTS="192.168.1.8 192.168.1.42"
The default overcloud plan name is overcloud. If you use a different name, create an environment variable, OVERCLOUD_PLAN, to store your custom name:

export OVERCLOUD_PLAN="<custom-stack-name>"

Replace <custom-stack-name> with the name of your stack.
Run the enable-ssh-admin.sh script to configure a user on the overcloud nodes that Ansible can use to configure Ceph clients:
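The script ships with the core heat template collection on the undercloud. A typical invocation, assuming the default template location, looks like the following; verify the path on your undercloud before you run it:

bash /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/enable-ssh-admin.sh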
When you run the openstack overcloud deploy command, Ansible configures the hosts that you define in the OVERCLOUD_HOSTS variable as Ceph clients.
9.10. Creating the overcloud with pre-provisioned nodes
The overcloud deployment uses the standard CLI methods from Section 7.14, “Deployment command”. For pre-provisioned nodes, the deployment command requires some additional options and environment files from the core heat template collection:
- --disable-validations - Use this option to disable basic CLI validations for services not used with pre-provisioned infrastructure. If you do not disable these validations, the deployment fails.
- environments/deployed-server-environment.yaml - Include this environment file to create and configure the pre-provisioned infrastructure. This environment file substitutes the OS::Nova::Server resources with OS::Heat::DeployedServer resources.
The following command is an example overcloud deployment command with the environment files specific to the pre-provisioned architecture:
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy \
  --disable-validations \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \
  -e /home/stack/templates/hostname-map.yaml \
  --overcloud-ssh-user stack \
  --overcloud-ssh-key ~/.ssh/id_rsa \
  <OTHER OPTIONS>
The --overcloud-ssh-user and --overcloud-ssh-key options are used to SSH into each overcloud node during the configuration stage, create an initial tripleo-admin user, and inject an SSH key into /home/tripleo-admin/.ssh/authorized_keys. To inject the SSH key, specify the credentials for the initial SSH connection with --overcloud-ssh-user and --overcloud-ssh-key (defaults to ~/.ssh/id_rsa). To limit exposure to the private key that you specify with the --overcloud-ssh-key option, director never passes this key to any API service, such as heat or the Workflow service (mistral), and only the director openstack overcloud deploy command uses this key to enable access for the tripleo-admin user.
9.11. Overcloud deployment output
When the overcloud creation completes, director provides a recap of the Ansible plays that were executed to configure the overcloud:
PLAY RECAP *************************************************************
overcloud-compute-0     : ok=160  changed=67  unreachable=0  failed=0
overcloud-controller-0  : ok=210  changed=93  unreachable=0  failed=0
undercloud              : ok=10   changed=7   unreachable=0  failed=0

Tuesday 15 October 2018  18:30:57 +1000 (0:00:00.107)  1:06:37.514 ******
========================================================================
Director also provides details to access your overcloud.
Ansible passed.
Overcloud configuration completed.
Overcloud Endpoint: http://192.168.24.113:5000
Overcloud Horizon Dashboard URL: http://192.168.24.113:80/dashboard
Overcloud rc file: /home/stack/overcloudrc
Overcloud Deployed
9.12. Accessing the overcloud
Director generates a script to configure and help authenticate interactions with your overcloud from the undercloud. Director saves this file, overcloudrc, in the home directory of the stack user. Run the following command to use this file:
(undercloud) $ source ~/overcloudrc
This command loads the environment variables that are necessary to interact with your overcloud from the undercloud CLI. The command prompt changes to indicate this:

(overcloud) $

To return to interacting with the undercloud, run the following command:
(overcloud) $ source ~/stackrc (undercloud) $
9.13. Scaling pre-provisioned nodes
The process for scaling pre-provisioned nodes is similar to the standard scaling procedures in Chapter 16, Scaling overcloud nodes. However, the process to add new pre-provisioned nodes differs because pre-provisioned nodes do not use the standard registration and management process from OpenStack Bare Metal (ironic) and OpenStack Compute (nova).
Scaling up pre-provisioned nodes
When scaling up the overcloud with pre-provisioned nodes, you must configure the orchestration agent on each node to correspond to the director node count.
Perform the following actions to scale up overcloud nodes:
- Prepare the new pre-provisioned nodes according to Section 9.1, “Pre-provisioned node requirements”.
- Scale up the nodes. For more information, see Chapter 16, Scaling overcloud nodes.
- After you execute the deployment command, wait until the director creates the new node resources and launches the configuration.
Scaling down pre-provisioned nodes
When scaling down the overcloud with pre-provisioned nodes, follow the scale down instructions in Chapter 16, Scaling overcloud nodes.
In scale-down operations, you can use hostnames for both OSP-provisioned and pre-provisioned nodes. You can also use the UUID for OSP-provisioned nodes. However, there is no UUID for pre-provisioned nodes, so you always use hostnames. Pass the hostname or UUID value to the openstack overcloud node delete command.
Identify the name of the node that you want to remove.
$ openstack stack resource list overcloud -n5 --filter type=OS::TripleO::ComputeDeployedServerServer
Pass the corresponding node name from the stack_name column to the openstack overcloud node delete command:

$ openstack overcloud node delete --stack <overcloud> <stack_name>
Replace <overcloud> with the name or UUID of the overcloud stack.
Replace <stack_name> with the name of the node that you want to remove. You can include multiple node names in the openstack overcloud node delete command.
Ensure that the openstack overcloud node delete command runs to completion:
$ openstack stack list
The status of the overcloud stack shows UPDATE_COMPLETE when the delete operation is complete.
After you remove overcloud nodes from the stack, power off these nodes. In a standard deployment, the bare metal services on the director control this function. However, with pre-provisioned nodes, you must either manually shut down these nodes or use the power management control for each physical system. If you do not power off the nodes after removing them from the stack, they might remain operational and reconnect as part of the overcloud environment.
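For example, if a removed node is still reachable over SSH, you can shut it down from the director node with the stack user that you created earlier; otherwise, use the power management interface of the physical system. The node address in this sketch is a placeholder:

[stack@director ~]$ ssh stack@<removed_node_ip> "sudo poweroff"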
After you power off the removed nodes, reprovision them to a base operating system configuration so that they do not unintentionally join the overcloud in the future.
Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The scale down process only removes the node from the overcloud stack and does not uninstall any packages.
Removing a pre-provisioned overcloud
To remove an entire overcloud that uses pre-provisioned nodes, see Section 12.6, “Removing the overcloud” for the standard overcloud remove procedure. After you remove the overcloud, power off all nodes and reprovision them to a base operating system configuration.
Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The removal process only deletes the overcloud stack and does not uninstall any packages.