Chapter 7. Configuring a basic overcloud with pre-provisioned nodes
This chapter contains basic configuration procedures for using pre-provisioned nodes to configure an OpenStack Platform environment. This scenario differs from the standard overcloud creation scenarios in several ways:
- You can provision nodes using an external tool and let the director control the overcloud configuration only.
- You can use nodes without relying on the director’s provisioning methods. This is useful if you want to create an overcloud without power management control or use networks with DHCP/PXE boot restrictions.
- The director does not use OpenStack Compute (nova), OpenStack Bare Metal (ironic), or OpenStack Image (glance) for managing nodes.
- Pre-provisioned nodes can use a custom partitioning layout that does not rely on the QCOW2 overcloud-full image.
This scenario includes only basic configuration with no custom features. However, you can add advanced configuration options to this basic overcloud and customize it to your specifications using the instructions in the Advanced Overcloud Customization guide.
Combining pre-provisioned nodes with director-provisioned nodes in an overcloud is not supported.
7.1. Pre-provisioned node requirements
- The director node created in Chapter 4, Installing director.
- A set of bare metal machines for your nodes. The number of nodes required depends on the type of overcloud you intend to create. These machines must comply with the requirements set for each node type. These nodes require Red Hat Enterprise Linux 8.2.
- One network connection for managing the pre-provisioned nodes. This scenario requires uninterrupted SSH access to the nodes for orchestration agent configuration.
- One network connection for the Control Plane network. There are two main scenarios for this network:
- Using the Provisioning Network as the Control Plane, which is the default scenario. This network is usually a layer-3 (L3) routable network connection from the pre-provisioned nodes to the director. The examples for this scenario use the following IP address assignments:
Table 7.1. Provisioning Network IP Assignments
| Node Name | IP Address |
| --- | --- |
| Director | 192.168.24.1 |
| Controller 0 | 192.168.24.2 |
| Compute 0 | 192.168.24.3 |
- Using a separate network. In situations where the director’s Provisioning network is a private non-routable network, you can define IP addresses for nodes from any subnet and communicate with the director over the Public API endpoint. There are certain caveats to this scenario, which this chapter examines later in Section 7.6, “Using a separate network for pre-provisioned nodes”.
- All other network types in this example also use the Control Plane network for OpenStack services. However, you can create additional networks for other network traffic types.
- If any nodes use Pacemaker resources, the service user hacluster and the service group haclient must have a UID/GID of 189. This is due to CVE-2018-16877. If you installed Pacemaker together with the operating system, the installation creates these IDs automatically. If the ID values are set incorrectly, follow the steps in the article OpenStack minor update / fast-forward upgrade can fail on the controller nodes at pacemaker step with "Could not evaluate: backup_cib" to change the ID values.
- To prevent some services from binding to an incorrect IP address and causing deployment failures, make sure that the /etc/hosts file does not include the node-name=127.0.0.1 mapping. (See the verification commands after this list.)
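The following is a minimal verification sketch for these two requirements; the getent and grep commands are standard, and the expected ID value of 189 comes from the requirement above:
[root@controller-0 ~]# getent passwd hacluster    # the UID and GID fields must both be 189
[root@controller-0 ~]# getent group haclient      # the GID field must be 189
[root@controller-0 ~]# grep 127.0.0.1 /etc/hosts  # the node hostname must not appear in this output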
7.2. Creating a user on pre-provisioned nodes
When configuring an overcloud with pre-provisioned nodes, the director requires SSH access to the overcloud nodes as the stack user. To create the stack user, complete the following steps.
Procedure
On each overcloud node, create the stack user and set a password. For example, run the following commands on the Controller node:
[root@controller-0 ~]# useradd stack
[root@controller-0 ~]# passwd stack  # specify a password
Disable password requirements for this user when using sudo:
[root@controller-0 ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
[root@controller-0 ~]# chmod 0440 /etc/sudoers.d/stack
After creating and configuring the stack user on all pre-provisioned nodes, copy the stack user’s public SSH key from the director node to each overcloud node. For example, to copy the director’s public SSH key to the Controller node, run the following command:
[stack@director ~]$ ssh-copy-id stack@192.168.24.2
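As an optional check from the director node, confirm that key-based SSH and passwordless sudo work before you continue. This is a minimal sketch that assumes the Controller node address from Table 7.1:
[stack@director ~]$ ssh stack@192.168.24.2 'sudo -n whoami'   # expect "root" with no password prompt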
7.3. Registering the operating system for pre-provisioned nodes
Each node requires access to a Red Hat subscription. Complete the following steps on each node to register it with the Red Hat Content Delivery Network.
Procedure
Run the registration command and enter your Customer Portal user name and password when prompted:
[root@controller-0 ~]# sudo subscription-manager register
Find the entitlement pool for Red Hat OpenStack Platform 15:
[root@controller-0 ~]# sudo subscription-manager list --available --all --matches="Red Hat OpenStack"
Use the pool ID that you located in the previous step to attach the Red Hat OpenStack Platform 15 entitlements:
[root@controller-0 ~]# sudo subscription-manager attach --pool=pool_id
Disable all default repositories:
[root@controller-0 ~]# sudo subscription-manager repos --disable=*
Enable the required Red Hat Enterprise Linux repositories.
For x86_64 systems, run:
[root@controller-0 ~]# sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-highavailability-rpms --enable=ansible-2.8-for-rhel-8-x86_64-rpms --enable=openstack-15-for-rhel-8-x86_64-rpms --enable=rhceph-4-osd-for-rhel-8-x86_64-rpms --enable=rhceph-4-mon-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=advanced-virt-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms
For POWER systems, run:
[root@controller-0 ~]# sudo subscription-manager repos --enable=rhel-8-for-ppc64le-baseos-rpms --enable=rhel-8-for-ppc64le-appstream-rpms --enable=rhel-8-for-ppc64le-highavailability-rpms --enable=ansible-2.8-for-rhel-8-ppc64le-rpms --enable=openstack-15-for-rhel-8-ppc64le-rpms --enable=advanced-virt-for-rhel-8-ppc64le-rpms
ImportantEnable only the repositories listed. Additional repositories can cause package and software conflicts. Do not enable any additional repositories.
Update your system to ensure you have the latest base system packages:
[root@controller-0 ~]# sudo dnf update -y
[root@controller-0 ~]# sudo reboot
The node is now ready to use for your overcloud.
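Optionally, confirm that only the intended repositories remain enabled and that the node runs the expected Red Hat Enterprise Linux release:
[root@controller-0 ~]# sudo subscription-manager repos --list-enabled
[root@controller-0 ~]# cat /etc/redhat-release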
7.4. Configuring SSL/TLS access to director
If the director uses SSL/TLS, the pre-provisioned nodes require the certificate authority file used to sign the director’s SSL/TLS certificates. If using your own certificate authority, perform the following actions on each overcloud node.
Procedure
- Copy the certificate authority file to the /etc/pki/ca-trust/source/anchors/ directory on each pre-provisioned node.
- Run the following command on each overcloud node:
[root@controller-0 ~]# sudo update-ca-trust extract
These steps ensure the overcloud nodes can access the director’s Public API over SSL/TLS.
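For example, the following is a minimal sketch of distributing the certificate authority file from the director to a Controller node. The file name ca.crt.pem and its location in the stack user's home directory are assumptions for illustration; substitute the path to your own certificate authority file:
[stack@director ~]$ scp ~/ca.crt.pem stack@192.168.24.2:~/ca.crt.pem
[stack@director ~]$ ssh stack@192.168.24.2 'sudo cp ~/ca.crt.pem /etc/pki/ca-trust/source/anchors/ && sudo update-ca-trust extract'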
7.5. Configuring networking for the control plane
The pre-provisioned overcloud nodes obtain metadata from the director using standard HTTP requests. This means all overcloud nodes require L3 access to either:
- The director’s Control Plane network, which is the subnet defined with the network_cidr parameter in your undercloud.conf file. The overcloud nodes require either direct access to this subnet or routable access to the subnet.
- The director’s Public API endpoint, specified as the undercloud_public_host parameter in your undercloud.conf file. This option is available if you do not have an L3 route to the Control Plane or you aim to use SSL/TLS communication. See Section 7.6, “Using a separate network for pre-provisioned nodes” for additional information about configuring your overcloud nodes to use the Public API endpoint.
The director uses the Control Plane network to manage and configure a standard overcloud. For an overcloud with pre-provisioned nodes, your network configuration might require some modification to accommodate communication between the director and the pre-provisioned nodes.
Using Network Isolation
You can use network isolation to group services to use specific networks, including the Control Plane. There are multiple network isolation strategies in the Advanced Overcloud Customization guide. You can also define specific IP addresses for nodes on the control plane. For more information about isolating networks and creating predictable node placement strategies, see the Advanced Overcloud Customization guide.
If you use network isolation, ensure your NIC templates do not include the NIC used for undercloud access. These templates can reconfigure the NIC, which introduces connectivity and configuration problems during deployment.
Assigning IP Addresses
If you do not use network isolation, you can use a single Control Plane network to manage all services. This requires manual configuration of the Control Plane NIC on each node to use an IP address within the Control Plane network range. If using the director’s Provisioning network as the Control Plane, ensure the chosen overcloud IP addresses fall outside of the DHCP ranges for both provisioning (dhcp_start and dhcp_end) and introspection (inspection_iprange).
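For example, the following is a minimal sketch of assigning a static Control Plane IP address to a Controller node with NetworkManager. The connection name eth0 and the gateway address are assumptions for illustration; adjust them to match your environment:
[root@controller-0 ~]# nmcli connection modify eth0 ipv4.method manual ipv4.addresses 192.168.24.2/24 ipv4.gateway 192.168.24.1  # static address outside the DHCP ranges
[root@controller-0 ~]# nmcli connection up eth0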
During standard overcloud creation, the director creates OpenStack Networking (neutron) ports and automatically assigns IP addresses to the overcloud nodes on the Provisioning / Control Plane network. However, this can cause the director to assign IP addresses that differ from the ones you configure manually for each node. In this situation, use a predictable IP address strategy to force the director to use the pre-provisioned IP assignments on the Control Plane.
For example, you can use an environment file ctlplane-assignments.yaml with the following IP assignments to implement a predictable IP strategy:
resource_registry:
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml

parameter_defaults:
  DeployedServerPortMap:
    controller-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.2
      subnets:
        - cidr: 192.168.24.0/24
      network:
        tags:
          - 192.168.24.0/24
    compute-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.3
      subnets:
        - cidr: 192.168.24.0/24
      network:
        tags:
          - 192.168.24.0/24
In this example, the OS::TripleO::DeployedServer::ControlPlanePort resource passes a set of parameters to the director and defines the IP assignments of your pre-provisioned nodes. The DeployedServerPortMap parameter defines the IP addresses and subnet CIDRs that correspond to each overcloud node. The mapping defines the following attributes:
- The name of the assignment, which follows the format <node_hostname>-<network>, where the <node_hostname> value matches the short hostname for the node and <network> matches the lowercase name of the network. For example: controller-0-ctlplane for controller-0.example.com and compute-0-ctlplane for compute-0.example.com.
- The IP assignments, which use the following parameter patterns:
  - fixed_ips/ip_address - Defines the fixed IP addresses for the control plane. Use multiple ip_address parameters in a list to define multiple IP addresses.
  - subnets/cidr - Defines the CIDR value for the subnet.
A later section in this chapter uses the resulting environment file (ctlplane-assignments.yaml) as part of the openstack overcloud deploy command.
7.6. Using a separate network for pre-provisioned nodes
By default, the director uses the Provisioning network as the overcloud Control Plane. However, if this network is isolated and non-routable, nodes cannot communicate with the director’s Internal API during configuration. In this situation, you might need to define a separate network for the nodes and configure them to communicate with the director over the Public API.
There are several requirements for this scenario:
- The overcloud nodes must accommodate the basic network configuration from Section 7.5, “Configuring networking for the control plane”.
- You must enable SSL/TLS on the director for Public API endpoint usage. For more information, see Section 4.2, “Director configuration parameters” and Chapter 15, Configuring custom SSL/TLS certificates.
- You must define an accessible fully qualified domain name (FQDN) for the director. This FQDN must resolve to a routable IP address for the director. Use the undercloud_public_host parameter in the undercloud.conf file to set this FQDN.
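For example, a minimal sketch of the relevant undercloud.conf settings, using the example FQDN from Table 7.2. The generate_service_certificate line is one way to enable SSL/TLS on the Public API; omit it if you configure custom certificates as described in Chapter 15, Configuring custom SSL/TLS certificates:
[DEFAULT]
undercloud_public_host = director.example.com
generate_service_certificate = true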
The examples in this section use IP address assignments that differ from the main scenario:
Table 7.2. Provisioning Network IP Assignments
| Node Name | IP Address or FQDN |
| --- | --- |
| Director (Internal API) | 192.168.24.1 (Provisioning Network and Control Plane) |
| Director (Public API) | 10.1.1.1 / director.example.com |
| Overcloud Virtual IP | 192.168.100.1 |
| Controller 0 | 192.168.100.2 |
| Compute 0 | 192.168.100.3 |
The following sections provide additional configuration for situations that require a separate network for overcloud nodes.
IP Address Assignments
The method for IP assignments is similar to Section 7.5, “Configuring networking for the control plane”. However, since the Control Plane is not routable from the deployed servers, you must use the DeployedServerPortMap
parameter to assign IP addresses from your chosen overcloud node subnet, including the virtual IP address to access the Control Plane. The following example is a modified version of the ctlplane-assignments.yaml
environment file from Section 7.5, “Configuring networking for the control plane” that accommodates this network architecture:
resource_registry:
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  OS::TripleO::Network::Ports::ControlPlaneVipPort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml 1

parameter_defaults:
  NeutronPublicInterface: eth1
  EC2MetadataIp: 192.168.100.1 2
  ControlPlaneDefaultRoute: 192.168.100.1
  DeployedServerPortMap:
    control_virtual_ip:
      fixed_ips:
        - ip_address: 192.168.100.1
      subnets:
        - cidr: 24
    controller-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.100.2
      subnets:
        - cidr: 24
    compute-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.100.3
      subnets:
        - cidr: 24
1. The RedisVipPort resource is mapped to network/ports/noop.yaml. This mapping is necessary because the default Redis VIP address comes from the Control Plane. In this situation, we use a noop to disable this Control Plane mapping.
2. The EC2MetadataIp and ControlPlaneDefaultRoute parameters are set to the value of the Control Plane virtual IP address. The default NIC configuration templates require these parameters and you must set them to use a pingable IP address to pass the validations performed during deployment. Alternatively, customize the NIC configuration templates so that they do not require these parameters.
7.7. Mapping pre-provisioned node hostnames
When configuring pre-provisioned nodes, you must map Heat-based hostnames to their actual hostnames so that ansible-playbook can reach a resolvable host. Use the HostnameMap parameter to map these values.
Procedure
Create an environment file, for example hostname-map.yaml, and include the HostnameMap parameter and the hostname mappings. Use the following syntax:
parameter_defaults:
  HostnameMap:
    [HEAT HOSTNAME]: [ACTUAL HOSTNAME]
    [HEAT HOSTNAME]: [ACTUAL HOSTNAME]
The [HEAT HOSTNAME] usually conforms to the following convention: [STACK NAME]-[ROLE]-[INDEX]. For example:
parameter_defaults:
  HostnameMap:
    overcloud-controller-0: controller-00-rack01
    overcloud-controller-1: controller-01-rack02
    overcloud-controller-2: controller-02-rack03
    overcloud-novacompute-0: compute-00-rack01
    overcloud-novacompute-1: compute-01-rack01
    overcloud-novacompute-2: compute-02-rack01
- Save the hostname-map.yaml file.
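Because ansible-playbook must reach the actual hostnames that you map, you can optionally confirm from the undercloud that they resolve. A minimal check, using one of the example hostnames above:
(undercloud) $ getent hosts controller-00-rack01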
7.8. Configuring Ceph Storage for Pre-Provisioned Nodes
Complete the following steps on the undercloud host to configure ceph-ansible
for nodes that are already deployed.
Procedure
On the undercloud host, create an environment variable, OVERCLOUD_HOSTS, and set the variable to a space-separated list of IP addresses of the overcloud hosts that you want to use as Ceph clients:
export OVERCLOUD_HOSTS="192.168.1.8 192.168.1.42"
Run the enable-ssh-admin.sh script to configure a user on the overcloud nodes that Ansible can use to configure Ceph clients:
bash /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/enable-ssh-admin.sh
When you run the openstack overcloud deploy
command, Ansible configures the hosts that you define in the OVERCLOUD_HOSTS
variable as Ceph clients.
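The following is a minimal sketch of the order of operations, assuming that you export the variable and run the script in the same shell session that you later use for the deployment command from Section 7.9, “Creating the Overcloud with Pre-Provisioned Nodes”:
(undercloud) $ export OVERCLOUD_HOSTS="192.168.1.8 192.168.1.42"
(undercloud) $ bash /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/enable-ssh-admin.sh
(undercloud) $ openstack overcloud deploy [arguments from Section 7.9]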
7.9. Creating the Overcloud with Pre-Provisioned Nodes
The overcloud deployment uses the standard CLI methods from Section 6.11, “Deployment command”. For pre-provisioned nodes, the deployment command requires some additional options and environment files from the core Heat template collection:
- --disable-validations - Disables basic CLI validations for services not used with pre-provisioned infrastructure; otherwise, the deployment fails.
- environments/deployed-server-environment.yaml - Primary environment file for creating and configuring pre-provisioned infrastructure. This environment file substitutes the OS::Nova::Server resources with OS::Heat::DeployedServer resources.
- environments/deployed-server-bootstrap-environment-rhel.yaml - Environment file to execute a bootstrap script on the pre-provisioned servers. This script installs additional packages and includes basic configuration for overcloud nodes.
- environments/deployed-server-pacemaker-environment.yaml - Environment file for Pacemaker configuration on pre-provisioned Controller nodes. The namespace for the resources registered in this file uses the Controller role name from deployed-server/deployed-server-roles-data.yaml, which is ControllerDeployedServer by default.
- deployed-server/deployed-server-roles-data.yaml - An example custom roles file. This file replicates the default roles_data.yaml but also includes the disable_constraints: True parameter for each role. This parameter disables orchestration constraints in the generated role templates. These constraints are for services that pre-provisioned infrastructure does not use.
If you want to use a custom roles file, ensure you include the disable_constraints: True parameter for each role:
- name: ControllerDeployedServer
  disable_constraints: True
  CountDefault: 1
  ServicesDefault:
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephMon
    - OS::TripleO::Services::CephExternal
    - OS::TripleO::Services::CephRgw
  ...
The following command is an example overcloud deployment command with the environment files specific to the pre-provisioned architecture:
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy \
  [other arguments] \
  --disable-validations \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-bootstrap-environment-rhel.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-pacemaker-environment.yaml \
  -e /home/stack/templates/hostname-map.yaml \
  -r /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-server-roles-data.yaml \
  --overcloud-ssh-user stack \
  --overcloud-ssh-key ~/.ssh/id_rsa \
  [OTHER OPTIONS]
The --overcloud-ssh-user
and --overcloud-ssh-key
options are used to SSH into each overcloud node during the configuration stage, create an initial tripleo-admin
user, and inject an SSH key into /home/tripleo-admin/.ssh/authorized_keys
. To inject the SSH key, specify the credentials for the initial SSH connection with --overcloud-ssh-user
and --overcloud-ssh-key
(defaults to ~/.ssh/id_rsa
). To limit exposure to the private key you specify with the --overcloud-ssh-key
option, the director never passes this key to any API service, such as Heat or Mistral, and only the director’s openstack overcloud deploy
command uses this key to enable access for the tripleo-admin
user.
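If the stack user on the director does not already have an SSH key pair at the default path that the --overcloud-ssh-key option expects, you can generate one before you run the deployment command. A minimal sketch, creating a key pair without a passphrase:
[stack@director ~]$ ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''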
7.10. Overcloud deployment output
Once the overcloud creation completes, the director provides a recap of the Ansible plays executed to configure the overcloud:
PLAY RECAP *************************************************************
overcloud-compute-0     : ok=160  changed=67   unreachable=0    failed=0
overcloud-controller-0  : ok=210  changed=93   unreachable=0    failed=0
undercloud              : ok=10   changed=7    unreachable=0    failed=0

Tuesday 15 October 2018  18:30:57 +1000 (0:00:00.107)       1:06:37.514 ******
========================================================================
The director also provides details to access your overcloud.
Ansible passed.
Overcloud configuration completed.
Overcloud Endpoint: http://192.168.24.113:5000
Overcloud Horizon Dashboard URL: http://192.168.24.113:80/dashboard
Overcloud rc file: /home/stack/overcloudrc
Overcloud Deployed
7.11. Accessing the overcloud
The director generates a script to configure and help authenticate interactions with your overcloud from the director host. The director saves this file, overcloudrc, in your stack user’s home directory. Run the following command to use this file:
(undercloud) $ source ~/overcloudrc
This command loads the environment variables necessary to interact with your overcloud from the director host’s CLI. The command prompt changes to indicate this:
(overcloud) $
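With the overcloudrc file loaded, you can run a simple query to confirm that the overcloud API responds, for example:
(overcloud) $ openstack compute service list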
To return to interacting with the director’s host, run the following command:
(overcloud) $ source ~/stackrc
(undercloud) $
Each node in the overcloud also contains a heat-admin
user. The stack
user has SSH access to this user on each node. To access a node over SSH, find the IP address of the desired node:
(undercloud) $ openstack server list
Then connect to the node using the heat-admin
user and the node’s IP address:
(undercloud) $ ssh heat-admin@192.168.24.23
7.12. Scaling pre-provisioned nodes
The process for scaling pre-provisioned nodes is similar to the standard scaling procedures in Chapter 12, Scaling overcloud nodes. However, the process for adding new pre-provisioned nodes differs since pre-provisioned nodes do not use the standard registration and management process from OpenStack Bare Metal (ironic) and OpenStack Compute (nova).
Scaling Up Pre-Provisioned Nodes
When scaling up the overcloud with pre-provisioned nodes, you must configure the orchestration agent on each node to correspond to the director’s node count.
Perform the following actions to scale up overcloud nodes:
- Prepare the new pre-provisioned nodes according to Section 7.1, “Pre-provisioned node requirements”.
- Scale up the nodes. See Chapter 12, Scaling overcloud nodes for these instructions.
- After executing the deployment command, wait until the director creates the new node resources and launches the configuration.
Scaling Down Pre-Provisioned Nodes
When scaling down the overcloud with pre-provisioned nodes, follow the standard scale-down instructions in Chapter 12, Scaling overcloud nodes.
In most scaling operations, you must obtain the UUID value of the node you want to remove and pass this value to the openstack overcloud node delete
command. To obtain this UUID, list the resources for the specific role:
$ openstack stack resource list overcloud -c physical_resource_id -c stack_name -n5 --filter type=OS::TripleO::<RoleName>Server
Replace <RoleName>
with the actual name of the role that you want to scale down. For example, for the ComputeDeployedServer
role, run the following command:
$ openstack stack resource list overcloud -c physical_resource_id -c stack_name -n5 --filter type=OS::TripleO::ComputeDeployedServerServer
Use the stack_name
column in the command output to identify the UUID associated with each node. The stack_name
includes the integer value of the index of the node in the Heat resource group:
+--------------------------------------+--------------------------------------------------------------+
| physical_resource_id                 | stack_name                                                   |
+--------------------------------------+--------------------------------------------------------------+
| 294d4e4d-66a6-4e4e-9a8b-03ec80beda41 | overcloud-ComputeDeployedServer-no7yfgnh3z7e-1-ytfqdeclwvcg  |
| d8de016d-8ff9-4f29-bc63-21884619abe5 | overcloud-ComputeDeployedServer-no7yfgnh3z7e-0-p4vb3meacxwn  |
| 8c59f7b1-2675-42a9-ae2c-2de4a066f2a9 | overcloud-ComputeDeployedServer-no7yfgnh3z7e-2-mmmaayxqnf3o  |
+--------------------------------------+--------------------------------------------------------------+
The indices 0, 1, or 2 in the stack_name column correspond to the node order in the Heat resource group. Pass the corresponding UUID value from the physical_resource_id column to the openstack overcloud node delete command.
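For example, to remove the node with index 1 from the example output above, pass its physical_resource_id value to the command. The UUID shown is illustrative only; use the value from your own output:
$ source ~/stackrc
(undercloud) $ openstack overcloud node delete --stack overcloud 294d4e4d-66a6-4e4e-9a8b-03ec80beda41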
Once you have removed overcloud nodes from the stack, power off these nodes. In a standard deployment, the bare metal services on the director control this function. However, with pre-provisioned nodes, you must either manually shut down these nodes or use the power management control for each physical system. If you do not power off the nodes after removing them from the stack, they might remain operational and reconnect as part of the overcloud environment.
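A minimal sketch of manually shutting down a removed Compute node from the director host, assuming the stack user SSH access configured in Section 7.2 is still in place and using the example IP address from Table 7.2:
[stack@director ~]$ ssh stack@192.168.100.3 'sudo shutdown -h now'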
After powering off the removed nodes, reprovision them to a base operating system configuration so that they do not unintentionally join the overcloud in the future.
Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The scale down process only removes the node from the overcloud stack and does not uninstall any packages.
7.13. Removing a Pre-Provisioned Overcloud
To remove an entire overcloud that uses pre-provisioned nodes, see Section 10.5, “Removing the overcloud” for the standard overcloud remove procedure. After removing the overcloud, power off all nodes and reprovision them to a base operating system configuration.
Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The removal process only deletes the overcloud stack and does not uninstall any packages.
7.14. Next steps
This concludes the creation of the overcloud using pre-provisioned nodes. For post-creation functions, see Chapter 9, Performing overcloud post-installation tasks.