Chapter 8. Configuring a Basic Overcloud using Pre-Provisioned Nodes
This chapter provides the basic configuration steps for using pre-provisioned nodes to configure an OpenStack Platform environment. This scenario differs from the standard overcloud creation scenarios in multiple ways:
- You can provision nodes using an external tool and let the director control the overcloud configuration only.
- You can use nodes without relying on the director’s provisioning methods. This is useful if creating an overcloud without power management control or using networks with DHCP/PXE boot restrictions.
- The director does not use OpenStack Compute (nova), OpenStack Bare Metal (ironic), or OpenStack Image (glance) for managing nodes.
- Pre-provisioned nodes use a custom partitioning layout.
This scenario provides basic configuration with no custom features. However, you can add advanced configuration options to this basic overcloud and customize it to your specifications using the instructions in the Advanced Overcloud Customization guide.
Mixing pre-provisioned nodes with director-provisioned nodes in an overcloud is not supported.
Requirements
- The director node created in Chapter 4, Installing the Undercloud.
- A set of bare metal machines for your nodes. The number of nodes required depends on the type of overcloud you intend to create (see Section 3.1, “Planning Node Deployment Roles” for information on overcloud roles). These machines also must comply with the requirements set for each node type. For these requirements, see Section 2.4, “Overcloud Requirements”. These nodes require a Red Hat Enterprise Linux 7.3 operating system.
- One network connection for managing the pre-provisioned nodes. This scenario requires uninterrupted SSH access to the nodes for orchestration agent configuration.
- One network connection for the Control Plane network. There are two main scenarios for this network:
- Using the Provisioning Network as the Control Plane, which is the default scenario. This network is usually a layer-3 (L3) routable network connection from the pre-provisioned nodes to the director. The examples for this scenario use the following IP address assignments:
Table 8.1. Provisioning Network IP Assignments
| Node Name  | IP Address   |
| ---------- | ------------ |
| Director   | 192.168.24.1 |
| Controller | 192.168.24.2 |
| Compute    | 192.168.24.3 |
- Using a separate network. In situations where the director’s Provisioning network is a private non-routable network, you can define IP addresses for the nodes from any subnet and communicate with the director over the Public API endpoint. There are certain caveats to this scenario, which this chapter examines later in Section 8.6, “Using a Separate Network for Overcloud Nodes”.
- All other network types in this example also use the Control Plane network for OpenStack services. However, you can create additional networks for other network traffic types.
8.1. Creating a User for Configuring Nodes
At a later stage in this process, the director requires SSH access to the overcloud nodes as the stack user.
On each overcloud node, create a user named stack and set a password for it. For example, use the following on the Controller node:
[root@controller ~]# useradd stack
[root@controller ~]# passwd stack  # specify a password
Disable password requirements for this user when using sudo:
[root@controller ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
[root@controller ~]# chmod 0440 /etc/sudoers.d/stack
Once you have created and configured the stack user on all pre-provisioned nodes, copy the stack user’s public SSH key from the director node to each overcloud node. For example, to copy the director’s public SSH key to the Controller node:
[stack@director ~]$ ssh-copy-id stack@192.168.24.2
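Optionally, verify key-based access and passwordless sudo before continuing. A minimal check, assuming the Controller IP address from Table 8.1:
[stack@director ~]$ ssh stack@192.168.24.2 "sudo hostname"
If this prints the node’s hostname without prompting for a password, the stack user is configured correctly.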
8.2. Registering the Operating System for Nodes
Each node requires access to a Red Hat subscription. The following procedure shows how to register each node to the Red Hat Content Delivery Network. Perform these steps on each node:
Run the registration command and enter your Customer Portal user name and password when prompted:
[root@controller ~]# sudo subscription-manager register
Find the entitlement pool for Red Hat OpenStack Platform 12:
[root@controller ~]# sudo subscription-manager list --available --all --matches="Red Hat OpenStack"
Use the pool ID located in the previous step to attach the Red Hat OpenStack Platform 12 entitlements:
[root@controller ~]# sudo subscription-manager attach --pool=pool_id
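Optionally, confirm that the entitlement is attached before changing repositories. This check uses standard subscription-manager options (omit --matches if your version does not support it with --consumed):
[root@controller ~]# sudo subscription-manager list --consumed --matches="Red Hat OpenStack"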
Disable all default repositories:
[root@controller ~]# sudo subscription-manager repos --disable=*
Enable the required Red Hat Enterprise Linux repositories.
For x86_64 systems, run:
[root@controller ~]# sudo subscription-manager repos --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms \
    --enable=rhel-ha-for-rhel-7-server-rpms --enable=rhel-7-server-openstack-12-rpms \
    --enable=rhel-7-server-rhceph-2-osd-rpms --enable=rhel-7-server-rhceph-2-mon-rpms \
    --enable=rhel-7-server-rhceph-2-tools-rpms
For POWER systems, run:
[root@controller ~]# sudo subscription-manager repos --enable=rhel-7-for-power-le-rpms --enable=rhel-7-server-openstack-12-for-power-le-rpms
Only enable the repositories listed in Section 2.5, “Repository Requirements”. Additional repositories can cause package and software conflicts. Do not enable any additional repositories.
Update your system to ensure you have the latest base system packages:
[root@controller ~]# sudo yum update -y
[root@controller ~]# sudo reboot
The node is now ready to use for your overcloud.
8.3. Installing the User Agent on Nodes
Each pre-provisioned node uses the OpenStack Orchestration (heat) agent to communicate with the director. The agent on each node polls the director and obtains metadata tailored to each node. This metadata allows the agent to configure each node.
Install the initial packages for the orchestration agent on each node:
[root@controller ~]# sudo yum -y install python-heat-agent*
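These packages provide the os-collect-config agent that performs the polling described above. As an optional sanity check, confirm the agent service exists on the node; it remains unconfigured until later in this chapter, and if your package version does not pull it in as a dependency, install it explicitly:
[root@controller ~]# rpm -q os-collect-config
[root@controller ~]# sudo systemctl status os-collect-config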
8.4. Configuring SSL/TLS Access to the Director
If the director uses SSL/TLS, the pre-provisioned nodes require the certificate authority file used to sign the director’s SSL/TLS certificates. If using your own certificate authority, perform the following on each overcloud node:
Copy the certificate authority file to the /etc/pki/ca-trust/source/anchors/ directory on each pre-provisioned node.
Run the following command on each overcloud node:
[root@controller ~]# sudo update-ca-trust extract
This ensures the overcloud nodes can access the director’s Public API over SSL/TLS.
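As an optional check, test HTTPS access to the director’s Public API from a node. The FQDN and port below are assumptions based on the examples in this chapter (director.example.com from Table 8.2 and the conventional SSL port for the Identity service); substitute your own values:
# Succeeds only if the director's certificate chains to a trusted CA
[root@controller ~]# curl -sSf https://director.example.com:13000/ > /dev/null && echo "CA trust OK"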
8.5. Configuring Networking for the Control Plane
The pre-provisioned overcloud nodes obtain metadata from the director using standard HTTP requests. This means all overcloud nodes require L3 access to either:
The director’s Control Plane network, which is the subnet defined with the network_cidr parameter in your undercloud.conf file. The nodes require either direct access to this subnet or routable access to it.
The director’s Public API endpoint, specified with the undercloud_public_host parameter in your undercloud.conf file. This option is available if you either do not have an L3 route to the Control Plane or you aim to use SSL/TLS communication when polling the director for metadata. See Section 8.6, “Using a Separate Network for Overcloud Nodes” for additional steps for configuring your overcloud nodes to use the Public API endpoint.
The director uses a Control Plane network to manage and configure a standard overcloud. For an overcloud with pre-provisioned nodes, your network configuration might require some modification to accommodate how the director communicates with the pre-provisioned nodes.
Using Network Isolation
Network isolation allows you to group services to use specific networks, including the Control Plane. There are multiple network isolation strategies in the Advanced Overcloud Customization guide. In addition, you can define specific IP addresses for nodes on the Control Plane. For more information on network isolation and predictable node placement strategies, see the Advanced Overcloud Customization guide.
If using network isolation, make sure your NIC templates do not include the NIC used for undercloud access. These templates can reconfigure the NIC, which can lead to connectivity and configuration problems during deployment.
Assigning IP Addresses
If not using network isolation, you can use a single Control Plane network to manage all services. This requires manual configuration of the Control Plane NIC on each node to use an IP address within the Control Plane network range. If using the director’s Provisioning network as the Control Plane, make sure the chosen overcloud IP addresses fall outside of the DHCP ranges for both provisioning (dhcp_start and dhcp_end) and introspection (inspection_iprange).
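As an illustration only, a static Control Plane NIC configuration on Red Hat Enterprise Linux 7 might look like the following ifcfg file. The device name eth1 is an assumption; use your own NIC name and an address outside the DHCP and introspection ranges:
# /etc/sysconfig/network-scripts/ifcfg-eth1 (example Controller node)
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
# Matches the Controller assignment in Table 8.1; must avoid director DHCP ranges
IPADDR=192.168.24.2
NETMASK=255.255.255.0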
During standard overcloud creation, the director creates OpenStack Networking (neutron) ports to automatically assign IP addresses to the overcloud nodes on the Provisioning / Control Plane network. However, this can cause the director to assign IP addresses that differ from the ones manually configured for each node. In this situation, use a predictable IP address strategy to force the director to use the pre-provisioned IP assignments on the Control Plane.
An example of a predictable IP strategy is to use an environment file (ctlplane-assignments.yaml) with the following IP assignments:
resource_registry:
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml

parameter_defaults:
  DeployedServerPortMap:
    controller-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.2
      subnets:
        - cidr: 24
    compute-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.3
      subnets:
        - cidr: 24
In this example, the OS::TripleO::DeployedServer::ControlPlanePort resource passes a set of parameters to the director and defines the IP assignments of our pre-provisioned nodes. The DeployedServerPortMap parameter defines the IP addresses and subnet CIDRs that correspond to each overcloud node. The mapping defines:
The name of the assignment, which follows the format <node_hostname>-<network>. For example: controller-ctlplane and compute-ctlplane.
The IP assignments, which use the following parameter patterns:
fixed_ips/ip_address - Defines the fixed IP addresses for the Control Plane. Use multiple ip_address parameters in a list to define multiple IP addresses.
subnets/cidr - Defines the CIDR value for the subnet.
A later step in this chapter uses the resulting environment file (ctlplane-assignments.yaml) as part of the openstack overcloud deploy command.
8.6. Using a Separate Network for Overcloud Nodes
By default, the director uses the Provisioning network as the overcloud Control Plane. However, if this network is isolated and non-routable, nodes cannot communicate with the director’s Internal API during configuration. In this situation, you might need to define a separate network for the nodes and configure them to communicate with the director over the Public API.
There are several requirements for this scenario:
- The overcloud nodes must accommodate the basic network configuration from Section 8.5, “Configuring Networking for the Control Plane”.
- You must enable SSL/TLS on the director for Public API endpoint usage. For more information, see Section 4.6, “Configuring the Director” and Appendix A, SSL/TLS Certificate Configuration.
You must define an accessible fully qualified domain name (FQDN) for the director. This FQDN must resolve to a routable IP address for the director. Use the undercloud_public_host parameter in the undercloud.conf file to set this FQDN, as shown in the snippet after this list.
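A minimal sketch of this setting, assuming the FQDN from Table 8.2:
# /home/stack/undercloud.conf (excerpt)
[DEFAULT]
undercloud_public_host = director.example.com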
The examples in this section use IP address assignments that differ from the main scenario:
Table 8.2. Provisioning Network IP Assignments
| Node Name               | IP Address or FQDN                                    |
| ----------------------- | ----------------------------------------------------- |
| Director (Internal API) | 192.168.24.1 (Provisioning Network and Control Plane) |
| Director (Public API)   | 10.1.1.1 / director.example.com                       |
| Overcloud Virtual IP    | 192.168.100.1                                         |
The following sections provide additional configuration for situations that require a separate network for overcloud nodes.
With SSL/TLS communication enabled on the undercloud, the director provides a Public API endpoint for most services. However, OpenStack Orchestration (heat) uses the internal endpoint as a default provider for metadata. This means the undercloud requires some modification so overcloud nodes can access OpenStack Orchestration on public endpoints. This modification involves changing some Puppet hieradata on the director.
The hieradata_override parameter in your undercloud.conf file allows you to specify additional Puppet hieradata for undercloud configuration. Use the following steps to modify hieradata relevant to OpenStack Orchestration:
If you are not using a hieradata_override file already, create a new one. This example uses one located at /home/stack/hieradata.yaml.
Include the following hieradata in /home/stack/hieradata.yaml:
heat_clients_endpoint_type: public
heat::engine::default_deployment_signal_transport: TEMP_URL_SIGNAL
This changes the endpoint type from the default internal to public and changes the signaling method to use TempURLs from OpenStack Object Storage (swift).
In undercloud.conf, set the hieradata_override parameter to the path of the hieradata file:
hieradata_override = /home/stack/hieradata.yaml
Re-run the openstack undercloud install command to implement the new configuration options.
This switches the orchestration metadata server to use URLs on the director’s Public API.
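As an optional check after the reinstall completes, you can list the orchestration endpoints from the undercloud with the standard OpenStack client to confirm which URLs are registered:
[stack@director ~]$ source ~/stackrc
(undercloud) $ openstack endpoint list --service orchestration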
IP Address Assignments
The method for IP assignments is similar to Section 8.5, “Configuring Networking for the Control Plane”. However, since the Control Plane is not routable from the deployed servers, use the DeployedServerPortMap parameter to assign IP addresses from your chosen overcloud node subnet, including the virtual IP address to access the Control Plane. The following is a modified version of the ctlplane-assignments.yaml environment file from Section 8.5 that accommodates this network architecture:
resource_registry:
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  OS::TripleO::Network::Ports::ControlPlaneVipPort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml  # 1

parameter_defaults:
  NeutronPublicInterface: eth1
  EC2MetadataIp: 192.168.100.1  # 2
  ControlPlaneDefaultRoute: 192.168.100.1
  DeployedServerPortMap:
    control_virtual_ip:
      fixed_ips:
        - ip_address: 192.168.100.1
      subnets:
        - cidr: 24
    controller0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.100.2
      subnets:
        - cidr: 24
    compute0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.100.3
      subnets:
        - cidr: 24
1 The RedisVipPort resource is mapped to network/ports/noop.yaml. This mapping is because the default Redis VIP address comes from the Control Plane. In this situation, we use a noop to disable this Control Plane mapping.
2 The EC2MetadataIp and ControlPlaneDefaultRoute parameters are set to the value of the Control Plane virtual IP address. The default NIC configuration templates require these parameters and you must set them to use a pingable IP address to pass the validations performed during deployment. Alternatively, customize the NIC configuration so that it does not require these parameters.
8.7. Creating the Overcloud with Pre-Provisioned Nodes
The overcloud deployment uses the standard CLI methods from Section 6.8, “Creating the Overcloud with the CLI Tools”. For pre-provisioned nodes, the deployment command requires some additional options and environment files from the core Heat template collection:
--disable-validations - Disables basic CLI validations for services not used with pre-provisioned infrastructure; otherwise, the deployment fails.
environments/deployed-server-environment.yaml - Main environment file for creating and configuring the pre-provisioned infrastructure. This environment file substitutes the OS::Nova::Server resources with OS::Heat::DeployedServer resources.
environments/deployed-server-bootstrap-environment-rhel.yaml - Environment file to execute a bootstrap script on the pre-provisioned servers. This script installs additional packages and provides basic configuration for overcloud nodes.
environments/deployed-server-pacemaker-environment.yaml - Environment file for Pacemaker configuration on pre-provisioned Controller nodes. The namespace for the resources registered in this file uses the Controller role name from deployed-server/deployed-server-roles-data.yaml, which is ControllerDeployedServer.
deployed-server/deployed-server-roles-data.yaml - An example custom roles file. This file replicates the default roles_data.yaml but also includes the disable_constraints: True parameter for each role. This parameter disables orchestration constraints in the generated role templates. These constraints are for services not used with pre-provisioned infrastructure.
If using your own custom roles file, make sure to include the disable_constraints: True parameter with each role. For example:
- name: ControllerDeployedServer
  disable_constraints: True
  CountDefault: 1
  ServicesDefault:
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephMon
    - OS::TripleO::Services::CephExternal
    - OS::TripleO::Services::CephRgw
    ...
The following is an example overcloud deployment command with the environment files specific to the pre-provisioned architecture:
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy \
  [other arguments] \
  --disable-validations \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-bootstrap-environment-rhel.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-pacemaker-environment.yaml \
  -r /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-server-roles-data.yaml
This begins the overcloud configuration. However, the deployment stack pauses when the overcloud node resources enter the CREATE_IN_PROGRESS stage:
2017-01-14 13:25:13Z [overcloud.Compute.0.Compute]: CREATE_IN_PROGRESS state changed
2017-01-14 13:25:14Z [overcloud.Controller.0.Controller]: CREATE_IN_PROGRESS state changed
This pause is due to the director waiting for the orchestration agent on the overcloud nodes to poll the metadata server. The next section shows how to configure nodes to start polling the metadata server.
8.8. Polling the Metadata Server
The deployment is now in progress but paused at a CREATE_IN_PROGRESS stage. The next step is to configure the orchestration agent on the overcloud nodes to poll the metadata server on the director. There are two ways to accomplish this: automatic configuration using a script from the director’s core Heat template collection, or manual configuration on each node.
Only use automatic configuration for the initial deployment. Do not use automatic configuration if scaling up your nodes.
Automatic Configuration
The director’s core Heat template collection contains a script that performs automatic configuration of the Heat agent on the overcloud nodes. The script requires you to source the stackrc file as the stack user to authenticate with the director and query the orchestration service:
[stack@director ~]$ source ~/stackrc
In addition, the script requires some additional environment variables to define the node roles and their IP addresses. These environment variables are:
OVERCLOUD_ROLES - A space-separated list of roles to configure. These roles correlate to roles defined in your roles data file.
[ROLE]_hosts - Each role requires an environment variable with a space-separated list of IP addresses for nodes in the role.
The following commands demonstrate how to set these environment variables:
(undercloud) $ export OVERCLOUD_ROLES="ControllerDeployedServer ComputeDeployedServer"
(undercloud) $ export ControllerDeployedServer_hosts="192.168.100.2"
(undercloud) $ export ComputeDeployedServer_hosts="192.168.100.3"
Run the script to configure the orchestration agent on each overcloud node:
(undercloud) $ /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/get-occ-config.sh
The script accesses the pre-provisioned nodes over SSH using the same user executing the script. In this case, the script authenticates with the stack user.
The script accomplishes the following:
- Queries the director’s orchestration services for the metadata URL for each node.
- Accesses the node and configures the agent on each node with its specific metadata URL.
- Restarts the orchestration agent service.
Once the script completes, the overcloud nodes start polling the orchestration service on the director. The stack deployment continues.
Manual Configuration
If you prefer to manually configure the orchestration agent on the pre-provisioned nodes, use the following command to query the orchestration service on the director for each node’s metadata URL:
[stack@director ~]$ source ~/stackrc
(undercloud) $ for STACK in $(openstack stack resource list -n5 --filter name=deployed-server -c stack_name -f value overcloud) ; do
    STACKID=$(echo $STACK | cut -d '-' -f2,4 --output-delimiter " ") ;
    echo "== Metadata URL for $STACKID ==" ;
    openstack stack resource metadata $STACK deployed-server | jq -r '.["os-collect-config"].request.metadata_url' ;
    echo ;
done
This displays the stack name and metadata URL for each node:
== Metadata URL for ControllerDeployedServer 0 ==
http://192.168.24.1:8080/v1/AUTH_6fce4e6019264a5b8283e7125f05b764/ov-edServer-ts6lr4tm5p44-deployed-server-td42md2tap4g/43d302fa-d4c2-40df-b3ac-624d6075ef27?temp_url_sig=58313e577a93de8f8d2367f8ce92dd7be7aac3a1&temp_url_expires=2147483586

== Metadata URL for ComputeDeployedServer 0 ==
http://192.168.24.1:8080/v1/AUTH_6fce4e6019264a5b8283e7125f05b764/ov-edServer-wdpk7upmz3eh-deployed-server-ghv7ptfikz2j/0a43e94b-fe02-427b-9bfe-71d2b7bb3126?temp_url_sig=8a50d8ed6502969f0063e79bb32592f4203a136e&temp_url_expires=2147483586
On each overcloud node:
Remove the existing os-collect-config.conf template. This ensures the agent does not override our manual changes:
$ sudo /bin/rm -f /usr/libexec/os-apply-config/templates/etc/os-collect-config.conf
Edit the /etc/os-collect-config.conf file to use the corresponding metadata URL. For example, the Controller node uses the following:
[DEFAULT]
collectors=request
command=os-refresh-config
polling_interval=30

[request]
metadata_url=http://192.168.24.1:8080/v1/AUTH_6fce4e6019264a5b8283e7125f05b764/ov-edServer-ts6lr4tm5p44-deployed-server-td42md2tap4g/43d302fa-d4c2-40df-b3ac-624d6075ef27?temp_url_sig=58313e577a93de8f8d2367f8ce92dd7be7aac3a1&temp_url_expires=2147483586
- Save the file.
- Restart the os-collect-config service:
[stack@controller ~]$ sudo systemctl restart os-collect-config
After you have configured and restarted them, the orchestration agents poll the director’s orchestration service for the overcloud configuration. The deployment stack continues its creation and the stack for each node eventually changes to CREATE_COMPLETE.
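To verify that an agent is actively polling, you can watch its service logs on a node using the standard systemd tools; this is an optional check:
[stack@controller ~]$ sudo systemctl status os-collect-config
[stack@controller ~]$ sudo journalctl -u os-collect-config -f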
8.9. Monitoring the Overcloud Creation
The overcloud configuration process begins. This process takes some time to complete. To view the status of the overcloud creation, open a separate terminal as the
stack user and run:
[stack@director ~]$ source ~/stackrc
(undercloud) $ heat stack-list --show-nested
The heat stack-list --show-nested command shows the current stage of the overcloud creation.
8.10. Accessing the Overcloud
The director generates a script to configure and help authenticate interactions with your overcloud from the director host. The director saves this file, overcloudrc, in your stack user’s home directory. Run the following command to use this file:
(undercloud) $ source ~/overcloudrc
This loads the necessary environment variables to interact with your overcloud from the director host’s CLI. The command prompt changes to indicate this:
(overcloud) $
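As a quick usage example, any overcloud CLI command now runs against the overcloud; the command below is illustrative:
(overcloud) $ openstack network list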
To return to interacting with the director’s host, run the following command:
(overcloud) $ source ~/stackrc
(undercloud) $
8.11. Scaling Pre-Provisioned Nodes
The process for scaling pre-provisioned nodes is similar to the standard scaling procedures in Chapter 10, Scaling the Overcloud. However, the process for adding new pre-provisioned nodes differs since pre-provisioned nodes do not use the standard registration and management process from OpenStack Bare Metal (ironic) and OpenStack Compute (nova).
Scaling Up Pre-Provisioned Nodes
When scaling up the overcloud with pre-provisioned nodes, you need to configure the orchestration agent on each node to correspond to the director’s node count.
The general process for scaling up the nodes is:
- Prepare the new pre-provisioned nodes as per the Requirements.
- Scale up the nodes. See Chapter 10, Scaling the Overcloud for these instructions.
- After executing the deployment command, wait until the director creates the new node resources. Manually configure the pre-provisioned nodes to poll the director’s orchestration server metadata URL as per the instructions in Section 8.8, “Polling the Metadata Server”.
Scaling Down Pre-Provisioned Nodes
When scaling down the overcloud with pre-provisioned nodes, follow the standard scale-down instructions shown in Chapter 10, Scaling the Overcloud.
In most scaling operations, you need to obtain the UUID value of the node to pass to the openstack overcloud node delete command. To obtain this UUID, list the resources for the specific role:
$ openstack stack resource list overcloud -c physical_resource_id -c stack_name -n5 --filter type=OS::TripleO::<RoleName>Server
Replace <RoleName> in the above command with the actual name of the role that you are scaling down. For example, for the ComputeDeployedServer role:
$ openstack stack resource list overcloud -c physical_resource_id -c stack_name -n5 --filter type=OS::TripleO::ComputeDeployedServerServer
Use the stack_name column in the command output to identify the UUID associated with each node. The stack_name includes the integer value of the index of the node in the Heat resource group. For example, in the following sample output:
+--------------------------------------+-------------------------------------------------------------+
| physical_resource_id                 | stack_name                                                  |
+--------------------------------------+-------------------------------------------------------------+
| 294d4e4d-66a6-4e4e-9a8b-03ec80beda41 | overcloud-ComputeDeployedServer-no7yfgnh3z7e-1-ytfqdeclwvcg |
| d8de016d-8ff9-4f29-bc63-21884619abe5 | overcloud-ComputeDeployedServer-no7yfgnh3z7e-0-p4vb3meacxwn |
| 8c59f7b1-2675-42a9-ae2c-2de4a066f2a9 | overcloud-ComputeDeployedServer-no7yfgnh3z7e-2-mmmaayxqnf3o |
+--------------------------------------+-------------------------------------------------------------+
The indices 0, 1, or 2 in the stack_name column correspond to the node order in the Heat resource group. Pass the corresponding UUID value from the physical_resource_id column to the openstack overcloud node delete command.
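For example, a minimal sketch of removing the index 0 node from the sample output above; your environment might require additional arguments, such as the environment files used during deployment:
(undercloud) $ openstack overcloud node delete --stack overcloud d8de016d-8ff9-4f29-bc63-21884619abe5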
Once you have removed overcloud nodes from the stack, power off these nodes. Under a standard deployment, the bare metal services on the director control this function. However, with pre-provisioned nodes, you should either manually shut down these nodes or use the power management control for each physical system. If you do not power off the nodes after removing them from the stack, they might remain operational and reconnect as part of the overcloud environment.
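For example, to shut down a removed Compute node over SSH; the address is an assumption based on the earlier examples in this chapter:
[stack@director ~]$ ssh stack@192.168.100.3 "sudo poweroff"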
After powering down the removed nodes, reprovision them back to a base operating system configuration so that they do not unintentionally join the overcloud in the future.
Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The scale down process only removes the node from the overcloud stack and does not uninstall any packages.
8.12. Removing a Pre-Provisioned Overcloud
Removing an entire overcloud that uses pre-provisioned nodes uses the same procedure as a standard overcloud. See Section 9.13, “Removing the Overcloud” for more details.
After removing the overcloud, power off all nodes and reprovision them back to a base operating system configuration.
Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The removal process only deletes the overcloud stack and does not uninstall any packages.
8.13. Completing the Overcloud Creation
This concludes the creation of the overcloud using pre-provisioned nodes. For post-creation functions, see Chapter 9, Performing Tasks after Overcloud Creation.