Chapter 4. Installing the undercloud
The first step to creating your Red Hat OpenStack Platform environment is to install the director on the undercloud system. This involves a few prerequisite steps to enable the necessary subscriptions and repositories.
4.1. Creating the stack user
The director installation process requires a non-root user to execute commands. Use the following procedure to create the user named
stack and set a password.
Log into your undercloud as the root user and create the stack user:
[root@director ~]# useradd stack
Set a password for the user:
[root@director ~]# passwd stack
Disable password requirements for this user when using sudo:
[root@director ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
[root@director ~]# chmod 0440 /etc/sudoers.d/stack
Switch to the new stack user:
[root@director ~]# su - stack
[stack@director ~]$
Continue the director installation as the stack user.
4.2. Setting the undercloud hostname
The undercloud requires a fully qualified domain name for its installation and configuration process. This means you might need to set the hostname of your undercloud.
Check the base and full hostname of the undercloud:
[stack@director ~]$ hostname
[stack@director ~]$ hostname -f
If either of the previous commands does not report the correct hostname or reports an error, use hostnamectl to set a hostname:
[stack@director ~]$ sudo hostnamectl set-hostname manager.example.com
[stack@director ~]$ sudo hostnamectl set-hostname --transient manager.example.com
The director also requires an entry for the system’s hostname and base name in /etc/hosts. For example, if the system is named manager.example.com and uses 10.0.0.1 for its IP address, then /etc/hosts requires an entry like:
10.0.0.1 manager.example.com manager
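As a quick sanity check, the fields of such an entry can be verified with standard shell tools. This is only a sketch against the example entry above, not a substitute for testing actual name resolution with hostname -f:

```shell
# Example /etc/hosts entry from above, held in a variable for illustration
entry='10.0.0.1 manager.example.com manager'

# Field 1 is the IP address, field 2 the FQDN, field 3 the short name
ip=$(echo "$entry" | awk '{print $1}')
fqdn=$(echo "$entry" | awk '{print $2}')
short=$(echo "$entry" | awk '{print $3}')

# The short name should match the first label of the FQDN
[ "${fqdn%%.*}" = "$short" ] && echo "entry is consistent"
```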
4.3. Registering and updating your undercloud
Before installing the director:
- Register the undercloud using Red Hat Subscription Manager
- Subscribe and enable the relevant repositories
- Perform an update of your Red Hat Enterprise Linux packages
Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:
[stack@director ~]$ sudo subscription-manager register
Find the entitlement pool ID for Red Hat OpenStack Platform director. For example:
[stack@director ~]$ sudo subscription-manager list --available --all --matches="Red Hat OpenStack"
Subscription Name:   Name of SKU
Provides:            Red Hat Single Sign-On
                     Red Hat Enterprise Linux Workstation
                     Red Hat CloudForms
                     Red Hat OpenStack
                     Red Hat Software Collections (for RHEL Workstation)
                     Red Hat Virtualization
SKU:                 SKU-Number
Contract:            Contract-Number
Pool ID:             Valid-Pool-Number-123456
Provides Management: Yes
Available:           1
Suggested:           1
Service Level:       Support-level
Service Type:        Service-Type
Subscription Type:   Sub-type
Ends:                End-date
System Type:         Physical
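The Pool ID can be pulled out of this output with awk rather than copied by hand. This is a sketch against the sample line above; the whitespace-separated field layout is an assumption, so verify it against your own output:

```shell
# Sample line from the 'subscription-manager list' output above
sm_line='Pool ID:             Valid-Pool-Number-123456'

# "Pool" and "ID:" are fields 1 and 2; the ID itself is field 3
pool_id=$(echo "$sm_line" | awk '/^Pool ID:/ {print $3}')
echo "$pool_id"
```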
Locate the Pool ID value and attach the Red Hat OpenStack Platform 13 entitlement:
[stack@director ~]$ sudo subscription-manager attach --pool=Valid-Pool-Number-123456
Disable all default repositories, and then enable the required Red Hat Enterprise Linux repositories:
[stack@director ~]$ sudo subscription-manager repos --disable=*
[stack@director ~]$ sudo subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-ha-for-rhel-7-server-rpms --enable=rhel-7-server-openstack-13-rpms
These repositories contain packages the director installation requires.
Important
Only enable the repositories listed in Section 2.5, “Repository Requirements”. Additional repositories can cause package and software conflicts. Do not enable any additional repositories.
Perform an update on your system to make sure you have the latest base system packages:
[stack@director ~]$ sudo yum update -y
[stack@director ~]$ sudo reboot
The system is now ready for the director installation.
4.4. Installing the director packages
The following procedure installs packages relevant to the Red Hat OpenStack Platform director.
Install the command line tools for director installation and configuration:
[stack@director ~]$ sudo yum install -y python-tripleoclient
If you aim to create an overcloud with Ceph Storage nodes, install the additional ceph-ansible package:
[stack@director ~]$ sudo yum install -y ceph-ansible
4.5. Configuring the director
The director installation process requires certain settings to determine your network configurations. The settings are stored in a template located in the stack user’s home directory as undercloud.conf. This procedure demonstrates how to use the default template as a foundation for your configuration.
Red Hat provides a basic template to help determine the required settings for your installation. Copy this template to the stack user’s home directory:
[stack@director ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
Edit the undercloud.conf file. This file contains settings to configure your undercloud. If you omit or comment out a parameter, the undercloud installation uses the default value.
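For illustration, a minimal edited undercloud.conf might look like the following. This is a sketch using the default Provisioning network values from this chapter; adjust every value, especially the interface name, to your environment:

```ini
[DEFAULT]
# Interface and addressing for the Provisioning network (chapter defaults)
local_interface = eth1
local_ip = 192.168.24.1/24

[ctlplane-subnet]
cidr = 192.168.24.0/24
dhcp_start = 192.168.24.5
dhcp_end = 192.168.24.24
inspection_iprange = 192.168.24.100,192.168.24.120
gateway = 192.168.24.1
masquerade = true
```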
4.6. Director configuration parameters
The following is a list of parameters for configuring the undercloud.conf file.
The following parameters are defined in the [DEFAULT] section of the undercloud.conf file.
- undercloud_hostname
- Defines the fully qualified host name for the undercloud. If set, the undercloud installation configures all system host name settings. If left unset, the undercloud uses the current host name, but the user must configure all system host name settings appropriately.
- local_ip
- The IP address defined for the director’s Provisioning NIC. This is also the IP address the director uses for its DHCP and PXE boot services. Leave this value as the default 192.168.24.1/24 unless you are using a different subnet for the Provisioning network, for example, if it conflicts with an existing IP address or subnet in your environment.
- undercloud_public_host
- The IP address defined for the director’s Public API when using SSL/TLS. This is an IP address for accessing the director endpoints externally over SSL/TLS. The director configuration attaches this IP address to its software bridge as a routed IP address, which uses the /32 netmask.
- undercloud_admin_host
- The IP address defined for the director’s Admin API when using SSL/TLS. This is an IP address for administration endpoint access over SSL/TLS. The director configuration attaches this IP address to its software bridge as a routed IP address, which uses the /32 netmask.
- undercloud_nameservers
- A list of DNS nameservers to use for the undercloud hostname resolution.
- undercloud_ntp_servers
- A list of network time protocol servers to help synchronize the undercloud’s date and time.
- overcloud_domain_name
- The DNS domain name to use when deploying the overcloud.
Note
The overcloud parameter CloudDomain must be set to a matching value.
- subnets
- List of routed network subnets for provisioning and introspection. See Subnets for more information. The default value only includes the ctlplane-subnet subnet.
- local_subnet
- The local subnet to use for PXE boot and DHCP interfaces. The local_ip address should reside in this subnet. The default is ctlplane-subnet.
- undercloud_service_certificate
- The location and filename of the certificate for OpenStack SSL/TLS communication. Ideally, you obtain this certificate from a trusted certificate authority. Otherwise generate your own self-signed certificate using the guidelines in Appendix A, SSL/TLS Certificate Configuration. These guidelines also contain instructions on setting the SELinux context for your certificate, whether self-signed or from an authority.
- generate_service_certificate
- Defines whether to generate an SSL/TLS certificate during the undercloud installation, which is used for the undercloud_service_certificate parameter. The undercloud installation saves the resulting certificate to /etc/pki/tls/certs/undercloud-[undercloud_public_vip].pem. The CA defined in the certificate_generation_ca parameter signs this certificate.
- certificate_generation_ca
- The certmonger nickname of the CA that signs the requested certificate. Only use this option if you have set the generate_service_certificate parameter. If you select the local CA, certmonger extracts the local CA certificate to /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and adds it to the trust chain.
- service_principal
- The Kerberos principal for the service using the certificate. Only use this if your CA requires a Kerberos principal, such as in FreeIPA.
- local_interface
- The chosen interface for the director’s Provisioning NIC. This is also the device the director uses for its DHCP and PXE boot services. Change this value to your chosen device. To see which device is connected, use the ip addr command. For example, this is the result of an ip addr command:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:75:24:09 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.178/24 brd 192.168.122.255 scope global dynamic eth0
       valid_lft 3462sec preferred_lft 3462sec
    inet6 fe80::5054:ff:fe75:2409/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noop state DOWN
    link/ether 42:0b:c2:a5:c1:26 brd ff:ff:ff:ff:ff:ff
In this example, the External NIC uses eth0 and the Provisioning NIC uses eth1, which is currently not configured. In this case, set the local_interface to eth1. The configuration script attaches this interface to a custom bridge defined with the inspection_interface parameter.
- local_mtu
- MTU to use for the local_interface.
- hieradata_override
- Path to hieradata override file. If set, the undercloud installation copies this file under /etc/puppet/hieradata and sets it as the first file in the hierarchy. Use this to provide custom configuration to services beyond the undercloud.conf parameters.
- net_config_override
- Path to network configuration override template. If set, the undercloud uses a JSON format template to configure the networking with os-net-config. This ignores the network parameters set in undercloud.conf. See /usr/share/instack-undercloud/templates/net-config.json.template for an example.
- inspection_interface
- The bridge the director uses for node introspection. This is a custom bridge that the director configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the default br-ctlplane.
- inspection_iprange
- A range of IP addresses that the director’s introspection service uses during the PXE boot and provisioning process. Use comma-separated values to define the start and end of this range. For example, 192.168.24.100,192.168.24.120. Make sure this range contains enough IP addresses for your nodes and does not conflict with the range for dhcp_start and dhcp_end.
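To sanity-check that the range holds enough addresses, the count can be computed from the last octets when the range sits inside a single /24. A sketch using the example range above:

```shell
# Example inspection_iprange value from above
range='192.168.24.100,192.168.24.120'

start=${range%,*}   # text before the comma: 192.168.24.100
end=${range#*,}     # text after the comma:  192.168.24.120

# Within one /24, the usable count is end octet - start octet + 1
count=$(( ${end##*.} - ${start##*.} + 1 ))
echo "$count addresses in the introspection range"
```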
- inspection_extras
- Defines whether to enable extra hardware collection during the inspection process. Requires the python-hardware-detect package on the introspection image.
- inspection_runbench
- Runs a set of benchmarks during node introspection. Set to true to enable. This option is necessary if you intend to perform benchmark analysis when inspecting the hardware of registered nodes. See Section 6.2, “Inspecting the Hardware of Nodes” for more details.
- inspection_enable_uefi
- Defines whether to support introspection of nodes with UEFI-only firmware. For more information, see Appendix D, Alternative Boot Modes.
- enable_node_discovery
- Automatically enroll any unknown node that PXE-boots the introspection ramdisk. New nodes use the fake_pxe driver as a default, but you can set discovery_default_driver to override. You can also use introspection rules to specify driver information for newly enrolled nodes.
- discovery_default_driver
- Sets the default driver for automatically enrolled nodes. Requires enable_node_discovery enabled, and you must include the driver in the enabled_drivers list. See Appendix B, Power Management Drivers for a list of supported drivers.
- undercloud_debug
- Sets the log level of undercloud services to DEBUG. Set this value to true to enable.
- undercloud_update_packages
- Defines whether to update packages during the undercloud installation.
- enable_tempest
- Defines whether to install the validation tools. The default is set to false, but you can enable it by setting this to true.
- enable_telemetry
- Defines whether to install OpenStack Telemetry services (ceilometer, aodh, panko, gnocchi) in the undercloud. In Red Hat OpenStack Platform, the metrics backend for telemetry is provided by gnocchi. Setting this to true will install and set up telemetry services automatically. The default value is false, which disables telemetry on the undercloud.
- enable_ui
- Defines whether to install the director’s web UI. This allows you to perform overcloud planning and deployments through a graphical web interface. For more information, see Chapter 7, Configuring a Basic Overcloud with the Web UI. Note that the UI is only available with SSL/TLS enabled using either the undercloud_service_certificate or generate_service_certificate parameter.
- enable_validations
- Defines whether to install the requirements to run validations.
- enable_novajoin
- Defines whether to install the novajoin metadata service in the undercloud.
- ipa_otp
- Defines the one-time password to register the undercloud node to an IPA server. This is required when enable_novajoin is enabled.
- ipxe_enabled
- Defines whether to use iPXE or standard PXE. The default is true, which enables iPXE. Set to false for standard PXE. For more information, see Appendix D, Alternative Boot Modes.
- scheduler_max_attempts
- Maximum number of times the scheduler attempts to deploy an instance. Keep this greater than or equal to the number of bare metal nodes you expect to deploy at once to work around potential race conditions when scheduling.
- clean_nodes
- Defines whether to wipe the hard drive between deployments and after introspection.
- enabled_hardware_types
- A list of hardware types to enable for the undercloud. See Appendix B, Power Management Drivers for a list of supported drivers.
The following parameters are defined in the [auth] section of the undercloud.conf file.
- undercloud_db_password; undercloud_admin_token; undercloud_admin_password; undercloud_glance_password; etc
The remaining parameters are the access details for all of the director’s services. No change is required for the values. The director’s configuration script automatically generates these values if blank in undercloud.conf. You can retrieve all values after the configuration script completes.
Important
The configuration file examples for these parameters use <None> as a placeholder value. Setting these values to <None> leads to a deployment error.
Each provisioning subnet is a named section in the undercloud.conf file. For example, to create a subnet called ctlplane-subnet, use the following sample in your undercloud.conf file:
[ctlplane-subnet]
cidr = 192.168.24.0/24
dhcp_start = 192.168.24.5
dhcp_end = 192.168.24.24
inspection_iprange = 192.168.24.100,192.168.24.120
gateway = 192.168.24.1
masquerade = true
You can specify as many provisioning networks as necessary to suit your environment.
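Given the example subnet above, a quick arithmetic check confirms that the DHCP and introspection ranges do not overlap. A sketch that assumes both ranges live in the same /24 and compares only the last octets:

```shell
# Last octets of the example ranges above:
# dhcp_start/dhcp_end:  192.168.24.5   - 192.168.24.24
# inspection_iprange:   192.168.24.100 - 192.168.24.120
dhcp_start=5
dhcp_end=24
insp_start=100
insp_end=120

# Two ranges in the same /24 overlap unless one ends before the other begins
if [ "$dhcp_end" -lt "$insp_start" ] || [ "$insp_end" -lt "$dhcp_start" ]; then
  echo "ranges do not overlap"
else
  echo "WARNING: ranges overlap"
fi
```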
- gateway
- The gateway for the overcloud instances. This is the undercloud host, which forwards traffic to the External network. Leave this as the default 192.168.24.1 unless you are either using a different IP address for the director or want to directly use an external gateway.
Note
The director’s configuration script also automatically enables IP forwarding using the relevant sysctl kernel parameter.
- cidr
- The network that the director uses to manage overcloud instances. This is the Provisioning network, which the undercloud’s neutron service manages. Leave this as the default 192.168.24.0/24 unless you are using a different subnet for the Provisioning network.
- masquerade
- Defines whether to masquerade the network defined in the cidr for external access. This provides the Provisioning network with a degree of network address translation (NAT) so that it has external access through the director.
- dhcp_start; dhcp_end
- The start and end of the DHCP allocation range for overcloud nodes. Ensure this range contains enough IP addresses to allocate your nodes.
Modify the values for these parameters to suit your configuration. When complete, save the file.
4.7. Installing the director
The following procedure installs the director and performs some basic post-installation tasks.
Run the following command to install the director on the undercloud:
[stack@director ~]$ openstack undercloud install
This launches the director’s configuration script. The director installs additional packages and configures its services to suit the settings in the undercloud.conf file. This script takes several minutes to complete.
The script generates two files when complete:
undercloud-passwords.conf - A list of all passwords for the director’s services.
stackrc - A set of initialization variables to help you access the director’s command line tools.
The script also starts all OpenStack Platform services automatically. Check the enabled services using the following command:
[stack@director ~]$ sudo systemctl list-units openstack-*
The script adds the stack user to the docker group to give the stack user access to container management commands. Refresh the stack user’s permissions with the following command:
[stack@director ~]$ exec su -l stack
The command prompts you to log in again. Enter the stack user’s password.
To initialize the stack user to use the command line tools, run the following command:
[stack@director ~]$ source ~/stackrc
The prompt now indicates OpenStack commands authenticate and execute against the undercloud:
(undercloud) [stack@director ~]$
The director installation is complete. You can now use the director’s command line tools.
4.8. Obtaining images for overcloud nodes
The director requires several disk images for provisioning overcloud nodes. This includes:
- An introspection kernel and ramdisk - Used for bare metal system introspection over PXE boot.
- A deployment kernel and ramdisk - Used for system provisioning and deployment.
- An overcloud kernel, ramdisk, and full image - A base overcloud system that is written to the node’s hard disk.
The following procedure shows how to obtain and install these images.
Source the stackrc file to enable the director’s command line tools:
[stack@director ~]$ source ~/stackrc
Install the rhosp-director-images and rhosp-director-images-ipa packages:
(undercloud) [stack@director ~]$ sudo yum install rhosp-director-images rhosp-director-images-ipa
Extract the archives to the images directory in the stack user’s home (/home/stack/images):
(undercloud) [stack@director ~]$ cd ~/images
(undercloud) [stack@director images]$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-13.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-13.0.tar; do tar -xvf $i; done
Import these images into the director:
(undercloud) [stack@director images]$ openstack overcloud image upload --image-path /home/stack/images/
This uploads the following images into the director: bm-deploy-kernel, bm-deploy-ramdisk, overcloud-full, overcloud-full-initrd, and overcloud-full-vmlinuz.
These are the images for deployment and the overcloud. The script also installs the introspection images on the director’s PXE server.
Check these images have uploaded successfully:
(undercloud) [stack@director images]$ openstack image list
+--------------------------------------+------------------------+
| ID                                   | Name                   |
+--------------------------------------+------------------------+
| 765a46af-4417-4592-91e5-a300ead3faf6 | bm-deploy-ramdisk      |
| 09b40e3d-0382-4925-a356-3a4b4f36b514 | bm-deploy-kernel       |
| ef793cd0-e65c-456a-a675-63cd57610bd5 | overcloud-full         |
| 9a51a6cb-4670-40de-b64b-b70f4dd44152 | overcloud-full-initrd  |
| 4f7e33f4-d617-47c1-b36f-cbe90f132e5d | overcloud-full-vmlinuz |
+--------------------------------------+------------------------+
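A scripted check of the upload can loop over the expected names and grep the image list. A sketch: the image_list variable stands in for the output of a real command such as openstack image list -f value -c Name, so substitute the live output in practice:

```shell
# Stand-in for: openstack image list -f value -c Name
image_list='bm-deploy-ramdisk
bm-deploy-kernel
overcloud-full
overcloud-full-initrd
overcloud-full-vmlinuz'

# grep -x matches whole lines, so "overcloud-full" does not
# falsely match "overcloud-full-initrd"
missing=0
for img in bm-deploy-kernel bm-deploy-ramdisk overcloud-full overcloud-full-initrd overcloud-full-vmlinuz; do
  echo "$image_list" | grep -qx "$img" || { echo "missing: $img"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all expected images present"
```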
This list will not show the introspection PXE images. The director copies these files to /httpboot.
(undercloud) [stack@director images]$ ls -l /httpboot
total 341460
-rwxr-xr-x. 1 root root             5153184 Mar 31 06:58 agent.kernel
-rw-r--r--. 1 root root           344491465 Mar 31 06:59 agent.ramdisk
-rw-r--r--. 1 ironic-inspector ironic-inspector 337 Mar 31 06:23 inspector.ipxe
By default, the overcloud-full.qcow2 image is a flat partition image. However, you can also import and use whole disk images. See Appendix C, Whole Disk Images for more information.
4.9. Setting a nameserver for the control plane
Overcloud nodes require a nameserver so that they can resolve hostnames through DNS. For a standard overcloud without network isolation, the nameserver is defined using the undercloud’s control plane subnet. Use the following procedure to define nameservers for the environment.
Source the stackrc file to enable the director’s command line tools:
[stack@director ~]$ source ~/stackrc
Set the nameservers for the ctlplane-subnet subnet:
(undercloud) [stack@director images]$ openstack subnet set --dns-nameserver [nameserver1-ip] --dns-nameserver [nameserver2-ip] ctlplane-subnet
Use the --dns-nameserver option for each nameserver.
View the subnet to verify the nameserver:
(undercloud) [stack@director images]$ openstack subnet show ctlplane-subnet
+-------------------+-----------------------------------------------+
| Field             | Value                                         |
+-------------------+-----------------------------------------------+
| ...               |                                               |
| dns_nameservers   | 18.104.22.168                                 |
| ...               |                                               |
+-------------------+-----------------------------------------------+
If you aim to isolate service traffic onto separate networks, the overcloud nodes use the DnsServers parameter in your network environment files.
4.10. Next Steps
This completes the undercloud configuration. The next chapter explores basic overcloud configuration, including registering nodes, inspecting them, and then tagging them into various node roles.