Chapter 3. Deploying the Undercloud
As a technician, you can deploy an undercloud, which provides users with the ability to deploy and manage overclouds with the Red Hat OpenStack Platform Director interface.
3.1. Prerequisites
- Have a valid Red Hat Hyperconverged Infrastructure for Cloud subscription.
- Have access to Red Hat’s software repositories through Red Hat’s Content Delivery Network (CDN).
3.2. Understanding Ironic’s disk cleaning between deployments
Enabling Ironic’s disk cleaning feature will permanently delete all data from all the disks on a node before that node becomes available again for deployment.
Consider the following two points before enabling Ironic’s disk cleaning feature:
- When director deploys Ceph, it uses the ceph-disk command to prepare each OSD. Before ceph-disk prepares an OSD, it checks whether the disk that will host the new OSD contains data from an older OSD. If it does, ceph-disk fails the disk preparation rather than overwrite that data. This is a safety feature that prevents data loss.
- If a deployment attempt with director fails and is then repeated after the overcloud is deleted, then by default the data from the previous deployment is still on the server disks. Because of how the ceph-disk command behaves, this leftover data can cause the repeated deployment to fail.
If an overcloud node is accidentally deleted and disk cleaning is enabled, then the data will be removed and can only be put back into the environment by rebuilding the node with Red Hat OpenStack Platform Director.
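The ceph-disk safety check described above can be sketched in a few lines (a simplified Python illustration, not director’s or ceph-disk’s actual code; disk_has_osd_data stands in for ceph-disk’s real probe of on-disk partition signatures):

```python
# Simplified sketch of the safety behavior described above: refuse to
# prepare an OSD on a disk that still holds data from a previous OSD.
# disk_has_osd_data() is a stand-in for ceph-disk's real signature probe;
# here a plain dict models the disk's state for illustration only.

def disk_has_osd_data(disk_state):
    return disk_state.get("fstype") == "ceph data"

def prepare_osd(disk_state):
    if disk_has_osd_data(disk_state):
        # Fail rather than overwrite data left by an older OSD.
        raise RuntimeError("disk contains data from a previous OSD; clean it first")
    disk_state["fstype"] = "ceph data"   # the disk now hosts an OSD
    return "prepared"

disk = {}
print(prepare_osd(disk))   # first deployment succeeds: prepared
try:
    prepare_osd(disk)      # redeployment without cleaning fails
except RuntimeError as err:
    print(err)
```

This is why a repeated deployment over uncleaned disks fails: the guard trips on the leftover data, and either Ironic’s disk cleaning or a manual wipe must remove it first.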
3.3. Installing the undercloud
Several steps must be completed to install the undercloud. This procedure installs the Red Hat OpenStack Platform director (RHOSP-d) as the undercloud. Here is a summary of the installation steps:
- Create an installation user.
- Create directories for templates and images.
- Verify/Set the RHOSP-d node name.
- Register the RHOSP-d node.
- Install the RHOSP-d software.
- Configure the RHOSP-d software.
- Obtain and import disk images for the overcloud.
- Set a DNS server on the undercloud’s subnet.
Prerequisites
- Have access to Red Hat’s software repositories through Red Hat’s Content Delivery Network (CDN).
- Have root access to the Red Hat OpenStack Platform director (RHOSP-d) node.
Procedure
The RHOSP-d installation requires a non-root user with sudo privileges to do the installation. Create a user named stack:

[root@director ~]# useradd stack

Set a password for the stack user. When prompted, enter the new password:

[root@director ~]# passwd stack

Configure sudo access for the stack user:

[root@director ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
[root@director ~]# chmod 0440 /etc/sudoers.d/stack

Switch to the stack user:

[root@director ~]# su - stack

The rest of the RHOSP-d installation is done as the stack user.
Create two new directories in the stack user’s home directory, one named templates and the other named images:

[stack@director ~]$ mkdir ~/images
[stack@director ~]$ mkdir ~/templates
These directories will organize the system image files and Heat template files used to create the overcloud environment later.
The installation and configuration process requires a fully qualified domain name (FQDN), along with an entry in the /etc/hosts file. Verify the RHOSP-d node’s host name:

[stack@director ~]$ hostname -f

If needed, set the host name:

sudo hostnamectl set-hostname FQDN_HOST_NAME
sudo hostnamectl set-hostname --transient FQDN_HOST_NAME
Replace…

- FQDN_HOST_NAME with the fully qualified domain name (FQDN) of the RHOSP-d node.
Example
[stack@director ~]$ sudo hostnamectl set-hostname director.example.com
[stack@director ~]$ sudo hostnamectl set-hostname --transient director.example.com
Add an entry for the RHOSP-d node name to the /etc/hosts file. Because shell redirection under sudo is performed by the unprivileged shell, pipe the line through sudo tee -a instead of using sudo echo with >>:

echo "127.0.0.1 FQDN_HOST_NAME SHORT_HOST_NAME localhost localhost.localdomain localhost4 localhost4.localdomain4" | sudo tee -a /etc/hosts
Replace…

- FQDN_HOST_NAME with the fully qualified domain name of the RHOSP-d node.
- SHORT_HOST_NAME with the short host name of the RHOSP-d node.
Example
[stack@director ~]$ echo "127.0.0.1 director.example.com director localhost localhost.localdomain localhost4 localhost4.localdomain4" | sudo tee -a /etc/hosts
Register the RHOSP-d node on the Red Hat Content Delivery Network (CDN), and enable the required Red Hat software repositories using the Red Hat Subscription Manager.
Register the RHOSP-d node:
[stack@director ~]$ sudo subscription-manager register
When prompted, enter an authorized Customer Portal user name and password.
Look up the valid Pool ID for the RHOSP entitlement:

[stack@director ~]$ sudo subscription-manager list --available --all --matches="*Hyperconverged*"
Example Output
Subscription Name:   Red Hat Hyperconverged Infrastructure for Cloud
Provides:            Red Hat OpenStack
                     Red Hat Ceph Storage
SKU:                 RS00160
Contract:            1111111
Pool ID:             a1b2c3d4e5f6g7h8i9
Provides Management: Yes
Available:           1
Suggested:           1
Service Level:       Self-Support
Service Type:        L1-L3
Subscription Type:   Standard
Ends:                05/27/2018
System Type:         Virtual
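To avoid copying the Pool ID by hand from output like the example above, it can be extracted programmatically (a hypothetical helper, not part of subscription-manager; the sample text mirrors the example output):

```python
# Hypothetical helper: pull the Pool ID out of `subscription-manager list`
# output so it can be passed to `subscription-manager attach --pool=...`.

def extract_pool_id(output):
    """Return the value of the first 'Pool ID:' line, or None if absent."""
    for line in output.splitlines():
        if line.strip().startswith("Pool ID:"):
            return line.split(":", 1)[1].strip()
    return None

sample = """\
Subscription Name: Red Hat Hyperconverged Infrastructure for Cloud
SKU:               RS00160
Pool ID:           a1b2c3d4e5f6g7h8i9
Available:         1
"""

print(extract_pool_id(sample))  # a1b2c3d4e5f6g7h8i9
```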
Using the Pool ID from the previous step, attach the RHOSP entitlement:

[stack@director ~]$ sudo subscription-manager attach --pool=POOL_ID
Replace…

- POOL_ID with the valid Pool ID from the previous step.
Example
[stack@director ~]$ sudo subscription-manager attach --pool=a1b2c3d4e5f6g7h8i9
Disable the default software repositories, and enable the required software repositories:
[stack@director ~]$ sudo subscription-manager repos --disable=*
[stack@director ~]$ sudo subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-ha-for-rhel-7-server-rpms --enable=rhel-7-server-openstack-13-rpms
If needed, update the base system software to the latest package versions, and reboot the RHOSP-d node:
[stack@director ~]$ sudo yum update
[stack@director ~]$ sudo reboot
Wait for the node to be completely up and running before continuing to the next step.
Install all the RHOSP-d software packages:
[stack@director ~]$ sudo yum install python-tripleoclient ceph-ansible
Configure the RHOSP-d software.
Red Hat provides a basic undercloud configuration template to use. Copy the undercloud.conf.sample file to the stack user’s home directory as undercloud.conf:

[stack@director ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
The undercloud configuration template contains two sections: [DEFAULT] and [auth]. Open the undercloud.conf file for editing, and set undercloud_hostname to the RHOSP-d node name. Uncomment the following parameters under the [DEFAULT] section by deleting the # before each parameter, and edit the parameter values as required for this solution’s network configuration:

Parameter              | Network      | Edit Value? | Example Value
-----------------------|--------------|-------------|--------------------------
local_ip               | Provisioning | Yes         | 192.0.2.1/24
network_gateway        | Provisioning | Yes         | 192.0.2.1
undercloud_public_vip  | Provisioning | Yes         | 192.0.2.2
undercloud_admin_vip   | Provisioning | Yes         | 192.0.2.3
local_interface        | Provisioning | Yes         | eth1
network_cidr           | Provisioning | Yes         | 192.0.2.0/24
masquerade_network     | Provisioning | Yes         | 192.0.2.0/24
dhcp_start             | Provisioning | Yes         | 192.0.2.5
dhcp_end               | Provisioning | Yes         | 192.0.2.24
inspection_interface   | Provisioning | No          | br-ctlplane
inspection_iprange     | Provisioning | Yes         | 192.0.2.100,192.0.2.120
inspection_extras      | N/A          | Yes         | true
inspection_runbench    | N/A          | Yes         | false
inspection_enable_uefi | N/A          | Yes         | true
Save the changes after editing the undercloud.conf file. See the Undercloud configuration parameters for detailed descriptions of these configuration parameters.

Note
Consider enabling Ironic’s disk cleaning feature if overcloud nodes are going to be repurposed. See the Understanding Ironic’s disk cleaning between deployments section for more details.
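With the example values from the table above, the edited [DEFAULT] section of undercloud.conf might look like the following sketch (the addresses are the sample values for this solution’s provisioning network; adjust them to match your environment):

```ini
[DEFAULT]
undercloud_hostname = director.example.com
local_ip = 192.0.2.1/24
network_gateway = 192.0.2.1
undercloud_public_vip = 192.0.2.2
undercloud_admin_vip = 192.0.2.3
local_interface = eth1
network_cidr = 192.0.2.0/24
masquerade_network = 192.0.2.0/24
dhcp_start = 192.0.2.5
dhcp_end = 192.0.2.24
inspection_interface = br-ctlplane
inspection_iprange = 192.0.2.100,192.0.2.120
inspection_extras = true
inspection_runbench = false
inspection_enable_uefi = true
```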
Run the RHOSP-d configuration script:
[stack@director ~]$ openstack undercloud install
Note
This script takes several minutes to complete. It installs additional software packages and generates two files:

- undercloud-passwords.conf - A list of all passwords for the director’s services.
- stackrc - A set of initialization variables to help you access the director’s command-line tools.
Verify that the configuration script started and enabled all of the RHOSP services:
[stack@director ~]$ sudo systemctl list-units openstack-*
The configuration script gives the stack user access to all the container management commands. Refresh the stack user’s permissions:

[stack@director ~]$ exec su -l stack
Initialize the stack user’s environment to use the RHOSP-d command-line tools:

[stack@director ~]$ source ~/stackrc
The command-line prompt will change, which indicates that OpenStack commands will authenticate and execute against the undercloud:
Example
(undercloud) [stack@director ~]$
The RHOSP-d requires several disk images for provisioning the overcloud nodes.
Obtain these disk images by installing the rhosp-director-images and rhosp-director-images-ipa software packages:

(undercloud) [stack@director ~]$ sudo yum install rhosp-director-images rhosp-director-images-ipa
Extract the archive files to the images directory in the stack user’s home directory:

(undercloud) [stack@director ~]$ cd ~/images
(undercloud) [stack@director ~]$ for x in /usr/share/rhosp-director-images/overcloud-full-latest-13.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-13.0.tar ; do tar -xvf $x ; done
Import the disk images into the RHOSP-d:
(undercloud) [stack@director ~]$ openstack overcloud image upload --image-path /home/stack/images/
To view a list of imported disk images, execute the following command:
(undercloud) [stack@director ~]$ openstack image list
Image Name             | Image Type | Image Description
-----------------------|------------|------------------------------------------------------------
bm-deploy-kernel       | Deployment | Kernel file used for provisioning and deploying systems.
bm-deploy-ramdisk      | Deployment | RAMdisk file used for provisioning and deploying systems.
overcloud-full-vmlinuz | Overcloud  | Kernel file used for the base system, which is written to the node’s disk.
overcloud-full-initrd  | Overcloud  | RAMdisk file used for the base system, which is written to the node’s disk.
overcloud-full         | Overcloud  | The rest of the software needed for the base system, which is written to the node’s disk.
Note
The openstack image list command does not display the introspection PXE disk images. The introspection PXE disk images are copied to the /httpboot/ directory:

(undercloud) [stack@director images]$ ls -l /httpboot
total 341460
-rwxr-xr-x. 1 root             root               5153184 Mar 31 06:58 agent.kernel
-rw-r--r--. 1 root             root             344491465 Mar 31 06:59 agent.ramdisk
-rw-r--r--. 1 ironic-inspector ironic-inspector       337 Mar 31 06:23 inspector.ipxe
Set the DNS server so that it resolves the overcloud node host names.
List the subnets:
(undercloud) [stack@director ~]$ openstack subnet list
Define the name server using the undercloud’s neutron subnet:

openstack subnet set --dns-nameserver DNS_NAMESERVER_IP SUBNET_NAME_or_ID
Replace…

- DNS_NAMESERVER_IP with the IP address of the DNS server.
- SUBNET_NAME_or_ID with the neutron subnet name or ID.

Example
(undercloud) [stack@director ~]$ openstack subnet set --dns-nameserver 192.0.2.4 local-subnet
Note
Reuse the --dns-nameserver DNS_NAMESERVER_IP option for each name server.
Verify the DNS server by viewing the subnet details:
(undercloud) [stack@director ~]$ openstack subnet show SUBNET_NAME_or_ID
Replace…

- SUBNET_NAME_or_ID with the neutron subnet name or ID.

Example
(undercloud) [stack@director ~]$ openstack subnet show local-subnet
+-------------------+-----------------------------------------------+
| Field             | Value                                         |
+-------------------+-----------------------------------------------+
| ...               |                                               |
| dns_nameservers   | 192.0.2.4                                     |
| ...               |                                               |
+-------------------+-----------------------------------------------+
Additional Resources
- For more information on all the undercloud configuration parameters located in the undercloud.conf file, see the Configuring the Director section in the RHOSP Director Installation and Usage Guide.
3.4. Configuring the undercloud to clean the disks before deploying the overcloud
Update the undercloud configuration file to clean the disks before deploying the overcloud.
Enabling this feature will destroy all data on all disks before they are provisioned in the overcloud deployment.
Prerequisites
Procedure
There are two options for cleaning the disks before deploying the overcloud: automatic or manual.
The first option is to clean the disks automatically. Edit the undercloud.conf file, and add the following line:

clean_nodes = True
Note
The Bare Metal Provisioning service runs a wipefs --force --all command to accomplish the cleaning.
Warning
Enabling this feature destroys all data on all disks before they are provisioned in the overcloud deployment. It also performs an additional power cycle after the first introspection and before each deployment.
The second option is to keep automatic cleaning off and run the following commands for each Ceph node:
[stack@director ~]$ openstack baremetal node manage NODE
[stack@director ~]$ openstack baremetal node clean NODE --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'
[stack@director ~]$ openstack baremetal node provide NODE
Replace…

- NODE with the host name of the Ceph node.
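When several Ceph nodes need the manual cleaning sequence, the three commands can be wrapped in a small helper (a hypothetical convenience script, not part of RHOSP-d; the node names in the loop are examples):

```shell
#!/bin/sh
# Hypothetical helper: print the manual cleaning sequence for each Ceph
# node so it can be reviewed before running. Paste the printed lines into
# a shell on the undercloud (or replace `echo` with `eval`) to execute.
print_clean_steps() {
    node="$1"
    echo "openstack baremetal node manage $node"
    echo "openstack baremetal node clean $node --clean-steps '[{\"interface\": \"deploy\", \"step\": \"erase_devices_metadata\"}]'"
    echo "openstack baremetal node provide $node"
}

for node in ceph-0 ceph-1 ceph-2; do   # example node names
    print_clean_steps "$node"
done
```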