Chapter 5. Deploying the Undercloud and Overcloud

This section provides the installation and configuration details for deploying both the Red Hat OpenStack Platform 10 undercloud and overcloud. It covers installing the Red Hat OpenStack Platform 10 director, customizing the environment configuration files (Heat templates) for the overcloud deployment, and deploying the overcloud. The Heat templates are customized to provide environment-specific details for the overcloud deployment. This section also provides configuration details for the new monitoring and logging environments.

5.1. Installation and Configuration of the Red Hat OpenStack Platform 10 director

Red Hat OpenStack Platform 10 director can be installed on a bare metal machine or a virtual machine. The system resource requirements are similar for a virtual machine or a physical bare metal server. The following are the minimum system configuration requirements.

  • 8 CPU cores
  • 16GB RAM
  • 40GB Disk Storage
  • Two network interfaces with a minimum of 1 Gbps; 10 Gbps is recommended for the Provisioning network

Create the Red Hat OpenStack Platform 10 director Virtual Machine

In this reference architecture, Red Hat OpenStack Platform 10 director is installed on a virtual machine. The virtual machine is hosted on an HPE ProLiant DL360 Gen9 server running Red Hat Enterprise Linux 7.3 with KVM enabled. Create a virtual machine with the following requirements.

  • CPU – 8 virtual CPU cores
  • Memory – 16,384 MiB RAM
  • Storage – 100 GiB virtual disk
  • Two Network adapters – e1000
  • Software Selection – Infrastructure Server

Connect one network adapter to the Provisioning Network and the other to the External Network as shown in Figure 2.
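
For reference, a virtual machine matching these requirements can also be created from the command line with virt-install. The following is a minimal sketch; the bridge names br-prov (Provisioning) and br-ext (External) and the installation ISO path are assumptions that must be adjusted to match the local environment:

$ sudo virt-install --name rhosp-director --vcpus 8 --memory 16384 --disk size=100 --network bridge=br-prov,model=e1000 --network bridge=br-ext,model=e1000 --cdrom /var/lib/libvirt/images/rhel-server-7.3-x86_64-dvd.iso --graphics vnc

The virtual machine can equally be created with virt-manager; the Infrastructure Server software selection is chosen during the operating system installation.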

5.1.1. Install and Configure the Operating System

Install Red Hat Enterprise Linux 7.3 on the newly created virtual machine that will be used for Red Hat OpenStack Platform 10 director. The following section provides a high-level overview of the Red Hat OpenStack Platform 10 undercloud installation and configuration. This overview is specific to the reference architecture described in this document. Refer to the Red Hat OpenStack Platform 10 installation guide for complete installation documentation.

Create a stack user and assign a password

# useradd stack
# passwd stack (specify password)

Add user stack to the sudoers file

# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
# chmod 0440 /etc/sudoers.d/stack

Create template and image directories

# su - stack
$ mkdir images
$ mkdir templates

Configure the hostname, using the fully qualified domain name of the director host (for example, director.example.com)

$ sudo hostnamectl set-hostname <RHOSP director FQDN>
$ sudo hostnamectl set-hostname --transient <RHOSP director FQDN>

Add the Red Hat OpenStack Platform director FQDN to /etc/hosts
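
For example, assuming the undercloud local IP of 192.168.20.20 and the example FQDN used above (substitute the actual address and name for the environment):

$ sudo sh -c 'echo "192.168.20.20 director.example.com director" >> /etc/hosts'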

Register and attach repositories

$ sudo subscription-manager register
$ sudo subscription-manager list --available --all
$ sudo subscription-manager attach --pool=<pool_id>
$ sudo subscription-manager repos --disable=*
$ sudo subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-ha-for-rhel-7-server-rpms --enable=rhel-7-server-openstack-10-rpms
$ sudo yum update -y
$ sudo reboot

Install the tripleoclient

$ sudo yum install -y python-tripleoclient

Copy and edit the undercloud.conf file
Installing python-tripleoclient provides a sample configuration file, undercloud.conf.sample, in /usr/share/instack-undercloud/. Copy this file to /home/stack/undercloud.conf and modify it for the Red Hat OpenStack Platform undercloud deployment. Below is an example of the modifications made to the undercloud.conf file for the deployment documented in this reference architecture.
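
For example, the sample file can be copied into place before editing:

$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf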

undercloud_hostname = director.hpecloud.lab.eng.bos.redhat.com
local_ip = 192.168.20.20/24
network_gateway = 192.168.20.20
undercloud_public_vip = 192.168.20.2
undercloud_admin_vip = 192.168.20.3
undercloud_service_certificate = /etc/pki/instack-certs/undercloud.pem
local_interface = ens4
network_cidr = 192.168.20.0/24
masquerade_network = 192.168.20.0/24
dhcp_start = 192.168.20.50
dhcp_end = 192.168.20.99
inspection_interface = br-ctlplane
inspection_iprange = 192.168.20.100,192.168.20.120
generate_service_certificate = true
certificate_generation_ca = local
undercloud_debug = false

5.1.2. Install the Undercloud

Once the tripleoclient has been installed and the undercloud.conf file has been modified, the undercloud can be installed. Execute the following command to install the undercloud:

$ openstack undercloud install

When the installation script completes, source the stackrc file and verify that the undercloud is operational.
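
The stackrc file is created in the stack user's home directory during the undercloud installation:

$ source ~/stackrc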

$ openstack service list

5.1.3. Configure the Undercloud

Install, copy, and extract the image files

$ sudo yum install rhosp-director-images rhosp-director-images-ipa
$ cd ~/images
$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-10.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-10.0.tar; do tar -xvf $i; done

The following image files will be extracted to /home/stack/images/

  • overcloud-full.qcow2
  • overcloud-full.initrd
  • overcloud-full.vmlinuz
  • ironic-python-agent.initramfs
  • ironic-python-agent.kernel

Modify image files (optional)

The overcloud image can be modified using virt-customize to set a root password and permit root SSH logins. This is an optional step, but it can be useful when troubleshooting the overcloud deployment.

$ sudo systemctl start libvirtd
$ virt-customize -a overcloud-full.qcow2 --root-password password:redhat --run-command 'sed -i -e "s/.*PasswordAuthentication.*/PasswordAuthentication yes/" /etc/ssh/sshd_config' --run-command 'sed -i -e "s/.*PermitRootLogin.*/PermitRootLogin yes/" /etc/ssh/sshd_config'
$ sudo systemctl stop libvirtd

Upload the overcloud images to the Undercloud Glance repository

$ openstack overcloud image upload --image-path /home/stack/images/
$ openstack image list

Set the DNS name servers on the undercloud subnet

Get the UUID of the undercloud subnet by executing:

$ openstack subnet list

Update the DNS name servers by passing the subnet UUID and the DNS name server addresses:

$ openstack subnet set --dns-nameservers 10.5.30.160 03dae028-f9bf-47dc-bc2b-d54a72910abc
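
The change can be verified by displaying the subnet; the UUID below is the one returned by openstack subnet list in this environment:

$ openstack subnet show 03dae028-f9bf-47dc-bc2b-d54a72910abc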

Create instackenv.json and perform introspection

The introspection process contacts each node to be used in the overcloud deployment and builds an inventory that is stored in the undercloud's Swift object store. The first step in performing an introspection is to create an instackenv.json file that contains authentication and connection information for each node. This reference architecture was tested with both the generic pxe_ipmitool driver and the pxe_ilo driver; the pxe_ilo driver is documented in this reference architecture. A new local user named root was created in iLO with the following account privileges: Administer User Accounts, Remote Console Access, Virtual Power and Reset, Virtual Media, and Configure iLO Settings. This account was used to perform the introspection.

Below is an excerpt from the instackenv.json file. Refer to Appendix B for an example of the full instackenv.json file.

"nodes":[
     {
	"pm_type":"pxe_ilo",
	"mac":[
	     "94:18:82:08:f0:14"
	      ],
	"capabilities": "profile:ceph-storage,boot_option:local",
	"cpu":"2",
	"memory":"4096",
	"disk":"146",
	"arch":"x86_64",
	"pm_user":"root",
	"pm_password":"redhat",
	"pm_addr":"10.19.20.140"
    }

Notice the line "capabilities": "profile:ceph-storage,boot_option:local" in the above entry. This assigns the ceph-storage profile to the HPE ProLiant DL380 Gen9 with the iLO address of 10.19.20.140. The ceph-storage profile is assigned to each of the three HPE ProLiant DL380 Gen9 servers that are configured with additional drives for Ceph OSDs and journals. A similar entry must be created for each node where introspection will be performed. Refer to Appendix B for the complete instackenv.json file example.

Import the instackenv.json file to register the nodes with Red Hat OpenStack Platform director and configure the boot images using the following commands:

$ openstack baremetal import --json ~/instackenv.json
$ openstack baremetal configure boot

Verify that the introspection PXE images are installed in /httpboot

  • agent.kernel
  • agent.ramdisk
  • inspector.ipxe

List the imported baremetal nodes
All nodes should show a Provisioning State of available and a Power State of power off.

$ openstack baremetal node list

| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
| af95182a-3107-423c-a9fa-8b14fb44d825 | None | None | power off | available | False |
| 93f85651-aba0-436b-95d4-783d81622960 | None | None | power off | available | False |
| 2faac1e8-715e-4f17-8314-7bace3e4ec01 | None | None | power off | available | False |
| 27f4be16-88dd-466f-a80c-cfaa3c8dec09 | None | None | power off | available | False |
| bbc764a0-e66b-406e-8c02-a2b2da21b38c | None | None | power off | available | False |
| 1138f2f3-ebfc-43c5-8ac0-e28a62f6be21 | None | None | power off | available | False |
| bd3c80f3-92e7-4e75-8272-8ad73bd7efed | None | None | power off | available | False |
| efeb181e-dd3a-4ad1-bc03-dbcfedb2bc97 | None | None | power off | available | False |
| 858f277f-5dd8-457d-9510-45d22173fc1e | None | None | power off | available | False |
| 87132020-612d-46b9-99ed-0a1509e67254 | None | None | power off | available | False |

Set all nodes from available to manageable

$ for node in $(openstack baremetal node list --fields uuid -f value) ; do openstack baremetal node manage $node ; done

Execute openstack baremetal node list to verify the Provisioning State has been set to manageable as shown below.

$ openstack baremetal node list

| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
| af95182a-3107-423c-a9fa-8b14fb44d825 | None | None | power off | manageable | False |
| 93f85651-aba0-436b-95d4-783d81622960 | None | None | power off | manageable | False |
| 2faac1e8-715e-4f17-8314-7bace3e4ec01 | None | None | power off | manageable | False |
| 27f4be16-88dd-466f-a80c-cfaa3c8dec09 | None | None | power off | manageable | False |
| bbc764a0-e66b-406e-8c02-a2b2da21b38c | None | None | power off | manageable | False |
| 1138f2f3-ebfc-43c5-8ac0-e28a62f6be21 | None | None | power off | manageable | False |
| bd3c80f3-92e7-4e75-8272-8ad73bd7efed | None | None | power off | manageable | False |
| efeb181e-dd3a-4ad1-bc03-dbcfedb2bc97 | None | None | power off | manageable | False |
| 858f277f-5dd8-457d-9510-45d22173fc1e | None | None | power off | manageable | False |
| 87132020-612d-46b9-99ed-0a1509e67254 | None | None | power off | manageable | False |

Perform the introspection

$ openstack overcloud node introspect --all-manageable --provide

Monitor the progress

$ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -u openstack-ironic-conductor -f

When the introspection is complete, the following message should be displayed on the screen:

Started Mistral Workflow. Execution ID: 625f7c8e-adb0-4541-a9f1-a282dc4c562b
Waiting for introspection to finish...
Introspection for UUID 1138f2f3-ebfc-43c5-8ac0-e28a62f6be21 finished successfully.
Introspection for UUID 93f85651-aba0-436b-95d4-783d81622960 finished successfully.
Introspection for UUID efeb181e-dd3a-4ad1-bc03-dbcfedb2bc97 finished successfully.
Introspection for UUID 87132020-612d-46b9-99ed-0a1509e67254 finished successfully.
Introspection for UUID 858f277f-5dd8-457d-9510-45d22173fc1e finished successfully.
Introspection for UUID af95182a-3107-423c-a9fa-8b14fb44d825 finished successfully.
Introspection for UUID 27f4be16-88dd-466f-a80c-cfaa3c8dec09 finished successfully.
Introspection for UUID 2faac1e8-715e-4f17-8314-7bace3e4ec01 finished successfully.
Introspection for UUID bd3c80f3-92e7-4e75-8272-8ad73bd7efed finished successfully.
Introspection for UUID bbc764a0-e66b-406e-8c02-a2b2da21b38c finished successfully.
Introspection completed.

List the Nodes after Introspection

Once the introspection is complete, executing the openstack baremetal node list command should show the Provisioning State as available, Maintenance as False, and the Power State as power off for each of the baremetal nodes.

$ openstack baremetal node list

| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
| 35b90609-6783-4958-b84f-a8415cd49438 | None | None | power off | available | False |
| 152cc51f-2dd5-474e-a04d-a029fac39175 | None | None | power off | available | False |
| 6761cb95-b5c6-44ad-931e-dd3bc1de92f9 | None | None | power off | available | False |
| d5ba620e-11ef-4ce3-bfce-ce60c927a5eb | None | None | power off | available | False |
| 138963ce-02f9-4944-90eb-16513c124727 | None | None | power off | available | False |
| df3453a9-c86a-4620-81ff-45254468860c | None | None | power off | available | False |
| eed0b799-9f35-472d-a43f-89442a1bc48b | None | None | power off | available | False |
| d1bd0e50-ff9d-4654-9830-b36b2223e914 | None | None | power off | available | False |
| 61826a3a-5381-44bd-b92d-2fd89da00b4d | None | None | power off | available | False |
| 24670e3b-e34e-4489-94d0-b9a0a1aa23ae | None | None | power off | available | False |

List the Profiles

Executing the openstack overcloud profiles list command displays the Node UUID, Provisioning State, and Current Profile. The Current Profile was assigned in the instackenv.json "capabilities" setting for each node. No additional profiles have been assigned and the nodes have not been deployed, so the Node Name and Possible Profiles columns are blank.

$ openstack overcloud profiles list

| Node UUID | Node Name | Provision State | Current Profile | Possible Profiles |
| af95182a-3107-423c-a9fa-8b14fb44d825 |  | available | compute |  |
| 93f85651-aba0-436b-95d4-783d81622960 |  | available | compute |  |
| 2faac1e8-715e-4f17-8314-7bace3e4ec01 |  | available | compute |  |
| 27f4be16-88dd-466f-a80c-cfaa3c8dec09 |  | available | compute |  |
| bbc764a0-e66b-406e-8c02-a2b2da21b38c |  | available | control |  |
| 1138f2f3-ebfc-43c5-8ac0-e28a62f6be21 |  | available | control |  |
| bd3c80f3-92e7-4e75-8272-8ad73bd7efed |  | available | control |  |
| efeb181e-dd3a-4ad1-bc03-dbcfedb2bc97 |  | available | ceph-storage |  |
| 858f277f-5dd8-457d-9510-45d22173fc1e |  | available | ceph-storage |  |
| 87132020-612d-46b9-99ed-0a1509e67254 |  | available | ceph-storage |  |
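
If a profile assignment needs to be corrected after import, the node's capabilities property can be updated directly. The following is a sketch that reapplies the compute profile to the first node in the table above; adjust the UUID and profile name as needed:

$ ironic node-update af95182a-3107-423c-a9fa-8b14fb44d825 replace properties/capabilities='profile:compute,boot_option:local'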

Configure Red Hat Ceph Storage Nodes
The next set of commands reviews the introspection inventory for the Red Hat Ceph Storage nodes and sets the root disks. The root disk should be set for all of the Red Hat OpenStack Platform 10 nodes. Because the Controller and Compute nodes have a single logical disk, setting their root disk is less critical than setting it for the Red Hat Ceph Storage nodes, which have multiple logical disks. To accomplish this, the ironic introspection data for each node is exported to a new directory under /home/stack.

Checking the ironic inventory for the disks

Create a directory for the exported data in the stack user's home directory, /home/stack/, and export the ironic inspector Swift password:

$ mkdir swift-data
$ export SWIFT_PASSWORD=`sudo crudini --get /etc/ironic-inspector/inspector.conf swift password`

SWIFT_PASSWORD=ff25ca2e11ebf0373f7854788c3298b8767688d7

The above command requires the crudini package to be installed on the Red Hat OpenStack Platform director node. Alternatively, this password can be found in /home/stack/undercloud-passwords.conf:

undercloud_ironic_password=ff25ca2e11ebf0373f7854788c3298b8767688d7
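
For example, the value can be read directly from that file:

$ grep undercloud_ironic_password ~/undercloud-passwords.conf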

Execute the following commands to export the introspection data for each node.

$ cd swift-data
$ for node in $(ironic node-list | grep -v UUID| awk '{print $2}'); do swift -U service:ironic -K $SWIFT_PASSWORD download ironic-inspector inspector_data-$node; done

This command will create a file for each node that contains the introspection data. Use the following command to identify the disks and device names for each node.

$ for node in $(ironic node-list | grep -v UUID| awk '{print $2}'); do echo "NODE: $node" ; cat inspector_data-$node | jq '.inventory.disks' ; echo "-----" ; done

The output for each node will resemble the following:

NODE: af95182a-3107-423c-a9fa-8b14fb44d825
[
 {
  "size": 1200210141184,
  "rotational": true,
  "vendor": "HP",
  "name": "/dev/sda",
  "wwn_vendor_extension": "0x947935a19c341de5",
  "wwn_with_extension": "0x600508b1001cee69947935a19c341de5",
  "model": "LOGICAL VOLUME",
  "wwn": "0x600508b1001cee69",
  "serial": "600508b1001cee69947935a19c341de5"
 }
]

Refer back to the profile list generated by the openstack overcloud profiles list command. Comparing NODE: af95182a-3107-423c-a9fa-8b14fb44d825 to the Node UUID column in the profile list table shows that this data belongs to a node that has the compute profile assigned. The compute and controller nodes display only a single logical disk. The ceph-storage nodes display 13 logical disks: one for the operating system, ten for the Ceph OSDs, and two SSD drives for the Ceph journal files.

The “serial” key reflects the Drive Unique ID of the logical drives created using the HP Smart Storage Administrator. This information was captured earlier in the section titled “Red Hat Ceph Storage Configuration” when creating the logical drives on the Red Hat Ceph Storage nodes.

The journal SSD drives can be identified by the "rotational": false key/value pair.
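
For example, the non-rotational (SSD) journal devices can be listed directly from the exported introspection data with a jq filter; this is a sketch that reuses the inspector_data files downloaded above:

$ for node in $(ironic node-list | grep -v UUID| awk '{print $2}'); do echo "NODE: $node" ; jq '.inventory.disks[] | select(.rotational == false) | .name' inspector_data-$node ; done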

In our example the device names for the Red Hat Ceph Storage nodes use the following conventions:

  • /dev/sda is the operating system disk
  • /dev/sdb through /dev/sdk are used for the Ceph OSDs
  • /dev/sdl and /dev/sdm are solid state drives used for the Ceph journals

The device names for the OSD and journal disks are passed as parameters for the storage environment in the extraParameters.yaml file described in the next section.

Perform the following commands to set the root disks on the ceph-storage nodes.

Red Hat Ceph Storage node 1

$ ironic node-update d294282a-3136-4324-9d92-0531432a94d6 add properties/root_device='{"serial":"600508b1001c34622cdd92447070f364"}'

Red Hat Ceph Storage node 2

$ ironic node-update ae8d1b83-3387-4568-97d8-38e52279f422 add properties/root_device='{"serial":"600508b1001c98c51329ff4030ab314f"}'

Red Hat Ceph Storage node 3

$ ironic node-update f96decbd-7057-470f-8575-293f4a1a2811 add properties/root_device='{"serial":"600508b1001c19d84afd81f166812fd3"}'
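
The root disk hint can be confirmed by displaying the node properties; a sketch using the first Red Hat Ceph Storage node's UUID from above:

$ ironic node-show --fields properties d294282a-3136-4324-9d92-0531432a94d6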

Red Hat Ceph Storage Tuning

The default journal size is set during the installation with the following parameter:

ceph::profile::params::osd_journal_size: 5120

Configuring the placement groups will be discussed in the next section.

Heat Templates

In the stack user’s home directory, create a directory structure to hold any customized Heat templates; a command that creates this layout follows the file list below. In this reference architecture, six files hold the customized configuration templates and scripts.
Those files include:

  • templates/mytemplates/extraParams.yaml
  • templates/mytemplates/nic-configs/compute.yaml
  • templates/mytemplates/nic-configs/controller.yaml
  • templates/mytemplates/nic-configs/ceph-storage.yaml
  • templates/mytemplates/first-boot/wipe-disks.yaml
  • templates/mytemplates/first-boot/wipe-disks.sh
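
A directory layout matching the file list above can be created with:

$ mkdir -p ~/templates/mytemplates/nic-configs ~/templates/mytemplates/first-boot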

The contents of these files can be found in Appendix B of this document. Updated versions can also be found at:
https://github.com/RHsyseng/OSP-HPE

Modifications to the network environment, storage, monitoring, and logging environments are consolidated into the extraParameters.yaml file. The nic-configs directory contains the network configuration files for the controller, compute, and ceph-storage nodes. These files provide the configuration information for creating the bond interfaces, assigning the bonded interfaces to their respective bridge, and assigning the VLANs to the interface for the storage and cloud management trunks.

The wipe-disks.yaml and wipe-disks.sh files are executed on the first boot of the Ceph nodes to wipe the disks designated for OSDs and journals.

Network

The ControlPlane network uses the eno1 interface on each of the servers. This interface is used for communication between Red Hat OpenStack Platform 10 director and the Red Hat OpenStack Platform 10 compute, controller, and Red Hat Ceph Storage nodes. The ControlPlaneIp, ControlPlaneSubnetCidr, and EC2MetadataIp variables are defined in the extraParameters.yaml file. Below is an excerpt from the controller.yaml file; refer to Appendix B to view the entire controller.yaml, compute.yaml, and ceph-storage.yaml network configuration files.

            -
             type: interface
             name: eno1
             use_dhcp: false
             addresses:
               -
                ip_netmask:
                  list_join:
                    - '/'
                    - - {get_param: ControlPlaneIp}
                      - {get_param: ControlPlaneSubnetCidr}
             routes:
               -
                ip_netmask: 169.254.169.254/32
                next_hop: {get_param: EC2MetadataIp}

Bonding

The bonded interfaces are defined in the network configuration files: controller.yaml, compute.yaml, and ceph-storage.yaml. The cloud networking bond, which carries the Internal API, External, Storage Management, and Tenant networks, is defined as an ovs_bond and attached to an ovs_bridge in the configuration files. The ovs_bridge is defined in the extraParameters.yaml file and the ovs_bond name is bond0. An excerpt of the controller.yaml file is shown below:

            -
             type: ovs_bridge
             name: {get_input: bridge_name}
             dns_servers: {get_param: DnsServers}
             members:
               -
                type: ovs_bond
                name: bond0
                bonding_options: {get_param: BondInterfaceOvsOptions}
                members:
                  -
                   type: interface
                   name: eno49
                   mtu: 9000
                   primary: true
                  -
                   type: interface
                   name: ens2f0
                   mtu: 9000
               -
                type: vlan
                device: bond0
                vlan_id: {get_param: ExternalNetworkVlanID}
                addresses:
                  -
                   ip_netmask: {get_param: ExternalIpSubnet}
                routes:
                  -
                   default: true
                   next_hop: {get_param: ExternalInterfaceDefaultRoute}
               -
                type: vlan
                device: bond0
                vlan_id: {get_param: InternalApiNetworkVlanID}
                addresses:
                  -
                   ip_netmask: {get_param: InternalApiIpSubnet}
               -
                type: vlan
                device: bond0
                vlan_id: {get_param: TenantNetworkVlanID}
                addresses:
                  -
                   ip_netmask: {get_param: TenantIpSubnet}
               -
                type: vlan
                device: bond0
                mtu: 9000
                vlan_id: {get_param: StorageMgmtNetworkVlanID}
                addresses:
                  -
                   ip_netmask: {get_param: StorageMgmtIpSubnet}

The Storage network is also defined in the network configuration files. In this case the ovs_bridge name is defined in the configuration file as br_storage and the ovs_bond name is bond1. Below is an excerpt from the controller.yaml file:

            -
             type: ovs_bridge
             name: br_storage
             dns_servers: {get_param: DnsServers}
             members:
               -
                type: ovs_bond
                name: bond1
                bonding_options: {get_param: BondInterfaceOvsOptions}
                members:
                  -
                   type: interface
                   name: eno50
                   primary: true
                   mtu: 9000
                  -
                   type: interface
                   name: ens2f1
                   mtu: 9000
               -
                type: vlan
                device: bond1
                mtu: 9000
                vlan_id: {get_param: StorageNetworkVlanID}
                addresses:
                  -
                   ip_netmask: {get_param: StorageIpSubnet}

The network interfaces used for these bonds are 10 Gigabit interfaces. Bond0 uses the two 10 Gigabit interfaces eno49 and ens2f0. Bond1 uses the two 10 Gigabit interfaces eno50 and ens2f1. The BondInterfaceOvsOptions value for the OVS bonded interfaces is set to use the balance-slb bond mode with LACP disabled (lacp=off) and is defined in the extraParameters.yaml file. The VLAN parameters, the VlanIDs and IpSubnets, are also defined in the extraParameters.yaml file.

Monitoring and Logging Configuration

Monitoring is performed with a Sensu client that sends alerts to a preconfigured Sensu monitoring server, which are then presented through a Uchiwa dashboard. Figure 3 depicts the Uchiwa dashboard of a Sensu server that is receiving alerts from the Red Hat OpenStack Platform 10 deployment used in this reference architecture.

Figure 3: Uchiwa dashboard for Sensu Monitoring

The log files are collected using Fluentd and sent to a preconfigured logging server, then presented via a Kibana dashboard. The Kibana dashboard shown in Figure 4 illustrates the event logging from the Red Hat OpenStack Platform 10 deployment used in this reference architecture.

Figure 4: Kibana dashboard for Fluentd Logging

Including the monitoring and logging environments in the Red Hat OpenStack Platform 10 deployment installs the client-side monitoring and logging components on the Red Hat OpenStack Platform 10 overcloud nodes. The server-side components (Sensu, Fluentd, and Kibana) for receiving monitors and log files are not installed as part of the Red Hat OpenStack Platform 10 deployment. For an automated deployment of the Sensu monitoring server and Fluentd logging server, refer to the opstools-ansible git repository. The opstools-ansible project is an open source, community-supported project and is not part of this reference architecture.

The Ansible playbooks in the opstools-ansible repository install and configure the server-side components to work with the monitoring and logging clients on the Red Hat OpenStack Platform 10 overcloud nodes. The project also installs the OpenStack-specific Sensu checks required to monitor the OpenStack services, and the Sensu client automatically subscribes to these checks during the Red Hat OpenStack Platform 10 deployment. The monitoring and logging environments are configured by specifying the parameter variables in extraParameters.yaml. Below is an excerpt from the extraParameters.yaml file that illustrates the monitoring and logging variables:

parameter_defaults:
  # Monitoring parameters
  MonitoringRabbitHost: 192.168.20.201
  MonitoringRabbitPort: 5672
  MonitoringRabbitUserName: sensu
  MonitoringRabbitPassword: sensu
  MonitoringRabbitUseSSL: false
  MonitoringRabbitVhost: "/sensu"
  # Logging parameters
  LoggingServers:
    - host: 192.168.20.202
      port: 24224

In this example, the Sensu monitoring server is installed on 192.168.20.201 and the logging server is installed on 192.168.20.202. These servers must be available in the existing infrastructure or installed separately; the Sensu monitoring server and Fluentd logging server are not installed as part of the Red Hat OpenStack Platform 10 deployment or this reference architecture. The opstools-ansible project, found at https://github.com/centos-opstools/opstools-ansible, provides Ansible playbooks that install and configure the Sensu monitoring and Fluentd logging servers. In this example, monitoring and logging are configured to communicate over the control plane network. The complete extraParameters.yaml file can be found in Appendix B at the end of this document.

5.2. Deploy the Overcloud

Installing the Overcloud from the Command Line

The following script launches the openstack overcloud deploy command and deploys an overcloud named osphpe:

source stackrc
openstack overcloud deploy \
--templates \
-e/usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e/usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
-e/usr/share/openstack-tripleo-heat-templates/environments/monitoring-environment.yaml \
-e/usr/share/openstack-tripleo-heat-templates/environments/logging-environment.yaml \
-e /home/stack/templates/mytemplates/extraParams.yaml \
--stack osphpe \
--debug \
--log-file overcloudDeploy.log \
--ceph-storage-flavor ceph-storage \
--ceph-storage-scale 3 \
--control-flavor control \
--control-scale 3 \
--compute-flavor compute \
--compute-scale 4 \
--block-storage-scale 0 \
--swift-storage-scale 0 \
--ntp-server 10.16.255.1

The --templates option in the openstack overcloud deploy command uses the default core Heat template collection in /usr/share/openstack-tripleo-heat-templates, and the -e options specify the environment files applied to the deployment. The extraParams.yaml file specifies the custom environment parameters (variables) for the network, storage, logging, and monitoring environments. The --stack option defines the name of the overcloud that will be deployed.

Installing the Overcloud with the GUI

Red Hat OpenStack Platform 10 supports deploying the overcloud using the TripleO GUI. This installation method offers the installer the ability to customize the overcloud environment, validate the configuration, and launch the overcloud deployment.
Demonstrating the GUI installer is beyond the scope of this document.

Monitoring the Deployment

Below are some commands that are helpful in monitoring the overcloud deployment.

  • openstack baremetal node list
  • openstack server list
  • openstack stack event list osphpe
  • openstack stack resource list osphpe -n5
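
For example, the stack resources that have not yet completed can be watched during the deployment; this is a sketch using the osphpe stack name defined in the deploy script:

$ watch -n 30 "openstack stack resource list osphpe -n5 | grep -v COMPLETE"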

Additionally, opening an iLO remote console allows the deployer to view the provisioning of the Red Hat OpenStack Platform 10 nodes, although this may not be feasible for larger environments.

A successful completion of the overcloud deploy script returns 0 with the following message, which includes the Overcloud Endpoint URL:

Overcloud Endpoint: http://10.19.20.157:5000/v2.0
Overcloud Deployed
clean_up DeployOvercloud:
END return value: 0

The deployment should take approximately one hour to complete. If the deployment is taking considerably longer than one hour, it will eventually time out and return a 1, indicating an error condition. Refer to Appendix C for help with troubleshooting a failed deployment.

Accessing the Overcloud

Once the overcloud has been successfully deployed, an environment file for the overcloud is created in the stack user’s home directory. The default file name is overcloudrc. The actual file name will be the name of the stack with rc appended to it. In this reference architecture the overcloud stack name is osphpe, therefore the environment file that is created as part of the Red Hat OpenStack Platform 10 deployment is /home/stack/osphperc. This file contains the openstack environment variables for accessing the overcloud, including the default OS_USERNAME, OS_PASSWORD, OS_AUTH_URL, and OS_TENANT_NAME. Below is an example of the osphperc file:

export OS_NO_CACHE=True
export OS_CLOUDNAME=osphpe
export OS_AUTH_URL=http://10.19.20.154:5000/v2.0
export NOVA_VERSION=1.1
export COMPUTE_API_VERSION=1.1
export OS_USERNAME=admin
export no_proxy=,10.19.20.156,192.168.20.58,10.19.20.158,192.168.20.58,10.19.20.154,192.168.20.60
export OS_PASSWORD=4Gpy773FwnzGJsVevyP2EVbmT
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
export OS_TENANT_NAME=admin

Source the /home/stack/osphperc file to execute openstack commands against the newly deployed overcloud named osphpe. Additionally, the Red Hat OpenStack Platform 10 Horizon interface can be accessed from a browser using the OS_AUTH_URL IP address. The user name and password for the Horizon interface are also defined in the osphperc file as OS_USERNAME=admin and OS_PASSWORD=4Gpy773FwnzGJsVevyP2EVbmT. The admin password, OS_PASSWORD, is randomly generated and is unique for each Red Hat OpenStack Platform 10 deployment.
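
For example, to verify access to the overcloud from the director node:

$ source ~/osphperc
$ openstack service list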