Chapter 6. Post Deployment and Validation
This section provides post deployment and validation information to ensure that both the Red Hat OpenStack Platform 10 overcloud and the Red Hat Ceph Storage cluster are operational. After the successful deployment of the overcloud, verify that the Ceph cluster is operational and modify the configuration as necessary using the ceph commands described in this section. To ensure proper operation of Red Hat OpenStack Platform 10, deploy a test virtual machine instance and attach a Floating IP address to test external network connectivity.
6.1. Red Hat Ceph Storage Post Deployment
Verify the Red Hat Ceph Storage deployment
The Red Hat Ceph Storage monitors run on the overcloud controller nodes. The controller nodes are accessible over ssh as the heat-admin user via the control plane IP address assigned to each node. To determine the control plane IP address of each Red Hat OpenStack Platform node, execute openstack server list as the stack user on the Red Hat OpenStack Platform director.
$ source stackrc
$ openstack server list
A table will be displayed showing the status of all the Red Hat OpenStack Platform nodes along with the control plane IP address. An excerpt from the output of the command is shown below. In our example the controller-0 control plane IP address is 192.168.20.64.
| osphpe-controller-0 | ACTIVE | ctlplane=192.168.20.64 | overcloud-full |
Accessing and verifying the Red Hat Ceph Storage cluster
From the Red Hat OpenStack Platform director, log into the overcloud controller using ssh heat-admin@<controller-0 ctlplane IP address>
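For example, using the controller-0 control plane address shown in the openstack server list output above:

$ ssh heat-admin@192.168.20.64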
The ceph osd tree command will display the individual OSD devices on each node.
$ sudo ceph osd tree
The output of the ceph osd tree command, showing the status of the individual OSD devices, is displayed below.
ID WEIGHT   TYPE NAME                      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 32.73285 root default
-2 10.91095     host osphpe-cephstorage-0
 0  1.09109         osd.0                  up      1.00000  1.00000
 1  1.09109         osd.1                  up      1.00000  1.00000
 4  1.09109         osd.4                  up      1.00000  1.00000
 5  1.09109         osd.5                  up      1.00000  1.00000
 8  1.09109         osd.8                  up      1.00000  1.00000
11  1.09109         osd.11                 up      1.00000  1.00000
13  1.09109         osd.13                 up      1.00000  1.00000
18  1.09109         osd.18                 up      1.00000  1.00000
21  1.09109         osd.21                 up      1.00000  1.00000
24  1.09109         osd.24                 up      1.00000  1.00000
-3 10.91095     host osphpe-cephstorage-2
 2  1.09109         osd.2                  up      1.00000  1.00000
 3  1.09109         osd.3                  up      1.00000  1.00000
 6  1.09109         osd.6                  up      1.00000  1.00000
 9  1.09109         osd.9                  up      1.00000  1.00000
12  1.09109         osd.12                 up      1.00000  1.00000
14  1.09109         osd.14                 up      1.00000  1.00000
16  1.09109         osd.16                 up      1.00000  1.00000
20  1.09109         osd.20                 up      1.00000  1.00000
23  1.09109         osd.23                 up      1.00000  1.00000
26  1.09109         osd.26                 up      1.00000  1.00000
-4 10.91095     host osphpe-cephstorage-1
 7  1.09109         osd.7                  up      1.00000  1.00000
10  1.09109         osd.10                 up      1.00000  1.00000
15  1.09109         osd.15                 up      1.00000  1.00000
17  1.09109         osd.17                 up      1.00000  1.00000
19  1.09109         osd.19                 up      1.00000  1.00000
22  1.09109         osd.22                 up      1.00000  1.00000
25  1.09109         osd.25                 up      1.00000  1.00000
27  1.09109         osd.27                 up      1.00000  1.00000
28  1.09109         osd.28                 up      1.00000  1.00000
29  1.09109         osd.29                 up      1.00000  1.00000
Run sudo ceph health to verify the Ceph cluster health status.
$ sudo ceph health
HEALTH_WARN too few PGs per OSD (22 < min 30)
The ceph health command issues a warning due to an insufficient number of placement groups. The default number of placement groups can be set as part of the ceph variables in extraParams.yaml; however, increasing the default value may not be appropriate for all OSD storage pools.
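For reference, a minimal sketch of the corresponding extraParams.yaml entries is shown below. It assumes the ceph::profile::params hieradata keys consumed by the director-deployed puppet-ceph profile, and the values shown are placeholders to be replaced with the results of the placement group calculation described in the next step.

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_pool_default_pg_num: 128
    ceph::profile::params::osd_pool_default_pgp_num: 128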
Configure the number of ceph placement groups per pool
List the ceph pools
$ sudo ceph osd lspools
0 rbd,1 metrics,2 images,3 backups,4 volumes,5 vms
Use the Ceph Placement Groups (PGs) per Pool Calculator to determine the correct number of placement groups for the OpenStack pools. The Ceph Placement Groups per Pool Calculator can be found at https://access.redhat.com/labs/cephpgc/. Below is an example of the recommendations generated from the ceph placement group calculator using OpenStack block storage with replicated ceph pools:
- backups 512
- volumes 512
- vms 256
- images 128
- metrics 128
Increase Placement Groups
The following commands can be used to increase the number of placement groups:
ceph osd pool set {pool-name} pg_num {pg_num}
ceph osd pool set {pool-name} pgp_num {pgp_num}
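As an illustration only, applying the calculator recommendations listed above to the pools in this environment would look like the following. Note that pg_num can only be increased, never decreased, and pgp_num should be raised to match pg_num for each pool.

$ sudo ceph osd pool set backups pg_num 512
$ sudo ceph osd pool set backups pgp_num 512
$ sudo ceph osd pool set volumes pg_num 512
$ sudo ceph osd pool set volumes pgp_num 512
$ sudo ceph osd pool set vms pg_num 256
$ sudo ceph osd pool set vms pgp_num 256
$ sudo ceph osd pool set images pg_num 128
$ sudo ceph osd pool set images pgp_num 128
$ sudo ceph osd pool set metrics pg_num 128
$ sudo ceph osd pool set metrics pgp_num 128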
The rbd pool is not necessary and can be deleted. The following command will delete the rbd pool.
$ sudo ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
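After the placement group counts have been adjusted and the rbd pool removed, recheck the cluster health. Once the new placement groups have been created, the earlier warning should clear and the command should report HEALTH_OK.

$ sudo ceph health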
For additional post installation tuning, refer to the Red Hat Ceph Storage documentation.
6.2. Create a Tenant and Deploy an Instance
In this section, the following operations are performed to verify the Red Hat OpenStack Platform 10 cloud deployment:
- Create Tenant Project
- Create a Private Network
- Create and Configure a Subnet
- Create Router and Add the Router Interfaces
- Create Floating IPs
- Create Keypairs
- Create a Security Group
- Download and customize a cloud image
- Upload the cloud image to Glance
- Create Flavor for instance deployment
- Deploy an Instance
Create an OpenStack Tenant
The name of the OpenStack cloud in this example is osphpe. As mentioned in Chapter 5, an environment file named osphperc was created in the stack user’s home directory. Source this file to ensure the openstack commands are executed using the overcloud (osphpe) keystone auth URL and admin account.
$ source osphperc
Create a new OpenStack project called hpedemo-tenant.
$ openstack project create hpedemo-tenant
Create a new user account. This user account will be used for creating network resources and virtual machine instances in the new OpenStack project named hpedemo-tenant.
$ openstack user create hpeuser --password redhat
Add the hpeuser as a member to the hpedemo-tenant project.
$ openstack role add --user hpeuser --project hpedemo-tenant _member_
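Optionally, verify the assignment while the admin credentials are still sourced; the following command should list hpeuser with the _member_ role on the hpedemo-tenant project.

$ openstack role assignment list --user hpeuser --project hpedemo-tenant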
Create an OpenStack environment file for the new OpenStack project
Next, create an environment file named keystonerc_hpedemo (the name is not important). This file will set the environment to use the hpeuser account and the project to hpedemo-tenant.
$ vi keystonerc_hpedemo
export OS_USERNAME=hpeuser
export OS_TENANT_NAME=hpedemo-tenant
export OS_PASSWORD=redhat
export OS_CLOUDNAME=osphpe
export OS_AUTH_URL=<keystone auth URL>
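The keystone auth URL of the overcloud is intentionally left as a placeholder above. Assuming osphperc was created from the overcloudrc file generated by the director, the value can be copied from that file, for example:

$ grep OS_AUTH_URL ~/osphperc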
$ source keystonerc_hpedemo
Create a Private Network and Subnet
Execute the following commands to create a private network for the hpedemo-tenant project.
$ openstack network list
$ openstack network create net1
$ openstack subnet create --network net1 --subnet-range 10.2.2.0/24 hpedemo-tenant-subnet
$ openstack subnet list
$ openstack subnet set --dns-nameserver <dns-IP Address> <subnet ID>
Create Router and assign interfaces
Create a router and add an interface using the subnet ID from the openstack subnet list command executed in the previous step.
$ openstack router create router1
$ neutron router-interface-add router1 <subnet ID>
Create the External Network and Allocate Floating IP Range
The external network and subnet are created using the OpenStack admin credentials. Source the osphperc file to set the overcloud environment variables for the OpenStack admin user.
$ source osphperc
$ neutron net-create nova --router:external --provider:network_type vlan --provider:physical_network datacentre --provider:segmentation_id 104
$ neutron subnet-create --name public --gateway 10.19.20.254 --allocation-pool start=10.19.20.180,end=10.19.20.190 nova 10.19.20.128/25
Execute openstack network list to capture the UUID of the external network created above.
$ openstack network list
$ neutron router-gateway-set router1 <ExternalNet ID>
Modify the Security Group to allow SSH and ICMP
Modify the default security group for the hpedemo-tenant OpenStack project to allow icmp (ping) and ssh access to the virtual machine instances. Additionally, in the next section the virtual machine instances will be required to communicate with a Sensu monitoring host on port 5672 (rabbitmq). Open port 5672 to allow the virtual machine instances to communicate with a Sensu Monitoring host.
$ source keystonerc_hpedemo
$ openstack security group list
$ openstack security group rule create --ingress --protocol icmp <default security group ID>
$ openstack security group rule create --ingress --dst-port 22 <default security group ID>
$ openstack security group rule create --ingress --dst-port 5672 <default security group ID>
$ openstack security group rule create --egress --dst-port 5672 <default security group ID>
Verify the ports have been opened.
$ openstack security group show <security group ID>
Create keypair
Create a keypair in the hpedemo-tenant project and download the keypair to the local directory. The keypair will be used for ssh access by the default user (cloud-user) account in the Red Hat Enterprise Linux 7 cloud image.
$ openstack keypair create hpeuserkp > hpeuserkp.pem
The default permissions on the downloaded keypair file, hpeuserkp.pem, must be changed or a warning will be generated indicating that the private key file is unprotected and will be ignored. Change the keyfile permissions to read only for the user.
$ chmod 400 hpeuserkp.pem
Download and customize a cloud image
The Red Hat Enterprise Linux 7 cloud image will be used to test the ability to create virtual machine instances. The downloaded image is in qcow2 format and must be converted to raw; with Red Hat Ceph Storage backing the image, volume, and ephemeral pools, raw images allow Ceph to create copy-on-write clones instead of copying and converting the image each time an instance or volume is created.
$ sudo yum install -y rhel-guest-image-7.noarch
$ qemu-img convert -f qcow2 -O raw /usr/share/rhel-guest-image-7/*.qcow2 ./rhel7.raw
The following commands customize the image to allow root login. Enabling root login is an optional step that can be useful when accessing the virtual machine instance from the console and troubleshooting network connectivity issues.
$ sudo systemctl start libvirtd
$ virt-customize -a rhel7.raw --root-password password:redhat --run-command 'sed -i -e "s/.*PasswordAuthentication.*/PasswordAuthentication yes/" /etc/ssh/sshd_config' --run-command 'sed -i -e "s/.*PermitRootLogin.*/PermitRootLogin yes/" /etc/ssh/sshd_config'
$ sudo systemctl stop libvirtd
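Optionally, the customization can be verified without booting the image. Assuming the libguestfs tools used by virt-customize are installed (and libvirtd is running, as above), virt-cat can display the modified sshd_config directly from the image file:

$ virt-cat -a rhel7.raw /etc/ssh/sshd_config | grep -E 'PasswordAuthentication|PermitRootLogin'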
Upload the Image to Glance
The guest image will now be uploaded into the Glance repository of the overcloud. Set the environment to use the hpeuser and hpedemo-tenant project (source keystonerc_hpedemo) to ensure the hpeuser created earlier has access to the image in the Glance repository.
$ source keystonerc_hpedemo
$ openstack image create --disk-format raw --container-format bare --file rhel7.raw rhel7
$ openstack image list
Create a flavor for instance deployment
In this version of OpenStack there are no default compute flavors. A new flavor must be created using the OpenStack admin account; set the environment to use the admin account (source osphperc).
$ source osphperc
$ openstack flavor create m1.small --id 1 --ram 4096 --disk 20 --vcpus 4
Boot a new Virtual Machine Instance
Create a virtual machine instance in the hpedemo-tenant project by executing the following commands.
$ source keystonerc_hpedemo
$ openstack server create --flavor 1 --image rhel7 --key-name hpeuserkp inst1
$ openstack server list
The openstack server list command may have to be executed a few times until the virtual machine instance shows a state of Active; other states that are displayed prior to Active include Spawning and Building.
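The following is a minimal sketch that polls the instance status until it reaches ACTIVE; it assumes the instance name inst1 used above and simply wraps openstack server show in a loop.

while true; do
  STATUS=$(openstack server show inst1 -f value -c status)
  echo "inst1 status: ${STATUS}"
  [ "${STATUS}" = "ACTIVE" ] && break
  sleep 10
done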
Access the instance and verify connectivity
The virtual machine instance must have a Floating IP address attached before it can be accessed from the External network. The following commands create a Floating IP address and attach it to the instance.
$ openstack floating ip create nova
$ openstack floating ip list
$ openstack server add floating ip inst1 <floating IP>
The virtual machine instance is now accessible from the External network. Using the ssh -i command with the keypair that was previously downloaded and the Floating IP address, log into the virtual machine instance.
$ ssh -i hpeuserkp.pem cloud-user@<floating IP>
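To confirm external network connectivity in both directions, the Floating IP can be pinged from a host on the external network (allowed by the ICMP rule added earlier), and an external address, for example the external gateway configured above, can be pinged from inside the instance:

$ ping -c 3 <floating IP>
$ ssh -i hpeuserkp.pem cloud-user@<floating IP> 'ping -c 3 10.19.20.254'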
