Chapter 8. Performing Tasks after Overcloud Creation

This chapter describes some of the tasks you might perform after creating your Overcloud.

8.1. Creating the Overcloud Tenant Network

The Overcloud requires a Tenant network for instances. Source the overcloudrc file and create an initial Tenant network in Neutron. For example:

$ source ~/overcloudrc
$ neutron net-create default
$ neutron subnet-create --name default --gateway 172.20.1.1 default 172.20.0.0/16

This creates a basic Neutron network called default. The Overcloud automatically assigns IP addresses from this network using an internal DHCP mechanism.

Confirm the created network with neutron net-list:

$ neutron net-list
+-----------------------+-------------+----------------------------------------------------+
| id                    | name        | subnets                                            |
+-----------------------+-------------+----------------------------------------------------+
| 95fadaa1-5dda-4777... | default     | 7e060813-35c5-462c-a56a-1c6f8f4f332f 172.20.0.0/16 |
+-----------------------+-------------+----------------------------------------------------+

8.2. Creating the Overcloud External Network

You previously configured the node interfaces to use the External network in Section 6.2, “Isolating Networks”. However, you still need to create this network on the Overcloud so that you can assign floating IP addresses to instances.

Using a Native VLAN

This procedure assumes a dedicated interface or native VLAN for the External network.

Source the overcloudrc file and create an External network in Neutron. For example:

$ source ~/overcloudrc
$ neutron net-create public --router:external --provider:network_type flat --provider:physical_network datacentre
$ neutron subnet-create --name public --enable_dhcp=False --allocation-pool=start=10.1.1.51,end=10.1.1.250 --gateway=10.1.1.1 public 10.1.1.0/24

In this example, you create a network with the name public. The Overcloud requires this specific name for the default floating IP pool. This is also important for the validation tests in Section 8.5, “Validating the Overcloud”.

This command also maps the network to the datacentre physical network. By default, datacentre maps to the br-ex bridge. Leave this option as the default unless you have used custom neutron settings during the Overcloud creation.
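If you want to confirm this mapping, you can check the bridge_mappings setting of the Open vSwitch agent on a Controller node. The file path below assumes the default ML2/OVS layout for this release and might differ in other versions; the output should show a mapping similar to datacentre:br-ex:

$ grep bridge_mappings /etc/neutron/plugins/ml2/openvswitch_agent.ini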

Using a Non-Native VLAN

If not using the native VLAN, assign the network to a VLAN using the following commands:

$ source ~/overcloudrc
$ neutron net-create public --router:external --provider:network_type vlan --provider:physical_network datacentre --provider:segmentation_id 104
$ neutron subnet-create --name public --enable_dhcp=False --allocation-pool=start=10.1.1.51,end=10.1.1.250 --gateway=10.1.1.1 public 10.1.1.0/24

The provider:segmentation_id value defines the VLAN to use. In this example, the value is 104.

Confirm the created network with neutron net-list:

$ neutron net-list
+-----------------------+-------------+---------------------------------------------------+
| id                    | name        | subnets                                           |
+-----------------------+-------------+---------------------------------------------------+
| d474fe1f-222d-4e32... | public      | 01c5f621-1e0f-4b9d-9c30-7dc59592a52f 10.1.1.0/24  |
+-----------------------+-------------+---------------------------------------------------+
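To inspect the provider attributes of the network, such as the physical network and VLAN ID, you can also run neutron net-show against the network created above:

$ neutron net-show public

The provider:network_type, provider:physical_network, and provider:segmentation_id fields in the output should match the values used during creation.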

8.3. Creating Additional Floating IP Networks

Floating IP networks can use any bridge, not just br-ex, as long as you meet the following conditions:

  • NeutronExternalNetworkBridge is set to "''" in your network environment file.
  • You have mapped the additional bridge during deployment. For example, to map a new bridge called br-floating to the floating physical network:

    $ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml --neutron-bridge-mappings datacentre:br-ex,floating:br-floating

Create the Floating IP network after creating the Overcloud:

$ neutron net-create ext-net --router:external --provider:physical_network floating --provider:network_type vlan --provider:segmentation_id 105
$ neutron subnet-create --name ext-subnet --enable_dhcp=False --allocation-pool start=10.1.2.51,end=10.1.2.250 --gateway 10.1.2.1 ext-net 10.1.2.0/24
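As an optional check that the new network can serve floating IPs, you can allocate one from it:

$ neutron floatingip-create ext-net
$ neutron floatingip-list

Use neutron floatingip-delete with the reported ID to release the address again.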

8.4. Creating the Overcloud Provider Network

A provider network is a network attached physically to a network existing outside of the deployed Overcloud. This can be an existing infrastructure network or a network that provides external access directly to instances through routing instead of floating IPs.

When creating a provider network, you associate it with a physical network, which uses a bridge mapping. This is similar to floating IP network creation. You add the provider network to both the Controller and the Compute nodes because the Compute nodes attach VM virtual network interfaces directly to the attached network interface.

For example, if the desired provider network is a VLAN on the br-ex bridge, use the following command to add a provider network on VLAN 201:

$ neutron net-create --provider:physical_network datacentre --provider:network_type vlan --provider:segmentation_id 201 --shared provider_network

This command creates a shared network. It is also possible to specify a tenant instead of using --shared; in that case, the network is available only to the specified tenant. If you mark a provider network as external, only the operator may create ports on that network.
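For example, to create the same provider network scoped to a single tenant instead of shared, you could pass the --tenant-id option. The [tenant-id] placeholder is illustrative; substitute the ID of the tenant that should own the network:

$ neutron net-create --tenant-id [tenant-id] --provider:physical_network datacentre --provider:network_type vlan --provider:segmentation_id 201 provider_network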

Add a subnet to a provider network if you want neutron to provide DHCP services to the tenant instances:

$ neutron subnet-create --name provider-subnet --enable_dhcp=True --allocation-pool start=10.9.101.50,end=10.9.101.100 --gateway 10.9.101.254 provider_network 10.9.101.0/24

Other networks might require external access through the provider network. In this situation, create a new router so that other networks can route traffic through the provider network:

$ neutron router-create external
$ neutron router-gateway-set external provider_network

Attach other networks to this router. For example, if you had a subnet called subnet1, you can attach it to the router with the following command:

$ neutron router-interface-add external subnet1

This adds subnet1 to the routing table and allows traffic using subnet1 to route to the provider network.
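You can confirm the router's external gateway and attached interfaces with the following commands:

$ neutron router-show external
$ neutron router-port-list external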

8.5. Validating the Overcloud

The Overcloud uses Tempest to conduct a series of integration tests. This procedure shows how to validate your Overcloud using Tempest. If running this test from the Undercloud, ensure the Undercloud host has access to the Overcloud’s Internal API network. For example, add a temporary VLAN on the Undercloud host to access the Internal API network (ID: 201) using the 172.16.0.201/24 address:

$ source ~/stackrc
$ sudo ovs-vsctl add-port br-ctlplane vlan201 tag=201 -- set interface vlan201 type=internal
$ sudo ip l set dev vlan201 up; sudo ip addr add 172.16.0.201/24 dev vlan201

Before running Tempest, check that the heat_stack_owner role exists in your Overcloud:

$ source ~/overcloudrc
$ openstack role list
+----------------------------------+------------------+
| ID                               | Name             |
+----------------------------------+------------------+
| 6226a517204846d1a26d15aae1af208f | swiftoperator    |
| 7c7eb03955e545dd86bbfeb73692738b | heat_stack_owner |
+----------------------------------+------------------+

If the role does not exist, create it:

$ keystone role-create --name heat_stack_owner

Install the Tempest toolset:

$ sudo yum install openstack-tempest

Set up a tempest directory in your stack user’s home directory and copy a local version of the Tempest suite:

$ mkdir ~/tempest
$ cd ~/tempest
$ /usr/share/openstack-tempest-10.0.0/tools/configure-tempest-directory

This creates a local version of the Tempest tool set.

During the Overcloud creation process, the director created a file named ~/tempest-deployer-input.conf. This file provides a set of Tempest configuration options relevant to your Overcloud. Run the following command to use this file to configure Tempest:

$ tools/config_tempest.py --deployer-input ~/tempest-deployer-input.conf --debug --create identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD --network-id d474fe1f-222d-4e32-9242-cd1fefe9c14b

The $OS_AUTH_URL and $OS_PASSWORD environment variables use values set from the overcloudrc file sourced previously. The --network-id is the UUID of the external network created in Section 8.2, “Creating the Overcloud External Network”.
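If you need to look up this UUID, display the external network and read the id field. The network name public matches the network created earlier:

$ neutron net-show public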

Important

The configuration script downloads the Cirros image for the Tempest tests. Make sure the director has access to the Internet or uses a proxy with access to the Internet. Set the http_proxy environment variable to use a proxy for command line operations.
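For example, to route command line operations through a proxy, export the variable before running the configuration script. The proxy URL shown is only a placeholder:

$ export http_proxy=http://proxy.example.com:8080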

Run the full suite of Tempest tests with the following command:

$ tools/run-tests.sh
Note

The full Tempest test suite might take hours. Alternatively, run part of the tests using the '.*smoke' option.

$ tools/run-tests.sh '.*smoke'

Each test runs against the Overcloud, and the subsequent output displays each test and its result. You can see more information about each test in the tempest.log file generated in the same directory. For example, the output might show the following failed test:

{2} tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_specify_keypair [18.305114s] ... FAILED

This corresponds to a log entry that contains more information. Search the log for the last two parts of the test namespace separated with a colon. In this example, search for ServersTestJSON:test_create_specify_keypair in the log:

$ grep "ServersTestJSON:test_create_specify_keypair" tempest.log -A 4
2016-03-17 14:49:31.123 10999 INFO tempest_lib.common.rest_client [req-a7a29a52-0a52-4232-9b57-c4f953280e2c ] Request (ServersTestJSON:test_create_specify_keypair): 500 POST http://192.168.201.69:8774/v2/2f8bef15b284456ba58d7b149935cbc8/os-keypairs 4.331s
2016-03-17 14:49:31.123 10999 DEBUG tempest_lib.common.rest_client [req-a7a29a52-0a52-4232-9b57-c4f953280e2c ] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: {"keypair": {"name": "tempest-key-722237471"}}
    Response - Headers: {'status': '500', 'content-length': '128', 'x-compute-request-id': 'req-a7a29a52-0a52-4232-9b57-c4f953280e2c', 'connection': 'close', 'date': 'Thu, 17 Mar 2016 04:49:31 GMT', 'content-type': 'application/json; charset=UTF-8'}
        Body: {"computeFault": {"message": "The server has either erred or is incapable of performing the requested operation.", "code": 500}} _log_request_full /usr/lib/python2.7/site-packages/tempest_lib/common/rest_client.py:414
Note

The -A 4 option shows the next four lines, which are usually the request header and body, then the response header and body.

After completing the validation, remove any temporary connections to the Overcloud’s Internal API. In this example, use the following commands to remove the previously created VLAN on the Undercloud:

$ source ~/stackrc
$ sudo ovs-vsctl del-port vlan201

8.6. Fencing the Controller Nodes

Fencing is the process of isolating a node to protect a cluster and its resources. Without fencing, a faulty node can cause data corruption in a cluster.

The director uses Pacemaker to provide a highly available cluster of Controller nodes. Pacemaker uses a process called STONITH (Shoot-The-Other-Node-In-The-Head) to help fence faulty nodes. By default, STONITH is disabled on your cluster and requires manual configuration so that Pacemaker can control the power management of each node in the cluster.

Note

Log in to each node as the heat-admin user from the stack user on the director. The Overcloud creation automatically copies the stack user's SSH key to each node's heat-admin user.
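For example, you can find each node's provisioning network address with nova list on the Undercloud and then connect over SSH. The IP address shown is only an example; substitute the address that nova list reports for your node:

$ source ~/stackrc
$ nova list
$ ssh heat-admin@192.0.2.24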

Verify you have a running cluster with pcs status:

$ sudo pcs status
Cluster name: openstackHA
Last updated: Wed Jun 24 12:40:27 2015
Last change: Wed Jun 24 11:36:18 2015
Stack: corosync
Current DC: lb-c1a2 (2) - partition with quorum
Version: 1.1.12-a14efad
3 Nodes configured
141 Resources configured

Verify that stonith is disabled with pcs property show:

$ sudo pcs property show
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: openstackHA
dc-version: 1.1.12-a14efad
have-watchdog: false
stonith-enabled: false

The Controller nodes contain a set of fencing agents for the various power management devices the director supports. This includes:

Table 8.1. Fence Agents

Device                      Type
fence_ipmilan               The Intelligent Platform Management Interface (IPMI)
fence_idrac, fence_drac5    Dell Remote Access Controller (DRAC)
fence_ilo                   Integrated Lights-Out (iLO)
fence_ucs                   Cisco UCS. For more information, see Configuring Cisco Unified Computing System (UCS) Fencing on an OpenStack High Availability Environment.
fence_xvm, fence_virt       Libvirt and SSH

The rest of this section uses the IPMI agent (fence_ipmilan) as an example.

View a full list of IPMI options that Pacemaker supports:

$ sudo pcs stonith describe fence_ipmilan

Each node requires configuration of IPMI devices to control the power management. This involves adding a stonith device to Pacemaker for each node. Use the following commands for the cluster:

Note

The second command in each example is to prevent the node from asking to fence itself.

For Controller node 0:

$ sudo pcs stonith create my-ipmilan-for-controller-0 fence_ipmilan pcmk_host_list=overcloud-controller-0 ipaddr=192.0.2.205 login=admin passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller-0 avoids overcloud-controller-0

For Controller node 1:

$ sudo pcs stonith create my-ipmilan-for-controller-1 fence_ipmilan pcmk_host_list=overcloud-controller-1 ipaddr=192.0.2.206 login=admin passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller-1 avoids overcloud-controller-1

For Controller node 2:

$ sudo pcs stonith create my-ipmilan-for-controller-2 fence_ipmilan pcmk_host_list=overcloud-controller-2 ipaddr=192.0.2.207 login=admin passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller-2 avoids overcloud-controller-2

Run the following command to see all stonith resources:

$ sudo pcs stonith show

Run the following command to see a specific stonith resource:

$ sudo pcs stonith show [stonith-name]

Finally, enable fencing by setting the stonith property to true:

$ sudo pcs property set stonith-enabled=true

Verify the property:

$ sudo pcs property show
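The output should resemble the earlier listing, with stonith-enabled now reporting true. The other property values reflect the example cluster shown above:

Cluster Properties:
cluster-infrastructure: corosync
cluster-name: openstackHA
dc-version: 1.1.12-a14efad
have-watchdog: false
stonith-enabled: true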

8.7. Modifying the Overcloud Environment

Sometimes you might want to modify the Overcloud to add features or change the way it operates. To modify the Overcloud, edit your custom environment files and Heat templates, then rerun the openstack overcloud deploy command from your initial Overcloud creation. For example, if you created an Overcloud using Chapter 7, Creating the Overcloud, you would rerun the following command:

$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml --control-scale 3 --compute-scale 3 --ceph-storage-scale 3 --control-flavor control --compute-flavor compute --ceph-storage-flavor ceph-storage --ntp-server pool.ntp.org

The director checks the overcloud stack in heat, and then updates each item in the stack with the environment files and heat templates. It does not recreate the Overcloud, but rather changes the existing Overcloud.
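You can monitor the progress of the update from the Undercloud. As an informal check, the stack status moves through UPDATE_IN_PROGRESS to UPDATE_COMPLETE:

$ source ~/stackrc
$ heat stack-list
$ heat resource-list overcloud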

If you aim to include a new environment file, add it to the openstack overcloud deploy command with a -e option. For example:

$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml -e ~/templates/new-environment.yaml --control-scale 3 --compute-scale 3 --ceph-storage-scale 3 --control-flavor control --compute-flavor compute --ceph-storage-flavor ceph-storage --ntp-server pool.ntp.org

This includes the new parameters and resources from the environment file into the stack.

Important

It is advisable not to make manual modifications to the Overcloud’s configuration as the director might overwrite these modifications later.

8.8. Importing Virtual Machines into the Overcloud

Use the following procedure if you have an existing OpenStack environment and aim to migrate its virtual machines to your Red Hat OpenStack Platform environment.

Create a new image by taking a snapshot of a running server and download the image.

$ nova image-create instance_name image_name
$ glance image-download image_name --file exported_vm.qcow2

Upload the exported image into the Overcloud and launch a new instance.

$ glance image-create --name imported_image --file exported_vm.qcow2 --disk-format qcow2 --container-format bare
$ nova boot --poll --key-name default --flavor m1.demo --image imported_image --nic net-id=net_id imported
Important

Each VM disk has to be copied from the existing OpenStack environment into the new Red Hat OpenStack Platform environment. Snapshots using QCOW will lose their original layering system.

8.9. Migrating VMs from an Overcloud Compute Node

In some situations, you might perform maintenance on an Overcloud Compute node. To prevent downtime, migrate the VMs on the Compute node to another Compute node in the Overcloud using the following procedure.

The director configures all Compute nodes to provide secure migration. All Compute nodes also require a shared SSH key to provide each host’s nova user with access to other Compute nodes during the migration process. The director creates this key automatically.

Important

The latest update of Red Hat OpenStack Platform 9 includes patches required for live migration capabilities. The director's core template collection did not include this functionality in the initial release; it is now included in the openstack-tripleo-heat-templates-2.0.0-57.el7ost package and later versions.

Update your environment to use the Heat templates from the openstack-tripleo-heat-templates-2.0.0-57.el7ost package or later versions.

For more information, see "Red Hat OpenStack Platform director (TripleO) CVE-2017-2637 bug and Red Hat OpenStack Platform".

To migrate an instance, source the overcloudrc file and obtain a list of the current nova services:

$ source ~/overcloudrc
$ nova service-list

Disable the nova-compute service on the node from which you intend to migrate instances.

$ nova service-disable [hostname] nova-compute

This prevents new instances from being scheduled on it.
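To see which instances are currently running on the disabled node, you can filter the instance list by host. The --all-tenants option requires admin credentials, which the overcloudrc file provides:

$ nova list --host [hostname] --all-tenants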

Begin the process of migrating instances off the node. The following command migrates a single instance:

$ openstack server migrate [server-name]

Run this command for each instance you need to migrate from the disabled Compute node.

Retrieve the current status of the migration process with the following command:

$ nova migration-list

When migration of each instance completes, its state in nova will change to VERIFY_RESIZE. This gives you an opportunity to confirm that the migration completed successfully, or to roll it back. To confirm the migration, use the command:

$ nova resize-confirm [server-name]

Run this command for each instance with a VERIFY_RESIZE status.
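To find the instances that are still waiting for confirmation, you can filter the instance list by status:

$ nova list --status VERIFY_RESIZE --all-tenants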

This migrates all instances from a host. You can now perform maintenance on the host without any instance downtime. To return the host to an enabled state, run the following command:

$ nova service-enable [hostname] nova-compute

8.10. Protecting the Overcloud from Removal

To avoid accidental removal of the Overcloud with the heat stack-delete overcloud command, Heat contains a set of policies to restrict certain actions. Edit the /etc/heat/policy.json file and find the following parameter:

"stacks:delete": "rule:deny_stack_user"

Change it to:

"stacks:delete": "rule:deny_everybody"

Save the file.
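As an alternative to editing the file manually, a one-line substitution makes the same change, assuming the default rule string is present exactly as shown above:

$ sudo sed -i 's/"stacks:delete": "rule:deny_stack_user"/"stacks:delete": "rule:deny_everybody"/' /etc/heat/policy.json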

This prevents removal of the Overcloud with the heat client. To allow removal of the Overcloud, revert the policy to the original value.

8.11. Removing the Overcloud

The whole Overcloud can be removed when desired.

Delete any existing Overcloud:

$ heat stack-delete overcloud

Confirm the deletion of the Overcloud:

$ heat stack-list

Deletion takes a few minutes.

Once the removal completes, follow the standard steps in the deployment scenarios to recreate your Overcloud.