Chapter 8. Performing Tasks after Overcloud Creation
This chapter contains information about some of the functions you can perform after creating your overcloud.
8.1. Checking overcloud deployment status
To check the deployment status of the overcloud, use the openstack overcloud status command. This command returns the result of all deployment steps.
Procedure
Source the stackrc file:
$ source ~/stackrc
Run the deployment status command:
$ openstack overcloud status
The output of this command displays the status of the overcloud:
+-----------+---------------------+---------------------+-------------------+
| Plan Name | Created             | Updated             | Deployment Status |
+-----------+---------------------+---------------------+-------------------+
| overcloud | 2018-05-03 21:24:50 | 2018-05-03 21:27:59 | DEPLOY_SUCCESS    |
+-----------+---------------------+---------------------+-------------------+
If your overcloud uses a different name, use the --plan argument to specify it:
$ openstack overcloud status --plan my-deployment
8.2. Managing containerized services
OpenStack Platform runs services in containers on the undercloud and overcloud nodes. In certain situations, you might need to control the individual services on a host. This section contains information about some common docker commands you can run on a node to manage containerized services. For more comprehensive information about using docker to manage containers, see "Working with Docker formatted containers" in the Getting Started with Containers guide.
Listing containers and images
To list running containers, run the following command:
$ sudo docker ps
To include stopped or failed containers in the command output, add the --all option to the command:
$ sudo docker ps --all
To list container images, run the following command:
$ sudo docker images
Inspecting container properties
To view the properties of a container or container image, use the docker inspect command. For example, to inspect the keystone container, run the following command:
$ sudo docker inspect keystone
Managing basic container operations
To restart a containerized service, use the docker restart command. For example, to restart the keystone container, run the following command:
$ sudo docker restart keystone
To stop a containerized service, use the docker stop command. For example, to stop the keystone container, run the following command:
$ sudo docker stop keystone
To start a stopped containerized service, use the docker start command. For example, to start the keystone container, run the following command:
$ sudo docker start keystone
Any changes to the service configuration files within the container revert after restarting the container. This is because the container regenerates the service configuration based on files on the node's local file system in /var/lib/config-data/puppet-generated/. For example, if you edit /etc/keystone/keystone.conf within the keystone container and restart the container, the container regenerates the configuration using /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf on the node's local file system, which overwrites any changes made within the container before the restart.
Monitoring containers
To check the logs for a containerized service, use the docker logs command. For example, to view the logs for the keystone container, run the following command:
$ sudo docker logs keystone
Accessing containers
To enter the shell for a containerized service, use the docker exec command to launch /bin/bash. For example, to enter the shell for the keystone container, run the following command:
$ sudo docker exec -it keystone /bin/bash
To enter the shell for the keystone container as the root user, run the following command:
$ sudo docker exec --user 0 -it <NAME OR ID> /bin/bash
To exit from the container, run the following command:
# exit
Enabling swift-ring-builder on undercloud and overcloud
For continuity considerations in Object Storage (swift) builds, the swift-ring-builder and swift_object_server commands are no longer packaged on the undercloud or overcloud nodes. However, the commands are still available in the containers. To run them inside the respective containers:
docker exec -ti -u swift swift_object_server swift-ring-builder /etc/swift/object.builder
If you require these commands, install the following packages as the stack user on the undercloud or the heat-admin user on the overcloud:
sudo yum install -y python-swift
sudo yum install -y python2-swiftclient
For information about troubleshooting OpenStack Platform containerized services, see Section 17.7.3, “Containerized Service Failures”.
8.3. Creating the Overcloud Tenant Network
The overcloud requires a Tenant network for instances. Source the overcloudrc file and create an initial Tenant network in Neutron:
$ source ~/overcloudrc
(overcloud) $ openstack network create default
(overcloud) $ openstack subnet create default --network default \
  --gateway 172.20.1.1 --subnet-range 172.20.0.0/16
These commands create a basic Neutron network named default. The overcloud automatically assigns IP addresses from this network using an internal DHCP mechanism.
Confirm the created network:
(overcloud) $ openstack network list
+-----------------------+-------------+--------------------------------------+
| id                    | name        | subnets                              |
+-----------------------+-------------+--------------------------------------+
| 95fadaa1-5dda-4777... | default     | 7e060813-35c5-462c-a56a-1c6f8f4f332f |
+-----------------------+-------------+--------------------------------------+
8.4. Creating the Overcloud External Network
You must create the External network on the overcloud so that you can assign floating IP addresses to instances.
Using a Native VLAN
This procedure assumes a dedicated interface or native VLAN for the External network.
Source the overcloudrc file and create an External network in Neutron:
$ source ~/overcloudrc
(overcloud) $ openstack network create public --external \
  --provider-network-type flat --provider-physical-network datacentre
(overcloud) $ openstack subnet create public --network public --dhcp \
  --allocation-pool start=10.1.1.51,end=10.1.1.250 \
  --gateway 10.1.1.1 --subnet-range 10.1.1.0/24
In this example, you create a network with the name public. The overcloud requires this specific name for the default floating IP pool. This name is also important for the validation tests in Section 8.8, "Validating the Overcloud".
This command also maps the network to the datacentre physical network. By default, datacentre maps to the br-ex bridge. Leave this option as the default unless you have used custom neutron settings during the overcloud creation.
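You can sanity-check the subnet values before running the commands above. The following is a minimal sketch using Python's standard ipaddress module; the addresses come from the example subnet command and are only illustrative:

```python
import ipaddress

# Values from the example external network above
subnet = ipaddress.ip_network("10.1.1.0/24")
gateway = ipaddress.ip_address("10.1.1.1")
pool_start = ipaddress.ip_address("10.1.1.51")
pool_end = ipaddress.ip_address("10.1.1.250")

# The gateway and the allocation pool must fall inside the subnet
assert gateway in subnet
assert pool_start in subnet and pool_end in subnet

# Number of floating IP addresses available in the allocation pool
pool_size = int(pool_end) - int(pool_start) + 1
print(pool_size)  # 200
```

A check like this is useful because neutron rejects subnet creation when the allocation pool or gateway lies outside the given subnet range.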
Using a Non-Native VLAN
If you are not using the native VLAN, run the following commands to assign the network to a VLAN:
$ source ~/overcloudrc
(overcloud) $ openstack network create public --external \
  --provider-network-type vlan --provider-physical-network datacentre \
  --provider-segment 104
(overcloud) $ openstack subnet create public --network public --dhcp \
  --allocation-pool start=10.1.1.51,end=10.1.1.250 \
  --gateway 10.1.1.1 --subnet-range 10.1.1.0/24
The provider:segmentation_id value defines the VLAN to use. In this case, you can use 104.
Confirm the created network:
(overcloud) $ openstack network list
+-----------------------+-------------+--------------------------------------+
| id                    | name        | subnets                              |
+-----------------------+-------------+--------------------------------------+
| d474fe1f-222d-4e32... | public      | 01c5f621-1e0f-4b9d-9c30-7dc59592a52f |
+-----------------------+-------------+--------------------------------------+
8.5. Creating Additional Floating IP Networks
Floating IP networks can use any bridge, not just br-ex, as long as you meet the following conditions:
- NeutronExternalNetworkBridge is set to "''" in your network environment file.
- You have mapped the additional bridge during deployment. For example, to map a new bridge called br-floating to the floating physical network, include the NeutronBridgeMappings parameter in an environment file:

  parameter_defaults:
    NeutronBridgeMappings: "datacentre:br-ex,floating:br-floating"
Create the Floating IP network after creating the overcloud:
$ source ~/overcloudrc
(overcloud) $ openstack network create ext-net --external \
  --provider-physical-network floating --provider-network-type vlan \
  --provider-segment 105
(overcloud) $ openstack subnet create ext-subnet --network ext-net --dhcp \
  --allocation-pool start=10.1.2.51,end=10.1.2.250 \
  --gateway 10.1.2.1 --subnet-range 10.1.2.0/24
8.6. Creating the Overcloud Provider Network
A provider network is a network attached to a physical network that exists outside of the deployed overcloud. This can be an existing infrastructure network or a network that provides external access directly to instances through routing instead of floating IPs.
When creating a provider network, you associate it with a physical network, which uses a bridge mapping. This is similar to floating IP network creation. You add the provider network to both the Controller and the Compute nodes because the Compute nodes attach VM virtual network interfaces directly to the attached network interface.
For example, if the desired provider network is a VLAN on the br-ex bridge, use the following command to add a provider network on VLAN 201:
$ source ~/overcloudrc
(overcloud) $ openstack network create provider_network \
  --provider-physical-network datacentre --provider-network-type vlan \
  --provider-segment 201 --share
This command creates a shared network. It is also possible to specify a tenant instead of specifying --share. The new network is then available only to the specified tenant. If you mark a provider network as external, only the operator may create ports on that network.
Add a subnet to a provider network if you want neutron to provide DHCP services to the tenant instances:
(overcloud) $ openstack subnet create provider-subnet --network provider_network --dhcp --allocation-pool start=10.9.101.50,end=10.9.101.100 --gateway 10.9.101.254 --subnet-range 10.9.101.0/24
Other networks might require access externally through the provider network. In this situation, create a new router so that other networks can route traffic through the provider network:
(overcloud) $ openstack router create external
(overcloud) $ openstack router set --external-gateway provider_network external
Attach other networks to this router. For example, run the following command to attach a subnet subnet1 to the router:
(overcloud) $ openstack router add subnet external subnet1
This command adds subnet1 to the routing table and allows traffic using subnet1 to route to the provider network.
8.7. Creating a basic Overcloud flavor
Validation steps in this guide assume that your installation contains flavors. If you have not already created at least one flavor, use the following commands to create a basic set of default flavors that have a range of storage and processing capabilities:
$ openstack flavor create m1.tiny --ram 512 --disk 0 --vcpus 1
$ openstack flavor create m1.smaller --ram 1024 --disk 0 --vcpus 1
$ openstack flavor create m1.small --ram 2048 --disk 10 --vcpus 1
$ openstack flavor create m1.medium --ram 3072 --disk 10 --vcpus 2
$ openstack flavor create m1.large --ram 8192 --disk 10 --vcpus 4
$ openstack flavor create m1.xlarge --ram 8192 --disk 10 --vcpus 8
Command options
- ram: Use the ram option to define the maximum RAM for the flavor.
- disk: Use the disk option to define the hard disk space for the flavor.
- vcpus: Use the vcpus option to define the quantity of virtual CPUs for the flavor.
Run openstack flavor create --help to learn more about the openstack flavor create command.
8.8. Validating the Overcloud
The overcloud uses the OpenStack Integration Test Suite (tempest) tool set to conduct a series of integration tests. This section contains information about preparations for running the integration tests. For full instructions on using the OpenStack Integration Test Suite, see the OpenStack Integration Test Suite Guide.
Before Running the Integration Test Suite
If running this test from the undercloud, ensure that the undercloud host has access to the overcloud’s Internal API network. For example, add a temporary VLAN on the undercloud host to access the Internal API network (ID: 201) using the 172.16.0.201/24 address:
$ source ~/stackrc
(undercloud) $ sudo ovs-vsctl add-port br-ctlplane vlan201 tag=201 -- set interface vlan201 type=internal
(undercloud) $ sudo ip l set dev vlan201 up; sudo ip addr add 172.16.0.201/24 dev vlan201
Before running the OpenStack Integration Test Suite, check that the heat_stack_owner role exists in your overcloud:
$ source ~/overcloudrc
(overcloud) $ openstack role list
+----------------------------------+------------------+
| ID                               | Name             |
+----------------------------------+------------------+
| 6226a517204846d1a26d15aae1af208f | swiftoperator    |
| 7c7eb03955e545dd86bbfeb73692738b | heat_stack_owner |
+----------------------------------+------------------+
If the role does not exist, create it:
(overcloud) $ openstack role create heat_stack_owner
After Running the Integration Test Suite
After completing the validation, remove any temporary connections to the overcloud’s Internal API. In this example, use the following commands to remove the previously created VLAN on the undercloud:
$ source ~/stackrc
(undercloud) $ sudo ovs-vsctl del-port vlan201
8.9. Modifying the Overcloud Environment
Sometimes you might want to modify the overcloud to add additional features, or change the way it operates. To modify the overcloud, make modifications to your custom environment files and Heat templates, then rerun the openstack overcloud deploy command from your initial overcloud creation. For example, if you created an overcloud using Section 6.11, "Deployment command", rerun the following command:
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy --templates \
  -e ~/templates/node-info.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  -e ~/templates/storage-environment.yaml \
  --ntp-server pool.ntp.org
The director checks the overcloud stack in heat, and then updates each item in the stack with the environment files and heat templates. The director does not recreate the overcloud, but rather changes the existing overcloud.
Removing parameters from custom environment files does not revert the parameter value to the default configuration. You must identify the default value from the core heat template collection in /usr/share/openstack-tripleo-heat-templates and set the value in your custom environment file manually.
If you aim to include a new environment file, add it to the openstack overcloud deploy command with the -e option. For example:
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy --templates \
  -e ~/templates/new-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  -e ~/templates/storage-environment.yaml \
  -e ~/templates/node-info.yaml \
  --ntp-server pool.ntp.org
This command includes the new parameters and resources from the environment file into the stack.
It is not advisable to make manual modifications to the overcloud configuration as the director might overwrite these modifications later.
8.10. Running the dynamic inventory script
The director can run Ansible-based automation on your OpenStack Platform environment. The director uses the tripleo-ansible-inventory command to generate a dynamic inventory of nodes in your environment.
Procedure
To view a dynamic inventory of nodes, run the tripleo-ansible-inventory command after sourcing the stackrc file:
$ source ~/stackrc
(undercloud) $ tripleo-ansible-inventory --list
The --list option returns details about all hosts. This command outputs the dynamic inventory in JSON format:
{
  "overcloud": {
    "children": ["controller", "compute"],
    "vars": {"ansible_ssh_user": "heat-admin"}
  },
  "controller": ["192.168.24.2"],
  "undercloud": {
    "hosts": ["localhost"],
    "vars": {
      "overcloud_horizon_url": "http://192.168.24.4:80/dashboard",
      "overcloud_admin_password": "abcdefghijklm12345678",
      "ansible_connection": "local"
    }
  },
  "compute": ["192.168.24.3"]
}
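Because the inventory is plain JSON, you can post-process it with a short script. The following is a minimal sketch, using an abridged copy of the sample output above, that resolves each child group of the overcloud group to its host addresses:

```python
import json

# Abridged sample output from `tripleo-ansible-inventory --list`
inventory_json = """
{"overcloud": {"children": ["controller", "compute"],
               "vars": {"ansible_ssh_user": "heat-admin"}},
 "controller": ["192.168.24.2"],
 "undercloud": {"hosts": ["localhost"],
                "vars": {"ansible_connection": "local"}},
 "compute": ["192.168.24.3"]}
"""

inventory = json.loads(inventory_json)

# Map each child group of "overcloud" to its list of host addresses
overcloud_hosts = {
    group: inventory[group]
    for group in inventory["overcloud"]["children"]
}
print(overcloud_hosts)  # {'controller': ['192.168.24.2'], 'compute': ['192.168.24.3']}
```

In a real environment, pipe the live output of tripleo-ansible-inventory --list into such a script instead of using an embedded sample.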
To execute Ansible playbooks on your environment, run the ansible command and include the full path of the dynamic inventory tool using the -i option. For example:
(undercloud) $ ansible [HOSTS] -i /bin/tripleo-ansible-inventory [OTHER OPTIONS]
Replace [HOSTS] with the type of hosts to use. For example:
- controller for all Controller nodes
- compute for all Compute nodes
- overcloud for all overcloud child nodes, i.e. controller and compute
- undercloud for the undercloud
- "*" for all nodes
Replace [OTHER OPTIONS] with additional Ansible options. Some useful options include:
- --ssh-extra-args='-o StrictHostKeyChecking=no' to bypass confirmation on host key checking.
- -u [USER] to change the SSH user that executes the Ansible automation. The default SSH user for the overcloud is automatically defined using the ansible_ssh_user parameter in the dynamic inventory. The -u option overrides this parameter.
- -m [MODULE] to use a specific Ansible module. The default is command, which executes Linux commands.
- -a [MODULE_ARGS] to define arguments for the chosen module.
Custom Ansible automation on the overcloud is not part of the standard overcloud stack. Subsequent execution of the openstack overcloud deploy command might override Ansible-based configuration for OpenStack Platform services on overcloud nodes.
8.11. Importing Virtual Machines into the Overcloud
If you have an existing OpenStack environment and want to migrate its virtual machines to your Red Hat OpenStack Platform environment, complete the following steps:
- Create a new image by taking a snapshot of a running server and download the image.
$ source ~/overcloudrc
(overcloud) $ openstack server image create instance_name --name image_name
(overcloud) $ openstack image save image_name --file exported_vm.qcow2
- Upload the exported image into the overcloud and launch a new instance.
(overcloud) $ openstack image create imported_image --file exported_vm.qcow2 \
  --disk-format qcow2 --container-format bare
(overcloud) $ openstack server create imported_instance --key-name default \
  --flavor m1.demo --image imported_image --nic net-id=net_id
These commands copy each VM disk from the existing OpenStack environment into the new Red Hat OpenStack Platform environment. Snapshots that use QCOW lose their original layering system.
8.12. Migrating instances from a Compute node
In some situations, you might perform maintenance on an overcloud Compute node. To prevent downtime, migrate the VMs on the Compute node to another Compute node in the overcloud.
The director configures all Compute nodes to provide secure migration. All Compute nodes also require a shared SSH key to provide each host's nova user with access to other Compute nodes during the migration process. The director creates this key using the OS::TripleO::Services::NovaCompute composable service. This composable service is one of the main services included on all Compute roles by default (see "Composable Services and Custom Roles" in Advanced Overcloud Customization).
Procedure
From the undercloud, select a Compute Node and disable it:
$ source ~/overcloudrc
(overcloud) $ openstack compute service list
(overcloud) $ openstack compute service set [hostname] nova-compute --disable
List all instances on the Compute node:
(overcloud) $ openstack server list --host [hostname] --all-projects
Use one of the following commands to migrate your instances:
Migrate the instance to a specific host of your choice:
(overcloud) $ openstack server migrate [instance-id] --live [target-host] --wait
Let nova-scheduler automatically select the target host:
(overcloud) $ nova live-migration [instance-id]
Live migrate all instances at once:
$ nova host-evacuate-live [hostname]
Note: The nova command might cause some deprecation warnings, which are safe to ignore.
- Wait until migration completes.
Confirm the migration was successful:
(overcloud) $ openstack server list --host [hostname] --all-projects
- Continue migrating instances until none remain on the chosen Compute Node.
This process migrates all instances from a Compute node. You can now perform maintenance on the node without any instance downtime. To return the Compute node to an enabled state, run the following command:
$ source ~/overcloudrc
(overcloud) $ openstack compute service set [hostname] nova-compute --enable
8.13. Protecting the Overcloud from Removal
Heat contains a set of default policies in code that you can override by creating /etc/heat/policy.json and adding customized rules. Add the following policy to deny all users permission to delete the overcloud:
{"stacks:delete": "rule:deny_everybody"}
This prevents removal of the overcloud with the heat client. To allow removal of the overcloud, delete the custom policy and save /etc/heat/policy.json.
8.14. Removing the Overcloud
Delete any existing overcloud:
$ source ~/stackrc
(undercloud) $ openstack overcloud delete overcloud
Confirm the deletion of the overcloud:
(undercloud) $ openstack stack list
Deletion takes a few minutes.
Once the removal completes, follow the standard steps in the deployment scenarios to recreate your overcloud.
8.15. Review the Token Flush Interval
The Identity Service (keystone) uses a token-based system for access control against the other OpenStack services. Over time, the database accumulates a large number of unused tokens. A default cron job flushes the token table every day. It is recommended that you monitor your environment and adjust the token flush interval as needed.
To adjust the interval, include the KeystoneCronToken parameters in an environment file. For more information, see the Overcloud Parameters guide.
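The sketch below shows the shape such an environment file might take. The specific parameter names are assumptions drawn from the KeystoneCronTokenFlush* naming convention in tripleo-heat-templates, so verify them against the Overcloud Parameters guide before use:

```yaml
# keystone-cron.yaml - hypothetical environment file.
# Parameter names assumed from the KeystoneCronTokenFlush* group in
# tripleo-heat-templates; verify against the Overcloud Parameters guide.
parameter_defaults:
  KeystoneCronTokenFlushMinute: '1'    # minute of the hour to run the flush
  KeystoneCronTokenFlushHour: '0,12'   # run twice a day instead of once
```

Include the file with the -e option when you rerun the openstack overcloud deploy command, as described in Section 8.9, "Modifying the Overcloud Environment".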