Chapter 7. Keeping OpenStack Platform Updated

This chapter provides instructions on how to keep your OpenStack Platform environment updated from one minor version to the next. This is a minor update within Red Hat OpenStack Platform 12.

Prerequisites

  • You have upgraded the overcloud to Red Hat OpenStack Platform 12.
  • New packages and container images are available within Red Hat OpenStack Platform 12.

7.1. Validating the Undercloud before an Update

The following is a set of steps to check the functionality of your Red Hat OpenStack Platform 12 undercloud before an update.

Procedure

  1. Source the undercloud access details:

    $ source ~/stackrc
  2. Check for failed Systemd services:

    (undercloud) $ sudo systemctl list-units --state=failed 'openstack*' 'neutron*' 'httpd' 'docker'
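
    On a healthy undercloud, this command lists no failed units and ends with output similar to:

    0 loaded units listed. Pass --all to see loaded but inactive units, too.
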
  3. Check the undercloud free space:

    (undercloud) $ df -h

    Use the "Undercloud Requirements" section as a basis to determine whether you have adequate free space.

  4. Check that clocks are synchronized on the undercloud:

    (undercloud) $ sudo ntpstat
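
    A synchronized clock reports output similar to the following (server address and values illustrative):

    synchronised to NTP server (192.0.2.1) at stratum 2
       time correct to within 42 ms
       polling server every 1024 s
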
  5. Check the undercloud network services:

    (undercloud) $ openstack network agent list

    All agents should be Alive and their state should be UP.
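
    Abridged, illustrative output (IDs omitted, hostnames depend on your environment):

    +--------------------+------------+-------+-------+---------------------------+
    | Agent Type         | Host       | Alive | State | Binary                    |
    +--------------------+------------+-------+-------+---------------------------+
    | DHCP agent         | undercloud | :-)   | UP    | neutron-dhcp-agent        |
    | Open vSwitch agent | undercloud | :-)   | UP    | neutron-openvswitch-agent |
    +--------------------+------------+-------+-------+---------------------------+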

  6. Check the undercloud compute services:

    (undercloud) $ openstack compute service list

    All services' status should be enabled and their state should be up.
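
    Abridged, illustrative output:

    +----------------+------------+----------+---------+-------+
    | Binary         | Host       | Zone     | Status  | State |
    +----------------+------------+----------+---------+-------+
    | nova-conductor | undercloud | internal | enabled | up    |
    | nova-compute   | undercloud | nova     | enabled | up    |
    +----------------+------------+----------+---------+-------+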

7.2. Validating the Overcloud before an Update

The following is a set of steps to check the functionality of your Red Hat OpenStack Platform 12 overcloud before an update.

Procedure

  1. Source the undercloud access details:

    $ source ~/stackrc
  2. Check the status of your bare metal nodes:

    (undercloud) $ openstack baremetal node list

    All nodes should have a valid power state (on) and maintenance mode should be false.
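
    Illustrative output (UUIDs truncated):

    +---------------+--------------+---------------+-------------+--------------------+-------------+
    | UUID          | Name         | Instance UUID | Power State | Provisioning State | Maintenance |
    +---------------+--------------+---------------+-------------+--------------------+-------------+
    | 5e3b2f9e-...  | controller-0 | 6f4c3a1d-...  | power on    | active             | False       |
    +---------------+--------------+---------------+-------------+--------------------+-------------+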

  3. Check for failed Systemd services:

    (undercloud) $ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo systemctl list-units --state=failed 'openstack*' 'neutron*' 'httpd' 'docker' 'ceph*'" ; done
  4. Check for failed containerized services:

    (undercloud) $ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo docker ps -f 'exited=1' --all" ; done
    (undercloud) $ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo docker ps -f 'status=dead' -f 'status=restarting'" ; done
  5. Check the HAProxy connection to all services. Obtain the Control Plane VIP address and authentication details for the haproxy.stats service:

    (undercloud) $ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE sudo 'grep "listen haproxy.stats" -A 6 /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg'
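
    The listen block resembles the following; the bind address and password come from your environment:

    listen haproxy.stats
      bind 192.168.24.15:1993 transparent
      mode http
      stats enable
      stats uri /
      stats auth admin:PASSWORD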

    Use these details in the following cURL request:

    (undercloud) $ curl -s -u admin:<PASSWORD> "http://<IP ADDRESS>:1993/;csv" | egrep -vi "(frontend|backend)" | awk -F',' '{ print $1" "$2" "$18 }'

    Replace <PASSWORD> and <IP ADDRESS> details with the respective details from the haproxy.stats service. The resulting list shows the OpenStack Platform services on each node and their connection status.
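
    The resulting list contains lines similar to the following (service names and hostnames illustrative):

    cinder controller-0.internalapi.localdomain UP
    glance_api controller-0.internalapi.localdomain UP
    keystone_public controller-1.internalapi.localdomain UP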

  6. Check overcloud database replication health:

    (undercloud) $ for NODE in $(openstack server list --name controller -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo docker exec clustercheck clustercheck" ; done
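
    Each Controller node should report that its Galera node is synced, with output similar to:

    HTTP/1.1 200 OK
    ...
    Galera cluster node is synced.
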
  7. Check RabbitMQ cluster health:

    (undercloud) $ for NODE in $(openstack server list --name controller -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo docker exec $(ssh heat-admin@$NODE "sudo docker ps -f 'name=.*rabbitmq.*' -q") rabbitmqctl node_health_check" ; done
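
    Each node should pass its health check, with output similar to the following (hostname illustrative):

    Checking health of node rabbit@overcloud-controller-0
    Health check passed
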
  8. Check Pacemaker resource health:

    (undercloud) $ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE "sudo pcs status"

    Look for:

    • All cluster nodes online.
    • No resources stopped on any cluster nodes.
    • No failed pacemaker actions.
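
    For example, a healthy three-Controller cluster shows all nodes online (hostnames illustrative) and contains no Failed Actions section:

    Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
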
  9. Check the disk space on each overcloud node:

    (undercloud) $ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo df -h --output=source,fstype,avail -x overlay -x tmpfs -x devtmpfs" ; done
  10. Check overcloud Ceph Storage cluster health. The following command runs the ceph tool on a Controller node to check the cluster:

    (undercloud) $ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE "sudo ceph -s"
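
    A healthy cluster reports HEALTH_OK in its health line, for example:

    health HEALTH_OK
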
  11. Check Ceph Storage OSD for free space. The following command runs the ceph tool on a Controller node to check the free space:

    (undercloud) $ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE "sudo ceph df"
  12. Check that clocks are synchronized on overcloud nodes:

    (undercloud) $ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo ntpstat" ; done
  13. Source the overcloud access details:

    (undercloud) $ source ~/overcloudrc
  14. Check the overcloud network services:

    (overcloud) $ openstack network agent list

    All agents should be Alive and their state should be UP.

  15. Check the overcloud compute services:

    (overcloud) $ openstack compute service list

    All services' status should be enabled and their state should be up.

  16. Check the overcloud volume services:

    (overcloud) $ openstack volume service list

    All services' status should be enabled and their state should be up.

7.3. Keeping the Undercloud Updated

The director provides commands to update the packages on the undercloud node. This allows you to perform a minor update within the current version of your OpenStack Platform environment. This is a minor update within Red Hat OpenStack Platform 12.

Prerequisites

  • You are using Red Hat OpenStack Platform 12.
  • You have performed a backup of the undercloud.

Procedure

  1. Log into the director as the stack user.
  2. Update the python-tripleoclient package and its dependencies to ensure you have the latest scripts for the minor version update:

    $ sudo yum update -y python-tripleoclient
  3. The director uses the openstack undercloud upgrade command to update the Undercloud environment. Run the command:

    $ openstack undercloud upgrade
  4. Reboot the node:

    $ sudo reboot
  5. Wait until the node boots.
  6. Check the status of all services:

    $ sudo systemctl list-units "openstack*" "neutron*" "openvswitch*"
    Note

    It might take approximately 10 minutes for the openstack-nova-compute to become active after a reboot.

  7. Verify the existence of your overcloud and its nodes:

    $ source ~/stackrc
    $ openstack server list
    $ openstack baremetal node list
    $ openstack stack list

It is important to keep your overcloud images up to date to ensure the image configuration matches the requirements of the latest openstack-tripleo-heat-templates package. To ensure successful deployments and scaling operations in the future, update your overcloud images using the instructions in Section 7.4, “Keeping the Overcloud Images Updated”.

7.4. Keeping the Overcloud Images Updated

The undercloud update process might download new image archives from the rhosp-director-images and rhosp-director-images-ipa packages. This process updates these images on your undercloud within Red Hat OpenStack Platform 12.

Prerequisites

  • You are using Red Hat OpenStack Platform 12.
  • You have updated to the latest minor release of your current undercloud.

Procedure

  1. Check the yum log to determine if new image archives are available:

    $ sudo grep "rhosp-director-images" /var/log/yum.log
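
    If new archives were installed, the log contains entries similar to the following (version strings illustrative):

    Mar 01 04:02:15 Updated: rhosp-director-images-12.0-20180301.1.el7ost.noarch
    Mar 01 04:02:21 Updated: rhosp-director-images-ipa-12.0-20180301.1.el7ost.noarch
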
  2. If new archives are available, replace your current images with new images. To install the new images, first remove any existing images from the images directory in the stack user’s home directory (/home/stack/images):

    $ rm -rf ~/images/*
  3. Extract the archives:

    $ cd ~/images
    $ for i in /usr/share/rhosp-director-images/overcloud-full-latest-12.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-12.0.tar; do tar -xvf $i; done
  4. Import the latest images into the director and configure nodes to use the new images:

    $ cd ~
    $ openstack overcloud image upload --update-existing --image-path /home/stack/images/
    $ openstack overcloud node configure $(openstack baremetal node list -c UUID -f csv --quote none | sed "1d" | paste -s -d " ")
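
    The command substitution builds a space-separated list of all node UUIDs. You can preview the list before running the configure command (UUIDs illustrative):

    $ openstack baremetal node list -c UUID -f csv --quote none | sed "1d" | paste -s -d " "
    5e3b2f9e-0df6-4a0c-9f5a-1a2b3c4d5e6f 7a8b9c0d-1e2f-3a4b-5c6d-7e8f9a0b1c2d
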
  5. To finalize the image update, verify the existence of the new images:

    $ openstack image list
    $ ls -l /httpboot

    The director is now updated and using the latest images. You do not need to restart any services after the update.

7.5. Keeping the Overcloud Updated

The director provides commands to update the packages on all overcloud nodes. This allows you to perform a minor update within the current version of your OpenStack Platform environment. This is a minor update within Red Hat OpenStack Platform 12.

Prerequisites

  • You are using Red Hat OpenStack Platform 12.
  • You have updated to the latest minor release of your current undercloud.
  • You have performed a backup of the overcloud.

Procedure

  1. Find the latest tag for the containerized service images:

    $ openstack overcloud container image tag discover \
      --image registry.access.redhat.com/rhosp12/openstack-base:latest \
      --tag-from-label version-release
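
    The command prints the discovered tag, for example (tag value illustrative):

    12.0-20180416.1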

    Make a note of the most recent tag.

  2. Create an updated environment file for your container image source using the openstack overcloud container image prepare command. For example, to use images from registry.access.redhat.com:

    $ openstack overcloud container image prepare \
      --namespace=registry.access.redhat.com/rhosp12 \
      --prefix=openstack- \
      --tag [TAG] \
      --set ceph_namespace=registry.access.redhat.com/rhceph \
      --set ceph_image=rhceph-2-rhel7 \
      --set ceph_tag=latest \
      --env-file=/home/stack/templates/overcloud_images.yaml \
      -e /home/stack/templates/custom_environment_file.yaml

    • Replace [TAG] with the tag obtained from the previous step.
    • Include all additional environment files with the -e parameter. The director checks the custom resources in all included environment files and identifies the container images required for the containerized services.

    For more information about generating this environment file for different source types, see "Configuring Container Registry Details" in the Director Installation and Usage guide.

  3. Run the openstack overcloud update stack command to update the container image locations in your overcloud:

    $ openstack overcloud update stack --init-minor-update \
      --container-registry-file /home/stack/templates/overcloud_images.yaml

    The --init-minor-update option only updates the parameters in the overcloud stack; it does not perform the actual package or container update. Wait until this command completes.

  4. Perform a package and container update using the openstack overcloud update command. Use the --nodes option to update the nodes for each role. For example, the following command updates nodes in the Controller role:

    $ openstack overcloud update stack --nodes Controller

    Run this command for each role group in the following order:

    • Controller
    • CephStorage
    • Compute
    • ObjectStorage
    • Any custom roles such as Database, MessageBus, Networker, and so forth.
  5. The update process for the chosen role starts. The director uses an Ansible playbook to perform the update and displays the output of each task.
  6. Update the next role group. Repeat until you have updated all nodes.
  7. The update process does not reboot any nodes in the Overcloud automatically. Updates to the kernel or Open vSwitch require a reboot. Check the /var/log/yum.log file on each node to see if either the kernel or openvswitch packages have updated their major or minor versions. If they have, reboot each node using the "Rebooting Nodes" procedures in the Director Installation and Usage guide.