Chapter 25. Performing post-upgrade actions

After you have completed the overcloud upgrade, you must perform some post-upgrade configuration to ensure that your environment is fully supported and ready for future operations.

25.1. Removing unnecessary packages and directories from the undercloud

After the Leapp upgrade, remove the unnecessary packages and directories that remain on the undercloud.

Procedure

  1. Remove the unnecessary packages:

    $ sudo dnf -y remove --exclude=python-pycadf-common python2*
  2. Remove the contents of the /httpboot and /tftpboot directories, which include old images that were used in Red Hat OpenStack Platform 13:

    $ sudo rm -rf /httpboot /tftpboot
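
Optionally, verify that no python2 packages remain after the cleanup. This check is a minimal sketch that uses a standard rpm query; an empty result means that no python2 packages are installed:

    $ rpm -qa 'python2*'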

25.2. Validating the post-upgrade functionality

Run the post-upgrade validation group to check the functionality of your environment after the upgrade.

Procedure

  1. Source the stackrc file.

    $ source ~/stackrc
  2. Run the openstack tripleo validator run command with the --group post-upgrade option:

    $ openstack tripleo validator run --group post-upgrade
  3. Review the results of the validation report. To view detailed output from a specific validation, run the openstack tripleo validator show run --full command against the UUID of the specific validation from the report:

    $ openstack tripleo validator show run --full <UUID>
Important

A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment.
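
If you no longer have the original report output, you can list previous validation runs to recover their UUIDs. The show history subcommand is assumed to be available in the validation framework for this release; if it is not available in your environment, rerun the validation group to generate a new report:

    $ openstack tripleo validator show history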

25.3. Upgrading the overcloud images

You must replace your current overcloud images with new versions. The new images ensure that the director can introspect and provision your nodes by using the latest version of Red Hat OpenStack Platform software.

Prerequisites

  • You have upgraded the undercloud to the latest version.

Procedure

  1. Log in to the undercloud as the stack user.
  2. Source the stackrc file.

    $ source ~/stackrc
  3. Install the packages containing the overcloud QCOW2 archives:

    $ sudo dnf install rhosp-director-images rhosp-director-images-ipa
  4. Remove any existing images from the images directory in the stack user’s home directory (/home/stack/images):

    $ rm -rf ~/images/*
  5. Extract the archives:

    $ cd ~/images
    $ for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.1.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-16.1.tar; do tar -xvf $i; done
    $ cd ~
  6. Import the latest images into the director:

    $ openstack overcloud image upload --update-existing --image-path /home/stack/images/
  7. Configure your nodes to use the new images:

    $ openstack overcloud node configure $(openstack baremetal node list -c UUID -f value)
Important

When you deploy overcloud nodes, ensure that the overcloud image version corresponds to the respective heat template version. For example, use the OpenStack Platform 16.1 images only with the OpenStack Platform 16.1 heat templates.

Important

The new overcloud-full image replaces the old overcloud-full image. If you made changes to the old image, you must repeat the changes in the new image, especially if you want to deploy new nodes in the future.
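
To confirm that the import succeeded, you can list the images that are registered with the director. The image names in the output depend on your deployment, so treat the expected names, such as overcloud-full, as an assumption rather than a definitive list:

    $ openstack image list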

25.4. Updating CPU pinning parameters

Red Hat OpenStack Platform 16.1 uses new parameters for CPU pinning:

NovaComputeCpuDedicatedSet
Sets the dedicated (pinned) CPUs.
NovaComputeCpuSharedSet
Sets the shared (unpinned) CPUs.

You must migrate the CPU pinning configuration from the NovaVcpuPinSet parameter to the NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet parameters after completing the upgrade to Red Hat OpenStack Platform 16.1.
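
For example, on a Compute node that hosts both pinned and unpinned instances, you can set both parameters, provided that the two CPU sets do not overlap. The CPU ranges in the following snippet are illustrative assumptions, not recommended values for your hardware:

    parameter_defaults:
      ...
      NovaComputeCpuDedicatedSet: 2-15,18-31
      NovaComputeCpuSharedSet: 0,1,16,17
      ...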

Procedure

  1. Log in to the undercloud as the stack user.
  2. If your Compute nodes support simultaneous multithreading (SMT) but you created instances with the hw:cpu_thread_policy=isolate policy, you must perform one of the following options:

    • Unset the hw:cpu_thread_policy thread policy and resize the instances:

      1. Source your overcloud authentication file:

        $ source ~/overcloudrc
      2. Unset the hw:cpu_thread_policy property of the flavor:

        (overcloud) $ openstack flavor unset --property hw:cpu_thread_policy <flavor>
        Note
        • Unsetting the hw:cpu_thread_policy attribute sets the policy to the default prefer policy, which sets the instance to use an SMT-enabled Compute node if one is available. You can also set the hw:cpu_thread_policy attribute to require, which sets a hard requirement for an SMT-enabled Compute node.
        • If the Compute node does not have an SMT architecture or enough CPU cores with available thread siblings, scheduling fails. To prevent this, set hw:cpu_thread_policy to prefer instead of require. The default prefer policy ensures that thread siblings are used when they are available.
        • If you use hw:cpu_thread_policy=isolate, you must have SMT disabled or use a platform that does not support SMT.
      3. Convert the instances to use the new thread policy:

        (overcloud) $ openstack server resize --flavor <flavor> <server>
        (overcloud) $ openstack server resize confirm <server>

        Repeat this step for all pinned instances that use the hw:cpu_thread_policy=isolate policy.

    • Migrate instances from the Compute node and disable SMT on the Compute node:

      1. Source your overcloud authentication file:

        $ source ~/overcloudrc
      2. Disable the Compute node from accepting new virtual machines:

        (overcloud) $ openstack compute service list
        (overcloud) $ openstack compute service set <hostname> nova-compute --disable
      3. Migrate all instances from the Compute node. For more information on instance migration, see Migrating virtual machine instances between Compute nodes.
      4. Reboot the Compute node and disable SMT in the BIOS of the Compute node.
      5. Boot the Compute node.
      6. Re-enable the Compute node:

        (overcloud) $ openstack compute service set <hostname> nova-compute --enable
  3. Source the stackrc file:

    $ source ~/stackrc
  4. Edit the environment file that contains the NovaVcpuPinSet parameter.
  5. Migrate the CPU pinning configuration from the NovaVcpuPinSet parameter to NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet:

    • Migrate the value of NovaVcpuPinSet to NovaComputeCpuDedicatedSet for hosts that were previously used for pinned instances.
    • Migrate the value of NovaVcpuPinSet to NovaComputeCpuSharedSet for hosts that were previously used for unpinned instances.
    • If there is no value set for NovaVcpuPinSet, assign all Compute node cores to either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet, depending on the type of instances that you intend to host on the nodes.

    For example, your previous environment file might contain the following pinning configuration:

    parameter_defaults:
      ...
      NovaVcpuPinSet: 1,2,3,5,6,7
      ...

    To migrate the configuration to a pinned configuration, set the NovaComputeCpuDedicatedSet parameter and unset the NovaVcpuPinSet parameter:

    parameter_defaults:
      ...
      NovaComputeCpuDedicatedSet: 1,2,3,5,6,7
      NovaVcpuPinSet: ""
      ...

    To migrate the configuration to an unpinned configuration, set the NovaComputeCpuSharedSet parameter and unset the NovaVcpuPinSet parameter:

    parameter_defaults:
      ...
      NovaComputeCpuSharedSet: 1,2,3,5,6,7
      NovaVcpuPinSet: ""
      ...
    Important

    Ensure that the configuration of either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet matches the configuration that was previously defined in NovaVcpuPinSet. To change the configuration of either of these parameters, or to configure both NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet, ensure that the Compute nodes with the pinning configuration are not running any instances before you update the configuration.

  6. Save the file.
  7. Run the deployment command to update the overcloud with the new CPU pinning parameters:

    (undercloud) $ openstack overcloud deploy \
        --stack <stack_name> \
        --templates \
        ...
        -e /home/stack/templates/<compute_environment_file>.yaml \
        ...
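
    After the deployment completes, you can optionally log in to a Compute node and spot check the rendered configuration. The container name nova_compute and the option names cpu_dedicated_set and cpu_shared_set are assumptions for this release; adjust them if your environment differs:

    $ sudo podman exec nova_compute cat /etc/nova/nova.conf | grep -E 'cpu_(dedicated|shared)_set'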

25.5. Migrating users to the member role

In Red Hat OpenStack Platform 13, the default member role is called _member_.
In Red Hat OpenStack Platform 16.1, the default member role is called member.

When you complete the upgrade from Red Hat OpenStack Platform 13 to Red Hat OpenStack Platform 16.1, users that you assigned to the _member_ role still have that role. You can migrate all of the users to the member role by using the following steps.

Prerequisites

  • You have upgraded the overcloud to the latest version.

Procedure

  1. List all of the users on your cloud that have the _member_ role:

    $ openstack role assignment list --names --role _member_ --sort-column project
  2. For each user, remove the _member_ role, and apply the member role:

    $ openstack role remove --user <user> --project <project> _member_
    $ openstack role add --user <user> --project <project> member
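
If you have many users, you can script the migration. The following loop is a minimal sketch that assumes every _member_ assignment is a user-to-project assignment (not a group or domain assignment); review the output of the list command in step 1 before you run it:

    $ openstack role assignment list --role _member_ -f value -c User -c Project |
    while read -r user project; do
        openstack role remove --user "$user" --project "$project" _member_
        openstack role add --user "$user" --project "$project" member
    done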