Chapter 5. Rebooting the overcloud

After a minor Red Hat OpenStack Platform update, reboot your overcloud. The reboot refreshes the nodes with any associated kernel, system-level, and container component updates. These updates may provide performance and security benefits.

Plan downtime to perform the following reboot procedures.

5.1. Rebooting Controller and composable nodes

Complete the following steps to reboot Controller nodes and standalone nodes based on composable roles, excluding Compute nodes and Ceph Storage nodes.

Procedure

  1. Log in to the node that you want to reboot.
  2. Optional: If the node uses Pacemaker resources, stop the cluster:

    [heat-admin@overcloud-controller-0 ~]$ sudo pcs cluster stop
  3. Reboot the node:

    [heat-admin@overcloud-controller-0 ~]$ sudo reboot
  4. Wait until the node boots.
  5. Check the services. For example:

    1. If the node uses Pacemaker services, check that the node has rejoined the cluster:

      [heat-admin@overcloud-controller-0 ~]$ sudo pcs status
    2. If the node uses Systemd services, check that all services are running:

      [heat-admin@overcloud-controller-0 ~]$ sudo systemctl status
    3. If the node uses containerized services, check that all containers on the node are active:

      [heat-admin@overcloud-controller-0 ~]$ sudo podman ps
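
Optionally, after you confirm the Pacemaker, Systemd, or container status, you can run the following additional checks. This is a minimal sketch; it assumes a containerized node managed by systemd and Podman, as in the examples above:

    # List systemd units that failed to start after the reboot; no output is expected on a healthy node.
    [heat-admin@overcloud-controller-0 ~]$ sudo systemctl list-units --failed

    # List containers that are not running so that you can review any unexpected exits.
    [heat-admin@overcloud-controller-0 ~]$ sudo podman ps -a --filter "status=exited"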

5.2. Rebooting a Ceph Storage (OSD) cluster

Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes.

Prerequisites

  • On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean:

    $ sudo podman exec -it ceph-mon-controller-0 ceph -s

    If the Ceph cluster is healthy, it returns a status of HEALTH_OK.

    If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR. For troubleshooting guidance, see the Red Hat Ceph Storage 4 Troubleshooting Guide.
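
If you prefer a shorter summary than the full ceph -s output, the following sketch checks the overall health and the placement group state directly. It assumes the same ceph-mon-controller-0 container name that is used throughout this procedure:

    # Print the overall cluster health and details of any warnings.
    $ sudo podman exec -it ceph-mon-controller-0 ceph health detail

    # Summarize placement group states; all pgs should report active+clean.
    $ sudo podman exec -it ceph-mon-controller-0 ceph pg stat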

Procedure

  1. Log in to a Ceph Monitor or Controller node that is running the ceph-mon service, and disable Ceph Storage cluster rebalancing temporarily:

    $ sudo podman exec -it ceph-mon-controller-0 ceph osd set noout
    $ sudo podman exec -it ceph-mon-controller-0 ceph osd set norebalance
    Note

    If you have a multistack or distributed compute node (DCN) architecture, you must specify the cluster name when you set the noout and norebalance flags. For example:

    $ sudo podman exec -it ceph-mon-controller-0 ceph osd set noout --cluster <cluster_name>

  2. Select the first Ceph Storage node that you want to reboot and log in to the node.
  3. Reboot the node:

    $ sudo reboot
  4. Wait until the node boots.
  5. Log in to a Ceph Monitor or Controller node that is running the ceph-mon service, and check the cluster status:

    $ sudo podman exec -it ceph-mon-controller-0 ceph status

    Check that the pgmap reports all pgs as normal (active+clean).

  6. Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes.
  7. When complete, log in to a Ceph Monitor or Controller node that is running the ceph-mon service, and re-enable cluster rebalancing:

    $ sudo podman exec -it ceph-mon-controller-0 ceph osd unset noout
    $ sudo podman exec -it ceph-mon-controller-0 ceph osd unset norebalance
    Note

    If you have a multistack or distributed compute node (DCN) architecture, you must specify the cluster name when you unset the noout and norebalance flags. For example:

    $ sudo podman exec -it ceph-mon-controller-0 ceph osd unset noout --cluster <cluster_name>

  8. Perform a final status check to verify that the cluster reports HEALTH_OK:

    $ sudo podman exec -it ceph-mon-controller-0 ceph status
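
Optionally, you can also confirm that all OSDs are up and that the noout and norebalance flags are cleared. This is a minimal sketch that assumes the same monitor container name used in the previous steps:

    # Confirm that every OSD is reported as up and in after the reboots.
    $ sudo podman exec -it ceph-mon-controller-0 ceph osd tree

    # Confirm that no warning flags remain set and that the cluster reports HEALTH_OK.
    $ sudo podman exec -it ceph-mon-controller-0 ceph health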

5.3. Rebooting Compute nodes

Complete the following steps to reboot Compute nodes. To ensure minimal downtime of instances in your Red Hat OpenStack Platform environment, this procedure also includes instructions about migrating instances from the Compute node that you want to reboot. This involves the following workflow:

  • Decide whether to migrate instances to another Compute node before rebooting the node.
  • Select and disable the Compute node you want to reboot so that it does not provision new instances.
  • Migrate the instances to another Compute node.
  • Reboot the empty Compute node.
  • Enable the empty Compute node.

Prerequisites

Before you reboot the Compute node, you must decide whether to migrate instances to another Compute node while the node is rebooting.

Review the migration constraints that apply when you migrate virtual machine instances between Compute nodes. For more information, see Migration constraints in Configuring the Compute Service for Instance Creation.

If you cannot migrate the instances, you can set the following core template parameters to control the state of the instances after the Compute node reboots:

NovaResumeGuestsStateOnHostBoot
Determines whether to return instances to the same state on the Compute node after reboot. When set to False, the instances remain down and you must start them manually. The default value is False.
NovaResumeGuestsShutdownTimeout
The number of seconds to wait for an instance to shut down before the Compute node reboots. It is not recommended to set this value to 0. The default value is 300.

For more information about overcloud parameters and their usage, see Overcloud Parameters.
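
If you choose this approach, set the parameters in an environment file and include that file in your overcloud deployment. The following is a minimal sketch, and the file name nova-reboot.yaml is only illustrative:

    # nova-reboot.yaml: control instance state after a Compute node reboot.
    parameter_defaults:
      # Return instances that were running before the reboot to a running state.
      NovaResumeGuestsStateOnHostBoot: true
      # Wait up to 300 seconds for each instance to shut down cleanly.
      NovaResumeGuestsShutdownTimeout: 300

Include the file with the -e option when you run the openstack overcloud deploy command so that the values are applied to your Compute nodes.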

Procedure

  1. Log in to the undercloud as the stack user.
  2. List all Compute nodes and their UUIDs:

    $ source ~/stackrc
    (undercloud) $ openstack server list --name compute

    Identify the UUID of the Compute node that you want to reboot.

  3. Select the Compute node that you want to reboot and disable it:

    $ source ~/overcloudrc
    (overcloud) $ openstack compute service list
    (overcloud) $ openstack compute service set <hostname> nova-compute --disable
  4. List all instances on the Compute node:

    (overcloud) $ openstack server list --host <hostname> --all-projects
  5. If you decide not to migrate the instances, skip ahead to step 10 to reboot the node.
  6. If you decide to migrate the instances to another Compute node, use one of the following commands:

    • Migrate the instance to a different host:

      (overcloud) $ openstack server migrate <instance_id> --live <target_host> --wait
    • Let nova-scheduler automatically select the target host:

      (overcloud) $ nova live-migration <instance_id>
    • Live migrate all instances at once:

      $ nova host-evacuate-live <hostname>
      Note

      The nova command might cause some deprecation warnings, which are safe to ignore.

  7. Wait until migration completes.
  8. Confirm that the migration was successful:

    (overcloud) $ openstack server list --host <hostname> --all-projects
  9. Continue to migrate instances until none remain on the chosen Compute node.
  10. Log in to the Compute node and reboot the node:

    [heat-admin@overcloud-compute-0 ~]$ sudo reboot
  11. Wait until the node boots.
  12. Re-enable the Compute node:

    $ source ~/overcloudrc
    (overcloud) $ openstack compute service set <hostname> nova-compute --enable
  13. Check that the Compute node is enabled:

    (overcloud) $ openstack compute service list
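
Optionally, you can verify that the nova-compute service on the rebooted node reports both enabled and up, and that the instances you migrated or left in place are ACTIVE again. The following sketch assumes the same <hostname> placeholder used in this procedure:

    # Filter the service list to the rebooted node; Status should be enabled and State should be up.
    (overcloud) $ openstack compute service list --service nova-compute --host <hostname>

    # List instances on the node and confirm that they are ACTIVE.
    (overcloud) $ openstack server list --host <hostname> --all-projects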