Chapter 10. Rebooting the environment

A situation might occur where you need to reboot the environment, for example, when you need to modify the physical servers or recover from a power outage. In this situation, it is important to make sure that your Ceph Storage nodes boot correctly.

Make sure to boot the nodes in the following order:

  • Boot all Ceph Monitor nodes first - This ensures the Ceph Monitor service is active in your high availability cluster. By default, the Ceph Monitor service is installed on the Controller node. If the Ceph Monitor is separated from the Controller in a custom role, make sure that the nodes running this custom Ceph Monitor role are active. A quorum check sketch follows this list.
  • Boot all Ceph Storage nodes - This ensures the Ceph OSD cluster can connect to the active Ceph Monitor cluster on the Controller nodes.
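
After the Ceph Monitor nodes boot, you can confirm that the monitors have formed a quorum before you boot the Ceph Storage nodes. The following check is a minimal sketch that assumes the same ceph-mon-controller-0 container name that the procedure in this chapter uses; adjust the container name to match your deployment:

$ sudo podman exec -it ceph-mon-controller-0 ceph mon stat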

10.1. Rebooting a Ceph Storage (OSD) cluster

Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes.

Procedure

  1. Log in to a Ceph MON or Controller node and temporarily disable Ceph Storage cluster rebalancing:

    $ sudo podman exec -it ceph-mon-controller-0 ceph osd set noout
    $ sudo podman exec -it ceph-mon-controller-0 ceph osd set norebalance
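
    The noout flag prevents the cluster from marking OSDs out while their node is down, and the norebalance flag prevents data rebalancing during the reboots. To confirm that both flags are set, you can inspect the OSD map; this check assumes the same container name as the commands above:

    $ sudo podman exec -it ceph-mon-controller-0 ceph osd dump | grep flags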
  2. Select the first Ceph Storage node that you want to reboot and log in to the node.
  3. Reboot the node:

    $ sudo reboot
  4. Wait until the node boots.
  5. Log in to the node and check the cluster status:

    $ sudo podman exec -it ceph-mon-controller-0 ceph status

    Check that the pgmap section reports all placement groups (pgs) as normal (active+clean).
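
    Alternatively, the ceph pg stat command prints a one-line summary of placement group states that you can watch while the rebooted OSDs rejoin the cluster, shown here with the same assumed container name:

    $ sudo podman exec -it ceph-mon-controller-0 ceph pg stat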

  6. Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes. A scripted sketch of this rolling reboot follows the procedure.
  7. When complete, log in to a Ceph MON or Controller node and re-enable cluster rebalancing:

    $ sudo podman exec -it ceph-mon-controller-0 ceph osd unset noout
    $ sudo podman exec -it ceph-mon-controller-0 ceph osd unset norebalance
  8. Perform a final status check to verify that the cluster reports HEALTH_OK:

    $ sudo podman exec -it ceph-mon-controller-0 ceph status
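
If you manage many Ceph Storage nodes, you can script the per-node loop in steps 2 through 6. The following bash sketch is illustrative only: the node hostnames, the heat-admin SSH user, the sleep intervals, and the exact ceph pg stat output format are assumptions that you must adapt to your deployment:

for node in ceph-storage-0 ceph-storage-1 ceph-storage-2; do    # hypothetical hostnames
    ssh heat-admin@"$node" 'sudo reboot' || true                # the connection drops as the node reboots
    sleep 300                                                   # crude wait for the node to come back up
    # Loop until the pg summary reports every pg as active+clean,
    # for example "192 pgs: 192 active+clean; ..."
    until sudo podman exec ceph-mon-controller-0 ceph pg stat | \
        awk '{ if ($1 == $3 && $4 == "active+clean;") exit 0; exit 1 }'; do
        sleep 30
    done
done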

If all overcloud nodes boot at the same time, the Ceph OSD services might not start correctly on the Ceph Storage nodes. In this situation, reboot the Ceph Storage OSDs so that they can connect to the Ceph Monitor service; one way to restart the OSD services is sketched below.
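
If the OSD services run as containerized systemd services on the Ceph Storage nodes, which is typical for podman-based deployments such as the one shown in this procedure, you can restart them from the affected node. The ceph-osd@<ID> unit name pattern is an assumption about your deployment; list the units first to confirm the names, then restart them:

$ sudo systemctl list-units 'ceph-osd@*'
$ sudo systemctl restart 'ceph-osd@*'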

Verify that the Ceph Storage cluster reports a HEALTH_OK status with the following command:

$ sudo ceph status