Chapter 9. Post-Deployment
The following subsections describe several post-deployment operations for managing the Ceph cluster.
9.1. Accessing the Overcloud
The director generates a script to configure and help authenticate interactions with your overcloud from the director host. The director saves this file (overcloudrc) in your stack user's home directory. Run the following command to use this file:
$ source ~/overcloudrc
This loads the necessary environment variables to interact with your overcloud from the director host’s CLI. To return to interacting with the director’s host, run the following command:
$ source ~/stackrc
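Both rc files export standard OpenStack `OS_*` environment variables, and the authentication URL differs between the undercloud and the overcloud, so it identifies which credential set is active. The following is a small illustrative sketch (the `current_cloud` helper name is not part of the director tooling):

```shell
# Report which cloud, if any, the current shell is configured to talk to.
# Both stackrc and overcloudrc export OS_AUTH_URL among other OS_* variables.
current_cloud() {
  if [ -z "${OS_AUTH_URL:-}" ]; then
    echo "no credentials loaded"
  else
    echo "authenticating against ${OS_AUTH_URL}"
  fi
}

current_cloud
```

Running this before and after sourcing an rc file confirms which environment subsequent CLI commands will target.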
9.2. Monitoring Ceph Storage Nodes
After you create the overcloud, check the status of the Ceph Storage Cluster to ensure that it works correctly.
Log in to a Controller node as the heat-admin user. Use the first command to find the node's IP address, then connect to it:
$ nova list
$ ssh heat-admin@<CONTROLLER_IP>
Check the health of the cluster:
$ sudo podman exec ceph-mon-$HOSTNAME ceph health
If the cluster has no issues, the command reports back HEALTH_OK. This means the cluster is safe to use.
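When scripting post-deployment checks, it can help to wait for the cluster to settle rather than failing on a transient status. The following is a minimal sketch under the same `ceph-mon-$HOSTNAME` container naming used above; the helper names are illustrative, not part of any Ceph tooling:

```shell
# Return success only for a HEALTH_OK status string.
is_health_ok() {
  case "$1" in
    HEALTH_OK*) return 0 ;;
    *)          return 1 ;;
  esac
}

# Poll the cluster up to $1 times, 10 seconds apart (illustrative loop;
# requires a reachable Ceph monitor container, so it is not run here).
wait_for_health_ok() {
  local tries=${1:-30} status
  for _ in $(seq "$tries"); do
    status=$(sudo podman exec "ceph-mon-$HOSTNAME" ceph health)
    if is_health_ok "$status"; then
      echo "Ceph reports $status"
      return 0
    fi
    sleep 10
  done
  echo "Ceph did not reach HEALTH_OK after $tries checks" >&2
  return 1
}
```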
Log in to an overcloud node that runs the Ceph monitor service and check the status of all OSDs in the cluster:
$ sudo podman exec ceph-mon-$HOSTNAME ceph osd tree
Check the status of the Ceph Monitor quorum:
$ sudo podman exec ceph-mon-$HOSTNAME ceph quorum_status
This shows the monitors participating in the quorum and which one is the leader.
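Because quorum_status emits JSON, the leader and member list can be extracted in scripts. The sketch below runs against an abridged, illustrative sample so it works without a live cluster; against a real deployment, pipe the quorum_status command's output in place of echoing the sample:

```shell
# Abridged sample of `ceph quorum_status` JSON output. The
# quorum_leader_name and quorum_names fields come from Ceph's status
# reporting; the controller names here are illustrative.
sample='{"quorum_names": ["controller-0", "controller-1", "controller-2"], "quorum_leader_name": "controller-0"}'

# Print the current quorum leader.
echo "$sample" | python3 -c 'import json,sys; print(json.load(sys.stdin)["quorum_leader_name"])'

# Print all monitors participating in the quorum.
echo "$sample" | python3 -c 'import json,sys; print(", ".join(json.load(sys.stdin)["quorum_names"]))'
```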
Verify that all Ceph OSDs are running:
$ sudo podman exec ceph-mon-$HOSTNAME ceph osd stat
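The osd stat summary reports the total OSD count alongside how many are up and in, and a script can compare those counts to confirm no OSD is down or out. The sketch below parses a sample line; the exact output format varies between Ceph releases, so the field positions assumed here should be verified against your release:

```shell
# Sample `ceph osd stat` summary line (illustrative; real output may also
# include an epoch and flags depending on the Ceph release).
line='3 osds: 3 up, 3 in'

# Extract the counts by whitespace-delimited field position.
total=$(echo "$line" | awk '{print $1}')
up=$(echo "$line" | awk '{print $3}')
in_count=$(echo "$line" | awk '{print $5}')

if [ "$total" -eq "$up" ] && [ "$total" -eq "$in_count" ]; then
  echo "all $total OSDs are up and in"
else
  echo "OSD mismatch: $total total, $up up, $in_count in" >&2
fi
```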
For more information on monitoring Ceph Storage clusters, see Monitoring in the Red Hat Ceph Storage Administration Guide.