Chapter 4. Verifying external Ceph Storage cluster integration

After you deploy the overcloud, confirm that Red Hat OpenStack Platform (RHOSP) services can write to the Red Hat Ceph Storage cluster.

4.1. Gathering IDs

To verify that you have integrated a Red Hat Ceph Storage cluster, you must first create an image, a Compute instance, and a volume, and gather their respective IDs, as shown in the example after the following steps.


  1. Create an image with the Image service (glance).

    For more information about how to create an image, see Import an image in the Creating and Managing Images guide.

  2. Record the image ID for later use.
  3. Create a Compute (nova) instance.

    For more information about how to create an instance, see Creating an instance in the Creating and Managing Instances guide.

  4. Record the instance ID for later use.
  5. Create a Block Storage (cinder) volume.

    For more information about how to create a Block Storage volume, see Create a volume in the Storage Guide.

  6. Record the volume ID for later use.
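
The following is a minimal sketch of these steps. It assumes that you have sourced the overcloud credentials file (typically overcloudrc) and that a local image file cirros.qcow2 exists; the flavor m1.small, the network placeholder <network>, and the names test-instance and test-volume are illustrative values, not required names:

    (overcloud) [stack@undercloud-0 ~]$ openstack image create cirros --disk-format qcow2 --container-format bare --file cirros.qcow2
    (overcloud) [stack@undercloud-0 ~]$ openstack image show cirros -c id -f value
    (overcloud) [stack@undercloud-0 ~]$ openstack server create --flavor m1.small --image cirros --network <network> test-instance
    (overcloud) [stack@undercloud-0 ~]$ openstack server show test-instance -c id -f value
    (overcloud) [stack@undercloud-0 ~]$ openstack volume create --size 1 test-volume
    (overcloud) [stack@undercloud-0 ~]$ openstack volume show test-volume -c id -f value

Each show command prints only the ID, which you can record for the verification steps in the next section.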

4.2. Verifying the Ceph Storage cluster

When you configure an external Ceph Storage cluster, you create pools and a client.openstack user to access those pools. After you deploy the overcloud, you can use the file that contains the credentials of the client.openstack user to list the contents of Red Hat OpenStack Platform (RHOSP) pools.

List the contents of the pools and confirm that the IDs of the Image service (glance) image, the instance, and the volume exist on the Ceph Storage cluster.


  1. Source the undercloud credentials:

    [stack@undercloud-0 ~]$ source stackrc
  2. List the available servers to retrieve the IP addresses of nodes on the system:

    (undercloud) [stack@undercloud-0 ~]$ openstack server list
    +--------------------------------------+--------------+--------+-----------+----------------+------------+
    | ID                                   | Name         | Status | Networks  | Image          | Flavor     |
    +--------------------------------------+--------------+--------+-----------+----------------+------------+
    | d5a621bd-d109-41ae-a381-a42414397802 | compute-0    | ACTIVE | ctlplane= | overcloud-full | compute    |
    | 496ab196-d6cb-447d-a118-5bafc5166cf2 | controller-0 | ACTIVE | ctlplane= | overcloud-full | controller |
    | c01e730d-62f2-426a-a964-b31448f250b3 | controller-2 | ACTIVE | ctlplane= | overcloud-full | controller |
    | 36df59b3-66f3-452e-9aec-b7e7f7c54b86 | controller-1 | ACTIVE | ctlplane= | overcloud-full | controller |
    | f8f00497-246d-4e40-8a6a-b5a60fa66483 | compute-1    | ACTIVE | ctlplane= | overcloud-full | compute    |
    +--------------------------------------+--------------+--------+-----------+----------------+------------+
  3. Use SSH to log in to any Compute node, replacing <compute_node_ip> with a ctlplane IP address from the previous step:

    (undercloud) [stack@undercloud-0 ~]$ ssh heat-admin@<compute_node_ip>
  4. Switch to the root user:

    [heat-admin@compute-0 ~]$ sudo su -
  5. Confirm that the files /etc/ceph/ceph.conf and /etc/ceph/ceph.client.openstack.keyring exist:

    [root@compute-0 ~]# ls -l /etc/ceph/ceph.conf
    -rw-r--r--. 1 root root 1170 Sep 29 23:25 /etc/ceph/ceph.conf
    [root@compute-0 ~]# ls -l /etc/ceph/ceph.client.openstack.keyring
    -rw-------. 1 ceph ceph 253 Sep 29 23:25 /etc/ceph/ceph.client.openstack.keyring
  6. Enter the following command to run the rbd command from within the nova_compute container and list the contents of the appropriate pool:

    # podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls vms

    The pool name must match the name of the image, VM, or volume pool that you created when you configured the Ceph Storage cluster. For more information, see Configuring the existing Ceph Storage cluster. The IDs of the image, Compute instance, and volume must match the IDs that you recorded in Gathering IDs.


    The example command is prefixed with podman exec nova_compute because /usr/bin/rbd, which is provided by the ceph-common package, is not installed on overcloud nodes by default. However, it is available in the nova_compute container. The command lists block device images. For more information, see Listing the block device images in the Ceph Storage Block Device Guide.

    The following examples use the IDs from Gathering IDs to confirm whether the ID for each service is present in the corresponding pool:

    # podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls images | grep 4485d4c0-24c3-42ec-a158-4d3950fa020b
    # podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls vms | grep 64bcb731-e7a4-4dd5-a807-ee26c669482f
    # podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls volumes | grep aeac15e8-b67f-454f-9486-46b3d75daff4
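
    The object names in the pools are typically derived from the IDs: the instance disk in the vms pool is usually named <instance_id>_disk, and the volume in the volumes pool is usually named volume-<volume_id>, so matching on the bare ID with grep still succeeds. As an optional sketch, the following loop runs the same check against all three pools by using the example IDs from above; substitute the IDs that you recorded:

    # for entry in \
          images:4485d4c0-24c3-42ec-a158-4d3950fa020b \
          vms:64bcb731-e7a4-4dd5-a807-ee26c669482f \
          volumes:aeac15e8-b67f-454f-9486-46b3d75daff4; do
        pool="${entry%%:*}"; id="${entry#*:}"
        # Report whether each recorded ID appears in its pool.
        podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf \
            --keyring /etc/ceph/ceph.client.openstack.keyring \
            --cluster ceph --id openstack ls "$pool" | grep -q "$id" \
            && echo "$id found in $pool" || echo "$id MISSING from $pool"
      done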

4.3. Troubleshooting failed verification

If the verification procedures fail, verify that the Ceph key for the client.openstack user and the Ceph Storage monitor IPs or hostnames can be used together to read, write, and delete from the Ceph Storage pools that you created for Red Hat OpenStack Platform (RHOSP).


  1. To reduce the amount of typing in this procedure, log in to a Compute node and create an alias for the rbd command:

    # alias rbd="podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack"
  2. Confirm that you can write test data to the pool as a new object:

    # rbd create --size 1024 vms/foo
  3. Confirm that you can see the test data:

    # rbd ls vms | grep foo
  4. Delete the test data:

    # rbd rm vms/foo
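
If these checks succeed and you want additional confirmation of read access, you can repeat the test and inspect the object metadata before you delete it. The following optional sketch reuses the alias from step 1 and the same hypothetical test image vms/foo:

    # rbd create --size 1024 vms/foo
    # rbd info vms/foo
    # rbd rm vms/foo

The rbd info command reads the image metadata back from the cluster, which exercises read access in addition to the write access that rbd create tests.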

If this procedure fails, contact your Ceph Storage administrator for assistance. If this procedure succeeds but you cannot create Compute instances, Image service (glance) images, or Block Storage (cinder) volumes, contact Red Hat Support.
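
As a quick check of the RHOSP side before you open a support case, you can, for example, create a small test volume from the overcloud and confirm that it reaches the available status. This sketch assumes that you have sourced the overcloud credentials file (typically overcloudrc); the volume name test-rbd is arbitrary:

    (overcloud) [stack@undercloud-0 ~]$ openstack volume create --size 1 test-rbd
    (overcloud) [stack@undercloud-0 ~]$ openstack volume show test-rbd -c status -f value
    available
    (overcloud) [stack@undercloud-0 ~]$ openstack volume delete test-rbd

If the rbd checks in this section succeed but the test volume goes to an error status, the problem is more likely in the RHOSP configuration than in the Ceph Storage cluster itself.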