Chapter 2. Installing and configuring Ceph for OpenStack

As a storage administrator, you must install and configure Ceph before the Red Hat OpenStack Platform can use the Ceph block devices.

2.1. Prerequisites

  • A new or existing Red Hat Ceph Storage cluster.

2.2. Creating Ceph pools for OpenStack

You can create Ceph pools for use with OpenStack. By default, Ceph block devices use the rbd pool, but you can use any available pool.

Prerequisites

  • A running Red Hat Ceph Storage cluster.

Procedure

  1. Verify that the Red Hat Ceph Storage cluster is running and is in a HEALTH_OK state:

    [root@mon ~]# ceph -s
  2. Create the Ceph pools:

    Example

    [root@mon ~]# ceph osd pool create volumes 128
    [root@mon ~]# ceph osd pool create backups 128
    [root@mon ~]# ceph osd pool create images 128
    [root@mon ~]# ceph osd pool create vms 128

    In the above example, 128 is the number of placement groups.

    Important

    Red Hat recommends using the Ceph Placement Groups (PGs) per Pool Calculator to calculate a suitable number of placement groups for the pools.
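
Verification

  • Optionally, confirm that the pools exist and, on Ceph releases that require an application to be enabled on each pool, initialize the pools for use with RBD. The following commands are a minimal sketch and assume the pool names from the example above:

    [root@mon ~]# ceph osd lspools
    [root@mon ~]# rbd pool init volumes
    [root@mon ~]# rbd pool init backups
    [root@mon ~]# rbd pool init images
    [root@mon ~]# rbd pool init vms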

Additional Resources

  • See the Pools chapter in the Storage Strategies guide for more details on creating pools.

2.3. Installing the Ceph client on OpenStack

You can install the Ceph client packages on the Red Hat OpenStack Platform to access the Ceph storage cluster.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Access to the Ceph software repository.
  • Root-level access to the OpenStack Nova, Cinder, Cinder Backup and Glance nodes.

Procedure

  1. On the OpenStack Nova, Cinder, and Cinder Backup nodes, install the following packages:

    [root@nova ~]# dnf install python-rbd ceph-common
  2. On the OpenStack Glance host, install the python-rbd package:

    [root@glance ~]# dnf install python-rbd
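
Verification

  • Optionally, confirm that the client packages are installed and check the client version. This is a minimal check, assuming the package names used above:

    [root@nova ~]# rpm -q ceph-common python-rbd
    [root@nova ~]# ceph --version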

2.4. Copying the Ceph configuration file to OpenStack

You can copy the Ceph configuration file from the Ceph Monitor host to the nova-compute, cinder-backup, cinder-volume, and glance-api nodes.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Access to the Ceph software repository.
  • Root-level access to the OpenStack Nova, Cinder, and Glance nodes.

Procedure

  1. Copy the Ceph configuration file from the Ceph Monitor host to the OpenStack Nova, Cinder, Cinder Backup and Glance nodes:

    [root@mon ~]# scp /etc/ceph/ceph.conf OPENSTACK_NODES:/etc/ceph
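
Verification

  • Optionally, confirm that the configuration file is present and readable on each OpenStack node; for example:

    [root@mon ~]# ssh OPENSTACK_NODES ls -l /etc/ceph/ceph.conf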

2.5. Configuring Ceph client authentication

You can configure authentication for the Ceph client to access the Red Hat OpenStack Platform.

Prerequisites

  • Root-level access to the Ceph Monitor host.
  • A running Red Hat Ceph Storage cluster.

Procedure

  1. From a Ceph Monitor host, create new users for Cinder, Cinder Backup and Glance:

    [root@mon ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
    
    [root@mon ~]# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
    
    [root@mon ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
  2. Add the keyrings for client.cinder, client.cinder-backup and client.glance to the appropriate nodes and change their ownership:

    [root@mon ~]# ceph auth get-or-create client.cinder | ssh CINDER_VOLUME_NODE sudo tee /etc/ceph/ceph.client.cinder.keyring
    [root@mon ~]# ssh CINDER_VOLUME_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
    
    [root@mon ~]# ceph auth get-or-create client.cinder-backup | ssh CINDER_BACKUP_NODE tee /etc/ceph/ceph.client.cinder-backup.keyring
    [root@mon ~]# ssh CINDER_BACKUP_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
    
    [root@mon ~]# ceph auth get-or-create client.glance | ssh GLANCE_API_NODE sudo tee /etc/ceph/ceph.client.glance.keyring
    [root@mon ~]# ssh GLANCE_API_NODE chown glance:glance /etc/ceph/ceph.client.glance.keyring
  3. Copy the client.cinder keyring to the OpenStack Nova nodes, which need it for the nova-compute process:

    [root@mon ~]# ceph auth get-or-create client.cinder | ssh NOVA_NODE tee /etc/ceph/ceph.client.cinder.keyring
  4. The OpenStack Nova nodes also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs the secret key to access the cluster while attaching a block device from Cinder. Create a temporary copy of the secret key on the OpenStack Nova nodes:

    [root@mon ~]# ceph auth get-key client.cinder | ssh NOVA_NODE tee client.cinder.key

    If the storage cluster contains Ceph block device images that use the exclusive-lock feature, ensure that all Ceph block device users have permissions to blocklist clients:

    [root@mon ~]# ceph auth caps client.ID mon 'allow r, allow command "osd blacklist"' osd 'EXISTING_OSD_USER_CAPS'
  5. Return to the OpenStack Nova host:

    [root@mon ~]# ssh NOVA_NODE
  6. Generate a UUID for the secret, and save it for configuring nova-compute later:

    [root@nova ~]# uuidgen > uuid-secret.txt

    Note

    You do not necessarily need the UUID on all the Nova compute nodes. However, from a platform consistency perspective, it’s better to keep the same UUID.

  7. On the OpenStack Nova nodes, create a secret definition file for libvirt:

    [root@nova ~]# cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <uuid>`cat uuid-secret.txt`</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF
  8. Define the secret in libvirt, set the secret value, and then remove the temporary copy of the key:

    [root@nova ~]# virsh secret-define --file secret.xml
    [root@nova ~]# virsh secret-set-value --secret $(cat uuid-secret.txt) --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
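
Verification

  • Optionally, verify that the secret is defined in libvirt and that the client.cinder user can reach the storage cluster from the Nova node. These commands are a minimal sketch and assume the ceph.conf file and the ceph.client.cinder.keyring file copied in the previous steps:

    [root@nova ~]# virsh secret-list
    [root@nova ~]# ceph -s --id cinder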

Additional Resources