Chapter 3. Configuring OpenStack to Use Ceph

3.1. Configuring Cinder

The cinder-volume nodes require the Ceph block device driver, the volume pool, the user and the UUID of the secret to interact with Ceph block devices. To configure Cinder, perform the following steps:

  1. Open the Cinder configuration file.

    # vim /etc/cinder/cinder.conf
  2. In the [DEFAULT] section, enable Ceph as a back end for Cinder.

    enabled_backends = ceph
  3. Ensure that the Glance API version is set to 2. If you are configuring multiple Cinder back ends in enabled_backends, the glance_api_version = 2 setting must be in the [DEFAULT] section, not the [ceph] section.

    glance_api_version = 2
  4. Create a [ceph] section in the cinder.conf file. Add the Ceph settings in the following steps under the [ceph] section.
  5. Specify the volume_driver setting and set it to use the Ceph block device driver. For example:

    volume_driver = cinder.volume.drivers.rbd.RBDDriver
  6. Specify the cluster name and Ceph configuration file location. In typical deployments the Ceph cluster has a cluster name of ceph and a Ceph configuration file at /etc/ceph/ceph.conf. If the Ceph cluster name is not ceph, specify the cluster name and configuration file path appropriately. For example:

    rbd_cluster_name = us-west
    rbd_ceph_conf = /etc/ceph/us-west.conf
  7. By default, OSP stores Ceph volumes in the rbd pool. To use the volumes pool created earlier, specify the rbd_pool setting and set it to the volumes pool. For example:

    rbd_pool = volumes
  8. OSP does not set a default user name or a secret UUID for volumes. Specify the rbd_user setting and set it to the cinder user. Then, specify the rbd_secret_uuid setting and set it to the generated UUID stored in the uuid-secret.txt file. For example:

    rbd_user = cinder
    rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
  9. Specify the following settings:

    rbd_flatten_volume_from_snapshot = false
    rbd_max_clone_depth = 5
    rbd_store_chunk_size = 4
    rados_connect_timeout = -1

The resulting configuration should look something like this:

[DEFAULT]
enabled_backends = ceph
glance_api_version = 2
...

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_cluster_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
Note

Consider removing the default [lvm] section and its settings.
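
If you prefer to apply these settings from a script rather than editing cinder.conf by hand, a tool such as crudini can write them non-interactively. The following sketch is one possible approach; it assumes crudini is installed and that the generated UUID is still available in the uuid-secret.txt file. Adjust the pool, user, cluster, and configuration file values if yours differ.

# crudini --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
# crudini --set /etc/cinder/cinder.conf DEFAULT glance_api_version 2
# crudini --set /etc/cinder/cinder.conf ceph volume_driver cinder.volume.drivers.rbd.RBDDriver
# crudini --set /etc/cinder/cinder.conf ceph rbd_cluster_name ceph
# crudini --set /etc/cinder/cinder.conf ceph rbd_pool volumes
# crudini --set /etc/cinder/cinder.conf ceph rbd_user cinder
# crudini --set /etc/cinder/cinder.conf ceph rbd_ceph_conf /etc/ceph/ceph.conf
# crudini --set /etc/cinder/cinder.conf ceph rbd_secret_uuid $(cat uuid-secret.txt)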

3.2. Configuring Cinder Backup

The cinder-backup node runs a separate daemon, openstack-cinder-backup, which uses its own Ceph user and pool for backups. To configure Cinder backup, perform the following steps:

  1. Open the Cinder configuration file.

    # vim /etc/cinder/cinder.conf
  2. Go to the [ceph] section of the configuration file.
  3. Specify the backup_driver setting and set it to the Ceph driver.

    backup_driver = cinder.backup.drivers.ceph
  4. Specify the backup_ceph_conf setting and set it to the path of the Ceph configuration file.

    backup_ceph_conf = /etc/ceph/ceph.conf
    Note

    The Cinder backup Ceph configuration file may be different from the Ceph configuration file used for Cinder. For example, it may point to a different Ceph cluster.

  5. Specify the Ceph pool for backups.

    backup_ceph_pool = backups
    Note

    While it is possible to use the same pool for Cinder backups as for Cinder volumes, it is not recommended. Consider using a pool with a different CRUSH hierarchy.

  6. Specify the backup_ceph_user setting and set it to the cinder-backup user.

    backup_ceph_user = cinder-backup
  7. Specify the following settings:

    backup_ceph_chunk_size = 134217728
    backup_ceph_stripe_unit = 0
    backup_ceph_stripe_count = 0
    restore_discard_excess_bytes = true

With the Cinder settings included, the [ceph] section of the cinder.conf file should look something like this:

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_cluster_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1

backup_driver = cinder.backup.drivers.ceph
backup_ceph_user = cinder-backup
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
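
Before restarting the services, you can optionally verify that the backups pool and the cinder-backup Ceph user referenced above actually exist. The following checks are a sketch and assume they are run from a host that holds a Ceph admin keyring:

# ceph osd lspools | grep backups
# ceph auth get client.cinder-backup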

Check whether Cinder backup is enabled in the dashboard configuration under /etc/openstack-dashboard/. The setting is in a file named local_settings or local_settings.py. For example:

# grep enable_backup /etc/openstack-dashboard/local_settings

If enable_backup is set to False, set it to True. For example:

OPENSTACK_CINDER_FEATURES = {
    'enable_backup': True,
}
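
A change to local_settings only takes effect after the dashboard's web server is reloaded. On a typical RHEL-based installation the dashboard runs under httpd, so a restart along these lines is usually required; adjust the service name for your environment:

# systemctl restart httpd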

3.3. Configuring Glance

To use Ceph block devices by default, edit the /etc/glance/glance-api.conf file. Uncomment the following settings if necessary and change their values accordingly. If you used a different pool, user, or Ceph configuration file, apply the appropriate values.

# vim /etc/glance/glance-api.conf
stores = rbd
default_store = rbd
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf

To enable copy-on-write (CoW) cloning, set show_image_direct_url to True.

show_image_direct_url = True
Important

Enabling CoW exposes the back end location via Glance’s API, so the endpoint should not be publicly accessible.

Disable cache management if necessary. The flavor should be set to keystone only, not keystone+cachemanagement.

flavor = keystone

Red Hat recommends the following properties for images:

hw_scsi_model=virtio-scsi
hw_disk_bus=scsi
hw_qemu_guest_agent=yes
os_require_quiesce=yes

The hw_scsi_model=virtio-scsi and hw_disk_bus=scsi properties attach every Cinder block device to the virtio-scsi controller, which provides better performance and supports discard operations. The hw_qemu_guest_agent=yes property enables the QEMU guest agent, and os_require_quiesce=yes sends fs-freeze/thaw calls through the guest agent.
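
These properties are set per image. As an illustrative sketch, assuming the glance client is available and <IMAGE_ID> is an existing image, they can be applied as follows:

# glance image-update --property hw_scsi_model=virtio-scsi \
      --property hw_disk_bus=scsi \
      --property hw_qemu_guest_agent=yes \
      --property os_require_quiesce=yes <IMAGE_ID>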

3.4. Configuring Nova

On every nova-compute node, edit the Ceph configuration file to configure the ephemeral back end for Nova so that all virtual machines boot directly from Ceph.

  1. Open the Ceph configuration file.

    # vim /etc/ceph/ceph.conf
  2. Add the following settings under the [client] section of the Ceph configuration file:

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true
    rbd concurrent management ops = 20
    admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/ceph/qemu-guest-$pid.log
  3. Create directories for the admin socket and log file, and change their ownership to the qemu user and libvirt group.

    # mkdir -p /var/run/ceph/guests/ /var/log/ceph/
    # chown qemu:libvirt /var/run/ceph/guests /var/log/ceph/
    Note

    The directories must be allowed by SELinux or AppArmor.
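
Because /var/run is typically a tmpfs that is cleared on reboot, the guests directory created above may need to be recreated after a restart. On systemd-based hosts, one way to recreate it automatically is a tmpfiles.d entry; the file name and 0770 mode below are only an example:

# cat > /etc/tmpfiles.d/ceph-guests.conf << EOF
d /var/run/ceph/guests 0770 qemu libvirt -
EOF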

On every nova-compute node, edit the [libvirt] section of the /etc/nova/nova.conf file and configure the following settings:

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
disk_cachemodes="network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
hw_disk_discard = unmap

If the Ceph configuration file is not /etc/ceph/ceph.conf, provide the correct path. Replace the UUID in rbd_secret_uuid with the UUID stored in the uuid-secret.txt file.
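
The rbd_secret_uuid value only works if the same UUID has been registered as a libvirt secret on each compute node. As a quick sanity check, assuming the UUID was saved to the uuid-secret.txt file as in the earlier steps, confirm that libvirt knows the secret:

# virsh secret-list
# virsh secret-get-value $(cat uuid-secret.txt)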

3.5. Restarting OpenStack Services

To activate the Ceph block device drivers and load the block device pool names and Ceph user names into the configuration, restart the appropriate OpenStack services after modifying the corresponding configuration files.

# systemctl restart openstack-cinder-volume
# systemctl restart openstack-cinder-backup
# systemctl restart openstack-glance-api
# systemctl restart openstack-nova-compute
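
After the services restart, a simple end-to-end check is to create a small test volume and confirm that a matching RBD image appears in the volumes pool. This sketch assumes the openstack client and the client.cinder keyring are available on the node where the commands run; the new volume appears as an image named volume-<UUID>:

# openstack volume create --size 1 test-volume
# rbd ls volumes --id cinder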