Chapter 3. Configuring OpenStack to Use Ceph
3.1. Configuring Cinder
The cinder-volume nodes require the Ceph block device driver, the volume pool, the user and the UUID of the secret to interact with Ceph block devices. To configure Cinder, perform the following steps:
Open the Cinder configuration file.
# vim /etc/cinder/cinder.conf
In the [DEFAULT] section, enable Ceph as a back end for Cinder.
enabled_backends = ceph
Ensure that the Glance API version is set to 2. If you are configuring multiple Cinder back ends in enabled_backends, the glance_api_version = 2 setting must be in the [DEFAULT] section and not the [ceph] section.
glance_api_version = 2
Create a [ceph] section in the cinder.conf file. Add the Ceph settings in the following steps under the [ceph] section.
Specify the volume_driver setting and set it to use the Ceph block device driver. For example:
volume_driver = cinder.volume.drivers.rbd.RBDDriver
Specify the cluster name and the Ceph configuration file location. In typical deployments the Ceph cluster has a cluster name of ceph and a Ceph configuration file at /etc/ceph/ceph.conf. If the Ceph cluster name is not ceph, specify the cluster name and configuration file path appropriately. For example:
rbd_cluster_name = us-west
rbd_ceph_conf = /etc/ceph/us-west.conf
By default, OSP stores Ceph volumes in the rbd pool. To use the volumes pool created earlier, specify the rbd_pool setting and set it to volumes. For example:
rbd_pool = volumes
OSP does not have a default user name or a UUID of the secret for volumes. Specify rbd_user and set it to the cinder user. Then, specify the rbd_secret_uuid setting and set it to the generated UUID stored in the uuid-secret.txt file. For example:
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
Specify the following settings:
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
The resulting configuration should look something like this:
[DEFAULT]
enabled_backends = ceph
glance_api_version = 2
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_cluster_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
Consider removing the default [lvm] section and its settings.
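To confirm that the cinder Ceph user can reach the volumes pool before restarting services, run a quick check from the cinder-volume node. This is a sketch that assumes the client.cinder keyring was already installed under /etc/ceph/ during the earlier Ceph client setup.
# rbd --id cinder -p volumes ls
The command should return without an error; an empty listing simply means no volumes exist in the pool yet.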
3.2. Configuring Cinder Backup
The cinder-backup node runs a dedicated daemon that also needs access to Ceph. To configure Cinder backup, perform the following steps:
Open the Cinder configuration file.
# vim /etc/cinder/cinder.conf
Go to the [ceph] section of the configuration file.
Specify the backup_driver setting and set it to the Ceph driver.
backup_driver = cinder.backup.drivers.ceph
Specify the backup_ceph_conf setting and set it to the path of the Ceph configuration file.
backup_ceph_conf = /etc/ceph/ceph.conf
Note: The Cinder backup Ceph configuration file may be different from the Ceph configuration file used for Cinder. For example, it may point to a different Ceph cluster.
Specify the Ceph pool for backups.
backup_ceph_pool = backups
Note: While it is possible to use the same pool for Cinder backups as for Cinder volumes, it is not recommended. Consider using a pool with a different CRUSH hierarchy; a pool-creation sketch follows the sample configuration below.
Specify the backup_ceph_user setting and set the user to cinder-backup.
backup_ceph_user = cinder-backup
Specify the following settings:
backup_ceph_chunk_size = 134217728
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
With the Cinder settings included, the [ceph] section of the cinder.conf file should look something like this:
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_cluster_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
backup_driver = cinder.backup.drivers.ceph
backup_ceph_user = cinder-backup
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
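If the backups pool does not exist yet, create it from a Ceph monitor node. The following is only a sketch: the placement group count (128) and the CRUSH rule name (hdd-rule) are illustrative values to replace with ones appropriate for your cluster, and on older Ceph releases the pool setting is named crush_ruleset rather than crush_rule.
# ceph osd pool create backups 128 128
# ceph osd pool set backups crush_rule hdd-rule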
Check to see if Cinder backup is enabled under /etc/openstack-dashboard/. The setting should be in a file called local_settings or local_settings.py. For example:
cat /etc/openstack-dashboard/local_settings | grep enable_backup
If enable_backup is set to False, set it to True. For example:
OPENSTACK_CINDER_FEATURES = {
'enable_backup': True,
}
3.3. Configuring Glance
To use Ceph block devices by default, edit the /etc/glance/glance-api.conf file. Uncomment the following settings if necessary and change their values accordingly. If you used different pool, user, or Ceph configuration file settings, apply the appropriate values.
# vim /etc/glance/glance-api.conf
stores = rbd
default_store = rbd
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
To enable copy-on-write (CoW) cloning, set show_image_direct_url to True.
show_image_direct_url = True
Enabling CoW exposes the back end location via Glance’s API, so the endpoint should not be publicly accessible.
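CoW cloning is only effective for images stored in raw format; QCOW2 images are converted when volumes or ephemeral disks are created from them, so no clone is made. As a sketch, an existing QCOW2 image can be converted and uploaded in raw format (cirros.qcow2 and the image name cirros are placeholder examples, and the glance client offers an equivalent image-create command):
# qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw
# openstack image create --disk-format raw --container-format bare --file cirros.raw cirros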
Disable cache management if necessary. The flavor should be set to keystone only, not keystone+cachemanagement.
flavor = keystone
Red Hat recommends the following properties for images:
hw_scsi_model=virtio-scsi
hw_disk_bus=scsi
hw_qemu_guest_agent=yes
os_require_quiesce=yes
The virtio-scsi controller provides better performance and supports discard operations. For systems using SCSI/SAS drives, connect every Cinder block device to that controller. The remaining properties enable the QEMU guest agent and send fs-freeze/thaw calls through the guest agent when snapshots are taken.
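These properties can be applied to an existing image with the unified OpenStack client, for example (the image name rhel7 is only a placeholder):
# openstack image set \
    --property hw_scsi_model=virtio-scsi \
    --property hw_disk_bus=scsi \
    --property hw_qemu_guest_agent=yes \
    --property os_require_quiesce=yes \
    rhel7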
3.4. Configuring Nova
On every nova-compute node, edit the Ceph configuration file to configure the ephemeral backend for Nova and to boot all the virtual machines directly into Ceph.
Open the Ceph configuration file.
# vim /etc/ceph/ceph.conf
Add the following settings to the [client] section of the Ceph configuration file:
[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd concurrent management ops = 20
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/ceph/qemu-guest-$pid.log
Make directories for the admin socket and log file, and change their ownership to the qemu user and libvirt group.
mkdir -p /var/run/ceph/guests/ /var/log/ceph/
chown qemu:libvirt /var/run/ceph/guests /var/log/ceph/
Note: The directories must be allowed by SELinux or AppArmor.
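On most systems /var/run is a tmpfs that is cleared at boot, so the guests directory created above will not survive a reboot. As a sketch, a systemd-tmpfiles entry can recreate it automatically; the file name and mode shown here are illustrative:
# cat /etc/tmpfiles.d/ceph-guests.conf
d /var/run/ceph/guests 0755 qemu libvirt -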
On every nova-compute node, edit the /etc/nova/nova.conf file under the [libvirt] section and configure the following settings:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
disk_cachemodes="network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
hw_disk_discard = unmap
If the Ceph configuration file is not /etc/ceph/ceph.conf, provide the correct path. Replace the UUID in rbd_secret_uuid with the UUID stored in the uuid-secret.txt file.
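To confirm that the UUID configured in nova.conf matches the libvirt secret defined on the compute node, list the secrets with virsh. This assumes the secret was registered with libvirt during the earlier client key setup.
# virsh secret-list
# virsh secret-get-value 4b5fd580-360c-4f8c-abb5-c83bb9a3f964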
3.5. Restarting OpenStack Services
To activate the Ceph block device driver and load the block device pool names and Ceph user names into the configuration, restart the appropriate OpenStack services after modifying the corresponding configuration files.
# systemctl restart openstack-cinder-volume
# systemctl restart openstack-cinder-backup
# systemctl restart openstack-glance-api
# systemctl restart openstack-nova-compute
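After the services restart, a quick end-to-end check is to create a small test volume and confirm that a matching RBD image appears in the volumes pool. This sketch assumes admin credentials are loaded in the environment and that a usable Ceph keyring is present on the node where rbd runs; the new volume shows up as an image named volume-<UUID>.
# openstack volume create --size 1 test-volume
# rbd --id cinder -p volumes ls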
