Chapter 5. Customizing the storage service
The director heat template collection contains the necessary templates and environment files to enable a basic Ceph Storage configuration.
Director uses the
/usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml environment file to add configuration to the Ceph Storage cluster deployed by
openstack overcloud ceph deploy and integrate it with your overcloud during deployment.
5.1. Configuring a custom environment file
Director applies basic, default settings to the deployed Red Hat Ceph Storage cluster. You must define additional configuration in a custom environment file.
Procedure
- Log in to the undercloud host as the stack user.
- Create a file to define the custom configuration.
- Add a parameter_defaults section to the file.
- Add the custom configuration parameters. For more information about parameter definitions, see Overcloud parameters.

  parameter_defaults:
    CinderEnableIscsiBackend: false
    CinderEnableRbdBackend: true
    CinderBackupBackend: ceph
    NovaEnableRbdBackend: true
    GlanceBackend: rbd

  Note
  Parameters defined in a custom configuration file override any corresponding default settings in /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml.
- Save the file.
The custom configuration is applied during overcloud deployment.
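To apply the custom configuration, include the file in the overcloud deploy command. The following is a sketch; the file name /home/stack/templates/storage-config.yaml is a placeholder for your own custom environment file:

```shell
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml \
  -e /home/stack/templates/storage-config.yaml
```

Pass the custom environment file after the default cephadm.yaml file so that its parameter_defaults values take precedence.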
5.2. Red Hat Ceph Storage placement groups
Placement groups (PGs) facilitate dynamic and efficient object tracking at scale. In the event of OSD failure or Ceph Storage cluster rebalancing, Ceph can move or replicate a placement group and the contents of the placement group. This allows a Ceph Storage cluster to rebalance and recover efficiently.
The placement group and replica count settings are not changed from the defaults unless the relevant parameters are explicitly set in a Ceph configuration file.
When the overcloud is deployed with the openstack overcloud deploy command, a pool is created for every enabled Red Hat OpenStack Platform service. For example, the following command creates pools for the Compute service (nova), the Block Storage service (cinder), and the Image service (glance):

openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm-rbd-only.yaml
Adding -e environments/cinder-backup.yaml to the command creates a pool called backups:

openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm-rbd-only.yaml \
  -e environments/cinder-backup.yaml
It is not necessary to configure a placement group number per pool; the pg_autoscale_mode attribute is enabled by default. However, it is recommended to configure the target_size_ratio or pg_num attribute per pool. This minimizes data rebalancing.
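After deployment, the autoscaler's view of each pool can be inspected from a node with admin access to the Ceph cluster. The following is a sketch using standard Ceph commands:

```shell
# Show the autoscaler status, target ratios, and current and target PG counts per pool
ceph osd pool autoscale-status

# Confirm that autoscaling is enabled for an individual pool, for example volumes
ceph osd pool get volumes pg_autoscale_mode
```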
To set the target_size_ratio attribute per pool, use a configuration file entry similar to the following example:

parameter_defaults:
  CephPools:
    - name: volumes
      target_size_ratio: 0.4
      application: rbd
    - name: images
      target_size_ratio: 0.1
      application: rbd
    - name: vms
      target_size_ratio: 0.3
      application: rbd
In this example, the percentage of data used per service will be:
- Cinder volumes - 40%
- Glance images - 10%
- Nova vms - 30%
- Free space for other pools - 20%
Set these values based on your expected usage. If you do not override the CephPools parameter, each pool uses the default placement group number. Although the autoscaler adjusts this number automatically over time based on usage, the data is moved within the Ceph cluster, which uses computational resources.
If you prefer to set a placement group number instead of a target size ratio, replace
target_size_ratio in the example with
pg_num. Use a different integer per pool based on your expected usage.
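For illustration, the earlier example rewritten with explicit placement group numbers might look like the following sketch; the pg_num values shown are placeholders, not recommendations:

```yaml
parameter_defaults:
  CephPools:
    - name: volumes
      pg_num: 128       # placeholder; size per expected usage
      application: rbd
    - name: images
      pg_num: 32        # placeholder
      application: rbd
    - name: vms
      pg_num: 64        # placeholder
      application: rbd
```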
See the Red Hat Ceph Storage Hardware Guide for Red Hat Ceph Storage processor, network interface card, and power management interface recommendations.
5.3. Enabling Ceph Metadata Server
The Ceph Metadata Server (MDS) runs the
ceph-mds daemon. This daemon manages metadata related to files stored on CephFS. CephFS can be consumed natively or through the NFS protocol.
Red Hat supports deploying Ceph MDS with the native CephFS and CephFS NFS back ends for the Shared File Systems service (manila).
To enable Ceph MDS, use the following environment file when you deploy the overcloud:
By default, Ceph MDS is deployed on the Controller node. You can deploy Ceph MDS on its own dedicated node.
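After deployment, MDS health can be confirmed from a node with admin access to the Ceph cluster. The following is a sketch using standard Ceph commands:

```shell
# Show the state of the CephFS file system and its active MDS daemons
ceph fs status

# Show a summary of MDS daemon states (active and standby)
ceph mds stat
```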
5.4. Ceph Object Gateway object storage
The Ceph Object Gateway (RGW) provides an interface to access object storage capabilities within a Red Hat Ceph Storage cluster.
When you use director to deploy Ceph, director automatically enables RGW. This is a direct replacement for the Object Storage service (swift). Services that normally use the Object Storage service can use RGW instead without additional configuration. The Object Storage service remains available as an object storage option for upgraded Ceph clusters.
There is no requirement for a separate RGW environment file to enable it. For more information about environment files for other object storage options, see Section 5.5, “Deployment options for Red Hat OpenStack Platform object storage”.
By default, Ceph Storage allows 250 placement groups per Object Storage Daemon (OSD). When you enable RGW, Ceph Storage creates the following six additional pools required by RGW:
In your deployment,
<zone_name> is replaced with the name of the zone to which the pools belong.
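After deployment, the RGW pools can be listed from a node with admin access to the Ceph cluster to confirm that they were created. A sketch:

```shell
# List all pools and filter for the RGW pools; the pool name prefix
# depends on the zone name in your deployment
ceph osd pool ls | grep rgw
```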
5.5. Deployment options for Red Hat OpenStack Platform object storage
There are three options for deploying overcloud object storage:
Ceph Object Gateway (RGW)
To deploy RGW as described in Section 5.4, “Ceph Object Gateway object storage”, include the following environment file during overcloud deployment:
This environment file configures both Ceph block storage (RBD) and RGW.
Object Storage service (swift)
To deploy the Object Storage service (swift) instead of RGW, include the following environment file during overcloud deployment:

/usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm-rbd-only.yaml

The cephadm-rbd-only.yaml file configures Ceph RBD but not RGW.

Note
If you used the Object Storage service (swift) before upgrading your Red Hat Ceph Storage cluster, you can continue to use the Object Storage service (swift) instead of RGW by replacing the environments/ceph-ansible/ceph-ansible.yaml file with the environments/cephadm/cephadm-rbd-only.yaml file during the upgrade. For more information, see Performing a minor update of Red Hat OpenStack Platform.
Red Hat OpenStack Platform does not support migration from the Object Storage service (swift) to Ceph Object Gateway (RGW).
No object storage
To deploy Ceph with RBD but not with RGW or the Object Storage service (swift), include the following environment files during overcloud deployment:
-e environments/cephadm/cephadm-rbd-only.yaml
-e environments/disable-swift.yaml
The cephadm-rbd-only.yaml file configures RBD but not RGW. The disable-swift.yaml file ensures that the Object Storage service (swift) does not deploy.
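Combined into a deploy command, the two files might be included as follows; this is a sketch assuming the default template location:

```shell
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm-rbd-only.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/disable-swift.yaml
```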
5.6. Configuring the Block Storage Backup Service to use Ceph
The Block Storage Backup service (cinder-backup) is disabled by default. You must enable it to use it with Ceph.
To enable the Block Storage Backup service (cinder-backup), include the following environment file when you deploy the overcloud:

environments/cinder-backup.yaml
5.7. Configuring multiple bonded interfaces for Ceph nodes
Use a bonded interface to combine multiple NICs and add redundancy to a network connection. If you have enough NICs on your Ceph nodes, you can create multiple bonded interfaces on each node to expand redundancy capability.
Use a bonded interface for each network connection the node requires. This provides both redundancy and a dedicated connection for each network.
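As an illustration, a NIC configuration entry for one bonded interface in os-net-config format might look like the following sketch; the bond name, member interface names, and bonding options are placeholders that depend on your hardware and switch configuration:

```yaml
network_config:
  - type: linux_bond
    name: bond1                                   # placeholder bond name
    bonding_options: "mode=802.3ad lacp_rate=fast" # placeholder; match your switch
    members:
      - type: interface
        name: nic2                                # placeholder member NIC
      - type: interface
        name: nic3                                # placeholder member NIC
```

Repeat a similar entry for each network connection that requires its own bonded interface.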
See Provisioning the overcloud networks in the Installing and managing Red Hat OpenStack Platform with director guide for information and procedures.