Chapter 6. Defining performance tiers for varying workloads in a Ceph Storage cluster with director
- This procedure is currently for new director deployments only.
You can use Red Hat OpenStack Platform (RHOSP) director to deploy different Red Hat Ceph Storage performance tiers. You can combine Ceph CRUSH rules and the
CephPools director parameter to use the device classes feature and build different tiers to accommodate workloads with different performance requirements. For example, you can define an HDD class for normal workloads and an SSD class that distributes data only over SSDs for high-performance workloads. In this scenario, when you create a new Block Storage volume, you can choose the performance tier, either HDDs or SSDs.
Ceph autodetects the disk type and assigns it to the corresponding device class, either HDD, SSD, or NVMe, based on the hardware properties exposed by the Linux kernel. However, you can also customize the category according to your needs.
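After deployment, you can check how Ceph classified each OSD. The following commands are standard Ceph CLI queries and are shown only as a quick reference; run them on a node that has access to the Ceph cluster, for example inside the ceph-mon container on a Controller node.

# List the device classes that Ceph detected or that you assigned manually,
# and show the per-class shadow hierarchy in the CRUSH map.
$ ceph osd crush class ls
$ ceph osd crush tree --show-shadow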
- Red Hat Ceph Storage (RHCS) version 4.1 or later.
To deploy different Red Hat Ceph Storage performance tiers, create a new environment file that contains the CRUSH map details and then include it in the deployment command.
In the following procedures, each Ceph Storage node contains three OSDs:
sdb and sdc are spinning disks and
sdd is an SSD. Ceph automatically detects the correct disk type. You then configure two CRUSH rules, HDD and SSD, to map to the two respective device classes. The HDD rule is the default and applies to all pools unless you configure pools with a different rule.
Finally, you create an extra pool called
fastpool and map it to the SSD rule. This pool is ultimately exposed through a Block Storage (cinder) back end. Any workload that consumes this Block Storage back end is backed by SSDs only, for fast performance. You can leverage this for either data or boot from volume.
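For orientation, the end state that this chapter builds can be summarized with two standard Ceph queries. The rule and pool names in the comments follow this chapter's example configuration.

# Expected end state after deployment (names from this chapter's example):
$ ceph osd crush rule ls   # lists the HDD (default) and SSD rules
$ ceph osd pool ls         # includes the extra fastpool pool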
6.1. Configuring the performance tiers
Director does not expose dedicated parameters for this feature; however, you can generate the expected
ceph-ansible variables by completing the following steps.
Log in to the undercloud node as the stack user.
Create an environment file, such as /home/stack/templates/ceph-config.yaml, to contain the Ceph configuration parameters and the device classes variables. Alternatively, you can add the following configurations to an existing environment file.
In the environment file, use the
CephAnsibleDisksConfig parameter to list the block devices that you want to use as Ceph OSDs:
CephAnsibleDisksConfig:
  devices:
    - /dev/sdb
    - /dev/sdc
    - /dev/sdd
  osd_scenario: lvm
  osd_objectstore: bluestore
Optional: Ceph automatically detects the type of disk and assigns it to the corresponding device class. However, you can also use the
crush_device_class property to force a specific device to belong to a specific class, or to create your own custom classes (a hypothetical custom-class sketch follows the example). The following example contains the same list of OSDs with specified classes:
CephAnsibleDisksConfig:
  lvm_volumes:
    - data: '/dev/sdb'
      crush_device_class: 'hdd'
    - data: '/dev/sdc'
      crush_device_class: 'hdd'
    - data: '/dev/sdd'
      crush_device_class: 'ssd'
  osd_scenario: lvm
  osd_objectstore: bluestore
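A custom class can be useful when you want to separate devices that the kernel reports as the same type. The following sketch is hypothetical: the NVMe device path and the nvme-fast class name are illustrative assumptions, not part of this chapter's reference configuration.

CephAnsibleDisksConfig:
  lvm_volumes:
    # Hypothetical custom class for a dedicated NVMe tier
    - data: '/dev/nvme0n1'
      crush_device_class: 'nvme-fast'
  osd_scenario: lvm
  osd_objectstore: bluestore

If you define a custom class, add a matching rule for it to the crush_rules parameter in the next step.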
Use the CephAnsibleExtraConfig parameter to define the CRUSH rules. The
crush_rules parameter must contain a rule for each class that you define or that Ceph detects automatically. When you create a new pool without specifying a rule, Ceph selects the rule that you set as the default.
CephAnsibleExtraConfig:
  crush_rule_config: true
  create_crush_tree: true
  crush_rules:
    - name: HDD
      root: default
      type: host
      class: hdd
      default: true
    - name: SSD
      root: default
      type: host
      class: ssd
      default: false
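Continuing the hypothetical custom-class sketch from the previous step, a matching rule for the assumed nvme-fast class might look like the following; the rule name NVME is also an assumption.

CephAnsibleExtraConfig:
  crush_rule_config: true
  create_crush_tree: true
  crush_rules:
    # Hypothetical rule that targets the assumed custom nvme-fast class
    - name: NVME
      root: default
      type: host
      class: nvme-fast
      default: false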
Use the
rule_name property in the CephPools parameter to specify the tier for each pool that does not use the default rule. In the following example, the
fastpool pool uses the SSD device class, which is configured as a fast tier, to manage Block Storage volumes.
Replace <appropriate_PG_num> with the appropriate number of placement groups (PGs). Alternatively, use the placement group auto-scaler to calculate the number of PGs for the Ceph pools; a hedged auto-scaler sketch follows the next example.
For more information, see Assigning custom attributes to different Ceph pools.
Use the
CinderRbdExtraPools parameter to configure
fastpool as a Block Storage back end:
CephPools:
  - name: fastpool
    pg_num: <appropriate_PG_num>
    rule_name: SSD
    application: rbd
CinderRbdExtraPools: fastpool
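If you prefer the placement group auto-scaler over a fixed pg_num, the pool entry can opt in to it directly. The following is a hedged sketch: the pg_autoscale_mode and target_size_ratio keys and their values are assumptions based on ceph-ansible pool options, not values taken from this chapter.

CephPools:
  - name: fastpool
    # Assumed auto-scaler options used instead of a fixed pg_num
    pg_autoscale_mode: True
    target_size_ratio: 0.2
    rule_name: SSD
    application: rbd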
Use the following example to ensure that your environment file contains the correct values:
parameter_defaults:
  CephAnsibleDisksConfig:
    devices:
      - '/dev/sdb'
      - '/dev/sdc'
      - '/dev/sdd'
    osd_scenario: lvm
    osd_objectstore: bluestore
  CephAnsibleExtraConfig:
    crush_rule_config: true
    create_crush_tree: true
    crush_rules:
      - name: HDD
        root: default
        type: host
        class: hdd
        default: true
      - name: SSD
        root: default
        type: host
        class: ssd
        default: false
  CinderRbdExtraPools: fastpool
  CephPools:
    - name: fastpool
      pg_num: <appropriate_PG_num>
      rule_name: SSD
      application: rbd
Include the new environment file in the
openstack overcloud deploy command. Replace
<existing_overcloud_environment_files> with the list of environment files that are part of your existing deployment.
$ openstack overcloud deploy \
  --templates \
  …
  -e <existing_overcloud_environment_files> \
  -e /home/stack/templates/ceph-config.yaml \
  …
6.2. Mapping a Block Storage (cinder) type to your new Ceph pool
After you complete the configuration steps, make the performance tiers feature available to RHOSP tenants by using Block Storage (cinder) to create a type that is mapped to the fastpool tier that you created.
Log in to the undercloud node as the stack user and source the overcloud credentials:
$ source overcloudrc
Check the existing Block Storage volume types:
$ cinder type-list
Create the new Block Storage volume type fast_tier:
$ cinder type-create fast_tier
Check that the Block Storage type is created:
$ cinder type-list
After you verify that the
fast_tier Block Storage type is available, set the
fastpool as the Block Storage volume back end for the new tier that you created:
$ cinder type-key fast_tier set volume_backend_name=tripleo_ceph_fastpool
Use the new tier to create new volumes:
$ cinder create 1 --volume-type fast_tier --name fastdisk
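To confirm that the new volume is actually backed by the SSD tier, you can cross-check from both the Block Storage and the Ceph side. The first command runs wherever the overcloud credentials are sourced; the second runs on a Controller node and follows the ceph-mon container naming used in the verification section below.

# Confirm the volume type of the new volume, then list the RBD images in fastpool.
$ cinder show fastdisk | grep volume_type
$ sudo podman exec ceph-mon-<controller_hostname> rbd -p fastpool ls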
If you apply the environment file to an existing Ceph cluster, the pre-existing Ceph pools are not updated with the new rules. For this reason, you must enter the following command after the deployment completes to apply the rules to the specified pools; a worked example follows the CephPools template below.
$ ceph osd pool set <pool> crush_rule <rule>
- Replace <pool> with the name of the pool that you want to apply the new rule to.
Replace <rule> with one of the rule names that you specified with the crush_rules parameter.
Replace <appropriate_PG_num> with the appropriate number of placement groups (PGs). Alternatively, use the placement group auto-scaler to calculate the number of PGs.
For every rule that you change with this command, update the existing entry or add a new entry in the
CephPools parameter in your existing templates:
CephPools:
  - name: <pool>
    pg_num: <appropriate_PG_num>
    rule_name: <rule>
    application: rbd
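As a concrete, hedged illustration that reuses this chapter's rule names, pinning a pre-existing pool to the HDD rule would look like the following, run through the ceph-mon container on a Controller node; the pool name volumes is hypothetical and only stands in for one of your existing pools.

# Hypothetical example: apply the HDD rule to a pre-existing pool named "volumes"
$ sudo podman exec -it ceph-mon-<controller_hostname> ceph osd pool set volumes crush_rule HDD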
6.3. Verifying that the CRUSH rules are created and that your pools are set to the correct CRUSH rule
Log in to the overcloud Controller node as the heat-admin user.
To verify that your OSD tiers are successfully set, enter the following command. Replace <controller_hostname> with the name of your host Controller node.
$ sudo podman exec -it ceph-mon-<controller_hostname> ceph osd tree
- In the resulting tree view, verify that the CLASS column displays the correct device class for each OSD that you set.
Also verify that the OSDs are properly assigned to the device classes with following command. Replace <controller_hostname> with the name of your host Controller node.
$ sudo podman exec -it ceph-mon-<controller_hostname> ceph osd crush tree --show-shadow
Compare the resulting hierarchy with the results of the following command to ensure that the same values apply for each rule.
- Replace <controller_hostname> with the name of your host Controller node.
Replace <rule_name> with the name of the rule you want to check.
$ sudo podman exec ceph-mon-<controller_hostname> ceph osd crush rule dump <rule_name>
Verify that the rule names and IDs that you created are correct according to the
crush_rules parameter that you used during deployment. Replace <controller_hostname> with the name of your host Controller node.
$ sudo podman exec -it ceph-mon-<controller_hostname> ceph osd crush rule dump | grep -E "rule_(id|name)"
Verify that the Ceph pools are tied to the correct CRUSH rule IDs that you retrieved in the previous step. Replace <controller_hostname> with the name of your host Controller node.
$ sudo podman exec -it ceph-mon-<controller_hostname> ceph osd dump | grep pool
- For each pool, ensure that the rule ID matches the rule name that you expect.