Chapter 5. Customizing the Ceph Storage Cluster
Deploying containerized Ceph Storage involves the use of /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml during overcloud deployment (as described in Chapter 4, Customizing the Storage Service). This environment file also defines the following resources:
- CephAnsibleDisksConfig - used to map the Ceph Storage node disk layout. See Section 5.1, “Mapping the Ceph Storage Node Disk Layout” for more details.
- CephConfigOverrides - used to apply all other custom settings to your Ceph cluster.
Use these resources to override any defaults set by the director for containerized Ceph Storage.
The /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment file uses playbooks provided by the ceph-ansible package. As such, you need to install this package on your undercloud first:
$ sudo yum install ceph-ansible
To customize your Ceph cluster, define your custom parameters in a new environment file, namely /home/stack/templates/ceph-config.yaml. You can apply arbitrary global Ceph cluster settings using the following syntax in the parameter_defaults section of your environment file:
parameter_defaults:
  CephConfigOverrides:
    KEY: VALUE
Replace KEY: VALUE with the Ceph cluster settings you want to apply.
For example, consider the following snippet:
parameter_defaults:
  CephConfigOverrides:
    journal_size: 2048
    max_open_files: 131072
This will result in the following settings defined in the configuration file of your Ceph cluster:
[global]
journal_size: 2048
max_open_files: 131072
See the Red Hat Ceph Storage Configuration Guide for detailed information about supported parameters.
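Both resources can live in the same /home/stack/templates/ceph-config.yaml file. The following is a minimal sketch of how the file might look with the global overrides above combined with a basic disk layout; the device names are placeholders, and the CephAnsibleDisksConfig variables are described in Section 5.1, “Mapping the Ceph Storage Node Disk Layout”:
parameter_defaults:
  CephConfigOverrides:
    journal_size: 2048
    max_open_files: 131072
  CephAnsibleDisksConfig:
    osd_scenario: collocated
    devices:
      - /dev/sda
      - /dev/sdb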
5.1. Mapping the Ceph Storage Node Disk Layout
When you deploy containerized Ceph Storage, you need to map the disk layout and specify dedicated block devices for the Ceph OSD service. You can do this in the environment file you created earlier to define your custom Ceph parameters — namely, /home/stack/templates/ceph-config.yaml.
Use the CephAnsibleDisksConfig resource in parameter_defaults to map your disk layout. This resource uses the following variables:
| Variable | Required? | Default value (if unset) | Description |
|---|---|---|---|
| osd_scenario | Yes | collocated | Sets the journaling scenario; that is, whether OSDs should be created with journals that are either co-located on the same device (collocated) or stored on dedicated devices (non-collocated). |
| devices | Yes | NONE. Variable must be set. | A list of block devices to be used on the node for OSDs. |
| dedicated_devices | Yes (only if osd_scenario is non-collocated) | devices | A list of block devices that maps each entry under devices to a dedicated journaling block device. This variable is only usable when osd_scenario=non-collocated. |
| dmcrypt | No | false | Sets whether data stored on OSDs is encrypted (true) or not (false). |
| osd_objectstore | No | filestore | Sets the storage back end used by Ceph. Currently, Red Hat Ceph Storage only supports filestore. |
The default journaling scenario is set to osd_scenario=collocated, which has lower hardware requirements consistent with most testing environments. In a typical production environment, however, journals are stored on dedicated devices (osd_scenario=non-collocated) to accommodate heavier I/O workloads. For related information, see Identifying a Performance Use Case.
List each block device to be used by OSDs as a simple list under the devices variable. For example:
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
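With the default collocated scenario, this list is the only disk layout information you need to provide. The following is a minimal sketch of how it might look in /home/stack/templates/ceph-config.yaml, assuming the four example devices above; osd_scenario is shown explicitly here even though collocated is the default:
parameter_defaults:
  CephAnsibleDisksConfig:
    osd_scenario: collocated
    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd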
If osd_scenario=non-collocated, you must also map each entry in devices to a corresponding entry in dedicated_devices. For example, given the following snippet in /home/stack/templates/ceph-config.yaml:
osd_scenario: non-collocated
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
dedicated_devices:
  - /dev/sdf
  - /dev/sdf
  - /dev/sdg
  - /dev/sdg
dmcrypt: true
Each Ceph Storage node in the resulting Ceph cluster would have the following characteristics:
- /dev/sda will have /dev/sdf1 as its journal
- /dev/sdb will have /dev/sdf2 as its journal
- /dev/sdc will have /dev/sdg1 as its journal
- /dev/sdd will have /dev/sdg2 as its journal
- Data stored on OSDs will be encrypted.
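For clarity, the following sketch shows where this snippet sits in the environment file, under parameter_defaults and CephAnsibleDisksConfig (the device names are the examples used above):
parameter_defaults:
  CephAnsibleDisksConfig:
    osd_scenario: non-collocated
    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
    dedicated_devices:
      - /dev/sdf
      - /dev/sdf
      - /dev/sdg
      - /dev/sdg
    dmcrypt: true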
In some nodes, disk paths (for example, /dev/sdb, /dev/sdc) may not point to the exact same block device during reboots. If this is the case with your CephStorage nodes, specify each disk through its /dev/disk/by-path/ symlink. For example:
parameter_defaults:
  CephAnsibleDisksConfig:
    devices:
      - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:10:0
      - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:11:0
    dedicated_devices:
      - /dev/nvme0n1
      - /dev/nvme0n1
This ensures that the block device mapping is consistent throughout deployments.
Because the list of OSD devices must be set prior to overcloud deployment, it may not be possible to identify and set the PCI path of disk devices. In this case, deploy the node without using the disks in question (as a compute node, for example) and run the following command on the deployed node. Use the output to define the PCI path of the disk device.
[root@overcloud-novacompute-0 ~]# ls -l /dev/disk/by-path/
total 0
lrwxrwxrwx. 1 root root  9 Jul 11 20:12 pci-0000:00:11.5-ata-1.0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Jul 11 20:12 pci-0000:00:11.5-ata-1.0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root  9 Jul 11 20:12 pci-0000:00:11.5-ata-2.0 -> ../../sdb
lrwxrwxrwx. 1 root root  9 Jul 11 20:12 pci-0000:00:11.5-ata-3.0 -> ../../sdc
lrwxrwxrwx. 1 root root  9 Jul 11 20:12 pci-0000:00:11.5-ata-4.0 -> ../../sdd
lrwxrwxrwx. 1 root root 13 Jul 11 20:12 pci-0000:1a:00.0-nvme-1 -> ../../nvme0n1
[root@overcloud-novacompute-0 ~]#
For more information about naming conventions for storage devices, see Persistent Naming.
For more details about each journaling scenario and disk mapping for containerized Ceph Storage, see the OSD Scenarios section of the project documentation for ceph-ansible.
5.2. Assigning Custom Attributes to Different Ceph Pools
By default, Ceph pools created through the director have the same placement group numbers (pg_num and pgp_num) and size. You can use either method described in Chapter 5, Customizing the Ceph Storage Cluster to override these settings globally; that is, doing so applies the same values to all pools.
You can also apply different attributes to each Ceph pool. To do so, use the CephPools parameter, as in:
parameter_defaults:
  CephPools:
    - name: POOL
      pg_num: 128
Replace POOL with the name of the pool you want to configure, along with the pg_num setting to indicate the number of placement groups. This overrides the default pg_num for the specified pool.
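Because CephPools accepts a list, you can assign different attributes to several pools in the same parameter. The following sketch sets a different pg_num for two pools; the pool names volumes and vms are examples only, and the pools present in your deployment depend on the services you enable:
parameter_defaults:
  CephPools:
    - name: volumes
      pg_num: 256
    - name: vms
      pg_num: 64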
You can also create new custom pools through the CephPools parameter. For example, to add a pool called custompool:
parameter_defaults:
  CephPools:
    - name: custompool
      pg_num: 128
This creates a new custom pool in addition to the default pools.
For typical pool configurations of common Ceph use cases, see the Ceph Placement Groups (PGs) per Pool Calculator. This calculator is normally used to generate the commands for manually configuring your Ceph pools. In this deployment, the director will configure the pools based on your specifications.
