Chapter 6. Customizing the Ceph Storage Cluster

Deploying containerized Ceph Storage involves the use of /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml during overcloud deployment (as described in Chapter 5, Customizing the Storage Service). This environment file also defines the following resources:

CephAnsibleDisksConfig
This resource maps the Ceph Storage node disk layout. See Section 6.2, “Mapping the Ceph Storage Node Disk Layout” for more details.
CephConfigOverrides
This resource applies all other custom settings to your Ceph cluster.

Use these resources to override any defaults set by the director for containerized Ceph Storage.
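Both resources can be combined in a single custom environment file, as described later in this chapter. The following is a minimal sketch with illustrative values only:

parameter_defaults:
  CephConfigOverrides:
    max_open_files: 131072
  CephAnsibleDisksConfig:
    devices:
      - /dev/sdb
    osd_scenario: lvm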

The /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment file uses playbooks provided by the ceph-ansible package. As such, you need to install this package on your undercloud first:

$ sudo yum install ceph-ansible

To customize your Ceph cluster, define your custom parameters in a new environment file, namely /home/stack/templates/ceph-config.yaml. You can apply arbitrary global Ceph cluster settings by using the following syntax in the parameter_defaults section of your environment file:

parameter_defaults:
  CephConfigOverrides:
    KEY: VALUE

Replace KEY and VALUE with the Ceph cluster settings you want to apply. For example, consider the following snippet:

parameter_defaults:
  CephConfigOverrides:
    max_open_files: 131072

This will result in the following settings defined in the configuration file of your Ceph cluster:

[global]
max_open_files: 131072

See the Red Hat Ceph Storage Configuration Guide for information about supported parameters.

Note

The CephConfigOverrides parameter applies only to the [global] section of the ceph.conf file. You cannot make changes to other sections, for example the [osd] section, with the CephConfigOverrides parameter.

The ceph-ansible tool has a group_vars directory that you can use to set many different Ceph parameters. For more information, see 3.2. Installing a Red Hat Ceph Storage Cluster in the Installation Guide for Red Hat Enterprise Linux.

To change the variable defaults in director, you can use the CephAnsibleExtraConfig parameter to pass the new values in heat environment files. For example, to set the ceph-ansible group variable journal_size to 40960, create an environment file with the following journal_size definition:

parameter_defaults:
  CephAnsibleExtraConfig:
    journal_size: 40960

Important

Change ceph-ansible group variables with the override parameters; do not edit group variables directly in the /usr/share/ceph-ansible directory on the undercloud.

6.1. Ceph containers for OpenStack Platform with Ceph Storage

A Ceph container is required to configure OpenStack Platform to use Ceph, even with an external Ceph cluster. To be compatible with Red Hat Enterprise Linux 8, OpenStack Platform 15 requires Red Hat Ceph Storage 4. The Ceph Storage 4 container is hosted at registry.redhat.io, a registry which requires authentication.

You can use the heat environment parameter ContainerImageRegistryCredentials to authenticate at registry.redhat.io, as described in Container image preparation parameters.
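A minimal sketch of this parameter in a Heat environment file, using placeholder credentials (replace my_username and my_password with your registry service account credentials):

parameter_defaults:
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      my_username: my_password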

Note

At this time, Red Hat Ceph Storage 4 is available only in the beta release of OpenStack Platform 15. The appropriate container to use is listed in the Red Hat Container Catalog.

6.2. Mapping the Ceph Storage Node Disk Layout

When you deploy containerized Ceph Storage, you need to map the disk layout and specify dedicated block devices for the Ceph OSD service. You can do this in the environment file you created earlier to define your custom Ceph parameters — namely, /home/stack/templates/ceph-config.yaml.

Use the CephAnsibleDisksConfig resource in parameter_defaults to map your disk layout. This resource uses the following variables:

osd_scenario
  Required: Yes
  Default value (if unset): lvm
  NOTE: For new deployments using Ceph 3.2 and later, lvm is the default. For Ceph 3.1 and earlier, the default is collocated.
  Description: With Ceph 3.2, lvm allows ceph-ansible to use ceph-volume to configure OSDs and BlueStore WAL devices. With Ceph 3.1, the values set the journaling scenario, such as whether OSDs should be created with journals that are either:
    - co-located on the same device for filestore (collocated), or
    - stored on dedicated devices for filestore (non-collocated).

devices
  Required: Yes
  Default value (if unset): NONE. Variable must be set.
  Description: A list of block devices to be used on the node for OSDs.

dedicated_devices
  Required: Yes (only if osd_scenario is non-collocated)
  Default value (if unset): devices
  Description: A list of block devices that maps each entry under devices to a dedicated journaling block device. This variable is only usable when osd_scenario=non-collocated.

dmcrypt
  Required: No
  Default value (if unset): false
  Description: Sets whether data stored on OSDs are encrypted (true) or not (false).

osd_objectstore
  Required: No
  Default value (if unset): bluestore
  NOTE: For new deployments using Ceph 3.2 and later, bluestore is the default. For Ceph 3.1 and earlier, the default is filestore.
  Description: Sets the storage back end used by Ceph.

6.2.1. Using BlueStore in Ceph 3.2 and later

To specify the block devices to be used as Ceph OSDs, use a variation of the following:

parameter_defaults:
  CephAnsibleDisksConfig:
    devices:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
      - /dev/nvme0n1
    osd_scenario: lvm
    osd_objectstore: bluestore

Because /dev/nvme0n1 is in a higher performing device class (it is an SSD while the other devices are HDDs), the example parameter defaults produce three OSDs that run on /dev/sdb, /dev/sdc, and /dev/sdd. The three OSDs use /dev/nvme0n1 as a BlueStore WAL device. The ceph-volume tool accomplishes this by using the batch subcommand. The same setup is duplicated on every Ceph Storage node and assumes uniform hardware. If the BlueStore WAL data should reside on the same disks as the OSDs, change the parameter defaults to the following:

parameter_defaults:
  CephAnsibleDisksConfig:
    devices:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
    osd_scenario: lvm
    osd_objectstore: bluestore

6.2.2. Using FileStore in Ceph 3.1 and earlier

Important

The default journaling scenario is set to osd_scenario=collocated, which has lower hardware requirements consistent with most testing environments. In a typical production environment, however, journals are stored on dedicated devices (osd_scenario=non-collocated) to accommodate heavier I/O workloads. For related information, see Identifying a Performance Use Case.

List each block device to be used by the OSDs as a simple list under the devices variable. For example:

devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd

If osd_scenario=non-collocated, you must also map each entry in devices to a corresponding entry in dedicated_devices. For example, notice the following snippet in /home/stack/templates/ceph-config.yaml:

osd_scenario: non-collocated
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd

dedicated_devices:
  - /dev/sdf
  - /dev/sdf
  - /dev/sdg
  - /dev/sdg

Each Ceph Storage node in the resulting Ceph cluster has the following characteristics:

  • /dev/sda has /dev/sdf1 as its journal
  • /dev/sdb has /dev/sdf2 as its journal
  • /dev/sdc has /dev/sdg1 as its journal
  • /dev/sdd has /dev/sdg2 as its journal

6.2.3. Referring to devices with persistent names

On some nodes, disk paths such as /dev/sdb and /dev/sdc might not point to the same block device across reboots. If this is the case with your CephStorage nodes, specify each disk through its /dev/disk/by-path/ symlink.

For example:

parameter_defaults:
  CephAnsibleDisksConfig:
    devices:
      - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:10:0
      - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:11:0
    dedicated_devices:
      - /dev/nvme0n1
      - /dev/nvme0n1

This ensures that the block device mapping is consistent throughout deployments.

Because the list of OSD devices must be set prior to overcloud deployment, it may not be possible to identify and set the PCI path of disk devices. In this case, gather the /dev/disk/by-path/ symlink data for block devices during introspection.

In the following example, the first command downloads the introspection data from the undercloud Object Storage service (swift) for the server b08-h03-r620-hci and saves the data in a file called b08-h03-r620-hci.json. The second command greps for “by-path” and the results show the required data.

(undercloud) [stack@b08-h02-r620 ironic]$ openstack baremetal introspection data save b08-h03-r620-hci | jq . > b08-h03-r620-hci.json
(undercloud) [stack@b08-h02-r620 ironic]$ grep by-path b08-h03-r620-hci.json
        "by_path": "/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0",
        "by_path": "/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:1:0",
        "by_path": "/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:3:0",
        "by_path": "/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:4:0",
        "by_path": "/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:5:0",
        "by_path": "/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:6:0",
        "by_path": "/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:7:0",
        "by_path": "/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0",

For more information about naming conventions for storage devices, see Persistent Naming.

For details about each journaling scenario and disk mapping for containerized Ceph Storage, see the OSD Scenarios section of the project documentation for ceph-ansible.

Warning

osd_scenario: lvm is used in the example to default new deployments to bluestore as configured by ceph-volume; this is only available with ceph-ansible 3.2 or later and Ceph Luminous or later. The parameters to support filestore with ceph-ansible 3.2 are backwards compatible. Therefore, in existing FileStore deployments, do not simply change the osd_objectstore or osd_scenario parameters without first taking steps to maintain both back ends.

6.3. Assigning Custom Attributes to Different Ceph Pools

By default, Ceph pools created through the director have the same placement group count (pg_num and pgp_num) and size. You can use either method in Chapter 6, Customizing the Ceph Storage Cluster to override these settings globally; that is, doing so applies the same values to all pools.

You can also apply different attributes to each Ceph pool. To do so, use the CephPools parameter, as in:

parameter_defaults:
  CephPools:
    - name: POOL
      pg_num: 128
      application: rbd

Replace POOL with the name of the pool you want to configure, and set pg_num to the number of placement groups you want for that pool. This overrides the default pg_num for the specified pool.

If you use the CephPools parameter, you must also specify the application type. The application type for Compute, Block Storage, and Image Storage should be rbd, as shown in the examples, but depending on what the pool will be used for, you may need to specify a different application type. For example, the application type for the gnocchi metrics pool is openstack_gnocchi. See Enable Application in the Storage Strategies Guide for more information.
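For example, the following sketch sets different attributes for two pools. The pool names vms and metrics are shown for illustration (vms is commonly used by Compute and metrics by gnocchi), and the pg_num values are placeholders to adjust for your environment:

parameter_defaults:
  CephPools:
    - name: vms
      pg_num: 128
      application: rbd
    - name: metrics
      pg_num: 32
      application: openstack_gnocchi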

If you do not use the CephPools parameter, director sets the appropriate application type automatically, but only for the default pool list.

You can also create new custom pools through the CephPools parameter. For example, to add a pool called custompool:

parameter_defaults:
  CephPools:
    - name: custompool
      pg_num: 128
      application: rbd

This creates a new custom pool in addition to the default pools.

Tip

For typical pool configurations of common Ceph use cases, see the Ceph Placement Groups (PGs) per Pool Calculator. This calculator is normally used to generate the commands for manually configuring your Ceph pools. In this deployment, the director will configure the pools based on your specifications.

6.4. Mapping the Disk Layout to Non-Homogeneous Ceph Storage Nodes

By default, all nodes of a role that hosts Ceph OSDs (indicated by the OS::TripleO::Services::CephOSD service in roles_data.yaml), for example CephStorage or ComputeHCI nodes, use the global devices and dedicated_devices lists set in Section 6.2, “Mapping the Ceph Storage Node Disk Layout”. This assumes that all of these servers have homogeneous hardware. If a subset of these servers does not have homogeneous hardware, then director needs to be aware that each of these servers has different devices and dedicated_devices lists. This is known as a node-specific disk configuration.

To pass director a node-specific disk configuration, you must pass a Heat environment file, such as node-spec-overrides.yaml, to the openstack overcloud deploy command. The content of the file must identify each server by a machine unique UUID and provide a list of local variables that override the global variables.

The machine unique UUID can be extracted from each individual server or from the Ironic database.

To locate the UUID for an individual server, log in to the server and run:

dmidecode -s system-uuid
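For example, on a hypothetical node whose UUID matches the node-spec-overrides.yaml snippet later in this section:

$ sudo dmidecode -s system-uuid
32E87B4C-C4A7-418E-865B-191684A6883B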

To extract the UUID from the Ironic database, run the following command on the undercloud:

openstack baremetal introspection data save NODE-ID | jq .extra.system.product.uuid

Warning

If inspection_extras = true was not set in undercloud.conf prior to undercloud installation or upgrade and prior to introspection, then the machine unique UUID will not be in the Ironic database.
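A minimal sketch of the relevant undercloud.conf setting (it must be present before the undercloud is installed or upgraded and before introspection runs):

[DEFAULT]
inspection_extras = true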

Important

The machine unique UUID is not the Ironic UUID.

A valid node-spec-overrides.yaml file may look like the following:

parameter_defaults:
  NodeDataLookup: {"32E87B4C-C4A7-418E-865B-191684A6883B": {"devices": ["/dev/sdc"]}}

All lines after the first two lines must be valid JSON. An easy way to verify that the JSON is valid is to use the jq command. For example:

  1. Remove the first two lines (parameter_defaults: and NodeDataLookup:) from the file temporarily.
  2. Run cat node-spec-overrides.yaml | jq .
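For example, if the remaining content is the JSON from the node-spec-overrides.yaml snippet shown earlier, jq pretty-prints it when it is valid:

$ cat node-spec-overrides.yaml | jq .
{
  "32E87B4C-C4A7-418E-865B-191684A6883B": {
    "devices": [
      "/dev/sdc"
    ]
  }
}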

As the node-spec-overrides.yaml file grows, jq may also be used to ensure that the embedded JSON is valid. For example, because the devices and dedicated_devices lists should be the same length, use the following to verify that they are the same length before starting the deployment.

(undercloud) [stack@b08-h02-r620 tht]$ cat node-spec-c05-h17-h21-h25-6048r.yaml | jq '.[] | .devices | length'
33
30
33
(undercloud) [stack@b08-h02-r620 tht]$ cat node-spec-c05-h17-h21-h25-6048r.yaml | jq '.[] | .dedicated_devices | length'
33
30
33
(undercloud) [stack@b08-h02-r620 tht]$

In the above example, the node-spec-c05-h17-h21-h25-6048r.yaml has three servers in rack c05 in which slots h17, h21, and h25 are missing disks. A more complicated example is included at the end of this section.

After the JSON has been validated, add back the two lines that make it a valid environment YAML file (parameter_defaults: and NodeDataLookup:) and include it with a -e option in the deployment command.
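A sketch of such a deployment command, assuming the environment file names used in this chapter (a real deployment typically includes additional environment files):

$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /home/stack/templates/ceph-config.yaml \
  -e /home/stack/templates/node-spec-overrides.yaml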

In the example below, the updated Heat environment file uses NodeDataLookup for a Ceph deployment. All of the servers have a devices list containing 35 disks, except one server that is missing a disk. This environment file overrides the default devices list for only that single node and gives it the list of the 34 disks it should use instead of the global list.

parameter_defaults:
  # c05-h01-6048r is missing scsi-0:2:35:0 (00000000-0000-0000-0000-0CC47A6EFD0C)
  NodeDataLookup: {
    "00000000-0000-0000-0000-0CC47A6EFD0C": {
      "devices": [
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:1:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:32:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:2:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:3:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:4:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:5:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:6:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:33:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:7:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:8:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:34:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:9:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:10:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:11:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:12:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:13:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:14:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:15:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:16:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:17:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:18:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:19:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:20:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:21:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:22:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:23:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:24:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:25:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:26:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:27:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:28:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:29:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:30:0",
    "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:31:0"
        ],
      "dedicated_devices": [
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:81:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1",
    "/dev/disk/by-path/pci-0000:84:00.0-nvme-1"
        ]
      }
    }