Chapter 2. Configuring the Block Storage service (cinder)

The Block Storage service (cinder) provides the administration, security, scheduling, and overall management of all volumes. Volumes are used as the primary form of persistent storage for Compute instances.

For more information about volume backups, see the Block Storage Backup Guide.

Important

You must install host bus adapters (HBAs) on all Controller nodes and Compute nodes in any deployment that uses the Block Storage service (cinder) and a Fibre Channel (FC) back end.

2.1. Block Storage service back ends

Red Hat OpenStack Platform (RHOSP) is deployed using director. Deploying with director helps ensure the correct configuration of each service, including the Block Storage service (cinder) and, by extension, its back end. Director also includes several integrated back-end configurations.

Red Hat OpenStack Platform supports Red Hat Ceph Storage and NFS as Block Storage (cinder) back ends. By default, the Block Storage service uses an LVM back end as a repository for volumes. While this back end is suitable for test environments, LVM is not supported in production environments.

For instructions on how to deploy Red Hat Ceph Storage with RHOSP, see Deploying an Overcloud with Containerized Red Hat Ceph.

For instructions on how to set up NFS storage in the overcloud, see Configuring NFS Storage in the Advanced Overcloud Customization Guide.

You can also configure the Block Storage service to use supported third-party storage appliances. Director includes the necessary components for deploying different back-end solutions.

For a complete list of supported back-end appliances and drivers, see Component, Plug-In, and Driver Support in Red Hat OpenStack Platform. All third-party back-end appliances and drivers have additional deployment guides. Review the appropriate deployment guide to determine if a back-end appliance or driver requires a plugin. For more information about deploying a third-party storage appliance plugin, see Deploying a vendor plugin in the Advanced Overcloud Customization guide.

2.2. High availability of the Block Storage volume service

The Block Storage volume service (cinder-volume) is deployed on Controller nodes in active-passive mode. In this case, Pacemaker maintains the high availability (HA) of this service.

In Distributed Compute Node (DCN) deployments, the Block Storage volume service is deployed on the central site in active-passive mode, where Pacemaker maintains the HA of this service. Deploy the Block Storage volume service only on edge sites that require storage. Because Pacemaker cannot be deployed on edge sites, the Block Storage volume service must be deployed in active-active mode to ensure the HA of this service. The dcn-storage.yaml heat template performs this configuration, but you must maintain this service manually. For more information about maintaining the Block Storage volume service at edge sites that require storage, see Maintenance commands for the Block Storage volume service at edge sites.
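For example, an edge-site deployment stack that enables active-active mode includes this template in the deployment command. This is a sketch that assumes the template ships in the default tripleo-heat-templates environments directory; verify the path for your release:

$ openstack overcloud deploy --templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/dcn-storage.yaml \
-e <other_environment_files>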

Important

If you use multiple storage back ends at an edge site that requires storage, all of the back ends must support active-active mode. If you save data on a back end that does not support active-active mode, you risk data loss.

2.2.1. Maintenance commands for the Block Storage volume service at edge sites

After deploying the Block Storage volume service (cinder-volume) in active-active mode at an edge site that requires storage, you can use the following commands to manage the clusters and their services.

Note

These commands require a Block Storage (cinder) REST API microversion of 3.17 or later.
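For example, you can pass the required microversion directly on the command line with the --os-volume-api-version option of the cinder client:

$ cinder --os-volume-api-version 3.17 cluster-list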

  • See the service listing, including details such as cluster name, host, zone, status, state, disabled reason, and back end state:

    cinder service-list

    Note: The default cluster name for the Red Hat Ceph Storage back end is tripleo@tripleo_ceph.

  • See detailed and summary information about clusters as a whole, as opposed to individual services:

    cinder cluster-list

  • See detailed information about a specific cluster:

    cinder cluster-show <clustered_service>

    Replace <clustered_service> with the name of the clustered service.

  • Enable a disabled clustered service:

    cinder cluster-enable <clustered_service>

  • Disable a clustered service:

    cinder cluster-disable <clustered_service>

2.2.2. Volume manage and unmanage

The unmanage and manage mechanisms facilitate moving volumes from one service using version X to another service using version X+1. Both services remain running during this process.

In API version 3.17 or later, you can see lists of volumes and snapshots that are available for management in Block Storage clusters. To see these lists, use the --cluster argument with cinder manageable-list or cinder snapshot-manageable-list.

In API version 3.16 and later, the cinder manage command also accepts the optional --cluster argument so that you can add previously unmanaged volumes to a Block Storage cluster.
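For example, the following sketch lists manageable volumes in a cluster and then manages one of them. The cluster name assumes the default Red Hat Ceph Storage value, and <identifier> is the back-end-specific reference of the existing volume:

$ cinder --os-volume-api-version 3.17 manageable-list --cluster tripleo@tripleo_ceph
$ cinder --os-volume-api-version 3.16 manage --cluster tripleo@tripleo_ceph <identifier>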

2.2.3. Volume migration on a clustered service

With API version 3.16 and later, the cinder migrate and cinder-manage commands accept the --cluster argument to define the destination for active-active deployments.

When you migrate a volume on a Block Storage clustered service, pass the optional --cluster argument and omit the host positional argument, because the arguments are mutually exclusive.
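For example, the following sketch migrates a volume to a destination cluster, again assuming the default Red Hat Ceph Storage cluster name:

$ cinder --os-volume-api-version 3.16 migrate <volume_id> --cluster tripleo@tripleo_ceph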

2.2.4. Initiating Block Storage service maintenance

All Block Storage volume services perform their own maintenance when they start. In an environment with multiple volume services grouped in a cluster, you can clean up services that are not currently running.

The cinder work-cleanup command triggers server cleanups. The command returns:

  • A list of the services that the command can clean.
  • A list of the services that the command cannot clean because they are not currently running in the cluster.
Note

The work-cleanup command works only on servers running API version 3.24 or later.

Prerequisites

Procedure

  1. Run the following command to verify whether all of the services for a cluster are running:

    $ cinder cluster-list --detailed

    Alternatively, run the cinder cluster-show command.

  2. If any services are not running, run the following command to identify those specific services:

    $ cinder service-list
  3. Run the following command to trigger the server cleanup:

    $ cinder work-cleanup [--cluster <cluster-name>] [--host <hostname>] [--binary <binary>] [--is-up <True|true|False|false>] [--disabled <True|true|False|false>] [--resource-id <resource-id>] [--resource-type <Volume|Snapshot>]
    Note

    Filters, such as --cluster, --host, and --binary, define what the command cleans. You can filter on cluster name, host name, type of service, and resource type, including a specific resource. If you do not apply filtering, the command attempts to clean everything that can be cleaned.

    The following example filters by cluster name:

    $ cinder work-cleanup --cluster tripleo@tripleo_ceph

2.3. Group volume configuration with volume types

With Red Hat OpenStack Platform you can create volume types so that you can apply associated settings to every volume of that type. You can apply the settings during volume creation (see Section 3.1, “Creating Block Storage volumes”) or after you create a volume (see Section 4.5, “Block Storage volume retyping”).

Settings are associated with volume types using key-value pairs called Extra Specs. When you specify a volume type during volume creation, the Block Storage scheduler applies these key-value pairs as settings. You can associate multiple key-value pairs with the same volume type.

Volume types enable you to provide different users with storage tiers. By associating specific performance, resilience, and other settings as key-value pairs with a volume type, you can map tier-specific settings to different volume types. You can then apply tier settings when creating a volume by specifying the corresponding volume type.
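For example, the following sketch creates a volume type from the command line and maps it to a back end with an Extra Spec. The type name high-iops is illustrative, and <backend_name> is the volume_backend_name value of your back end:

$ cinder type-create high-iops
$ cinder type-key high-iops set volume_backend_name=<backend_name>
$ cinder extra-specs-list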

2.3.1. Listing back-end driver capabilities

Available and supported Extra Specs vary per back-end driver. Consult the driver documentation for a list of valid Extra Specs.

Alternatively, you can query the Block Storage host directly to determine which well-defined standard Extra Specs are supported by its driver. Start by logging in (through the command line) to the node hosting the Block Storage service.

Prerequisites

Procedure

# cinder service-list

This command returns a list that contains the host of each Block Storage service (cinder-backup, cinder-scheduler, and cinder-volume). For example:

+------------------+---------------------------+------+---------
|      Binary      |            Host           | Zone |  Status ...
+------------------+---------------------------+------+---------
|  cinder-backup   |   localhost.localdomain   | nova | enabled ...
| cinder-scheduler |   localhost.localdomain   | nova | enabled ...
|  cinder-volume   | localhost.localdomain@lvm | nova | enabled ...
+------------------+---------------------------+------+---------

To display the driver capabilities (and, in turn, determine the supported Extra Specs) of a Block Storage service, run:

# cinder get-capabilities <volume_service_host>

Where <volume_service_host> is the complete name of the cinder-volume host. For example:

# cinder get-capabilities localhost.localdomain@lvm
    +---------------------+-----------------------------------------+
    |     Volume stats    |                        Value            |
    +---------------------+-----------------------------------------+
    |     description     |                         None            |
    |     display_name    |                         None            |
    |    driver_version   |                        3.0.0            |
    |      namespace      | OS::Storage::Capabilities::localhost.loc...
    |      pool_name      |                         None            |
    |   storage_protocol  |                        iSCSI            |
    |     vendor_name     |                     Open Source         |
    |      visibility     |                         None            |
    | volume_backend_name |                         lvm             |
    +---------------------+-----------------------------------------+
    +--------------------+------------------------------------------+
    | Backend properties |                        Value             |
    +--------------------+------------------------------------------+
    |    compression     |      {u'type': u'boolean', u'description'...
    |        qos         |              {u'type': u'boolean', u'des ...
    |    replication     |      {u'type': u'boolean', u'description'...
    | thin_provisioning  | {u'type': u'boolean', u'description': u'S...
    +--------------------+------------------------------------------+

The Backend properties column shows a list of Extra Spec Keys that you can set, while the Value column provides information on valid corresponding values.


2.3.2. Creating and configuring a volume type

Create volume types so that you can apply associated settings to the volume type.

Note

If the Block Storage service (cinder) is configured to use multiple back ends, you must create a volume type for each back end.

Volume types enable you to provide different users with storage tiers. By associating specific performance, resilience, and other settings as key-value pairs with a volume type, you can map tier-specific settings to different volume types. You can then apply tier settings when creating a volume by specifying the corresponding volume type.

Prerequisites

Procedure

  1. As an admin user in the dashboard, select Admin > Volumes > Volume Types.
  2. Click Create Volume Type.
  3. Enter the volume type name in the Name field.
  4. Click Create Volume Type. The new type appears in the Volume Types table.
  5. Select the volume type’s View Extra Specs action.
  6. Click Create and specify the Key and Value. The key-value pair must be valid; otherwise, specifying the volume type during volume creation will result in an error. For instance, to specify a back end for this volume type, add the volume_backend_name Key and set the Value to the name of the required back end.
  7. Click Create. The associated setting (key-value pair) now appears in the Extra Specs table.

By default, all volume types are accessible to all OpenStack projects. If you need to create volume types with restricted access, you must do so through the CLI. For instructions, see Section 2.3.4, “Creating and configuring private volume types”.

Note

You can also associate a QoS Spec to the volume type. For more information, see Section 2.6.3, “Associating a Quality-of-Service specification with a volume type”.

2.3.3. Editing a volume type

Edit a volume type in the Dashboard to modify the Extra Specs configuration of the volume type.

Prerequisites

Procedure

  1. As an admin user in the dashboard, select Admin > Volumes > Volume Types.
  2. In the Volume Types table, select the volume type’s View Extra Specs action.
  3. On the Extra Specs table of this page, you can:

    • Add a new setting to the volume type. To do this, click Create and specify the key/value pair of the new setting you want to associate to the volume type.
    • Edit an existing setting associated with the volume type by selecting the setting’s Edit action.
    • Delete existing settings associated with the volume type by selecting the check boxes of the extra specs and clicking Delete Extra Specs in this dialog and in the confirmation dialog.

To delete a volume type, select its corresponding check boxes from the Volume Types table and click Delete Volume Types.

2.3.4. Creating and configuring private volume types

By default, all volume types are available to all projects. You can create a restricted volume type by marking it private. To do so, set the type’s is-public flag to false.

Private volume types are useful for restricting access to volumes with certain attributes. Typically, these are settings that should only be usable by specific projects; examples include new back ends or ultra-high performance configurations that are being tested.

Prerequisites

Procedure

$ cinder type-create --is-public false <TYPE-NAME>

By default, private volume types are only accessible to their creators. However, admin users can find and view private volume types using the following command:

$ cinder type-list

This command lists both public and private volume types, and it also includes the name and ID of each one. You need the volume type’s ID to provide access to it.

Access to a private volume type is granted at the project level. To grant a project access to a private volume type, run:

$ cinder type-access-add --volume-type <TYPE-ID> --project-id <TENANT-ID>

To view which projects have access to a private volume type, run:

$ cinder type-access-list --volume-type <TYPE-ID>

To remove a project from the access list of a private volume type, run:

$ cinder type-access-remove --volume-type <TYPE-ID> --project-id <TENANT-ID>
Note

By default, only users with administrative privileges can create, view, or configure access for private volume types.

2.4. Creating and configuring an internal project for the Block Storage service (cinder)

Some Block Storage features (for example, the Image-Volume cache) require the configuration of an internal tenant. The Block Storage service uses this tenant/project to manage block storage items that do not necessarily need to be exposed to normal users. Examples of such items are images cached for frequent volume cloning or temporary copies of volumes being migrated.

Prerequisites

Procedure

  1. To configure an internal project, first create a generic project and user, both named cinder-internal. To do so, log in to the Controller node and run:
# openstack project create --enable --description "Block Storage Internal Project" cinder-internal
    +-------------+----------------------------------+
    |   Property  |              Value               |
    +-------------+----------------------------------+
    | description |  Block Storage Internal Project  |
    |   enabled   |               True               |
    |      id     | cb91e1fe446a45628bb2b139d7dccaef |
    |     name    |         cinder-internal          |
    +-------------+----------------------------------+
# openstack user create --project cinder-internal cinder-internal
    +----------+----------------------------------+
    | Property |              Value               |
    +----------+----------------------------------+
    |  email   |               None               |
    | enabled  |               True               |
    |    id    | 84e9672c64f041d6bfa7a930f558d946 |
    |   name   |         cinder-internal          |
    |project_id| cb91e1fe446a45628bb2b139d7dccaef |
    | username |         cinder-internal          |
    +----------+----------------------------------+
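You can retrieve the IDs of this project and user later, for example when you configure the Image-Volume cache, by running:

# openstack project show cinder-internal -f value -c id
# openstack user show cinder-internal -f value -c id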

The procedure for adding ExtraConfig options uses this internal project. For more information, see Section 2.5, “Configuring the image-volume cache”.

2.5. Configuring the image-volume cache

The Block Storage service features an optional Image-Volume cache which can be used when creating volumes from images. This cache is designed to improve the speed of volume creation from frequently-used images. For information on how to create volumes from images, see Section 3.1, “Creating Block Storage volumes”.

When enabled, the Image-Volume cache stores a copy of an image the first time a volume is created from it. This stored image is cached locally to the Block Storage back end to help improve performance the next time the image is used to create a volume. The Image-Volume cache’s limit can be set to a size (in GB), number of images, or both.

The Image-Volume cache is supported by several back ends. If you are using a third-party back end, refer to its documentation for information on Image-Volume cache support.

Note

The Image-Volume cache requires that an internal tenant be configured for the Block Storage service. For instructions, see Section 2.4, “Creating and configuring an internal project for the Block Storage service (cinder)”.

Prerequisites

Procedure

To enable and configure the Image-Volume cache on a back end (BACKEND), add the values to an ExtraConfig section of an environment file on the undercloud. For example:

parameter_defaults:
  ExtraConfig:
    cinder::config::cinder_config:
      DEFAULT/cinder_internal_tenant_project_id:
        value: TENANTID
      DEFAULT/cinder_internal_tenant_user_id:
        value: USERID
      BACKEND/image_volume_cache_enabled:
        value: True
      BACKEND/image_volume_cache_max_size_gb:
        value: MAXSIZE
      BACKEND/image_volume_cache_max_count:
        value: MAXNUMBER

Replace BACKEND with the name of the target back end (specifically, its volume_backend_name value).

By default, the Image-Volume cache size is only limited by the back end. Change MAXSIZE to a number in GB.

You can also set a maximum number of images using MAXNUMBER.

The Block Storage service database uses a time stamp to track when each cached image was last used to create a volume. If either or both MAXSIZE and MAXNUMBER are set, the Block Storage service deletes cached images as needed to make way for new ones. Cached images with the oldest time stamp are deleted first whenever the Image-Volume cache limits are met.

After you create the environment file in /home/stack/templates/, log in as the stack user and deploy the configuration by running:

$ openstack overcloud deploy --templates \
-e /home/stack/templates/<ENV_FILE>.yaml

Where <ENV_FILE>.yaml is the name of the file with the ExtraConfig settings added earlier.

Important

If you passed any extra environment files when you created the overcloud, pass them again here using the -e option to avoid making undesired changes to the overcloud.

For more information about the openstack overcloud deploy command, see Deployment command in Director Installation and Usage.

2.6. Block Storage service (cinder) Quality-of-Service

You can map multiple performance settings to a single Quality-of-Service specification (QOS Specs). Doing so allows you to provide performance tiers for different user types.

Performance settings are mapped as key-value pairs to QOS Specs, similar to the way volume settings are associated to a volume type. However, QOS Specs are different from volume types in the following respects:

  • QOS Specs are used to apply performance settings, which include limiting read/write operations to disks. Available and supported performance settings vary per storage driver.

    To determine which QOS Specs are supported by your back end, consult the documentation of your back end device’s volume driver.

  • Volume types are directly applied to volumes, whereas QOS Specs are not. Rather, QOS Specs are associated to volume types. During volume creation, specifying a volume type also applies the performance settings mapped to the volume type’s associated QOS Specs.

You can define performance limits for volumes on a per-volume basis using basic volume QOS values. The Block Storage service supports the following options:

  • read_iops_sec
  • write_iops_sec
  • total_iops_sec
  • read_bytes_sec
  • write_bytes_sec
  • total_bytes_sec
  • read_iops_sec_max
  • write_iops_sec_max
  • total_iops_sec_max
  • read_bytes_sec_max
  • write_bytes_sec_max
  • total_bytes_sec_max
  • size_iops_sec

2.6.1. Creating and configuring a Quality-of-Service specification

As an administrator, you can create and configure a QOS Spec through the QOS Specs table. You can associate more than one key/value pair to the same QOS Spec.

Prerequisites

Procedure

  1. As an admin user in the dashboard, select Admin > Volumes > Volume Types.
  2. On the QOS Specs table, click Create QOS Spec.
  3. Enter a name for the QOS Spec.
  4. In the Consumer field, specify where the QOS policy should be enforced:

    Table 2.1. Consumer Types

    Type        Description

    back-end    QOS policy will be applied to the Block Storage back end.
    front-end   QOS policy will be applied to Compute.
    both        QOS policy will be applied to both Block Storage and Compute.

  5. Click Create. The new QOS Spec should now appear in the QOS Specs table.
  6. In the QOS Specs table, select the new spec’s Manage Specs action.
  7. Click Create, and specify the Key and Value. The key-value pair must be valid; otherwise, specifying a volume type associated with this QOS Spec during volume creation will fail.

    For example, to set read limit IOPS to 500, use the following Key/Value pair:

    read_iops_sec=500
  8. Click Create. The associated setting (key-value pair) now appears in the Key-Value Pairs table.
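Alternatively, you can create and populate the same QOS Spec from the command line. The following sketch uses an illustrative spec name, read-limit, and sets the same read limit as the dashboard example:

$ cinder qos-create read-limit consumer="front-end" read_iops_sec=500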

2.6.2. Setting capacity-derived Quality-of-Service limits

You can use volume types to implement capacity-derived Quality-of-Service (QoS) limits on volumes, which set a deterministic IOPS throughput based on the size of provisioned volumes. This simplifies how you provide storage resources to users: each user receives a pre-determined and highly predictable throughput rate based on the size of the volume they provision.

In particular, the Block Storage service allows you to set how many IOPS to allocate to a volume based on the actual provisioned size. This throughput is set on an IOPS-per-GB basis through the following QoS keys:

read_iops_sec_per_gb
write_iops_sec_per_gb
total_iops_sec_per_gb

These keys allow you to set read, write, or total IOPS to scale with the size of provisioned volumes. For example, if the volume type uses read_iops_sec_per_gb=500, then a provisioned 3GB volume would automatically have a read IOPS of 1500.

Capacity-derived QoS limits are set per volume type and are configured like any normal QoS spec. In addition, these limits are supported by the underlying Block Storage service directly and are not dependent on any particular driver.

For more information about volume types, see Section 2.3, “Group volume configuration with volume types” and Section 2.3.2, “Creating and configuring a volume type”. For instructions on how to set QoS specs, see Section 2.6, “Block Storage service (cinder) Quality-of-Service”.
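For example, the following sketch creates a capacity-derived QoS Spec and associates it with an existing volume type; the spec name capacity-qos is illustrative:

$ cinder qos-create capacity-qos consumer="front-end" read_iops_sec_per_gb=500
$ cinder qos-associate <qos_spec_id> <volume_type_id>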

Warning

When you apply a volume type with capacity-derived QoS limits to an attached volume, or perform a volume re-type, the limits are not applied. The limits are applied only after you detach the volume from its instance.

See Section 4.5, “Block Storage volume retyping” for information about volume re-typing.

2.6.3. Associating a Quality-of-Service specification with a volume type

As an administrator, you can associate a QOS Spec to an existing volume type using the Volume Types table.

Prerequisites

Procedure

  1. As an administrator in the dashboard, select Admin > Volumes > Volume Types.
  2. In the Volume Types table, select the type’s Manage QOS Spec Association action.
  3. Select a QOS Spec from the QOS Spec to be associated list. To disassociate a QOS specification from an existing volume type, select None.
  4. Click Associate. The selected QOS Spec now appears in the Associated QOS Spec column of the edited volume type.
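Alternatively, you can manage the association from the command line. Replace <qos_spec_id> and <volume_type_id> with the IDs of your QOS Spec and volume type:

$ cinder qos-list
$ cinder qos-associate <qos_spec_id> <volume_type_id>
$ cinder qos-disassociate <qos_spec_id> <volume_type_id>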

2.7. Block Storage service (cinder) volume encryption

Volume encryption helps provide basic data protection in case the volume back end is either compromised or stolen. The Compute and Block Storage services are integrated so that instances can read, access, and use encrypted volumes. You must deploy the Key Management service (barbican) to use volume encryption.

Important
  • Volume encryption is not supported on file-based volumes (such as NFS).
  • Volume encryption only supports LUKS1 and not LUKS2.
  • Retyping an unencrypted volume to an encrypted volume of the same size is not supported, because encrypted volumes require additional space to store encryption data. For more information about encrypting unencrypted volumes, see Encrypting unencrypted volumes.

Volume encryption is applied through volume type. See Section 2.7.2, “Configuring Block Storage service volume encryption with the CLI” for information on encrypted volume types.

2.7.1. Configuring Block Storage service volume encryption with the Dashboard

To create encrypted volumes, you first need an encrypted volume type. Encrypting a volume type involves setting the provider class, cipher, and key size that it uses.

Prerequisites

Procedure

  1. As an admin user in the dashboard, select Admin > Volumes > Volume Types.
  2. In the Actions column of the volume type that you want to encrypt, select Create Encryption to launch the Create Volume Type Encryption wizard.
  3. From there, configure the Provider, Control Location, Cipher, and Key Size settings of the volume type’s encryption. The Description column describes each setting.

    Important

    The values listed below are the only supported options for Provider, Cipher, and Key Size.

    1. Enter luks for Provider.
    2. Enter aes-xts-plain64 for Cipher.
    3. Enter 256 for Key Size.
  4. Click Create Volume Type Encryption.

Once you have an encrypted volume type, you can invoke it to automatically create encrypted volumes. For more information on creating a volume type, see Section 2.3.2, “Creating and configuring a volume type”. Specifically, select the encrypted volume type from the Type drop-down list in the Create Volume window.

To configure an encrypted volume type through the CLI, see Section 2.7.2, “Configuring Block Storage service volume encryption with the CLI”.

You can also re-configure the encryption settings of an encrypted volume type. To do so, select Update Encryption from the Actions column of the volume type to launch the Update Volume Type Encryption wizard.

In Project > Compute > Volumes, check the Encrypted column in the Volumes table to determine whether a volume is encrypted. If the volume is encrypted, click Yes in that column to view its encryption settings.

2.7.2. Configuring Block Storage service volume encryption with the CLI

To create encrypted volumes, you first need an encrypted volume type. Encrypting a volume type involves setting the provider class, cipher, and key size that it uses.

Prerequisites

Procedure

  1. Create a volume type:

    $ cinder type-create encrypt-type
  2. Configure the cipher, key size, control location, and provider settings:

    $ cinder encryption-type-create --cipher aes-xts-plain64 --key-size 256 --control-location front-end encrypt-type luks
  3. Create an encrypted volume:

    $ cinder --debug create 1 --volume-type encrypt-type --name DemoEncVol
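To confirm that the new volume is encrypted, view the volume details; the encrypted field in the output is True for an encrypted volume:

$ cinder show DemoEncVol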

For more information, see the Manage secrets with the OpenStack Key Manager guide.

2.7.3. Automatic deletion of volume image encryption key

The Block Storage service (cinder) creates an encryption key in the Key Management service (barbican) when it uploads an encrypted volume to the Image service (glance). This creates a 1:1 relationship between an encryption key and a stored image.

Encryption key deletion prevents unlimited resource consumption of the Key Management service. The Block Storage, Key Management, and Image services automatically manage the key for an encrypted volume, including the deletion of the key.

The Block Storage service automatically adds two properties to a volume image:

  • cinder_encryption_key_id - The identifier of the encryption key that the Key Management service stores for a specific image.
  • cinder_encryption_key_deletion_policy - The policy that determines whether the Image service instructs the Key Management service to delete the key associated with this image.
Important

The values of these properties are automatically assigned. To avoid unintentional data loss, do not adjust these values.

When you create a volume image, the Block Storage service sets the cinder_encryption_key_deletion_policy property to on_image_deletion. When you delete a volume image, the Image service deletes the corresponding encryption key if the cinder_encryption_key_deletion_policy equals on_image_deletion.
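For example, you can view these properties on a volume image from the command line; replace <image_name> with the name or ID of your volume image:

$ openstack image show <image_name> -f value -c properties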

Important

Red Hat does not recommend manual manipulation of the cinder_encryption_key_id or cinder_encryption_key_deletion_policy properties. If you use the encryption key that is identified by the value of cinder_encryption_key_id for any other purpose, you risk data loss.

2.8. Deploying availability zones for Block Storage volume back ends

An availability zone is a provider-specific method of grouping cloud instances and services. Director uses CinderXXXAvailabilityZone parameters (where XXX is associated with a specific back end) to configure different availability zones for Block Storage volume back ends.

Prerequisites

Procedure

  1. Add the following parameters to the environment file to create two availability zones:

    parameter_defaults:
     CinderXXXAvailabilityZone: zone1
     CinderYYYAvailabilityZone: zone2

    Replace XXX and YYY with supported back-end values, such as:

    CinderISCSIAvailabilityZone
    CinderNfsAvailabilityZone
    CinderRbdAvailabilityZone
    Note

    Search the /usr/share/openstack-tripleo-heat-templates/deployment/cinder/ directory for the heat template associated with your back end for the correct back end value.

    The following example deploys two back ends where rbd is zone 1 and iSCSI is zone 2:

    parameter_defaults:
     CinderRbdAvailabilityZone: zone1
     CinderISCSIAvailabilityZone: zone2
  2. Deploy the overcloud and include the updated environment file.
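After deployment, you can verify the availability zones and create a volume in a specific zone. The following sketch assumes the zone names from the previous example:

$ cinder availability-zone-list
$ cinder create 1 --availability-zone zone1 --name <volume_name>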

2.9. Block Storage service (cinder) consistency groups

You can use the Block Storage (cinder) service to set consistency groups to group multiple volumes together as a single entity. This means that you can perform operations on multiple volumes at the same time instead of individually. You can use consistency groups to create snapshots for multiple volumes simultaneously. This also means that you can restore or clone those volumes simultaneously.

A volume can be a member of multiple consistency groups. However, you cannot delete, retype, or migrate volumes after you add them to a consistency group.

2.9.1. Configuring Block Storage service consistency groups

By default, Block Storage security policy disables the consistency groups API. You must enable the API before you use the feature. The related consistency group entries in the /etc/cinder/policy.json file of the node that hosts the Block Storage API service (openstack-cinder-api) list the default settings:

"consistencygroup:create" : "group:nobody",
​"consistencygroup:delete": "group:nobody",
​"consistencygroup:update": "group:nobody",
​"consistencygroup:get": "group:nobody",
​"consistencygroup:get_all": "group:nobody",
​"consistencygroup:create_cgsnapshot" : "group:nobody",
​"consistencygroup:delete_cgsnapshot": "group:nobody",
​"consistencygroup:get_cgsnapshot": "group:nobody",
​"consistencygroup:get_all_cgsnapshots": "group:nobody",

You must change these settings in an environment file and then deploy them to the overcloud by using the openstack overcloud deploy command. Do not edit the JSON file directly because the changes are overwritten next time the overcloud is deployed.

Prerequisites

Procedure

  1. Edit an environment file and add a new entry to the parameter_defaults section. This ensures that the entries are updated in the containers and are retained whenever the environment is re-deployed by director with the openstack overcloud deploy command.
  2. Add a new section to an environment file using CinderApiPolicies to set the consistency group settings. The equivalent parameter_defaults section with the default settings from the JSON file appears as follows:

    parameter_defaults:
      CinderApiPolicies: { \
         cinder-consistencygroup_create: { key: 'consistencygroup:create', value: 'group:nobody' }, \
         cinder-consistencygroup_delete: { key: 'consistencygroup:delete', value: 'group:nobody' },  \
         cinder-consistencygroup_update: { key: 'consistencygroup:update', value: 'group:nobody' }, \
         cinder-consistencygroup_get: { key: 'consistencygroup:get', value: 'group:nobody' }, \
         cinder-consistencygroup_get_all: { key: 'consistencygroup:get_all', value: 'group:nobody' }, \
         cinder-consistencygroup_create_cgsnapshot: { key: 'consistencygroup:create_cgsnapshot', value: 'group:nobody' }, \
         cinder-consistencygroup_delete_cgsnapshot: { key: 'consistencygroup:delete_cgsnapshot', value: 'group:nobody' }, \
         cinder-consistencygroup_get_cgsnapshot: { key: 'consistencygroup:get_cgsnapshot', value: 'group:nobody' }, \
         cinder-consistencygroup_get_all_cgsnapshots: { key: 'consistencygroup:get_all_cgsnapshots', value: 'group:nobody' }, \
     }
  3. The value 'group:nobody' determines that no group can use this feature, so it is effectively disabled. To enable it, change the group to another value.
  4. For increased security, set the permissions for both the consistency group API and the volume type management API to be identical. The volume type management API is set to "rule:admin_or_owner" by default in the same /etc/cinder/policy.json file:

    "volume_extension:types_manage": "rule:admin_or_owner",
  5. To make the consistency groups feature available to all users, set the API policy entries to allow users to create, use, and manage their own consistency groups. To do so, use rule:admin_or_owner:

    CinderApiPolicies: { \
         cinder-consistencygroup_create: { key: 'consistencygroup:create', value: 'rule:admin_or_owner' }, \
         cinder-consistencygroup_delete: { key: 'consistencygroup:delete', value: 'rule:admin_or_owner' },  \
         cinder-consistencygroup_update: { key: 'consistencygroup:update', value: 'rule:admin_or_owner' }, \
         cinder-consistencygroup_get: { key: 'consistencygroup:get', value: 'rule:admin_or_owner' }, \
         cinder-consistencygroup_get_all: { key: 'consistencygroup:get_all', value: 'rule:admin_or_owner' }, \
         cinder-consistencygroup_create_cgsnapshot: { key: 'consistencygroup:create_cgsnapshot', value: 'rule:admin_or_owner' }, \
         cinder-consistencygroup_delete_cgsnapshot: { key: 'consistencygroup:delete_cgsnapshot', value: 'rule:admin_or_owner' }, \
         cinder-consistencygroup_get_cgsnapshot: { key: 'consistencygroup:get_cgsnapshot', value: 'rule:admin_or_owner' }, \
         cinder-consistencygroup_get_all_cgsnapshots: { key: 'consistencygroup:get_all_cgsnapshots', value: 'rule:admin_or_owner' }, \
     }
  6. When you have created the environment file in /home/stack/templates/, log in as the stack user and deploy the configuration:

    $ openstack overcloud deploy --templates \
    -e /home/stack/templates/<ENV_FILE>.yaml

    Replace <ENV_FILE> with the name of the environment file that contains the CinderApiPolicies settings that you added.

    Important

    If you passed any extra environment files when you created the overcloud, pass them again here by using the -e option to avoid making undesired changes to the overcloud.

For more information about the openstack overcloud deploy command, see Deployment Command in the Director Installation and Usage guide.

2.9.2. Creating Block Storage consistency groups with the Dashboard

After you enable the consistency groups API, you can start creating consistency groups.

Prerequisites

Procedure

  1. As an admin user in the dashboard, select Project > Compute > Volumes > Volume Consistency Groups.
  2. Click Create Consistency Group.
  3. In the Consistency Group Information tab of the wizard, enter a name and description for your consistency group. Then, specify its Availability Zone.
  4. You can also add volume types to your consistency group. When you create volumes within the consistency group, the Block Storage service applies compatible settings from those volume types. To add a volume type, click its + button from the All available volume types list.
  5. Click Create Consistency Group. The new consistency group appears in the Volume Consistency Groups table.

2.9.3. Managing Block Storage service consistency groups with the Dashboard

Use the Red Hat OpenStack Platform (RHOSP) Dashboard to manage consistency groups for Block Storage volumes.

Prerequisites

Procedure

  1. Optional: You can change the name or description of a consistency group by selecting Edit Consistency Group from its Action column.
  2. To add or remove volumes from a consistency group directly, as an admin user in the dashboard, select Project > Compute > Volumes > Volume Consistency Groups.
  3. Find the consistency group you want to configure. In the Actions column of that consistency group, select Manage Volumes. This launches the Add/Remove Consistency Group Volumes wizard.

    1. To add a volume to the consistency group, click its + button from the All available volumes list.
    2. To remove a volume from the consistency group, click its - button from the Selected volumes list.
  4. Click Edit Consistency Group.

2.9.4. Creating and managing consistency group snapshots for the Block Storage service

After you add volumes to a consistency group, you can create snapshots from it.

Prerequisites

Procedure

  1. Log in as admin user from the command line on the node that hosts the openstack-cinder-api and enter:

    # export OS_VOLUME_API_VERSION=2

    This configures the client to use version 2 of the openstack-cinder-api.

  2. List all available consistency groups and their respective IDs:

    # cinder consisgroup-list
  3. Create snapshots using the consistency group:

    # cinder cgsnapshot-create --name <CGSNAPNAME> --description "<DESCRIPTION>" <CGNAMEID>

    Replace:

    • <CGSNAPNAME> with the name of the snapshot (optional).
    • <DESCRIPTION> with a description of the snapshot (optional).
    • <CGNAMEID> with the name or ID of the consistency group.
  4. Display a list of all available consistency group snapshots:

    # cinder cgsnapshot-list

2.9.5. Cloning Block Storage service consistency groups

You can also use consistency groups to create a whole batch of pre-configured volumes simultaneously. You can do this by cloning an existing consistency group or restoring a consistency group snapshot. Both processes use the same command.

Prerequisites

Procedure

  1. To clone an existing consistency group:

    # cinder consisgroup-create-from-src --source-cg <CGNAMEID> --name <CGNAME> --description "<DESCRIPTION>"

    Replace:

    • <CGNAMEID> with the name or ID of the consistency group that you want to clone.
    • <CGNAME> with the name of your consistency group (optional).
    • <DESCRIPTION> with a description of your consistency group (optional).
  2. To create a consistency group from a consistency group snapshot:

    # cinder consisgroup-create-from-src --cgsnapshot <CGSNAPNAME> --name <CGNAME> --description "<DESCRIPTION>"

    Replace <CGSNAPNAME> with the name or ID of the snapshot you are using to create the consistency group.

2.10. Specifying back ends for volume creation

Whenever multiple Block Storage (cinder) back ends are configured, you must also create a volume type for each back end. You can then use the type to specify which back end to use for a created volume. For more information about volume types, see Section 2.3, “Group volume configuration with volume types”.

To specify a back end when creating a volume, select its corresponding volume type from the Type list (see Section 3.1, “Creating Block Storage volumes”).

If you do not specify a back end during volume creation, the Block Storage service automatically chooses one for you. By default, the service chooses the back end with the most available free space. You can also configure the Block Storage service to choose randomly among all available back ends instead. For more information, see Section 3.5, “Allocating volumes to multiple back ends”.
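For example, from the command line, you can pass the volume type when you create a volume; the size and names here are illustrative:

$ cinder create 10 --volume-type <volume_type> --name <volume_name>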

2.11. Enabling LVM2 filtering on overcloud nodes

If you use LVM2 (Logical Volume Management) volumes with certain Block Storage service (cinder) back ends, the volumes that you create inside Red Hat OpenStack Platform (RHOSP) guests might become visible on the overcloud nodes that host cinder-volume or nova-compute containers. In this case, the LVM2 tools on the host scan the LVM2 volumes that the OpenStack guest creates, which can result in one or more of the following problems on Compute or Controller nodes:

  • LVM appears to see volume groups from guests
  • LVM reports duplicate volume group names
  • Volume detachments fail because LVM is accessing the storage
  • Guests fail to boot due to problems with LVM
  • The LVM on the guest machine is in a partial state due to a missing disk that actually exists
  • Block Storage service (cinder) actions fail on devices that have LVM
  • Block Storage service (cinder) snapshots fail to remove correctly
  • Errors during live migration: /etc/multipath.conf does not exist

To prevent this erroneous scanning, and to segregate guest LVM2 volumes from the host node, you can enable and configure a filter with the LVMFilterEnabled heat parameter when you deploy or update the overcloud. This filter is computed from the list of physical devices that host active LVM2 volumes. You can also allow and deny block devices explicitly with the LVMFilterAllowlist and LVMFilterDenylist parameters. You can apply this filtering globally, to specific node roles, or to specific devices.

Prerequisites

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the undercloud credentials file:

    $ source ~/stackrc
  3. Create a new environment file, or modify an existing environment file. In this example, create a new file lvm2-filtering.yaml:

    $ touch ~/lvm2-filtering.yaml
  4. Include the following parameter in the environment file:

    parameter_defaults:
      LVMFilterEnabled: true

    You can further customize the implementation of the LVM2 filter. For example, to enable filtering only on Compute nodes, use the following configuration:

    parameter_defaults:
      ComputeParameters:
        LVMFilterEnabled: true

    These parameters also support regular expressions. To enable filtering only on Compute nodes, and to ignore all devices that start with /dev/sd, use the following configuration:

    parameter_defaults:
      ComputeParameters:
        LVMFilterEnabled: true
        LVMFilterDenylist:
          - /dev/sd.*
  5. Run the openstack overcloud deploy command and include the environment file that contains the LVM2 filtering configuration, as well as any other environment files that are relevant to your overcloud deployment:

    $ openstack overcloud deploy --templates \
    <environment-files> \
    -e lvm2-filtering.yaml

2.12. Multipath configuration

Use multipath to configure multiple I/O paths between server nodes and storage arrays into a single device to create redundancy and improve performance.

2.12.1. Using director to configure multipath

You can configure multipath on a Red Hat OpenStack Platform (RHOSP) overcloud deployment for greater bandwidth and networking resilience.

Important

When you configure multipath on an existing deployment, the new workloads are multipath aware. If you have any pre-existing workloads, you must shelve and unshelve the instances to enable multipath on these instances.

Prerequisites

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc credentials file:

    $ source ~/stackrc
  3. Use an overrides environment file or create a new one, for example multipath_overrides.yaml. Add and set the following parameter:

    parameter_defaults:
      ExtraConfig:
        cinder::config::cinder_config:
          backend_defaults/use_multipath_for_image_xfer:
            value: true
    Note

    The default settings will generate a basic multipath configuration that works for most environments. However, check with your storage vendor for recommendations, because some vendors have optimized configurations that are specific to their hardware. For more information about multipath, see the Configuring device mapper multipath guide.

  4. Optional: If you have a multipath configuration file for your overcloud deployment, use the MultipathdCustomConfigFile parameter to specify the location of this file:

    1. You must copy your multipath configuration file to the /var/lib/mistral directory:

           $ sudo cp <config_file_name> /var/lib/mistral

      Replace <config_file_name> with the name of your file.

    2. Set the MultipathdCustomConfigFile parameter to this location of your multipath configuration file:

           parameter_defaults:
                MultipathdCustomConfigFile: /var/lib/mistral/<config_file_name>
      Note

      Other TripleO multipath parameters override any corresponding value in the local custom configuration file. For example, if MultipathdEnableUserFriendlyNames is False, the files on the overcloud nodes are updated to match, even if the setting is enabled in the local custom file.

      For more information about multipath parameters, see Multipath heat template parameters.

  5. Include the environment file in the openstack overcloud deploy command with any other environment files that are relevant to your environment:

    $ openstack overcloud deploy \
    --templates \
    …
    -e <existing_overcloud_environment_files> \
    -e /usr/share/openstack-tripleo-heat-templates/environments/multipathd.yaml \
    -e multipath_overrides.yaml \
    …

2.12.1.1. Multipath heat template parameters

The following parameters enable and configure multipath.

MultipathdEnable

    Defines whether to enable the multipath daemon. This parameter defaults to True through the configuration contained in the multipathd.yaml file.

    Default value: True

MultipathdEnableUserFriendlyNames

    Defines whether to enable the assignment of a user friendly name to each path.

    Default value: False

MultipathdEnableFindMultipaths

    Defines whether to automatically create a multipath device for each path.

    Default value: True

MultipathdSkipKpartx

    Defines whether to skip automatically creating partitions on the device.

    Default value: True

MultipathdCustomConfigFile

    Includes a local, custom multipath configuration file on the overcloud nodes. By default, a minimal multipath.conf file is installed.

    NOTE: Other TripleO multipath parameters override any corresponding value in any local, custom configuration file that you add. For example, if MultipathdEnableUserFriendlyNames is False, the files on the overcloud nodes are updated to match, even if the setting is enabled in your local, custom file.

    Default value: none

2.12.2. Verifying multipath configuration

This procedure describes how to verify multipath configuration on new or existing overcloud deployments.

Prerequisites

Procedure

  1. Create a VM.
  2. Attach a non-encrypted volume to the VM.
  3. Get the name of the Compute node that contains the instance:

    $ nova show INSTANCE | grep OS-EXT-SRV-ATTR:host

    Replace INSTANCE with the name of the VM that you booted.

  4. Retrieve the virsh name of the instance:

    $ nova show INSTANCE | grep instance_name

    Replace INSTANCE with the name of the VM that you booted.

  5. Get the IP address of the Compute node:

    $ . stackrc
    $ nova list | grep compute_name

    Replace compute_name with the name from the output of the nova show INSTANCE command.

  6. SSH into the Compute node that runs the VM:

    $ ssh heat-admin@COMPUTE_NODE_IP

    Replace COMPUTE_NODE_IP with the IP address of the Compute node.

  7. Log in to the container that runs virsh:

    $ podman exec -it nova_libvirt /bin/bash
  8. Enter the following command to verify that the instance is using multipath in the cinder volume host location:

    virsh domblklist VIRSH_INSTANCE_NAME | grep /dev/dm

    Replace VIRSH_INSTANCE_NAME with the output of the nova show INSTANCE | grep instance_name command.

    If the instance shows a value other than /dev/dm-, the connection is non-multipath and you must refresh the connection info with the nova shelve and nova unshelve commands:

    $ nova shelve <instance>
    $ nova unshelve <instance>
    Note

    If you have more than one type of back end, you must verify the instances and volumes on all back ends, because connection info that each back end returns might vary.