Chapter 18. Storage configuration

This chapter outlines several methods that you can use to configure storage options for your overcloud.

Important

The overcloud uses local and LVM storage for the default storage options. Because these options are not supported for enterprise-level overclouds, you must configure your overcloud to use one of the storage options detailed in this chapter.

18.1. Configuring NFS storage

To configure the overcloud to use an NFS share, create a custom environment file.

Important

Red Hat recommends that you use a certified storage back end and driver. Red Hat does not recommend that you use NFS that comes from the generic NFS back end, because its capabilities are limited when compared to a certified storage back end and driver. For example, the generic NFS back end does not support features such as volume encryption and volume multi-attach. For information about supported drivers, see the Red Hat Ecosystem Catalog.

Note

There are several director heat parameters that control whether an NFS back end or a NetApp NFS Block Storage back end supports a NetApp feature called NAS secure:

  • CinderNetappNasSecureFileOperations
  • CinderNetappNasSecureFilePermissions
  • CinderNasSecureFileOperations
  • CinderNasSecureFilePermissions

Red Hat does not recommend that you enable the feature, because it interferes with normal volume operations. Director disables the feature by default, and Red Hat OpenStack Platform does not support it.
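
The snippet below is illustrative only: it shows the explicit form of the default settings that director already applies, in case you want to record them in your own environment file.

parameter_defaults:
  # Illustrative only: these are the defaults that director already applies.
  CinderNasSecureFileOperations: false
  CinderNasSecureFilePermissions: false
  CinderNetappNasSecureFileOperations: false
  CinderNetappNasSecureFilePermissions: false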

Note

For Block Storage (cinder) and Compute (nova) services, you must use NFS version 4.0 or later.
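
If your NFS server does not negotiate version 4.0 or later on its own, you can request a specific protocol version through the Block Storage mount options. The following override is a sketch only; the exact version string depends on what your NFS server exports, and the default mount options are sufficient for most installations.

parameter_defaults:
  # Sketch only: request NFS v4.2 mounts for Block Storage volumes.
  # Omit this override if the server already negotiates NFS 4.0 or later.
  CinderNfsMountOptions: 'vers=4.2'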

Procedure

  1. Create a custom environment file:

    $ vi /home/stack/templates/custom_env.yaml
  2. Add the following parameters to configure NFS storage:

    parameter_defaults:
      CinderEnableIscsiBackend: false
      CinderEnableNfsBackend: true
      GlanceBackend: file
      CinderNfsServers: 192.0.2.230:/cinder
      GlanceNfsEnabled: true
      GlanceNfsShare: 192.0.2.230:/glance
    Note

    The default values of the CinderNfsMountOptions and GlanceNfsOptions parameters enable NFS mount options that are sufficient for most Red Hat OpenStack Platform (RHOSP) installations. If you experience issues when you configure multiple services to share the same NFS server, contact Red Hat support.

  3. Include the environment file that contains your new content in the openstack overcloud deploy command by using the -e option. Ensure that you include all other environment files that are relevant to your deployment.

    $ openstack overcloud deploy \
      ...
      -e /home/stack/templates/custom_env.yaml

18.2. Configuring Ceph Storage

Use one of the following methods to integrate Red Hat Ceph Storage into a Red Hat OpenStack Platform overcloud.

Creating an overcloud with its own Ceph Storage cluster
You can create a Ceph Storage cluster during the creation of the overcloud. Director creates a set of Ceph Storage nodes that use Ceph OSDs to store data. Director also installs the Ceph Monitor service on the overcloud Controller nodes. This means that if an organization creates an overcloud with three highly available Controller nodes, the Ceph Monitor also becomes a highly available service. For more information, see Deploying an Overcloud with Containerized Red Hat Ceph.
Integrating an existing Ceph Storage cluster into an overcloud
If you have an existing Ceph Storage cluster, you can integrate this cluster into a Red Hat OpenStack Platform overcloud during deployment. This means that you can manage and scale the cluster outside of the overcloud configuration. For more information, see Integrating an Overcloud with an Existing Red Hat Ceph Cluster.
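
For orientation, the deployment commands for the two methods differ mainly in which Ceph environment file you include. The following sketch assumes a release that integrates Ceph through the ceph-ansible environment files, and the custom parameter file names are placeholders; see the guides referenced above for the exact files and parameters for your release.

# Overcloud with its own director-deployed Ceph Storage cluster:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /home/stack/templates/ceph-params.yaml

# Overcloud that uses an existing, externally managed Ceph Storage cluster:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \
  -e /home/stack/templates/ceph-external-params.yaml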

18.3. Using an external Object Storage cluster

You can reuse an external OpenStack Object Storage (swift) cluster by disabling the default Object Storage service deployment on your Controller nodes. This disables both the proxy and storage services for Object Storage and configures haproxy and the OpenStack Identity service (keystone) to use the given external Object Storage endpoint.

Note

You must manage user accounts on the external Object Storage (swift) cluster manually.

Prerequisites

  • You need the endpoint IP address of the external Object Storage cluster, as well as the authtoken password from the external Object Storage proxy-server.conf file. You can find the endpoint IP address by using the openstack endpoint list command; see the example commands below.
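
For example, the following commands show one way to gather this information. The proxy-server.conf location is typical but can vary depending on how the external cluster is installed.

# Query the external cluster's Identity service for the Object Storage endpoint:
$ openstack endpoint list --service object-store -c Interface -c URL

# On the external Object Storage proxy node, read the authtoken password
# (the file location can vary by installation):
$ sudo grep -A 10 '\[filter:authtoken\]' /etc/swift/proxy-server.conf | grep password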

Procedure

  1. Create a new file named swift-external-params.yaml with the following content:

    • Replace EXTERNAL.IP:PORT with the IP address and port of the external proxy.
    • Replace AUTHTOKEN with the authtoken password for the external proxy on the SwiftPassword line.

      parameter_defaults:
        ExternalPublicUrl: 'https://EXTERNAL.IP:PORT/v1/AUTH_%(tenant_id)s'
        ExternalInternalUrl: 'http://192.168.24.9:8080/v1/AUTH_%(tenant_id)s'
        ExternalAdminUrl: 'http://192.168.24.9:8080'
        ExternalSwiftUserTenant: 'service'
        SwiftPassword: AUTHTOKEN
  2. Save this file as swift-external-params.yaml.
  3. Deploy the overcloud with the following external Object Storage service environment files, as well as any other environment files that are relevant to your deployment:

    openstack overcloud deploy --templates \
    -e [your environment files] \
    -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
    -e swift-external-params.yaml

18.4. Configuring Ceph Object Store to use external Ceph Object Gateway

Red Hat OpenStack Platform (RHOSP) director supports configuring an external Ceph Object Gateway (RGW) as an Object Store service. To authenticate with the external RGW service, you must configure RGW to verify users and their roles in the Identity service (keystone).

For more information about how to configure an external Ceph Object Gateway, see Configuring the Ceph Object Gateway to use Keystone authentication in the Using Keystone with the Ceph Object Gateway Guide.

Procedure

  1. Add the following parameter_defaults to a custom environment file, for example, swift-external-params.yaml, and adjust the values to suit your deployment:

    parameter_defaults:
       ExternalSwiftPublicUrl: 'http://<Public RGW endpoint or loadbalancer>:8080/swift/v1/AUTH_%(project_id)s'
       ExternalSwiftInternalUrl: 'http://<Internal RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s'
       ExternalSwiftAdminUrl: 'http://<Admin RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s'
       ExternalSwiftUserTenant: 'service'
       SwiftPassword: 'choose_a_random_password'
    Note

    The example code snippet contains parameter values that might differ from values that you use in your environment:

    • The default port where the remote RGW instance listens is 8080. The port might be different depending on how the external RGW is configured.
    • The swift user created in the overcloud uses the password defined by the SwiftPassword parameter. You must configure the external RGW instance to use the same password in the rgw_keystone_admin_password setting so that it can authenticate with the Identity service.
  2. Add the following code to the Ceph config file to configure RGW to use the Identity service. Replace the variable values to suit your environment:

        rgw_keystone_api_version = 3
        rgw_keystone_url = http://<public Keystone endpoint>:5000/
        rgw_keystone_accepted_roles = member, Member, admin
        rgw_keystone_accepted_admin_roles = ResellerAdmin, swiftoperator
        rgw_keystone_admin_domain = default
        rgw_keystone_admin_project = service
        rgw_keystone_admin_user = swift
        rgw_keystone_admin_password = <password_as_defined_in_the_environment_parameters>
        rgw_keystone_implicit_tenants = true
        rgw_keystone_revocation_interval = 0
        rgw_s3_auth_use_keystone = true
        rgw_swift_versioning_enabled = true
        rgw_swift_account_in_url = true
    Note

    Director creates the following roles and users in the Identity service by default:

    • rgw_keystone_accepted_admin_roles: ResellerAdmin, swiftoperator
    • rgw_keystone_admin_domain: default
    • rgw_keystone_admin_project: service
    • rgw_keystone_admin_user: swift
  3. Deploy the overcloud with the additional environment files, as well as any other environment files that are relevant to your deployment:

    openstack overcloud deploy --templates \
    -e <your_environment_files> \
    -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
    -e swift-external-params.yaml

Verification

  1. Log in to the undercloud as the stack user.
  2. Source the overcloudrc file:

    $ source ~/overcloudrc
  3. Verify that the endpoints exist in the Identity service (keystone):

    $ openstack endpoint list --service object-store
    
    +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------------+
    | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                                |
    +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------------+
    | 233b7ea32aaf40c1ad782c696128aa0e | regionOne | swift        | object-store | True    | admin     | http://192.168.24.3:8080/v1/AUTH_%(project_id)s   |
    | 4ccde35ac76444d7bb82c5816a97abd8 | regionOne | swift        | object-store | True    | public    | https://192.168.24.2:13808/v1/AUTH_%(project_id)s |
    | b4ff283f445348639864f560aa2b2b41 | regionOne | swift        | object-store | True    | internal  | http://192.168.24.3:8080/v1/AUTH_%(project_id)s   |
    +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------------+
  4. Create a test container:

    $ openstack container create <testcontainer>
    +---------------------------------------+---------------+------------------------------------+
    | account                               | container     | x-trans-id                         |
    +---------------------------------------+---------------+------------------------------------+
    | AUTH_2852da3cf2fc490081114c434d1fc157 | testcontainer | tx6f5253e710a2449b8ef7e-005f2d29e8 |
    +---------------------------------------+---------------+------------------------------------+
  5. Upload a file to the test container to confirm that you can upload data:

    $ openstack object create testcontainer undercloud.conf
    +-----------------+---------------+----------------------------------+
    | object          | container     | etag                             |
    +-----------------+---------------+----------------------------------+
    | undercloud.conf | testcontainer | 09fcffe126cac1dbac7b89b8fd7a3e4b |
    +-----------------+---------------+----------------------------------+
  6. Delete the test container:

    $ openstack container delete -r <testcontainer>

18.5. Configuring the image import method and shared staging area

The default settings for the OpenStack Image service (glance) are determined by the heat templates that you use when you install Red Hat OpenStack Platform. The Image service heat template is deployment/glance/glance-api-container-puppet.yaml.

You can import images with the following methods:

web-download
Use the web-download method to import an image from a URL.
glance-direct
Use the glance-direct method to import an image from a local volume.
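
After deployment, you can exercise either method from the command line. The following sketch uses the glance client import workflow; the image name and URL are placeholders, so verify the exact client syntax for your release.

# web-download: the Image service fetches the image from the URL on your behalf.
$ glance image-create-via-import \
    --disk-format qcow2 \
    --container-format bare \
    --name example-image \
    --import-method web-download \
    --uri http://example.com/images/example.qcow2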

18.5.1. Creating and deploying the glance-settings.yaml file

Use a custom environment file to configure the import parameters. These parameters override the default values that are present in the core heat template collection. The following example environment file content contains parameters for interoperable image import.

parameter_defaults:
  # Configure NFS backend
  GlanceBackend: file
  GlanceNfsEnabled: true
  GlanceNfsShare: 192.168.122.1:/export/glance

  # Enable glance-direct import method
  GlanceEnabledImportMethods: glance-direct,web-download

  # Configure NFS staging area (required for glance-direct import method)
  GlanceStagingNfsShare: 192.168.122.1:/export/glance-staging

The GlanceBackend, GlanceNfsEnabled, and GlanceNfsShare parameters are defined in the Storage Configuration section in the Advanced Overcloud Customization Guide.

Use two new parameters for interoperable image import to define the import method and a shared NFS staging area.

GlanceEnabledImportMethods
Defines the available import methods, web-download (default) and glance-direct. This parameter is necessary only if you want to enable additional methods besides web-download.
GlanceStagingNfsShare
Configures the NFS staging area that the glance-direct import method uses. This space can be shared among nodes in a high-availability cluster configuration. If you want to use this parameter, you must also set the GlanceNfsEnabled parameter to true.

Procedure

  1. Create a new file, for example, glance-settings.yaml. Use the syntax from the example to populate this file.
  2. Include the glance-settings.yaml file in the openstack overcloud deploy command, as well as any other environment files that are relevant to your deployment:

    $ openstack overcloud deploy --templates -e glance-settings.yaml

For more information about using environment files, see the Including Environment Files in Overcloud Creation section in the Advanced Overcloud Customization Guide.

18.5.2. Controlling image web-import sources

You can limit the sources of web-import image downloads by adding URI blocklists and allowlists to the optional glance-image-import.conf file.

You can allow or block image source URIs at three levels:

  • scheme (allowed_schemes, disallowed_schemes)
  • host (allowed_hosts, disallowed_hosts)
  • port (allowed_ports, disallowed_ports)

If you specify both allowlist and blocklist at any level, the allowlist is honored and the blocklist is ignored.

The Image service (glance) applies the following decision logic to validate image source URIs:

  1. The scheme is checked.

    a. Missing scheme: reject.
    b. If there is an allowlist, and the scheme is not present in the allowlist: reject. Otherwise, skip c and continue on to 2.
    c. If there is a blocklist, and the scheme is present in the blocklist: reject.
  2. The host name is checked.

    a. Missing host name: reject.
    b. If there is an allowlist, and the host name is not present in the allowlist: reject. Otherwise, skip c and continue on to 3.
    c. If there is a blocklist, and the host name is present in the blocklist: reject.
  3. If there is a port in the URI, the port is checked.

    a. If there is an allowlist, and the port is not present in the allowlist: reject. Otherwise, skip b and continue on to 4.
    b. If there is a blocklist, and the port is present in the blocklist: reject.
  4. The URI is accepted as valid.

If you allow a scheme, either by adding it to an allowlist or by not adding it to a blocklist, any URI that uses the default port for that scheme, and therefore does not include a port in the URI, is allowed. If the URI does include a port, it is validated according to the default decision logic.

18.5.2.1. Example

For example, consider the following configuration. The default port for FTP is 21. Because ftp is an allowlisted scheme, the URL ftp://example.org/some/resource is allowed. However, because 21 is not in the port allowlist, the URL ftp://example.org:21/some/resource to the same resource is rejected:

allowed_schemes = [http,https,ftp]
disallowed_schemes = []
allowed_hosts = []
disallowed_hosts = []
allowed_ports = [80,443]
disallowed_ports = []

18.5.2.2. Default image import blocklist and allowlist settings

The glance-image-import.conf file is an optional file that contains the following default options:

  • allowed_schemes - [http, https]
  • disallowed_schemes - empty list
  • allowed_hosts - empty list
  • disallowed_hosts - empty list
  • allowed_ports - [80, 443]
  • disallowed_ports - empty list

If you use the defaults, end users can access URIs by using only the http or https scheme. The only ports that users can specify are 80 and 443. Users do not have to specify a port, but if they do, it must be either 80 or 443.
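
As an illustrative sketch, a glance-image-import.conf that restricts web-download sources to a single trusted HTTPS host might look like the following. The host name is a placeholder; the options belong to the [import_filtering_opts] section.

[import_filtering_opts]
# Allow only HTTPS downloads from one trusted host on the default port.
allowed_schemes = [https]
disallowed_schemes = []
allowed_hosts = [images.example.com]
disallowed_hosts = []
allowed_ports = [443]
disallowed_ports = []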

You can find the glance-image-import.conf file in the etc/ subdirectory of the Image service source code tree. Ensure that you are looking in the correct branch for your release of Red Hat OpenStack Platform.

18.5.3. Injecting metadata on image import to control where VMs launch

End users can upload images to the Image service and use these images to launch VMs. In some deployments, these user-provided (non-admin) images must be launched on a specific set of Compute nodes. The assignment of an instance to a Compute node is controlled by image metadata properties.

The Image Property Injection plugin injects metadata properties to images during import. Specify the properties by editing the [image_import_opts] and [inject_metadata_properties] sections of the glance-image-import.conf file.

To enable the Image Property Injection plugin, add the following line to the [image_import_opts] section:

[image_import_opts]
image_import_plugins = [inject_image_metadata]

To limit the metadata injection to images provided by a certain set of users, set the ignore_user_roles parameter. For example, use the following configuration to inject one value for property1 and two values for property2 into images downloaded by any non-admin user.

[DEFAULT]
[image_conversion]
[image_import_opts]
image_import_plugins = [inject_image_metadata]
[import_filtering_opts]
[inject_metadata_properties]
ignore_user_roles = admin
inject = PROPERTY1:value,PROPERTY2:value;another value

The parameter ignore_user_roles is a comma-separated list of the Identity service (keystone) roles that the plugin ignores. This means that if the user that makes the image import call has any of these roles, the plugin does not inject any properties into the image.

The parameter inject is a comma-separated list of properties and values that are injected into the image record for the imported image. Each property and value must be quoted and separated by a colon (‘:’).
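
After an import completes, one way to confirm that the plugin injected the properties is to inspect the image record, for example:

$ openstack image show -c properties <image_id>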

You can find the glance-image-import.conf file in the etc/ subdirectory of the Image service source code tree. Ensure that you are looking in the correct branch for your release of Red Hat OpenStack Platform.

18.6. Configuring cinder back end for the Image service

Use the GlanceBackend parameter to set the back end that the Image service uses to store images.

Important

The default maximum number of volumes you can create for a project is 10.
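
Because each image is stored as a volume in the project that the cinder_store_project_name setting configures (service by default), you might need to raise the volume quota for that project. The following is a sketch, assuming the default service project:

$ openstack quota set --volumes 50 service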

Procedure

  1. To configure cinder as the Image service back end, add the following line to an environment file:

    parameter_defaults:
      GlanceBackend: cinder
  2. If the cinder back end is enabled, the following parameters and values are set by default:

    cinder_store_auth_address = http://172.17.1.19:5000/v3
    cinder_store_project_name = service
    cinder_store_user_name = glance
    cinder_store_password = ****secret****
  3. To use a custom user name, or any custom value for the cinder_store_ parameters, add the ExtraConfig parameter to parameter_defaults and include your custom values:

    parameter_defaults:
      ExtraConfig:
        glance::config::api_config:
          glance_store/cinder_store_auth_address:
            value: "%{hiera('glance::api::authtoken::auth_url')}/v3"
          glance_store/cinder_store_user_name:
            value: <user-name>
          glance_store/cinder_store_password:
            value: "%{hiera('glance::api::authtoken::password')}"
          glance_store/cinder_store_project_name:
            value: "%{hiera('glance::api::authtoken::project_name')}"

18.7. Configuring the maximum number of storage devices to attach to one instance

By default, you can attach an unlimited number of storage devices to a single instance. To limit the maximum number of devices, add the max_disk_devices_to_attach parameter to your Compute environment file. Use the following example to change the value of max_disk_devices_to_attach to "30":

parameter_defaults:
  ComputeExtraConfig:
    nova::config::nova_config:
      compute/max_disk_devices_to_attach:
        value: '30'

Guidelines and considerations

  • The number of storage disks supported by an instance depends on the bus that the disk uses. For example, the IDE disk bus is limited to 4 attached devices.
  • Changing the max_disk_devices_to_attach on a Compute node with active instances can cause rebuilds to fail if the maximum number is lower than the number of devices already attached to instances. For example, if instance A has 26 devices attached and you change max_disk_devices_to_attach to 20, a request to rebuild instance A will fail.
  • During cold migration, the configured maximum number of storage devices is enforced only on the source for the instance that you want to migrate. The destination is not checked before the move. This means that if Compute node A has 26 attached disk devices, and Compute node B has a configured maximum of 20 attached disk devices, a cold migration of an instance with 26 attached devices from Compute node A to Compute node B succeeds. However, a subsequent request to rebuild the instance on Compute node B fails because 26 devices are already attached, which exceeds the configured maximum of 20.
  • The configured maximum is not enforced on shelved offloaded instances, as they have no Compute node.
  • Attaching a large number of disk devices to instances can degrade performance on the instance. Tune the maximum number based on the boundaries of what your environment can support.
  • Instances with machine type Q35 can attach a maximum of 500 disk devices.

18.8. Improving scalability with Image service caching

Use the glance-api caching mechanism to store copies of images on Image service (glance) API servers and retrieve them automatically to improve scalability. With Image service caching, glance-api can run on multiple hosts. This means that it does not need to retrieve the same image from back end storage multiple times. Image service caching does not affect any Image service operations.

Configure Image service caching with the Red Hat OpenStack Platform director (tripleo) heat templates:

Procedure

  1. In an environment file, set the value of the GlanceCacheEnabled parameter to true, which automatically sets the flavor value to keystone+cachemanagement in the glance-api.conf heat template:

    parameter_defaults:
        GlanceCacheEnabled: true
  2. Include the environment file in the openstack overcloud deploy command when you redeploy the overcloud.
  3. Optional: Tune the glance_cache_pruner to an alternative frequency when you redeploy the overcloud. The following example shows a frequency of 5 minutes:

    parameter_defaults:
      ControllerExtraConfig:
        glance::cache::pruner::minute: '*/5'

    Adjust the frequency according to your needs to avoid file system full scenarios. Consider the following elements when you choose an alternative frequency; a combined example follows this list:

    • The size of the files that you want to cache in your environment.
    • The amount of available file system space.
    • The frequency at which the environment caches images.
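
    If your version of the heat templates also exposes a parameter to cap the cache size, you can combine it with the pruner frequency. The following sketch assumes that the GlanceImageCacheMaxSize parameter is available (verify in deployment/glance/glance-api-container-puppet.yaml); the 10 GB value is only an example:

    parameter_defaults:
      GlanceCacheEnabled: true
      # Assumed parameter; verify availability for your release. Value in bytes.
      GlanceImageCacheMaxSize: 10737418240
      ControllerExtraConfig:
        glance::cache::pruner::minute: '*/5'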

18.9. Configuring third party storage

The following environment files are present in the core heat template collection /usr/share/openstack-tripleo-heat-templates.

Dell EMC Storage Center

Deploys a single Dell EMC Storage Center back end for the Block Storage (cinder) service.

The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellsc-config.yaml.

See the Dell Storage Center Back End Guide for full configuration information.

Dell EMC PS Series

Deploys a single Dell EMC PS Series back end for the Block Storage (cinder) service.

The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellps-config.yaml.

See the Dell EMC PS Series Back End Guide for full configuration information.

NetApp Block Storage

Deploys a NetApp storage appliance as a back end for the Block Storage (cinder) service.

The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/storage/cinder-netapp-config.yaml.

For more information about NetApp Block Storage, see the NetApp Block Storage Service (Cinder) documentation.
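
To use any of these back ends, include the corresponding environment file in the deployment command, typically together with a custom environment file that contains your back end credentials and settings. The following is a minimal sketch using the NetApp environment file; the custom parameter file name is a placeholder.

$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage/cinder-netapp-config.yaml \
  -e /home/stack/templates/netapp-params.yaml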