Chapter 19. Storage Configuration

This chapter outlines several methods of configuring storage options for your Overcloud.

Important

By default, the overcloud uses local ephemeral storage provided by OpenStack Compute (nova) and LVM block storage provided by OpenStack Storage (cinder). However, these options are not supported for enterprise-level overclouds. Instead, use one of the storage options in this chapter.

19.1. Configuring NFS Storage

This section describes how to configure the overcloud to use an NFS share. The installation and configuration process is based on the modification of an existing environment file in the core heat template collection.

Important

Red Hat recommends that you use a certified storage back end and driver. Red Hat does not recommend that you use NFS that comes from the generic NFS back end, because its capabilities are limited when compared to a certified storage back end and driver. For example, the generic NFS back end does not support features such as volume encryption and volume multi-attach. For information about supported drivers, see the Red Hat Ecosystem Catalog.

Note

There are several director heat parameters that control whether an NFS back end or a NetApp NFS Block Storage back end supports a NetApp feature called NAS secure:

  • CinderNetappNasSecureFileOperations
  • CinderNetappNasSecureFilePermissions
  • CinderNasSecureFileOperations
  • CinderNasSecureFilePermissions

Red Hat does not recommend that you enable the feature, because it interferes with normal volume operations. Director disables the feature by default, and Red Hat OpenStack Platform does not support it.
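
If you want to keep the feature explicitly disabled in your own environment file, a minimal sketch follows. The quoted string values mirror the director defaults and are an assumption; verify them against your template version:

parameter_defaults:
  # Keep the NAS secure feature disabled (the director default)
  CinderNasSecureFileOperations: 'false'
  CinderNasSecureFilePermissions: 'false'
  CinderNetappNasSecureFileOperations: 'false'
  CinderNetappNasSecureFilePermissions: 'false'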

Note

For Block Storage and Compute services, you must use NFS version 4.0 or later.
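
One way to meet this requirement is to pin the protocol version through the NFS mount option parameters described later in this section. This is a hedged sketch only; the exact option string depends on your NFS server:

parameter_defaults:
  # Example only: request NFS 4.2 mounts for the Block Storage and Image back ends
  CinderNfsMountOptions: 'rw,sync,vers=4.2,context=system_u:object_r:cinder_var_lib_t:s0'
  GlanceNfsOptions: 'rw,sync,vers=4.2,context=system_u:object_r:glance_var_lib_t:s0'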

The core heat template collection contains a set of environment files in /usr/share/openstack-tripleo-heat-templates/environments/. With these environment files, you can create a customized configuration of some of the supported features in a director-created overcloud. This includes an environment file designed to configure storage. This file is located at /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml.

  1. Copy the file to the stack user’s template directory:

    $ cp /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml ~/templates/.
  2. Modify the following parameters:

    CinderEnableIscsiBackend
    Enables the iSCSI back end. Set to false.
    CinderEnableRbdBackend
    Enables the Ceph Storage back end. Set to false.
    CinderEnableNfsBackend
    Enables the NFS back end. Set to true.
    NovaEnableRbdBackend
    Enables Ceph Storage for Nova ephemeral storage. Set to false.
    GlanceBackend
    Defines the back end to use for glance. Set to file to use file-based storage for images. The overcloud saves these files in a mounted NFS share for glance.
    CinderNfsMountOptions
    The NFS mount options for the volume storage.
    CinderNfsServers
    The NFS share to mount for volume storage. For example, 192.168.122.1:/export/cinder.
    GlanceNfsEnabled
    When GlanceBackend is set to file, GlanceNfsEnabled enables images to be stored through NFS in a shared location so that all Controller nodes have access to the images. If disabled, the overcloud stores images in the file system of the Controller node. Set to true.
    GlanceNfsShare
    The NFS share to mount for image storage. For example, 192.168.122.1:/export/glance.
    GlanceNfsOptions
    The NFS mount options for the image storage.

    The environment file contains parameters that configure different storage options for the Red Hat OpenStack Platform Block Storage (cinder) and Image (glance) services. This example demonstrates how to configure the overcloud to use an NFS share.

    The options in the environment file should look similar to the following:

    parameter_defaults:
      CinderEnableIscsiBackend: false
      CinderEnableRbdBackend: false
      CinderEnableNfsBackend: true
      NovaEnableRbdBackend: false
      GlanceBackend: file
    
      CinderNfsMountOptions: rw,sync,context=system_u:object_r:cinder_var_lib_t:s0
      CinderNfsServers: 192.0.2.230:/cinder
    
      GlanceNfsEnabled: true
      GlanceNfsShare: 192.0.2.230:/glance
      GlanceNfsOptions: rw,sync,context=system_u:object_r:glance_var_lib_t:s0

    These parameters are integrated as part of the heat template collection. Setting them as shown in the example code creates two NFS mount points for the Block Storage and Image services to use.

    Important

    Include the context=system_u:object_r:glance_var_lib_t:s0 option in the GlanceNfsOptions parameter to allow the Image service to access the /var/lib directory. Without this SELinux context, the Image service cannot write to the mount point.

  3. Include the file when you deploy the overcloud.
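
    For example, assuming that you copied the environment file to ~/templates in step 1, the deployment command might resemble the following:

    $ openstack overcloud deploy --templates \
      -e [your environment files] \
      -e ~/templates/storage-environment.yaml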

19.2. Configuring Ceph Storage

The director provides two main methods for integrating Red Hat Ceph Storage into an Overcloud.

Creating an Overcloud with its own Ceph Storage Cluster
The director can create a Ceph Storage cluster during the creation of the Overcloud. The director creates a set of Ceph Storage nodes that use Ceph OSDs to store the data. In addition, the director installs the Ceph Monitor service on the Overcloud’s Controller nodes. This means that if an organization creates an Overcloud with three highly available Controller nodes, the Ceph Monitor also becomes a highly available service. For more information, see the Deploying an Overcloud with Containerized Red Hat Ceph guide.
Integrating an Existing Ceph Storage Cluster into an Overcloud
If you already have an existing Ceph Storage cluster, you can integrate it during Overcloud deployment. This means that you manage and scale the cluster outside of the Overcloud configuration. For more information, see the Integrating an Overcloud with an Existing Red Hat Ceph Cluster guide.
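
In both cases, you include a Ceph environment file with the deployment command. The following is a hedged sketch that assumes the ceph-ansible based templates shipped with the core heat template collection; for an existing cluster, substitute ceph-ansible/ceph-ansible-external.yaml:

$ openstack overcloud deploy --templates \
  -e [your environment files] \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml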

19.3. Using an External Object Storage Cluster

You can reuse an external Object Storage (swift) cluster by disabling the default Object Storage service deployment on the controller nodes. Doing so disables both the proxy and storage services for Object Storage and configures haproxy and keystone to use the given external Swift endpoint.

Note

You must manage user accounts on the external Object Storage (swift) cluster manually.

You need the endpoint IP address of the external Object Storage cluster as well as the authtoken password from the external Object Storage proxy-server.conf file. You can find the endpoint information by using the openstack endpoint list command.
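
For example, a command similar to the following lists the registered Object Storage endpoints; the --service filter value is an assumption and might differ in your environment:

$ openstack endpoint list --service object-store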

To deploy the overcloud with an external Swift cluster:

  1. Create a new file named swift-external-params.yaml with the following content:

    • Replace EXTERNAL.IP:PORT with the IP address and port of the external proxy.
    • Replace AUTHTOKEN with the authtoken password for the external proxy on the SwiftPassword line.

      parameter_defaults:
        ExternalPublicUrl: 'https://EXTERNAL.IP:PORT/v1/AUTH_%(tenant_id)s'
        ExternalInternalUrl: 'http://192.168.24.9:8080/v1/AUTH_%(tenant_id)s'
        ExternalAdminUrl: 'http://192.168.24.9:8080'
        ExternalSwiftUserTenant: 'service'
        SwiftPassword: AUTHTOKEN
  2. Save this file as swift-external-params.yaml.
  3. Deploy the overcloud using these additional environment files.

    openstack overcloud deploy --templates \
    -e [your environment files] \
    -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
    -e swift-external-params.yaml

19.4. Configuring the Image Import Method and Shared Staging Area

The default settings for the OpenStack Image service (glance) are determined by the Heat templates used when OpenStack is installed. The Image service Heat template is tht/puppet/services/glance-api.yaml.

Interoperable image import provides two methods for importing images:

  • web-download
  • glance-direct

The web-download method lets you import an image from a URL; the glance-direct method lets you import an image from a local volume.
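
After deployment, a hedged example of each method with the glance client might look like the following; the image identifiers, file name, and URL are placeholders:

# Import from a URL (web-download)
$ glance image-create --name example-image --disk-format qcow2 --container-format bare
$ glance image-import <image-id> --import-method web-download --uri http://example.com/images/example.qcow2

# Stage a local file, then import it (glance-direct)
$ glance image-stage <image-id> --file ./example.qcow2
$ glance image-import <image-id> --import-method glance-direct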

19.4.1. Creating and Deploying the glance-settings.yaml File

You use an environment file to configure the import parameters. These parameters override the default values established in the Heat template. The following example environment file content provides parameters for interoperable image import.

parameter_defaults:
  # Configure NFS backend
  GlanceBackend: file
  GlanceNfsEnabled: true
  GlanceNfsShare: 192.168.122.1:/export/glance

  # Enable glance-direct import method
  GlanceEnabledImportMethods: glance-direct,web-download

  # Configure NFS staging area (required for glance-direct import method)
  GlanceStagingNfsShare: 192.168.122.1:/export/glance-staging

The GlanceBackend, GlanceNfsEnabled, and GlanceNfsShare parameters are defined in the Storage Configuration section in the Advanced Overcloud Customization Guide.

Two new parameters for interoperable image import define the import method and a shared NFS staging area.

GlanceEnabledImportMethods
Defines the available import methods, web-download (default) and glance-direct. This line is only necessary if you wish to enable additional methods besides web-download.
GlanceStagingNfsShare
Configures the NFS staging area used by the glance-direct import method. This space can be shared among nodes in a high-availability cluster setup. This parameter requires that GlanceNfsEnabled is set to true.

To configure the settings:

  1. Create a new file called, for example, glance-settings.yaml. The contents of this file should be similar to the example above.
  2. Add the file to your OpenStack environment using the openstack overcloud deploy command:

    $ openstack overcloud deploy --templates -e glance-settings.yaml

    For additional information about using environment files, see the Including Environment Files in Overcloud Creation section in the Advanced Overcloud Customization Guide.

19.5. Configuring cinder back end for the Image service

The GlanceBackend parameter sets the back end that the Image service uses to store images.

Important

The default maximum number of volumes you can create for a project is 10.
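
If you need to create more volumes for a project, you can raise the Block Storage volume quota, for example:

$ openstack quota set --volumes 20 <project>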

Procedure

  1. To configure cinder as the Image service back end, add the following to the environment file:

    parameter_defaults:
      GlanceBackend: cinder
  2. If the cinder back end is enabled, the following parameters and values are set by default:

    cinder_store_auth_address = http://172.17.1.19:5000/v3
    cinder_store_project_name = service
    cinder_store_user_name = glance
    cinder_store_password = ****secret****
  3. To use a custom user name, or any custom value for the cinder_store_ parameters, add the ExtraConfig settings to parameter_defaults and pass the custom values:

    ExtraConfig:
        glance::config::api_config:
          glance_store/cinder_store_auth_address:
            value: "%{hiera('glance::api::authtoken::auth_url')}/v3"
          glance_store/cinder_store_user_name:
            value: <user-name>
          glance_store/cinder_store_password:
            value: "%{hiera('glance::api::authtoken::password')}"
          glance_store/cinder_store_project_name:
            value: "%{hiera('glance::api::authtoken::project_name')}"

19.6. Configuring the maximum number of storage devices to attach to one instance

By default, you can attach an unlimited number of storage devices to a single instance. To limit the maximum number of devices, add the max_disk_devices_to_attach parameter to your Compute environment file. The following example shows how to change the value of max_disk_devices_to_attach to "30":

parameter_defaults:
  ComputeExtraConfig:
    nova::config::nova_config:
      compute/max_disk_devices_to_attach:
        value: '30'

Guidelines and considerations

  • The number of storage disks supported by an instance depends on the bus that the disk uses. For example, the IDE disk bus is limited to 4 attached devices. See the example after this list for one way to select a different disk bus.
  • Changing the max_disk_devices_to_attach on a Compute node with active instances can cause rebuilds to fail if the maximum number is lower than the number of devices already attached to instances. For example, if instance A has 26 devices attached and you change max_disk_devices_to_attach to 20, a request to rebuild instance A will fail.
  • During cold migration, the configured maximum number of storage devices is only enforced on the source for the instance that you want to migrate. The destination is not checked before the move. This means that if Compute node A has 26 attached disk devices, and Compute node B has a configured maximum of 20 attached disk devices, a cold migration of an instance with 26 attached devices from Compute node A to Compute node B succeeds. However, a subsequent request to rebuild the instance in Compute node B fails because 26 devices are already attached which exceeds the configured maximum of 20.
  • The configured maximum is not enforced on shelved offloaded instances, as they have no Compute node.
  • Attaching a large number of disk devices to instances can degrade performance on the instance. You should tune the maximum number based on the boundaries of what your environment can support.
  • Instances with machine type Q35 can attach a maximum of 500 disk devices.
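
For example, one way to avoid the IDE limit is to request a different disk bus through image properties, assuming that your guest image supports that bus; the property values shown here are illustrative:

$ openstack image set --property hw_disk_bus=scsi --property hw_scsi_model=virtio-scsi <image>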

19.7. Configuring Third Party Storage

The director includes several environment files to help you configure third-party storage providers. These include:

Dell EMC Storage Center

Deploys a single Dell EMC Storage Center back end for the Block Storage (cinder) service.

The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellsc-config.yaml.

See the Dell Storage Center Back End Guide for full configuration information.

Dell EMC PS Series

Deploys a single Dell EMC PS Series back end for the Block Storage (cinder) service.

The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellps-config.yaml.

See the Dell EMC PS Series Back End Guide for full configuration information.

NetApp Block Storage

Deploys a NetApp storage appliance as a back end for the Block Storage (cinder) service.

The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/storage/cinder-netapp-config.yaml.

See the NetApp Block Storage Back End Guide for full configuration information.
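
To use any of these back ends, edit the relevant environment file for your appliance and include it with the deployment command. For example, for the NetApp back end:

$ openstack overcloud deploy --templates \
  -e [your environment files] \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage/cinder-netapp-config.yaml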