Chapter 19. Storage Configuration
This chapter outlines several methods of configuring storage options for your Overcloud.
By default, the Overcloud uses local and LVM storage. However, these options are not supported for enterprise-level Overclouds; it is recommended that you use one of the storage options in this chapter instead.
19.1. Configuring NFS Storage
This section describes configuring the Overcloud to use an NFS share. The installation and configuration process is based on the modification of an existing environment file in the core Heat template collection.
The core heat template collection contains a set of environment files in /usr/share/openstack-tripleo-heat-templates/environments/. These environment templates help with custom configuration of some of the supported features in a director-created Overcloud. This includes an environment file to help configure storage. This file is located at /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml. Copy this file to the stack user’s template directory.
$ cp /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml ~/templates/.
The environment file contains some parameters to help configure different storage options for OpenStack’s block and image storage components, cinder and glance. In this example, you will configure the Overcloud to use an NFS share. Modify the following parameters:
- CinderEnableIscsiBackend
  Enables the iSCSI back end. Set to false.
- CinderEnableRbdBackend
  Enables the Ceph Storage back end. Set to false.
- CinderEnableNfsBackend
  Enables the NFS back end. Set to true.
- NovaEnableRbdBackend
  Enables Ceph Storage for Nova ephemeral storage. Set to false.
- GlanceBackend
  Defines the back end to use for Glance. Set to file to use file-based storage for images. The Overcloud saves these files in a mounted NFS share for Glance.
- CinderNfsMountOptions
  The NFS mount options for the volume storage.
- CinderNfsServers
  The NFS share to mount for volume storage. For example, 192.168.122.1:/export/cinder.
- GlanceNfsEnabled
  Enables Pacemaker to manage the share for image storage. If disabled, the Overcloud stores images in the Controller node's file system. Set to true.
- GlanceNfsShare
  The NFS share to mount for image storage. For example, 192.168.122.1:/export/glance.
- GlanceNfsOptions
  The NFS mount options for the image storage.
The environment file’s options should look similar to the following:
parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: false
  CinderEnableNfsBackend: true
  NovaEnableRbdBackend: false
  GlanceBackend: 'file'
  CinderNfsMountOptions: 'rw,sync'
  CinderNfsServers: '192.0.2.230:/cinder'
  GlanceNfsEnabled: true
  GlanceNfsShare: '192.0.2.230:/glance'
  GlanceNfsOptions: 'rw,sync,context=system_u:object_r:glance_var_lib_t:s0'
Include context=system_u:object_r:glance_var_lib_t:s0 in the GlanceNfsOptions parameter to allow glance to access the /var/lib directory. Without this SELinux context, glance fails to write to the mount point.
These parameters are integrated as part of the core Heat template collection. Setting them as shown creates two NFS mount points for cinder and glance to use.
Save this file for inclusion in the Overcloud creation.
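For example, assuming the file was copied to ~/templates/storage-environment.yaml as shown earlier, a minimal sketch of the deployment command that includes it might look like the following; add your other environment files and options as required:

$ openstack overcloud deploy --templates \
  -e [your environment files] \
  -e ~/templates/storage-environment.yaml

After the deployment completes, you can check the resulting NFS mounts on a Controller node, for example with mount | grep nfs.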
19.2. Configuring Ceph Storage
The director provides two main methods for integrating Red Hat Ceph Storage into an Overcloud.
- Creating an Overcloud with its own Ceph Storage Cluster
  The director can create a Ceph Storage Cluster during the creation of the Overcloud. The director creates a set of Ceph Storage nodes that use Ceph OSDs to store the data. In addition, the director installs the Ceph Monitor service on the Overcloud's Controller nodes. This means that if an organization creates an Overcloud with three highly available Controller nodes, the Ceph Monitor also becomes a highly available service. For more information, see the Deploying an Overcloud with Containerized Red Hat Ceph guide.
- Integrating an Existing Ceph Storage Cluster into an Overcloud
  If you already have an existing Ceph Storage Cluster, you can integrate it during the Overcloud deployment. This means that you manage and scale the cluster outside of the Overcloud configuration. For more information, see the Integrating an Overcloud with an Existing Red Hat Ceph Cluster guide. Example deployment commands for both methods are sketched after this list.
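These sketches are illustrative only; consult the guides referenced above for the authoritative procedures. Each method is typically enabled by including a corresponding ceph-ansible environment file from the core Heat template collection in the deployment command, and the exact file names can differ between versions:

# Create a new Ceph Storage Cluster as part of the Overcloud (sketch):
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e [your environment files]

# Integrate an existing, externally managed Ceph Storage Cluster (sketch):
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \
  -e [your environment files]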
19.3. Using an External Object Storage Cluster
You can reuse an external Object Storage (swift) cluster by disabling the default Object Storage service deployment on the controller nodes. Doing so disables both the proxy and storage services for Object Storage and configures haproxy and keystone to use the given external Swift endpoint.
You must manage user accounts on the external Object Storage (swift) cluster manually.
You need the endpoint IP address of the external Object Storage cluster, as well as the authtoken password from the external Object Storage proxy-server.conf file. You can find the endpoint information by using the openstack endpoint list command.
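For example, the following is a hedged sketch of how you might locate this information; the proxy-server.conf path is a typical default and may differ on your external cluster:

# List registered endpoints and filter for the Object Storage service:
$ openstack endpoint list | grep object-store

# On the external Object Storage proxy node, read the authtoken settings:
$ sudo grep -A 5 'filter:authtoken' /etc/swift/proxy-server.conf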
To deploy the Overcloud with an external Object Storage (swift) cluster:
- Create a new file named swift-external-params.yaml with the following content. Replace EXTERNAL.IP:PORT with the IP address and port of the external proxy, and replace AUTHTOKEN with the authtoken password for the external proxy on the SwiftPassword line:

  parameter_defaults:
    ExternalPublicUrl: 'https://EXTERNAL.IP:PORT/v1/AUTH_%(tenant_id)s'
    ExternalInternalUrl: 'http://192.168.24.9:8080/v1/AUTH_%(tenant_id)s'
    ExternalAdminUrl: 'http://192.168.24.9:8080'
    ExternalSwiftUserTenant: 'service'
    SwiftPassword: AUTHTOKEN

- Save the file.
- Deploy the Overcloud using these additional environment files:

  $ openstack overcloud deploy --templates \
    -e [your environment files] \
    -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
    -e swift-external-params.yaml
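After the deployment completes, you can optionally confirm that the Overcloud uses the external cluster by creating a test container. The following is a sketch only and assumes the overcloudrc credentials file that the director creates:

$ source ~/overcloudrc
$ openstack container create test-container
$ openstack container list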
19.4. Configuring the Image Import Method and Shared Staging Area
The default settings for the OpenStack Image service (glance) are determined by the Heat templates used when OpenStack is installed. The Image service Heat template is tht/puppet/services/glance-api.yaml.
The interoperable image import allows two methods for image import:
- web-download
- glance-direct
The web-download method lets you import an image from a URL; the glance-direct method lets you import an image from a local volume.
19.4.1. Creating and Deploying the glance-settings.yaml File
You use an environment file to configure the import parameters. These parameters override the default values established in the Heat template. The example environment content provides parameters for the interoperable image import.
parameter_defaults:
  # Configure NFS backend
  GlanceBackend: file
  GlanceNfsEnabled: true
  GlanceNfsShare: 192.168.122.1:/export/glance

  # Enable glance-direct import method
  GlanceEnabledImportMethods: glance-direct,web-download

  # Configure NFS staging area (required for glance-direct import method)
  GlanceStagingNfsShare: 192.168.122.1:/export/glance-staging
The GlanceBackend, GlanceNfsEnabled, and GlanceNfsShare parameters are defined in the Storage Configuration section in the Advanced Overcloud Customization Guide.
Two new parameters for interoperable image import define the import method and a shared NFS staging area.
- GlanceEnabledImportMethods
  Defines the available import methods, web-download (default) and glance-direct. This line is only necessary if you want to enable additional methods besides web-download.
- GlanceStagingNfsShare
  Configures the NFS staging area that the glance-direct import method uses. This space can be shared among nodes in a high-availability cluster setup. Requires GlanceNfsEnabled to be set to true.
To configure the settings:
- Create a new file called, for example, glance-settings.yaml. The contents of this file should be similar to the example above.
- Add the file to your OpenStack environment using the openstack overcloud deploy command:

  $ openstack overcloud deploy --templates -e glance-settings.yaml
For additional information about using environment files, see the Including Environment Files in Overcloud Creation section in the Advanced Overcloud Customization Guide.
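Once an Overcloud is deployed with these settings, you can exercise the import methods with the glance client. The following is a sketch only; the image names, URL, and file are placeholders, and client syntax can vary between releases:

# Import an image from a URL using the web-download method (sketch):
$ glance image-create-via-import \
    --disk-format qcow2 \
    --container-format bare \
    --name example-web-image \
    --import-method web-download \
    --uri http://example.com/images/example.qcow2

# Import a local image file using the glance-direct method (sketch):
$ glance image-create-via-import \
    --disk-format qcow2 \
    --container-format bare \
    --name example-local-image \
    --import-method glance-direct \
    --file ./example.qcow2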
19.5. Configuring Third Party Storage
The director includes several environment files to help configure third-party storage providers. An example deployment command is shown after the list. This includes:
- Dell EMC Storage Center
  Deploys a single Dell EMC Storage Center back end for the Block Storage (cinder) service.
  The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellsc-config.yaml.
  See the Dell Storage Center Back End Guide for full configuration information.
- Dell EMC PS Series
  Deploys a single Dell EMC PS Series back end for the Block Storage (cinder) service.
  The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellps-config.yaml.
  See the Dell EMC PS Series Back End Guide for full configuration information.
- NetApp Block Storage
  Deploys a NetApp storage appliance as a back end for the Block Storage (cinder) service.
  The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-netapp-config.yaml.
  See the NetApp Block Storage Back End Guide for full configuration information.
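As with other storage back ends, you include the relevant environment file in the deployment command after editing it with your back-end details. For example, a NetApp-backed deployment might use the following (sketch only):

$ openstack overcloud deploy --templates \
  -e [your environment files] \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-netapp-config.yaml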
