Dell Storage Center Back End Guide

Red Hat OpenStack Platform 15

A Guide to Using Dell Storage Center Storage in a Red Hat OpenStack Platform Overcloud

OpenStack Documentation Team

Abstract

This document describes how to deploy a single Dell Storage Center device as a back end to the Red Hat OpenStack Platform 15 Overcloud.

Chapter 1. Introduction

This document describes how to configure OpenStack to use one or more Dell Storage Center back ends. The following sections assume that:

  • You intend to use only Dell Storage Center devices and drivers for Block Storage back ends
  • The OpenStack Overcloud has already been deployed through Director, with a properly functioning Block Storage service
  • The Dell storage device has already been deployed and configured as a storage repository
  • You have the necessary credentials for connecting to the Enterprise Manager and Dell Storage Center Group
  • You have the username and password of an account with elevated privileges. You can use the same account that was created to deploy the Overcloud; in Creating a Director Installation User, we create and use the stack user for this purpose.

When you deploy RHOSP with the director, you must also define and orchestrate all major overcloud settings with the director. This ensures that the settings persist through any further overcloud updates. For more information about deploying RHOSP with the director, see the Director Installation and Usage guide.

This document explains how to orchestrate your Dell Storage Center back end configuration to the Block Storage service on the overcloud. This document does not discuss the different deployment configurations that are possible with the back end. For more information about the different deployment configurations that are available, see the product documentation for your device.

Note

Director has the integrated components to deploy only a single instance of a Dell Storage Center back end.

Deploying multiple instances of a Dell Storage Center back end requires a custom back end configuration. For more information, see the Custom Block Storage Back End Deployment Guide.

Chapter 2. Process Description

To configure the Dell Storage Center back end, complete the following procedures:

  1. Define a single back end. To configure a single Dell Storage Center device as a back end, edit the default environment file from the core heat template collection and include this file in the overcloud deployment. For more information, see Chapter 3, Define a Single Back End.
  2. Deploy the configured back end and invoke it through the director. For more information, see Chapter 4, Deploy the Configured Back End.
  3. Test the configured back end to verify that you can create volumes on it. For more information, see Chapter 5, Test the Configured Back End.

Red Hat OpenStack Platform includes the drivers that are required for all Dell devices supported by the Block Storage service. Director also includes the puppet manifests, environment files, and Orchestration (heat) templates that are necessary to integrate the device as a back end to the overcloud.
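For example, you can confirm that the integrated environment file and the composable service template that director uses for this back end are present on the undercloud node:

$ ls /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellsc-config.yaml
$ ls /usr/share/openstack-tripleo-heat-templates/puppet/services/cinder-backend-dellsc.yaml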

Chapter 3. Define a Single Back End

Important

This section describes the deployment of a single back end. Deploying multiple instances of a Dell Storage Center back end requires a custom back end configuration. For more information, see the Custom Block Storage Back End Deployment Guide.

With a director deployment, the easiest way to define a single Dell Storage Center back end is through the integrated environment file. This file is located at the following path on the undercloud node:

/usr/share/openstack-tripleo-heat-templates/environments/cinder-dellsc-config.yaml

Copy this file to a local path where you can edit and invoke it later. For example, to copy it to ~/templates/:

$ cp /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellsc-config.yaml ~/templates/

Afterwards, open the copy (~/templates/cinder-dellsc-config.yaml) and edit it as you see fit. The following snippet displays the default contents of this file:

# A Heat environment file which can be used to enable a
# a Cinder  Dell Storage Center ISCSI backend, configured via puppet
resource_registry:
  OS::TripleO::Services::CinderBackendDellSc: ../puppet/services/cinder-backend-dellsc.yaml # 1

parameter_defaults: # 2
  CinderEnableDellScBackend: true # 3
  CinderDellScBackendName: 'tripleo_dellsc'
  CinderDellScSanIp: ''
  CinderDellScSanLogin: 'Admin'
  CinderDellScSanPassword: ''
  CinderDellScSsn: '64702'
  CinderDellScIscsiIpAddress: ''
  CinderDellScIscsiPort: '3260'
  CinderDellScApiPort: '3033'
  CinderDellScServerFolder: 'dellsc_server'
  CinderDellScVolumeFolder: 'dellsc_volume'
1
The OS::TripleO::Services::CinderBackendDellSc parameter in the resource_registry section refers to a composable service template named cinder-backend-dellsc.yaml. The director uses this template to load the necessary resources for configuring the back end. By default, the parameter specifies the path to cinder-backend-dellsc.yaml relative to the location of the environment file. Because you are now working from a copy in ~/templates/, that relative path no longer resolves, so update this parameter with the absolute path to the file:
resource_registry:
  OS::TripleO::Services::CinderBackendDellSc: /usr/share/openstack-tripleo-heat-templates/puppet/services/cinder-backend-dellsc.yaml
2
The parameter_defaults section contains your back end definition. Specifically, it contains the parameters that the director passes to the resources defined in cinder-backend-dellsc.yaml.
3
The CinderEnableDellScBackend: true line instructs the director to use the puppet manifests necessary for the default configuration of a Dell Storage Center back end. This includes defining the volume driver that the Block Storage service uses (specifically, cinder.volume.drivers.dell_emc.sc.storagecenter_iscsi.SCISCSIDriver).
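As a rough illustration of the result, the back end stanza that the puppet manifests write to /etc/cinder/cinder.conf on the controller nodes is conceptually similar to the following sketch, assuming the default parameter values shown above. The exact generated contents can differ, and the san_ip, san_password, and iscsi_ip_address values here are placeholders only:

[DEFAULT]
enabled_backends = tripleo_dellsc

[tripleo_dellsc]
volume_backend_name = tripleo_dellsc
volume_driver = cinder.volume.drivers.dell_emc.sc.storagecenter_iscsi.SCISCSIDriver
# san_ip, san_password, and iscsi_ip_address below are illustrative placeholders
san_ip = 192.0.2.10
san_login = Admin
san_password = examplepassword
iscsi_ip_address = 192.0.2.11
iscsi_port = 3260
dell_sc_ssn = 64702
dell_sc_api_port = 3033
dell_sc_server_folder = dellsc_server
dell_sc_volume_folder = dellsc_volume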

To define your Dell Storage Center back end, edit the settings in the parameter_defaults section as you see fit. The following table explains each parameter, and also lists its corresponding /etc/cinder/cinder.conf setting.

Table 3.1. Dell Storage Center settings

The /etc/cinder/cinder.conf setting that corresponds to each parameter is shown in parentheses.

  • CinderDellScBackendName (volume_backend_name)
    (Required) An arbitrary name to identify the volume back end.
  • CinderDellScSanIp (san_ip)
    (Optional) The IP address used to reach the Dell Enterprise Manager.
  • CinderDellScSanLogin (san_login)
    (Required) The user name to log in to the Dell Enterprise Manager at the CinderDellScSanIp. The default user name is Admin.
  • CinderDellScSanPassword (san_password)
    (Optional) The corresponding password of CinderDellScSanLogin.
  • CinderDellScSsn (dell_sc_ssn)
    (Required) The Dell Storage Center serial number to use.
  • CinderDellScIscsiIpAddress (iscsi_ip_address)
    (Optional) The Dell Storage Center iSCSI IP address to use for creating volumes and snapshots.
  • CinderDellScIscsiPort (iscsi_port)
    (Optional) The iSCSI port of the Dell Storage Center array.
  • CinderDellScApiPort (dell_sc_api_port)
    (Optional) The Dell Enterprise Manager API port.
  • CinderDellScServerFolder (dell_sc_server_folder)
    (Required) The server folder in Dell Storage Center where new server definitions are placed.
  • CinderDellScVolumeFolder (dell_sc_volume_folder)
    (Required) The volume folder in Dell Storage Center where new volumes are created.
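
For example, a completed ~/templates/cinder-dellsc-config.yaml might look like the following. The CinderDellScSanIp, CinderDellScSanPassword, and CinderDellScIscsiIpAddress values are illustrative placeholders; replace them with the values for your own environment:

resource_registry:
  OS::TripleO::Services::CinderBackendDellSc: /usr/share/openstack-tripleo-heat-templates/puppet/services/cinder-backend-dellsc.yaml

parameter_defaults:
  CinderEnableDellScBackend: true
  CinderDellScBackendName: 'tripleo_dellsc'
  CinderDellScSanIp: '192.0.2.10'
  CinderDellScSanLogin: 'Admin'
  CinderDellScSanPassword: 'examplepassword'
  CinderDellScSsn: '64702'
  CinderDellScIscsiIpAddress: '192.0.2.11'
  CinderDellScIscsiPort: '3260'
  CinderDellScApiPort: '3033'
  CinderDellScServerFolder: 'dellsc_server'
  CinderDellScVolumeFolder: 'dellsc_volume'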

Chapter 4. Deploy the Configured Back End

The director installation uses a non-root user to execute commands, including orchestrating the deployment of the Block Storage back end. In Creating a Director Installation User, a user named stack is created for this purpose and configured with elevated privileges.

To deploy the back end that you configured in Chapter 3, Define a Single Back End, first log in to the undercloud as the stack user. Then deploy the back end (defined in the edited ~/templates/cinder-dellsc-config.yaml) by running the following command:

$ openstack overcloud deploy --templates -e ~/templates/cinder-dellsc-config.yaml
Important

If you passed any extra environment files when you created the overcloud, pass them again here using the -e option to avoid making undesired changes to the overcloud. For more information, see Modifying the Overcloud Environment in the Director Installation and Usage guide.
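
For example, if your original deployment command included other environment files, the updated command might look like the following; the node-info.yaml and overcloud_images.yaml file names here are illustrative:

$ openstack overcloud deploy --templates \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/templates/overcloud_images.yaml \
  -e ~/templates/cinder-dellsc-config.yaml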

Test the back end after director orchestration is complete.

Chapter 5. Test the Configured Back End

After you deploy the back end, test that you can successfully create volumes on it.

Procedure

  1. Log in to the undercloud node as the stack user.
  2. Source the overcloudrc credentials file:

    $ source /home/stack/overcloudrc
  3. Create a new volume type that you can use to specify the new back end. To create a volume type called dellsc, run:

    $ cinder type-create dellsc
  4. Map the new volume type to the new back end, tripleo_dellsc, as defined through the CinderDellScBackendName parameter in Chapter 3, Define a Single Back End:

    $ cinder type-key dellsc set volume_backend_name=tripleo_dellsc
  5. Create a new 2GB volume on the new back end:

    $ cinder create --volume-type dellsc 2
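
After the volume is created, you can confirm that it was placed on the Dell Storage Center back end by checking its host attribute; the back end name (tripleo_dellsc) appears after the @ sign in the os-vol-host-attr:host field. Replace <volume-id> with the ID returned by the previous command:

$ cinder show <volume-id> | grep "os-vol-host-attr:host"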
Note

For more information, see Accessing the Overcloud in the Director Installation and Usage guide.

Legal Notice

Copyright © 2021 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.