Red Hat Training

A Red Hat training course is available for Red Hat OpenStack Platform

Chapter 2. Block Storage

The OpenStack Block Storage service provides persistent storage for Compute instances, working with many different storage drivers that you can configure.

2.1. Volume drivers

To use different volume drivers for the cinder-volume service, use the parameters described in these sections.
To set a volume driver, use the volume_driver flag. The default is:
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver

2.1.1. Ceph RADOS Block Device (RBD)

If you use KVM or QEMU as your hypervisor, you can configure the Compute service to use Ceph RADOS block devices (RBD) for volumes.
Ceph is a massively scalable, open source, distributed storage system. It comprises an object store, a block store, and a POSIX-compliant distributed file system. The platform can auto-scale to the exabyte level and beyond. It runs on commodity hardware, is self-healing and self-managing, and has no single point of failure. Ceph is in the Linux kernel and is integrated with the OpenStack cloud operating system. Due to its open-source nature, you can install and use this portable storage platform in public or private clouds.

RADOS

Ceph is based on RADOS: Reliable Autonomic Distributed Object Store. RADOS distributes objects across the storage cluster and replicates objects for fault tolerance. RADOS contains the following major components:
  • Object Storage Device (OSD) Daemon. The storage daemon for the RADOS service, which interacts with the OSD (physical or logical storage unit for your data).
    You must run this daemon on each server in your cluster. For each OSD, you can have an associated hard disk drive. For performance purposes, pool your hard disk drives with RAID arrays, logical volume management (LVM), or B-tree file system (Btrfs) pooling. By default, the following pools are created: data, metadata, and RBD.
  • Meta-Data Server (MDS). Stores metadata. MDSs build a POSIX file system on top of objects for Ceph clients. However, if you do not use the Ceph file system, you do not need a metadata server.
  • Monitor (MON). A lightweight daemon that handles all communications with external applications and clients. It also provides a consensus for distributed decision making in a Ceph/RADOS cluster. For instance, when you mount a Ceph share on a client, you point to the address of a MON server. The MON checks the state and the consistency of the data. In an ideal setup, run at least three ceph-mon daemons on separate servers.
Ceph developers recommend XFS for production deployments, and Btrfs for testing, development, and any non-critical deployments. Btrfs has the correct feature set and roadmap to serve Ceph in the long term, but XFS and ext4 provide the necessary stability for today's deployments.
Note
If using Btrfs, ensure that you use the correct version (see Ceph Dependencies).
For more information about usable file systems, see ceph.com/ceph-storage/file-system/.

Ways to store, use, and expose data

To store and access your data, you can use the following storage systems:
  • RADOS. Use as an object store; this is the default storage mechanism.
  • RBD. Use as a block device. The Linux kernel RBD (RADOS block device) driver allows striping a Linux block device over multiple distributed object store data objects. It is compatible with the KVM RBD image.
  • CephFS. Use as a POSIX-compliant file system.
Ceph exposes RADOS; you can access it through the following interfaces:
  • RADOS Gateway. OpenStack Object Storage and Amazon-S3 compatible RESTful interface (see RADOS_Gateway).
  • librados, and its related C/C++ bindings.
  • RBD and QEMU-RBD. Linux kernel and QEMU block devices that stripe data across multiple objects.

Driver options

The following table contains the configuration options supported by the Ceph RADOS Block Device driver.
Deprecation notice
The volume_tmp_dir option has been deprecated and replaced by image_conversion_dir.

Table 2.1. Description of Ceph storage configuration options

Configuration option = Default value Description
[DEFAULT]
rados_connect_timeout = -1 (IntOpt) Timeout value (in seconds) used when connecting to ceph cluster. If value < 0, no timeout is set and default librados value is used.
rados_connection_interval = 5 (IntOpt) Interval value (in seconds) between connection retries to ceph cluster.
rados_connection_retries = 3 (IntOpt) Number of retries if connection to ceph cluster failed.
rbd_ceph_conf = (StrOpt) Path to the ceph configuration file
rbd_cluster_name = ceph (StrOpt) The name of ceph cluster
rbd_flatten_volume_from_snapshot = False (BoolOpt) Flatten volumes created from snapshots to remove dependency from volume to snapshot
rbd_max_clone_depth = 5 (IntOpt) Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning.
rbd_pool = rbd (StrOpt) The RADOS pool where rbd volumes are stored
rbd_secret_uuid = None (StrOpt) The libvirt uuid of the secret for the rbd_user volumes
rbd_store_chunk_size = 4 (IntOpt) Volumes will be chunked into objects of this size (in megabytes).
rbd_user = None (StrOpt) The RADOS client name for accessing rbd volumes - only set when using cephx authentication
volume_tmp_dir = None (StrOpt) Directory where temporary image files are stored when the volume driver does not write them directly to the volume. Warning: this option is now deprecated, use image_conversion_dir instead.
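As an illustration of how these options combine, a minimal RBD back end in /etc/cinder/cinder.conf might look like the following sketch. The back-end name, pool, and user shown here are assumptions for the example, not defaults; replace RBD_SECRET_UUID with the libvirt secret UUID for your deployment:

```
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = RBD_SECRET_UUID
```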

2.1.2. Dell EqualLogic volume driver

The Dell EqualLogic volume driver interacts with configured EqualLogic arrays and supports various operations.

Supported operations

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Clone a volume.
The OpenStack Block Storage service supports:
  • Multiple instances of Dell EqualLogic Groups or Dell EqualLogic Group Storage Pools and multiple pools on a single array.
The Dell EqualLogic volume driver's ability to access the EqualLogic Group is dependent upon the generic block storage driver's SSH settings in the /etc/cinder/cinder.conf file (see Section 2.3, “Block Storage sample configuration files” for reference).

Table 2.2. Description of Dell EqualLogic volume driver configuration options

Configuration option = Default value Description
[DEFAULT]
eqlx_chap_login = admin (StrOpt) Existing CHAP account name. Note that this option is deprecated in favour of "chap_username" as specified in cinder/volume/driver.py and will be removed in next release.
eqlx_chap_password = password (StrOpt) Password for specified CHAP account name. Note that this option is deprecated in favour of "chap_password" as specified in cinder/volume/driver.py and will be removed in the next release
eqlx_cli_max_retries = 5 (IntOpt) Maximum retry count for reconnection. Default is 5.
eqlx_cli_timeout = 30 (IntOpt) Timeout for the Group Manager cli command execution. Default is 30. Note that this option is deprecated in favour of "ssh_conn_timeout" as specified in cinder/volume/drivers/san/san.py and will be removed in M release.
eqlx_group_name = group-0 (StrOpt) Group name to use for creating volumes. Defaults to "group-0".
eqlx_pool = default (StrOpt) Pool in which volumes will be created. Defaults to "default".
eqlx_use_chap = False (BoolOpt) Use CHAP authentication for targets. Note that this option is deprecated in favour of "use_chap_auth" as specified in cinder/volume/driver.py and will be removed in next release.
The following sample /etc/cinder/cinder.conf configuration lists the relevant settings for a typical Block Storage service using a single Dell EqualLogic Group:

Example 2.1. Default (single-instance) configuration

[DEFAULT]
#Required settings

volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
san_ip = IP_EQLX
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL

#Optional settings

san_thin_provision = true|false
eqlx_use_chap = true|false
eqlx_chap_login = EQLX_UNAME
eqlx_chap_password = EQLX_PW
eqlx_cli_max_retries = 5
san_ssh_port = 22
ssh_conn_timeout = 30
san_private_key = SAN_KEY_PATH
ssh_min_pool_conn = 1
ssh_max_pool_conn = 5
In this example, replace the following variables accordingly:
IP_EQLX
The IP address used to reach the Dell EqualLogic Group through SSH. This field has no default value.
SAN_UNAME
The user name used to log in to the Group Manager via SSH at the san_ip. Default user name is grpadmin.
SAN_PW
The corresponding password of SAN_UNAME. Not used when san_private_key is set. Default password is password.
EQLX_GROUP
The group to be used for a pool where the Block Storage service will create volumes and snapshots. Default group is group-0.
EQLX_POOL
The pool where the Block Storage service will create volumes and snapshots. Default pool is default. This option cannot be used for multiple pools utilized by the Block Storage service on a single Dell EqualLogic Group.
EQLX_UNAME
The CHAP login account for each volume in a pool, if eqlx_use_chap is set to true. Default account name is chapadmin.
EQLX_PW
The corresponding password of EQLX_UNAME. The default password is randomly generated in hexadecimal, so you must set this password manually.
SAN_KEY_PATH (optional)
The filename of the private key used for SSH authentication. This provides password-less login to the EqualLogic Group. Not used when san_password is set. There is no default value.
In addition, enable thin provisioning for SAN volumes using the default san_thin_provision = true setting.

Example 2.2. Multi back-end Dell EqualLogic configuration

The following example shows the typical configuration for a Block Storage service that uses two Dell EqualLogic back ends:
enabled_backends = backend1,backend2
san_ssh_port = 22
ssh_conn_timeout = 30
san_thin_provision = true

[backend1]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = backend1
san_ip = IP_EQLX1
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL

[backend2]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = backend2
san_ip = IP_EQLX2
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL
In this example:
  • Thin provisioning for SAN volumes is enabled (san_thin_provision = true). This is recommended when setting up Dell EqualLogic back ends.
  • Each Dell EqualLogic back-end configuration ([backend1] and [backend2]) has the same required settings as a single back-end configuration, with the addition of volume_backend_name.
  • The san_ssh_port option is set to its default value, 22. This option sets the port used for SSH.
  • The ssh_conn_timeout option is also set to its default value, 30. This option sets the timeout in seconds for CLI commands over SSH.
  • IP_EQLX1 and IP_EQLX2 refer to the IP addresses used to reach the Dell EqualLogic Groups of backend1 and backend2 through SSH, respectively.
For information on configuring multiple back ends, see Configure a multiple-storage back end.

2.1.3. Dell Storage Center Fibre Channel and iSCSI drivers

The Dell Storage Center volume driver interacts with configured Storage Center arrays.
The Dell Storage Center driver manages Storage Center arrays through Enterprise Manager. Enterprise Manager connection settings and Storage Center options are defined in the cinder.conf file.
Prerequisite: Dell Enterprise Manager 2015 R1 or later must be used.

Supported operations

The Dell Storage Center volume driver provides the following Cinder volume operations:
  • Create, delete, attach (map), and detach (unmap) volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.

Extra spec options

Volume type extra specs can be used to select different Storage Profiles.
Storage Profiles control how Storage Center manages volume data. For a given volume, the selected Storage Profile dictates which disk tier accepts initial writes, as well as how data progression moves data between tiers to balance performance and cost. Predefined Storage Profiles are the most effective way to manage data in Storage Center.
By default, if no Storage Profile is specified in the volume extra specs, the default Storage Profile for the user account configured for the Block Storage driver is used. To use a Storage Profile other than the default, set the extra spec key storagetype:storageprofile to the name of the Storage Profile on the Storage Center.
For ease of use from the command line, spaces in Storage Profile names are ignored. As an example, here is how to define two volume types using the High Priority and Low Priority Storage Profiles:
$ cinder type-create "GoldVolumeType"
$ cinder type-key "GoldVolumeType" set storagetype:storageprofile=highpriority
$ cinder type-create "BronzeVolumeType"
$ cinder type-key "BronzeVolumeType" set storagetype:storageprofile=lowpriority

iSCSI configuration

Use the following instructions to update the configuration file for iSCSI:

Example 2.3. Sample iSCSI Configuration

default_volume_type = delliscsi
enabled_backends = delliscsi

[delliscsi]
# Name to give this storage backend
volume_backend_name = delliscsi
# The iSCSI driver to load
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
# IP address of Enterprise Manager
san_ip = 172.23.8.101
# Enterprise Manager user name
san_login = Admin
# Enterprise Manager password
san_password = secret
# The Storage Center iSCSI IP address
iscsi_ip_address = 192.168.0.20
# The Storage Center serial number to use
dell_sc_ssn = 64702

# ==Optional settings==
# The Enterprise Manager API port
dell_sc_api_port = 3033
# Server folder to place new server definitions
dell_sc_server_folder = devstacksrv
# Volume folder to place created volumes
dell_sc_volume_folder = devstackvol/Cinder
# The iSCSI IP port
iscsi_port = 3260

Fibre Channel configuration

Use the following instructions to update the configuration file for Fibre Channel:

Example 2.4. Sample FC configuration

default_volume_type = dellfc
enabled_backends = dellfc

[dellfc]
# Name to give this storage backend
volume_backend_name = dellfc
# The FC driver to load
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_fc.DellStorageCenterFCDriver
# IP address of Enterprise Manager
san_ip = 172.23.8.101
# Enterprise Manager user name
san_login = Admin
# Enterprise Manager password
san_password = secret
# The Storage Center serial number to use
dell_sc_ssn = 64702

# Optional settings

# The Enterprise Manager API port
dell_sc_api_port = 3033
# Server folder to place new server definitions
dell_sc_server_folder = devstacksrv
# Volume folder to place created volumes
dell_sc_volume_folder = devstackvol/Cinder

Driver options

The following table contains the configuration options specific to the Dell Storage Center volume driver.

Table 2.3. Description of Dell Storage Center volume driver configuration options

Configuration option = Default value Description
[DEFAULT]
dell_sc_api_port = 3033 (IntOpt) Dell API port
dell_sc_server_folder = openstack (StrOpt) Name of the server folder to use on the Storage Center
dell_sc_ssn = 64702 (IntOpt) Storage Center System Serial Number
dell_sc_verify_cert = False (BoolOpt) Enable HTTPS SC certificate verification.
dell_sc_volume_folder = openstack (StrOpt) Name of the volume folder to use on the Storage Center

2.1.4. EMC ScaleIO Block Storage driver configuration

ScaleIO is a software-only solution that uses existing servers' local disks and LAN to create a virtual SAN that has all of the benefits of external storage, but at a fraction of the cost and complexity. Using the driver, Block Storage hosts can connect to a ScaleIO Storage cluster.
This section explains how to configure and connect the block storage nodes to a ScaleIO storage cluster.

2.1.4.1. Support matrix

2.1.4.2. Deployment prerequisites

  • ScaleIO Gateway must be installed and accessible in the network. For installation steps, refer to the Preparing the installation Manager and the Gateway section in ScaleIO Deployment Guide. See Section 2.1.4.2.1, “Official documentation”.
  • ScaleIO Data Client (SDC) must be installed on all OpenStack nodes.
2.1.4.2.1. Official documentation
To find the ScaleIO documentation:
  1. From the left-side panel, select the relevant version (1.32 or 2.0).
  2. Search for "ScaleIO Installation Guide 1.32" or "ScaleIO 2.0 Deployment Guide" accordingly.

2.1.4.3. Supported operations

  • Create, delete, clone, attach, and detach volumes
  • Create and delete volume snapshots
  • Create a volume from a snapshot
  • Copy an image to a volume
  • Copy a volume to an image
  • Extend a volume
  • Get volume statistics
  • Manage and unmanage a volume
  • Create, list, update, and delete consistency groups
  • Create, list, update, and delete consistency group snapshots

2.1.4.4. ScaleIO QoS support

QoS support for the ScaleIO driver includes the ability to set the following capabilities in the Block Storage API cinder.api.contrib.qos_specs_manage QoS specs extension module:
  • maxIOPS
  • maxBWS
The QoS keys above must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:
$ cinder help qos-create

$ cinder help qos-key

$ cinder help qos-associate
maxBWS
The QoS I/O bandwidth rate limit in KBs. If not set, the I/O bandwidth rate has no limit. The setting must be a multiple of 1024.
maxIOPS
The QoS I/O rate limit in I/O operations per second. If not set, the I/O rate has no limit. The setting must be larger than 10.
Since the limits are per SDC, they will be applied after the volume is attached to an instance, and thus to a compute node/SDC.
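For instance, a QoS spec carrying both limits could be created and linked to an existing volume type as follows. The spec name and limit values here are illustrative; replace QOS_ID and VOLUME_TYPE_ID with the real identifiers from your deployment:

```
$ cinder qos-create sio_qos maxIOPS=5000 maxBWS=10240
$ cinder qos-associate QOS_ID VOLUME_TYPE_ID
```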

2.1.4.5. ScaleIO thin provisioning support

The Block Storage driver supports creation of thin-provisioned volumes, in addition to thick provisioning. The provisioning type settings should be added as an extra specification of the volume type, as follows:
sio:provisioning_type = thin|thick
If the provisioning type value is not specified, the default value of "thick" will be used.
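For example, a thin-provisioned volume type could be defined from the command line as follows (the type name sio_thin is illustrative):

```
$ cinder type-create sio_thin
$ cinder type-key sio_thin set sio:provisioning_type=thin
```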

2.1.4.6. ScaleIO Block Storage driver configuration

Edit the cinder.conf file by adding the configuration below under the [DEFAULT] section of the file in case of a single back end, or under a separate section in case of multiple back ends (for example [ScaleIO]). The configuration file is usually located at /etc/cinder/cinder.conf.
For a configuration example, refer to Section 2.1.4.8, “Configuration example”.
2.1.4.6.1. ScaleIO driver name
Configure the driver name by adding the following parameter:
volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
2.1.4.6.2. ScaleIO MDM server IP
The ScaleIO Meta Data Manager monitors and maintains the available resources and permissions.
To retrieve the MDM server IP address, use the drv_cfg --query_mdms command.
Configure the MDM server IP address by adding the following parameter:
san_ip = ScaleIO GATEWAY IP
2.1.4.6.3. ScaleIO Protection Domain name
ScaleIO allows multiple Protection Domains (groups of SDSs that provide backup for each other).
To retrieve the available Protection Domains, use the command scli --query_all and search for the Protection Domains section.
Configure the Protection Domain for newly created volumes by adding the following parameter:
sio_protection_domain_name = ScaleIO Protection Domain
2.1.4.6.4. ScaleIO Storage Pool name
A ScaleIO Storage Pool is a set of physical devices in a Protection Domain.
To retrieve the available Storage Pools, use the command scli --query_all and search for available Storage Pools.
Configure the Storage Pool for newly created volumes by adding the following parameter:
sio_storage_pool_name = ScaleIO Storage Pool
2.1.4.6.5. ScaleIO Storage Pools
Multiple Storage Pools and Protection Domains can be listed for use by the virtual machines.
To retrieve the available Storage Pools, use the command scli --query_all and search for available Storage Pools.
Configure the available Storage Pools by adding the following parameter:
sio_storage_pools = Comma-separated list of protection domain:storage pool name
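The expected value is a comma-separated list of protection domain:storage pool pairs. As a minimal sketch of how such a value breaks down (the helper below is illustrative, not part of the driver):

```python
def parse_storage_pools(value):
    """Split a sio_storage_pools-style value such as
    'Domain1:Pool1,Domain2:Pool2' into (domain, pool) tuples."""
    pairs = []
    for entry in value.split(","):
        domain, _, pool = entry.strip().partition(":")
        pairs.append((domain, pool))
    return pairs

print(parse_storage_pools("Domain1:Pool1,Domain2:Pool2"))
# → [('Domain1', 'Pool1'), ('Domain2', 'Pool2')]
```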
2.1.4.6.6. ScaleIO user credentials
Block Storage requires a ScaleIO user with administrative privileges. ScaleIO recommends creating a dedicated OpenStack user account that has an administrative user role.
Refer to the ScaleIO User Guide for details on user account management.
Configure the user credentials by adding the following parameters:
san_login = ScaleIO username

san_password = ScaleIO password

2.1.4.7. Multiple back ends

Configuring multiple storage back ends allows you to create several back-end storage solutions that serve the same Compute resources.
When a volume is created, the scheduler selects the appropriate back end to handle the request, according to the specified volume type.

2.1.4.8. Configuration example

cinder.conf example file
You can update the cinder.conf file by editing the necessary parameters as follows:
[DEFAULT]
enabled_backends = scaleio

[scaleio]
volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
volume_backend_name = scaleio
san_ip = GATEWAY_IP
sio_protection_domain_name = Default_domain
sio_storage_pool_name = Default_pool
sio_storage_pools = Domain1:Pool1,Domain2:Pool2
san_login = SIO_USER
san_password = SIO_PASSWD

2.1.4.9. Configuration options

The ScaleIO driver supports these configuration options:

Table 2.4. Description of EMC SIO volume driver configuration options

Configuration option = Default value Description
[DEFAULT]
sio_force_delete = False (BoolOpt) Whether to allow force delete.
sio_protection_domain_id = None (StrOpt) Protection domain id.
sio_protection_domain_name = None (StrOpt) Protection domain name.
sio_rest_server_port = 443 (StrOpt) REST server port.
sio_round_volume_capacity = True (BoolOpt) Whether to round volume capacity.
sio_server_certificate_path = None (StrOpt) Server certificate path.
sio_storage_pool_id = None (StrOpt) Storage pool id.
sio_storage_pool_name = None (StrOpt) Storage pool name.
sio_storage_pools = None (StrOpt) Storage pools.
sio_unmap_volume_before_deletion = False (BoolOpt) Whether to unmap volume before deletion.
sio_verify_server_certificate = False (BoolOpt) Whether to verify server certificate.

2.1.5. EMC VMAX iSCSI and FC drivers

The EMC VMAX drivers, EMCVMAXISCSIDriver and EMCVMAXFCDriver, support the use of EMC VMAX storage arrays under OpenStack Block Storage. They both provide equivalent functions and differ only in support for their respective host attachment methods.
The drivers perform volume operations by communicating with the back-end VMAX storage. They use a CIM client in Python called PyWBEM to perform CIM operations over HTTP.
The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. It is a CIM server that enables CIM clients to perform CIM operations over HTTP by using SMI-S in the back-end for VMAX storage operations.
The EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. It supports the VMAX storage system.

2.1.5.1. System requirements

EMC SMI-S Provider V4.6.2.8 and higher is required. You can download SMI-S from the EMC support web site (login is required). See the EMC SMI-S Provider release notes for installation instructions.
EMC storage VMAX Family is supported.

2.1.5.2. Supported operations

VMAX drivers support these operations:
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Retype a volume.
  • Create a volume from a snapshot.
VMAX drivers also support the following features:
  • FAST automated storage tiering policy.
  • Dynamic masking view creation.
  • Striped volume creation.

2.1.5.3. Set up the VMAX drivers

Procedure 2.1. To set up the EMC VMAX drivers

  1. Install the python-pywbem package for your distribution. To install the python-pywbem package for Red Hat Enterprise Linux, CentOS, or Fedora:
    # yum install pywbem
  2. Download SMI-S from PowerLink and install it. Add your VMAX arrays to SMI-S.
    For information, see Section 2.1.5.3.1, “Set up SMI-S” and the SMI-S release notes.
  3. Configure connectivity. For FC driver, see Section 2.1.5.3.4, “FC Zoning with VMAX”. For iSCSI driver, see Section 2.1.5.3.5, “iSCSI with VMAX”.
2.1.5.3.1. Set up SMI-S
You can install SMI-S on a non-OpenStack host. Supported platforms include different flavors of Windows, Red Hat, and SUSE Linux. SMI-S can be installed on a physical server or a VM hosted by an ESX server. Note that the supported hypervisor for a VM running SMI-S is ESX only. See the EMC SMI-S Provider release notes for more information on supported platforms and installation instructions.
Note
You must discover storage arrays on the SMI-S server before you can use the VMAX drivers. Follow instructions in the SMI-S release notes.
SMI-S is usually installed at /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure SMI-S, go to that directory and type TestSmiProvider.exe.
Use addsys in TestSmiProvider.exe to add an array. Use dv and examine the output after the array is added. Make sure that the arrays are recognized by the SMI-S server before using the EMC VMAX drivers.
2.1.5.3.2. cinder.conf configuration file
Make the following changes in /etc/cinder/cinder.conf.
Add the following entries, where 10.10.61.45 is the IP address of the VMAX iSCSI target:
enabled_backends = CONF_GROUP_ISCSI, CONF_GROUP_FC
[CONF_GROUP_ISCSI]
iscsi_ip_address = 10.10.61.45
volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml
volume_backend_name=ISCSI_backend
[CONF_GROUP_FC]
volume_driver = cinder.volume.drivers.emc.emc_vmax_fc.EMCVMAXFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml
volume_backend_name=FC_backend
In this example, two backend configuration groups are enabled: CONF_GROUP_ISCSI and CONF_GROUP_FC. Each configuration group has a section describing unique parameters for connections, drivers, the volume_backend_name, and the name of the EMC-specific configuration file containing additional settings. Note that the file name is in the format /etc/cinder/cinder_emc_config_[confGroup].xml.
Once the cinder.conf and EMC-specific configuration files have been created, cinder commands need to be issued in order to create and associate OpenStack volume types with the declared volume_backend_names:
$ cinder type-create VMAX_ISCSI
$ cinder type-key VMAX_ISCSI set volume_backend_name=ISCSI_backend
$ cinder type-create VMAX_FC
$ cinder type-key VMAX_FC set volume_backend_name=FC_backend
By issuing these commands, the Block Storage volume type VMAX_ISCSI is associated with the ISCSI_backend, and the type VMAX_FC is associated with the FC_backend.
Restart the cinder-volume service.
2.1.5.3.3. cinder_emc_config_CONF_GROUP_ISCSI.xml configuration file
Create the /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml file. You do not need to restart the service for this change.
Add the following lines to the XML file:
<?xml version="1.0" encoding="UTF-8" ?>
<EMC>
    <EcomServerIp>1.1.1.1</EcomServerIp>
    <EcomServerPort>00</EcomServerPort>
    <EcomUserName>user1</EcomUserName>
    <EcomPassword>password1</EcomPassword>
    <PortGroups>
      <PortGroup>OS-PORTGROUP1-PG</PortGroup>
      <PortGroup>OS-PORTGROUP2-PG</PortGroup>
    </PortGroups>
   <Array>111111111111</Array>
   <Pool>FC_GOLD1</Pool>
   <FastPolicy>GOLD1</FastPolicy>
</EMC>
Where:
  • EcomServerIp and EcomServerPort are the IP address and port number of the ECOM server which is packaged with SMI-S.
  • EcomUserName and EcomPassword are credentials for the ECOM server.
  • PortGroups supplies the names of VMAX port groups that have been pre-configured to expose volumes managed by this backend. Each supplied port group should have sufficient number and distribution of ports (across directors and switches) as to ensure adequate bandwidth and failure protection for the volume connections. PortGroups can contain one or more port groups of either iSCSI or FC ports. When a dynamic masking view is created by the VMAX driver, the port group is chosen randomly from the PortGroup list, to evenly distribute load across the set of groups provided. Make sure that the PortGroups set contains either all FC or all iSCSI port groups (for a given backend), as appropriate for the configured driver (iSCSI or FC).
  • The Array tag holds the unique VMAX array serial number.
  • The Pool tag holds the unique pool name within a given array. For backends not using FAST automated tiering, the pool is a single pool that has been created by the administrator. For backends exposing FAST policy automated tiering, the pool is the bind pool to be used with the FAST policy.
  • The FastPolicy tag conveys the name of the FAST Policy to be used. By including this tag, volumes managed by this backend are treated as under FAST control. Omitting the FastPolicy tag means FAST is not enabled on the provided storage pool.
2.1.5.3.4. FC Zoning with VMAX
Zone Manager is recommended when using the VMAX FC driver, especially for larger configurations where pre-zoning would be too complex and open-zoning would raise security concerns.
2.1.5.3.5. iSCSI with VMAX
  • Make sure the iscsi-initiator-utils package is installed on the host (use apt-get, zypper, or yum, depending on Linux flavor).
  • Verify host is able to ping VMAX iSCSI target ports.

2.1.5.4. VMAX masking view and group naming info

Masking view names
Masking views are dynamically created by the VMAX FC and iSCSI drivers using the following naming conventions:
OS-[shortHostName][poolName]-I-MV (for Masking Views using iSCSI)
OS-[shortHostName][poolName]-F-MV (for Masking Views using FC)
Initiator group names
For each host that is attached to VMAX volumes using the drivers, an initiator group is created or re-used (per attachment type). All initiators of the appropriate type known for that host are included in the group. At each new attach volume operation, the VMAX driver retrieves the initiators (either WWNNs or IQNs) from OpenStack and adds or updates the contents of the Initiator Group as required. Names are of the following format:
OS-[shortHostName]-I-IG (for iSCSI initiators)
OS-[shortHostName]-F-IG (for Fibre Channel initiators)
Note
Hosts attaching to VMAX storage managed by the OpenStack environment cannot also be attached to storage on the same VMAX not being managed by OpenStack. This is due to limitations on VMAX Initiator Group membership.
FA port groups
VMAX array FA ports to be used in a new masking view are chosen from the list provided in the EMC configuration file.
Storage group names
As volumes are attached to a host, they are either added to an existing storage group (if it exists) or a new storage group is created and the volume is then added. Storage groups contain volumes created from a pool (either single-pool or FAST-controlled), attached to a single host, over a single connection type (iSCSI or FC). Names are formed as follows:
OS-[shortHostName][poolName]-I-SG (attached over iSCSI)
OS-[shortHostName][poolName]-F-SG (attached over Fibre Channel)
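The naming conventions above can be sketched as a small helper. This is an illustration of the documented scheme only, not the driver's actual code:

```python
def vmax_names(short_host_name, pool_name, protocol):
    """Build VMAX masking view, initiator group, and storage group
    names per the OS-[shortHostName][poolName]-<I|F>-<suffix> scheme."""
    p = {"iscsi": "I", "fc": "F"}[protocol.lower()]
    return {
        "masking_view": "OS-%s%s-%s-MV" % (short_host_name, pool_name, p),
        "initiator_group": "OS-%s-%s-IG" % (short_host_name, p),
        "storage_group": "OS-%s%s-%s-SG" % (short_host_name, pool_name, p),
    }

print(vmax_names("compute1", "FC_GOLD1", "fc"))
```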

2.1.5.5. Concatenated or striped volumes

To support later expansion of created volumes, the VMAX Block Storage drivers create concatenated volumes as the default layout. If later expansion is not required, users can opt to create striped volumes to optimize I/O performance.
Below is an example of how to create striped volumes. First, create a volume type. Then define the extra spec storagetype:stripecount for the volume type, representing the number of meta members in the striped volume. In the example below, each volume created under the GoldStriped volume type is striped and made up of 4 meta members.
$ cinder type-create GoldStriped
$ cinder type-key GoldStriped set volume_backend_name=GOLD_BACKEND
$ cinder type-key GoldStriped set storagetype:stripecount=4

2.1.6. EMC VNX driver

The EMC VNX driver consists of EMCCLIISCSIDriver and EMCCLIFCDriver, and supports both the iSCSI and FC protocols. EMCCLIISCSIDriver (VNX iSCSI driver) and EMCCLIFCDriver (VNX FC driver) are based on the ISCSIDriver and FCDriver defined in Block Storage, respectively.

2.1.6.1. Overview

The VNX iSCSI driver and VNX FC driver perform volume operations by executing Navisphere CLI (NaviSecCLI), a command-line interface used for management, diagnostics, and reporting functions on VNX.
2.1.6.1.1. System requirements
  • VNX Operational Environment for Block version 5.32 or higher.
  • VNX Snapshot and Thin Provisioning licenses must be activated on the VNX.
  • Navisphere CLI v7.32 or higher must be installed along with the driver.
2.1.6.1.2. Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume.
  • Retype a volume.
  • Get volume statistics.
  • Create and delete consistency groups.
  • Create, list, and delete consistency group snapshots.
  • Modify consistency groups.
  • Efficient non-disruptive volume backup.

2.1.6.2. Preparation

This section contains instructions to prepare the Block Storage nodes to use the EMC VNX driver. You install the Navisphere CLI, install the driver, ensure you have correct zoning configurations, and register the driver.
2.1.6.2.1. Install Navisphere CLI
Navisphere CLI needs to be installed on all Block Storage nodes within an OpenStack deployment. Different versions are available for different platforms; download the version appropriate for yours.
2.1.6.2.2. Check array software
Make sure you have the following software installed for the corresponding features.

Table 2.5. Required software

Feature                               Software Required
------------------------------------  -----------------
All                                   ThinProvisioning
All                                   VNXSnapshots
FAST cache support                    FASTCache
Create volume with type compressed    Compression
Create volume with type deduplicated  Deduplication
2.1.6.2.3. Install EMC VNX driver
Both EMCCLIISCSIDriver and EMCCLIFCDriver are included in the Block Storage installer package:
  • emc_vnx_cli.py
  • emc_cli_fc.py (for EMCCLIFCDriver)
  • emc_cli_iscsi.py (for EMCCLIISCSIDriver)
2.1.6.2.4. Network configuration
For the FC driver, ensure that FC zoning is properly configured between the hosts and the VNX. See Section 2.1.6.8.2, “Register FC port with VNX” for reference.
For the iSCSI driver, make sure your VNX iSCSI ports are accessible by your hosts. See Section 2.1.6.8.3, “Register iSCSI port with VNX” for reference.
You can set initiator_auto_registration=True to avoid registering the ports manually. See Section 2.1.6.3, “Backend configuration” for details of this option.
If you want to set up multipath, see Section 2.1.6.6.1, “Multipath setup”.

2.1.6.3. Backend configuration

Make the following changes in /etc/cinder/cinder.conf file:
Note
Changes to your configuration do not take effect until you restart the cinder service.
2.1.6.3.1. Minimum configuration
Here is a sample minimum back-end configuration. See the following sections for details of each option. Replace EMCCLIFCDriver with EMCCLIISCSIDriver if you are using the iSCSI driver.
[DEFAULT]
enabled_backends = vnx_array1

[vnx_array1]
san_ip = 10.10.72.41
san_login = sysadmin
san_password = sysadmin
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
initiator_auto_registration=True
2.1.6.3.2. Multi-backend configuration
Here is a sample multi-backend configuration. See the following sections for details of each option. Replace EMCCLIFCDriver with EMCCLIISCSIDriver if you are using the iSCSI driver.
[DEFAULT]
enabled_backends=backendA, backendB

[backendA]
storage_vnx_pool_names = Pool_01_SAS, Pool_02_FLASH
san_ip = 10.10.72.41
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
initiator_auto_registration=True

[backendB]
storage_vnx_pool_names = Pool_02_SAS
san_ip = 10.10.26.101
san_login = username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
initiator_auto_registration=True
For more details on multiple back ends, see the OpenStack Cloud Administration Guide.
2.1.6.3.3. Required configurations
2.1.6.3.3.1. IP of the VNX Storage Processors
Specify the IP addresses of SP A and SP B to connect to:
san_ip = <IP of VNX Storage Processor A>
san_secondary_ip = <IP of VNX Storage Processor B>
2.1.6.3.3.2. VNX login credentials
There are two ways to specify the credentials.
  • Use a plain-text username and password.
Supply the username and password as below:
san_login = <VNX account with administrator role>
san_password = <password for VNX account>
storage_vnx_authentication_type = global
Valid values for storage_vnx_authentication_type are: global (default), local, and ldap.
  • Use a security file.
This approach avoids a plain-text password in your cinder configuration file. Supply a security file as below:
storage_vnx_security_file_dir=<path to security file>
Check the Unisphere CLI user guide or Section 2.1.6.8.1, “Authenticate by security file” for how to create a security file.
2.1.6.3.3.3. Path to your Unisphere CLI
Specify the absolute path to your naviseccli.
naviseccli_path = /opt/Navisphere/bin/naviseccli
2.1.6.3.3.4. Driver name
  • For the FC Driver, add the following option:
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
  • For the iSCSI driver, add the following option:
volume_driver=cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
2.1.6.3.4. Optional configurations
2.1.6.3.4.1. VNX pool names
Specify the list of pools to be managed, separated by commas. The pools must already exist on the VNX.
storage_vnx_pool_names = pool 1, pool 2
If this value is not specified, all pools on the array are used.
2.1.6.3.4.2. Initiator auto registration
When initiator_auto_registration=True and the option io_port_list is not specified in cinder.conf, the driver automatically registers initiators to all working target ports of the VNX array during volume attach (initiators that are already registered are skipped).
If you want to register the initiators with only some specific ports, disable this functionality.
When a comma-separated list is given in io_port_list, the driver registers the initiator only to the ports specified in the list and returns only target ports that belong to io_port_list, instead of all target ports.
  • Example for FC ports:
    io_port_list=a-1,B-3
    a or B is the Storage Processor; 1 and 3 are Port IDs.
  • Example for iSCSI ports:
    io_port_list=a-1-0,B-3-0
    a or B is the Storage Processor; the first numbers 1 and 3 are Port IDs, and the second number 0 is the Virtual Port ID.
Note
  • Already-registered ports are not deregistered; they are simply bypassed, whether or not they appear in io_port_list.
  • The driver raises an exception at startup if a port in io_port_list does not exist on the VNX.
2.1.6.3.4.3. Force delete volumes in storage group
Some available volumes may remain in a storage group on the VNX array due to OpenStack timeout issues, but the VNX array does not allow the user to delete volumes that are in a storage group. The option force_delete_lun_in_storagegroup allows the user to delete available volumes in this situation.
When force_delete_lun_in_storagegroup=True is set in the back-end section and the user tries to delete volumes that remain in a storage group on the VNX array, the driver moves the volumes out of their storage groups and then deletes them.
The default value of force_delete_lun_in_storagegroup is False.
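A minimal sketch of enabling this option in a back-end section of cinder.conf (the back-end name vnx_array1 follows the earlier sample configuration):

```ini
[vnx_array1]
# Allow deleting available volumes that are stuck in a VNX storage group
force_delete_lun_in_storagegroup = True
```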
2.1.6.3.4.4. Over subscription in thin provisioning
Over subscription allows the sum of all volumes' capacity (provisioned capacity) to be larger than the pool's total capacity.
max_over_subscription_ratio in the back-end section is the ratio of provisioned capacity over total capacity.
If the value of max_over_subscription_ratio is greater than 1.0, the provisioned capacity can exceed the total capacity. The default value of max_over_subscription_ratio is 20.0, which means the provisioned capacity can be 20 times the total physical capacity.
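As a worked example, with a 10 TB pool and the illustrative ratio below, up to 15 TB of volumes could be provisioned (10 TB × 1.5); the value and back-end name are placeholders:

```ini
[vnx_array1]
# Provisioned capacity may reach 1.5x the pool's physical capacity
max_over_subscription_ratio = 1.5
```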
2.1.6.3.4.5. Storage group automatic deletion
For volume attaching, the driver maintains a storage group on the VNX for each compute node hosting the VM instances that consume VNX Block Storage (using the compute node's hostname as the storage group's name). All volumes attached to VM instances on a compute node are put into that storage group. If destroy_empty_storage_group=True, the driver removes the empty storage group after its last volume is detached. For data safety, it is not recommended to set destroy_empty_storage_group=True unless the VNX is exclusively managed by one Block Storage node, because a consistent lock_path is required to synchronize this behavior.
2.1.6.3.4.6. Initiator auto deregistration
Enabling storage group automatic deletion is the precondition of this function. If initiator_auto_deregistration=True is set, the driver will deregister all the initiators of the host after its storage group is deleted.
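These two related options might be combined in a back-end section as follows (only advisable when one Block Storage node exclusively manages the VNX, as noted above; the back-end name is a placeholder):

```ini
[vnx_array1]
# Remove a host's storage group once its last volume is detached...
destroy_empty_storage_group = True
# ...and then deregister that host's initiators
initiator_auto_deregistration = True
```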
2.1.6.3.4.7. FC SAN auto zoning
The EMC VNX FC driver supports FC SAN auto zoning when ZoneManager is configured. Set zoning_mode to fabric in the DEFAULT section to enable this feature. For ZoneManager configuration, refer to the Block Storage official guide.
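A sketch of enabling auto zoning, assuming a ZoneManager is already configured as described in the Block Storage guide:

```ini
[DEFAULT]
zoning_mode = fabric
```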
2.1.6.3.4.8. Volume number threshold
In VNX, there is a limitation on the number of pool volumes that can be created in the system. When the limitation is reached, no more pool volumes can be created even if there is remaining capacity in the storage pool. In other words, if the scheduler dispatches a volume creation request to a back end that has free capacity but reaches the volume limitation, the creation fails.
The default value of check_max_pool_luns_threshold is False. When check_max_pool_luns_threshold=True, the pool-based back end checks the limit and reports 0 free capacity to the scheduler if the limit is reached, so the scheduler can skip pool-based back ends that have run out of pool volume numbers.
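A minimal sketch of enabling this check in a back-end section (the back-end name is a placeholder):

```ini
[vnx_array1]
# Report 0 free capacity once the pool's LUN-count limit is reached
check_max_pool_luns_threshold = True
```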
2.1.6.3.4.9. iSCSI initiators
iscsi_initiators is a dictionary mapping hostnames to lists of IP addresses of the iSCSI initiator ports on the OpenStack Nova/Cinder nodes that connect to the VNX via iSCSI. If this option is configured, the driver uses this information to find an accessible iSCSI target portal for the initiator when attaching a volume. Otherwise, the iSCSI target portal is chosen in a relatively random way.
This option is valid only for the iSCSI driver.
Here is an example: VNX connects host1 with 10.0.0.1 and 10.0.0.2, and connects host2 with 10.0.0.3.
The key name (host1 in the example) should be the output of the hostname command.
iscsi_initiators = {"host1":["10.0.0.1", "10.0.0.2"],"host2":["10.0.0.3"]}
2.1.6.3.4.10. Default timeout
Specify the timeout (in minutes) for operations such as LUN migration and LUN creation. LUN migration, for example, is a typical long-running operation whose duration depends on the LUN size and the load of the array. Set an upper bound appropriate for your deployment to avoid unnecessarily long waits.
The default value for this option is infinite.
Example:
default_timeout = 10
2.1.6.3.4.11. Max LUNs per storage group
max_luns_per_storage_group specifies the maximum number of LUNs in a storage group. The default value is 255, which is also the maximum value supported by VNX.
2.1.6.3.4.12. Ignore pool full threshold
If ignore_pool_full_threshold is set to True, the driver forces LUN creation even if the full threshold of the pool is reached. The default is False.
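The two limits described above can be tuned together in the back-end section; the values and back-end name below are illustrative:

```ini
[vnx_array1]
# Maximum LUNs per storage group (255 is both the default and the VNX maximum)
max_luns_per_storage_group = 255
# Force LUN creation even when the pool's full threshold is reached
ignore_pool_full_threshold = True
```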

2.1.6.4. Extra spec options

Extra specs are used in volume types created in cinder as the preferred properties of volumes.
The Block Storage scheduler uses extra specs to find a suitable back end for the volume, and the Block Storage driver creates the volume based on the properties specified by the extra specs.
Use the following command to create a volume type:
$ cinder type-create "demoVolumeType"
Use the following command to update the extra spec of a volume type:
$ cinder type-key "demoVolumeType" set provisioning:type=thin
Volume types can also be configured in OpenStack Horizon.
The VNX driver defines several extra specs. They are introduced below:
2.1.6.4.1. Provisioning type
  • Key: provisioning:type
  • Possible Values:
    • thick
    Volume is fully provisioned.

    Example 2.5. creating a thick volume type:

    $ cinder type-create "ThickVolumeType"
    $ cinder type-key "ThickVolumeType" set provisioning:type=thick thick_provisioning_support='<is> True'
    • thin
    Volume is virtually provisioned.

    Example 2.6. creating a thin volume type:

    $ cinder type-create "ThinVolumeType"
    $ cinder type-key "ThinVolumeType" set provisioning:type=thin thin_provisioning_support='<is> True'
    • deduplicated
    Volume is thin and deduplication is enabled. The administrator must configure the system-level deduplication settings on the VNX. To create a deduplicated volume, the VNX Deduplication license must be activated on the VNX, and deduplication_support='<is> True' must be specified so that the Block Storage scheduler can find the proper volume back end.

    Example 2.7. creating a deduplicated volume type:

    $ cinder type-create "DeduplicatedVolumeType"
    $ cinder type-key "DeduplicatedVolumeType" set provisioning:type=deduplicated deduplication_support='<is> True'
    • compressed
    Volume is thin and compression is enabled. The administrator must configure the system-level compression settings on the VNX. To create a compressed volume, the VNX Compression license must be activated on the VNX, and compression_support='<is> True' must be specified so that the Block Storage scheduler can find a proper volume back end. VNX does not support creating snapshots on a compressed volume.

    Example 2.8. creating a compressed volume type:

    $ cinder type-create "CompressedVolumeType"
    $ cinder type-key "CompressedVolumeType" set provisioning:type=compressed compression_support='<is> True'
  • Default: thick
Note
provisioning:type replaces the old spec key storagetype:provisioning, which will be deprecated in the next release. If both provisioning:type and storagetype:provisioning are set in the volume type, the value of provisioning:type is used.
2.1.6.4.2. Storage tiering support
  • Key: storagetype:tiering
  • Possible Values:
    • StartHighThenAuto
    • Auto
    • HighestAvailable
    • LowestAvailable
    • NoMovement
  • Default: StartHighThenAuto
VNX supports fully automated storage tiering, which requires the FAST license to be activated on the VNX. The OpenStack administrator can use the extra spec key storagetype:tiering to set the tiering policy of a volume (the five supported values are listed above), and use the key fast_support='<is> True' to let the Block Storage scheduler find a volume back end which manages a VNX with the FAST license activated.

Example 2.9. creating a volume type with a tiering policy:

$ cinder type-create "ThinVolumeOnAutoTier"
$ cinder type-key "ThinVolumeOnAutoTier" set provisioning:type=thin storagetype:tiering=Auto fast_support='<is> True'
Note
Tiering policy cannot be applied to a deduplicated volume. The tiering policy of a deduplicated LUN aligns with the settings of the pool.
2.1.6.4.3. FAST cache support
  • Key: fast_cache_enabled
  • Possible Values:
    • True
    • False
  • Default: False
VNX has a FAST Cache feature, which requires the FAST Cache license to be activated on the VNX. When True is specified, the volume is created on a back end with FAST Cache enabled.
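Following the pattern of the earlier examples, a volume type that targets FAST Cache-enabled back ends might be created as follows (the type name is hypothetical):

```console
$ cinder type-create "FASTCacheType"
$ cinder type-key "FASTCacheType" set fast_cache_enabled='<is> True'
```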
2.1.6.4.4. Snap-copy
  • Key: copytype:snap
  • Possible Values:
    • True
    • False
  • Default: False
The VNX driver supports snap-copy, which dramatically accelerates the process of creating a copied volume.
By default, the driver does a full data copy when creating a volume from a snapshot or cloning a volume, which is time-consuming, especially for large volumes. When snap-copy is used, the driver simply creates a snapshot and mounts it as a volume for these two operations, which is nearly instant even for large volumes.
To enable this functionality, the source volume should have copytype:snap=True in the extra specs of its volume type. A new volume cloned from the source, or created from a snapshot of the source, is then in fact a snap-copy instead of a full copy. If a full copy is needed, retype or migration can be used to convert the snap-copy volume to a full-copy volume, which may be time-consuming.
$ cinder type-create "SnapCopy"
$ cinder type-key "SnapCopy" set copytype:snap=True
You can determine whether a volume is a snap-copy volume by showing its metadata. If lun_type in the metadata is smp, the volume is a snap-copy volume; otherwise, it is a full-copy volume.
$ cinder metadata-show <volume>
Constraints:
  • copytype:snap=True is not allowed in the volume type of a consistency group.
  • Clone and snapshot creation are not allowed on a copied volume created through snap-copy before it is converted to a full copy.
  • The number of snap-copy volumes created from a single source volume is limited to 255 at any point in time.
  • A source volume that has snap-copy volumes cannot be deleted.
2.1.6.4.5. Pool name
  • Key: pool_name
  • Possible Values: name of the storage pool managed by cinder
  • Default: None
To create a volume on a certain storage pool in a back end that manages multiple pools, first create a volume type with an extra spec that specifies the storage pool, then use this volume type to create the volume.

Example 2.10. Creating the volume type:

$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set pool_name=Pool_02_SASFLASH volume_backend_name=vnx_41
2.1.6.4.6. Obsoleted extra specs in Mitaka
Avoid using the following extra spec keys.
  • storagetype:provisioning
  • storagetype:pool

2.1.6.5. Advanced features

2.1.6.5.1. Read-only volumes
OpenStack supports read-only volumes. Use the following command to set a volume as read-only.
$ cinder readonly-mode-update <volume> True
After a volume is marked as read-only, the driver forwards this information when a hypervisor attaches the volume, and the hypervisor ensures the volume is read-only.
2.1.6.5.2. Efficient non-disruptive volume backup
The default implementation in Cinder for non-disruptive volume backup is not efficient, because a cloned volume is created during backup.
The efficient backup approach is to create a snapshot of the volume and connect this snapshot (a mount point on the VNX) to the Cinder host for volume backup. This eliminates the migration time involved in a volume clone.
Constraints:
  • Backup creation for a snap-copy volume is not allowed if the volume status is in-use, since a snapshot cannot be taken from such a volume.

2.1.6.6. Best practice

2.1.6.6.1. Multipath setup
Enabling multipath volume access is recommended for robust data access. The major configuration includes:
  • Install multipath-tools, sysfsutils and sg3-utils on the nodes hosting the Nova-Compute and Cinder-Volume services. (Check the operating system manual for your distribution for specific installation steps. On Red Hat based distributions, the packages are device-mapper-multipath, sysfsutils and sg3_utils.)
  • Specify use_multipath_for_image_xfer=true in cinder.conf for each FC/iSCSI back end.
  • Specify iscsi_use_multipath=True in libvirt section of nova.conf. This option is valid for both iSCSI and FC driver.
For multipath-tools, here is an EMC recommended sample of /etc/multipath.conf.
user_friendly_names is not specified in the configuration, so it takes the default value no. Setting it to yes is NOT recommended because it may cause operations such as VM live migration to fail.
blacklist {
    # Skip the files under /dev that are definitely not FC/iSCSI devices
    # Different system may need different customization
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"

    # Skip LUNZ device from VNX
    device {
        vendor "DGC"
        product "LUNZ"
        }
}

defaults {
    user_friendly_names no
    flush_on_last_del yes
}

devices {
    # Device attributed for EMC CLARiiON and VNX series ALUA
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        path_checker emc_clariion
        features "1 queue_if_no_path"
        hardware_handler "1 alua"
        prio alua
        failback immediate
    }
}
Note
When multipath is used in OpenStack, multipath faulty devices may come out in Nova-Compute nodes due to different issues (Bug 1336683 is a typical example).
A solution to completely avoid faulty devices has not been found yet. faulty_device_cleanup.py mitigates this issue when VNX iSCSI storage is used. Cloud administrators can deploy the script in all Nova-Compute nodes and use a CRON job to run the script on each Nova-Compute node periodically so that faulty devices will not stay too long. See VNX faulty device cleanup for detailed usage and the script.

2.1.6.7. Restrictions and limitations

2.1.6.7.1. iSCSI port cache
The EMC VNX iSCSI driver caches iSCSI port information. After changing the iSCSI port configuration, restart the cinder-volume service or wait for the interval configured by periodic_interval in cinder.conf before performing any volume attachment operation. Otherwise, the attachment may fail because the old iSCSI port configuration is used.
2.1.6.7.2. No extending for volume with snapshots
VNX does not support extending a thick volume that has a snapshot. If the user tries to extend such a volume, its status changes to error_extending.
2.1.6.7.3. Limitations for deploying cinder on a compute node
It is not recommended to deploy the driver on a compute node if cinder upload-to-image --force True is used against an in-use volume, because cinder upload-to-image --force True terminates the VM instance's data access to the volume.
2.1.6.7.4. Storage group with host names in VNX
When the driver notices that there is no existing storage group that has the host name as the storage group name, it will create the storage group and also add the compute node's or Block Storage nodes' registered initiators into the storage group.
If the driver notices that the storage group already exists, it will assume that the registered initiators have also been put into it and skip the operations above for better performance.
It is recommended that the storage administrator does not create the storage group manually and instead relies on the driver for the preparation. If the storage administrator needs to create the storage group manually for some special requirement, the correct registered initiators must be put into the storage group as well (otherwise subsequent volume attach operations will fail).
2.1.6.7.5. EMC storage-assisted volume migration
The EMC VNX driver supports storage-assisted volume migration. When the user starts a migration with cinder migrate --force-host-copy False <volume_id> <host> or cinder migrate <volume_id> <host>, cinder tries to leverage the VNX's native volume migration functionality.
In the following scenarios, VNX storage-assisted volume migration is not triggered:
  1. Volume migration between back ends with different storage protocols, for example, FC and iSCSI.
  2. The volume is to be migrated across arrays.

2.1.6.8. Appendix

2.1.6.8.1. Authenticate by security file
VNX credentials are necessary when the driver connects to the VNX system. Credentials in global, local and ldap scopes are supported. There are two approaches to provide the credentials:
The recommended approach is to use a Navisphere CLI security file to provide the credentials, which avoids placing plain-text credentials in the configuration file. The following instructions describe how to do this.
  1. Find out the Linux user ID of the cinder-volume processes. Assume the cinder-volume service runs under the cinder account.
  2. Run su as root user.
  3. In /etc/passwd, change cinder:x:113:120::/var/lib/cinder:/bin/false to cinder:x:113:120::/var/lib/cinder:/bin/bash (This temporary change is to make step 4 work.)
  4. Save the credentials on behalf of the cinder user to a security file (assuming the array credentials are admin/admin in global scope). In the command below, the -secfilepath switch is used to specify the location to save the security file.
    # su -l cinder -c '/opt/Navisphere/bin/naviseccli -AddUserSecurity -user admin -password admin -scope 0 -secfilepath <location>'
  5. Change cinder:x:113:120::/var/lib/cinder:/bin/bash back to cinder:x:113:120::/var/lib/cinder:/bin/false in /etc/passwd
  6. Remove the credentials options san_login, san_password and storage_vnx_authentication_type from cinder.conf. (normally it is /etc/cinder/cinder.conf). Add option storage_vnx_security_file_dir and set its value to the directory path of your security file generated in step 4. Omit this option if -secfilepath is not used in step 4.
  7. Restart the cinder-volume service to validate the change.
2.1.6.8.2. Register FC port with VNX
This configuration is only required when initiator_auto_registration=False.
To access VNX storage, the compute nodes should be registered on VNX first if initiator auto registration is not enabled.
To perform "Copy Image to Volume" and "Copy Volume to Image" operations, the nodes running the cinder-volume service (Block Storage nodes) must be registered with the VNX as well.
The steps mentioned below are for the compute nodes. Follow the same steps for the Block Storage nodes also (The steps can be skipped if initiator auto registration is enabled).
  1. Assume 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 is the WWN of a FC initiator port name of the compute node whose hostname and IP are myhost1 and 10.10.61.1. Register 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 in Unisphere:
    1. Login to Unisphere, go to FNM0000000000->Hosts->Initiators.
    2. Refresh and wait until the initiator 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 with SP Port A-1 appears.
    3. Click the Register button, select CLARiiON/VNX and enter the hostname (which is the output of the linux command hostname) and IP address:
      • Hostname : myhost1
      • IP : 10.10.61.1
      • Click Register
    4. Then host 10.10.61.1 will appear under Hosts->Host List as well.
  2. Register the wwn with more ports if needed.
2.1.6.8.3. Register iSCSI port with VNX
This configuration is only required when initiator_auto_registration=False.
To access VNX storage, the compute nodes should be registered on VNX first if initiator auto registration is not enabled.
To perform "Copy Image to Volume" and "Copy Volume to Image" operations, the nodes running the cinder-volume service (Block Storage nodes) must be registered with the VNX as well.
The steps mentioned below are for the compute nodes. Follow the same steps for the Block Storage nodes also (The steps can be skipped if initiator auto registration is enabled).
  1. On the compute node with IP address 10.10.61.1 and hostname myhost1, execute the following commands (assuming 10.10.61.35 is the iSCSI target):
    1. Start the iSCSI initiator service on the node
      # /etc/init.d/open-iscsi start
    2. Discover the iSCSI target portals on VNX
      # iscsiadm -m discovery -t st -p 10.10.61.35
    3. Enter /etc/iscsi
      # cd /etc/iscsi
    4. Find out the iqn of the node
      # more initiatorname.iscsi
  2. Login to VNX from the compute node using the target corresponding to the SPA port:
    # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
  3. Assume iqn.1993-08.org.debian:01:1a2b3c4d5f6g is the initiator name of the compute node. Register iqn.1993-08.org.debian:01:1a2b3c4d5f6g in Unisphere:
    1. Login to Unisphere, go to FNM0000000000->Hosts->Initiators .
    2. Refresh and wait until the initiator iqn.1993-08.org.debian:01:1a2b3c4d5f6g with SP Port A-8v0 appears.
    3. Click the Register button, select CLARiiON/VNX and enter the hostname (which is the output of the linux command hostname) and IP address:
      • Hostname : myhost1
      • IP : 10.10.61.1
      • Click Register
    4. Then host 10.10.61.1 will appear under Hosts->Host List as well.
  4. Logout iSCSI on the node:
    # iscsiadm -m node -u
  5. Login to VNX from the compute node using the target corresponding to the SPB port:
    # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.61.36 -l
  6. In Unisphere register the initiator with the SPB port.
  7. Logout iSCSI on the node:
    # iscsiadm -m node -u
  8. Register the iqn with more ports if needed.

2.1.7. EMC XtremIO Block Storage driver configuration

The high performance XtremIO All Flash Array (AFA) offers Block Storage services to OpenStack. Using the driver, OpenStack Block Storage hosts can connect to an XtremIO storage cluster.
This section explains how to configure and connect an OpenStack block storage host to an XtremIO storage cluster.

2.1.7.1. Support matrix

  • Xtremapp: Version 3.0 and 4.0

2.1.7.2. Supported operations

  • Create, delete, clone, attach, and detach volumes
  • Create and delete volume snapshots
  • Create a volume from a snapshot
  • Copy an image to a volume
  • Copy a volume to an image
  • Extend a volume
  • Manage and unmanage a volume
  • Get volume statistics

2.1.7.3. XtremIO Block Storage driver configuration

Edit the cinder.conf file by adding the configuration below under the [DEFAULT] section in the case of a single back end, or under a separate section in the case of multiple back ends (for example, [XTREMIO]). The configuration file is usually located at /etc/cinder/cinder.conf.
2.1.7.3.1. XtremIO driver name
Configure the driver name by adding the following parameter:
  • For iSCSI volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOIscsiDriver
  • For Fibre Channel volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
2.1.7.3.2. XtremIO management server (XMS) IP
To retrieve the management IP, use the show-xms CLI command.
Configure the management IP by adding the following parameter: san_ip = XMS Management IP
2.1.7.3.3. XtremIO cluster name
In XtremIO version 4.0, a single XMS can manage multiple cluster back ends. In such setups, the administrator is required to specify the cluster name (in addition to the XMS IP). Each cluster must be defined as a separate back end.
To retrieve the Cluster Name, run the show-clusters CLI command.
Configure the cluster name by adding the following parameter: xtremio_cluster_name = Cluster-Name
Note
When a single cluster is managed in XtremIO version 4.0, the cluster name is not required.
2.1.7.3.4. XtremIO user credentials
OpenStack Block Storage requires an XtremIO XMS user with administrative privileges. XtremIO recommends creating a dedicated OpenStack user account that holds an administrative user role.
Refer to the XtremIO User Guide for details on user account management.
Create an XMS account using either the XMS GUI or the add-user-account CLI command.
Configure the user credentials by adding the following parameters:
san_login = XMS username
san_password = XMS username password

2.1.7.4. Multiple back ends

Configuring multiple storage back ends enables you to create several back-end storage solutions that serve the same OpenStack Compute resources.
When a volume is created, the scheduler selects the appropriate back end to handle the request, according to the specified volume type.
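The scheduler-based selection described above can be sketched as a cinder.conf fragment that defines two XtremIO back ends; the section names, cluster names, and credentials here are illustrative:

```
[DEFAULT]
enabled_backends = XtremIO-Cluster01, XtremIO-Cluster02

[XtremIO-Cluster01]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster01
san_login = XMS_USER
san_password = XMS_PASSWD
volume_backend_name = XtremIO-Cluster01

[XtremIO-Cluster02]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster02
san_login = XMS_USER
san_password = XMS_PASSWD
volume_backend_name = XtremIO-Cluster02
```

Each volume type can then be associated with one of the back ends through its volume_backend_name extra spec, which is what the scheduler matches against.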

2.1.7.5. Setting thin provisioning and multipathing parameters

To support thin provisioning and multipathing in the XtremIO Array, the following parameters from the Nova and Cinder configuration files should be modified as follows:
  • Thin Provisioning
    All XtremIO volumes are thin provisioned. The default value of 20 should be maintained for the max_over_subscription_ratio parameter.
    The use_cow_images parameter in the nova.conf file should be set to False as follows:
    use_cow_images = false
  • Multipathing
    The use_multipath_for_image_xfer parameter in the cinder.conf file should be set to True as follows:
    use_multipath_for_image_xfer = true

2.1.7.6. Restarting OpenStack Block Storage

Save the cinder.conf file and restart cinder by running the following command:
$ openstack-service restart cinder-volume

2.1.7.7. Configuring CHAP

The XtremIO Block Storage driver supports CHAP initiator authentication. If CHAP initiator authentication is required, set the CHAP Authentication mode to initiator.
To set the CHAP initiator mode using CLI, run the following CLI command:
$ modify-chap chap-authentication-mode=initiator
The CHAP initiator mode can also be set via the XMS GUI.
Refer to the XtremIO User Guide for details on CHAP configuration via GUI and CLI.
The CHAP initiator authentication credentials (username and password) are generated automatically by the Block Storage driver. Therefore, there is no need to configure the initial CHAP credentials manually in XMS.

2.1.7.8. Configuration example

cinder.conf example file
You can update the cinder.conf file by editing the necessary parameters as follows:
[DEFAULT]
enabled_backends = XtremIO

[XtremIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster01
san_login = XMS_USER
san_password = XMS_PASSWD
volume_backend_name = XtremIOAFA

2.1.8. Fujitsu ETERNUS DX driver

The Fujitsu ETERNUS DX driver provides FC and iSCSI support for ETERNUS DX S3 series.
The driver performs volume operations by communicating with ETERNUS DX. It uses a CIM client in Python called PyWBEM to perform CIM operations over HTTP. You can specify RAID Group and Thin Provisioning Pool (TPP) in ETERNUS DX as a storage pool.

System requirements

  • Firmware version V10L30 or later is required.
  • An Advanced Copy Feature license is required to create a snapshot and a clone.
  • The pywbem package must be installed on the Controller node.
Note
The multipath environment with ETERNUS Multipath Driver is unsupported.

Supported operations

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume. [1]
  • Get volume statistics.

2.1.8.1. Configure the Fujitsu ETERNUS device

Before you can define the Fujitsu ETERNUS device as a Block Storage back end, you need to configure storage pools and ports on the device first. Consult your device documentation for details on each step:

  1. Set up a LAN connection between the Controller nodes (where the Block Storage service is hosted) and MNT ports of the ETERNUS device.
  2. Set up a SAN connection between the Compute nodes and CA ports of the ETERNUS device.
  3. Log in to the ETERNUS device using an account with the Admin role.
  4. Enable the SMI-S of ETERNUS DX.
  5. Register an Advanced Copy Feature license and configure the copy table size.
  6. Create a storage pool for volumes. This pool will be used later in the EternusPool setting in Section 2.1.8.2, “Configuring the Back End”.

    Note

    If you want to create volume snapshots on a different storage pool, create a storage pool for that as well. This pool will be used in the EternusSnapPool setting in Section 2.1.8.2, “Configuring the Back End”.

  7. Create a Snap Data Pool Volume (SDPV) to enable the Snap Data Pool (SDP) used by the snapshot creation function.
  8. Configure storage ports to be used by the Block Storage service. Then:

    1. Set those ports to CA mode.
    2. Enable the host-affinity settings of those storage ports. To enable host-affinity, run the following from the ETERNUS CLI for each port:

      CLI> set PROTO-parameters -host-affinity enable -port CM# CA# PORT

      Where:
        • PROTO defines which storage protocol is in use, as in fc (Fibre Channel) or iscsi.
        • CM# CA# refer to the controller enclosure where the port is located.
        • PORT is the port number.

2.1.8.2. Configuring the Back End

Fujitsu ETERNUS back ends use either of the following drivers:
  • cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver (Fibre Channel)
  • cinder.volume.drivers.fujitsu.eternus_dx_iscsi.FJDXISCSIDriver (iSCSI)
The settings for Fujitsu ETERNUS back ends are defined in a separate XML file. To define a back end, set volume_driver to the corresponding driver and cinder_eternus_config_file to point to the back end's XML configuration file. For example, if your Fibre Channel back end settings are defined in /etc/cinder/eternus_dx.xml, use:
volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver
cinder_eternus_config_file = /etc/cinder/eternus_dx.xml
If you set the driver without defining cinder_eternus_config_file, the driver uses the default value of cinder_eternus_config_file = /etc/cinder/cinder_fujitsu_eternus_dx.xml.
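With multiple ETERNUS back ends, each back end gets its own section in cinder.conf and its own XML file. A minimal sketch, in which the section names and file paths are illustrative:

```
[DEFAULT]
enabled_backends = eternus_fc1, eternus_iscsi1

[eternus_fc1]
volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver
cinder_eternus_config_file = /etc/cinder/eternus_dx_fc.xml
volume_backend_name = ETERNUS-FC

[eternus_iscsi1]
volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_iscsi.FJDXISCSIDriver
cinder_eternus_config_file = /etc/cinder/eternus_dx_iscsi.xml
volume_backend_name = ETERNUS-ISCSI
```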
The XML configuration file should contain the following settings:
EternusIP
IP address of the SMI-S connection of the ETERNUS device. Specifically, use the IP address of the MNT port of the device.
EternusPort
Port number for the SMI-S connection port of the ETERNUS device.
EternusUser
User name to be used for the SMI-S connection (EternusIP).
EternusPassword
Corresponding password of EternusUser on EternusIP.
EternusPool
Name of the storage pool created for volumes (from Section 2.1.8.1, “Configure the Fujitsu ETERNUS device”). Specifically, use the pool’s RAID Group name or TPP name in the ETERNUS device.
EternusSnapPool
Name of the storage pool created for volume snapshots (from Section 2.1.8.1, “Configure the Fujitsu ETERNUS device”). Specifically, use the pool’s RAID Group name in the ETERNUS device. If you did not create a different pool for snapshots, use the same value as EternusPool.
EternusISCSIIP
(iSCSI only) IP address for iSCSI connections to the ETERNUS device. You can specify multiple IPs by creating an entry for each one.
For example, with a fibre-channel back end:
<?xml version='1.0' encoding='UTF-8'?>
<FUJITSU>
<EternusIP>0.0.0.0</EternusIP>
<EternusPort>5988</EternusPort>
<EternusUser>smisuser</EternusUser>
<EternusPassword>smispassword</EternusPassword>
<EternusPool>raid5_0001</EternusPool>
<EternusSnapPool>raid5_0001</EternusSnapPool>
</FUJITSU>
With an iSCSI back end:
<?xml version='1.0' encoding='UTF-8'?>
<FUJITSU>
<EternusIP>0.0.0.0</EternusIP>
<EternusPort>5988</EternusPort>
<EternusUser>smisuser</EternusUser>
<EternusPassword>smispassword</EternusPassword>
<EternusPool>raid5_0001</EternusPool>
<EternusSnapPool>raid5_0001</EternusSnapPool>
<EternusISCSIIP>1.1.1.1</EternusISCSIIP>
<EternusISCSIIP>1.1.1.2</EternusISCSIIP>
<EternusISCSIIP>1.1.1.3</EternusISCSIIP>
<EternusISCSIIP>1.1.1.4</EternusISCSIIP>
</FUJITSU>

2.1.9. HDS HNAS iSCSI and NFS driver

This OpenStack Block Storage volume driver provides iSCSI and NFS support for Hitachi NAS Platform Models 3080, 3090, 4040, 4060, 4080 and 4100.

2.1.9.1. Supported operations

The NFS and iSCSI drivers support these operations:
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Get volume statistics.
  • Manage and unmanage a volume.

2.1.9.2. HNAS storage requirements

Before using the iSCSI and NFS services, use the HNAS configuration and management GUI (SMU) or the SSC CLI to create storage pools and file systems, and assign them to an EVS. Make sure that the file system used is not created as a replication target. Additionally:
For NFS:
Create NFS exports, choose a path for them (it must be different from "/"), and set the Show snapshots option to hide and disable access.
Also, in the Access Configuration, set the norootsquash option (for example, "* (rw, norootsquash)") so that the HNAS cinder driver can change the permissions of its volumes.
To use the hardware-accelerated features of HNAS NFS, we recommend setting max-nfs-version to 3. Refer to the HNAS command line reference to see how to configure this option.
For iSCSI:
You need to set an iSCSI domain.

2.1.9.3. Block storage host requirements

The Block storage host requires the nfs-utils package.
If you are not using SSH, you need the HDS SSC utility to communicate with an HNAS array using SSC commands. This utility is available in the RPM package distributed with the hardware through physical media, or it can be manually copied from the SMU to the Block Storage host.

2.1.9.4. Package installation

If you are installing the driver from an RPM or DEB package, follow the steps below:
  1. Install the dependencies:
    # yum install nfs-utils nfs-utils-lib
  2. Configure the driver as described in the Section 2.1.9.5, “Driver configuration” section.
  3. Restart all cinder services (volume, scheduler and backup).

2.1.9.5. Driver configuration

The HDS driver supports the concept of differentiated services (also referred to as quality of service) by mapping volume types to services provided through HNAS.
HNAS supports a variety of storage options and file system capabilities, which are selected through the definition of volume types and the use of multiple back ends. The driver maps up to four volume types into separate exports or file systems, and can support any number when using multiple back ends.
The driver configuration is read from an XML-formatted file (one per back end), which you must create and whose path you must set in the cinder.conf configuration file. Below is the configuration needed in the cinder.conf configuration file [2]:
[DEFAULT]
enabled_backends = hnas_iscsi1, hnas_nfs1
For HNAS iSCSI driver create this section:
[hnas_iscsi1]
volume_driver = cinder.volume.drivers.hitachi.hnas_iscsi.HDSISCSIDriver
hds_hnas_iscsi_config_file = /path/to/config/hnas_config_file.xml
volume_backend_name = HNAS-ISCSI
For HNAS NFS driver create this section:
[hnas_nfs1]
volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HDSNFSDriver
hds_hnas_nfs_config_file = /path/to/config/hnas_config_file.xml
volume_backend_name = HNAS-NFS
The XML file has the following format:
<?xml version = "1.0" encoding = "UTF-8" ?>
  <config>
    <mgmt_ip0>172.24.44.15</mgmt_ip0>
    <hnas_cmd>ssc</hnas_cmd>
    <chap_enabled>False</chap_enabled>
    <ssh_enabled>False</ssh_enabled>
    <cluster_admin_ip0>10.1.1.1</cluster_admin_ip0>
    <username>supervisor</username>
    <password>supervisor</password>
    <svc_0>
      <volume_type>default</volume_type>
      <iscsi_ip>172.24.44.20</iscsi_ip>
      <hdp>fs01-husvm</hdp>
    </svc_0>
    <svc_1>
      <volume_type>platinum</volume_type>
      <iscsi_ip>172.24.44.20</iscsi_ip>
      <hdp>fs01-platinum</hdp>
    </svc_1>
  </config>

2.1.9.6. HNAS volume driver XML configuration options

An OpenStack Block Storage node using HNAS drivers can have up to four services. Each service is defined by a svc_n tag (svc_0, svc_1, svc_2, or svc_3 [3], for example). These are the configuration options available for each service label:

Table 2.6. Configuration options for service labels

Option Type Default Description
volume_type
Required
default
When a create_volume call with a certain volume type occurs, the driver tries to match the volume type against this tag. You must define the default volume type in the service labels of each configuration file; if no volume type is specified, the default is used. Other labels are case sensitive and must match exactly. If no configured volume type matches the incoming requested type, volume creation fails with an error.
iscsi_ip
Required only for iSCSI
An iSCSI IP address dedicated to the service.
hdp
Required
For iSCSI driver: virtual file system label associated with the service.
For NFS driver: path to the volume (<ip_address>:/<path>) associated with the service.
Additionally, this entry must be added in the file used to list available NFS shares. This file is located, by default, in /etc/cinder/nfs_shares or you can specify the location in the nfs_shares_config option in the cinder.conf configuration file.
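For example, if an NFS service uses hdp = 172.24.44.20:/fs01-husvm (the address and export path here are illustrative), the shares file must contain a matching entry:

```
172.24.44.20:/fs01-husvm
```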
These are the configuration options available to the config section of the XML config file:

Table 2.7. Configuration options

Option Type Default Description
mgmt_ip0
Required
Management Port 0 IP address. Should be the IP address of the "Admin" EVS.
hnas_cmd
Optional
ssc
Command to communicate to HNAS array.
chap_enabled
Optional (iSCSI only)
True
Boolean tag used to enable CHAP authentication protocol.
username
Required
supervisor
It's always required on HNAS.
password
Required
supervisor
Password is always required on HNAS.
svc_0, svc_1, svc_2, svc_3
Optional
(at least one label has to be defined)
Service labels: these four predefined names identify four different sets of configuration options. Each can specify HDP and a unique volume type.
cluster_admin_ip0
Optional if ssh_enabled is True
The address of HNAS cluster admin.
ssh_enabled
Optional
False
Enables SSH authentication between Block Storage host and the SMU.
ssh_private_key
Required if ssh_enabled is True
False
Path to the SSH private key used to authenticate to the HNAS SMU. The public key must be uploaded to the HNAS SMU using ssh-register-public-key (an SSH subcommand). Note that copying the public key to HNAS using ssh-copy-id does not work properly because the SMU periodically wipes out those keys.

2.1.9.7. Service labels

The HNAS driver supports differentiated types of service using service labels. It is possible to create up to four types, for example gold, platinum, silver, and ssd.
After creating the services in the XML configuration file, you must configure one volume_type per service. Each volume_type must have the metadata service_label with the same name configured in the <volume_type> section of that service. If this is not set, OpenStack Block Storage schedules the volume creation to the pool with the largest available free space, or by other criteria configured in volume filters.
$ cinder type-create default
$ cinder type-key default set service_label=default
$ cinder type-create platinum-tier
$ cinder type-key platinum-tier set service_label=platinum

2.1.9.8. Multi-back-end configuration

If you use multiple back ends and intend to enable the creation of a volume in a specific back end, you must configure volume types to set the volume_backend_name option to the appropriate back end. Then, create volume_type configurations with the same volume_backend_name.
$ cinder type-create 'iscsi'
$ cinder type-key 'iscsi' set volume_backend_name = 'HNAS-ISCSI'
$ cinder type-create 'nfs'
$ cinder type-key 'nfs' set volume_backend_name = 'HNAS-NFS'
You can deploy multiple OpenStack HNAS driver instances that each control a separate HNAS array. Each service (svc_0, svc_1, svc_2, svc_3) on the instances needs to have a volume_type and service_label metadata associated with it. If no metadata is associated with a pool, the OpenStack Block Storage filtering algorithm selects the pool with the largest available free space.

2.1.9.9. SSH configuration

Instead of using SSC on the Block Storage host and storing its credentials in the XML configuration file, the HNAS driver supports SSH authentication. To configure it:
  1. If you do not already have a key pair, create one on the Block Storage host (leave the passphrase empty):
    $ mkdir -p /opt/hds/ssh
    $ ssh-keygen -f /opt/hds/ssh/hnaskey
  2. Change the owner of the key to cinder (or the user that the volume service runs as):
    # chown -R cinder.cinder /opt/hds/ssh
  3. Create the directory "ssh_keys" in the SMU server:
    $ ssh [manager|supervisor]@<smu-ip> 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/'
  4. Copy the public key to the "ssh_keys" directory:
    $ scp /opt/hds/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>:/var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/
  5. Access the SMU server:
    $ ssh [manager|supervisor]@<smu-ip>
  6. Run the command to register the SSH keys:
    $ ssh-register-public-key -u [manager|supervisor] -f ssh_keys/hnaskey.pub
  7. Check the communication with HNAS in the Block Storage host:
    $ ssh -i /opt/hds/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'
<cluster_admin_ip0> is "localhost" for single node deployments. This should return a list of available file systems on HNAS.

2.1.9.10. Editing the XML config file

  1. Set the "username".
  2. Enable SSH by adding the line "<ssh_enabled>True</ssh_enabled>" in the "<config>" section.
  3. Set the private key path by adding "<ssh_private_key>/opt/hds/ssh/hnaskey</ssh_private_key>" in the "<config>" section.
  4. If the HNAS is in a multi-cluster configuration, set "<cluster_admin_ip0>" to the cluster node admin IP. In a single-node HNAS, leave it empty.
  5. Restart cinder services.
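Applied to the XML example earlier in this section, these steps produce a <config> section like the following sketch (the cluster admin IP is illustrative, and the svc_n sections are unchanged):

```
<?xml version = "1.0" encoding = "UTF-8" ?>
  <config>
    <mgmt_ip0>172.24.44.15</mgmt_ip0>
    <username>supervisor</username>
    <ssh_enabled>True</ssh_enabled>
    <ssh_private_key>/opt/hds/ssh/hnaskey</ssh_private_key>
    <cluster_admin_ip0>10.1.1.1</cluster_admin_ip0>
    <!-- svc_0 ... svc_3 sections as before -->
  </config>
```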
Warning
Note that copying the public key to HNAS using ssh-copy-id does not work properly because the SMU periodically wipes out those keys.

2.1.9.11. Manage and unmanage

Manage and unmanage are two API extensions that add new features to the driver. The manage action on an existing volume is very similar to volume creation: it creates a volume entry in the OpenStack Block Storage DB, but instead of creating a new volume in the back end, it only adds a 'link' to an existing volume. Volume name, description, volume_type, metadata, and availability_zone are supported as in a normal volume creation.
The unmanage action on an existing volume removes the volume from the OpenStack Block Storage DB, but keeps the actual volume in the back end. From an OpenStack Block Storage perspective the volume is deleted, but it still exists for outside use.
How to Manage:
On the Dashboard:
For NFS:
  1. Under the tab System -> Volumes choose the option [ + Manage Volume ]
  2. Fill the fields Identifier, Host and Volume Type with volume information to be managed:
    • Identifier: ip:/type/volume_name Example: 172.24.44.34:/silver/volume-test
    • Host: host@backend-name#pool_name Example: myhost@hnas-nfs#test_silver
    • Volume Name: volume_name Example: volume-test
    • Volume Type: choose a type of volume Example: silver
For iSCSI:
  1. Under the tab System -> Volumes choose the option [ + Manage Volume ]
  2. Fill the fields Identifier, Host, Volume Name and Volume Type with volume information to be managed:
    • Identifier: filesystem-name/volume-name Example: filesystem-test/volume-test
    • Host: host@backend-name#pool_name Example: myhost@hnas-iscsi#test_silver
    • Volume Name: volume_name Example: volume-test
    • Volume Type: choose a type of volume Example: silver
By CLI:
$ cinder --os-volume-api-version 2 manage [--source-name <source-name>] [--id-type <id-type>] [--name <name>] [--description <description>] [--volume-type <volume-type>] [--availability-zone <availability-zone>] [--metadata [<key=value> [<key=value> ...]]] [--bootable] <host> [<key=value> [<key=value> ...]]
Example:
For NFS:
$ cinder --os-volume-api-version 2 manage --name <volume-test> --volume-type <silver> --source-name <172.24.44.34:/silver/volume-test> <myhost@hnas-nfs#test_silver>
For iSCSI:
$ cinder --os-volume-api-version 2 manage --name <volume-test> --volume-type <silver> --source-name <filesystem-test/volume-test> <myhost@hnas-iscsi#test_silver>
How to Unmanage:
On Dashboard:
  1. Under the tab [ System -> Volumes ] choose a volume
  2. On the volume options, choose [ +Unmanage Volume ]
  3. Check the data and confirm.
By CLI:
$ cinder --os-volume-api-version 2 unmanage <volume>
Example:
$ cinder --os-volume-api-version 2 unmanage <voltest>

2.1.9.12. Additional notes

  • The get_volume_stats() function always provides the available capacity based on the combined sum of all the HDPs that are used in these service labels.
  • After changing the configuration on the storage, the OpenStack Block Storage driver must be restarted.
  • On Red Hat, if the system is configured to use SELinux, you need to set virt_use_nfs = on for the NFS driver to work properly.
    # setsebool -P virt_use_nfs on
  • It is not possible to manage a volume if there is a slash ('/') or a colon (':') in the volume name.

2.1.10. Hitachi storage volume driver

The Hitachi storage volume driver provides iSCSI and Fibre Channel support for Hitachi storage arrays.

2.1.10.1. System requirements

Supported storages:
  • Hitachi Virtual Storage Platform G1000 (VSP G1000)
  • Hitachi Virtual Storage Platform (VSP)
  • Hitachi Unified Storage VM (HUS VM)
  • Hitachi Unified Storage 100 Family (HUS 100 Family)
Required software:
  • RAID Manager Ver 01-32-03/01 or later for VSP G1000/VSP/HUS VM
  • Hitachi Storage Navigator Modular 2 (HSNM2) Ver 27.50 or later for HUS 100 Family
Note
HSNM2 needs to be installed under /usr/stonavm.
Required licenses:
  • Hitachi In-System Replication Software for VSP G1000/VSP/HUS VM
  • (Mandatory) ShadowImage in-system replication for HUS 100 Family
  • (Optional) Copy-on-Write Snapshot for HUS 100 Family
Additionally, the pexpect package is required.

2.1.10.2. Supported operations

  • Create, delete, attach and detach volumes.
  • Create, list and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy a volume to an image.
  • Copy an image to a volume.
  • Clone a volume.
  • Extend a volume.
  • Get volume statistics.

2.1.10.3. Configuration

Set up Hitachi storage
You need to specify the settings described below. For details about each step, see the user's guide of the storage device. Use storage administration software, such as Storage Navigator, to set up the storage device so that LDEVs and host groups can be created and deleted, and LDEVs can be connected to the server and asynchronously copied.
  1. Create a Dynamic Provisioning pool.
  2. Connect the ports at the storage to the Controller node and Compute nodes.
  3. For VSP G1000/VSP/HUS VM, set "port security" to "enable" for the ports at the storage.
  4. For HUS 100 Family, set "Host Group security"/"iSCSI target security" to "ON" for the ports at the storage.
  5. For the ports at the storage, create host groups (iSCSI targets) whose names begin with HBSD- for the Controller node and each Compute node. Then register a WWN (initiator IQN) for each of the Controller node and Compute nodes.
  6. For VSP G1000/VSP/HUS VM, perform the following:
    • Create a storage device account belonging to the Administrator User Group. (To use multiple storage devices, create the same account name for all the target storage devices, and specify the same resource group and permissions.)
    • Create a command device (In-Band), and set user authentication to ON.
    • Register the created command device to the host group for the Controller node.
    • To use the Thin Image function, create a pool for Thin Image.
  7. For HUS 100 Family, perform the following:
    • Use the command auunitaddauto to register the unit name and controller of the storage device to HSNM2.
    • When connecting via iSCSI, if you are using CHAP certification, specify the same user and password as that used for the storage port.
Set up Hitachi Gigabit Fibre Channel adaptor
If a Hitachi Gigabit Fibre Channel adaptor is used, change a parameter of the hfcldd driver and update the initramfs file:
# /opt/hitachi/drivers/hba/hfcmgr -E hfc_rport_lu_scan 1
# dracut -f initramfs-KERNEL_VERSION.img KERNEL_VERSION
# reboot
Set up Hitachi storage volume driver
  1. Create the directory:
    # mkdir /var/lock/hbsd
    # chown cinder:cinder /var/lock/hbsd
  2. Create "volume type" and "volume key".
    This example shows that HUS100_SAMPLE is created as "volume type" and hus100_backend is registered as "volume key".
    $ cinder type-create HUS100_SAMPLE
    $ cinder type-key HUS100_SAMPLE set volume_backend_name=hus100_backend
    You can specify any name for the "volume type" and "volume key", but the "volume key" must match the volume_backend_name value set in cinder.conf.
    To confirm the created "volume type", execute the following command:
    $ cinder extra-specs-list
  3. Edit /etc/cinder/cinder.conf as follows.
    If you use Fibre Channel:
    volume_driver = cinder.volume.drivers.hitachi.hbsd_fc.HBSDFCDriver
    If you use iSCSI:
    volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
    Also, set volume_backend_name created by cinder type-key
    volume_backend_name = hus100_backend
    This table shows configuration options for Hitachi storage volume driver.

    Table 2.8. Description of Hitachi storage volume driver configuration options

    Configuration option = Default value Description
    [DEFAULT]
    hitachi_add_chap_user = False (BoolOpt) Add CHAP user
    hitachi_async_copy_check_interval = 10 (IntOpt) Interval to check copy asynchronously
    hitachi_auth_method = None (StrOpt) iSCSI authentication method
    hitachi_auth_password = HBSD-CHAP-password (StrOpt) iSCSI authentication password
    hitachi_auth_user = HBSD-CHAP-user (StrOpt) iSCSI authentication username
    hitachi_copy_check_interval = 3 (IntOpt) Interval to check copy
    hitachi_copy_speed = 3 (IntOpt) Copy speed of storage system
    hitachi_default_copy_method = FULL (StrOpt) Default copy method of storage system
    hitachi_group_range = None (StrOpt) Range of group number
    hitachi_group_request = False (BoolOpt) Request for creating HostGroup or iSCSI Target
    hitachi_horcm_add_conf = True (BoolOpt) Add to HORCM configuration
    hitachi_horcm_numbers = 200,201 (StrOpt) Instance numbers for HORCM
    hitachi_horcm_password = None (StrOpt) Password of storage system for HORCM
    hitachi_horcm_resource_lock_timeout = 600 (IntOpt) Timeout until a resource lock is released, in seconds. The value must be between 0 and 7200.
    hitachi_horcm_user = None (StrOpt) Username of storage system for HORCM
    hitachi_ldev_range = None (StrOpt) Range of logical device of storage system
    hitachi_pool_id = None (IntOpt) Pool ID of storage system
    hitachi_serial_number = None (StrOpt) Serial number of storage system
    hitachi_target_ports = None (StrOpt) Control port names for HostGroup or iSCSI Target
    hitachi_thin_pool_id = None (IntOpt) Thin pool ID of storage system
    hitachi_unit_name = None (StrOpt) Name of an array unit
    hitachi_zoning_request = False (BoolOpt) Request for FC Zone creating HostGroup
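Putting this step together, a minimal [DEFAULT] sketch for the HUS 100 example might look like the following; the unit name, pool ID, and port names are illustrative:

```
[DEFAULT]
volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
volume_backend_name = hus100_backend
hitachi_unit_name = HUS110_SAMPLE
hitachi_pool_id = 0
hitachi_target_ports = 0A,1A
```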
  4. Restart Block Storage service.
    When the startup is done, "MSGID0003-I: The storage backend can be used." is output into /var/log/cinder/volume.log as follows.
    2014-09-01 10:34:14.169 28734 WARNING cinder.volume.drivers.hitachi.hbsd_common [req-a0bb70b5-7c3f-422a-a29e-6a55d6508135 None None] MSGID0003-I: The storage backend can be used. (config_group: hus100_backend)

2.1.11. HPE 3PAR Fibre Channel and iSCSI drivers

The HPE3PARFCDriver and HPE3PARISCSIDriver drivers, which are based on the Block Storage service (Cinder) plug-in architecture, run volume operations by communicating with the HPE 3PAR storage system over HTTP, HTTPS, and SSH connections. The HTTP and HTTPS communications use python-3parclient, which is a separate Python package that must be installed (see the system requirements below).
For information about how to manage HPE 3PAR storage systems, see the HPE 3PAR user documentation.

2.1.11.1. System requirements

To use the HPE 3PAR drivers, install the following software and components on the HPE 3PAR storage system:
  • HPE 3PAR Operating System software version 3.1.3 MU1 or higher.
    • Deduplication provisioning requires SSD disks and HPE 3PAR Operating System software version 3.2.1 MU1 or higher.
    • Enabling Flash Cache Policy requires the following:
      • Array must contain SSD disks.
      • HPE 3PAR Operating System software version 3.2.1 MU2 or higher.
      • python-3parclient version 4.2.0 or newer.
      • Array must have the Adaptive Flash Cache license installed.
      • Flash Cache must be enabled on the array with the CLI command createflashcache SIZE, where SIZE must be in 16 GB increments. For example, createflashcache 128g will create 128 GB of Flash Cache for each node pair in the array.
    • The Dynamic Optimization license is required to support any feature that results in a volume changing provisioning type or CPG. This may apply to the volume migrate, retype, and manage commands.
    • The Virtual Copy License is required to support any feature that involves volume snapshots. This applies to the volume snapshot-* commands.
  • The HPE 3PAR drivers check the licenses installed on the array and disable driver capabilities based on the available licenses. This applies to thin provisioning, QoS support, and volume replication.
  • HPE 3PAR Web Services API Server must be enabled and running.
  • One Common Provisioning Group (CPG).
  • Additionally, you must install python-3parclient version 4.2.0 or newer on the system running the enabled Block Storage service volume drivers.

2.1.11.2. Supported operations

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume with back-end assistance.
  • Retype a volume.
  • Manage and unmanage a volume.
  • Create, delete, update, snapshot, and clone consistency groups.
  • Create and delete consistency group snapshots.
  • Create a consistency group from a consistency group snapshot or another group.
Volume type support for both HPE 3PAR drivers includes the ability to set the following capabilities in the OpenStack Block Storage API cinder.api.contrib.types_extra_specs volume type extra specs extension module:
  • hpe3par:snap_cpg
  • hpe3par:provisioning
  • hpe3par:persona
  • hpe3par:vvs
  • hpe3par:flash_cache
To work with the default filter scheduler, the key values are case sensitive and scoped with hpe3par:. For information about how to set the key-value pairs and associate them with a volume type, run the following command:
$ cinder help type-key
Note
Volumes that are cloned only support the extra specs keys cpg, snap_cpg, provisioning, and vvs. The others are ignored. In addition, the comments section of the cloned volume in the HPE 3PAR StoreServ storage array is not populated.
If volume types are not used or a particular key is not set for a volume type, the following defaults are used:
  • hpe3par:cpg - Defaults to the hpe3par_cpg setting in the cinder.conf file.
  • hpe3par:snap_cpg - Defaults to the hpe3par_snap setting in the cinder.conf file. If hpe3par_snap is not set, it defaults to the hpe3par_cpg setting.
  • hpe3par:provisioning - Defaults to thin provisioning; the valid values are thin, full, and dedup.
  • hpe3par:persona - Defaults to the 2 - Generic-ALUA persona. The valid values are 1 - Generic, 2 - Generic-ALUA, 3 - Generic-legacy, 4 - HPUX-legacy, 5 - AIX-legacy, 6 - EGENERA, 7 - ONTAP-legacy, 8 - VMware, 9 - OpenVMS, 10 - HPUX, and 11 - WindowsServer.
  • hpe3par:flash_cache - Defaults to false; the valid values are true and false.
QoS support for both HPE 3PAR drivers includes the ability to set the following capabilities in the OpenStack Block Storage API cinder.api.contrib.qos_specs_manage qos specs extension module:
  • minBWS
  • maxBWS
  • minIOPS
  • maxIOPS
  • latency
  • priority
The QoS keys above no longer need to be scoped, but they must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:
$ cinder help qos-create
$ cinder help qos-key
$ cinder help qos-associate
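Those three operations can be sketched together as follows (the spec and type names are hypothetical, and the IDs are placeholders to fill in from the command output):

```shell
# Create a QoS spec using the unscoped keys listed above
cinder qos-create 3par-qos minIOPS=1000 maxIOPS=2000 minBWS=100 maxBWS=200

# Create a volume type, then associate the QoS spec with it
cinder type-create 3par-qos-type
cinder qos-associate <qos_spec_id> <volume_type_id>
```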
The following keys require that the HPE 3PAR StoreServ storage array has a Priority Optimization license installed.
  • hpe3par:vvs - The virtual volume set name that has been predefined by the Administrator with Quality of Service (QoS) rules associated to it. If you specify extra_specs hpe3par:vvs, the qos_specs minIOPS, maxIOPS, minBWS, and maxBWS settings are ignored.
  • minBWS - The QoS I/O issue bandwidth minimum goal, in MB/s. If not set, the I/O issue bandwidth rate has no minimum goal.
  • maxBWS - The QoS I/O issue bandwidth rate limit, in MB/s. If not set, the I/O issue bandwidth rate has no limit.
  • minIOPS - The QoS I/O issue count minimum goal. If not set, the I/O issue count has no minimum goal.
  • maxIOPS - The QoS I/O issue count rate limit. If not set, the I/O issue count rate has no limit.
  • latency - The latency goal in milliseconds.
  • priority - The priority of the QoS rule over other rules. If not set, the priority is normal. The valid values are low, normal, and high.
Note
Since the Icehouse release, minIOPS and maxIOPS must be used together to set I/O limits. Similarly, minBWS and maxBWS must be used together. If only one is set, the other is set to the same value.
The following keys require that the HPE 3PAR StoreServ storage array has an Adaptive Flash Cache license installed.
  • hpe3par:flash_cache - The flash-cache policy, which can be turned on and off by setting the value to true or false.

2.1.11.3. Enable the HPE 3PAR Fibre Channel and iSCSI drivers

The HPE3PARFCDriver and HPE3PARISCSIDriver are installed with the OpenStack software.
  1. Install the python-3parclient Python package on the OpenStack Block Storage system.
    # pip install 'python-3parclient>=4.0,<5.0'
  2. Verify that the HPE 3PAR Web Services API server is enabled and running on the HPE 3PAR storage system.
    1. Log on to the HPE 3PAR storage system with administrator access.
      $ ssh 3paradm@<HP 3PAR IP Address>
    2. View the current state of the Web Services API Server.
      # showwsapi
      -Service- -State- -HTTP_State- HTTP_Port -HTTPS_State- HTTPS_Port -Version-
      Enabled   Active Enabled       8008        Enabled       8080         1.1
    3. If the Web Services API Server is disabled, start it.
      # startwsapi
  3. If the HTTP or HTTPS state is disabled, enable one of them.
    # setwsapi -http enable
    or
    # setwsapi -https enable
    Note
    To stop the Web Services API Server, use the stopwsapi command. For other options, run the setwsapi -h command.
  4. If you are not using an existing CPG, create a CPG on the HPE 3PAR storage system to be used as the default location for creating volumes.
  5. Make the following changes in the /etc/cinder/cinder.conf file.
    # REQUIRED SETTINGS
    # 3PAR WS API Server URL
    hpe3par_api_url=https://10.10.0.141:8080/api/v1
    
    # 3PAR username with the 'edit' role
    hpe3par_username=edit3par
    
    # 3PAR password for the user specified in hpe3par_username
    hpe3par_password=3parpass
    
    # 3PAR CPG to use for volume creation
    hpe3par_cpg=OpenStackCPG_RAID5_NL
    
    # IP address of SAN controller for SSH access to the array
    san_ip=10.10.22.241
    
    # Username for SAN controller for SSH access to the array
    san_login=3paradm
    
    # Password for SAN controller for SSH access to the array
    san_password=3parpass
    
    # FIBRE CHANNEL(uncomment the next line to enable the FC driver)
    # volume_driver=cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
    
    # iSCSI (uncomment the next line to enable the iSCSI driver and
    # hpe3par_iscsi_ips or iscsi_ip_address)
    #volume_driver=cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
    
    # iSCSI multiple port configuration
    # hpe3par_iscsi_ips=10.10.220.253:3261,10.10.222.234
    
    # Still available for single port iSCSI configuration
    #iscsi_ip_address=10.10.220.253
    
    
    # Enable HTTP debugging to 3PAR
    hpe3par_debug=False
    
    # Enable CHAP authentication for iSCSI connections.
    hpe3par_iscsi_chap_enabled=false
    
    # The CPG to use for Snapshots for volumes. If empty hpe3par_cpg will be
    # used.
    hpe3par_snap_cpg=OpenStackSNAP_CPG
    
    # Time in hours to retain a snapshot. You can't delete it before this
    # expires.
    hpe3par_snapshot_retention=48
    
    # Time in hours when a snapshot expires and is deleted. This must be
    # larger than retention.
    hpe3par_snapshot_expiration=72
    
    # The ratio of oversubscription when thin provisioned volumes are
    # involved. Default ratio is 20.0, this means that a provisioned
    # capacity can be 20 times of the total physical capacity.
    max_over_subscription_ratio=20.0
    
    # This flag represents the percentage of reserved back-end capacity.
    reserved_percentage=15
    Note
    You can enable only one driver on each cinder instance unless you enable multiple back-end support.
    Note
    You can configure one or more iSCSI addresses by using the hpe3par_iscsi_ips option. When you configure multiple addresses, the driver selects the iSCSI port with the fewest active volumes at attach time. The IP address can include an IP port; use a colon (:) to separate the address from the port. If you do not define an IP port, the default port 3260 is used. Separate IP addresses with a comma (,). The iscsi_ip_address/iscsi_port options can be used as an alternative to hpe3par_iscsi_ips for single-port iSCSI configuration.
  6. Save the changes to the cinder.conf file and restart the cinder-volume service.
The HPE 3PAR Fibre Channel and iSCSI drivers are now enabled on your OpenStack system. If you experience problems, review the Block Storage service log files for errors.
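The final restart-and-verify step can be sketched as follows (service names assume a systemd-based Red Hat deployment; the volume name is a hypothetical example):

```shell
# Restart the volume service so it loads the newly enabled driver
systemctl restart openstack-cinder-volume

# Verify the back end is up, then create a small test volume
cinder service-list
cinder create --display-name 3par-test 1
cinder list
```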

2.1.12. Huawei storage driver

The Huawei driver supports iSCSI and Fibre Channel connections and enables OceanStor T series V200R002, OceanStor 18000 series V100R001, and OceanStor V3 series V300R002 storage to provide block storage services for OpenStack.

Supported operations

  • Create, delete, expand, attach, and detach volumes.
  • Create and delete a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Create a volume from a snapshot.
  • Clone a volume.

Configure block storage nodes

  1. Modify the cinder.conf configuration file and add volume_driver and cinder_huawei_conf_file items.
    • Example for configuring a storage system:
      volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
      cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
    • Example for configuring multiple storage systems:
      enabled_backends = t_iscsi, 18000_iscsi
      [t_iscsi]
      volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
      cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_t_iscsi.xml
      volume_backend_name = HuaweiTISCSIDriver
      
      [18000_iscsi]
      volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
      cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_18000_iscsi.xml
      volume_backend_name = Huawei18000ISCSIDriver
  2. In /etc/cinder, create a driver configuration file. The driver configuration file name must match the cinder_huawei_conf_file item in the cinder.conf configuration file.
  3. Configure product and protocol.

    Product and Protocol indicate the storage system type and link type respectively. For the OceanStor 18000 series V100R001 storage systems, the driver configuration file is as follows:
    <?xml version='1.0' encoding='UTF-8'?>
    <config>
        <Storage>
            <Product>18000</Product>
            <Protocol>iSCSI</Protocol>
            <RestURL>https://x.x.x.x/deviceManager/rest/</RestURL>
            <UserName>xxxxxxxx</UserName>
            <UserPassword>xxxxxxxx</UserPassword>
        </Storage>
        <LUN>
            <LUNType>Thick</LUNType>
            <WriteType>1</WriteType>
            <MirrorSwitch>0</MirrorSwitch>
            <LUNcopyWaitInterval>5</LUNcopyWaitInterval>
            <Timeout>432000</Timeout>
            <StoragePool>xxxxxxxx</StoragePool>
        </LUN>
        <iSCSI>
            <DefaultTargetIP>x.x.x.x</DefaultTargetIP>
            <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
            <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
        </iSCSI>
        <Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
    </config>
    Note

    Note for fibre channel driver configuration

    • In the configuration files of OceanStor T series V200R002 and OceanStor V3 V300R002, parameter configurations are the same with the exception of the RestURL parameter. The following describes how to configure the RestURL parameter:
      <RestURL>https://x.x.x.x:8088/deviceManager/rest/</RestURL>
    • For a Fibre Channel driver, you do not need to configure an iSCSI target IP address. Delete the iSCSI configuration from the preceding examples.
      <iSCSI>
              <DefaultTargetIP>x.x.x.x</DefaultTargetIP>
              <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
              <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
      </iSCSI>
    This table describes the Huawei storage driver configuration options:

    Table 2.9. Huawei storage driver configuration options

    Property | Type | Default | Description
    Product | Mandatory | - | Type of a storage product. Valid values are T, TV3, or 18000.
    Protocol | Mandatory | - | Type of a protocol. Valid values are iSCSI or FC.
    RestURL | Mandatory | - | Access address of the Rest port (required only for the 18000).
    UserName | Mandatory | - | User name of an administrator.
    UserPassword | Mandatory | - | Password of an administrator.
    LUNType | Optional | Thin | Type of a created LUN. Valid values are Thick or Thin.
    StripUnitSize | Optional | 64 | Stripe depth of a created LUN, in KB. This flag is not valid for a thin LUN.
    WriteType | Optional | 1 | Cache write method. The method can be write back, write through, or Required write back. The default value is 1, indicating write back.
    MirrorSwitch | Optional | 1 | Cache mirroring policy. The default value is 1, indicating that a mirroring policy is used.
    Prefetch Type | Optional | 3 | Cache prefetch strategy. The strategy can be constant prefetch, variable prefetch, or intelligent prefetch. The default value is 3, which indicates intelligent prefetch and is not required for the OceanStor 18000 series.
    Prefetch Value | Optional | 0 | Cache prefetch value.
    LUNcopyWaitInterval | Optional | 5 | After LUN copy is enabled, the plug-in frequently queries the copy progress. You can set a value to specify the query interval.
    Timeout | Optional | 432,000 | Timeout period for waiting for a LUN copy on the array to complete.
    StoragePool | Mandatory | - | Name of the storage pool that you want to use.
    DefaultTargetIP | Optional | - | Default IP address of the iSCSI port provided for compute nodes.
    Initiator Name | Optional | - | Name of a compute node initiator.
    Initiator TargetIP | Optional | - | IP address of the iSCSI port provided for compute nodes.
    OSType | Optional | Linux | The OS type of a compute node.
    HostIP | Optional | - | The IP addresses of compute nodes.
    Note for the configuration
    1. You can configure one iSCSI target port for each compute node or one for all compute nodes. The driver checks whether a target port IP address is configured for the current compute node; if not, it uses DefaultTargetIP.
    2. Only one storage pool can be configured.
    3. For details about LUN configuration information, see the show lun general command in the command-line interface (CLI) documentation, or run help -c show lun general on the storage system CLI.
    4. After the driver is loaded, the storage system picks up any modification of the driver configuration file in real time; you do not need to restart the cinder-volume service.
  4. Restart the Cinder service.
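With multiple back ends enabled as in the example above, each back end can be exposed through its own volume type by matching volume_backend_name (a sketch; the type names are hypothetical):

```shell
# Map a volume type to each Huawei back end defined in cinder.conf
cinder type-create huawei-t-iscsi
cinder type-key huawei-t-iscsi set volume_backend_name=HuaweiTISCSIDriver

cinder type-create huawei-18000-iscsi
cinder type-key huawei-18000-iscsi set volume_backend_name=Huawei18000ISCSIDriver

# Create a volume on the 18000 back end
cinder create --volume-type huawei-18000-iscsi --display-name test-vol 1
```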

2.1.13. IBM Storwize family and SVC volume driver

The volume management driver for Storwize family and SAN Volume Controller (SVC) provides OpenStack Compute instances with access to IBM Storwize family or SVC storage systems.

2.1.13.1. Configure the Storwize family and SVC system

Network configuration
The Storwize family or SVC system must be configured for iSCSI, Fibre Channel, or both.
If using iSCSI, each Storwize family or SVC node should have at least one iSCSI IP address. The IBM Storwize/SVC driver uses an iSCSI IP address associated with the volume's preferred node (if available) to attach the volume to the instance, otherwise it uses the first available iSCSI IP address of the system. The driver obtains the iSCSI IP address directly from the storage system; you do not need to provide these iSCSI IP addresses directly to the driver.
Note
If using iSCSI, ensure that the compute nodes have iSCSI network access to the Storwize family or SVC system.
Note
OpenStack Nova's Grizzly version supports iSCSI multipath. Once this is configured on the Nova host (outside the scope of this documentation), multipath is enabled.
If using Fibre Channel (FC), each Storwize family or SVC node should have at least one WWPN port configured. If the storwize_svc_multipath_enabled flag is set to True in the Cinder configuration file, the driver uses all available WWPNs to attach the volume to the instance (details about the configuration flags appear in the next section). If the flag is not set, the driver uses the WWPN associated with the volume's preferred node (if available), otherwise it uses the first available WWPN of the system. The driver obtains the WWPNs directly from the storage system; you do not need to provide these WWPNs directly to the driver.
Note
If using FC, ensure that the compute nodes have FC connectivity to the Storwize family or SVC system.
iSCSI CHAP authentication
If using iSCSI for data access and storwize_svc_iscsi_chap_enabled is set to True, the driver associates randomly generated CHAP secrets with all hosts on the Storwize family system. OpenStack compute nodes use these secrets when creating iSCSI connections.
Note
CHAP secrets are added to existing hosts as well as newly-created ones. If the CHAP option is enabled, hosts will not be able to access the storage without the generated secrets.
Note
Not all OpenStack Compute drivers support CHAP authentication. Check compatibility before using.
Note
CHAP secrets are passed from OpenStack Block Storage to Compute in clear text. This communication should be secured to ensure that CHAP secrets are not discovered.
Configure storage pools
Each instance of the IBM Storwize/SVC driver allocates all volumes in a single pool. The pool should be created in advance and be provided to the driver using the storwize_svc_volpool_name configuration flag. Details about the configuration flags and how to provide the flags to the driver appear in the next section.
Configure user authentication for the driver
The driver requires access to the Storwize family or SVC system management interface. The driver communicates with the management using SSH. The driver should be provided with the Storwize family or SVC management IP using the san_ip flag, and the management port should be provided by the san_ssh_port flag. By default, the port value is configured to be port 22 (SSH).
Note
Make sure the compute node running the cinder-volume management driver has SSH network access to the storage system.
To allow the driver to communicate with the Storwize family or SVC system, you must provide the driver with a user on the storage system. The driver has two authentication methods: password-based authentication and SSH key pair authentication. The user should have an Administrator role. It is suggested to create a new user for the management driver. Consult your storage and security administrator regarding the preferred authentication method and how passwords or SSH keys should be stored in a secure manner.
Note
When creating a new user on the Storwize or SVC system, make sure the user belongs to the Administrator group or to another group that has an Administrator role.
If using password authentication, assign a password to the user on the Storwize or SVC system. The driver configuration flags for the user and password are san_login and san_password, respectively.
If using SSH key pair authentication, create SSH private and public keys using the instructions below or by any other method. Associate the public key with the user by uploading the public key: select the "choose file" option in the Storwize family or SVC management GUI under "SSH public key". Alternatively, you can associate the SSH public key using the command-line interface; details can be found in the Storwize and SVC documentation. The private key should be provided to the driver using the san_private_key configuration flag.
Create an SSH key pair with OpenSSH
You can create an SSH key pair using OpenSSH, by running:
$ ssh-keygen -t rsa
The command prompts for a file to save the key pair. For example, if you select 'key' as the filename, two files are created: key and key.pub. The key file holds the private SSH key and key.pub holds the public SSH key.
The command also prompts for a passphrase, which should be empty.
The private key file should be provided to the driver using the san_private_key configuration flag. The public key should be uploaded to the Storwize family or SVC system using the storage management GUI or command line interface.
Note
Ensure that Cinder has read permissions on the private key file.
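The key-pair steps above can be sketched as follows (the file name storwize_key is a hypothetical example):

```shell
# Generate an RSA key pair with an empty passphrase; this creates
# ./storwize_key (private) and ./storwize_key.pub (public)
ssh-keygen -t rsa -N "" -f ./storwize_key

# Restrict access to the private key
chmod 600 ./storwize_key
```

Upload storwize_key.pub to the Storwize family or SVC system, and point san_private_key in cinder.conf at the private key file.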

2.1.13.2. Configure the Storwize family and SVC driver

Enable the Storwize family and SVC driver
Set the volume driver to the Storwize family and SVC driver by setting the volume_driver option in cinder.conf as follows:
volume_driver = cinder.volume.drivers.ibm.storwize_svc.StorwizeSVCDriver
Storwize family and SVC driver options in cinder.conf
The following options specify default values for all volumes. Some can be overridden using volume types, which are described below.

Table 2.10. List of configuration flags for Storwize storage and SVC driver

Flag name | Type | Default | Description
san_ip | Required | - | Management IP or host name
san_ssh_port | Optional | 22 | Management port
san_login | Required | - | Management login username
san_password | Required [a] | - | Management login password
san_private_key | Required [a] | - | Management login SSH private key
storwize_svc_volpool_name | Required | - | Default pool name for volumes
storwize_svc_vol_rsize | Optional | 2 | Initial physical allocation (percentage) [b]
storwize_svc_vol_warning | Optional | 0 (disabled) | Space allocation warning threshold (percentage) [b]
storwize_svc_vol_autoexpand | Optional | True | Enable or disable volume auto expand [c]
storwize_svc_vol_grainsize | Optional | 256 | Volume grain size [b] in KB
storwize_svc_vol_compression | Optional | False | Enable or disable Real-time Compression [d]
storwize_svc_vol_easytier | Optional | True | Enable or disable Easy Tier [e]
storwize_svc_vol_iogrp | Optional | 0 | The I/O group in which to allocate vdisks
storwize_svc_flashcopy_timeout | Optional | 120 | FlashCopy timeout threshold [f] (seconds)
storwize_svc_connection_protocol | Optional | iSCSI | Connection protocol to use (currently supports 'iSCSI' or 'FC')
storwize_svc_iscsi_chap_enabled | Optional | True | Configure CHAP authentication for iSCSI connections
storwize_svc_multipath_enabled | Optional | False | Enable multipath for FC connections [g]
storwize_svc_multihost_enabled | Optional | True | Enable mapping vdisks to multiple hosts [h]
storwize_svc_vol_nofmtdisk | Optional | False | Enable or disable fast format [i]
[a] The authentication requires either a password (san_password) or SSH private key (san_private_key). One must be specified. If both are specified, the driver uses only the SSH private key.
[b] The driver creates thin-provisioned volumes by default. The storwize_svc_vol_rsize flag defines the initial physical allocation percentage for thin-provisioned volumes; if set to -1, the driver creates fully allocated volumes. More details about the available options are available in the Storwize family and SVC documentation.
[c] Defines whether thin-provisioned volumes can be auto expanded by the storage system. A value of True means that auto expansion is enabled; a value of False disables auto expansion. Details about this option can be found in the -autoexpand flag of the Storwize family and SVC command line interface mkvdisk command.
[d] Defines whether Real-time Compression is used for the volumes created with OpenStack. Details on Real-time Compression can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have compression enabled for this feature to work.
[e] Defines whether Easy Tier is used for the volumes created with OpenStack. Details on EasyTier can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have Easy Tier enabled for this feature to work.
[f] The driver wait timeout threshold when creating an OpenStack snapshot. This is actually the maximum amount of time that the driver waits for the Storwize family or SVC system to prepare a new FlashCopy mapping. The driver accepts a maximum wait time of 600 seconds (10 minutes).
[g] Multipath for iSCSI connections requires no storage-side configuration and is enabled if the compute host has multipath configured.
[h] This option allows the driver to map a vdisk to more than one host at a time. This scenario occurs during migration of a virtual machine with an attached volume; the volume is simultaneously mapped to both the source and destination compute hosts. If your deployment does not require attaching vdisks to multiple hosts, setting this flag to False will provide added safety.
[i] Defines whether the fast formatting of thick-provisioned volumes is disabled at creation. The default value is False; a value of True means that fast format is disabled. Details about this option can be found in the -nofmtdisk flag of the Storwize family and SVC command line interface mkvdisk command.

Table 2.11. Description of IBM Storwize driver configuration options

Configuration option = Default value Description
[DEFAULT]
storwize_svc_allow_tenant_qos = False (BoolOpt) Allow tenants to specify QOS on create
storwize_svc_connection_protocol = iSCSI (StrOpt) Connection protocol (iSCSI/FC)
storwize_svc_flashcopy_timeout = 120 (IntOpt) Maximum number of seconds to wait for FlashCopy to be prepared.
storwize_svc_iscsi_chap_enabled = True (BoolOpt) Configure CHAP authentication for iSCSI connections (Default: Enabled)
storwize_svc_multihostmap_enabled = True (BoolOpt) Allows vdisk to multi host mapping
storwize_svc_multipath_enabled = False (BoolOpt) Connect with multipath (FC only; iSCSI multipath is controlled by Nova)
storwize_svc_npiv_compatibility_mode = True (BoolOpt) Indicate whether svc driver is compatible for NPIV setup. If it is compatible, it will allow no wwpns being returned on get_conn_fc_wwpns during initialize_connection. It should always be set to True. It will be deprecated and removed in M release.
storwize_svc_stretched_cluster_partner = None (StrOpt) If operating in stretched cluster mode, specify the name of the pool in which mirrored copies are stored. Example: "pool2"
storwize_svc_vol_autoexpand = True (BoolOpt) Storage system autoexpand parameter for volumes (True/False)
storwize_svc_vol_compression = False (BoolOpt) Storage system compression option for volumes
storwize_svc_vol_easytier = True (BoolOpt) Enable Easy Tier for volumes
storwize_svc_vol_grainsize = 256 (IntOpt) Storage system grain size parameter for volumes (32/64/128/256)
storwize_svc_vol_iogrp = 0 (IntOpt) The I/O group in which to allocate volumes
storwize_svc_vol_rsize = 2 (IntOpt) Storage system space-efficiency parameter for volumes (percentage)
storwize_svc_vol_warning = 0 (IntOpt) Storage system threshold for volume capacity warnings (percentage)
storwize_svc_volpool_name = volpool (StrOpt) Storage system storage pool for volumes
Placement with volume types
The IBM Storwize/SVC driver exposes capabilities that can be added to the extra specs of volume types, and used by the filter scheduler to determine placement of new volumes. Make sure to prefix these keys with capabilities: to indicate that the scheduler should use them. The following extra specs are supported:
  • capabilities:volume_backend_name - Specify a specific back end where the volume should be created. The back-end name is a concatenation of the name of the IBM Storwize/SVC storage system as shown in lssystem, an underscore, and the name of the pool (mdisk group). For example:
    capabilities:volume_backend_name=myV7000_openstackpool
  • capabilities:compression_support - Specify a back-end according to compression support. A value of True should be used to request a back-end that supports compression, and a value of False will request a back-end that does not support compression. If you do not have constraints on compression support, do not set this key. Note that specifying True does not enable compression; it only requests that the volume be placed on a back-end that supports compression. Example syntax:
    capabilities:compression_support='<is> True'
  • capabilities:easytier_support - Similar semantics as the compression_support key, but for specifying according to support of the Easy Tier feature. Example syntax:
    capabilities:easytier_support='<is> True'
  • capabilities:storage_protocol - Specifies the connection protocol used to attach volumes of this type to instances. Legal values are iSCSI and FC. This extra specs value is used for both placement and setting the protocol used for this volume. In the example syntax, note <in> is used as opposed to <is> used in the previous examples.
    capabilities:storage_protocol='<in> FC'
Configure per-volume creation options
Volume types can also be used to pass options to the IBM Storwize/SVC driver, which override the default values set in the configuration file. Contrary to the previous examples, where the "capabilities" scope was used to pass parameters to the Cinder scheduler, options are passed to the IBM Storwize/SVC driver with the "drivers" scope.
The following extra specs keys are supported by the IBM Storwize/SVC driver:
  • rsize
  • warning
  • autoexpand
  • grainsize
  • compression
  • easytier
  • multipath
  • iogrp
These keys have the same semantics as their counterparts in the configuration file. They are set similarly; for example, rsize=2 or compression=False.
Example: Volume types
In the following example, we create a volume type to specify a controller that supports iSCSI and compression, to use iSCSI when attaching the volume, and to enable compression:
$ cinder type-create compressed
$ cinder type-key compressed set capabilities:storage_protocol='<in> iSCSI' capabilities:compression_support='<is> True' drivers:compression=True
We can then create a 50GB volume using this type:
$ cinder create --display-name "compressed volume" --volume-type compressed 50
Volume types can be used, for example, to provide users with different
  • performance levels (such as allocating entirely on an HDD tier, using Easy Tier for an HDD-SSD mix, or allocating entirely on an SSD tier)
  • resiliency levels (such as allocating volumes in pools with different RAID levels)
  • features (such as enabling or disabling Real-time Compression)
QOS
The Storwize driver provides QOS support for storage volumes by controlling the I/O amount. QOS is enabled by editing the /etc/cinder/cinder.conf file and setting storwize_svc_allow_tenant_qos to True.
There are three ways to set the Storwize IOThrottling parameter for storage volumes:
  • Add the qos:IOThrottling key into a QOS specification and associate it with a volume type.
  • Add the qos:IOThrottling key into an extra specification with a volume type.
  • Add the qos:IOThrottling key to the storage volume metadata.
Note
If you are changing a volume type with QOS to a new volume type without QOS, the QOS configuration settings will be removed.
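The first method above, a QOS specification associated with a volume type, can be sketched as follows (the spec and type names are hypothetical, the throttling value is an example, and the IDs are placeholders from the command output):

```shell
# Create a QOS spec carrying the qos:IOThrottling key
cinder qos-create storwize-qos qos:IOThrottling=500

# Create a volume type and associate the QOS spec with it
cinder type-create throttled
cinder qos-associate <qos_spec_id> <volume_type_id>
```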

2.1.13.3. Operational notes for the Storwize family and SVC driver

Migrate volumes
In the context of OpenStack Block Storage's volume migration feature, the IBM Storwize/SVC driver enables the storage's virtualization technology. When migrating a volume from one pool to another, the volume will appear in the destination pool almost immediately, while the storage moves the data in the background.
Note
To enable this feature, both pools involved in a given volume migration must have the same values for extent_size. If the pools have different values for extent_size, the data will still be moved directly between the pools (not host-side copy), but the operation will be synchronous.
Extend volumes
The IBM Storwize/SVC driver allows for extending a volume's size, but only for volumes without snapshots.
Snapshots and clones
Snapshots are implemented using FlashCopy with no background copy (space-efficient). Volume clones (volumes created from existing volumes) are implemented with FlashCopy, but with background copy enabled. This means that volume clones are independent, full copies. While this background copy is taking place, attempting to delete or extend the source volume will result in that operation waiting for the copy to complete.
Volume retype
The IBM Storwize/SVC driver enables you to modify volume types. When you modify volume types, you can also change these extra specs properties:
  • rsize
  • warning
  • autoexpand
  • grainsize
  • compression
  • easytier
  • iogrp
  • nofmtdisk
Note
When you change the rsize, grainsize, or compression properties, volume copies are asynchronously synchronized on the array.
Note
To change the iogrp property, IBM Storwize/SVC firmware version 6.4.0 or later is required.

2.1.14. IBM XIV and DS8000 volume driver

The IBM Storage Driver for OpenStack is a Block Storage driver that supports IBM XIV and IBM DS8000 storage systems over Fibre Channel and iSCSI.
Set the following in your cinder.conf, and use the following options to configure it.
volume_driver = cinder.volume.drivers.xiv_ds8k.XIVDS8KDriver

Table 2.12. Description of IBM XIV and DS8000 volume driver configuration options

Configuration option = Default value Description
[DEFAULT]
san_clustername = (StrOpt) Cluster name to use for creating volumes
san_ip = (StrOpt) IP address of SAN controller
san_login = admin (StrOpt) Username for SAN controller
san_password = (StrOpt) Password for SAN controller
xiv_chap = disabled (StrOpt) CHAP authentication mode, effective only for iscsi (disabled|enabled)
xiv_ds8k_connection_type = iscsi (StrOpt) Connection type to the IBM Storage Array
xiv_ds8k_proxy = xiv_ds8k_openstack.nova_proxy.XIVDS8KNovaProxy (StrOpt) Proxy driver that connects to the IBM Storage Array
For full documentation refer to IBM's online documentation available at http://pic.dhe.ibm.com/infocenter/strhosts/ic/topic/com.ibm.help.strghosts.doc/nova-homepage.html.
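Putting the options from Table 2.12 together, a minimal cinder.conf sketch for an XIV system over iSCSI might look like the following (addresses, credentials, and the cluster name are placeholders, not working values):

```ini
volume_driver = cinder.volume.drivers.xiv_ds8k.XIVDS8KDriver
xiv_ds8k_proxy = xiv_ds8k_openstack.nova_proxy.XIVDS8KNovaProxy
xiv_ds8k_connection_type = iscsi
san_ip = 10.0.0.10
san_login = admin
san_password = secret
san_clustername = cluster1
```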

2.1.15. LVM

The default volume back-end uses local volumes managed by LVM.
This driver supports different transport protocols to attach volumes, currently iSCSI and iSER.
Note
The Block Storage iSCSI LVM driver has significant performance issues. In production environments with high I/O activity, there are many potential issues that could affect performance or data integrity.
Red Hat strongly recommends using a certified Block Storage plug-in provider for storage in a production environment. The software iSCSI LVM driver is supported only for single-node evaluations and proof-of-concept environments.
Set the following in your cinder.conf configuration file, and use the following options to configure for iSCSI transport:
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    iscsi_protocol = iscsi
Use the following options to configure for the iSER transport:
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    iscsi_protocol = iser
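For example, to serve thin-provisioned volumes from the default volume group over iSCSI, a back end could be configured as in the following sketch; the volume group must already exist, and lvm_type = thin is shown as one possible choice:
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    iscsi_protocol = iscsi
    volume_group = cinder-volumes
    lvm_type = thin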

Table 2.13. Description of LVM configuration options

Configuration option = Default value Description
[DEFAULT]
lvm_conf_file = /etc/cinder/lvm.conf (StrOpt) LVM conf file to use for the LVM driver in Cinder; this setting is ignored if the specified file does not exist (You can also specify 'None' to not use a conf file even if one exists).
lvm_mirrors = 0 (IntOpt) If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors + 2 PVs with available space
lvm_type = default (StrOpt) Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to thin if thin is supported.
volume_group = cinder-volumes (StrOpt) Name for the VG that will contain exported volumes

2.1.16. NetApp unified driver

The NetApp unified driver is a block storage driver that supports multiple storage families and protocols. A storage family corresponds to storage systems built on different NetApp technologies such as clustered Data ONTAP, Data ONTAP operating in 7-Mode, and E-Series. The storage protocol refers to the protocol used to initiate data storage and access operations on those storage systems, such as iSCSI and NFS. The NetApp unified driver can be configured to provision and manage OpenStack volumes on a given storage family using a specified storage protocol. The OpenStack volumes can then be used for accessing and storing data using the storage protocol on the storage family system. The NetApp unified driver is an extensible interface that can support new storage families and protocols.
Note
With the Juno release of OpenStack, OpenStack Block Storage introduced the concept of "storage pools", in which a single OpenStack Block Storage back end may present one or more logical storage resource pools, from which OpenStack Block Storage selects a storage location when provisioning volumes.
In releases prior to Juno, the NetApp unified driver contained some "scheduling" logic that determined the NetApp storage container (namely, a FlexVol volume for Data ONTAP, or a dynamic disk pool for E-Series) into which a new OpenStack Block Storage volume would be placed.
With the introduction of pools, all scheduling logic is performed completely within the OpenStack Block Storage scheduler, as each NetApp storage container is directly exposed to the OpenStack Block Storage scheduler as a storage pool; whereas previously, the NetApp unified driver presented an aggregated view to the scheduler and made a final placement decision as to which NetApp storage container the OpenStack Block Storage volume would be provisioned into.

2.1.16.1. NetApp clustered Data ONTAP storage family

The NetApp clustered Data ONTAP storage family represents a configuration group which provides OpenStack compute instances access to clustered Data ONTAP storage systems. At present it can be configured in OpenStack Block Storage to work with iSCSI and NFS storage protocols.
2.1.16.1.1. NetApp iSCSI configuration for clustered Data ONTAP
The NetApp iSCSI configuration for clustered Data ONTAP is an interface from OpenStack to clustered Data ONTAP storage systems for provisioning and managing the SAN block storage entity; that is, a NetApp LUN which can be accessed using the iSCSI protocol.
The iSCSI configuration for clustered Data ONTAP is a direct interface from OpenStack Block Storage to the clustered Data ONTAP instance and as such does not require additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configuration options for clustered Data ONTAP family with iSCSI protocol
Configure the volume driver, storage family and storage protocol to the NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
Note
To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.

Table 2.14. Description of NetApp cDOT iSCSI driver configuration options

Configuration option = Default value Description
[DEFAULT]
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_lun_ostype = None (StrOpt) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created.
netapp_lun_space_reservation = enabled (StrOpt) This option determines if storage space is reserved for LUN allocation. If enabled, LUNs are thick provisioned. If space reservation is disabled, storage space is allocated on demand.
netapp_partner_backend_name = None (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_size_multiplier = 1.2 (FloatOpt) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (StrOpt) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_vserver = None (StrOpt) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur.
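Two of these options lend themselves to a short illustration: netapp_pool_name_search_pattern is an ordinary regular expression matched against pool names, and netapp_size_multiplier pads the requested size when free space is checked. The following Python sketch uses invented pool names and mirrors that behavior; it is not the driver's actual code:

```python
import re

# Hypothetical FlexVol names reported by the back end.
pools = ["openstack_pool1", "openstack_pool2", "scratch_vol"]

# As with netapp_pool_name_search_pattern = (openstack_.+), only pools
# whose names match the expression remain candidates for provisioning.
pattern = re.compile(r"(openstack_.+)")
eligible = [p for p in pools if pattern.match(p)]
print(eligible)  # ['openstack_pool1', 'openstack_pool2']

# With netapp_size_multiplier = 1.2, a 100 GB volume request requires
# roughly 120 GB of free space on the Vserver.
requested_gb = 100
required_gb = requested_gb * 1.2
print(required_gb)
```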
Note
If you specify an account in the netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the OpenStack Block Storage logs.
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.
2.1.16.1.2. NetApp NFS configuration for clustered Data ONTAP
The NetApp NFS configuration for clustered Data ONTAP is an interface from OpenStack to a clustered Data ONTAP system for provisioning and managing OpenStack volumes on NFS exports provided by the clustered Data ONTAP system that are accessed using the NFS protocol.
The NFS configuration for clustered Data ONTAP is a direct interface from OpenStack Block Storage to the clustered Data ONTAP instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configuration options for the clustered Data ONTAP family with NFS protocol
Configure the volume driver, storage family, and storage protocol to NetApp unified driver, clustered Data ONTAP, and NFS respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares

Table 2.15. Description of NetApp cDOT NFS driver configuration options

Configuration option = Default value Description
[DEFAULT]
expiry_thres_minutes = 720 (IntOpt) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share.
netapp_copyoffload_tool_path = None (StrOpt) This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file.
netapp_host_type = None (StrOpt) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts.
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_lun_ostype = None (StrOpt) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created.
netapp_partner_backend_name = None (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (StrOpt) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_vserver = None (StrOpt) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur.
thres_avl_size_perc_start = 20 (IntOpt) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned.
thres_avl_size_perc_stop = 60 (IntOpt) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option.
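The three cache-related options interact as follows: cleaning starts only when free space falls below thres_avl_size_perc_start, removes only images unused for more than expiry_thres_minutes, and stops once free space reaches thres_avl_size_perc_stop. A rough Python sketch of that policy, with invented image data (not the driver's actual code):

```python
# Sketch of the NFS image-cache cleaning policy described above.
EXPIRY_THRES_MINUTES = 720  # expiry_thres_minutes
THRES_START = 20            # thres_avl_size_perc_start
THRES_STOP = 60             # thres_avl_size_perc_stop

def clean_cache(images, avail_percent):
    """images: list of (name, minutes_since_access, size_as_percent_of_share)."""
    if avail_percent >= THRES_START:
        return images, avail_percent  # enough free space; no cleaning cycle
    kept = []
    # Consider least recently used images first.
    for name, age, size in sorted(images, key=lambda i: -i[1]):
        if avail_percent < THRES_STOP and age > EXPIRY_THRES_MINUTES:
            avail_percent += size  # deleting the cached image frees space
        else:
            kept.append((name, age, size))
    return kept, avail_percent

images = [("img-a", 1500, 30), ("img-b", 900, 20), ("img-c", 60, 10)]
kept, avail = clean_cache(images, 15)
print([n for n, _, _ in kept])  # ['img-c']: recently used, so retained
print(avail)                    # 65: cleaning stopped above THRES_STOP
```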
Note
Additional NetApp NFS configuration options are shared with the generic NFS driver. These options can be found here: Table 2.20, “Description of NFS storage configuration options”.
Note
If you specify an account in the netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the OpenStack Block Storage logs.
NetApp NFS Copy Offload client
A feature was added in the Icehouse release of the NetApp unified driver that enables Image Service images to be efficiently copied to a destination Block Storage volume. When the Block Storage and Image Service are configured to use the NetApp NFS Copy Offload client, a controller-side copy will be attempted before reverting to downloading the image from the Image Service. This improves image provisioning times while reducing the consumption of bandwidth and CPU cycles on the host(s) running the Image and Block Storage services. This is due to the copy operation being performed completely within the storage cluster.
The NetApp NFS Copy Offload client can be used in either of the following scenarios:
  • The Image Service is configured to store images in an NFS share that is exported from a NetApp FlexVol volume and the destination for the new Block Storage volume will be on an NFS share exported from a different FlexVol volume than the one used by the Image Service. Both FlexVols must be located within the same cluster.
  • The source image from the Image Service has already been cached in an NFS image cache within a Block Storage backend. The cached image resides on a different FlexVol volume than the destination for the new Block Storage volume. Both FlexVols must be located within the same cluster.
To use this feature, you must configure the Image Service, as follows:
  • Set the default_store configuration option to file.
  • Set the filesystem_store_datadir configuration option to the path to the Image Service NFS export.
  • Set the show_image_direct_url configuration option to True.
  • Set the show_multiple_locations configuration option to True.
    Important
    If configured without the proper policy settings, a non-admin user of the Image Service can replace active image data (that is, switch out a current image without other users knowing). See the OSSN announcement (recommended actions) for configuration information: https://wiki.openstack.org/wiki/OSSN/OSSN-0065
  • Set the filesystem_store_metadata_file configuration option to a metadata file. The metadata file should contain a JSON object that contains the correct information about the NFS export used by the Image Service, similar to:
    {
        "share_location": "nfs://192.168.0.1/myGlanceExport",
        "mount_point": "/var/lib/glance/images",
        "type": "nfs"
    }
To use this feature, you must configure the Block Storage service, as follows:
  • Set the netapp_copyoffload_tool_path configuration option to the path to the NetApp Copy Offload binary.
  • Set the glance_api_version configuration option to 2.
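Taken together, these settings amount to configuration fragments like the following sketch; all paths are placeholders, the first group belongs in the Image Service configuration file, and the second group belongs in cinder.conf:
# Image Service configuration (placeholder paths)
default_store = file
filesystem_store_datadir = /var/lib/glance/images
show_image_direct_url = True
show_multiple_locations = True
filesystem_store_metadata_file = /etc/glance/filesystem_store_metadata.json

# Block Storage configuration (placeholder path to the copy offload binary)
netapp_copyoffload_tool_path = /usr/local/bin/copyoffload
glance_api_version = 2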
Important
This feature requires that:
  • The storage system must have Data ONTAP v8.2 or greater installed.
  • The vStorage feature must be enabled on each storage virtual machine (SVM, also known as a Vserver) that is permitted to interact with the copy offload client.
  • To configure the copy offload workflow, enable NFS v4.0 or greater and export it from the SVM.
Tip
To download the NetApp copy offload binary to be utilized in conjunction with the netapp_copyoffload_tool_path configuration option, visit the Utility Toolchest page at the NetApp Support portal (login is required).
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.
2.1.16.1.3. NetApp-supported extra specs for clustered Data ONTAP
Extra specs enable vendors to specify extra filter criteria that the Block Storage scheduler uses when it determines which volume node should fulfill a volume provisioning request. When you use the NetApp unified driver with a clustered Data ONTAP storage system, you can leverage extra specs with OpenStack Block Storage volume types to ensure that OpenStack Block Storage volumes are created on storage back ends that have certain properties. For example, when you configure QoS, mirroring, or compression for a storage back end.
Extra specs are associated with OpenStack Block Storage volume types, so that when users request volumes of a particular volume type, the volumes are created on storage back ends that meet the list of requirements, such as available space or the specified extra specs. You can use the specs in the following table when you define OpenStack Block Storage volume types by using the cinder type-key command.

Table 2.16. Description of extra specs options for NetApp Unified Driver with Clustered Data ONTAP

Extra spec Type Description
netapp_raid_type String Limit the candidate volume list based on one of the following raid types: raid4, raid_dp.
netapp_disk_type String Limit the candidate volume list based on one of the following disk types: ATA, BSAS, EATA, FCAL, FSAS, LUN, MSATA, SAS, SATA, SCSI, XATA, XSAS, or SSD.
netapp:qos_policy_group[a] String Specify the name of a QoS policy group, which defines measurable Service Level Objectives, to apply to the OpenStack Block Storage volume at the time of volume creation. Ensure that the QoS policy group object is defined within Data ONTAP before an OpenStack Block Storage volume is created, and that the QoS policy group is not associated with the destination FlexVol volume.
netapp_mirrored Boolean Limit the candidate volume list to only the ones that are mirrored on the storage controller.
netapp_unmirrored[b] Boolean Limit the candidate volume list to only the ones that are not mirrored on the storage controller.
netapp_dedup Boolean Limit the candidate volume list to only the ones that have deduplication enabled on the storage controller.
netapp_nodedup[b] Boolean Limit the candidate volume list to only the ones that have deduplication disabled on the storage controller.
netapp_compression Boolean Limit the candidate volume list to only the ones that have compression enabled on the storage controller.
netapp_nocompression[b] Boolean Limit the candidate volume list to only the ones that have compression disabled on the storage controller.
netapp_thin_provisioned Boolean Limit the candidate volume list to only the ones that support thin provisioning on the storage controller.
netapp_thick_provisioned[b] Boolean Limit the candidate volume list to only the ones that support thick provisioning on the storage controller.
[a] Note that this extra spec has a colon (:) in its name because it is used by the driver to assign the QoS policy group to the OpenStack Block Storage volume after it has been provisioned.
[b] In the Juno release, these negative-assertion extra specs are formally deprecated by the NetApp unified driver. Instead of using the deprecated negative-assertion extra specs (for example, netapp_unmirrored) with a value of true, use the corresponding positive-assertion extra spec (for example, netapp_mirrored) with a value of false.
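The scheduling effect of extra specs can be sketched as a simple capability filter. The pool capability reports below are invented for illustration, and the real scheduler applies additional filters such as available capacity:

```python
# Hypothetical pool capability reports and a volume-type extra-spec filter,
# sketching how the scheduler narrows the candidate back-end list.
pools = [
    {"name": "pool1", "netapp_mirrored": True,  "netapp_compression": False},
    {"name": "pool2", "netapp_mirrored": False, "netapp_compression": True},
]

# e.g. the result of: cinder type-key gold set netapp_mirrored='<is> True'
extra_specs = {"netapp_mirrored": True}

def matches(pool, specs):
    # A pool qualifies only if every requested spec is satisfied.
    return all(pool.get(key) == value for key, value in specs.items())

candidates = [p["name"] for p in pools if matches(p, extra_specs)]
print(candidates)  # ['pool1']
```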

2.1.16.2. NetApp Data ONTAP operating in 7-Mode storage family

The NetApp Data ONTAP operating in 7-Mode storage family represents a configuration group which provides OpenStack compute instances access to 7-Mode storage systems. At present it can be configured in OpenStack Block Storage to work with iSCSI and NFS storage protocols.
2.1.16.2.1. NetApp iSCSI configuration for Data ONTAP operating in 7-Mode
The NetApp iSCSI configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to Data ONTAP operating in 7-Mode storage systems for provisioning and managing the SAN block storage entity; that is, a LUN which can be accessed using the iSCSI protocol.
The iSCSI configuration for Data ONTAP operating in 7-Mode is a direct interface from OpenStack Block Storage to the Data ONTAP operating in 7-Mode storage system and, as such, does not require additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.
Configuration options for the Data ONTAP operating in 7-Mode storage family with iSCSI protocol
Configure the volume driver, storage family and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
Note
To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.

Table 2.17. Description of NetApp 7-Mode iSCSI driver configuration options

Configuration option = Default value Description
[DEFAULT]
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_size_multiplier = 1.2 (FloatOpt) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (StrOpt) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_vfiler = None (StrOpt) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system.
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.
2.1.16.2.2. NetApp NFS configuration for Data ONTAP operating in 7-Mode
The NetApp NFS configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to a Data ONTAP operating in 7-Mode storage system for provisioning and managing OpenStack volumes on NFS exports provided by that storage system, which can then be accessed using the NFS protocol.
The NFS configuration for Data ONTAP operating in 7-Mode is a direct interface from OpenStack Block Storage to the Data ONTAP operating in 7-Mode instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.
Configuration options for the Data ONTAP operating in 7-Mode family with NFS protocol
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and NFS respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = nfs
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares

Table 2.18. Description of NetApp 7-Mode NFS driver configuration options

Configuration option = Default value Description
[DEFAULT]
expiry_thres_minutes = 720 (IntOpt) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share.
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (StrOpt) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_vfiler = None (StrOpt) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system.
thres_avl_size_perc_start = 20 (IntOpt) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned.
thres_avl_size_perc_stop = 60 (IntOpt) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option.
Note
Additional NetApp NFS configuration options are shared with the generic NFS driver. For a description of these, see Table 2.20, “Description of NFS storage configuration options”.
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

2.1.16.3. NetApp E-Series storage family

The NetApp E-Series storage family represents a configuration group which provides OpenStack compute instances access to E-Series storage systems. At present it can be configured in OpenStack Block Storage to work with the iSCSI storage protocol.
2.1.16.3.1. NetApp iSCSI configuration for E-Series
The NetApp iSCSI configuration for E-Series is an interface from OpenStack to E-Series storage systems for provisioning and managing the SAN block storage entity; that is, a NetApp LUN which can be accessed using the iSCSI protocol.
The iSCSI configuration for E-Series is an interface from OpenStack Block Storage to the E-Series proxy instance and as such requires the deployment of the proxy instance in order to achieve the desired functionality. The driver uses REST APIs to interact with the E-Series proxy instance, which in turn interacts directly with the E-Series controllers.
The use of multipath with DM-MP is required when using the OpenStack Block Storage driver for E-Series. In order for OpenStack Block Storage and OpenStack Compute to take advantage of multiple paths, the following configuration options must be correctly configured:
  • The use_multipath_for_image_xfer option should be set to True in the cinder.conf file within the driver-specific stanza (for example, [myDriver]).
  • The iscsi_use_multipath option should be set to True in the nova.conf file within the [libvirt] stanza.
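In configuration-file form, these two settings look like the following, where [myDriver] is the example back-end stanza name from above:
# cinder.conf, in the driver-specific stanza
[myDriver]
use_multipath_for_image_xfer = True

# nova.conf
[libvirt]
iscsi_use_multipath = True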
Configuration options for E-Series storage family with iSCSI protocol
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, E-Series, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = eseries
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
netapp_controller_ips = 1.2.3.4,5.6.7.8
netapp_sa_password = arrayPassword
netapp_storage_pools = pool1,pool2
use_multipath_for_image_xfer = True
Note
To use the E-Series driver, you must override the default value of netapp_storage_family with eseries.
Note
To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.

Table 2.19. Description of NetApp E-Series driver configuration options

Configuration option = Default value Description
[DEFAULT]
netapp_controller_ips = None (StrOpt) This option is only utilized when the storage family is configured to eseries. This option is used to restrict provisioning to the specified controllers. Specify the value of this option to be a comma separated list of controller hostnames or IP addresses to be used for provisioning.
netapp_enable_multiattach = False (BoolOpt) This option specifies whether the driver should allow operations that require multiple attachments to a volume. An example would be live migration of servers that have volumes attached. When enabled, this backend is limited to 256 total volumes in order to guarantee volumes can be accessed by more than one host.
netapp_host_type = None (StrOpt) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts.
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (StrOpt) The name of the cinder.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_sa_password = None (StrOpt) Password for the NetApp E-Series storage array.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_webservice_path = /devmgr/v2 (StrOpt) This option is used to specify the path to the E-Series proxy application on a proxy server. The value is combined with the value of the netapp_transport_type, netapp_server_hostname, and netapp_server_port options to create the URL used by the driver to connect to the proxy application.
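As a sketch of how these options combine, the following shows one way the proxy URL can be assembled from netapp_transport_type, netapp_server_hostname, netapp_server_port, and netapp_webservice_path. The defaults mirror the option table above; the driver's actual internals may differ.

```python
def eseries_proxy_url(transport="http", host="myhostname",
                      port=None, webservice_path="/devmgr/v2"):
    if port is None:
        # E-Series defaults when netapp_server_port is unset
        port = 8080 if transport == "http" else 8443
    return "%s://%s:%d%s" % (transport, host, port, webservice_path)

print(eseries_proxy_url())
print(eseries_proxy_url(transport="https"))
```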
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

2.1.16.4. Upgrading prior NetApp drivers to the NetApp unified driver

NetApp introduced a new unified block storage driver in Havana for configuring different storage families and storage protocols. This requires an upgrade path for the NetApp drivers that existed in releases prior to Havana. This section covers the upgrade configuration from those drivers to the new unified configuration, and lists the deprecated NetApp drivers.
2.1.16.4.1. Upgraded NetApp drivers
This section describes how to update OpenStack Block Storage configuration from a pre-Havana release to the unified driver format.
Driver upgrade configuration
  1. NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier).
    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver
    NetApp unified driver configuration.
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = iscsi
  2. NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or earlier).
    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
    NetApp unified driver configuration.
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = nfs
  3. NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier).
    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
    NetApp unified driver configuration.
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_7mode
    netapp_storage_protocol = iscsi
  4. NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier).
    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
    NetApp unified driver configuration.
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_7mode
    netapp_storage_protocol = nfs
2.1.16.4.2. Deprecated NetApp drivers
This section lists the NetApp drivers in earlier releases that are deprecated in Havana.
  1. NetApp iSCSI driver for clustered Data ONTAP.
    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppCmodeISCSIDriver
  2. NetApp NFS driver for clustered Data ONTAP.
    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver
  3. NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage controller.
    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver
  4. NetApp NFS driver for Data ONTAP operating in 7-Mode storage controller.
    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
Note
For support information on deprecated NetApp drivers in the Havana release, visit the NetApp OpenStack Deployment and Operations Guide.

2.1.17. NFS driver

The Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984. An NFS server exports one or more of its file systems, known as shares. An NFS client can mount these exported shares on its own file system. You can perform file actions on this mounted remote file system as if the file system were local.

2.1.17.1. How the NFS driver works

The NFS driver, and other drivers based on it, work quite differently from a traditional block storage driver.
The NFS driver does not actually allow an instance to access a storage device at the block level. Instead, files are created on an NFS share and mapped to instances, which emulates a block device. This works in a similar way to QEMU, which stores instances in the /var/lib/nova/instances directory.
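The file-as-block-device mechanism can be sketched with plain shell commands; the path below is illustrative, not a real cinder mount point:

```shell
# Conceptual sketch only: the NFS driver creates each volume as a file on
# the mounted share, and the hypervisor presents that file to the instance
# as a block device. A sparse file (compare nfs_sparsed_volumes = True)
# has a large apparent size but consumes almost no space until written.
mkdir -p /tmp/nfs-demo
truncate -s 5G /tmp/nfs-demo/volume-demo
ls -lh /tmp/nfs-demo/volume-demo   # apparent size: 5G
du -h /tmp/nfs-demo/volume-demo    # actual usage: near zero
```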

2.1.17.2. Enable the NFS driver and related options

To use Cinder with the NFS driver, first set the volume_driver in cinder.conf:
volume_driver=cinder.volume.drivers.nfs.NfsDriver
The following table contains the options supported by the NFS driver.

Table 2.20. Description of NFS storage configuration options

Configuration option = Default value Description
[DEFAULT]
nfs_mount_attempts = 3 (IntOpt) The number of attempts to mount nfs shares before raising an error. At least one attempt will be made to mount an nfs share, regardless of the value specified.
nfs_mount_options = None (StrOpt) Mount options passed to the nfs client. See the nfs man page for details.
nfs_mount_point_base = $state_path/mnt (StrOpt) Base dir containing mount points for nfs shares.
nfs_oversub_ratio = 1.0 (FloatOpt) Compares the allocated space to the available space on the volume destination. If the ratio exceeds this number, the destination is no longer valid. Note that this option is deprecated in favor of "max_oversubscription_ratio" and will be removed in the Mitaka release.
nfs_shares_config = /etc/cinder/nfs_shares (StrOpt) File with the list of available nfs shares
nfs_sparsed_volumes = True (BoolOpt) Create volumes as sparse files, which take up no space until data is written. If set to False, a volume is created as a regular file; in that case, volume creation takes considerably longer.
nfs_used_ratio = 0.95 (FloatOpt) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination. Note that this option is deprecated in favor of "reserved_percentage" and will be removed in the Mitaka release.
Note
As of the Icehouse release, the NFS driver (and other drivers based off it) will attempt to mount shares using version 4.1 of the NFS protocol (including pNFS). If the mount attempt is unsuccessful due to a lack of client or server support, a subsequent mount attempt that requests the default behavior of the mount.nfs command will be performed. On most distributions, the default behavior is to attempt mounting first with NFS v4.0, then silently fall back to NFS v3.0 if necessary. If the nfs_mount_options configuration option contains a request for a specific version of NFS to be used, or if specific options are specified in the shares configuration file specified by the nfs_shares_config configuration option, the mount will be attempted as requested with no subsequent attempts.
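For example, to bypass the negotiation described above and request NFS version 3 explicitly (assuming your server supports it), you might set the mount options in cinder.conf:

```ini
nfs_mount_options = vers=3
```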

2.1.17.3. How to use the NFS driver

  1. Access to one or more NFS servers. Creating an NFS server is outside the scope of this document. This example assumes access to the following NFS servers and mount points:
    • 192.168.1.200:/storage
    • 192.168.1.201:/storage
    • 192.168.1.202:/storage
    This example demonstrates the use of this driver with multiple NFS servers. Multiple servers are not required; one is usually sufficient.
  2. Add your list of NFS servers to the file you specified with the nfs_shares_config option. For example, if the value of this option was set to /etc/cinder/shares.txt, then:
    # cat /etc/cinder/shares.txt
    192.168.1.200:/storage
    192.168.1.201:/storage
    192.168.1.202:/storage
    Comments are allowed in this file. They begin with a #.
  3. Configure the nfs_mount_point_base option. This is a directory where cinder-volume mounts all NFS shares stored in shares.txt. For this example, /var/lib/cinder/nfs is used. You can, of course, use the default value of $state_path/mnt.
  4. Start the cinder-volume service. /var/lib/cinder/nfs should now contain a directory for each NFS share specified in shares.txt. The name of each directory is a hashed name:
    # ls /var/lib/cinder/nfs/
    ... 46c5db75dc3a3a50a10bfd1a456a9f3f ...
  5. You can now create volumes as you normally would:
    $ nova volume-create --display-name myvol 5
    # ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f
    volume-a8862558-e6d6-4648-b5df-bb84f31c8935
    This volume can also be attached and deleted just like other volumes. However, snapshotting is not supported.
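The hashed directory names shown in step 4 can be sketched as follows: Cinder's remote-filesystem drivers derive each mount directory name from an MD5 hex digest of the export string, so every share gets a stable, unique directory. The function name mount_dir_name is illustrative, not the driver's actual API.

```python
import hashlib

def mount_dir_name(export):
    # MD5 hex digest of the export string, e.g. "192.168.1.200:/storage"
    return hashlib.md5(export.encode("utf-8")).hexdigest()

for share in ("192.168.1.200:/storage",
              "192.168.1.201:/storage",
              "192.168.1.202:/storage"):
    print(share, "->", mount_dir_name(share))
```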

NFS driver notes

  • cinder-volume manages the mounting of the NFS shares as well as volume creation on the shares. Keep this in mind when planning your OpenStack architecture. If you have one master NFS server, it might make sense to only have one cinder-volume service to handle all requests to that NFS server. However, if that single server is unable to handle all requests, more than one cinder-volume service is needed as well as potentially more than one NFS server.
  • Because data is stored in a file and not actually on a block storage device, you might not see the same IO performance as you would with a traditional block storage driver. Test accordingly.
  • Despite possible IO performance loss, having volume data stored in a file might be beneficial. For example, backing up volumes can be as easy as copying the volume files.
    Note
    Regular I/O flushing and syncing still apply.

2.1.18. SolidFire

The SolidFire Cluster is a high-performance, all-SSD iSCSI storage device that provides massive scale-out capability and extreme fault tolerance. A key feature of the SolidFire cluster is the ability to set, and modify during operation, specific QoS levels on a volume-by-volume basis. The SolidFire cluster offers this along with deduplication, compression, and an architecture that takes full advantage of SSDs.
To configure the use of a SolidFire cluster with Block Storage, modify your cinder.conf file as follows:
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 172.17.1.182         # the address of your MVIP
san_login = sfadmin           # your cluster admin login
san_password = sfpassword     # your cluster admin password
sf_account_prefix = ''        # prefix for tenant account creation on solidfire cluster
Warning
Older versions of the SolidFire driver (prior to Icehouse) created a unique account prefixed with $cinder-volume-service-hostname-$tenant-id on the SolidFire cluster for each tenant. Unfortunately, this account naming scheme caused issues for High Availability (HA) installations and installations where the cinder-volume service can move to a new node. The current default implementation does not experience this issue because no prefix is used. For installations created on a prior release, the old default behavior can be restored by setting sf_account_prefix to the keyword "hostname".
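For example, an installation upgraded from a pre-Icehouse release that depends on the old per-host account naming can restore it with:

```ini
sf_account_prefix = hostname
```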

Table 2.21. Description of SolidFire driver configuration options

Configuration option = Default value Description
[DEFAULT]
sf_account_prefix = None (StrOpt) Create SolidFire accounts with this prefix. Any string can be used here, but the string "hostname" is special and will create a prefix using the cinder node hostname (previous default behavior). The default is NO prefix.
sf_allow_template_caching = True (BoolOpt) Create an internal cache of copies of images when a bootable volume is created, to eliminate fetching from glance and qemu conversion on subsequent calls.
sf_allow_tenant_qos = False (BoolOpt) Allow tenants to specify QoS on create.
sf_api_port = 443 (IntOpt) SolidFire API port. Useful if the device API is behind a proxy on a different port.
sf_emulate_512 = True (BoolOpt) Set 512 byte emulation on volume creation.
sf_enable_volume_mapping = True (BoolOpt) Create an internal mapping of volume IDs and accounts. This optimizes lookups and performance at the expense of memory; very large deployments may want to set this to False.
sf_svip = None (StrOpt) Overrides the default cluster SVIP with the one specified. This is required for deployments that use VLANs for iSCSI networks in their cloud.
sf_template_account_name = openstack-vtemplate (StrOpt) Account name on the SolidFire Cluster to use as owner of template/cache volumes (created if does not exist).

2.1.19. Tintri

Tintri VMstore is smart storage that sees, learns, and adapts for cloud and virtualization. The Tintri Cinder driver interacts with a configured VMstore running Tintri OS 4.0 or later. It supports various operations using the Tintri REST API and the NFS protocol.
To configure the use of a Tintri VMstore with Block Storage, perform the following actions:
  1. Edit the etc/cinder/cinder.conf file and set the cinder.volume.drivers.tintri options:
    volume_driver=cinder.volume.drivers.tintri.TintriDriver
    # Mount options passed to the nfs client. See the nfs man
    # page for details. (string value)
    nfs_mount_options=vers=3,lookupcache=pos
    
    #
    # Options defined in cinder.volume.drivers.tintri
    #
    
    # The hostname (or IP address) for the storage system (string
    # value)
    tintri_server_hostname={Tintri VMstore Management IP}
    
    # User name for the storage system (string value)
    tintri_server_username={username}
    
    # Password for the storage system (string value)
    tintri_server_password={password}
    
    # API version for the storage system (string value)
    #tintri_api_version=v310
    
    # Following options needed for NFS configuration
    # File with the list of available nfs shares (string value)
    #nfs_shares_config=/etc/cinder/nfs_shares
  2. Edit the etc/nova/nova.conf file, and set the nfs_mount_options:
    nfs_mount_options=vers=3
  3. Edit the /etc/cinder/nfs_shares file, and add the Tintri VMstore mount points associated with the configured VMstore management IP in the cinder.conf file:
    {vmstore_data_ip}:/tintri/{submount1}
    {vmstore_data_ip}:/tintri/{submount2}

Table 2.22. Description of Tintri volume driver configuration options

Configuration option = Default value Description
[DEFAULT]
tintri_api_version = v310 (StrOpt) API version for the storage system
tintri_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system
tintri_server_password = None (StrOpt) Password for the storage system
tintri_server_username = None (StrOpt) User name for the storage system

2.1.20. Violin Memory 7000 Series FSP volume driver

The OpenStack V7000 driver package from Violin Memory adds Block Storage service support for Violin 7300 Flash Storage Platforms (FSPs) and 7700 FSP controllers.
The driver package release can be used with any OpenStack Liberty deployment for all 7300 FSPs and 7700 FSP controllers running Concerto 7.5.3 and later using Fibre Channel HBAs.

2.1.20.1. System requirements

To use the Violin driver, the following are required:
  • Violin 7300/7700 series FSP with:
    • Concerto OS version 7.5.3 or later
    • Fibre channel host interfaces
  • The Violin block storage driver: This driver implements the block storage API calls. The driver is included with the OpenStack Liberty release.
  • The vmemclient library: This is the Violin Array Communications library to the Flash Storage Platform through a REST-like interface. The client can be installed using the python pip installer tool. Further information on vmemclient can be found on PyPI.
    pip install vmemclient
    

2.1.20.2. Supported operations

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
Note
Listed operations are supported for thick, thin, and dedup LUNs, with the exception of cloning. Cloning operations are supported only on thick LUNs.

2.1.20.3. Driver configuration

After the array is configured as described in the installation guide, edit the cinder configuration file to add or modify the parameters. The driver currently supports only Fibre Channel configuration.
2.1.20.3.1. Fibre channel configuration
Set the following in your cinder.conf configuration file, replacing the variables using the guide in the following section:
volume_driver = cinder.volume.drivers.violin.v7000_fcp.V7000FCPDriver
volume_backend_name = vmem_violinfsp
extra_capabilities = VMEM_CAPABILITIES
san_ip = VMEM_MGMT_IP
san_login = VMEM_USER_NAME
san_password = VMEM_PASSWORD
use_multipath_for_image_xfer = true
2.1.20.3.2. Configuration parameters
Description of configuration value placeholders:
VMEM_CAPABILITIES
User-defined capabilities: a JSON-formatted string specifying key-value pairs (string value). The two capabilities supported are dedup and thin. Listing a capability here in the cinder.conf file indicates that this backend can be selected for creating LUNs whose associated volume type has dedup or thin specified in its extra_specs. For example, if the FSP is configured to support dedup LUNs, set the associated driver capabilities to: {"dedup":"True","thin":"True"}.
VMEM_MGMT_IP
External IP address or host name of the Violin 7300 Memory Gateway.
VMEM_USER_NAME
Log-in user name for the Violin 7300 Memory Gateway or 7700 FSP controller. This user must have administrative rights on the array or controller.
VMEM_PASSWORD
Log-in user's password.
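Putting the placeholders together, a completed Fibre Channel backend configuration might look like the following; every value shown is illustrative and must be replaced with your own:

```ini
volume_driver = cinder.volume.drivers.violin.v7000_fcp.V7000FCPDriver
volume_backend_name = vmem_violinfsp
extra_capabilities = {"dedup":"True","thin":"True"}
san_ip = 192.0.2.10
san_login = admin
san_password = examplePassword
use_multipath_for_image_xfer = true
```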


[1] Volume extension is executable only when you use TPP as a storage pool.
[2] The configuration file location may differ.
[3] There is no relative precedence or weight among these four labels.