3. Volume drivers
To use different volume drivers for the cinder-volume service, use the parameters described in these sections.
The volume drivers are included in the Block Storage repository (https://github.com/openstack/cinder). To set a volume driver, use the volume_driver flag. The default is:
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
3.1. Ceph RADOS Block Device (RBD)
If you use KVM or QEMU as your hypervisor, you can configure the Compute service to use Ceph RADOS block devices (RBD) for volumes.
Ceph is a massively scalable, open source, distributed storage system. It comprises an object store, a block store, and a POSIX-compliant distributed file system. The platform can auto-scale to the exabyte level and beyond. It runs on commodity hardware, is self-healing and self-managing, and has no single point of failure. Ceph is in the Linux kernel and is integrated with the OpenStack cloud operating system. Due to its open-source nature, you can install and use this portable storage platform in public or private clouds.
Figure 1.1. Ceph architecture

RADOS
Ceph is based on RADOS: Reliable Autonomic Distributed Object Store. RADOS distributes objects across the storage cluster and replicates objects for fault tolerance. RADOS contains the following major components:
Object Storage Device (OSD) Daemon. The storage daemon for the RADOS service, which interacts with the OSD (physical or logical storage unit for your data).
You must run this daemon on each server in your cluster. For each OSD, you can have an associated hard disk drive. For performance purposes, pool your hard disk drives with RAID arrays, logical volume management (LVM), or B-tree file system (Btrfs) pooling. By default, the following pools are created: data, metadata, and RBD.
Meta-Data Server (MDS). Stores metadata. MDSs build a POSIX file system on top of objects for Ceph clients. However, if you do not use the Ceph file system, you do not need a metadata server.
Monitor (MON). A lightweight daemon that handles all communications with external applications and clients. It also provides a consensus for distributed decision making in a Ceph/RADOS cluster. For instance, when you mount a Ceph share on a client, you point to the address of a MON server. It checks the state and the consistency of the data. In an ideal setup, you must run at least three ceph-mon daemons on separate servers.
Ceph developers recommend that you use Btrfs as a file system for storage. XFS might be a better alternative for production environments; it is an excellent alternative to Btrfs. The ext4 file system is also compatible but does not exploit the power of Ceph.
If using Btrfs, ensure that you use the correct version (see Ceph Dependencies).
For more information about usable file systems, see ceph.com/ceph-storage/file-system/.
Ways to store, use, and expose data
To store and access your data, you can use the following storage systems:
RADOS. Use as an object, default storage mechanism.
RBD. Use as a block device. The Linux kernel RBD (rados block device) driver allows striping a Linux block device over multiple distributed object store data objects. It is compatible with the KVM RBD image.
CephFS. Use as a file, POSIX-compliant file system.
Ceph exposes RADOS; you can access it through the following interfaces:
RADOS Gateway. OpenStack Object Storage and Amazon-S3 compatible RESTful interface (see RADOS_Gateway).
librados, and its related C/C++ bindings.
rbd and QEMU-RBD. Linux kernel and QEMU block devices that stripe data across multiple objects.
Driver options
The following table contains the configuration options supported by the Ceph RADOS Block Device driver.
Table 1.1. Description of configuration options for storage_ceph
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| rbd_ceph_conf = | (StrOpt) Path to the ceph configuration file to use. |
| rbd_flatten_volume_from_snapshot = False | (BoolOpt) Flatten volumes created from snapshots to remove dependency. |
| rbd_max_clone_depth = 5 | (IntOpt) Maximum number of nested clones that can be taken of a volume before enforcing a flatten prior to next clone. A value of zero disables cloning. |
| rbd_pool = rbd | (StrOpt) The RADOS pool in which rbd volumes are stored. |
| rbd_secret_uuid = None | (StrOpt) The libvirt uuid of the secret for the rbd_user volumes. |
| rbd_user = None | (StrOpt) The RADOS client name for accessing rbd volumes - only set when using cephx authentication. |
| volume_tmp_dir = None | (StrOpt) Where to store temporary image files if the volume driver does not write them directly to the volume. |
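For reference, a minimal cinder.conf sketch that combines these options might look like the following, assuming the RBD driver class path cinder.volume.drivers.rbd.RBDDriver; the pool, user, configuration file path, and secret UUID shown are placeholders for the values from your own Ceph deployment:
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_user=cinder
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337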
3.2. Dell EqualLogic volume driver
The Dell EqualLogic volume driver interacts with configured Dell EqualLogic Groups and supports various operations, including:
Volume creation, deletion, and extension
Volume attachment and detachment
Snapshot creation and deletion
Clone creation
The OpenStack Block Storage service supports multiple instances of Dell EqualLogic Groups or Dell EqualLogic Group Storage Pools, and/or multiple pools on a single array.
The Dell EqualLogic volume driver's ability to access the EqualLogic Group is dependent upon the generic block storage driver's SSH settings in the /etc/cinder/cinder.conf file (see Section 5, “Block Storage sample configuration files” for reference).
Table 1.2. Description of configuration options for eqlx
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| eqlx_chap_login = admin | (StrOpt) Existing CHAP account name |
| eqlx_chap_password = password | (StrOpt) Password for specified CHAP account name |
| eqlx_cli_max_retries = 5 | (IntOpt) Maximum retry count for reconnection |
| eqlx_cli_timeout = 30 | (IntOpt) Timeout for the Group Manager cli command execution |
| eqlx_group_name = group-0 | (StrOpt) Group name to use for creating volumes |
| eqlx_pool = default | (StrOpt) Pool in which volumes will be created |
| eqlx_use_chap = False | (BoolOpt) Use CHAP authentication for targets? |
The following sample /etc/cinder/cinder.conf configuration displays the relevant settings for a typical Block Storage service using a single Dell EqualLogic Group:
Example 1.1. Default (single-instance) configuration
[DEFAULT]
#Required settings
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
san_ip=IP_EQLX
san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL
#Optional settings
san_thin_provision=true
eqlx_use_chap=true|false
eqlx_chap_login=EQLX_UNAME
eqlx_chap_password=EQLX_PW
eqlx_cli_timeout=30
eqlx_cli_max_retries=5
san_ssh_port=22
ssh_conn_timeout=30
san_private_key=SAN_KEY_PATH
ssh_min_pool_conn=1
ssh_max_pool_conn=5
In this example, replace the following variables accordingly:
- IP_EQLX
The IP address used to reach the Dell EqualLogic Group through SSH. This field has no default value.
- SAN_UNAME
The user name to log in to the Group manager via SSH at the san_ip. The default user name is grpadmin.
- SAN_PW
The corresponding password of SAN_UNAME. Not used when san_private_key is set. The default password is password.
- EQLX_GROUP
The group to be used for a pool where the Block Storage service will create volumes and snapshots. The default group is group-0.
- EQLX_POOL
The pool where the Block Storage service will create volumes and snapshots. The default pool is default. This option cannot be used for multiple pools utilized by the Block Storage service on a single Dell EqualLogic Group.
- EQLX_UNAME
The CHAP login account for each volume in a pool, if eqlx_use_chap is set to true. The default account name is chapadmin.
- EQLX_PW
The corresponding password of EQLX_UNAME. The default password is randomly generated in hexadecimal, so you must set this password manually.
- SAN_KEY_PATH (optional)
The filename of the private key used for SSH authentication. This provides password-less login to the EqualLogic Group. Not used when san_password is set. There is no default value.
In addition, we recommend that you enable thin provisioning for SAN volumes. To do so, use the default san_thin_provision=true setting.
For information on how to configure a Block Storage service with multiple Dell EqualLogic back-ends, refer to the Cloud Administrator Guide.
3.3. GlusterFS driver
GlusterFS is an open-source scalable distributed file system that is able to grow to petabytes and beyond in size. More information can be found on Gluster's homepage.
This driver enables use of GlusterFS in a similar fashion as the NFS driver. It supports basic volume operations, and like NFS, does not support snapshot/clone.
You must use a Linux kernel of version 3.4 or greater (or version 2.6.32 or greater in Red Hat Enterprise Linux/CentOS 6.3+) when working with Gluster-based volumes. See Bug 1177103 for more information.
To use Block Storage with GlusterFS, first set the volume_driver in cinder.conf:
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
The following table contains the configuration options supported by the GlusterFS driver.
Table 1.3. Description of configuration options for storage_glusterfs
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| glusterfs_mount_point_base = $state_path/mnt | (StrOpt) Base dir containing mount points for gluster shares. |
| glusterfs_qcow2_volumes = False | (BoolOpt) Create volumes as QCOW2 files rather than raw files. |
| glusterfs_shares_config = /etc/cinder/glusterfs_shares | (StrOpt) File with the list of available gluster shares |
| glusterfs_sparsed_volumes = True | (BoolOpt) Create volumes as sparsed files which take no space. If set to False, the volume is created as a regular file. In such a case, volume creation takes a lot of time. |
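For illustration, the file referenced by glusterfs_shares_config (the default is /etc/cinder/glusterfs_shares) lists one Gluster share per line; the host addresses and volume name below are placeholders:
192.168.1.200:/cinder-volumes
192.168.1.201:/cinder-volumes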
3.4. HP MSA Fibre Channel driver
The HP MSA Fibre Channel driver runs volume operations on the storage array over HTTP.
A VDisk must be created on the HP MSA array first. This can be done using the web interface or the command-line interface of the array.
The following options must be defined in the cinder-volume configuration file (/etc/cinder/cinder.conf):
Set the volume_driver option to cinder.volume.drivers.san.hp.hp_msa_fc.HPMSAFCDriver.
Set the san_ip option to the hostname or IP address of your HP MSA array.
Set the san_login option to the login of an existing user of the HP MSA array.
Set the san_password option to the password for this user.
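Put together, the relevant entries in /etc/cinder/cinder.conf look similar to the following sketch; the address and credentials shown are placeholders for your own array:
volume_driver=cinder.volume.drivers.san.hp.hp_msa_fc.HPMSAFCDriver
san_ip=192.168.1.10
san_login=manage
san_password=manage_password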
3.5. LVM
The default volume back-end uses local volumes managed by LVM.
This driver supports different transport protocols to attach volumes, currently iSCSI and iSER.
Set the following in your cinder.conf file to use the iSCSI transport:
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
and for the iSER transport:
volume_driver=cinder.volume.drivers.lvm.LVMISERDriver
Table 1.4. Description of configuration options for lvm
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| lvm_mirrors = 0 | (IntOpt) If set, create lvms with multiple mirrors. Note that this requires lvm_mirrors + 2 pvs with available space |
| lvm_type = default | (StrOpt) Type of LVM volumes to deploy; (default or thin) |
| volume_group = cinder-volumes | (StrOpt) Name for the VG that will contain exported volumes |
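For reference, a minimal LVM back-end configuration in cinder.conf might look like the following sketch. It assumes a volume group named cinder-volumes has already been created on the node (for example, with vgcreate) and simply restates the defaults from the table above:
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group=cinder-volumes
lvm_type=default
lvm_mirrors=0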
3.6. NetApp unified driver
The NetApp unified driver is a block storage driver that supports multiple storage families and protocols. A storage family corresponds to storage systems built on different NetApp technologies such as clustered Data ONTAP, Data ONTAP operating in 7-Mode, and E-Series. The storage protocol refers to the protocol used to initiate data storage and access operations on those storage systems like iSCSI and NFS. The NetApp unified driver can be configured to provision and manage OpenStack volumes on a given storage family using a specified storage protocol. The OpenStack volumes can then be used for accessing and storing data using the storage protocol on the storage family system. The NetApp unified driver is an extensible interface that can support new storage families and protocols.
3.6.1. NetApp clustered Data ONTAP storage family
The NetApp clustered Data ONTAP storage family represents a configuration group which provides OpenStack compute instances access to clustered Data ONTAP storage systems. At present it can be configured in OpenStack Block Storage to work with iSCSI and NFS storage protocols.
3.6.1.1. NetApp iSCSI configuration for clustered Data ONTAP
The NetApp iSCSI configuration for clustered Data ONTAP is an interface from OpenStack to clustered Data ONTAP storage systems for provisioning and managing the SAN block storage entity; that is, a NetApp LUN which can be accessed using the iSCSI protocol.
The iSCSI configuration for clustered Data ONTAP is a direct interface from OpenStack Block Storage to the clustered Data ONTAP instance and as such does not require additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configuration options for clustered Data ONTAP family with iSCSI protocol
Configure the volume driver, storage family and storage protocol to the NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=iscsi
netapp_vserver=openstack-vserver
netapp_server_hostname=myhostname
netapp_server_port=80
netapp_login=username
netapp_password=password
You must override the default value of netapp_storage_protocol with iscsi in order to utilize the iSCSI protocol.
Table 1.5. Description of configuration options for netapp_cdot_iscsi
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| netapp_login = None | (StrOpt) Administrative user account name used to access the storage system or proxy server. |
| netapp_password = None | (StrOpt) Password for the administrative user account specified in the netapp_login option. |
| netapp_server_hostname = None | (StrOpt) The hostname (or IP address) for the storage system or proxy server. |
| netapp_server_port = 80 | (IntOpt) The TCP port to use for communication with the storage system or proxy server. Traditionally, port 80 is used for HTTP and port 443 is used for HTTPS; however, this value should be changed if an alternate port has been configured on the storage system or proxy server. |
| netapp_size_multiplier = 1.2 | (FloatOpt) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. |
| netapp_storage_family = ontap_cluster | (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
| netapp_storage_protocol = None | (StrOpt) The storage protocol to be used on the data path with the storage system; valid values are iscsi or nfs. |
| netapp_transport_type = http | (StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values are http or https. |
| netapp_vserver = None | (StrOpt) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of Block Storage volumes should occur. If using the NFS storage protocol, this parameter is mandatory for storage service catalog support (utilized by Block Storage volume type extra_specs support). If this option is specified, the exports belonging to the Vserver will only be used for provisioning in the future. Block storage volumes on exports not belonging to the Vserver specified by this option will continue to function normally. |
If you specify an account in the netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the OpenStack Block Storage logs.
For more information on these options and other deployment and operational scenarios, visit the OpenStack NetApp community.
3.6.1.2. NetApp NFS configuration for clustered Data ONTAP
The NetApp NFS configuration for clustered Data ONTAP is an interface from OpenStack to a clustered Data ONTAP system for provisioning and managing OpenStack volumes on NFS exports provided by the clustered Data ONTAP system that are accessed using the NFS protocol.
The NFS configuration for clustered Data ONTAP is a direct interface from OpenStack Block Storage to the clustered Data ONTAP instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configuration options for the clustered Data ONTAP family with NFS protocol
Configure the volume driver, storage family and storage protocol to NetApp unified driver, clustered Data ONTAP, and NFS respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=nfs
netapp_vserver=openstack-vserver
netapp_server_hostname=myhostname
netapp_server_port=80
netapp_login=username
netapp_password=password
nfs_shares_config=/etc/cinder/nfs_shares
Table 1.6. Description of configuration options for netapp_cdot_nfs
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| expiry_thres_minutes = 720 | (IntOpt) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share. |
| netapp_copyoffload_tool_path = None | (StrOpt) This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file. |
| netapp_login = None | (StrOpt) Administrative user account name used to access the storage system or proxy server. |
| netapp_password = None | (StrOpt) Password for the administrative user account specified in the netapp_login option. |
| netapp_server_hostname = None | (StrOpt) The hostname (or IP address) for the storage system or proxy server. |
| netapp_server_port = 80 | (IntOpt) The TCP port to use for communication with the storage system or proxy server. Traditionally, port 80 is used for HTTP and port 443 is used for HTTPS; however, this value should be changed if an alternate port has been configured on the storage system or proxy server. |
| netapp_storage_family = ontap_cluster | (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
| netapp_storage_protocol = None | (StrOpt) The storage protocol to be used on the data path with the storage system; valid values are iscsi or nfs. |
| netapp_transport_type = http | (StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values are http or https. |
| netapp_vserver = None | (StrOpt) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of Block Storage volumes should occur. If using the NFS storage protocol, this parameter is mandatory for storage service catalog support (utilized by Block Storage volume type extra_specs support). If this option is specified, the exports belonging to the Vserver will only be used for provisioning in the future. Block storage volumes on exports not belonging to the Vserver specified by this option will continue to function normally. |
| thres_avl_size_perc_start = 20 | (IntOpt) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned. |
| thres_avl_size_perc_stop = 60 | (IntOpt) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option. |
Additional NetApp NFS configuration options are shared with the generic NFS driver. These options can be found here: Table 1.11, “Description of configuration options for storage_nfs”.
If you specify an account in the netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the OpenStack Block Storage logs.
NetApp NFS Copy Offload client
A feature was added in the Icehouse release of the NetApp unified driver that enables Image Service images to be efficiently copied to a destination Block Storage volume. When the Block Storage and Image Service are configured to use the NetApp NFS Copy Offload client, a controller-side copy will be attempted before reverting to downloading the image from the Image Service. This improves image provisioning times while reducing the consumption of bandwidth and CPU cycles on the host(s) running the Image and Block Storage services. This is due to the copy operation being performed completely within the storage cluster.
The NetApp NFS Copy Offload client can be used in either of the following scenarios:
The Image Service is configured to store images in an NFS share that is exported from a NetApp FlexVol volume and the destination for the new Block Storage volume will be on an NFS share exported from a different FlexVol volume than the one used by the Image Service. Both FlexVols must be located within the same cluster.
The source image from the Image Service has already been cached in an NFS image cache within a Block Storage backend. The cached image resides on a different FlexVol volume than the destination for the new Block Storage volume. Both FlexVols must be located within the same cluster.
To use this feature, you must configure the Image Service, as follows:
Set the default_store configuration option to file.
Set the filesystem_store_datadir configuration option to the path to the Image Service NFS export.
Set the show_image_direct_url configuration option to True.
Set the show_multiple_locations configuration option to True.
Set the filesystem_store_metadata_file configuration option to a metadata file. The metadata file should contain a JSON object that contains the correct information about the NFS export used by the Image Service, similar to:
{
    "share_location": "nfs://192.168.0.1/myGlanceExport",
    "mount_point": "/var/lib/glance/images",
    "type": "nfs"
}
To use this feature, you must configure the Block Storage service, as follows:
Set the netapp_copyoffload_tool_path configuration option to the path to the NetApp Copy Offload binary.
Set the glance_api_version configuration option to 2.
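A combined sketch of the two configurations might look as follows, assuming the Image Service API options live in glance-api.conf; the metadata file path and the copy offload binary path are placeholders for your own installation:
# glance-api.conf
default_store=file
filesystem_store_datadir=/var/lib/glance/images
show_image_direct_url=True
show_multiple_locations=True
filesystem_store_metadata_file=/etc/glance/filesystem_store_metadata.json
# cinder.conf
netapp_copyoffload_tool_path=/usr/local/bin/copyoffload_tool
glance_api_version=2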
This feature requires that:
The storage system must have Data ONTAP v8.2 or greater installed.
The vStorage feature must be enabled on each storage virtual machine (SVM, also known as a Vserver) that is permitted to interact with the copy offload client.
To configure the copy offload workflow, enable NFS v4.0 or greater and export it from the SVM.
To download the NetApp copy offload binary to be utilized in conjunction with the netapp_copyoffload_tool_path configuration option, please visit the download page at the NetApp OpenStack Community site.
For more information on these options and other deployment and operational scenarios, visit the OpenStack NetApp community.
3.6.1.3. NetApp-supported extra specs for clustered Data ONTAP
Extra specs enable vendors to specify extra filter criteria that the Block Storage scheduler uses when it determines which volume node should fulfill a volume provisioning request. When you use the NetApp unified driver with a clustered Data ONTAP storage system, you can leverage extra specs with OpenStack Block Storage volume types to ensure that OpenStack Block Storage volumes are created on storage back ends that have certain properties. For example, when you configure QoS, mirroring, or compression for a storage back end.
Extra specs are associated with OpenStack Block Storage volume types, so that when users request volumes of a particular volume type, the volumes are created on storage back ends that meet the list of requirements. For example, the back ends have the available space or extra specs. You can use the specs in the following table when you define OpenStack Block Storage volume types by using the cinder type-key command.
Table 1.7. Description of extra specs options for NetApp Unified Driver with Clustered Data ONTAP
| Extra spec | Type | Description |
|---|---|---|
| netapp:raid_type | String | Limit the candidate volume list based on one of the following raid types: raid4, raid_dp. |
| netapp:disk_type | String | Limit the candidate volume list based on one of the following disk types: ATA, BSAS, EATA, FCAL, FSAS, LUN, MSATA, SAS, SATA, SCSI, XATA, XSAS, or SSD. |
| netapp:qos_policy_group | String | Limit the candidate volume list based on the name of a QoS policy group, which defines measurable Service Level Objectives that apply to the storage objects with which the policy group is associated. |
| netapp_mirrored [a] | Boolean | Limit the candidate volume list to only the ones that are mirrored on the storage controller. |
| netapp_unmirrored [a] | Boolean | Limit the candidate volume list to only the ones that are not mirrored on the storage controller. |
| netapp_dedup [a] | Boolean | Limit the candidate volume list to only the ones that have deduplication enabled on the storage controller. |
| netapp_nodedup [a] | Boolean | Limit the candidate volume list to only the ones that have deduplication disabled on the storage controller. |
| netapp_compression [a] | Boolean | Limit the candidate volume list to only the ones that have compression enabled on the storage controller. |
| netapp_nocompression [a] | Boolean | Limit the candidate volume list to only the ones that have compression disabled on the storage controller. |
| netapp_thin_provisioned [a] | Boolean | Limit the candidate volume list to only the ones that support thin provisioning on the storage controller. |
| netapp_thick_provisioned [a] | Boolean | Limit the candidate volume list to only the ones that support thick provisioning on the storage controller. |
[a] If both the positive and negative specs for a pair are specified (for example, netapp_dedup and netapp_nodedup) and set to the same value within a volume type, neither spec is used.
It is recommended to only set the value of extra specs to True when combining multiple specs to enforce a certain logic set. If you desire to remove volumes with a certain feature enabled from consideration from the OpenStack Block Storage volume scheduler, be sure to use the negated spec name with a value of True rather than setting the positive spec to a value of False.
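As a brief illustration of the cinder type-key usage mentioned above, the following hypothetical commands create a volume type that restricts placement to back ends with deduplication and mirroring enabled:
$ cinder type-create netapp-gold
$ cinder type-key netapp-gold set netapp_dedup=true netapp_mirrored=true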
3.6.2. NetApp Data ONTAP operating in 7-Mode storage family
The NetApp Data ONTAP operating in 7-Mode storage family represents a configuration group which provides OpenStack compute instances access to 7-Mode storage systems. At present it can be configured in OpenStack Block Storage to work with iSCSI and NFS storage protocols.
3.6.2.1. NetApp iSCSI configuration for Data ONTAP operating in 7-Mode
The NetApp iSCSI configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to Data ONTAP operating in 7-Mode storage systems for provisioning and managing the SAN block storage entity, that is, a LUN which can be accessed using iSCSI protocol.
The iSCSI configuration for Data ONTAP operating in 7-Mode is a direct interface from OpenStack to Data ONTAP operating in 7-Mode storage system and it does not require additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.
Configuration options for the Data ONTAP operating in 7-Mode storage family with iSCSI protocol
Configure the volume driver, storage family and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_7mode
netapp_storage_protocol=iscsi
netapp_server_hostname=myhostname
netapp_server_port=80
netapp_login=username
netapp_password=password
You must override the default value of netapp_storage_protocol with iscsi in order to utilize the iSCSI protocol.
Table 1.8. Description of configuration options for netapp_7mode_iscsi
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| netapp_login = None | (StrOpt) Administrative user account name used to access the storage system or proxy server. |
| netapp_password = None | (StrOpt) Password for the administrative user account specified in the netapp_login option. |
| netapp_server_hostname = None | (StrOpt) The hostname (or IP address) for the storage system or proxy server. |
| netapp_server_port = 80 | (IntOpt) The TCP port to use for communication with the storage system or proxy server. Traditionally, port 80 is used for HTTP and port 443 is used for HTTPS; however, this value should be changed if an alternate port has been configured on the storage system or proxy server. |
| netapp_size_multiplier = 1.2 | (FloatOpt) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. |
| netapp_storage_family = ontap_cluster | (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
| netapp_storage_protocol = None | (StrOpt) The storage protocol to be used on the data path with the storage system; valid values are iscsi or nfs. |
| netapp_transport_type = http | (StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values are http or https. |
| netapp_vfiler = None | (StrOpt) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode and the storage protocol selected is iSCSI. Only use this option when utilizing the MultiStore feature on the NetApp storage system. |
| netapp_volume_list = None | (StrOpt) This option is only utilized when the storage protocol is configured to use iSCSI. This option is used to restrict provisioning to the specified controller volumes. Specify the value of this option to be a comma separated list of NetApp controller volume names to be used for provisioning. |
For more information on these options and other deployment and operational scenarios, visit the OpenStack NetApp community.
3.6.2.2. NetApp NFS configuration for Data ONTAP operating in 7-Mode
The NetApp NFS configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to Data ONTAP operating in 7-Mode storage system for provisioning and managing OpenStack volumes on NFS exports provided by the Data ONTAP operating in 7-Mode storage system which can then be accessed using NFS protocol.
The NFS configuration for Data ONTAP operating in 7-Mode is a direct interface from OpenStack Block Storage to the Data ONTAP operating in 7-Mode instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.
Configuration options for the Data ONTAP operating in 7-Mode family with NFS protocol
Configure the volume driver, storage family and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and NFS respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_7mode
netapp_storage_protocol=nfs
netapp_server_hostname=myhostname
netapp_server_port=80
netapp_login=username
netapp_password=password
nfs_shares_config=/etc/cinder/nfs_shares
Table 1.9. Description of configuration options for netapp_7mode_nfs
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| expiry_thres_minutes = 720 | (IntOpt) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share. |
| netapp_login = None | (StrOpt) Administrative user account name used to access the storage system or proxy server. |
| netapp_password = None | (StrOpt) Password for the administrative user account specified in the netapp_login option. |
| netapp_server_hostname = None | (StrOpt) The hostname (or IP address) for the storage system or proxy server. |
| netapp_server_port = 80 | (IntOpt) The TCP port to use for communication with the storage system or proxy server. Traditionally, port 80 is used for HTTP and port 443 is used for HTTPS; however, this value should be changed if an alternate port has been configured on the storage system or proxy server. |
| netapp_storage_family = ontap_cluster | (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
| netapp_storage_protocol = None | (StrOpt) The storage protocol to be used on the data path with the storage system; valid values are iscsi or nfs. |
| netapp_transport_type = http | (StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values are http or https. |
| thres_avl_size_perc_start = 20 | (IntOpt) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned. |
| thres_avl_size_perc_stop = 60 | (IntOpt) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option. |
Additional NetApp NFS configuration options are shared with the generic NFS driver. These options can be found here: Table 1.11, “Description of configuration options for storage_nfs”.
For more information on these options and other deployment and operational scenarios, visit the OpenStack NetApp community.
3.6.3. NetApp E-Series storage family
The NetApp E-Series storage family represents a configuration group which provides OpenStack compute instances access to E-Series storage systems. At present it can be configured in OpenStack Block Storage to work with the iSCSI storage protocol.
3.6.3.1. NetApp iSCSI configuration for E-Series
The NetApp iSCSI configuration for E-Series is an interface from OpenStack to E-Series storage systems for provisioning and managing the SAN block storage entity; that is, a NetApp LUN which can be accessed using the iSCSI protocol.
The iSCSI configuration for E-Series is an interface from OpenStack Block Storage to the E-Series proxy instance and as such requires the deployment of the proxy instance in order to achieve the desired functionality. The driver uses REST APIs to interact with the E-Series proxy instance, which in turn interacts directly with the E-Series controllers.
Configuration options for E-Series storage family with iSCSI protocol
Configure the volume driver, storage family and storage protocol to the NetApp unified driver, E-Series, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=eseries
netapp_storage_protocol=iscsi
netapp_server_hostname=myhostname
netapp_server_port=80
netapp_login=username
netapp_password=password
netapp_controller_ips=1.2.3.4,5.6.7.8
netapp_sa_password=arrayPassword
netapp_storage_pools=pool1,pool2
You must override the default value of netapp_storage_family with eseries in order to utilize the E-Series driver.
You must override the default value of netapp_storage_protocol with iscsi in order to utilize the iSCSI protocol.
Table 1.10. Description of configuration options for netapp_eseries_iscsi
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| netapp_controller_ips = None | (StrOpt) This option is only utilized when the storage family is configured to eseries. This option is used to restrict provisioning to the specified controllers. Specify the value of this option to be a comma separated list of controller hostnames or IP addresses to be used for provisioning. |
| netapp_login = None | (StrOpt) Administrative user account name used to access the storage system or proxy server. |
| netapp_password = None | (StrOpt) Password for the administrative user account specified in the netapp_login option. |
| netapp_sa_password = None | (StrOpt) Password for the NetApp E-Series storage array. |
| netapp_server_hostname = None | (StrOpt) The hostname (or IP address) for the storage system or proxy server. |
| netapp_server_port = 80 | (IntOpt) The TCP port to use for communication with the storage system or proxy server. Traditionally, port 80 is used for HTTP and port 443 is used for HTTPS; however, this value should be changed if an alternate port has been configured on the storage system or proxy server. |
| netapp_storage_family = ontap_cluster | (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
| netapp_storage_pools = None | (StrOpt) This option is used to restrict provisioning to the specified storage pools. Only dynamic disk pools are currently supported. Specify the value of this option to be a comma separated list of disk pool names to be used for provisioning. |
| netapp_transport_type = http | (StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values are http or https. |
| netapp_webservice_path = /devmgr/v2 | (StrOpt) This option is used to specify the path to the E-Series proxy application on a proxy server. The value is combined with the value of the netapp_transport_type, netapp_server_hostname, and netapp_server_port options to create the URL used by the driver to connect to the proxy application. |
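As an illustration of how the proxy URL is assembled, with netapp_transport_type=https, netapp_server_hostname=myhostname, netapp_server_port=8443, and the default netapp_webservice_path, the driver would contact the proxy application at https://myhostname:8443/devmgr/v2 (the host name and port shown are examples only).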
For more information on these options and other deployment and operational scenarios, visit the OpenStack NetApp community.
3.6.4. Upgrading prior NetApp drivers to the NetApp unified driver
NetApp introduced a new unified block storage driver in Havana for configuring different storage families and storage protocols. This requires defining an upgrade path for the NetApp drivers that existed in releases prior to Havana. This section covers the upgrade configuration from those NetApp drivers to the new unified configuration and lists the deprecated NetApp drivers.
3.6.4.1. Upgraded NetApp drivers
This section describes how to update OpenStack Block Storage configuration from a pre-Havana release to the new unified driver format.
Driver upgrade configuration
NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier)
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver
NetApp Unified Driver configuration
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=iscsi
NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or earlier)
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
NetApp Unified Driver configuration
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=nfs
NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier)
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
NetApp Unified Driver configuration
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_7mode
netapp_storage_protocol=iscsi
NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier)
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
NetApp Unified Driver configuration
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_7mode
netapp_storage_protocol=nfs
3.6.4.2. Deprecated NetApp drivers
This section lists the NetApp drivers in previous releases that are deprecated in Havana.
NetApp iSCSI driver for clustered Data ONTAP.
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppCmodeISCSIDriver
NetApp NFS driver for clustered Data ONTAP.
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver
NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage controller.
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver
NetApp NFS driver for Data ONTAP operating in 7-Mode storage controller.
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
See the OpenStack NetApp community for support information on deprecated NetApp drivers in the Havana release.
3.7. NFS driver
The Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984. An NFS server exports one or more of its file systems, known as shares. An NFS client can mount these exported shares on its own file system. You can perform file actions on this mounted remote file system as if the file system were local.
3.7.1. How the NFS driver works
The NFS driver, and other drivers based on it, work quite differently than a traditional block storage driver.
The NFS driver does not actually allow an instance to access a storage device at the block level. Instead, files are created on an NFS share and mapped to instances, which emulates a block device. This works in a similar way to QEMU, which stores instances in the /var/lib/nova/instances directory.
3.7.2. Enable the NFS driver and related options
To use Block Storage with the NFS driver, first set the volume_driver in cinder.conf:
volume_driver=cinder.volume.drivers.nfs.NfsDriver
The following table contains the options supported by the NFS driver.
Table 1.11. Description of configuration options for storage_nfs
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| nfs_mount_options = None | (StrOpt) Mount options passed to the nfs client. See the nfs man page for details. |
| nfs_mount_point_base = $state_path/mnt | (StrOpt) Base dir containing mount points for nfs shares. |
| nfs_oversub_ratio = 1.0 | (FloatOpt) This will compare the allocated to available space on the volume destination. If the ratio exceeds this number, the destination will no longer be valid. |
| nfs_shares_config = /etc/cinder/nfs_shares | (StrOpt) File with the list of available nfs shares |
| nfs_sparsed_volumes = True | (BoolOpt) Create volumes as sparsed files which take no space. If set to False, the volume is created as a regular file. In such a case, volume creation takes a lot of time. |
| nfs_used_ratio = 0.95 | (FloatOpt) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination. |
The NFS driver (and other drivers based off it) attempts to mount shares using version 4.1 of the NFS protocol (including pNFS). If the mount attempt is unsuccessful due to a lack of client or server support, a subsequent mount attempt that requests the default behavior of the mount.nfs command will be performed. On most distributions, the default behavior is to attempt mounting first with NFS v4.0, then silently fall back to NFS v3.0 if necessary. If the nfs_mount_options configuration option contains a request for a specific version of NFS to be used, or if specific options are specified in the shares configuration file specified by the nfs_shares_config configuration option, the mount will be attempted as requested with no subsequent attempts.
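For example, to request that shares be mounted with NFS version 3 rather than the negotiated default, you could set the following in cinder.conf; this is an illustrative sketch, and any standard mount.nfs option can be supplied the same way:
nfs_mount_options=vers=3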
3.7.3. How to use the NFS driver
To use the NFS driver, you require access to one or more NFS servers. Creating an NFS server is outside the scope of this document. This example assumes access to the following NFS servers and mount points:
192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage
This example demonstrates the use of this driver with multiple NFS servers. Multiple servers are not required. One is usually enough.
Add your list of NFS servers to the file you specified with the nfs_shares_config option. For example, if the value of this option was set to /etc/cinder/shares.txt, then:
# cat /etc/cinder/shares.txt
192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage
Comments are allowed in this file. They begin with a #.
Configure the nfs_mount_point_base option. This is a directory where cinder-volume mounts all NFS shares stored in shares.txt. For this example, /var/lib/cinder/nfs is used. You can, of course, use the default value of $state_path/mnt.
Start the cinder-volume service. /var/lib/cinder/nfs should now contain a directory for each NFS share specified in shares.txt. The name of each directory is a hashed name:
# ls /var/lib/cinder/nfs/
...
46c5db75dc3a3a50a10bfd1a456a9f3f
...
You can now create volumes as you normally would:
$ nova volume-create --display-name=myvol 5
# ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f
volume-a8862558-e6d6-4648-b5df-bb84f31c8935
This volume can also be attached and deleted just like other volumes. However, snapshotting is not supported.
NFS driver notes
cinder-volume manages the mounting of the NFS shares as well as volume creation on the shares. Keep this in mind when planning your OpenStack architecture. If you have one master NFS server, it might make sense to only have one cinder-volume service to handle all requests to that NFS server. However, if that single server is unable to handle all requests, more than one cinder-volume service is needed as well as potentially more than one NFS server.
Because data is stored in a file and not actually on a block storage device, you might not see the same IO performance as you would with a traditional block storage driver. Please test accordingly.
Despite possible IO performance loss, having volume data stored in a file might be beneficial. For example, backing up volumes can be as easy as copying the volume files.
Note: Regular IO flushing and syncing still stands.
3.8. SolidFire
The SolidFire Cluster is a high performance all-SSD iSCSI storage device that provides massive scale-out capability and extreme fault tolerance. A key feature of the SolidFire cluster is the ability to set and modify, during operation, specific QoS levels on a volume-by-volume basis. The SolidFire cluster offers this along with de-duplication, compression, and an architecture that takes full advantage of SSDs.
To configure the use of a SolidFire cluster with Block Storage, modify your cinder.conf file as follows:
volume_driver=cinder.volume.drivers.solidfire.SolidFire
san_ip=172.17.1.182         # the address of your MVIP
san_login=sfadmin           # your cluster admin login
san_password=sfpassword     # your cluster admin password
sf_account_prefix=''        # prefix for tenant account creation on solidfire cluster (see warning below)
The SolidFire driver creates a unique account prefixed with $cinder-volume-service-hostname-$tenant-id on the SolidFire cluster for each tenant that accesses the cluster through the Volume API. Unfortunately, this account formation results in issues for High Availability (HA) installations and installations where the cinder-volume service can move to a new node. HA installations can return an Account Not Found error because the call to the SolidFire cluster is not always going to be sent from the same node. In installations where the cinder-volume service moves to a new node, the same issue can occur when you perform operations on existing volumes, such as clone, extend, delete, and so on.
Set the sf_account_prefix option to an empty string ('') in the cinder.conf file. This setting results in unique accounts being created on the SolidFire cluster, but the accounts are prefixed with the tenant-id or any unique identifier that you choose and are independent of the host where the cinder-volume service resides.
Table 1.12. Description of configuration options for solidfire
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| sf_account_prefix = None | (StrOpt) Create SolidFire accounts with this prefix. Any string can be used here, but the string "hostname" is special and will create a prefix using the Block Storage node hostname (previous default behavior). The default is NO prefix. |
| sf_allow_tenant_qos = False | (BoolOpt) Allow tenants to specify QOS on create |
| sf_api_port = 443 | (IntOpt) SolidFire API port. Useful if the device API is behind a proxy on a different port. |
| sf_emulate_512 = True | (BoolOpt) Set 512 byte emulation on volume creation. |
3.9. VMware VMDK driver
Use the VMware VMDK driver to enable management of the OpenStack Block Storage volumes on vCenter-managed data stores. Volumes are backed by VMDK files on data stores that use any VMware-compatible storage technology such as NFS, iSCSI, Fibre Channel, and vSAN.
The VMware ESX VMDK driver is deprecated as of the Icehouse release and might be removed in Juno or a subsequent release. The VMware vCenter VMDK driver continues to be fully supported.
3.9.1. Functional context
The VMware VMDK driver connects to vCenter, through which it can dynamically access all the data stores visible from the ESX hosts in the managed cluster.
When you create a volume, the VMDK driver creates a VMDK file on demand. The VMDK file creation completes only when the volume is subsequently attached to an instance, because the set of data stores visible to the instance determines where to place the volume.
The running vSphere VM is automatically reconfigured to attach the VMDK file as an extra disk. Once attached, you can log in to the running vSphere VM to rescan and discover this extra disk.
3.9.2. Configuration
The recommended volume driver for OpenStack Block Storage is the VMware vCenter VMDK driver. When you configure the driver, you must match it with the appropriate OpenStack Compute driver from VMware and both drivers must point to the same server.
In the nova.conf file, use this option to define the Compute driver:
compute_driver=vmwareapi.VMwareVCDriver
In the cinder.conf file, use this option to define the volume driver:
volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
The following table lists various options that the drivers support for the OpenStack Block Storage configuration (cinder.conf):
Table 1.13. Description of configuration options for vmware
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| vmware_api_retry_count = 10 | (IntOpt) Number of times VMware ESX/VC server API must be retried upon connection related issues. |
| vmware_host_ip = None | (StrOpt) IP address for connecting to VMware ESX/VC server. |
| vmware_host_password = None | (StrOpt) Password for authenticating with VMware ESX/VC server. |
| vmware_host_username = None | (StrOpt) Username for authenticating with VMware ESX/VC server. |
| vmware_host_version = None | (StrOpt) Optional string specifying the VMware VC server version. The driver attempts to retrieve the version from VMware VC server. Set this configuration only if you want to override the VC server version. |
| vmware_image_transfer_timeout_secs = 7200 | (IntOpt) Timeout in seconds for VMDK volume transfer between Block Storage and the Image service. |
| vmware_max_objects_retrieval = 100 | (IntOpt) Max number of objects to be retrieved per batch. Query results will be obtained in batches from the server and not in one shot. Server may still limit the count to something less than the configured value. |
| vmware_task_poll_interval = 5 | (IntOpt) The interval (in seconds) for polling remote tasks invoked on VMware ESX/VC server. |
| vmware_volume_folder = cinder-volumes | (StrOpt) Name for the folder in the VC datacenter that will contain Block Storage volumes. |
| vmware_wsdl_location = None | (StrOpt) Optional VIM service WSDL location, for example http://<server>/vimService.wsdl. Optional override of the default location for bug workarounds. |
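Combining the driver setting with the connection options from this table, a minimal cinder.conf sketch for the vCenter VMDK driver might look like the following; the address, credentials, and folder name are placeholders for your own vCenter environment:
volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip=192.168.0.50
vmware_host_username=administrator
vmware_host_password=vcenter_password
vmware_volume_folder=cinder-volumes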
3.9.3. VMDK disk type
The VMware VMDK drivers support the creation of VMDK disk files of type thin, thick, or eagerZeroedThick. Use the vmware:vmdk_type extra spec key with the appropriate value to specify the VMDK disk file type. The following table captures the mapping between the extra spec entry and the VMDK disk file type:
Table 1.14. Extra spec entry to VMDK disk file type mapping
| Disk file type | Extra spec key | Extra spec value |
| thin | vmware:vmdk_type | thin |
| thick | vmware:vmdk_type | thick |
| eagerZeroedThick | vmware:vmdk_type | eagerZeroedThick |
If you do not specify a vmdk_type extra spec entry, the default disk file type is thin.
The following example shows how to create a thick VMDK volume by using the appropriate vmdk_type:
$ cinder type-create thick_volume
$ cinder type-key thick_volume set vmware:vmdk_type=thick
$ cinder create --volume-type thick_volume --display-name volume1 1
3.9.4. Clone type
With the VMware VMDK drivers, you can create a volume from another source volume or a snapshot point. The VMware vCenter VMDK driver supports the full and linked/fast clone types. Use the vmware:clone_type extra spec key to specify the clone type. The following table captures the mapping for clone types:
Table 1.15. Extra spec entry to clone type mapping
| Clone type | Extra spec key | Extra spec value |
| full | vmware:clone_type | full |
| linked/fast | vmware:clone_type | linked |
If you do not specify the clone type, the default is full.
The following example shows linked cloning from another source volume:
$ cinder type-create fast_clone
$ cinder type-key fast_clone set vmware:clone_type=linked
$ cinder create --volume-type fast_clone --source-volid 25743b9d-3605-462b-b9eb-71459fe2bb35 --display-name volume1 1
The VMware ESX VMDK driver ignores the extra spec entry and always creates a full clone.
3.9.5. Use vCenter storage policies to specify back-end data stores
This section describes how to configure back-end data stores using storage policies. In vCenter, you can create one or more storage policies and expose them as a Block Storage volume-type to a vmdk volume. The storage policies are exposed to the vmdk driver through the extra spec property with the vmware:storage_profile key.
For example, assume a storage policy in vCenter named gold_policy, and a Block Storage volume type named vol1 with the extra spec key vmware:storage_profile set to the value gold_policy. Any Block Storage volume creation that uses the vol1 volume type places the volume only in data stores that match the gold_policy storage policy.
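Continuing this example, the vol1 volume type could be associated with the gold_policy policy by using the same type-key mechanism shown earlier; these commands are illustrative:
$ cinder type-create vol1
$ cinder type-key vol1 set vmware:storage_profile=gold_policy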
The Block Storage back-end configuration for vSphere data stores is automatically determined based on the vCenter configuration. If you configure a connection to connect to vCenter version 5.5 or later in the cinder.conf file, the use of storage policies to configure back-end data stores is automatically supported.
Any data stores that you configure for the Block Storage service must also be configured for the Compute service.
Procedure 1.1. To configure back-end data stores by using storage policies
In vCenter, tag the data stores to be used for the back end.
OpenStack also supports policies that are created by using vendor-specific capabilities; for example vSAN-specific storage policies.
Note: The tag value serves as the policy. For details, see Section 3.9.7, “Storage policy-based configuration in vCenter”.
Set the extra spec key vmware:storage_profile in the desired Block Storage volume types to the policy name that you created in the previous step.
Optionally, for the vmware_host_version parameter, enter the version number of your vSphere platform. For example, 5.5.
This setting overrides the default location for the corresponding WSDL file. Among other scenarios, you can use this setting to prevent WSDL error messages during the development phase or to work with a newer version of vCenter.
Complete the other vCenter configuration parameters as appropriate.
The following considerations apply to configuring SPBM for the Block Storage service:
For any volume that is created without an associated policy (that is to say, without an associated volume type that specifies the vmware:storage_profile extra spec), there is no policy-based placement for that volume.
3.9.6. Supported operations
The VMware vCenter and ESX VMDK drivers support these operations:
Create volume
Create volume from another source volume. (Supported only if source volume is not attached to an instance.)
Create volume from snapshot
Create volume from an Image service image
Attach volume (When a volume is attached to an instance, a reconfigure operation is performed on the instance to add the volume's VMDK to it. The user must manually rescan and mount the device from within the guest operating system.)
Detach volume
Create snapshot (Allowed only if volume is not attached to an instance.)
Delete snapshot (Allowed only if volume is not attached to an instance.)
Upload as image to the Image service (Allowed only if volume is not attached to an instance.)
Although the VMware ESX VMDK driver supports these operations, it has not been extensively tested.
3.9.7. Storage policy-based configuration in vCenter
You can configure Storage Policy-Based Management (SPBM) profiles for vCenter data stores supporting the Compute, Image Service, and Block Storage components of an OpenStack implementation.
In a vSphere OpenStack deployment, SPBM enables you to delegate several data stores for storage, which reduces the risk of running out of storage space. The policy logic selects the data store based on accessibility and available storage space.
3.9.8. Prerequisites
Determine the data stores to be used by the SPBM policy.
Determine the tag that identifies the data stores in the OpenStack component configuration.
Create separate policies or sets of data stores for separate OpenStack components.
3.9.9. Create storage policies in vCenter
Procedure 1.2. To create storage policies in vCenter
In vCenter, create the tag that identifies the data stores:
From the Home screen, click .
Specify a name for the tag.
Specify a tag category. For example, spbm-cinder.
Apply the tag to the data stores to be used by the SPBM policy.
Note: For details about creating tags in vSphere, see the vSphere documentation.
In vCenter, create a tag-based storage policy that uses one or more tags to identify a set of data stores.
Note: You use this tag name and category when you configure the *.conf file for the OpenStack component. For details about creating tags in vSphere, see the vSphere documentation.
3.9.10. Data store selection
If storage policy is enabled, the driver initially selects all the data stores that match the associated storage policy.
If two or more data stores match the storage policy, the driver chooses a data store that is connected to the maximum number of hosts.
In case of ties, the driver chooses the data store with the lowest space utilization, where space utilization is defined by the metric (1 - freespace/totalspace).
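As an illustrative calculation, a data store with 400 GB free out of 1 TB has a space utilization of 1 - 400/1000 = 0.6, while a data store with 300 GB free out of 500 GB has a utilization of 1 - 300/500 = 0.4; all else being equal, the driver would choose the second data store.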
These actions reduce the number of volume migrations while attaching the volume to instances.
The volume must be migrated if the ESX host for the instance cannot access the data store that contains the volume.