Chapter 2. Block Storage
2.1. Volume drivers
To use different volume drivers for the cinder-volume service, use the parameters described in these sections. The volume driver is set with the volume_driver flag. The default is:

volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
2.1.1. Ceph RADOS Block Device (RBD)
RADOS

Ceph is based on RADOS (Reliable Autonomic Distributed Object Store), which includes the following major components:

- Object Storage Device (OSD) Daemon. The storage daemon for the RADOS service, which interacts with the OSD (physical or logical storage unit for your data). You must run this daemon on each server in your cluster. For each OSD, you can have an associated hard disk. For performance purposes, pool your hard disks with RAID arrays, logical volume management (LVM), or B-tree file system (Btrfs) pooling. By default, the following pools are created: data, metadata, and RBD.
- Meta-Data Server (MDS). Stores metadata. MDSs build a POSIX file system on top of objects for Ceph clients. However, if you do not use the Ceph file system, you do not need a metadata server.
- Monitor (MON). A lightweight daemon that handles all communications with external applications and clients. It also provides consensus for distributed decision making in a Ceph/RADOS cluster. For instance, when you mount a Ceph share on a client, you point to the address of a MON server. It checks the state and the consistency of the data. In an ideal setup, you must run at least three ceph-mon daemons, on separate servers.
You can use Btrfs for testing, development, and any non-critical deployments. Btrfs has the correct feature set and roadmap to serve Ceph in the long term, but XFS and ext4 provide the necessary stability for today's deployments. If you use Btrfs, ensure that you use the correct version (see Ceph Dependencies).
Ways to store, use, and expose data
- RADOS. Use as an object; this is the default storage mechanism.
- RBD. Use as a block device. The Linux kernel RBD (RADOS block device) driver allows striping a Linux block device over multiple distributed object store data objects. It is compatible with the KVM RBD image.
- CephFS. Use as a file, POSIX-compliant file system.
- RADOS Gateway. OpenStack Object Storage and Amazon S3-compatible RESTful interface (see RADOS_Gateway).
- librados, and its related C/C++ bindings.
- RBD and QEMU-RBD. Linux kernel and QEMU block devices that stripe data across multiple objects.
Driver options
The volume_tmp_dir option has been deprecated and replaced by image_conversion_dir.
Table 2.1. Description of Ceph storage configuration options
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
rados_connect_timeout = -1 | (IntOpt) Timeout value (in seconds) used when connecting to the Ceph cluster. If the value is < 0, no timeout is set and the default librados value is used. |
rados_connection_interval = 5 | (IntOpt) Interval value (in seconds) between connection retries to the Ceph cluster. |
rados_connection_retries = 3 | (IntOpt) Number of retries if the connection to the Ceph cluster failed. |
rbd_ceph_conf = | (StrOpt) Path to the Ceph configuration file. |
rbd_cluster_name = ceph | (StrOpt) The name of the Ceph cluster. |
rbd_flatten_volume_from_snapshot = False | (BoolOpt) Flatten volumes created from snapshots to remove the dependency from volume to snapshot. |
rbd_max_clone_depth = 5 | (IntOpt) Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning. |
rbd_pool = rbd | (StrOpt) The RADOS pool where RBD volumes are stored. |
rbd_secret_uuid = None | (StrOpt) The libvirt UUID of the secret for the rbd_user volumes. |
rbd_store_chunk_size = 4 | (IntOpt) Volumes will be chunked into objects of this size (in megabytes). |
rbd_user = None | (StrOpt) The RADOS client name for accessing RBD volumes. Only set when using cephx authentication. |
volume_tmp_dir = None | (StrOpt) Directory where temporary image files are stored when the volume driver does not write them directly to the volume. Warning: this option is now deprecated; use image_conversion_dir instead. |
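The rbd_max_clone_depth semantics above can be sketched in a few lines. This is an illustrative helper, not the actual Cinder driver code; the function name should_flatten is hypothetical:

```python
# Sketch of the clone-depth decision behind rbd_max_clone_depth:
# once a chain of nested clones reaches the limit, the next copy is
# flattened instead of cloned. A value of 0 disables cloning entirely.

def should_flatten(clone_depth: int, rbd_max_clone_depth: int = 5) -> bool:
    """Return True when the next copy must be a flattened (full) volume."""
    if rbd_max_clone_depth <= 0:
        return True  # cloning disabled: every copy is a full volume
    return clone_depth >= rbd_max_clone_depth

print(should_flatten(4))                           # below the limit: clone
print(should_flatten(5))                           # at the limit: flatten
print(should_flatten(0, rbd_max_clone_depth=0))    # cloning disabled
```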
2.1.2. Dell EqualLogic volume driver
Supported operations
- Create, delete, attach, and detach volumes.
- Create, list, and delete volume snapshots.
- Clone a volume.
- Multiple instances of Dell EqualLogic Groups or Dell EqualLogic Group Storage Pools and multiple pools on a single array.
To configure the driver, set the relevant parameters in the /etc/cinder/cinder.conf file (see Section 2.3, “Block Storage sample configuration files” for reference).
Table 2.2. Description of Dell EqualLogic volume driver configuration options
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
eqlx_chap_login = admin | (StrOpt) Existing CHAP account name. Note that this option is deprecated in favour of "chap_username" as specified in cinder/volume/driver.py and will be removed in the next release. |
eqlx_chap_password = password | (StrOpt) Password for the specified CHAP account name. Note that this option is deprecated in favour of "chap_password" as specified in cinder/volume/driver.py and will be removed in the next release. |
eqlx_cli_max_retries = 5 | (IntOpt) Maximum retry count for reconnection. Default is 5. |
eqlx_cli_timeout = 30 | (IntOpt) Timeout for the Group Manager CLI command execution. Default is 30. Note that this option is deprecated in favour of "ssh_conn_timeout" as specified in cinder/volume/drivers/san/san.py and will be removed in the M release. |
eqlx_group_name = group-0 | (StrOpt) Group name to use for creating volumes. Defaults to "group-0". |
eqlx_pool = default | (StrOpt) Pool in which volumes will be created. Defaults to "default". |
eqlx_use_chap = False | (BoolOpt) Use CHAP authentication for targets. Note that this option is deprecated in favour of "use_chap_auth" as specified in cinder/volume/driver.py and will be removed in the next release. |
The following sample /etc/cinder/cinder.conf configuration lists the relevant settings for a typical Block Storage service using a single Dell EqualLogic Group:
Example 2.1. Default (single-instance) configuration
[DEFAULT]
#Required settings
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
san_ip = IP_EQLX
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL

#Optional settings
san_thin_provision = true|false
eqlx_use_chap = true|false
eqlx_chap_login = EQLX_UNAME
eqlx_chap_password = EQLX_PW
eqlx_cli_max_retries = 5
san_ssh_port = 22
ssh_conn_timeout = 30
san_private_key = SAN_KEY_PATH
ssh_min_pool_conn = 1
ssh_max_pool_conn = 5
- IP_EQLX
- The IP address used to reach the Dell EqualLogic Group through SSH. This field has no default value.
- SAN_UNAME
- The user name for logging in to the Group manager via SSH at the san_ip. The default user name is grpadmin.
- SAN_PW
- The corresponding password of SAN_UNAME. Not used when san_private_key is set. The default password is password.
- EQLX_GROUP
- The group to be used for a pool where the Block Storage service will create volumes and snapshots. The default group is group-0.
- EQLX_POOL
- The pool where the Block Storage service will create volumes and snapshots. The default pool is default. This option cannot be used for multiple pools utilized by the Block Storage service on a single Dell EqualLogic Group.
- EQLX_UNAME
- The CHAP login account for each volume in a pool, if eqlx_use_chap is set to true. The default account name is chapadmin.
- EQLX_PW
- The corresponding password of EQLX_UNAME. The default password is randomly generated in hexadecimal, so you must set this password manually.
- SAN_KEY_PATH (optional)
- The filename of the private key used for SSH authentication. This provides password-less login to the EqualLogic Group. Not used when san_password is set. There is no default value.
To enable thin provisioning for SAN volumes, use the san_thin_provision = true setting.
Example 2.2. Multi back-end Dell EqualLogic configuration
enabled_backends = backend1,backend2
san_ssh_port = 22
ssh_conn_timeout = 30
san_thin_provision = true

[backend1]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = backend1
san_ip = IP_EQLX1
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL

[backend2]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = backend2
san_ip = IP_EQLX2
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL
- Thin provisioning for SAN volumes is enabled (san_thin_provision = true). This is recommended when setting up Dell EqualLogic back ends.
- Each Dell EqualLogic back-end configuration ([backend1] and [backend2]) has the same required settings as a single back-end configuration, with the addition of volume_backend_name.
- The san_ssh_port option is set to its default value, 22. This option sets the port used for SSH.
- The ssh_conn_timeout option is also set to its default value, 30. This option sets the timeout in seconds for CLI commands over SSH.
- IP_EQLX1 and IP_EQLX2 refer to the IP addresses used to reach the Dell EqualLogic Group of backend1 and backend2 through SSH, respectively.
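A multi back-end layout like the one above can be sanity-checked with a short script. This is an illustrative sketch, not part of Cinder; check_backends and the SAMPLE content are hypothetical:

```python
# Check the invariants described above: every name listed in
# enabled_backends has its own section, and each back-end section
# defines volume_backend_name.
import configparser

SAMPLE = """
[DEFAULT]
enabled_backends = backend1,backend2

[backend1]
volume_backend_name = backend1
san_ip = 10.0.0.1

[backend2]
volume_backend_name = backend2
san_ip = 10.0.0.2
"""

def check_backends(text: str) -> list:
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    problems = []
    for name in cfg["DEFAULT"]["enabled_backends"].split(","):
        name = name.strip()
        if not cfg.has_section(name):
            problems.append(f"missing section [{name}]")
        elif not cfg.has_option(name, "volume_backend_name"):
            problems.append(f"[{name}] lacks volume_backend_name")
    return problems

print(check_backends(SAMPLE))  # an empty list means the file is consistent
```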
2.1.3. Dell Storage Center Fibre Channel and iSCSI drivers
The Dell Storage Center Fibre Channel and iSCSI drivers are configured in the cinder.conf file.
Supported operations
- Create, delete, attach (map), and detach (unmap) volumes.
- Create, list, and delete volume snapshots.
- Create a volume from a snapshot.
- Copy an image to a volume.
- Copy a volume to an image.
- Clone a volume.
- Extend a volume.
Extra spec options

The extra spec storagetype:storageprofile, with its value set to the name of a Storage Profile on the Storage Center, can be used to allow volumes to use Storage Profiles other than the default. For example, the following commands define volume types that use the High Priority and Low Priority Storage Profiles:
$ cinder type-create "GoldVolumeType"
$ cinder type-key "GoldVolumeType" set storagetype:storageprofile=highpriority
$ cinder type-create "BronzeVolumeType"
$ cinder type-key "BronzeVolumeType" set storagetype:storageprofile=lowpriority
iSCSI configuration
Example 2.3. Sample iSCSI Configuration
default_volume_type = delliscsi
enabled_backends = delliscsi

[delliscsi]
# Name to give this storage backend
volume_backend_name = delliscsi
# The iSCSI driver to load
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
# IP address of Enterprise Manager
san_ip = 172.23.8.101
# Enterprise Manager user name
san_login = Admin
# Enterprise Manager password
san_password = secret
# The Storage Center iSCSI IP address
iscsi_ip_address = 192.168.0.20
# The Storage Center serial number to use
dell_sc_ssn = 64702

# ==Optional settings==
# The Enterprise Manager API port
dell_sc_api_port = 3033
# Server folder to place new server definitions
dell_sc_server_folder = devstacksrv
# Volume folder to place created volumes
dell_sc_volume_folder = devstackvol/Cinder
# The iSCSI IP port
iscsi_port = 3260
Fibre Channel configuration
Example 2.4. Sample FC configuration
default_volume_type = dellfc
enabled_backends = dellfc

[dellfc]
# Name to give this storage backend
volume_backend_name = dellfc
# The FC driver to load
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_fc.DellStorageCenterFCDriver
# IP address of Enterprise Manager
san_ip = 172.23.8.101
# Enterprise Manager user name
san_login = Admin
# Enterprise Manager password
san_password = secret
# The Storage Center serial number to use
dell_sc_ssn = 64702

# Optional settings
# The Enterprise Manager API port
dell_sc_api_port = 3033
# Server folder to place new server definitions
dell_sc_server_folder = devstacksrv
# Volume folder to place created volumes
dell_sc_volume_folder = devstackvol/Cinder
Driver options
Table 2.3. Description of Dell Storage Center volume driver configuration options
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
dell_sc_api_port = 3033 | (IntOpt) Dell API port. |
dell_sc_server_folder = openstack | (StrOpt) Name of the server folder to use on the Storage Center. |
dell_sc_ssn = 64702 | (IntOpt) Storage Center System Serial Number. |
dell_sc_verify_cert = False | (BoolOpt) Enable HTTPS SC certificate verification. |
dell_sc_volume_folder = openstack | (StrOpt) Name of the volume folder to use on the Storage Center. |
2.1.4. EMC VMAX iSCSI and FC drivers
The EMC VMAX drivers, EMCVMAXISCSIDriver and EMCVMAXFCDriver, support the use of EMC VMAX storage arrays under OpenStack Block Storage. They both provide equivalent functions and differ only in support for their respective host attachment methods.
2.1.4.1. System requirements
2.1.4.2. Supported operations
- Create, delete, attach, and detach volumes.
- Create, list, and delete volume snapshots.
- Copy an image to a volume.
- Copy a volume to an image.
- Clone a volume.
- Extend a volume.
- Retype a volume.
- Create a volume from a snapshot.
- FAST automated storage tiering policy.
- Dynamic masking view creation.
- Striped volume creation.
2.1.4.3. Set up the VMAX drivers
Procedure 2.1. To set up the EMC VMAX drivers
- Install the python-pywbem package for your distribution. To install the python-pywbem package for Red Hat Enterprise Linux, CentOS, or Fedora:
# yum install pywbem
- Download SMI-S from PowerLink and install it. Add your VMAX arrays to SMI-S.For information, see Section 2.1.4.3.1, “Set up SMI-S” and the SMI-S release notes.
- Change the configuration files. See Section 2.1.4.3.2, “cinder.conf configuration file” and Section 2.1.4.3.3, “cinder_emc_config_CONF_GROUP_ISCSI.xml configuration file”.
- Configure connectivity. For the FC driver, see Section 2.1.4.3.4, “FC Zoning with VMAX”. For the iSCSI driver, see Section 2.1.4.3.5, “iSCSI with VMAX”.
2.1.4.3.1. Set up SMI-S
The SMI-S install directory is /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure SMI-S, go to that directory and type TestSmiProvider.exe.
2.1.4.3.2. cinder.conf
configuration file
Make the following changes in /etc/cinder/cinder.conf. In this example, 10.10.61.45 is the IP address of the VMAX iSCSI target:
enabled_backends = CONF_GROUP_ISCSI, CONF_GROUP_FC

[CONF_GROUP_ISCSI]
iscsi_ip_address = 10.10.61.45
volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml
volume_backend_name=ISCSI_backend

[CONF_GROUP_FC]
volume_driver = cinder.volume.drivers.emc.emc_vmax_fc.EMCVMAXFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml
volume_backend_name=FC_backend
In this example, two back-end configuration groups are enabled: CONF_GROUP_ISCSI and CONF_GROUP_FC. Each configuration group has a section describing unique parameters for connections, drivers, the volume_backend_name, and the name of the EMC-specific configuration file containing additional settings. Note that the file name is in the format /etc/cinder/cinder_emc_config_[confGroup].xml.

Once the cinder.conf and EMC-specific configuration files have been created, cinder commands need to be issued in order to create and associate OpenStack volume types with the declared volume_backend_names:
$ cinder type-create VMAX_ISCSI
$ cinder type-key VMAX_ISCSI set volume_backend_name=ISCSI_backend
$ cinder type-create VMAX_FC
$ cinder type-key VMAX_FC set volume_backend_name=FC_backend
By issuing these commands, the Block Storage volume type VMAX_ISCSI is associated with the ISCSI_backend, and the type VMAX_FC is associated with the FC_backend.
Restart the cinder-volume service.
2.1.4.3.3. cinder_emc_config_CONF_GROUP_ISCSI.xml
configuration file
Create the /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml file. You do not need to restart the service for this change.
<?xml version="1.0" encoding="UTF-8" ?>
<EMC>
  <EcomServerIp>1.1.1.1</EcomServerIp>
  <EcomServerPort>00</EcomServerPort>
  <EcomUserName>user1</EcomUserName>
  <EcomPassword>password1</EcomPassword>
  <PortGroups>
    <PortGroup>OS-PORTGROUP1-PG</PortGroup>
    <PortGroup>OS-PORTGROUP2-PG</PortGroup>
  </PortGroups>
  <Array>111111111111</Array>
  <Pool>FC_GOLD1</Pool>
  <FastPolicy>GOLD1</FastPolicy>
</EMC>
- EcomServerIp and EcomServerPort are the IP address and port number of the ECOM server, which is packaged with SMI-S.
- EcomUserName and EcomPassword are credentials for the ECOM server.
- PortGroups supplies the names of VMAX port groups that have been pre-configured to expose volumes managed by this backend. Each supplied port group should have a sufficient number and distribution of ports (across directors and switches) to ensure adequate bandwidth and failure protection for the volume connections. PortGroups can contain one or more port groups of either iSCSI or FC ports. When a dynamic masking view is created by the VMAX driver, the port group is chosen randomly from the PortGroup list, to evenly distribute load across the set of groups provided. Make sure that the PortGroups set contains either all FC or all iSCSI port groups (for a given backend), as appropriate for the configured driver (iSCSI or FC).
- The Array tag holds the unique VMAX array serial number.
- The Pool tag holds the unique pool name within a given array. For backends not using FAST automated tiering, the pool is a single pool that has been created by the administrator. For backends exposing FAST policy automated tiering, the pool is the bind pool to be used with the FAST policy.
- The FastPolicy tag conveys the name of the FAST Policy to be used. By including this tag, volumes managed by this backend are treated as under FAST control. Omitting the FastPolicy tag means FAST is not enabled on the provided storage pool.
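The random port-group selection described above can be illustrated by parsing the sample XML. This is a sketch only, not the driver's actual code; only the element names from the sample file are assumed:

```python
# Parse the EMC-style configuration and pick a port group at random,
# mirroring the documented behavior when a dynamic masking view is built.
import random
import xml.etree.ElementTree as ET

XML = """<EMC>
  <EcomServerIp>1.1.1.1</EcomServerIp>
  <PortGroups>
    <PortGroup>OS-PORTGROUP1-PG</PortGroup>
    <PortGroup>OS-PORTGROUP2-PG</PortGroup>
  </PortGroups>
  <Array>111111111111</Array>
  <Pool>FC_GOLD1</Pool>
</EMC>"""

root = ET.fromstring(XML)
port_groups = [pg.text for pg in root.find("PortGroups")]
chosen = random.choice(port_groups)  # load is spread across the groups
print(port_groups, chosen)
```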
2.1.4.3.4. FC Zoning with VMAX
2.1.4.3.5. iSCSI with VMAX
- Make sure the iscsi-initiator-utils package is installed on the host (use apt-get, zypper, or yum, depending on the Linux flavor).
- Verify that the host is able to ping the VMAX iSCSI target ports.
2.1.4.4. VMAX masking view and group naming info
Masking view names
OS-[shortHostName][poolName]-I-MV (for Masking Views using iSCSI)
OS-[shortHostName][poolName]-F-MV (for Masking Views using FC)
Initiator group names
OS-[shortHostName]-I-IG (for iSCSI initiators)
OS-[shortHostName]-F-IG (for Fibre Channel initiators)
FA port groups
Storage group names
OS-[shortHostName][poolName]-I-SG (attached over iSCSI)
OS-[shortHostName][poolName]-F-SG (attached over Fibre Channel)
2.1.4.5. Concatenated or striped volumes
Striped volumes are requested through the extra spec storagetype:stripecount, representing the number of meta members in the striped volume. The example below means that each volume created under the GoldStriped volume type will be striped and made up of 4 meta members.
$ cinder type-create GoldStriped
$ cinder type-key GoldStriped set volume_backend_name=GOLD_BACKEND
$ cinder type-key GoldStriped set storagetype:stripecount=4
2.1.5. EMC VNX driver
EMCCLIISCSIDriver (VNX iSCSI driver) and EMCCLIFCDriver (VNX FC driver) are based on the ISCSIDriver and FCDriver defined in Block Storage, respectively.
2.1.5.1. Overview
2.1.5.1.1. System requirements
- VNX Operational Environment for Block version 5.32 or higher.
- VNX Snapshot and Thin Provisioning license should be activated for VNX.
- Navisphere CLI v7.32 or higher is installed along with the driver.
2.1.5.1.2. Supported operations
- Create, delete, attach, and detach volumes.
- Create, list, and delete volume snapshots.
- Create a volume from a snapshot.
- Copy an image to a volume.
- Clone a volume.
- Extend a volume.
- Migrate a volume.
- Retype a volume.
- Get volume statistics.
- Create and delete consistency groups.
- Create, list, and delete consistency group snapshots.
- Modify consistency groups.
- Efficient non-disruptive volume backup.
2.1.5.2. Preparation
2.1.5.2.1. Install Navisphere CLI
- For all other variants of Linux, Navisphere CLI is available at Downloads for VNX2 Series or Downloads for VNX1 Series.
- After installation, set the security level of Navisphere CLI to low:
$ /opt/Navisphere/bin/naviseccli security -certificate -setLevel low
2.1.5.2.2. Check array software
Table 2.4. Required software
Feature | Software Required |
---|---|
All | ThinProvisioning |
All | VNXSnapshots |
FAST cache support | FASTCache |
Create volume with type compressed | Compression |
Create volume with type deduplicated | Deduplication |
2.1.5.2.3. Install EMC VNX driver
EMCCLIISCSIDriver and EMCCLIFCDriver are included in the Block Storage installer package:

- emc_vnx_cli.py
- emc_cli_fc.py (for EMCCLIFCDriver)
- emc_cli_iscsi.py (for EMCCLIISCSIDriver)
2.1.5.2.4. Network configuration
Use the initiator_auto_registration=True configuration to avoid registering the ports manually. See Section 2.1.5.3, “Backend configuration” for the details of this configuration.
2.1.5.3. Backend configuration
/etc/cinder/cinder.conf
file:
2.1.5.3.1. Minimum configuration
Change EMCCLIFCDriver to EMCCLIISCSIDriver if you are using the iSCSI driver.
[DEFAULT]
enabled_backends = vnx_array1

[vnx_array1]
san_ip = 10.10.72.41
san_login = sysadmin
san_password = sysadmin
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
initiator_auto_registration=True
2.1.5.3.2. Multi-backend configuration
Change EMCCLIFCDriver to EMCCLIISCSIDriver if you are using the iSCSI driver.
[DEFAULT]
enabled_backends=backendA, backendB

[backendA]
storage_vnx_pool_names = Pool_01_SAS, Pool_02_FLASH
san_ip = 10.10.72.41
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
initiator_auto_registration=True

[backendB]
storage_vnx_pool_names = Pool_02_SAS
san_ip = 10.10.26.101
san_login = username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
initiator_auto_registration=True
2.1.5.3.3. Required configurations
2.1.5.3.3.1. IP of the VNX Storage Processors
san_ip = <IP of VNX Storage Processor A> san_secondary_ip = <IP of VNX Storage Processor B>
2.1.5.3.3.2. VNX login credentials
- Use plain text username and password.
san_login = <VNX account with administrator role> san_password = <password for VNX account> storage_vnx_authentication_type = global
Valid values for storage_vnx_authentication_type are: global (default), local, and ldap.
- Use Security file
storage_vnx_security_file_dir=<path to security file>
2.1.5.3.3.3. Path to your Unisphere CLI
naviseccli_path = /opt/Navisphere/bin/naviseccli
2.1.5.3.3.4. Driver name
- For the FC Driver, add the following option:
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
- For the iSCSI driver, add the following option:
volume_driver=cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
2.1.5.3.4. Optional configurations
2.1.5.3.4.1. VNX pool names
storage_vnx_pool_names = pool 1, pool 2
2.1.5.3.4.2. Initiator auto registration
When initiator_auto_registration=True, the driver automatically registers initiators to all working target ports of the VNX array during volume attachment (the driver skips initiators that have already been registered) if the option io_port_list is not specified in cinder.conf.

When the driver is configured with io_port_list, it registers the initiator only to the ports specified in the list, and returns only the target ports that belong to io_port_list, instead of all target ports.
- Example for FC ports: io_port_list=a-1,B-3, where a or B is the Storage Processor and the numbers 1 and 3 are the Port IDs.
- Example for iSCSI ports: io_port_list=a-1-0,B-3-0, where a or B is the Storage Processor, the first numbers 1 and 3 are the Port IDs, and the second number 0 is the Virtual Port ID.
- Registered ports are not deregistered; they are simply bypassed, whether or not they appear in io_port_list.
- The driver raises an exception at startup if any port in io_port_list does not exist on the VNX.
2.1.5.3.4.3. Force delete volumes in storage group
Sometimes, available volumes may remain in a storage group on the VNX array due to an OpenStack timeout issue, but the VNX array does not allow the user to delete volumes that are in a storage group. The option force_delete_lun_in_storagegroup is introduced to allow the user to delete available volumes in this tricky situation.

When force_delete_lun_in_storagegroup=True is set in the back-end section, the driver moves such volumes out of their storage groups and then deletes them when the user tries to delete volumes that remain in a storage group on the VNX array.

The default value of force_delete_lun_in_storagegroup is False.
2.1.5.3.4.4. Over subscription in thin provisioning
The option max_over_subscription_ratio in the back-end section is the ratio of provisioned capacity over total capacity.

When max_over_subscription_ratio is greater than 1.0, the provisioned capacity can exceed the total capacity. The default value of max_over_subscription_ratio is 20.0, which means the provisioned capacity can be 20 times the total physical capacity.
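The ratio check can be sketched as follows; can_provision is a hypothetical helper with illustrative numbers, not a Cinder API:

```python
# With max_over_subscription_ratio = 20.0, provisioned capacity may reach
# 20x the total physical capacity before a new thin volume is refused.

def can_provision(provisioned_gb: float, request_gb: float,
                  total_gb: float, ratio: float = 20.0) -> bool:
    return provisioned_gb + request_gb <= total_gb * ratio

print(can_provision(1900, 50, 100))   # 1950 GB <= 2000 GB cap: allowed
print(can_provision(1990, 50, 100))   # 2040 GB >  2000 GB cap: refused
```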
2.1.5.3.4.5. Storage group automatic deletion
When destroy_empty_storage_group=True, the driver removes the empty storage group after its last volume is detached. For data safety, it is not recommended to set destroy_empty_storage_group=True unless the VNX is exclusively managed by one Block Storage node, because a consistent lock_path is required for operation synchronization of this behavior.
2.1.5.3.4.6. Initiator auto deregistration
initiator_auto_deregistration=True
is set, the driver will deregister all the initiators of the host after its storage group is deleted.
2.1.5.3.4.7. FC SAN auto zoning
Set zoning_mode to fabric in the DEFAULT section to enable this feature. For ZoneManager configuration, refer to the Block Storage official guide.
2.1.5.3.4.8. Volume number threshold
The default value of check_max_pool_luns_threshold is False. When check_max_pool_luns_threshold=True, the pool-based back end checks the limit and reports 0 free capacity to the scheduler if the limit is reached, so the scheduler can skip a pool-based back end that has run out of pool volume numbers.
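The reporting behavior can be sketched as follows; reported_free_capacity is a hypothetical helper and the numbers are illustrative:

```python
# When the pool LUN limit is reached and the threshold check is enabled,
# the back end reports 0 free capacity so the scheduler skips it.

def reported_free_capacity(free_gb: float, lun_count: int,
                           max_pool_luns: int,
                           check_threshold: bool = True) -> float:
    if check_threshold and lun_count >= max_pool_luns:
        return 0.0      # limit reached: advertise no free space
    return free_gb

print(reported_free_capacity(500.0, 2100, 2100))  # limit hit
print(reported_free_capacity(500.0, 100, 2100))   # below the limit
```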
2.1.5.3.4.9. iSCSI initiators
iscsi_initiators is a dictionary of IP addresses of the iSCSI initiator ports on the OpenStack Nova/Cinder nodes that connect to VNX via iSCSI. If this option is configured, the driver uses this information to find an accessible iSCSI target portal for the initiator when attaching a volume. Otherwise, the iSCSI target portal is chosen in a relatively random way.

In the following example, the driver connects host1 with 10.0.0.1 and 10.0.0.2, and connects host2 with 10.0.0.3.

The key name (host1 in the example) should be the output of the hostname command.
iscsi_initiators = {"host1":["10.0.0.1", "10.0.0.2"],"host2":["10.0.0.3"]}
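The iscsi_initiators value is JSON, so the per-host lookup the driver performs can be sketched as follows; initiator_ips is a hypothetical helper, not part of the driver:

```python
# Look up the iSCSI initiator IPs configured for a given short hostname.
import json

RAW = '{"host1":["10.0.0.1", "10.0.0.2"],"host2":["10.0.0.3"]}'

def initiator_ips(hostname: str, raw: str) -> list:
    mapping = json.loads(raw)
    # An empty result means the host is unlisted, so the driver would
    # fall back to choosing a target portal in a relatively random way.
    return mapping.get(hostname, [])

print(initiator_ips("host1", RAW))
print(initiator_ips("host3", RAW))
```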
2.1.5.3.4.10. Default timeout
default_timeout = 10
2.1.5.3.4.11. Max LUNs per storage group
max_luns_per_storage_group specifies the maximum number of LUNs in a storage group. The default value is 255, which is also the maximum value supported by VNX.
2.1.5.3.4.12. Ignore pool full threshold
If ignore_pool_full_threshold is set to True, the driver forces LUN creation even if the full threshold of the pool is reached. The default is False.
2.1.5.4. Extra spec options
$ cinder type-create "demoVolumeType"
$ cinder type-key "demoVolumeType" set provisioning:type=thin
2.1.5.4.1. Provisioning type
- Key: provisioning:type
- Possible Values:
  - thick: the volume is fully provisioned.
    Example 2.5. Creating a thick volume type:
    $ cinder type-create "ThickVolumeType"
    $ cinder type-key "ThickVolumeType" set provisioning:type=thick thick_provisioning_support='<is> True'
  - thin: the volume is virtually provisioned.
    Example 2.6. Creating a thin volume type:
    $ cinder type-create "ThinVolumeType"
    $ cinder type-key "ThinVolumeType" set provisioning:type=thin thin_provisioning_support='<is> True'
  - deduplicated: the volume is thin and deduplication is enabled. The administrator needs to configure the system-level deduplication settings on the VNX. To create a deduplicated volume, the VNX Deduplication license must be activated on the VNX, and deduplication_support=True must be specified to let the Block Storage scheduler find the proper volume back end.
    Example 2.7. Creating a deduplicated volume type:
    $ cinder type-create "DeduplicatedVolumeType"
    $ cinder type-key "DeduplicatedVolumeType" set provisioning:type=deduplicated deduplication_support='<is> True'
  - compressed: the volume is thin and compression is enabled. The administrator needs to configure the system-level compression settings on the VNX. To create a compressed volume, the VNX Compression license must be activated on the VNX, and compression_support=True must be used to let the Block Storage scheduler find a volume back end. VNX does not support creating snapshots on a compressed volume.
    Example 2.8. Creating a compressed volume type:
    $ cinder type-create "CompressedVolumeType"
    $ cinder type-key "CompressedVolumeType" set provisioning:type=compressed compression_support='<is> True'
- Default: thick
provisioning:type replaces the old spec key storagetype:provisioning, which will be obsoleted in the next release. If both provisioning:type and storagetype:provisioning are set in the volume type, the value of provisioning:type is used.
2.1.5.4.2. Storage tiering support
- Key: storagetype:tiering
- Possible Values: StartHighThenAuto, Auto, HighestAvailable, LowestAvailable, NoMovement
- Default: StartHighThenAuto

Use the key storagetype:tiering to set the tiering policy of a volume, and use the key fast_support='<is> True' to let the Block Storage scheduler find a volume back end which manages a VNX with the FAST license activated. The five values listed above are the supported values for the extra spec key storagetype:tiering.
Example 2.9. Creating a volume type with a tiering policy:
$ cinder type-create "ThinVolumeOnLowestAvaibleTier"
$ cinder type-key "ThinVolumeOnLowestAvaibleTier" set provisioning:type=thin storagetype:tiering=Auto fast_support='<is> True'
2.1.5.4.3. FAST cache support
- Key: fast_cache_enabled
- Possible Values: True, False
- Default: False

A volume is created on a back end with FAST cache enabled when True is specified.
2.1.5.4.4. Snap-copy
- Key: copytype:snap
- Possible Values: True, False
- Default: False

Users can set the extra spec copytype:snap=True in the volume type of a volume. A new volume cloned from the source, or copied from a snapshot of the source, will then in fact be a snap-copy instead of a full copy. If a full copy is needed, retype or migration can be used to convert the snap-copy volume to a full-copy volume, which may be time-consuming.
$ cinder type-create "SnapCopy"
$ cinder type-key "SnapCopy" set copytype:snap=True

The user can determine whether a volume is a snap-copy volume by showing its metadata:

$ cinder metadata-show <volume>
- copytype:snap=True is not allowed in the volume type of a consistency group.
- Clone and snapshot creation are not allowed on a copied volume created through snap-copy before it is converted to a full copy.
- The number of snap-copy volumes created from a single source volume is limited to 255 at any one point in time.
- A source volume that has a snap-copy volume cannot be deleted.
2.1.5.4.5. Pool name
- Key:
pool_name
- Possible Values: name of the storage pool managed by cinder
- Default: None
Example 2.10. Creating the volume type:
$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set pool_name=Pool_02_SASFLASH volume_backend_name=vnx_41
2.1.5.4.6. Obsoleted extra specs in Liberty
storagetype:provisioning
storagetype:pool
2.1.5.5. Advanced features
2.1.5.5.1. Read-only volumes
$ cinder readonly-mode-update <volume> True
2.1.5.5.2. Efficient non-disruptive volume backup
- Backup creation for a snap-copy volume is not allowed if the volume status is in-use, since a snapshot cannot be taken from this volume.
2.1.5.6. Best practice
2.1.5.6.1. Multipath setup
- Install multipath-tools, sysfsutils and sg3-utils on the nodes hosting the Nova-Compute and Cinder-Volume services. (Check the operating system manual for your distribution for specific installation steps. For Red Hat based distributions, the packages are device-mapper-multipath, sysfsutils and sg3_utils.)
- Specify use_multipath_for_image_xfer=true in cinder.conf for each FC/iSCSI back end.
- Specify iscsi_use_multipath=True in the libvirt section of nova.conf. This option is valid for both the iSCSI and FC drivers.
The following is an example of /etc/multipath.conf.

user_friendly_names is not specified in the configuration, so it takes the default value no. It is NOT recommended to set it to yes, because that may cause operations such as VM live migration to fail.
blacklist {
    # Skip the files under /dev that are definitely not FC/iSCSI devices
    # Different system may need different customization
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"

    # Skip LUNZ device from VNX
    device {
        vendor "DGC"
        product "LUNZ"
    }
}

defaults {
    user_friendly_names no
    flush_on_last_del yes
}

devices {
    # Device attributed for EMC CLARiiON and VNX series ALUA
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        path_checker emc_clariion
        features "1 queue_if_no_path"
        hardware_handler "1 alua"
        prio alua
        failback immediate
    }
}
The script faulty_device_cleanup.py mitigates this issue when VNX iSCSI storage is used. Cloud administrators can deploy the script on all Nova-Compute nodes and use a CRON job to run it periodically on each node so that faulty devices do not remain for long. See VNX faulty device cleanup for detailed usage and the script.
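A hypothetical crontab entry for such a periodic run; the script path and the 30-minute interval are assumptions, not from this guide:

```shell
# Run the VNX faulty-device cleanup every 30 minutes (path is illustrative)
*/30 * * * * /usr/local/bin/faulty_device_cleanup.py
```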
2.1.5.7. Restrictions and limitations
2.1.5.7.1. iSCSI port cache
After changing the iSCSI port configurations, wait at least periodic_interval seconds (as configured in cinder.conf) before any volume attachment operation. Otherwise the attachment may fail because the old iSCSI port configurations are still cached.
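For reference, the interval is a regular cinder.conf option; a minimal sketch (the 60-second value is illustrative, not a recommendation from this guide):

```ini
[DEFAULT]
# Interval (seconds) between periodic tasks; wait at least this long
# after changing iSCSI port configuration before attaching volumes
periodic_interval = 60
```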
2.1.5.7.2. No extending for volume with snapshots
An attempt to extend a volume that has snapshots fails, and the volume is left in the error_extending status.
2.1.5.7.3. Limitations for deploying cinder on compute node
It is not recommended to deploy the driver on a compute node if cinder upload-to-image --force True is used against an in-use volume. Otherwise, cinder upload-to-image --force True will terminate the data access of the VM instance to the volume.
2.1.5.7.4. Storage group with host names in VNX
2.1.5.7.5. EMC storage-assisted volume migration
When the user runs cinder migrate --force-host-copy False <volume_id> <host> or cinder migrate <volume_id> <host>, cinder will try to leverage the VNX's native volume migration functionality. Native migration is not used in the following scenarios:
- Volume migration between back ends with different storage protocols, for example, FC and iSCSI.
- The volume is to be migrated across arrays.
2.1.5.8. Appendix
2.1.5.8.1. Authenticate by security file
- Find out the Linux user ID of the cinder-volume processes. The steps below assume the cinder-volume service runs under the account cinder.
- Run su as the root user.
- In /etc/passwd, change cinder:x:113:120::/var/lib/cinder:/bin/false to cinder:x:113:120::/var/lib/cinder:/bin/bash (this temporary change makes step 4 possible).
- Save the credentials on behalf of the cinder user to a security file (assuming the array credentials are admin/admin in global scope). In the command below, the -secfilepath switch is used to specify the location to save the security file.
# su -l cinder -c '/opt/Navisphere/bin/naviseccli -AddUserSecurity -user admin -password admin -scope 0 -secfilepath <location>'
- Change cinder:x:113:120::/var/lib/cinder:/bin/bash back to cinder:x:113:120::/var/lib/cinder:/bin/false in /etc/passwd.
- Remove the credential options san_login, san_password and storage_vnx_authentication_type from cinder.conf (normally /etc/cinder/cinder.conf). Add the option storage_vnx_security_file_dir and set its value to the directory path of the security file generated in step 4. Omit this option if -secfilepath was not used in step 4.
- Restart the cinder-volume service to validate the change.
2.1.5.8.2. Register FC port with VNX
The following steps apply when initiator_auto_registration=False.
The FC ports of hosts running the cinder-volume service (Block Storage nodes) must be registered with the VNX as well.
- Assume 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 is the WWN of an FC initiator port of the compute node whose hostname and IP are myhost1 and 10.10.61.1. Register 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 in Unisphere:
- Log in to Unisphere and go to FNM0000000000->Hosts->Initiators.
- Refresh and wait until the initiator 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 with SP Port A-1 appears.
- Click the Register button, select CLARiiON/VNX, and enter the hostname (the output of the Linux command hostname) and IP address:
- Hostname: myhost1
- IP: 10.10.61.1
- Click Register.
- The host 10.10.61.1 will then appear under Hosts->Host List as well.
- Register the WWN with more ports if needed.
2.1.5.8.3. Register iSCSI port with VNX
The following steps apply when initiator_auto_registration=False.
- On the compute node with IP address 10.10.61.1 and hostname myhost1, execute the following commands (assuming 10.10.61.35 is the iSCSI target):
- Start the iSCSI initiator service on the node:
# /etc/init.d/open-iscsi start
- Discover the iSCSI target portals on the VNX:
# iscsiadm -m discovery -t st -p 10.10.61.35
- Enter /etc/iscsi:
# cd /etc/iscsi
- Find out the IQN of the node:
# more initiatorname.iscsi
- Log in to the VNX from the compute node using the target corresponding to the SPA port:
# iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
- Assume iqn.1993-08.org.debian:01:1a2b3c4d5f6g is the initiator name of the compute node. Register iqn.1993-08.org.debian:01:1a2b3c4d5f6g in Unisphere:
- Log in to Unisphere and go to FNM0000000000->Hosts->Initiators.
- Refresh and wait until the initiator iqn.1993-08.org.debian:01:1a2b3c4d5f6g with SP Port A-8v0 appears.
- Click the Register button, select CLARiiON/VNX, and enter the hostname (the output of the Linux command hostname) and IP address:
- Hostname: myhost1
- IP: 10.10.61.1
- Click Register.
- The host 10.10.61.1 will then appear under Hosts->Host List as well.
- Log out of iSCSI on the node:
# iscsiadm -m node -u
- Log in to the VNX from the compute node using the target corresponding to the SPB port:
# iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.61.36 -l
- In Unisphere, register the initiator with the SPB port.
- Log out of iSCSI on the node:
# iscsiadm -m node -u
- Register the IQN with more ports if needed.
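The initiator IQN read from /etc/iscsi/initiatorname.iscsi above can also be extracted programmatically. A minimal sketch, assuming the standard InitiatorName=<iqn> file format; this helper is illustrative, not part of the VNX driver:

```python
def parse_initiator_iqn(text):
    """Return the IQN from initiatorname.iscsi content, or None if absent."""
    for line in text.splitlines():
        line = line.strip()
        # The file contains comment lines and a single InitiatorName= entry
        if line.startswith("InitiatorName="):
            return line.split("=", 1)[1]
    return None

sample = """## DO NOT EDIT OR REMOVE THIS FILE!
InitiatorName=iqn.1993-08.org.debian:01:1a2b3c4d5f6g
"""
print(parse_initiator_iqn(sample))  # prints iqn.1993-08.org.debian:01:1a2b3c4d5f6g
```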
2.1.6. EMC XtremIO Block Storage driver configuration
2.1.6.1. Support matrix
- Xtremapp: Version 3.0 and 4.0
2.1.6.2. Supported operations
- Create, delete, clone, attach, and detach volumes
- Create and delete volume snapshots
- Create a volume from a snapshot
- Copy an image to a volume
- Copy a volume to an image
- Extend a volume
- Manage and unmanage a volume
- Get volume statistics
2.1.6.3. XtremIO Block Storage driver configuration
Edit the cinder.conf file by adding the configuration below under the [DEFAULT] section of the file in the case of a single back end, or under a separate section in the case of multiple back ends (for example [XTREMIO]). The configuration file is usually located at /etc/cinder/cinder.conf.
2.1.6.3.1. XtremIO driver name
- For iSCSI
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOIscsiDriver
- For Fibre Channel
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
2.1.6.3.2. XtremIO management server (XMS) IP
san_ip = XMS Management IP
2.1.6.3.3. XtremIO cluster name
xtremio_cluster_name = Cluster-Name
2.1.6.3.4. XtremIO user credentials
san_login = XMS username
san_password = XMS username password
2.1.6.4. Multiple back ends
2.1.6.5. Setting thin provisioning and multipathing parameters
- Thin Provisioning: All XtremIO volumes are thin provisioned. The default value of 20 should be maintained for the max_over_subscription_ratio parameter. The use_cow_images parameter in the nova.conf file should be set to False as follows:
use_cow_images = false
- Multipathing: The use_multipath_for_image_xfer parameter in the cinder.conf file should be set to True as follows:
use_multipath_for_image_xfer = true
2.1.6.6. Restarting OpenStack Block Storage
After saving the changes in the cinder.conf file, restart cinder by running the following command:
$ openstack-service restart cinder-volume
2.1.6.7. Configuring CHAP
$ modify-chap chap-authentication-mode=initiator
2.1.6.8. Configuration example
cinder.conf example file

Update the cinder.conf file by editing the necessary parameters as follows:

[Default]
enabled_backends = XtremIO

[XtremIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster01
san_login = XMS_USER
san_password = XMS_PASSWD
volume_backend_name = XtremIOAFA
2.1.7. HDS HNAS iSCSI and NFS driver
2.1.7.1. Supported operations
- Create, delete, attach, and detach volumes.
- Create, list, and delete volume snapshots.
- Create a volume from a snapshot.
- Copy an image to a volume.
- Copy a volume to an image.
- Clone a volume.
- Extend a volume.
- Get volume statistics.
- Manage and unmanage a volume.
2.1.7.2. HNAS storage requirements
The file system used must not be a replication target. Additionally:
- For NFS:
- Create NFS exports, choose a path for them (it must be different from "/") and set the Show snapshots option to hide and disable access. Also, in the "Access Configuration" set the option norootsquash, e.g. "* (rw, norootsquash)", so the HNAS cinder driver can change the permissions of its volumes. In order to use the hardware accelerated features of NFS HNAS, we recommend setting max-nfs-version to 3. Refer to the HNAS command line reference to see how to configure this option.
- For iSCSI:
- You need to set an iSCSI domain.
2.1.7.3. Block storage host requirements
2.1.7.4. Package installation
- Install the dependencies:
# yum install nfs-utils nfs-utils-lib
- Configure the driver as described in Section 2.1.7.5, “Driver configuration”.
- Restart all cinder services (volume, scheduler and backup).
2.1.7.5. Driver configuration
The driver is configured in the cinder.conf configuration file. Below is the configuration needed in the cinder.conf configuration file [1]:

[DEFAULT]
enabled_backends = hnas_iscsi1, hnas_nfs1

[hnas_iscsi1]
volume_driver = cinder.volume.drivers.hitachi.hnas_iscsi.HDSISCSIDriver
hds_hnas_iscsi_config_file = /path/to/config/hnas_config_file.xml
volume_backend_name = HNAS-ISCSI

[hnas_nfs1]
volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HDSNFSDriver
hds_hnas_nfs_config_file = /path/to/config/hnas_config_file.xml
volume_backend_name = HNAS-NFS

<?xml version = "1.0" encoding = "UTF-8" ?>
<config>
  <mgmt_ip0>172.24.44.15</mgmt_ip0>
  <hnas_cmd>ssc</hnas_cmd>
  <chap_enabled>False</chap_enabled>
  <ssh_enabled>False</ssh_enabled>
  <cluster_admin_ip0>10.1.1.1</cluster_admin_ip0>
  <username>supervisor</username>
  <password>supervisor</password>
  <svc_0>
    <volume_type>default</volume_type>
    <iscsi_ip>172.24.44.20</iscsi_ip>
    <hdp>fs01-husvm</hdp>
  </svc_0>
  <svc_1>
    <volume_type>platinum</volume_type>
    <iscsi_ip>172.24.44.20</iscsi_ip>
    <hdp>fs01-platinum</hdp>
  </svc_1>
</config>
2.1.7.6. HNAS volume driver XML configuration options
Each service is defined in a svc_n tag (svc_0, svc_1, svc_2, or svc_3 [2], for example). These are the configuration options available for each service label:
Table 2.5. Configuration options for service labels
| Option | Type | Default | Description |
|---|---|---|---|
| volume_type | Required | default | When a create_volume call with a certain volume type happens, the volume type will try to be matched up with this tag. In each configuration file you must define the default volume type in the service labels and, if no volume type is specified, the default is used. Other labels are case sensitive and should match exactly. If no configured volume types match the incoming requested type, an error occurs in the volume creation. |
| iscsi_ip | Required only for iSCSI | | An iSCSI IP address dedicated to the service. |
| hdp | Required | | For iSCSI driver: virtual file system label associated with the service. For NFS driver: path to the volume (<ip_address>:/<path>) associated with the service. Additionally, this entry must be added in the file used to list available NFS shares. This file is located, by default, in /etc/cinder/nfs_shares, or you can specify the location in the nfs_shares_config option in the cinder.conf configuration file. |
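The volume_type matching rule described above can be sketched as follows. This is an illustrative model of the documented behavior, not the driver's actual code:

```python
def pick_service_label(requested_type, configured_labels):
    """Match an incoming volume type against configured service labels.

    Labels are matched case-sensitively; a request with no volume type
    falls back to the mandatory "default" label; an unmatched type fails,
    mirroring the error described in the table above.
    """
    if requested_type is None:
        if "default" in configured_labels:
            return "default"
        raise ValueError("no 'default' service label configured")
    if requested_type in configured_labels:
        return requested_type
    raise ValueError("no service label matches volume type %r" % requested_type)

labels = ["default", "platinum"]
print(pick_service_label("platinum", labels))  # prints platinum
print(pick_service_label(None, labels))        # prints default
```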
These are the configuration options available in the config section of the XML config file:
Table 2.6. Configuration options
| Option | Type | Default | Description |
|---|---|---|---|
| mgmt_ip0 | Required | | Management Port 0 IP address. Should be the IP address of the "Admin" EVS. |
| hnas_cmd | Optional | ssc | Command to communicate to HNAS array. |
| chap_enabled | Optional (iSCSI only) | True | Boolean tag used to enable CHAP authentication protocol. |
| username | Required | supervisor | It's always required on HNAS. |
| password | Required | supervisor | Password is always required on HNAS. |
| svc_0, svc_1, svc_2, svc_3 | Optional | (at least one label has to be defined) | Service labels: these four predefined names help four different sets of configuration options. Each can specify HDP and a unique volume type. |
| cluster_admin_ip0 | Optional if ssh_enabled is True | | The address of HNAS cluster admin. |
| ssh_enabled | Optional | False | Enables SSH authentication between Block Storage host and the SMU. |
| ssh_private_key | Required if ssh_enabled is True | False | Path to the SSH private key used to authenticate in HNAS SMU. The public key must be uploaded to HNAS SMU using ssh-register-public-key (this is an SSH subcommand). Note that copying the public key to HNAS using ssh-copy-id doesn't work properly as the SMU periodically wipes out those keys. |
2.1.7.7. Service labels
The driver supports one volume_type per service. Each volume_type must have the metadata service_label with the same name configured in the <volume_type> section of that service. If this is not set, OpenStack Block Storage will schedule the volume creation to the pool with the largest available free space or other criteria configured in volume filters.
$ cinder type-create default
$ cinder type-key default set service_label=default
$ cinder type-create platinum-tier
$ cinder type-key platinum-tier set service_label=platinum
2.1.7.8. Multi-back-end configuration
In each back end configuration, set the volume_backend_name option to the appropriate back end. Then, create volume_type configurations with the same volume_backend_name.
$ cinder type-create 'iscsi'
$ cinder type-key 'iscsi' set volume_backend_name='HNAS-ISCSI'
$ cinder type-create 'nfs'
$ cinder type-key 'nfs' set volume_backend_name='HNAS-NFS'
The configured pools (svc_0, svc_1, svc_2, svc_3) need to have volume_type and service_label metadata associated with them. If no metadata is associated with a pool, the OpenStack Block Storage filtering algorithm selects the pool with the largest available free space.
2.1.7.9. SSH configuration
- If you do not have a key pair already generated, create one on the Block Storage host (leave the pass-phrase empty):
$ mkdir -p /opt/hds/ssh
$ ssh-keygen -f /opt/hds/ssh/hnaskey
- Change the owner of the key to cinder (or the user under which the volume service runs):
# chown -R cinder:cinder /opt/hds/ssh
- Create the directory "ssh_keys" in the SMU server:
$ ssh [manager|supervisor]@<smu-ip> 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/'
- Copy the public key to the "ssh_keys" directory:
$ scp /opt/hds/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>:/var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/
- Access the SMU server:
$ ssh [manager|supervisor]@<smu-ip>
- Run the command to register the SSH keys:
$ ssh-register-public-key -u [manager|supervisor] -f ssh_keys/hnaskey.pub
- Check the communication with HNAS from the Block Storage host:
$ ssh -i /opt/hds/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'
<cluster_admin_ip0> is "localhost" for single node deployments. This should return a list of available file systems on HNAS.
2.1.7.10. Editing the XML config file
- Set the "username".
- Enable SSH by adding the line "<ssh_enabled>True</ssh_enabled>" under the "<config>" section.
- Set the private key path: "<ssh_private_key>/opt/hds/ssh/hnaskey</ssh_private_key>" under the "<config>" section.
- If the HNAS is in a multi-cluster configuration, set "<cluster_admin_ip0>" to the cluster node admin IP. In a single node HNAS, leave it empty.
- Restart cinder services.
2.1.7.11. Manage and unmanage
- Under the tab System -> Volumes choose the option [ + Manage Volume ]
- Fill the fields Identifier, Host and Volume Type with volume information to be managed:
- Identifier: ip:/type/volume_name Example: 172.24.44.34:/silver/volume-test
- Host: host@backend-name#pool_name Example: myhost@hnas-nfs#test_silver
- Volume Name: volume_name Example: volume-test
- Volume Type: choose a type of volume Example: silver
- Under the tab System -> Volumes choose the option [ + Manage Volume ]
- Fill the fields Identifier, Host, Volume Name and Volume Type with volume information to be managed:
- Identifier: filesystem-name/volume-name Example: filesystem-test/volume-test
- Host: host@backend-name#pool_name Example: myhost@hnas-iscsi#test_silver
- Volume Name: volume_name Example: volume-test
- Volume Type: choose a type of volume Example: silver
$ cinder --os-volume-api-version 2 manage [--source-name <source-name>] [--id-type <id-type>] [--name <name>] [--description <description>] [--volume-type <volume-type>] [--availability-zone <availability-zone>] [--metadata [<key=value> [<key=value> ...]]] [--bootable] <host> [<key=value> [<key=value> ...]]
$ cinder --os-volume-api-version 2 manage --name <volume-test> --volume-type <silver> --source-name <172.24.44.34:/silver/volume-test> <myhost@hnas-nfs#test_silver>
$ cinder --os-volume-api-version 2 manage --name <volume-test> --volume-type <silver> --source-name <filesystem-test/volume-test> <myhost@hnas-iscsi#test_silver>
- Under the tab [ System -> Volumes ] choose a volume
- On the volume options, choose [ +Unmanage Volume ]
- Check the data and confirm.
$ cinder --os-volume-api-version 2 unmanage <volume>
$ cinder --os-volume-api-version 2 unmanage <voltest>
2.1.7.12. Additional notes
- The get_volume_stats() function always provides the available capacity based on the combined sum of all the HDPs that are used in these service labels.
- After changing the configuration on the storage, the OpenStack Block Storage driver must be restarted.
- On Red Hat, if the system is configured to use SELinux, you need to set "virt_use_nfs = on" for the NFS driver to work properly.
# setsebool -P virt_use_nfs on
- It is not possible to manage a volume if there is a slash ('/') or a colon (':') in the volume name.
2.1.8. Hitachi storage volume driver
2.1.8.1. System requirements
- Hitachi Virtual Storage Platform G1000 (VSP G1000)
- Hitachi Virtual Storage Platform (VSP)
- Hitachi Unified Storage VM (HUS VM)
- Hitachi Unified Storage 100 Family (HUS 100 Family)
- RAID Manager Ver 01-32-03/01 or later for VSP G1000/VSP/HUS VM
- Hitachi Storage Navigator Modular 2 (HSNM2) Ver 27.50 or later for HUS 100 Family, installed under /usr/stonavm.
- Hitachi In-System Replication Software for VSP G1000/VSP/HUS VM
- (Mandatory) ShadowImage in-system replication for HUS 100 Family
- (Optional) Copy-on-Write Snapshot for HUS 100 Family
2.1.8.2. Supported operations
- Create, delete, attach and detach volumes.
- Create, list and delete volume snapshots.
- Create a volume from a snapshot.
- Copy a volume to an image.
- Copy an image to a volume.
- Clone a volume.
- Extend a volume.
- Get volume statistics.
2.1.8.3. Configuration
Set up Hitachi storage
- Create a Dynamic Provisioning pool.
- Connect the ports at the storage to the Controller node and Compute nodes.
- For VSP G1000/VSP/HUS VM, set "port security" to "enable" for the ports at the storage.
- For HUS 100 Family, set "Host Group security"/"iSCSI target security" to "ON" for the ports at the storage.
- For the ports at the storage, create host groups (iSCSI targets) whose names begin with HBSD- for the Controller node and each Compute node. Then register a WWN (initiator IQN) for each of the Controller node and Compute nodes.
- For VSP G1000/VSP/HUS VM, perform the following:
- Create a storage device account belonging to the Administrator User Group. (To use multiple storage devices, create the same account name for all the target storage devices, and specify the same resource group and permissions.)
- Create a command device (In-Band), and set user authentication to ON.
- Register the created command device to the host group for the Controller node.
- To use the Thin Image function, create a pool for Thin Image.
- For HUS 100 Family, perform the following:
- Use the auunitaddauto command to register the unit name and controller of the storage device to HSNM2.
- When connecting via iSCSI, if you are using CHAP certification, specify the same user and password as that used for the storage port.
Set up Hitachi Gigabit Fibre Channel adaptor
# /opt/hitachi/drivers/hba/hfcmgr -E hfc_rport_lu_scan 1
# dracut -f initramfs-KERNEL_VERSION.img KERNEL_VERSION
# reboot
Set up Hitachi storage volume driver
- Create the directory:
# mkdir /var/lock/hbsd
# chown cinder:cinder /var/lock/hbsd
- Create a "volume type" and "volume key". This example creates HUS100_SAMPLE as the "volume type" and registers hus100_backend as the "volume key":
$ cinder type-create HUS100_SAMPLE
$ cinder type-key HUS100_SAMPLE set volume_backend_name=hus100_backend
Specify any identical "volume type" name and "volume key". To confirm the created "volume type", execute the following command:
$ cinder extra-specs-list
- Edit /etc/cinder/cinder.conf as follows. If you use Fibre Channel:
volume_driver = cinder.volume.drivers.hitachi.hbsd_fc.HBSDFCDriver
If you use iSCSI:
volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
Also, set the volume_backend_name created by cinder type-key:
volume_backend_name = hus100_backend
This table shows the configuration options for the Hitachi storage volume driver.

Table 2.7. Description of Hitachi storage volume driver configuration options

| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| hitachi_add_chap_user = False | (BoolOpt) Add CHAP user |
| hitachi_async_copy_check_interval = 10 | (IntOpt) Interval to check copy asynchronously |
| hitachi_auth_method = None | (StrOpt) iSCSI authentication method |
| hitachi_auth_password = HBSD-CHAP-password | (StrOpt) iSCSI authentication password |
| hitachi_auth_user = HBSD-CHAP-user | (StrOpt) iSCSI authentication username |
| hitachi_copy_check_interval = 3 | (IntOpt) Interval to check copy |
| hitachi_copy_speed = 3 | (IntOpt) Copy speed of storage system |
| hitachi_default_copy_method = FULL | (StrOpt) Default copy method of storage system |
| hitachi_group_range = None | (StrOpt) Range of group number |
| hitachi_group_request = False | (BoolOpt) Request for creating HostGroup or iSCSI Target |
| hitachi_horcm_add_conf = True | (BoolOpt) Add to HORCM configuration |
| hitachi_horcm_numbers = 200,201 | (StrOpt) Instance numbers for HORCM |
| hitachi_horcm_password = None | (StrOpt) Password of storage system for HORCM |
| hitachi_horcm_resource_lock_timeout = 600 | (IntOpt) Timeout until a resource lock is released, in seconds. The value must be between 0 and 7200. |
| hitachi_horcm_user = None | (StrOpt) Username of storage system for HORCM |
| hitachi_ldev_range = None | (StrOpt) Range of logical device of storage system |
| hitachi_pool_id = None | (IntOpt) Pool ID of storage system |
| hitachi_serial_number = None | (StrOpt) Serial number of storage system |
| hitachi_target_ports = None | (StrOpt) Control port names for HostGroup or iSCSI Target |
| hitachi_thin_pool_id = None | (IntOpt) Thin pool ID of storage system |
| hitachi_unit_name = None | (StrOpt) Name of an array unit |
| hitachi_zoning_request = False | (BoolOpt) Request for FC Zone creating HostGroup |

- Restart the Block Storage service. When the startup is done, "MSGID0003-I: The storage backend can be used." is output into /var/log/cinder/volume.log as follows:
2014-09-01 10:34:14.169 28734 WARNING cinder.volume.drivers.hitachi.hbsd_common [req-a0bb70b5-7c3f-422a-a29e-6a55d6508135 None None] MSGID0003-I: The storage backend can be used. (config_group: hus100_backend)
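Putting the pieces together, a cinder.conf back-end section might look like the following sketch; the unit name, pool ID and port names are placeholders, not values from this guide:

```ini
# Illustrative HUS 100 iSCSI back-end section; all values are placeholders
[hus100_backend]
volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
volume_backend_name = hus100_backend
hitachi_unit_name = HUS100_unit
hitachi_pool_id = 0
hitachi_target_ports = 0A,1A
```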
2.1.9. Huawei storage driver
Supported operations
- Create, delete, expand, attach, and detach volumes.
- Create and delete a snapshot.
- Copy an image to a volume.
- Copy a volume to an image.
- Create a volume from a snapshot.
- Clone a volume.
Configure block storage nodes
- Modify the cinder.conf configuration file and add the volume_driver and cinder_huawei_conf_file items.
- Example for configuring a storage system:
volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
- Example for configuring multiple storage systems:
enabled_backends = t_iscsi, 18000_iscsi

[t_iscsi]
volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_t_iscsi.xml
volume_backend_name = HuaweiTISCSIDriver

[18000_iscsi]
volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_18000_iscsi.xml
volume_backend_name = Huawei18000ISCSIDriver
- In /etc/cinder, create a driver configuration file. The driver configuration file name must be the same as the cinder_huawei_conf_file item in the cinder.conf configuration file.
- Configure product and protocol. Product and Protocol indicate the storage system type and link type respectively. For the OceanStor 18000 series V100R001 storage systems, the driver configuration file is as follows:
<?xml version='1.0' encoding='UTF-8'?>
<config>
  <Storage>
    <Product>18000</Product>
    <Protocol>iSCSI</Protocol>
    <RestURL>https://x.x.x.x/deviceManager/rest/</RestURL>
    <UserName>xxxxxxxx</UserName>
    <UserPassword>xxxxxxxx</UserPassword>
  </Storage>
  <LUN>
    <LUNType>Thick</LUNType>
    <WriteType>1</WriteType>
    <MirrorSwitch>0</MirrorSwitch>
    <LUNcopyWaitInterval>5</LUNcopyWaitInterval>
    <Timeout>432000</Timeout>
    <StoragePool>xxxxxxxx</StoragePool>
  </LUN>
  <iSCSI>
    <DefaultTargetIP>x.x.x.x</DefaultTargetIP>
    <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
    <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
  </iSCSI>
  <Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
</config>
Note for Fibre Channel driver configuration
- In the configuration files of OceanStor T series V200R002 and OceanStor V3 V300R002, parameter configurations are the same with the exception of the RestURL parameter. The following describes how to configure the RestURL parameter:
<RestURL>https://x.x.x.x:8088/deviceManager/rest/</RestURL>
- For a Fibre Channel driver, you do not need to configure an iSCSI target IP address. Delete the iSCSI configuration from the preceding examples.
<iSCSI>
  <DefaultTargetIP>x.x.x.x</DefaultTargetIP>
  <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
  <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
</iSCSI>
This table describes the Huawei storage driver configuration options:

Table 2.8. Huawei storage driver configuration options

| Property | Type | Default | Description |
|---|---|---|---|
| Product | Mandatory | - | Type of a storage product. Valid values are T, TV3, or 18000. |
| Protocol | Mandatory | - | Type of a protocol. Valid values are iSCSI or FC. |
| RestURL | Mandatory | - | Access address of the Rest port (required only for the 18000). |
| UserName | Mandatory | - | User name of an administrator. |
| UserPassword | Mandatory | - | Password of an administrator. |
| LUNType | Optional | Thin | Type of a created LUN. Valid values are Thick or Thin. |
| StripUnitSize | Optional | 64 | Stripe depth of a created LUN, in KB. This flag is not valid for a thin LUN. |
| WriteType | Optional | 1 | Cache write method. The method can be write back, write through, or Required write back. The default value is 1, indicating write back. |
| MirrorSwitch | Optional | 1 | Cache mirroring policy. The default value is 1, indicating that a mirroring policy is used. |
| Prefetch Type | Optional | 3 | Cache prefetch strategy. The strategy can be constant prefetch, variable prefetch, or intelligent prefetch. The default value is 3, which indicates intelligent prefetch and is not required for the OceanStor 18000 series. |
| Prefetch Value | Optional | 0 | Cache prefetch value. |
| LUNcopyWaitInterval | Optional | 5 | After LUN copy is enabled, the plug-in frequently queries the copy progress. You can set a value to specify the query interval. |
| Timeout | Optional | 432,000 | Timeout period for waiting for LUN copy of an array to complete. |
| StoragePool | Mandatory | - | Name of a storage pool that you want to use. |
| DefaultTargetIP | Optional | - | Default IP address of the iSCSI port provided for compute nodes. |
| Initiator Name | Optional | - | Name of a compute node initiator. |
| Initiator TargetIP | Optional | - | IP address of the iSCSI port provided for compute nodes. |
| OSType | Optional | Linux | The OS type for a compute node. |
| HostIP | Optional | - | The IPs for compute nodes. |

Note for the configuration:
- You can configure one iSCSI target port for each or all compute nodes. The driver checks whether a target port IP address is configured for the current compute node. If not, it selects DefaultTargetIP.
- Only one storage pool can be configured.
- For details about LUN configuration information, see the show lun general command in the command-line interface (CLI) documentation or run help -c show lun general on the storage system CLI.
- After the driver is loaded, the storage system obtains any modification of the driver configuration file in real time and you do not need to restart the cinder-volume service.
- Restart the Cinder service.
2.1.10. IBM Storwize family and SVC volume driver
2.1.10.1. Configure the Storwize family and SVC system
Network configuration
If the storwize_svc_multipath_enabled flag is set to True in the Cinder configuration file, the driver uses all available WWPNs to attach the volume to the instance (details about the configuration flags appear in the next section). If the flag is not set, the driver uses the WWPN associated with the volume's preferred node (if available), otherwise it uses the first available WWPN of the system. The driver obtains the WWPNs directly from the storage system; you do not need to provide these WWPNs directly to the driver.
iSCSI CHAP authentication
If storwize_svc_iscsi_chap_enabled is set to True, the driver will associate randomly-generated CHAP secrets with all hosts on the Storwize family system. OpenStack compute nodes use these secrets when creating iSCSI connections.
Configure storage pools
The storage pool that the driver uses is specified with the storwize_svc_volpool_name configuration flag. Details about the configuration flags and how to provide the flags to the driver appear in the next section.
Configure user authentication for the driver
The management IP address is provided with the san_ip flag, and the management port is provided by the san_ssh_port flag. By default, the port value is configured to be port 22 (SSH).
Make sure that the cinder-volume management driver has SSH network access to the storage system.
For password authentication, the username and password are provided with the san_login and san_password flags, respectively.
For SSH key authentication, the private key is provided with the san_private_key configuration flag.
Create an SSH key pair with OpenSSH
$ ssh-keygen -t rsa
This creates two files, key and key.pub. The key file holds the private SSH key and key.pub holds the public SSH key.
The private key should be provided to the driver using the san_private_key configuration flag. The public key should be uploaded to the Storwize family or SVC system using the storage management GUI or command line interface.
2.1.10.2. Configure the Storwize family and SVC driver
Enable the Storwize family and SVC driver
Set the volume_driver option in cinder.conf as follows:
volume_driver = cinder.volume.drivers.ibm.storwize_svc.StorwizeSVCDriver
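A minimal back-end sketch combining this driver with the required flags; the IP address, credentials and pool name are placeholders, not values from this guide:

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.ibm.storwize_svc.StorwizeSVCDriver
san_ip = 192.168.0.10                    # placeholder management IP
san_login = superuser                    # placeholder username
san_password = passw0rd                  # placeholder; or use san_private_key
storwize_svc_volpool_name = mdiskgrp0    # placeholder pool name
storwize_svc_connection_protocol = iSCSI
```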
Storwize family and SVC driver options in cinder.conf
Table 2.9. List of configuration flags for Storwize storage and SVC driver
Flag name | Type | Default | Description | ||||||||||||||||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
san_ip
|
Required
|
|
Management IP or host name
|
||||||||||||||||||||||||||||||||||||||||||||||
san_ssh_port
|
Optional
|
22
|
Management port
|
||||||||||||||||||||||||||||||||||||||||||||||
san_login
|
Required
|
|
Management login username
|
||||||||||||||||||||||||||||||||||||||||||||||
san_password
|
Required [a]
|
|
Management login password
|
||||||||||||||||||||||||||||||||||||||||||||||
san_private_key
|
Required [a]
|
|
Management login SSH private key
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_volpool_name
|
Required
|
|
Default pool name for volumes
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_rsize
|
Optional
|
2
|
Initial physical allocation (percentage) [b]
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_warning
|
Optional
|
0 (disabled)
|
Space allocation warning threshold (percentage) [b]
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_autoexpand
|
Optional
|
True
|
Enable or disable volume auto expand [c]
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_grainsize
|
Optional
|
256
|
Volume grain size [b] in KB
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_compression
|
Optional
|
False
|
Enable or disable Real-time Compression [d]
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_easytier
|
Optional
|
True
|
Enable or disable Easy Tier [e]
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_iogrp
|
Optional
|
0
|
The I/O group in which to allocate vdisks
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_flashcopy_timeout
|
Optional
|
120
|
FlashCopy timeout threshold [f] (seconds)
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_connection_protocol
|
Optional
|
iSCSI
|
Connection protocol to use (currently supports 'iSCSI' or 'FC')
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_iscsi_chap_enabled
|
Optional
|
True
|
Configure CHAP authentication for iSCSI connections
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_multipath_enabled
|
Optional
|
False
|
Enable multipath for FC connections [g]
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_multihost_enabled
|
Optional
|
True
|
Enable mapping vdisks to multiple hosts [h]
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_nofmtdisk
|
Optional
|
False
|
Enable or disable fast format [i]
|
||||||||||||||||||||||||||||||||||||||||||||||
[a]
The authentication requires either a password ( san_password ) or SSH private key (san_private_key ). One must be specified. If both are specified, the driver uses only the SSH private key.
[b]
The driver creates thin-provisioned volumes by default. The storwize_svc_vol_rsize flag defines the initial physical allocation percentage for thin-provisioned volumes, or if set to -1 , the driver creates full allocated volumes. More details about the available options are available in the Storwize family and SVC documentation.
[c]
Defines whether thin-provisioned volumes can be auto expanded by the storage system, a value of True means that auto expansion is enabled, a value of False disables auto expansion. Details about this option can be found in the –autoexpand flag of the Storwize family and SVC command line interface mkvdisk command.
[d]
Defines whether Real-time Compression is used for the volumes created with OpenStack. Details on Real-time Compression can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have compression enabled for this feature to work.
[e]
Defines whether Easy Tier is used for the volumes created with OpenStack. Details on EasyTier can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have Easy Tier enabled for this feature to work.
[f]
The driver wait timeout threshold when creating an OpenStack snapshot. This is actually the maximum amount of time that the driver waits for the Storwize family or SVC system to prepare a new FlashCopy mapping. The driver accepts a maximum wait time of 600 seconds (10 minutes).
[g]
Multipath for iSCSI connections requires no storage-side configuration and is enabled if the compute host has multipath configured.
[h]
This option allows the driver to map a vdisk to more than one host at a time. This scenario occurs during migration of a virtual machine with an attached volume; the volume is simultaneously mapped to both the source and destination compute hosts. If your deployment does not require attaching vdisks to multiple hosts, setting this flag to False will provide added safety.
[i]
Defines whether or not the fast formatting of thick-provisioned volumes is disabled at creation. The default value is False and a value of True means that fast format is disabled. Details about this option can be found in the –nofmtdisk flag of the Storwize family and SVC command line interface mkvdisk command.
|
Table 2.10. Description of IBM Storwize driver configuration options

| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| storwize_svc_allow_tenant_qos = False | (BoolOpt) Allow tenants to specify QOS on create |
| storwize_svc_connection_protocol = iSCSI | (StrOpt) Connection protocol (iSCSI/FC) |
| storwize_svc_flashcopy_timeout = 120 | (IntOpt) Maximum number of seconds to wait for FlashCopy to be prepared. |
| storwize_svc_iscsi_chap_enabled = True | (BoolOpt) Configure CHAP authentication for iSCSI connections (Default: Enabled) |
| storwize_svc_multihostmap_enabled = True | (BoolOpt) Allows vdisk to multi host mapping |
| storwize_svc_multipath_enabled = False | (BoolOpt) Connect with multipath (FC only; iSCSI multipath is controlled by Nova) |
| storwize_svc_npiv_compatibility_mode = True | (BoolOpt) Indicate whether the SVC driver is compatible with an NPIV setup. If it is compatible, it allows no WWPNs to be returned by get_conn_fc_wwpns during initialize_connection. It should always be set to True, and will be deprecated and removed in the M release. |
| storwize_svc_stretched_cluster_partner = None | (StrOpt) If operating in stretched cluster mode, specify the name of the pool in which mirrored copies are stored. Example: "pool2" |
| storwize_svc_vol_autoexpand = True | (BoolOpt) Storage system autoexpand parameter for volumes (True/False) |
| storwize_svc_vol_compression = False | (BoolOpt) Storage system compression option for volumes |
| storwize_svc_vol_easytier = True | (BoolOpt) Enable Easy Tier for volumes |
| storwize_svc_vol_grainsize = 256 | (IntOpt) Storage system grain size parameter for volumes (32/64/128/256) |
| storwize_svc_vol_iogrp = 0 | (IntOpt) The I/O group in which to allocate volumes |
| storwize_svc_vol_rsize = 2 | (IntOpt) Storage system space-efficiency parameter for volumes (percentage) |
| storwize_svc_vol_warning = 0 | (IntOpt) Storage system threshold for volume capacity warnings (percentage) |
| storwize_svc_volpool_name = volpool | (StrOpt) Storage system storage pool for volumes |
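Putting these options together, a minimal cinder.conf back-end stanza for the Storwize/SVC driver might look like the following sketch. The back-end name, IP address, credentials, and pool name are placeholders, and the driver path should be checked against your installed release:

```ini
# Hypothetical back-end stanza; replace the IP, credentials, and pool
# name with values from your own Storwize/SVC system.
[storwize1]
volume_driver = cinder.volume.drivers.ibm.storwize_svc.StorwizeSVCDriver
san_ip = 192.168.1.50
san_login = openstack
san_password = secret
storwize_svc_volpool_name = openstackpool
storwize_svc_connection_protocol = iSCSI
volume_backend_name = storwize1
```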
Placement with volume types

The IBM Storwize/SVC driver exposes capabilities that can be added to the extra specs of volume types and used by the filter scheduler to determine placement of new volumes. Make sure to prefix these keys with capabilities: to indicate that the scheduler should use them. The following extra specs are supported:
- capabilities:volume_back-end_name - Specify a specific back-end where the volume should be created. The back-end name is a concatenation of the name of the IBM Storwize/SVC storage system as shown in lssystem, an underscore, and the name of the pool (mdisk group). For example: capabilities:volume_back-end_name=myV7000_openstackpool
- capabilities:compression_support - Specify a back-end according to compression support. A value of True requests a back-end that supports compression, and a value of False requests a back-end that does not support compression. If you do not have constraints on compression support, do not set this key. Note that specifying True does not enable compression; it only requests that the volume be placed on a back-end that supports compression. Example syntax: capabilities:compression_support='<is> True'
- capabilities:easytier_support - Similar semantics to the compression_support key, but for specifying according to support of the Easy Tier feature. Example syntax: capabilities:easytier_support='<is> True'
- capabilities:storage_protocol - Specifies the connection protocol used to attach volumes of this type to instances. Legal values are iSCSI and FC. This extra specs value is used for both placement and setting the protocol used for this volume. In the example syntax, note that <in> is used as opposed to the <is> used in the previous examples: capabilities:storage_protocol='<in> FC'
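As a sketch of how the placement keys above are applied in practice (the volume type name and back-end name here are hypothetical):

```
$ cinder type-create fc_volumes
$ cinder type-key fc_volumes set capabilities:storage_protocol='<in> FC' capabilities:volume_back-end_name=myV7000_openstackpool
```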
Configure per-volume creation options

Volume types can also be used to pass options to the IBM Storwize/SVC driver, which override the default values set in the configuration file. The following extra specs keys are supported by the IBM Storwize/SVC driver:

- rsize
- warning
- autoexpand
- grainsize
- compression
- easytier
- multipath
- iogrp

These keys have the same semantics as their counterparts in the configuration file, and are set similarly; for example, rsize=2 or compression=False.
Example: Volume types
$ cinder type-create compressed
$ cinder type-key compressed set capabilities:storage_protocol='<in> iSCSI' capabilities:compression_support='<is> True' drivers:compression=True
$ cinder create --display-name "compressed volume" --volume-type compressed 50
Volume types can be used, for example, to provide users with different:

- performance levels (such as allocating entirely on an HDD tier, using Easy Tier for an HDD-SSD mix, or allocating entirely on an SSD tier)
- resiliency levels (such as allocating volumes in pools with different RAID levels)
- features (such as enabling or disabling Real-time Compression)
QOS

The IBM Storwize/SVC driver provides QOS support for storage volumes by controlling the I/O amount. QOS is enabled by editing the etc/cinder/cinder.conf file and setting the storwize_svc_allow_tenant_qos option to True. There are three ways to set the IOThrottling parameter for storage volumes:

- Add the qos:IOThrottling key into a QOS specification and associate it with a volume type.
- Add the qos:IOThrottling key into an extra specification with a volume type.
- Add the qos:IOThrottling key to the storage volume metadata.
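A sketch of the first method, creating a QOS specification carrying the IOThrottling key and associating it with a volume type (the specification name, throttle value, and the placeholder IDs are hypothetical):

```
$ cinder qos-create limitedIOPS qos:IOThrottling=100
$ cinder qos-associate <qos_spec_id> <volume_type_id>
```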
2.1.10.3. Operational notes for the Storwize family and SVC driver
Migrate volumes

In the context of the Block Storage volume migration feature, the IBM Storwize/SVC driver uses the storage system's virtualization technology. When migrating a volume from one pool to another, the volume appears in the destination pool almost immediately, while the storage system moves the data in the background. To enable this feature, both pools involved in a given volume migration must have the same values for extent_size. If the pools have different values for extent_size, the data will still be moved directly between the pools (not host-side copy), but the operation will be synchronous.
Extend volumes
Snapshots and clones
Volume retype

The IBM Storwize/SVC driver enables you to modify volume types. When you modify volume types, you can also change these extra specs properties:

- rsize
- warning
- autoexpand
- grainsize
- compression
- easytier
- iogrp
- nofmtdisk

When you change the rsize, grainsize or compression properties, volume copies are asynchronously synchronized on the array.

To change the iogrp property, IBM Storwize/SVC firmware version 6.4.0 or later is required.
2.1.11. IBM XIV and DS8000 volume driver
Set the following in your cinder.conf file, and use the following options to configure it.
volume_driver = cinder.volume.drivers.xiv_ds8k.XIVDS8KDriver
Table 2.11. Description of IBM XIV and DS8000 volume driver configuration options
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| san_clustername = | (StrOpt) Cluster name to use for creating volumes |
| san_ip = | (StrOpt) IP address of SAN controller |
| san_login = admin | (StrOpt) Username for SAN controller |
| san_password = | (StrOpt) Password for SAN controller |
| xiv_chap = disabled | (StrOpt) CHAP authentication mode, effective only for iscsi (disabled\|enabled) |
| xiv_ds8k_connection_type = iscsi | (StrOpt) Connection type to the IBM Storage Array |
| xiv_ds8k_proxy = xiv_ds8k_openstack.nova_proxy.XIVDS8KNovaProxy | (StrOpt) Proxy driver that connects to the IBM Storage Array |
2.1.12. LVM
The default volume back end uses local volumes managed by LVM. Set the following in your cinder.conf configuration file, and use the following options to configure for iSCSI transport:

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iscsi

Use the following options to configure for the iSER transport:

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iser
Table 2.12. Description of LVM configuration options
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| lvm_conf_file = /etc/cinder/lvm.conf | (StrOpt) LVM conf file to use for the LVM driver in Cinder; this setting is ignored if the specified file does not exist (You can also specify 'None' to not use a conf file even if one exists). |
| lvm_mirrors = 0 | (IntOpt) If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors + 2 PVs with available space |
| lvm_type = default | (StrOpt) Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to thin if thin is supported. |
| volume_group = cinder-volumes | (StrOpt) Name for the VG that will contain exported volumes |
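For reference, a minimal cinder.conf stanza combining the LVM options above might look like the following sketch (the back-end name is a placeholder, and the volume group must already exist on the host):

```ini
# Hypothetical LVM back-end stanza using thin provisioning over iSCSI.
[lvm1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
lvm_type = thin
volume_backend_name = lvm1
```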
2.1.13. NetApp unified driver
2.1.13.1. NetApp clustered Data ONTAP storage family
2.1.13.1.1. NetApp iSCSI configuration for clustered Data ONTAP
Configuration options for clustered Data ONTAP family with iSCSI protocol
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
You must override the default value of netapp_storage_protocol with iscsi.
Table 2.13. Description of NetApp cDOT iSCSI driver configuration options
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| netapp_login = None | (StrOpt) Administrative user account name used to access the storage system or proxy server. |
| netapp_lun_ostype = None | (StrOpt) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created. |
| netapp_lun_space_reservation = enabled | (StrOpt) This option determines if storage space is reserved for LUN allocation. If enabled, LUNs are thick provisioned. If space reservation is disabled, storage space is allocated on demand. |
| netapp_partner_backend_name = None | (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
| netapp_password = None | (StrOpt) Password for the administrative user account specified in the netapp_login option. |
| netapp_pool_name_search_pattern = (.+) | (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
| netapp_server_hostname = None | (StrOpt) The hostname (or IP address) for the storage system or proxy server. |
| netapp_server_port = None | (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
| netapp_size_multiplier = 1.2 | (FloatOpt) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release. |
| netapp_storage_family = ontap_cluster | (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
| netapp_storage_protocol = None | (StrOpt) The storage protocol to be used on the data path with the storage system. |
| netapp_transport_type = http | (StrOpt) The transport protocol used when communicating with the storage system or proxy server. |
| netapp_vserver = None | (StrOpt) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur. |
If you specify an account in netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the OpenStack Block Storage logs.
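The netapp_size_multiplier check described in the table above can be illustrated with a short sketch. This is an illustration of the documented semantics (requested size padded by the multiplier before comparing against free Vserver space), not the driver's actual code:

```python
def has_enough_space(requested_gb, available_gb, size_multiplier=1.2):
    """Return True if the Vserver has enough free space for the request,
    after padding the requested size by netapp_size_multiplier."""
    return requested_gb * size_multiplier <= available_gb

# A 100 GB request is treated as needing 120 GB of free space.
print(has_enough_space(100, 110))  # False
print(has_enough_space(100, 130))  # True
```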
2.1.13.1.2. NetApp NFS configuration for clustered Data ONTAP
Configuration options for the clustered Data ONTAP family with NFS protocol
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, clustered Data ONTAP, and NFS respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
Table 2.14. Description of NetApp cDOT NFS driver configuration options
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| expiry_thres_minutes = 720 | (IntOpt) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share. |
| netapp_copyoffload_tool_path = None | (StrOpt) This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file. |
| netapp_host_type = None | (StrOpt) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts. |
| netapp_login = None | (StrOpt) Administrative user account name used to access the storage system or proxy server. |
| netapp_lun_ostype = None | (StrOpt) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created. |
| netapp_partner_backend_name = None | (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
| netapp_password = None | (StrOpt) Password for the administrative user account specified in the netapp_login option. |
| netapp_pool_name_search_pattern = (.+) | (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
| netapp_server_hostname = None | (StrOpt) The hostname (or IP address) for the storage system or proxy server. |
| netapp_server_port = None | (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
| netapp_storage_family = ontap_cluster | (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
| netapp_storage_protocol = None | (StrOpt) The storage protocol to be used on the data path with the storage system. |
| netapp_transport_type = http | (StrOpt) The transport protocol used when communicating with the storage system or proxy server. |
| netapp_vserver = None | (StrOpt) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur. |
| thres_avl_size_perc_start = 20 | (IntOpt) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned. |
| thres_avl_size_perc_stop = 60 | (IntOpt) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option. |
If you specify an account in netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the OpenStack Block Storage logs.
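The three NFS image-cache options in the table above work together: a cleaning cycle starts when free space drops below thres_avl_size_perc_start, evicts files idle longer than expiry_thres_minutes, and stops once free space reaches thres_avl_size_perc_stop. A sketch of that policy (an illustration of the documented behavior, not the driver's code):

```python
def should_clean(avail_pct, start_pct=20):
    # A cleaning cycle begins when available space drops below the start
    # threshold (thres_avl_size_perc_start).
    return avail_pct < start_pct

def evictable(files, expiry_thres_minutes=720):
    # During a cycle, files not accessed within expiry_thres_minutes are
    # deleted until free space reaches thres_avl_size_perc_stop.
    # files: list of (name, minutes_since_last_access) pairs.
    return [name for name, idle in files if idle > expiry_thres_minutes]

files = [("img-a", 1000), ("img-b", 100)]
print(should_clean(15))   # True: 15% free is below the 20% start threshold
print(evictable(files))   # ['img-a']
```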
NetApp NFS Copy Offload client

The NetApp NFS Copy Offload feature can be used in either of the following scenarios:
- The Image Service is configured to store images in an NFS share that is exported from a NetApp FlexVol volume and the destination for the new Block Storage volume will be on an NFS share exported from a different FlexVol volume than the one used by the Image Service. Both FlexVols must be located within the same cluster.
- The source image from the Image Service has already been cached in an NFS image cache within a Block Storage backend. The cached image resides on a different FlexVol volume than the destination for the new Block Storage volume. Both FlexVols must be located within the same cluster.
To use this feature, you must configure the Image Service and the Block Storage service, as follows:
- Set the default_store configuration option to file.
- Set the filesystem_store_datadir configuration option to the path to the Image Service NFS export.
- Set the show_image_direct_url configuration option to True.
- Set the show_multiple_locations configuration option to True.

  Important: If configured without the proper policy settings, a non-admin user of the Image Service can replace active image data (that is, switch out a current image without other users knowing). See the OSSN announcement (recommended actions) for configuration information: https://wiki.openstack.org/wiki/OSSN/OSSN-0065
- Set the filesystem_store_metadata_file configuration option to a metadata file. The metadata file should contain a JSON object that contains the correct information about the NFS export used by the Image Service, similar to:

      {
          "share_location": "nfs://192.168.0.1/myGlanceExport",
          "mount_point": "/var/lib/glance/images",
          "type": "nfs"
      }
- Set the netapp_copyoffload_tool_path configuration option to the path to the NetApp Copy Offload binary.
- Set the glance_api_version configuration option to 2.
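Collected together, the Image Service side of the steps above might look like the following sketch in glance-api.conf (the paths are placeholders):

```ini
[DEFAULT]
default_store = file
filesystem_store_datadir = /var/lib/glance/images
show_image_direct_url = True
show_multiple_locations = True
filesystem_store_metadata_file = /etc/glance/filesystem_store_metadata.json
```

The last two options in the list (netapp_copyoffload_tool_path and glance_api_version) are Block Storage options and are set in cinder.conf instead.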
- The storage system must have Data ONTAP v8.2 or greater installed.
- The vStorage feature must be enabled on each storage virtual machine (SVM, also known as a Vserver) that is permitted to interact with the copy offload client.
- To configure the copy offload workflow, enable NFS v4.0 or greater and export it from the SVM.
To download the NetApp copy offload binary used with the netapp_copyoffload_tool_path configuration option, visit the Utility Toolchest page at the NetApp Support portal (login is required).
2.1.13.1.3. NetApp-supported extra specs for clustered Data ONTAP
Table 2.15. Description of extra specs options for NetApp Unified Driver with Clustered Data ONTAP
| Extra spec | Type | Description |
|---|---|---|
| netapp_raid_type | String | Limit the candidate volume list based on one of the following raid types: raid4, raid_dp. |
| netapp_disk_type | String | Limit the candidate volume list based on one of the following disk types: ATA, BSAS, EATA, FCAL, FSAS, LUN, MSATA, SAS, SATA, SCSI, XATA, XSAS, or SSD. |
| netapp:qos_policy_group [a] | String | Specify the name of a QoS policy group, which defines measurable Service Level Objectives, that should be applied to the OpenStack Block Storage volume at the time of volume creation. Ensure that the QoS policy group object within Data ONTAP is defined before an OpenStack Block Storage volume is created, and that the QoS policy group is not associated with the destination FlexVol volume. |
| netapp_mirrored | Boolean | Limit the candidate volume list to only the ones that are mirrored on the storage controller. |
| netapp_unmirrored [b] | Boolean | Limit the candidate volume list to only the ones that are not mirrored on the storage controller. |
| netapp_dedup | Boolean | Limit the candidate volume list to only the ones that have deduplication enabled on the storage controller. |
| netapp_nodedup [b] | Boolean | Limit the candidate volume list to only the ones that have deduplication disabled on the storage controller. |
| netapp_compression | Boolean | Limit the candidate volume list to only the ones that have compression enabled on the storage controller. |
| netapp_nocompression [b] | Boolean | Limit the candidate volume list to only the ones that have compression disabled on the storage controller. |
| netapp_thin_provisioned | Boolean | Limit the candidate volume list to only the ones that support thin provisioning on the storage controller. |
| netapp_thick_provisioned [b] | Boolean | Limit the candidate volume list to only the ones that support thick provisioning on the storage controller. |

[a] Note that this extra spec has a colon (:) in its name because it is used by the driver to assign the QoS policy group to the OpenStack Block Storage volume after it has been provisioned.

[b] In the Juno release, these negative-assertion extra specs were formally deprecated by the NetApp unified driver. Instead of using a deprecated negative-assertion extra spec (for example, netapp_unmirrored) with a value of true, use the corresponding positive-assertion extra spec (for example, netapp_mirrored) with a value of false.
2.1.13.2. NetApp Data ONTAP operating in 7-Mode storage family
2.1.13.2.1. NetApp iSCSI configuration for Data ONTAP operating in 7-Mode
Configuration options for the Data ONTAP operating in 7-Mode storage family with iSCSI protocol
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
You must override the default value of netapp_storage_protocol with iscsi.
Table 2.16. Description of NetApp 7-Mode iSCSI driver configuration options
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| netapp_login = None | (StrOpt) Administrative user account name used to access the storage system or proxy server. |
| netapp_partner_backend_name = None | (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
| netapp_password = None | (StrOpt) Password for the administrative user account specified in the netapp_login option. |
| netapp_pool_name_search_pattern = (.+) | (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
| netapp_server_hostname = None | (StrOpt) The hostname (or IP address) for the storage system or proxy server. |
| netapp_server_port = None | (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
| netapp_size_multiplier = 1.2 | (FloatOpt) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release. |
| netapp_storage_family = ontap_cluster | (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
| netapp_storage_protocol = None | (StrOpt) The storage protocol to be used on the data path with the storage system. |
| netapp_transport_type = http | (StrOpt) The transport protocol used when communicating with the storage system or proxy server. |
| netapp_vfiler = None | (StrOpt) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system. |
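The netapp_pool_name_search_pattern option in the table above is a regular expression, and the default (.+) matches every pool. A sketch of how a restricted pattern would filter pool names (the pool names here are hypothetical, and whether the driver uses match or search semantics is an assumption):

```python
import re

pools = ["openstack_pool1", "openstack_pool2", "archive_pool"]

# Restrict provisioning to pools whose names start with "openstack_".
pattern = re.compile(r"openstack_.+")
selected = [p for p in pools if pattern.match(p)]
print(selected)  # ['openstack_pool1', 'openstack_pool2']
```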
2.1.13.2.2. NetApp NFS configuration for Data ONTAP operating in 7-Mode
Configuration options for the Data ONTAP operating in 7-Mode family with NFS protocol
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and NFS respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = nfs
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
Table 2.17. Description of NetApp 7-Mode NFS driver configuration options
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| expiry_thres_minutes = 720 | (IntOpt) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share. |
| netapp_login = None | (StrOpt) Administrative user account name used to access the storage system or proxy server. |
| netapp_partner_backend_name = None | (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
| netapp_password = None | (StrOpt) Password for the administrative user account specified in the netapp_login option. |
| netapp_pool_name_search_pattern = (.+) | (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
| netapp_server_hostname = None | (StrOpt) The hostname (or IP address) for the storage system or proxy server. |
| netapp_server_port = None | (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
| netapp_storage_family = ontap_cluster | (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
| netapp_storage_protocol = None | (StrOpt) The storage protocol to be used on the data path with the storage system. |
| netapp_transport_type = http | (StrOpt) The transport protocol used when communicating with the storage system or proxy server. |
| netapp_vfiler = None | (StrOpt) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system. |
| thres_avl_size_perc_start = 20 | (IntOpt) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned. |
| thres_avl_size_perc_stop = 60 | (IntOpt) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option. |
2.1.13.3. NetApp E-Series storage family
2.1.13.3.1. NetApp iSCSI configuration for E-Series
- The use_multipath_for_image_xfer option should be set to True in the cinder.conf file within the driver-specific stanza (for example, [myDriver]).
- The iscsi_use_multipath option should be set to True in the nova.conf file within the [libvirt] stanza.
Configuration options for E-Series storage family with iSCSI protocol
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, E-Series, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = eseries
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
netapp_controller_ips = 1.2.3.4,5.6.7.8
netapp_sa_password = arrayPassword
netapp_storage_pools = pool1,pool2
use_multipath_for_image_xfer = True
To use the E-Series driver, you must override the default value of netapp_storage_family with eseries, and specify netapp_storage_protocol with iscsi.
Table 2.18. Description of NetApp E-Series driver configuration options
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
netapp_controller_ips = None
|
(StrOpt) This option is only utilized when the storage family is configured to eseries. This option is used to restrict provisioning to the specified controllers. Specify the value of this option to be a comma-separated list of controller hostnames or IP addresses to be used for provisioning. |
netapp_enable_multiattach = False
|
(BoolOpt) This option specifies whether the driver should allow operations that require multiple attachments to a volume. An example would be live migration of servers that have volumes attached. When enabled, this backend is limited to 256 total volumes in order to guarantee volumes can be accessed by more than one host. |
netapp_host_type = None
|
(StrOpt) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts. |
netapp_login = None
|
(StrOpt) Administrative user account name used to access the storage system or proxy server. |
netapp_partner_backend_name = None
|
(StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
netapp_password = None
|
(StrOpt) Password for the administrative user account specified in the netapp_login option. |
netapp_pool_name_search_pattern = (.+)
|
(StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
netapp_sa_password = None
|
(StrOpt) Password for the NetApp E-Series storage array. |
netapp_server_hostname = None
|
(StrOpt) The hostname (or IP address) for the storage system or proxy server. |
netapp_server_port = None
|
(IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
netapp_storage_family = ontap_cluster
|
(StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
netapp_transport_type = http
|
(StrOpt) The transport protocol used when communicating with the storage system or proxy server. |
netapp_webservice_path = /devmgr/v2
|
(StrOpt) This option is used to specify the path to the E-Series proxy application on a proxy server. The value is combined with the value of the netapp_transport_type, netapp_server_hostname, and netapp_server_port options to create the URL used by the driver to connect to the proxy application. |
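As a hedged illustration of how these options combine, the hypothetical values below would, per the option descriptions above, yield the proxy URL shown in the comment:

```
# Hypothetical proxy settings; the driver combines these into
# http://myproxy.example.com:8080/devmgr/v2
netapp_transport_type = http
netapp_server_hostname = myproxy.example.com
netapp_server_port = 8080
netapp_webservice_path = /devmgr/v2
```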
2.1.13.4. Upgrading prior NetApp drivers to the NetApp unified driver
2.1.13.4.1. Upgraded NetApp drivers
Driver upgrade configuration
- NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier):
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver
NetApp unified driver configuration:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
- NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or earlier):
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
NetApp unified driver configuration:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
- NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier):
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
NetApp unified driver configuration:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = iscsi
- NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier):
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
NetApp unified driver configuration:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = nfs
2.1.13.4.2. Deprecated NetApp drivers
- NetApp iSCSI driver for clustered Data ONTAP.
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppCmodeISCSIDriver
- NetApp NFS driver for clustered Data ONTAP.
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver
- NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage controller.
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver
- NetApp NFS driver for Data ONTAP operating in 7-Mode storage controller.
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
2.1.14. NFS driver
2.1.14.1. How the NFS driver works
The NFS driver does not give an instance block-level access to a storage device. Instead, volumes are created as files on an NFS share and mapped to instances, much as QEMU stores instance disks as files in the /var/lib/nova/instances directory.
2.1.14.2. Enable the NFS driver and related options
To enable the NFS driver, set volume_driver in cinder.conf:
volume_driver=cinder.volume.drivers.nfs.NfsDriver
Table 2.19. Description of NFS storage configuration options
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
nfs_mount_attempts = 3
|
(IntOpt) The number of attempts to mount nfs shares before raising an error. At least one attempt will be made to mount an nfs share, regardless of the value specified. |
nfs_mount_options = None
|
(StrOpt) Mount options passed to the nfs client. See the nfs man page for details. |
nfs_mount_point_base = $state_path/mnt
|
(StrOpt) Base dir containing mount points for nfs shares. |
nfs_oversub_ratio = 1.0
|
(FloatOpt) This option compares allocated space to available space on the volume destination. If the ratio exceeds this value, the destination is no longer valid. Note that this option is deprecated in favor of "max_oversubscription_ratio" and will be removed in the Mitaka release. |
nfs_shares_config = /etc/cinder/nfs_shares
|
(StrOpt) File with the list of available nfs shares |
nfs_sparsed_volumes = True
|
(BoolOpt) Create volumes as sparse files that take no space. If set to False, volumes are created as regular files; in that case, volume creation takes much longer. |
nfs_used_ratio = 0.95
|
(FloatOpt) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination. Note that this option is deprecated in favor of "reserved_percentage" and will be removed in the Mitaka release. |
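As a hedged illustration of how the two deprecated ratio options interact, the values below are hypothetical:

```
# Hypothetical values: stop allocating new volumes once 95% of the share's
# actual capacity is used (nfs_used_ratio), and treat the share as an
# invalid destination once allocated (thin-provisioned) space exceeds
# 1.5x the available space (nfs_oversub_ratio).
nfs_used_ratio = 0.95
nfs_oversub_ratio = 1.5
```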
If the nfs_mount_options configuration option contains a request for a specific version of NFS, or if specific options are specified in the shares configuration file named by the nfs_shares_config configuration option, the mount is attempted as requested with no subsequent attempts.
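For example, a per-share NFS version request might look like the lines below; this assumes the shares file accepts optional mount flags after the export path, and the addresses are illustrative:

```
# /etc/cinder/nfs_shares -- hypothetical entries; flags after the export
# apply to that share only and suppress fallback mount attempts
192.168.1.200:/storage
192.168.1.201:/storage -o vers=4
```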
2.1.14.3. How to use the NFS driver
- Access to one or more NFS servers. Creating an NFS server is outside the scope of this document. This example assumes access to the following NFS servers and mount points:
192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage
This example demonstrates the use of this driver with multiple NFS servers. Multiple servers are not required; one is usually enough. - Add your list of NFS servers to the file you specified with the
nfs_shares_config
option. For example, if the value of this option was set to /etc/cinder/shares.txt, then:
# cat /etc/cinder/shares.txt
192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage
Comments are allowed in this file. They begin with a #.
- Configure the
nfs_mount_point_base option. This is a directory where cinder-volume mounts all NFS shares stored in shares.txt. For this example, /var/lib/cinder/nfs is used. You can, of course, use the default value of $state_path/mnt.
- Start the cinder-volume service. /var/lib/cinder/nfs should now contain a directory for each NFS share specified in shares.txt. The name of each directory is a hashed name:
# ls /var/lib/cinder/nfs/
...
46c5db75dc3a3a50a10bfd1a456a9f3f
...
- You can now create volumes as you normally would:
$ nova volume-create --display-name myvol 5
# ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f
volume-a8862558-e6d6-4648-b5df-bb84f31c8935
This volume can also be attached and deleted just like other volumes. However, snapshotting is not supported.
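Putting the steps above together, a minimal cinder.conf sketch for this example; the values come from the steps, and a single-backend [DEFAULT] layout is assumed:

```
[DEFAULT]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/shares.txt
nfs_mount_point_base = /var/lib/cinder/nfs
```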
NFS driver notes
cinder-volume manages the mounting of the NFS shares as well as volume creation on the shares. Keep this in mind when planning your OpenStack architecture. If you have one master NFS server, it might make sense to only have one cinder-volume service to handle all requests to that NFS server. However, if that single server is unable to handle all requests, more than one cinder-volume service is needed, as well as potentially more than one NFS server.
- Because data is stored in a file and not actually on a block storage device, you might not see the same IO performance as you would with a traditional block storage driver. Test accordingly.
- Despite possible IO performance loss, having volume data stored in a file might be beneficial. For example, backing up volumes can be as easy as copying the volume files. Note: Regular IO flushing and syncing still applies.
2.1.15. SolidFire
To configure the use of a SolidFire cluster with the cinder-volume service, modify your cinder.conf file as follows:
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 172.17.1.182         # the address of your MVIP
san_login = sfadmin           # your cluster admin login
san_password = sfpassword     # your cluster admin password
sf_account_prefix = ''        # prefix for tenant account creation on solidfire cluster
Older releases of the driver created a unique account named $cinder-volume-service-hostname-$tenant-id on the SolidFire cluster for each tenant. Unfortunately, this account formation resulted in issues for High Availability (HA) installations and installations where the cinder-volume service can move to a new node. The current default implementation does not experience this issue because no prefix is used. For installations created on a prior release, the old default behavior can be restored by using the keyword "hostname" in sf_account_prefix.
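For an installation upgraded from a prior release that needs the old account-naming behavior, the setting would look like the sketch below, based on the option description:

```
# Restore the previous default: prefix SolidFire accounts with the
# cinder node hostname (the literal keyword "hostname" is special).
sf_account_prefix = hostname
```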
Table 2.20. Description of SolidFire driver configuration options
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
sf_account_prefix = None
|
(StrOpt) Create SolidFire accounts with this prefix. Any string can be used here, but the string "hostname" is special and will create a prefix using the cinder node hostname (previous default behavior). The default is NO prefix. |
sf_allow_template_caching = True
|
(BoolOpt) Create an internal cache of image copies when a bootable volume is created, to eliminate fetching from Glance and qemu conversion on subsequent calls. |
sf_allow_tenant_qos = False
|
(BoolOpt) Allow tenants to specify QoS on create |
sf_api_port = 443
|
(IntOpt) SolidFire API port. Useful if the device api is behind a proxy on a different port. |
sf_emulate_512 = True
|
(BoolOpt) Set 512-byte emulation on volume creation. |
sf_enable_volume_mapping = True
|
(BoolOpt) Create an internal mapping of volume IDs and accounts. This optimizes lookups and performance at the expense of memory; very large deployments may want to consider setting this to False. |
sf_svip = None
|
(StrOpt) Overrides the default cluster SVIP with the one specified. This is required for deployments that have implemented the use of VLANs for iSCSI networks in their cloud. |
sf_template_account_name = openstack-vtemplate
|
(StrOpt) Account name on the SolidFire cluster to use as owner of template/cache volumes (created if it does not exist). |
2.1.16. Tintri
- Edit the etc/cinder/cinder.conf file and set the cinder.volume.drivers.tintri options:
volume_driver=cinder.volume.drivers.tintri.TintriDriver

# Mount options passed to the nfs client. See the
# nfs man page for details. (string value)
nfs_mount_options=vers=3,lookupcache=pos

#
# Options defined in cinder.volume.drivers.tintri
#

# The hostname (or IP address) for the storage system (string
# value)
tintri_server_hostname={Tintri VMstore Management IP}

# User name for the storage system (string value)
tintri_server_username={username}

# Password for the storage system (string value)
tintri_server_password={password}

# API version for the storage system (string value)
#tintri_api_version=v310

# Following options needed for NFS configuration
# File with the list of available nfs shares (string value)
#nfs_shares_config=/etc/cinder/nfs_shares
- Edit the etc/nova/nova.conf file, and set the nfs_mount_options:
nfs_mount_options=vers=3
- Edit the /etc/cinder/nfs_shares file, and add the Tintri VMstore mount points associated with the configured VMstore management IP in the cinder.conf file:
{vmstore_data_ip}:/tintri/{submount1}
{vmstore_data_ip}:/tintri/{submount2}
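As a hypothetical filled-in example of that shares file, with an illustrative data IP and submount names (not defaults):

```
# /etc/cinder/nfs_shares -- hypothetical Tintri VMstore data-IP entries
192.168.41.100:/tintri/cinder_volumes1
192.168.41.100:/tintri/cinder_volumes2
```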
Table 2.21. Description of Tintri volume driver configuration options
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
tintri_api_version = v310
|
(StrOpt) API version for the storage system |
tintri_server_hostname = None
|
(StrOpt) The hostname (or IP address) for the storage system |
tintri_server_password = None
|
(StrOpt) Password for the storage system |
tintri_server_username = None
|
(StrOpt) User name for the storage system |