Configuration Reference

Red Hat OpenStack Platform 8

Configuring Red Hat OpenStack Platform environments

OpenStack Documentation Team

Red Hat Customer Content Services

Abstract

This document is for system administrators who want to look up configuration options. It lists the configuration options available with OpenStack, with option names and descriptions auto-generated from the code of each project, and it includes sample configuration files.

Chapter 1. Bare metal

The Bare metal service is capable of managing and provisioning physical machines. The configuration file of this module is /etc/ironic/ironic.conf.
The following tables provide a comprehensive list of the Bare metal service configuration options.
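
For orientation, options live in named sections of /etc/ironic/ironic.conf. A minimal illustrative fragment, using only options documented in the tables below (the values shown are defaults, not recommendations), looks like this:

[DEFAULT]
auth_strategy = keystone
enabled_drivers = pxe_ipmitool

[api]
host_ip = 0.0.0.0
port = 6385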

Table 1.1. Description of agent configuration options

Configuration option = Default value Description
[agent]
agent_api_version = v1 (StrOpt) API version to use for communicating with the ramdisk agent.
agent_erase_devices_priority = None (IntOpt) Priority to run in-band erase devices via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 10 for the GenericHardwareManager). If set to 0, will not run during cleaning.
agent_pxe_append_params = nofb nomodeset vga=normal (StrOpt) Additional append parameters for baremetal PXE boot.
agent_pxe_bootfile_name = pxelinux.0 (StrOpt) Neutron bootfile DHCP parameter.
agent_pxe_config_template = $pybasedir/drivers/modules/agent_config.template (StrOpt) Template file for PXE configuration.
heartbeat_timeout = 300 (IntOpt) Maximum interval (in seconds) for agent heartbeats.
manage_tftp = True (BoolOpt) Whether Ironic will manage TFTP files for the deploy ramdisks. If set to False, you will need to configure your own TFTP server that allows booting the deploy ramdisks.

Table 1.2. Description of AMQP configuration options

Configuration option = Default value Description
[DEFAULT]
control_exchange = openstack (StrOpt) The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option.
notification_driver = [] (MultiStrOpt) Driver or drivers to handle sending notifications.
notification_topics = notifications (ListOpt) AMQP topic used for OpenStack notifications.
transport_url = None (StrOpt) A URL representing the messaging driver to use and its full configuration. If not set, we fall back to the rpc_backend option and driver specific configuration.
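
As a sketch of the transport_url format (the broker host and credentials below are placeholders), the value combines the driver name with its full connection details in a single URL:

[DEFAULT]
transport_url = rabbit://guest:RABBIT_PASSWORD@broker.example.com:5672/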

Table 1.3. Description of AMT configuration options

Configuration option = Default value Description
[amt]
action_wait = 10 (IntOpt) Amount of time (in seconds) to wait before retrying an AMT operation.
max_attempts = 3 (IntOpt) Maximum number of times to attempt an AMT operation before failing.
protocol = http (StrOpt) Protocol used for the AMT endpoint; supported values are http and https.

Table 1.4. Description of API configuration options

Configuration option = Default value Description
[api]
host_ip = 0.0.0.0 (StrOpt) The listen IP for the Ironic API server.
max_limit = 1000 (IntOpt) The maximum number of items returned in a single response from a collection resource.
port = 6385 (IntOpt) The port for the Ironic API server.

Table 1.5. Description of authorization token configuration options

Configuration option = Default value Description
[keystone_authtoken]
admin_password = None (StrOpt) Service user password.
admin_tenant_name = admin (StrOpt) Service tenant name.
admin_token = None (StrOpt) This option is deprecated and may be removed in a future release. Single shared secret with the Keystone configuration used for bootstrapping a Keystone installation, or otherwise bypassing the normal authentication process. This option should not be used; use `admin_user` and `admin_password` instead.
admin_user = None (StrOpt) Service username.
auth_admin_prefix = (StrOpt) Prefix to prepend at the beginning of the path. Deprecated, use identity_uri.
auth_host = 127.0.0.1 (StrOpt) Host providing the admin Identity API endpoint. Deprecated, use identity_uri.
auth_plugin = None (StrOpt) Name of the plugin to load
auth_port = 35357 (IntOpt) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_protocol = https (StrOpt) Protocol of the admin Identity API endpoint (http or https). Deprecated, use identity_uri.
auth_section = None (StrOpt) Config Section from which to load plugin specific options
auth_uri = None (StrOpt) Complete public Identity API endpoint.
auth_version = None (StrOpt) API version of the admin Identity API endpoint.
cache = None (StrOpt) Env key for the swift cache.
cafile = None (StrOpt) A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs.
certfile = None (StrOpt) Required if identity server requires client certificate
check_revocations_for_cached = False (BoolOpt) If true, the revocation list will be checked for cached tokens. This requires that PKI tokens are configured on the identity server.
delay_auth_decision = False (BoolOpt) Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components.
enforce_token_bind = permissive (StrOpt) Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" to require any form of token binding. Finally, the name of a binding method that must be present in tokens.
hash_algorithms = md5 (ListOpt) Hash algorithms to use for hashing PKI tokens. This may be a single algorithm or multiple. The algorithms are those supported by Python standard hashlib.new(). The hashes will be tried in the order given, so put the preferred one first for performance. The result of the first hash will be stored in the cache. This will typically be set to multiple values only while migrating from a less secure algorithm to a more secure one. Once all the old tokens are expired this option should be set to a single value for better performance.
http_connect_timeout = None (IntOpt) Request timeout value for communicating with Identity API server.
http_request_max_retries = 3 (IntOpt) Number of times to attempt reconnecting when communicating with the Identity API server.
identity_uri = None (StrOpt) Complete admin Identity API endpoint. This should specify the unversioned root endpoint e.g. https://localhost:35357/
include_service_catalog = True (BoolOpt) (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header.
insecure = False (BoolOpt) If True, do not verify HTTPS connections.
keyfile = None (StrOpt) Required if identity server requires client certificate
memcache_pool_conn_get_timeout = 10 (IntOpt) (Optional) Number of seconds that an operation will wait to get a memcache client connection from the pool.
memcache_pool_dead_retry = 300 (IntOpt) (Optional) Number of seconds memcached server is considered dead before it is tried again.
memcache_pool_maxsize = 10 (IntOpt) (Optional) Maximum total number of open connections to every memcached server.
memcache_pool_socket_timeout = 3 (IntOpt) (Optional) Socket timeout in seconds for communicating with a memcache server.
memcache_pool_unused_timeout = 60 (IntOpt) (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed.
memcache_secret_key = None (StrOpt) (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation.
memcache_security_strategy = None (StrOpt) (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.
memcache_use_advanced_pool = False (BoolOpt) (Optional) Use the advanced (eventlet safe) memcache client pool. The advanced pool will only work under python 2.x.
memcached_servers = None (ListOpt) Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process.
revocation_cache_time = 10 (IntOpt) Determines the frequency at which the list of revoked tokens is retrieved from the Identity service (in seconds). A high number of revocation events combined with a low cache duration may significantly reduce performance.
signing_dir = None (StrOpt) Directory used to cache files related to PKI tokens.
token_cache_time = 300 (IntOpt) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely.
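
A typical [keystone_authtoken] section combines the identity endpoints with the service credentials from the table above. This is only a sketch; the endpoint host and credentials are placeholders:

[keystone_authtoken]
auth_uri = http://controller.example.com:5000/
identity_uri = http://controller.example.com:35357/
admin_user = ironic
admin_password = IRONIC_PASSWORD
admin_tenant_name = services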

Table 1.6. Description of authorization configuration options

Configuration option = Default value Description
[DEFAULT]
auth_strategy = keystone (StrOpt) Method to use for authentication: noauth or keystone.

Table 1.7. Description of common configuration options

Configuration option = Default value Description
[DEFAULT]
bindir = /usr/local/bin (StrOpt) Directory where ironic binaries are installed.
enabled_drivers = pxe_ipmitool (ListOpt) Specify the list of drivers to load during service initialization. Missing drivers, or drivers which fail to initialize, will prevent the conductor service from starting. The option default is a recommended set of production-oriented drivers. A complete list of drivers present on your system may be found by enumerating the "ironic.drivers" entrypoint. An example may be found in the developer documentation online.
fatal_deprecations = False (BoolOpt) Enables or disables fatal status of deprecations.
force_raw_images = True (BoolOpt) Force backing images to raw format.
grub_config_template = $pybasedir/common/grub_conf.template (StrOpt) Template file for grub configuration file.
hash_distribution_replicas = 1 (IntOpt) [Experimental Feature] Number of hosts to map onto each hash partition. Setting this to more than one will cause additional conductor services to prepare deployment environments and potentially allow the Ironic cluster to recover more quickly if a conductor instance is terminated.
hash_partition_exponent = 5 (IntOpt) Exponent to determine number of hash partitions to use when distributing load across conductors. Larger values will result in more even distribution of load and less load when rebalancing the ring, but more memory usage. Number of partitions per conductor is (2^hash_partition_exponent). This determines the granularity of rebalancing: given 10 hosts and an exponent of 2, there are 40 partitions in the ring. A few thousand partitions should make rebalancing smooth in most cases. The default is suitable for up to a few hundred conductors. Too many partitions have a CPU impact.
host = sd-52009.dedibox.fr (StrOpt) Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. However, the node name must be valid within an AMQP key.
isolinux_bin = /usr/lib/syslinux/isolinux.bin (StrOpt) Path to isolinux binary file.
isolinux_config_template = $pybasedir/common/isolinux_config.template (StrOpt) Template file for isolinux configuration file.
memcached_servers = None (ListOpt) Memcached servers or None for in process cache.
my_ip = 10.0.0.1 (StrOpt) IP address of this host.
parallel_image_downloads = False (BoolOpt) Run image downloads and raw format conversions in parallel.
periodic_interval = 60 (IntOpt) Seconds between running periodic tasks.
pybasedir = /usr/lib/python/site-packages/ironic/ironic (StrOpt) Directory where the ironic python module is installed.
rootwrap_config = /etc/ironic/rootwrap.conf (StrOpt) Path to the rootwrap configuration file to use for running commands as root.
run_external_periodic_tasks = True (BoolOpt) Some periodic tasks can be run in a separate process; this option controls whether they run in this process.
state_path = $pybasedir (StrOpt) Top-level directory for maintaining ironic's state.
tempdir = None (StrOpt) Explicitly specify the temporary working directory.

Table 1.8. Description of conductor configuration options

Configuration option = Default value Description
[conductor]
api_url = None (StrOpt) URL of Ironic API service. If not set ironic can get the current value from the keystone service catalog.
check_provision_state_interval = 60 (IntOpt) Interval between checks of provision timeouts, in seconds.
clean_nodes = True (BoolOpt) Cleaning is a configurable set of steps, such as erasing disk drives, that are performed on the node to ensure it is in a baseline state and ready to be deployed to. This is done after instance deletion, and during the transition from a "managed" to "available" state. When enabled, the particular steps performed to clean a node depend on which driver that node is managed by; see the individual driver's documentation for details. NOTE: The introduction of the cleaning operation causes instance deletion to take significantly longer. In an environment where all tenants are trusted (e.g., because there is only one tenant), this option could be safely disabled.
configdrive_swift_container = ironic_configdrive_container (StrOpt) Name of the Swift container to store config drive data. Used when configdrive_use_swift is True.
configdrive_use_swift = False (BoolOpt) Whether to upload the config drive to Swift.
deploy_callback_timeout = 1800 (IntOpt) Timeout (seconds) for waiting callback from deploy ramdisk. 0 - unlimited.
force_power_state_during_sync = True (BoolOpt) During sync_power_state, should the hardware power state be set to the state recorded in the database (True) or should the database be updated based on the hardware state (False).
heartbeat_interval = 10 (IntOpt) Seconds between conductor heart beats.
heartbeat_timeout = 60 (IntOpt) Maximum time (in seconds) since the last check-in of a conductor.
inspect_timeout = 1800 (IntOpt) Timeout (seconds) for waiting for node inspection. 0 - unlimited.
node_locked_retry_attempts = 3 (IntOpt) Number of attempts to grab a node lock.
node_locked_retry_interval = 1 (IntOpt) Seconds to sleep between node lock attempts.
periodic_max_workers = 8 (IntOpt) Maximum number of worker threads that can be started simultaneously by a periodic task. Should be less than RPC thread pool size.
power_state_sync_max_retries = 3 (IntOpt) During sync_power_state failures, limit the number of times Ironic should try syncing the hardware node power state with the node power state in the database.
send_sensor_data = False (BoolOpt) Enable sending sensor data messages via the notification bus.
send_sensor_data_interval = 600 (IntOpt) Seconds between the conductor sending sensor data messages to Ceilometer via the notification bus.
send_sensor_data_types = ALL (ListOpt) List of comma-separated metric types to send to Ceilometer. The default value, "ALL", is a special value meaning send all the sensor data.
sync_local_state_interval = 180 (IntOpt) When conductors join or leave the cluster, existing conductors may need to update any persistent local state as nodes are moved around the cluster. This option controls how often, in seconds, each conductor will check for nodes that it should "take over". Set it to a negative value to disable the check entirely.
sync_power_state_interval = 60 (IntOpt) Interval between syncing the node power state to the database, in seconds.
workers_pool_size = 100 (IntOpt) The size of the workers greenthread pool.
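
For example, to have each conductor report sensor data to Ceilometer every five minutes, a sketch configuration (the metric type names shown are illustrative) is:

[conductor]
send_sensor_data = True
send_sensor_data_interval = 300
send_sensor_data_types = Temperature,Fan,Voltage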

Table 1.9. Description of console configuration options

Configuration option = Default value Description
[console]
subprocess_checking_interval = 1 (IntOpt) Time interval (in seconds) for checking the status of console subprocess.
subprocess_timeout = 10 (IntOpt) Time (in seconds) to wait for the console subprocess to start.
terminal = shellinaboxd (StrOpt) Path to the serial console terminal program.
terminal_cert_dir = None (StrOpt) Directory containing the terminal SSL certificate (PEM) for serial console access.
terminal_pid_dir = None (StrOpt) Directory for holding terminal pid files. If not specified, the temporary directory will be used.

Table 1.10. Description of database configuration options

Configuration option = Default value Description
[database]
backend = sqlalchemy (StrOpt) The back end to use for the database.
connection = None (StrOpt) The SQLAlchemy connection string to use to connect to the database.
connection_debug = 0 (IntOpt) Verbosity of SQL debugging information: 0=None, 100=Everything.
connection_trace = False (BoolOpt) Add Python stack traces to SQL as comment strings.
db_inc_retry_interval = True (BoolOpt) If True, increases the interval between retries of a database operation up to db_max_retry_interval.
db_max_retries = 20 (IntOpt) Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count.
db_max_retry_interval = 10 (IntOpt) If db_inc_retry_interval is set, the maximum seconds between retries of a database operation.
db_retry_interval = 1 (IntOpt) Seconds between retries of a database transaction.
idle_timeout = 3600 (IntOpt) Timeout before idle SQL connections are reaped.
max_overflow = None (IntOpt) If set, use this value for max_overflow with SQLAlchemy.
max_pool_size = None (IntOpt) Maximum number of SQL connections to keep open in a pool.
max_retries = 10 (IntOpt) Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.
min_pool_size = 1 (IntOpt) Minimum number of SQL connections to keep open in a pool.
mysql_engine = InnoDB (StrOpt) MySQL engine to use.
mysql_sql_mode = TRADITIONAL (StrOpt) The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=
pool_timeout = None (IntOpt) If set, use this value for pool_timeout with SQLAlchemy.
retry_interval = 10 (IntOpt) Interval between retries of opening a SQL connection.
slave_connection = None (StrOpt) The SQLAlchemy connection string to use to connect to the slave database.
sqlite_db = oslo.sqlite (StrOpt) The file name to use with SQLite.
sqlite_synchronous = True (BoolOpt) If True, SQLite uses synchronous mode.
use_db_reconnect = False (BoolOpt) Enable the experimental use of database reconnect on connection lost.
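
For instance, to point the service at a MySQL database (the host name and password are placeholders):

[database]
connection = mysql://ironic:IRONIC_DBPASSWORD@db.example.com/ironic?charset=utf8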

Table 1.11. Description of debug configuration options

Configuration option = Default value Description
[DEFAULT]
backdoor_port = None (StrOpt) Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file.
pecan_debug = False (BoolOpt) Enable pecan debug mode. WARNING: this is insecure and should not be used in production.

Table 1.12. Description of deploy configuration options

Configuration option = Default value Description
[deploy]
dd_block_size = 1M (StrOpt) Block size to use when writing to the node's disk.
efi_system_partition_size = 200 (IntOpt) Size of EFI system partition in MiB when configuring UEFI systems for local boot.
iscsi_verify_attempts = 3 (IntOpt) Maximum attempts to verify an iSCSI connection is active, sleeping 1 second between attempts.

Table 1.13. Description of DHCP configuration options

Configuration option = Default value Description
[dhcp]
dhcp_provider = neutron (StrOpt) DHCP provider to use. "neutron" uses Neutron, and "none" uses a no-op provider.

Table 1.14. Description of discoverd configuration options

Configuration option = Default value Description
[discoverd]
enabled = False (BoolOpt) Whether to enable inspection using ironic-discoverd.
service_url = None (StrOpt) ironic-discoverd HTTP endpoint. If this is not set, the ironic-discoverd client default (http://127.0.0.1:5050) will be used.
status_check_period = 60 (IntOpt) Period (in seconds) between checks of the status of nodes under inspection.

Table 1.15. Description of disk partitioner configuration options

Configuration option = Default value Description
[disk_partitioner]
check_device_interval = 1 (IntOpt) After Ironic has finished creating the partition table, it checks the status of the attached iSCSI device at this interval (in seconds) before copying the image to the node.
check_device_max_retries = 20 (IntOpt) The maximum number of times to check that the device is not accessed by another process. If the device is still busy after that, the disk partitioning will be treated as having failed.

Table 1.16. Description of glance configuration options

Configuration option = Default value Description
[glance]
allowed_direct_url_schemes = (ListOpt) A list of URL schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file].
auth_strategy = keystone (StrOpt) Authentication strategy to use when connecting to glance. Only "keystone" and "noauth" are currently supported by ironic.
glance_api_insecure = False (BoolOpt) Allow to perform insecure SSL (https) requests to glance.
glance_api_servers = None (ListOpt) A list of the glance api servers available to ironic. Prefix with https:// for SSL-based glance API servers. Format is [hostname|IP]:port.
glance_host = $my_ip (StrOpt) Default glance hostname or IP address.
glance_num_retries = 0 (IntOpt) Number of retries when downloading an image from glance.
glance_port = 9292 (IntOpt) Default glance port.
glance_protocol = http (StrOpt) Default protocol to use when connecting to glance. Set to https for SSL.
swift_account = None (StrOpt) The account that Glance uses to communicate with Swift. The format is "AUTH_uuid". "uuid" is the UUID for the account configured in the glance-api.conf. Required for temporary URLs. For example: "AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30". Swift temporary URL format: "endpoint_url/api_version/account/container/object_id"
swift_api_version = v1 (StrOpt) The Swift API version to create a temporary URL for. Defaults to "v1". Swift temporary URL format: "endpoint_url/api_version/account/container/object_id"
swift_container = glance (StrOpt) The Swift container Glance is configured to store its images in. Defaults to "glance", which is the default in glance-api.conf. Swift temporary URL format: "endpoint_url/api_version/account/container/object_id"
swift_endpoint_url = None (StrOpt) The "endpoint" (scheme, hostname, optional port) for the Swift URL of the form "endpoint_url/api_version/account/container/object_id". Do not include trailing "/". For example, use "https://swift.example.com". Required for temporary URLs.
swift_store_multiple_containers_seed = 0 (IntOpt) This should match a config by the same name in the Glance configuration file. When set to 0, a single-tenant store will only use one container to store all images. When set to an integer value between 1 and 32, a single-tenant store will use multiple containers to store images, and this value will determine how many containers are created.
swift_temp_url_duration = 1200 (IntOpt) The length of time in seconds that the temporary URL will be valid for. Defaults to 20 minutes. If some deploys get a 401 response code when trying to download from the temporary URL, try raising this duration.
swift_temp_url_key = None (StrOpt) The secret token given to Swift to allow temporary URL downloads. Required for temporary URLs.
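
Putting the Swift temporary URL options together, a sketch configuration (the endpoint, account, and key are placeholders, reusing the account format from the table above) yields temporary URLs of the documented endpoint_url/api_version/account/container/object_id form:

[glance]
swift_endpoint_url = https://swift.example.com
swift_api_version = v1
swift_account = AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30
swift_container = glance
swift_temp_url_key = SECRET_KEY
swift_temp_url_duration = 1200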

Table 1.17. Description of iLO configuration options

Configuration option = Default value Description
[ilo]
clean_priority_clear_secure_boot_keys = 0 (IntOpt) Priority for clear_secure_boot_keys clean step. This step is not enabled by default. It can be enabled to clear all secure boot keys enrolled with iLO.
clean_priority_erase_devices = None (IntOpt) Priority for erase devices clean step. If unset, it defaults to 10. If set to 0, the step will be disabled and will not run during cleaning.
clean_priority_reset_bios_to_default = 10 (IntOpt) Priority for reset_bios_to_default clean step.
clean_priority_reset_ilo = 1 (IntOpt) Priority for reset_ilo clean step.
clean_priority_reset_ilo_credential = 30 (IntOpt) Priority for reset_ilo_credential clean step. This step requires the "ilo_change_password" parameter to be updated in the node's driver_info with the new password.
clean_priority_reset_secure_boot_keys_to_default = 20 (IntOpt) Priority for reset_secure_boot_keys clean step. This step will reset the secure boot keys to manufacturing defaults.
client_port = 443 (IntOpt) Port to be used for iLO operations.
client_timeout = 60 (IntOpt) Timeout (in seconds) for iLO operations.
power_retry = 6 (IntOpt) Number of times a power operation needs to be retried.
power_wait = 2 (IntOpt) Amount of time in seconds to wait between power operations.
swift_ilo_container = ironic_ilo_container (StrOpt) The Swift iLO container to store data.
swift_object_expiry_timeout = 900 (IntOpt) Amount of time in seconds for Swift objects to auto-expire.

Table 1.18. Description of IPMI configuration options

Configuration option = Default value Description
[ipmi]
min_command_interval = 5 (IntOpt) Minimum time, in seconds, between IPMI operations sent to a server. There is a risk with some hardware that setting this too low may cause the BMC to crash. Recommended setting is 5 seconds.
retry_timeout = 60 (IntOpt) Maximum time in seconds to retry IPMI operations. There is a tradeoff when setting this value. Setting this too low may cause older BMCs to crash and require a hard reset. However, setting too high can cause the sync power state periodic task to hang when there are slow or unresponsive BMCs.

Table 1.19. Description of iRMC configuration options

Configuration option = Default value Description
[irmc]
auth_method = basic (StrOpt) Authentication method to be used for iRMC operations; either "basic" or "digest".
client_timeout = 60 (IntOpt) Timeout (in seconds) for iRMC operations.
port = 443 (IntOpt) Port to be used for iRMC operations; either 80 or 443.
sensor_method = ipmitool (StrOpt) Sensor data retrieval method; either "ipmitool" or "scci".

Table 1.20. Description of keystone configuration options

Configuration option = Default value Description
[keystone]
region_name = None (StrOpt) The region used for getting endpoints of OpenStack services.

Table 1.21. Description of logging configuration options

Configuration option = Default value Description
[DEFAULT]
debug = False (BoolOpt) Print debugging output (set logging level to DEBUG instead of default WARNING level).
default_log_levels = amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN (ListOpt) List of logger=LEVEL pairs.
fatal_exception_format_errors = False (BoolOpt) Make exception message format errors fatal.
instance_format = "[instance: %(uuid)s] " (StrOpt) The format for an instance that is passed with the log message.
instance_uuid_format = "[instance: %(uuid)s] " (StrOpt) The format for an instance UUID that is passed with the log message.
log_config_append = None (StrOpt) The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation.
log_date_format = %Y-%m-%d %H:%M:%S (StrOpt) Format string for %%(asctime)s in log records. Default: %(default)s .
log_dir = None (StrOpt) (Optional) The base directory used for relative --log-file paths.
log_file = None (StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout.
log_format = None (StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available logging.LogRecord attributes. This option is deprecated; use logging_context_format_string and logging_default_format_string instead.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s (StrOpt) Format string to use for log messages with context.
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d (StrOpt) Data to append to log format when level is DEBUG.
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s (StrOpt) Format string to use for log messages without context.
logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s (StrOpt) Prefix each line of exception output with this format.
publish_errors = False (BoolOpt) Enables or disables publication of error events.
syslog_log_facility = LOG_USER (StrOpt) Syslog facility to receive log lines.
use_stderr = True (BoolOpt) Log output to standard error.
use_syslog = False (BoolOpt) Use syslog for logging. Existing syslog format is DEPRECATED during I, and will change in J to honor RFC5424.
use_syslog_rfc_format = False (BoolOpt) (Optional) Enables or disables syslog rfc5424 format for logging. If enabled, prefixes the MSG part of the syslog message with APP-NAME (RFC5424). The format without the APP-NAME is deprecated in I, and will be removed in J.
verbose = False (BoolOpt) Print more verbose output (set logging level to INFO instead of default WARNING level).

Table 1.22. Description of neutron configuration options

Configuration option = Default value Description
[neutron]
auth_strategy = keystone (StrOpt) Default authentication strategy to use when connecting to neutron. Can be either "keystone" or "noauth". Running neutron in noauth mode (related to but not affected by this setting) is insecure and should only be used for testing.
cleaning_network_uuid = None (StrOpt) UUID of the network on which to create Neutron ports when booting a ramdisk for cleaning/zapping using Neutron DHCP.
retries = 3 (IntOpt) Client retries in the case of a failed request.
url = http://$my_ip:9696 (StrOpt) URL for connecting to neutron.
url_timeout = 30 (IntOpt) Timeout value for connecting to neutron in seconds.
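
As a brief sketch, pointing the Bare metal service at a Neutron endpoint and a cleaning network (both values are placeholders):

[neutron]
url = http://controller.example.com:9696
cleaning_network_uuid = 1a2b3c4d-5e6f-0000-0000-000000000000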

Table 1.23. Description of policy configuration options

Configuration option = Default value Description
[oslo_policy]
policy_default_rule = default (StrOpt) Default rule. Enforced when a requested rule is not found.
policy_dirs = ['policy.d'] (MultiStrOpt) Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored.
policy_file = policy.json (StrOpt) The JSON file that defines policies.

Table 1.24. Description of PXE configuration options

Configuration option = Default value Description
[pxe]
default_ephemeral_format = ext4 (StrOpt) Default file system format for ephemeral partition, if one is created.
disk_devices = cciss/c0d0,sda,hda,vda (StrOpt) The disk devices to scan while doing the deploy.
http_root = /httpboot (StrOpt) Ironic compute node's HTTP root path.
http_url = None (StrOpt) Ironic compute node's HTTP server URL. Example: http://192.1.2.3:8080
image_cache_size = 20480 (IntOpt) Maximum size (in MiB) of cache for master images, including those in use.
image_cache_ttl = 10080 (IntOpt) Maximum TTL (in minutes) for old master images in cache.
images_path = /var/lib/ironic/images/ (StrOpt) Directory where images are stored on disk.
instance_master_path = /var/lib/ironic/master_images (StrOpt) Directory where master instance images are stored on disk.
ipxe_boot_script = $pybasedir/drivers/modules/boot.ipxe (StrOpt) The path to the main iPXE script file.
ipxe_enabled = False (BoolOpt) Enable iPXE boot.
pxe_append_params = nofb nomodeset vga=normal (StrOpt) Additional append parameters for baremetal PXE boot.
pxe_bootfile_name = pxelinux.0 (StrOpt) Bootfile DHCP parameter.
pxe_config_template = $pybasedir/drivers/modules/pxe_config.template (StrOpt) Template file for PXE configuration.
tftp_master_path = /tftpboot/master_images (StrOpt) Directory where master tftp images are stored on disk.
tftp_root = /tftpboot (StrOpt) Ironic compute node's tftp root path.
tftp_server = $my_ip (StrOpt) IP address of Ironic compute node's tftp server.
uefi_pxe_bootfile_name = elilo.efi (StrOpt) Bootfile DHCP parameter for UEFI boot mode.
uefi_pxe_config_template = $pybasedir/drivers/modules/elilo_efi_pxe_config.template (StrOpt) Template file for PXE configuration for UEFI boot loader.
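
For example, enabling iPXE requires an HTTP server alongside TFTP. A minimal sketch, reusing the example URL from the http_url description above:

[pxe]
ipxe_enabled = True
http_root = /httpboot
http_url = http://192.1.2.3:8080
tftp_root = /tftpboot
tftp_server = $my_ip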

Table 1.25. Description of Redis configuration options

Configuration option = Default value Description
[matchmaker_redis]
host = 127.0.0.1 (StrOpt) Host to locate redis.
password = None (StrOpt) Password for Redis server (optional).
port = 6379 (IntOpt) Use this port to connect to redis host.
[matchmaker_ring]
ringfile = /etc/oslo/matchmaker_ring.json (StrOpt) Matchmaker ring file (JSON).

Table 1.26. Description of RPC configuration options

Configuration option = Default value Description
[DEFAULT]
matchmaker_heartbeat_freq = 300 (IntOpt) Heartbeat frequency.
matchmaker_heartbeat_ttl = 600 (IntOpt) Heartbeat time-to-live.
rpc_backend = rabbit (StrOpt) The messaging driver to use, defaults to rabbit. Other drivers include qpid and zmq.
rpc_cast_timeout = 30 (IntOpt) Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
rpc_response_timeout = 60 (IntOpt) Seconds to wait for a response from a call.
rpc_thread_pool_size = 64 (IntOpt) Size of RPC thread pool.
[oslo_concurrency]
disable_process_locking = False (BoolOpt) Enables or disables inter-process locks.
lock_path = None (StrOpt) Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set.
[oslo_messaging_amqp]
allow_insecure_clients = False (BoolOpt) Accept clients using either SSL or plain TCP.
broadcast_prefix = broadcast (StrOpt) Address prefix used when broadcasting to all servers.
container_name = None (StrOpt) Name for the AMQP container.
group_request_prefix = unicast (StrOpt) Address prefix used when sending to any server in a group.
idle_timeout = 0 (IntOpt) Timeout for inactive connections (in seconds).
server_request_prefix = exclusive (StrOpt) Address prefix used when sending to a specific server.
ssl_ca_file = (StrOpt) CA certificate PEM file for verifying server certificates.
ssl_cert_file = (StrOpt) Identifying certificate PEM file to present to clients.
ssl_key_file = (StrOpt) Private key PEM file used to sign the cert_file certificate.
ssl_key_password = None (StrOpt) Password for decrypting ssl_key_file (if encrypted).
trace = False (BoolOpt) Debug: dump AMQP frames to stdout.

Table 1.27. Description of RabbitMQ configuration options

Configuration option = Default value Description
[oslo_messaging_rabbit]
amqp_auto_delete = False (BoolOpt) Auto-delete queues in AMQP.
amqp_durable_queues = False (BoolOpt) Use durable queues in AMQP.
fake_rabbit = False (BoolOpt) Deprecated; use rpc_backend=kombu+memory or rpc_backend=fake.
heartbeat_rate = 2 (IntOpt) How many times during the heartbeat_timeout_threshold interval to check the heartbeat.
heartbeat_timeout_threshold = 0 (IntOpt) Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables the heartbeat, >0 enables it. Enabling heartbeats requires kombu>=3.0.7 and amqp>=1.4.0). EXPERIMENTAL
kombu_reconnect_delay = 1.0 (FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
kombu_ssl_ca_certs = (StrOpt) SSL certification authority file (valid only if SSL enabled).
kombu_ssl_certfile = (StrOpt) SSL cert file (valid only if SSL enabled).
kombu_ssl_keyfile = (StrOpt) SSL key file (valid only if SSL enabled).
kombu_ssl_version = (StrOpt) SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.
rabbit_ha_queues = False (BoolOpt) Use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database.
rabbit_host = localhost (StrOpt) The RabbitMQ broker address where a single node is used.
rabbit_hosts = $rabbit_host:$rabbit_port (ListOpt) RabbitMQ HA cluster host:port pairs.
rabbit_login_method = AMQPLAIN (StrOpt) The RabbitMQ login method.
rabbit_max_retries = 0 (IntOpt) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count).
rabbit_password = guest (StrOpt) The RabbitMQ password.
rabbit_port = 5672 (IntOpt) The RabbitMQ broker port where a single node is used.
rabbit_retry_backoff = 2 (IntOpt) How long to backoff for between retries when connecting to RabbitMQ.
rabbit_retry_interval = 1 (IntOpt) How frequently to retry connecting with RabbitMQ.
rabbit_use_ssl = False (BoolOpt) Connect over SSL for RabbitMQ.
rabbit_userid = guest (StrOpt) The RabbitMQ userid.
rabbit_virtual_host = / (StrOpt) The RabbitMQ virtual host.
rpc_conn_pool_size = 30 (IntOpt) Size of RPC connection pool.
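
For example, to connect to a two-node RabbitMQ cluster with mirrored queues (the host names and credentials are placeholders):

[oslo_messaging_rabbit]
rabbit_hosts = rabbit1.example.com:5672,rabbit2.example.com:5672
rabbit_ha_queues = True
rabbit_userid = ironic
rabbit_password = RABBIT_PASSWORD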

Table 1.28. Description of Qpid configuration options

Configuration option = Default value Description
[oslo_messaging_qpid]
amqp_auto_delete = False (BoolOpt) Auto-delete queues in AMQP.
amqp_durable_queues = False (BoolOpt) Use durable queues in AMQP.
qpid_heartbeat = 60 (IntOpt) Seconds between connection keepalive heartbeats.
qpid_hostname = localhost (StrOpt) Qpid broker hostname.
qpid_hosts = $qpid_hostname:$qpid_port (ListOpt) Qpid HA cluster host:port pairs.
qpid_password = (StrOpt) Password for Qpid connection.
qpid_port = 5672 (IntOpt) Qpid broker port.
qpid_protocol = tcp (StrOpt) Transport to use, either 'tcp' or 'ssl'.
qpid_receiver_capacity = 1 (IntOpt) The number of prefetched messages held by receiver.
qpid_sasl_mechanisms = (StrOpt) Space separated list of SASL mechanisms to use for auth.
qpid_tcp_nodelay = True (BoolOpt) Whether to disable the Nagle algorithm.
qpid_topology_version = 1 (IntOpt) The qpid topology version to use. Version 1 is what was originally used by impl_qpid. Version 2 includes some backwards-incompatible changes that allow broker federation to work. Users should update to version 2 when they are able to take everything down, as it requires a clean break.
qpid_username = (StrOpt) Username for Qpid connection.
rpc_conn_pool_size = 30 (IntOpt) Size of RPC connection pool.

Table 1.29. Description of SeaMicro configuration options

Configuration option = Default value Description
[seamicro]
action_timeout = 10 (IntOpt) Seconds to wait for a power action to be completed.
max_retry = 3 (IntOpt) Maximum number of retries for SeaMicro operations.

Table 1.30. Description of SNMP configuration options

Configuration option = Default value Description
[snmp]
power_timeout = 10 (IntOpt) Seconds to wait for a power action to be completed.

Table 1.31. Description of SSH configuration options

Configuration option = Default value Description
[ssh]
libvirt_uri = qemu:///system (StrOpt) libvirt URI.

Table 1.32. Description of swift configuration options

Configuration option = Default value Description
[swift]
swift_max_retries = 2 (IntOpt) Maximum number of times to retry a Swift request before failing.

Table 1.33. Description of VirtualBox configuration options

Configuration option = Default value Description
[virtualbox]
port = 18083 (IntOpt) Port on which VirtualBox web service is listening.

Chapter 2. Block Storage

The OpenStack Block Storage service provides persistent storage for Compute instances, working with many different storage drivers that you can configure.

2.1. Volume drivers

To use different volume drivers for the cinder-volume service, use the parameters described in these sections.
To set a volume driver, use the volume_driver flag. The default is:
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver

2.1.1. Ceph RADOS Block Device (RBD)

If you use KVM or QEMU as your hypervisor, you can configure the Compute service to use Ceph RADOS block devices (RBD) for volumes.
Ceph is a massively scalable, open source, distributed storage system. It comprises an object store, a block store, and a POSIX-compliant distributed file system. The platform can auto-scale to the exabyte level and beyond. It runs on commodity hardware, is self-healing and self-managing, and has no single point of failure. Ceph is in the Linux kernel and is integrated with the OpenStack cloud operating system. Because it is open source, you can install and use this portable storage platform in public or private clouds.

RADOS

Ceph is based on RADOS: Reliable Autonomic Distributed Object Store. RADOS distributes objects across the storage cluster and replicates objects for fault tolerance. RADOS contains the following major components:
  • Object Storage Device (OSD) Daemon. The storage daemon for the RADOS service, which interacts with the OSD (physical or logical storage unit for your data).
    You must run this daemon on each server in your cluster. For each OSD, you can have an associated hard disk drive. For performance purposes, pool your hard disk drives with RAID arrays, logical volume management (LVM), or B-tree file system (Btrfs) pooling. By default, the following pools are created: data, metadata, and RBD.
  • Meta-Data Server (MDS). Stores metadata. MDSs build a POSIX file system on top of objects for Ceph clients. However, if you do not use the Ceph file system, you do not need a metadata server.
  • Monitor (MON). A lightweight daemon that handles all communications with external applications and clients. It also provides a consensus for distributed decision making in a Ceph/RADOS cluster. For instance, when you mount a Ceph share on a client, you point to the address of a MON server. It checks the state and the consistency of the data. In an ideal setup, you must run at least three ceph-mon daemons on separate servers.
Ceph developers recommend XFS for production deployments, Btrfs for testing, development, and any non-critical deployments. Btrfs has the correct feature set and roadmap to serve Ceph in the long-term, but XFS and ext4 provide the necessary stability for today’s deployments.
Note
If using Btrfs, ensure that you use the correct version (see Ceph Dependencies).
For more information about usable file systems, see ceph.com/ceph-storage/file-system/.

Ways to store, use, and expose data

To store and access your data, you can use the following storage systems:
  • RADOS. Use as an object, default storage mechanism.
  • RBD. Use as a block device. The Linux kernel RBD (RADOS block device) driver allows striping a Linux block device over multiple distributed object store data objects. It is compatible with the KVM RBD image.
  • CephFS. Use as a file, POSIX-compliant file system.
Ceph exposes RADOS; you can access it through the following interfaces:
  • RADOS Gateway. A RESTful interface compatible with OpenStack Object Storage and Amazon S3 (see RADOS_Gateway).
  • librados, and its related C/C++ bindings.
  • RBD and QEMU-RBD. Linux kernel and QEMU block devices that stripe data across multiple objects.

Driver options

The following table contains the configuration options supported by the Ceph RADOS Block Device driver.
Deprecation notice
The volume_tmp_dir option has been deprecated and replaced by image_conversion_dir.

Table 2.1. Description of Ceph storage configuration options

Configuration option = Default value Description
[DEFAULT]
rados_connect_timeout = -1 (IntOpt) Timeout value (in seconds) used when connecting to the Ceph cluster. If the value is < 0, no timeout is set and the default librados value is used.
rados_connection_interval = 5 (IntOpt) Interval value (in seconds) between connection retries to the Ceph cluster.
rados_connection_retries = 3 (IntOpt) Number of retries if the connection to the Ceph cluster fails.
rbd_ceph_conf = (StrOpt) Path to the Ceph configuration file.
rbd_cluster_name = ceph (StrOpt) The name of the Ceph cluster.
rbd_flatten_volume_from_snapshot = False (BoolOpt) Flatten volumes created from snapshots to remove the dependency from volume to snapshot.
rbd_max_clone_depth = 5 (IntOpt) Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning.
rbd_pool = rbd (StrOpt) The RADOS pool where RBD volumes are stored.
rbd_secret_uuid = None (StrOpt) The libvirt UUID of the secret for rbd_user volumes.
rbd_store_chunk_size = 4 (IntOpt) Volumes will be chunked into objects of this size (in megabytes).
rbd_user = None (StrOpt) The RADOS client name for accessing RBD volumes. Only set when using cephx authentication.
volume_tmp_dir = None (StrOpt) Directory where temporary image files are stored when the volume driver does not write them directly to the volume. Warning: this option is now deprecated, use image_conversion_dir instead.
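
A sketch back-end definition for this driver follows; the pool, cephx user, and secret UUID are placeholders, and the volume_driver path shown is assumed to be the in-tree RBD driver:

[rbd-backend]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd-backend
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

Add the section name to enabled_backends in the [DEFAULT] section to activate it.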

2.1.2. Dell EqualLogic volume driver

The Dell EqualLogic volume driver interacts with configured EqualLogic arrays and supports various operations.

Supported operations

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Clone a volume.
The OpenStack Block Storage service supports:
  • Multiple instances of Dell EqualLogic Groups or Dell EqualLogic Group Storage Pools, and multiple pools on a single array.
The Dell EqualLogic volume driver's ability to access the EqualLogic Group is dependent upon the generic block storage driver's SSH settings in the /etc/cinder/cinder.conf file (see Section 2.3, “Block Storage sample configuration files” for reference).

Table 2.2. Description of Dell EqualLogic volume driver configuration options

Configuration option = Default value Description
[DEFAULT]
eqlx_chap_login = admin (StrOpt) Existing CHAP account name. Note that this option is deprecated in favour of "chap_username" as specified in cinder/volume/driver.py and will be removed in the next release.
eqlx_chap_password = password (StrOpt) Password for the specified CHAP account name. Note that this option is deprecated in favour of "chap_password" as specified in cinder/volume/driver.py and will be removed in the next release.
eqlx_cli_max_retries = 5 (IntOpt) Maximum retry count for reconnection. Default is 5.
eqlx_cli_timeout = 30 (IntOpt) Timeout for the Group Manager CLI command execution. Default is 30. Note that this option is deprecated in favour of "ssh_conn_timeout" as specified in cinder/volume/drivers/san/san.py and will be removed in the M release.
eqlx_group_name = group-0 (StrOpt) Group name to use for creating volumes. Defaults to "group-0".
eqlx_pool = default (StrOpt) Pool in which volumes will be created. Defaults to "default".
eqlx_use_chap = False (BoolOpt) Use CHAP authentication for targets. Note that this option is deprecated in favour of "use_chap_auth" as specified in cinder/volume/driver.py and will be removed in the next release.
The following sample /etc/cinder/cinder.conf configuration lists the relevant settings for a typical Block Storage service using a single Dell EqualLogic Group:

Example 2.1. Default (single-instance) configuration

[DEFAULT]
#Required settings

volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
san_ip = IP_EQLX
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL

#Optional settings

san_thin_provision = true|false
eqlx_use_chap = true|false
eqlx_chap_login = EQLX_UNAME
eqlx_chap_password = EQLX_PW
eqlx_cli_max_retries = 5
san_ssh_port = 22
ssh_conn_timeout = 30
san_private_key = SAN_KEY_PATH
ssh_min_pool_conn = 1
ssh_max_pool_conn = 5
In this example, replace the following variables accordingly:
IP_EQLX
The IP address used to reach the Dell EqualLogic Group through SSH. This field has no default value.
SAN_UNAME
The user name used to log in to the Group manager via SSH at the san_ip. Default user name is grpadmin.
SAN_PW
The corresponding password of SAN_UNAME. Not used when san_private_key is set. Default password is password.
EQLX_GROUP
The group to be used for a pool where the Block Storage service will create volumes and snapshots. Default group is group-0.
EQLX_POOL
The pool where the Block Storage service will create volumes and snapshots. Default pool is default. This option cannot be used for multiple pools utilized by the Block Storage service on a single Dell EqualLogic Group.
EQLX_UNAME
The CHAP login account for each volume in a pool, if eqlx_use_chap is set to true. Default account name is chapadmin.
EQLX_PW
The corresponding password of EQLX_UNAME. The default password is randomly generated in hexadecimal, so you must set this password manually.
SAN_KEY_PATH (optional)
The filename of the private key used for SSH authentication. This provides password-less login to the EqualLogic Group. Not used when san_password is set. There is no default value.
In addition, enable thin provisioning for SAN volumes using the default san_thin_provision = true setting.

Example 2.2. Multi back-end Dell EqualLogic configuration

The following example shows the typical configuration for a Block Storage service that uses two Dell EqualLogic back ends:
enabled_backends = backend1,backend2
san_ssh_port = 22
ssh_conn_timeout = 30
san_thin_provision = true

[backend1]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = backend1
san_ip = IP_EQLX1
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL

[backend2]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = backend2
san_ip = IP_EQLX2
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL
In this example:
  • Thin provisioning for SAN volumes is enabled (san_thin_provision = true). This is recommended when setting up Dell EqualLogic back ends.
  • Each Dell EqualLogic back-end configuration ([backend1] and [backend2]) has the same required settings as a single back-end configuration, with the addition of volume_backend_name.
  • The san_ssh_port option is set to its default value, 22. This option sets the port used for SSH.
  • The ssh_conn_timeout option is also set to its default value, 30. This option sets the timeout in seconds for CLI commands over SSH.
  • IP_EQLX1 and IP_EQLX2 refer to the IP addresses used to reach the Dell EqualLogic Groups of backend1 and backend2 through SSH, respectively.
For information on configuring multiple back ends, see Configure a multiple-storage back end.

2.1.3. Dell Storage Center Fibre Channel and iSCSI drivers

The Dell Storage Center volume driver interacts with configured Storage Center arrays.
The Dell Storage Center driver manages Storage Center arrays through Enterprise Manager. Enterprise Manager connection settings and Storage Center options are defined in the cinder.conf file.
Prerequisite: Dell Enterprise Manager 2015 R1 or later must be used.

Supported operations

The Dell Storage Center volume driver provides the following Cinder volume operations:
  • Create, delete, attach (map), and detach (unmap) volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.

Extra spec options

Volume type extra specs can be used to select different Storage Profiles.
Storage Profiles control how Storage Center manages volume data. For a given volume, the selected Storage Profile dictates which disk tier accepts initial writes, as well as how data progression moves data between tiers to balance performance and cost. Predefined Storage Profiles are the most effective way to manage data in Storage Center.
By default, if no Storage Profile is specified in the volume extra specs, the default Storage Profile for the user account configured for the Block Storage driver is used. To use a Storage Profile other than the default, set the extra spec key storagetype:storageprofile to the name of the Storage Profile on the Storage Center.
For ease of use from the command line, spaces in Storage Profile names are ignored. As an example, here is how to define two volume types using the High Priority and Low Priority Storage Profiles:
$ cinder type-create "GoldVolumeType"
$ cinder type-key "GoldVolumeType" set storagetype:storageprofile=highpriority
$ cinder type-create "BronzeVolumeType"
$ cinder type-key "BronzeVolumeType" set storagetype:storageprofile=lowpriority

iSCSI configuration

Use the following instructions to update the configuration file for iSCSI:

Example 2.3. Sample iSCSI Configuration

default_volume_type = delliscsi
enabled_backends = delliscsi

[delliscsi]
# Name to give this storage backend
volume_backend_name = delliscsi
# The iSCSI driver to load
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
# IP address of Enterprise Manager
san_ip = 172.23.8.101
# Enterprise Manager user name
san_login = Admin
# Enterprise Manager password
san_password = secret
# The Storage Center iSCSI IP address
iscsi_ip_address = 192.168.0.20
# The Storage Center serial number to use
dell_sc_ssn = 64702

# ==Optional settings==
# The Enterprise Manager API port
dell_sc_api_port = 3033
# Server folder to place new server definitions
dell_sc_server_folder = devstacksrv
# Volume folder to place created volumes
dell_sc_volume_folder = devstackvol/Cinder
# The iSCSI IP port
iscsi_port = 3260

Fibre Channel configuration

Use the following instructions to update the configuration file for fibre channel:

Example 2.4. Sample FC configuration

default_volume_type = dellfc
enabled_backends = dellfc

[dellfc]
# Name to give this storage backend
volume_backend_name = dellfc
# The FC driver to load
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_fc.DellStorageCenterFCDriver
# IP address of Enterprise Manager
san_ip = 172.23.8.101
# Enterprise Manager user name
san_login = Admin
# Enterprise Manager password
san_password = secret
# The Storage Center serial number to use
dell_sc_ssn = 64702

# Optional settings

# The Enterprise Manager API port
dell_sc_api_port = 3033
# Server folder to place new server definitions
dell_sc_server_folder = devstacksrv
# Volume folder to place created volumes
dell_sc_volume_folder = devstackvol/Cinder

Driver options

The following table contains the configuration options specific to the Dell Storage Center volume driver.

Table 2.3. Description of Dell Storage Center volume driver configuration options

Configuration option = Default value Description
[DEFAULT]
dell_sc_api_port = 3033 (IntOpt) Dell API port
dell_sc_server_folder = openstack (StrOpt) Name of the server folder to use on the Storage Center
dell_sc_ssn = 64702 (IntOpt) Storage Center System Serial Number
dell_sc_verify_cert = False (BoolOpt) Enable HTTPS SC certificate verification.
dell_sc_volume_folder = openstack (StrOpt) Name of the volume folder to use on the Storage Center

2.1.4. EMC VMAX iSCSI and FC drivers

The EMC VMAX drivers, EMCVMAXISCSIDriver and EMCVMAXFCDriver, support the use of EMC VMAX storage arrays under OpenStack Block Storage. They both provide equivalent functions and differ only in support for their respective host attachment methods.
The drivers perform volume operations by communicating with the backend VMAX storage. They use a CIM client in Python called PyWBEM to perform CIM operations over HTTP.
The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. It is a CIM server that enables CIM clients to perform CIM operations over HTTP by using SMI-S in the back-end for VMAX storage operations.
The EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. It supports the VMAX storage system.

2.1.4.1. System requirements

EMC SMI-S Provider V4.6.2.8 and higher is required. You can download SMI-S from EMC's support web site (login required). See the EMC SMI-S Provider release notes for installation instructions.
EMC storage VMAX Family is supported.

2.1.4.2. Supported operations

VMAX drivers support these operations:
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Retype a volume.
  • Create a volume from a snapshot.
VMAX drivers also support the following features:
  • FAST automated storage tiering policy.
  • Dynamic masking view creation.
  • Striped volume creation.

2.1.4.3. Set up the VMAX drivers

Procedure 2.1. To set up the EMC VMAX drivers

  1. Install the python-pywbem package for your distribution. To install the python-pywbem package for Red Hat Enterprise Linux, CentOS, or Fedora:
    # yum install pywbem
  2. Download SMI-S from PowerLink and install it. Add your VMAX arrays to SMI-S.
    For information, see Section 2.1.4.3.1, “Set up SMI-S” and the SMI-S release notes.
  3. Configure connectivity. For FC driver, see Section 2.1.4.3.4, “FC Zoning with VMAX”. For iSCSI driver, see Section 2.1.4.3.5, “iSCSI with VMAX”.
2.1.4.3.1. Set up SMI-S
You can install SMI-S on a non-OpenStack host. Supported platforms include different flavors of Windows, Red Hat, and SUSE Linux. SMI-S can be installed on a physical server or a VM hosted by an ESX server. Note that the supported hypervisor for a VM running SMI-S is ESX only. See the EMC SMI-S Provider release notes for more information on supported platforms and installation instructions.
Note
You must discover storage arrays on the SMI-S server before you can use the VMAX drivers. Follow instructions in the SMI-S release notes.
SMI-S is usually installed at /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure SMI-S, go to that directory and type TestSmiProvider.exe.
Use addsys in TestSmiProvider.exe to add an array. Use dv and examine the output after the array is added. Make sure that the arrays are recognized by the SMI-S server before using the EMC VMAX drivers.
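A minimal sketch of this verification flow on Linux, assuming the default installation path (addsys and dv are entered at the tool's interactive prompt):
# cd /opt/emc/ECIM/ECOM/bin
# ./TestSmiProvider
Confirm that the added array appears in the dv output before configuring the Block Storage drivers.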
2.1.4.3.2. cinder.conf configuration file
Make the following changes in /etc/cinder/cinder.conf.
Add the following entries, where 10.10.61.45 is the IP address of the VMAX iSCSI target:
enabled_backends = CONF_GROUP_ISCSI, CONF_GROUP_FC
[CONF_GROUP_ISCSI]
iscsi_ip_address = 10.10.61.45
volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml
volume_backend_name=ISCSI_backend
[CONF_GROUP_FC]
volume_driver = cinder.volume.drivers.emc.emc_vmax_fc.EMCVMAXFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml
volume_backend_name=FC_backend
In this example, two backend configuration groups are enabled: CONF_GROUP_ISCSI and CONF_GROUP_FC. Each configuration group has a section describing unique parameters for connections, drivers, the volume_backend_name, and the name of the EMC-specific configuration file containing additional settings. Note that the file name is in the format /etc/cinder/cinder_emc_config_[confGroup].xml.
Once the cinder.conf and EMC-specific configuration files have been created, cinder commands need to be issued in order to create and associate OpenStack volume types with the declared volume_backend_names:
$ cinder type-create VMAX_ISCSI
$ cinder type-key VMAX_ISCSI set volume_backend_name=ISCSI_backend
$ cinder type-create VMAX_FC
$ cinder type-key VMAX_FC set volume_backend_name=FC_backend
By issuing these commands, the Block Storage volume type VMAX_ISCSI is associated with the ISCSI_backend, and the type VMAX_FC is associated with the FC_backend.
Restart the cinder-volume service.
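The restart can be performed with the same service-control command used elsewhere in this guide:
$ openstack-service restart cinder-volume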
2.1.4.3.3. cinder_emc_config_CONF_GROUP_ISCSI.xml configuration file
Create the /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml file. You do not need to restart the service for this change.
Add the following lines to the XML file:
<?xml version="1.0" encoding="UTF-8" ?>
<EMC>
    <EcomServerIp>1.1.1.1</EcomServerIp>
    <EcomServerPort>00</EcomServerPort>
    <EcomUserName>user1</EcomUserName>
    <EcomPassword>password1</EcomPassword>
    <PortGroups>
      <PortGroup>OS-PORTGROUP1-PG</PortGroup>
      <PortGroup>OS-PORTGROUP2-PG</PortGroup>
    </PortGroups>
   <Array>111111111111</Array>
   <Pool>FC_GOLD1</Pool>
   <FastPolicy>GOLD1</FastPolicy>
</EMC>
Where:
  • EcomServerIp and EcomServerPort are the IP address and port number of the ECOM server which is packaged with SMI-S.
  • EcomUserName and EcomPassword are credentials for the ECOM server.
  • PortGroups supplies the names of VMAX port groups that have been pre-configured to expose volumes managed by this backend. Each supplied port group should have a sufficient number and distribution of ports (across directors and switches) to ensure adequate bandwidth and failure protection for the volume connections. PortGroups can contain one or more port groups of either iSCSI or FC ports. When a dynamic masking view is created by the VMAX driver, the port group is chosen randomly from the PortGroup list, to evenly distribute load across the set of groups provided. Make sure that the PortGroups set contains either all FC or all iSCSI port groups (for a given backend), as appropriate for the configured driver (iSCSI or FC).
  • The Array tag holds the unique VMAX array serial number.
  • The Pool tag holds the unique pool name within a given array. For backends not using FAST automated tiering, the pool is a single pool that has been created by the administrator. For backends exposing FAST policy automated tiering, the pool is the bind pool to be used with the FAST policy.
  • The FastPolicy tag conveys the name of the FAST Policy to be used. By including this tag, volumes managed by this backend are treated as under FAST control. Omitting the FastPolicy tag means FAST is not enabled on the provided storage pool.
2.1.4.3.4. FC Zoning with VMAX
Zone Manager is recommended when using the VMAX FC driver, especially for larger configurations where pre-zoning would be too complex and open-zoning would raise security concerns.
2.1.4.3.5. iSCSI with VMAX
  • Make sure the iscsi-initiator-utils package is installed on the host (use apt-get, zypper, or yum, depending on Linux flavor).
  • Verify host is able to ping VMAX iSCSI target ports.

2.1.4.4. VMAX masking view and group naming info

Masking view names
Masking views are dynamically created by the VMAX FC and iSCSI drivers using the following naming conventions:
OS-[shortHostName][poolName]-I-MV (for Masking Views using iSCSI)
OS-[shortHostName][poolName]-F-MV (for Masking Views using FC)
Initiator group names
For each host that is attached to VMAX volumes using the drivers, an initiator group is created or re-used (per attachment type). All initiators of the appropriate type known for that host are included in the group. At each new attach volume operation, the VMAX driver retrieves the initiators (either WWNNs or IQNs) from OpenStack and adds or updates the contents of the Initiator Group as required. Names are of the following format:
OS-[shortHostName]-I-IG (for iSCSI initiators)
OS-[shortHostName]-F-IG (for Fibre Channel initiators)
Note
Hosts attaching to VMAX storage managed by the OpenStack environment cannot also be attached to storage on the same VMAX not being managed by OpenStack. This is due to limitations on VMAX Initiator Group membership.
FA port groups
VMAX array FA ports to be used in a new masking view are chosen from the list provided in the EMC configuration file.
Storage group names
As volumes are attached to a host, they are either added to an existing storage group (if it exists) or a new storage group is created and the volume is then added. Storage groups contain volumes created from a pool (either single-pool or FAST-controlled), attached to a single host, over a single connection type (iSCSI or FC). Names are formed:
OS-[shortHostName][poolName]-I-SG (attached over iSCSI)
OS-[shortHostName][poolName]-F-SG (attached over Fibre Channel)

2.1.4.5. Concatenated or striped volumes

In order to support later expansion of created volumes, the VMAX Block Storage drivers create concatenated volumes as the default layout. If later expansion is not required, users can opt to create striped volumes in order to optimize I/O performance.
Below is an example of how to create striped volumes. First, create a volume type. Then define the extra spec storagetype:stripecount for the volume type, representing the number of meta members in the striped volume. In the example below, each volume created under the GoldStriped volume type will be striped and made up of 4 meta members.
$ cinder type-create GoldStriped
$ cinder type-key GoldStriped set volume_backend_name=GOLD_BACKEND
$ cinder type-key GoldStriped set storagetype:stripecount=4

2.1.5. EMC VNX driver

The EMC VNX driver consists of EMCCLIISCSIDriver and EMCCLIFCDriver, and supports both the iSCSI and FC protocols. EMCCLIISCSIDriver (VNX iSCSI driver) and EMCCLIFCDriver (VNX FC driver) are based on the ISCSIDriver and FCDriver defined in Block Storage, respectively.

2.1.5.1. Overview

The VNX iSCSI driver and VNX FC driver perform volume operations by executing Navisphere CLI (NaviSecCLI), a command-line interface used for management, diagnostics, and reporting functions on VNX.
2.1.5.1.1. System requirements
  • VNX Operational Environment for Block version 5.32 or higher.
  • VNX Snapshot and Thin Provisioning license should be activated for VNX.
  • Navisphere CLI v7.32 or higher is installed along with the driver.
2.1.5.1.2. Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume.
  • Retype a volume.
  • Get volume statistics.
  • Create and delete consistency groups.
  • Create, list, and delete consistency group snapshots.
  • Modify consistency groups.
  • Efficient non-disruptive volume backup.

2.1.5.2. Preparation

This section contains instructions to prepare the Block Storage nodes to use the EMC VNX driver. You install the Navisphere CLI, install the driver, ensure you have correct zoning configurations, and register the driver.
2.1.5.2.1. Install Navisphere CLI
Navisphere CLI needs to be installed on all Block Storage nodes within an OpenStack deployment. You need to download different versions for different platforms.
2.1.5.2.2. Check array software
Make sure you have the following software installed for certain features.

Table 2.4. Required software

Feature Software Required
All ThinProvisioning
All VNXSnapshots
FAST cache support FASTCache
Create volume with type compressed Compression
Create volume with type deduplicated Deduplication
2.1.5.2.3. Install EMC VNX driver
Both EMCCLIISCSIDriver and EMCCLIFCDriver are included in the Block Storage installer package:
  • emc_vnx_cli.py
  • emc_cli_fc.py (for EMCCLIFCDriver)
  • emc_cli_iscsi.py (for EMCCLIISCSIDriver)
2.1.5.2.4. Network configuration
For the FC driver, ensure that FC zoning is properly configured between the hosts and the VNX. See Section 2.1.5.8.2, “Register FC port with VNX” for reference.
For the iSCSI driver, make sure your VNX iSCSI ports are accessible by your hosts. See Section 2.1.5.8.3, “Register iSCSI port with VNX” for reference.
You can set initiator_auto_registration=True to avoid registering the ports manually. For details of this option, see Section 2.1.5.3, “Backend configuration”.
If you want to set up multipath, see Section 2.1.5.6.1, “Multipath setup”.

2.1.5.3. Backend configuration

Make the following changes in the /etc/cinder/cinder.conf file:
Note
Changes to your configuration do not take effect until you restart the cinder service.
2.1.5.3.1. Minimum configuration
Here is a sample minimum back-end configuration. See the following sections for details about each option. Replace EMCCLIFCDriver with EMCCLIISCSIDriver if you are using the iSCSI driver.
[DEFAULT]
enabled_backends = vnx_array1

[vnx_array1]
san_ip = 10.10.72.41
san_login = sysadmin
san_password = sysadmin
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
initiator_auto_registration=True
2.1.5.3.2. Multi-backend configuration
Here is a sample multi-back-end configuration. See the following sections for details about each option. Replace EMCCLIFCDriver with EMCCLIISCSIDriver if you are using the iSCSI driver.
[DEFAULT]
enabled_backends=backendA, backendB

[backendA]
storage_vnx_pool_names = Pool_01_SAS, Pool_02_FLASH
san_ip = 10.10.72.41
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
initiator_auto_registration=True

[backendB]
storage_vnx_pool_names = Pool_02_SAS
san_ip = 10.10.26.101
san_login = username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
initiator_auto_registration=True
For more details on multiple back ends, see the OpenStack Cloud Administration Guide.
2.1.5.3.3. Required configurations
2.1.5.3.3.1. IP of the VNX Storage Processors
Specify the IP addresses of SP A and SP B to connect to:
san_ip = <IP of VNX Storage Processor A>
san_secondary_ip = <IP of VNX Storage Processor B>
2.1.5.3.3.2. VNX login credentials
There are two ways to specify the credentials:
  • Use a plain-text user name and password.
Supply the user name and password as below:
san_login = <VNX account with administrator role>
san_password = <password for VNX account>
storage_vnx_authentication_type = global
Valid values for storage_vnx_authentication_type are: global (default), local, and ldap.
  • Use a security file.
This approach avoids a plain-text password in your cinder configuration file. Supply a security file as below:
storage_vnx_security_file_dir=<path to security file>
Check the Unisphere CLI user guide or Section 2.1.5.8.1, “Authenticate by security file” for how to create a security file.
2.1.5.3.3.3. Path to your Unisphere CLI
Specify the absolute path to your naviseccli.
naviseccli_path = /opt/Navisphere/bin/naviseccli
2.1.5.3.3.4. Driver name
  • For the FC Driver, add the following option:
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
  • For iSCSI Driver, add following option:
volume_driver=cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
2.1.5.3.4. Optional configurations
2.1.5.3.4.1. VNX pool names
Specify a comma-separated list of pools to be managed. The pools must already exist in VNX.
storage_vnx_pool_names = pool 1, pool 2
If this value is not specified, all pools of the array will be used.
2.1.5.3.4.2. Initiator auto registration
When initiator_auto_registration=True and the option io_port_list is not specified in cinder.conf, the driver automatically registers initiators to all working target ports of the VNX array during volume attaching (initiators that have already been registered are skipped).
If the user wants to register the initiators with only some specific ports, this functionality should be disabled.
When a comma-separated list is given to io_port_list, the driver only registers the initiator to the ports specified in the list, and only returns target ports that belong to io_port_list instead of all target ports.
  • Example for FC ports:
    io_port_list=a-1,B-3
    a and B are the Storage Processors; 1 and 3 are the Port IDs.
  • Example for iSCSI ports:
    io_port_list=a-1-0,B-3-0
    a and B are the Storage Processors; the first numbers, 1 and 3, are the Port IDs, and the second number, 0, is the Virtual Port ID.
Note
  • Ports that are already registered are not deregistered; they are simply bypassed, whether or not they appear in io_port_list.
  • The driver raises an exception during startup if a port specified in io_port_list does not exist on the VNX.
2.1.5.3.4.3. Force delete volumes in storage group
Some available volumes may remain in a storage group on the VNX array due to OpenStack timeout issues, but the VNX array does not allow the user to delete volumes that are in a storage group. The option force_delete_lun_in_storagegroup is introduced to allow the user to delete such volumes.
When force_delete_lun_in_storagegroup=True is set in the back-end section and the user tries to delete a volume that remains in a storage group on the VNX array, the driver moves the volume out of the storage group and then deletes it.
The default value of force_delete_lun_in_storagegroup is False.
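A minimal sketch of enabling the option, reusing the back-end section name from the sample configuration above:
[vnx_array1]
# Allow deleting available volumes that are stuck in a storage group
force_delete_lun_in_storagegroup = True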
2.1.5.3.4.4. Over subscription in thin provisioning
Over subscription allows the sum of all volumes' capacity (provisioned capacity) to be larger than the pool's total capacity.
max_over_subscription_ratio in the back-end section is the ratio of provisioned capacity over total capacity.
If the value of max_over_subscription_ratio is greater than 1.0, the provisioned capacity can exceed the total capacity. The default value of max_over_subscription_ratio is 20.0, which means the provisioned capacity can be 20 times the total physical capacity.
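For example, to cap provisioned capacity at 10 times the physical capacity (the section name is reused from the sample configuration above):
[vnx_array1]
# Provisioned capacity may be at most 10x the pool's total capacity
max_over_subscription_ratio = 10.0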
2.1.5.3.4.5. Storage group automatic deletion
For volume attaching, the driver maintains a storage group on the VNX for each compute node that hosts VM instances consuming VNX Block Storage (using the compute node's host name as the storage group's name). All volumes attached to VM instances on a compute node are put into that storage group. If destroy_empty_storage_group=True, the driver removes the empty storage group after its last volume is detached. For data safety, setting destroy_empty_storage_group=True is not recommended unless the VNX is exclusively managed by one Block Storage node, because a consistent lock_path is required to synchronize this operation.
2.1.5.3.4.6. Initiator auto deregistration
Enabling storage group automatic deletion is the precondition of this function. If initiator_auto_deregistration=True is set, the driver will deregister all the initiators of the host after its storage group is deleted.
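A combined sketch of these two options, reusing the sample back-end section name:
[vnx_array1]
# Remove a host's storage group once its last volume is detached
destroy_empty_storage_group = True
# Then also deregister that host's initiators
initiator_auto_deregistration = True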
2.1.5.3.4.7. FC SAN auto zoning
The EMC VNX FC driver supports FC SAN auto zoning when ZoneManager is configured. Set zoning_mode to fabric in the [DEFAULT] section to enable this feature. For ZoneManager configuration, refer to the official Block Storage guide.
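For example:
[DEFAULT]
zoning_mode = fabric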
2.1.5.3.4.8. Volume number threshold
In VNX, there is a limitation on the number of pool volumes that can be created in the system. When the limitation is reached, no more pool volumes can be created even if there is remaining capacity in the storage pool. In other words, if the scheduler dispatches a volume creation request to a back end that has free capacity but reaches the volume limitation, the creation fails.
The default value of check_max_pool_luns_threshold is False. When check_max_pool_luns_threshold=True, a pool-based back end checks the limit and reports 0 free capacity to the scheduler if the limit is reached, so that the scheduler can skip pool-based back ends that have run out of pool volume numbers.
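A minimal sketch of enabling the check in a back-end section:
[vnx_array1]
# Report 0 free capacity to the scheduler when the pool LUN limit is reached
check_max_pool_luns_threshold = True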
2.1.5.3.4.9. iSCSI initiators
iscsi_initiators is a dictionary of IP addresses of the iSCSI initiator ports on the OpenStack Nova/Cinder nodes that connect to the VNX via iSCSI. If this option is configured, the driver leverages this information to find an accessible iSCSI target portal for the initiator when attaching a volume. Otherwise, the iSCSI target portal is chosen in a relatively random way.
This option is only valid for the iSCSI driver.
Here is an example: VNX connects host1 with 10.0.0.1 and 10.0.0.2, and connects host2 with 10.0.0.3.
The key name (host1 in the example) should be the output of the hostname command.
iscsi_initiators = {"host1":["10.0.0.1", "10.0.0.2"],"host2":["10.0.0.3"]}
2.1.5.3.4.10. Default timeout
Specify the timeout (in minutes) for operations like LUN migration and LUN creation. For example, LUN migration is a typical long-running operation, whose duration depends on the LUN size and the load of the array. An upper bound for the specific deployment can be set to avoid unnecessarily long waits.
The default value for this option is infinite.
Example:
default_timeout = 10
2.1.5.3.4.11. Max LUNs per storage group
max_luns_per_storage_group specifies the maximum number of LUNs in a storage group. The default value is 255, which is also the maximum value supported by VNX.
2.1.5.3.4.12. Ignore pool full threshold
If ignore_pool_full_threshold is set to True, the driver forces LUN creation even if the full threshold of the pool is reached. The default value is False.
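A combined sketch of the two options above; the values are illustrative:
[vnx_array1]
# Lower the per-storage-group LUN limit below the VNX maximum of 255
max_luns_per_storage_group = 200
# Force LUN creation even when the pool's full threshold has been reached
ignore_pool_full_threshold = True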

2.1.5.4. Extra spec options

Extra specs are used in volume types created in cinder to express the desired properties of volumes.
The Block Storage scheduler uses extra specs to find a suitable back end for the volume, and the Block Storage driver creates the volume based on the properties specified by the extra specs.
Use the following command to create a volume type:
$ cinder type-create "demoVolumeType"
Use the following command to update the extra spec of a volume type:
$ cinder type-key "demoVolumeType" set provisioning:type=thin
Volume types can also be configured in OpenStack Horizon.
The VNX driver defines several extra specs, which are introduced below:
2.1.5.4.1. Provisioning type
  • Key: provisioning:type
  • Possible Values:
    • thick
    Volume is fully provisioned.

    Example 2.5. creating a thick volume type:

    $ cinder type-create "ThickVolumeType"
    $ cinder type-key "ThickVolumeType" set provisioning:type=thick thick_provisioning_support='<is> True'
    • thin
    Volume is virtually provisioned

    Example 2.6. creating a thin volume type:

    $ cinder type-create "ThinVolumeType"
    $ cinder type-key "ThinVolumeType" set provisioning:type=thin thin_provisioning_support='<is> True'
    • deduplicated
    Volume is thin and deduplication is enabled. The administrator must configure the system-level deduplication settings on the VNX. To create a deduplicated volume, the VNX Deduplication license must be activated on the VNX, and deduplication_support=True must be specified so that the Block Storage scheduler finds a proper volume back end.

    Example 2.7. creating a deduplicated volume type:

    $ cinder type-create "DeduplicatedVolumeType"
    $ cinder type-key "DeduplicatedVolumeType" set provisioning:type=deduplicated deduplication_support='<is> True'
    • compressed
    Volume is thin and compression is enabled. The administrator must configure the system-level compression settings on the VNX. To create a compressed volume, the VNX Compression license must be activated on the VNX, and compression_support=True must be specified so that the Block Storage scheduler finds a proper volume back end. VNX does not support creating snapshots on a compressed volume.

    Example 2.8. creating a compressed volume type:

    $ cinder type-create "CompressedVolumeType"
    $ cinder type-key "CompressedVolumeType" set provisioning:type=compressed compression_support='<is> True'
  • Default: thick
Note
provisioning:type replaces the old spec key storagetype:provisioning, which will be obsoleted in the next release. If both provisioning:type and storagetype:provisioning are set in the volume type, the value of provisioning:type is used.
2.1.5.4.2. Storage tiering support
  • Key: storagetype:tiering
  • Possible Values:
    • StartHighThenAuto
    • Auto
    • HighestAvailable
    • LowestAvailable
    • NoMovement
  • Default: StartHighThenAuto
VNX supports fully automated storage tiering, which requires the FAST license to be activated on the VNX. The OpenStack administrator can use the extra spec key storagetype:tiering to set the tiering policy of a volume, and use the key fast_support='<is> True' to let the Block Storage scheduler find a volume back end which manages a VNX with the FAST license activated. The five supported values for the extra spec key storagetype:tiering are listed above.

Example 2.9. creating a volume types with tiering policy:

$ cinder type-create "ThinVolumeOnLowestAvaibleTier"
$ cinder type-key "CompressedVolumeOnLowestAvaibleTier" set provisioning:type=thin storagetype:tiering=Auto fast_support='<is> True'
Note
Tiering policy cannot be applied to a deduplicated volume. The tiering policy of a deduplicated LUN aligns with the settings of the pool.
2.1.5.4.3. FAST cache support
  • Key: fast_cache_enabled
  • Possible Values:
    • True
    • False
  • Default: False
VNX has a FAST Cache feature which requires the FAST Cache license to be activated on the VNX. When True is specified, the volume is created on a back end with FAST Cache enabled.
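A hedged sketch following the extra-spec pattern of the surrounding examples; the volume type name is illustrative:
$ cinder type-create "FASTCacheVolumeType"
$ cinder type-key "FASTCacheVolumeType" set fast_cache_enabled='<is> True'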
2.1.5.4.4. Snap-copy
  • Key: copytype:snap
  • Possible Values:
    • True
    • False
  • Default: False
The VNX driver supports snap-copy, which dramatically accelerates the process of creating a copied volume.
By default, the driver performs a full data copy when creating a volume from a snapshot or cloning a volume, which is time-consuming, especially for large volumes. When snap-copy is used, the driver simply creates a snapshot and mounts it as a volume for these two kinds of operations, which is instant even for large volumes.
To enable this functionality, the source volume should have copytype:snap=True in the extra specs of its volume type. The new volume cloned from the source, or copied from the snapshot of the source, is then in fact a snap-copy instead of a full copy. If a full copy is needed, retype or migration can be used to convert the snap-copy volume to a full-copy volume, which may be time-consuming.
$ cinder type-create "SnapCopy"
$ cinder type-key "SnapCopy" set copytype:snap=True
The user can determine whether a volume is a snap-copy volume by showing its metadata. If the 'lun_type' in the metadata is 'smp', the volume is a snap-copy volume. Otherwise, it is a full-copy volume.
$ cinder metadata-show <volume>
Constraints:
  • copytype:snap=True is not allowed in the volume type of a consistency group.
  • Clone and snapshot creation are not allowed on a volume created through snap-copy before it is converted to a full copy.
  • The number of snap-copy volumes created from a single source volume is limited to 255 at any point in time.
  • A source volume that has snap-copy volumes cannot be deleted.
2.1.5.4.5. Pool name
  • Key: pool_name
  • Possible Values: name of the storage pool managed by cinder
  • Default: None
If the user wants to create a volume on a certain storage pool in a back end that manages multiple pools, a volume type with an extra spec specifying the storage pool should be created first; the user can then use this volume type to create volumes.

Example 2.10. Creating the volume type:

$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set pool_name=Pool_02_SASFLASH volume_backend_name=vnx_41
2.1.5.4.6. Obsoleted extra specs in Liberty
Avoid using the following extra spec keys.
  • storagetype:provisioning
  • storagetype:pool

2.1.5.5. Advanced features

2.1.5.5.1. Read-only volumes
OpenStack supports read-only volumes. The following command can be used to set a volume as read-only.
$ cinder readonly-mode-update <volume> True
After a volume is marked as read-only, the driver forwards this information when a hypervisor attaches the volume, and the hypervisor ensures that the volume is read-only.
2.1.5.5.2. Efficient non-disruptive volume backup
The default implementation in Cinder for non-disruptive volume backup is not efficient, since a cloned volume is created during the backup.
The efficient approach is to create a snapshot of the volume and connect this snapshot (a mount point in VNX) to the Cinder host for volume backup. This eliminates the migration time involved in a volume clone.
Constraints:
  • Backup creation for a snap-copy volume is not allowed if the volume status is in-use, since a snapshot cannot be taken of such a volume.
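As a hedged illustration, the efficient path is exercised through the normal backup command; the --force flag (assumed available in this client version) is needed for an in-use volume:
$ cinder backup-create --force <volume>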

2.1.5.6. Best practice

2.1.5.6.1. Multipath setup
Enabling multipath volume access is recommended for robust data access. The major configuration includes:
  • Install multipath-tools, sysfsutils and sg3-utils on nodes hosting Nova-Compute and Cinder-Volume services (Check the operating system manual for the system distribution for specific installation steps. For Red Hat based distributions, they should be device-mapper-multipath, sysfsutils and sg3_utils).
  • Specify use_multipath_for_image_xfer=true in cinder.conf for each FC/iSCSI back end.
  • Specify iscsi_use_multipath=True in the [libvirt] section of nova.conf. This option is valid for both the iSCSI and FC drivers. Both settings are shown in the sketch below.
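A hedged sketch of the two settings, assuming a back end named vnx_array1:
# In /etc/cinder/cinder.conf, for each FC/iSCSI back end:
[vnx_array1]
use_multipath_for_image_xfer = True

# In /etc/nova/nova.conf:
[libvirt]
iscsi_use_multipath = True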
For multipath-tools, here is an EMC-recommended sample of /etc/multipath.conf.
user_friendly_names is not specified in the configuration, so it takes the default value no. Setting it to yes is NOT recommended, because it may cause operations such as VM live migration to fail.
blacklist {
    # Skip the files under /dev that are definitely not FC/iSCSI devices
    # Different system may need different customization
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"

    # Skip LUNZ device from VNX
    device {
        vendor "DGC"
        product "LUNZ"
        }
}

defaults {
    user_friendly_names no
    flush_on_last_del yes
}

devices {
    # Device attributed for EMC CLARiiON and VNX series ALUA
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        path_checker emc_clariion
        features "1 queue_if_no_path"
        hardware_handler "1 alua"
        prio alua
        failback immediate
    }
}
Note
When multipath is used in OpenStack, multipath faulty devices may appear on Nova-Compute nodes due to various issues (Bug 1336683 is a typical example).
A solution to completely avoid faulty devices has not been found yet. The script faulty_device_cleanup.py mitigates this issue when VNX iSCSI storage is used. Cloud administrators can deploy the script on all Nova-Compute nodes and use a CRON job to run it periodically on each node so that faulty devices do not stay around too long. See VNX faulty device cleanup for detailed usage and the script.

2.1.5.7. Restrictions and limitations

2.1.5.7.1. iSCSI port cache
The EMC VNX iSCSI driver caches iSCSI port information. After changing the iSCSI port configuration, the user should restart the cinder-volume service or wait a number of seconds (configured by periodic_interval in cinder.conf) before performing any volume attachment operation. Otherwise, the attachment may fail because the old iSCSI port configuration is used.
2.1.5.7.2. No extending for volume with snapshots
VNX does not support extending a thick volume that has a snapshot. If the user tries to extend such a volume, the status of the volume changes to error_extending.
2.1.5.7.3. Limitations for deploying cinder on a compute node
It is not recommended to deploy the driver on a compute node if cinder upload-to-image --force True is used against an in-use volume, because cinder upload-to-image --force True terminates the VM instance's data access to the volume.
2.1.5.7.4. Storage group with host names in VNX
When the driver notices that there is no existing storage group that has the host name as the storage group name, it will create the storage group and also add the compute node's or Block Storage nodes' registered initiators into the storage group.
If the driver notices that the storage group already exists, it will assume that the registered initiators have also been put into it and skip the operations above for better performance.
It is recommended that the storage administrator does not create the storage group manually and instead relies on the driver for the preparation. If the storage administrator needs to create the storage group manually for some special requirement, the correct registered initiators should be put into the storage group as well (otherwise subsequent volume attaching operations will fail).
2.1.5.7.5. EMC storage-assisted volume migration
The EMC VNX driver supports storage-assisted volume migration. When the user starts a migration with cinder migrate --force-host-copy False <volume_id> <host> or cinder migrate <volume_id> <host>, cinder tries to leverage the VNX's native volume migration functionality.
In the following scenarios, VNX storage-assisted volume migration is not triggered:
  1. Volume migration between back ends with different storage protocols, for example, FC and iSCSI.
  2. Volume is to be migrated across arrays.

2.1.5.8. Appendix

2.1.5.8.1. Authenticate by security file
VNX credentials are necessary when the driver connects to the VNX system. Credentials in global, local, and ldap scopes are supported. There are two approaches to providing the credentials:
The recommended one is to use the Navisphere CLI security file, which avoids placing plain-text credentials in the configuration file. The instructions below show how to do this.
  1. Find out the Linux user ID of the cinder-volume processes. The steps below assume that the cinder-volume service runs under the account cinder.
  2. Run su to become the root user.
  3. In /etc/passwd, change cinder:x:113:120::/var/lib/cinder:/bin/false to cinder:x:113:120::/var/lib/cinder:/bin/bash. (This temporary change makes step 4 work.)
  4. Save the credentials on behalf of the cinder user to a security file (assuming the array credentials are admin/admin in global scope). In the command below, the '-secfilepath' switch is used to specify the location to save the security file.
    # su -l cinder -c '/opt/Navisphere/bin/naviseccli -AddUserSecurity -user admin -password admin -scope 0 -secfilepath <location>'
  5. Change cinder:x:113:120::/var/lib/cinder:/bin/bash back to cinder:x:113:120::/var/lib/cinder:/bin/false in /etc/passwd.
  6. Remove the credentials options san_login, san_password and storage_vnx_authentication_type from cinder.conf (normally /etc/cinder/cinder.conf). Add the option storage_vnx_security_file_dir and set its value to the directory path of the security file generated in step 4. Omit this option if -secfilepath is not used in step 4.
  7. Restart the cinder-volume service to validate the change.
2.1.5.8.2. Register FC port with VNX
This configuration is only required when initiator_auto_registration=False.
To access VNX storage, the compute nodes should be registered on VNX first if initiator auto registration is not enabled.
To perform "Copy Image to Volume" and "Copy Volume to Image" operations, the nodes running the cinder-volume service (Block Storage nodes) must be registered with the VNX as well.
The steps mentioned below are for the compute nodes. Follow the same steps for the Block Storage nodes as well. (The steps can be skipped if initiator auto registration is enabled.)
  1. Assume that 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 is the WWN of an FC initiator port of the compute node whose host name is myhost1 and whose IP address is 10.10.61.1. Register 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 in Unisphere:
    1. Log in to Unisphere, go to FNM0000000000->Hosts->Initiators.
    2. Refresh and wait until the initiator 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 with SP Port A-1 appears.
    3. Click the Register button, select CLARiiON/VNX and enter the hostname (which is the output of the linux command hostname) and IP address:
      • Hostname : myhost1
      • IP : 10.10.61.1
      • Click Register
    4. Then host 10.10.61.1 will appear under Hosts->Host List as well.
  2. Register the WWN with more ports if needed.
2.1.5.8.3. Register iSCSI port with VNX
This configuration is only required when initiator_auto_registration=False.
To access VNX storage, the compute nodes should be registered on VNX first if initiator auto registration is not enabled.
To perform "Copy Image to Volume" and "Copy Volume to Image" operations, the nodes running the cinder-volume service (Block Storage nodes) must be registered with the VNX as well.
The steps mentioned below are for the compute nodes. Follow the same steps for the Block Storage nodes as well. (The steps can be skipped if initiator auto registration is enabled.)
  1. On the compute node with IP address 10.10.61.1 and hostname myhost1, execute the following commands (assuming 10.10.61.35 is the iSCSI target):
    1. Start the iSCSI initiator service on the node
      # /etc/init.d/open-iscsi start
    2. Discover the iSCSI target portals on VNX
      # iscsiadm -m discovery -t st -p 10.10.61.35
    3. Enter /etc/iscsi
      # cd /etc/iscsi
    4. Find out the iqn of the node
      # more initiatorname.iscsi
  2. Log in to VNX from the compute node using the target corresponding to the SPA port:
    # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
  3. Assume iqn.1993-08.org.debian:01:1a2b3c4d5f6g is the initiator name of the compute node. Register iqn.1993-08.org.debian:01:1a2b3c4d5f6g in Unisphere:
    1. Log in to Unisphere, go to FNM0000000000->Hosts->Initiators.
    2. Refresh and wait until the initiator iqn.1993-08.org.debian:01:1a2b3c4d5f6g with SP Port A-8v0 appears.
    3. Click the Register button, select CLARiiON/VNX and enter the hostname (which is the output of the linux command hostname) and IP address:
      • Hostname : myhost1
      • IP : 10.10.61.1
      • Click Register
    4. Then host 10.10.61.1 will appear under Hosts->Host List as well.
  4. Log out of iSCSI on the node:
    # iscsiadm -m node -u
  5. Log in to VNX from the compute node using the target corresponding to the SPB port:
    # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.61.36 -l
  6. In Unisphere, register the initiator with the SPB port.
  7. Log out of iSCSI on the node:
    # iscsiadm -m node -u
  8. Register the iqn with more ports if needed.

2.1.6. EMC XtremIO Block Storage driver configuration

The high performance XtremIO All Flash Array (AFA) offers Block Storage services to OpenStack. Using the driver, OpenStack Block Storage hosts can connect to an XtremIO Storage cluster.
This section explains how to configure and connect an OpenStack block storage host to an XtremIO storage cluster.

2.1.6.1. Support matrix

  • Xtremapp: Version 3.0 and 4.0

2.1.6.2. Supported operations

  • Create, delete, clone, attach, and detach volumes
  • Create and delete volume snapshots
  • Create a volume from a snapshot
  • Copy an image to a volume
  • Copy a volume to an image
  • Extend a volume
  • Manage and unmanage a volume
  • Get volume statistics

2.1.6.3. XtremIO Block Storage driver configuration

Edit the cinder.conf file by adding the configuration below under the [DEFAULT] section of the file in the case of a single back end, or under a separate section in the case of multiple back ends (for example [XTREMIO]). The configuration file is usually located at /etc/cinder/cinder.conf.
For a configuration example, see Section 2.1.6.8, “Configuration example”.
2.1.6.3.1. XtremIO driver name
Configure the driver name by adding the following parameter:
  • For iSCSI volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOIscsiDriver
  • For Fibre Channel volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
2.1.6.3.2. XtremIO management server (XMS) IP
To retrieve the management IP, use the show-xms CLI command.
Configure the management IP by adding the following parameter: san_ip = XMS Management IP
2.1.6.3.3. XtremIO cluster name
In XtremIO version 4.0, a single XMS can manage multiple cluster back ends. In such setups, the administrator is required to specify the cluster name (in addition to the XMS IP). Each cluster must be defined as a separate back end.
To retrieve the Cluster Name, run the show-clusters CLI command.
Configure the cluster name by adding the following parameter: xtremio_cluster_name = Cluster-Name
Note
When a single cluster is managed in XtremIO version 4.0, the cluster name is not required.
2.1.6.3.4. XtremIO user credentials
OpenStack Block Storage requires an XtremIO XMS user with administrative privileges. XtremIO recommends creating a dedicated OpenStack user account that holds an administrative user role.
Refer to the XtremIO User Guide for details on user account management.
Create an XMS account using either the XMS GUI or the add-user-account CLI command.
Configure the user credentials by adding the following parameters:
san_login = XMS username
san_password = XMS username password

2.1.6.4. Multiple back ends

Configuring multiple storage back ends enables you to create several back-end storage solutions that serve the same OpenStack Compute resources.
When a volume is created, the scheduler selects the appropriate back end to handle the request, according to the specified volume type.

2.1.6.5. Setting thin provisioning and multipathing parameters

To support thin provisioning and multipathing in the XtremIO Array, the following parameters from the Nova and Cinder configuration files should be modified as follows:
  • Thin Provisioning
    All XtremIO volumes are thin provisioned. The default value of 20 should be maintained for the max_over_subscription_ratio parameter.
    The use_cow_images parameter in the nova.conf file should be set to False as follows:
    use_cow_images = false
  • Multipathing
    The use_multipath_for_image_xfer parameter in the cinder.conf file should be set to True as follows:
    use_multipath_for_image_xfer = true

2.1.6.6. Restarting OpenStack Block Storage

Save the cinder.conf file and restart cinder by running the following command:
$ openstack-service restart cinder-volume

2.1.6.7. Configuring CHAP

The XtremIO Block Storage driver supports CHAP initiator authentication. If CHAP initiator authentication is required, set the CHAP Authentication mode to initiator.
To set the CHAP initiator mode using CLI, run the following CLI command:
$ modify-chap chap-authentication-mode=initiator
The CHAP initiator mode can also be set via the XMS GUI.
Refer to the XtremIO User Guide for details on CHAP configuration via the GUI and CLI.
The CHAP initiator authentication credentials (username and password) are generated automatically by the Block Storage driver. Therefore, there is no need to configure the initial CHAP credentials manually in XMS.

2.1.6.8. Configuration example

cinder.conf example file
You can update the cinder.conf file by editing the necessary parameters as follows:
[DEFAULT]
enabled_backends = XtremIO

[XtremIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster01
san_login = XMS_USER
san_password = XMS_PASSWD
volume_backend_name = XtremIOAFA

2.1.7. HDS HNAS iSCSI and NFS driver

This OpenStack Block Storage volume driver provides iSCSI and NFS support for Hitachi NAS Platform Models 3080, 3090, 4040, 4060, 4080 and 4100.

2.1.7.1. Supported operations

The NFS and iSCSI drivers support these operations:
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Get volume statistics.
  • Manage and unmanage a volume.

2.1.7.2. HNAS storage requirements

Before using iSCSI and NFS services, use the HNAS configuration and management GUI (SMU) or SSC CLI to create storage pool(s), file system(s), and assign an EVS. Make sure that the file system used is not created as a replication target. Additionally:
For NFS:
Create NFS exports, choose a path for them (it must be different from "/") and set the Show snapshots option to hide and disable access.
Also, in the "Access Configuration" set the option norootsquash , e.g. "* (rw, norootsquash)", so HNAS cinder driver can change the permissions of its volumes.
In order to use the hardware accelerated features of NFS HNAS, we recommend setting max-nfs-version to 3. Refer to HNAS command line reference to see how to configure this option.
For iSCSI:
You need to set an iSCSI domain.

2.1.7.3. Block storage host requirements

The Block storage host requires the nfs-utils package.
If you are not using SSH, you need the HDS SSC to communicate with an HNAS array using the SSC commands. This utility package is available in the RPM package distributed with the hardware through physical media, or it can be manually copied from the SMU to the Block Storage host.

2.1.7.4. Package installation

If you are installing the driver from an RPM or DEB package, follow the steps below:
  1. Install the dependencies:
    # yum install nfs-utils nfs-utils-lib
  2. Configure the driver as described in the Section 2.1.7.5, “Driver configuration” section.
  3. Restart all cinder services (volume, scheduler and backup).

2.1.7.5. Driver configuration

The HDS driver supports the concept of differentiated services (also referred to as quality of service) by mapping volume types to services provided through HNAS.
HNAS supports a variety of storage options and file system capabilities, which are selected through the definition of volume types and the use of multiple back ends. The driver maps up to four volume types to separate exports or file systems, and can support any number when using multiple back ends.
The configuration for the driver is read from an XML-formatted file (one per back end), which you need to create, and whose path you must set in the cinder.conf configuration file. Below is the configuration needed in the cinder.conf configuration file [1]:
[DEFAULT]
enabled_backends = hnas_iscsi1, hnas_nfs1
For HNAS iSCSI driver create this section:
[hnas_iscsi1]
volume_driver = cinder.volume.drivers.hitachi.hnas_iscsi.HDSISCSIDriver
hds_hnas_iscsi_config_file = /path/to/config/hnas_config_file.xml
volume_backend_name = HNAS-ISCSI
For HNAS NFS driver create this section:
[hnas_nfs1]
volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HDSNFSDriver
hds_hnas_nfs_config_file = /path/to/config/hnas_config_file.xml
volume_backend_name = HNAS-NFS
The XML file has the following format:
<?xml version = "1.0" encoding = "UTF-8" ?>
  <config>
    <mgmt_ip0>172.24.44.15</mgmt_ip0>
    <hnas_cmd>ssc</hnas_cmd>
    <chap_enabled>False</chap_enabled>
    <ssh_enabled>False</ssh_enabled>
    <cluster_admin_ip0>10.1.1.1</cluster_admin_ip0>
    <username>supervisor</username>
    <password>supervisor</password>
    <svc_0>
      <volume_type>default</volume_type>
      <iscsi_ip>172.24.44.20</iscsi_ip>
      <hdp>fs01-husvm</hdp>
    </svc_0>
    <svc_1>
      <volume_type>platinum</volume_type>
      <iscsi_ip>172.24.44.20</iscsi_ip>
      <hdp>fs01-platinum</hdp>
    </svc_1>
  </config>

2.1.7.6. HNAS volume driver XML configuration options

An OpenStack Block Storage node using HNAS drivers can have up to four services. Each service is defined by a svc_n tag (svc_0, svc_1, svc_2, or svc_3 [2], for example). These are the configuration options available for each service label:

Table 2.5. Configuration options for service labels

Option (Type; Default) Description
volume_type (Required; default: default) When a create_volume call with a certain volume type happens, the volume type will try to be matched up with this tag. In each configuration file you must define the default volume type in the service labels and, if no volume type is specified, the default is used. Other labels are case sensitive and should match exactly. If no configured volume types match the incoming requested type, an error occurs in the volume creation.
iscsi_ip (Required only for iSCSI) An iSCSI IP address dedicated to the service.
hdp (Required) For the iSCSI driver: virtual file system label associated with the service. For the NFS driver: path to the volume (<ip_address>:/<path>) associated with the service. Additionally, this entry must be added in the file used to list available NFS shares. This file is located, by default, in /etc/cinder/nfs_shares, or you can specify the location in the nfs_shares_config option in the cinder.conf configuration file.
These are the configuration options available to the config section of the XML config file:

Table 2.6. Configuration options

Option (Type; Default) Description
mgmt_ip0 (Required) Management Port 0 IP address. Should be the IP address of the "Admin" EVS.
hnas_cmd (Optional; default: ssc) Command to communicate with the HNAS array.
chap_enabled (Optional, iSCSI only; default: True) Boolean tag used to enable the CHAP authentication protocol.
username (Required; default: supervisor) User name. It is always required on HNAS.
password (Required; default: supervisor) Password. It is always required on HNAS.
svc_0, svc_1, svc_2, svc_3 (Optional; at least one label has to be defined) Service labels: these four predefined names allow up to four different sets of configuration options. Each can specify HDP and a unique volume type.
cluster_admin_ip0 (Optional if ssh_enabled is True) The address of the HNAS cluster admin.
ssh_enabled (Optional; default: False) Enables SSH authentication between the Block Storage host and the SMU.
ssh_private_key (Required if ssh_enabled is True) Path to the SSH private key used to authenticate to the HNAS SMU. The public key must be uploaded to the HNAS SMU using ssh-register-public-key (an SSH subcommand). Note that copying the public key to HNAS using ssh-copy-id does not work properly, as the SMU periodically wipes out those keys.

2.1.7.7. Service labels

The HNAS driver supports differentiated types of service using service labels. It is possible to create up to four types, for example gold, platinum, silver, and ssd.
After creating the services in the XML configuration file, you must configure one volume_type per service. Each volume_type must have the metadata service_label with the same name configured in the <volume_type> section of that service. If this is not set, OpenStack Block Storage schedules the volume creation to the pool with the largest available free space or other criteria configured in volume filters.
$ cinder type-create default
$ cinder type-key default set service_label=default
$ cinder type-create platinum-tier
$ cinder type-key platinum-tier set service_label=platinum

2.1.7.8. Multi-back-end configuration

If you use multiple back ends and intend to enable the creation of a volume in a specific back end, you must configure volume types to set the volume_backend_name option to the appropriate back end. Then, create volume_type configurations with the same volume_backend_name.
$ cinder type-create 'iscsi'
$ cinder type-key 'iscsi' set volume_backend_name='HNAS-ISCSI'
$ cinder type-create 'nfs'
$ cinder type-key 'nfs' set volume_backend_name='HNAS-NFS'
You can deploy multiple OpenStack HNAS driver instances, each controlling a separate HNAS array. Each service (svc_0, svc_1, svc_2, svc_3) on the instances needs to have a volume_type and service_label metadata associated with it. If no metadata is associated with a pool, the OpenStack Block Storage filtering algorithm selects the pool with the largest available free space.

2.1.7.9. SSH configuration

Instead of using SSC on the Block Storage host and storing its credentials in the XML configuration file, the HNAS driver supports SSH authentication. To configure it:
  1. If you do not already have an SSH key pair, create one on the Block Storage host (leave the passphrase empty):
    $ mkdir -p /opt/hds/ssh
    $ ssh-keygen -f /opt/hds/ssh/hnaskey
  2. Change the owner of the key to cinder (or the user that the volume service runs as):
    # chown -R cinder.cinder /opt/hds/ssh
  3. Create the directory "ssh_keys" in the SMU server:
    $ ssh [manager|supervisor]@<smu-ip> 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/'
  4. Copy the public key to the "ssh_keys" directory:
    $ scp /opt/hds/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>:/var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/
  5. Access the SMU server:
    $ ssh [manager|supervisor]@<smu-ip>
  6. Run the command to register the SSH keys:
    $ ssh-register-public-key -u [manager|supervisor] -f ssh_keys/hnaskey.pub
  7. Check the communication with HNAS in the Block Storage host:
    $ ssh -i /opt/hds/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'
<cluster_admin_ip0> is "localhost" for single node deployments. This should return a list of available file systems on HNAS.

2.1.7.10. Editing the XML config file

  1. Set the "username".
  2. Enable SSH adding the line "<ssh_enabled> True</ssh_enabled>" under "<config>" section.
  3. Set the private key path: "<ssh_private_key> /opt/hds/ssh/hnaskey</ssh_private_key>" under "<config>" section.
  4. If the HNAS is in a multi-cluster configuration set "<cluster_admin_ip0>" to the cluster node admin IP. In a single node HNAS, leave it empty.
  5. Restart cinder services.
Warning
Note that copying the public key to HNAS using ssh-copy-id does not work properly, as the SMU periodically wipes out those keys.

2.1.7.11. Manage and unmanage

Manage and unmanage are two API extensions that add new features to the driver. The manage action on an existing volume is very similar to a volume creation: it creates a volume entry in the OpenStack Block Storage DB, but instead of creating a new volume in the back end, it only adds a 'link' to an existing volume. Volume name, description, volume_type, metadata, and availability_zone are supported as in a normal volume creation.
The unmanage action on an existing volume removes the volume from the OpenStack Block Storage DB, but keeps the actual volume in the back end. From an OpenStack Block Storage perspective the volume is deleted, but it still exists for outside use.
How to Manage:
On the Dashboard:
For NFS:
  1. Under the tab System -> Volumes choose the option [ + Manage Volume ]
  2. Fill the fields Identifier, Host and Volume Type with volume information to be managed:
    • Identifier: ip:/type/volume_name Example: 172.24.44.34:/silver/volume-test
    • Host: host@backend-name#pool_name Example: myhost@hnas-nfs#test_silver
    • Volume Name: volume_name Example: volume-test
    • Volume Type: choose a type of volume Example: silver
For iSCSI:
  1. Under the tab System -> Volumes choose the option [ + Manage Volume ]
  2. Fill the fields Identifier, Host, Volume Name and Volume Type with volume information to be managed:
    • Identifier: filesystem-name/volume-name Example: filesystem-test/volume-test
    • Host: host@backend-name#pool_name Example: myhost@hnas-iscsi#test_silver
    • Volume Name: volume_name Example: volume-test
    • Volume Type: choose a type of volume Example: silver
By CLI:
$ cinder --os-volume-api-version 2 manage [--source-name <source-name>] [--id-type <id-type>] [--name <name>] [--description <description>] [--volume-type <volume-type>] [--availability-zone <availability-zone>] [--metadata [<key=value> [<key=value> ...]]] [--bootable] <host> [<key=value> [<key=value> ...]]
Example:
For NFS:
$ cinder --os-volume-api-version 2 manage --name <volume-test> --volume-type <silver> --source-name <172.24.44.34:/silver/volume-test> <myhost@hnas-nfs#test_silver>
For iSCSI:
$ cinder --os-volume-api-version 2 manage --name <volume-test> --volume-type <silver> --source-name <filesystem-test/volume-test> <myhost@hnas-iscsi#test_silver>
How to Unmanage:
On Dashboard:
  1. Under the tab [ System -> Volumes ] choose a volume
  2. On the volume options, choose [ +Unmanage Volume ]
  3. Check the data and confirm.
By CLI:
$ cinder --os-volume-api-version 2 unmanage <volume>
Example:
$ cinder --os-volume-api-version 2 unmanage <voltest>

2.1.7.12. Additional notes

  • The get_volume_stats() function always provides the available capacity based on the combined sum of all the HDPs that are used in these service labels.
  • After changing the configuration on the storage, the OpenStack Block Storage driver must be restarted.
  • On Red Hat, if the system is configured to use SELinux, you need to set "virt_use_nfs = on" for the NFS driver to work properly.
    # setsebool -P virt_use_nfs on
  • It is not possible to manage a volume if there is a slash ('/') or a colon (':') in the volume name.

2.1.8. Hitachi storage volume driver

The Hitachi storage volume driver provides iSCSI and Fibre Channel support for Hitachi storage arrays.

2.1.8.1. System requirements

Supported storages:
  • Hitachi Virtual Storage Platform G1000 (VSP G1000)
  • Hitachi Virtual Storage Platform (VSP)
  • Hitachi Unified Storage VM (HUS VM)
  • Hitachi Unified Storage 100 Family (HUS 100 Family)
Required software:
  • RAID Manager Ver 01-32-03/01 or later for VSP G1000/VSP/HUS VM
  • Hitachi Storage Navigator Modular 2 (HSNM2) Ver 27.50 or later for HUS 100 Family
Note
HSNM2 needs to be installed under /usr/stonavm.
Required licenses:
  • Hitachi In-System Replication Software for VSP G1000/VSP/HUS VM
  • (Mandatory) ShadowImage in-system replication for HUS 100 Family
  • (Optional) Copy-on-Write Snapshot for HUS 100 Family
Additionally, the pexpect package is required.

2.1.8.2. Supported operations

  • Create, delete, attach and detach volumes.
  • Create, list and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy a volume to an image.
  • Copy an image to a volume.
  • Clone a volume.
  • Extend a volume.
  • Get volume statistics.

2.1.8.3. Configuration

Set up Hitachi storage
You need to specify settings as described below. For details about each step, see the user's guide of the storage device. Use storage administration software, such as Storage Navigator, to set up the storage device so that LDEVs and host groups can be created and deleted, and LDEVs can be connected to the server and copied asynchronously.
  1. Create a Dynamic Provisioning pool.
  2. Connect the ports at the storage to the Controller node and Compute nodes.
  3. For VSP G1000/VSP/HUS VM, set "port security" to "enable" for the ports at the storage.
  4. For HUS 100 Family, set "Host Group security"/"iSCSI target security" to "ON" for the ports at the storage.
  5. For the ports at the storage, create host groups (iSCSI targets) whose names begin with HBSD- for the Controller node and each Compute node. Then register the WWN (initiator IQN) of the Controller node and each Compute node.
  6. For VSP G1000/VSP/HUS VM, perform the following:
    • Create a storage device account belonging to the Administrator User Group. (To use multiple storage devices, create the same account name for all the target storage devices, and specify the same resource group and permissions.)
    • Create a command device (In-Band), and set user authentication to ON.
    • Register the created command device to the host group for the Controller node.
    • To use the Thin Image function, create a pool for Thin Image.
  7. For HUS 100 Family, perform the following:
    • Use the command auunitaddauto to register the unit name and controller of the storage device to HSNM2.
    • When connecting via iSCSI, if you are using CHAP authentication, specify the same user and password as those used for the storage port.
Set up Hitachi Gigabit Fibre Channel adaptor
Change a parameter of the hfcldd driver and update the initramfs file if a Hitachi Gigabit Fibre Channel adaptor is used:
# /opt/hitachi/drivers/hba/hfcmgr -E hfc_rport_lu_scan 1
# dracut -f initramfs-KERNEL_VERSION.img KERNEL_VERSION
# reboot
Set up Hitachi storage volume driver
  1. Create the lock directory and assign ownership to the cinder user:
    # mkdir /var/lock/hbsd
    # chown cinder:cinder /var/lock/hbsd
  2. Create "volume type" and "volume key".
    This example shows that HUS100_SAMPLE is created as "volume type" and hus100_backend is registered as "volume key".
    $ cinder type-create HUS100_SAMPLE
    $ cinder type-key HUS100_SAMPLE set volume_backend_name=hus100_backend
    You can specify any name for the "volume type" and any value for the "volume key", but the "volume key" must match the volume_backend_name set in cinder.conf (see the next step).
    To confirm the created "volume type", run the following command:
    $ cinder extra-specs-list
  3. Edit /etc/cinder/cinder.conf as follows.
    If you use Fibre Channel:
    volume_driver = cinder.volume.drivers.hitachi.hbsd_fc.HBSDFCDriver
    If you use iSCSI:
    volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
    Also, set the volume_backend_name option to the "volume key" registered with cinder type-key:
    volume_backend_name = hus100_backend
    The following table shows the configuration options for the Hitachi storage volume driver.

    Table 2.7. Description of Hitachi storage volume driver configuration options

    Configuration option = Default value Description
    [DEFAULT]
    hitachi_add_chap_user = False (BoolOpt) Add CHAP user
    hitachi_async_copy_check_interval = 10 (IntOpt) Interval to check copy asynchronously
    hitachi_auth_method = None (StrOpt) iSCSI authentication method
    hitachi_auth_password = HBSD-CHAP-password (StrOpt) iSCSI authentication password
    hitachi_auth_user = HBSD-CHAP-user (StrOpt) iSCSI authentication username
    hitachi_copy_check_interval = 3 (IntOpt) Interval to check copy
    hitachi_copy_speed = 3 (IntOpt) Copy speed of storage system
    hitachi_default_copy_method = FULL (StrOpt) Default copy method of storage system
    hitachi_group_range = None (StrOpt) Range of group number
    hitachi_group_request = False (BoolOpt) Request for creating HostGroup or iSCSI Target
    hitachi_horcm_add_conf = True (BoolOpt) Add to HORCM configuration
    hitachi_horcm_numbers = 200,201 (StrOpt) Instance numbers for HORCM
    hitachi_horcm_password = None (StrOpt) Password of storage system for HORCM
    hitachi_horcm_resource_lock_timeout = 600 (IntOpt) Timeout until a resource lock is released, in seconds. The value must be between 0 and 7200.
    hitachi_horcm_user = None (StrOpt) Username of storage system for HORCM
    hitachi_ldev_range = None (StrOpt) Range of logical device of storage system
    hitachi_pool_id = None (IntOpt) Pool ID of storage system
    hitachi_serial_number = None (StrOpt) Serial number of storage system
    hitachi_target_ports = None (StrOpt) Control port names for HostGroup or iSCSI Target
    hitachi_thin_pool_id = None (IntOpt) Thin pool ID of storage system
    hitachi_unit_name = None (StrOpt) Name of an array unit
    hitachi_zoning_request = False (BoolOpt) Request for FC Zone creating HostGroup
  4. Restart the Block Storage service.
    When startup completes, the message "MSGID0003-I: The storage backend can be used." is written to /var/log/cinder/volume.log, as follows:
    2014-09-01 10:34:14.169 28734 WARNING cinder.volume.drivers.hitachi.hbsd_common [req-a0bb70b5-7c3f-422a-a29e-6a55d6508135 None None] MSGID0003-I: The storage backend can be used. (config_group: hus100_backend)
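Putting the preceding steps together, a minimal cinder.conf stanza for a HUS 100 iSCSI back end might look like the following sketch. This is an illustration only: the stanza name matches the "volume key" from the earlier example, while the unit name, pool ID, and port names are assumed values that must be replaced with those of your storage system.
[DEFAULT]
enabled_backends = hus100_backend

[hus100_backend]
volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
volume_backend_name = hus100_backend
# Assumed values for illustration; use your own array settings.
hitachi_unit_name = HUS100_unit0
hitachi_pool_id = 0
hitachi_target_ports = 0A,1A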

2.1.9. Huawei storage driver

The Huawei driver supports iSCSI and Fibre Channel connections and enables OceanStor T series V200R002, OceanStor 18000 series V100R001, and OceanStor V3 series V300R002 storage systems to provide block storage services for OpenStack.

Supported operations

  • Create, delete, expand, attach, and detach volumes.
  • Create and delete a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Create a volume from a snapshot.
  • Clone a volume.

Configure block storage nodes

  1. Modify the cinder.conf configuration file and add volume_driver and cinder_huawei_conf_file items.
    • Example for configuring a storage system:
      volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
      cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
    • Example for configuring multiple storage systems:
      enabled_backends = t_iscsi, 18000_iscsi
      [t_iscsi]
      volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
      cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_t_iscsi.xml
      volume_backend_name = HuaweiTISCSIDriver
      
      [18000_iscsi]
      volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
      cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_18000_iscsi.xml
      volume_backend_name = Huawei18000ISCSIDriver
  2. In /etc/cinder, create a driver configuration file. The driver configuration file name must be the same as the cinder_huawei_conf_file item in the cinder.conf configuration file.
  3. Configure product and protocol.

    Product and Protocol indicate the storage system type and link type respectively. For the OceanStor 18000 series V100R001 storage systems, the driver configuration file is as follows:
    <?xml version='1.0' encoding='UTF-8'?>
    <config>
        <Storage>
            <Product>18000</Product>
            <Protocol>iSCSI</Protocol>
            <RestURL>https://x.x.x.x/deviceManager/rest/</RestURL>
            <UserName>xxxxxxxx</UserName>
            <UserPassword>xxxxxxxx</UserPassword>
        </Storage>
        <LUN>
            <LUNType>Thick</LUNType>
            <WriteType>1</WriteType>
            <MirrorSwitch>0</MirrorSwitch>
            <LUNcopyWaitInterval>5</LUNcopyWaitInterval>
            <Timeout>432000</Timeout>
            <StoragePool>xxxxxxxx</StoragePool>
        </LUN>
        <iSCSI>
            <DefaultTargetIP>x.x.x.x</DefaultTargetIP>
            <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
            <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
        </iSCSI>
        <Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
    </config>
    Note

    Note for fibre channel driver configuration

    • In the configuration files of OceanStor T series V200R002 and OceanStor V3 V300R002, the parameter configurations are identical except for the RestURL parameter. Configure the RestURL parameter as follows:
      <RestURL>https://x.x.x.x:8088/deviceManager/rest/</RestURL>
    • For a Fibre Channel driver, you do not need to configure an iSCSI target IP address. Delete the iSCSI configuration from the preceding examples.
      <iSCSI>
              <DefaultTargetIP>x.x.x.x</DefaultTargetIP>
              <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
              <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
      </iSCSI>
    This table describes the Huawei storage driver configuration options:

    Table 2.8. Huawei storage driver configuration options

    Property Type Default Description
    Product Mandatory - Type of a storage product. Valid values are T, TV3, or 18000.
    Protocol Mandatory - Type of a protocol. Valid values are iSCSI or FC.
    RestURL Mandatory - Access address of the Rest port (required only for the 18000).
    UserName Mandatory - User name of an administrator.
    UserPassword Mandatory - Password of an administrator.
    LUNType Optional Thin Type of a created LUN. Valid values are Thick or Thin.
    StripUnitSize Optional 64 Stripe depth of a created LUN, expressed in KB. This flag is not valid for a thin LUN.
    WriteType Optional 1 Cache write method. The method can be write back, write through, or required write back. The default value is 1, indicating write back.
    MirrorSwitch Optional 1 Cache mirroring policy. The default value is 1, indicating that a mirroring policy is used.
    Prefetch Type Optional 3 Cache prefetch strategy. The strategy can be constant prefetch, variable prefetch, or intelligent prefetch. The default value is 3, which indicates intelligent prefetch and is not required for the OceanStor 18000 series.
    Prefetch Value Optional 0 Cache prefetch value.
    LUNcopyWaitInterval Optional 5 After LUN copy is enabled, the plug-in frequently queries the copy progress. You can set a value to specify the query interval.
    Timeout Optional 432,000 Timeout period for waiting for a LUN copy of an array to complete.
    StoragePool Mandatory - Name of a storage pool that you want to use.
    DefaultTargetIP Optional - Default IP address of the iSCSI port provided for compute nodes.
    Initiator Name Optional - Name of a compute node initiator.
    Initiator TargetIP Optional - IP address of the iSCSI port provided for compute nodes.
    OSType Optional Linux The OS type for a compute node.
    HostIP Optional - The IPs for compute nodes.
    Note for the configuration
    1. You can configure one iSCSI target port for each compute node or for all compute nodes. The driver checks whether a target port IP address is configured for the current compute node; if not, it uses DefaultTargetIP.
    2. Only one storage pool can be configured.
    3. For details about LUN configuration information, see the show lun general command in the command-line interface (CLI) documentation, or run help -c show lun general on the storage system CLI.
    4. After the driver is loaded, the storage system obtains any modification of the driver configuration file in real time and you do not need to restart the cinder-volume service.
  4. Restart the Cinder service.
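With multiple back ends enabled as in the earlier example, volume types can be used to route new volumes to a specific Huawei back end. The following sketch assumes the volume_backend_name values shown above; the volume type name is illustrative:
$ cinder type-create Huawei18000_iSCSI
$ cinder type-key Huawei18000_iSCSI set volume_backend_name=Huawei18000ISCSIDriver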

2.1.10. IBM Storwize family and SVC volume driver

The volume management driver for Storwize family and SAN Volume Controller (SVC) provides OpenStack Compute instances with access to IBM Storwize family or SVC storage systems.

2.1.10.1. Configure the Storwize family and SVC system

Network configuration
The Storwize family or SVC system must be configured for iSCSI, Fibre Channel, or both.
If using iSCSI, each Storwize family or SVC node should have at least one iSCSI IP address. The IBM Storwize/SVC driver uses an iSCSI IP address associated with the volume's preferred node (if available) to attach the volume to the instance, otherwise it uses the first available iSCSI IP address of the system. The driver obtains the iSCSI IP address directly from the storage system; you do not need to provide these iSCSI IP addresses directly to the driver.
Note
If using iSCSI, ensure that the compute nodes have iSCSI network access to the Storwize family or SVC system.
Note
The Grizzly release of OpenStack Compute (nova) added support for iSCSI multipath. Once this is configured on the Compute host (outside the scope of this documentation), multipath is enabled.
If using Fibre Channel (FC), each Storwize family or SVC node should have at least one WWPN port configured. If the storwize_svc_multipath_enabled flag is set to True in the Cinder configuration file, the driver uses all available WWPNs to attach the volume to the instance (details about the configuration flags appear in the next section). If the flag is not set, the driver uses the WWPN associated with the volume's preferred node (if available), otherwise it uses the first available WWPN of the system. The driver obtains the WWPNs directly from the storage system; you do not need to provide these WWPNs directly to the driver.
Note
If using FC, ensure that the compute nodes have FC connectivity to the Storwize family or SVC system.
iSCSI CHAP authentication
If using iSCSI for data access and the storwize_svc_iscsi_chap_enabled option is set to True, the driver associates randomly-generated CHAP secrets with all hosts on the Storwize family system. OpenStack Compute nodes use these secrets when creating iSCSI connections.
Note
CHAP secrets are added to existing hosts as well as newly-created ones. If the CHAP option is enabled, hosts will not be able to access the storage without the generated secrets.
Note
Not all OpenStack Compute drivers support CHAP authentication. Check compatibility before using.
Note
CHAP secrets are passed from OpenStack Block Storage to Compute in clear text. This communication should be secured to ensure that CHAP secrets are not discovered.
Configure storage pools
Each instance of the IBM Storwize/SVC driver allocates all volumes in a single pool. The pool should be created in advance and be provided to the driver using the storwize_svc_volpool_name configuration flag. Details about the configuration flags and how to provide the flags to the driver appear in the next section.
Configure user authentication for the driver
The driver requires access to the Storwize family or SVC system management interface, with which it communicates using SSH. Provide the driver with the Storwize family or SVC management IP using the san_ip flag, and the management port using the san_ssh_port flag. By default, the port value is configured to be port 22 (SSH).
Note
Make sure the compute node running the cinder-volume management driver has SSH network access to the storage system.
To allow the driver to communicate with the Storwize family or SVC system, you must provide the driver with a user on the storage system. The driver supports two authentication methods: password-based authentication and SSH key pair authentication. The user should have an Administrator role. It is recommended to create a new user for the management driver. Consult your storage and security administrator regarding the preferred authentication method and how passwords or SSH keys should be stored in a secure manner.
Note
When creating a new user on the Storwize or SVC system, make sure the user belongs to the Administrator group or to another group that has an Administrator role.
If using password authentication, assign a password to the user on the Storwize or SVC system. The driver configuration flags for the user and password are san_login and san_password, respectively.
If using SSH key pair authentication, create SSH private and public keys using the instructions below or by any other method. Associate the public key with the user by uploading it: select the "choose file" option in the Storwize family or SVC management GUI under "SSH public key". Alternatively, you can associate the SSH public key using the command-line interface; details can be found in the Storwize and SVC documentation. Provide the private key to the driver using the san_private_key configuration flag.
Create an SSH key pair with OpenSSH
You can create an SSH key pair using OpenSSH, by running:
$ ssh-keygen -t rsa
The command prompts for a file in which to save the key pair. For example, if you select 'key' as the filename, two files are created: key and key.pub. The key file holds the private SSH key and key.pub holds the public SSH key.
The command also prompts for a passphrase, which should be left empty.
The private key file should be provided to the driver using the san_private_key configuration flag. The public key should be uploaded to the Storwize family or SVC system using the storage management GUI or command line interface.
Note
Ensure that Cinder has read permissions on the private key file.

2.1.10.2. Configure the Storwize family and SVC driver

Enable the Storwize family and SVC driver
Set the volume driver to the Storwize family and SVC driver by setting the volume_driver option in cinder.conf as follows:
volume_driver = cinder.volume.drivers.ibm.storwize_svc.StorwizeSVCDriver
Storwize family and SVC driver options in cinder.conf
The following options specify default values for all volumes. Some can be overridden using volume types, which are described below.

Table 2.9. List of configuration flags for Storwize storage and SVC driver

Flag name Type Default Description
san_ip Required - Management IP or host name
san_ssh_port Optional 22 Management port
san_login Required - Management login username
san_password Required [a] - Management login password
san_private_key Required [a] - Management login SSH private key
storwize_svc_volpool_name Required - Default pool name for volumes
storwize_svc_vol_rsize Optional 2 Initial physical allocation (percentage) [b]
storwize_svc_vol_warning Optional 0 (disabled) Space allocation warning threshold (percentage) [b]
storwize_svc_vol_autoexpand Optional True Enable or disable volume auto expand [c]
storwize_svc_vol_grainsize Optional 256 Volume grain size [b] in KB
storwize_svc_vol_compression Optional False Enable or disable Real-time Compression [d]
storwize_svc_vol_easytier Optional True Enable or disable Easy Tier [e]
storwize_svc_vol_iogrp Optional 0 The I/O group in which to allocate vdisks
storwize_svc_flashcopy_timeout Optional 120 FlashCopy timeout threshold [f] (seconds)
storwize_svc_connection_protocol Optional iSCSI Connection protocol to use (currently supports 'iSCSI' or 'FC')
storwize_svc_iscsi_chap_enabled Optional True Configure CHAP authentication for iSCSI connections
storwize_svc_multipath_enabled Optional False Enable multipath for FC connections [g]
storwize_svc_multihost_enabled Optional True Enable mapping vdisks to multiple hosts [h]
storwize_svc_vol_nofmtdisk Optional False Enable or disable fast format [i]
[a] The authentication requires either a password (san_password) or SSH private key (san_private_key). One must be specified. If both are specified, the driver uses only the SSH private key.
[b] The driver creates thin-provisioned volumes by default. The storwize_svc_vol_rsize flag defines the initial physical allocation percentage for thin-provisioned volumes, or if set to -1, the driver creates full allocated volumes. More details about the available options are available in the Storwize family and SVC documentation.
[c] Defines whether thin-provisioned volumes can be auto expanded by the storage system. A value of True means that auto expansion is enabled; a value of False disables auto expansion. Details about this option can be found in the -autoexpand flag of the Storwize family and SVC command line interface mkvdisk command.
[d] Defines whether Real-time Compression is used for the volumes created with OpenStack. Details on Real-time Compression can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have compression enabled for this feature to work.
[e] Defines whether Easy Tier is used for the volumes created with OpenStack. Details on EasyTier can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have Easy Tier enabled for this feature to work.
[f] The driver wait timeout threshold when creating an OpenStack snapshot. This is actually the maximum amount of time that the driver waits for the Storwize family or SVC system to prepare a new FlashCopy mapping. The driver accepts a maximum wait time of 600 seconds (10 minutes).
[g] Multipath for iSCSI connections requires no storage-side configuration and is enabled if the compute host has multipath configured.
[h] This option allows the driver to map a vdisk to more than one host at a time. This scenario occurs during migration of a virtual machine with an attached volume; the volume is simultaneously mapped to both the source and destination compute hosts. If your deployment does not require attaching vdisks to multiple hosts, setting this flag to False will provide added safety.
[i] Defines whether the fast formatting of thick-provisioned volumes is disabled at creation. The default value is False; a value of True means that fast format is disabled. Details about this option can be found in the -nofmtdisk flag of the Storwize family and SVC command line interface mkvdisk command.

Table 2.10. Description of IBM Storwize driver configuration options

Configuration option = Default value Description
[DEFAULT]
storwize_svc_allow_tenant_qos = False (BoolOpt) Allow tenants to specify QOS on create
storwize_svc_connection_protocol = iSCSI (StrOpt) Connection protocol (iSCSI/FC)
storwize_svc_flashcopy_timeout = 120 (IntOpt) Maximum number of seconds to wait for FlashCopy to be prepared.
storwize_svc_iscsi_chap_enabled = True (BoolOpt) Configure CHAP authentication for iSCSI connections (Default: Enabled)
storwize_svc_multihostmap_enabled = True (BoolOpt) Allows vdisk to multi host mapping
storwize_svc_multipath_enabled = False (BoolOpt) Connect with multipath (FC only; iSCSI multipath is controlled by Nova)
storwize_svc_npiv_compatibility_mode = True (BoolOpt) Indicate whether svc driver is compatible for NPIV setup. If it is compatible, it will allow no wwpns being returned on get_conn_fc_wwpns during initialize_connection. It should always be set to True. It will be deprecated and removed in M release.
storwize_svc_stretched_cluster_partner = None (StrOpt) If operating in stretched cluster mode, specify the name of the pool in which mirrored copies are stored. Example: "pool2"
storwize_svc_vol_autoexpand = True (BoolOpt) Storage system autoexpand parameter for volumes (True/False)
storwize_svc_vol_compression = False (BoolOpt) Storage system compression option for volumes
storwize_svc_vol_easytier = True (BoolOpt) Enable Easy Tier for volumes
storwize_svc_vol_grainsize = 256 (IntOpt) Storage system grain size parameter for volumes (32/64/128/256)
storwize_svc_vol_iogrp = 0 (IntOpt) The I/O group in which to allocate volumes
storwize_svc_vol_rsize = 2 (IntOpt) Storage system space-efficiency parameter for volumes (percentage)
storwize_svc_vol_warning = 0 (IntOpt) Storage system threshold for volume capacity warnings (percentage)
storwize_svc_volpool_name = volpool (StrOpt) Storage system storage pool for volumes
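As a minimal sketch, the flags above combine in cinder.conf as follows. The IP address, user name, key path, and pool name are illustrative assumptions and must be replaced with your own values:
volume_driver = cinder.volume.drivers.ibm.storwize_svc.StorwizeSVCDriver
san_ip = 192.168.0.10
san_login = openstack
san_private_key = /etc/cinder/storwize_key
storwize_svc_volpool_name = openstackpool
storwize_svc_connection_protocol = iSCSI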
Placement with volume types
The IBM Storwize/SVC driver exposes capabilities that can be added to the extra specs of volume types, and used by the filter scheduler to determine placement of new volumes. Make sure to prefix these keys with capabilities: to indicate that the scheduler should use them. The following extra specs are supported:
  • capabilities:volume_backend_name - Specify a specific back-end where the volume should be created. The back-end name is a concatenation of the name of the IBM Storwize/SVC storage system as shown in lssystem, an underscore, and the name of the pool (mdisk group). For example:
    capabilities:volume_backend_name=myV7000_openstackpool
  • capabilities:compression_support - Specify a back-end according to compression support. A value of True should be used to request a back-end that supports compression, and a value of False will request a back-end that does not support compression. If you do not have constraints on compression support, do not set this key. Note that specifying True does not enable compression; it only requests that the volume be placed on a back-end that supports compression. Example syntax:
    capabilities:compression_support='<is> True'
  • capabilities:easytier_support - Similar semantics as the compression_support key, but for specifying according to support of the Easy Tier feature. Example syntax:
    capabilities:easytier_support='<is> True'
  • capabilities:storage_protocol - Specifies the connection protocol used to attach volumes of this type to instances. Legal values are iSCSI and FC. This extra specs value is used for both placement and setting the protocol used for this volume. In the example syntax, note <in> is used as opposed to <is> used in the previous examples.
    capabilities:storage_protocol='<in> FC'
Configure per-volume creation options
Volume types can also be used to pass options to the IBM Storwize/SVC driver, which over-ride the default values set in the configuration file. Contrary to the previous examples where the "capabilities" scope was used to pass parameters to the Cinder scheduler, options can be passed to the IBM Storwize/SVC driver with the "drivers" scope.
The following extra specs keys are supported by the IBM Storwize/SVC driver:
  • rsize
  • warning
  • autoexpand
  • grainsize
  • compression
  • easytier
  • multipath
  • iogrp
These keys have the same semantics as their counterparts in the configuration file. They are set similarly; for example, rsize=2 or compression=False.
Example: Volume types
In the following example, we create a volume type to specify a controller that supports iSCSI and compression, to use iSCSI when attaching the volume, and to enable compression:
$ cinder type-create compressed
$ cinder type-key compressed set capabilities:storage_protocol='<in> iSCSI' capabilities:compression_support='<is> True' drivers:compression=True
We can then create a 50GB volume using this type:
$ cinder create --display-name "compressed volume" --volume-type compressed 50
Volume types can be used, for example, to provide users with different
  • performance levels (such as allocating entirely on an HDD tier, using Easy Tier for an HDD-SSD mix, or allocating entirely on an SSD tier)
  • resiliency levels (such as allocating volumes in pools with different RAID levels)
  • features (such as enabling/disabling Real-time Compression)
QOS
The Storwize driver provides QOS support for storage volumes by controlling the I/O amount. QOS is enabled by editing the /etc/cinder/cinder.conf file and setting the storwize_svc_allow_tenant_qos option to True.
There are three ways to set the Storwize IOThrottling parameter for storage volumes:
  • Add the qos:IOThrottling key into a QOS specification and associate it with a volume type.
  • Add the qos:IOThrottling key into an extra specification with a volume type.
  • Add the qos:IOThrottling key to the storage volume metadata.
Note
If you are changing a volume type with QOS to a new volume type without QOS, the QOS configuration settings will be removed.
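For the first approach, a QOS specification can be created and associated with a volume type using the cinder client. A minimal sketch; the specification name, throttling value, and placeholders are illustrative:
$ cinder qos-create storwize-qos qos:IOThrottling=100
$ cinder qos-associate <qos_specs_id> <volume_type_id>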

2.1.10.3. Operational notes for the Storwize family and SVC driver

Migrate volumes
In the context of OpenStack Block Storage's volume migration feature, the IBM Storwize/SVC driver enables the storage's virtualization technology. When migrating a volume from one pool to another, the volume will appear in the destination pool almost immediately, while the storage moves the data in the background.
Note
To enable this feature, both pools involved in a given volume migration must have the same values for extent_size. If the pools have different values for extent_size, the data will still be moved directly between the pools (not host-side copy), but the operation will be synchronous.
Extend volumes
The IBM Storwize/SVC driver allows for extending a volume's size, but only for volumes without snapshots.
Snapshots and clones
Snapshots are implemented using FlashCopy with no background copy (space-efficient). Volume clones (volumes created from existing volumes) are implemented with FlashCopy, but with background copy enabled. This means that volume clones are independent, full copies. While this background copy is taking place, attempting to delete or extend the source volume will result in that operation waiting for the copy to complete.
Volume retype
The IBM Storwize/SVC driver enables you to modify volume types. When you modify volume types, you can also change these extra specs properties:
  • rsize
  • warning
  • autoexpand
  • grainsize
  • compression
  • easytier
  • iogrp
  • nofmtdisk
Note
When you change the rsize, grainsize or compression properties, volume copies are asynchronously synchronized on the array.
Note
To change the iogrp property, IBM Storwize/SVC firmware version 6.4.0 or later is required.

2.1.11. IBM XIV and DS8000 volume driver

The IBM Storage Driver for OpenStack is a Block Storage driver that supports IBM XIV and IBM DS8000 storage systems over Fibre Channel and iSCSI.
Set the following in your cinder.conf file, and use the following options to configure it.
volume_driver = cinder.volume.drivers.xiv_ds8k.XIVDS8KDriver

Table 2.11. Description of IBM XIV and DS8000 volume driver configuration options

Configuration option = Default value Description
[DEFAULT]
san_clustername = (StrOpt) Cluster name to use for creating volumes
san_ip = (StrOpt) IP address of SAN controller
san_login = admin (StrOpt) Username for SAN controller
san_password = (StrOpt) Password for SAN controller
xiv_chap = disabled (StrOpt) CHAP authentication mode, effective only for iscsi (disabled|enabled)
xiv_ds8k_connection_type = iscsi (StrOpt) Connection type to the IBM Storage Array
xiv_ds8k_proxy = xiv_ds8k_openstack.nova_proxy.XIVDS8KNovaProxy (StrOpt) Proxy driver that connects to the IBM Storage Array
For full documentation refer to IBM's online documentation available at http://pic.dhe.ibm.com/infocenter/strhosts/ic/topic/com.ibm.help.strghosts.doc/nova-homepage.html.

2.1.12. LVM

The default volume back-end uses local volumes managed by LVM.
This driver supports different transport protocols to attach volumes, currently iSCSI and iSER.
Note
The Block Storage iSCSI LVM driver has significant performance issues. In production environments with high I/O activity, there are many potential issues that could affect performance or data integrity.
Red Hat strongly recommends using a certified Block Storage plug-in provider for storage in a production environment. The software iSCSI LVM driver should be used only for single-node evaluations and proof-of-concept environments, and is supported only in that capacity.
Set the following in your cinder.conf configuration file, and use the following options to configure for iSCSI transport:
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    iscsi_protocol = iscsi
Use the following options to configure for the iSER transport:
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    iscsi_protocol = iser

Table 2.12. Description of LVM configuration options

Configuration option = Default value Description
[DEFAULT]
lvm_conf_file = /etc/cinder/lvm.conf (StrOpt) LVM conf file to use for the LVM driver in Cinder; this setting is ignored if the specified file does not exist (You can also specify 'None' to not use a conf file even if one exists).
lvm_mirrors = 0 (IntOpt) If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors + 2 PVs with available space
lvm_type = default (StrOpt) Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to thin if thin is supported.
volume_group = cinder-volumes (StrOpt) Name for the VG that will contain exported volumes
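Before starting the service with this driver, the volume group named by the volume_group option must exist. A minimal sketch of creating it, assuming a spare block device /dev/sdb (an illustrative device name):
# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb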

2.1.13. NetApp unified driver

The NetApp unified driver is a block storage driver that supports multiple storage families and protocols. A storage family corresponds to storage systems built on different NetApp technologies such as clustered Data ONTAP, Data ONTAP operating in 7-Mode, and E-Series. The storage protocol refers to the protocol used to initiate data storage and access operations on those storage systems, such as iSCSI and NFS. The NetApp unified driver can be configured to provision and manage OpenStack volumes on a given storage family using a specified storage protocol. The OpenStack volumes can then be used for accessing and storing data using the storage protocol on the storage family system. The NetApp unified driver is an extensible interface that can support new storage families and protocols.
Note
With the Juno release of OpenStack, OpenStack Block Storage introduced the concept of "storage pools", in which a single OpenStack Block Storage back end may present one or more logical storage resource pools from which OpenStack Block Storage will select a storage location when provisioning volumes.
In releases prior to Juno, the NetApp unified driver contained some "scheduling" logic that determined which NetApp storage container (namely, a FlexVol volume for Data ONTAP, or a dynamic disk pool for E-Series) a new OpenStack Block Storage volume would be placed into.
With the introduction of pools, all scheduling logic is performed completely within the OpenStack Block Storage scheduler, as each NetApp storage container is directly exposed to the scheduler as a storage pool. Previously, the NetApp unified driver presented an aggregated view to the scheduler and made the final placement decision as to which NetApp storage container the OpenStack Block Storage volume would be provisioned into.

2.1.13.1. NetApp clustered Data ONTAP storage family

The NetApp clustered Data ONTAP storage family represents a configuration group which provides OpenStack compute instances access to clustered Data ONTAP storage systems. At present it can be configured in OpenStack Block Storage to work with iSCSI and NFS storage protocols.
2.1.13.1.1. NetApp iSCSI configuration for clustered Data ONTAP
The NetApp iSCSI configuration for clustered Data ONTAP is an interface from OpenStack to clustered Data ONTAP storage systems for provisioning and managing the SAN block storage entity; that is, a NetApp LUN which can be accessed using the iSCSI protocol.
The iSCSI configuration for clustered Data ONTAP is a direct interface from OpenStack Block Storage to the clustered Data ONTAP instance and as such does not require additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configuration options for clustered Data ONTAP family with iSCSI protocol
Configure the volume driver, storage family and storage protocol to the NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
Note
To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.

Table 2.13. Description of NetApp cDOT iSCSI driver configuration options

Configuration option = Default value Description
[DEFAULT]
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_lun_ostype = None (StrOpt) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created.
netapp_lun_space_reservation = enabled (StrOpt) This option determines if storage space is reserved for LUN allocation. If enabled, LUNs are thick provisioned. If space reservation is disabled, storage space is allocated on demand.
netapp_partner_backend_name = None (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_size_multiplier = 1.2 (FloatOpt) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (StrOpt) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_vserver = None (StrOpt) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur.
Note
If you specify an account in the netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the OpenStack Block Storage logs.
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.
2.1.13.1.2. NetApp NFS configuration for clustered Data ONTAP
The NetApp NFS configuration for clustered Data ONTAP is an interface from OpenStack to a clustered Data ONTAP system for provisioning and managing OpenStack volumes on NFS exports provided by the clustered Data ONTAP system that are accessed using the NFS protocol.
The NFS configuration for clustered Data ONTAP is a direct interface from OpenStack Block Storage to the clustered Data ONTAP instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configuration options for the clustered Data ONTAP family with NFS protocol
Configure the volume driver, storage family, and storage protocol to NetApp unified driver, clustered Data ONTAP, and NFS respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares

Table 2.14. Description of NetApp cDOT NFS driver configuration options

Configuration option = Default value Description
[DEFAULT]
expiry_thres_minutes = 720 (IntOpt) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share.
netapp_copyoffload_tool_path = None (StrOpt) This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file.
netapp_host_type = None (StrOpt) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts.
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_lun_ostype = None (StrOpt) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created.
netapp_partner_backend_name = None (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (StrOpt) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_vserver = None (StrOpt) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur.
thres_avl_size_perc_start = 20 (IntOpt) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned.
thres_avl_size_perc_stop = 60 (IntOpt) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option.
Note
Additional NetApp NFS configuration options are shared with the generic NFS driver. These options can be found here: Table 2.19, “Description of NFS storage configuration options”.
Note
If you specify an account in the netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the OpenStack Block Storage logs.
NetApp NFS Copy Offload client
A feature was added in the Icehouse release of the NetApp unified driver that enables Image Service images to be efficiently copied to a destination Block Storage volume. When the Block Storage and Image Service are configured to use the NetApp NFS Copy Offload client, a controller-side copy will be attempted before reverting to downloading the image from the Image Service. This improves image provisioning times while reducing the consumption of bandwidth and CPU cycles on the host(s) running the Image and Block Storage services. This is due to the copy operation being performed completely within the storage cluster.
The NetApp NFS Copy Offload client can be used in either of the following scenarios:
  • The Image Service is configured to store images in an NFS share that is exported from a NetApp FlexVol volume and the destination for the new Block Storage volume will be on an NFS share exported from a different FlexVol volume than the one used by the Image Service. Both FlexVols must be located within the same cluster.
  • The source image from the Image Service has already been cached in an NFS image cache within a Block Storage backend. The cached image resides on a different FlexVol volume than the destination for the new Block Storage volume. Both FlexVols must be located within the same cluster.
To use this feature, you must configure the Image Service, as follows:
  • Set the default_store configuration option to file.
  • Set the filesystem_store_datadir configuration option to the path to the Image Service NFS export.
  • Set the show_image_direct_url configuration option to True.
  • Set the show_multiple_locations configuration option to True.
    Important
    If configured without the proper policy settings, a non-admin user of the Image Service can replace active image data (that is, switch out a current image without other users knowing). See the OSSN announcement (recommended actions) for configuration information: https://wiki.openstack.org/wiki/OSSN/OSSN-0065
  • Set the filesystem_store_metadata_file configuration option to a metadata file. The metadata file should contain a JSON object that contains the correct information about the NFS export used by the Image Service, similar to:
    {
        "share_location": "nfs://192.168.0.1/myGlanceExport",
        "mount_point": "/var/lib/glance/images",
        "type": "nfs"
    }
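Taken together, the Image Service settings above might look like the following glance-api.conf sketch. The paths are illustrative, and depending on your Image Service release the file-store options may live in the [glance_store] section rather than [DEFAULT]:
[DEFAULT]
show_image_direct_url = True
show_multiple_locations = True

[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images
filesystem_store_metadata_file = /etc/glance/filesystem_store_metadata.json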
To use this feature, you must configure the Block Storage service, as follows:
  • Set the netapp_copyoffload_tool_path configuration option to the path to the NetApp Copy Offload binary.
  • Set the glance_api_version configuration option to 2.
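In cinder.conf, these two settings might look like the following sketch; the binary path is an illustrative assumption and depends on where you installed the copy offload tool:
netapp_copyoffload_tool_path = /usr/local/bin/na_copyoffload_64
glance_api_version = 2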
Important
This feature requires that:
  • The storage system must have Data ONTAP v8.2 or greater installed.
  • The vStorage feature must be enabled on each storage virtual machine (SVM, also known as a Vserver) that is permitted to interact with the copy offload client.
  • To configure the copy offload workflow, enable NFS v4.0 or greater and export it from the SVM.
Tip
To download the NetApp copy offload binary to be utilized in conjunction with the netapp_copyoffload_tool_path configuration option, visit the Utility Toolchest page at the NetApp Support portal (login is required).
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.
2.1.13.1.3. NetApp-supported extra specs for clustered Data ONTAP
Extra specs enable vendors to specify extra filter criteria that the Block Storage scheduler uses when it determines which volume node should fulfill a volume provisioning request. When you use the NetApp unified driver with a clustered Data ONTAP storage system, you can leverage extra specs with OpenStack Block Storage volume types to ensure that OpenStack Block Storage volumes are created on storage back ends that have certain properties configured, such as QoS, mirroring, or compression.
Extra specs are associated with OpenStack Block Storage volume types, so that when users request volumes of a particular volume type, the volumes are created on storage back ends that meet the list of requirements, such as available space or the specified extra specs. You can use the specs in the following table when you define OpenStack Block Storage volume types by using the cinder type-key command.

Table 2.15. Description of extra specs options for NetApp Unified Driver with Clustered Data ONTAP

Extra spec Type Description
netapp_raid_type String Limit the candidate volume list based on one of the following raid types: raid4, raid_dp.
netapp_disk_type String Limit the candidate volume list based on one of the following disk types: ATA, BSAS, EATA, FCAL, FSAS, LUN, MSATA, SAS, SATA, SCSI, XATA, XSAS, or SSD.
netapp:qos_policy_group[a] String Specify the name of a QoS policy group, which defines measurable Service Level Objectives, that should be applied to the OpenStack Block Storage volume at the time of volume creation. Ensure that the QoS policy group object within Data ONTAP is defined before an OpenStack Block Storage volume is created, and that the QoS policy group is not associated with the destination FlexVol volume.
netapp_mirrored Boolean Limit the candidate volume list to only the ones that are mirrored on the storage controller.
netapp_unmirrored[b] Boolean Limit the candidate volume list to only the ones that are not mirrored on the storage controller.
netapp_dedup Boolean Limit the candidate volume list to only the ones that have deduplication enabled on the storage controller.
netapp_nodedup[b] Boolean Limit the candidate volume list to only the ones that have deduplication disabled on the storage controller.
netapp_compression Boolean Limit the candidate volume list to only the ones that have compression enabled on the storage controller.
netapp_nocompression[b] Boolean Limit the candidate volume list to only the ones that have compression disabled on the storage controller.
netapp_thin_provisioned Boolean Limit the candidate volume list to only the ones that support thin provisioning on the storage controller.
netapp_thick_provisioned[b] Boolean Limit the candidate volume list to only the ones that support thick provisioning on the storage controller.
[a] Note that this extra spec has a colon (:) in its name because it is used by the driver to assign the QoS policy group to the OpenStack Block Storage volume after it has been provisioned.
[b] In the Juno release, these negative-assertion extra specs are formally deprecated by the NetApp unified driver. Instead of using the deprecated negative-assertion extra specs (for example, netapp_unmirrored) with a value of true, use the corresponding positive-assertion extra spec (for example, netapp_mirrored) with a value of false.
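For example, a volume type that requests back ends with mirroring and compression enabled might be defined as follows. A sketch only; the type name is illustrative and the extra spec values should be adjusted to your requirements:
$ cinder type-create netapp-gold
$ cinder type-key netapp-gold set netapp_mirrored="true" netapp_compression="true"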

2.1.13.2. NetApp Data ONTAP operating in 7-Mode storage family

The NetApp Data ONTAP operating in 7-Mode storage family represents a configuration group which provides OpenStack compute instances access to 7-Mode storage systems. At present it can be configured in OpenStack Block Storage to work with iSCSI and NFS storage protocols.
2.1.13.2.1. NetApp iSCSI configuration for Data ONTAP operating in 7-Mode
The NetApp iSCSI configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to Data ONTAP operating in 7-Mode storage systems for provisioning and managing the SAN block storage entity; that is, a LUN which can be accessed using the iSCSI protocol.
The iSCSI configuration for Data ONTAP operating in 7-Mode is a direct interface from OpenStack to a Data ONTAP operating in 7-Mode storage system, and it does not require additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.
Configuration options for the Data ONTAP operating in 7-Mode storage family with iSCSI protocol
Configure the volume driver, storage family and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
Note
To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.

Table 2.16. Description of NetApp 7-Mode iSCSI driver configuration options

Configuration option = Default value Description
[DEFAULT]
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_size_multiplier = 1.2 (FloatOpt) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (StrOpt) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_vfiler = None (StrOpt) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system.
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.
2.1.13.2.2. NetApp NFS configuration for Data ONTAP operating in 7-Mode
The NetApp NFS configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to a Data ONTAP operating in 7-Mode storage system for provisioning and managing OpenStack volumes on NFS exports provided by that storage system, which can then be accessed using the NFS protocol.
The NFS configuration for Data ONTAP operating in 7-Mode is a direct interface from OpenStack Block Storage to the Data ONTAP operating in 7-Mode instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.
Configuration options for the Data ONTAP operating in 7-Mode family with NFS protocol
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and NFS, respectively, by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = nfs
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares

Table 2.17. Description of NetApp 7-Mode NFS driver configuration options

Configuration option = Default value Description
[DEFAULT]
expiry_thres_minutes = 720 (IntOpt) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share.
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (StrOpt) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_vfiler = None (StrOpt) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system.
thres_avl_size_perc_start = 20 (IntOpt) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned.
thres_avl_size_perc_stop = 60 (IntOpt) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option.
Note
Additional NetApp NFS configuration options are shared with the generic NFS driver. For a description of these, see Table 2.19, “Description of NFS storage configuration options”.
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

2.1.13.3. NetApp E-Series storage family

The NetApp E-Series storage family represents a configuration group which provides OpenStack compute instances access to E-Series storage systems. At present it can be configured in OpenStack Block Storage to work with the iSCSI storage protocol.
2.1.13.3.1. NetApp iSCSI configuration for E-Series
The NetApp iSCSI configuration for E-Series is an interface from OpenStack to E-Series storage systems for provisioning and managing the SAN block storage entity; that is, a NetApp LUN which can be accessed using the iSCSI protocol.
The iSCSI configuration for E-Series is an interface from OpenStack Block Storage to the E-Series proxy instance and as such requires the deployment of the proxy instance in order to achieve the desired functionality. The driver uses REST APIs to interact with the E-Series proxy instance, which in turn interacts directly with the E-Series controllers.
The use of multipath and DM-MP is required when using the OpenStack Block Storage driver for E-Series. For OpenStack Block Storage and OpenStack Compute to take advantage of multiple paths, the following configuration options must be correctly configured (a minimal sketch of both settings follows this list):
  • The use_multipath_for_image_xfer option should be set to True in the cinder.conf file within the driver-specific stanza (for example, [myDriver]).
  • The iscsi_use_multipath option should be set to True in the nova.conf file within the [libvirt] stanza.
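As a minimal sketch, assuming a hypothetical backend stanza named [myDriver] (substitute the name of your own enabled backend), the two settings look like this:
# In /etc/cinder/cinder.conf, within the driver-specific stanza:
[myDriver]
use_multipath_for_image_xfer = True
# In /etc/nova/nova.conf:
[libvirt]
iscsi_use_multipath = True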
Configuration options for E-Series storage family with iSCSI protocol
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, E-Series, and iSCSI, respectively, by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = eseries
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
netapp_controller_ips = 1.2.3.4,5.6.7.8
netapp_sa_password = arrayPassword
netapp_storage_pools = pool1,pool2
use_multipath_for_image_xfer = True
Note
To use the E-Series driver, you must override the default value of netapp_storage_family with eseries.
Note
To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.

Table 2.18. Description of NetApp E-Series driver configuration options

Configuration option = Default value Description
[DEFAULT]
netapp_controller_ips = None (StrOpt) This option is only utilized when the storage family is configured to eseries. This option is used to restrict provisioning to the specified controllers. Specify the value of this option to be a comma separated list of controller hostnames or IP addresses to be used for provisioning.
netapp_enable_multiattach = False (BoolOpt) This option specifies whether the driver should allow operations that require multiple attachments to a volume. An example would be live migration of servers that have volumes attached. When enabled, this backend is limited to 256 total volumes in order to guarantee volumes can be accessed by more than one host.
netapp_host_type = None (StrOpt) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts.
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_sa_password = None (StrOpt) Password for the NetApp E-Series storage array.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_webservice_path = /devmgr/v2 (StrOpt) This option is used to specify the path to the E-Series proxy application on a proxy server. The value is combined with the value of the netapp_transport_type, netapp_server_hostname, and netapp_server_port options to create the URL used by the driver to connect to the proxy application.
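As an illustration of how these options combine (the values below are placeholders), setting netapp_transport_type = http, netapp_server_hostname = myhostname, and netapp_server_port = 8080 together with the default netapp_webservice_path would cause the driver to connect to the proxy application at:
http://myhostname:8080/devmgr/v2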
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

2.1.13.4. Upgrading prior NetApp drivers to the NetApp unified driver

NetApp introduced a new unified block storage driver in Havana for configuring different storage families and storage protocols. This requires defining an upgrade path for the NetApp drivers that existed in releases prior to Havana. This section covers the upgrade configuration for NetApp drivers to the new unified configuration and lists the deprecated NetApp drivers.
2.1.13.4.1. Upgraded NetApp drivers
This section describes how to update OpenStack Block Storage configuration from a pre-Havana release to the unified driver format.
Driver upgrade configuration
  1. NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier).
    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver
    NetApp unified driver configuration.
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = iscsi
  2. NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or earlier).
    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
    NetApp unified driver configuration.
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = nfs
  3. NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier).
    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
    NetApp unified driver configuration.
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_7mode
    netapp_storage_protocol = iscsi
  4. NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier).
    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
    NetApp unified driver configuration.
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_7mode
    netapp_storage_protocol = nfs
2.1.13.4.2. Deprecated NetApp drivers
This section lists the NetApp drivers in earlier releases that are deprecated in Havana.
  1. NetApp iSCSI driver for clustered Data ONTAP.
    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppCmodeISCSIDriver
  2. NetApp NFS driver for clustered Data ONTAP.
    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver
  3. NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage controller.
    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver
  4. NetApp NFS driver for Data ONTAP operating in 7-Mode storage controller.
    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
Note
For support information on deprecated NetApp drivers in the Havana release, visit the NetApp OpenStack Deployment and Operations Guide.

2.1.14. NFS driver

The Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984. An NFS server exports one or more of its file systems, known as shares. An NFS client can mount these exported shares on its own file system. You can perform file actions on this mounted remote file system as if the file system were local.

2.1.14.1. How the NFS driver works

The NFS driver, and other drivers based on it, work quite differently from a traditional block storage driver.
The NFS driver does not actually allow an instance to access a storage device at the block level. Instead, files are created on an NFS share and mapped to instances, which emulates a block device. This is similar to the way QEMU stores instance images in the /var/lib/nova/instances directory.

2.1.14.2. Enable the NFS driver and related options

To use Cinder with the NFS driver, first set the volume_driver in cinder.conf:
volume_driver=cinder.volume.drivers.nfs.NfsDriver
The following table contains the options supported by the NFS driver.

Table 2.19. Description of NFS storage configuration options

Configuration option = Default value Description
[DEFAULT]
nfs_mount_attempts = 3 (IntOpt) The number of attempts to mount nfs shares before raising an error. At least one attempt will be made to mount an nfs share, regardless of the value specified.
nfs_mount_options = None (StrOpt) Mount options passed to the nfs client. See the nfs man page for details.
nfs_mount_point_base = $state_path/mnt (StrOpt) Base dir containing mount points for nfs shares.
nfs_oversub_ratio = 1.0 (FloatOpt) This will compare the allocated to available space on the volume destination. If the ratio exceeds this number, the destination will no longer be valid. Note that this option is deprecated in favor of "max_oversubscription_ratio" and will be removed in the Mitaka release.
nfs_shares_config = /etc/cinder/nfs_shares (StrOpt) File with the list of available nfs shares
nfs_sparsed_volumes = True (BoolOpt) Create volumes as sparse files, which take no space. If set to False, a volume is created as a regular file; in that case, volume creation takes significantly longer.
nfs_used_ratio = 0.95 (FloatOpt) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination. Note that this option is deprecated in favor of "reserved_percentage" and will be removed in the Mitaka release.
Note
As of the Icehouse release, the NFS driver (and other drivers based off it) will attempt to mount shares using version 4.1 of the NFS protocol (including pNFS). If the mount attempt is unsuccessful due to a lack of client or server support, a subsequent mount attempt that requests the default behavior of the mount.nfs command will be performed. On most distributions, the default behavior is to attempt mounting first with NFS v4.0, then silently fall back to NFS v3.0 if necessary. If the nfs_mount_options configuration option contains a request for a specific version of NFS to be used, or if specific options are specified in the shares configuration file specified by the nfs_shares_config configuration option, the mount will be attempted as requested with no subsequent attempts.
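For example, to skip the negotiation described above and always request NFS v3 (a minimal sketch; the options accepted depend on your NFS client), set the following in the cinder.conf file:
nfs_mount_options = vers=3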

2.1.14.3. How to use the NFS driver

  1. Access to one or more NFS servers. Creating an NFS server is outside the scope of this document. This example assumes access to the following NFS servers and mount points:
    • 192.168.1.200:/storage
    • 192.168.1.201:/storage
    • 192.168.1.202:/storage
    This example demonstrates the use of this driver with multiple NFS servers. Multiple servers are not required; one is usually enough.
  2. Add your list of NFS servers to the file you specified with the nfs_shares_config option. For example, if the value of this option was set to /etc/cinder/shares.txt, then:
    # cat /etc/cinder/shares.txt
    192.168.1.200:/storage
    192.168.1.201:/storage
    192.168.1.202:/storage
    Comments are allowed in this file. They begin with a #.
  3. Configure the nfs_mount_point_base option. This is a directory where cinder-volume mounts all NFS shares stored in shares.txt. For this example, /var/lib/cinder/nfs is used. You can, of course, use the default value of $state_path/mnt.
  4. Start the cinder-volume service. /var/lib/cinder/nfs should now contain a directory for each NFS share specified in shares.txt. The name of each directory is a hashed name:
    # ls /var/lib/cinder/nfs/
    ... 46c5db75dc3a3a50a10bfd1a456a9f3f ...
  5. You can now create volumes as you normally would:
    $ nova volume-create --display-name myvol 5
    # ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f
    volume-a8862558-e6d6-4648-b5df-bb84f31c8935
    This volume can also be attached and deleted just like other volumes; a usage sketch follows this procedure. However, snapshotting is not supported.
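As a usage sketch with placeholder identifiers (the instance name and volume ID below are illustrative), the volume can be attached to an instance and later deleted with the usual commands:
$ nova volume-attach myinstance a8862558-e6d6-4648-b5df-bb84f31c8935 /dev/vdb
$ nova volume-delete a8862558-e6d6-4648-b5df-bb84f31c8935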

NFS driver notes

  • cinder-volume manages the mounting of the NFS shares as well as volume creation on the shares. Keep this in mind when planning your OpenStack architecture. If you have one master NFS server, it might make sense to only have one cinder-volume service to handle all requests to that NFS server. However, if that single server is unable to handle all requests, more than one cinder-volume service is needed as well as potentially more than one NFS server.
  • Because data is stored in a file and not actually on a block storage device, you might not see the same IO performance as you would with a traditional block storage driver. Test accordingly.
  • Despite possible IO performance loss, having volume data stored in a file might be beneficial. For example, backing up volumes can be as easy as copying the volume files.
    Note
    Regular IO flushing and syncing still apply.

2.1.15. SolidFire

The SolidFire Cluster is a high-performance, all-SSD iSCSI storage device that provides massive scale-out capability and extreme fault tolerance. A key feature of the SolidFire cluster is the ability to set and modify, during operation, specific QoS levels on a volume-by-volume basis. The SolidFire cluster offers this along with de-duplication, compression, and an architecture that takes full advantage of SSDs.
To configure the use of a SolidFire cluster with Block Storage, modify your cinder.conf file as follows:
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 172.17.1.182         # the address of your MVIP
san_login = sfadmin           # your cluster admin login
san_password = sfpassword     # your cluster admin password
sf_account_prefix = ''        # prefix for tenant account creation on solidfire cluster
Warning
Older versions of the SolidFire driver (prior to Icehouse) created a unique account prefixed with $cinder-volume-service-hostname-$tenant-id on the SolidFire cluster for each tenant. Unfortunately, this account formation resulted in issues for High Availability (HA) installations and installations where the cinder-volume service can move to a new node. The current default implementation does not experience this issue as no prefix is used. For installations created on a prior release, the OLD default behavior can be configured by using the keyword "hostname" in sf_account_prefix.
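For example, an installation created on a pre-Icehouse release that must keep its existing per-host accounts can restore the old behavior with:
sf_account_prefix = hostname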

Table 2.20. Description of SolidFire driver configuration options

Configuration option = Default value Description
[DEFAULT]
sf_account_prefix = None (StrOpt) Create SolidFire accounts with this prefix. Any string can be used here, but the string "hostname" is special and will create a prefix using the cinder node hostname (previous default behavior). The default is NO prefix.
sf_allow_template_caching = True (BoolOpt) Create an internal cache of copies of images when a bootable volume is created, to eliminate fetches from glance and qemu conversion on subsequent calls.
sf_allow_tenant_qos = False (BoolOpt) Allow tenants to specify QOS on create
sf_api_port = 443 (IntOpt) SolidFire API port. Useful if the device api is behind a proxy on a different port.
sf_emulate_512 = True (BoolOpt) Set 512 byte emulation on volume creation.
sf_enable_volume_mapping = True (BoolOpt) Create an internal mapping of volume IDs and accounts. This optimizes lookups and performance at the expense of memory; very large deployments may want to consider setting this to False.
sf_svip = None (StrOpt) Overrides the default cluster SVIP with the one specified. This is required for deployments that have implemented the use of VLANs for iSCSI networks in their cloud.
sf_template_account_name = openstack-vtemplate (StrOpt) Account name on the SolidFire Cluster to use as owner of template/cache volumes (created if does not exist).

2.1.16. Tintri

Tintri VMstore is a smart storage system that sees, learns, and adapts for cloud and virtualization. The Tintri Cinder driver interacts with a configured VMstore running Tintri OS 4.0 or later. It supports various operations using Tintri REST APIs and the NFS protocol.
To configure the use of a Tintri VMstore with Block Storage, perform the following actions:
  1. Edit the etc/cinder/cinder.conf file and set the cinder.volume.drivers.tintri options:
    volume_driver=cinder.volume.drivers.tintri.TintriDriver
    # Mount options passed to the nfs client. See the nfs man
    # page for details. (string value)
    nfs_mount_options=vers=3,lookupcache=pos
    
    #
    # Options defined in cinder.volume.drivers.tintri
    #
    
    # The hostname (or IP address) for the storage system (string
    # value)
    tintri_server_hostname={Tintri VMstore Management IP}
    
    # User name for the storage system (string value)
    tintri_server_username={username}
    
    # Password for the storage system (string value)
    tintri_server_password={password}
    
    # API version for the storage system (string value)
    #tintri_api_version=v310
    
    # Following options needed for NFS configuration
    # File with the list of available nfs shares (string value)
    #nfs_shares_config=/etc/cinder/nfs_shares
  2. Edit the etc/nova/nova.conf file, and set the nfs_mount_options:
    nfs_mount_options=vers=3
  3. Edit the /etc/cinder/nfs_shares file, and add the Tintri VMstore mount points associated with the configured VMstore management IP in the cinder.conf file:
    {vmstore_data_ip}:/tintri/{submount1}
    {vmstore_data_ip}:/tintri/{submount2}

Table 2.21. Description of Tintri volume driver configuration options

Configuration option = Default value Description
[DEFAULT]
tintri_api_version = v310 (StrOpt) API version for the storage system
tintri_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system
tintri_server_password = None (StrOpt) Password for the storage system
tintri_server_username = None (StrOpt) User name for the storage system

2.2. Backup drivers

This section describes how to configure the cinder-backup service and its drivers.
To set a backup driver, use the backup_driver flag. By default there is no backup driver enabled.

2.2.1. Ceph backup driver

The Ceph backup driver backs up volumes of any type to a Ceph back-end store. The driver can also detect whether the volume to be backed up is a Ceph RBD volume, and if so, it tries to perform incremental and differential backups.
For source Ceph RBD volumes, you can perform backups within the same Ceph pool (not recommended). You can also perform backups between different Ceph pools and between different Ceph clusters.
At the time of writing, differential backup support in Ceph/librbd was quite new. This driver attempts a differential backup in the first instance. If the differential backup fails, the driver falls back to full backup/copy.
If incremental backups are used, multiple backups of the same volume are stored as snapshots so that minimal space is consumed in the backup store. It takes far less time to restore a volume than to take a full copy.
Note
Block Storage enables you to:
  • Restore to a new volume, which is the default and recommended action.
  • Restore to the original volume from which the backup was taken. The restore action takes a full copy because this is the safest action. (Both restore modes are sketched after this note.)
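With the cinder CLI, the two restore modes look like this (a hedged sketch; <backup-id> and <original-volume-id> are placeholders). To restore to a new volume, which is the default:
$ cinder backup-restore <backup-id>
To restore over the original volume (performed as a full copy):
$ cinder backup-restore --volume-id <original-volume-id> <backup-id>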
To enable the Ceph backup driver, include the following option in the cinder.conf file:
backup_driver = cinder.backup.drivers.ceph
The following configuration options are available for the Ceph backup driver.

Table 2.22. Description of Ceph backup driver configuration options

Configuration option = Default value Description
[DEFAULT]
backup_ceph_chunk_size = 134217728 (IntOpt) The chunk size, in bytes, that a backup is broken into before transfer to the Ceph object store.
backup_ceph_conf = /etc/ceph/ceph.conf (StrOpt) Ceph configuration file to use.
backup_ceph_pool = backups (StrOpt) The Ceph pool where volume backups are stored.
backup_ceph_stripe_count = 0 (IntOpt) RBD stripe count to use when creating a backup image.
backup_ceph_stripe_unit = 0 (IntOpt) RBD stripe unit to use when creating a backup image.
backup_ceph_user = cinder (StrOpt) The Ceph user to connect with. Default here is to use the same user as for Cinder volumes. If not using cephx this should be set to None.
restore_discard_excess_bytes = True (BoolOpt) If True, always discard excess bytes when restoring volumes i.e. pad with zeroes.
This example shows the default options for the Ceph backup driver.
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0

2.2.2. IBM Tivoli Storage Manager backup driver

The IBM Tivoli Storage Manager (TSM) backup driver enables performing volume backups to a TSM server.
The TSM client should be installed and configured on the machine running the cinder-backup service. See the IBM Tivoli Storage Manager Backup-Archive Client Installation and User's Guide for details on installing the TSM client.
To enable the IBM TSM backup driver, include the following option in cinder.conf:
backup_driver = cinder.backup.drivers.tsm
The following configuration options are available for the TSM backup driver.

Table 2.23. Description of IBM Tivoli Storage Manager backup driver configuration options

Configuration option = Default value Description
[DEFAULT]
backup_tsm_compression = True (BoolOpt) Enable or Disable compression for backups
backup_tsm_password = password (StrOpt) TSM password for the running username
backup_tsm_volume_prefix = backup (StrOpt) Volume prefix for the backup id when backing up to TSM
This example shows the default options for the TSM backup driver.
backup_tsm_volume_prefix = backup
backup_tsm_password = password
backup_tsm_compression = True

2.2.3. Swift backup driver

The backup driver for the swift back end performs a volume backup to an object storage system.
To enable the swift backup driver, include the following option in the cinder.conf file:
backup_driver = cinder.backup.drivers.swift
The following configuration options are available for the Swift back-end backup driver.

Table 2.24. Description of Swift backup driver configuration options

Configuration option = Default value Description
[DEFAULT]
backup_swift_auth = per_user (StrOpt) Swift authentication mechanism
backup_swift_auth_version = 1 (StrOpt) Swift authentication version. Specify "1" for auth 1.0, or "2" for auth 2.0
backup_swift_block_size = 32768 (IntOpt) The size in bytes that changes are tracked for incremental backups. backup_swift_object_size has to be multiple of backup_swift_block_size.
backup_swift_ca_cert_file = None (StrOpt) Location of the CA certificate file to use for swift client requests.
backup_swift_container = volumebackups (StrOpt) The default Swift container to use
backup_swift_enable_progress_timer = True (BoolOpt) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the Swift backend storage. The default value is True to enable the timer.
backup_swift_key = None (StrOpt) Swift key for authentication
backup_swift_object_size = 52428800 (IntOpt) The size in bytes of Swift backup objects
backup_swift_retry_attempts = 3 (IntOpt) The number of retries to make for Swift operations
backup_swift_retry_backoff = 2 (IntOpt) The backoff time in seconds between Swift retries
backup_swift_tenant = None (StrOpt) Swift tenant/account name. Required when connecting to an auth 2.0 system
backup_swift_url = None (StrOpt) The URL of the Swift endpoint
backup_swift_user = None (StrOpt) Swift user name
swift_catalog_info = object-store:swift:publicURL (StrOpt) Info to match when looking for swift in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if backup_swift_url is unset
To enable the swift backup driver for the 1.0 or 2.0 authentication version, specify 1 or 2, respectively. For example:
backup_swift_auth_version = 2
In addition, the 2.0 authentication system requires the backup_swift_tenant setting:
backup_swift_tenant = <None>
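A minimal sketch of a complete 2.0 (Keystone) authentication configuration, with placeholder credential values:
backup_swift_auth_version = 2
backup_swift_tenant = services
backup_swift_user = cinder
backup_swift_key = SWIFT_PASSWORD
Note that the defaults in Table 2.24 already satisfy the multiple-of constraint between the object and block sizes: 52428800 = 1600 × 32768.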
This example shows the default options for the Swift back-end backup driver.
backup_swift_url = http://localhost:8080/v1/AUTH_
backup_swift_auth = per_user
backup_swift_auth_version = 1
backup_swift_user = <None>
backup_swift_key = <None>
backup_swift_container = volumebackups
backup_swift_object_size = 52428800
backup_swift_retry_attempts = 3
backup_swift_retry_backoff = 2
backup_compression_algorithm = zlib

2.2.4. NFS backup driver

The backup driver for the NFS back end backs up volumes of any type to an NFS exported backup repository.
To enable the NFS backup driver, include the following option in the [DEFAULT] section of the cinder.conf file:
backup_driver = cinder.backup.drivers.nfs
The following configuration options are available for the NFS back-end backup driver.

Table 2.25. Description of NFS backup driver configuration options

Configuration option = Default value Description
[DEFAULT]
backup_container = None (StrOpt) Custom directory to use for backups.
backup_enable_progress_timer = True (BoolOpt) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the backend storage. The default value is True to enable the timer.
backup_file_size = 1999994880 (IntOpt) The maximum size in bytes of the files used to hold backups. If the volume being backed up exceeds this size, then it will be backed up into multiple files. backup_file_size must be a multiple of backup_sha_block_size_bytes.
backup_mount_options = None (StrOpt) Mount options passed to the NFS client. See NFS man page for details.
backup_mount_point_base = $state_path/backup_mount (StrOpt) Base dir containing mount point for NFS share.
backup_sha_block_size_bytes = 32768 (IntOpt) The size in bytes that changes are tracked for incremental backups. backup_file_size has to be multiple of backup_sha_block_size_bytes.
backup_share = None (StrOpt) NFS share in hostname:path, ipv4addr:path, or "[ipv6addr]:path" format.
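A minimal sketch of an NFS backup configuration in the [DEFAULT] section of cinder.conf, with a placeholder share:
backup_driver = cinder.backup.drivers.nfs
backup_share = backup-host:/export/cinder-backups
backup_mount_point_base = $state_path/backup_mount
The default backup_file_size already satisfies the multiple-of constraint: 1999994880 = 61035 × 32768.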

2.3. Block Storage sample configuration files

All the files in this section can be found in /etc/cinder.

2.3.1. cinder.conf

The cinder.conf file is installed in /etc/cinder by default. When you manually install the Block Storage service, the options in the cinder.conf file are set to default values.
The cinder.conf file contains most of the options to configure the Block Storage service.
[DEFAULT]

#
# Options defined in oslo.messaging
#

# ZeroMQ bind address. Should be a wildcard (*), an ethernet
# interface, or IP. The "host" option should point or resolve
# to this address. (string value)
#rpc_zmq_bind_address=*

# MatchMaker driver. (string value)
#rpc_zmq_matchmaker=local

# ZeroMQ receiver listening port. (integer value)
#rpc_zmq_port=9501

# Number of ZeroMQ contexts, defaults to 1. (integer value)
#rpc_zmq_contexts=1

# Maximum number of ingress messages to locally buffer per
# topic. Default is unlimited. (integer value)
#rpc_zmq_topic_backlog=<None>

# Directory for holding IPC sockets. (string value)
#rpc_zmq_ipc_dir=/var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP
# address. Must match "host" option, if running Nova. (string
# value)
#rpc_zmq_host=cinder

# Seconds to wait before a cast expires (TTL). Only supported
# by impl_zmq. (integer value)
#rpc_cast_timeout=30

# Heartbeat frequency. (integer value)
#matchmaker_heartbeat_freq=300

# Heartbeat time-to-live. (integer value)
#matchmaker_heartbeat_ttl=600

# Size of RPC thread pool. (integer value)
#rpc_thread_pool_size=64

# Driver or drivers to handle sending notifications. (multi
# valued)
#notification_driver=

# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
#notification_topics=notifications

# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout=60

# A URL representing the messaging driver to use and its full
# configuration. If not set, we fall back to the rpc_backend
# option and driver specific configuration. (string value)
#transport_url=<None>

# The messaging driver to use, defaults to rabbit. Other
# drivers include qpid and zmq. (string value)
#rpc_backend=rabbit

# The default exchange under which topics are scoped. May be
# overridden by an exchange name specified in the
# transport_url option. (string value)
#control_exchange=openstack


#
# Options defined in cinder.exception
#

# Make exception message format errors fatal. (boolean value)
#fatal_exception_format_errors=false


#
# Options defined in cinder.quota
#

# Number of volumes allowed per project (integer value)
#quota_volumes=10

# Number of volume snapshots allowed per project (integer
# value)
#quota_snapshots=10

# Number of consistencygroups allowed per project (integer
# value)
#quota_consistencygroups=10

# Total amount of storage, in gigabytes, allowed for volumes
# and snapshots per project (integer value)
#quota_gigabytes=1000

# Number of volume backups allowed per project (integer value)
#quota_backups=10

# Total amount of storage, in gigabytes, allowed for backups
# per project (integer value)
#quota_backup_gigabytes=1000

# Number of seconds until a reservation expires (integer
# value)
#reservation_expire=86400

# Count of reservations until usage is refreshed (integer
# value)
#until_refresh=0

# Number of seconds between subsequent usage refreshes
# (integer value)
#max_age=0

# Default driver to use for quota checks (string value)
#quota_driver=cinder.quota.DbQuotaDriver

# Enables or disables use of default quota class with default
# quota. (boolean value)
#use_default_quota_class=true


#
# Options defined in cinder.service
#

# Interval, in seconds, between nodes reporting state to
# datastore (integer value)
#report_interval=10

# Interval, in seconds, between running periodic tasks
# (integer value)
#periodic_interval=60

# Range, in seconds, to randomly delay when starting the
# periodic task scheduler to reduce stampeding. (Disable by
# setting to 0) (integer value)
#periodic_fuzzy_delay=60

# IP address on which OpenStack Volume API listens (string
# value)
#osapi_volume_listen=0.0.0.0

# Port on which OpenStack Volume API listens (integer value)
#osapi_volume_listen_port=8776

# Number of workers for OpenStack Volume API service. The
# default is equal to the number of CPUs available. (integer
# value)
#osapi_volume_workers=<None>


#
# Options defined in cinder.ssh_utils
#

# Option to enable strict host key checking.  When set to
# "True" Cinder will only connect to systems with a host key
# present in the configured "ssh_hosts_key_file".  When set to
# "False" the host key will be saved upon first connection and
# used for subsequent connections.  Default=False (boolean
# value)
#strict_ssh_host_key_policy=false

# File containing SSH host keys for the systems with which
# Cinder needs to communicate.  OPTIONAL:
# Default=$state_path/ssh_known_hosts (string value)
#ssh_hosts_key_file=$state_path/ssh_known_hosts


#
# Options defined in cinder.test
#

# File name of clean sqlite db (string value)
#sqlite_clean_db=clean.sqlite


#
# Options defined in cinder.wsgi
#

# Maximum line size of message headers to be accepted.
# max_header_line may need to be increased when using large
# tokens (typically those generated by the Keystone v3 API
# with big service catalogs). (integer value)
#max_header_line=16384

# Timeout for client connections' socket operations. If an
# incoming connection is idle for this number of seconds it
# will be closed. A value of '0' means wait forever. (integer
# value)
#client_socket_timeout=900

# If False, closes the client socket connection explicitly.
# Setting it to True maintains backward compatibility. The
# recommended setting is False. (boolean value)
#wsgi_keep_alive=true

# Sets the value of TCP_KEEPALIVE (True/False) for each server
# socket. (boolean value)
#tcp_keepalive=true

# Sets the value of TCP_KEEPIDLE in seconds for each server
# socket. Not supported on OS X. (integer value)
#tcp_keepidle=600

# Sets the value of TCP_KEEPINTVL in seconds for each server
# socket. Not supported on OS X. (integer value)
#tcp_keepalive_interval=<None>

# Sets the value of TCP_KEEPCNT for each server socket. Not
# supported on OS X. (integer value)
#tcp_keepalive_count=<None>

# CA certificate file to use to verify connecting clients
# (string value)
#ssl_ca_file=<None>

# Certificate file to use when starting the server securely
# (string value)
#ssl_cert_file=<None>

# Private key file to use when starting the server securely
# (string value)
#ssl_key_file=<None>


#
# Options defined in cinder.api.common
#

# The maximum number of items that a collection resource
# returns in a single response (integer value)
#osapi_max_limit=1000

# Base URL that will be presented to users in links to the
# OpenStack Volume API (string value)
# Deprecated group/name - [DEFAULT]/osapi_compute_link_prefix
#osapi_volume_base_URL=<None>


#
# Options defined in cinder.api.middleware.auth
#

# Treat X-Forwarded-For as the canonical remote address. Only
# enable this if you have a sanitizing proxy. (boolean value)
#use_forwarded_for=false


#
# Options defined in cinder.api.middleware.sizelimit
#

# Max size for body of a request (integer value)
#osapi_max_request_body_size=114688


#
# Options defined in cinder.api.views.versions
#

# Public url to use for versions endpoint. The default is
# None, which will use the request's host_url attribute to
# populate the URL base. If Cinder is operating behind a
# proxy, you will want to change this to represent the proxy's
# URL. (string value)
#public_endpoint=<None>


#
# Options defined in cinder.backup.chunkeddriver
#

# Compression algorithm (None to disable) (string value)
#backup_compression_algorithm=zlib


#
# Options defined in cinder.backup.driver
#

# Backup metadata version to be used when backing up volume
# metadata. If this number is bumped, make sure the service
# doing the restore supports the new version. (integer value)
#backup_metadata_version=2

# The number of chunks or objects, for which one Ceilometer
# notification will be sent (integer value)
#backup_object_number_per_notification=10

# Interval, in seconds, between two progress notifications
# reporting the backup status (integer value)
#backup_timer_interval=120


#
# Options defined in cinder.backup.drivers.ceph
#

# Ceph configuration file to use. (string value)
#backup_ceph_conf=/etc/ceph/ceph.conf

# The Ceph user to connect with. Default here is to use the
# same user as for Cinder volumes. If not using cephx this
# should be set to None. (string value)
#backup_ceph_user=cinder

# The chunk size, in bytes, that a backup is broken into
# before transfer to the Ceph object store. (integer value)
#backup_ceph_chunk_size=134217728

# The Ceph pool where volume backups are stored. (string
# value)
#backup_ceph_pool=backups

# RBD stripe unit to use when creating a backup image.
# (integer value)
#backup_ceph_stripe_unit=0

# RBD stripe count to use when creating a backup image.
# (integer value)
#backup_ceph_stripe_count=0

# If True, always discard excess bytes when restoring volumes
# i.e. pad with zeroes. (boolean value)
#restore_discard_excess_bytes=true


#
# Options defined in cinder.backup.drivers.nfs
#

# The maximum size in bytes of the files used to hold backups.
# If the volume being backed up exceeds this size, then it
# will be backed up into multiple files. (integer value)
#backup_file_size=1999994880

# The size in bytes that changes are tracked for incremental
# backups. backup_file_size has to be a multiple of
# backup_sha_block_size_bytes. (integer value)
#backup_sha_block_size_bytes=32768

# Enable or Disable the timer to send the periodic progress
# notifications to Ceilometer when backing up the volume to
# the backend storage. The default value is True to enable the
# timer. (boolean value)
#backup_enable_progress_timer=true

# Base dir containing mount point for NFS share. (string
# value)
#backup_mount_point_base=$state_path/backup_mount

# NFS share in fqdn:path, ipv4addr:path, or "[ipv6addr]:path"
# format. (string value)
#backup_share=<None>

# Mount options passed to the NFS client. See NFS man page for
# details. (string value)
#backup_mount_options=<None>

# Custom container to use for backups. (string value)
#backup_container=<None>


#
# Options defined in cinder.backup.drivers.swift
#

# The URL of the Swift endpoint (string value)
#backup_swift_url=<None>

# Info to match when looking for swift in the service catalog.
# Format is: separated values of the form:
# <service_type>:<service_name>:<endpoint_type> - Only used if
# backup_swift_url is unset (string value)
#swift_catalog_info=object-store:swift:publicURL

# Swift authentication mechanism (string value)
#backup_swift_auth=per_user

# Swift authentication version. Specify "1" for auth 1.0, or
# "2" for auth 2.0 (string value)
#backup_swift_auth_version=1

# Swift tenant/account name. Required when connecting to an
# auth 2.0 system (string value)
#backup_swift_tenant=<None>

# Swift user name (string value)
#backup_swift_user=<None>

# Swift key for authentication (string value)
#backup_swift_key=<None>

# The default Swift container to use (string value)
#backup_swift_container=volumebackups

# The size in bytes of Swift backup objects (integer value)
#backup_swift_object_size=52428800

# The size in bytes that changes are tracked for incremental
# backups. backup_swift_object_size has to be multiple of
# backup_swift_block_size. (integer value)
#backup_swift_block_size=32768

# The number of retries to make for Swift operations (integer
# value)
#backup_swift_retry_attempts=3

# The backoff time in seconds between Swift retries (integer
# value)
#backup_swift_retry_backoff=2

# Enable or Disable the timer to send the periodic progress
# notifications to Ceilometer when backing up the volume to
# the Swift backend storage. The default value is True to
# enable the timer. (boolean value)
#backup_swift_enable_progress_timer=true


#
# Options defined in cinder.backup.drivers.tsm
#

# Volume prefix for the backup id when backing up to TSM
# (string value)
#backup_tsm_volume_prefix=backup

# TSM password for the running username (string value)
#backup_tsm_password=password

# Enable or Disable compression for backups (boolean value)
#backup_tsm_compression=true


#
# Options defined in cinder.backup.manager
#

# Driver to use for backups. (string value)
# Deprecated group/name - [DEFAULT]/backup_service
#backup_driver=cinder.backup.drivers.swift


#
# Options defined in cinder.cmd.volume
#

# Backend override of host value. (string value)
# Deprecated group/name - [DEFAULT]/host
#backend_host=<None>


#
# Options defined in cinder.cmd.volume_usage_audit
#

# If this option is specified then the start time specified is
# used instead of the start time of the last completed audit
# period. (string value)
#start_time=<None>

# If this option is specified then the end time specified is
# used instead of the end time of the last completed audit
# period. (string value)
#end_time=<None>

# Send the volume and snapshot create and delete notifications
# generated in the specified period. (boolean value)
#send_actions=false


#
# Options defined in cinder.common.config
#

# File name for the paste.deploy config for cinder-api (string
# value)
#api_paste_config=api-paste.ini

# Top-level directory for maintaining cinder's state (string
# value)
# Deprecated group/name - [DEFAULT]/pybasedir
#state_path=/var/lib/cinder

# IP address of this host (string value)
#my_ip=10.0.0.1

# Default glance host name or IP (string value)
#glance_host=$my_ip

# Default glance port (integer value)
#glance_port=9292

# A list of the glance API servers available to cinder
# ([hostname|ip]:port) (list value)
#glance_api_servers=$glance_host:$glance_port

# Version of the glance API to use (integer value)
#glance_api_version=1

# Number retries when downloading an image from glance
# (integer value)
#glance_num_retries=0

# Allow to perform insecure SSL (https) requests to glance
# (boolean value)
#glance_api_insecure=false

# Enables or disables negotiation of SSL layer compression. In
# some cases disabling compression can improve data
# throughput, such as when high network bandwidth is available
# and you use compressed image formats like qcow2. (boolean
# value)
#glance_api_ssl_compression=false

# Location of ca certificates file to use for glance client
# requests. (string value)
#glance_ca_certificates_file=<None>

# http/https timeout value for glance operations. If no value
# (None) is supplied here, the glanceclient default value is
# used. (integer value)
#glance_request_timeout=<None>

# The topic that scheduler nodes listen on (string value)
#scheduler_topic=cinder-scheduler

# The topic that volume nodes listen on (string value)
#volume_topic=cinder-volume

# The topic that volume backup nodes listen on (string value)
#backup_topic=cinder-backup

# DEPRECATED: Deploy v1 of the Cinder API. (boolean value)
#enable_v1_api=true

# Deploy v2 of the Cinder API. (boolean value)
#enable_v2_api=true

# Enables or disables rate limit of the API. (boolean value)
#api_rate_limit=true

# Specify list of extensions to load when using
# osapi_volume_extension option with
# cinder.api.contrib.select_extensions (list value)
#osapi_volume_ext_list=

# osapi volume extension to load (multi valued)
#osapi_volume_extension=cinder.api.contrib.standard_extensions

# Full class name for the Manager for volume (string value)
#volume_manager=cinder.volume.manager.VolumeManager

# Full class name for the Manager for volume backup (string
# value)
#backup_manager=cinder.backup.manager.BackupManager

# Full class name for the Manager for scheduler (string value)
#scheduler_manager=cinder.scheduler.manager.SchedulerManager

# Name of this node.  This can be an opaque identifier. It is
# not necessarily a host name, FQDN, or IP address. (string
# value)
#host=cinder

# Availability zone of this node (string value)
#storage_availability_zone=nova

# Default availability zone for new volumes. If not set, the
# storage_availability_zone option value is used as the
# default for new volumes. (string value)
#default_availability_zone=<None>

# Default volume type to use (string value)
#default_volume_type=<None>

# Time period for which to generate volume usages. The options
# are hour, day, month, or year. (string value)
#volume_usage_audit_period=month

# Path to the rootwrap configuration file to use for running
# commands as root (string value)
#rootwrap_config=/etc/cinder/rootwrap.conf

# Enable monkey patching (boolean value)
#monkey_patch=false

# List of modules/decorators to monkey patch (list value)
#monkey_patch_modules=

# Maximum time since last check-in for a service to be
# considered up (integer value)
#service_down_time=60

# The full class name of the volume API class to use (string
# value)
#volume_api_class=cinder.volume.api.API

# The full class name of the volume backup API class (string
# value)
#backup_api_class=cinder.backup.api.API

# The strategy to use for auth. Supports noauth, keystone, and
# deprecated. (string value)
#auth_strategy=noauth

# A list of backend names to use. These backend names should
# be backed by a unique [CONFIG] group with its options (list
# value)
#enabled_backends=<None>

# Whether snapshots count against gigabyte quota (boolean
# value)
#no_snapshot_gb_quota=false

# The full class name of the volume transfer API class (string
# value)
#transfer_api_class=cinder.transfer.api.API

# The full class name of the volume replication API class
# (string value)
#replication_api_class=cinder.replication.api.API

# The full class name of the consistencygroup API class
# (string value)
#consistencygroup_api_class=cinder.consistencygroup.api.API

# OpenStack privileged account username. Used for requests to
# other services (such as Nova) that require an account with
# special rights. (string value)
#os_privileged_user_name=<None>

# Password associated with the OpenStack privileged account.
# (string value)
#os_privileged_user_password=<None>

# Tenant name associated with the OpenStack privileged
# account. (string value)
#os_privileged_user_tenant=<None>


#
# Options defined in cinder.compute
#

# The full class name of the compute API class to use (string
# value)
#compute_api_class=cinder.compute.nova.API


#
# Options defined in cinder.compute.nova
#

# Match this value when searching for nova in the service
# catalog. Format is: separated values of the form:
# <service_type>:<service_name>:<endpoint_type> (string value)
#nova_catalog_info=compute:Compute Service:publicURL

# Same as nova_catalog_info, but for admin endpoint. (string
# value)
#nova_catalog_admin_info=compute:Compute Service:adminURL

# Override service catalog lookup with template for nova
# endpoint e.g. http://localhost:8774/v2/%(project_id)s
# (string value)
#nova_endpoint_template=<None>

# Same as nova_endpoint_template, but for admin endpoint.
# (string value)
#nova_endpoint_admin_template=<None>

# Region name of this node (string value)
#os_region_name=<None>

# Location of ca certificates file to use for nova client
# requests. (string value)
#nova_ca_certificates_file=<None>

# Allow to perform insecure SSL requests to nova (boolean
# value)
#nova_api_insecure=false


#
# Options defined in cinder.db.api
#

# Services to be added to the available pool on create
# (boolean value)
#enable_new_services=true

# Template string to be used to generate volume names (string
# value)
#volume_name_template=volume-%s

# Template string to be used to generate snapshot names
# (string value)
#snapshot_name_template=snapshot-%s

# Template string to be used to generate backup names (string
# value)
#backup_name_template=backup-%s


#
# Options defined in cinder.db.base
#

# Driver to use for database access (string value)
#db_driver=cinder.db


#
# Options defined in cinder.image.glance
#

# Default core properties of image (list value)
#glance_core_properties=checksum,container_format,disk_format,image_name,image_id,min_disk,min_ram,name,size

# A list of url schemes that can be downloaded directly via
# the direct_url.  Currently supported schemes: [file]. (list
# value)
#allowed_direct_url_schemes=


#
# Options defined in cinder.image.image_utils
#

# Directory used for temporary storage during image conversion
# (string value)
#image_conversion_dir=$state_path/conversion


#
# Options defined in cinder.openstack.common.eventlet_backdoor
#

# Enable eventlet backdoor.  Acceptable values are 0, <port>,
# and <start>:<end>, where 0 results in listening on a random
# tcp port number; <port> results in listening on the
# specified port number (and not enabling backdoor if that
# port is in use); and <start>:<end> results in listening on
# the smallest unused port number within the specified range
# of port numbers.  The chosen port is displayed in the
# service's log file. (string value)
#backdoor_port=<None>


#
# Options defined in cinder.openstack.common.periodic_task
#

# Some periodic tasks can be run in a separate process. Should
# we run them here? (boolean value)
#run_external_periodic_tasks=true


#
# Options defined in cinder.openstack.common.policy
#

# The JSON file that defines policies. (string value)
#policy_file=policy.json

# Default rule. Enforced when a requested rule is not found.
# (string value)
#policy_default_rule=default

# Directories where policy configuration files are stored.
# They can be relative to any directory in the search path
# defined by the config_dir option, or absolute paths. The
# file defined by policy_file must exist for these directories
# to be searched.  Missing or empty directories are ignored.
# (multi valued)
#policy_dirs=policy.d


#
# Options defined in cinder.openstack.common.versionutils
#

# Enables or disables fatal status of deprecations. (boolean
# value)
#fatal_deprecations=false


#
# Options defined in cinder.scheduler.driver
#

# The scheduler host manager class to use (string value)
#scheduler_host_manager=cinder.scheduler.host_manager.HostManager

# Maximum number of attempts to schedule a volume (integer
# value)
#scheduler_max_attempts=3


#
# Options defined in cinder.scheduler.host_manager
#

# Which filter class names to use for filtering hosts when not
# specified in the request. (list value)
#scheduler_default_filters=AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter

# Which weigher class names to use for weighing hosts. (list
# value)
#scheduler_default_weighers=CapacityWeigher


#
# Options defined in cinder.scheduler.manager
#

# Default scheduler driver to use (string value)
#scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler


#
# Options defined in cinder.scheduler.scheduler_options
#

# Absolute path to scheduler configuration JSON file. (string
# value)
#scheduler_json_config_location=


#
# Options defined in cinder.scheduler.simple
#

# This configuration option has been deprecated along with the
# SimpleScheduler. The new scheduler is able to gather capacity
# information for each host, so setting the maximum number of
# volume gigabytes for a host is no longer needed. It is safe
# to remove this option from cinder.conf. (integer value)
#max_gigabytes=10000


#
# Options defined in cinder.scheduler.weights.capacity
#

# Multiplier used for weighing volume capacity. Negative
# numbers mean to stack vs spread. (floating point value)
#capacity_weight_multiplier=1.0

# Multiplier used for weighing volume capacity. Negative
# numbers mean to stack vs spread. (floating point value)
#allocated_capacity_weight_multiplier=-1.0


#
# Options defined in cinder.scheduler.weights.volume_number
#

# Multiplier used for weighing volume number. Negative numbers
# mean to spread vs stack. (floating point value)
#volume_number_multiplier=-1.0


#
# Options defined in cinder.transfer.api
#

# The number of characters in the salt. (integer value)
#volume_transfer_salt_length=8

# The number of characters in the autogenerated auth key.
# (integer value)
#volume_transfer_key_length=16


#
# Options defined in cinder.volume.api
#

# Cache volume availability zones in memory for the provided
# duration in seconds (integer value)
#az_cache_duration=3600

# Create volume from snapshot at the host where snapshot
# resides (boolean value)
#snapshot_same_host=true

# Ensure that the new volumes are the same AZ as snapshot or
# source volume (boolean value)
#cloned_volume_same_az=true


#
# Options defined in cinder.volume.driver
#

# The maximum number of times to rescan iSER target to find
# volume (integer value)
#num_iser_scan_tries=3

# This option is deprecated and unused. It will be removed in
# the Liberty release. (integer value)
#iser_num_targets=<None>

# Prefix for iSER volumes (string value)
#iser_target_prefix=iqn.2010-10.org.openstack:

# The IP address that the iSER daemon is listening on (string
# value)
#iser_ip_address=$my_ip

# The port that the iSER daemon is listening on (integer
# value)
#iser_port=3260

# The name of the iSER target user-land tool to use (string
# value)
#iser_helper=tgtadm

# Number of times to attempt to run flakey shell commands
# (integer value)
#num_shell_tries=3

# The percentage of backend capacity that is reserved (integer
# value)
#reserved_percentage=0

# This option is deprecated and unused. It will be removed in
# the Liberty release. (integer value)
#iscsi_num_targets=<None>

# Prefix for iSCSI volumes (string value)
#iscsi_target_prefix=iqn.2010-10.org.openstack:

# The IP address that the iSCSI daemon is listening on (string
# value)
#iscsi_ip_address=$my_ip

# The list of secondary IP addresses of the iSCSI daemon (list
# value)
#iscsi_secondary_ip_addresses=

# The port that the iSCSI daemon is listening on (integer
# value)
#iscsi_port=3260

# The maximum number of times to rescan targets to find a
# volume (integer value)
# Deprecated group/name - [DEFAULT]/num_iscsi_scan_tries
#num_volume_device_scan_tries=3

# The backend name for a given driver implementation (string
# value)
#volume_backend_name=<None>

# Whether to attach/detach volumes in cinder using multipath
# for volume-to-image and image-to-volume transfers. (boolean
# value)
#use_multipath_for_image_xfer=false

# If this is set to True, attachment of volumes for image
# transfer will be aborted when multipathd is not running.
# Otherwise, it will fallback to single path. (boolean value)
#enforce_multipath_for_image_xfer=false

# Method used to wipe old volumes (string value)
#volume_clear=zero

# Size in MiB to wipe at start of old volumes. 0 => all
# (integer value)
#volume_clear_size=0

# The flag to pass to ionice to alter the i/o priority of the
# process used to zero a volume after deletion, for example
# "-c3" for idle only priority. (string value)
#volume_clear_ionice=<None>

# iSCSI target user-land tool to use. tgtadm is default, use
# lioadm for LIO iSCSI support, scstadmin for SCST target
# support, iseradm for the ISER protocol, ietadm for iSCSI
# Enterprise Target, iscsictl for Chelsio iSCSI Target or fake
# for testing. (string value)
#iscsi_helper=tgtadm
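
# For example, to use the LIO iSCSI target instead of the
# default tgt daemon, uncomment and set:
#iscsi_helper=lioadm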

# Volume configuration file storage directory (string value)
#volumes_dir=$state_path/volumes

# IET configuration file (string value)
#iet_conf=/etc/iet/ietd.conf

# Chiscsi (CXT) global defaults configuration file (string
# value)
#chiscsi_conf=/etc/chelsio-iscsi/chiscsi.conf

# This option is deprecated and unused. It will be removed in
# the next release. (string value)
#lio_initiator_iqns=

# Sets the behavior of the iSCSI target to perform either
# blockio or fileio. Optionally, auto can be set and Cinder
# will autodetect the type of backing device (string value)
#iscsi_iotype=fileio

# The default block size used when copying/clearing volumes
# (string value)
#volume_dd_blocksize=1M

# The blkio cgroup name to be used to limit bandwidth of
# volume copy (string value)
#volume_copy_blkio_cgroup_name=cinder-volume-copy

# The upper limit of bandwidth of volume copy. 0 => unlimited
# (integer value)
#volume_copy_bps_limit=0

# Sets the behavior of the iSCSI target to either perform
# write-back (on) or write-through (off). This parameter is
# valid if iscsi_helper is set to tgtadm or iseradm. (string
# value)
#iscsi_write_cache=on

# Determines the iSCSI protocol for new iSCSI volumes, created
# with tgtadm or lioadm target helpers. In order to enable
# RDMA, this parameter should be set with the value "iser".
# The supported iSCSI protocol values are "iscsi" and "iser".
# (string value)
#iscsi_protocol=iscsi

# The path to the client certificate key for verification, if
# the driver supports it. (string value)
#driver_client_cert_key=<None>

# The path to the client certificate for verification, if the
# driver supports it. (string value)
#driver_client_cert=<None>

# Tell driver to use SSL for connection to backend storage if
# the driver supports it. (boolean value)
#driver_use_ssl=false

# Float representation of the over subscription ratio when
# thin provisioning is involved. Default ratio is 20.0,
# meaning provisioned capacity can be 20 times of the total
# physical capacity. If the ratio is 10.5, it means
# provisioned capacity can be 10.5 times of the total physical
# capacity. A ratio of 1.0 means provisioned capacity cannot
# exceed the total physical capacity. A ratio lower than 1.0
# will be ignored and the default value will be used instead.
# (floating point value)
#max_over_subscription_ratio=20.0
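
# Worked example: with the default ratio of 20.0 and 10 TiB of
# total physical capacity, up to 200 TiB (20.0 * 10 TiB) of
# thin-provisioned capacity can be allocated.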

# Certain iSCSI targets have predefined target names; the
# SCST target driver uses this name. (string value)
#scst_target_iqn_name=<None>

# SCST target implementation can choose from multiple SCST
# target drivers. (string value)
#scst_target_driver=iscsi

# Option to enable/disable CHAP authentication for targets.
# (boolean value)
# Deprecated group/name - [DEFAULT]/eqlx_use_chap
#use_chap_auth=false

# CHAP user name. (string value)
# Deprecated group/name - [DEFAULT]/eqlx_chap_login
#chap_username=

# Password for specified CHAP account name. (string value)
# Deprecated group/name - [DEFAULT]/eqlx_chap_password
#chap_password=

# Namespace for driver private data values to be saved in.
# (string value)
#driver_data_namespace=<None>

# String representation for an equation that will be used to
# filter hosts. Only used when the driver filter is set to be
# used by the Cinder scheduler. (string value)
#filter_function=<None>

# String representation for an equation that will be used to
# determine the goodness of a host. Only used when the
# goodness weigher is set to be used by the Cinder scheduler.
# (string value)
#goodness_function=<None>
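
# Illustrative sketches only; the exact variables available
# depend on the driver and release. A filter that rejects hosts
# for volumes of 10 GB or larger, and a goodness function that
# rates smaller volumes higher, might look like:
#filter_function="volume.size < 10"
#goodness_function="(volume.size < 5) ? 100 : 25"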


#
# Options defined in cinder.volume.drivers.block_device
#

# List of all available devices (list value)
#available_devices=


#
# Options defined in cinder.volume.drivers.cloudbyte.options
#

# These values will be used for CloudByte storage's addQos API
# call. (dict value)
#cb_add_qosgroup=latency:15,iops:10,graceallowed:false,iopscontrol:true,memlimit:0,throughput:0,tpcontrol:false,networkspeed:0

# Driver will use this API key to authenticate against the
# CloudByte storage's management interface. (string value)
#cb_apikey=None

# CloudByte storage specific account name. This maps to a
# project name in OpenStack. (string value)
#cb_account_name=None

# This corresponds to the name of Tenant Storage Machine (TSM)
# in CloudByte storage. A volume will be created in this TSM.
# (string value)
#cb_tsm_name=None

# A retry value in seconds. Will be used by the driver to
# check if volume creation was successful in CloudByte
# storage. (integer value)
#cb_confirm_volume_create_retry_interval=5

# Will confirm a successful volume creation in CloudByte
# storage by making this many attempts. (integer
# value)
#cb_confirm_volume_create_retries=3

# These values will be used for CloudByte storage's
# createVolume API call. (dict value)
#cb_create_volume=compression:off,deduplication:off,blocklength:512B,sync:always,protocoltype:ISCSI,recordsize:16k


#
# Options defined in cinder.volume.drivers.datera
#

# DEPRECATED: This will be removed in the Liberty release. Use
# san_login and san_password instead. This directly sets the
# Datera API token. (string value)
#datera_api_token=<None>

# Datera API port. (string value)
#datera_api_port=7717

# Datera API version. (string value)
#datera_api_version=1

# Number of replicas of an inode to create. (string value)
#datera_num_replicas=3


#
# Options defined in cinder.volume.drivers.dell.dell_storagecenter_common
#

# Storage Center System Serial Number (integer value)
#dell_sc_ssn=64702

# Dell API port (integer value)
#dell_sc_api_port=3033

# Name of the server folder to use on the Storage Center
# (string value)
#dell_sc_server_folder=openstack

# Name of the volume folder to use on the Storage Center
# (string value)
#dell_sc_volume_folder=openstack


#
# Options defined in cinder.volume.drivers.emc.emc_vmax_common
#

# use this file for cinder emc plugin config data (string
# value)
#cinder_emc_config_file=/etc/cinder/cinder_emc_config.xml


#
# Options defined in cinder.volume.drivers.emc.emc_vnx_cli
#

# VNX authentication scope type. (string value)
#storage_vnx_authentication_type=global

# Directory path that contains the VNX security file. Make
# sure the security file is generated first. (string value)
#storage_vnx_security_file_dir=<None>

# Naviseccli Path. (string value)
#naviseccli_path=

# Storage pool name. (string value)
#storage_vnx_pool_name=<None>

# VNX secondary SP IP Address. (string value)
#san_secondary_ip=<None>

# Default timeout for CLI operations in minutes. For example,
# LUN migration is a typical long running operation, which
# depends on the LUN size and the load of the array. An upper
# bound in the specific deployment can be set to avoid
# unnecessary long wait. By default, it is 365 days long.
# (integer value)
#default_timeout=525600

# Default max number of LUNs in a storage group. By default,
# the value is 255. (integer value)
#max_luns_per_storage_group=255

# Destroy the storage group when the last LUN is removed from
# it. By default, the value is False. (boolean value)
#destroy_empty_storage_group=false

# Mapping between hostname and its iSCSI initiator IP
# addresses. (string value)
#iscsi_initiators=

# Automatically register initiators. By default, the value is
# False. (boolean value)
#initiator_auto_registration=false

# Automatically deregister initiators after the related
# storage group is destroyed. By default, the value is False.
# (boolean value)
#initiator_auto_deregistration=false

# Report free_capacity_gb as 0 when the limit to maximum
# number of pool LUNs is reached. By default, the value is
# False. (boolean value)
#check_max_pool_luns_threshold=false

# Delete a LUN even if it is in Storage Groups. (boolean
# value)
#force_delete_lun_in_storagegroup=false


#
# Options defined in cinder.volume.drivers.emc.xtremio
#

# XMS cluster id in multi-cluster environment (string value)
#xtremio_cluster_name=


#
# Options defined in cinder.volume.drivers.eqlx
#

# Group name to use for creating volumes. Defaults to
# "group-0". (string value)
#eqlx_group_name=group-0

# Timeout for the Group Manager cli command execution. Default
# is 30. (integer value)
#eqlx_cli_timeout=30

# Maximum retry count for reconnection. Default is 5. (integer
# value)
#eqlx_cli_max_retries=5

# Use CHAP authentication for targets. Note that this option
# is deprecated in favour of "use_chap_auth" as specified in
# cinder/volume/driver.py and will be removed in next release.
# (boolean value)
#eqlx_use_chap=false

# Existing CHAP account name. Note that this option is
# deprecated in favour of "chap_username" as specified in
# cinder/volume/driver.py and will be removed in next release.
# (string value)
#eqlx_chap_login=admin

# Password for specified CHAP account name. Note that this
# option is deprecated in favour of "chap_password" as
# specified in cinder/volume/driver.py and will be removed in
# the next release (string value)
#eqlx_chap_password=password

# Pool in which volumes will be created. Defaults to
# "default". (string value)
#eqlx_pool=default


#
# Options defined in cinder.volume.drivers.glusterfs
#

# File with the list of available gluster shares (string
# value)
#glusterfs_shares_config=/etc/cinder/glusterfs_shares

# Create volumes as sparse files which take no space. If set
# to False, volume is created as a regular file. In such case
# volume creation takes a lot of time. (boolean value)
#glusterfs_sparsed_volumes=true

# Create volumes as QCOW2 files rather than raw files.
# (boolean value)
#glusterfs_qcow2_volumes=false

# Base dir containing mount points for gluster shares. (string
# value)
#glusterfs_mount_point_base=$state_path/mnt


#
# Options defined in cinder.volume.drivers.hds.hds
#

# The configuration file for the Cinder HDS driver for HUS
# (string value)
#hds_cinder_config_file=/opt/hds/hus/cinder_hus_conf.xml


#
# Options defined in cinder.volume.drivers.hds.iscsi
#

# Configuration file for HDS iSCSI cinder plugin (string
# value)
#hds_hnas_iscsi_config_file=/opt/hds/hnas/cinder_iscsi_conf.xml


#
# Options defined in cinder.volume.drivers.hds.nfs
#

# Configuration file for HDS NFS cinder plugin (string value)
#hds_hnas_nfs_config_file=/opt/hds/hnas/cinder_nfs_conf.xml


#
# Options defined in cinder.volume.drivers.hitachi.hbsd_common
#

# Serial number of storage system (string value)
#hitachi_serial_number=<None>

# Name of an array unit (string value)
#hitachi_unit_name=<None>

# Pool ID of storage system (integer value)
#hitachi_pool_id=<None>

# Thin pool ID of storage system (integer value)
#hitachi_thin_pool_id=<None>

# Range of logical device of storage system (string value)
#hitachi_ldev_range=<None>

# Default copy method of storage system (string value)
#hitachi_default_copy_method=FULL

# Copy speed of storage system (integer value)
#hitachi_copy_speed=3

# Interval to check copy (integer value)
#hitachi_copy_check_interval=3

# Interval to check copy asynchronously (integer value)
#hitachi_async_copy_check_interval=10

# Control port names for HostGroup or iSCSI Target (string
# value)
#hitachi_target_ports=<None>

# Range of group number (string value)
#hitachi_group_range=<None>

# Request for creating HostGroup or iSCSI Target (boolean
# value)
#hitachi_group_request=false


#
# Options defined in cinder.volume.drivers.hitachi.hbsd_fc
#

# Request for FC Zone creating HostGroup (boolean value)
#hitachi_zoning_request=false


#
# Options defined in cinder.volume.drivers.hitachi.hbsd_horcm
#

# Instance numbers for HORCM (string value)
#hitachi_horcm_numbers=200,201

# Username of storage system for HORCM (string value)
#hitachi_horcm_user=<None>

# Password of storage system for HORCM (string value)
#hitachi_horcm_password=<None>

# Add to HORCM configuration (boolean value)
#hitachi_horcm_add_conf=true


#
# Options defined in cinder.volume.drivers.hitachi.hbsd_iscsi
#

# Add CHAP user (boolean value)
#hitachi_add_chap_user=false

# iSCSI authentication method (string value)
#hitachi_auth_method=<None>

# iSCSI authentication username (string value)
#hitachi_auth_user=HBSD-CHAP-user

# iSCSI authentication password (string value)
#hitachi_auth_password=HBSD-CHAP-password


#
# Options defined in cinder.volume.drivers.huawei
#

# The configuration file for the Cinder Huawei driver (string
# value)
#cinder_huawei_conf_file=/etc/cinder/cinder_huawei_conf.xml


#
# Options defined in cinder.volume.drivers.ibm.flashsystem
#

# Connection protocol should be FC. (string value)
#flashsystem_connection_protocol=FC

# Connect with multipath (FC only). (boolean value)
#flashsystem_multipath_enabled=false

# Allows vdisk to multi host mapping. (boolean value)
#flashsystem_multihostmap_enabled=true


#
# Options defined in cinder.volume.drivers.ibm.gpfs
#

# Specifies the path of the GPFS directory where Block Storage
# volume and snapshot files are stored. (string value)
#gpfs_mount_point_base=<None>

# Specifies the path of the Image service repository in GPFS.
# Leave undefined if not storing images in GPFS. (string
# value)
#gpfs_images_dir=<None>

# Specifies the type of image copy to be used.  Set this when
# the Image service repository also uses GPFS so that image
# files can be transferred efficiently from the Image service
# to the Block Storage service. There are two valid values:
# "copy" specifies that a full copy of the image is made;
# "copy_on_write" specifies that copy-on-write optimization
# strategy is used and unmodified blocks of the image file are
# shared efficiently. (string value)
#gpfs_images_share_mode=<None>

# Specifies an upper limit on the number of indirections
# required to reach a specific block due to snapshots or
# clones.  A lengthy chain of copy-on-write snapshots or
# clones can have a negative impact on performance, but
# improves space utilization.  0 indicates unlimited clone
# depth. (integer value)
#gpfs_max_clone_depth=0

# Specifies that volumes are created as sparse files which
# initially consume no space. If set to False, the volume is
# created as a fully allocated file, in which case, creation
# may take a significantly longer time. (boolean value)
#gpfs_sparse_volumes=true

# Specifies the storage pool that volumes are assigned to. By
# default, the system storage pool is used. (string value)
#gpfs_storage_pool=system


#
# Options defined in cinder.volume.drivers.ibm.ibmnas
#

# IBMNAS platform type to be used as backend storage; valid
# values are: v7ku for IBM Storwize V7000 Unified, sonas for
# IBM Scale Out NAS, and gpfs-nas for NFS-based IBM GPFS
# deployments. (string value)
#ibmnas_platform_type=v7ku


#
# Options defined in cinder.volume.drivers.ibm.storwize_svc
#

# Storage system storage pool for volumes (string value)
#storwize_svc_volpool_name=volpool

# Storage system space-efficiency parameter for volumes
# (percentage) (integer value)
#storwize_svc_vol_rsize=2

# Storage system threshold for volume capacity warnings
# (percentage) (integer value)
#storwize_svc_vol_warning=0

# Storage system autoexpand parameter for volumes (True/False)
# (boolean value)
#storwize_svc_vol_autoexpand=true

# Storage system grain size parameter for volumes
# (32/64/128/256) (integer value)
#storwize_svc_vol_grainsize=256

# Storage system compression option for volumes (boolean
# value)
#storwize_svc_vol_compression=false

# Enable Easy Tier for volumes (boolean value)
#storwize_svc_vol_easytier=true

# The I/O group in which to allocate volumes (integer value)
#storwize_svc_vol_iogrp=0

# Maximum number of seconds to wait for FlashCopy to be
# prepared. Maximum value is 600 seconds (10 minutes) (integer
# value)
#storwize_svc_flashcopy_timeout=120

# Connection protocol (iSCSI/FC) (string value)
#storwize_svc_connection_protocol=iSCSI

# Configure CHAP authentication for iSCSI connections
# (Default: Enabled) (boolean value)
#storwize_svc_iscsi_chap_enabled=true

# Connect with multipath (FC only; iSCSI multipath is
# controlled by Nova) (boolean value)
#storwize_svc_multipath_enabled=false

# Allows vdisk to multi host mapping (boolean value)
#storwize_svc_multihostmap_enabled=true

# Indicates whether the svc driver is compatible with an NPIV
# setup. If compatible, it allows no WWPNs to be returned on
# get_conn_fc_wwpns during initialize_connection (boolean
# value)
#storwize_svc_npiv_compatibility_mode=false

# Allow tenants to specify QOS on create (boolean value)
#storwize_svc_allow_tenant_qos=false

# If operating in stretched cluster mode, specify the name of
# the pool in which mirrored copies are stored. Example:
# "pool2" (string value)
#storwize_svc_stretched_cluster_partner=<None>


#
# Options defined in cinder.volume.drivers.ibm.xiv_ds8k
#

# Proxy driver that connects to the IBM Storage Array (string
# value)
#xiv_ds8k_proxy=xiv_ds8k_openstack.nova_proxy.XIVDS8KNovaProxy

# Connection type to the IBM Storage Array (string value)
#xiv_ds8k_connection_type=iscsi

# CHAP authentication mode, effective only for iscsi
# (disabled|enabled) (string value)
#xiv_chap=disabled


#
# Options defined in cinder.volume.drivers.lvm
#

# Name for the VG that will contain exported volumes (string
# value)
#volume_group=cinder-volumes

# If >0, create LVs with multiple mirrors. Note that this
# requires lvm_mirrors + 2 PVs with available space (integer
# value)
#lvm_mirrors=0

# Type of LVM volumes to deploy (string value)
#lvm_type=default

# LVM conf file to use for the LVM driver in Cinder; this
# setting is ignored if the specified file does not exist (You
# can also specify 'None' to not use a conf file even if one
# exists). (string value)
#lvm_conf_file=/etc/cinder/lvm.conf


#
# Options defined in cinder.volume.drivers.netapp.options
#

# The vFiler unit on which provisioning of block storage
# volumes will be done. This option is only used by the driver
# when connecting to an instance with a storage family of Data
# ONTAP operating in 7-Mode. Only use this option when
# utilizing the MultiStore feature on the NetApp storage
# system. (string value)
#netapp_vfiler=<None>

# The name of the config.conf stanza for a Data ONTAP (7-mode)
# HA partner.  This option is only used by the driver when
# connecting to an instance with a storage family of Data
# ONTAP operating in 7-Mode, and it is required if the storage
# protocol selected is FC. (string value)
#netapp_partner_backend_name=<None>

# Administrative user account name used to access the storage
# system or proxy server. (string value)
#netapp_login=<None>

# Password for the administrative user account specified in
# the netapp_login option. (string value)
#netapp_password=<None>

# This option specifies the virtual storage server (Vserver)
# name on the storage cluster on which provisioning of block
# storage volumes should occur. (string value)
#netapp_vserver=<None>

# The hostname (or IP address) for the storage system or proxy
# server. (string value)
#netapp_server_hostname=<None>

# The TCP port to use for communication with the storage
# system or proxy server. If not specified, Data ONTAP drivers
# will use 80 for HTTP and 443 for HTTPS; E-Series will use
# 8080 for HTTP and 8443 for HTTPS. (integer value)
#netapp_server_port=<None>

# This option is used to specify the path to the E-Series
# proxy application on a proxy server. The value is combined
# with the value of the netapp_transport_type,
# netapp_server_hostname, and netapp_server_port options to
# create the URL used by the driver to connect to the proxy
# application. (string value)
#netapp_webservice_path=/devmgr/v2

# This option is only utilized when the storage family is
# configured to eseries. This option is used to restrict
# provisioning to the specified controllers. Specify the value
# of this option to be a comma separated list of controller
# hostnames or IP addresses to be used for provisioning.
# (string value)
#netapp_controller_ips=<None>

# Password for the NetApp E-Series storage array. (string
# value)
#netapp_sa_password=<None>

# This option is used to restrict provisioning to the
# specified storage pools. Only dynamic disk pools are
# currently supported. Specify the value of this option to be
# a comma separated list of disk pool names to be used for
# provisioning. (string value)
#netapp_storage_pools=<None>

# This option is used to define how the controllers in the
# E-Series storage array will work with the particular
# operating system on the hosts that are connected to it.
# (string value)
#netapp_eseries_host_type=linux_dm_mp

# If the percentage of available space for an NFS share has
# dropped below the value specified by this option, the NFS
# image cache will be cleaned. (integer value)
#thres_avl_size_perc_start=20

# When the percentage of available space on an NFS share has
# reached the percentage specified by this option, the driver
# will stop clearing files from the NFS image cache that have
# not been accessed in the last M minutes, where M is the
# value of the expiry_thres_minutes configuration option.
# (integer value)
#thres_avl_size_perc_stop=60

# This option specifies the threshold for last access time for
# images in the NFS image cache. When a cache cleaning cycle
# begins, images in the cache that have not been accessed in
# the last M minutes, where M is the value of this parameter,
# will be deleted from the cache to create free space on the
# NFS share. (integer value)
#expiry_thres_minutes=720

# This option specifies the path of the NetApp copy offload
# tool binary. Ensure that the binary has execute permissions
# set which allow the effective user of the cinder-volume
# process to execute the file. (string value)
#netapp_copyoffload_tool_path=<None>

# The quantity to be multiplied by the requested volume size
# to ensure enough space is available on the virtual storage
# server (Vserver) to fulfill the volume creation request.
# (floating point value)
#netapp_size_multiplier=1.2

# This option is only utilized when the storage protocol is
# configured to use iSCSI or FC. This option is used to
# restrict provisioning to the specified controller volumes.
# Specify the value of this option to be a comma separated
# list of NetApp controller volume names to be used for
# provisioning. (string value)
#netapp_volume_list=<None>

# The storage family type used on the storage system; valid
# values are ontap_7mode for using Data ONTAP operating in
# 7-Mode, ontap_cluster for using clustered Data ONTAP, or
# eseries for using E-Series. (string value)
#netapp_storage_family=ontap_cluster

# The storage protocol to be used on the data path with the
# storage system. (string value)
#netapp_storage_protocol=<None>

# The transport protocol used when communicating with the
# storage system or proxy server. (string value)
#netapp_transport_type=http
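
# A minimal illustrative combination for clustered Data ONTAP
# over iSCSI; the hostname, credentials, and Vserver name below
# are placeholders, not defaults:
#netapp_storage_family=ontap_cluster
#netapp_storage_protocol=iscsi
#netapp_server_hostname=filer.example.com
#netapp_login=admin
#netapp_password=secret
#netapp_vserver=openstack_vserver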


#
# Options defined in cinder.volume.drivers.nfs
#

# File with the list of available nfs shares (string value)
#nfs_shares_config=/etc/cinder/nfs_shares
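
# Each line of the shares file names one NFS export; an
# illustrative (placeholder) entry:
#   nfs-host.example.com:/export/cinder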

# Create volumes as sparse files which take no space. If set
# to False, volume is created as a regular file. In such case
# volume creation takes a lot of time. (boolean value)
#nfs_sparsed_volumes=true

# Percent of ACTUAL usage of the underlying volume before no
# new volumes can be allocated to the volume destination.
# (floating point value)
#nfs_used_ratio=0.95

# This will compare the allocated to available space on the
# volume destination.  If the ratio exceeds this number, the
# destination will no longer be valid. (floating point value)
#nfs_oversub_ratio=1.0

# Base dir containing mount points for nfs shares. (string
# value)
#nfs_mount_point_base=$state_path/mnt

# Mount options passed to the nfs client. See section of the
# nfs man page for details. (string value)
#nfs_mount_options=<None>
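
# Illustrative only; any mount options valid for your NFS
# client can be supplied, for example:
#nfs_mount_options=vers=3,lookupcache=pos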

# The number of attempts to mount nfs shares before raising an
# error.  At least one attempt will be made to mount an nfs
# share, regardless of the value specified. (integer value)
#nfs_mount_attempts=3


#
# Options defined in cinder.volume.drivers.nimble
#

# Nimble Controller pool name (string value)
#nimble_pool_name=default

# Nimble Subnet Label (string value)
#nimble_subnet_label=*


#
# Options defined in cinder.volume.drivers.openvstorage
#

# Vpool to use for volumes; the backend is defined by the
# vpool, not by the driver. (string value)
#vpool_name=


#
# Options defined in cinder.volume.drivers.prophetstor.options
#

# DPL pool uuid in which DPL volumes are stored. (string
# value)
#dpl_pool=

# DPL port number. (integer value)
#dpl_port=8357


#
# Options defined in cinder.volume.drivers.pure
#

# REST API authorization token. (string value)
#pure_api_token=<None>


#
# Options defined in cinder.volume.drivers.quobyte
#

# URL to the Quobyte volume e.g., quobyte://<DIR host>/<volume
# name> (string value)
#quobyte_volume_url=<None>

# Path to a Quobyte Client configuration file. (string value)
#quobyte_client_cfg=<None>

# Create volumes as sparse files which take no space. If set
# to False, volume is created as a regular file. In such case
# volume creation takes a lot of time. (boolean value)
#quobyte_sparsed_volumes=true

# Create volumes as QCOW2 files rather than raw files.
# (boolean value)
#quobyte_qcow2_volumes=true

# Base dir containing the mount point for the Quobyte volume.
# (string value)
#quobyte_mount_point_base=$state_path/mnt


#
# Options defined in cinder.volume.drivers.rbd
#

# The RADOS pool where rbd volumes are stored (string value)
#rbd_pool=rbd

# The RADOS client name for accessing rbd volumes - only set
# when using cephx authentication (string value)
#rbd_user=<None>

# Path to the ceph configuration file (string value)
#rbd_ceph_conf=

# Flatten volumes created from snapshots to remove dependency
# from volume to snapshot (boolean value)
#rbd_flatten_volume_from_snapshot=false

# The libvirt uuid of the secret for the rbd_user volumes
# (string value)
#rbd_secret_uuid=<None>
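
# Illustrative cephx pairing (both values are placeholders that
# must match your Ceph deployment and libvirt secret):
#rbd_user=cinder
#rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337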

# Directory where temporary image files are stored when the
# volume driver does not write them directly to the volume.
# Warning: this option is now deprecated, please use
# image_conversion_dir instead. (string value)
#volume_tmp_dir=<None>

# Maximum number of nested volume clones that are taken before
# a flatten occurs. Set to 0 to disable cloning. (integer
# value)
#rbd_max_clone_depth=5

# Volumes will be chunked into objects of this size (in
# megabytes). (integer value)
#rbd_store_chunk_size=4

# Timeout value (in seconds) used when connecting to ceph
# cluster. If value < 0, no timeout is set and default
# librados value is used. (integer value)
#rados_connect_timeout=-1


#
# Options defined in cinder.volume.drivers.remotefs
#

# IP address or Hostname of NAS system. (string value)
#nas_ip=

# User name to connect to NAS system. (string value)
#nas_login=admin

# Password to connect to NAS system. (string value)
#nas_password=

# SSH port to use to connect to NAS system. (integer value)
#nas_ssh_port=22

# Filename of private key to use for SSH authentication.
# (string value)
#nas_private_key=

# Allow network-attached storage systems to operate in a
# secure environment where root level access is not permitted.
# If set to False, access is as the root user and insecure. If
# set to True, access is not as root. If set to auto, a check
# is done to determine if this is a new installation: True is
# used if so, otherwise False. Default is auto. (string value)
#nas_secure_file_operations=auto

# Set more secure file permissions on network-attached storage
# volume files to restrict broad other/world access. If set to
# False, volumes are created with open permissions. If set to
# True, volumes are created with permissions for the cinder
# user and group (660). If set to auto, a check is done to
# determine if this is a new installation: True is used if so,
# otherwise False. Default is auto. (string value)
#nas_secure_file_permissions=auto

# Path to the share to use for storing Cinder volumes. For
# example:  "/srv/export1" for an NFS server export available
# at 10.0.5.10:/srv/export1 . (string value)
#nas_share_path=

# Options used to mount the storage backend file system where
# Cinder volumes are stored. (string value)
#nas_mount_options=<None>


#
# Options defined in cinder.volume.drivers.san.hp.hp_3par_common
#

# 3PAR WSAPI Server Url like https://<3par ip>:8080/api/v1
# (string value)
#hp3par_api_url=

# 3PAR Super user username (string value)
#hp3par_username=

# 3PAR Super user password (string value)
#hp3par_password=

# List of the CPG(s) to use for volume creation (list value)
#hp3par_cpg=OpenStack

# The CPG to use for Snapshots for volumes. If empty the
# userCPG will be used. (string value)
#hp3par_cpg_snap=

# The time in hours to retain a snapshot. You can't delete it
# before this expires. (string value)
#hp3par_snapshot_retention=

# The time in hours when a snapshot expires and is deleted.
# This must be larger than the retention time. (string value)
#hp3par_snapshot_expiration=

# Enable HTTP debugging to 3PAR (boolean value)
#hp3par_debug=false

# List of target iSCSI addresses to use. (list value)
#hp3par_iscsi_ips=

# Enable CHAP authentication for iSCSI connections. (boolean
# value)
#hp3par_iscsi_chap_enabled=false


#
# Options defined in cinder.volume.drivers.san.hp.hp_lefthand_rest_proxy
#

# HP LeftHand WSAPI Server Url like https://<LeftHand
# ip>:8081/lhos (string value)
#hplefthand_api_url=<None>

# HP LeftHand Super user username (string value)
#hplefthand_username=<None>

# HP LeftHand Super user password (string value)
#hplefthand_password=<None>

# HP LeftHand cluster name (string value)
#hplefthand_clustername=<None>

# Configure CHAP authentication for iSCSI connections
# (Default: Disabled) (boolean value)
#hplefthand_iscsi_chap_enabled=false

# Enable HTTP debugging to LeftHand (boolean value)
#hplefthand_debug=false


#
# Options defined in cinder.volume.drivers.san.san
#

# Use thin provisioning for SAN volumes? (boolean value)
#san_thin_provision=true

# IP address of SAN controller (string value)
#san_ip=

# Username for SAN controller (string value)
#san_login=admin

# Password for SAN controller (string value)
#san_password=

# Filename of private key to use for SSH authentication
# (string value)
#san_private_key=

# Cluster name to use for creating volumes (string value)
#san_clustername=

# SSH port to use with SAN (integer value)
#san_ssh_port=22

# Execute commands locally instead of over SSH; use if the
# volume service is running on the SAN device (boolean value)
#san_is_local=false

# SSH connection timeout in seconds (integer value)
#ssh_conn_timeout=30

# Minimum ssh connections in the pool (integer value)
#ssh_min_pool_conn=1

# Maximum ssh connections in the pool (integer value)
#ssh_max_pool_conn=5


#
# Options defined in cinder.volume.drivers.scality
#

# Path or URL to Scality SOFS configuration file (string
# value)
#scality_sofs_config=<None>

# Base dir where Scality SOFS shall be mounted (string value)
#scality_sofs_mount_point=$state_path/scality

# Path from Scality SOFS root to volume dir (string value)
#scality_sofs_volume_dir=cinder/volumes


#
# Options defined in cinder.volume.drivers.smbfs
#

# File with the list of available smbfs shares. (string value)
#smbfs_shares_config=/etc/cinder/smbfs_shares

# Default format that will be used when creating volumes if no
# volume format is specified. (string value)
#smbfs_default_volume_format=qcow2

# Create volumes as sparse files which take no space, rather
# than regular files, when using the raw format, in which case
# volume creation takes a lot of time. (boolean value)
#smbfs_sparsed_volumes=true

# Percent of ACTUAL usage of the underlying volume before no
# new volumes can be allocated to the volume destination.
# (floating point value)
#smbfs_used_ratio=0.95

# This will compare the allocated to available space on the
# volume destination.  If the ratio exceeds this number, the
# destination will no longer be valid. (floating point value)
#smbfs_oversub_ratio=1.0

# Base dir containing mount points for smbfs shares. (string
# value)
#smbfs_mount_point_base=$state_path/mnt

# Mount options passed to the smbfs client. See mount.cifs man
# page for details. (string value)
#smbfs_mount_options=noperm,file_mode=0775,dir_mode=0775


#
# Options defined in cinder.volume.drivers.solidfire
#

# Set 512 byte emulation on volume creation. (boolean value)
#sf_emulate_512=true

# Allow tenants to specify QOS on create (boolean value)
#sf_allow_tenant_qos=false

# Create SolidFire accounts with this prefix. Any string can
# be used here, but the string "hostname" is special and will
# create a prefix using the cinder node hostname (previous
# default behavior).  The default is NO prefix. (string value)
#sf_account_prefix=<None>

# Account name on the SolidFire Cluster to use as owner of
# template/cache volumes (created if does not exist). (string
# value)
#sf_template_account_name=openstack-vtemplate

# Create an internal cache of copies of images when a bootable
# volume is created, to eliminate fetching from glance and
# qemu conversion on subsequent calls. (boolean value)
#sf_allow_template_caching=true

# SolidFire API port. Useful if the device api is behind a
# proxy on a different port. (integer value)
#sf_api_port=443


#
# Options defined in cinder.volume.drivers.srb
#

# Comma-separated list of REST server IPs to connect to (e.g.
# http://IP1/,http://IP2:81/path). (string value)
#srb_base_urls=<None>


#
# Options defined in cinder.volume.drivers.violin.v6000_common
#

# IP address or hostname of mg-a (string value)
#gateway_mga=<None>

# IP address or hostname of mg-b (string value)
#gateway_mgb=<None>

# Use igroups to manage targets and initiators (boolean value)
#use_igroups=false

# Global backend request timeout, in seconds (integer value)
#request_timeout=300


#
# Options defined in cinder.volume.drivers.vmware.vmdk
#

# IP address for connecting to VMware ESX/VC server. (string
# value)
#vmware_host_ip=<None>

# Username for authenticating with VMware ESX/VC server.
# (string value)
#vmware_host_username=<None>

# Password for authenticating with VMware ESX/VC server.
# (string value)
#vmware_host_password=<None>

# Optional VIM service WSDL location, e.g.
# http://<server>/vimService.wsdl. Optional override to the
# default location for bug workarounds. (string value)
#vmware_wsdl_location=<None>

# Number of times VMware ESX/VC server API must be retried
# upon connection related issues. (integer value)
#vmware_api_retry_count=10

# The interval (in seconds) for polling remote tasks invoked
# on VMware ESX/VC server. (floating point value)
#vmware_task_poll_interval=0.5

# Name for the folder in the VC datacenter that will contain
# cinder volumes. (string value)
#vmware_volume_folder=cinder-volumes

# Timeout in seconds for VMDK volume transfer between Cinder
# and Glance. (integer value)
#vmware_image_transfer_timeout_secs=7200

# Max number of objects to be retrieved per batch. Query
# results will be obtained in batches from the server and not
# in one shot. Server may still limit the count to something
# less than the configured value. (integer value)
#vmware_max_objects_retrieval=100

# Optional string specifying the VMware VC server version. The
# driver attempts to retrieve the version from VMware VC
# server. Set this configuration only if you want to override
# the VC server version. (string value)
#vmware_host_version=<None>

# Directory where virtual disks are stored during volume
# backup and restore. (string value)
#vmware_tmp_dir=/tmp


#
# Options defined in cinder.volume.drivers.windows.windows
#

# Path to store VHD backed volumes (string value)
#windows_iscsi_lun_path=C:\iSCSIVirtualDisks


#
# Options defined in cinder.volume.drivers.xio
#

# Default storage pool for volumes. (integer value)
#ise_storage_pool=1

# Raid level for ISE volumes. (integer value)
#ise_raid=1

# Number of retries (per port) when establishing connection to
# ISE management port. (integer value)
#ise_connection_retries=5

# Interval (secs) between retries. (integer value)
#ise_retry_interval=1

# Number of retries to get completion status after issuing a
# command to ISE. (integer value)
#ise_completion_retries=30


#
# Options defined in cinder.volume.drivers.zfssa.zfssanfs
#

# Data path IP address (string value)
#zfssa_data_ip=<None>

# HTTPS port number (string value)
#zfssa_https_port=443

# Options to be passed while mounting share over nfs (string
# value)
#zfssa_nfs_mount_options=

# Storage pool name. (string value)
#zfssa_nfs_pool=

# Project name. (string value)
#zfssa_nfs_project=NFSProject

# Share name. (string value)
#zfssa_nfs_share=nfs_share

# Data compression. (string value)
#zfssa_nfs_share_compression=off

# Synchronous write bias: latency or throughput. (string value)
#zfssa_nfs_share_logbias=latency

# REST connection timeout, in seconds. (integer value)
#zfssa_rest_timeout=<None>


#
# Options defined in cinder.volume.manager
#

# Driver to use for volume creation (string value)
#volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
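
# For example, to use the Ceph RBD backend described earlier in
# this file instead of the default LVM iSCSI driver:
#volume_driver=cinder.volume.drivers.rbd.RBDDriver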

# Timeout for creating the volume to migrate to when
# performing volume migration (seconds) (integer value)
#migration_create_volume_timeout_secs=300

# Offload pending volume delete during volume service startup
# (boolean value)
#volume_service_inithost_offload=false

# FC Zoning mode configured (string value)
#zoning_mode=none

# User defined capabilities, a JSON formatted string
# specifying key/value pairs. The key/value pairs can be used
# by the CapabilitiesFilter to select between backends when
# requests specify volume types. For example, specifying a
# service level or the geographical location of a backend,
# then creating a volume type to allow the user to select by
# these different properties. (string value)
#extra_capabilities={}
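
# An illustrative value; the keys and values here are arbitrary
# placeholders that a volume type's extra specs could match:
#extra_capabilities={"service_level":"gold","region":"us-east"}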


[BRCD_FABRIC_EXAMPLE]

#
# Options defined in cinder.zonemanager.drivers.brocade.brcd_fabric_opts
#

# Management IP of fabric (string value)
#fc_fabric_address=

# Fabric user ID (string value)
#fc_fabric_user=

# Password for user (string value)
#fc_fabric_password=

# Connecting port (integer value)
#fc_fabric_port=22

# overridden zoning policy (string value)
#zoning_policy=initiator-target

# overridden zoning activation state (boolean value)
#zone_activate=true

# overridden zone name prefix (string value)
#zone_name_prefix=<None>

# Principal switch WWN of the fabric (string value)
#principal_switch_wwn=<None>


[CISCO_FABRIC_EXAMPLE]

#
# Options defined in cinder.zonemanager.drivers.cisco.cisco_fabric_opts
#

# Management IP of fabric (string value)
#cisco_fc_fabric_address=

# Fabric user ID (string value)
#cisco_fc_fabric_user=

# Password for user (string value)
#cisco_fc_fabric_password=

# Connecting port (integer value)
#cisco_fc_fabric_port=22

# overridden zoning policy (string value)
#cisco_zoning_policy=initiator-target

# overridden zoning activation state (boolean value)
#cisco_zone_activate=true

# overridden zone name prefix (string value)
#cisco_zone_name_prefix=<None>

# VSAN of the Fabric (string value)
#cisco_zoning_vsan=<None>


[database]

#
# Options defined in oslo.db.concurrency
#

# Enable the experimental use of thread pooling for all DB API
# calls (boolean value)
# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
#use_tpool=false


[fc-zone-manager]

#
# Options defined in cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver
#

# Southbound connector for zoning operation (string value)
#brcd_sb_connector=cinder.zonemanager.drivers.brocade.brcd_fc_zone_client_cli.BrcdFCZoneClientCLI


#
# Options defined in cinder.zonemanager.drivers.cisco.cisco_fc_zone_driver
#

# Southbound connector for zoning operation (string value)
#cisco_sb_connector=cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI


#
# Options defined in cinder.zonemanager.fc_zone_manager
#

# FC Zone Driver responsible for zone management (string
# value)
#zone_driver=cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver

# Zoning policy configured by user; valid values include
# "initiator-target" or "initiator" (string value)
#zoning_policy=initiator-target

# Comma separated list of Fibre Channel fabric names. This
# list of names is used to retrieve other SAN credentials for
# connecting to each SAN fabric (string value)
#fc_fabric_names=<None>
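
# Each name listed here is expected to correspond to a
# configuration section holding that fabric's credentials, such
# as the [BRCD_FABRIC_EXAMPLE] section above; illustrative
# value:
#fc_fabric_names=BRCD_FABRIC_EXAMPLE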

# FC SAN Lookup Service (string value)
#fc_san_lookup_service=cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService


[keymgr]

#
# Options defined in cinder.keymgr
#

# The full class name of the key manager API class (string
# value)
#api_class=cinder.keymgr.conf_key_mgr.ConfKeyManager


#
# Options defined in cinder.keymgr.conf_key_mgr
#

# Fixed key returned by key manager, specified in hex (string
# value)
#fixed_key=<None>


#
# Options defined in cinder.keymgr.key_mgr
#

# Authentication url for encryption service. (string value)
#encryption_auth_url=http://localhost:5000/v3

# Url for encryption service. (string value)
#encryption_api_url=http://localhost:9311/v1


[keystone_authtoken]

#
# Options defined in keystonemiddleware.auth_token
#

# Complete public Identity API endpoint. (string value)
#auth_uri=<None>

# API version of the admin Identity API endpoint. (string
# value)
#auth_version=<None>

# Do not handle authorization requests within the middleware,
# but delegate the authorization decision to downstream WSGI
# components. (boolean value)
#delay_auth_decision=false

# Request timeout value for communicating with Identity API
# server. (integer value)
#http_connect_timeout=<None>

# Number of times to attempt to reconnect when communicating
# with the Identity API server. (integer value)
#http_request_max_retries=3

# Env key for the swift cache. (string value)
#cache=<None>

# Required if identity server requires client certificate
# (string value)
#certfile=<None>

# Required if identity server requires client certificate
# (string value)
#keyfile=<None>

# A PEM encoded Certificate Authority to use when verifying
# HTTPs connections. Defaults to system CAs. (string value)
#cafile=<None>

# Verify HTTPS connections. (boolean value)
#insecure=false

# Directory used to cache files related to PKI tokens. (string
# value)
#signing_dir=<None>

# Optionally specify a list of memcached server(s) to use for
# caching. If left undefined, tokens will instead be cached
# in-process. (list value)
# Deprecated group/name - [DEFAULT]/memcache_servers
#memcached_servers=<None>

# In order to prevent excessive effort spent validating
# tokens, the middleware caches previously-seen tokens for a
# configurable duration (in seconds). Set to -1 to disable
# caching completely. (integer value)
#token_cache_time=300

# Determines the frequency at which the list of revoked tokens
# is retrieved from the Identity service (in seconds). A high
# number of revocation events combined with a low cache
# duration may significantly reduce performance. (integer
# value)
#revocation_cache_time=10

# (Optional) If defined, indicate whether token data should be
# authenticated or authenticated and encrypted. Acceptable
# values are MAC or ENCRYPT.  If MAC, token data is
# authenticated (with HMAC) in the cache. If ENCRYPT, token
# data is encrypted and authenticated in the cache. If the
# value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
#memcache_security_strategy=<None>

# (Optional, mandatory if memcache_security_strategy is
# defined) This string is used for key derivation. (string
# value)
#memcache_secret_key=<None>

# (Optional) Number of seconds memcached server is considered
# dead before it is tried again. (integer value)
#memcache_pool_dead_retry=300

# (Optional) Maximum total number of open connections to every
# memcached server. (integer value)
#memcache_pool_maxsize=10

# (Optional) Socket timeout in seconds for communicating with
# a memcache server. (integer value)
#memcache_pool_socket_timeout=3

# (Optional) Number of seconds a connection to memcached is
# held unused in the pool before it is closed. (integer value)
#memcache_pool_unused_timeout=60

# (Optional) Number of seconds that an operation will wait to
# get a memcache client connection from the pool. (integer
# value)
#memcache_pool_conn_get_timeout=10

# (Optional) Use the advanced (eventlet safe) memcache client
# pool. The advanced pool will only work under python 2.x.
# (boolean value)
#memcache_use_advanced_pool=false

# (Optional) Indicate whether to set the X-Service-Catalog
# header. If False, middleware will not ask for service
# catalog on token validation and will not set the X-Service-
# Catalog header. (boolean value)
#include_service_catalog=true

# Used to control the use and type of token binding. Can be
# set to: "disabled" to not check token binding. "permissive"
# (default) to validate binding information if the bind type
# is of a form known to the server and ignore it if not.
# "strict" like "permissive" but if the bind type is unknown
# the token will be rejected. "required" any form of token
# binding is needed to be allowed. Finally the name of a
# binding method that must be present in tokens. (string
# value)
#enforce_token_bind=permissive

# If true, the revocation list will be checked for cached
# tokens. This requires that PKI tokens are configured on the
# identity server. (boolean value)
#check_revocations_for_cached=false

# Hash algorithms to use for hashing PKI tokens. This may be a
# single algorithm or multiple. The algorithms are those
# supported by Python standard hashlib.new(). The hashes will
# be tried in the order given, so put the preferred one first
# for performance. The result of the first hash will be stored
# in the cache. This will typically be set to multiple values
# only while migrating from a less secure algorithm to a more
# secure one. Once all the old tokens are expired this option
# should be set to a single value for better performance.
# (list value)
#hash_algorithms=md5


[matchmaker_redis]

#
# Options defined in oslo.messaging
#

# Host to locate redis. (string value)
#host=127.0.0.1

# Use this port to connect to redis host. (integer value)
#port=6379

# Password for Redis server (optional). (string value)
#password=<None>


[matchmaker_ring]

#
# Options defined in oslo.messaging
#

# Matchmaker ring file (JSON). (string value)
# Deprecated group/name - [DEFAULT]/matchmaker_ringfile
#ringfile=/etc/oslo/matchmaker_ring.json


[oslo_messaging_amqp]

#
# Options defined in oslo.messaging
#

# address prefix used when sending to a specific server
# (string value)
#server_request_prefix=exclusive

# address prefix used when broadcasting to all servers (string
# value)
#broadcast_prefix=broadcast

# address prefix when sending to any server in group (string
# value)
#group_request_prefix=unicast

# Name for the AMQP container (string value)
#container_name=<None>

# Timeout for inactive connections (in seconds) (integer
# value)
#idle_timeout=0

# Debug: dump AMQP frames to stdout (boolean value)
#trace=false

# CA certificate PEM file for verifying server certificate
# (string value)
#ssl_ca_file=

# Identifying certificate PEM file to present to clients
# (string value)
#ssl_cert_file=

# Private key PEM file used to sign cert_file certificate
# (string value)
#ssl_key_file=

# Password for decrypting ssl_key_file (if encrypted) (string
# value)
#ssl_key_password=<None>

# Accept clients using either SSL or plain TCP (boolean value)
#allow_insecure_clients=false


[oslo_messaging_qpid]

#
# Options defined in oslo.messaging
#

# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues=false

# Auto-delete queues in AMQP. (boolean value)
#amqp_auto_delete=false

# Size of RPC connection pool. (integer value)
#rpc_conn_pool_size=30

# Qpid broker hostname. (string value)
#qpid_hostname=localhost

# Qpid broker port. (integer value)
#qpid_port=5672

# Qpid HA cluster host:port pairs. (list value)
#qpid_hosts=$qpid_hostname:$qpid_port

# Username for Qpid connection. (string value)
#qpid_username=

# Password for Qpid connection. (string value)
#qpid_password=

# Space separated list of SASL mechanisms to use for auth.
# (string value)
#qpid_sasl_mechanisms=

# Seconds between connection keepalive heartbeats. (integer
# value)
#qpid_heartbeat=60

# Transport to use, either 'tcp' or 'ssl'. (string value)
#qpid_protocol=tcp

# Whether to disable the Nagle algorithm. (boolean value)
#qpid_tcp_nodelay=true

# The number of prefetched messages held by receiver. (integer
# value)
#qpid_receiver_capacity=1

# The qpid topology version to use.  Version 1 is what was
# originally used by impl_qpid.  Version 2 includes some
# backwards-incompatible changes that allow broker federation
# to work.  Users should update to version 2 when they are
# able to take everything down, as it requires a clean break.
# (integer value)
#qpid_topology_version=1


[oslo_messaging_rabbit]

#
# Options defined in oslo.messaging
#

# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues=false

# Auto-delete queues in AMQP. (boolean value)
#amqp_auto_delete=false

# Size of RPC connection pool. (integer value)
#rpc_conn_pool_size=30

# SSL version to use (valid only if SSL enabled). Valid values
# are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may
# be available on some distributions. (string value)
#kombu_ssl_version=

# SSL key file (valid only if SSL enabled). (string value)
#kombu_ssl_keyfile=

# SSL cert file (valid only if SSL enabled). (string value)
#kombu_ssl_certfile=

# SSL certification authority file (valid only if SSL
# enabled). (string value)
#kombu_ssl_ca_certs=

# How long to wait before reconnecting in response to an AMQP
# consumer cancel notification. (floating point value)
#kombu_reconnect_delay=1.0

# The RabbitMQ broker address where a single node is used.
# (string value)
#rabbit_host=localhost

# The RabbitMQ broker port where a single node is used.
# (integer value)
#rabbit_port=5672

# RabbitMQ HA cluster host:port pairs. (list value)
#rabbit_hosts=$rabbit_host:$rabbit_port

# Connect over SSL for RabbitMQ. (boolean value)
#rabbit_use_ssl=false

# The RabbitMQ userid. (string value)
#rabbit_userid=guest

# The RabbitMQ password. (string value)
#rabbit_password=guest

# The RabbitMQ login method. (string value)
#rabbit_login_method=AMQPLAIN

# The RabbitMQ virtual host. (string value)
#rabbit_virtual_host=/

# How frequently to retry connecting with RabbitMQ. (integer
# value)
#rabbit_retry_interval=1

# How long to backoff for between retries when connecting to
# RabbitMQ. (integer value)
#rabbit_retry_backoff=2

# Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
#rabbit_max_retries=0

# Use HA queues in RabbitMQ (x-ha-policy: all). If you change
# this option, you must wipe the RabbitMQ database. (boolean
# value)
#rabbit_ha_queues=false

# Number of seconds after which the Rabbit broker is
# considered down if heartbeat's keep-alive fails (0 disables
# the heartbeat, >0 enables it. Enabling heartbeats requires
# kombu>=3.0.7 and amqp>=1.4.0). EXPERIMENTAL (integer value)
#heartbeat_timeout_threshold=0

# How many times during the heartbeat_timeout_threshold to
# check the heartbeat. (integer value)
#heartbeat_rate=2

# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
# (boolean value)
#fake_rabbit=false


[profiler]

#
# Options defined in cinder.service
#

# If False, fully disable the profiling feature. (boolean value)
#profiler_enabled=false

# If False, do not trace SQL requests. (boolean value)
#trace_sqlalchemy=false


[DEFAULT]


[keystone_authtoken]

#
# From keystonemiddleware.auth_token
#

# Complete public Identity API endpoint. (string value)
#auth_uri = <None>

# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>

# Do not handle authorization requests within the middleware, but
# delegate the authorization decision to downstream WSGI components.
# (boolean value)
#delay_auth_decision = false

# Request timeout value for communicating with Identity API server.
# (integer value)
#http_connect_timeout = <None>

# Number of times to attempt to reconnect when communicating with the
# Identity API server. (integer value)
#http_request_max_retries = 3

# Env key for the swift cache. (string value)
#cache = <None>

# Required if identity server requires client certificate (string
# value)
#certfile = <None>

# Required if identity server requires client certificate (string
# value)
#keyfile = <None>

# A PEM encoded Certificate Authority to use when verifying HTTPs
# connections. Defaults to system CAs. (string value)
#cafile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>

# Optionally specify a list of memcached server(s) to use for caching.
# If left undefined, tokens will instead be cached in-process. (list
# value)
# Deprecated group/name - [DEFAULT]/memcache_servers
#memcached_servers = <None>

# In order to prevent excessive effort spent validating tokens, the
# middleware caches previously-seen tokens for a configurable duration
# (in seconds). Set to -1 to disable caching completely. (integer
# value)
#token_cache_time = 300

# Determines the frequency at which the list of revoked tokens is
# retrieved from the Identity service (in seconds). A high number of
# revocation events combined with a low cache duration may
# significantly reduce performance. (integer value)
#revocation_cache_time = 10

# (Optional) If defined, indicate whether token data should be
# authenticated or authenticated and encrypted. Acceptable values are
# MAC or ENCRYPT.  If MAC, token data is authenticated (with HMAC) in
# the cache. If ENCRYPT, token data is encrypted and authenticated in
# the cache. If the value is not one of these options or empty,
# auth_token will raise an exception on initialization. (string value)
#memcache_security_strategy = <None>

# (Optional, mandatory if memcache_security_strategy is defined) This
# string is used for key derivation. (string value)
#memcache_secret_key = <None>

# (Optional) Number of seconds memcached server is considered dead
# before it is tried again. (integer value)
#memcache_pool_dead_retry = 300

# (Optional) Maximum total number of open connections to every
# memcached server. (integer value)
#memcache_pool_maxsize = 10

# (Optional) Socket timeout in seconds for communicating with a
# memcache server. (integer value)
#memcache_pool_socket_timeout = 3

# (Optional) Number of seconds a connection to memcached is held
# unused in the pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60

# (Optional) Number of seconds that an operation will wait to get a
# memcache client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10

# (Optional) Use the advanced (eventlet safe) memcache client pool.
# The advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false

# (Optional) Indicate whether to set the X-Service-Catalog header. If
# False, middleware will not ask for service catalog on token
# validation and will not set the X-Service-Catalog header. (boolean
# value)
#include_service_catalog = true

# Used to control the use and type of token binding. Can be set to:
# "disabled" to not check token binding. "permissive" (default) to
# validate binding information if the bind type is of a form known to
# the server and ignore it if not. "strict" like "permissive" but if
# the bind type is unknown the token will be rejected. "required" any
# form of token binding is needed to be allowed. Finally the name of a
# binding method that must be present in tokens. (string value)
#enforce_token_bind = permissive

# If true, the revocation list will be checked for cached tokens. This
# requires that PKI tokens are configured on the identity server.
# (boolean value)
#check_revocations_for_cached = false

# Hash algorithms to use for hashing PKI tokens. This may be a single
# algorithm or multiple. The algorithms are those supported by Python
# standard hashlib.new(). The hashes will be tried in the order given,
# so put the preferred one first for performance. The result of the
# first hash will be stored in the cache. This will typically be set
# to multiple values only while migrating from a less secure algorithm
# to a more secure one. Once all the old tokens are expired this
# option should be set to a single value for better performance. (list
# value)
#hash_algorithms = md5

# Prefix to prepend at the beginning of the path. Deprecated, use
# identity_uri. (string value)
#auth_admin_prefix =

# Host providing the admin Identity API endpoint. Deprecated, use
# identity_uri. (string value)
#auth_host = 127.0.0.1

# Port of the admin Identity API endpoint. Deprecated, use
# identity_uri. (integer value)
#auth_port = 35357

# Protocol of the admin Identity API endpoint (http or https).
# Deprecated, use identity_uri. (string value)
#auth_protocol = https

# Complete admin Identity API endpoint. This should specify the
# unversioned root endpoint e.g. https://localhost:35357/ (string
# value)
#identity_uri = <None>

# This option is deprecated and may be removed in a future release.
# Single shared secret with the Keystone configuration used for
# bootstrapping a Keystone installation, or otherwise bypassing the
# normal authentication process. This option should not be used, use
# `admin_user` and `admin_password` instead. (string value)
#admin_token = <None>

# Service username. (string value)
#admin_user = <None>

# Service user password. (string value)
#admin_password = <None>

# Service tenant name. (string value)
#admin_tenant_name = admin


2.3.2. api-paste.ini

Use the api-paste.ini file to configure the Block Storage API service.
#############
# OpenStack #
#############

[composite:osapi_volume]
use = call:cinder.api:root_app_factory
/: apiversions
/v1: openstack_volume_api_v1
/v2: openstack_volume_api_v2

[composite:openstack_volume_api_v1]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = request_id faultwrap sizelimit osprofiler noauth apiv1
keystone = request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv1
keystone_nolimit = request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv1

[composite:openstack_volume_api_v2]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = request_id faultwrap sizelimit osprofiler noauth apiv2
keystone = request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv2
keystone_nolimit = request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv2

[filter:request_id]
paste.filter_factory = oslo_middleware.request_id:RequestId.factory

[filter:faultwrap]
paste.filter_factory = cinder.api.middleware.fault:FaultWrapper.factory

[filter:osprofiler]
paste.filter_factory = osprofiler.web:WsgiMiddleware.factory
hmac_keys = SECRET_KEY
enabled = yes

[filter:noauth]
paste.filter_factory = cinder.api.middleware.auth:NoAuthMiddleware.factory

[filter:sizelimit]
paste.filter_factory = cinder.api.middleware.sizelimit:RequestBodySizeLimiter.factory

[app:apiv1]
paste.app_factory = cinder.api.v1.router:APIRouter.factory

[app:apiv2]
paste.app_factory = cinder.api.v2.router:APIRouter.factory

[pipeline:apiversions]
pipeline = faultwrap osvolumeversionapp

[app:osvolumeversionapp]
paste.app_factory = cinder.api.versions:Versions.factory

##########
# Shared #
##########

[filter:keystonecontext]
paste.filter_factory = cinder.api.middleware.auth:CinderKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory


2.3.3. policy.json

The policy.json file defines additional access controls that apply to the Block Storage service.
{
    "context_is_admin": "role:admin",
    "admin_or_owner":  "is_admin:True or project_id:%(project_id)s",
    "default": "rule:admin_or_owner",

    "admin_api": "is_admin:True",

    "volume:create": "",
    "volume:delete": "",
    "volume:get": "",
    "volume:get_all": "",
    "volume:get_volume_metadata": "",
    "volume:get_volume_admin_metadata": "rule:admin_api",
    "volume:delete_volume_admin_metadata": "rule:admin_api",
    "volume:update_volume_admin_metadata": "rule:admin_api",
    "volume:get_snapshot": "",
    "volume:get_all_snapshots": "",
    "volume:extend": "",
    "volume:update_readonly_flag": "",
    "volume:retype": "",

    "volume_extension:types_manage": "rule:admin_api",
    "volume_extension:types_extra_specs": "rule:admin_api",
    "volume_extension:volume_type_access": "",
    "volume_extension:volume_type_access:addProjectAccess": "rule:admin_api",
    "volume_extension:volume_type_access:removeProjectAccess": "rule:admin_api",
    "volume_extension:volume_type_encryption": "rule:admin_api",
    "volume_extension:volume_encryption_metadata": "rule:admin_or_owner",
    "volume_extension:extended_snapshot_attributes": "",
    "volume_extension:volume_image_metadata": "",

    "volume_extension:quotas:show": "",
    "volume_extension:quotas:update": "rule:admin_api",
    "volume_extension:quota_classes": "",

    "volume_extension:volume_admin_actions:reset_status": "rule:admin_api",
    "volume_extension:snapshot_admin_actions:reset_status": "rule:admin_api",
    "volume_extension:backup_admin_actions:reset_status": "rule:admin_api",
    "volume_extension:volume_admin_actions:force_delete": "rule:admin_api",
    "volume_extension:volume_admin_actions:force_detach": "rule:admin_api",
    "volume_extension:snapshot_admin_actions:force_delete": "rule:admin_api",
    "volume_extension:volume_admin_actions:migrate_volume": "rule:admin_api",
    "volume_extension:volume_admin_actions:migrate_volume_completion": "rule:admin_api",

    "volume_extension:volume_host_attribute": "rule:admin_api",
    "volume_extension:volume_tenant_attribute": "rule:admin_or_owner",
    "volume_extension:volume_mig_status_attribute": "rule:admin_api",
    "volume_extension:hosts": "rule:admin_api",
    "volume_extension:services": "rule:admin_api",

    "volume_extension:volume_manage": "rule:admin_api",
    "volume_extension:volume_unmanage": "rule:admin_api",

    "volume:services": "rule:admin_api",

    "volume:create_transfer": "",
    "volume:accept_transfer": "",
    "volume:delete_transfer": "",
    "volume:get_all_transfers": "",

    "volume_extension:replication:promote": "rule:admin_api",
    "volume_extension:replication:reenable": "rule:admin_api",

    "backup:create" : "",
    "backup:delete": "",
    "backup:get": "",
    "backup:get_all": "",
    "backup:restore": "",
    "backup:backup-import": "rule:admin_api",
    "backup:backup-export": "rule:admin_api",

    "snapshot_extension:snapshot_actions:update_snapshot_status": "",

    "consistencygroup:create" : "group:nobody",
    "consistencygroup:delete": "group:nobody",
    "consistencygroup:update": "group:nobody",
    "consistencygroup:get": "group:nobody",
    "consistencygroup:get_all": "group:nobody",

    "consistencygroup:create_cgsnapshot" : "group:nobody",
    "consistencygroup:delete_cgsnapshot": "group:nobody",
    "consistencygroup:get_cgsnapshot": "group:nobody",
    "consistencygroup:get_all_cgsnapshots": "group:nobody",

    "scheduler_extension:scheduler_stats:get_pools" : "rule:admin_api"
}
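Each key maps an API action to an access rule; an empty rule permits any authenticated user. For example, a hypothetical change that restricts volume extension to administrators would replace the volume:extend rule as follows:
    "volume:extend": "rule:admin_api",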

2.3.4. rootwrap.conf

The rootwrap.conf file defines configuration values used by the rootwrap script when the Block Storage service must escalate its privileges to those of the root user.
# Configuration for cinder-rootwrap
# This file should be owned by (and only-writeable by) the root user

[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/cinder/rootwrap.d,/usr/share/cinder/rootwrap

# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin

# Enable logging to syslog
# Default value is False
use_syslog=False

# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, local0, local1...
# Default value is 'syslog'
syslog_log_facility=syslog

# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
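The Block Storage services invoke the rootwrap script through sudo. As an illustrative sketch (the exact paths depend on your packaging), the matching sudoers entry typically looks like the following:
cinder ALL = (root) NOPASSWD: /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf *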

2.4. Log files used by Block Storage

The corresponding log file of each Block Storage service is stored in the /var/log/cinder/ directory of the host on which each service runs.

Table 2.26. Log files used by Block Storage services

Log file Service/interface
api.log openstack-cinder-api
cinder-manage.log cinder-manage
scheduler.log openstack-cinder-scheduler
volume.log openstack-cinder-volume

2.5. Fibre Channel Zone Manager

The Fibre Channel Zone Manager allows FC SAN zone and access control management in conjunction with Fibre Channel block storage. The configuration of the Fibre Channel Zone Manager and the various zone drivers is described in this section.

2.5.1. Configure Block Storage to use Fibre Channel Zone Manager

If Block Storage is configured to use a Fibre Channel volume driver that supports Zone Manager, update cinder.conf to add the following configuration options to enable Fibre Channel Zone Manager.
Make the following changes in the /etc/cinder/cinder.conf file.

Table 2.27. Description of zoning configuration options

Configuration option = Default value Description
[DEFAULT]
zoning_mode = none (StrOpt) FC Zoning mode configured
[fc-zone-manager]
fc_fabric_names = None (StrOpt) Comma separated list of Fibre Channel fabric names. This list of names is used to retrieve other SAN credentials for connecting to each SAN fabric
fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService (StrOpt) FC SAN Lookup Service
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver (StrOpt) FC Zone Driver responsible for zone management
zoning_policy = initiator-target (StrOpt) Zoning policy configured by user; valid values include "initiator-target" or "initiator"
To use different Fibre Channel Zone Drivers, use the parameters described in this section.
Note
When a multi-backend configuration is used, provide the zoning_mode configuration option as part of the volume driver configuration, in the section where the volume_driver option is specified.
Note
The default value of zoning_mode is none; change it to fabric to allow fabric zoning.
Note
zoning_policy can be configured as initiator-target or initiator.
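For example, a minimal cinder.conf excerpt that enables fabric zoning with the Brocade driver might look like the following; the fabric name EXAMPLE_FABRIC is a placeholder for a name defined in your own fabric group sections:
[DEFAULT]
zoning_mode = fabric

[fc-zone-manager]
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService
fc_fabric_names = EXAMPLE_FABRIC
zoning_policy = initiator-target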

2.5.2. Brocade Fibre Channel Zone Driver

The Brocade Fibre Channel Zone Driver performs zoning operations through SSH. Configure the Brocade Zone Driver and lookup service by specifying the following parameters:

Table 2.28. Description of zoning manager configuration options

Configuration option = Default value Description
[fc-zone-manager]
brcd_sb_connector = cinder.zonemanager.drivers.brocade.brcd_fc_zone_client_cli.BrcdFCZoneClientCLI (StrOpt) Southbound connector for zoning operation
Configure SAN fabric parameters in the form of fabric groups as described in the example below:

Table 2.29. Description of zoning fabrics configuration options

Configuration option = Default value Description
[BRCD_FABRIC_EXAMPLE]
fc_fabric_address = (StrOpt) Management IP of fabric
fc_fabric_password = (StrOpt) Password for user
fc_fabric_port = 22 (IntOpt) Connecting port
fc_fabric_user = (StrOpt) Fabric user ID
principal_switch_wwn = None (StrOpt) Principal switch WWN of the fabric
zone_activate = True (BoolOpt) Overridden zoning activation state
zone_name_prefix = None (StrOpt) Overridden zone name prefix
zoning_policy = initiator-target (StrOpt) Overridden zoning policy
Note
Define a fabric group for each fabric, using the fabric names listed in the fc_fabric_names configuration option as the group names.
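A hypothetical fabric group section might look like the following; the address, user name, and password are placeholders for your own fabric credentials:
[BRCD_FABRIC_EXAMPLE]
fc_fabric_address = 10.0.0.10
fc_fabric_user = zoneadmin
fc_fabric_password = password
fc_fabric_port = 22
zoning_policy = initiator-target
zone_activate = True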

2.5.2.1. System requirements

The Brocade Fibre Channel Zone Driver requires firmware version FOS v6.4 or later.
As a best practice for zone management, use a user account with the zoneadmin role. Users with the admin role (including the default admin user account) are limited to a maximum of two concurrent SSH sessions.
For information about how to manage Brocade Fibre Channel switches, see the Brocade Fabric OS user documentation.

2.5.3. Cisco Fibre Channel Zone Driver

The Cisco Fibre Channel Zone Driver automates zoning operations through SSH. Configure the Cisco Zone Driver, Cisco Southbound connector, FC SAN lookup service, and fabric name.
Set the following options in the cinder.conf configuration file.
[fc-zone-manager]
zone_driver = cinder.zonemanager.drivers.cisco.cisco_fc_zone_driver.CiscoFCZoneDriver
fc_san_lookup_service = cinder.zonemanager.drivers.cisco.cisco_fc_san_lookup_service.CiscoFCSanLookupService
fc_fabric_names = CISCO_FABRIC_EXAMPLE
cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI

Table 2.30. Description of cisco zoning manager configuration options

Configuration option = Default value Description
[fc-zone-manager]
cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI (StrOpt) Southbound connector for zoning operation
Configure SAN fabric parameters in the form of fabric groups as described in the example below:

Table 2.31. Description of cisco zoning fabrics configuration options

Configuration option = Default value Description
[CISCO_FABRIC_EXAMPLE]
cisco_fc_fabric_address = (StrOpt) Management IP of fabric
cisco_fc_fabric_password = (StrOpt) Password for user
cisco_fc_fabric_port = 22 (IntOpt) Connecting port
cisco_fc_fabric_user = (StrOpt) Fabric user ID
cisco_zone_activate = True (BoolOpt) Overridden zoning activation state
cisco_zone_name_prefix = None (StrOpt) Overridden zone name prefix
cisco_zoning_policy = initiator-target (StrOpt) Overridden zoning policy
cisco_zoning_vsan = None (StrOpt) VSAN of the Fabric
Note
Define a fabric group for each fabric, using the fabric names listed in the fc_fabric_names configuration option as the group names.
The Cisco Fibre Channel Zone Driver supports basic and enhanced zoning modes. The zoning VSAN must exist with an active zone set whose name is the same as the fc_fabric_names option.
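A hypothetical fabric group section might look like the following; the address, credentials, and VSAN number are placeholders:
[CISCO_FABRIC_EXAMPLE]
cisco_fc_fabric_address = 10.0.0.20
cisco_fc_fabric_user = admin
cisco_fc_fabric_password = password
cisco_fc_fabric_port = 22
cisco_zoning_policy = initiator-target
cisco_zoning_vsan = 100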

2.5.3.1. System requirements

Cisco MDS 9000 Family Switches.
Cisco MDS NX-OS Release 6.2(9) or later.
For information about how to manage Cisco Fibre Channel switches, see the Cisco MDS 9000 user documentation.

2.6. Additional options

These options can also be set in the cinder.conf file.

Table 2.32. Description of API configuration options

Configuration option = Default value Description
[DEFAULT]
api_paste_config = api-paste.ini (StrOpt) File name for the paste.deploy config for cinder-api
api_rate_limit = True (BoolOpt) Enables or disables rate limit of the API.
az_cache_duration = 3600 (IntOpt) Cache volume availability zones in memory for the provided duration in seconds
backend_host = None (StrOpt) Backend override of host value.
default_timeout = 525600 (IntOpt) Default timeout for CLI operations in minutes. For example, LUN migration is a typical long-running operation, which depends on the LUN size and the load of the array. An upper bound can be set for the specific deployment to avoid an unnecessarily long wait. By default, it is 365 days long.
enable_v1_api = True (BoolOpt) DEPRECATED: Deploy v1 of the Cinder API.
enable_v2_api = True (BoolOpt) Deploy v2 of the Cinder API.
extra_capabilities = {} (StrOpt) User defined capabilities, a JSON formatted string specifying key/value pairs. The key/value pairs can be used by the CapabilitiesFilter to select between backends when requests specify volume types. For example, specifying a service level or the geographical location of a backend, then creating a volume type to allow the user to select by these different properties.
ignore_pool_full_threshold = False (BoolOpt) Force LUN creation even if the full threshold of pool is reached.
management_ips = (StrOpt) List of Management IP addresses (separated by commas)
max_header_line = 16384 (IntOpt) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
osapi_max_limit = 1000 (IntOpt) The maximum number of items that a collection resource returns in a single response
osapi_max_request_body_size = 114688 (IntOpt) Max size for body of a request
osapi_volume_base_URL = None (StrOpt) Base URL that will be presented to users in links to the OpenStack Volume API
osapi_volume_ext_list = (ListOpt) Specify list of extensions to load when using osapi_volume_extension option with cinder.api.contrib.select_extensions
osapi_volume_extension = ['cinder.api.contrib.standard_extensions'] (MultiStrOpt) osapi volume extension to load
osapi_volume_listen = 0.0.0.0 (StrOpt) IP address on which OpenStack Volume API listens
osapi_volume_listen_port = 8776 (IntOpt) Port on which OpenStack Volume API listens
osapi_volume_workers = None (IntOpt) Number of workers for OpenStack Volume API service. The default is equal to the number of CPUs available.
password = (StrOpt) Password for Redis server (optional).
per_volume_size_limit = -1 (IntOpt) Max size allowed per volume, in gigabytes
port = 6379 (IntOpt) Use this port to connect to redis host.
public_endpoint = None (StrOpt) Public url to use for versions endpoint. The default is None, which will use the request's host_url attribute to populate the URL base. If Cinder is operating behind a proxy, you will want to change this to represent the proxy's URL.
query_volume_filters = name, status, metadata, availability_zone (ListOpt) Volume filter options that non-admin users can use to query volumes. Default values are: ['name', 'status', 'metadata', 'availability_zone']
transfer_api_class = cinder.transfer.api.API (StrOpt) The full class name of the volume transfer API class
volume_api_class = cinder.volume.api.API (StrOpt) The full class name of the volume API class to use
volume_name_template = volume-%s (StrOpt) Template string to be used to generate volume names
volume_number_multiplier = -1.0 (FloatOpt) Multiplier used for weighing volume number. Negative numbers mean to spread vs stack.
volume_transfer_key_length = 16 (IntOpt) The number of characters in the autogenerated auth key.
volume_transfer_salt_length = 8 (IntOpt) The number of characters in the salt.
[oslo_middleware]
max_request_body_size = 114688 (IntOpt) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (StrOpt) The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by an SSL termination proxy.
[oslo_policy]
policy_default_rule = default (StrOpt) Default rule. Enforced when a requested rule is not found.
policy_dirs = ['policy.d'] (MultiStrOpt) Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored.
policy_file = policy.json (StrOpt) The JSON file that defines policies.
[oslo_versionedobjects]
fatal_exception_format_errors = False (BoolOpt) Make exception message format errors fatal
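For example, a minimal cinder.conf excerpt that binds the Volume API to a specific address and fixes the worker count might look like the following; the IP address and worker count are placeholders:
[DEFAULT]
osapi_volume_listen = 192.168.0.10
osapi_volume_listen_port = 8776
osapi_volume_workers = 4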

Table 2.33. Description of AMQP configuration options

Configuration option = Default value Description
[DEFAULT]
control_exchange = openstack (StrOpt) The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option.
notification_driver = [] (MultiStrOpt) The driver or drivers to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop
notification_topics = notifications (ListOpt) AMQP topic used for OpenStack notifications.
transport_url = None (StrOpt) A URL representing the messaging driver to use and its full configuration. If not set, we fall back to the rpc_backend option and driver specific configuration.

Table 2.34. Description of authorization configuration options

Configuration option = Default value Description
[DEFAULT]
auth_strategy = keystone (StrOpt) The strategy to use for auth. Supports noauth, keystone, and deprecated.

Table 2.35. Description of authorization token configuration options

Configuration option = Default value Description
[keystone_authtoken]
admin_password = None (StrOpt) Service user password.
admin_tenant_name = admin (StrOpt) Service tenant name.
admin_token = None (StrOpt) This option is deprecated and may be removed in a future release. Single shared secret with the Keystone configuration used for bootstrapping a Keystone installation, or otherwise bypassing the normal authentication process. This option should not be used, use `admin_user` and `admin_password` instead.
admin_user = None (StrOpt) Service username.
auth_admin_prefix = (StrOpt) Prefix to prepend at the beginning of the path. Deprecated, use identity_uri.
auth_host = 127.0.0.1 (StrOpt) Host providing the admin Identity API endpoint. Deprecated, use identity_uri.
auth_plugin = None (StrOpt) Name of the plugin to load
auth_port = 35357 (IntOpt) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_protocol = https (StrOpt) Protocol of the admin Identity API endpoint (http or https). Deprecated, use identity_uri.
auth_section = None (StrOpt) Config Section from which to load plugin specific options
auth_uri = None (StrOpt) Complete public Identity API endpoint.
auth_version = None (StrOpt) API version of the admin Identity API endpoint.
cache = None (StrOpt) Env key for the swift cache.
cafile = None (StrOpt) A PEM encoded Certificate Authority to use when verifying HTTPS connections. Defaults to system CAs.
certfile = None (StrOpt) Required if identity server requires client certificate
check_revocations_for_cached = False (BoolOpt) If true, the revocation list will be checked for cached tokens. This requires that PKI tokens are configured on the identity server.
delay_auth_decision = False (BoolOpt) Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components.
enforce_token_bind = permissive (StrOpt) Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive", but the token is rejected if the bind type is unknown. "required" to require any form of token binding. Finally, the name of a binding method that must be present in tokens.
hash_algorithms = md5 (ListOpt) Hash algorithms to use for hashing PKI tokens. This may be a single algorithm or multiple. The algorithms are those supported by Python standard hashlib.new(). The hashes will be tried in the order given, so put the preferred one first for performance. The result of the first hash will be stored in the cache. This will typically be set to multiple values only while migrating from a less secure algorithm to a more secure one. Once all the old tokens are expired this option should be set to a single value for better performance.
http_connect_timeout = None (IntOpt) Request timeout value for communicating with Identity API server.
http_request_max_retries = 3 (IntOpt) The number of times to retry when communicating with the Identity API server.
identity_uri = None (StrOpt) Complete admin Identity API endpoint. This should specify the unversioned root endpoint e.g. https://localhost:35357/
include_service_catalog = True (BoolOpt) (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header.
insecure = False (BoolOpt) Verify HTTPS connections.
keyfile = None (StrOpt) Required if identity server requires client certificate
memcache_pool_conn_get_timeout = 10 (IntOpt) (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool.
memcache_pool_dead_retry = 300 (IntOpt) (Optional) Number of seconds memcached server is considered dead before it is tried again.
memcache_pool_maxsize = 10 (IntOpt) (Optional) Maximum total number of open connections to every memcached server.
memcache_pool_socket_timeout = 3 (IntOpt) (Optional) Socket timeout in seconds for communicating with a memcached server.
memcache_pool_unused_timeout = 60 (IntOpt) (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed.
memcache_secret_key = None (StrOpt) (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation.
memcache_security_strategy = None (StrOpt) (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.
memcache_use_advanced_pool = False (BoolOpt) (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x.
region_name = None (StrOpt) The region in which the identity server can be found.
revocation_cache_time = 10 (IntOpt) Determines the frequency at which the list of revoked tokens is retrieved from the Identity service (in seconds). A high number of revocation events combined with a low cache duration may significantly reduce performance.
signing_dir = None (StrOpt) Directory used to cache files related to PKI tokens.
token_cache_time = 300 (IntOpt) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely.
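For example, a minimal cinder.conf excerpt configuring token authentication might look like the following; the controller host name and credentials are placeholders:
[keystone_authtoken]
auth_uri = http://controller:5000/
identity_uri = http://controller:35357/
admin_user = cinder
admin_password = CINDER_PASS
admin_tenant_name = services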

Table 2.36. Description of backups configuration options

Configuration option = Default value Description
[DEFAULT]
backup_api_class = cinder.backup.api.API (StrOpt) The full class name of the volume backup API class
backup_compression_algorithm = zlib (StrOpt) Compression algorithm (None to disable)
backup_driver = cinder.backup.drivers.swift (StrOpt) Driver to use for backups.
backup_manager = cinder.backup.manager.BackupManager (StrOpt) Full class name for the Manager for volume backup
backup_metadata_version = 2 (IntOpt) Backup metadata version to be used when backing up volume metadata. If this number is bumped, make sure the service doing the restore supports the new version.
backup_name_template = backup-%s (StrOpt) Template string to be used to generate backup names
backup_object_number_per_notification = 10 (IntOpt) The number of chunks or objects for which one Ceilometer notification will be sent
backup_posix_path = $state_path/backup (StrOpt) Path specifying where to store backups.
backup_service_inithost_offload = False (BoolOpt) Offload pending backup delete during backup service startup.
backup_timer_interval = 120 (IntOpt) Interval, in seconds, between two progress notifications reporting the backup status
backup_topic = cinder-backup (StrOpt) The topic that volume backup nodes listen on
snapshot_name_template = snapshot-%s (StrOpt) Template string to be used to generate snapshot names
snapshot_same_host = True (BoolOpt) Create volume from snapshot at the host where snapshot resides
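For example, a minimal cinder.conf excerpt enabling the Object Storage (swift) backup driver might look like the following; the values shown are illustrative:
[DEFAULT]
backup_driver = cinder.backup.drivers.swift
backup_compression_algorithm = zlib
backup_timer_interval = 120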

Table 2.37. Description of block device configuration options

Configuration option = Default value Description
[DEFAULT]
available_devices = (ListOpt) List of all available devices

Table 2.38. Description of CA and SSL configuration options

Configuration option = Default value Description
[DEFAULT]
ssl_ca_file = None (StrOpt) CA certificate file to use to verify connecting clients
ssl_cert_file = None (StrOpt) Certificate file to use when starting the server securely
ssl_key_file = None (StrOpt) Private key file to use when starting the server securely

Table 2.39. Description of CloudByte volume driver configuration options

Configuration option = Default value Description
[DEFAULT]
cb_account_name = None (StrOpt) CloudByte storage specific account name. This maps to a project name in OpenStack.
cb_add_qosgroup = {'latency': '15', 'iops': '10', 'graceallowed': 'false', 'iopscontrol': 'true', 'memlimit': '0', 'throughput': '0', 'tpcontrol': 'false', 'networkspeed': '0'} (DictOpt) These values will be used for CloudByte storage's addQos API call.
cb_apikey = None (StrOpt) Driver will use this API key to authenticate against the CloudByte storage's management interface.
cb_auth_group = None (StrOpt) This corresponds to the discovery authentication group in CloudByte storage. CHAP users are added to this group. The driver uses the first user found for this group. Default value is None.
cb_confirm_volume_create_retries = 3 (IntOpt) Will confirm a successful volume creation in CloudByte storage by making this number of attempts.
cb_confirm_volume_create_retry_interval = 5 (IntOpt) A retry value in seconds. Will be used by the driver to check if volume creation was successful in CloudByte storage.
cb_confirm_volume_delete_retries = 3 (IntOpt) Will confirm a successful volume deletion in CloudByte storage by making this number of attempts.
cb_confirm_volume_delete_retry_interval = 5 (IntOpt) A retry value in seconds. Will be used by the driver to check if volume deletion was successful in CloudByte storage.
cb_create_volume = {'compression': 'off', 'deduplication': 'off', 'blocklength': '512B', 'sync': 'always', 'protocoltype': 'ISCSI', 'recordsize': '16k'} (DictOpt) These values will be used for CloudByte storage's createVolume API call.
cb_tsm_name = None (StrOpt) This corresponds to the name of Tenant Storage Machine (TSM) in CloudByte storage. A volume will be created in this TSM.

Table 2.40. Description of common configuration options

Configuration option = Default value Description
[DEFAULT]
allow_availability_zone_fallback = False (BoolOpt) If the requested Cinder availability zone is unavailable, fall back to the value of default_availability_zone, then storage_availability_zone, instead of failing.
chap_password = (StrOpt) Password for specified CHAP account name.
chap_username = (StrOpt) CHAP user name.
chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf (StrOpt) Chiscsi (CXT) global defaults configuration file
cinder_internal_tenant_project_id = None (StrOpt) ID of the project which will be used as the Cinder internal tenant.
cinder_internal_tenant_user_id = None (StrOpt) ID of the user to be used in volume operations as the Cinder internal tenant.
client_socket_timeout = 900 (IntOpt) Timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of '0' means wait forever.
compute_api_class = cinder.compute.nova.API (StrOpt) The full class name of the compute API class to use
consistencygroup_api_class = cinder.consistencygroup.api.API (StrOpt) The full class name of the consistencygroup API class
default_availability_zone = None (StrOpt) Default availability zone for new volumes. If not set, the storage_availability_zone option value is used as the default for new volumes.
default_volume_type = None (StrOpt) Default volume type to use
driver_data_namespace = None (StrOpt) Namespace for driver private data values to be saved in.
driver_ssl_cert_verify = False (BoolOpt) If set to True the http client will validate the SSL certificate of the backend endpoint.
enable_force_upload = False (BoolOpt) Enables the Force option on upload_to_image. This enables running upload_volume on in-use volumes for backends that support it.
enable_new_services = True (BoolOpt) Services to be added to the available pool on create
end_time = None (StrOpt) If this option is specified then the end time specified is used instead of the end time of the last completed audit period.
enforce_multipath_for_image_xfer = False (BoolOpt) If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path.
executor_thread_pool_size = 64 (IntOpt) Size of executor thread pool.
host = localhost (StrOpt) Name of this node. This can be an opaque identifier. It is not necessarily a host name, FQDN, or IP address.
iet_conf = /etc/iet/ietd.conf (StrOpt) IET configuration file
iscsi_secondary_ip_addresses = (ListOpt) The list of secondary IP addresses of the iSCSI daemon
managed_replication_target = True (BoolOpt) There are two types of target configurations: managed (replicate to another configured backend) or unmanaged (replicate to a device not managed by Cinder).
max_over_subscription_ratio = 20.0 (FloatOpt) Float representation of the over subscription ratio when thin provisioning is involved. Default ratio is 20.0, meaning provisioned capacity can be 20 times of the total physical capacity. If the ratio is 10.5, it means provisioned capacity can be 10.5 times of the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. A ratio lower than 1.0 will be ignored and the default value will be used instead.
memcached_servers = None (ListOpt) Memcached servers or None for in process cache.
monkey_patch = False (BoolOpt) Enable monkey patching
monkey_patch_modules = (ListOpt) List of modules/decorators to monkey patch
my_ip = 10.0.0.1 (StrOpt) IP address of this host
no_snapshot_gb_quota = False (BoolOpt) Whether snapshots count against gigabyte quota
num_shell_tries = 3 (IntOpt) Number of times to attempt to run flakey shell commands
os_privileged_user_auth_url = None (StrOpt) Auth URL associated with the OpenStack privileged account.
os_privileged_user_name = None (StrOpt) OpenStack privileged account username. Used for requests to other services (such as Nova) that require an account with special rights.
os_privileged_user_password = None (StrOpt) Password associated with the OpenStack privileged account.
os_privileged_user_tenant = None (StrOpt) Tenant name associated with the OpenStack privileged account.
periodic_fuzzy_delay = 60 (IntOpt) Range, in seconds, to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0)
periodic_interval = 60 (IntOpt) Interval, in seconds, between running periodic tasks
replication_api_class = cinder.replication.api.API (StrOpt) The full class name of the volume replication API class
replication_devices = None (ListOpt) List of k/v pairs representing a replication target for this backend device. For unmanaged devices the format is: {'key-1'='val1' 'key-2'='val2'...},{...}; for managed devices it is simply a list of valid configured backend names that the driver supports replicating to: backend-a,backend-b...
report_interval = 10 (IntOpt) Interval, in seconds, between nodes reporting state to datastore
request_timeout = 300 (IntOpt) Global backend request timeout, in seconds
reserved_percentage = 0 (IntOpt) The percentage of backend capacity that is reserved
rootwrap_config = /etc/cinder/rootwrap.conf (StrOpt) Path to the rootwrap configuration file to use for running commands as root
send_actions = False (BoolOpt) Send the volume and snapshot create and delete notifications generated in the specified period.
service_down_time = 60 (IntOpt) Maximum time since last check-in for a service to be considered up
sqlite_clean_db = clean.sqlite (StrOpt) File name of clean sqlite db
ssh_hosts_key_file = $state_path/ssh_known_hosts (StrOpt) File containing SSH host keys for the systems with which Cinder needs to communicate. OPTIONAL: Default=$state_path/ssh_known_hosts
start_time = None (StrOpt) If this option is specified then the start time specified is used instead of the start time of the last completed audit period.
state_path = /var/lib/cinder (StrOpt) Top-level directory for maintaining cinder's state
storage_availability_zone = nova (StrOpt) Availability zone of this node
strict_ssh_host_key_policy = False (BoolOpt) Option to enable strict host key checking. When set to "True" Cinder will only connect to systems with a host key present in the configured "ssh_hosts_key_file". When set to "False" the host key will be saved upon first connection and used for subsequent connections. Default=False
tcp_keepalive = True (BoolOpt) Sets the value of TCP_KEEPALIVE (True/False) for each server socket.
tcp_keepalive_count = None (IntOpt) Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X.
tcp_keepalive_interval = None (IntOpt) Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not supported on OS X.
tcp_keepidle = 600 (IntOpt) Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X.
until_refresh = 0 (IntOpt) Count of reservations until usage is refreshed
use_chap_auth = False (BoolOpt) Option to enable/disable CHAP authentication for targets.
use_forwarded_for = False (BoolOpt) Treat X-Forwarded-For as the canonical remote address. Only enable this if you have a sanitizing proxy.
watch_log_file = False (BoolOpt) (Optional) Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log-file option is specified and Linux platform is used. This option is ignored if log_config_append is set.
wsgi_keep_alive = True (BoolOpt) If False, closes the client socket connection explicitly. Set it to True to maintain backward compatibility. The recommended setting is False.
[keystone_authtoken]
memcached_servers = None (ListOpt) Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process.

Table 2.41. Description of Compute configuration options

Configuration option = Default value Description
[DEFAULT]
nova_api_insecure = False (BoolOpt) Allow to perform insecure SSL requests to nova
nova_ca_certificates_file = None (StrOpt) Location of ca certificates file to use for nova client requests.
nova_catalog_admin_info = compute:Compute Service:adminURL (StrOpt) Same as nova_catalog_info, but for admin endpoint.
nova_catalog_info = compute:Compute Service:publicURL (StrOpt) Match this value when searching for nova in the service catalog. Format is colon-separated values of the form: <service_type>:<service_name>:<endpoint_type>
nova_endpoint_admin_template = None (StrOpt) Same as nova_endpoint_template, but for admin endpoint.
nova_endpoint_template = None (StrOpt) Override service catalog lookup with template for nova endpoint e.g. http://localhost:8774/v2/%(project_id)s
os_region_name = None (StrOpt) Region name of this node

Table 2.42. Description of database configuration options

Configuration option = Default value Description
[DEFAULT]
db_driver = cinder.db (StrOpt) Driver to use for database access
[database]
backend = sqlalchemy (StrOpt) The back end to use for the database.
connection = None (StrOpt) The SQLAlchemy connection string to use to connect to the database.
connection_debug = 0 (IntOpt) Verbosity of SQL debugging information: 0=None, 100=Everything.
connection_trace = False (BoolOpt) Add Python stack traces to SQL as comment strings.
db_inc_retry_interval = True (BoolOpt) If True, increases the interval between retries of a database operation up to db_max_retry_interval.
db_max_retries = 20 (IntOpt) Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count.
db_max_retry_interval = 10 (IntOpt) If db_inc_retry_interval is set, the maximum seconds between retries of a database operation.
db_retry_interval = 1 (IntOpt) Seconds between retries of a database transaction.
idle_timeout = 3600 (IntOpt) Timeout before idle SQL connections are reaped.
max_overflow = None (IntOpt) If set, use this value for max_overflow with SQLAlchemy.
max_pool_size = None (IntOpt) Maximum number of SQL connections to keep open in a pool.
max_retries = 10 (IntOpt) Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.
min_pool_size = 1 (IntOpt) Minimum number of SQL connections to keep open in a pool.
mysql_sql_mode = TRADITIONAL (StrOpt) The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=
pool_timeout = None (IntOpt) If set, use this value for pool_timeout with SQLAlchemy.
retry_interval = 10 (IntOpt) Interval between retries of opening a SQL connection.
slave_connection = None (StrOpt) The SQLAlchemy connection string to use to connect to the slave database.
sqlite_db = oslo.sqlite (StrOpt) The file name to use with SQLite.
sqlite_synchronous = True (BoolOpt) If True, SQLite uses synchronous mode.
use_db_reconnect = False (BoolOpt) Enable the experimental use of database reconnect on connection lost.
use_tpool = False (BoolOpt) Enable the experimental use of thread pooling for all DB API calls
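For example, a minimal cinder.conf excerpt pointing Block Storage at a MySQL database might look like the following; the host name and password are placeholders:
[database]
connection = mysql://cinder:CINDER_DBPASS@controller/cinder
max_retries = 10
idle_timeout = 3600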

Table 2.43. Description of logging configuration options

Configuration option = Default value Description
[DEFAULT]
trace_flags = None (ListOpt) List of options that control which trace info is written to the DEBUG log level to assist developers. Valid values are method and api.

Table 2.44. Description of EMC configuration options

Configuration option = Default value Description
[DEFAULT]
check_max_pool_luns_threshold = False (BoolOpt) Report free_capacity_gb as 0 when the limit to maximum number of pool LUNs is reached. By default, the value is False.
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml (StrOpt) Use this file for cinder EMC plugin configuration data
destroy_empty_storage_group = False (BoolOpt) To destroy storage group when the last LUN is removed from it. By default, the value is False.
force_delete_lun_in_storagegroup = False (BoolOpt) Delete a LUN even if it is in Storage Groups.
initiator_auto_deregistration = False (BoolOpt) Automatically deregister initiators after the related storage group is destroyed. By default, the value is False.
initiator_auto_registration = False (BoolOpt) Automatically register initiators. By default, the value is False.
io_port_list = * (StrOpt) Comma separated iSCSI or FC ports to be used in Nova or Cinder.
iscsi_initiators = (StrOpt) Mapping between hostname and its iSCSI initiator IP addresses.
max_luns_per_storage_group = 255 (IntOpt) Default max number of LUNs in a storage group. By default, the value is 255.
naviseccli_path = (StrOpt) Naviseccli Path.
storage_vnx_authentication_type = global (StrOpt) VNX authentication scope type.
storage_vnx_pool_names = None (StrOpt) Comma-separated list of storage pool names to be used.
storage_vnx_security_file_dir = None (StrOpt) Directory path that contains the VNX security file. Make sure the security file is generated first.
xtremio_array_busy_retry_count = 5 (IntOpt) Number of retries in case array is busy
xtremio_array_busy_retry_interval = 5 (IntOpt) Interval between retries in case array is busy
xtremio_cluster_name = (StrOpt) XMS cluster id in multi-cluster environment

Table 2.45. Description of IBM FlashSystem volume driver configuration options

Configuration option = Default value Description
[DEFAULT]
flashsystem_connection_protocol = FC (StrOpt) Connection protocol should be FC. (Default is FC.)
flashsystem_iscsi_portid = 0 (IntOpt) Default iSCSI Port ID of FlashSystem. (Default port is 0.)
flashsystem_multihostmap_enabled = True (BoolOpt) Allows a vdisk to be mapped to multiple hosts. (Default is True.)
flashsystem_multipath_enabled = False (BoolOpt) Connect with multipath (FC only). (Default is False.)

Table 2.46. Description of IBM SONAS and Storwize V7000 volume driver configuration options

Configuration option = Default value Description
[DEFAULT]
ibmnas_platform_type = v7ku (StrOpt) IBMNAS platform type to be used as backend storage; valid values are - v7ku : for using IBM Storwize V7000 Unified, sonas : for using IBM Scale Out NAS, gpfs-nas : for using NFS based IBM GPFS deployments.
nas_ip = (StrOpt) IP address or Hostname of NAS system.
nas_login = admin (StrOpt) User name to connect to NAS system.
nas_mount_options = None (StrOpt) Options used to mount the storage backend file system where Cinder volumes are stored.
nas_password = (StrOpt) Password to connect to NAS system.
nas_private_key = (StrOpt) Filename of private key to use for SSH authentication.
nas_secure_file_operations = auto (StrOpt) Allow network-attached storage systems to operate in a secure environment where root level access is not permitted. If set to False, access is as the root user and insecure. If set to True, access is not as root. If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto.
nas_secure_file_permissions = auto (StrOpt) Set more secure file permissions on network-attached storage volume files to restrict broad other/world access. If set to False, volumes are created with open permissions. If set to True, volumes are created with permissions for the cinder user and group (660). If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto.
nas_share_path = (StrOpt) Path to the share to use for storing Cinder volumes. For example: "/srv/export1" for an NFS server export available at 10.0.5.10:/srv/export1 .
nas_ssh_port = 22 (IntOpt) SSH port to use to connect to NAS system.

Table 2.47. Description of images configuration options

Configuration option = Default value Description
[DEFAULT]
allowed_direct_url_schemes = (ListOpt) A list of url schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file].
glance_api_insecure = False (BoolOpt) Allow to perform insecure SSL (https) requests to glance
glance_api_servers = $glance_host:$glance_port (ListOpt) A list of the glance API servers available to cinder ([hostname|ip]:port)
glance_api_ssl_compression = False (BoolOpt) Enables or disables negotiation of SSL layer compression. In some cases disabling compression can improve data throughput, such as when high network bandwidth is available and you use compressed image formats like qcow2.
glance_api_version = 1 (IntOpt) Version of the glance API to use
glance_ca_certificates_file = None (StrOpt) Location of ca certificates file to use for glance client requests.
glance_core_properties = checksum, container_format, disk_format, image_name, image_id, min_disk, min_ram, name, size (ListOpt) Default core properties of image
glance_host = $my_ip (StrOpt) Default glance host name or IP
glance_num_retries = 0 (IntOpt) Number of retries when downloading an image from glance
glance_port = 9292 (IntOpt) Default glance port
glance_request_timeout = None (IntOpt) http/https timeout value for glance operations. If no value (None) is supplied here, the glanceclient default value is used.
image_conversion_dir = $state_path/conversion (StrOpt) Directory used for temporary storage during image conversion
image_upload_use_cinder_backend = False (BoolOpt) If set to True, upload-to-image in raw format will create a cloned volume and register its location to the image service, instead of uploading the volume content. The cinder backend and locations support must be enabled in the image service, and glance_api_version must be set to 2.
image_upload_use_internal_tenant = False (BoolOpt) If set to True, the image volume created by upload-to-image will be placed in the internal tenant. Otherwise, the image volume is created in the current context's tenant.
image_volume_cache_enabled = False (BoolOpt) Enable the image volume cache for this backend.
image_volume_cache_max_count = 0 (IntOpt) Max number of entries allowed in the image volume cache. 0 => unlimited.
image_volume_cache_max_size_gb = 0 (IntOpt) Max size of the image volume cache for this backend in GB. 0 => unlimited.
use_multipath_for_image_xfer = False (BoolOpt) Attach/detach volumes in cinder using multipath for volume-to-image and image-to-volume transfers.
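For example, a minimal cinder.conf excerpt pointing Block Storage at an Image service endpoint might look like the following; the host address and retry count are placeholders:
[DEFAULT]
glance_host = 192.168.0.5
glance_api_version = 2
glance_num_retries = 3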

Table 2.48. Description of key manager configuration options

Configuration option = Default value Description
[keymgr]
api_class = cinder.keymgr.conf_key_mgr.ConfKeyManager (StrOpt) The full class name of the key manager API class
encryption_api_url = http://localhost:9311/v1 (StrOpt) Url for encryption service.
encryption_auth_url = http://localhost:5000/v3 (StrOpt) Authentication url for encryption service.
fixed_key = None (StrOpt) Fixed key returned by key manager, specified in hex

Table 2.49. Description of logging configuration options

Configuration option = Default value Description
[DEFAULT]
debug = False (BoolOpt) Print debugging output (set logging level to DEBUG instead of default INFO level).
default_log_levels = amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, requests.packages.urllib3.util.retry=WARN, urllib3.util.retry=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN, taskflow=WARN (ListOpt) List of logger=LEVEL pairs. This option is ignored if log_config_append is set.
fatal_deprecations = False (BoolOpt) Enables or disables fatal status of deprecations.
fatal_exception_format_errors = False (BoolOpt) Make exception message format errors fatal.
instance_format = "[instance: %(uuid)s] " (StrOpt) The format for an instance that is passed with the log message.
instance_uuid_format = "[instance: %(uuid)s] " (StrOpt) The format for an instance UUID that is passed with the log message.
log_config_append = None (StrOpt) The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log_format).
log_date_format = %Y-%m-%d %H:%M:%S (StrOpt) Format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set.
log_dir = None (StrOpt) (Optional) The base directory used for relative --log-file paths. This option is ignored if log_config_append is set.
log_file = None (StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout. This option is ignored if log_config_append is set.
log_format = None (StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available logging.LogRecord attributes. This option is deprecated; use logging_context_format_string and logging_default_format_string instead. This option is ignored if log_config_append is set.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s (StrOpt) Format string to use for log messages with context.
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d (StrOpt) Data to append to log format when level is DEBUG.
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s (StrOpt) Format string to use for log messages without context.
logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s (StrOpt) Prefix each line of exception output with this format.
publish_errors = False (BoolOpt) Enables or disables publication of error events.
syslog_log_facility = LOG_USER (StrOpt) Syslog facility to receive log lines. This option is ignored if log_config_append is set.
use_stderr = True (BoolOpt) Log output to standard error. This option is ignored if log_config_append is set.
use_syslog = False (BoolOpt) Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set.
use_syslog_rfc_format = True (BoolOpt) (Optional) Enables or disables syslog rfc5424 format for logging. If enabled, prefixes the MSG part of the syslog message with APP-NAME (RFC5424). The format without the APP-NAME is deprecated in Kilo, and will be removed in Mitaka, along with this option. This option is ignored if log_config_append is set.
verbose = True (BoolOpt) If set to false, will disable INFO logging level, making WARNING the default.
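For example, a minimal cinder.conf excerpt that enables debug logging to a dedicated directory might look like the following; the directory path is a placeholder:
[DEFAULT]
debug = True
verbose = True
log_dir = /var/log/cinder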

Table 2.50. Description of NAS configuration options

Configuration option = Default value Description
[DEFAULT]
nas_ip = (StrOpt) IP address or Hostname of NAS system.
nas_login = admin (StrOpt) User name to connect to NAS system.
nas_mount_options = None (StrOpt) Options used to mount the storage backend file system where Cinder volumes are stored.
nas_password = (StrOpt) Password to connect to NAS system.
nas_private_key = (StrOpt) Filename of private key to use for SSH authentication.
nas_secure_file_operations = auto (StrOpt) Allow network-attached storage systems to operate in a secure environment where root level access is not permitted. If set to False, access is as the root user and insecure. If set to True, access is not as root. If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto.
nas_secure_file_permissions = auto (StrOpt) Set more secure file permissions on network-attached storage volume files to restrict broad other/world access. If set to False, volumes are created with open permissions. If set to True, volumes are created with permissions for the cinder user and group (660). If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto.
nas_share_path = (StrOpt) Path to the share to use for storing Cinder volumes. For example: "/srv/export1" for an NFS server export available at 10.0.5.10:/srv/export1 .
nas_ssh_port = 22 (IntOpt) SSH port to use to connect to NAS system.

Table 2.51. Description of Open vStorage driver configuration options

Configuration option = Default value Description
[DEFAULT]
vpool_name = (StrOpt) Vpool to use for volumes; the backend is defined by the vpool, not by the driver.

Table 2.52. Description of oslo_middleware configuration options

Configuration option = Default value Description
[oslo_middleware]
max_request_body_size = 114688 (IntOpt) The maximum body size for each request, in bytes.

Table 2.53. Description of profiler configuration options

Configuration option = Default value Description
[profiler]
profiler_enabled = False (BoolOpt) If False fully disable profiling feature.
trace_sqlalchemy = False (BoolOpt) If False doesn't trace SQL requests.

Table 2.54. Description of Pure Storage driver configuration options

Configuration option = Default value Description
[DEFAULT]
pure_api_token = None (StrOpt) REST API authorization token.

Table 2.55. Description of Qpid configuration options

Configuration option = Default value Description
[oslo_messaging_qpid]
amqp_auto_delete = False (BoolOpt) Auto-delete queues in AMQP.
amqp_durable_queues = False (BoolOpt) Use durable queues in AMQP.
qpid_heartbeat = 60 (IntOpt) Seconds between connection keepalive heartbeats.
qpid_hostname = localhost (StrOpt) Qpid broker hostname.
qpid_hosts = $qpid_hostname:$qpid_port (ListOpt) Qpid HA cluster host:port pairs.
qpid_password = (StrOpt) Password for Qpid connection.
qpid_port = 5672 (IntOpt) Qpid broker port.
qpid_protocol = tcp (StrOpt) Transport to use, either 'tcp' or 'ssl'.
qpid_receiver_capacity = 1 (IntOpt) The number of prefetched messages held by receiver.
qpid_sasl_mechanisms = (StrOpt) Space separated list of SASL mechanisms to use for auth.
qpid_tcp_nodelay = True (BoolOpt) Whether to disable the Nagle algorithm.
qpid_topology_version = 1 (IntOpt) The qpid topology version to use. Version 1 is what was originally used by impl_qpid. Version 2 includes some backwards-incompatible changes that allow broker federation to work. Users should update to version 2 when they are able to take everything down, as it requires a clean break.
qpid_username = (StrOpt) Username for Qpid connection.
send_single_reply = False (BoolOpt) Send a single AMQP reply to a call message. The current behavior, inherited from oslo-incubator, is to send two AMQP replies: the first one with the payload, and a second one to ensure the other side has finished sending the payload. The double reply is planned for removal in the N release, but backward compatibility must be kept in the meantime; this option provides that compatibility. It defaults to False in Liberty and can be turned on for early adopters with new installations or for testing. This option will be removed in the Mitaka release.

Table 2.56. Description of quota configuration options

Configuration option = Default value Description
[DEFAULT]
max_age = 0 (IntOpt) Number of seconds between subsequent usage refreshes
quota_backup_gigabytes = 1000 (IntOpt) Total amount of storage, in gigabytes, allowed for backups per project
quota_backups = 10 (IntOpt) Number of volume backups allowed per project
quota_consistencygroups = 10 (IntOpt) Number of consistencygroups allowed per project
quota_driver = cinder.quota.DbQuotaDriver (StrOpt) Default driver to use for quota checks
quota_gigabytes = 1000 (IntOpt) Total amount of storage, in gigabytes, allowed for volumes and snapshots per project
quota_snapshots = 10 (IntOpt) Number of volume snapshots allowed per project
quota_volumes = 10 (IntOpt) Number of volumes allowed per project
reservation_expire = 86400 (IntOpt) Number of seconds until a reservation expires
use_default_quota_class = True (BoolOpt) Enables or disables use of default quota class with default quota.
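For example, to double the default per-project volume, snapshot, and gigabyte quotas, set the following in the [DEFAULT] section of cinder.conf; the values are illustrative:
quota_volumes=20
quota_snapshots=20
quota_gigabytes=2000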

Table 2.57. Description of RabbitMQ configuration options

Configuration option = Default value Description
[oslo_messaging_rabbit]
amqp_auto_delete = False (BoolOpt) Auto-delete queues in AMQP.
amqp_durable_queues = False (BoolOpt) Use durable queues in AMQP.
fake_rabbit = False (BoolOpt) Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
heartbeat_rate = 2 (IntOpt) How many times during the heartbeat_timeout_threshold interval to check the heartbeat.
heartbeat_timeout_threshold = 60 (IntOpt) Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
kombu_reconnect_delay = 1.0 (FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
kombu_reconnect_timeout = 60 (IntOpt) How long to wait before considering a reconnect attempt to have failed. This value should not be longer than rpc_response_timeout.
kombu_ssl_ca_certs = (StrOpt) SSL certification authority file (valid only if SSL enabled).
kombu_ssl_certfile = (StrOpt) SSL cert file (valid only if SSL enabled).
kombu_ssl_keyfile = (StrOpt) SSL key file (valid only if SSL enabled).
kombu_ssl_version = (StrOpt) SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.
rabbit_ha_queues = False (BoolOpt) Use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database.
rabbit_host = localhost (StrOpt) The RabbitMQ broker address where a single node is used.
rabbit_hosts = $rabbit_host:$rabbit_port (ListOpt) RabbitMQ HA cluster host:port pairs.
rabbit_login_method = AMQPLAIN (StrOpt) The RabbitMQ login method.
rabbit_max_retries = 0 (IntOpt) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count).
rabbit_password = guest (StrOpt) The RabbitMQ password.
rabbit_port = 5672 (IntOpt) The RabbitMQ broker port where a single node is used.
rabbit_retry_backoff = 2 (IntOpt) How long to backoff for between retries when connecting to RabbitMQ.
rabbit_retry_interval = 1 (IntOpt) How frequently to retry connecting with RabbitMQ.
rabbit_use_ssl = False (BoolOpt) Connect over SSL for RabbitMQ.
rabbit_userid = guest (StrOpt) The RabbitMQ userid.
rabbit_virtual_host = / (StrOpt) The RabbitMQ virtual host.
send_single_reply = False (BoolOpt) Send a single AMQP reply to a call message. The current behavior, inherited from oslo-incubator, is to send two AMQP replies: the first one with the payload, and a second one to ensure the other side has finished sending the payload. The double reply is planned for removal in the N release, but backward compatibility must be kept in the meantime; this option provides that compatibility. It defaults to False in Liberty and can be turned on for early adopters with new installations or for testing. This option will be removed in the Mitaka release.

Table 2.58. Description of Redis configuration options

Configuration option = Default value Description
[matchmaker_redis]
host = 127.0.0.1 (StrOpt) Host to locate redis.
password = (StrOpt) Password for Redis server (optional).
port = 6379 (IntOpt) Use this port to connect to redis host.

Table 2.59. Description of RPC configuration options

Configuration option = Default value Description
[DEFAULT]
rpc_backend = rabbit (StrOpt) The messaging driver to use, defaults to rabbit. Other drivers include qpid and zmq.
rpc_cast_timeout = 30 (IntOpt) Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
rpc_conn_pool_size = 30 (IntOpt) Size of RPC connection pool.
rpc_poll_timeout = 1 (IntOpt) The default number of seconds that poll should wait. Poll raises a timeout exception when the timeout expires.
rpc_response_timeout = 60 (IntOpt) Seconds to wait for a response from a call.
volume_topic = cinder-volume (StrOpt) The topic that volume nodes listen on
[oslo_concurrency]
disable_process_locking = False (BoolOpt) Enables or disables inter-process locks.
lock_path = None (StrOpt) Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set.
[oslo_messaging_amqp]
allow_insecure_clients = False (BoolOpt) Accept clients using either SSL or plain TCP
broadcast_prefix = broadcast (StrOpt) address prefix used when broadcasting to all servers
container_name = None (StrOpt) Name for the AMQP container
group_request_prefix = unicast (StrOpt) address prefix when sending to any server in group
idle_timeout = 0 (IntOpt) Timeout for inactive connections (in seconds)
password = (StrOpt) Password for message broker authentication
sasl_config_dir = (StrOpt) Path to directory that contains the SASL configuration
sasl_config_name = (StrOpt) Name of configuration file (without .conf suffix)
sasl_mechanisms = (StrOpt) Space separated list of acceptable SASL mechanisms
server_request_prefix = exclusive (StrOpt) address prefix used when sending to a specific server
ssl_ca_file = (StrOpt) CA certificate PEM file to verify server certificate
ssl_cert_file = (StrOpt) Identifying certificate PEM file to present to clients
ssl_key_file = (StrOpt) Private key PEM file used to sign cert_file certificate
ssl_key_password = None (StrOpt) Password for decrypting ssl_key_file (if encrypted)
trace = False (BoolOpt) Debug: dump AMQP frames to stdout
username = (StrOpt) User name for message broker authentication

Table 2.60. Description of SAN configuration options

Configuration option = Default value Description
[DEFAULT]
san_clustername = (StrOpt) Cluster name to use for creating volumes
san_ip = (StrOpt) IP address of SAN controller
san_is_local = False (BoolOpt) Execute commands locally instead of over SSH; use if the volume service is running on the SAN device
san_login = admin (StrOpt) Username for SAN controller
san_password = (StrOpt) Password for SAN controller
san_private_key = (StrOpt) Filename of private key to use for SSH authentication
san_secondary_ip = None (StrOpt) VNX secondary SP IP Address.
san_ssh_port = 22 (IntOpt) SSH port to use with SAN
san_thin_provision = True (BoolOpt) Use thin provisioning for SAN volumes?
ssh_conn_timeout = 30 (IntOpt) SSH connection timeout in seconds
ssh_max_pool_conn = 5 (IntOpt) Maximum ssh connections in the pool
ssh_min_pool_conn = 1 (IntOpt) Minimum ssh connections in the pool

Table 2.61. Description of scheduler configuration options

Configuration option = Default value Description
[DEFAULT]
filter_function = None (StrOpt) String representation for an equation that will be used to filter hosts. Only used when the driver filter is set to be used by the Cinder scheduler (see the example following this table).
goodness_function = None (StrOpt) String representation for an equation that will be used to determine the goodness of a host. Only used when the goodness weigher is set to be used by the Cinder scheduler.
scheduler_default_filters = AvailabilityZoneFilter, CapacityFilter, CapabilitiesFilter (ListOpt) Which filter class names to use for filtering hosts when not specified in the request.
scheduler_default_weighers = CapacityWeigher (ListOpt) Which weigher class names to use for weighing hosts.
scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler (StrOpt) Default scheduler driver to use
scheduler_host_manager = cinder.scheduler.host_manager.HostManager (StrOpt) The scheduler host manager class to use
scheduler_json_config_location = (StrOpt) Absolute path to scheduler configuration JSON file.
scheduler_manager = cinder.scheduler.manager.SchedulerManager (StrOpt) Full class name for the Manager for scheduler
scheduler_max_attempts = 3 (IntOpt) Maximum number of attempts to schedule a volume
scheduler_topic = cinder-scheduler (StrOpt) The topic that scheduler nodes listen on
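As referenced in the filter_function description above, both equations are plain string expressions evaluated by the Cinder scheduler. The following sketch assumes the DriverFilter and GoodnessWeigher classes are enabled and uses hypothetical equations; adapt them to the statistics your driver reports:
scheduler_default_filters=AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,DriverFilter
scheduler_default_weighers=CapacityWeigher,GoodnessWeigher
filter_function="volume.size < 10"
goodness_function="(volume.size < 5) ? 100 : 25"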

Table 2.62. Description of SCST volume driver configuration options

Configuration option = Default value Description
[DEFAULT]
scst_target_driver = iscsi (StrOpt) SCST target implementation can choose from multiple SCST target drivers.
scst_target_iqn_name = None (StrOpt) Certain iSCSI targets have predefined target names; the SCST target driver uses this name.

Table 2.63. Description of Scality REST Block storage driver configuration options

Configuration option = Default value Description
[DEFAULT]
srb_base_urls = None (StrOpt) Comma-separated list of REST server IPs to connect to (for example, http://IP1/,http://IP2:81/path).

Table 2.64. Description of storage configuration options

Configuration option = Default value Description
[DEFAULT]
allocated_capacity_weight_multiplier = -1.0 (FloatOpt) Multiplier used for weighing volume capacity. Negative numbers mean to stack vs spread.
capacity_weight_multiplier = 1.0 (FloatOpt) Multiplier used for weighing volume capacity. Negative numbers mean to stack vs spread.
enabled_backends = None (ListOpt) A list of backend names to use. Each backend name must be backed by a unique [CONFIG] group with its options (see the example following this table)
iscsi_helper = tgtadm (StrOpt) iSCSI target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI support, scstadmin for SCST target support, iseradm for the ISER protocol, ietadm for iSCSI Enterprise Target, iscsictl for Chelsio iSCSI Target or fake for testing.
iscsi_iotype = fileio (StrOpt) Sets the behavior of the iSCSI target to perform either blockio or fileio. Optionally, auto can be set, and Cinder will autodetect the type of the backing device.
iscsi_ip_address = $my_ip (StrOpt) The IP address that the iSCSI daemon is listening on
iscsi_port = 3260 (IntOpt) The port that the iSCSI daemon is listening on
iscsi_protocol = iscsi (StrOpt) Determines the iSCSI protocol for new iSCSI volumes, created with tgtadm or lioadm target helpers. In order to enable RDMA, this parameter should be set with the value "iser". The supported iSCSI protocol values are "iscsi" and "iser".
iscsi_target_flags = (StrOpt) Sets the target-specific flags for the iSCSI target. Only used for tgtadm to specify backing device flags using bsoflags option. The specified string is passed as is to the underlying tool.
iscsi_target_prefix = iqn.2010-10.org.openstack: (StrOpt) Prefix for iSCSI volumes
iscsi_write_cache = on (StrOpt) Sets the behavior of the iSCSI target to either perform write-back (on) or write-through (off). This parameter is valid if iscsi_helper is set to tgtadm or iseradm.
iser_helper = tgtadm (StrOpt) The name of the iSER target user-land tool to use
iser_ip_address = $my_ip (StrOpt) The IP address that the iSER daemon is listening on
iser_port = 3260 (IntOpt) The port that the iSER daemon is listening on
iser_target_prefix = iqn.2010-10.org.openstack: (StrOpt) Prefix for iSER volumes
migration_create_volume_timeout_secs = 300 (IntOpt) Timeout for creating the volume to migrate to when performing volume migration (seconds)
num_iser_scan_tries = 3 (IntOpt) The maximum number of times to rescan the iSER target to find a volume
num_volume_device_scan_tries = 3 (IntOpt) The maximum number of times to rescan targets to find a volume
volume_backend_name = None (StrOpt) The backend name for a given driver implementation
volume_clear = zero (StrOpt) Method used to wipe old volumes
volume_clear_ionice = None (StrOpt) The flag to pass to ionice to alter the i/o priority of the process used to zero a volume after deletion, for example "-c3" for idle only priority.
volume_clear_size = 0 (IntOpt) Size in MiB to wipe at start of old volumes. 0 => all
volume_copy_blkio_cgroup_name = cinder-volume-copy (StrOpt) The blkio cgroup name to be used to limit bandwidth of volume copy
volume_copy_bps_limit = 0 (IntOpt) The upper limit of bandwidth of volume copy. 0 => unlimited
volume_dd_blocksize = 1M (StrOpt) The default block size used when copying/clearing volumes
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (StrOpt) Driver to use for volume creation
volume_manager = cinder.volume.manager.VolumeManager (StrOpt) Full class name for the Manager for volume
volume_service_inithost_offload = False (BoolOpt) Offload pending volume delete during volume service startup
volume_usage_audit_period = month (StrOpt) Time period for which to generate volume usages. The options are hour, day, month, or year.
volumes_dir = $state_path/volumes (StrOpt) Volume configuration file storage directory
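As noted in the enabled_backends description, each backend name maps to its own configuration group. A minimal sketch of a two-backend LVM setup follows; the group names, volume group names, and backend names are hypothetical:
enabled_backends=lvm-fast,lvm-slow

[lvm-fast]
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group=cinder-volumes-fast
volume_backend_name=LVM_FAST

[lvm-slow]
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group=cinder-volumes-slow
volume_backend_name=LVM_SLOW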

Table 2.65. Description of Violin volume driver configuration options

Configuration option = Default value Description
[DEFAULT]
gateway_mga = None (StrOpt) IP address or hostname of mg-a
gateway_mgb = None (StrOpt) IP address or hostname of mg-b
use_igroups = False (BoolOpt) Use igroups to manage targets and initiators
violin_request_timeout = 300 (IntOpt) Global backend request timeout, in seconds.

Table 2.66. Description of zones configuration options

Configuration option = Default value Description
[DEFAULT]
cloned_volume_same_az = True (BoolOpt) Ensure that the new volumes are in the same AZ as the snapshot or source volume

2.7. New, updated, and deprecated options in Liberty for OpenStack Block Storage

Table 2.67. New options

Option = default value (Type) Help string
[DEFAULT] allow_availability_zone_fallback = False (BoolOpt) If the requested Cinder availability zone is unavailable, fall back to the value of default_availability_zone, then storage_availability_zone, instead of failing.
[DEFAULT] backup_posix_path = $state_path/backup (StrOpt) Path specifying where to store backups.
[DEFAULT] backup_service_inithost_offload = False (BoolOpt) Offload pending backup delete during backup service startup.
[DEFAULT] backup_swift_ca_cert_file = None (StrOpt) Location of the CA certificate file to use for swift client requests.
[DEFAULT] blockbridge_api_host = None (StrOpt) IP address/hostname of Blockbridge API.
[DEFAULT] blockbridge_api_port = None (IntOpt) Override HTTPS port to connect to Blockbridge API server.
[DEFAULT] blockbridge_auth_password = None (StrOpt) Blockbridge API password (for auth scheme 'password')
[DEFAULT] blockbridge_auth_scheme = token (StrOpt) Blockbridge API authentication scheme (token or password)
[DEFAULT] blockbridge_auth_token = None (StrOpt) Blockbridge API token (for auth scheme 'token')
[DEFAULT] blockbridge_auth_user = None (StrOpt) Blockbridge API user (for auth scheme 'password')
[DEFAULT] blockbridge_default_pool = None (StrOpt) Default pool name if unspecified.
[DEFAULT] blockbridge_pools = {'OpenStack': '+openstack'} (DictOpt) Defines the set of exposed pools and their associated backend query strings
[DEFAULT] cb_auth_group = None (StrOpt) This corresponds to the discovery authentication group in CloudByte storage. Chap users are added to this group. Driver uses the first user found for this group. Default value is None.
[DEFAULT] cb_confirm_volume_delete_retries = 3 (IntOpt) Will confirm a successful volume deletion in CloudByte storage by making this number of attempts.
[DEFAULT] cb_confirm_volume_delete_retry_interval = 5 (IntOpt) A retry value in seconds. Will be used by the driver to check if volume deletion was successful in CloudByte storage.
[DEFAULT] cinder_internal_tenant_project_id = None (StrOpt) ID of the project which will be used as the Cinder internal tenant.
[DEFAULT] cinder_internal_tenant_user_id = None (StrOpt) ID of the user to be used in volume operations as the Cinder internal tenant.
[DEFAULT] dell_sc_verify_cert = False (BoolOpt) Enable HTTPS SC certificate verification.
[DEFAULT] dothill_api_protocol = https (StrOpt) DotHill API interface protocol.
[DEFAULT] dothill_backend_name = A (StrOpt) Pool or Vdisk name to use for volume creation.
[DEFAULT] dothill_backend_type = virtual (StrOpt) linear (for Vdisk) or virtual (for Pool).
[DEFAULT] dothill_iscsi_ips = (ListOpt) List of comma-separated target iSCSI IP addresses.
[DEFAULT] dothill_verify_certificate = False (BoolOpt) Whether to verify DotHill array SSL certificate.
[DEFAULT] dothill_verify_certificate_path = None (StrOpt) DotHill array SSL certificate path.
[DEFAULT] drbdmanage_redundancy = 1 (StrOpt) Number of nodes that should replicate the data.
[DEFAULT] driver_ssl_cert_verify = False (BoolOpt) If set to True the http client will validate the SSL certificate of the backend endpoint.
[DEFAULT] enable_force_upload = False (BoolOpt) Enables the Force option on upload_to_image. This enables running upload_volume on in-use volumes for backends that support it.
[DEFAULT] executor_thread_pool_size = 64 (IntOpt) Size of executor thread pool.
[DEFAULT] flashsystem_iscsi_portid = 0 (IntOpt) Default iSCSI Port ID of FlashSystem. (Default port is 0.)
[DEFAULT] glusterfs_backup_mount_point = $state_path/backup_mount (StrOpt) Base dir containing mount point for gluster share.
[DEFAULT] glusterfs_backup_share = None (StrOpt) GlusterFS share in <hostname|ipv4addr|ipv6addr>:<gluster_vol_name> format. Eg: 1.2.3.4:backup_vol
[DEFAULT] hgst_net = Net 1 (IPv4) (StrOpt) Space network name to use for data transfer
[DEFAULT] hgst_redundancy = 0 (StrOpt) Should spaces be redundantly stored (1/0)
[DEFAULT] hgst_space_group = disk (StrOpt) Group to own created spaces
[DEFAULT] hgst_space_mode = 0600 (StrOpt) UNIX mode for created spaces
[DEFAULT] hgst_space_user = root (StrOpt) User to own created spaces
[DEFAULT] hgst_storage_servers = os:gbd0 (StrOpt) Comma separated list of Space storage servers:devices. ex: os1_stor:gbd0,os2_stor:gbd0
[DEFAULT] hitachi_horcm_resource_lock_timeout = 600 (IntOpt) Timeout until a resource lock is released, in seconds. The value must be between 0 and 7200.
[DEFAULT] hpmsa_api_protocol = https (StrOpt) HPMSA API interface protocol.
[DEFAULT] hpmsa_backend_name = A (StrOpt) Pool or Vdisk name to use for volume creation.
[DEFAULT] hpmsa_backend_type = virtual (StrOpt) linear (for Vdisk) or virtual (for Pool).
[DEFAULT] hpmsa_iscsi_ips = (ListOpt) List of comma-separated target iSCSI IP addresses.
[DEFAULT] hpmsa_verify_certificate = False (BoolOpt) Whether to verify HPMSA array SSL certificate.
[DEFAULT] hpmsa_verify_certificate_path = None (StrOpt) HPMSA array SSL certificate path.
[DEFAULT] hpxp_async_copy_check_interval = 10 (IntOpt) Interval to check copy asynchronously
[DEFAULT] hpxp_compute_target_ports = None (ListOpt) Target port names of compute node for host group or iSCSI target
[DEFAULT] hpxp_copy_check_interval = 3 (IntOpt) Interval to check copy
[DEFAULT] hpxp_copy_speed = 3 (IntOpt) Copy speed of storage system
[DEFAULT] hpxp_default_copy_method = FULL (StrOpt) Default copy method of storage system. There are two valid values: "FULL" specifies a full copy; "THIN" specifies a thin copy. Default value is "FULL"
[DEFAULT] hpxp_group_request = False (BoolOpt) Request for creating host group or iSCSI target
[DEFAULT] hpxp_horcm_add_conf = True (BoolOpt) Add to HORCM configuration
[DEFAULT] hpxp_horcm_name_only_discovery = False (BoolOpt) Only discover a specific name of host group or iSCSI target
[DEFAULT] hpxp_horcm_numbers = 200, 201 (ListOpt) Instance numbers for HORCM
[DEFAULT] hpxp_horcm_resource_name = meta_resource (StrOpt) Resource group name of storage system for HORCM
[DEFAULT] hpxp_horcm_user = None (StrOpt) Username of storage system for HORCM
[DEFAULT] hpxp_ldev_range = None (StrOpt) Logical device range of storage system
[DEFAULT] hpxp_pool = None (StrOpt) Pool of storage system
[DEFAULT] hpxp_storage_cli = None (StrOpt) Type of storage command line interface
[DEFAULT] hpxp_storage_id = None (StrOpt) ID of storage system
[DEFAULT] hpxp_target_ports = None (ListOpt) Target port names for host group or iSCSI target
[DEFAULT] hpxp_thin_pool = None (StrOpt) Thin pool of storage system
[DEFAULT] hpxp_zoning_request = False (BoolOpt) Request for FC Zone creating host group
[DEFAULT] ignore_pool_full_threshold = False (BoolOpt) Force LUN creation even if the full threshold of pool is reached.
[DEFAULT] image_upload_use_cinder_backend = False (BoolOpt) If set to True, upload-to-image in raw format will create a cloned volume and register its location to the image service, instead of uploading the volume content. The cinder backend and locations support must be enabled in the image service, and glance_api_version must be set to 2.
[DEFAULT] image_upload_use_internal_tenant = False (BoolOpt) If set to True, the image volume created by upload-to-image will be placed in the internal tenant. Otherwise, the image volume is created in the current context's tenant.
[DEFAULT] image_volume_cache_enabled = False (BoolOpt) Enable the image volume cache for this backend.
[DEFAULT] image_volume_cache_max_count = 0 (IntOpt) Max number of entries allowed in the image volume cache. 0 => unlimited.
[DEFAULT] image_volume_cache_max_size_gb = 0 (IntOpt) Max size of the image volume cache for this backend in GB. 0 => unlimited.
[DEFAULT] infortrend_cli_max_retries = 5 (IntOpt) Maximum number of CLI retries. Default is 5.
[DEFAULT] infortrend_cli_path = /opt/bin/Infortrend/raidcmd_ESDS10.jar (StrOpt) The Infortrend CLI absolute path. By default, it is at /opt/bin/Infortrend/raidcmd_ESDS10.jar
[DEFAULT] infortrend_cli_timeout = 30 (IntOpt) Default timeout for CLI copy operations in minutes. Supported operations: migrate volume, create cloned volume, and create volume from snapshot. By default, it is 30 minutes.
[DEFAULT] infortrend_pools_name = (StrOpt) Comma-separated list of Infortrend RAID pool names.
[DEFAULT] infortrend_provisioning = full (StrOpt) Provisioning type to use for volumes. The supported options are full (the default) or thin.
[DEFAULT] infortrend_slots_a_channels_id = 0,1,2,3,4,5,6,7 (StrOpt) Comma-separated list of Infortrend RAID channel IDs on Slot A for OpenStack usage. By default, channels 0-7 are used.
[DEFAULT] infortrend_slots_b_channels_id = 0,1,2,3,4,5,6,7 (StrOpt) Comma-separated list of Infortrend RAID channel IDs on Slot B for OpenStack usage. By default, channels 0-7 are used.
[DEFAULT] infortrend_tiering = 0 (StrOpt) Tiering level to use for volumes. The supported levels are 0 (the default), 2, 3, and 4.
[DEFAULT] io_port_list = * (StrOpt) Comma separated iSCSI or FC ports to be used in Nova or Cinder.
[DEFAULT] iscsi_target_flags = (StrOpt) Sets the target-specific flags for the iSCSI target. Only used for tgtadm to specify backing device flags using bsoflags option. The specified string is passed as is to the underlying tool.
[DEFAULT] lenovo_api_protocol = https (StrOpt) Lenovo API interface protocol.
[DEFAULT] lenovo_backend_name = A (StrOpt) Pool or Vdisk name to use for volume creation.
[DEFAULT] lenovo_backend_type = virtual (StrOpt) linear (for VDisk) or virtual (for Pool).
[DEFAULT] lenovo_iscsi_ips = (ListOpt) List of comma-separated target iSCSI IP addresses.
[DEFAULT] lenovo_verify_certificate = False (BoolOpt) Whether to verify Lenovo array SSL certificate.
[DEFAULT] lenovo_verify_certificate_path = None (StrOpt) Lenovo array SSL certificate path.
[DEFAULT] managed_replication_target = True (BoolOpt) There are two types of target configurations: managed (replicate to another configured backend) or unmanaged (replicate to a device not managed by Cinder).
[DEFAULT] management_ips = (StrOpt) List of Management IP addresses (separated by commas)
[DEFAULT] nas_volume_prov_type = thin (StrOpt) Provisioning type that will be used when creating volumes.
[DEFAULT] netapp_enable_multiattach = False (BoolOpt) This option specifies whether the driver should allow operations that require multiple attachments to a volume. An example would be live migration of servers that have volumes attached. When enabled, this backend is limited to 256 total volumes in order to guarantee volumes can be accessed by more than one host.
[DEFAULT] netapp_host_type = None (StrOpt) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts.
[DEFAULT] netapp_lun_ostype = None (StrOpt) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created.
[DEFAULT] netapp_lun_space_reservation = enabled (StrOpt) This option determines if storage space is reserved for LUN allocation. If enabled, LUNs are thick provisioned. If space reservation is disabled, storage space is allocated on demand.
[DEFAULT] netapp_pool_name_search_pattern = (.+) (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
[DEFAULT] os_privileged_user_auth_url = None (StrOpt) Auth URL associated with the OpenStack privileged account.
[DEFAULT] password = (StrOpt) Password for Redis server (optional).
[DEFAULT] per_volume_size_limit = -1 (IntOpt) Max size allowed per volume, in gigabytes
[DEFAULT] port = 6379 (IntOpt) Use this port to connect to redis host.
[DEFAULT] query_volume_filters = name, status, metadata, availability_zone (ListOpt) Volume filter options that non-admin users can use to query volumes. Default values are: ['name', 'status', 'metadata', 'availability_zone']
[DEFAULT] rados_connection_interval = 5 (IntOpt) Interval value (in seconds) between connection retries to ceph cluster.
[DEFAULT] rados_connection_retries = 3 (IntOpt) Number of retries if connection to ceph cluster failed.
[DEFAULT] rbd_cluster_name = ceph (StrOpt) The name of ceph cluster
[DEFAULT] replication_devices = None (ListOpt) List of k/v pairs representing a replication target for this backend device. For unmanaged devices the format is: {'key-1'='val1' 'key-2'='val2'...},{...}; for managed devices it is simply a list of valid configured backend_names that the driver supports replicating to: backend-a,backend-b...
[DEFAULT] rpc_conn_pool_size = 30 (IntOpt) Size of RPC connection pool.
[DEFAULT] rpc_poll_timeout = 1 (IntOpt) The default number of seconds that poll should wait. Poll raises a timeout exception when the timeout expires.
[DEFAULT] rpc_zmq_all_req_rep = True (BoolOpt) Use REQ/REP pattern for all methods CALL/CAST/FANOUT.
[DEFAULT] rpc_zmq_concurrency = eventlet (StrOpt) Type of concurrency used. Either "native" or "eventlet"
[DEFAULT] sf_enable_volume_mapping = True (BoolOpt) Create an internal mapping of volume IDs and account. Optimizes lookups and performance at the expense of memory; very large deployments may want to consider setting this to False.
[DEFAULT] sheepdog_store_address = 127.0.0.1 (StrOpt) IP address of sheep daemon.
[DEFAULT] sheepdog_store_port = 7000 (IntOpt) Port of sheep daemon.
[DEFAULT] sio_force_delete = False (BoolOpt) Whether to allow force delete.
[DEFAULT] sio_protection_domain_id = None (StrOpt) Protection domain id.
[DEFAULT] sio_protection_domain_name = None (StrOpt) Protection domain name.
[DEFAULT] sio_rest_server_port = 443 (StrOpt) REST server port.
[DEFAULT] sio_round_volume_capacity = True (BoolOpt) Whether to round volume capacity.
[DEFAULT] sio_server_certificate_path = None (StrOpt) Server certificate path.
[DEFAULT] sio_storage_pool_id = None (StrOpt) Storage pool id.
[DEFAULT] sio_storage_pool_name = None (StrOpt) Storage pool name.
[DEFAULT] sio_storage_pools = None (StrOpt) Storage pools.
[DEFAULT] sio_unmap_volume_before_deletion = False (BoolOpt) Whether to unmap volume before deletion.
[DEFAULT] sio_verify_server_certificate = False (BoolOpt) Whether to verify server certificate.
[DEFAULT] storage_vnx_pool_names = None (StrOpt) Comma-separated list of storage pool names to be used.
[DEFAULT] tintri_api_version = v310 (StrOpt) API version for the storage system
[DEFAULT] tintri_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system
[DEFAULT] tintri_server_password = None (StrOpt) Password for the storage system
[DEFAULT] tintri_server_username = None (StrOpt) User name for the storage system
[DEFAULT] trace_flags = None (ListOpt) List of options that control which trace info is written to the DEBUG log level to assist developers. Valid values are method and api.
[DEFAULT] violin_request_timeout = 300 (IntOpt) Global backend request timeout, in seconds.
[DEFAULT] vzstorage_mount_options = None (ListOpt) Mount options passed to the vzstorage client. See section of the pstorage-mount man page for details.
[DEFAULT] vzstorage_mount_point_base = $state_path/mnt (StrOpt) Base dir containing mount points for vzstorage shares.
[DEFAULT] vzstorage_shares_config = /etc/cinder/vzstorage_shares (StrOpt) File with the list of available vzstorage shares.
[DEFAULT] vzstorage_sparsed_volumes = True (BoolOpt) Create volumes as sparsed files, which take no space, rather than regular files when using raw format; creating regular raw files takes a lot of time.
[DEFAULT] vzstorage_used_ratio = 0.95 (FloatOpt) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination.
[DEFAULT] watch_log_file = False (BoolOpt) (Optional) Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log-file option is specified and Linux platform is used. This option is ignored if log_config_append is set.
[DEFAULT] xtremio_array_busy_retry_count = 5 (IntOpt) Number of retries in case array is busy
[DEFAULT] xtremio_array_busy_retry_interval = 5 (IntOpt) Interval between retries in case array is busy
[DEFAULT] zfssa_cache_directory = os-cinder-cache (StrOpt) Name of directory inside zfssa_nfs_share where cache volumes are stored.
[DEFAULT] zfssa_cache_project = os-cinder-cache (StrOpt) Name of ZFSSA project where cache volumes are stored.
[DEFAULT] zfssa_enable_local_cache = True (BoolOpt) Flag to enable local caching: True, False.
[DEFAULT] zfssa_initiator = (StrOpt) iSCSI initiator IQNs. (comma separated)
[DEFAULT] zfssa_initiator_config = (StrOpt) iSCSI initiators configuration.
[DEFAULT] zfssa_initiator_group = (StrOpt) iSCSI initiator group.
[DEFAULT] zfssa_initiator_password = (StrOpt) Secret of the iSCSI initiator CHAP user.
[DEFAULT] zfssa_initiator_user = (StrOpt) iSCSI initiator CHAP user (name).
[DEFAULT] zfssa_lun_compression = off (StrOpt) Data compression.
[DEFAULT] zfssa_lun_logbias = latency (StrOpt) Synchronous write bias.
[DEFAULT] zfssa_lun_sparse = False (BoolOpt) Flag to enable sparse (thin-provisioned): True, False.
[DEFAULT] zfssa_lun_volblocksize = 8k (StrOpt) Block size.
[DEFAULT] zfssa_pool = None (StrOpt) Storage pool name.
[DEFAULT] zfssa_project = None (StrOpt) Project name.
[DEFAULT] zfssa_replication_ip = (StrOpt) IP address used for replication data (may be the same as the data IP).
[DEFAULT] zfssa_target_group = tgt-grp (StrOpt) iSCSI target group name.
[DEFAULT] zfssa_target_interfaces = None (StrOpt) Network interfaces of iSCSI targets. (comma separated)
[DEFAULT] zfssa_target_password = (StrOpt) Secret of the iSCSI target CHAP user.
[DEFAULT] zfssa_target_portal = None (StrOpt) iSCSI target portal (Data-IP:Port, w.x.y.z:3260).
[DEFAULT] zfssa_target_user = (StrOpt) iSCSI target CHAP user (name).
[DEFAULT] zmq_use_broker = True (BoolOpt) Shows whether zmq-messaging uses broker or not.
[cors] allow_credentials = True (BoolOpt) Indicate that the actual request can include user credentials
[cors] allow_headers = Content-Type, Cache-Control, Content-Language, Expires, Last-Modified, Pragma (ListOpt) Indicate which header field names may be used during the actual request.
[cors] allow_methods = GET, POST, PUT, DELETE, OPTIONS (ListOpt) Indicate which methods can be used during the actual request.
[cors] allowed_origin = None (StrOpt) Indicate whether this resource may be shared with the domain received in the request's "origin" header.
[cors] expose_headers = Content-Type, Cache-Control, Content-Language, Expires, Last-Modified, Pragma (ListOpt) Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers.
[cors] max_age = 3600 (IntOpt) Maximum cache age of CORS preflight requests.
[cors.subdomain] allow_credentials = True (BoolOpt) Indicate that the actual request can include user credentials
[cors.subdomain] allow_headers = Content-Type, Cache-Control, Content-Language, Expires, Last-Modified, Pragma (ListOpt) Indicate which header field names may be used during the actual request.
[cors.subdomain] allow_methods = GET, POST, PUT, DELETE, OPTIONS (ListOpt) Indicate which methods can be used during the actual request.
[cors.subdomain] allowed_origin = None (StrOpt) Indicate whether this resource may be shared with the domain received in the request's "origin" header.
[cors.subdomain] expose_headers = Content-Type, Cache-Control, Content-Language, Expires, Last-Modified, Pragma (ListOpt) Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers.
[cors.subdomain] max_age = 3600 (IntOpt) Maximum cache age of CORS preflight requests.
[keystone_authtoken] region_name = None (StrOpt) The region in which the identity server can be found.
[oslo_messaging_amqp] password = (StrOpt) Password for message broker authentication
[oslo_messaging_amqp] sasl_config_dir = (StrOpt) Path to directory that contains the SASL configuration
[oslo_messaging_amqp] sasl_config_name = (StrOpt) Name of configuration file (without .conf suffix)
[oslo_messaging_amqp] sasl_mechanisms = (StrOpt) Space separated list of acceptable SASL mechanisms
[oslo_messaging_amqp] username = (StrOpt) User name for message broker authentication
[oslo_messaging_qpid] send_single_reply = False (BoolOpt) Send a single AMQP reply to a call message. The current behavior, inherited from oslo-incubator, is to send two AMQP replies: the first one with the payload, and a second one to ensure the other side has finished sending the payload. The double reply is planned for removal in the N release, but backward compatibility must be kept in the meantime; this option provides that compatibility. It defaults to False in Liberty and can be turned on for early adopters with new installations or for testing. This option will be removed in the Mitaka release.
[oslo_messaging_rabbit] kombu_reconnect_timeout = 60 (IntOpt) How long to wait before considering a reconnect attempt to have failed. This value should not be longer than rpc_response_timeout.
[oslo_messaging_rabbit] send_single_reply = False (BoolOpt) Send a single AMQP reply to a call message. The current behavior, inherited from oslo-incubator, is to send two AMQP replies: the first one with the payload, and a second one to ensure the other side has finished sending the payload. The double reply is planned for removal in the N release, but backward compatibility must be kept in the meantime; this option provides that compatibility. It defaults to False in Liberty and can be turned on for early adopters with new installations or for testing. This option will be removed in the Mitaka release.
[oslo_middleware] secure_proxy_ssl_header = X-Forwarded-Proto (StrOpt) The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by an SSL termination proxy.
[oslo_policy] policy_default_rule = default (StrOpt) Default rule. Enforced when a requested rule is not found.
[oslo_policy] policy_dirs = ['policy.d'] (MultiStrOpt) Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored.
[oslo_policy] policy_file = policy.json (StrOpt) The JSON file that defines policies.
[oslo_versionedobjects] fatal_exception_format_errors = False (BoolOpt) Make exception message format errors fatal

Table 2.68. New default values

Option Previous default value New default value
[DEFAULT] auth_strategy noauth keystone
[DEFAULT] default_log_levels amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, requests.packages.urllib3.util.retry=WARN, urllib3.util.retry=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, requests.packages.urllib3.util.retry=WARN, urllib3.util.retry=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN, taskflow=WARN
[DEFAULT] logging_exception_prefix %(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
[DEFAULT] rpc_zmq_matchmaker local redis
[DEFAULT] storwize_svc_npiv_compatibility_mode False True
[DEFAULT] use_syslog_rfc_format False True
[DEFAULT] verbose False True
[DEFAULT] vmware_volume_folder cinder-volumes Volumes
[DEFAULT] volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver cinder.volume.drivers.lvm.LVMVolumeDriver
[matchmaker_redis] password None (empty string)
[oslo_messaging_rabbit] heartbeat_timeout_threshold 0 60

Table 2.69. Deprecated options

Deprecated option New Option
[DEFAULT] use_syslog None
[DEFAULT] osapi_max_request_body_size [oslo_middleware] max_request_body_size
[DEFAULT] eqlx_chap_password [DEFAULT] chap_password
[DEFAULT] datera_api_token None
[DEFAULT] glusterfs_sparsed_volumes [DEFAULT] nas_volume_prov_type
[DEFAULT] glusterfs_qcow2_volumes [DEFAULT] nas_volume_prov_type
[DEFAULT] eqlx_use_chap [DEFAULT] use_chap_auth
[DEFAULT] rpc_thread_pool_size [DEFAULT] executor_thread_pool_size
[DEFAULT] enable_v1_api None
[DEFAULT] netapp_volume_list [DEFAULT] netapp_pool_name_search_pattern
[DEFAULT] netapp_storage_pools [DEFAULT] netapp_pool_name_search_pattern
[DEFAULT] host [DEFAULT] backend_host
[DEFAULT] netapp_eseries_host_type [DEFAULT] netapp_host_type
[DEFAULT] eqlx_chap_login [DEFAULT] chap_username
[DEFAULT] log_format None
[DEFAULT] storage_vnx_pool_name [DEFAULT] storage_vnx_pool_names


[1] The configuration file location may differ.
[2] There is no relative precedence or weight among these four labels.

Chapter 3. Compute

The OpenStack Compute service is a cloud computing fabric controller, which is the main part of an IaaS system. You can use OpenStack Compute to host and manage cloud computing systems. This section describes the OpenStack Compute configuration options.
To configure your Compute installation, you must define configuration options in these files:
  • nova.conf. Contains most of the Compute configuration options. Resides in the /etc/nova/ directory.
  • api-paste.ini. Defines Compute limits. Resides in the /etc/nova/ directory.
  • Related Image service and Identity service management configuration files.

Ephemeral Storage Discrepancy with Ceph

When using Red Hat Ceph as a back end for ephemeral storage, the Compute service does not calculate the amount of available storage correctly. Specifically, Compute simply adds up the amount of available storage without factoring in replication. This results in grossly overstated available storage, which in turn could cause unexpected storage oversubscription.

To determine the correct ephemeral storage capacity, query the Ceph service directly instead. For more information, see BZ#1236473.
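For example, you can query the cluster's real and replicated usage with the Ceph CLI; the pool name (vms) is a placeholder for the pool that backs your ephemeral disks:
# ceph df
# ceph osd pool get vms size
The first command reports global and per-pool usage; the second returns the pool's replication factor, which you can use to derive the actual usable capacity.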

3.1. Overview of nova.conf

You can point a nova-* service at a particular configuration file by passing the --config-file parameter when you run the service. This parameter loads configuration option definitions from the specified configuration file, which might be useful for debugging or performance tuning.
For a list of configuration options, see the tables in this guide.
To learn more about the nova.conf configuration file, review the general purpose configuration options documented in Table 3.18, “Description of common configuration options”.
Important
Do not specify quotes around Nova options.
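For example, write:
rpc_backend=rabbit
not:
rpc_backend="rabbit"
With quotes, the quote characters become part of the option value.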

Sections

Configuration options are grouped by section. The Compute configuration file supports the following sections:
[DEFAULT]
Contains most configuration options. If the documentation for a configuration option does not specify its section, assume that it appears in this section.
[baremetal]
Configures the baremetal hypervisor driver.
[cells]
Configures cells functionality. For details, see Section 3.13, “Cells”.
[conductor]
Configures the nova-conductor service.
[database]
Configures the database that Compute uses.
[glance]
Configures how to access the Image service.
[image_file_url]
Configures additional filesystems to access the Image Service.
[keymgr]
Configures the key manager.
[keystone_authtoken]
Configures authorization via Identity service.
[libvirt]
Configures the hypervisor drivers using the Libvirt library: KVM, LXC, Qemu, UML, Xen.
[matchmaker_redis]
Configures a Redis server.
[matchmaker_ring]
Configures a matchmaker ring.
[metrics]
Configures weights for the metrics weigher.
[neutron]
Configures Networking specific options.
[osapi_v3]
Configures the OpenStack Compute API v3.
[rdp]
Configures RDP proxying.
[serial_console]
Configures serial console.
[spice]
Configures virtual consoles using SPICE.
[ssl]
Configures certificate authority using SSL.
[trusted_computing]
Configures the trusted computing pools functionality and how to connect to a remote attestation service.
[upgrade_levels]
Configures version locking on the RPC (message queue) communications between the various Compute services to allow live upgrading an OpenStack installation.
[vmware]
Configures the VMware hypervisor driver.
[xenserver]
Configures the XenServer hypervisor driver.
[zookeeper]
Configures the ZooKeeper ServiceGroup driver.

3.2. Configure logging

You can use the nova.conf file to configure where Compute logs events, the level of logging, and log formats.
To customize log formats for OpenStack Compute, use the configuration option settings documented in Table 3.37, “Description of logging configuration options”.
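A minimal sketch of the most common logging settings in the [DEFAULT] section of nova.conf; the log directory is a placeholder:
verbose=True
debug=False
use_syslog=False
log_dir=/var/log/nova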

3.3. Configure authentication and authorization

There are different methods of authentication for the OpenStack Compute project, including no authentication. The preferred system is the OpenStack Identity service, code-named Keystone.
To customize authorization settings for Compute, use the configuration options documented in Table 3.12, “Description of authentication configuration options”.
To customize certificate authority settings for Compute, use the configuration options documented in Table 3.16, “Description of CA and SSL configuration options”.
To customize Compute and the Identity service to use LDAP as a backend, refer to the configuration options documented in Table 3.34, “Description of LDAP configuration options”.
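A minimal sketch of Identity service authentication settings in nova.conf; the user, password, and tenant values are placeholders:
[DEFAULT]
auth_strategy=keystone

[keystone_authtoken]
admin_user=nova
admin_password=NOVA_PASS
admin_tenant_name=services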

3.4. Configure resize

Resize (or Server resize) is the ability to change the flavor of a server, thus allowing it to upscale or downscale according to user needs. For this feature to work properly, you might need to configure some underlying virt layers.

3.4.1. KVM

Resize on KVM is currently implemented by transferring the images between compute nodes over SSH. For KVM, hostnames must resolve properly, and you need passwordless SSH access between your compute hosts. Direct access from one compute host to another is needed to copy the VM files across.
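A minimal sketch of enabling passwordless SSH for the user that runs the Compute service, assumed here to be nova, between two hypothetical compute hosts; run this on compute1 and repeat in the opposite direction on compute2:
# su - nova -s /bin/bash
$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
$ ssh-copy-id nova@compute2.example.com
$ ssh compute2.example.com hostname
The final command must succeed without a password prompt.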

3.5. Database configuration

You can configure OpenStack Compute to use any SQLAlchemy-compatible database. The database name is nova. The nova-conductor service is the only service that writes to the database. The other Compute services access the database through the nova-conductor service.
To ensure that the database schema is current, run the following command:
# nova-manage db sync
If nova-conductor is not used, entries to the database are mostly written by the nova-scheduler service, although all services must be able to update entries in the database.
In either case, use the configuration option settings documented in Table 3.24, “Description of database configuration options” to configure the connection string for the nova database.
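For example, the connection string is set in the [database] section of nova.conf; the database password and host are placeholders:
[database]
connection=mysql://nova:NOVA_DBPASS@controller/nova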

3.6. Configure the Oslo RPC messaging system

OpenStack projects use AMQP, an open standard for messaging middleware. This messaging middleware enables OpenStack services that run on multiple servers to talk to each other. OpenStack Oslo RPC supports two implementations of AMQP: RabbitMQ and Qpid.

3.6.1. Configure RabbitMQ

OpenStack Oslo RPC uses RabbitMQ by default. Use these options to configure the RabbitMQ message system. The rpc_backend option is not required as long as RabbitMQ is the default messaging system. However, if it is included in the configuration, you must set it to rabbit.
rpc_backend=rabbit
You can use these additional options to configure the RabbitMQ messaging system. You can configure messaging communication for different installation scenarios, tune retries for RabbitMQ, and define the size of the RPC thread pool. To monitor notifications through RabbitMQ, you must set the notification_driver option to nova.openstack.common.notifier.rpc_notifier in the nova.conf file. The default for sending usage data is sixty seconds plus a random number of seconds from zero to sixty.
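A minimal sketch combining these settings in nova.conf; the broker host and credentials are placeholders:
[DEFAULT]
rpc_backend=rabbit
notification_driver=nova.openstack.common.notifier.rpc_notifier

[oslo_messaging_rabbit]
rabbit_host=broker.example.com
rabbit_userid=nova
rabbit_password=RABBIT_PASS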

Table 3.1. Description of RabbitMQ configuration options

Configuration option = Default value Description
[oslo_messaging_rabbit]
amqp_auto_delete = False (BoolOpt) Auto-delete queues in AMQP.
amqp_durable_queues = False (BoolOpt) Use durable queues in AMQP.
fake_rabbit = False (BoolOpt) Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
heartbeat_rate = 2 (IntOpt) How many times during the heartbeat_timeout_threshold interval to check the heartbeat.
heartbeat_timeout_threshold = 60 (IntOpt) Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
kombu_reconnect_delay = 1.0 (FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
kombu_reconnect_timeout = 60 (IntOpt) How long to wait before considering a reconnect attempt to have failed. This value should not be longer than rpc_response_timeout.
kombu_ssl_ca_certs = (StrOpt) SSL certification authority file (valid only if SSL enabled).
kombu_ssl_certfile = (StrOpt) SSL cert file (valid only if SSL enabled).
kombu_ssl_keyfile = (StrOpt) SSL key file (valid only if SSL enabled).
kombu_ssl_version = (StrOpt) SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 are also available.
rabbit_ha_queues = False (BoolOpt) Use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database.
rabbit_host = localhost (StrOpt) The RabbitMQ broker address where a single node is used.
rabbit_hosts = $rabbit_host:$rabbit_port (ListOpt) RabbitMQ HA cluster host:port pairs.
rabbit_login_method = AMQPLAIN (StrOpt) The RabbitMQ login method.
rabbit_max_retries = 0 (IntOpt) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count).
rabbit_password = guest (StrOpt) The RabbitMQ password.
rabbit_port = 5672 (IntOpt) The RabbitMQ broker port where a single node is used.
rabbit_retry_backoff = 2 (IntOpt) How long to backoff for between retries when connecting to RabbitMQ.
rabbit_retry_interval = 1 (IntOpt) How frequently to retry connecting with RabbitMQ.
rabbit_use_ssl = False (BoolOpt) Connect over SSL for RabbitMQ.
rabbit_userid = guest (StrOpt) The RabbitMQ userid.
rabbit_virtual_host = / (StrOpt) The RabbitMQ virtual host.
send_single_reply = False (BoolOpt) Send a single AMQP reply to a call message. The current behavior, inherited from oslo-incubator, is to send two AMQP replies: the first one with the payload, and a second one to ensure the other side has finished sending the payload. This option defaults to False in Liberty and can be turned on for early adopters with new installations or for testing. This option will be removed in the Mitaka release.

3.6.2. Configure Qpid

Use these options to configure the Qpid messaging system for OpenStack Oslo RPC. Qpid is not the default messaging system, so you must enable it by setting the rpc_backend option in the nova.conf file.
rpc_backend=qpid
Warning
The Qpid driver has been deprecated. The driver is planned to be removed during the Mitaka development cycle.
This critical option points the compute nodes to the Qpid broker (server). Set qpid_hostname in the nova.conf file to the host name where the broker runs.
Note
The --qpid_hostname parameter accepts a host name or IP address value.
qpid_hostname=hostname.example.com
If the Qpid broker listens on a port other than the AMQP default of 5672, you must set the qpid_port option to that value:
qpid_port=12345
If you configure the Qpid broker to require authentication, you must add a user name and password to the configuration:
qpid_username=username
qpid_password=password
By default, TCP is used as the transport. To enable SSL, set the qpid_protocol option:
qpid_protocol=ssl
This table lists additional options that you use to configure the Qpid messaging driver for OpenStack Oslo RPC. These options are used infrequently.

Table 3.2. Description of Qpid configuration options

Configuration option = Default value Description
[oslo_messaging_qpid]
amqp_auto_delete = False (BoolOpt) Auto-delete queues in AMQP.
amqp_durable_queues = False (BoolOpt) Use durable queues in AMQP.
qpid_heartbeat = 60 (IntOpt) Seconds between connection keepalive heartbeats.
qpid_hostname = localhost (StrOpt) Qpid broker hostname.
qpid_hosts = $qpid_hostname:$qpid_port (ListOpt) Qpid HA cluster host:port pairs.
qpid_password = (StrOpt) Password for Qpid connection.
qpid_port = 5672 (IntOpt) Qpid broker port.
qpid_protocol = tcp (StrOpt) Transport to use, either 'tcp' or 'ssl'.
qpid_receiver_capacity = 1 (IntOpt) The number of prefetched messages held by receiver.
qpid_sasl_mechanisms = (StrOpt) Space separated list of SASL mechanisms to use for auth.
qpid_tcp_nodelay = True (BoolOpt) Whether to disable the Nagle algorithm.
qpid_topology_version = 1 (IntOpt) The qpid topology version to use. Version 1 is what was originally used by impl_qpid. Version 2 includes some backwards-incompatible changes that allow broker federation to work. Users should update to version 2 when they are able to take everything down, as it requires a clean break.
qpid_username = (StrOpt) Username for Qpid connection.
send_single_reply = False (BoolOpt) Send a single AMQP reply to a call message. The current behavior, inherited from oslo-incubator, is to send two AMQP replies: the first one with the payload, and a second one to ensure the other side has finished sending the payload. This option defaults to False in Liberty and can be turned on for early adopters with new installations or for testing. This option will be removed in the Mitaka release.

3.6.3. Configure messaging

Use these options to configure the RabbitMQ and Qpid messaging drivers.

Table 3.3. Description of AMQP configuration options

Configuration option = Default value Description
[DEFAULT]
control_exchange = openstack (StrOpt) The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option.
default_publisher_id = None (StrOpt) Default publisher_id for outgoing notifications
notification_driver = [] (MultiStrOpt) Driver or drivers to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop.
notification_topics = notifications (ListOpt) AMQP topic used for OpenStack notifications.
transport_url = None (StrOpt) A URL representing the messaging driver to use and its full configuration. If not set, fall back to the rpc_backend option and driver specific configuration.
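For example, instead of setting rpc_backend and the broker options separately, the whole messaging configuration can be expressed as a single transport_url; the credentials and host are placeholders:
transport_url=rabbit://nova:RABBIT_PASS@broker.example.com:5672/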

Table 3.4. Description of RPC configuration options

Configuration option = Default value Description
[DEFAULT]
rpc_backend = rabbit (StrOpt) The messaging driver to use, defaults to rabbit. Other drivers include qpid and zmq.
rpc_cast_timeout = 30 (IntOpt) Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
rpc_conn_pool_size = 30 (IntOpt) Size of RPC connection pool.
rpc_poll_timeout = 1 (IntOpt) The default number of seconds that poll should wait. Poll raises a timeout exception when the timeout expires.
rpc_response_timeout = 60 (IntOpt) Seconds to wait for a response from a call.
[cells]
rpc_driver_queue_base = cells.intercell (StrOpt) Base queue name to use when communicating between cells. Various topics by message type will be appended to this.
[oslo_concurrency]
disable_process_locking = False (BoolOpt) Enables or disables inter-process locks.
lock_path = None (StrOpt) Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set.
[oslo_messaging_amqp]
allow_insecure_clients = False (BoolOpt) Accept clients using either SSL or plain TCP
broadcast_prefix = broadcast (StrOpt) address prefix used when broadcasting to all servers
container_name = None (StrOpt) Name for the AMQP container
group_request_prefix = unicast (StrOpt) address prefix when sending to any server in group
idle_timeout = 0 (IntOpt) Timeout for inactive connections (in seconds)
password = (StrOpt) Password for message broker authentication
sasl_config_dir = (StrOpt) Path to directory that contains the SASL configuration
sasl_config_name = (StrOpt) Name of configuration file (without .conf suffix)
sasl_mechanisms = (StrOpt) Space separated list of acceptable SASL mechanisms
server_request_prefix = exclusive (StrOpt) address prefix used when sending to a specific server
ssl_ca_file = (StrOpt) CA certificate PEM file to verify server certificate
ssl_cert_file = (StrOpt) Identifying certificate PEM file to present to clients
ssl_key_file = (StrOpt) Private key PEM file used to sign cert_file certificate
ssl_key_password = None (StrOpt) Password for decrypting ssl_key_file (if encrypted)
trace = False (BoolOpt) Debug: dump AMQP frames to stdout
username = (StrOpt) User name for message broker authentication
[upgrade_levels]
baseapi = None (StrOpt) Set a version cap for messages sent to the base api in any service

3.7. Configure the Compute API

The Compute API, run by the nova-api daemon, is the component of OpenStack Compute that receives and responds to user requests, whether they are direct API calls or requests made through the CLI tools or dashboard.

Configure Compute API password handling

The OpenStack Compute API enables users to specify an administrative password when they create or rebuild a server instance. If the user does not specify a password, a random password is generated and returned in the API response.
In practice, how the admin password is handled depends on the hypervisor in use and might require additional configuration of the instance. For example, you might have to install an agent to handle the password setting. If the hypervisor and instance configuration do not support setting a password at server create time, the password that is returned by the create API call is misleading because it was ignored.
To prevent this confusion, use the enable_instance_password configuration option to disable the return of the admin password for installations that do not support setting instance passwords.
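For example, a minimal nova.conf setting that disables the return of the admin password (the option is boolean; the value shown is illustrative):
[DEFAULT]
enable_instance_password = False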

Configure Compute API rate limiting

OpenStack Compute supports API rate limiting for the OpenStack API. Rate limiting allows an administrator to configure limits on the type and number of API calls that can be made in a specific time interval.
When API rate limits are exceeded, HTTP requests return an error with a status code of 403 Forbidden.
Rate limiting is not available for the EC2 API.

Define limits

To define limits, set these values:
  • The HTTP method used in the API call, typically one of GET, PUT, POST, or DELETE.
  • A human readable URI that is used as a friendly description of where the limit is applied.
  • A regular expression. The limit is applied to all URIs that match the regular expression and HTTP method.
  • A limit value that specifies the maximum count of units before the limit takes effect.
  • An interval that specifies the time frame to which the limit is applied. The interval can be SECOND, MINUTE, HOUR, or DAY.
Rate limits are applied in relative order to the HTTP method, going from least to most specific.

Default limits

Normally, you install OpenStack Compute with the following limits enabled:

Table 3.5. Default API rate limits

HTTP method API URI API regular expression Limit
POST any URI (*) .* 120 per minute
POST /servers ^/servers 120 per minute
PUT any URI (*) .* 120 per minute
GET *changes-since* .*changes-since.* 120 per minute
DELETE any URI (*) .* 120 per minute
GET */os-fping ^/os-fping 12 per minute

Configure and change limits

As part of the WSGI pipeline, the /etc/nova/api-paste.ini file defines the actual limits.
To enable limits, include the ratelimit filter in the API pipeline specification. If the ratelimit filter is removed from the pipeline, limiting is disabled. You must also define the rate limit filter. The lines appear as follows:
[pipeline:openstack_compute_api_v2]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2

[pipeline:openstack_volume_api_v1]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_volume_app_v1

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
To modify the limits, add a limits specification to the [filter:ratelimit] section of the file. Specify the limits in this order:
  1. HTTP method
  2. friendly URI
  3. regex
  4. limit
  5. interval
The following example shows the default rate-limiting values:
[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
limits =(POST, "*", .*, 120, MINUTE);(POST, "*/servers", ^/servers, 120, MINUTE);(PUT, "*", .*, 120, MINUTE);(GET, "*changes-since*", .*changes-since.*, 120, MINUTE);(DELETE, "*", .*, 120, MINUTE);(GET, "*/os-fping", ^/os-fping, 12, MINUTE)
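For example, to lower the global POST limit to 10 per minute while keeping the other defaults, restate the full list with only the first tuple changed (the value 10 is illustrative; a limits specification replaces the defaults, so restate every tuple you want to keep):
[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
limits =(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 120, MINUTE);(PUT, "*", .*, 120, MINUTE);(GET, "*changes-since*", .*changes-since.*, 120, MINUTE);(DELETE, "*", .*, 120, MINUTE);(GET, "*/os-fping", ^/os-fping, 12, MINUTE)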

Configuration reference

The Compute API configuration options are documented in Table 3.9, “Description of API configuration options”.

3.8. Configure the EC2 API

You can set options in the nova.conf configuration file to control which network address and port the EC2 API listens on, the formatting of some API responses, and authentication related options.
To customize these options for OpenStack EC2 API, use the configuration option settings documented in Table 3.26, “Description of EC2 configuration options”.

3.9. Fibre Channel support in Compute

Fibre Channel support in OpenStack Compute provides remote block storage attached to compute nodes for VMs.
In the Grizzly release, Fibre Channel supported only the KVM hypervisor.
Compute and Block Storage support Fibre Channel automatic zoning on Brocade and Cisco switches. On other hardware, Fibre Channel arrays must be pre-zoned or directly attached to the KVM hosts.

3.9.1. KVM host requirements

You must install these packages on the KVM host:
  • sysfsutils - Nova uses the systool application in this package.
  • sg3-utils or sg3_utils - Nova uses the sg_scan and sginfo applications.
Installing the device-mapper-multipath package is optional.

3.9.2. Install required packages

Use this command to install the system packages:
# yum install sysfsutils sg3_utils device-mapper-multipath

3.10. iSCSI interface and offload support in Compute

Note
iSCSI interface and offload support is available only in Kilo and later releases.
Compute supports open-iscsi iSCSI interfaces for offload cards. Offload hardware must be present and configured on every compute node where offload is desired. Once an open-iscsi interface is configured, pass the iface name (iface.iscsi_ifacename) to libvirt via the iscsi_iface parameter; all iSCSI sessions will be bound to this iSCSI interface.
Currently supported transports (iface.transport_name) are be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx, and ocs. No configuration changes are needed outside of the Compute node.
iSER is currently supported via the separate LibvirtISERVolumeDriver and will be rejected if used via the iscsi_iface parameter.
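For example, a sketch of the corresponding nova.conf setting on a compute node, binding sessions to the cxgb4i iface shown later in this section (the iface name is illustrative):
[libvirt]
iscsi_iface = cxgb4i.00:07:43:28:b2:58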

3.10.1. iSCSI iface configuration

  • Note the distinction between the transport name (iface.transport_name) and iface name (iface.iscsi_ifacename). The actual iface name must be specified via the iscsi_iface parameter to libvirt for offload to work.
  • The default name for an iscsi iface (open-iscsi parameter iface.iscsi_ifacename) is in the format transport_name.hwaddress when generated by iscsiadm.
  • iscsiadm can be used to view and generate current iface configuration. Every network interface that supports an open-iscsi transport can have one or more iscsi ifaces associated with it. If no ifaces have been configured for a network interface supported by an open-iscsi transport, this command will create a default iface configuration for that network interface. For example:
    # iscsiadm -m iface
       default tcp,<empty>,<empty>,<empty>,<empty>
       iser iser,<empty>,<empty>,<empty>,<empty>
       bnx2i.00:05:b5:d2:a0:c2 bnx2i,00:05:b5:d2:a0:c2,5.10.10.20,<empty>,<empty>
       cxgb4i.00:07:43:28:b2:58 cxgb4i,00:07:43:28:b2:58,102.50.50.80,<empty>,<empty>
       qla4xxx.00:c0:dd:08:63:ea qla4xxx,00:c0:dd:08:63:ea,20.15.0.9,<empty>,<empty>
    
     The output is in the format: iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname.
  • Individual iface configuration can be viewed via:
    # iscsiadm -m iface -I IFACE_NAME
       # BEGIN RECORD 2.0-873
       iface.iscsi_ifacename = cxgb4i.00:07:43:28:b2:58
       iface.net_ifacename = <empty>
       iface.ipaddress = 102.50.50.80
       iface.hwaddress = 00:07:43:28:b2:58
       iface.transport_name = cxgb4i
       iface.initiatorname = <empty>
       # END RECORD
    
     Configuration can be updated as desired via (see the worked example after this list):
     # iscsiadm -m iface -I IFACE_NAME --op=update -n iface.SETTING -v VALUE
  • All iface configurations need a minimum of iface.iscsi_ifacename, iface.transport_name, and iface.hwaddress to be correctly configured to work. Some transports may require iface.ipaddress and iface.net_ifacename as well to bind correctly.
    Detailed configuration instructions can be found in the Linux Open-iSCSI README file.
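For example, a sketch of updating the IP address of the cxgb4i iface shown above (the new address is illustrative):
# iscsiadm -m iface -I cxgb4i.00:07:43:28:b2:58 --op=update -n iface.ipaddress -v 102.50.50.81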

3.11. Hypervisors

Red Hat OpenStack Platform is only supported for use with the libvirt driver (using KVM as the hypervisor on Compute nodes).
With this release of Red Hat OpenStack Platform, Ironic is now fully supported. Ironic allows you to provision bare-metal machines using common technologies (such as PXE boot and IPMI) to cover a wide range of hardware while supporting pluggable drivers to allow the addition of vendor-specific functionality.
Red Hat does not provide support for other Compute virtualization drivers such as the deprecated VMware "direct-to-ESX" hypervisor, and non-KVM libvirt hypervisors.

3.11.1. Hypervisor configuration basics

The nova-compute service is installed and operates on the same node that runs all of the virtual machines. This node is referred to as the compute node in this guide.
By default, the selected hypervisor is KVM. To change to another hypervisor, change the virt_type option in the [libvirt] section of nova.conf and restart the nova-compute service.
Here are the general nova.conf options that are used to configure the compute node's hypervisor: Table 3.30, “Description of hypervisor configuration options”.
Specific options for particular hypervisors can be found in the following sections.

3.11.2. KVM

KVM is configured as the default hypervisor for Compute.
Note
This document contains several sections about hypervisor selection. If you are reading this document linearly, do not load the KVM module before you install nova-compute. The nova-compute service depends on qemu-kvm, which installs /lib/udev/rules.d/45-qemu-kvm.rules to set the correct permissions on the /dev/kvm device node.
To enable KVM explicitly, add the following configuration options to the /etc/nova/nova.conf file:
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = kvm
The KVM hypervisor supports the following virtual machine image formats:
  • Raw
  • QEMU Copy-on-write (qcow2)
  • QED (QEMU Enhanced Disk)
  • VMware virtual machine disk format (vmdk)
This section describes how to enable KVM on your system. For more information, see Installing virtualization packages on an existing Red Hat Enterprise Linux system from the Red Hat Enterprise Linux Virtualization Host Configuration and Guest Installation Guide.

3.11.2.1. Enable KVM

The following sections outline how to enable KVM-based hardware virtualization on different architectures and platforms. To perform these steps, you must be logged in as the root user.
3.11.2.1.1. For x86 based systems
  1. To determine whether the svm or vmx CPU extensions are present, run this command:
    # grep -E 'svm|vmx' /proc/cpuinfo
    This command generates output if the CPU is capable of hardware virtualization. Even if output is shown, you might still need to enable virtualization in the system BIOS for full support.
    If no output appears, consult your system documentation to ensure that your CPU and motherboard support hardware virtualization. Verify that any relevant hardware virtualization options are enabled in the system BIOS.
    The BIOS for each manufacturer is different. If you must enable virtualization in the BIOS, look for an option containing the words virtualization, VT, VMX, or SVM.
  2. To list the loaded kernel modules and verify that the kvm modules are loaded, run this command:
    # lsmod | grep kvm
    If the output includes kvm_intel or kvm_amd, the kvm hardware virtualization modules are loaded and your kernel meets the module requirements for OpenStack Compute.
    If the output does not show that the kvm module is loaded, run this command to load it:
    # modprobe -a kvm
    Run the command for your CPU. For Intel, run this command:
    # modprobe -a kvm-intel
    For AMD, run this command:
    # modprobe -a kvm-amd
    Because a KVM installation can change user group membership, you might need to log in again for changes to take effect.
    If the kernel modules do not load automatically, use the procedures listed in these subsections.
If the checks indicate that required hardware virtualization support or kernel modules are disabled or unavailable, you must either enable this support on the system or find a system with this support.
Note
Some systems require that you enable VT support in the system BIOS. If you believe your processor supports hardware acceleration but the previous command did not produce output, reboot your machine, enter the system BIOS, and enable the VT option.
If KVM acceleration is not supported, configure Compute to use a different hypervisor, such as QEMU or Xen.
These procedures help you load the kernel modules for Intel-based and AMD-based processors if they do not load automatically during KVM installation.
3.11.2.1.1.1. Intel-based processors
If your compute host is Intel-based, run these commands as root to load the kernel modules:
# modprobe kvm
# modprobe kvm-intel
See Persistent Module Loading in Red Hat Enterprise Linux 6, or Persistent Module Loading in Red Hat Enterprise Linux 7 respectively, for instructions on how to load the kvm and kvm-intel modules automatically.
3.11.2.1.1.2. AMD-based processors
If your compute host is AMD-based, run these commands as root to load the kernel modules:
# modprobe kvm
# modprobe kvm-amd
See Persistent Module Loading in Red Hat Enterprise Linux 6, or Persistent Module Loading in Red Hat Enterprise Linux 7 respectively, for instructions on how to load the kvm and kvm-amd modules automatically.
3.11.2.1.2. For POWER based systems
KVM as a hypervisor is supported on the PowerNV platform of POWER systems.
  1. To determine if your POWER platform supports KVM based virtualization run the following command:
    # grep PowerNV /proc/cpuinfo
    If the previous command generates the following output, then the CPU supports KVM-based virtualization:
    platform: PowerNV
    If no output is displayed, then your POWER platform does not support KVM-based hardware virtualization.
  2. To list the loaded kernel modules and verify that the kvm modules are loaded, run the following command:
    # lsmod | grep kvm
    If the output includes kvm_hv, the kvm hardware virtualization modules are loaded and your kernel meets the module requirements for OpenStack Compute.
    If the output does not show that the kvm module is loaded, run the following command to load it:
    # modprobe -a kvm
    For the PowerNV platform, run the following command:
    # modprobe -a kvm-hv
    Because a KVM installation can change user group membership, you might need to log in again for changes to take effect.

3.11.2.2. Specify the CPU model of KVM guests

The Compute service enables you to control the guest CPU model that is exposed to KVM virtual machines. Use cases include:
  • To maximize performance of virtual machines by exposing new host CPU features to the guest
  • To ensure a consistent default CPU across all machines, removing reliance on variable QEMU defaults
In libvirt, the CPU is specified by providing a base CPU model name (which is a shorthand for a set of feature flags), a set of additional feature flags, and the topology (sockets/cores/threads). The libvirt KVM driver provides a number of standard CPU model names. These models are defined in the /usr/share/libvirt/cpu_map.xml file. Check this file to determine which models are supported by your local installation.
Two Compute configuration options in the [libvirt] group of nova.conf define which type of CPU model is exposed to the hypervisor when using KVM: cpu_mode and cpu_model.
The cpu_mode option can take one of the following values: none, host-passthrough, host-model, and custom.
Host model (default for KVM & QEMU)
If your nova.conf file contains cpu_mode=host-model, libvirt identifies the CPU model in /usr/share/libvirt/cpu_map.xml file that most closely matches the host, and requests additional CPU flags to complete the match. This configuration provides the maximum functionality and performance and maintains good reliability and compatibility if the guest is migrated to another host with slightly different host CPUs.
Host pass through
If your nova.conf file contains cpu_mode=host-passthrough, libvirt tells KVM to pass through the host CPU with no modifications. Unlike host-model, which matches only feature flags, host-passthrough matches every last detail of the host CPU. This gives the best performance, and can be important to some applications which check low-level CPU details, but it comes at a cost with respect to migration: the guest can only be migrated to a matching host CPU.
Custom
If your nova.conf file contains cpu_mode=custom, you can explicitly specify one of the supported named models using the cpu_model configuration option. For example, to configure the KVM guests to expose Nehalem CPUs, your nova.conf file should contain:
[libvirt]
cpu_mode = custom
cpu_model = Nehalem
None (default for all libvirt-driven hypervisors other than KVM & QEMU)
If your nova.conf file contains cpu_mode=none, libvirt does not specify a CPU model. Instead, the hypervisor chooses the default model.

3.11.2.3. Guest agent support

Use guest agents to enable optional access between compute nodes and guests through a socket, using the QMP protocol.
To enable this feature, you must set hw_qemu_guest_agent=yes as a metadata parameter on the image from which you want to create guest-agent-capable instances. You can explicitly disable the feature by setting hw_qemu_guest_agent=no in the image metadata.
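For example, assuming IMAGE_ID is the UUID of the image, the property can be set with:
$ glance image-update IMAGE_ID --property hw_qemu_guest_agent=yes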

3.11.2.4. KVM performance tweaks

The VHostNet kernel module improves network performance. To load the kernel module, run the following command as root:
# modprobe vhost_net
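To load the module automatically at boot on systems using systemd, such as Red Hat Enterprise Linux 7, a sketch using the modules-load.d convention (the file name is arbitrary):
# echo vhost_net > /etc/modules-load.d/vhost_net.conf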

3.11.2.5. Troubleshoot KVM

Trying to launch a new virtual machine instance fails with the ERROR state, and the following error appears in the /var/log/nova/nova-compute.log file:
libvirtError: internal error no supported architecture for os type 'hvm'
This message indicates that the KVM kernel modules were not loaded.
If you cannot start VMs after installation without rebooting, the permissions might not be set correctly. This can happen if you load the KVM module before you install nova-compute. To check whether the group is set to kvm, run:
# ls -l /dev/kvm
If it is not set to kvm, run:
# udevadm trigger

3.11.3. QEMU

From the perspective of the Compute service, the QEMU hypervisor is very similar to the KVM hypervisor. Both are controlled through libvirt, both support the same feature set, and all virtual machine images that are compatible with KVM are also compatible with QEMU. The main difference is that QEMU does not support native virtualization. Consequently, QEMU has worse performance than KVM and is a poor choice for a production deployment.
The typical use cases for QEMU are:
  • Running on older hardware that lacks virtualization support.
  • Running the Compute service inside of a virtual machine for development or testing purposes, where the hypervisor does not support native virtualization for guests.
To enable QEMU, add these settings to nova.conf:
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = qemu
For some operations you may also have to install the guestmount utility:
# yum install libguestfs-tools
The QEMU hypervisor supports the following virtual machine image formats:
  • Raw
  • QEMU Copy-on-write (qcow2)
  • VMware virtual machine disk format (vmdk)

3.12. Scheduling

Compute uses the nova-scheduler service to determine how to dispatch compute requests. For example, the nova-scheduler service determines on which host a VM should launch. In the context of filters, the term host means a physical node that has a nova-compute service running on it. You can configure the scheduler through a variety of options.
Compute is configured with the following default scheduler options in the /etc/nova/nova.conf file:
scheduler_driver_task_period = 60
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter
By default, the scheduler_driver is configured as a filter scheduler, as described in the next section. In the default configuration, this scheduler considers hosts that meet all the following criteria:
  • Have not been attempted for scheduling purposes (RetryFilter).
  • Are in the requested availability zone (AvailabilityZoneFilter).
  • Have sufficient RAM available (RamFilter).
  • Have sufficient disk space available for root and ephemeral storage (DiskFilter).
  • Can service the request (ComputeFilter).
  • Satisfy the extra specs associated with the instance type (ComputeCapabilitiesFilter).
  • Satisfy any architecture, hypervisor type, or virtual machine mode properties specified on the instance's image properties (ImagePropertiesFilter).
  • Are on a different host than other instances of a group (if requested) (ServerGroupAntiAffinityFilter).
  • Are in a set of group hosts (if requested) (ServerGroupAffinityFilter).
The scheduler caches its list of available hosts; use the scheduler_driver_task_period option to specify how often the list is updated.
Note
Do not configure service_down_time to be much smaller than scheduler_driver_task_period; otherwise, hosts appear to be dead while the host list is being cached.
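For example, a consistent pairing in nova.conf (the values are illustrative; service_down_time is kept comfortably larger than the caching period):
[DEFAULT]
scheduler_driver_task_period = 60
service_down_time = 180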
The scheduler chooses a new host when an instance is migrated.
When evacuating instances from a host, the scheduler service honors the target host defined by the administrator on the evacuate command. If a target is not defined by the administrator, the scheduler determines the target host.

3.12.1. Filter scheduler

The filter scheduler (nova.scheduler.filter_scheduler.FilterScheduler) is the default scheduler for scheduling virtual machine instances. It supports filtering and weighting to make informed decisions on where a new instance should be created.

3.12.2. Filters

When the filter scheduler receives a request for a resource, it first applies filters to determine which hosts are eligible for consideration when dispatching a resource. Filters are binary: either a host is accepted by the filter, or it is rejected. Hosts that are accepted by the filter are then processed by a different algorithm to decide which hosts to use for that request, described in the Weights section.
The scheduler_available_filters configuration option in nova.conf provides the Compute service with the list of the filters that are used by the scheduler. The default setting specifies all of the filters that are included with the Compute service:
scheduler_available_filters = nova.scheduler.filters.all_filters
This configuration option can be specified multiple times. For example, if you implemented your own custom filter in Python called myfilter.MyFilter and you wanted to use both the built-in filters and your custom filter, your nova.conf file would contain:
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = myfilter.MyFilter
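A minimal sketch of such a custom filter, assuming the Liberty-era BaseHostFilter interface (the module name myfilter and the 1024 MB threshold are illustrative):
from nova.scheduler import filters


class MyFilter(filters.BaseHostFilter):
    """Pass only hosts that report at least 1 GB of free RAM."""

    def host_passes(self, host_state, filter_properties):
        # host_state carries the scheduler's per-host resource view;
        # accept the host only if it has enough free RAM.
        return host_state.free_ram_mb >= 1024
For the scheduler to load it, the myfilter module must be importable by the nova-scheduler service.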
The scheduler_default_filters configuration option in nova.conf defines the list of filters that are applied by the nova-scheduler service. The default filters are:
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter
The following sections describe the available filters.

3.12.2.1. AggregateCoreFilter

Filters host by CPU core numbers with a per-aggregate cpu_allocation_ratio value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. For information about how to use this filter, see Section 3.12.6, “Host aggregates and availability zones”. See also Section 3.12.2.14, “CoreFilter”.

3.12.2.2. AggregateDiskFilter

Filters host by disk allocation with a per-aggregate disk_allocation_ratio value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. For information about how to use this filter, see Section 3.12.6, “Host aggregates and availability zones”. See also Section 3.12.2.17, “DiskFilter”.

3.12.2.3. AggregateImagePropertiesIsolation

Matches properties defined in an image's metadata against those of aggregates to determine host matches:
  • If a host belongs to an aggregate and the aggregate defines one or more metadata that matches an image's properties, that host is a candidate to boot the image's instance.
  • If a host does not belong to any aggregate, it can boot instances from all images.
You can configure the AggregateImagePropertiesIsolation filter by using the following options in the nova.conf file:
# Considers only keys matching the given namespace (string). Multiple values can be given, as a comma-separated list.
aggregate_image_properties_isolation_namespace = <None>

# Separator used between the namespace and keys (string).
aggregate_image_properties_isolation_separator = .

3.12.2.4. AggregateInstanceExtraSpecsFilter

Matches properties defined in extra specs for an instance type against admin-defined properties on a host aggregate. Works with specifications that are scoped with aggregate_instance_extra_specs. Multiple values can be given, as a comma-separated list. For backward compatibility, also works with non-scoped specifications; this action is highly discouraged because it conflicts with ComputeCapabilitiesFilter filter when you enable both filters. For information about how to use this filter, see the host aggregates section.

3.12.2.5. AggregateIoOpsFilter

Filters host by disk allocation with a per-aggregate max_io_ops_per_host value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. For information about how to use this filter, see Section 3.12.6, “Host aggregates and availability zones”. See also Section 3.12.2.22, “IoOpsFilter”.

3.12.2.6. AggregateMultiTenancyIsolation

Isolates tenants to specific host aggregates. If a host is in an aggregate that has the filter_tenant_id metadata key, the host creates instances only for that tenant or list of tenants. A host can be in different aggregates. If a host does not belong to an aggregate with the metadata key, the host can create instances for all tenants.
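For example, to restrict an aggregate to a single tenant (the IDs are placeholders):
$ nova aggregate-set-metadata AGGREGATE_ID filter_tenant_id=TENANT_ID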

3.12.2.7. AggregateNumInstancesFilter

Filters host by number of instances with a per-aggregate max_instances_per_host value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and thus more than one value is found, the minimum value will be used. For information about how to use this filter, see Section 3.12.6, “Host aggregates and availability zones”. See also Section 3.12.2.25, “NumInstancesFilter”.

3.12.2.8. AggregateRamFilter

Filters host by RAM allocation of instances with a per-aggregate ram_allocation_ratio value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and thus more than one value is found, the minimum value will be used. For information about how to use this filter, see Section 3.12.6, “Host aggregates and availability zones”. See also Section 3.12.2.27, “RamFilter”.

3.12.2.9. AggregateTypeAffinityFilter

This filter passes hosts if no instance_type key is set or the instance_type aggregate metadata value contains the name of the instance_type requested. The value of the instance_type metadata entry is a string that may contain either a single instance_type name or a comma-separated list of instance_type names, such as 'm1.nano' or "m1.nano,m1.small". For information about how to use this filter, see Section 3.12.6, “Host aggregates and availability zones”. See also Section 3.12.2.34, “TypeAffinityFilter”.

3.12.2.10. AllHostsFilter

This is a no-op filter. It does not eliminate any of the available hosts.

3.12.2.11. AvailabilityZoneFilter

Filters hosts by availability zone. You must enable this filter for the scheduler to respect availability zones in requests.

3.12.2.12. ComputeCapabilitiesFilter

Matches properties defined in extra specs for an instance type against compute capabilities.
If an extra specs key contains a colon (:), anything before the colon is treated as a namespace and anything after the colon is treated as the key to be matched. If a namespace is present and is not capabilities, the filter ignores the namespace. For backward compatibility, also treats the extra specs key as the key to be matched if no namespace is present; this action is highly discouraged because it conflicts with AggregateInstanceExtraSpecsFilter filter when you enable both filters.

3.12.2.13. ComputeFilter

Passes all hosts that are operational and enabled.
In general, you should always enable this filter.

3.12.2.14. CoreFilter

Only schedules instances on hosts if sufficient CPU cores are available. If this filter is not set, the scheduler might over-provision a host based on cores. For example, the virtual cores running on an instance may exceed the physical cores.
You can configure this filter to enable a fixed amount of vCPU overcommitment by using the cpu_allocation_ratio configuration option in nova.conf. The default setting is:
cpu_allocation_ratio = 16.0
With this setting, if 8 vCPUs are on a node, the scheduler allows instances up to 128 vCPU to be run on that node.
To disallow vCPU overcommitment set:
cpu_allocation_ratio = 1.0
Note
The Compute API always returns the actual number of CPU cores available on a compute node regardless of the value of the cpu_allocation_ratio configuration key. As a result changes to the cpu_allocation_ratio are not reflected via the command line clients or the dashboard. Changes to this configuration key are only taken into account internally in the scheduler.

3.12.2.15. NUMATopologyFilter

Filters hosts based on the NUMA topology that was specified for the instance through the use of flavor extra_specs in combination with the image properties, as described in detail in the related nova-spec document. The filter tries to match the exact NUMA cells of the instance to those of the host. It considers the standard over-subscription limits for each cell, and provides limits to the compute host accordingly.
Note
If an instance has no topology defined, it will be considered for any host. If an instance has a topology defined, it will be considered only for NUMA-capable hosts.

3.12.2.16. DifferentHostFilter

Schedules the instance on a different host from a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using different_host as the key and a list of instance UUIDs as the value. This filter is the opposite of the SameHostFilter. Using the nova command-line tool, use the --hint flag. For example:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
With the API, use the os:scheduler_hints key. For example:
{
    "server": {
        "name": "server-1",
        "imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef": "1"
    },
    "os:scheduler_hints": {
        "different_host": [
            "a0cf03a5-d921-4877-bb5c-86d26cf818e1",
            "8c19174f-4220-44f0-824a-cd1eeef10287"
        ]
    }
}

3.12.2.17. DiskFilter

Only schedules instances on hosts if there is sufficient disk space available for root and ephemeral storage.
You can configure this filter to enable a fixed amount of disk overcommitment by using the disk_allocation_ratio configuration option in the nova.conf configuration file. The default setting disables the possibility of the overcommitment and allows launching a VM only if there is a sufficient amount of disk space available on a host:
disk_allocation_ratio = 1.0
DiskFilter always considers the value of the disk_available_least property, rather than the free_disk_gb property, of a hypervisor's statistics:
$ nova hypervisor-stats
+----------------------+-------+
| Property             | Value |
+----------------------+-------+
| count                |  1    |
| current_workload     |  0    |
| disk_available_least |  29   |
| free_disk_gb         |  35   |
| free_ram_mb          |  3441 |
| local_gb             |  35   |
| local_gb_used        |  0    |
| memory_mb            |  3953 |
| memory_mb_used       |  512  |
| running_vms          |  0    |
| vcpus                |  2    |
| vcpus_used           |  0    |
+----------------------+-------+
As shown in the command output above, the amount of available disk space can be less than the amount of free disk space. This happens because the disk_available_least property accounts for the virtual size rather than the actual size of images. If you use an image format that is sparse or copy-on-write, so that each virtual instance does not require a 1:1 allocation of virtual disk to physical storage, it may be useful to allow the overcommitment of disk space.
To enable scheduling instances while overcommitting disk resources on the node, adjust the value of the disk_allocation_ratio configuration option to greater than 1.0:
disk_allocation_ratio > 1.0
Note
If the value is set to greater than 1, keep track of the free disk space, because instances that are using the disk may malfunction as the free space approaches 0.
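For example, a sketch that permits up to double overcommitment of disk space (the ratio is illustrative):
disk_allocation_ratio = 2.0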

3.12.2.18. GroupAffinityFilter

Note
This filter is deprecated in favor of ServerGroupAffinityFilter.
The GroupAffinityFilter ensures that an instance is scheduled on to a host from a set of group hosts. To take advantage of this filter, the requester must pass a scheduler hint, using group as the key and an arbitrary name as the value. Using the nova command-line tool, use the --hint flag. For example:
$ nova boot --image IMAGE_ID --flavor 1 --hint group=GROUP server-1
This filter should not be enabled at the same time as GroupAntiAffinityFilter or neither filter will work properly.

3.12.2.19. GroupAntiAffinityFilter

Note
This filter is deprecated in favor of ServerGroupAntiAffinityFilter.
The GroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take advantage of this filter, the requester must pass a scheduler hint, using group as the key and an arbitrary name as the value. Using the nova command-line tool, use the --hint flag. For example:
$ nova boot --image IMAGE_ID --flavor 1 --hint group=GROUP server-1
This filter should not be enabled at the same time as GroupAffinityFilter or neither filter will work properly.

3.12.2.20. ImagePropertiesFilter

Filters hosts based on properties defined on the instance's image. It passes hosts that can support the specified image properties contained in the instance. Properties include the architecture, hypervisor type, hypervisor version (for Xen hypervisor type only), and virtual machine mode.
For example, an instance might require a host that runs an ARM-based processor, and QEMU as the hypervisor. You can decorate an image with these properties by using:
$ glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu
The image properties that the filter checks for are:
  • architecture: describes the machine architecture required by the image. Examples are i686, x86_64, arm, and ppc64.
  • hypervisor_type: describes the hypervisor required by the image. Examples are xen, qemu, and xenapi.
    Note
    qemu is used for both QEMU and KVM hypervisor types.
  • hypervisor_version_requires: describes the hypervisor version required by the image. The property is supported for Xen hypervisor type only. It can be used to enable support for multiple hypervisor versions, and to prevent instances with newer Xen tools from being provisioned on an older version of a hypervisor. If available, the property value is compared to the hypervisor version of the compute host.
    To filter the hosts by the hypervisor version, add the hypervisor_version_requires property on the image as metadata and pass an operator and a required hypervisor version as its value:
    $ glance image-update img-uuid --property hypervisor_type=xen --property hypervisor_version_requires=">=4.3"
  • vm_mode: describes the hypervisor application binary interface (ABI) required by the image. Examples are xen for Xen 3.0 paravirtual ABI, hvm for native ABI, uml for User Mode Linux paravirtual ABI, exe for container virt executable ABI.

3.12.2.21. IsolatedHostsFilter

Allows the admin to define a special (isolated) set of images and a special (isolated) set of hosts, such that the isolated images can only run on the isolated hosts, and the isolated hosts can only run isolated images. The flag restrict_isolated_hosts_to_isolated_images can be used to force isolated hosts to only run isolated images.
The admin must specify the isolated set of images and hosts in the nova.conf file using the isolated_hosts and isolated_images configuration options. For example:
isolated_hosts = server1, server2
isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09

3.12.2.22. IoOpsFilter

The IoOpsFilter filters hosts by the number of concurrent I/O operations on them. Hosts with too many concurrent I/O operations are filtered out. The max_io_ops_per_host option specifies the maximum number of I/O-intensive instances allowed to run on a host. A host is ignored by the scheduler if more than max_io_ops_per_host instances in the build, resize, snapshot, migrate, rescue, or unshelve task states are running on it.
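For example, the nova.conf setting (the value shown matches the upstream default):
max_io_ops_per_host = 8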

3.12.2.23. JsonFilter

The JsonFilter allows a user to construct a custom filter by passing a scheduler hint in JSON format. The following operators are supported:
  • =
  • <
  • >
  • in
  • <=
  • >=
  • not
  • or
  • and
The filter supports the following variables:
  • $free_ram_mb
  • $free_disk_mb
  • $total_usable_ram_mb
  • $vcpus_total
  • $vcpus_used
Using the nova command-line tool, use the --hint flag:
$ nova boot --image 827d564a-e636-4fc4-a376-d36f7ebe1747 --flavor 1 --hint query='[">=","$free_ram_mb",1024]' server1
With the API, use the os:scheduler_hints key:
{
    "server": {
        "name": "server-1",
        "imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef": "1"
    },
    "os:scheduler_hints": {
        "query": "[>=,$free_ram_mb,1024]"
    }
}

3.12.2.24. MetricsFilter

Filters hosts based on the meters configured in weight_setting. Only hosts with the required meters available are passed, so that the metrics weigher does not fail on those hosts.

3.12.2.25. NumInstancesFilter

Hosts that have more instances running than specified by the max_instances_per_host option are filtered out when this filter is in place.
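For example, the nova.conf setting (the value shown matches the upstream default):
max_instances_per_host = 50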

3.12.2.26. PciPassthroughFilter

The filter schedules instances on a host if the host has devices that meet the device requests in the extra_specs attribute for the flavor.

3.12.2.27. RamFilter

Only schedules instances on hosts that have sufficient RAM available. If this filter is not set, the scheduler may over provision a host based on RAM (for example, the RAM allocated by virtual machine instances may exceed the physical RAM).
You can configure this filter to enable a fixed amount of RAM overcommitment by using the ram_allocation_ratio configuration option in nova.conf. The default setting is:
ram_allocation_ratio = 1.5
This setting enables 1.5 GB instances to run on any compute node with 1 GB of free RAM.

3.12.2.28. RetryFilter

Filters out hosts that have already been attempted for scheduling purposes. If the scheduler selects a host to respond to a service request, and the host fails to respond to the request, this filter prevents the scheduler from retrying that host for the service request.
This filter is only useful if the scheduler_max_attempts configuration option is set to a value greater than zero.
If there are multiple force hosts/nodes, this filter helps to retry on the force hosts/nodes if a VM fails to boot.
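For example, the nova.conf setting that allows up to three scheduling attempts (the value shown matches the upstream default):
scheduler_max_attempts = 3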

3.12.2.29. SameHostFilter

Schedules the instance on the same host as another instance in a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using same_host as the key and a list of instance UUIDs as the value. This filter is the opposite of the DifferentHostFilter. Using the nova command-line tool, use the --hint flag:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
With the API, use the os:scheduler_hints key:
{
    "server": {
        "name": "server-1",
        "imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef": "1"
    },
    "os:scheduler_hints": {
        "same_host": [
            "a0cf03a5-d921-4877-bb5c-86d26cf818e1",
            "8c19174f-4220-44f0-824a-cd1eeef10287"
        ]
    }
}

3.12.2.30. ServerGroupAffinityFilter

The ServerGroupAffinityFilter ensures that an instance is scheduled on to a host from a set of group hosts. To take advantage of this filter, the requester must create a server group with an affinity policy, and pass a scheduler hint, using group as the key and the server group UUID as the value. Using the nova command-line tool, use the --hint flag. For example:
$ nova server-group-create --policy affinity group-1
$ nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID server-1

3.12.2.31. ServerGroupAntiAffinityFilter

The ServerGroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take advantage of this filter, the requester must create a server group with an anti-affinity policy, and pass a scheduler hint, using group as the key and the server group UUID as the value. Using the nova command-line tool, use the --hint flag. For example:
$ nova server-group-create --policy anti-affinity group-1
$ nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID server-1

3.12.2.32. SimpleCIDRAffinityFilter

Schedules the instance based on host IP subnet range. To take advantage of this filter, the requester must specify a range of valid IP address in CIDR format, by passing two scheduler hints:
build_near_host_ip
The first IP address in the subnet (for example, 192.168.1.1)
cidr
The CIDR that corresponds to the subnet (for example, /24)
Using the nova command-line tool, use the --hint flag. For example, to specify the IP subnet 192.168.1.1/24
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1
With the API, use the os:scheduler_hints key:
{
    "server": {
        "name": "server-1",
        "imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef": "1"
    },
    "os:scheduler_hints": {
        "build_near_host_ip": "192.168.1.1",
        "cidr": "24"
    }
}

3.12.2.33. TrustedFilter

Filters hosts based on their trust. Only passes hosts that meet the trust requirements specified in the instance properties.

3.12.2.34. TypeAffinityFilter

Dynamically limits hosts to one instance type. An instance can be launched on a host only if no instances of a different instance type are running on it, or if the host has no running instances at all.

3.12.3. Weights

When resourcing instances, the filter scheduler filters and weights each host in the list of acceptable hosts. Each time the scheduler selects a host, it virtually consumes resources on it, and subsequent selections are adjusted accordingly. This process is useful when a customer requests a large number of instances, because a weight is computed for each requested instance.
All weights are normalized before being summed up; the host with the largest weight is given the highest priority.
If cells are used, cells are weighted by the scheduler in the same manner as hosts.
Hosts and cells are weighted based on the following options in the /etc/nova/nova.conf file:

Table 3.6. Host weighting options

Section Option Description
[DEFAULT] ram_weight_multiplier By default, the scheduler spreads instances across all hosts evenly. Set the ram_weight_multiplier option to a negative number if you prefer stacking instead of spreading. Use a floating-point value.
[DEFAULT] scheduler_host_subset_size New instances are scheduled on a host that is chosen randomly from a subset of the N best hosts. This property defines the subset size from which a host is chosen. A value of 1 chooses the first host returned by the weighting functions. This value must be at least 1. A value less than 1 is ignored, and 1 is used instead. Use an integer value.
[DEFAULT] scheduler_weight_classes Defaults to nova.scheduler.weights.all_weighers, which selects the RamWeigher and MetricsWeigher. Hosts are then weighted and sorted with the largest weight winning.
[DEFAULT] io_ops_weight_multiplier Multiplier used for weighing host I/O operations. A negative value means a preference to choose light workload compute hosts.
[metrics] weight_multiplier Multiplier for weighting meters. Use a floating-point value.
[metrics] weight_setting Determines how meters are weighted. Use a comma-separated list of metricName=ratio. For example: "name1=1.0, name2=-1.0" results in: name1.value * 1.0 + name2.value * -1.0
[metrics] required
Specifies how to treat unavailable meters:
  • True—Raises an exception. To avoid the raised exception, you should use the scheduler filter MetricFilter to filter out hosts with unavailable meters.
  • False—Treated as a negative factor in the weighting process (uses the weight_of_unavailable option).
[metrics] weight_of_unavailable If required is set to False, and any one of the meters set by weight_setting is unavailable, the weight_of_unavailable value is returned to the scheduler.
For example:
[DEFAULT]
scheduler_host_subset_size = 1
scheduler_weight_classes = nova.scheduler.weights.all_weighers
ram_weight_multiplier = 1.0
io_ops_weight_multiplier = 2.0
[metrics]
weight_multiplier = 1.0
weight_setting = name1=1.0, name2=-1.0
required = false
weight_of_unavailable = -10000.0

Table 3.7. Cell weighting options

Section Option Description
[cells] mute_weight_multiplier Multiplier to weight mute children (hosts which have not sent capacity or capacity updates for some time). Use a negative, floating-point value.
[cells] offset_weight_multiplier Multiplier to weight cells, so you can specify a preferred cell. Use a floating point value.
[cells] ram_weight_multiplier By default, the scheduler spreads instances across all cells evenly. Set the ram_weight_multiplier option to a negative number if you prefer stacking instead of spreading. Use a floating-point value.
[cells] scheduler_weight_classes Defaults to nova.cells.weights.all_weighers, which maps to all cell weighers included with Compute. Cells are then weighted and sorted with the largest weight winning.
For example:
[cells]
scheduler_weight_classes = nova.cells.weights.all_weighers
mute_weight_multiplier = -10.0
ram_weight_multiplier = 1.0
offset_weight_multiplier = 1.0

3.12.4. Chance scheduler

As an administrator, you work mainly with the filter scheduler. However, the Compute service also provides the chance scheduler, nova.scheduler.chance.ChanceScheduler, which randomly selects a host from the list of filtered hosts.

3.12.5. Utilization aware scheduling

It is possible to schedule VMs using advanced scheduling decisions made on the basis of enhanced usage statistics, encompassing data such as memory cache utilization, memory bandwidth utilization, or network bandwidth utilization. This feature is disabled by default. The administrator can configure how the metrics are weighted by using the weight_setting configuration option in the nova.conf configuration file. For example, to configure metric1 with ratio1 and metric2 with ratio2:
weight_setting = "metric1=ratio1, metric2=ratio2"

3.12.6. Host aggregates and availability zones

Host aggregates are a mechanism for partitioning hosts in an OpenStack cloud, or a region of an OpenStack cloud, based on arbitrary characteristics. For example, an administrator may want to partition off a group of hosts that have additional hardware or performance characteristics.
Host aggregates are not explicitly exposed to users. Instead, administrators map flavors to host aggregates by setting metadata on a host aggregate and matching flavor extra specifications. The scheduler then endeavors to match user requests for instances of the given flavor to a host aggregate with the same key-value pair in its metadata. Compute nodes can be in more than one host aggregate.
Administrators are able to optionally expose a host aggregate as an availability zone. Availability zones are different from host aggregates in that they are explicitly exposed to the user, and hosts can only be in a single availability zone. Administrators can configure a default availability zone where instances will be scheduled when the user fails to specify one.
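For example, a sketch of setting the default zone in nova.conf (the default_schedule_zone option controls where instances are scheduled when the user does not specify a zone; the zone name is illustrative):
[DEFAULT]
default_schedule_zone = nova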

Command-line interface

The nova command-line tool supports the following aggregate-related commands.
nova aggregate-list
Print a list of all aggregates.
nova aggregate-create <name> [availability-zone]
Create a new aggregate named <name>, optionally in availability zone [availability-zone]. The command returns the ID of the newly created aggregate. Hosts can be made available to multiple host aggregates. Be careful when adding a host to an additional host aggregate when the host is also in an availability zone, and pay attention when using the aggregate-set-metadata and aggregate-update commands, to avoid confusing users when they boot instances in different availability zones. An error occurs if you try to add a host to an availability zone for which it is not intended.
nova aggregate-delete <id>
Delete an aggregate with id <id>.
nova aggregate-details <id>
Show details of the aggregate with id <id>.
nova aggregate-add-host <id> <host>
Add host with name <host> to aggregate with id <id>.
nova aggregate-remove-host <id> <host>
Remove the host with name <host> from the aggregate with id <id>.
nova aggregate-set-metadata <id> <key=value> [<key=value> ...]
Add or update metadata (key-value pairs) associated with the aggregate with id <id>.
nova aggregate-update <id> <name> [<availability_zone>]
Update the name and availability zone (optional) for the aggregate.
nova host-list
List all hosts by service.
nova host-update --maintenance [enable | disable]
Put a host into maintenance mode, or resume it from maintenance mode.
Note
Only administrators can access these commands. If you try to use these commands and the user name and tenant that you use to access the Compute service do not have the admin role or the appropriate privileges, these errors occur:
ERROR: Policy does not allow compute_extension:aggregates to be performed. (HTTP 403) (Request-ID: req-299fbff6-6729-4cef-93b2-e7e1f96b4864) 
ERROR: Policy does not allow compute_extension:hosts to be performed. (HTTP 403) (Request-ID: req-ef2400f6-6776-4ea3-b6f1-7704085c27d1) 

Configure scheduler to support host aggregates

One common use case for host aggregates is when you want to support scheduling instances to a subset of compute hosts because they have a specific capability. For example, you may want to allow users to request compute hosts that have SSD drives if they need access to faster disk I/O, or access to compute hosts that have GPU cards to take advantage of GPU-accelerated code.
To configure the scheduler to support host aggregates, the scheduler_default_filters configuration option must contain the AggregateInstanceExtraSpecsFilter in addition to the other filters used by the scheduler. Add the following line to /etc/nova/nova.conf on the host that runs the nova-scheduler service to enable host aggregates filtering, as well as the other filters that are typically enabled:
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

Example: Specify compute hosts with SSDs

This example configures the Compute service to enable users to request nodes that have solid-state drives (SSDs). You create a fast-io host aggregate in the nova availability zone and you add the ssd=true key-value pair to the aggregate. Then, you add the node1 and node2 compute nodes to it.
$ nova aggregate-create fast-io nova
+----+---------+-------------------+-------+----------+
| Id | Name    | Availability Zone | Hosts | Metadata |
+----+---------+-------------------+-------+----------+
| 1  | fast-io | nova              |       |          |
+----+---------+-------------------+-------+----------+

$ nova aggregate-set-metadata 1 ssd=true
+----+---------+-------------------+-------+-------------------+
| Id | Name    | Availability Zone | Hosts | Metadata          |
+----+---------+-------------------+-------+-------------------+
| 1  | fast-io | nova              | []    | {u'ssd': u'true'} |
+----+---------+-------------------+-------+-------------------+

$ nova aggregate-add-host 1 node1
+----+---------+-------------------+------------+-------------------+
| Id | Name    | Availability Zone | Hosts      | Metadata          |
+----+---------+-------------------+------------+-------------------+
| 1  | fast-io | nova              | [u'node1'] | {u'ssd': u'true'} |
+----+---------+-------------------+------------+-------------------+

$ nova aggregate-add-host 1 node2
+----+---------+-------------------+----------------------+-------------------+
| Id | Name    | Availability Zone | Hosts                | Metadata          |
+----+---------+-------------------+----------------------+-------------------+
| 1  | fast-io | nova              | [u'node1', u'node2'] | {u'ssd': u'true'} |
+----+---------+-------------------+----------------------+-------------------+
Use the nova flavor-create command to create a flavor called ssd.large with an ID of 6, 8 GB of RAM, an 80 GB root disk, and four vCPUs.
$ nova flavor-create ssd.large 6 8192 80 4
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | ssd.large | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Once the flavor is created, specify one or more key-value pairs that match the key-value pairs on the host aggregates with scope aggregate_instance_extra_specs. In this case, that is the aggregate_instance_extra_specs:ssd=true key-value pair. Setting a key-value pair on a flavor is done using the nova flavor-key command.
$ nova flavor-key ssd.large set aggregate_instance_extra_specs:ssd=true
Once it is set, you should see the extra_specs property of the ssd.large flavor populated with a key of ssd and a corresponding value of true.
$ nova flavor-show ssd.large
+----------------------------+--------------------------------------------------+
| Property                   | Value                                            |
+----------------------------+--------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                            |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                |
| disk                       | 80                                               |
| extra_specs                | {u'aggregate_instance_extra_specs:ssd': u'true'} |
| id                         | 6                                                |
| name                       | ssd.large                                        |
| os-flavor-access:is_public | True                                             |
| ram                        | 8192                                             |
| rxtx_factor                | 1.0                                              |
| swap                       |                                                  |
| vcpus                      | 4                                                |
+----------------------------+--------------------------------------------------+
Now, when a user requests an instance with the ssd.large flavor, the scheduler only considers hosts with the ssd=true key-value pair. In this example, these are node1 and node2.
Note
The key and value are case-sensitive strings. The Compute scheduler performs a case-sensitive string match on the value.

XenServer hypervisor pools to support live migration

When using the XenAPI-based hypervisor, the Compute service uses host aggregates to manage XenServer Resource pools, which are used in supporting live migration.

3.12.7. Configuration reference

To customize the Compute scheduler, use the configuration option settings documented in Table 3.49, “Description of scheduler configuration options”.

3.13. Cells

Cells functionality enables you to scale an OpenStack Compute cloud in a more distributed fashion without having to use complicated technologies like database and message queue clustering. It supports very large deployments.
When this functionality is enabled, the hosts in an OpenStack Compute cloud are partitioned into groups called cells. Cells are configured as a tree. The top-level cell should have a host that runs a nova-api service, but no nova-compute services. Each child cell should run all of the typical nova-* services in a regular Compute cloud except for nova-api. You can think of cells as a normal Compute deployment in that each cell has its own database server and message queue broker.
The nova-cells service handles communication between cells and selects cells for new instances. This service is required for every cell. Communication between cells is pluggable, and currently the only option is communication through RPC.
Cells scheduling is separate from host scheduling. nova-cells first picks a cell. Once a cell is selected and the new build request reaches its nova-cells service, it is sent over to the host scheduler in that cell and the build proceeds as it would have without cells.
Warning
Cell functionality is currently considered experimental.

3.13.1. Cell configuration options

Cells are disabled by default. All cell-related configuration options appear in the [cells] section in nova.conf. The following cell-related options are currently supported; a consolidated sample snippet follows the list:
enable
Set to True to turn on cell functionality. The default is False.
name
Name of the current cell. Must be unique for each cell.
capabilities
List of arbitrary key=value pairs defining capabilities of the current cell. For example: hypervisor=xenserver;kvm,os=linux.
call_timeout
How long in seconds to wait for replies from calls between cells.
scheduler_filter_classes
Filter classes that the cells scheduler should use. By default, uses nova.cells.filters.all_filters to map to all cells filters included with Compute.
scheduler_weight_classes
Weight classes that the scheduler for cells uses. By default, uses nova.cells.weights.all_weighers to map to all cells weight algorithms included with Compute.
ram_weight_multiplier
Multiplier used to weight RAM. Negative numbers indicate that Compute should stack VMs on one host instead of spreading out new VMs to more hosts in the cell. The default value is 10.0.
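
The following minimal [cells] snippet brings these options together. The values shown are illustrative (call_timeout and ram_weight_multiplier are the documented defaults), not recommendations:
[cells]
enable = True
name = cell1
capabilities = hypervisor=kvm,os=linux
call_timeout = 60
ram_weight_multiplier = 10.0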

3.13.2. Configure the API (top-level) cell

The cell type must be changed in the API cell so that requests can be proxied through nova-cells down to the correct cell. Edit the nova.conf file in the API cell, and specify api in the cell_type key:
[DEFAULT]
compute_api_class=nova.compute.cells_api.ComputeCellsAPI
...

[cells]
cell_type = api

3.13.3. Configure the child cells

Edit the nova.conf file in the child cells, and specify compute in the cell_type key:
[DEFAULT]
# Disable quota checking in child cells. Let API cell do it exclusively.
quota_driver=nova.quota.NoopQuotaDriver

[cells]
cell_type = compute

3.13.4. Configure the database in each cell

Before bringing the services online, the database in each cell needs to be configured with information about related cells. In particular, the API cell needs to know about its immediate children, and the child cells must know about their immediate parents. The information needed is the RabbitMQ server credentials for the particular cell.
Use the nova-manage cell create command to add this information to the database in each cell:
# nova-manage cell create -h
usage: nova-manage cell create [-h] [--name <name>]
                               [--cell_type <parent|api|child|compute>]
                               [--username <username>] [--password <password>]
                               [--broker_hosts <broker_hosts>]
                               [--hostname <hostname>] [--port <number>]
                               [--virtual_host <virtual_host>]
                               [--woffset <float>] [--wscale <float>]

optional arguments:
  -h, --help            show this help message and exit
  --name <name>         Name for the new cell
  --cell_type <parent|api|child|compute>
                        Whether the cell is parent/api or child/compute
  --username <username>
                        Username for the message broker in this cell
  --password <password>
                        Password for the message broker in this cell
  --broker_hosts <broker_hosts>
                        Comma separated list of message brokers in this cell.
                        Each Broker is specified as hostname:port with both
                        mandatory. This option overrides the --hostname and
                        --port options (if provided).
  --hostname <hostname>
                        Address of the message broker in this cell
  --port <number>       Port number of the message broker in this cell
  --virtual_host <virtual_host>
                        The virtual host of the message broker in this cell
  --woffset <float>
  --wscale <float>
As an example, assume an API cell named api and a child cell named cell1.
Within the api cell, specify the following RabbitMQ server information:
rabbit_host=10.0.0.10
rabbit_port=5672
rabbit_username=api_user
rabbit_password=api_passwd
rabbit_virtual_host=api_vhost
Within the cell1 child cell, specify the following RabbitMQ server information:
rabbit_host=10.0.1.10
rabbit_port=5673
rabbit_username=cell1_user
rabbit_password=cell1_passwd
rabbit_virtual_host=cell1_vhost
Run the following command in the API cell, as root:
# nova-manage cell create --name cell1 --cell_type child --username cell1_user --password cell1_passwd --hostname 10.0.1.10 --port 5673 --virtual_host cell1_vhost --woffset 1.0 --wscale 1.0
Repeat the previous steps for all child cells.
In the child cell, run the following, as root:
# nova-manage cell create --name api --cell_type parent --username api_user --password api_passwd --hostname 10.0.0.10 --port 5672 --virtual_host api_vhost --woffset 1.0 --wscale 1.0
To customize the Compute cells, use the configuration option settings documented in Table 3.17, “Description of cell configuration options”.

3.13.5. Cell scheduling configuration

To determine the best cell to use to launch a new instance, Compute uses a set of filters and weights defined in the /etc/nova/nova.conf file. The following options are available to prioritize cells for scheduling:
scheduler_filter_classes
List of filter classes. By default nova.cells.filters.all_filters is specified, which maps to all cells filters included with Compute (see Section 3.12.2, “Filters”).
scheduler_weight_classes
List of weight classes. By default nova.cells.weights.all_weighers is specified, which maps to all cell weight algorithms included with Compute. The following modules are available:
  • mute_child. Downgrades the likelihood of choosing child cells that have not sent capacity or capability updates in a while. Options include mute_weight_multiplier (multiplier for mute children; the value should be negative) and mute_weight_value (weight assigned to mute children; the value should be positive).
  • ram_by_instance_type. Selects cells with the most RAM capacity for the requested instance type. Because higher weights win, Compute returns the number of available units for the requested instance type. The ram_weight_multiplier option defaults to 10.0, which scales the weight by a factor of 10. Use a negative number to stack VMs on one host instead of spreading new VMs across more hosts in the cell.
  • weight_offset. Allows modifying the database to weight a particular cell. Use this to disable a cell (for example, by setting its weight_offset to '0'), or to set a default cell by making its weight_offset very high (for example, '999999999999999'). The cell with the highest weight is scheduled first when launching an instance.
Additionally, the following options are available for the cell scheduler:
scheduler_retries
Specifies how many times the scheduler tries to launch a new instance when no cells are available (default=10).
scheduler_retry_delay
Specifies the delay (in seconds) between retries (default=2).
As an admin user, you can also add a filter that directs builds to a particular cell. The policy.json file must have a line with "cells_scheduler_filter:TargetCellFilter" : "is_admin:True" to let an admin user specify a scheduler hint to direct a build to a particular cell.
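
Putting these scheduling options together, a sketch of the scheduler-related settings in the [cells] section might look like this (the values shown are the documented defaults):
[cells]
scheduler_filter_classes = nova.cells.filters.all_filters
scheduler_weight_classes = nova.cells.weights.all_weighers
scheduler_retries = 10
scheduler_retry_delay = 2
mute_weight_multiplier = -10000.0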

3.13.6. Optional cell configuration

Cells store all inter-cell communication data, including user names and passwords, in the database. Because the cells data is not updated very frequently, use the cells_config option in the [cells] section to specify a JSON file that stores the cells data. With this configuration, the database is no longer consulted when reloading the cells data. The file must contain the fields present in the Cell model (excluding common database fields and the id column). You must specify the queue connection information through a transport_url field, instead of username, password, and so on. The transport_url has the following form:
rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST
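For example, the cell1 broker credentials from Section 3.13.4 would yield:
rabbit://cell1_user:cell1_passwd@10.0.1.10:5673/cell1_vhost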
The scheme can be either qpid or rabbit, as shown previously. The following sample shows this optional configuration:
{
    "parent": {
        "name": "parent",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": true
    },
    "cell1": {
        "name": "cell1",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit1.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": false
    },
    "cell2": {
        "name": "cell2",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit2.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": false
    }
}
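To have nova-cells read this file instead of the database, point the cells_config option at it in nova.conf; the path shown here is only an example:
[cells]
cells_config = /etc/nova/cells.json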

3.14. Conductor

The nova-conductor service enables OpenStack to function without compute nodes accessing the database. Conceptually, it implements a new layer on top of nova-compute. It should not be deployed on compute nodes, or else the security benefits of removing database access from nova-compute are negated. Just like other nova services such as nova-api or nova-scheduler, it can be scaled horizontally. You can run multiple instances of nova-conductor on different machines as needed for scaling purposes.
The methods exposed by nova-conductor are relatively simple methods that nova-compute uses to offload its database operations. Code paths in nova-compute that previously performed database access now talk to nova-conductor instead. However, there are plans in the medium to long term to move more and more of what is currently in nova-compute up to the nova-conductor layer. The Compute service will start to look like a less intelligent slave service to nova-conductor. The conductor service will implement long-running, complex operations, ensuring forward progress and graceful error handling. This will be especially beneficial for operations that cross multiple compute nodes, such as migrations or resizes.
To customize the Conductor, use the configuration option settings documented in Table 3.20, “Description of conductor configuration options”.
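For reference, remote conductor operation is the default (use_local = False). The following sketch pins this behavior explicitly and sets a worker count; the workers value is illustrative and would normally track the number of available CPUs:
[conductor]
use_local = False
workers = 4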

3.15. Example nova.conf configuration files

The following sections describe the configuration options in the nova.conf file. You must copy the nova.conf file to each compute node. The sample nova.conf files show examples of specific configurations.

Small, private cloud

This example nova.conf file configures a small private cloud with the cloud controller services, database server, and messaging server on the same host. In this case, CONTROLLER_IP represents the IP address of a central server, BRIDGE_INTERFACE represents the bridge (such as br100), NETWORK_INTERFACE represents an interface to your VLAN setup, DB_PASSWORD_COMPUTE represents your Compute (nova) database password, and RABBIT_PASSWORD represents the password to your message queue installation.
[DEFAULT]

# LOGS/STATE
verbose=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf

# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

# VOLUMES
# configured in cinder.conf

# COMPUTE
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini

# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True

# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=192.168.206.130
s3_host=192.168.206.130

# RABBITMQ
rabbit_host=192.168.206.130

# GLANCE
image_service=nova.image.glance.GlanceImageService

# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.206.130
public_interface=eth0
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0

# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130

# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova

# GLANCE
[glance]
api_servers=192.168.206.130:9292

# DATABASE
[database]
connection=mysql+pymysql://nova:yourpassword@192.168.206.130/nova

# LIBVIRT
[libvirt]
virt_type=qemu

KVM, Flat, MySQL, and Glance, OpenStack or EC2 API

This example nova.conf file, from an internal Rackspace test system, is used for demonstrations.
[DEFAULT]

# LOGS/STATE
verbose=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf

# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

# VOLUMES
# configured in cinder.conf

# COMPUTE
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini

# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True

# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=192.168.206.130
s3_host=192.168.206.130

# RABBITMQ
rabbit_host=192.168.206.130

# GLANCE
image_service=nova.image.glance.GlanceImageService

# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.206.130
public_interface=eth0
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0

# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130

# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova

# GLANCE
[glance]
api_servers=192.168.206.130:9292

# DATABASE
[database]
connection=mysql+pymysql://nova:yourpassword@192.168.206.130/nova

# LIBVIRT
[libvirt]
virt_type=qemu

XenServer, Flat networking, MySQL, and Glance, OpenStack API

This example nova.conf file is from an internal Rackspace test system.
verbose
nodaemon
network_manager=nova.network.manager.FlatManager
image_service=nova.image.glance.GlanceImageService
flat_network_bridge=xenbr0
compute_driver=xenapi.XenAPIDriver
xenapi_connection_url=https://<XenServer IP>
xenapi_connection_username=root
xenapi_connection_password=supersecret
xenapi_image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore
rescue_timeout=86400
use_ipv6=true

3.16. Compute log files

The corresponding log file of each Compute service is stored in the /var/log/nova/ directory of the host on which each service runs.

Table 3.8. Log files used by Compute services

Log file Service name
api.log openstack-nova-api
cert.log [a] openstack-nova-cert
compute.log openstack-nova-compute
conductor.log openstack-nova-conductor
consoleauth.log openstack-nova-consoleauth
network.log [b] openstack-nova-network
nova-manage.log nova-manage
scheduler.log openstack-nova-scheduler
[a] The X509 certificate service (openstack-nova-cert/nova-cert) is required only by the EC2 API of the Compute service.
[b] The nova network service (openstack-nova-network/nova-network) only runs in deployments that are not configured to use the Networking service (neutron).

3.17. Compute sample configuration files

3.17.1. nova.conf - configuration options

For a complete list of all available configuration options for each OpenStack Compute service, run nova-<servicename> --help.
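For example, to list the options understood by the API service:
# nova-api --help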

Table 3.9. Description of API configuration options

Configuration option = Default value Description
[DEFAULT]
api_paste_config = api-paste.ini (StrOpt) File name for the paste.deploy config for nova-api
api_rate_limit = False (BoolOpt) Whether to use per-user rate limiting for the api. This option is only used by v2 api. Rate limiting is removed from v2.1 api.
client_socket_timeout = 900 (IntOpt) Timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of '0' means wait forever.
enable_new_services = True (BoolOpt) Services to be added to the available pool on create
enabled_apis = ec2, osapi_compute, metadata (ListOpt) A list of APIs to enable by default
enabled_ssl_apis = (ListOpt) A list of APIs with enabled SSL
instance_name_template = instance-%08x (StrOpt) Template string to be used to generate instance names
max_header_line = 16384 (IntOpt) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
multi_instance_display_name_template = %(name)s-%(count)d (StrOpt) When creating multiple instances with a single request using the os-multiple-create API extension, this template will be used to build the display name for each instance. The benefit is that the instances end up with different hostnames. To restore legacy behavior of every instance having the same name, set this option to "%(name)s". Valid keys for the template are: name, uuid, count.
non_inheritable_image_properties = cache_in_nova, bittorrent (ListOpt) These are image properties which a snapshot should not inherit from an instance
null_kernel = nokernel (StrOpt) Kernel image that indicates not to use a kernel, but to use a raw disk image instead
osapi_compute_ext_list = (ListOpt) DEPRECATED: Specify list of extensions to load when using the osapi_compute_extension option with nova.api.openstack.compute.legacy_v2.contrib.select_extensions. This option will be removed in the near future. After that point you have to run all of the API.
osapi_compute_extension = ['nova.api.openstack.compute.legacy_v2.contrib.standard_extensions'] (MultiStrOpt) osapi compute extension to load. This option will be removed in the near future. After that point you have to run all of the API.
osapi_compute_link_prefix = None (StrOpt) Base URL that will be presented to users in links to the OpenStack Compute API
osapi_compute_listen = 0.0.0.0 (StrOpt) The IP address on which the OpenStack API will listen.
osapi_compute_listen_port = 8774 (IntOpt) The port on which the OpenStack API will listen.
osapi_compute_workers = None (IntOpt) Number of workers for OpenStack API service. The default will be the number of CPUs available.
osapi_hide_server_address_states = building (ListOpt) List of instance states that should hide network info
secure_proxy_ssl_header = None (StrOpt) The HTTP header used to determine the scheme for the original request, even if it was removed by an SSL terminating proxy. Typical value is HTTP_X_FORWARDED_PROTO.
servicegroup_driver = db (StrOpt) The driver for servicegroup service (valid options are: db, zk, mc)
snapshot_name_template = snapshot-%s (StrOpt) Template string to be used to generate snapshot names
tcp_keepidle = 600 (IntOpt) Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X.
use_forwarded_for = False (BoolOpt) Treat X-Forwarded-For as the canonical remote address. Only enable this if you have a sanitizing proxy.
wsgi_default_pool_size = 1000 (IntOpt) Size of the pool of greenthreads used by wsgi
wsgi_keep_alive = True (BoolOpt) If False, closes the client socket connection explicitly.
wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f (StrOpt) A python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds.
[oslo_middleware]
max_request_body_size = 114688 (IntOpt) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (StrOpt) The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by an SSL termination proxy.
[oslo_versionedobjects]
fatal_exception_format_errors = False (BoolOpt) Make exception message format errors fatal.

Table 3.10. Description of API v2.1 configuration options

Configuration option = Default value Description
[osapi_v21]
enabled = True (BoolOpt) DEPRECATED: Whether the V2.1 API is enabled or not. This option will be removed in the near future.
extensions_blacklist = (ListOpt) DEPRECATED: A list of v2.1 API extensions to never load. Specify the extension aliases here. This option will be removed in the near future. After that point you have to run all of the API.
extensions_whitelist = (ListOpt) DEPRECATED: If the list is not empty then a v2.1 API extension will only be loaded if it exists in this list. Specify the extension aliases here. This option will be removed in the near future. After that point you have to run all of the API.

Table 3.11. Description of API v3 configuration options

Configuration option = Default value Description
[osapi_v3]
enabled = False (BoolOpt) Whether the V3 API is enabled or not
extensions_blacklist = (ListOpt) A list of v3 API extensions to never load. Specify the extension aliases here.
extensions_whitelist = (ListOpt) If the list is not empty then a v3 API extension will only be loaded if it exists in this list. Specify the extension aliases here.

Table 3.12. Description of authentication configuration options

Configuration option = Default value Description
[DEFAULT]
auth_strategy = keystone (StrOpt) The strategy to use for auth: keystone or noauth2. noauth2 is designed for testing only, as it does no actual credential checking. noauth2 provides administrative credentials only if 'admin' is specified as the username.

Table 3.13. Description of authorization token configuration options

Configuration option = Default value Description
[keystone_authtoken]
admin_password = None (StrOpt) Service user password.
admin_tenant_name = admin (StrOpt) Service tenant name.
admin_token = None (StrOpt) This option is deprecated and may be removed in a future release. Single shared secret with the Keystone configuration used for bootstrapping a Keystone installation, or otherwise bypassing the normal authentication process. This option should not be used, use `admin_user` and `admin_password` instead.
admin_user = None (StrOpt) Service username.
auth_admin_prefix = (StrOpt) Prefix to prepend at the beginning of the path. Deprecated, use identity_uri.
auth_host = 127.0.0.1 (StrOpt) Host providing the admin Identity API endpoint. Deprecated, use identity_uri.
auth_plugin = None (StrOpt) Name of the plugin to load
auth_port = 35357 (IntOpt) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_protocol = https (StrOpt) Protocol of the admin Identity API endpoint (http or https). Deprecated, use identity_uri.
auth_section = None (StrOpt) Config Section from which to load plugin specific options
auth_uri = None (StrOpt) Complete public Identity API endpoint.
auth_version = None (StrOpt) API version of the admin Identity API endpoint.
cache = None (StrOpt) Env key for the swift cache.
cafile = None (StrOpt) A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs.
certfile = None (StrOpt) Required if identity server requires client certificate
check_revocations_for_cached = False (BoolOpt) If true, the revocation list will be checked for cached tokens. This requires that PKI tokens are configured on the identity server.
delay_auth_decision = False (BoolOpt) Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components.
enforce_token_bind = permissive (StrOpt) Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens.
hash_algorithms = md5 (ListOpt) Hash algorithms to use for hashing PKI tokens. This may be a single algorithm or multiple. The algorithms are those supported by Python standard hashlib.new(). The hashes will be tried in the order given, so put the preferred one first for performance. The result of the first hash will be stored in the cache. This will typically be set to multiple values only while migrating from a less secure algorithm to a more secure one. Once all the old tokens are expired this option should be set to a single value for better performance.
http_connect_timeout = None (IntOpt) Request timeout value for communicating with Identity API server.
http_request_max_retries = 3 (IntOpt) How many times to try to reconnect when communicating with Identity API Server.
identity_uri = None (StrOpt) Complete admin Identity API endpoint. This should specify the unversioned root endpoint e.g. https://localhost:35357/
include_service_catalog = True (BoolOpt) (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header.
insecure = False (BoolOpt) Verify HTTPS connections.
keyfile = None (StrOpt) Required if identity server requires client certificate
memcache_pool_conn_get_timeout = 10 (IntOpt) (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool.
memcache_pool_dead_retry = 300 (IntOpt) (Optional) Number of seconds memcached server is considered dead before it is tried again.
memcache_pool_maxsize = 10 (IntOpt) (Optional) Maximum total number of open connections to every memcached server.
memcache_pool_socket_timeout = 3 (IntOpt) (Optional) Socket timeout in seconds for communicating with a memcached server.
memcache_pool_unused_timeout = 60 (IntOpt) (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed.
memcache_secret_key = None (StrOpt) (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation.
memcache_security_strategy = None (StrOpt) (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.
memcache_use_advanced_pool = False (BoolOpt) (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x.
region_name = None (StrOpt) The region in which the identity server can be found.
revocation_cache_time = 10 (IntOpt) Determines the frequency at which the list of revoked tokens is retrieved from the Identity service (in seconds). A high number of revocation events combined with a low cache duration may significantly reduce performance.
signing_dir = None (StrOpt) Directory used to cache files related to PKI tokens.
token_cache_time = 300 (IntOpt) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely.
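
Several of the options above (auth_admin_prefix, auth_host, auth_port, and auth_protocol) are deprecated in favor of identity_uri. A minimal sketch of the newer form, with placeholder host names and credentials:
[keystone_authtoken]
auth_uri = http://controller:5000/
identity_uri = http://controller:35357/
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS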

Table 3.14. Description of availability zones configuration options

Configuration option = Default value Description
[DEFAULT]
default_availability_zone = nova (StrOpt) Default compute node availability_zone
default_schedule_zone = None (StrOpt) Availability zone to use when user does not specify one
internal_service_availability_zone = internal (StrOpt) The availability_zone to show internal services under

Table 3.15. Description of Barbican configuration options

Configuration option = Default value Description
[barbican]
cafile = None (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPs connections.
catalog_info = key-manager:barbican:public (StrOpt) Info to match when looking for barbican in the service catalog. Format is colon-separated values of the form <service_type>:<service_name>:<endpoint_type>.
certfile = None (StrOpt) PEM encoded client certificate cert file
endpoint_template = None (StrOpt) Override service catalog lookup with template for barbican endpoint e.g. http://localhost:9311/v1/%(project_id)s
insecure = False (BoolOpt) Verify HTTPS connections.
keyfile = None (StrOpt) PEM encoded client certificate key file
os_region_name = None (StrOpt) Region name of this node
timeout = None (IntOpt) Timeout value for http requests

Table 3.16. Description of CA and SSL configuration options

Configuration option = Default value Description
[DEFAULT]
ca_file = cacert.pem (StrOpt) Filename of root CA
ca_path = $state_path/CA (StrOpt) Where to keep the root CA
cert = self.pem (StrOpt) SSL certificate file
cert_manager = nova.cert.manager.CertManager (StrOpt) Full class name for the Manager for cert
cert_topic = cert (StrOpt) The topic cert nodes listen on
crl_file = crl.pem (StrOpt) Filename of root Certificate Revocation List
key_file = private/cakey.pem (StrOpt) Filename of private key
keys_path = $state_path/keys (StrOpt) Where to keep the keys
project_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s (StrOpt) Subject for certificate for projects, %s for project, timestamp
ssl_ca_file = None (StrOpt) CA certificate file to use to verify connecting clients
ssl_cert_file = None (StrOpt) SSL certificate of API server
ssl_key_file = None (StrOpt) SSL private key of API server
use_project_ca = False (BoolOpt) Should a CA be used for each project?
user_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s (StrOpt) Subject for certificate for users, %s for project, user, timestamp
[ssl]
ca_file = None (StrOpt) CA certificate file to use to verify connecting clients.
cert_file = None (StrOpt) Certificate file to use when starting the server securely.
key_file = None (StrOpt) Private key file to use when starting the server securely.
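
As an illustration of how these [ssl] options combine with the enabled_ssl_apis option from Table 3.9, the following sketch serves the Compute API over SSL; the file paths are placeholders:
[DEFAULT]
enabled_ssl_apis = osapi_compute
[ssl]
cert_file = /etc/nova/ssl/nova-api.crt
key_file = /etc/nova/ssl/nova-api.key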

Table 3.17. Description of cell configuration options

Configuration option = Default value Description
[cells]
call_timeout = 60 (IntOpt) Seconds to wait for response from a call to a cell.
capabilities = hypervisor=xenserver;kvm, os=linux (ListOpt) Key/Multi-value list with the capabilities of the cell
cell_type = compute (StrOpt) Type of cell
cells_config = None (StrOpt) Configuration file from which to read cells configuration. If given, overrides reading cells from the database.
db_check_interval = 60 (IntOpt) Interval, in seconds, for getting fresh cell information from the database.
driver = nova.cells.rpc_driver.CellsRPCDriver (StrOpt) Cells communication driver to use
enable = False (BoolOpt) Enable cell functionality
instance_update_num_instances = 1 (IntOpt) Number of instances to update per periodic task run
instance_updated_at_threshold = 3600 (IntOpt) Number of seconds after an instance was updated or deleted to continue to update cells
manager = nova.cells.manager.CellsManager (StrOpt) Manager for cells
max_hop_count = 10 (IntOpt) Maximum number of hops for cells routing.
mute_child_interval = 300 (IntOpt) Number of seconds after which a lack of capability and capacity updates signals the child cell is to be treated as a mute.
mute_weight_multiplier = -10000.0 (FloatOpt) Multiplier used to weigh mute children. (The value should be negative.)
name = nova (StrOpt) Name of this cell
offset_weight_multiplier = 1.0 (FloatOpt) Multiplier used to weigh offset weigher.
reserve_percent = 10.0 (FloatOpt) Percentage of cell capacity to hold in reserve. Affects both memory and disk utilization
topic = cells (StrOpt) The topic cells nodes listen on

Table 3.18. Description of common configuration options

Configuration option = Default value Description
[DEFAULT]
bindir = /usr/local/bin (StrOpt) Directory where nova binaries are installed
compute_topic = compute (StrOpt) The topic compute nodes listen on
console_topic = console (StrOpt) The topic console proxy nodes listen on
consoleauth_topic = consoleauth (StrOpt) The topic console auth proxy nodes listen on
executor_thread_pool_size = 64 (IntOpt) Size of executor thread pool.
host = localhost (StrOpt) Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. However, the node name must be valid within an AMQP key.
memcached_servers = None (ListOpt) Memcached servers or None for in process cache.
my_ip = 10.0.0.1 (StrOpt) IP address of this host
notify_api_faults = False (BoolOpt) If set, send api.fault notifications on caught exceptions in the API service.
notify_on_state_change = None (StrOpt) If set, send compute.instance.update notifications on instance state changes. Valid values are None for no notifications, "vm_state" for notifications on VM state changes, or "vm_and_task_state" for notifications on VM and task state changes.
pybasedir = /usr/lib/python/site-packages/nova (StrOpt) Directory where the nova python module is installed
report_interval = 10 (IntOpt) Seconds between nodes reporting state to datastore
rootwrap_config = /etc/nova/rootwrap.conf (StrOpt) Path to the rootwrap configuration file to use for running commands as root
service_down_time = 60 (IntOpt) Maximum time since last check-in for up service
state_path = $pybasedir (StrOpt) Top-level directory for maintaining nova's state
tempdir = None (StrOpt) Explicitly specify the temporary working directory
use_rootwrap_daemon = False (BoolOpt) Start and use a daemon that can run the commands that need to be run with root privileges. This option is usually enabled on nodes that run nova compute processes.
[keystone_authtoken]
memcached_servers = None (ListOpt) Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process.
[workarounds]
destroy_after_evacuate = True (BoolOpt) DEPRECATED: Whether to destroy instances on startup when it is suspected that they have previously been evacuated. This can result in data loss if undesired. See https://launchpad.net/bugs/1419785.
disable_libvirt_livesnapshot = True (BoolOpt) When using libvirt 1.2.2, live snapshots fail intermittently under load. This configuration option provides a mechanism to enable live snapshot while this is resolved. See https://bugs.launchpad.net/nova/+bug/1334398.
disable_rootwrap = False (BoolOpt) This option allows a fallback to sudo for performance reasons. For example, see https://bugs.launchpad.net/nova/+bug/1415106.
handle_virt_lifecycle_events = True (BoolOpt) Whether or not to handle events raised from the compute driver's emit_event method. These are lifecycle events raised from compute drivers that implement the method. An example of a lifecycle event is an instance starting or stopping. If the instance is going through task state changes due to an API operation, such as resize, the events are ignored. However, this is an advanced feature which allows the hypervisor to signal to the compute service that an unexpected state change has occurred in an instance and the instance can be shut down automatically, which can inherently race in reboot operations or when the compute service or host is rebooted, either intentionally or due to an unexpected outage. Care should be taken when using this and sync_power_state_interval is negative since then if any instances are out of sync between the hypervisor and the Nova database, they will have to be synchronized manually. See https://bugs.launchpad.net/bugs/1444630.

Table 3.19. Description of Compute configuration options

Configuration option = Default value Description
[DEFAULT]
compute_available_monitors = None (MultiStrOpt) [DEPRECATED] Monitor classes available to the compute which may be specified more than once. Use setuptools entry points to list available monitor plug-ins.
compute_driver = None (StrOpt) Driver to use for controlling virtualization. Options include: libvirt.LibvirtDriver, ironic.IronicDriver, and vmwareapi.VMwareVCDriver.
compute_manager = nova.compute.manager.ComputeManager (StrOpt) Full class name for the Manager for compute
compute_monitors = (ListOpt) A list of monitors that can be used for getting compute metrics. You can use the alias/name from the setuptools entry points for nova.compute.monitors.* namespaces. If no namespace is supplied, the "cpu." namespace is assumed for backwards-compatibility. An example value that would enable both the CPU and NUMA memory bandwidth monitors that used the virt driver variant: ["cpu.virt_driver", "numa_mem_bw.virt_driver"]
compute_resources = vcpu (ListOpt) The names of the extra resources to track.
compute_stats_class = nova.compute.stats.Stats (StrOpt) Class that will manage stats for the local compute host
console_host = localhost (StrOpt) Console proxy host to use to connect to instances on this host.
console_manager = nova.console.manager.ConsoleProxyManager (StrOpt) Full class name for the Manager for console proxy
default_flavor = m1.small (StrOpt) Default flavor to use for the EC2 API only. The Nova API does not support a default flavor.
default_notification_level = INFO (StrOpt) Default notification level for outgoing notifications
enable_instance_password = True (BoolOpt) Enables returning of the instance password by the relevant server API calls such as create, rebuild, or rescue. If the hypervisor does not support password injection, the returned password will not be correct.
heal_instance_info_cache_interval = 60 (IntOpt) Number of seconds between instance network information cache updates
image_cache_manager_interval = 2400 (IntOpt) Number of seconds to wait between runs of the image cache manager. Set to -1 to disable. Setting this to 0 will run at the default rate.
image_cache_subdirectory_name = _base (StrOpt) Where cached images are stored under $instances_path. This is NOT the full path - only a folder name. For per-compute-host cached images, set to _base_$my_ip
instance_build_timeout = 0 (IntOpt) Amount of time in seconds an instance can be in BUILD before going into ERROR status. Set to 0 to disable.
instance_delete_interval = 300 (IntOpt) Interval in seconds for retrying failed instance file deletes. Set to -1 to disable. Setting this to 0 will run at the default rate.
instance_usage_audit = False (BoolOpt) Generate periodic compute.instance.exists notifications
instance_usage_audit_period = month (StrOpt) Time period to generate instance usages for. Time period must be hour, day, month or year
instances_path = $state_path/instances (StrOpt) Where instances are stored on disk
max_concurrent_builds = 10 (IntOpt) Maximum number of instance builds to run concurrently
maximum_instance_delete_attempts = 5 (IntOpt) The number of times to attempt to reap an instance's files.
reboot_timeout = 0 (IntOpt) Automatically hard reboot an instance if it has been stuck in a rebooting state longer than N seconds. Set to 0 to disable.
reclaim_instance_interval = 0 (IntOpt) Interval in seconds for reclaiming deleted instances
rescue_timeout = 0 (IntOpt) Automatically unrescue an instance after N seconds. Set to 0 to disable.
resize_confirm_window = 0 (IntOpt) Automatically confirm resizes after N seconds. Set to 0 to disable.
resume_guests_state_on_host_boot = False (BoolOpt) Whether to start guests that were running before the host rebooted
running_deleted_instance_action = reap (StrOpt) Action to take if a running deleted instance is detected. Set to 'noop' to take no action.
running_deleted_instance_poll_interval = 1800 (IntOpt) Number of seconds to wait between runs of the cleanup task.
running_deleted_instance_timeout = 0 (IntOpt) Number of seconds after being deleted when a running instance should be considered eligible for cleanup.
shelved_offload_time = 0 (IntOpt) Time in seconds before a shelved instance is eligible for removing from a host. -1: never offload, 0: offload immediately when shelved.
shelved_poll_interval = 3600 (IntOpt) Interval in seconds for polling shelved instances to offload. Set to -1 to disable. Setting this to 0 will run at the default rate.
shutdown_timeout = 60 (IntOpt) Total amount of time to wait in seconds for an instance to perform a clean shutdown.
sync_power_state_interval = 600 (IntOpt) Interval to sync power states between the database and the hypervisor. Set to -1 to disable. Setting this to 0 will run at the default rate.
update_resources_interval = 0 (IntOpt) Interval in seconds for updating compute resources. A negative number disables the task completely. Leaving this at the default of 0 will cause this to run at the default periodic interval. Setting it to any positive value will cause it to run at approximately that number of seconds.
vif_plugging_is_fatal = True (BoolOpt) Fail instance boot if vif plugging fails
vif_plugging_timeout = 300 (IntOpt) Number of seconds to wait for neutron vif plugging events to arrive before continuing or failing (see vif_plugging_is_fatal). If this is set to zero and vif_plugging_is_fatal is False, events should not be expected to arrive at all.

Table 3.20. Description of conductor configuration options

Configuration option = Default value Description
[DEFAULT]
migrate_max_retries = -1 (IntOpt) Number of times to retry live-migration before failing. If set to -1, try until out of hosts. If set to 0, only try once, no retries.
[conductor]
manager = nova.conductor.manager.ConductorManager (StrOpt) Full class name for the Manager for conductor
topic = conductor (StrOpt) The topic on which conductor nodes listen
use_local = False (BoolOpt) Perform nova-conductor operations locally
workers = None (IntOpt) Number of workers for OpenStack Conductor service. The default will be the number of CPUs available.

Table 3.21. Description of config drive configuration options

Configuration option = Default value Description
[DEFAULT]
config_drive_format = iso9660 (StrOpt) Config drive format.
config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 (StrOpt) List of metadata versions to skip placing into the config drive
force_config_drive = None (StrOpt) Set to "always" to force injection to take place on a config drive. NOTE: The "always" will be deprecated in the Liberty release cycle.
mkisofs_cmd = genisoimage (StrOpt) Name and optionally path of the tool used for ISO image creation
[hyperv]
config_drive_cdrom = False (BoolOpt) Attaches the Config Drive image as a cdrom drive instead of a disk drive
config_drive_inject_password = False (BoolOpt) Sets the admin password in the config drive image

Table 3.22. Description of console configuration options

Configuration option = Default value Description
[DEFAULT]
console_allowed_origins = (ListOpt) Allowed Origin header hostnames for access to console proxy servers
console_public_hostname = localhost (StrOpt) Publicly visible name for this console host
console_token_ttl = 600 (IntOpt) How many seconds before deleting tokens
consoleauth_manager = nova.consoleauth.manager.ConsoleAuthManager (StrOpt) Manager for console auth
[mks]
enabled = False (BoolOpt) Enable MKS related features
mksproxy_base_url = http://127.0.0.1:6090/ (StrOpt) Location of MKS web console proxy, in the form "http://127.0.0.1:6090/"

Table 3.23. Description of CORS configuration options

Configuration option = Default value Description
[cors]
allow_credentials = True (BoolOpt) Indicate that the actual request can include user credentials
allow_headers = Content-Type, Cache-Control, Content-Language, Expires, Last-Modified, Pragma (ListOpt) Indicate which header field names may be used during the actual request.
allow_methods = GET, POST, PUT, DELETE, OPTIONS (ListOpt) Indicate which methods can be used during the actual request.
allowed_origin = None (StrOpt) Indicate whether this resource may be shared with the domain received in the requests "origin" header.
expose_headers = Content-Type, Cache-Control, Content-Language, Expires, Last-Modified, Pragma (ListOpt) Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers.
max_age = 3600 (IntOpt) Maximum cache age of CORS preflight requests.
[cors.subdomain]
allow_credentials = True (BoolOpt) Indicate that the actual request can include user credentials
allow_headers = Content-Type, Cache-Control, Content-Language, Expires, Last-Modified, Pragma (ListOpt) Indicate which header field names may be used during the actual request.
allow_methods = GET, POST, PUT, DELETE, OPTIONS (ListOpt) Indicate which methods can be used during the actual request.
allowed_origin = None (StrOpt) Indicate whether this resource may be shared with the domain received in the requests "origin" header.
expose_headers = Content-Type, Cache-Control, Content-Language, Expires, Last-Modified, Pragma (ListOpt) Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers.
max_age = 3600 (IntOpt) Maximum cache age of CORS preflight requests.
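
For example, to allow a single dashboard origin to make credentialed cross-origin requests (the origin shown is a placeholder):
[cors]
allowed_origin = https://dashboard.example.com
allow_credentials = True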

Table 3.24. Description of database configuration options

Configuration option = Default value Description
[DEFAULT]
db_driver = nova.db (StrOpt) The driver to use for database access
[api_database]
connection = None (StrOpt) The SQLAlchemy connection string to use to connect to the Nova API database.
connection_debug = 0 (IntOpt) Verbosity of SQL debugging information: 0=None, 100=Everything.
connection_trace = False (BoolOpt) Add Python stack traces to SQL as comment strings.
idle_timeout = 3600 (IntOpt) Timeout before idle SQL connections are reaped.
max_overflow = None (IntOpt) If set, use this value for max_overflow with SQLAlchemy.
max_pool_size = None (IntOpt) Maximum number of SQL connections to keep open in a pool.
max_retries = 10 (IntOpt) Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.
mysql_sql_mode = TRADITIONAL (StrOpt) The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=
pool_timeout = None (IntOpt) If set, use this value for pool_timeout with SQLAlchemy.
retry_interval = 10 (IntOpt) Interval between retries of opening a SQL connection.
slave_connection = None (StrOpt) The SQLAlchemy connection string to use to connect to the slave database.
sqlite_synchronous = True (BoolOpt) If True, SQLite uses synchronous mode.
[database]
backend = sqlalchemy (StrOpt) The back end to use for the database.
connection = None (StrOpt) The SQLAlchemy connection string to use to connect to the database.
connection_debug = 0 (IntOpt) Verbosity of SQL debugging information: 0=None, 100=Everything.
connection_trace = False (BoolOpt) Add Python stack traces to SQL as comment strings.
db_inc_retry_interval = True (BoolOpt) If True, increases the interval between retries of a database operation up to db_max_retry_interval.
db_max_retries = 20 (IntOpt) Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count.
db_max_retry_interval = 10 (IntOpt) If db_inc_retry_interval is set, the maximum seconds between retries of a database operation.
db_retry_interval = 1 (IntOpt) Seconds between retries of a database transaction.
idle_timeout = 3600 (IntOpt) Timeout before idle SQL connections are reaped.
max_overflow = None (IntOpt) If set, use this value for max_overflow with SQLAlchemy.
max_pool_size = None (IntOpt) Maximum number of SQL connections to keep open in a pool.
max_retries = 10 (IntOpt) Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.
min_pool_size = 1 (IntOpt) Minimum number of SQL connections to keep open in a pool.
mysql_sql_mode = TRADITIONAL (StrOpt) The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=
pool_timeout = None (IntOpt) If set, use this value for pool_timeout with SQLAlchemy.
retry_interval = 10 (IntOpt) Interval between retries of opening a SQL connection.
slave_connection = None (StrOpt) The SQLAlchemy connection string to use to connect to the slave database.
sqlite_db = oslo.sqlite (StrOpt) The file name to use with SQLite.
sqlite_synchronous = True (BoolOpt) If True, SQLite uses synchronous mode.
use_db_reconnect = False (BoolOpt) Enable the experimental use of database reconnect on connection lost.
use_tpool = False (BoolOpt) Enable the experimental use of thread pooling for all DB API calls
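
A sketch of a tuned [database] section combining the connection string format used in the sample files with the pooling options above; the credentials, host name, and pool sizes are illustrative:
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
max_pool_size = 10
max_overflow = 20
idle_timeout = 3600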

Table 3.25. Description of logging configuration options

Configuration option = Default value Description
[guestfs]
debug = False (BoolOpt) Enable guestfs debug

Table 3.26. Description of EC2 configuration options

Configuration option = Default value Description
[DEFAULT]
ec2_dmz_host = $my_ip (StrOpt) The internal IP address of the EC2 API server
ec2_host = $my_ip (StrOpt) The IP address of the EC2 API server
ec2_listen = 0.0.0.0 (StrOpt) The IP address on which the EC2 API will listen.
ec2_listen_port = 8773 (IntOpt) The port on which the EC2 API will listen.
ec2_path = / (StrOpt) The path prefix used to call the ec2 API server
ec2_port = 8773 (IntOpt) The port of the EC2 API server
ec2_private_dns_show_ip = False (BoolOpt) Return the IP address as private dns hostname in describe instances
ec2_scheme = http (StrOpt) The protocol to use when connecting to the EC2 API server
ec2_strict_validation = True (BoolOpt) Validate security group names according to EC2 specification
ec2_timestamp_expiry = 300 (IntOpt) Time in seconds before ec2 timestamp expires
ec2_workers = None (IntOpt) Number of workers for EC2 API service. The default will be equal to the number of CPUs available.
keystone_ec2_insecure = False (BoolOpt) Disable SSL certificate verification.
keystone_ec2_url = http://localhost:5000/v2.0/ec2tokens (StrOpt) URL to get token from ec2 request.
lockout_attempts = 5 (IntOpt) Number of failed auths before lockout.
lockout_minutes = 15 (IntOpt) Number of minutes to lockout if triggered.
lockout_window = 15 (IntOpt) Number of minutes for lockout window.
region_list = (ListOpt) List of region=fqdn pairs separated by commas

Table 3.27. Description of ephemeral storage encryption configuration options

Configuration option = Default value Description
[ephemeral_storage_encryption]
cipher = aes-xts-plain64 (StrOpt) The cipher and mode to be used to encrypt ephemeral storage. Which ciphers are available depends on kernel support. See /proc/crypto for the list of available options.
enabled = False (BoolOpt) Whether to encrypt ephemeral storage
key_size = 512 (IntOpt) The bit length of the encryption key to be used to encrypt ephemeral storage (in XTS mode only half of the bits are used for encryption key)
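
Enabling ephemeral storage encryption therefore only requires flipping the enabled flag; the cipher and key size shown are the documented defaults:
[ephemeral_storage_encryption]
enabled = True
cipher = aes-xts-plain64
key_size = 512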

Table 3.28. Description of fping configuration options

Configuration option = Default value Description
[DEFAULT]
fping_path = /usr/sbin/fping (StrOpt) Full path to fping.

Table 3.29. Description of glance configuration options

Configuration option = Default value Description
[DEFAULT]
osapi_glance_link_prefix = None (StrOpt) Base URL that will be presented to users in links to glance resources
[glance]
allowed_direct_url_schemes = (ListOpt) A list of url scheme that can be downloaded directly via the direct_url. Currently supported schemes: [file].
api_insecure = False (BoolOpt) Allow to perform insecure SSL (https) requests to glance
api_servers = None (ListOpt) A list of the glance api servers available to nova. Prefix with https:// for ssl-based glance api servers. ([hostname|ip]:port)
host = $my_ip (StrOpt) Default glance hostname or IP address
num_retries = 0 (IntOpt) Number of retries when uploading / downloading an image to / from glance.
port = 9292 (IntOpt) Default glance port
protocol = http (StrOpt) Default protocol to use when connecting to glance. Set to https for SSL.
[image_file_url]
filesystems = (ListOpt) List of file systems that are configured in this file in the image_file_url:<list entry name> sections
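
For example, to point nova at two glance API servers with retries enabled (the addresses are placeholders):
[glance]
api_servers = 10.0.0.1:9292,10.0.0.2:9292
num_retries = 2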

Table 3.30. Description of hypervisor configuration options

Configuration option = Default value Description
[DEFAULT]
default_ephemeral_format = None (StrOpt) The default format an ephemeral_volume will be formatted with on creation.
force_raw_images = True (BoolOpt) Force backing images to raw format
preallocate_images = none (StrOpt) VM image preallocation mode: "none" => no storage provisioning is done up front, "space" => storage is fully allocated at instance start
timeout_nbd = 10 (IntOpt) Amount of time, in seconds, to wait for NBD device start up.
use_cow_images = True (BoolOpt) Whether to use cow images
vcpu_pin_set = None (StrOpt) Defines which pcpus that instance vcpus can use. For example, "4-12,^8,15"
virt_mkfs = [] (MultiStrOpt) Name of the mkfs commands for ephemeral device. The format is <os_type>=<mkfs command>

Table 3.31. Description of bare metal configuration options

Configuration option = Default value Description
[ironic]
admin_auth_token = None (StrOpt) Ironic keystone auth token. DEPRECATED: use admin_username, admin_password, and admin_tenant_name instead
admin_password = None (StrOpt) Ironic keystone admin password.
admin_tenant_name = None (StrOpt) Ironic keystone tenant name.
admin_url = None (StrOpt) Keystone public API endpoint.
admin_username = None (StrOpt) Ironic keystone admin name
api_endpoint = None (StrOpt) URL for Ironic API endpoint.
api_max_retries = 60 (IntOpt) How many retries when a request does conflict. If a negative number is set, only try once, no retries.
api_retry_interval = 2 (IntOpt) How often to retry in seconds when a request does conflict
api_version = 1 (IntOpt) Version of Ironic API service endpoint.
client_log_level = None (StrOpt) Log level override for ironicclient. Set this in order to override the global "default_log_levels", "verbose", and "debug" settings. DEPRECATED: use standard logging configuration.
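
A minimal [ironic] sketch using the non-deprecated credential options; the endpoints and credentials are placeholders. Note that the bare metal driver itself is selected with the compute_driver option from Table 3.19:
[DEFAULT]
compute_driver = ironic.IronicDriver
[ironic]
admin_username = ironic
admin_password = IRONIC_PASS
admin_tenant_name = service
admin_url = http://controller:35357/v2.0
api_endpoint = http://controller:6385/v1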

Table 3.32. Description of IPv6 configuration options

Configuration option = Default value Description
[DEFAULT]
fixed_range_v6 = fd00::/48 (StrOpt) Fixed IPv6 address block
gateway_v6 = None (StrOpt) Default IPv6 gateway
ipv6_backend = rfc2462 (StrOpt) Backend to use for IPv6 generation
use_ipv6 = False (BoolOpt) Use IPv6

Table 3.33. Description of key manager configuration options

Configuration option = Default value Description
[keymgr]
api_class = nova.keymgr.conf_key_mgr.ConfKeyManager (StrOpt) The full class name of the key manager API class
fixed_key = None (StrOpt) Fixed key returned by key manager, specified in hex

Table 3.34. Description of LDAP configuration options

Configuration option = Default value Description
[DEFAULT]
ldap_dns_base_dn = ou=hosts,dc=example,dc=org (StrOpt) Base DN for DNS entries in LDAP
ldap_dns_password = password (StrOpt) Password for LDAP DNS
ldap_dns_servers = ['dns.example.org'] (MultiStrOpt) DNS Servers for LDAP DNS driver
ldap_dns_soa_expiry = 86400 (StrOpt) Expiry interval (in seconds) for LDAP DNS driver Statement of Authority
ldap_dns_soa_hostmaster = hostmaster@example.org (StrOpt) Hostmaster for LDAP DNS driver Statement of Authority
ldap_dns_soa_minimum = 7200 (StrOpt) Minimum interval (in seconds) for LDAP DNS driver Statement of Authority
ldap_dns_soa_refresh = 1800 (StrOpt) Refresh interval (in seconds) for LDAP DNS driver Statement of Authority
ldap_dns_soa_retry = 3600 (StrOpt) Retry interval (in seconds) for LDAP DNS driver Statement of Authority