5.3. Manage volumes
The default OpenStack Block Storage service implementation is an iSCSI solution that uses Logical Volume Manager (LVM) for Linux.
Note
The OpenStack Block Storage service is not a shared storage solution like a Storage Area Network (SAN) or NFS volumes, where you can attach a volume to multiple servers. With the OpenStack Block Storage service, you can attach a volume to only one instance at a time.
The OpenStack Block Storage service also provides drivers that enable you to use several vendors' back-end storage devices, in addition to or instead of the base LVM implementation.
This high-level procedure shows you how to create and attach a volume to a server instance.
- You must configure both OpenStack Compute and the OpenStack Block Storage service through the cinder.conf file.
- Create a volume through the cinder create command. This command creates a logical volume (LV) in the volume group (VG) cinder-volumes (see the example commands after this list).
- Attach the volume to an instance through the nova volume-attach command. This command creates a unique iSCSI IQN that is exposed to the compute node.
- The compute node, which runs the instance, now has an active iSCSI session and new local storage (usually a /dev/sdX disk).
- libvirt uses that local storage as storage for the instance. The instance gets a new disk (usually a /dev/vdX disk).
For this particular walk through, one cloud controller runs nova-api, nova-scheduler, nova-objectstore, nova-network, and cinder-* services. Two additional compute nodes run nova-compute. The walk through uses a custom partitioning scheme that carves out 60 GB of space and labels it as LVM. The network uses the FlatManager and NetworkManager settings for OpenStack Compute.
The network mode does not interfere with the way Block Storage works, but you must set up networking for Block Storage to work (for details, see Chapter 6, Networking).
To set up Compute to use volumes, ensure that Block Storage is installed along with lvm2. This guide describes how to troubleshoot your installation and back up your Compute volumes.
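On Red Hat Enterprise Linux OpenStack Platform systems the relevant packages are typically openstack-cinder and lvm2; if you are unsure whether they are already present, you can, for example, run:
# yum install openstack-cinder lvm2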
5.3.1. Boot from volume
In some cases, instances can be stored and run from inside volumes. For information, see the Launch and manage instances section in the OpenStack dashboard chapter of the Red Hat Enterprise Linux OpenStack Platform 5 End User Guide available at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/.
5.3.2. Configure an NFS storage back end
This section explains how to configure OpenStack Block Storage to use NFS storage. You must be able to access the NFS shares from the server that hosts the cinder volume service.
Note
The cinder volume service is named openstack-cinder-volume.
Procedure 5.1. Configure Block Storage to use an NFS storage back end
- Log in as root to the system hosting the cinder volume service.
- Create a text file named nfsshares in /etc/cinder/.
- Add an entry to /etc/cinder/nfsshares for each NFS share that the cinder volume service should use for back end storage (see the sample file after this procedure). Each entry should be a separate line, and should use the following format:
HOST:SHARE
Where:
- HOST is the IP address or host name of the NFS server.
- SHARE is the absolute path to an existing and accessible NFS share.
- Set /etc/cinder/nfsshares to be owned by the root user and the cinder group:
# chown root:cinder /etc/cinder/nfsshares
- Set /etc/cinder/nfsshares to be readable by members of the cinder group:
# chmod 0640 /etc/cinder/nfsshares
- Configure the cinder volume service to use the /etc/cinder/nfsshares file created earlier. To do so, open the /etc/cinder/cinder.conf configuration file and set the nfs_shares_config configuration key to /etc/cinder/nfsshares.
Using openstack-config, you can configure this by running the following command instead:
# openstack-config --set /etc/cinder/cinder.conf \
  DEFAULT nfs_shares_config /etc/cinder/nfsshares
- Optionally, provide any additional NFS mount options required in your environment in the nfs_mount_options configuration key of /etc/cinder/cinder.conf. If your NFS shares do not require any additional mount options (or if you are unsure), skip this step.
Using openstack-config, you can configure this by running the following command instead:
# openstack-config --set /etc/cinder/cinder.conf \
  DEFAULT nfs_mount_options OPTIONS
Replace OPTIONS with the mount options to be used when accessing NFS shares. See the manual page for NFS for more information on available mount options (man nfs).
- Configure the cinder volume service to use the correct volume driver, namely cinder.volume.drivers.nfs.NfsDriver. To do so, open the /etc/cinder/cinder.conf configuration file and set the volume_driver configuration key to cinder.volume.drivers.nfs.NfsDriver.
Using openstack-config, you can configure this by running the following command instead:
# openstack-config --set /etc/cinder/cinder.conf \
  DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
- You can now restart the service to apply the configuration. To restart the cinder volume service, run:
# service openstack-cinder-volume restart
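For reference, a minimal /etc/cinder/nfsshares file might look like the following. The host name, IP address, and export paths are examples only:
nfs1.example.com:/export/cinder
192.168.122.10:/srv/nfs/cinder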
Note
The nfs_sparsed_volumes configuration key determines whether volumes are created as sparse files and grown as needed or fully allocated up front. The default and recommended value is true, which ensures volumes are initially created as sparse files.
Setting nfs_sparsed_volumes to false will result in volumes being fully allocated at the time of creation. This leads to increased delays in volume creation.
However, should you choose to set nfs_sparsed_volumes to false, you can do so directly in /etc/cinder/cinder.conf.
Using openstack-config, you can configure this by running the following command instead:
# openstack-config --set /etc/cinder/cinder.conf \
  DEFAULT nfs_sparsed_volumes false
Important
If a client host has SELinux enabled, the virt_use_nfs Boolean should also be enabled if the host requires access to NFS volumes on an instance. To enable this Boolean, run the following command as the root user:
#setsebool -P virt_use_nfs on
This command also makes the Boolean persistent across reboots. Run this command on all client hosts that require access to NFS volumes on an instance. This includes all Compute nodes.
5.3.3. Configure a GlusterFS back end
This section explains how to configure OpenStack Block Storage to use GlusterFS as a back end. You must be able to access the GlusterFS shares from the server that hosts the cinder volume service.
Note
The cinder volume service is named openstack-cinder-volume.
Mounting GlusterFS volumes requires utilities and libraries from the glusterfs-fuse package. This package must be installed on all systems that will access volumes backed by GlusterFS.
For information on how to install and configure GlusterFS, refer to the Gluster Documentation page.
Procedure 5.2. Configure GlusterFS for OpenStack Block Storage
The GlusterFS server must also be configured accordingly in order to allow OpenStack Block Storage to use GlusterFS shares:
- Log in as root to the GlusterFS server.
- Set each Gluster volume to use the same UID and GID as the cinder user:
# gluster volume set VOL_NAME storage.owner-uid cinder-uid
# gluster volume set VOL_NAME storage.owner-gid cinder-gid
Where:
- VOL_NAME is the Gluster volume name.
- cinder-uid is the UID of the cinder user.
- cinder-gid is the GID of the cinder user (see the example after this procedure for how to look up these values).
- Configure each Gluster volume to accept libgfapi connections. To do this, set each Gluster volume to allow insecure ports:
# gluster volume set VOL_NAME server.allow-insecure on
- Enable client connections from unprivileged ports. To do this, add the following line to /etc/glusterfs/glusterd.vol:
option rpc-auth-allow-insecure on
- Restart the glusterd service:
# service glusterd restart
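The cinder user exists on the system that runs the cinder volume service, not necessarily on the GlusterFS server. You can look up its UID and GID there with the id command and substitute the values in the commands above. In the sketch below, cinder-vol is an example volume name and 165 is the UID/GID typically reserved for cinder on Red Hat Enterprise Linux; verify the values on your own system:
$ id -u cinder
$ id -g cinder
# gluster volume set cinder-vol storage.owner-uid 165
# gluster volume set cinder-vol storage.owner-gid 165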
Procedure 5.3. Configure Block Storage to use a GlusterFS back end
After you configure the GlusterFS service, complete these steps:
- Log in as root to the system hosting the cinder volume service.
- Create a text file named glusterfs in /etc/cinder/.
- Add an entry to /etc/cinder/glusterfs for each GlusterFS share that OpenStack Block Storage should use for back end storage (see the sample file after this procedure). Each entry should be a separate line, and should use the following format:
HOST:/VOL_NAME
Where:
- HOST is the IP address or host name of the Red Hat Storage server.
- VOL_NAME is the name of an existing and accessible volume on the GlusterFS server.
Optionally, if your environment requires additional mount options for a share, you can add them to the share's entry:
HOST:/VOL_NAME -o OPTIONS
Replace OPTIONS with a comma-separated list of mount options.
- Set /etc/cinder/glusterfs to be owned by the root user and the cinder group:
# chown root:cinder /etc/cinder/glusterfs
- Set /etc/cinder/glusterfs to be readable by members of the cinder group:
# chmod 0640 /etc/cinder/glusterfs
- Configure OpenStack Block Storage to use the /etc/cinder/glusterfs file created earlier. To do so, open the /etc/cinder/cinder.conf configuration file and set the glusterfs_shares_config configuration key to /etc/cinder/glusterfs.
Using openstack-config, you can configure this by running the following command instead:
# openstack-config --set /etc/cinder/cinder.conf \
  DEFAULT glusterfs_shares_config /etc/cinder/glusterfs
- Configure OpenStack Block Storage to use the correct volume driver, namely cinder.volume.drivers.glusterfs.GlusterfsDriver. To do so, open the /etc/cinder/cinder.conf configuration file and set the volume_driver configuration key to cinder.volume.drivers.glusterfs.GlusterfsDriver.
Using openstack-config, you can configure this by running the following command instead:
# openstack-config --set /etc/cinder/cinder.conf \
  DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
- You can now restart the service to apply the configuration. To restart the cinder volume service, run:
# service openstack-cinder-volume restart
OpenStack Block Storage is now configured to use a GlusterFS back end.
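For reference, a minimal /etc/cinder/glusterfs file might look like the following. The host names and volume names are examples only; append -o OPTIONS to an entry if a share needs extra mount options, as described in the procedure:
glusterfs1.example.com:/cinder-vol
192.168.122.20:/cinder-vol2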
Note
In /etc/cinder/cinder.conf, the glusterfs_sparsed_volumes configuration key determines whether volumes are created as sparse files and grown as needed or fully allocated up front. The default and recommended value of this key is true, which ensures volumes are initially created as sparse files.
Setting glusterfs_sparsed_volumes to false will result in volumes being fully allocated at the time of creation. This leads to increased delays in volume creation.
However, should you choose to set glusterfs_sparsed_volumes to false, you can do so directly in /etc/cinder/cinder.conf.
Using openstack-config, you can configure this by running the following command:
# openstack-config --set /etc/cinder/cinder.conf \
  DEFAULT glusterfs_sparsed_volumes false
Important
If a client host has SELinux enabled, the virt_use_fusefs Boolean should also be enabled if the host requires access to GlusterFS volumes on an instance. To enable this Boolean, run the following command as the root user:
#setsebool -P virt_use_fusefs on
This command also makes the Boolean persistent across reboots. Run this command on all client hosts that require access to GlusterFS volumes on an instance. This includes all compute nodes.
5.3.4. Configure a multiple-storage back-end
With multiple storage back-ends configured, you can create several back-end storage solutions serving the same OpenStack Compute configuration. Basically, multi back-end launches one cinder-volume for each back-end or back-end pool.
In a multi back-end configuration, each back-end has a name (volume_backend_name). Several back-ends can have the same name. In that case, the scheduler properly decides which back-end the volume has to be created in.
The name of the back-end is declared as an extra-specification of a volume type (such as volume_backend_name=LVM_iSCSI). When a volume is created, the scheduler chooses an appropriate back-end to handle the request, according to the volume type specified by the user.
Enable multi back-end
To enable a multi back-end configuration, you must set the enabled_backends flag in the cinder.conf file. This flag defines the names (separated by a comma) of the configuration groups for the different back-ends: one name is associated to one configuration group for a back-end (such as [lvmdriver-1]).
Note
The configuration group name is not related to the volume_backend_name.
The options for a configuration group must be defined in the group (or default options are used). All the standard Block Storage configuration options (volume_group, volume_driver, and so on) might be used in a configuration group. Configuration values in the [DEFAULT] configuration group are not used.
This example shows three back-ends:
enabled_backends=lvmdriver-1,lvmdriver-2,lvmdriver-3
[lvmdriver-1]
volume_group=cinder-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI
[lvmdriver-2]
volume_group=cinder-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI
[lvmdriver-3]
volume_group=cinder-volumes-3
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI_b
In this configuration, lvmdriver-1 and lvmdriver-2 have the same volume_backend_name. If a volume creation requests the LVM_iSCSI back-end name, the scheduler uses the capacity filter scheduler to choose the most suitable driver, which is either lvmdriver-1 or lvmdriver-2. The capacity filter scheduler is enabled by default. The next section provides more information. In addition, this example presents an lvmdriver-3 back-end.
Some volume drivers require additional settings to be configured for each back-end. The following example shows the typical configuration for a Block Storage service that uses two Dell EqualLogic back-ends:
enabled_backends=backend1,backend2
san_ssh_port=22
ssh_conn_timeout=30
san_thin_provision=true
[backend1]
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name=backend1
san_ip=IP_EQLX1
san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL
[backend2]
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name=backend2
san_ip=IP_EQLX2
san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL
In this example:
- Thin provisioning for SAN volumes is enabled (san_thin_provision=true). This is recommended when setting up Dell EqualLogic back-ends.
- Each Dell EqualLogic back-end configuration ([backend1] and [backend2]) has the same required settings as a single back-end configuration, with the addition of volume_backend_name.
- The san_ssh_port option is set to its default value, 22. This option sets the port used for SSH.
- The ssh_conn_timeout option is also set to its default value, 30. This option sets the timeout (in seconds) for CLI commands over SSH.
- IP_EQLX1 and IP_EQLX2 refer to the IP addresses used to reach the Dell EqualLogic Group of backend1 and backend2 through SSH, respectively.
For more information on required and optional settings for Dell EqualLogic back-ends, refer to the Configuration Reference Guide.
Configure Block Storage scheduler multi back-end
You must enable the filter_scheduler option to use multi back-end. The filter scheduler acts in two steps:
- The filter scheduler filters the available back-ends. By default, AvailabilityZoneFilter, CapacityFilter, and CapabilitiesFilter are enabled.
- The filter scheduler weighs the previously filtered back-ends. By default, CapacityWeigher is enabled. The CapacityWeigher assigns higher scores to back-ends with the most available capacity.
The scheduler uses the filtering and weighing process to pick the best back-end to handle the request, and explicitly creates volumes on specific back-ends through the use of volume types.
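These defaults normally do not need to be changed. If you want to set them explicitly, the relevant keys live in the [DEFAULT] section of /etc/cinder/cinder.conf; the following sketch shows them with their standard default values (verify the exact option names against your release):
scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters=AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
scheduler_default_weighers=CapacityWeigher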
Volume type
Before it can be used, a volume type must be declared to Block Storage. You can do so with the following command:
$cinder --os-username admin --os-tenant-name admin type-create lvm
Then, an extra-specification has to be created to link the volume type to a back-end name. Run this command:
$cinder --os-username admin --os-tenant-name admin type-key lvm set volume_backend_name=LVM_iSCSI
This example creates an lvm volume type with volume_backend_name=LVM_iSCSI as its extra-specification.
Create another volume type:
$cinder --os-username admin --os-tenant-name admin type-create lvm_gold
$cinder --os-username admin --os-tenant-name admin type-key lvm_gold set volume_backend_name=LVM_iSCSI_b
This second volume type is named lvm_gold and has LVM_iSCSI_b as its back-end name.
Note
To list the extra-specifications, use this command:
$cinder --os-username admin --os-tenant-name admin extra-specs-list
Note
If a volume type points to a volume_backend_name that does not exist in the Block Storage configuration, the filter_scheduler returns an error that it cannot find a valid host with the suitable back-end.
Usage
When you create a volume, you must specify the volume type. The extra-specifications of the volume type are used to determine which back-end has to be used.
$cinder create --volume_type lvm --display_name test_multi_backend 1
Considering the cinder.conf described previously, the scheduler creates this volume on lvmdriver-1 or lvmdriver-2.
$cinder create --volume_type lvm_gold --display_name test_multi_backend 1
This second volume is created on lvmdriver-3.
5.3.5. Back up Block Storage service disks
You can use LVM snapshots not only to create snapshots of volumes, but also to back them up. Using an LVM snapshot reduces the size of the backup; only existing data is backed up, instead of the entire volume.
To back up a volume, you must create a snapshot of it. An LVM snapshot is an exact copy of a logical volume, which contains data in a frozen state. This prevents data corruption, because data cannot be manipulated during the snapshot creation process. Remember that the volumes created through a cinder create command exist in an LVM logical volume.
You must also make sure that the operating system is not using the volume, and that all data has been flushed on the guest filesystems. This usually means that those filesystems have to be unmounted during the snapshot creation. They can be mounted again as soon as the logical volume snapshot has been created.
Before you create the snapshot, you must have enough space to save it. As a precaution, you should have at least twice as much space as the potential snapshot size. If insufficient space is available, the snapshot might become corrupted.
For this example, assume that a 100 GB volume named volume-00000001 was created for an instance while only 4 GB are used. This example uses these commands to back up only those 4 GB:
- lvm2 command. Directly manipulates the volumes.
- kpartx command. Discovers the partition table created inside the instance.
- tar command. Creates a minimum-sized backup.
- sha1sum command. Calculates the backup checksum to check its consistency.
You can apply this process to volumes of any size.
Procedure 5.4. To back up Block Storage service disks
Create a snapshot of a used volume
- Use this command to list all volumes:
# lvdisplay
- Create the snapshot; you can do this while the volume is attached to an instance:
# lvcreate --size 10G --snapshot --name volume-00000001-snapshot /dev/cinder-volumes/volume-00000001
Use the --snapshot option to tell LVM that you want a snapshot of an already existing volume. The command includes the size of the space reserved for the snapshot volume, the name of the snapshot, and the path of an already existing volume. Generally, this path is /dev/cinder-volumes/$volume_name.
The size of the snapshot does not have to be the same as that of the original volume. The --size parameter defines the space that LVM reserves for the snapshot volume. As a precaution, the size should be the same as that of the original volume, even if the whole space is not currently used by the snapshot.
- Run the lvdisplay command again to verify the snapshot:
--- Logical volume ---
LV Name                /dev/cinder-volumes/volume-00000001
VG Name                cinder-volumes
LV UUID                gI8hta-p21U-IW2q-hRN1-nTzN-UC2G-dKbdKr
LV Write Access        read/write
LV snapshot status     source of /dev/cinder-volumes/volume-00000026-snap [active]
LV Status              available
# open                 1
LV Size                15,00 GiB
Current LE             3840
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           251:13

--- Logical volume ---
LV Name                /dev/cinder-volumes/volume-00000001-snap
VG Name                cinder-volumes
LV UUID                HlW3Ep-g5I8-KGQb-IRvi-IRYU-lIKe-wE9zYr
LV Write Access        read/write
LV snapshot status     active destination for /dev/cinder-volumes/volume-00000026
LV Status              available
# open                 0
LV Size                15,00 GiB
Current LE             3840
COW-table size         10,00 GiB
COW-table LE           2560
Allocated to snapshot  0,00%
Snapshot chunk size    4,00 KiB
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           251:14
Partition table discovery
- To exploit the snapshot with the tar command, mount your partition on the Block Storage service server.
The kpartx utility discovers and maps partition tables. You can use it to view partitions that are created inside the instance. Without using the partitions created inside instances, you cannot see their contents or create efficient backups.
# kpartx -av /dev/cinder-volumes/volume-00000001-snapshot
If the tools successfully find and map the partition table, no errors are returned.
- To check the partition table map, run this command:
$ ls /dev/mapper/cinder*
You can see the cinder--volumes-volume--00000001--snapshot1 partition.
If you created more than one partition on that volume, you see several partitions; for example: cinder--volumes-volume--00000001--snapshot2, cinder--volumes-volume--00000001--snapshot3, and so on.
- Mount your partition:
# mount /dev/mapper/cinder--volumes-volume--00000001--snapshot1 /mnt
If the partition mounts successfully, no errors are returned.
You can directly access the data inside the instance. If a message prompts you for a partition or you cannot mount it, determine whether enough space was allocated for the snapshot or whether the kpartx command failed to discover the partition table.
Allocate more space to the snapshot and try the process again.
Use the tar command to create archives
Create a backup of the volume:
$ tar --exclude="lost+found" --exclude="some/data/to/exclude" -czf volume-00000001.tar.gz -C /mnt/ /backup/destination
This command creates a tar.gz file that contains the data, and data only. This ensures that you do not waste space by backing up empty sectors.
Checksum calculation
You should always have the checksum for your backup files. When you transfer the same file over the network, you can run a checksum calculation to ensure that your file was not corrupted during its transfer. The checksum is a unique ID for a file. If the checksums are different, the file is corrupted.
Run this command to calculate a checksum for your file and save the result to a file:
$ sha1sum volume-00000001.tar.gz > volume-00000001.checksum
Note
Use the sha1sum command carefully because the time it takes to complete the calculation is directly proportional to the size of the file.
For files larger than around 4 to 6 GB, and depending on your CPU, the process might take a long time.
Clean up after the backup
Now that you have an efficient and consistent backup, use these commands to clean up the file system:
- Unmount the volume:
# umount /mnt
- Delete the partition table:
# kpartx -dv /dev/cinder-volumes/volume-00000001-snapshot
- Remove the snapshot:
# lvremove -f /dev/cinder-volumes/volume-00000001-snapshot
Repeat these steps for all your volumes.
Automate your backups
Because more and more volumes might be allocated to your Block Storage service, you might want to automate your backups. The SCR_5005_V01_NUAC-OPENSTACK-EBS-volumes-backup.sh script assists you with this task. The script performs the operations from the previous example, but also provides a mail report and runs the backup based on the backups_retention_days setting.
Launch this script from the server that runs the Block Storage service.
This example shows a mail report:
Backup Start Time - 07/10 at 01:00:01
Current retention - 7 days

The backup volume is mounted. Proceed...
Removing old backups... : /BACKUPS/EBS-VOL/volume-00000019/volume-00000019_28_09_2011.tar.gz
/BACKUPS/EBS-VOL/volume-00000019 - 0 h 1 m and 21 seconds. Size - 3,5G

The backup volume is mounted. Proceed...
Removing old backups... : /BACKUPS/EBS-VOL/volume-0000001a/volume-0000001a_28_09_2011.tar.gz
/BACKUPS/EBS-VOL/volume-0000001a - 0 h 4 m and 15 seconds. Size - 6,9G

---------------------------------------
Total backups size - 267G - Used space : 35%
Total execution time - 1 h 75 m and 35 seconds

The script also enables you to SSH to your instances and run a mysqldump command on them. To make this work, enable the connection to the Compute project keys. If you do not want to run the mysqldump command, you can add enable_mysql_dump=0 to the script to turn off this functionality.
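A common way to schedule the script is a root crontab entry on the server that runs the Block Storage service. The installation path and schedule below are examples only (here, every night at 01:00):
0 1 * * * /usr/local/bin/SCR_5005_V01_NUAC-OPENSTACK-EBS-volumes-backup.sh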
5.3.6. Migrate volumes
OpenStack has the ability to migrate volumes between back-ends. Migrating a volume transparently moves its data from the current back-end for the volume to a new one. This is an administrator function, and can be used for functions including storage evacuation (for maintenance or decommissioning), or manual optimizations (for example, performance, reliability, or cost).
These workflows are possible for a migration:
- If the storage can migrate the volume on its own, it is given the opportunity to do so. This allows the Block Storage driver to enable optimizations that the storage might be able to perform. If the back-end is not able to perform the migration, the Block Storage service uses one of two generic flows, as follows.
- If the volume is not attached, the Block Storage service creates a volume and copies the data from the original to the new volume.
Note
While most back-ends support this function, not all do. For more information, see the Volume drivers section in the Block Storage chapter of the Red Hat Enterprise Linux OpenStack Platform 5 Configuration Reference Guide available at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/.
- If the volume is attached to a VM instance, the Block Storage service creates a volume, and calls Compute to copy the data from the original to the new volume. Currently this is supported only by the Compute libvirt driver.
As an example, this scenario shows two LVM back-ends and migrates an attached volume from one to the other. This scenario uses the third migration flow.
First, list the available back-ends:
# cinder-manage host list
server1@lvmstorage-1 zone1
server2@lvmstorage-2 zone1
Next, as the admin user, you can see the current status of the volume (replace the example ID with your own):
$ cinder show 6088f80a-f116-4331-ad48-9afb0dfb196c
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
| attachments                    | [...]                                |
| availability_zone              | zone1                                |
| bootable                       | False                                |
| created_at                     | 2013-09-01T14:53:22.000000           |
| display_description            | test                                 |
| display_name                   | test                                 |
| id                             | 6088f80a-f116-4331-ad48-9afb0dfb196c |
| metadata                       | {}                                   |
| os-vol-host-attr:host          | server1@lvmstorage-1                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 6bdd8f41203e4149b5d559769307365e     |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | in-use                               |
| volume_type                    | None                                 |
+--------------------------------+--------------------------------------+
Note these attributes:
- os-vol-host-attr:host - the volume's current back-end.
- os-vol-mig-status-attr:migstat - the status of this volume's migration (None means that a migration is not currently in progress).
- os-vol-mig-status-attr:name_id - the volume ID that this volume's name on the back-end is based on. Before a volume is ever migrated, its name on the back-end storage may be based on the volume's ID (see the volume_name_template configuration parameter). For example, if volume_name_template is kept as the default value (volume-%s), your first LVM back-end has a logical volume named volume-6088f80a-f116-4331-ad48-9afb0dfb196c. During the course of a migration, if you create a volume and copy over the data, the volume gets the new name but keeps its original ID. This is exposed by the name_id attribute.
Note
If you plan to decommission a block storage node, you must stop the cinder volume service on the node after performing the migration. Run:
# service openstack-cinder-volume stop
# chkconfig openstack-cinder-volume off
Stopping the cinder volume service will prevent volumes from being allocated to the node.
Migrate this volume to the second LVM back-end:
$cinder migrate 6088f80a-f116-4331-ad48-9afb0dfb196c server2@lvmstorage-2
You can use the cinder show command to see the status of the migration. While migrating, the migstat attribute shows states such as migrating or completing. On error, migstat is set to None and the host attribute shows the original host. On success, in this example, the output looks like:
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
| attachments                    | [...]                                |
| availability_zone              | zone1                                |
| bootable                       | False                                |
| created_at                     | 2013-09-01T14:53:22.000000           |
| display_description            | test                                 |
| display_name                   | test                                 |
| id                             | 6088f80a-f116-4331-ad48-9afb0dfb196c |
| metadata                       | {}                                   |
| os-vol-host-attr:host          | server2@lvmstorage-2                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | 133d1f56-9ffc-4f57-8798-d5217d851862 |
| os-vol-tenant-attr:tenant_id   | 6bdd8f41203e4149b5d559769307365e     |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | in-use                               |
| volume_type                    | None                                 |
+--------------------------------+--------------------------------------+
Note that migstat is None, host is the new host, and name_id holds the ID of the volume created by the migration. If you look at the second LVM back end, you find the logical volume volume-133d1f56-9ffc-4f57-8798-d5217d851862.
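To confirm this on the host that serves the second back-end, you can, for example, list the logical volumes and filter on the new name (the ID shown is taken from the example above):
# lvs | grep 133d1f56-9ffc-4f57-8798-d5217d851862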
Note
The migration is not visible to non-admin users (for example, through the volume status). However, some operations are not allowed while a migration is taking place, such as attaching/detaching a volume and deleting a volume. If a user performs such an action during a migration, an error is returned.
Note
Migrating volumes that have snapshots is currently not allowed.
5.3.7. Gracefully remove a GlusterFS volume from usage
Configuring the cinder volume service to use GlusterFS involves creating a shares file (for example, /etc/cinder/glusterfs). This shares file lists each GlusterFS volume (with its corresponding storage server) that the cinder volume service can use for back end storage.
To remove a GlusterFS volume from usage as a back end, delete the volume's corresponding entry from the shares file. After doing so, restart the Block Storage services.
To restart the Block Storage service, run:
#for i in api scheduler volume; do service openstack-cinder-$i restart; done
Restarting the Block Storage services will prevent the cinder volume service from exporting the deleted GlusterFS volume. This will prevent any instances from mounting the volume from that point onwards.
However, the removed GlusterFS volume might still be mounted on an instance at this point. Typically, this is the case when the volume was already mounted while its entry was deleted from the shares file. Whenever this occurs, you will have to unmount the volume as normal after the Block Storage services are restarted.
5.3.8. Back up and restore volumes
The cinder command-line interface provides the tools for creating a volume backup. You can restore a volume from a backup as long as the backup's associated database information (or backup metadata) is intact in the Block Storage database.
Run this command to create a backup of a volume:
$cinder backup-create VOLUME
Where VOLUME is the name or ID of the volume.
The previous command will also return a backup ID. Use this backup ID when restoring the volume, as in:
$cinder backup-restore backup_ID
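If you did not record the backup ID when the backup was created, you can list all backups and their IDs first with the standard cinder subcommand:
$ cinder backup-list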
As mentioned earlier, volume backups are dependent on the Block Storage database. Because of this, we recommend that you also back up your Block Storage database regularly in order to ensure data recovery.
Note
Alternatively, you can export and save the metadata of selected volume backups. Doing so will preclude the need to back up the entire Block Storage database. This is particularly useful if you only need a small subset of volumes to survive a catastrophic database failure.
For more information on how to export and import volume backup metadata, see Section 5.3.9, “Export and import backup metadata”.
5.3.9. Export and import backup metadata
A volume backup can only be restored on the same Block Storage service. This is because restoring a volume from a backup requires metadata available on the database used by the Block Storage service.
Note
For information on how to back up and restore a volume, see Section 5.3.8, “Back up and restore volumes”.
You can, however, export the metadata of a volume backup. To do so, run this command as an OpenStack admin user (presumably, after creating a volume backup):
$cinder backup-export backup_ID
Where backup_ID is the volume backup's ID. This command should return the backup's corresponding database information as encoded string metadata.
Exporting and storing this encoded string metadata allows you to completely restore the backup, even in the event of a catastrophic database failure. This will preclude the need to back up the entire Block Storage database, particularly if you only need to keep complete backups of a small subset of volumes.
In addition, having a volume backup and its backup metadata also provides volume portability. Specifically, backing up a volume and exporting its metadata will allow you to restore the volume on a completely different Block Storage database, or even on a different cloud service. To do so, first import the backup metadata to the Block Storage database and then restore the backup.
To import backup metadata, run the following command as an OpenStack admin:
$cinder backup-import metadata
Where metadata is the backup metadata exported earlier.
Once you have imported the backup metadata into a Block Storage database, restore the volume (Section 5.3.8, “Back up and restore volumes”).
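As a minimal sketch of the full round trip, you might capture the exported metadata in a file so that it survives a database loss (the file name here is only an example), keep that file somewhere safe, and later supply its contents to cinder backup-import before running cinder backup-restore:
$ cinder backup-export backup_ID | tee backup_ID-metadata.txt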