Chapter 13. Storage Pools

This chapter includes instructions on creating storage pools of assorted types. A storage pool is a quantity of storage set aside by an administrator, often a dedicated storage administrator, for use by guest virtual machines. Storage pools are divided into storage volumes either by the storage administrator or the system administrator, and the volumes are then assigned to guest virtual machines as block devices.
For example, the storage administrator responsible for an NFS server creates a shared disk to store all of the guest virtual machines' data. The system administrator would define a storage pool on the virtualization host using the details of the shared disk. In this example, the administrator may want nfs.example.com:/path/to/share to be mounted on /vm_data. When the storage pool is started, libvirt mounts the share on the specified directory, just as if the system administrator logged in and executed mount nfs.example.com:/path/to/share /vm_data. If the storage pool is configured to autostart, libvirt ensures that the NFS share is mounted on the specified directory when libvirt is started.
Once the storage pool is started, the files in the NFS share are reported as storage volumes, and the storage volumes' paths may be queried using the libvirt APIs. The storage volumes' paths can then be copied into the section of a guest virtual machine's XML definition that describes the source storage for the guest virtual machine's block devices. In the case of NFS, an application using the libvirt APIs can create and delete storage volumes in the storage pool (files in the NFS share) up to the limit of the size of the pool (the storage capacity of the share). Not all storage pool types support creating and deleting volumes. Stopping the storage pool (pool-destroy) undoes the start operation, in this case unmounting the NFS share. The data on the share is not modified by the destroy operation, despite what the name of the command suggests. For more details, see man virsh.
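For example, once an NFS-backed storage pool is active, its volumes and their paths can be queried with virsh; the pool and volume names shown here are illustrative only:
# virsh vol-list vm_data_pool
Name                 Path
-----------------------------------------
guest1.img           /vm_data/guest1.img
# virsh vol-path guest1.img --pool vm_data_pool
/vm_data/guest1.img
The returned path can then be used as the source of a block device in the guest virtual machine's XML definition, for example:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/vm_data/guest1.img'/>
  <target dev='vda' bus='virtio'/>
</disk>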
A second example is an iSCSI storage pool. A storage administrator provisions an iSCSI target to present a set of LUNs to the host running the virtual machines. When libvirt is configured to manage that iSCSI target as a storage pool, libvirt will ensure that the host logs into the iSCSI target and libvirt can then report the available LUNs as storage volumes. The storage volumes' paths can be queried and used in virtual machines' XML definitions as in the NFS example. In this case, the LUNs are defined on the iSCSI server, and libvirt cannot create and delete volumes.
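A pool definition for such an iSCSI target might look like the following sketch; the target host name and IQN are placeholders rather than values from a real deployment:
<pool type='iscsi'>
  <name>guest_images_iscsi</name>
  <source>
    <host name='iscsi.example.com'/>
    <device path='iqn.2010-05.com.example:server.target1'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>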
Storage pools and volumes are not required for the proper operation of guest virtual machines. Storage pools and volumes provide a way for libvirt to ensure that a particular piece of storage will be available for a guest virtual machine. On systems that do not use storage pools, system administrators must ensure the availability of the guest virtual machine's storage. For example, adding the NFS share to the host physical machine's fstab is required so that the share is mounted at boot time.
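For example, an administrator managing the NFS share without a storage pool might add a line such as the following to the host physical machine's /etc/fstab (using the paths from the example above):
nfs.example.com:/path/to/share  /vm_data  nfs  defaults  0 0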
One of the advantages of using libvirt to manage storage pools and volumes is libvirt's remote protocol, which makes it possible to manage all aspects of a guest virtual machine's life cycle, as well as the configuration of the resources required by the guest virtual machine, on a remote host entirely within the libvirt API. As a result, a management application using libvirt can enable a user to perform all the required tasks for configuring the host physical machine for a guest virtual machine, such as allocating resources, running the guest virtual machine, shutting it down, and de-allocating the resources, without requiring shell access or any other control channel.
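For instance, the storage pools on a remote host can be listed over the libvirt remote protocol without opening a shell on that host; the host name below is a placeholder:
# virsh -c qemu+ssh://root@host.example.com/system pool-list --all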
Although the storage pool is a virtual container, it is limited by two factors: the maximum size allowed by qemu-kvm and the size of the disk on the host physical machine. Storage pools may not exceed the size of the disk on the host physical machine. The maximum sizes are as follows:
  • virtio-blk = 2^63 bytes or 8 Exabytes (using raw files or disk)
  • Ext4 = ~16 TB (using 4 KB block size)
  • XFS = ~8 Exabytes
  • qcow2 and host file systems keep their own metadata, so scalability should be evaluated and tuned when working with very large image sizes. Using raw disks means fewer layers that could affect scalability or maximum size.
libvirt uses a directory-based storage pool, the /var/lib/libvirt/images/ directory, as the default storage pool. The default storage pool can be changed to another storage pool.
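The configuration of the default storage pool can be inspected with virsh; the output below is abbreviated and illustrative:
# virsh pool-dumpxml default
<pool type='dir'>
  <name>default</name>
  ...
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>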
  • Local storage pools - Local storage pools are directly attached to the host physical machine server. Local storage pools include: local directories, directly attached disks, physical partitions, and LVM volume groups. These storage volumes store guest virtual machine images or are attached to guest virtual machines as additional storage. As local storage pools are directly attached to the host physical machine server, they are useful for development, testing and small deployments that do not require migration or large numbers of guest virtual machines. Local storage pools are not suitable for many production environments as local storage pools do not support live migration.
  • Networked (shared) storage pools - Networked storage pools include storage devices shared over a network using standard protocols. Networked storage is required when migrating virtual machines between host physical machines with virt-manager, but is optional when migrating with virsh. Networked storage pools are managed by libvirt. Supported protocols for networked storage pools include:
    • Fibre Channel-based LUNs
    • iSCSI
    • NFS
    • GFS2
    • SCSI RDMA Protocol (SRP), the block export protocol used with InfiniBand and 10GbE iWARP adapters.

Note

Multi-path storage pools should not be created or used as they are not fully supported.

Example 13.1. NFS storage pool

Suppose a storage administrator responsible for an NFS server creates a share to store guest virtual machines' data. The system administrator defines a pool on the host physical machine with the details of the share (nfs.example.com:/path/to/share should be mounted on /vm_data). When the pool is started, libvirt mounts the share on the specified directory, just as if the system administrator logged in and executed mount nfs.example.com:/path/to/share /vm_data. If the pool is configured to autostart, libvirt ensures that the NFS share is mounted on the directory specified when libvirt is started.
Once the pool is started, the files in the NFS share are reported as volumes, and the storage volumes' paths can be queried using the libvirt APIs. The volumes' paths can then be copied into the section of a guest virtual machine's XML definition file that describes the source storage for the guest virtual machine's block devices. With NFS, applications using the libvirt APIs can create and delete volumes in the pool (files within the NFS share) up to the limit of the size of the pool (the maximum storage capacity of the share). Not all pool types support creating and deleting volumes. Stopping the pool (pool-destroy) undoes the start operation, in this case unmounting the NFS share. The data on the share is not modified by the destroy operation, despite the name. See man virsh for more details.
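A pool definition matching this example might look like the following sketch; the pool name is illustrative:
<pool type='netfs'>
  <name>vm_data_nfs</name>
  <source>
    <host name='nfs.example.com'/>
    <dir path='/path/to/share'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/vm_data</path>
  </target>
</pool>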

Note

Storage pools and volumes are not required for the proper operation of guest virtual machines. Pools and volumes provide a way for libvirt to ensure that a particular piece of storage will be available for a guest virtual machine, but some administrators will prefer to manage their own storage and guest virtual machines will operate properly without any pools or volumes defined. On systems that do not use pools, system administrators must ensure the availability of the guest virtual machines' storage using whatever tools they prefer. For example, adding the NFS share to the host physical machine's fstab is required so that the share is mounted at boot time.

Warning

When creating storage pools on a guest, make sure to follow the related security considerations found in the Red Hat Enterprise Linux 7 Virtualization Security Guide.

13.1. Disk-based Storage Pools

This section covers creating disk-based storage pools for guest virtual machines.

Warning

Guests should not be given write access to whole disks or block devices (for example, /dev/sdb). Use partitions (for example, /dev/sdb1) or LVM volumes.
If you pass an entire block device to the guest, the guest is likely to partition it or create its own LVM volume groups on it. The host physical machine may then detect these partitions or LVM groups, which can cause errors.

13.1.1. Creating a Disk-based Storage Pool Using virsh

This procedure creates a new storage pool using a disk device with the virsh command.

Warning

Dedicating a disk to a storage pool will reformat and erase all data presently stored on the disk device. It is strongly recommended to back up the data on the storage device before commencing with the following procedure:
  1. Create a GPT disk label on the disk

    The disk must be relabeled with a GUID Partition Table (GPT) disk label. GPT disk labels allow for creating a large number of partitions, up to 128 partitions, on each device. GPT partition tables can store partition data for far more partitions than the MS-DOS partition table.
    # parted /dev/sdb
    GNU Parted 2.1
    Using /dev/sdb
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) mklabel
    New disk label type? gpt
    (parted) quit
    Information: You may need to update /etc/fstab.
    #
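    Alternatively, on systems where parted supports scripted mode, the same label can be created non-interactively; this is an optional convenience, not a required step:
    # parted --script /dev/sdb mklabel gpt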
    
  2. Create the storage pool configuration file

    Create a temporary XML text file containing the storage pool information required for the new device.
    The file must be in the format shown below, and contain the following fields:
    <name>guest_images_disk</name>
    The name parameter determines the name of the storage pool. This example uses the name guest_images_disk.
    <device path='/dev/sdb'/>
    The device parameter with the path attribute specifies the device path of the storage device. This example uses the device /dev/sdb.
    <target><path>/dev</path></target>
    The file system target parameter with the path sub-parameter determines the location on the host physical machine file system where volumes created with this storage pool are attached.
    Using /dev/, as in the example below, means volumes created from this storage pool can be accessed as /dev/sdb1, /dev/sdb2, /dev/sdb3, and so on.
    <format type='gpt'/>
    The format parameter specifies the partition table type. This example uses gpt, to match the GPT disk label created in the previous step.
    Create the XML file for the storage pool device with a text editor.

    Example 13.2. Disk-based storage device storage pool

    <pool type='disk'>
      <name>guest_images_disk</name>
      <source>
        <device path='/dev/sdb'/>
        <format type='gpt'/>
      </source>
      <target>
        <path>/dev</path>
      </target>
    </pool>
    
  3. Attach the device

    Add the storage pool definition using the virsh pool-define command with the XML configuration file created in the previous step.
    # virsh pool-define ~/guest_images_disk.xml
    Pool guest_images_disk defined from /root/guest_images_disk.xml
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images_disk    inactive   no
    
  4. Start the storage pool

    Start the newly defined storage pool with the virsh pool-start command, and verify that it is running with the virsh pool-list --all command.
    # virsh pool-start guest_images_disk
    Pool guest_images_disk started
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images_disk    active     no
    
  5. Turn on autostart

    Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
    # virsh pool-autostart guest_images_disk
    Pool guest_images_disk marked as autostarted
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images_disk    active     yes
    
  6. Verify the storage pool configuration

    Verify that the storage pool was created correctly, that the sizes are reported correctly, and that the state is reported as running. Note that the volume list is empty at this point because no volumes (partitions) have been created in the pool yet.
    # virsh pool-info guest_images_disk
    Name:           guest_images_disk
    UUID:           551a67c8-5f2a-012c-3844-df29b167431c
    State:          running
    Capacity:       465.76 GB
    Allocation:     0.00
    Available:      465.76 GB
    # ls -la /dev/sdb
    brw-rw----. 1 root disk 8, 16 May 30 14:08 /dev/sdb
    # virsh vol-list guest_images_disk
    Name                 Path
    -----------------------------------------
    
  7. Optional: Remove the temporary configuration file

    Remove the temporary storage pool XML configuration file if it is not needed anymore.
    # rm ~/guest_images_disk.xml
A disk-based storage pool is now available.
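Once the pool is available, volumes (partitions on the disk) can be created in it and attached to a guest virtual machine. The volume size, guest name, and target device below are illustrative; the volume name follows the partition naming of the underlying device:
# virsh vol-create-as guest_images_disk sdb1 8GiB
Vol sdb1 created
# virsh vol-list guest_images_disk
Name                 Path
-----------------------------------------
sdb1                 /dev/sdb1
# virsh attach-disk Guest1 /dev/sdb1 vdb --persistent
Disk attached successfully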

13.1.2. Deleting a Storage Pool Using virsh

The following demonstrates how to delete a storage pool using virsh:
  1. To avoid any issues with other guest virtual machines using the same pool, it is best to stop the storage pool and release any resources in use by it.
    # virsh pool-destroy guest_images_disk
  2. Remove the storage pool's definition:
    # virsh pool-undefine guest_images_disk
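    To confirm that the pool has been removed, list the pools again; the output shown is illustrative:
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes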