Chapter 13. Storage Pools
If the share is not managed as a libvirt storage pool, an entry in the host's /etc/fstab is required so that the share is mounted at boot time.
- virtio-blk = 2^63 bytes or 8 exabytes (using raw files or disk)
- ext4 = ~16 TB (using 4 KB block size)
- XFS = ~8 exabytes
- qcow2 and host file systems keep their own metadata, so scalability should be evaluated and tuned when using very large image sizes. Using raw disks means fewer layers that could affect scalability or maximum size.
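As a quick illustration of why raw files have few layers to limit scalability: a sparse raw image far larger than the space actually consumed can be created with standard tools. This is a sketch, not from the original; the path and size are arbitrary examples.

```shell
# Create a 10 GiB sparse raw image; the apparent size is 10 GiB,
# but blocks are only allocated as data is written to them
# (assumes a file system that supports sparse files, such as ext4 or XFS).
truncate -s 10G /tmp/guest.raw

ls -lh /tmp/guest.raw   # apparent size: 10G
du -h /tmp/guest.raw    # actual allocation: close to zero
```

The file system's own limits (such as the ext4 and XFS maximums listed above) still bound how large such an image can grow.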
libvirt uses a directory-based storage pool, the /var/lib/libvirt/images/ directory, as the default storage pool. The default storage pool can be changed to another storage pool.
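On a typical installation, the default pool is a simple directory-based pool. Its XML looks roughly like the following sketch (the pool type and element names are standard libvirt storage XML; shown here for illustration only):

```xml
<pool type='dir'>
  <name>default</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>
```

The active definition on a given host can be inspected with `virsh pool-dumpxml default`.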
- Local storage pools - Local storage pools are directly attached to the host physical machine server. Local storage pools include: local directories, directly attached disks, physical partitions, and LVM volume groups. These storage volumes store guest virtual machine images or are attached to guest virtual machines as additional storage. As local storage pools are directly attached to the host physical machine server, they are useful for development, testing and small deployments that do not require migration or large numbers of guest virtual machines. Local storage pools are not suitable for many production environments as local storage pools do not support live migration.
- Networked (shared) storage pools - Networked storage pools include storage devices shared over a network using standard protocols. Networked storage is required when migrating virtual machines between host physical machines with virt-manager, but is optional when migrating with virsh. Networked storage pools are managed by libvirt. Supported protocols for networked storage pools include:
- Fibre Channel-based LUNs
- SCSI RDMA protocol (SRP), the block export protocol used with InfiniBand and 10GbE iWARP adapters.
Example 13.1. NFS storage pool
In this example, the NFS share nfs.example.com:/path/to/share should be mounted on /vm_data. When the pool is started, libvirt mounts the share on the specified directory, just as if the system administrator had logged in and executed mount nfs.example.com:/path/to/share /vm_data. If the pool is configured to autostart, libvirt ensures that the NFS share is mounted on the directory specified when libvirt is started.
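The pool in this example could be defined with storage pool XML along the following lines. This is a sketch: the pool name vm_data is an assumption, not given above; netfs is the libvirt pool type for NFS-backed pools.

```xml
<pool type='netfs'>
  <name>vm_data</name>
  <source>
    <host name='nfs.example.com'/>
    <dir path='/path/to/share'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/vm_data</path>
  </target>
</pool>
```

Saved to a file, this could be loaded with `virsh pool-define` and started with `virsh pool-start`, as shown for the disk-based pool later in this chapter.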
13.1. Disk-based Storage Pools
Guests should not be given write access to whole disks or block devices (for example, /dev/sdb). Use partitions (for example, /dev/sdb1) or LVM volumes.
13.1.1. Creating a Disk-based Storage Pool Using virsh
Create a GPT disk label on the disk
The disk must be relabeled with a GUID Partition Table (GPT) disk label. GPT disk labels allow for creating up to 128 partitions on each device, far more than the MS-DOS partition table supports.
```
# parted /dev/sdb
GNU Parted 2.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel
New disk label type? gpt
(parted) quit
Information: You may need to update /etc/fstab.
#
```
Create the storage pool configuration file
Create a temporary XML text file containing the storage pool information required for the new device. The file must be in the format shown below and contain the following fields:
Create the XML file for the storage pool device with a text editor.
- <name>guest_images_disk</name>
The name parameter determines the name of the storage pool. This example uses the name guest_images_disk.
- <device path='/dev/sdb'/>
The device parameter with the path attribute specifies the device path of the storage device. This example uses the device /dev/sdb.
- <target> <path>/dev</path></target>
- The file system target parameter with the path sub-parameter determines the location on the host physical machine file system at which to attach volumes created with this storage pool. For a disk pool, the volumes are the device's partitions (for example, sdb1, sdb2, sdb3). Using /dev/, as in the example below, means volumes created from this storage pool can be accessed as /dev/sdb1, /dev/sdb2, /dev/sdb3.
- <format type='gpt'/>
The format parameter specifies the partition table type. This example uses gpt, to match the GPT disk label type created in the previous step.
Example 13.2. Disk-based storage device storage pool
```
<pool type='disk'>
  <name>guest_images_disk</name>
  <source>
    <device path='/dev/sdb'/>
    <format type='gpt'/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool>
```
Define the storage pool
Add the storage pool definition using the virsh pool-define command with the XML configuration file created in the previous step.

```
# virsh pool-define ~/guest_images_disk.xml
Pool guest_images_disk defined from /root/guest_images_disk.xml
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_disk    inactive   no
```

Start the storage pool
Start the storage pool with the virsh pool-start command. Verify the pool is started with the virsh pool-list --all command.

```
# virsh pool-start guest_images_disk
Pool guest_images_disk started
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_disk    active     no
```
Turn on autostart
Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
```
# virsh pool-autostart guest_images_disk
Pool guest_images_disk marked as autostarted
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_disk    active     yes
```
Verify the storage pool configuration
Verify the storage pool was created correctly, the sizes are reported correctly, and the state reports as running.

```
# virsh pool-info guest_images_disk
Name:           guest_images_disk
UUID:           551a67c8-5f2a-012c-3844-df29b167431c
State:          running
Capacity:       465.76 GB
Allocation:     0.00
Available:      465.76 GB
# ls -la /dev/sdb
brw-rw----. 1 root disk 8, 16 May 30 14:08 /dev/sdb
# virsh vol-list guest_images_disk
Name                 Path
-----------------------------------------
```
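Once the pool is running, volumes (partitions, for a disk-based pool) can be created in it. A sketch of a volume definition follows; the volume name and size are illustrative assumptions, not from the original:

```xml
<volume>
  <name>sdb1</name>
  <capacity unit='GiB'>8</capacity>
</volume>
```

Saved to a file such as vol.xml, this could be created with `virsh vol-create guest_images_disk vol.xml`; the new partition would then appear in the `virsh vol-list guest_images_disk` output.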
Optional: Remove the temporary configuration file
Remove the temporary storage pool XML configuration file if it is no longer needed.
13.1.2. Deleting a Storage Pool Using virsh
- To avoid any issues with other guest virtual machines using the same pool, it is best to stop the storage pool and release any resources in use by it.
```
# virsh pool-destroy guest_images_disk
```
- Remove the storage pool's definition
```
# virsh pool-undefine guest_images_disk
```