13.4. LVM-based Storage Pools
- Thin provisioning is currently not possible with LVM-based storage pools.
- To prevent the host from unnecessarily scanning the contents of LVMs used by the guest, the global_filter option must be configured in /etc/lvm/lvm.conf. For more information, refer to the Red Hat Enterprise Linux Logical Volume Manager Administration Guide.
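As a minimal sketch, assuming the guest's logical volumes live on /dev/sdc (the device used in the examples below), the filter could reject that device so the host's LVM tools never scan it:

```
# /etc/lvm/lvm.conf -- sketch only; the device path is an assumption
# for this example. Adjust the pattern to match the devices that hold
# guest data on your system.
devices {
    global_filter = [ "r|^/dev/sdc|" ]
}
```

The exact regular expression depends on your storage layout; see the Logical Volume Manager Administration Guide for the full filter syntax.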
13.4.1. Creating an LVM-based Storage Pool with virt-manager
Open the storage pool settings
- In the virt-manager graphical interface, select the host from the main window. Open the Edit menu and select Connection Details.
- Click the Storage tab.
Figure 13.9. Storage tab
Create the new storage pool
Start the Wizard
Press the + button (the add pool button). The Add a New Storage Pool wizard appears. Choose a Name for the storage pool. We use guest_images_lvm for this example. Then change the Type to logical: LVM Volume Group.
Figure 13.10. Add LVM storage pool
Press Forward to continue.
Add a new pool (part 2)
Fill in the Target Path and Source Path fields, and check the Build Pool check box.
- Use the Target Path field to either select an existing LVM volume group or enter the name for a new volume group. The default format is storage_pool_name/lvm_Volume_Group_name. This example uses a new volume group named /dev/guest_images_lvm.
- The Source Path field is optional if an existing LVM volume group is used in the Target Path field. For new LVM volume groups, input the location of a storage device in the Source Path field. This example uses a blank partition /dev/sdc.
- The Build Pool check box instructs virt-manager to create a new LVM volume group. If you are using an existing volume group, do not select this check box. This example uses a blank partition to create a new volume group, so the Build Pool check box must be selected.
Figure 13.11. Add target and source
Verify the details and press the Finish button to format the LVM volume group and create the storage pool.
Confirm the device to be formatted
A warning message appears.
Figure 13.12. Warning message
Press the Yes button to erase all data on the storage device and create the storage pool.
Verify the new storage pool
The new storage pool appears in the list on the left after a few seconds. Verify that the details are what you expect, 465.76 GB Free in our example. Also verify that the State field reports the new storage pool as Active. It is generally a good idea to have the Autostart check box enabled, to ensure the storage pool starts automatically with libvirtd.
Figure 13.13. Confirm LVM storage pool details
Close the Connection Details dialog, as the task is now complete.
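The wizard above produces an ordinary libvirt storage pool definition. As a sketch, using the names from this example (which are assumptions about your setup), the same pool could instead be defined from an XML file with virsh:

```shell
# Sketch: an XML definition equivalent to the pool created by the wizard.
# Pool name, device path, and volume group name are taken from the example
# above; substitute your own values.
cat > guest_images_lvm.xml <<'EOF'
<pool type='logical'>
  <name>guest_images_lvm</name>
  <source>
    <device path='/dev/sdc'/>
    <name>guest_images_lvm</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/guest_images_lvm</path>
  </target>
</pool>
EOF
# On a host running libvirt, define the pool from the file:
# virsh pool-define guest_images_lvm.xml
```

Defining the pool from XML gives the same result as the graphical procedure and makes the configuration easy to version and reuse.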
13.4.2. Deleting a Storage Pool Using virt-manager
- To avoid any issues with other guest virtual machines using the same pool, it is best to stop the storage pool and release any resources in use by it. To do this, select the storage pool you want to stop and click the stop icon.
Figure 13.14. Stop Icon
- Delete the storage pool by clicking the delete icon. This icon is only enabled if you stop the storage pool first.
13.4.3. Creating an LVM-based Storage Pool with virsh
This procedure creates an LVM-based storage pool with the virsh command. It uses the example of a pool named guest_images_lvm from a single drive (/dev/sdc). This is only an example, and your settings should be substituted as appropriate.
Procedure 13.3. Creating an LVM-based storage pool with virsh
- Define the pool name guest_images_lvm.
# virsh pool-define-as guest_images_lvm logical - - /dev/sdc libvirt_lvm \
/dev/libvirt_lvm
Pool guest_images_lvm defined
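The positional "-" placeholders above stand for unused source-host and source-path arguments. As a sketch using the same names and paths as the example, the command can also be written with virsh's named options, which reads more clearly; here it is only printed, not executed:

```shell
# Sketch: same pool definition with named options instead of positional
# dashes (names/paths from the example above). Printed as a dry run;
# run the command itself on a host with libvirt installed.
cmd="virsh pool-define-as guest_images_lvm logical --source-dev /dev/sdc --source-name libvirt_lvm --target /dev/libvirt_lvm"
echo "$cmd"
```

Here --source-dev is the physical device, --source-name the LVM volume group, and --target the path where the volumes appear.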
- Build the pool according to the specified name. If you are using an already existing volume group, skip this step.
# virsh pool-build guest_images_lvm
Pool guest_images_lvm built
- Initialize the new pool.
# virsh pool-start guest_images_lvm
Pool guest_images_lvm started
- Show the volume group information with the vgs command.
# vgs
VG          #PV #LV #SN Attr   VSize   VFree
libvirt_lvm   1   0   0 wz--n- 465.76g 465.76g
- Set the pool to start automatically.
# virsh pool-autostart guest_images_lvm
Pool guest_images_lvm marked as autostarted
- List the available pools with the virsh pool-list --all command.
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_lvm     active     yes
- The following commands demonstrate the creation of three volumes (volume1, volume2 and volume3) within this pool.
# virsh vol-create-as guest_images_lvm volume1 8G
Vol volume1 created

# virsh vol-create-as guest_images_lvm volume2 8G
Vol volume2 created

# virsh vol-create-as guest_images_lvm volume3 8G
Vol volume3 created
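Repeated vol-create-as calls like the three above lend themselves to a loop. A minimal sketch, reusing the pool and volume names from the example and printing the commands as a dry run rather than executing them:

```shell
# Sketch: generate the three volume-creation commands in a loop.
# Pool name, volume names, and size are taken from the example above.
pool=guest_images_lvm
cmds=$(for vol in volume1 volume2 volume3; do
  echo "virsh vol-create-as $pool $vol 8G"
done)
# Dry run: print the commands. Pipe to 'sh' (as root, with libvirt
# running) to actually create the volumes.
printf '%s\n' "$cmds"
```

This keeps the volume list and size in one place if you later need more volumes or a different size.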
- List the available volumes in this pool with the virsh vol-list command.
# virsh vol-list guest_images_lvm
Name                 Path
-----------------------------------------
volume1              /dev/libvirt_lvm/volume1
volume2              /dev/libvirt_lvm/volume2
volume3              /dev/libvirt_lvm/volume3
- The following two commands (lvscan and lvs) display further information about the newly created volumes.
# lvscan
ACTIVE            '/dev/libvirt_lvm/volume1' [8.00 GiB] inherit
ACTIVE            '/dev/libvirt_lvm/volume2' [8.00 GiB] inherit
ACTIVE            '/dev/libvirt_lvm/volume3' [8.00 GiB] inherit

# lvs
LV      VG          Attr   LSize Pool Origin Data% Move Log Copy% Convert
volume1 libvirt_lvm -wi-a- 8.00g
volume2 libvirt_lvm -wi-a- 8.00g
volume3 libvirt_lvm -wi-a- 8.00g
13.4.4. Deleting a Storage Pool Using virsh
- To avoid any issues with other guests using the same pool, it is best to stop the storage pool and release any resources in use by it.
# virsh pool-destroy guest_images_disk
- Optionally, if you want to remove the directory where the storage pool resides, use the following command:
# virsh pool-delete guest_images_disk
- Remove the storage pool's definition.
# virsh pool-undefine guest_images_disk
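The three steps above always run in the same order, so they can be combined into one small script. A sketch using the pool name from the example, printing the commands as a dry run instead of executing them:

```shell
# Sketch: full teardown sequence for a storage pool, in the order the
# procedure above requires. Pool name is taken from the example; drop
# the 'echo' to run the commands against a live libvirt host.
pool=guest_images_disk
for step in pool-destroy pool-delete pool-undefine; do
  echo "virsh $step $pool"
done
```

Note that pool-delete permanently removes the underlying storage; keep it in the loop only when that is intended.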