Chapter 11. Managing storage for virtual machines
You can manage virtual machine storage using the CLI or the web console.
This documentation provides information on how to manage virtual machine storage using the virsh
command.
11.1. Understanding virtual machine storage
The following sections provide information about storage for virtual machines (VMs), including information about storage pools, storage volumes, and how they are used to provide storage for VMs.
11.1.1. Virtual machine storage
The following provides information about how storage pools and storage volumes are used to create storage for virtual machines (VMs).
A storage pool is a quantity of storage managed by the host and set aside for use by VMs. Storage volumes can be created from space in the storage pools. Each storage volume can be assigned to a VM as a block device, such as a disk, on a guest bus.
Storage pools and volumes are managed using libvirt. With the libvirt remote protocol, you can manage all aspects of VM storage. These operations can be performed on a remote host. As a result, a management application that uses libvirt, such as the RHEL web console, can be used to perform all the required tasks for configuring the storage for a VM.
The libvirt API can be used to query the list of volumes in the storage pool or to get information regarding the capacity, allocation, and available storage in the storage pool. A storage volume in the storage pool may be queried to get information such as allocation and capacity, which may differ for sparse volumes.
For storage pools that support it, the libvirt API can be used to create, clone, resize, and delete storage volumes. The APIs can also be used to upload data to storage volumes, download data from storage volumes, or wipe data from storage volumes.
Once a storage pool is started, a storage volume can be assigned to a VM using the storage pool name and storage volume name instead of the host path to the volume in the XML configuration files of the VM.
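For example, a VM disk can reference a volume by pool and volume name rather than by host path. The following is a minimal sketch; it reuses the guest_images pool and firstimage volume shown later in this chapter, while the disk XML file name, the VM name example-vm, and the vdb target are placeholders:

# cat > vol-disk.xml << 'EOF'
<disk type='volume' device='disk'>
  <source pool='guest_images' volume='firstimage'/>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
# virsh attach-device --config example-vm vol-disk.xml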
11.1.2. Storage pools
A storage pool is a file, directory, or storage device, managed by libvirt
to provide storage to virtual machines (VMs). Storage pools are divided into storage volumes that store VM images or are attached to VMs as additional storage. Multiple VMs can share the same storage pool, allowing for better allocation of storage resources.
Storage pools can be persistent or transient:
- A persistent storage pool survives a system restart of the host machine.
- A transient storage pool only exists until the host reboots.
The virsh pool-define
command is used to create a persistent storage pool, and the virsh pool-create
command is used to create a transient storage pool.
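For example, assuming a hypothetical directory-type pool named example_pool, a persistent pool is defined first and started later with virsh pool-start:

# virsh pool-define-as example_pool dir --target /var/lib/libvirt/example_pool

whereas a transient pool is created and started in a single step, and disappears after the host reboots:

# virsh pool-create-as example_pool dir --target /var/lib/libvirt/example_pool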
Storage pool storage types
Storage pools can be either local or network-based (shared):
Local storage pools
Local storage pools are attached directly to the host server. They include local directories, directly attached disks, physical partitions, and Logical Volume Management (LVM) volume groups on local devices.
Local storage pools are useful for development, testing, and small deployments that do not require migration or large numbers of VMs.
Networked (shared) storage pools
Networked storage pools include storage devices shared over a network using standard protocols.
Storage pool usage example
To illustrate the available options for managing storage pools, the following describes a sample NFS server that uses mount -t nfs nfs.example.com:/path/to/share /path/to/data.
A storage administrator could define an NFS storage pool on the virtualization host to describe the exported server path and the client target path. This allows libvirt to perform the mount either automatically when libvirt is started or as needed while libvirt is running. Files in the NFS server's exported directory are listed as storage volumes within the NFS storage pool.
When the storage volume is added to the VM, the administrator does not need to add the target path to the volume. They just need to add the storage pool and storage volume by name. Therefore, if the target client path changes, it does not affect the VM.
When the storage pool is started, libvirt mounts the share on the specified directory, just as if the system administrator logged in and executed mount nfs.example.com:/path/to/share /path/to/data. If the storage pool is configured to autostart, libvirt ensures that the NFS share is mounted on the specified directory when libvirt is started.
Once the storage pool is started, the files in the NFS share are reported as storage volumes, and the storage volumes' paths may be queried using the libvirt API. The storage volumes' paths can then be copied into the section of a VM’s XML definition that describes the source storage for the VM’s block devices. In the case of NFS, an application that uses the libvirt API can create and delete storage volumes in the storage pool (files in the NFS share) up to the limit of the size of the pool (the storage capacity of the share).
Stopping (destroying) a storage pool removes the abstraction of the data, but keeps the data intact.
Not all storage pool types support creating and deleting volumes. Stopping the storage pool (pool-destroy) undoes the start operation, in this case, unmounting the NFS share. The data on the share is not modified by the destroy operation, despite what the name of the command suggests. For more details, see man virsh.
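As an illustration of the example above, such an NFS storage pool could be defined and started with the following commands; the pool name nfspool is only a placeholder, and the server path and target directory match the sample mount:

# virsh pool-define-as nfspool netfs --source-host nfs.example.com --source-path /path/to/share --target /path/to/data
# virsh pool-start nfspool

Stopping the pool with virsh pool-destroy nfspool then simply unmounts /path/to/data; the files on the NFS server remain intact.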
Supported and unsupported storage pool types
The following is a list of storage pool types supported by RHEL:
- Directory-based storage pools
- Disk-based storage pools
- Partition-based storage pools
- GlusterFS storage pools
- iSCSI-based storage pools
- LVM-based storage pools
- NFS-based storage pools
- SCSI-based storage pools with vHBA devices
- Multipath-based storage pools
- RBD-based storage pools
The following is a list of libvirt
storage pool types that are not supported by RHEL:
- Sheepdog-based storage pools
- Vstorage-based storage pools
- ZFS-based storage pools
11.1.3. Storage volumes
Storage pools are divided into storage volumes. Storage volumes are abstractions of physical partitions, LVM logical volumes, file-based disk images, and other storage types handled by libvirt. Storage volumes are presented to VMs as local storage devices, such as disks, regardless of the underlying hardware.
On the host machine, a storage volume is referred to by its name and an identifier for the storage pool from which it derives. On the virsh command line, this takes the form --pool storage_pool volume_name.
For example, to display information about a volume named firstimage in the guest_images pool:
# virsh vol-info --pool guest_images firstimage
Name: firstimage
Type: block
Capacity: 20.00 GB
Allocation: 20.00 GB
11.2. Managing storage for virtual machines using the CLI
The following documentation provides information on how to manage virtual machine (VM) storage using the virsh
command-line utility.
Using virsh
, you can add, remove, and modify VM storage, as well as view information about VM storage.
In many cases, storage for a VM is created at the same time the VM is created. Therefore, the following information primarily relates to advanced management of VM storage.
11.2.1. Viewing virtual machine storage information using the CLI
The following provides information about viewing information about storage pools and storage volumes using the CLI.
11.2.1.1. Viewing storage pool information using the CLI
Using the CLI, you can view a list of all storage pools with limited or full details about the storage pools. You can also filter the storage pools listed.
Procedure
Use the virsh pool-list command to view storage pool information.

# virsh pool-list --all --details
 Name                State    Autostart  Persistent    Capacity  Allocation   Available
 default             running  yes        yes          48.97 GiB   23.93 GiB   25.03 GiB
 Downloads           running  yes        yes         175.62 GiB   62.02 GiB  113.60 GiB
 RHEL8-Storage-Pool  running  yes        yes         214.62 GiB   93.02 GiB  168.60 GiB
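To narrow the list, you can combine virsh pool-list filtering options; for example, the following hedged examples list only inactive pools or only persistent pools:

# virsh pool-list --inactive
# virsh pool-list --persistent --details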
Additional resources
- For information on the available virsh pool-list options, use the virsh pool-list --help command.
11.2.1.2. Viewing storage volume information using the CLI
The following provides information on viewing information about storage volumes. You can view a list of all storage volumes in a specified storage pool and details about a specified storage volume.
Procedure
Use the virsh vol-list command to list the storage volumes in a specified storage pool.

# virsh vol-list --pool RHEL8-Storage-Pool --details
 Name                Path                                      Type  Capacity   Allocation
---------------------------------------------------------------------------------------------
 .bash_history       /home/VirtualMachines/.bash_history       file  18.70 KiB  20.00 KiB
 .bash_logout        /home/VirtualMachines/.bash_logout        file  18.00 B    4.00 KiB
 .bash_profile       /home/VirtualMachines/.bash_profile       file  193.00 B   4.00 KiB
 .bashrc             /home/VirtualMachines/.bashrc             file  1.29 KiB   4.00 KiB
 .git-prompt.sh      /home/VirtualMachines/.git-prompt.sh      file  15.84 KiB  16.00 KiB
 .gitconfig          /home/VirtualMachines/.gitconfig          file  167.00 B   4.00 KiB
 RHEL8_Volume.qcow2  /home/VirtualMachines/RHEL8_Volume.qcow2  file  60.00 GiB  13.93 GiB
Use the virsh vol-info command to view information about a specified storage volume in a storage pool.

# virsh vol-info --pool RHEL8-Storage-Pool --vol RHEL8_Volume.qcow2
Name:           RHEL8_Volume.qcow2
Type:           file
Capacity:       60.00 GiB
Allocation:     13.93 GiB
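To locate a volume on the host file system, for example to inspect it with other tools, you can also use the virsh vol-path command; the names below reuse the listing above, and the output shown is what that listing suggests it would be:

# virsh vol-path --pool RHEL8-Storage-Pool RHEL8_Volume.qcow2
/home/VirtualMachines/RHEL8_Volume.qcow2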
11.2.2. Creating and assigning storage for virtual machines using the CLI
The following is a high-level procedure for creating and assigning storage for virtual machines (VMs):
Create storage pools
Create one or more storage pools from available storage media. For a list of supported storage pool types, see Storage pool types.
To create persistent storage pools, use the virsh pool-define-as and virsh pool-define commands. The virsh pool-define-as command places the options in the command line. The virsh pool-define command uses an XML file for the pool options.
To create temporary storage pools, use the virsh pool-create and virsh pool-create-as commands. The virsh pool-create-as command places the options in the command line. The virsh pool-create command uses an XML file for the pool options.
Create storage volumes
Create one or more storage volumes from the available storage pools.
Assign storage devices to a VM
Assign one or more storage devices abstracted from storage volumes to a VM.
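As a rough end-to-end sketch of these three steps, using hypothetical names (a directory pool example_pool, a volume example_vol.qcow2, and a VM named example-vm):

# virsh pool-define-as example_pool dir --target /var/lib/libvirt/example_pool
# virsh pool-build example_pool
# virsh pool-start example_pool
# virsh vol-create-as example_pool example_vol.qcow2 10G --format qcow2
# virsh attach-disk example-vm /var/lib/libvirt/example_pool/example_vol.qcow2 vdb --driver qemu --subdriver qcow2 --targetbus virtio --config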
The following sections provide information on creating and assigning storage using the CLI:
11.2.2.1. Creating and assigning directory-based storage for virtual machines using the CLI
The following provides information about creating directory-based storage pools and storage volumes, and assigning volumes to virtual machines.
11.2.2.1.1. Creating directory-based storage pools using the CLI
The following provides instructions for creating directory-based storage pools.
Prerequisites
Ensure your hypervisor supports directory storage pools:
# virsh pool-capabilities | grep "'dir' supported='yes'"
If the command displays any output, directory pools are supported.
Procedure
Create a storage pool
Use the virsh pool-define-as command to define and create a directory-type storage pool. For example, to create a storage pool named guest_images_dir that uses the /guest_images directory:

# virsh pool-define-as guest_images_dir dir --target "/guest_images"
Pool guest_images_dir defined
If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.2.2.1.2, “Directory-based storage pool parameters”.
Create the storage pool target path
Use the virsh pool-build command to create a storage pool target path for a pre-formatted file system storage pool, initialize the storage source device, and define the format of the data.

# virsh pool-build guest_images_dir
Pool guest_images_dir built

# ls -la /guest_images
total 8
drwx------.  2 root root 4096 May 31 19:38 .
dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
Verify that the pool was created
Use the virsh pool-list command to verify that the pool was created.

# virsh pool-list --all
 Name               State      Autostart
-----------------------------------------
 default            active     yes
 guest_images_dir   inactive   no
Start the storage pool
Use the virsh pool-start command to mount the storage pool.

# virsh pool-start guest_images_dir
Pool guest_images_dir started
Note: The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.
[Optional] Turn on autostart
By default, a storage pool defined with the virsh command is not set to automatically start each time libvirtd starts. Use the virsh pool-autostart command to configure the storage pool to autostart.

# virsh pool-autostart guest_images_dir
Pool guest_images_dir marked as autostarted
Verification
Use the virsh pool-list command to verify the Autostart state.

# virsh pool-list --all
 Name               State      Autostart
-----------------------------------------
 default            active     yes
 guest_images_dir   inactive   yes
Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running.

# virsh pool-info guest_images_dir
Name:           guest_images_dir
UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       458.39 GB
Allocation:     197.91 MB
Available:      458.20 GB
11.2.2.1.2. Directory-based storage pool parameters
This section provides information about the required XML parameters for a directory-based storage pool and an example.
You can define a storage pool based on the XML configuration in a specified file. For example:
# virsh pool-define ~/guest_images.xml
Pool defined from guest_images_dir
Parameters
The following table provides a list of required parameters for the XML file for a directory-based storage pool.
Table 11.1. Directory-based storage pool parameters
Description | XML |
---|---|
The type of storage pool | <pool type='dir'> |
The name of the storage pool | <name>name</name> |
The path specifying the target. This will be the path used for the storage pool. | <target> <path>target_path</path> </target> |
Example
The following is an example of an XML file for a storage pool based on the /guest_images
directory:
<pool type='dir'> <name>dirpool</name> <target> <path>/guest_images</path> </target> </pool>
Additional resources
For more information on creating directory-based storage pools, see Section 11.2.2.1.1, “Creating directory-based storage pools using the CLI”.
11.2.2.2. Creating and assigning disk-based storage for virtual machines using the CLI
The following provides information about creating disk-based storage pools and storage volumes and assigning volumes to virtual machines.
11.2.2.2.1. Creating disk-based storage pools using the CLI
The following provides instructions for creating disk-based storage pools.
Recommendations
Be aware of the following before creating a disk-based storage pool:
-
Depending on the version of
libvirt
being used, dedicating a disk to a storage pool may reformat and erase all data currently stored on the disk device. It is strongly recommended that you back up the data on the storage device before creating a storage pool. VMs should not be given write access to whole disks or block devices (for example,
/dev/sdb
). Use partitions (for example,/dev/sdb1
) or LVM volumes.If you pass an entire block device to a VM, the VM will likely partition it or create its own LVM groups on it. This can cause the host machine to detect these partitions or LVM groups and cause errors.
Prerequisites
Ensure your hypervisor supports disk-based storage pools:
# virsh pool-capabilities | grep "'disk' supported='yes'"
If the command displays any output, disk-based pools are supported.
Procedure
Create a storage pool
Use the virsh pool-define-as command to define and create a disk-type storage pool. For example, to create a storage pool named guest_images_disk that uses the /dev/sdb1 partition, and is mounted on the /dev directory:

# virsh pool-define-as guest_images_disk disk gpt --source-dev=/dev/sdb1 --target /dev
Pool guest_images_disk defined
If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.2.2.2.2, “Disk-based storage pool parameters”.
Create the storage pool target path
Use the
virsh pool-build
command to create a storage pool target path for a pre-formatted file-system storage pool, initialize the storage source device, and define the format of the data.# virsh pool-build guest_images_disk Pool guest_images_disk built
Note: Building the target path is only necessary for disk-based, file system-based, and logical storage pools. If libvirt detects that the source storage device’s data format differs from the selected storage pool type, the build fails, unless the overwrite option is specified.
Verify that the pool was created
Use the
virsh pool-list
command to verify that the pool was created.# virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_disk inactive no
Start the storage pool
Use the
virsh pool-start
command to mount the storage pool.# virsh pool-start guest_images_disk Pool guest_images_disk started
NoteThe
virsh pool-start
command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.[Optional] Turn on autostart
By default, a storage pool defined with the
virsh
command is not set to automatically start each time libvirtd starts. Use thevirsh pool-autostart
command to configure the storage pool to autostart.# virsh pool-autostart guest_images_disk Pool guest_images_disk marked as autostarted
Verification
Use the
virsh pool-list
command to verify theAutostart
state.# virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_disk inactive yes
Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as
running
.# virsh pool-info guest_images_disk Name: guest_images_disk UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB
11.2.2.2.2. Disk-based storage pool parameters
This section provides information about the required XML parameters for a disk-based storage pool and an example.
You can define a storage pool based on the XML configuration in a specified file. For example:
# virsh pool-define ~/guest_images.xml
Pool defined from guest_images_disk
Parameters
The following table provides a list of required parameters for the XML file for a disk-based storage pool.
Table 11.2. Disk-based storage pool parameters
Description | XML |
---|---|
The type of storage pool | <pool type='disk'> |
The name of the storage pool | <name>name</name> |
The path specifying the storage device. For example, /dev/sdb. | <source> <device path='source_path'/> </source> |
The path specifying the target device. This will be the path used for the storage pool. | <target> <path>target_path</path> </target> |
Example
The following is an example of an XML file for a disk-based storage pool:
<pool type='disk'> <name>phy_disk</name> <source> <device path='/dev/sdb'/> <format type='gpt'/> </source> <target> <path>/dev</path> </target> </pool>
Additional resources
For more information on creating disk-based storage pools, see Section 11.2.2.2.1, “Creating disk-based storage pools using the CLI”.
11.2.2.3. Creating and assigning filesystem-based storage for virtual machines using the CLI
The following provides information about creating filesystem-based storage pools and storage volumes, and assigning volumes to virtual machines.
11.2.2.3.1. Creating filesystem-based storage pools using the CLI
The following provides instructions for creating filesystem-based storage pools.
Recommendations
Do not use this procedure to assign an entire disk as a storage pool (for example, /dev/sdb
). VMs should not be given write access to whole disks or block devices. This method should only be used to assign partitions (for example, /dev/sdb1
) to storage pools.
Prerequisites
Ensure your hypervisor supports filesystem-based storage pools:
# virsh pool-capabilities | grep "'fs' supported='yes'"
If the command displays any output, file-based pools are supported.
Procedure
Create a storage pool
Use the virsh pool-define-as command to define and create a filesystem-type storage pool. For example, to create a storage pool named guest_images_fs that uses the /dev/sdc1 partition, and is mounted on the /guest_images directory:

# virsh pool-define-as guest_images_fs fs --source-dev /dev/sdc1 --target /guest_images
Pool guest_images_fs defined
If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.2.2.3.2, “Filesystem-based storage pool parameters”.
Define the storage pool target path
Use the
virsh pool-build
command to create a storage pool target path for a pre-formatted file-system storage pool, initialize the storage source device, and define the format of the data.# virsh pool-build guest_images_fs Pool guest_images_fs built # ls -la /guest_images total 8 drwx------. 2 root root 4096 May 31 19:38 . dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
Verify that the pool was created
Use the
virsh pool-list
command to verify that the pool was created.# virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_fs inactive no
Start the storage pool
Use the
virsh pool-start
command to mount the storage pool.# virsh pool-start guest_images_fs Pool guest_images_fs started
NoteThe
virsh pool-start
command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.[Optional] Turn on autostart
By default, a storage pool defined with the
virsh
command is not set to automatically start each time libvirtd starts. Use thevirsh pool-autostart
command to configure the storage pool to autostart.# virsh pool-autostart guest_images_fs Pool guest_images_fs marked as autostarted
Verify the
Autostart
stateUse the
virsh pool-list
command to verify theAutostart
state.# virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_fs inactive yes
Verify the storage pool
Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as
running
. Verify there is alost+found
directory in the target path on the file system, indicating that the device is mounted.# virsh pool-info guest_images_fs Name: guest_images_fs UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB # mount | grep /guest_images /dev/sdc1 on /guest_images type ext4 (rw) # ls -la /guest_images total 24 drwxr-xr-x. 3 root root 4096 May 31 19:47 . dr-xr-xr-x. 25 root root 4096 May 31 19:38 .. drwx------. 2 root root 16384 May 31 14:18 lost+found
11.2.2.3.2. Filesystem-based storage pool parameters
The following provides information about the required parameters for a filesystem-based storage pool and an example.
You can define a storage pool based on the XML configuration in a specified file. For example:
# virsh pool-define ~/guest_images.xml
Pool defined from guest_images_fs
Parameters
The following table provides a list of required parameters for the XML file for a filesystem-based storage pool.
Table 11.3. Filesystem-based storage pool parameters
Description | XML |
---|---|
The type of storage pool | <pool type='fs'> |
The name of the storage pool | <name>name</name> |
The path specifying the partition. For example, /dev/sdc1. | <source> <device path='device_path'/> |
The file system type, for example ext4. | <format type='fs_type'/> </source> |
The path specifying the target. This will be the path used for the storage pool. | <target> <path>target_path</path> </target> |
Example
The following is an example of an XML file for a storage pool based on the /dev/sdc1
partition:
<pool type='fs'> <name>guest_images_fs</name> <source> <device path='/dev/sdc1'/> <format type='auto'/> </source> <target> <path>/guest_images</path> </target> </pool>
Additional resources
For more information on creating filesystem-based storage pools, see Section 11.2.2.3.1, “Creating filesystem-based storage pools using the CLI”.
11.2.2.4. Creating and assigning GlusterFS storage for virtual machines using the CLI
The following provides information about creating GlusterFS-based storage pools and storage volumes, and assigning volumes to virtual machines.
11.2.2.4.1. Creating GlusterFS-based storage pools using the CLI
GlusterFS is a user space file system that uses File System in Userspace (FUSE). The following provides instructions for creating GlusterFS-based storage pools.
Prerequisites
Before you can create a GlusterFS-based storage pool on a host, prepare a Gluster server.
Obtain the IP address of the Gluster server by listing its status with the following command:
# gluster volume status
Status of volume: gluster-vol1
Gluster process                             Port   Online   Pid
------------------------------------------------------------
Brick 222.111.222.111:/gluster-vol1         49155  Y        18634

Task Status of Volume gluster-vol1
------------------------------------------------------------
There are no active volume tasks
- If not installed, install the glusterfs-fuse package.
- If not enabled, enable the virt_use_fusefs boolean, and check that it is enabled.

# setsebool virt_use_fusefs on
# getsebool virt_use_fusefs
virt_use_fusefs --> on
Ensure your hypervisor supports GlusterFS-based storage pools:
# virsh pool-capabilities | grep "'gluster' supported='yes'"
If the command displays any output, GlusterFS-based pools are supported.
Procedure
Create a storage pool
Use the virsh pool-define-as command to define and create a GlusterFS-based storage pool. For example, to create a storage pool named guest_images_glusterfs that uses a Gluster server named gluster-vol1 with IP 111.222.111.222, and is mounted on the root directory of the Gluster server:

# virsh pool-define-as --name guest_images_glusterfs --type gluster --source-host 111.222.111.222 --source-name gluster-vol1 --source-path /
Pool guest_images_glusterfs defined
If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.2.2.4.2, “GlusterFS-based storage pool parameters”.
Verify that the pool was created
Use the
virsh pool-list
command to verify that the pool was created.# virsh pool-list --all Name State Autostart -------------------------------------------- default active yes guest_images_glusterfs inactive no
Start the storage pool
Use the
virsh pool-start
command to mount the storage pool.# virsh pool-start guest_images_glusterfs Pool guest_images_glusterfs started
NoteThe
virsh pool-start
command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.[Optional] Turn on autostart
By default, a storage pool defined with the
virsh
command is not set to automatically start each time libvirtd starts. Use thevirsh pool-autostart
command to configure the storage pool to autostart.# virsh pool-autostart guest_images_glusterfs Pool guest_images_glusterfs marked as autostarted
Verification
Use the
virsh pool-list
command to verify theAutostart
state.# virsh pool-list --all Name State Autostart -------------------------------------------- default active yes guest_images_glusterfs inactive yes
Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as
running
.# virsh pool-info guest_images_glusterfs Name: guest_images_glusterfs UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB
11.2.2.4.2. GlusterFS-based storage pool parameters
The following provides information about the required parameters for a GlusterFS-based storage pool and an example.
You can define a storage pool based on the XML configuration in a specified file. For example:
# virsh pool-define ~/guest_images.xml
Pool defined from guest_images_glusterfs
Parameters
The following table provides a list of required parameters for the XML file for a GlusterFS-based storage pool.
Table 11.4. GlusterFS-based storage pool parameters
Description | XML |
---|---|
The type of storage pool | <pool type='gluster'> |
The name of the storage pool | <name>name</name> |
The hostname or IP address of the Gluster server | <source> <host name='hostname'/> |
The path on the Gluster server used for the storage pool. | <dir path='Gluster-path'/> </source> |
Example
The following is an example of an XML file for a storage pool based on the Gluster file system at 111.222.111.222:
<pool type='gluster'> <name>Gluster_pool</name> <source> <host name='111.222.111.222'/> <dir path='/'/> <name>gluster-vol1</name> </source> </pool>
For more information on creating filesystem-based storage pools, see Section 11.2.2.4.1, “Creating GlusterFS-based storage pools using the CLI”.
11.2.2.5. Creating and assigning iSCSI-based storage for virtual machines using the CLI
The following provides information about creating iSCSI-based storage pools and storage volumes, securing iSCSI-based storage pools with libvirt
secrets, and assigning volumes to virtual machines.
Recommendations
Internet Small Computer System Interface (iSCSI) is a network protocol for sharing storage devices. iSCSI connects initiators (storage clients) to targets (storage servers) using SCSI instructions over the IP layer.
Using iSCSI-based devices to store virtual machines allows for more flexible storage options, such as using iSCSI as a block storage device. The iSCSI devices use a Linux-IO (LIO) target. This is a multi-protocol SCSI target for Linux. In addition to iSCSI, LIO also supports Fibre Channel and Fibre Channel over Ethernet (FCoE).
If you need to prevent access to an iSCSI storage pool, you can secure it using a libvirt secret.
Prerequisites
Before you can create an iSCSI-based storage pool, you must create iSCSI targets. You can create iSCSI targets using the targetcli package, which provides a command set for creating software-backed iSCSI targets.
For more information and instructions on creating iSCSI targets, see the Managing storage devices document.
11.2.2.5.1. Creating iSCSI-based storage pools using the CLI
The following provides instructions for creating iSCSI-based storage pools.
Prerequisites
Ensure your hypervisor supports iSCSI-based storage pools:
# virsh pool-capabilities | grep "'iscsi' supported='yes'"
If the command displays any output, iSCSI-based pools are supported.
Procedure
Create a storage pool
Use the virsh pool-define-as command to define and create an iSCSI-type storage pool. For example, to create a storage pool named guest_images_iscsi that uses the iqn.2010-05.com.example.server1:iscsirhel7guest IQN on the server1.example.com host, and is mounted on the /dev/disk/by-path path:

# virsh pool-define-as --name guest_images_iscsi --type iscsi --source-host server1.example.com --source-dev iqn.2010-05.com.example.server1:iscsirhel7guest --target /dev/disk/by-path
Pool guest_images_iscsi defined
If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.2.2.5.2, “iSCSI-based storage pool parameters”.
Verify that the pool was created
Use the
virsh pool-list
command to verify that the pool was created.# virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_iscsi inactive no
Start the storage pool
Use the
virsh pool-start
command to mount the storage pool.# virsh pool-start guest_images_iscsi Pool guest_images_iscsi started
NoteThe
virsh pool-start
command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.[Optional] Turn on autostart
By default, a storage pool defined with the
virsh
command is not set to automatically start each time libvirtd starts. Use thevirsh pool-autostart
command to configure the storage pool to autostart.# virsh pool-autostart guest_images_iscsi Pool guest_images_iscsi marked as autostarted
Verification
Use the
virsh pool-list
command to verify theAutostart
state.# virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_iscsi inactive yes
Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running.

# virsh pool-info guest_images_iscsi
Name:           guest_images_iscsi
UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       458.39 GB
Allocation:     197.91 MB
Available:      458.20 GB
11.2.2.5.2. iSCSI-based storage pool parameters
The following provides information about the required parameters for an iSCSI-based storage pool and an example.
You can define a storage pool based on the XML configuration in a specified file. For example:
# virsh pool-define ~/guest_images.xml
Pool defined from guest_images_iscsi
Parameters
The following table provides a list of required parameters for the XML file for an iSCSI-based storage pool.
Table 11.5. iSCSI-based storage pool parameters
Description | XML |
---|---|
The type of storage pool | <pool type='iscsi'> |
The name of the storage pool | <name>name</name> |
The name of the host | <source> <host name='hostname'/> |
The iSCSI IQN | <device path='iSCSI_IQN'/> </source> |
The path specifying the target. This will be the path used for the storage pool. | <target> <path>/dev/disk/by-path</path> </target> |
[Optional] The IQN of the iSCSI initiator. This is only needed when the ACL restricts the LUN to a particular initiator. | <initiator> <iqn name='initiator0'/> </initiator> |
The IQN of the iSCSI initiator can be determined using the virsh find-storage-pool-sources-as iscsi command.
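For example, the following hedged usage sketch queries the server1.example.com host from the example below; the command prints a <sources> XML document describing the iSCSI devices that were discovered:

# virsh find-storage-pool-sources-as iscsi server1.example.com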
Example
The following is an example of an XML file for a storage pool based on the specified iSCSI device:
<pool type='iscsi'> <name>iSCSI_pool</name> <source> <host name='server1.example.com'/> <device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/> </source> <target> <path>/dev/disk/by-path</path> </target> </pool>
Additional resources
For more information on creating iSCSI-based storage pools, see Section 11.2.2.5.1, “Creating iSCSI-based storage pools using the CLI”.
11.2.2.5.3. Securing iSCSI storage pools with libvirt secrets
User name and password parameters can be configured with virsh
to secure an iSCSI storage pool. You can configure this before or after you define the pool, but the pool must be started for the authentication settings to take effect.
The following provides instructions for securing iSCSI-based storage pools with libvirt
secrets.
This procedure is required if a user_ID
and password
were defined when creating the iSCSI target.
Procedure
Create a libvirt secret file with a challenge-handshake authentication protocol (CHAP) user name. For example:
<secret ephemeral='no' private='yes'> <description>Passphrase for the iSCSI example.com server</description> <usage type='iscsi'> <target>iscsirhel7secret</target> </usage> </secret>
Define the libvirt secret with the
virsh secret-define
command.# virsh secret-define secret.xml
Verify the UUID with the
virsh secret-list
command.# virsh secret-list UUID Usage ------------------------------------------------------------------- 2d7891af-20be-4e5e-af83-190e8a922360 iscsi iscsirhel7secret
Assign a secret to the UUID in the output of the previous step using the
virsh secret-set-value
command. This ensures that the CHAP username and password are in a libvirt-controlled secret list. For example:# virsh secret-set-value --interactive 2d7891af-20be-4e5e-af83-190e8a922360 Enter new value for secret: Secret value set
Add an authentication entry in the storage pool’s XML file using the virsh edit command, and add an <auth> element, specifying authentication type, username, and secret usage.
For example:
<pool type='iscsi'> <name>iscsirhel7pool</name> <source> <host name='192.168.122.1'/> <device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/> <auth type='chap' username='redhat'> <secret usage='iscsirhel7secret'/> </auth> </source> <target> <path>/dev/disk/by-path</path> </target> </pool>
Note: The <auth> sub-element exists in different locations within the virtual machine’s <pool> and <disk> XML elements. For a <pool>, <auth> is specified within the <source> element, as this describes where to find the pool sources, since authentication is a property of some pool sources (iSCSI and RBD). For a <disk>, which is a sub-element of a domain, the authentication to the iSCSI or RBD disk is a property of the disk. In addition, the <auth> sub-element for a disk differs from that of a storage pool.

<auth username='redhat'> <secret type='iscsi' usage='iscsirhel7secret'/> </auth>
To activate the changes, activate the storage pool. If the pool has already been started, stop and restart the storage pool:
# virsh pool-destroy iscsirhel7pool
# virsh pool-start iscsirhel7pool
11.2.2.6. Creating and assigning LVM-based storage for virtual machines using the CLI
The following provides information about creating LVM-based storage pools and storage volumes and assigning volumes to virtual machines.
11.2.2.6.1. Creating LVM-based storage pools using the CLI
The following provides instructions for creating LVM-based storage pools.
Recommendations
Be aware of the following before creating an LVM-based storage pool:
- LVM-based storage pools do not provide the full flexibility of LVM.
- libvirt supports thin logical volumes, but does not provide the features of thin storage pools.
- LVM-based storage pools are volume groups. You can create volume groups using Logical Volume Manager commands or virsh commands. To manage volume groups using the virsh interface, use the virsh commands to create volume groups. For more information about volume groups, refer to the Red Hat Enterprise Linux Logical Volume Manager Administration Guide.
- LVM-based storage pools require a full disk partition. If activating a new partition or device with these procedures, the partition will be formatted and all data will be erased. If using the host’s existing Volume Group (VG) nothing will be erased. It is recommended to back up the storage device before starting.
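For example, a volume group could be prepared with standard LVM commands before defining the pool; the device and volume group names below are only placeholders:

# pvcreate /dev/sdb1
# vgcreate guest_images_lvm /dev/sdb1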
Prerequisites
Ensure your hypervisor supports LVM-based storage pools:
# virsh pool-capabilities | grep "'logical' supported='yes'"
If the command displays any output, LVM-based pools are supported.
Procedure
Create a storage pool
Use the virsh pool-define-as command to define and create an LVM-type storage pool. For example, the following creates a storage pool named guest_images_logical that uses a libvirt_lvm LVM device mounted on /dev/sdc. The created storage pool is mounted as /dev/libvirt_lvm.

# virsh pool-define-as guest_images_logical logical --source-dev=/dev/sdc --source-name libvirt_lvm --target /dev/libvirt_lvm
Pool guest_images_logical defined
If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.2.2.6.2, “LVM-based storage pool parameters”.
Verify that the pool was created
Use the
virsh pool-list
command to verify that the pool was created.# virsh pool-list --all Name State Autostart ------------------------------------------- default active yes guest_images_logical inactive no
Start the storage pool
Use the
virsh pool-start
command to mount the storage pool.# virsh pool-start guest_images_logical Pool guest_images_logical started
NoteThe
virsh pool-start
command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.[Optional] Turn on autostart
By default, a storage pool defined with the
virsh
command is not set to automatically start each time libvirtd starts. Use thevirsh pool-autostart
command to configure the storage pool to autostart.# virsh pool-autostart guest_images_logical Pool guest_images_logical marked as autostarted
Verification
Use the
virsh pool-list
command to verify theAutostart
state.# virsh pool-list --all Name State Autostart ------------------------------------------- default active yes guest_images_logical inactive yes
Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as
running
.# virsh pool-info guest_images_logical Name: guest_images_logical UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB
11.2.2.6.2. LVM-based storage pool parameters
The following provides information about the required parameters for an LVM-based storage pool and an example.
You can define a storage pool based on the XML configuration in a specified file. For example:
# virsh pool-define ~/guest_images.xml
Pool defined from guest_images_logical
Parameters
The following table provides a list of required parameters for the XML file for a LVM-based storage pool.
Table 11.6. LVM-based storage pool parameters
Description | XML |
---|---|
The type of storage pool | <pool type='logical'> |
The name of the storage pool | <name>name</name> |
The path to the device for the storage pool | <source> <device path='device_path'/> |
The name of the volume group | <name>VG-name</name> |
The virtual group format | <format type='lvm2'/> </source> |
The target path | <target> <path>target_path</path> </target> |
If the logical volume group is made of multiple disk partitions, there may be multiple source devices listed. For example:
<source> <device path='/dev/sda1'/> <device path='/dev/sdb3'/> <device path='/dev/sdc2'/> ... </source>
Example
The following is an example of an XML file for a storage pool based on the specified LVM:
<pool type='logical'> <name>guest_images_lvm</name> <source> <device path='/dev/sdc'/> <name>libvirt_lvm</name> <format type='lvm2'/> </source> <target> <path>/dev/libvirt_lvm</path> </target> </pool>
Additional resources
For more information on creating LVM-based storage pools, see Section 11.2.2.6.1, “Creating LVM-based storage pools using the CLI”.
11.2.2.7. Creating and assigning network-based storage for virtual machines using the CLI
The following provides information about creating network-based storage pools and storage volumes and assigning volumes to virtual machines.
Prerequisites
- To create a Network File System (NFS)-based storage pool, an NFS Server should already be configured to be used by the host machine. For more information about NFS, refer to the Red Hat Enterprise Linux Storage Administration Guide.
-
Ensure that the required utilities for the file system being used are installed on the host. For example, cifs-utils for Common Internet File Systems (CIFS) or glusterfs-fuse for GlusterFS.
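For example, for NFS the client utilities are provided by the nfs-utils package, which could be installed with:

# yum install nfs-utils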
11.2.2.7.1. Creating NFS-based storage pools using the CLI
The following provides instructions for creating storage pools based on the network file system (NFS).
Prerequisites
Ensure your hypervisor supports NFS-based storage pools:
# virsh pool-capabilities | grep "<value>nfs</value>"
If the command displays any output, NFS-based pools are supported.
Procedure
Create a storage pool
Use the virsh pool-define-as command to define and create an NFS-type storage pool. For example, to create a storage pool named guest_images_netfs that uses an NFS server with IP 111.222.111.222 mounted on the server directory /home/net_mount using the target directory /var/lib/libvirt/images/nfspool:

# virsh pool-define-as --name guest_images_netfs --type netfs --source-host='111.222.111.222' --source-path='/home/net_mount' --source-format='nfs' --target='/var/lib/libvirt/images/nfspool'
If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.2.2.7.2, “NFS-based storage pool parameters”.
Verify that the pool was created
Use the
virsh pool-list
command to verify that the pool was created.# virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_netfs inactive no
Start the storage pool
Use the
virsh pool-start
command to mount the storage pool.# virsh pool-start guest_images_netfs Pool guest_images_netfs started
NoteThe
virsh pool-start
command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.[Optional] Turn on autostart
By default, a storage pool defined with the
virsh
command is not set to automatically start each time libvirtd starts. Use thevirsh pool-autostart
command to configure the storage pool to autostart.# virsh pool-autostart guest_images_netfs Pool guest_images_netfs marked as autostarted
Verification
Use the
virsh pool-list
command to verify theAutostart
state.# virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_netfs inactive yes
Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as
running
. Verify there is alost+found
directory in the target path on the file system, indicating that the device is mounted.# virsh pool-info guest_images_netfs Name: guest_images_netfs UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB
11.2.2.7.2. NFS-based storage pool parameters
The following provides information about the required parameters for an NFS-based storage pool and an example.
You can define a storage pool based on the XML configuration in a specified file. For example:
# virsh pool-define ~/guest_images.xml
Pool defined from guest_images_netfs
Parameters
The following table provides a list of required parameters for the XML file for an NFS-based storage pool.
Table 11.7. NFS-based storage pool parameters
Description | XML |
---|---|
The type of storage pool | <pool type='netfs'> |
The name of the storage pool | <name>name</name> |
The hostname of the network server where the mount point is located. This can be a hostname or an IP address. | <source> <host name='hostname'/> |
The format of the storage pool | One of the following: <format type='nfs'/> <format type='cifs'/> |
The directory used on the network server | <dir path='source_path'/> </source> |
The path specifying the target. This will be the path used for the storage pool. | <target> <path>target_path</path> </target> |
Example
The following is an example of an XML file for a storage pool based on the /home/net_mount
directory of the file_server
NFS server:
<pool type='netfs'> <name>nfspool</name> <source> <host name='file_server'/> <format type='nfs'/> <dir path='/home/net_mount'/> </source> <target> <path>/var/lib/libvirt/images/nfspool</path> </target> </pool>
Additional resources
For more information on creating NFS-based storage pools, see Section 11.2.2.7.1, “Creating NFS-based storage pools using the CLI”.
11.2.2.8. Creating and assigning SCSI-based storage with vHBA devices for virtual machines using the CLI
The following provides information about creating SCSI-based storage pools and storage volumes using vHBA devices, as well as assigning volumes to virtual machines (VMs).
Recommendations
N_Port ID Virtualization (NPIV) is a software technology that allows sharing of a single physical Fibre Channel host bus adapter (HBA). This allows multiple VMs to see the same storage from multiple physical hosts, and thus allows for easier migration paths for the storage. As a result, there is no need for the migration to create or copy storage, as long as the correct storage path is specified.
In virtualization, the virtual host bus adapter, or vHBA, controls the Logical Unit Numbers (LUNs) for VMs. For a host to share one Fibre Channel device path between multiple VMs, you must create a vHBA for each VM. A single vHBA cannot be used by multiple VMs.
Each vHBA for NPIV is identified by its parent HBA and its own World Wide Node Name (WWNN) and World Wide Port Name (WWPN). The path to the storage is determined by the WWNN and WWPN values. The parent HBA can be defined as scsi_host#
or as a WWNN/WWPN pair.
If a parent HBA is defined as scsi_host#
and hardware is added to the host machine, the scsi_host#
assignment may change. Therefore, it is recommended that you define a parent HBA using a WWNN/WWPN pair.
It is recommended that you define a libvirt
storage pool based on the vHBA, because this preserves the vHBA configuration.
Using a libvirt storage pool has two primary advantages:
- The libvirt code can easily find the LUN’s path via virsh command output.
- Migrating a VM requires only defining and starting a storage pool with the same vHBA name on the target machine. To do this, the vHBA LUN, libvirt storage pool and volume name must be specified in the VM’s XML configuration.
Before creating a vHBA, it is recommended that you configure storage array (SAN)-side zoning in the host LUN to provide isolation between VMs and prevent the possibility of data corruption.
To create a persistent vHBA configuration, first create a libvirt 'scsi' storage pool XML file. For information on the XML file, see Creating vHBAs. When creating a single vHBA that uses a storage pool on the same physical HBA, it is recommended to use a stable location for the <path>
value, such as one of the /dev/disk/by-{path|id|uuid|label}
locations on your system.
When creating multiple vHBAs that use storage pools on the same physical HBA, the value of the <path>
field must be only /dev/
, otherwise storage pool volumes are visible only to one of the vHBAs, and devices from the host cannot be exposed to multiple VMs with the NPIV configuration.
For more information on <path>
and the elements in <target>
, see upstream libvirt documentation.
11.2.2.8.1. Creating vHBAs
The following provides instructions on creating a virtual host bus adapter (vHBA).
Procedure
Locate the HBAs on your host system, using the virsh nodedev-list --cap vports command.
The following example shows a host that has two HBAs that support vHBA:

# virsh nodedev-list --cap vports
scsi_host3
scsi_host4
View the HBA’s details, using the virsh nodedev-dumpxml HBA_device command.

# virsh nodedev-dumpxml scsi_host3
The output from the command lists the <name>, <wwnn>, and <wwpn> fields, which are used to create a vHBA. <max_vports> shows the maximum number of supported vHBAs. For example:

<device> <name>scsi_host3</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3</path> <parent>pci_0000_10_00_0</parent> <capability type='scsi_host'> <host>3</host> <unique_id>0</unique_id> <capability type='fc_host'> <wwnn>20000000c9848140</wwnn> <wwpn>10000000c9848140</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> <capability type='vport_ops'> <max_vports>127</max_vports> <vports>0</vports> </capability> </capability> </device>
In this example, the <max_vports> value shows there are a total of 127 virtual ports available for use in the HBA configuration. The <vports> value shows the number of virtual ports currently being used. These values update after creating a vHBA.

Create an XML file similar to one of the following for the vHBA host. In these examples, the file is named vhba_host3.xml.

This example uses scsi_host3 to describe the parent vHBA.

<device> <parent>scsi_host3</parent> <capability type='scsi_host'> <capability type='fc_host'> </capability> </capability> </device>
This example uses a WWNN/WWPN pair to describe the parent vHBA.
<device> <name>vhba</name> <parent wwnn='20000000c9848140' wwpn='10000000c9848140'/> <capability type='scsi_host'> <capability type='fc_host'> </capability> </capability> </device>
NoteThe WWNN and WWPN values must match those in the HBA details seen in the previous step.
The <parent> field specifies the HBA device to associate with this vHBA device. The details in the <device> tag are used in the next step to create a new vHBA device for the host. For more information on the nodedev XML format, see the libvirt upstream pages.

Note: The virsh command does not provide a way to define the parent_wwnn, parent_wwpn, or parent_fabric_wwn attributes.

Create a vHBA based on the XML file created in the previous step using the virsh nodedev-create command.

# virsh nodedev-create vhba_host3.xml
Node device scsi_host5 created from vhba_host3.xml

Verification

Verify the new vHBA’s details (scsi_host5) using the virsh nodedev-dumpxml command:

# virsh nodedev-dumpxml scsi_host5
<device> <name>scsi_host5</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3/vport-3:0-0/host5</path> <parent>scsi_host3</parent> <capability type='scsi_host'> <host>5</host> <unique_id>2</unique_id> <capability type='fc_host'> <wwnn>5001a4a93526d0a1</wwnn> <wwpn>5001a4ace3ee047d</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> </capability> </device>
11.2.2.8.2. Creating SCSI-based storage pools with vHBA devices using the CLI
The following provides instructions for creating SCSI-based storage pools using virtual host bus adapter (vHBA) devices.
Prerequisites
Ensure your hypervisor supports SCSI-based storage pools:
# virsh pool-capabilities | grep "'scsi' supported='yes'"
If the command displays any output, SCSI-based pools are supported.
- Before creating a SCSI-based storage pool with vHBA devices, create a vHBA. For more information, see Creating vHBAs.
Procedure
Create a storage pool
Use the virsh pool-define-as command to define and create a SCSI storage pool using a vHBA. For example, the following creates a storage pool named guest_images_vhba that uses a vHBA identified by the scsi_host3 parent adapter, world-wide port number 5001a4ace3ee047d, and world-wide node number 5001a4a93526d0a1. The storage pool is mounted on the /dev/disk/ directory:

# virsh pool-define-as guest_images_vhba scsi --adapter-parent scsi_host3 --adapter-wwnn 5001a4a93526d0a1 --adapter-wwpn 5001a4ace3ee047d --target /dev/disk/
Pool guest_images_vhba defined
If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.2.2.8.3, “Parameters for SCSI-based storage pools with vHBA devices”.
Verify that the pool was created
Use the
virsh pool-list
command to verify that the pool was created.# virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_vhba inactive no
Start the storage pool
Use the
virsh pool-start
command to mount the storage pool.# virsh pool-start guest_images_vhba Pool guest_images_vhba started
NoteThe
virsh pool-start
command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.[Optional] Turn on autostart
By default, a storage pool defined with the
virsh
command is not set to automatically start each time libvirtd starts. Use thevirsh pool-autostart
command to configure the storage pool to autostart.# virsh pool-autostart guest_images_vhba Pool guest_images_vhba marked as autostarted
Verification
Use the
virsh pool-list
command to verify theAutostart
state.# virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_vhba inactive yes
Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as
running
.# virsh pool-info guest_images_vhba Name: guest_images_vhba UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB
11.2.2.8.3. Parameters for SCSI-based storage pools with vHBA devices
The following provides information about the required parameters for a SCSI-based storage pool that uses a virtual host bus adapter (vHBA) device.
You can define a storage pool based on the XML configuration in a specified file. For example:
# virsh pool-define ~/guest_images.xml
Pool defined from guest_images_vhba
Parameters
The following table provides a list of required parameters for the XML file for a SCSI-based storage pool with vHBA.
Table 11.8. Parameters for SCSI-based storage pools with vHBA devices
Description | XML |
---|---|
The type of storage pool | <pool type='scsi'> |
The name of the storage pool | <name>name</name> |
The identifier of the vHBA. The parent attribute is optional. | <source> <adapter type='fc_host' [parent=parent_scsi_device] wwnn='WWNN' wwpn='WWPN'/> </source> |
The target path. This will be the path used for the storage pool. | <target> <path>target_path</path> </target> |
When the <path> field is /dev/, libvirt generates a unique short device path for the volume device path. For example, /dev/sdc. Otherwise, the physical host path is used. For example, /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016044602198-lun-0. The unique short device path allows the same volume to be listed in multiple virtual machines (VMs) by multiple storage pools. If the physical host path is used by multiple VMs, duplicate device type warnings may occur.
The parent attribute can be used in the <adapter> field to identify the physical HBA parent from which the NPIV LUNs by varying paths can be used. This field, scsi_hostN, is combined with the vports and max_vports attributes to complete the parent identification. The parent, parent_wwnn, parent_wwpn, or parent_fabric_wwn attributes provide varying degrees of assurance that after the host reboots the same HBA is used.
- If no parent is specified, libvirt uses the first scsi_hostN adapter that supports NPIV.
- If only the parent is specified, problems can arise if additional SCSI host adapters are added to the configuration.
- If parent_wwnn or parent_wwpn is specified, after the host reboots the same HBA is used.
- If parent_fabric_wwn is used, after the host reboots an HBA on the same fabric is selected, regardless of the scsi_hostN used.
Examples
The following are examples of XML files for SCSI-based storage pools with vHBA.
A storage pool that is the only storage pool on the HBA:
<pool type='scsi'>
  <name>vhbapool_host3</name>
  <source>
    <adapter type='fc_host' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
A storage pool that is one of several storage pools that use a single vHBA and uses the
parent
attribute to identify the SCSI host device:
<pool type='scsi'>
  <name>vhbapool_host3</name>
  <source>
    <adapter type='fc_host' parent='scsi_host3' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
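In addition, the following is a hedged sketch of a pool definition that pins the vHBA to its physical parent by WWNN and WWPN instead of by scsi_hostN, which keeps the mapping stable across host reboots. The parent_wwnn and parent_wwpn values shown are placeholders for the WWNN and WWPN of the physical HBA:
<pool type='scsi'>
  <name>vhbapool_host3</name>
  <source>
    <adapter type='fc_host' parent_wwnn='20000000c9831b4b' parent_wwpn='10000000c9831b4b' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>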
Additional resources
For more information on creating SCSI-based storage pools with vHBA, see Section 11.2.2.8.2, “Creating SCSI-based storage pools with vHBA devices using the CLI”.
11.2.2.9. Creating and assigning storage volumes using the CLI
To obtain a disk image and attach it to a virtual machine (VM) as a virtual disk, create a storage volume and assign its XML configuration to the VM.
Prerequisites
A storage pool with unallocated space is present on the host. To verify, list the storage pools on the host:
# virsh pool-list --details
 Name        State     Autostart   Persistent   Capacity     Allocation   Available
 ------------------------------------------------------------------------------------
 default     running   yes         yes          48.97 GiB    36.34 GiB    12.63 GiB
 Downloads   running   yes         yes          175.92 GiB   121.20 GiB   54.72 GiB
 VM-disks    running   yes         yes          175.92 GiB   121.20 GiB   54.72 GiB
Procedure
Create a storage volume using the
virsh vol-create-as
command. For example, to create a 20 GB qcow2 volume based on the guest-images-fs
storage pool:
# virsh vol-create-as --pool guest-images-fs --name vm-disk1 --capacity 20GB --format qcow2
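Optionally, to confirm that the volume was created with the expected size and format, you can query it before continuing; the pool and volume names match the example above:
# virsh vol-info --pool guest-images-fs vm-disk1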
Important: Specific storage pool types do not support the
virsh vol-create-as
command and instead require specific processes to create storage volumes:
- GlusterFS-based - Use the qemu-img command to create storage volumes (see the sketch after this list).
- iSCSI-based - Prepare the iSCSI LUNs in advance on the iSCSI server.
- Multipath-based - Use the multipathd command to prepare or manage the multipath.
- vHBA-based - Prepare the fibre channel card in advance.
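For example, for a GlusterFS-based pool, a volume can be created with qemu-img directly on the Gluster volume. The following is a minimal sketch only; the server name (example.org), Gluster volume (Volume1), and image name (Image) are the placeholder values used in the XML example later in this procedure, and the size is illustrative:
# qemu-img create -f raw gluster://example.org/Volume1/Image 20G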
Create an XML file, and add the following lines in it. This file will be used to add the storage volume as a disk to a VM.
<disk type='volume' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source pool='guest-images-fs' volume='vm-disk1'/>
  <target dev='hdk' bus='ide'/>
</disk>
This example specifies a virtual disk that uses the
vm-disk1
volume, created in the previous step, and configures the volume as disk hdk
on an ide
bus. Modify the respective parameters as appropriate for your environment.
Important: With specific storage pool types, you must use different XML formats to describe a storage volume disk.
For GlusterFS-based pools:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='gluster' name='Volume1/Image'>
    <host name='example.org' port='6000'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</disk>
For multipath-based pools:
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/mapper/mpatha' />
  <target dev='sda' bus='scsi'/>
</disk>
For RBD-based storage pools:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='pool/image'>
    <host name='mon1.example.org' port='6321'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
Use the XML file to assign the storage volume as a disk to a VM. For example, to assign a disk defined in
~/vm-disk1.xml
to the testguest1
VM:
# virsh attach-device --config testguest1 ~/vm-disk1.xml
Verification
- In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.
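You can also confirm on the host that the disk appears in the VM configuration; this assumes the testguest1 VM used in the earlier examples:
# virsh domblklist testguest1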
11.2.3. Deleting storage for virtual machines using the CLI
The following provides information about deleting storage pools and storage volumes using the CLI.
11.2.3.1. Deleting storage pools using the CLI
To remove a storage pool from your host system, you must stop the pool and remove its XML definition.
Procedure
List the defined storage pools using the
virsh pool-list
command.
# virsh pool-list --all
 Name                   State    Autostart
 -------------------------------------------
 default                active   yes
 Downloads              active   yes
 RHEL8-Storage-Pool     active   yes
Stop the storage pool you want to delete using the
virsh pool-destroy
command.
# virsh pool-destroy Downloads
Pool Downloads destroyed
Optional: For some types of storage pools, you can remove the directory where the storage pool resides using the
virsh pool-delete
command. Note that to do so, the directory must be empty.
# virsh pool-delete Downloads
Pool Downloads deleted
Delete the definition of the storage pool using the
virsh pool-undefine
command.
# virsh pool-undefine Downloads
Pool Downloads has been undefined
Verification
Confirm that the storage pool was deleted.
# virsh pool-list --all
 Name                   State    Autostart
 -------------------------------------------
 default                active   yes
 RHEL8-Storage-Pool     active   yes
11.2.3.2. Deleting storage volumes using the CLI
To remove a storage volume from your host system, you must shut down any virtual machine (VM) that uses the volume and then delete the volume from its storage pool.
Prerequisites
- Any virtual machine that uses the storage volume you want to delete is shut down.
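For example, to check the state of a VM that uses the volume and shut it down if needed (testguest1 is a placeholder name):
# virsh domstate testguest1
# virsh shutdown testguest1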
Procedure
List the defined storage volumes in a storage pool using the
virsh vol-list
command. The command must specify the name or path of a storage pool.
# virsh vol-list --pool RHEL8-Storage-Pool
 Name                  Path
 ---------------------------------------------------------------
 .bash_history         /home/VirtualMachines/.bash_history
 .bash_logout          /home/VirtualMachines/.bash_logout
 .bash_profile         /home/VirtualMachines/.bash_profile
 .bashrc               /home/VirtualMachines/.bashrc
 .git-prompt.sh        /home/VirtualMachines/.git-prompt.sh
 .gitconfig            /home/VirtualMachines/.gitconfig
 RHEL8_Volume.qcow2    /home/VirtualMachines/RHEL8_Volume.qcow2
Delete storage volumes using the
virsh vol-delete
command. The command must specify the name or path of the storage volume and the storage pool from which the storage volume is abstracted.
# virsh vol-delete --pool RHEL8-Storage-Pool RHEL8_Volume.qcow2
Vol RHEL8_Volume.qcow2 deleted
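Optionally, if the volume contained sensitive data, you can wipe its contents before deleting it so that the data cannot be recovered; note that wiping large volumes can take a long time:
# virsh vol-wipe --pool RHEL8-Storage-Pool RHEL8_Volume.qcow2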
Verification
List the defined storage volumes again, and check that the output no longer displays the deleted volume.
# virsh vol-list --pool RHEL8-Storage-Pool
 Name                  Path
 ---------------------------------------------------------------
 .bash_history         /home/VirtualMachines/.bash_history
 .bash_logout          /home/VirtualMachines/.bash_logout
 .bash_profile         /home/VirtualMachines/.bash_profile
 .bashrc               /home/VirtualMachines/.bashrc
 .git-prompt.sh        /home/VirtualMachines/.git-prompt.sh
 .gitconfig            /home/VirtualMachines/.gitconfig
11.3. Managing storage for virtual machines using the web console
Using the RHEL 8 web console, you can manage various aspects of a virtual machine’s (VM’s) storage. You can use the web console to:
- View storage pool information.
- Create, deactivate, and remove storage pools.
- Create and remove storage volumes.
- Manage the disks attached to VMs.
11.3.1. Viewing storage pool information using the web console
The following procedure describes how to view detailed storage pool information about the virtual machine (VM) storage pools that the web console session can access.
Prerequisites
- To use the web console to manage VMs, install the web console VM plug-in.
Procedure
Click Storage Pools at the top of the interface. The Storage Pools window appears, showing a list of configured storage pools.
The information includes the following:
- Name - The name of the storage pool.
- Size - The size of the storage pool.
- Connection - The connection used to access the storage pool.
- State - The state of the storage pool.
Click the row of the storage pool whose information you want to see.
The row expands to reveal the Overview pane with the following information about the selected storage pool:
- Path - The path to the storage pool.
- Persistent - Whether or not the storage pool is persistent.
- Autostart - Whether or not the storage pool starts automatically.
- Type - The type of the storage pool.
To view a list of storage volumes created from the storage pool, click Storage Volumes.
The Storage Volumes pane appears, showing a list of configured storage volumes with their sizes and the amount of space used.
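For reference, the same details are available on the CLI. For example, for the RHEL8-Storage-Pool pool used elsewhere in this chapter:
# virsh pool-info RHEL8-Storage-Pool
# virsh vol-list --pool RHEL8-Storage-Pool --details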
Additional resources
- For instructions on viewing information about all of the VMs to which the web console session is connected, see Section 6.2.1, “Viewing a virtualization overview in the web console”.
- For instructions on viewing basic information about a selected VM to which the web console session is connected, see Section 6.2.3, “Viewing basic virtual machine information in the web console”.
- For instructions on viewing resource usage for a selected VM to which the web console session is connected, see Section 6.2.4, “Viewing virtual machine resource usage in the web console”.
- For instructions on viewing disk information about a selected VM to which the web console session is connected, see Section 6.2.5, “Viewing virtual machine disk information in the web console”.
- For instructions on viewing virtual network interface information about a selected VM to which the web console session is connected, see Section 6.2.6, “Viewing and editing virtual network interface information in the web console”.
11.3.2. Creating storage pools using the web console
A virtual machine (VM) requires a file, directory, or storage device that can be used to create storage volumes to store the VM image or act as additional storage. You can create storage pools from local or network-based resources that you can then use to create the storage volumes.
To create storage pools using the RHEL web console, see the following procedure.
Prerequisites
- To use the web console to manage virtual machines (VMs), you must install the web console VM plug-in.
Procedure
Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.
Click Create Storage Pool. The Create Storage Pool dialog appears.
Enter the following information in the Create Storage Pool dialog:
- Name - The name of the storage pool.
- Type - The type of the storage pool. This can be a file-system directory, a network file system, an iSCSI target, a physical disk drive, or an LVM volume group.
- Target Path - The storage pool path on the host’s file system.
- Startup - Whether or not the storage pool starts when the host boots.
- Click Create. The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool appears in the list of storage pools.
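For comparison, a directory-type storage pool like the one this dialog creates can also be defined on the CLI. The following is a sketch under assumed values; the pool name (VM-disks) and target path (/var/lib/libvirt/vm-disks) are placeholders:
# virsh pool-define-as VM-disks dir --target /var/lib/libvirt/vm-disks
# virsh pool-build VM-disks
# virsh pool-start VM-disks
# virsh pool-autostart VM-disks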
Additional resources
- For more information about storage pools, see Understanding storage pools.
- For instructions on viewing information about storage pools using the web console, see Viewing storage pool information using the web console.
11.3.3. Removing storage pools using the web console
You can remove storage pools to free up resources on the host or on the network to improve system performance. Deleting storage pools also frees up resources that can then be used by other virtual machines (VMs).
Unless explicitly specified, deleting a storage pool does not simultaneously delete the storage volumes inside that pool.
To delete a storage pool using the RHEL web console, see the following procedure.
If you want to temporarily deactivate a storage pool instead of deleting it, see Deactivating storage pools using the web console.
Prerequisites
- To use the web console to manage VMs, you must install the web console VM plug-in.
- If you want to delete a storage volume along with the pool, you must first detach the disk from the VM.
Procedure
Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.
In the Storage Pools window, click the storage pool you want to delete.
The row expands to reveal the Overview pane with basic information about the selected storage pool and controls for deactivating or deleting the storage pool.
Click Delete.
A confirmation dialog appears.
- Optional: To delete the storage volumes inside the pool, select the check box in the dialog.
Click Delete.
The storage pool is deleted. If you had selected the checkbox in the previous step, the associated storage volumes are deleted as well.
Additional resources
- For more information about storage pools, see Understanding storage pools.
- For instructions on viewing information about storage pools using the web console, see Viewing storage pool information using the web console.
11.3.4. Deactivating storage pools using the web console
If you do not want to permanently delete a storage pool, you can temporarily deactivate it instead.
When you deactivate a storage pool, no new volumes can be created in that pool. However, any virtual machines (VMs) that have volumes in that pool will continue to run. This is useful for a number of reasons. For example, you can limit the number of volumes that can be created in a pool to increase system performance.
To deactivate a storage pool using the RHEL web console, see the following procedure.
Prerequisites
- To use the web console to manage virtual machines (VMs), you must install the web console VM plug-in.
Procedure
Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.
In the Storage Pools window, click the storage pool you want to deactivate.
The row expands to reveal the Overview pane with basic information about the selected storage pool and controls for deactivating and deleting the storage pool.
Click Deactivate.
The storage pool is deactivated.
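Deactivating a storage pool in the web console corresponds to stopping the pool on the CLI; the pool definition is kept and the pool can be started again later. For example, using the Downloads pool from earlier in this chapter:
# virsh pool-destroy Downloads
# virsh pool-start Downloads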
Additional resources
- For more information about storage pools, see Understanding storage pools.
- For instructions on viewing information about storage pools using the web console, see Viewing storage pool information using the web console.
11.3.5. Creating storage volumes using the web console
To create a functioning virtual machine (VM), you require a local storage device assigned to the VM that can store the VM image and VM-related data. You can create a storage volume in a storage pool and assign it to a VM as a storage disk.
To create storage volumes using the web console, see the following procedure.
Prerequisites
- To use the web console to manage VMs, you must install the web console VM plug-in.
Procedure
Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.
In the Storage Pools window, click the storage pool from which you want to create a storage volume.
The row expands to reveal the Overview pane with basic information about the selected storage pool.
Click Storage Volumes next to the Overview tab in the expanded row.
The Storage Volumes tab appears with basic information about existing storage volumes, if any.
Click Create Volume.
The Create Storage Volume dialog appears.
Enter the following information in the Create Storage Volume dialog:
- Name - The name of the storage volume.
- Size - The size of the storage volume in MiB or GiB.
- Format - The format of the storage volume. The supported types are qcow2 and raw.
Click Create.
The storage volume is created, the Create Storage Volume dialog closes, and the new storage volume appears in the list of storage volumes.
Additional resources
- For more information about storage volumes, see Understanding storage volumes.
- For information about adding disks to VMs using the web console, see Adding new disks to virtual machines using the web console.
11.3.6. Removing storage volumes using the web console
You can remove storage volumes to free up space in the storage pool, or to remove storage items associated with defunct virtual machines (VMs).
To remove storage volumes using the RHEL web console, see the following procedure.
Prerequisites
- To use the web console to manage VMs, you must install the web console VM plug-in.
- You must detach the volume from the VM.
Procedure
Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.
In the Storage Pools window, click the storage pool from which you want to remove a storage volume.
The row expands to reveal the Overview pane with basic information about the selected storage pool.
Click Storage Volumes next to the Overview tab in the expanded row.
The Storage Volumes tab appears with basic information about existing storage volumes, if any.
Select the storage volume you want to remove.
- Click Delete Volumes. The selected storage volume is removed.
Additional resources
- For more information about storage volumes, see Understanding storage volumes.
11.3.7. Managing virtual machine disks using the web console
Using the RHEL 8 web console, you can manage the disks configured for the virtual machines to which the web console is connected.
You can:
- View information about the disks assigned to a VM.
- Create new disks and attach them to a VM.
- Attach existing disks to a VM.
- Detach disks from a VM.
11.3.7.1. Viewing virtual machine disk information in the web console
The following procedure describes how to view the disk information of a virtual machine (VM) to which the web console session is connected.
Prerequisites
- To use the web console to manage VMs, install the web console VM plug-in.
Procedure
Click the row of the VM whose information you want to see.
The row expands to reveal the Overview pane with basic information about the selected VM and controls for shutting down and deleting the VM.
Click Disks.
The Disks pane appears with information about the disks assigned to the VM.

The information includes the following:
- Device - The device type of the disk.
- Used - The amount of the disk that is used.
- Capacity - The size of the disk.
- Bus - The bus type of the disk.
- Access - Whether the disk is writable or read-only.
- Source - The disk device or file.
Additional resources
- For instructions on viewing information about all of the VMs to which the web console session is connected, see Section 6.2.1, “Viewing a virtualization overview in the web console”.
- For instructions on viewing information about the storage pools to which the web console session is connected, see Section 6.2.2, “Viewing storage pool information using the web console”.
- For instructions on viewing basic information about a selected VM to which the web console session is connected, see Section 6.2.3, “Viewing basic virtual machine information in the web console”.
- For instructions on viewing resource usage for a selected VM to which the web console session is connected, see Section 6.2.4, “Viewing virtual machine resource usage in the web console”.
- For instructions on viewing virtual network interface information about a selected VM to which the web console session is connected, see Section 6.2.6, “Viewing and editing virtual network interface information in the web console”.
11.3.7.2. Adding new disks to virtual machines using the web console
You can add new disks to virtual machines (VMs) by creating a new storage volume and attaching it to a VM using the RHEL 8 web console.
Prerequisites
- To use the web console to manage VMs, install the web console VM plug-in.
Procedure
In the Virtual Machines interface, click the row of the VM for which you want to create and attach a new disk.
The row expands to reveal the Overview pane with basic information about the selected VM and controls for shutting down and deleting the VM.
Click Disks.
The Disks pane appears with information about the disks configured for the VM.
Click Add Disk.
The Add Disk dialog appears.
- Select the Create New option.
Configure the new disk.
- Pool - Select the storage pool from which the virtual disk will be created.
- Name - Enter a name for the virtual disk that will be created.
- Size - Enter the size and select the unit (MiB or GiB) of the virtual disk that will be created.
- Format - Select the format for the virtual disk that will be created. The supported types are qcow2 and raw.
- Persistence - If checked, the virtual disk is persistent. If not checked, the virtual disk is transient.
Note: Transient disks can only be added to VMs that are running.
Additional Options - Set additional configurations for the virtual disk.
- Cache - Select the type of cache for the virtual disk.
- Bus - Select the type of bus for the virtual disk.
Click Add.
The virtual disk is created and connected to the VM.
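For reference, an equivalent result can be achieved on the CLI with virsh attach-disk. The following is a sketch only; the VM name, image path, and target device are placeholders:
# virsh attach-disk testguest1 /var/lib/libvirt/images/vm-disk2.qcow2 vdb --driver qemu --subdriver qcow2 --persistent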
Additional resources
- For instructions on viewing disk information about a selected VM to which the web console session is connected, see Section 11.3.7.1, “Viewing virtual machine disk information in the web console”.
- For information on attaching existing disks to VMs, see Section 11.3.7.3, “Attaching existing disks to virtual machines using the web console”.
- For information on detaching disks from VMs, see Section 11.3.7.4, “Detaching disks from virtual machines”.
11.3.7.3. Attaching existing disks to virtual machines using the web console
The following procedure describes how to attach existing storage volumes as disks to a virtual machine (VM) using the RHEL 8 web console.
Prerequisites
- To use the web console to manage VMs, install the web console VM plug-in.
Procedure
In the Virtual Machines interface, click the row of the VM to which you want to attach an existing disk.
The row expands to reveal the Overview pane with basic information about the selected VM and controls for shutting down and deleting the VM.
Click Disks.
The Disks pane appears with information about the disks configured for the VM.
Click Add Disk.
The Add Disk dialog appears.
Click the Use Existing radio button.
The appropriate configuration fields appear in the Add Disk dialog.
Configure the disk for the VM.
- Pool - Select the storage pool from which the virtual disk will be attached.
- Volume - Select the storage volume that will be attached.
- Persistence - Check to make the virtual disk persistent. Clear to make the virtual disk transient.
Additional Options - Set additional configurations for the virtual disk.
- Cache - Select the type of cache for the virtual disk.
- Bus - Select the type of bus for the virtual disk.
Click Add.
The selected virtual disk is attached to the VM.
Additional resources
- For instructions on viewing disk information about a selected VM to which the web console session is connected, see Section 11.3.7.1, “Viewing virtual machine disk information in the web console”.
- For information on creating new disks and attaching them to VMs, see Section 11.3.7.2, “Adding new disks to virtual machines using the web console”.
- For information on detaching disks from VMs, see Section 11.3.7.4, “Detaching disks from virtual machines”.
11.3.7.4. Detaching disks from virtual machines
The following describes how to detach disks from virtual machines (VMs) using the RHEL 8 web console.
Prerequisites
- To use the web console to manage VMs, install the web console VM plug-in.
Procedure
In the Virtual Machines interface, click the row of the VM from which you want to detach an existing disk.
The row expands to reveal the Overview pane with basic information about the selected VM and controls for shutting down and deleting the VM.
Click Disks.
The Disks pane appears with information about the disks configured for the VM.
- Click the Remove button next to the disk you want to detach from the VM. A Remove Disk confirmation dialog appears.
In the confirmation dialog, click Remove.
The virtual disk is detached from the VM.
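For reference, a disk can also be detached on the CLI by its target device; the VM name (testguest1) and target (vdb) are placeholders:
# virsh detach-disk testguest1 vdb --persistent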
Additional resources
- For instructions on viewing disk information about a selected VM to which the web console session is connected, see Section 11.3.7.1, “Viewing virtual machine disk information in the web console”.
- For information on creating new disks and attaching them to VMs, see Section 11.3.7.2, “Adding new disks to virtual machines using the web console”.
- For information on attaching existing disks to VMs, see Section 11.3.7.3, “Attaching existing disks to virtual machines using the web console”.