Chapter 11. Managing storage for virtual machines

A virtual machine (VM), just like a physical machine, requires storage for data, program, and system files. As a VM administrator, you can assign physical or network-based storage to your VMs as virtual storage. You can also modify how the storage is presented to a VM regardless of the underlying hardware.

The following sections provide information about the different types of VM storage, how they work, and how you can manage them using the CLI or the web console.

11.1. Understanding virtual machine storage

If you are new to virtual machine (VM) storage, or are unsure about how it works, the following sections provide a general overview about the various components of VM storage, how it works, management basics, and the supported solutions provided by Red Hat.

11.1.1. Introduction to storage pools

A storage pool is a file, directory, or storage device, managed by libvirt to provide storage for virtual machines (VMs). You can divide storage pools into storage volumes, which store VM images or are attached to VMs as additional storage.

Furthermore, multiple VMs can share the same storage pool, allowing for better allocation of storage resources.

  • Storage pools can be persistent or transient:

    • A persistent storage pool survives a system restart of the host machine. You can use the virsh pool-define command to create a persistent storage pool.
    • A transient storage pool only exists until the host reboots. You can use the virsh pool-create command to create a transient storage pool. An example of creating both types follows this list.
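
For example, the following sketch creates a persistent directory-based pool and a transient one. The pool names and target paths are illustrative, and the target directories must already exist or be created with the virsh pool-build command:

# virsh pool-define-as persistent_pool dir --target /var/lib/libvirt/persistent_pool
# virsh pool-start persistent_pool

# virsh pool-create-as transient_pool dir --target /var/lib/libvirt/transient_pool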

Storage pool storage types

Storage pools can be either local or network-based (shared):

  • Local storage pools

    Local storage pools are attached directly to the host server. They include local directories, directly attached disks, physical partitions, and Logical Volume Management (LVM) volume groups on local devices.

    Local storage pools are useful for development, testing, and small deployments that do not require migration or have a large number of VMs.

  • Networked (shared) storage pools

    Networked storage pools include storage devices shared over a network using standard protocols.

11.1.2. Introduction to storage volumes

Storage pools are divided into storage volumes. Storage volumes are abstractions of physical partitions, LVM logical volumes, file-based disk images, and other storage types handled by libvirt. Storage volumes are presented to VMs as local storage devices, such as disks, regardless of the underlying hardware.

On the host machine, a storage volume is referred to by its name and an identifier for the storage pool from which it derives. On the virsh command line, this takes the form --pool storage_pool volume_name.

For example, to display information about a volume named firstimage in the guest_images pool:

# virsh vol-info --pool guest_images firstimage
  Name:             firstimage
  Type:             block
  Capacity:         20.00 GB
  Allocation:       20.00 GB

11.1.3. Storage management using libvirt

Using the libvirt remote protocol, you can manage all aspects of VM storage. These operations can also be performed on a remote host. Consequently, a management application that uses libvirt, such as the RHEL web console, can be used to perform all the required tasks of configuring the storage of a VM.

You can use the libvirt API to query the list of volumes in a storage pool or to get information regarding the capacity, allocation, and available storage in that storage pool. For storage pools that support it, you can also use the libvirt API to create, clone, resize, and delete storage volumes. Furthermore, you can use the libvirt API to upload data to storage volumes, download data from storage volumes, or wipe data from storage volumes.
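
For example, when you manage storage from the command line, these operations map to virsh subcommands. The following is a minimal sketch that assumes a pool named guest_images and a volume named disk1.qcow2 already exist in it. The pool-info and vol-list commands query the pool, vol-upload and vol-download transfer data to and from the volume, and vol-wipe erases the volume contents:

# virsh pool-info guest_images
# virsh vol-list --pool guest_images
# virsh vol-upload --pool guest_images disk1.qcow2 /tmp/data.img
# virsh vol-download --pool guest_images disk1.qcow2 /tmp/copy.img
# virsh vol-wipe --pool guest_images disk1.qcow2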

11.1.4. Overview of storage management

To illustrate the available options for managing storage, the following example uses a sample NFS server whose export is mounted with the command mount -t nfs nfs.example.com:/path/to/share /path/to/data. A virsh sketch of this workflow follows the list below.

As a storage administrator:

  • You can define an NFS storage pool on the virtualization host to describe the exported server path and the client target path. Consequently, libvirt can mount the storage either automatically when libvirt is started or as needed while libvirt is running.
  • You can simply add the storage pool and storage volume to a VM by name. You do not need to add the target path to the volume. Therefore, even if the target client path changes, it does not affect the VM.
  • You can configure storage pools to autostart. When you do so, libvirt automatically mounts the NFS share on the specified directory when libvirt is started, similar to the command mount nfs.example.com:/path/to/share /path/to/data.
  • You can query the storage volume paths using the libvirt API. These storage volumes are basically the files present in the NFS shared disk. You can then copy these paths into the section of a VM’s XML definition that describes the source storage for the VM’s block devices.
  • In the case of NFS, you can use an application that uses the libvirt API to create and delete storage volumes in the storage pool (files in the NFS share) up to the limit of the size of the pool (the storage capacity of the share).

    Note that not all storage pool types support creating and deleting volumes.

  • You can stop a storage pool when no longer required. Stopping a storage pool (pool-destroy) undoes the start operation, in this case, unmounting the NFS share. The data on the share is not modified by the destroy operation, despite what the name of the command suggests. For more information, see man virsh.
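
The following is a minimal virsh sketch of this workflow. It assumes the example export nfs.example.com:/path/to/share, the client target directory /path/to/data, and an illustrative pool name nfs_example:

# virsh pool-define-as nfs_example netfs --source-host nfs.example.com --source-path /path/to/share --source-format nfs --target /path/to/data
# virsh pool-autostart nfs_example
# virsh pool-start nfs_example
# virsh vol-list --pool nfs_example
# virsh pool-destroy nfs_example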

11.1.5. Supported and unsupported storage pool types

Supported storage pool types

The following is a list of storage pool types supported by RHEL:

  • Directory-based storage pools
  • Disk-based storage pools
  • Partition-based storage pools
  • GlusterFS storage pools
  • iSCSI-based storage pools
  • LVM-based storage pools
  • NFS-based storage pools
  • SCSI-based storage pools with vHBA devices
  • Multipath-based storage pools
  • RBD-based storage pools

Unsupported storage pool types

The following is a list of libvirt storage pool types not supported by RHEL:

  • Sheepdog-based storage pools
  • Vstorage-based storage pools
  • ZFS-based storage pools

11.2. Viewing virtual machine storage information using the CLI

Before you can modify or troubleshoot your VM storage, you may want to view the current state and configuration of your storage. The following sections provide information about viewing storage pool and storage volume details using the CLI.

11.2.1. Viewing storage pool information using the CLI

Using the CLI, you can view a list of all storage pools with limited or full details about the storage pools. You can also filter the storage pools listed.

Procedure

  • Use the virsh pool-list command to view storage pool information.

    # virsh pool-list --all --details
     Name                State    Autostart  Persistent    Capacity  Allocation   Available
     default             running  yes        yes          48.97 GiB   23.93 GiB   25.03 GiB
     Downloads           running  yes        yes         175.62 GiB   62.02 GiB  113.60 GiB
     RHEL8-Storage-Pool  running  yes        yes         214.62 GiB   93.02 GiB  168.60 GiB
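
    To filter the output, you can use options such as --inactive or --type. For example, to list only directory-type storage pools:

    # virsh pool-list --type dir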

Additional resources

  • For information on the available virsh pool-list options, use the virsh pool-list --help command.

11.2.2. Viewing storage volume information using the CLI

The following provides information on viewing storage volume details using the CLI. You can view a list of all storage volumes in a specified storage pool and details about a specified storage volume.

Procedure

  1. Use the virsh vol-list command to list the storage volumes in a specified storage pool.

    # virsh vol-list --pool RHEL8-Storage-Pool --details
     Name                Path                                               Type   Capacity  Allocation
    ---------------------------------------------------------------------------------------------
     .bash_history       /home/VirtualMachines/.bash_history       file  18.70 KiB   20.00 KiB
     .bash_logout        /home/VirtualMachines/.bash_logout        file    18.00 B    4.00 KiB
     .bash_profile       /home/VirtualMachines/.bash_profile       file   193.00 B    4.00 KiB
     .bashrc             /home/VirtualMachines/.bashrc             file   1.29 KiB    4.00 KiB
     .git-prompt.sh      /home/VirtualMachines/.git-prompt.sh      file  15.84 KiB   16.00 KiB
     .gitconfig          /home/VirtualMachines/.gitconfig          file   167.00 B    4.00 KiB
     RHEL8_Volume.qcow2  /home/VirtualMachines/RHEL8_Volume.qcow2  file  60.00 GiB   13.93 GiB
  2. Use the virsh vol-info command to view information about a specified storage volume in a storage pool.

    # virsh vol-info --pool RHEL8-Storage-Pool --vol RHEL8_Volume.qcow2
    Name:           RHEL8_Volume.qcow2
    Type:           file
    Capacity:       60.00 GiB
    Allocation:     13.93 GiB

11.3. Creating and assigning storage pools for virtual machines using the CLI

You can create one or more storage pools from available storage media. For a list of supported storage pool types, see Supported storage pool types.

  • To create persistent storage pools, use the virsh pool-define-as and virsh pool-define commands.

    The virsh pool-define-as command takes the pool options as command-line arguments. The virsh pool-define command uses an XML file for the pool options.

  • To create transient storage pools, use the virsh pool-create and virsh pool-create-as commands.

    The virsh pool-create-as command takes the pool options as command-line arguments. The virsh pool-create command uses an XML file for the pool options. The sketch after this list illustrates the difference between the command-line and XML-based approaches.
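
For example, the following sketch defines an illustrative directory-based pool either from command-line options or from an equivalent XML file. The pool name, target path, and file name are placeholders:

# virsh pool-define-as example_pool dir --target /example_images

# cat ~/example_pool.xml
<pool type='dir'>
  <name>example_pool</name>
  <target>
    <path>/example_images</path>
  </target>
</pool>

# virsh pool-define ~/example_pool.xml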

The following sections provide information on creating and assigning various types of storage pools using the CLI:

11.3.1. Creating directory-based storage pools using the CLI

The following provides instructions for creating directory-based storage pools.

Prerequisites

  • Ensure your hypervisor supports directory storage pools:

    # virsh pool-capabilities | grep "'dir' supported='yes'"

    If the command displays any output, directory pools are supported.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create a directory-type storage pool. For example, to create a storage pool named guest_images_dir that uses the /guest_images directory:

    # virsh pool-define-as guest_images_dir dir --target "/guest_images"
    Pool guest_images_dir defined

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.4.1, “Directory-based storage pool parameters”.

  2. Create the storage pool target path

    Use the virsh pool-build command to create a storage pool target path for a pre-formatted file system storage pool, initialize the storage source device, and define the format of the data.

    # virsh pool-build guest_images_dir
      Pool guest_images_dir built
    
    # ls -la /guest_images
      total 8
      drwx------.  2 root root 4096 May 31 19:38 .
      dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
  3. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_dir     inactive   no
  4. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_dir
      Pool guest_images_dir started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  5. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time libvirtd starts. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_dir
      Pool guest_images_dir marked as autostarted

Verification

  1. Use the virsh pool-list command to verify the Autostart state.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_dir     inactive   yes
  2. Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running.

    # virsh pool-info guest_images_dir
      Name:           guest_images_dir
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB

11.3.2. Creating disk-based storage pools using the CLI

The following provides instructions for creating disk-based storage pools.

Recommendations

Be aware of the following before creating a disk-based storage pool:

  • Depending on the version of libvirt being used, dedicating a disk to a storage pool may reformat and erase all data currently stored on the disk device. It is strongly recommended that you back up the data on the storage device before creating a storage pool.
  • VMs should not be given write access to whole disks or block devices (for example, /dev/sdb). Use partitions (for example, /dev/sdb1) or LVM volumes.

    If you pass an entire block device to a VM, the VM will likely partition it or create its own LVM groups on it. This can cause the host machine to detect these partitions or LVM groups and cause errors.

Prerequisites

  • Ensure your hypervisor supports disk-based storage pools:

    # virsh pool-capabilities | grep "'disk' supported='yes'"

    If the command displays any output, disk-based pools are supported.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create a disk-type storage pool. For example, to create a storage pool named guest_images_disk that uses the /dev/sdb device, and is mounted on the /dev directory:

    # virsh pool-define-as guest_images_disk disk --source-format=gpt --source-dev=/dev/sdb --target /dev
    Pool guest_images_disk defined

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.4.2, “Disk-based storage pool parameters”.

  2. Create the storage pool target path

    Use the virsh pool-build command to create a storage pool target path for a pre-formatted file-system storage pool, initialize the storage source device, and define the format of the data.

    # virsh pool-build guest_images_disk
      Pool guest_images_disk built
    Note

    Building the target path is only necessary for disk-based, file system-based, and logical storage pools. If libvirt detects that the source storage device’s data format differs from the selected storage pool type, the build fails, unless the overwrite option is specified.

  3. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_disk    inactive   no
  4. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_disk
      Pool guest_images_disk started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  5. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time libvirtd starts. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_disk
      Pool guest_images_disk marked as autostarted

Verification

  1. Use the virsh pool-list command to verify the Autostart state.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_disk    inactive   yes
  2. Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running.

    # virsh pool-info guest_images_disk
      Name:           guest_images_disk
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB

11.3.3. Creating filesystem-based storage pools using the CLI

The following provides instructions for creating filesystem-based storage pools.

Recommendations

Do not use this procedure to assign an entire disk as a storage pool (for example, /dev/sdb). VMs should not be given write access to whole disks or block devices. This method should only be used to assign partitions (for example, /dev/sdb1) to storage pools.

Prerequisites

  • Ensure your hypervisor supports filesystem-based storage pools:

    # virsh pool-capabilities | grep "'fs' supported='yes'"

    If the command displays any output, filesystem-based pools are supported.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create a filesystem-type storage pool. For example, to create a storage pool named guest_images_fs that uses the /dev/sdc1 partition, and is mounted on the /guest_images directory:

    # virsh pool-define-as guest_images_fs fs --source-dev /dev/sdc1 --target /guest_images
    Pool guest_images_fs defined

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.4.3, “Filesystem-based storage pool parameters”.

  2. Define the storage pool target path

    Use the virsh pool-build command to create a storage pool target path for a pre-formatted file-system storage pool, initialize the storage source device, and define the format of the data.

    # virsh pool-build guest_images_fs
      Pool guest_images_fs built
    
    # ls -la /guest_images
      total 8
      drwx------.  2 root root 4096 May 31 19:38 .
      dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
  3. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_fs      inactive   no
  4. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_fs
      Pool guest_images_fs started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  5. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time libvirtd starts. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_fs
      Pool guest_images_fs marked as autostarted
  6. Verify the Autostart state

    Use the virsh pool-list command to verify the Autostart state.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_fs      inactive   yes
  7. Verify the storage pool

    Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running. Verify there is a lost+found directory in the target path on the file system, indicating that the device is mounted.

    # virsh pool-info guest_images_fs
      Name:           guest_images_fs
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB
    
    # mount | grep /guest_images
      /dev/sdc1 on /guest_images type ext4 (rw)
    
    # ls -la /guest_images
      total 24
      drwxr-xr-x.  3 root root  4096 May 31 19:47 .
      dr-xr-xr-x. 25 root root  4096 May 31 19:38 ..
      drwx------.  2 root root 16384 May 31 14:18 lost+found

11.3.4. Creating GlusterFS-based storage pools using the CLI

GlusterFS is a user space file system that uses File System in Userspace (FUSE). The following provides instructions for creating GlusterFS-based storage pools.

Prerequisites

  • Before you can create a GlusterFS-based storage pool on a host, prepare a Gluster server.

    1. Obtain the IP address of the Gluster server by listing its status with the following command:

      # gluster volume status
      Status of volume: gluster-vol1
      Gluster process                           Port	Online	Pid
      ------------------------------------------------------------
      Brick 222.111.222.111:/gluster-vol1       49155	  Y    18634
      
      Task Status of Volume gluster-vol1
      ------------------------------------------------------------
      There are no active volume tasks
    2. If not installed, install the glusterfs-fuse package.
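
      For example, you can install it with yum:

      # yum install glusterfs-fuse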
    3. If not enabled, enable the virt_use_fusefs boolean. Check that it is enabled.

      # setsebool virt_use_fusefs on
      # getsebool virt_use_fusefs
      virt_use_fusefs --> on
  • Ensure your hypervisor supports GlusterFS-based storage pools:

    # virsh pool-capabilities | grep "'gluster' supported='yes'"

    If the command displays any output, GlusterFS-based pools are supported.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create a GlusterFS-based storage pool. For example, to create a storage pool named guest_images_glusterfs that uses a Gluster volume named gluster-vol1 on a server with IP 111.222.111.222, and is mounted on the root directory of the Gluster volume:

    # virsh pool-define-as --name guest_images_glusterfs --type gluster --source-host 111.222.111.222 --source-name gluster-vol1 --source-path /
    Pool guest_images_glusterfs defined

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.4.4, “GlusterFS-based storage pool parameters”.

  2. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                    State      Autostart
      --------------------------------------------
      default                 active     yes
      guest_images_glusterfs  inactive   no
  3. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_glusterfs
      Pool guest_images_glusterfs started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  4. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time libvirtd starts. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_glusterfs
      Pool guest_images_glusterfs marked as autostarted

Verification

  1. Use the virsh pool-list command to verify the Autostart state.

    # virsh pool-list --all
    
      Name                    State      Autostart
      --------------------------------------------
      default                 active     yes
      guest_images_glusterfs  inactive   yes
  2. Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running.

    # virsh pool-info guest_images_glusterfs
      Name:           guest_images_glusterfs
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB

11.3.5. Creating iSCSI-based storage pools using the CLI

The following provides instructions for creating iSCSI-based storage pools.

Prerequisites

  • Ensure your hypervisor supports iSCSI-based storage pools:

    # virsh pool-capabilities | grep "'iscsi' supported='yes'"

    If the command displays any output, iSCSI-based pools are supported.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create an iSCSI-type storage pool. For example, to create a storage pool named guest_images_iscsi that uses the iqn.2010-05.com.example.server1:iscsirhel7guest IQN on the server1.example.com server, and is mounted on the /dev/disk/by-path path:

    # virsh pool-define-as --name guest_images_iscsi --type iscsi --source-host server1.example.com --source-dev iqn.2010-05.com.example.server1:iscsirhel7guest --target /dev/disk/by-path
    Pool guest_images_iscsi defined

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.4.5, “iSCSI-based storage pool parameters”.

  2. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_iscsi   inactive   no
  3. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_iscsi
      Pool guest_images_iscsi started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  4. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time libvirtd starts. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_iscsi
      Pool guest_images_iscsi marked as autostarted

Verification

  1. Use the virsh pool-list command to verify the Autostart state.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_iscsi   inactive   yes
  2. Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running.

    # virsh pool-info guest_images_iscsi
      Name:           guest_images_iscsi
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB

11.3.6. Creating LVM-based storage pools using the CLI

The following provides instructions for creating LVM-based storage pools.

Recommendations

Be aware of the following before creating an LVM-based storage pool:

  • LVM-based storage pools do not provide the full flexibility of LVM.
  • libvirt supports thin logical volumes, but does not provide the features of thin storage pools.
  • LVM-based storage pools are volume groups. You can create volume groups using Logical Volume Manager (LVM) commands or virsh commands; a sketch of preparing a volume group with LVM commands follows this list. To manage volume groups using the virsh interface, create the volume groups using virsh commands.

    For more information about volume groups, refer to the Red Hat Enterprise Linux Logical Volume Manager Administration Guide.

  • LVM-based storage pools require a full disk partition. If activating a new partition or device with these procedures, the partition will be formatted and all data will be erased. If using the host’s existing Volume Group (VG), nothing will be erased. It is recommended to back up the storage device before starting.
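
The following is a sketch of preparing a volume group with LVM commands before defining the pool, as mentioned in the list above. It assumes that /dev/sdc is an unused device, and it uses the libvirt_lvm volume group name that also appears later in this section:

# pvcreate /dev/sdc
# vgcreate libvirt_lvm /dev/sdc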

Prerequisites

  • Ensure your hypervisor supports LVM-based storage pools:

    # virsh pool-capabilities | grep "'logical' supported='yes'"

    If the command displays any output, LVM-based pools are supported.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create an LVM-type storage pool. For example, the following creates a storage pool named guest_images_logical that uses the libvirt_lvm volume group on the /dev/sdc device. The storage pool target path is /dev/libvirt_lvm.

    # virsh pool-define-as guest_images_logical logical --source-dev=/dev/sdc --source-name libvirt_lvm --target /dev/libvirt_lvm
    Pool guest_images_logical defined

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.4.6, “LVM-based storage pool parameters”.

  2. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                   State      Autostart
      -------------------------------------------
      default                active     yes
      guest_images_logical   inactive   no
  3. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_logical
      Pool guest_images_logical started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  4. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time libvirtd starts. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_logical
      Pool guest_images_logical marked as autostarted

Verification

  1. Use the virsh pool-list command to verify the Autostart state.

    # virsh pool-list --all
    
      Name                   State      Autostart
      -------------------------------------------
      default                active     yes
      guest_images_logical   inactive   yes
  2. Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running.

    # virsh pool-info guest_images_logical
      Name:           guest_images_logical
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB

11.3.7. Creating NFS-based storage pools using the CLI

The following provides instructions for creating storage pools based on the network file system (NFS).

Prerequisites

  • Ensure your hypervisor supports NFS-based storage pools:

    # virsh pool-capabilities | grep "<value>nfs</value>"

    If the command displays any output, NFS-based pools are supported.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create an NFS-type storage pool. For example, to create a storage pool named guest_images_netfs that uses an NFS server with the IP 111.222.111.222, the exported server directory /home/net_mount, and the target directory /var/lib/libvirt/images/nfspool:

    # virsh pool-define-as --name guest_images_netfs --type netfs --source-host='111.222.111.222' --source-path='/home/net_mount' --source-format='nfs' --target='/var/lib/libvirt/images/nfspool'

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.4.7, “NFS-based storage pool parameters”.

  2. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_netfs   inactive   no
  3. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_netfs
      Pool guest_images_netfs started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  4. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time libvirtd starts. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_netfs
      Pool guest_images_netfs marked as autostarted

Verification

  1. Use the virsh pool-list command to verify the Autostart state.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_netfs   inactive   yes
  2. Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running. Verify there is a lost+found directory in the target path on the file system, indicating that the device is mounted.

    # virsh pool-info guest_images_netfs
      Name:           guest_images_netfs
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB

11.3.8. Creating SCSI-based storage pools with vHBA devices using the CLI

The following provides instructions for creating SCSI-based storage pools using virtual host bus adapter (vHBA) devices.

Prerequisites

  • Ensure your hypervisor supports SCSI-based storage pools:

    # virsh pool-capabilities | grep "'scsi' supported='yes'"

    If the command displays any output, SCSI-based pools are supported.

  • Before creating a SCSI-based storage pool with vHBA devices, create a vHBA. For more information, see Creating vHBAs.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create a SCSI storage pool that uses a vHBA. For example, the following creates a storage pool named guest_images_vhba that uses a vHBA identified by the scsi_host3 parent adapter, world-wide port number 5001a4ace3ee047d, and world-wide node number 5001a4a93526d0a1. The storage pool is mounted on the /dev/disk/ directory:

    # virsh pool-define-as guest_images_vhba scsi --adapter-parent scsi_host3 --adapter-wwnn 5001a4a93526d0a1 --adapter-wwpn 5001a4ace3ee047d --target /dev/disk/
    Pool guest_images_vhba defined

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Section 11.4.8, “Parameters for SCSI-based storage pools with vHBA devices”.

  2. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_vhba    inactive   no
  3. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_vhba
      Pool guest_images_vhba started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  4. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time libvirtd starts. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_vhba
      Pool guest_images_vhba marked as autostarted

Verification

  1. Use the virsh pool-list command to verify the Autostart state.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_vhba    inactive   yes
  2. Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running.

    # virsh pool-info guest_images_vhba
      Name:           guest_images_vhba
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB

11.4. Parameters for creating storage pools

Based on the type of storage pool you require, you can modify its XML configuration file and define a specific type of storage pool. This section provides information about the XML parameters required for creating various types of storage pools along with examples.
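
For example, a convenient way to obtain a starting XML file is to dump the configuration of an existing pool, edit the result (at minimum, change the pool name, remove or change the <uuid> element, and adjust the target path), and define a new pool from the edited file. The following sketch assumes a pool named default exists on the host:

# virsh pool-dumpxml default > ~/guest_images.xml
# vi ~/guest_images.xml
# virsh pool-define ~/guest_images.xml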

11.4.1. Directory-based storage pool parameters

This section provides information about the required XML parameters for a directory-based storage pool and an example.

You can define a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_dir

Parameters

The following table provides a list of required parameters for the XML file for a directory-based storage pool.

Table 11.1. Directory-based storage pool parameters

Description | XML

The type of storage pool

<pool type='dir'>

The name of the storage pool

<name>name</name>

The path specifying the target. This will be the path used for the storage pool.

<target>
   <path>target_path</path>
</target>

Example

The following is an example of an XML file for a storage pool based on the /guest_images directory:

<pool type='dir'>
  <name>dirpool</name>
  <target>
    <path>/guest_images</path>
  </target>
</pool>

Additional resources

For more information on creating directory-based storage pools, see Section 11.3.1, “Creating directory-based storage pools using the CLI”.

11.4.2. Disk-based storage pool parameters

This section provides information about the required XML parameters for a disk-based storage pool and an example.

You can define a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_disk

Parameters

The following table provides a list of required parameters for the XML file for a disk-based storage pool.

Table 11.2. Disk-based storage pool parameters

Description | XML

The type of storage pool

<pool type='disk'>

The name of the storage pool

<name>name</name>

The path specifying the storage device. For example, /dev/sdb.

<source>
   <device path=source_path />
</source>

The path specifying the target device. This will be the path used for the storage pool.

<target>
   <path>target_path</path>
</target>

Example

The following is an example of an XML file for a disk-based storage pool:

<pool type='disk'>
  <name>phy_disk</name>
  <source>
    <device path='/dev/sdb'/>
    <format type='gpt'/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool>

Additional resources

For more information on creating disk-based storage pools, see Section 11.3.2, “Creating disk-based storage pools using the CLI”.

11.4.3. Filesystem-based storage pool parameters

The following provides information about the required parameters for a filesystem-based storage pool and an example.

You can define a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_fs

Parameters

The following table provides a list of required parameters for the XML file for a filesystem-based storage pool.

Table 11.3. Filesystem-based storage pool parameters

Description | XML

The type of storage pool

<pool type='fs'>

The name of the storage pool

<name>name</name>

The path specifying the partition. For example, /dev/sdc1

<source>
   <device path=device_path />

The file system type, for example ext4.

    <format type=fs_type />
</source>

The path specifying the target. This will be the path used for the storage pool.

<target>
    <path>path-to-pool</path>
</target>

Example

The following is an example of an XML file for a storage pool based on the /dev/sdc1 partition:

<pool type='fs'>
  <name>guest_images_fs</name>
  <source>
    <device path='/dev/sdc1'/>
    <format type='auto'/>
  </source>
  <target>
    <path>/guest_images</path>
  </target>
</pool>

Additional resources

For more information on creating filesystem-based storage pools, see Section 11.3.3, “Creating filesystem-based storage pools using the CLI”.

11.4.4. GlusterFS-based storage pool parameters

The following provides information about the required parameters for a GlusterFS-based storage pool and an example.

You can define a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_glusterfs

Parameters

The following table provides a list of required parameters for the XML file for a GlusterFS-based storage pool.

Table 11.4. GlusterFS-based storage pool parameters

Description | XML

The type of storage pool

<pool type='gluster'>

The name of the storage pool

<name>name</name>

The hostname or IP address of the Gluster server

<source>
   <host name=hostname />

The name of the Gluster volume

    <name>gluster-name</name>

The path on the Gluster server used for the storage pool.

    <dir path=gluster-path />
</source>

Example

The following is an example of an XML file for a storage pool based on the Gluster file system at 111.222.111.222:

<pool type='gluster'>
  <name>Gluster_pool</name>
  <source>
    <host name='111.222.111.222'/>
    <dir path='/'/>
    <name>gluster-vol1</name>
  </source>
</pool>

Additional resources

For more information on creating GlusterFS-based storage pools, see Section 11.3.4, “Creating GlusterFS-based storage pools using the CLI”.

11.4.5. iSCSI-based storage pool parameters

The following provides information about the required parameters for an iSCSI-based storage pool and an example.

You can define a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_iscsi

Parameters

The following table provides a list of required parameters for the XML file for an iSCSI-based storage pool.

Table 11.5. iSCSI-based storage pool parameters

Description | XML

The type of storage pool

<pool type='iscsi'>

The name of the storage pool

<name>name</name>

The name of the host

<source>
  <host name=hostname />

The iSCSI IQN

    <device path= iSCSI_IQN />
</source>

The path specifying the target. This will be the path used for the storage pool.

<target>
   <path>/dev/disk/by-path</path>
</target>

[Optional] The IQN of the iSCSI initiator. This is only needed when the ACL restricts the LUN to a particular initiator.

<initiator>
   <iqn name='initiator0' />
</initiator>

Note

The IQN of the iSCSI initiator can be determined using the virsh find-storage-pool-sources-as iscsi command.

Example

The following is an example of an XML file for a storage pool based on the specified iSCSI device:

<pool type='iscsi'>
  <name>iSCSI_pool</name>
  <source>
    <host name='server1.example.com'/>
    <device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>

Additional resources

For more information on creating iSCSI-based storage pools, see Section 11.3.5, “Creating iSCSI-based storage pools using the CLI”.

11.4.6. LVM-based storage pool parameters

The following provides information about the required parameters for an LVM-based storage pool and an example.

You can define a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_logical

Parameters

The following table provides a list of required parameters for the XML file for an LVM-based storage pool.

Table 11.6. LVM-based storage pool parameters

Description | XML

The type of storage pool

<pool type='logical'>

The name of the storage pool

<name>name</name>

The path to the device for the storage pool

<source>
   <device path='device_path' />

The name of the volume group

    <name>VG-name</name>

The format of the volume group

    <format type='lvm2' />
</source>

The target path

<target>
   <path>target_path</path>
</target>

Note

If the logical volume group is made of multiple disk partitions, there may be multiple source devices listed. For example:

<source>
  <device path='/dev/sda1'/>
  <device path='/dev/sdb3'/>
  <device path='/dev/sdc2'/>
  ...
</source>

Example

The following is an example of an XML file for a storage pool based on the specified LVM:

<pool type='logical'>
  <name>guest_images_lvm</name>
  <source>
    <device path='/dev/sdc'/>
    <name>libvirt_lvm</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/libvirt_lvm</path>
  </target>
</pool>

Additional resources

For more information on creating LVM-based storage pools, see Section 11.3.6, “Creating LVM-based storage pools using the CLI”.

11.4.7. NFS-based storage pool parameters

The following provides information about the required parameters for an NFS-based storage pool and an example.

You can define a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_netfs

Parameters

The following table provides a list of required parameters for the XML file for an NFS-based storage pool.

Table 11.7. NFS-based storage pool parameters

Description | XML

The type of storage pool

<pool type='netfs'>

The name of the storage pool

<name>name</name>

The hostname of the network server where the mount point is located. This can be a hostname or an IP address.

<source>
   <host name=hostname />

The format of the storage pool

One of the following:

    <format type='nfs' />

    <format type='glusterfs' />

    <format type='cifs' />

The directory used on the network server

    <dir path=source_path />
</source>

The path specifying the target. This will be the path used for the storage pool.

<target>
   <path>target_path</path>
</target>

Example

The following is an example of an XML file for a storage pool based on the /home/net_mount directory of the file_server NFS server:

<pool type='netfs'>
  <name>nfspool</name>
  <source>
    <host name='file_server'/>
    <format type='nfs'/>
    <dir path='/home/net_mount'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/nfspool</path>
  </target>
</pool>

Additional resources

For more information on creating NFS-based storage pools, see Section 11.3.7, “Creating NFS-based storage pools using the CLI”.

11.4.8. Parameters for SCSI-based storage pools with vHBA devices

The following provides information about the required parameters for a SCSI-based storage pool that uses a virtual host bus adapter (vHBA) device.

You can define a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_vhba

Parameters

The following table provides a list of required parameters for the XML file for a SCSI-based storage pool with vHBA.

Table 11.8. Parameters for SCSI-based storage pools with vHBA devices

Description | XML

The type of storage pool

<pool type='scsi'>

The name of the storage pool

<name>name</name>

The identifier of the vHBA. The parent attribute is optional.

<source>
   <adapter type='fc_host'
   [parent=parent_scsi_device]
   wwnn='WWNN'
   wwpn='WWPN' />
</source>

The target path. This will be the path used for the storage pool.

<target>
   <path>target_path</path>
</target>

Important

When the <path> field is /dev/, libvirt generates a unique short device path for the volume device path. For example, /dev/sdc. Otherwise, the physical host path is used. For example, /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016044602198-lun-0. The unique short device path allows the same volume to be listed in multiple virtual machines (VMs) by multiple storage pools. If the physical host path is used by multiple VMs, duplicate device type warnings may occur.

Note

The parent attribute can be used in the <adapter> field to identify the physical HBA parent from which the NPIV LUNs by varying paths can be used. This field, scsi_hostN, is combined with the vports and max_vports attributes to complete the parent identification. The parent, parent_wwnn, parent_wwpn, or parent_fabric_wwn attributes provide varying degrees of assurance that after the host reboots the same HBA is used.

  • If no parent is specified, libvirt uses the first scsi_hostN adapter that supports NPIV.
  • If only the parent is specified, problems can arise if additional SCSI host adapters are added to the configuration.
  • If parent_wwnn or parent_wwpn is specified, after the host reboots the same HBA is used.
  • If parent_fabric_wwn is used, after the host reboots an HBA on the same fabric is selected, regardless of the scsi_hostN used.

Examples

The following are examples of XML files for SCSI-based storage pools with vHBA.

  • A storage pool that is the only storage pool on the HBA:

    <pool type='scsi'>
      <name>vhbapool_host3</name>
      <source>
        <adapter type='fc_host' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/>
      </source>
      <target>
        <path>/dev/disk/by-path</path>
      </target>
    </pool>
  • A storage pool that is one of several storage pools that use a single vHBA and uses the parent attribute to identify the SCSI host device:

    <pool type='scsi'>
      <name>vhbapool_host3</name>
      <source>
        <adapter type='fc_host' parent='scsi_host3' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/>
      </source>
      <target>
        <path>/dev/disk/by-path</path>
      </target>
    </pool>

Additional resources

For more information on creating SCSI-based storage pools with vHBA, see Section 11.3.8, “Creating SCSI-based storage pools with vHBA devices using the CLI”.

11.5. Creating and assigning storage volumes using the CLI

To obtain a disk image and attach it to a virtual machine (VM) as a virtual disk, create a storage volume and assign its XML configuration to the VM.

Prerequisites

  • A storage pool with unallocated space is present on the host.

    • To verify, list the storage pools on the host:

      # virsh pool-list --details
      
      Name               State     Autostart   Persistent   Capacity     Allocation   Available
      --------------------------------------------------------------------------------------------
      default            running   yes         yes          48.97 GiB    36.34 GiB    12.63 GiB
      Downloads          running   yes         yes          175.92 GiB   121.20 GiB   54.72 GiB
      VM-disks           running   yes         yes          175.92 GiB   121.20 GiB   54.72 GiB
    • If you do not have an existing storage pool, create one. For more information, see Section 11.3, “Creating and assigning storage pools for virtual machines using the CLI”

Procedure

  1. Create a storage volume using the virsh vol-create-as command. For example, to create a 20 GB qcow2 volume based on the guest-images-fs storage pool:

    # virsh vol-create-as --pool guest-images-fs --name vm-disk1 --capacity 20GB --format qcow2

    Important: Specific storage pool types do not support the virsh vol-create-as command and instead require specific processes to create storage volumes:

    • GlusterFS-based - Use the qemu-img command to create storage volumes.
    • iSCSI-based - Prepare the iSCSI LUNs in advance on the iSCSI server.
    • Multipath-based - Use the multipathd command to prepare or manage the multipath.
    • vHBA-based - Prepare the fibre channel card in advance.
  2. Create an XML file, and add the following lines to it. This file will be used to add the storage volume as a disk to a VM.

    <disk type='volume' device='disk'>
        <driver name='qemu' type='qcow2'/>
        <source pool='guest-images-fs' volume='vm-disk1'/>
        <target dev='hdk' bus='ide'/>
    </disk>

    This example specifies a virtual disk that uses the vm-disk1 volume, created in the previous step, and sets it up as disk hdk on an ide bus. Modify the respective parameters as appropriate for your environment.

    Important: With specific storage pool types, you must use different XML formats to describe a storage volume disk.

    • For GlusterFS-based pools:

        <disk type='network' device='disk'>
          <driver name='qemu' type='raw'/>
          <source protocol='gluster' name='Volume1/Image'>
            <host name='example.org' port='6000'/>
          </source>
          <target dev='vda' bus='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </disk>
    • For multipath-based pools:

      <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/mapper/mpatha' />
      <target dev='sda' bus='scsi'/>
      </disk>
    • For RBD-based storage pools:

        <disk type='network' device='disk'>
          <driver name='qemu' type='raw'/>
          <source protocol='rbd' name='pool/image'>
            <host name='mon1.example.org' port='6321'/>
          </source>
          <target dev='vdc' bus='virtio'/>
        </disk>
  3. Use the XML file to assign the storage volume as a disk to a VM. For example, to assign a disk defined in ~/vm-disk1.xml to the testguest1 VM:

    # virsh attach-device --config testguest1 ~/vm-disk1.xml

Verification

  • In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.
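
    For example, on a Linux guest, you can list the block devices to confirm that the new disk is visible. This is only a sketch; the device name depends on the configured bus and on the disks already present in the VM:

    # lsblk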

11.6. Deleting storage for virtual machines using the CLI

You can delete storage associated with your virtual machines (VMs) to free up system resources, or to assign the storage to other VMs. You can also delete storage that you had assigned to defunct VMs. The following provides information about deleting storage pools and storage volumes using the CLI.

11.6.1. Deleting storage pools using the CLI

To remove a storage pool from your host system, you must stop the pool and remove its XML definition.

Procedure

  1. List the defined storage pools using the virsh pool-list command.

    # virsh pool-list --all
    Name                 State      Autostart
    -------------------------------------------
    default              active     yes
    Downloads            active     yes
    RHEL8-Storage-Pool   active     yes
  2. Stop the storage pool you want to delete using the virsh pool-destroy command.

    # virsh pool-destroy Downloads
    Pool Downloads destroyed
  3. Optional: For some types of storage pools, you can remove the directory where the storage pool resides using the virsh pool-delete command. Note that to do so, the directory must be empty.

    # virsh pool-delete Downloads
    Pool Downloads deleted
  4. Delete the definition of the storage pool using the virsh pool-undefine command.

    # virsh pool-undefine Downloads
    Pool Downloads has been undefined

Verification

  • Confirm that the storage pool was deleted.

    # virsh pool-list --all
    Name                 State      Autostart
    -------------------------------------------
    default              active     yes
    RHEL8-Storage-Pool   active     yes

11.6.2. Deleting storage volumes using the CLI

To remove a storage volume from your host system, you must shut down any virtual machine that uses the volume and then delete the storage volume, optionally wiping its data first.

Prerequisites

  • Any virtual machine that uses the storage volume you want to delete is shut down.

Procedure

  1. Use the virsh vol-list command to list the storage volumes in a specified storage pool.

    # virsh vol-list --pool RHEL-SP
     Name                 Path
    ---------------------------------------------------------------
     .bash_history        /home/VirtualMachines/.bash_history
     .bash_logout         /home/VirtualMachines/.bash_logout
     .bash_profile        /home/VirtualMachines/.bash_profile
     .bashrc              /home/VirtualMachines/.bashrc
     .git-prompt.sh       /home/VirtualMachines/.git-prompt.sh
     .gitconfig           /home/VirtualMachines/.gitconfig
     vm-disk1             /home/VirtualMachines/vm-disk1
  2. Optional: Use the virsh vol-wipe command to wipe a storage volume. For example, to wipe a storage volume named vm-disk1 associated with the storage pool RHEL-SP:

    # virsh vol-wipe --pool RHEL-SP vm-disk1
    Vol vm-disk1 wiped
  3. Use the virsh vol-delete command to delete a storage volume. For example, to delete a storage volume named vm-disk1 associated with the storage pool RHEL-SP:

    # virsh vol-delete --pool RHEL-SP vm-disk1
    Vol vm-disk1 deleted

Verification

  • Use the virsh vol-list command again to verify that the storage volume was deleted.

    # virsh vol-list --pool RHEL-SP
     Name                 Path
    ---------------------------------------------------------------
     .bash_history        /home/VirtualMachines/.bash_history
     .bash_logout         /home/VirtualMachines/.bash_logout
     .bash_profile        /home/VirtualMachines/.bash_profile
     .bashrc              /home/VirtualMachines/.bashrc
     .git-prompt.sh       /home/VirtualMachines/.git-prompt.sh
     .gitconfig           /home/VirtualMachines/.gitconfig

11.7. Managing storage for virtual machines using the web console

Using the RHEL 8 web console, you can manage various aspects of a virtual machine’s (VM’s) storage. You can use the web console to perform the storage management tasks described in the following sections.

11.7.1. Viewing storage pool information using the web console

The following procedure describes how to view detailed storage pool information about the virtual machine (VM) storage pools that the web console session can access.

Prerequisites

Procedure

  1. Click Storage Pools at the top of the Virtual Machines interface. The Storage Pools window appears, showing a list of configured storage pools.

    web console storage pools window

    The information includes the following:

    • Name - The name of the storage pool.
    • Size - The size of the storage pool.
    • Connection - The connection used to access the storage pool.
    • State - The state of the storage pool.
  2. Click the row of the storage pool whose information you want to see.

    The row expands to reveal the Overview pane with the following information about the selected storage pool:

    • Path - The path to the storage pool.
    • Persistent - Whether or not the storage pool is persistent.
    • Autostart - Whether or not the storage pool starts automatically.
    • Type - The type of the storage pool.
    web console storage pool overview
  3. To view a list of storage volumes created from the storage pool, click Storage Volumes.

    The Storage Volumes pane appears, showing a list of configured storage volumes with their sizes and the amount of space used.

    web console storage pool storage volumes

Additional resources

11.7.2. Creating storage pools using the web console

A virtual machine (VM) requires a file, directory, or storage device that can be used to create storage volumes to store the VM image or act as additional storage. You can create storage pools from local or network-based resources that you can then use to create the storage volumes.

To create storage pools using the RHEL web console, see the following procedure.

Prerequisites

Procedure

  1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.

    web console storage pools window
  2. Click Create Storage Pool. The Create Storage Pool dialog appears.

    cockpit create storage pool
  3. Enter the following information in the Create Storage Pool dialog:

    • Name - The name of the storage pool.
    • Type - The type of the storage pool. This can be a file-system directory, a network file system, an iSCSI target, a physical disk drive, or an LVM volume group.
    • Target Path - The storage pool path on the host’s file system.
    • Startup - Whether or not the storage pool starts when the host boots.
  4. Click Create. The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool appears in the list of storage pools.
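
If you prefer the command line, a directory-type storage pool with similar settings can also be defined with virsh. The pool name and target path in this sketch are illustrative placeholders, not values from the procedure above; the pool-build step creates the target directory if it does not already exist:

    # virsh pool-define-as guest_images_dir dir --target /var/lib/libvirt/guest_images
    # virsh pool-build guest_images_dir
    # virsh pool-start guest_images_dir
    # virsh pool-autostart guest_images_dir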

Additional resources

11.7.3. Removing storage pools using the web console

You can remove storage pools to free up resources on the host or on the network and thus improve system performance. The freed resources can then be used by other virtual machines (VMs).

Important

Unless explicitly specified, deleting a storage pool does not simultaneously delete the storage volumes inside that pool.

To delete a storage pool using the RHEL web console, see the following procedure.

Note

If you want to temporarily deactivate a storage pool instead of deleting it, see Deactivating storage pools using the web console

Prerequisites

Procedure

  1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.

    web console storage pools window
  2. In the Storage Pools window, click the storage pool you want to delete.

    The row expands to reveal the Overview pane with basic information about the selected storage pool and controls for deactivating or deleting the storage pool.

    web console storage pool overview
  3. Click Delete.

    A confirmation dialog appears.

    cockpit storage pool delete confirm
  4. Optional: To delete the storage volumes inside the pool, select the check box in the dialog.
  5. Click Delete.

    The storage pool is deleted. If you had selected the checkbox in the previous step, the associated storage volumes are deleted as well.

Additional resources

11.7.4. Deactivating storage pools using the web console

If you do not want to permanently delete a storage pool, you can temporarily deactivate it instead.

When you deactivate a storage pool, no new volumes can be created in that pool. However, any virtual machines (VMs) that have volumes in that pool continue to run. This is useful for a number of reasons; for example, you can limit the number of volumes created in a pool to increase system performance.

To deactivate a storage pool using the RHEL web console, see the following procedure.

Prerequisites

Procedure

  1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.

    web console storage pools window
  2. In the Storage Pools window, click the storage pool you want to deactivate.

    The row expands to reveal the Overview pane with basic information about the selected storage pool and controls for deactivating or deleting the storage pool.

    web console storage pool overview
  3. Click Deactivate.

    web console storage pool overview

    The storage pool is deactivated.
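
On the command line, deactivating a persistent storage pool corresponds to stopping it with the virsh pool-destroy command, which leaves the pool definition in place. For example, for the RHEL8-Storage-Pool pool:

    # virsh pool-destroy RHEL8-Storage-Pool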

Additional resources

11.7.5. Creating storage volumes using the web console

To create a functioning virtual machine (VM), you require a local storage device assigned to the VM that can store the VM image and VM-related data. You can create a storage volume in a storage pool and assign it to a VM as a storage disk.

To create storage volumes using the web console, see the following procedure.

Prerequisites

Procedure

  1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.

    web console storage pools window
  2. In the Storage Pools window, click the storage pool from which you want to create a storage volume.

    The row expands to reveal the Overview pane with basic information about the selected storage pool.

    web console storage pool overview
  3. Click Storage Volumes next to the Overview tab in the expanded row.

    The Storage Volume tab appears with basic information about existing storage volumes, if any.

    cockpit storage volume overview
  4. Click Create Volume.

    The Create Storage Volume dialog appears.

    cockpit create storage volume
  5. Enter the following information in the Create Storage Volume dialog:

    • Name - The name of the storage volume.
    • Size - The size of the storage volume in MiB or GiB.
    • Format - The format of the storage volume. The supported types are qcow2 and raw.
  6. Click Create.

    The storage volume is created, the Create Storage Volume dialog closes, and the new storage volume appears in the list of storage volumes.
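
If you prefer the command line, an equivalent storage volume can be created with the virsh vol-create-as command. The volume name and size here are illustrative placeholders:

    # virsh vol-create-as RHEL8-Storage-Pool vm-disk2 20G --format qcow2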

Additional resources

11.7.6. Removing storage volumes using the web console

You can remove storage volumes to free up space in the storage pool, or to remove storage items associated with defunct virtual machines (VMs).

To remove storage volumes using the RHEL web console, see the following procedure.

Prerequisites

Procedure

  1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.

    web console storage pools window
  2. In the Storage Pools window, click the storage pool from which you want to remove a storage volume.

    The row expands to reveal the Overview pane with basic information about the selected storage pool.

    web console storage pool overview
  3. Click Storage Volumes next to the Overview tab in the expanded row.

    The Storage Volume tab appears with basic information about existing storage volumes, if any.

    cockpit storage volume overview
  4. Select the storage volume you want to remove.

    cockpit delete storage volume
  5. Click Delete 1 Volume.

Additional resources

11.7.7. Viewing virtual machine disk information in the web console

The following procedure describes how to view the disk information of a virtual machine (VM) to which the web console session is connected.

Prerequisites

Procedure

  1. Click the row of the VM whose information you want to see.

    The row expands to reveal the Overview pane with basic information about the selected VM and controls for shutting down and deleting the VM.

  2. Click Disks.

    The Disks pane appears with information about the disks assigned to the VM.

cockpit disk info

The information includes the following:

  • Device - The device type of the disk.
  • Used - The amount of the disk that is used.
  • Capacity - The size of the disk.
  • Bus - The bus type of the disk.
  • Access - Whether the disk is writeable or read-only.
  • Source - The disk device or file.
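
Similar information is also available on the command line. For example, assuming a VM named testguest1 with a disk target vda, you can list the attached disks and query a disk’s capacity and allocation as follows:

    # virsh domblklist testguest1
    # virsh domblkinfo testguest1 vda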

Additional resources

11.7.8. Adding new disks to virtual machines using the web console

You can add new disks to virtual machines (VMs) by creating a new storage volume and attaching it to a VM using the RHEL 8 web console.

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the row of the VM for which you want to create and attach a new disk.

    The row expands to reveal the Overview pane with basic information about the selected VM and controls for shutting down and deleting the VM.

  2. Click Disks.

    The Disks pane appears with information about the disks configured for the VM.

    cockpit disk info

  3. Click Add Disk.

    The Add Disk dialog appears.

    cockpit add disk

  4. Select the Create New option.
  5. Configure the new disk.

    • Pool - Select the storage pool from which the virtual disk will be created.
    • Name - Enter a name for the virtual disk that will be created.
    • Size - Enter the size and select the unit (MiB or GiB) of the virtual disk that will be created.
    • Format - Select the format for the virtual disk that will be created. The supported types are qcow2 and raw.
    • Persistence - If checked, the virtual disk is persistent. If not checked, the virtual disk is transient.

      Note

      Transient disks can only be added to VMs that are running.

    • Additional Options - Set additional configurations for the virtual disk.

      • Cache - Select the type of cache for the virtual disk.
      • Bus - Select the type of bus for the virtual disk.
  6. Click Add.

    The virtual disk is created and connected to the VM.
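
A comparable result can be achieved on the command line by creating a disk image and attaching it with virsh. The image path, size, and target device in this sketch are illustrative placeholders:

    # qemu-img create -f qcow2 /var/lib/libvirt/images/testguest1-disk2.qcow2 20G
    # virsh attach-disk testguest1 /var/lib/libvirt/images/testguest1-disk2.qcow2 vdb --driver qemu --subdriver qcow2 --persistent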

Additional resources

11.7.9. Attaching existing disks to virtual machines using the web console

The following procedure describes how to attach existing storage volumes as disks to a virtual machine (VM) using the RHEL 8 web console.

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the row of the VM to which you want to attach an existing disk.

    The row expands to reveal the Overview pane with basic information about the selected VM and controls for shutting down and deleting the VM.

  2. Click Disks.

    The Disks pane appears with information about the disks configured for the VM.

    cockpit disk info
  3. Click Add Disk.

    The Add Disk dialog appears.

    cockpit add disk
  4. Click the Use Existing radio button.

    The appropriate configuration fields appear in the Add Disk dialog.

    cockpit attach disk
  5. Configure the disk for the VM.

    • Pool - Select the storage pool from which the virtual disk will be attached.
    • Volume - Select the storage volume that will be attached.
    • Persistence - Check to make the virtual disk persistent. Clear to make the virtual disk transient.
    • Additional Options - Set additional configurations for the virtual disk.

      • Cache - Select the type of cache for the virtual disk.
      • Bus - Select the type of bus for the virtual disk.
  6. Click Add.

    The selected virtual disk is attached to the VM.

Additional resources

11.7.10. Detaching disks from virtual machines using the web console

The following procedure describes how to detach disks from virtual machines (VMs) using the RHEL 8 web console.

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the row of the VM from which you want to detach an existing disk.

    The row expands to reveal the Overview pane with basic information about the selected VM and controls for shutting down and deleting the VM.

  2. Click Disks.

    The Disks pane appears with information about the disks configured for the VM.

    cockpit disk info

  3. Click the Remove button next to the disk you want to detach from the VM. A Remove Disk confirmation dialog appears.
  4. In the confirmation dialog, click Remove.

    The virtual disk is detached from the VM.
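
The same operation can also be performed on the command line with the virsh detach-disk command. The VM name and disk target here are illustrative placeholders:

    # virsh detach-disk testguest1 vdb --persistent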

Additional resources

11.8. Securing iSCSI storage pools with libvirt secrets

To secure an iSCSI storage pool, you can configure user name and password parameters by using virsh. You can configure this before or after you define the pool, but the pool must be started for the authentication settings to take effect.

The following provides instructions for securing iSCSI-based storage pools with libvirt secrets.

Note

This procedure is required if a user ID and password were defined when creating the iSCSI target.

Prerequisites

Procedure

  1. Create a libvirt secret file with a Challenge-Handshake Authentication Protocol (CHAP) user name. For example:

    <secret ephemeral='no' private='yes'>
        <description>Passphrase for the iSCSI example.com server</description>
        <usage type='iscsi'>
            <target>iscsirhel7secret</target>
        </usage>
    </secret>
  2. Define the libvirt secret with the virsh secret-define command.

    # virsh secret-define secret.xml

  3. Verify the UUID with the virsh secret-list command.

    # virsh secret-list
    UUID                                  Usage
    -------------------------------------------------------------------
    2d7891af-20be-4e5e-af83-190e8a922360  iscsi iscsirhel7secret
  4. Assign a secret to the UUID in the output of the previous step using the virsh secret-set-value command. This ensures that the CHAP username and password are in a libvirt-controlled secret list. For example:

    # virsh secret-set-value --interactive 2d7891af-20be-4e5e-af83-190e8a922360
    Enter new value for secret:
    Secret value set
  5. Add an authentication entry to the storage pool’s XML configuration by using the virsh edit command: add an <auth> element that specifies the authentication type, user name, and secret usage.

    For example:

    <pool type='iscsi'>
      <name>iscsirhel7pool</name>
        <source>
           <host name='192.168.122.1'/>
           <device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/>
           <auth type='chap' username='redhat'>
              <secret usage='iscsirhel7secret'/>
           </auth>
        </source>
      <target>
        <path>/dev/disk/by-path</path>
      </target>
    </pool>
    Note

    The <auth> sub-element exists in different locations within the virtual machine’s <pool> and <disk> XML elements. For a <pool>, <auth> is specified within the <source> element, as this describes where to find the pool sources, since authentication is a property of some pool sources (iSCSI and RBD). For a <disk>, which is a sub-element of a domain, the authentication to the iSCSI or RBD disk is a property of the disk. In addition, the <auth> sub-element for a disk differs from that of a storage pool.

    <auth username='redhat'>
      <secret type='iscsi' usage='iscsirhel7secret'/>
    </auth>
  6. To apply the changes, activate the storage pool. If the pool is already running, stop and restart it:

    # virsh pool-destroy iscsirhel7pool
    # virsh pool-start iscsirhel7pool
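
Optionally, after restarting the pool, you can confirm that the authentication entry was applied by displaying the pool’s XML configuration and checking for the <auth> element:

    # virsh pool-dumpxml iscsirhel7pool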

11.9. Creating vHBAs

A virtual host bus adapter (vHBA) device connects the host system to an SCSI device and is required for creating an SCSI-based storage pool.

The following provides instructions on creating a vHBA.

Procedure

  1. Locate the HBAs on your host system, using the virsh nodedev-list --cap vports command.

    The following example shows a host that has two HBAs that support vHBA:

    # virsh nodedev-list --cap vports
    scsi_host3
    scsi_host4
  2. View the HBA’s details, using the virsh nodedev-dumpxml HBA_device command.

    # virsh nodedev-dumpxml scsi_host3

    The output from the command lists the <name>, <wwnn>, and <wwpn> fields, which are used to create a vHBA. <max_vports> shows the maximum number of supported vHBAs. For example:

    <device>
      <name>scsi_host3</name>
      <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3</path>
      <parent>pci_0000_10_00_0</parent>
      <capability type='scsi_host'>
        <host>3</host>
        <unique_id>0</unique_id>
        <capability type='fc_host'>
          <wwnn>20000000c9848140</wwnn>
          <wwpn>10000000c9848140</wwpn>
          <fabric_wwn>2002000573de9a81</fabric_wwn>
        </capability>
        <capability type='vport_ops'>
          <max_vports>127</max_vports>
          <vports>0</vports>
        </capability>
      </capability>
    </device>

    In this example, the <max_vports> value shows there are a total of 127 virtual ports available for use in the HBA configuration. The <vports> value shows the number of virtual ports currently being used. These values update after creating a vHBA.

  3. Create an XML file similar to one of the following for the vHBA host. In these examples, the file is named vhba_host3.xml.

    This example uses scsi_host3 to describe the parent HBA of the vHBA.

    <device>
      <parent>scsi_host3</parent>
      <capability type='scsi_host'>
        <capability type='fc_host'>
        </capability>
      </capability>
    </device>

    This example uses a WWNN/WWPN pair to describe the parent HBA of the vHBA.

    <device>
      <name>vhba</name>
      <parent wwnn='20000000c9848140' wwpn='10000000c9848140'/>
      <capability type='scsi_host'>
        <capability type='fc_host'>
        </capability>
      </capability>
    </device>
    Note

    The WWNN and WWPN values must match those in the HBA details seen in the previous step.

    The <parent> field specifies the HBA device to associate with this vHBA device. The details in the <device> tag are used in the next step to create a new vHBA device for the host. For more information on the nodedev XML format, see the libvirt upstream pages.

    Note

    The virsh command does not provide a way to define the parent_wwnn, parent_wwpn, or parent_fabric_wwn attributes.

  4. Create a vHBA based on the XML file created in the previous step by using the virsh nodedev-create command.

    # virsh nodedev-create vhba_host3.xml
    Node device scsi_host5 created from vhba_host3.xml

Verification

  • Verify the new vHBA’s details (scsi_host5) using the virsh nodedev-dumpxml command:

    # virsh nodedev-dumpxml scsi_host5
    <device>
      <name>scsi_host5</name>
      <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3/vport-3:0-0/host5</path>
      <parent>scsi_host3</parent>
      <capability type='scsi_host'>
        <host>5</host>
        <unique_id>2</unique_id>
        <capability type='fc_host'>
          <wwnn>5001a4a93526d0a1</wwnn>
          <wwpn>5001a4ace3ee047d</wwpn>
          <fabric_wwn>2002000573de9a81</fabric_wwn>
        </capability>
      </capability>
    </device>
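
To use the new vHBA for VM storage, it is typically referenced from a SCSI-type storage pool definition. The following is a minimal sketch of such a pool; the pool name is an illustrative placeholder, and the parent and WWNN/WWPN values are taken from the verification output above:

    <pool type='scsi'>
      <name>vhbapool_host3</name>
      <source>
        <adapter type='fc_host' parent='scsi_host3' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/>
      </source>
      <target>
        <path>/dev/disk/by-path</path>
      </target>
    </pool>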

Additional resources