Chapter 3. Preparing Storage for Red Hat Virtualization

Prepare storage to be used for storage domains in the new environment. A Red Hat Virtualization environment must have at least one data storage domain, but adding more is recommended.

A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center, and cannot be shared across data centers while active (but can be migrated between data centers). Data domains of multiple storage types can be added to the same data center, provided they are all shared, rather than local, domains.

Self-hosted engines must have an additional data domain dedicated to the Manager virtual machine. This domain is created during the self-hosted engine deployment, and must be at least 74 GiB. You must prepare the storage for this domain before beginning the deployment.

You can use one of the following storage types:

  * NFS
  * iSCSI
  * Fibre Channel (FCP)
  * Red Hat Gluster Storage

Important

If you are using iSCSI storage, the self-hosted engine storage domain must use its own iSCSI target. Any additional storage domains must use a different iSCSI target.

Warning

Creating additional data storage domains in the same data center as the self-hosted engine storage domain is highly recommended. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you will not be able to add new storage domains or remove the corrupted storage domain; you will have to redeploy the self-hosted engine.

3.1. Preparing NFS Storage

Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Virtualization Host systems. After you export the shares on the remote storage and configure them in the Red Hat Virtualization Manager, the shares are imported automatically on the Red Hat Virtualization hosts.

For information on setting up and configuring NFS, see Network File System (NFS) in the Red Hat Enterprise Linux 7 Storage Administration Guide.

For information on how to export an NFS share, see How to export 'NFS' share from NetApp Storage / EMC SAN in Red Hat Virtualization.

Specific system user accounts and system user groups are required by Red Hat Virtualization so the Manager can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in Red Hat Virtualization.

Procedure

  1. Create the group kvm:

    # groupadd kvm -g 36
  2. Create the user vdsm in the group kvm:

    # useradd vdsm -u 36 -g 36
  3. Set the ownership of your exported directory to 36:36, which gives vdsm:kvm ownership:

    # chown -R 36:36 /exports/data
  4. Change the mode of the directory so that read, write, and execute access is granted to the owner, and read and execute access is granted to the group and other users:

    # chmod 0755 /exports/data
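For reference, a minimal export on the remote storage server might look like the following sketch. The export path /exports/data matches the procedure above; the open client specification (*) and the rw option are placeholder choices that you should tighten to suit your environment:

# cat /etc/exports
/exports/data       *(rw)
# exportfs -rv

Running exportfs -rv re-exports all entries in /etc/exports and reports what was exported.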

3.2. Preparing iSCSI Storage

Red Hat Virtualization supports iSCSI storage, in which a storage domain is created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time.

For information on setting up and configuring iSCSI storage, see Online Storage Management in the Red Hat Enterprise Linux 7 Storage Administration Guide.
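As a quick sanity check from a host, you can discover and log in to the target with iscsiadm. This is a minimal sketch; the portal address 10.64.32.1 and the IQN below are placeholder values:

# iscsiadm -m discovery -t sendtargets -p 10.64.32.1
# iscsiadm -m node -T iqn.2019-01.com.example:he-target -p 10.64.32.1 -l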

Important

If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. See https://access.redhat.com/solutions/2662261 for details.
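For illustration only, a filter in the devices section of /etc/lvm/lvm.conf that accepts the host's own boot device and rejects everything else might look like the following. The device path /dev/sda2 is an assumption; the linked solution explains how to construct a filter that is correct for your host:

devices {
    filter = [ "a|^/dev/sda2$|", "r|.*|" ]
}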

Important

Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode.
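You can check the logical block size a device reports before using it. In this sketch, /dev/sdb is a placeholder device name; a supported device reports 512:

# blockdev --getss /dev/sdb
512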

Important

If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored.

To prevent this situation, Red Hat recommends adding a drop-in multipath configuration file for the boot LUN on the root file system, so that I/O to the boot LUN is queued whenever a connection exists:

# cat /etc/multipath/conf.d/host.conf
multipaths {
    multipath {
        wwid boot_LUN_wwid
        no_path_retry queue
    }
}
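To find the value to use in place of boot_LUN_wwid, you can list the multipath maps on the host; the WWID typically appears at the start of each map entry, or in parentheses after its alias:

# multipath -l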

3.3. Preparing FCP Storage

Red Hat Virtualization supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time.

Red Hat Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage.

For information on setting up and configuring FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide.
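As a quick check that the host's HBAs are logged in and the LUNs are visible through multipath, you can inspect the Fibre Channel host entries in sysfs and list the multipath maps. Both commands assume the HBAs, zoning, and LUN masking are already configured:

# cat /sys/class/fc_host/host*/port_name
# multipath -ll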

Important

If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. See https://access.redhat.com/solutions/2662261 for details.

Important

Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode.

Important

If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored.

To prevent this situation, Red Hat recommends adding a drop-in multipath configuration file for the boot LUN on the root file system, so that I/O to the boot LUN is queued whenever a connection exists:

# cat /etc/multipath/conf.d/host.conf
multipaths {
    multipath {
        wwid boot_LUN_wwid
        no_path_retry queue
    }
}

3.4. Preparing Red Hat Gluster Storage

For information on setting up and configuring Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide.

For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see https://access.redhat.com/articles/2356261.

3.5. Customizing Multipath Configurations for SAN Vendors

To customize the multipath configuration settings, do not modify /etc/multipath.conf. Instead, create a new configuration file that overrides /etc/multipath.conf.

Warning

Upgrading Virtual Desktop and Server Manager (VDSM) overwrites the /etc/multipath.conf file. If multipath.conf contains customizations, overwriting it can trigger storage issues.

Procedure

  1. To override the values of settings in /etc/multipath.conf, create a new configuration file in the /etc/multipath/conf.d/ directory.

    Note

    The files in /etc/multipath/conf.d/ are read and applied in alphabetical order. Follow the convention of naming the file with a number at the beginning of its name. For example, /etc/multipath/conf.d/90-myfile.conf.

  2. Copy the settings you want to override from /etc/multipath.conf to the new configuration file in /etc/multipath/conf.d/. Edit the setting values and save your changes. An example override file is shown after this procedure.
  3. Apply the new configuration settings by entering the systemctl reload multipathd command.

    Note

    Avoid restarting the multipathd service. Doing so generates errors in the VDSM logs.
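As an illustration of steps 1 and 2, the following override file sets vendor-specific values in a devices section. The vendor and product strings and the setting values shown are placeholders; use the values your SAN vendor recommends:

# cat /etc/multipath/conf.d/90-myfile.conf
devices {
    device {
        vendor "EXAMPLE"
        product "EXAMPLE-LUN"
        no_path_retry 12
        path_checker tur
    }
}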

Verification steps

If you override the VDSM-generated settings in /etc/multipath.conf, verify that the new configuration performs as expected in a variety of failure scenarios.

For example, disable all of the storage connections. Then enable one connection at a time and verify that doing so makes the storage domain reachable.
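You can also confirm that multipathd is running with your overridden values by querying its effective configuration, for example (EXAMPLE is the placeholder vendor string from the sketch above):

# multipathd show config | grep -A 5 EXAMPLE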

Troubleshooting

If a Red Hat Virtualization Host has trouble accessing shared storage, check /etc/multipath.conf and files under /etc/multipath/conf.d/ for values that are incompatible with the SAN.
