Chapter 3. Known Issues

This section documents unexpected behavior known to affect Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for Virtualization).

BZ#1851114 - Error message device path is not a valid name for this device is shown

When a logical volume (LV) name exceeds 55 characters, which is a limitation of python-blivet, an error message such as ValueError: gluster_thinpool_gluster_vg_<WWID> is not a valid name for this device appears in the vdsm.log and supervdsm.log files.

To work around this issue, follow these steps:

  1. Rename the volume group (VG):

    # vgrename gluster_vg_<WWID> gluster_vg_<last-4-digit-WWID>
  2. Rename the thin pool (a worked example follows these steps):

    # lvrename gluster_vg_<last-4-digit-WWID> gluster_thinpool_gluster_vg_<WWID> gluster_thinpool_gluster_vg_<last-4-digit-WWID>
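For example, assuming a hypothetical WWID of 36000c29f7a2d4e5b8c1d2e3f4a5b6c7d, the last 4 digits are 6c7d and the two commands look like this:

# vgrename gluster_vg_36000c29f7a2d4e5b8c1d2e3f4a5b6c7d gluster_vg_6c7d
# lvrename gluster_vg_6c7d gluster_thinpool_gluster_vg_36000c29f7a2d4e5b8c1d2e3f4a5b6c7d gluster_thinpool_gluster_vg_6c7d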
BZ#1853995 - Updating storage domain gives error dialog box

When the primary volfile is replaced during host replacement and the storage domain is updated through the Administration Portal, the portal displays an operation canceled dialog box. However, the values are updated correctly in the backend.
BZ#1855945 - RHHI for Virtualization deployment fails when using multipath configuration and LVM cache

During the deployment of RHHI for Virtualization with multipath device names, volume groups (VGs) and logical volumes (LVs) are created with the WWID as a suffix, which leads to LV names longer than 128 characters. As a result, LV cache creation fails.

To work around this issue, follow these steps:

When initiating RHHI for Virtualization deployment with multipath device names such as /dev/mapper/<WWID>, replace the VG and thinpool suffix with the last 4 digits of the WWID as follows:

  1. During deployment from the web console, provide a multipath device name as /dev/mapper/<WWID> for bricks.
  2. Click Next to generate an inventory file.
  3. Log in to the deployment node via SSH.
  4. Find the WWIDs used by the LVM components:

    # grep vg /etc/ansible/hc_wizard_inventory.yml
  5. For all WWIDs, replace the WWID with its last 4 digits (see the example after these steps):

    # sed -i 's/<WWID>/<last-4-digit-WWID>/g' /etc/ansible/hc_wizard_inventory.yml
  6. Continue the deployment from the web console.
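For example, assuming a hypothetical WWID of 36000c29f7a2d4e5b8c1d2e3f4a5b6c7d, a generated name such as gluster_vg_36000c29f7a2d4e5b8c1d2e3f4a5b6c7d becomes gluster_vg_6c7d, and the replacement in step 5 would be:

# sed -i 's/36000c29f7a2d4e5b8c1d2e3f4a5b6c7d/6c7d/g' /etc/ansible/hc_wizard_inventory.yml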
BZ#1856577 - Shared storage volume fails to mount in IPv6 environment

When gluster volumes are created with the gluster_shared_storage option during the deployment of RHHI for Virtualization using IPv6 addresses, the required mount option is not added to the entry for the shared storage volume in the fstab file. As a result, the shared storage fails to mount.

To work around this issue, add the mount option xlator-option=transport.address-family=inet6 to the shared storage entry in the fstab file, as shown in the example below.
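For illustration, a hypothetical /etc/fstab entry for the shared storage volume might look like the following; the host name is a placeholder, and the mount point and other options may differ in your environment:

<server-FQDN>:/gluster_shared_storage /var/run/gluster/shared_storage glusterfs defaults,xlator-option=transport.address-family=inet6 0 0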

BZ#1856594 - Fails to create VDO enabled gluster volume with day2 operation from web console

Creating a Virtual Disk Optimization (VDO) enabled gluster volume as a day 2 operation fails from the web console.

To work around this issue, modify the playbook file /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml and change the Ansible task as follows:

- name: Restart all VDO volumes
  shell: "vdo stop -n {{ item.name }} && vdo start -n {{ item.name }}"
  with_items: "{{ gluster_infra_vdo }}"
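If you want to verify the stop and start cycle manually before retrying the operation, you can run the same commands against a single VDO volume; vdo_sdb below is a hypothetical VDO volume name:

# vdo stop -n vdo_sdb && vdo start -n vdo_sdb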
BZ#1858197 - Pending self-heal on the volume after the bricks are online

In dual network configurations (one network for gluster and the other for oVirt management), Automatic File Replication (AFR) healer threads are not spawned when the self-heal daemon restarts, resulting in pending self-heal entries on the volume.

To work around this issue, follow these steps:

  1. Change the hostname to the FQDN of the other network:

    # hostnamectl set-hostname <other-network-FQDN>
  2. Start the heal using the command:

    # gluster volume heal <volname>
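Optionally, to confirm that the pending entries are being healed, you can check the heal status; this verification command is a suggestion and not part of the documented workaround:

# gluster volume heal <volname> info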
BZ#1554241 - Cache volumes must be manually attached to asymmetric brick configurations

When bricks are configured asymmetrically, and a logical cache volume is configured, the cache volume is attached to only one brick. This is because the current implementation of asymmetric brick configuration creates a separate volume group and thin pool for each device, so asymmetric brick configurations would require a cache volume per device. However, this would use a large number of cache devices, and is not currently possible to configure using the Web Console.

To work around this issue, first remove any cache volumes that have been applied to an asymmetric brick set.

# lvconvert --uncache volume_group/logical_cache_volume

Then, follow the instructions in Configuring a logical cache volume to create a logical cache volume manually.
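As an illustration only, manually creating and attaching a cache volume with standard LVM commands typically resembles the following sketch; the volume group, thin pool, cache device, and size shown here are hypothetical, and the referenced documentation remains the authoritative procedure.

# lvcreate -L 220G -n lv_cache gluster_vg_sdc /dev/sdd
# lvconvert --type cache-pool gluster_vg_sdc/lv_cache
# lvconvert --type cache --cachepool gluster_vg_sdc/lv_cache gluster_vg_sdc/gluster_thinpool_sdc

The first command creates the cache LV on the fast device, the second converts it into a cache pool, and the third attaches the cache pool to the thin pool.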

BZ#1690820 - Create volume populates host field with IP address not FQDN

When you create a new volume with the Create Volume button in the Web Console, the hosts field is populated from the gluster peer list, and the first host is an IP address instead of an FQDN. As part of volume creation, this value is passed to an FQDN validation process, which fails for an IP address.

To work around this issue, edit the generated variable file and manually replace the IP address with the FQDN, as in the example below.
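For illustration, assuming the generated variable file lists the hosts as follows (the file layout, IP address, and FQDNs here are hypothetical), replace the IP address entry with that host's FQDN:

hosts:
  - 192.0.2.10          # replace with host1.example.com, the host's FQDN
  - host2.example.com
  - host3.example.com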

BZ#1506680 - Disk properties not cleaned correctly on reinstall

The installer cannot clean some kinds of metadata from existing logical volumes. This means that reinstalling a hyperconverged host fails unless the disks have been manually cleared beforehand.

To work around this issue, run the following commands to remove all data and metadata from disks attached to the physical machine.

Warning

Back up any data that you want to keep before running these commands, as these commands completely remove all data and metadata on all disks.

# pvremove /dev/* --force -y
# for disk in /dev/sd* /dev/nv*; do wipefs -a -f "$disk"; done