Chapter 1. Known Issues

This section documents unexpected behavior known to affect Red Hat Hyperconverged Infrastructure (RHHI).

BZ#1395087 - Virtual machines pause indefinitely when network unavailable
When the network used for Gluster and migration traffic becomes unavailable, virtual machines that are performing I/O become inaccessible and cannot be migrated to another node until the hypervisor is rebooted. This is expected behavior for the current fencing and migration methods. There is currently no workaround for this issue.
BZ#1401969 - Arbiter brick becomes heal source
When data bricks in an arbiter volume are taken offline and brought back online one at a time, the arbiter brick is incorrectly identified as the source of correct data when healing the other bricks. This results in virtual machines being paused, because arbiter bricks contain only metadata. There is currently no workaround for this issue.
BZ#1412930 - Excessive logging when storage unavailable with TLS/SSL enabled
When Transport Layer Security (TLS/SSL) is enabled on Red Hat Gluster Storage volumes and a Red Hat Gluster Storage server becomes unavailable, a large number of connection error messages are logged until the Red Hat Gluster Storage server becomes available again. This occurs because the log level of these messages is not reduced after repeated reconnection attempts. There is currently no workaround for this issue.
BZ#1413845 - Hosted Engine does not migrate when management network not available
If the management network becomes unavailable during migration, the Hosted Engine virtual machine restarts, but the Hosted Engine (the ovirt-engine service) does not start. To work around this issue, manually restart the ovirt-engine service on the Hosted Engine virtual machine.
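For example, a minimal sketch of this workaround, assuming you have a root shell on the Hosted Engine virtual machine:

# On the Hosted Engine virtual machine, as root:
systemctl restart ovirt-engine
systemctl status ovirt-engine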
BZ#1425767 - Sanity check script does not fail
The sanity check script sometimes returns zero (success) even when disks do not exist or are not empty. Because the sanity check appears to succeed, gdeploy attempts to create physical volumes and fails. To work around this issue, ensure that the disk value in the gdeploy configuration file is correct and that the disk has no partitions or labels, then retry deployment.
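For example, a minimal sketch of the disk check, assuming the device listed in the gdeploy configuration file is /dev/sdb (an illustrative name); note that wipefs removes existing signatures and is destructive:

# Confirm that the device exists and has no partitions
lsblk /dev/sdb
# Remove any leftover filesystem or partition table signatures (destructive)
wipefs --all /dev/sdb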
BZ#1432326 - Associating a network with a host makes network out of sync
When the Gluster network is associated with a Red Hat Gluster Storage node’s network interface, the Gluster network enters an out-of-sync state. To work around this issue, click the Management tab that corresponds to the node and click Refresh Capabilities.
BZ#1434105 - Live Storage Migration failure
Live storage migration from a Gluster-based storage domain fails when I/O operations are still in progress during the migration. There is currently no workaround for this issue.
BZ#1437799 - ISO upload to Gluster storage domain using SSH fails

When an ISO image is uploaded to a Gluster-based ISO storage domain using SSH, the ovirt-iso-uploader tool uses an incorrect path and fails with the following error:

OSError: [Errno 2] No such file or directory: '/ISO_Volume'
ERROR: Unable to copy RHGSS-3.1.3-RHEL-7-20160616.2-RHGSS-x86_64-dvd1.iso to ISO storage domain on ISODomain1.
ERROR: Error message is "unable to test the available space on /ISO_Volume

To work around this issue, enable NFS access on the Gluster volume and upload using NFS instead of SSH.
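
A minimal sketch of this workaround, assuming the ISO domain ISODomain1 is backed by a Gluster volume named ISO_Volume (names taken from the error above); the exact ovirt-iso-uploader options may differ in your environment:

# Enable Gluster NFS access on the volume that backs the ISO domain
gluster volume set ISO_Volume nfs.disable off
# Upload the ISO through the ISO domain instead of using SSH
ovirt-iso-uploader --iso-domain=ISODomain1 upload RHGSS-3.1.3-RHEL-7-20160616.2-RHGSS-x86_64-dvd1.iso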

BZ#1439069 - Reinstalling a node overwrites /etc/ovirt-hosted-engine/hosted-engine.conf

If a node is reinstalled after the primary Red Hat Gluster Storage server has been replaced, the contents of the /etc/ovirt-hosted-engine/hosted-engine.conf file are overwritten with details of the old primary host. As a result, the node enters a non-operational state in the cluster.

To work around this issue, move the reinstalled node to Maintenance mode and update the contents of /etc/ovirt-hosted-engine/hosted-engine.conf to point to the replacement primary server. Then reboot and reactivate the reinstalled node to bring it online and mount all volumes.
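
For example, a minimal sketch of the configuration update, assuming the storage entry in the file still references the old primary server (host and volume names below are illustrative):

# Check which server the configuration currently points to
grep '^storage=' /etc/ovirt-hosted-engine/hosted-engine.conf
# Edit the file so that the storage entry (and any backup-volfile-servers
# mount options, if present) references the replacement primary server,
# for example: storage=newprimary.example.com:/engine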

BZ#1443169 - Deploying Hosted Engine fails during bridge configuration
When setting up the self-hosted engine on a system with a bridged network configuration, setup fails after the restart of the firewalld service. To work around this problem, remove all *.bak files from the /etc/sysconfig/network-scripts/ directory before deploying the self-hosted engine.
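
For example, from a root shell on the host before starting the self-hosted engine deployment:

# Remove leftover backup network interface configuration files
rm -f /etc/sysconfig/network-scripts/*.bak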