Chapter 1. Known Issues
This section documents unexpected behavior known to affect Red Hat Hyperconverged Infrastructure (RHHI).
- BZ#1395087 - Virtual machines pause indefinitely when network unavailable
- When the network used for Gluster and migration traffic becomes unavailable, virtual machines performing I/O become inaccessible and cannot migrate to another node until the hypervisor reboots. This is expected behavior for the current fencing and migration methods. There is currently no workaround for this issue.
- BZ#1401969 - Arbiter brick becomes heal source
- When data bricks in an arbiter volume are taken offline and brought back online one at a time, the arbiter brick is incorrectly identified as the source of correct data when healing the other bricks. This results in virtual machines being paused, because arbiter bricks contain only metadata. There is currently no workaround for this issue.
- BZ#1425767 - Sanity check script does not fail
- The sanity check script sometimes returns zero (success) even when disks do not exist or are not empty. Because the sanity check appears to succeed, gdeploy attempts to create physical volumes and fails. To work around this issue, ensure that the disk value in the gdeploy configuration file is correct and that the disk has no partitions or labels, then retry deployment.
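The "no partitions or labels" condition can be verified from the hypervisor before retrying deployment. The helper below is a sketch, not part of the product: `disk_is_clean` is a hypothetical function, and it treats a disk as safe when the device exists and `wipefs` (run in its non-destructive report mode) finds no filesystem, RAID, or partition-table signatures on it.

```shell
# Sketch of a pre-deployment disk check. disk_is_clean is a hypothetical
# helper; /dev/sdb below is a placeholder for the device named by the
# disk value in the gdeploy configuration file.
disk_is_clean() {
    # wipefs without --all only reports signatures; it prints nothing
    # for a device that carries no partitions or labels.
    [ -b "$1" ] && [ -z "$(wipefs "$1" 2>/dev/null)" ]
}

# Example:
#   disk_is_clean /dev/sdb || echo "disk is missing or not empty"
```

If the check fails because of stale signatures, `wipefs --all` on the device removes them; note that this is destructive and should only be run on a disk whose contents are meant to be discarded.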
- BZ#1432326 - Associating a network with a host makes network out of sync
- When the Gluster network is associated with a Red Hat Gluster Storage node’s network interface, the Gluster network enters an out-of-sync state. To work around this issue, click the Management tab for the node and click Refresh Capabilities.
- BZ#1434105 - Live Storage Migration failure
- Live storage migration of virtual machine disks from a Gluster-based storage domain fails when I/O operations are still in progress during the migration. There is currently no workaround for this issue.