Chapter 3. Known Issues
This section documents unexpected behavior known to affect Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for Virtualization).
- BZ#1793398 - Incorrect variables in ansible roles during deployment
When an automated deployment is executed from the command line, RHHI for Virtualization deployment fails because of incorrect variables.
To work around this issue:
- In the /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he_gluster_vars.json file, remove he_ansible_host_name: host1.
- In the /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he_gluster_vars.json file, update the value of the he_mem_size_MB variable to 16384.
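For reference, after these edits the relevant part of he_gluster_vars.json might resemble the following minimal sketch; the he_fqdn value is an illustrative assumption, all other deployment variables are omitted, and he_ansible_host_name no longer appears in the file:
{
    "he_fqdn": "engine.example.com",
    "he_mem_size_MB": "16384"
}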
- BZ#1795928 - First deployment using ansible roles fails
During automated deployment, a web hook is successfully added to gluster-eventsapi, but reports a failure instead of a success. This causes deployment to fail.
To work around this issue, follow these steps:
Clean up the deployment attempt:
# ovirt-hosted-engine-cleanup -q
Set the hostname that corresponds to the front-end FQDN (fully qualified domain name):
# hostnamectl set-hostname <Front-end-FQDN>
Redeploy the Hosted Engine:
# ansible-playbook -i /root/gluster_inventory.yml /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/he_deployment.yml --extra-vars='@/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he_gluster_vars.json'
- BZ#1796035 - Ansible roles do not configure additional hosts to run the Hosted Engine
During an automated deployment, only the first host is configured to run the Hosted Engine. This reduces the availability of the deployment.
To work around this issue, perform the following steps for each additional host:
- Move the additional host to maintenance.
- Reinstall the host, ensuring that you set HostedEngine=deploy.
- BZ#1754743 - Enabling LV cache along with VDO volumes fails during Deployment
If LV cache is attached to a Virtual Data Optimizer (VDO) enabled thinpool device, then the path for the cache device should be /dev/mapper/vdo_<device name> instead of /dev/<device name>.
To work around this issue, edit the generated Ansible inventory file.
If VDO is created from sdb and the cache disk is sdc, then the configuration should be as follows:
gluster_infra_cache_vars:
  - vgname: gluster_vg_sdb
    cachedisk: '/dev/mapper/vdo_sdb,/dev/sdc'
    cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
    cachethinpoolname: gluster_thinpool_gluster_vg_sdb
    cachelvsize: 1G
    cachemode: writethrough
- BZ#1432326 - Associating a network with a host makes network out of sync
When the Gluster network is associated with a Red Hat Gluster Storage node’s network interface, the Gluster network enters an out of sync state. To work around this issue, click the Management tab that corresponds to the node and click Refresh Capabilities.
- BZ#1547674 - Default slab size limits VDO volume size
Virtual Data Optimizer (VDO) volumes are composed of up to 8192 slabs. The default slab size is 2GB, which limits VDO volumes to a maximum physical size of 16TB, and a maximum supported logical size of 160TB. This means that creating a VDO volume on a physical device that is larger than 16TB fails.
To work around this issue, specify a larger slab size at VDO volume creation.
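For example, a larger slab size can be passed with the --vdoSlabSize option when the volume is created; the device and volume names below are illustrative assumptions:
# vdo create --name=vdo_sdb --device=/dev/sdb --vdoSlabSize=32G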
- BZ#1554241 - Cache volumes must be manually attached to asymmetric brick configurations
When bricks are configured asymmetrically, and a logical cache volume is configured, the cache volume is attached to only one brick. This is because the current implementation of asymmetric brick configuration creates a separate volume group and thin pool for each device, so asymmetric brick configurations would require a cache volume per device. However, this would use a large number of cache devices, and is not currently possible to configure using the Web Console.
To work around this issue, first remove any cache volumes that have been applied to an asymmetric brick set.
# lvconvert --uncache volume_group/logical_cache_volume
Then, follow the instructions in Configuring a logical cache volume to create a logical cache volume manually.
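As a rough sketch of that manual procedure, assuming a fast cache device /dev/sdc (or a partition of it) is added to the brick's volume group, and assuming the volume group, thin pool, and size names shown here (use the exact values from the linked procedure), a cache pool is created with LVM and attached to the thin pool:
# vgextend gluster_vg_sdd /dev/sdc
# lvcreate --type cache-pool --size 100G --name cachepool_sdd gluster_vg_sdd /dev/sdc
# lvconvert --type cache --cachepool gluster_vg_sdd/cachepool_sdd --cachemode writethrough gluster_vg_sdd/gluster_thinpool_gluster_vg_sdd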
- BZ#1590264 - Storage network down after Hosted Engine deployment
During RHHI for Virtualization setup, two separate network interfaces are required to set up Red Hat Gluster Storage. After storage configuration is complete, the hosted engine is deployed and the host is added to Red Hat Virtualization as a managed host. During host deployment, the storage network is altered, and BOOTPROTO=dhcp is removed. This means that the storage network does not have IP addresses assigned automatically, and is not available after the hosted engine is deployed.
To work around this issue, add the line BOOTPROTO=dhcp to the network interface configuration file for your storage network after deployment of the hosted engine is complete.
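For example, assuming the storage network uses the interface eth1 (the interface name and the other lines shown are illustrative), /etc/sysconfig/network-scripts/ifcfg-eth1 would again contain:
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=dhcp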
- BZ#1567908 - Multipath entries for devices are visible after rebooting
The vdsm service makes various configuration changes after Red Hat Virtualization is first installed. One such change makes multipath entries for devices visible on RHHI for Virtualization, including local devices. This causes issues on hosts that are updated or rebooted before the RHHI for Virtualization deployment process is complete.
To work around this issue, add all devices to the multipath blacklist:
- Add the following to /etc/multipath.conf:
blacklist {
    devnode "*"
}
- Reboot all hyperconverged hosts.
- BZ#1609451 - Volume status reported incorrectly after reboot
When a node reboots, including as part of upgrades or updates, subsequent runs of gluster volume status sometimes incorrectly report that bricks are not running, even when the relevant glusterfsd processes are running as expected.
To work around this issue, restart glusterd on the hyperconverged node after the node is upgraded, updated, or rebooted.
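For example, assuming glusterd runs as a systemd service on the hyperconverged host:
# systemctl restart glusterd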
- BZ#1688269 - IPv6 hosts not added to Red Hat Virtualization Manager
When IPv6 addresses are used to deploy Red Hat Hyperconverged Infrastructure for Virtualization, the second and third hyperconverged hosts are not automatically configured to be managed by the hosted engine virtual machine during the deployment process.
To work around this issue, manually add the second and third hosts to Red Hat Virtualization Manager when deployment is complete; see Adding a host.
- BZ#1688239 - Geo-replication session creation fails with IPv6
The gverify.sh script is used during geo-replication to verify that secondary volumes can be mounted before data is synchronized. When IPv6 is used, the script fails, because it does not use the --xlator-option=transport.address-family=inet6 mount option as part of its checks. As a result, geo-replication cannot currently be used with IPv6 addresses.
- BZ#1688243 - IPv6 deployments require additional mount option
Deployments that use IPv6 require an additional mount option to work correctly. Work around this issue by adding xlator-option="transport.address-family=inet6" to the Mount Options field on the Storage tab during Hosted Engine deployment.
- BZ#1690820 - Create volume populates host field with IP address not FQDN
When you create a new volume in the Web Console using the Create Volume button, the value for hosts is populated from the gluster peer list, and the first host is an IP address instead of an FQDN. As part of volume creation, this value is passed to an FQDN validation process, which fails with an IP address.
To work around this issue, edit the generated variable file and manually insert the FQDN instead of the IP address.
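As a purely hypothetical illustration of that edit, assuming the generated variable file lists the hosts under a hc_nodes section, the entry that contains an IP address would be replaced with the host's FQDN:
hc_nodes:
  hosts:
    host1.example.com:      # was an IP address such as 192.0.2.1; use the FQDN instead
    host2.example.com:
    host3.example.com: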
- BZ#1506680 - Disk properties not cleaned correctly on reinstall
The installer cannot clean some kinds of metadata from existing logical volumes. This means that reinstalling a hyperconverged host fails unless the disks have been manually cleared beforehand.
To work around this issue, run the following commands to remove all data and metadata from disks attached to the physical machine.
Warning: Back up any data that you want to keep before running these commands, as these commands completely remove all data and metadata on all disks.
# pvremove /dev/* --force -y
# for disk in $(ls /dev/{sd*,nv*}); do wipefs -a -f $disk; done