Chapter 2. Bug Fixes

This section documents important bugs that affected previous versions of Red Hat Hyperconverged Infrastructure for Virtualization that have been corrected in version 1.8.

BZ#1688239 - Geo-replication with IPv6 networks
Previously, geo-replication could not be used with IPv6 addresses. With this release, all of the helper scripts used for gluster geo-replication are compatible with IPv6 hostnames (FQDNs).
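For example, a geo-replication session can now be created and started using an FQDN that resolves to an IPv6 address; the volume and host names below are illustrative:
    gluster volume geo-replication mastervol slave.example.com::slavevol create push-pem
    gluster volume geo-replication mastervol slave.example.com::slavevol start
    gluster volume geo-replication mastervol slave.example.com::slavevol status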
BZ#1719140 - Virtual machine availability
The virtualization group options have been updated so that virtual machines remain available when one of the hosts is powered down.
BZ#1792821 - Split-brain after heal
Previously, healing of entries in directories could be triggered when only the heal source (and not the heal target) was available. This led to replication extended attributes being reset and resulted in a GFID split-brain condition when the heal target became available again. Entry healing is now triggered only when all bricks in a replicated set are available, to avoid this issue.
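To confirm that all bricks in a replicated set are available and that no entries are in split-brain, the standard gluster commands can be used; the volume name is illustrative:
    gluster volume status myvol                  # all bricks should report Online: Y
    gluster volume heal myvol info split-brain   # should list no entries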
BZ#1721097 - VDO statistics
Previously, Virtual Disk Optimization (VDO) statistics were not available for VDO volumes. With this release, the Virtual Desktop and Server Manager (VDSM) correctly handles the different outputs of the VDO statistics tool, so VDO statistics are available and no traceback occurs.
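VDO statistics can also be checked directly on a host with the VDO tools; the volume name below is illustrative:
    vdostats --human-readable /dev/mapper/vdo_sdb   # space usage and savings for the VDO volume
    vdo status --name=vdo_sdb                       # detailed configuration and statistics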
BZ#1836164 - Replication of write-heavy workloads
The fsync operation in the replication module now uses eager-lock functionality, which improves the performance of write-heavy workloads with small block sizes (approximately 4 KB) by more than 50 percent on Red Hat Hyperconverged Infrastructure for Virtualization 1.8.
BZ#1821763 - VDO volumes and maxDiscardSize parameter
The virtual disk optimization (VDO) Ansible module now supports the maxDiscardSize parameter and sets it by default. As a result, creating VDO volumes with this parameter no longer fails.
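As an illustration only, a VDO entry in the deployment variables might look like the following; the device, volume name, and value shown are examples rather than required settings:
    gluster_infra_vdo:
      - name: vdo_sdb            # example VDO volume name
        device: /dev/sdb         # example underlying device
        maxDiscardSize: '16M'    # example value; now set by default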
BZ#1808081 - Readcache and readcachesize on VDO volumes
The readcache and readcachesize options for virtual disk optimization (VDO) are not supported on VDO volumes based on Red Hat Enterprise Linux 8, which includes Red Hat Hyperconverged Infrastructure for Virtualization 1.8. These options are now removed so that VDO volume creation succeeds on version 1.8.
BZ#1793398 - Deployment using Ansible
Previously, running the deployment playbook from the command line interface failed because of incorrect values for the he_ansible_host_name and he_mem_size_MB variables. The variable values have been updated and the deployment playbook now runs correctly.
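For reference, these variables are set in the deployment variable file; the values shown here are examples only:
    he_ansible_host_name: host1.example.com   # FQDN of the host running the deployment (example)
    he_mem_size_MB: 16384                     # memory assigned to the Hosted Engine virtual machine, in MB (example)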
BZ#1809413 - Activating glusterd service caused quorum loss
Previously, activating the host from the Administrator Portal restarted the glusterd service which led to quorum loss when the glusterd process ID changed. With this release, the glusterd service does not restart if it is already up and running during the activation of the host, so the glusterd process ID does not change and there is no quorum loss.
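This behaviour can be verified by noting the glusterd process ID before and after activating the host; it should remain unchanged:
    pgrep -x glusterd    # note the PID
    # activate the host from the Administrator Portal, then check again:
    pgrep -x glusterd    # the PID is unchanged, so quorum is preserved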
BZ#1796035 - Additional hosts in Administrator Portal
Previously, additional hosts were not automatically added to the Administrator Portal after deployment. With this release, the gluster Ansible roles have been updated to ensure that any additional hosts are automatically added to the Administrator Portal.
BZ#1774900 - Disconnected host detection
Previously, detection of disconnected hosts took a long time, leading to sanlock timeouts. With this release, the socket and RPC timeouts in gluster have been adjusted so that disconnected hosts are detected before a sanlock timeout occurs, and rebooting a single host does not impact virtual machines running on other hosts.
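The internal socket and RPC timeouts changed by this fix are not user-facing volume options, but the related ping timeout for a volume can be inspected as a quick check; the volume name is illustrative:
    gluster volume get myvol network.ping-timeout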
BZ#1795928 - Erroneous deployment failure message
Previously, when deployment playbooks were run from the command line interface, adding a webhook to gluster-eventsapi succeeded but was reported as a failure, causing the first deployment attempt to fail. This has been corrected and deployment now works correctly.
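The webhooks registered with gluster-eventsapi can be listed to confirm that the engine webhook was added successfully:
    gluster-eventsapi status    # lists registered webhooks and the sync status of each node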
BZ#1715428 - Storage domain creation
Previously, storage domains were automatically created only when additional hosts were specified. The two operations have been separated, since they are logically unrelated, and storage domains are now created regardless of whether additional hosts are specified.
BZ#1733413 - Incorrect volume type displayed
Previously, the web console contained an unnecessary drop-down menu for volume type selection and showed the wrong volume type (replicated) for single node deployments. The menu is removed and the correct volume type (distributed) is now shown.
BZ#1754743 - Cache volume failure on VDO volumes
Previously, configuring volumes that used both virtual disk optimization (VDO) and a cache volume caused deployment in the web console to fail. This occurred because the underlying volume path was specified in the form "/dev/sdx" instead of the form "/dev/mapper/vdo_sdx". VDO volumes are now specified using the correct form and deployment no longer fails.
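For example, a VDO volume created on /dev/sdb is exposed as a device-mapper device, and it is this path that the cache configuration now references; the device names are illustrative:
    ls -l /dev/mapper/vdo_sdb    # the VDO-backed path used as the underlying volume device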
BZ#1432326 - Network out of sync after associating a network with a host
When the Gluster network was associated with a new node’s network interface, the Gluster network entered an out of sync state. This no longer occurs in Red Hat Hyperconverged Infrastructure for Virtualization 1.8.
BZ#1567908 - Multipath entries for devices visible after rebooting
The vdsm service makes various configuration changes after Red Hat Virtualization is first installed. One such change made multipath entries for devices visible in Red Hat Hyperconverged Infrastructure for Virtualization, including local devices. This caused issues on hosts that were updated or rebooted before the Red Hat Hyperconverged Infrastructure for Virtualization deployment process was complete. Red Hat Hyperconverged Infrastructure for Virtualization now provides the option to blacklist multipath devices, which prevents any entries from being used by RHHI for Virtualization.
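When blacklisting is selected, multipath is configured to skip the affected devices. A standard blacklist stanza takes the following form; the file location and WWID placeholder are illustrative:
    # /etc/multipath/conf.d/blacklist.conf (illustrative location)
    blacklist {
        wwid "<WWID of the local device>"
    }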
BZ#1590264 - Storage network down after Hosted Engine deployment
During Red Hat Hyperconverged Infrastructure for Virtualization setup, two separate network interfaces are required to set up Red Hat Gluster Storage. After storage configuration is complete, the hosted engine is deployed and the host is added to the engine as a managed host. Previously, deployment removed the BOOTPROTO=dhcp line from the storage network configuration. This meant that the storage network was not assigned an IP address automatically and was not available after the hosted engine was deployed. This line is no longer removed during deployment, and the storage network is available as expected.
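For example, a DHCP-configured storage interface keeps a line like the following in its ifcfg file; the interface name is illustrative:
    # /etc/sysconfig/network-scripts/ifcfg-eth1 (illustrative interface)
    DEVICE=eth1
    BOOTPROTO=dhcp
    ONBOOT=yes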
BZ#1609451 - Volume status reported incorrectly after reboot
When a node rebooted, including as part of upgrades or updates, subsequent runs of gluster volume status sometimes incorrectly reported that bricks were not running, even when the relevant glusterfsd processes were running as expected. State is now reported correctly in these circumstances.
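After a reboot, the reported brick state can be cross-checked against the running brick processes; the volume name is illustrative:
    gluster volume status myvol    # bricks should report Online: Y
    pgrep -af glusterfsd           # typically one glusterfsd process per local brick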
BZ#1856629 - Warning to use device names in the format /dev/mapper/<WWID> seen with blacklist gluster devices enabled
Previously, when expanding a volume from the web console as part of a day 2 operation using a device name such as /dev/sdx, a warning recommending device names in the format /dev/mapper/<WWID> appeared even when the blacklist gluster devices checkbox was enabled. In version 1.8, Red Hat recommended ignoring this warning and proceeding with the next step of deployment, as the warning was not valid. In version 1.8 Batch Update 1, this issue has been corrected and the spurious warning no longer appears.