1.7 Release Notes

Red Hat Hyperconverged Infrastructure for Virtualization 1.7

Release notes and known issues

Laura Bailey

Abstract

This document outlines known issues in Red Hat Hyperconverged Infrastructure for Virtualization 1.7 at release time, and highlights the major changes since the previous release.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 1. What changed in this release?

1.1. Major changes in version 1.7

Be aware of the following differences between Red Hat Hyperconverged Infrastructure for Virtualization 1.7 and previous versions:

Underlying software updates

The software underlying Red Hat Hyperconverged Infrastructure for Virtualization has been updated to the following:

  • Red Hat Gluster Storage 3.5
  • Red Hat Virtualization 4.3.8

Support for 4 KB block size
Storage domains hosted on Gluster volumes now support disks with a block size of 4 KB.

4 KB block size on VDO devices

Previously, VDO devices emulated a 512 byte block size by default. As of Red Hat Hyperconverged Infrastructure for Virtualization 1.7, newly created VDO devices use a block size of 4 KB.

Important

Bricks in the same Gluster volume must share the same block size. Do not create a Gluster volume with some bricks on older VDO volumes that use a 512 byte block size and other bricks on new VDO volumes that use a 4 KB block size.
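
To check the block size that an existing VDO device presents before placing a brick on it, you can query the device directly. This is an illustrative check; /dev/mapper/vdo_sdb is an example device name.

# blockdev --getss /dev/mapper/vdo_sdb

A result of 4096 indicates a 4 KB block size; 512 indicates the older 512 byte emulation.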

Specify nodes for volume creation during automated deploy
Previously, volumes were created across all nodes during an automated deployment. In deployments larger than three nodes, you can now specify which nodes volumes are created across.

1.2. Technology previews

Technology Preview features are provided with a limited support scope, as detailed on the Customer Portal: Technology Preview Features Support Scope.

The features listed in this section are provided under Technology Preview support limitations.

IPv6 networking support

Technology Preview support is now available for IPv6 addresses in IPv6-only environments (including DNS and gateway addresses). There are several important limitations to IPv6 support during this Technology Preview:

  • Disaster recovery using geo-replication is not supported (BZ#1688239)
  • An additional mount option is required when mounting storage provided by IPv6 hosts: xlator-option=transport.address-family=inet6 (BZ#1688243). See the example mount command after this list.
  • Hosts cannot be automatically added to the RHV Administration Portal (BZ#1688269)
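
For example, manually mounting a Gluster volume provided by an IPv6-only host requires the option shown below. The host name, volume name, and mount point are placeholders.

# mount -t glusterfs -o xlator-option=transport.address-family=inet6 host1.example.com:/data /mnt/data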

Chapter 2. Bug Fixes

This section documents important bugs that affected previous versions of Red Hat Hyperconverged Infrastructure for Virtualization and have been corrected in version 1.7.

BZ#1670722 - Changed device names after reboot break gluster brick mounts
Gluster bricks were previously mounted using the device name. This device name could change after a system reboot, which made the brick unavailable. Bricks are now mounted using the UUID instead of the device name, avoiding this issue.
BZ#1723730 - 512 byte emulation required for VDO
Virtual Data Optimizer (VDO) devices previously required 512 byte emulation because 4 KB block devices were not yet fully supported. Now that 4 KB block device support is available, this emulation is no longer required and has been removed.
BZ#1780290 - Non-operational host with sharding enabled
When sharding was enabled and a file’s path had been unlinked, but its descriptor was still available, attempting to perform a read or write operation against the file resulted in the host becoming non-operational. File paths are now unlinked later, avoiding this issue.

Chapter 3. Known Issues

This section documents unexpected behavior known to affect Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for Virtualization).

BZ#1793398 - Incorrect variables in ansible roles during deployment

When an automated deployment is executed from the command line, RHHI for Virtualization deployment fails because of incorrect variables.

To work around this issue, edit the /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he_gluster_vars.json file as follows (see the excerpt after this list):

  • Remove the he_ansible_host_name: host1 entry.
  • Update the value of the he_mem_size_MB variable to 16384.
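
For illustration, after these edits the file should no longer contain a he_ansible_host_name entry, and the memory entry should read as follows. The value is shown as a quoted string to match the style of the generated file; adjust surrounding punctuation to fit the entry's position in the file.

"he_mem_size_MB": "16384"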
BZ#1795928 - First deployment using ansible roles fails

During automated deployment, a webhook is successfully added to gluster-eventsapi, but the operation incorrectly reports a failure. This causes the deployment to fail.

To work around this issue, follow these steps:

  1. Clean up the deployment attempt:

    # ovirt-hosted-engine-cleanup -q
  2. Set the hostname that corresponds to the front-end FQDN (fully qualified domain name):

    # hostnamectl set-hostname <Front-end-FQDN>
  3. Redeploy the Hosted Engine:

    # ansible-playbook -i /root/gluster_inventory.yml /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/he_deployment.yml --extra-vars='@/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he_gluster_vars.json'
BZ#1796035 - Ansible roles do not configure additional hosts to run the Hosted Engine

During an automated deployment, only the first host is configured to run the Hosted Engine. This reduces the availability of the deployment.

To work around this issue, perform the following steps for each additional host:

  1. Move the additional host to maintenance.
  2. Reinstall the host, ensuring that you set HostedEngine=deploy.
BZ#1754743 - Enabling LV cache along with VDO volumes fails during Deployment

If LV cache is attached to a Virtual Data Optimizer (VDO) enabled thin pool device, the path for the VDO device in the cache configuration must be /dev/mapper/vdo_<device name> instead of /dev/<device name>.

To work around this issue, edit the generated Ansible inventory file.

If VDO is created from sdb and the cache disk is sdc, the configuration should be as follows:

gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: /dev/mapper/vdo_sdb,/dev/sdc
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
          cachethinpoolname: gluster_thinpool_gluster_vg_sdb
          cachelvsize: 1G
          cachemode: writethrough
BZ#1432326 - Associating a network with a host makes network out of sync
When the Gluster network is associated with a Red Hat Gluster Storage node’s network interface, the Gluster network enters an out of sync state. To work around this issue, click the Management tab that corresponds to the node and click Refresh Capabilities.
BZ#1547674 - Default slab size limits VDO volume size

Virtual Data Optimizer (VDO) volumes are composed of up to 8192 slabs. The default slab size is 2 GB, which limits VDO volumes to a maximum physical size of 16 TB and a maximum supported logical size of 160 TB. This means that creating a VDO volume on a physical device larger than 16 TB fails.

To work around this issue, specify a larger slab size at VDO volume creation.
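
For example, to create a VDO volume with a larger slab size on a large device (the device name and slab size here are placeholders; choose values appropriate for your hardware):

# vdo create --name=vdo_sdb --device=/dev/sdb --vdoSlabSize=32G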

BZ#1554241 - Cache volumes must be manually attached to asymmetric brick configurations

When bricks are configured asymmetrically, and a logical cache volume is configured, the cache volume is attached to only one brick. This is because the current implementation of asymmetric brick configuration creates a separate volume group and thin pool for each device, so asymmetric brick configurations would require a cache volume per device. However, this would use a large number of cache devices, and is not currently possible to configure using the Web Console.

To work around this issue, first remove any cache volumes that have been applied to an asymmetric brick set.

# lvconvert --uncache volume_group/logical_cache_volume

Then, follow the instructions in Configuring a logical cache volume to create a logical cache volume manually.

BZ#1590264 - Storage network down after Hosted Engine deployment

During RHHI for Virtualization setup, two separate network interfaces are required to set up Red Hat Gluster Storage. After storage configuration is complete, the hosted engine is deployed and the host is added to Red Hat Virtualization as a managed host. During host deployment, the storage network is altered, and BOOTPROTO=dhcp is removed. This means that the storage network does not have IP addresses assigned automatically, and is not available after the hosted engine is deployed.

To work around this issue, add the line BOOTPROTO=dhcp to the network interface configuration file for your storage network after deployment of the hosted engine is complete.
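
For example, assuming the storage network uses the interface eth1 (a placeholder name) and the configuration file no longer contains any BOOTPROTO line, you could append the setting and restart the interface:

# echo 'BOOTPROTO=dhcp' >> /etc/sysconfig/network-scripts/ifcfg-eth1
# ifdown eth1 && ifup eth1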

BZ#1567908 - Multipath entries for devices are visible after rebooting

The vdsm service makes various configuration changes after Red Hat Virtualization is first installed. One such change makes multipath entries for devices visible on RHHI for Virtualization, including local devices. This causes issues on hosts that are updated or rebooted before the RHHI for Virtualization deployment process is complete.

To work around this issue, add all devices to the multipath blacklist:

  1. Add the following to /etc/multipath.conf:

    blacklist {
      devnode "*"
    }
  2. Reboot all hyperconverged hosts.
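
After the reboot, you can optionally confirm that no multipath devices remain visible. This verification step is not part of the documented workaround:

# multipath -ll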
BZ#1609451 - Volume status reported incorrectly after reboot

When a node reboots, including as part of upgrades or updates, subsequent runs of gluster volume status sometimes incorrectly report that bricks are not running, even when the relevant glusterfsd processes are running as expected.

To work around this issue, restart glusterd on the hyperconverged node after the node is upgraded, updated, or rebooted.
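
For example, on the affected node:

# systemctl restart glusterd
# gluster volume status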

BZ#1688269 - IPv6 hosts not added to Red Hat Virtualization Manager

When IPv6 addresses are used to deploy Red Hat Hyperconverged Infrastructure for Virtualization, the second and third hyperconverged hosts are not automatically configured to be managed by the hosted engine virtual machine during the deployment process.

To work around this issue, manually add the second and third hosts to Red Hat Virtualization Manager when deployment is complete. For instructions, see Adding a host.

BZ#1688239 - Geo-replication session creation fails with IPv6
The gverify.sh script is used during geo-replication to verify that secondary volumes can be mounted before data is synchronized. When IPv6 is used, the script fails, because it does not use the --xlator-option=transport.address-family=inet6 mount option as part of its checks. As a result, geo-replication cannot currently be used with IPv6 addresses.
BZ#1688243 - IPv6 deployments require additional mount option
Deployments that use IPv6 require an additional mount option to work correctly. Work around this issue by adding xlator-option="transport.address-family=inet6" to the Mount Options field on the Storage tab during Hosted Engine deployment.
BZ#1690820 - Create volume populates host field with IP address not FQDN

When you create a new volume in the Web Console using the Create Volume button, the value for hosts is populated from gluster peer list, and the first host is an IP address instead of an FQDN. As part of volume creation, this value is passed to an FQDN validation process, which fails when given an IP address.

To work around this issue, edit the generated variable file and manually insert the FQDN instead of the IP address.

BZ#1506680 - Disk properties not cleaned correctly on reinstall

The installer cannot clean some kinds of metadata from existing logical volumes. This means that reinstalling a hyperconverged host fails unless the disks have been manually cleared beforehand.

To work around this issue, run the following commands to remove all data and metadata from disks attached to the physical machine.

Warning

Back up any data that you want to keep before running these commands, as these commands completely remove all data and metadata on all disks.

# pvremove /dev/* --force -y
# for disk in $(ls /dev/{sd*,nv*}); do wipefs -a -f $disk; done

Legal Notice

Copyright © 2020 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
All other trademarks are the property of their respective owners.