Chapter 8. Upgrading to Red Hat Hyperconverged Infrastructure for Virtualization 1.5
Upgrading involves moving from one version of a product to a newer major release of the same product. This section shows you how to upgrade to Red Hat Hyperconverged Infrastructure for Virtualization 1.5 from version 1.1.
From a component standpoint, this involves the following:
- Upgrading the Hosted Engine virtual machine to Red Hat Virtualization Manager version 4.2.
- Upgrading the physical hosts to Red Hat Virtualization 4.2.
8.1. Major changes in version 1.5
Be aware of the following differences between Red Hat Hyperconverged Infrastructure for Virtualization 1.5 and previous versions.
- Deduplication and compression support with Virtual Data Optimizer (VDO)
- Configuring VDO at deployment time reduces the amount of storage space required for data. You can configure Gluster bricks to use deduplication and compression, and monitor and configure notifications for VDO capacity usage so that you know when your storage is running out of space. The space saved by using VDO is displayed on the Brick and Volume detail pages of the Cockpit UI. See Understanding VDO and Monitoring VDO for more information.
- Configure disaster recovery with failover and failback
- Red Hat Hyperconverged Infrastructure for Virtualization now supports backup, failover, and failback to a remote secondary site. See Configuring backup and recovery options for an overview and information on configuring disaster recovery. See Recovering from disaster for the recovery process.
- Scale using the user interface
- New nodes can now be prepared and configured in Cockpit. See Expanding the hyperconverged cluster by adding a new volume on new nodes using Cockpit for details.
- Upgrade using the user interface
- Upgrade your deployment using the Administration Portal. See Upgrading Red Hat Hyperconverged Infrastructure for details.
- Manage your storage and virtual machines in Cockpit
- You can now view and manage your storage and your virtual machines from Cockpit. The Red Hat Virtualization Administration Console is still required for some more complex tasks, such as geo-replication. See Managing Red Hat Gluster Storage using Cockpit for more information.
- Configure different devices and device types
- Previous versions of RHHI for Virtualization expected each virtualization host to be set up the same way, with the same device types and device names on each. As of version 1.5, you can specify different devices and device sizes as appropriate for each host, and size arbiter bricks appropriately.
- Updated user interfaces
- Cockpit and the Administration Portal have seen a number of updates to their user interfaces. Operations are now better organized and easier to find, and a number of new options are available.
- Specify additional hosts during Cockpit setup instead of adding them manually after Hosted Engine deployment.
- Reset brick configuration after reinstalling a virtualization host.
- Deploy on a single node
- Single node deployments of Red Hat Hyperconverged Infrastructure for Virtualization are now supported. See Deploying RHHI for Virtualization in the data center for support limitations and deployment details.
- Convert virtualization hosts
- Red Hat Virtualization hosts can now be converted into hyperconverged hosts. See Converting a virtualization cluster to a hyperconverged cluster for details.
8.2. Upgrade workflow
Red Hat Hyperconverged Infrastructure for Virtualization is a software solution composed of several components. Upgrade the components in the order described in the following sections to minimize disruption to your deployment.
8.3. Preparing to upgrade
8.3.1. Verify brick mount options
If you have configured Virtual Data Optimizer (VDO) for any of the volumes in this deployment, you may be affected by Bug 1649507, which incorrectly edited the mount options of brick devices.
Edit the /etc/fstab file on all hosts to ensure that the following are true:
- Only bricks on VDO volumes have the x-systemd.requires=vdo.service option.
- Bricks on VDO volumes have the _netdev,x-systemd.device-timeout=0 options.
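For illustration only (the device names and mount points below are hypothetical), entries that satisfy both conditions might look like this:

```
# Brick on a VDO volume: depends on vdo.service and uses the network-device options
/dev/mapper/vdo_sdb /gluster_bricks/data xfs defaults,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
# Brick on a non-VDO volume: no vdo.service dependency
/dev/gluster_vg/lv_engine /gluster_bricks/engine xfs defaults 0 0
```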
8.3.2. Update to the latest version of the previous release
Ensure that you are using the latest version (4.1.11) of Red Hat Virtualization Manager 4.1 on the hosted engine virtual machine, and the latest version of Red Hat Virtualization 4.1 on the hosted engine node.
See the Red Hat Virtualization Self-Hosted Engine Guide for the Red Hat Virtualization 4.1 update process.
Do not proceed with the following prerequisites until you have updated to the latest version of Red Hat Virtualization 4.1.
8.3.3. Update subscriptions
You can check which repositories a machine has access to by running the following command as the root user:
# subscription-manager repos --list-enabled
Verify that the Hosted Engine virtual machine is subscribed to the following repositories:
- rhel-7-server-rhv-4.2-manager-rpms
- rhel-7-server-rhv-4-manager-tools-rpms
- rhel-7-server-rpms
- rhel-7-server-supplementary-rpms
- jb-eap-7-for-rhel-7-server-rpms
- rhel-7-server-ansible-2-rpms

Verify that the Hosted Engine virtual machine is not subscribed to previous versions of the above repositories:
- rhel-7-server-rhv-4.2-manager-rpms replaces the rhel-7-server-rhv-4.2-rpms repository
- rhel-7-server-rhv-4-manager-tools-rpms replaces the rhel-7-server-rhv-4-tools-rpms repository

Verify that all virtualization hosts are subscribed to the rhel-7-server-rhvh-4-rpms repository.

Subscribe a machine to a repository by running the following command on that machine:
# subscription-manager repos --enable=<repository>
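The checks above can also be scripted. A minimal sketch (the enabled list below is sample data; on a live machine you would parse the output of subscription-manager repos --list-enabled instead) that reports which required repositories are not yet enabled:

```shell
# Sample data standing in for the output of:
#   subscription-manager repos --list-enabled
enabled="rhel-7-server-rpms
rhel-7-server-rhv-4.2-manager-rpms"

# Repositories required by this guide
required="rhel-7-server-rhv-4.2-manager-rpms
rhel-7-server-rhv-4-manager-tools-rpms
rhel-7-server-rpms
rhel-7-server-supplementary-rpms
jb-eap-7-for-rhel-7-server-rpms
rhel-7-server-ansible-2-rpms"

# Collect every required repository not present in the enabled list
missing=""
for repo in $required; do
    printf '%s\n' "$enabled" | grep -qx "$repo" || missing="$missing $repo"
done
echo "Missing repositories:$missing"
```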
8.3.4. Verify that data is not currently being synchronized using geo-replication
- Click the Tasks tab at the bottom right of the Manager. Ensure that there are no ongoing tasks related to Data Synchronization. If data synchronization tasks are present, wait until they are complete before beginning the update.
- Stop all geo-replication sessions so that synchronization does not occur during the update. Click the Geo-replication subtab, select the session that you want to stop, then click Stop.
Alternatively, run the following command to stop a geo-replication session:
# gluster volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> stop
8.4. Upgrading Red Hat Hyperconverged Infrastructure for Virtualization
8.4.1. Upgrading the Hosted Engine virtual machine
Place the cluster into Global Maintenance mode
- Log in to Cockpit.
- Click Virtualization → Hosted Engine.
- Click Put this cluster into global maintenance.
Upgrade Red Hat Virtualization Manager.
- Log in to the Hosted Engine virtual machine.
Upgrade the setup packages:
# yum update ovirt\*setup\*
Run engine-setup and follow the prompts to upgrade the Manager. This process can take a while and cannot be aborted, so Red Hat recommends running it inside a screen session. See How to use the screen command for more information.
Upgrade all other packages:
# yum update
Reboot the Hosted Engine virtual machine to ensure all updates are applied:
# reboot
Restart the Hosted Engine virtual machine.
- Log in to any virtualization host.
Start the Hosted Engine virtual machine.
# hosted-engine --vm-start
Verify the status of the Hosted Engine virtual machine.
# hosted-engine --vm-status
Remove the cluster from Global Maintenance mode.
- Log in to Cockpit.
- Click Virtualization → Hosted Engine.
- Click Remove this cluster from global maintenance.
8.4.2. Upgrading the virtualization hosts
If you are upgrading a host from Red Hat Virtualization 4.2.7 or 4.2.7-1, ensure that the hosted engine virtual machine is not running on that host during the upgrade process. This is related to a bug introduced in Red Hat Enterprise Linux 7.6, BZ#1641798, which affects these versions of Red Hat Virtualization.
To work around this issue, stop the hosted engine virtual machine before upgrading a host, and start it on another host.
[root@host1] # hosted-engine --vm-shutdown
[root@host2] # hosted-engine --vm-start
Perform the following steps on one virtualization host at a time.
Upgrade the virtualization host.
- In the Manager, click Compute → Hosts and select a node.
- Click Installation → Upgrade.
Click OK to confirm the upgrade.
Wait for the upgrade to complete, and for the host to become available again.
Verify self-healing is complete before upgrading the next host.
- Click the name of the host.
- Click the Bricks tab.
- Verify that the Self-Heal Info column of all bricks is listed as OK before upgrading the next host.
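If you prefer the command line, gluster volume heal <volname> info reports pending heal entries per brick. A minimal sketch that inspects a captured copy of that output (sample data below, since the real command needs a live cluster) for outstanding entries:

```shell
# Sample output in the format produced by: gluster volume heal <volname> info
# On a real host you would capture it with: out=$(gluster volume heal data info)
out='Brick host1:/gluster_bricks/data/data
Status: Connected
Number of entries: 0

Brick host2:/gluster_bricks/data/data
Status: Connected
Number of entries: 0'

# Sum the "Number of entries" counts across all bricks
pending=$(printf '%s\n' "$out" | awk -F': ' '/^Number of entries:/ {sum += $2} END {print sum+0}')
if [ "$pending" -eq 0 ]; then
    echo "self-heal complete"
else
    echo "$pending entries still healing"
fi
```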
Troubleshooting
- If upgrading a virtualization host fails because of a conflict with the rhvm-appliance package, log in to the virtualization host and follow the steps in RHV: RHV-H Upgrade failed before continuing.
8.4.3. Cleaning up after upgrading
You may need to update the virt group profile after you upgrade. This is necessary because of BZ#1643730, also described in the release notes.
Verify that the following parameters are included in the /var/lib/glusterd/groups/virt file on each server, adding them if necessary:
cluster.choose-local=off
client.event-threads=4
server.event-threads=4
performance.client-io-threads=on
If you needed to update the virt group profile, run the following command on each volume to apply the updated profile:
# gluster volume set <volname> group virt
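The check-and-append step above can be sketched as follows; a temporary file stands in for /var/lib/glusterd/groups/virt so the sketch is safe to run anywhere:

```shell
# Temporary stand-in for /var/lib/glusterd/groups/virt with a sample
# starting state that is missing two of the required parameters
virt_file=$(mktemp)
printf 'cluster.choose-local=off\nclient.event-threads=4\n' > "$virt_file"

# Required parameters from this guide; append any that are missing
for param in cluster.choose-local=off \
             client.event-threads=4 \
             server.event-threads=4 \
             performance.client-io-threads=on; do
    grep -qxF "$param" "$virt_file" || echo "$param" >> "$virt_file"
done
cat "$virt_file"
```

On a real host you would point virt_file at /var/lib/glusterd/groups/virt instead of a temporary file.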