6.14. Red Hat Virtualization 4.4 General Availability (ovirt-4.4.1)

6.14.1. Bug Fixes

These bugs were fixed in this release of Red Hat Virtualization:


Previously, if you requested multiple concurrent network changes on a host, some requests were not handled due to a 'reject on busy' service policy. The current release fixes this issue with a new service policy: If resources are not available on the server to handle a request, the host queues the request for a configurable period. If server resources become available within this period, the server handles the request. Otherwise, it rejects the request. There is no guarantee for the order in which queued requests are handled.


While a virtual machine was starting, the Manager machine sent the domain XML with a NUMA configuration CPU list containing only the current CPU IDs. As a result, libvirt/QEMU issued a warning that the NUMA configuration CPU list was incomplete and should contain IDs for all of the virtual CPUs. In this release, the warning no longer appears in the log.


Previously, using ovirt-engine-rename did not handle the OVN provider correctly. This caused bad IP address and hostname configurations, which prevented adding new hosts and other related issues. The current release fixes this issue. Now, ovirt-engine-rename handles ovirt-provider-ovn correctly, resolving the previous issues.


When deploying the self-hosted engine on a host, the Broker and Agent services are brought down momentarily. If the VDSM service attempted to send a get_stats message before the services were restarted, the communication failed and VDSM logged an error message. In this release, such events result in a warning and are no longer logged as errors.


Previously, commands trying to access an unresponsive NFS storage domain remained blocked for 20-30 minutes, which had significant impacts. This was caused by the non-optimal values of the NFS storage timeout and retry parameters. The current release fixes this issue: It changes these parameter values so commands to a non-responsive NFS storage domain fail within one minute.


Previously, importing a virtual machine (VM) from a snapshot that included the memory disk failed if you imported it to a storage domain different from the one where the snapshot was created. This happened because the memory disk depended on the storage domain remaining unchanged. The current release fixes this issue: registration of the VM with its memory disks succeeds, and if a memory disk is not in the RHV Manager database, it is created during registration.


Previously, a custom scheduler policy was used without the HostDevice filter. Consequently, the virtual machine was scheduled on an unsupported host, causing a null pointer exception.

With this update, some filter policy units are now mandatory, including HostDevice. These filter policy units are always active, cannot be disabled, and they are no longer visible in the UI or API.

These filters are mandatory:

  • Compatibility-Version
  • CPU-Level
  • CpuPinning
  • HostDevice
  • PinToHost
  • VM leases ready


Previously, if you lowered the cluster compatibility version, the change did not propagate to the self-hosted engine virtual machine. As a result, the self-hosted engine virtual machine was not compatible with the new cluster version; you could not start or migrate it to another host in the cluster. The current release fixes this issue: The lower cluster compatibility version propagates to the self-hosted engine virtual machine; you can start and migrate it.


Previously, if two or more templates had the same name, selecting any of these templates displayed the same details from only one of the templates. This happened because the Administration Portal identified the selected template using a non-unique template name. The current release fixes this issue by using the template ID, which is unique, instead.


Previously, the VM Portal displayed pool cards inconsistently: after a user took all of the virtual machines from a pool, the VM Portal removed the card for automatic pools but continued displaying it for manual pools. The current release fixes this issue: the VM Portal always displays a pool card, and the card has a new label that shows how many virtual machines the user can take from the pool.


When a system had many FC LUNs with many paths per LUN, and a high I/O load, scanning of FC devices became slow, causing timeouts in monitoring VM disk size, and making VMs non-responsive. In this release, FC scans have been optimized for speed, and VMs are much less likely to become non-responsive.


Previously, Virtual Data Optimizer (VDO) statistics were not available for VDO volumes with an error, so VDO monitoring from VDSM caused a traceback. This update fixes the issue by correctly handling the different outputs from the VDO statistics tool.


Previously, if you decided to redeploy RHV Manager as a hosted engine, running the ovirt-hosted-engine-cleanup command did not clean up the /etc/libvirt/qemu.conf file correctly. Then, the hosted engine redeployment failed to restart the libvirtd service because libvirtd-tls.socket remained active. The current release fixes this issue. You can run the cleanup tool and redeploy the Manager as a hosted engine.


Previously, mixing the Logical Volume Manager (LVM) activation and deactivation commands with other commands caused possible undefined LVM behavior and warnings in the logs. The current release fixes this issue. It runs the LVM activation and deactivation commands separately from other commands. This results in well-defined LVM behavior and clear errors in case of failure.


Previously, if a host failed and the RHV Manager tried to start the high-availability virtual machine (HA VM) before the NFS lease expired, OFD locking caused the HA VM to fail with the error "Failed to get 'write' lock. Is another process using the image?" If the HA VM failed three times in a row, the Manager could not start it again, breaking the HA functionality. The current release fixes this issue: the Manager continues trying to start the VM even after three failures, with the frequency of the attempts decreasing over time. Eventually, once the lock expires, the VM starts.


Previously, after increasing the cluster compatibility version of a cluster with virtual machines that had outstanding configuration changes, those changes were reverted. The current release fixes this issue. It applies both the outstanding configuration changes and the new cluster compatibility version to the virtual machines.


Previously, if a customer specified a disk larger than 49 GB while installing the Hosted Engine, the whole logical volume was allocated to the root (/) filesystem: / automatically grew to fit the whole disk, and the user could not increase the size of /var or /var/log, which are the critical filesystems for the RHV Manager machine.

The current release fixes this issue. Now, the RHV Manager appliance is based on the Logical Volume Manager (LVM). At setup time, its physical volume (PV) and volume group (VG) are automatically extended, but the logical volumes (LVs) are not. As a result, after installation is complete, you can extend all of the LVs in the Manager VM using the free space in the VG.
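For example, the free space in the VG can be assigned to a specific LV with standard LVM commands. This is a sketch only: the volume group and logical volume names below are hypothetical, so check the actual names on your Manager VM with vgs and lvs first.

```shell
# Sketch: extend an LV on the Manager VM using free space in the VG.
# The VG/LV names (ovirt, var_log) are hypothetical; verify yours first.
vgs    # list volume groups and their free space
lvs    # list logical volumes

# Grow the logical volume by 5 GiB and resize its filesystem in one step
lvextend --resizefs --size +5G /dev/ovirt/var_log
```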


Previously, an imported VM always had 'Cloud-Init/Sysprep' turned on. The Manager created a VmInit even when one did not exist in the OVF file of the OVA. The current release fixes this issue: The imported VM only has 'Cloud-Init/Sysprep' turned on if the OVA had it enabled. Otherwise, it is disabled.


In this release, when updating a virtual machine using the REST API, not specifying the console value means that the console state should not change. As a result, the console keeps its previous state.


Previously, changing the template version of a VM pool created from a delete-protected VM made the VM pool non-editable and unusable. The current release fixes this issue: it prevents you from changing the template version of a VM pool whose VMs are delete-protected, and the attempt fails with an error message.


Previously, after upgrading RHV 4.1 to a later version, high-availability virtual machines (HA VMs) failed validation and did not run. To run the VMs, the user had to reset the lease Storage Domain ID. The current release fixes this issue: It removes the validation and regenerates the lease information data when the lease Storage Domain ID is set. After upgrading RHV 4.1, HA VMs with lease Storage Domain IDs run.


Previously, when migrating a paused virtual machine, the Red Hat Virtualization Manager did not always recognize that the migration completed. With this update, the Manager immediately recognizes when migration is complete.


When you use the engine ("Master") to set the high-availability host running the engine virtual machine (VM) to maintenance mode, the ovirt-ha-agent migrates the engine VM to another host. Previously, in specific cases, such as when these VMs had an old compatibility version, this type of migration failed. The current release fixes this problem.


Previously, to get the Cinder Library (cinderlib), you had to install the OpenStack repository. The current release fixes this issue by providing a separate repository for cinderlib.

To enable the repository, enter:

$ dnf config-manager --set-enabled rhel-8-openstack-cinderlib-rpms

To install cinderlib, enter:

$ sudo dnf install python3-cinderlib


Previously, the user interface used the wrong unit of measure for the VM memory size in the VM settings of Hosted Engine deployment via cockpit: It showed MB instead of MiB. The current release fixes this issue: It uses MiB as the unit of measure.


Before this update, you could import a virtual machine from a cluster with a compatibility version lower than the target cluster, and the virtual machine’s cluster version would not automatically update to the new cluster’s compatibility version, causing the virtual machine’s configuration to be invalid. Consequently, you could not run the virtual machine without manually changing its configuration. With this update, the virtual machine’s cluster version automatically updates to the new cluster’s compatibility version. You can import virtual machines from cluster compatibility version 3.6 or newer.


Previously, when you created a virtual machine from a template, the BIOS type defined in the template did not take effect on the new virtual machine. Consequently, the BIOS type on the new virtual machine was incorrect. With this update, this bug is fixed, so the BIOS type on the new virtual machine is correct.


Previously, the console client resources page showed truncated titles for some locales. The current release fixes this issue. It re-arranges the console client resources page layout as part of migrating from Patternfly 3 to Patternfly 4 and fixes the truncated titles.


Previously, the slot parameter was parsed as a string, causing disk rollback to fail during the creation of a virtual machine from a template when using an Ansible script. Note that there was no such failure when using the Administration Portal to create a virtual machine from a template. With this update, the slot parameter is parsed as an int, so disk rollback and virtual machine creation succeed.


When a large disk is converted as part of exporting a VM to OVA, the conversion takes a long time. Previously, the SSH channel used by the export script timed out and closed due to the long period of inactivity, leaving an orphan volume. The current release fixes this issue: the export script now adds some traffic to the SSH channel during disk conversion to prevent it from being closed.


Previously, a virtual machine could crash with the message "qemu-kvm: Failed to lock byte 100" during a live migration with storage problems. The current release fixes this issue in the underlying platform so the issue no longer happens.


after_get_caps is a VDSM hook that periodically checks for a database connection. This hook requires ovs-vswitchd to be running in order to execute properly. Previously, the hook ran even when ovs-vswitchd was disabled, causing an error to be logged to /var/log/messages and eventually flooding it. Now, when the hook starts, it checks whether the OVS service is available and exits if it is not, so the log is no longer flooded with these error messages.


Previously, the self-hosted engine high availability host’s management network was configured during deployment. The VDSM took over the Network Manager and configured the selected network interface during initial deployment, while the Network Manager remained disabled. During restore, there was no option to attach additional (non-default) networks, and the restore process failed because the high-availability host had no connectivity to networks previously configured by the user that were listed in the backup file.

In this release, the user can pause the restore process, manually add the required networks, and resume the restore process to completion.


Previously, the gluster fencing policy check failed due to a non-iterable object and threw an exception. The code also contained a minor typo. The current release fixes these issues.


Previously, when a virtual machine migration entered post-copy mode and remained in that mode for a long time, the migration sometimes failed and the migrated virtual machine was powered off. In this release, post-copy migrations are maintained to completion.


Previously, items with number ten and higher on the BIOS boot menu were not assigned sequential indexes. This made it difficult to select those items. The current release fixes this issue. Now, items ten and higher are assigned letter indexes. Users can select those items by entering the corresponding letter.


Previously, the state of the user session was not saved correctly in the Engine database, causing many unnecessary database updates to be performed. The current release fixes this issue: Now, the user session state is saved correctly on the first update.


Previously, if you updated the Data Center (DC) level, and the DC had a VM with a lower custom compatibility level than the DC’s level, the VM could not resume due to a "not supported custom compatibility version." The current release fixes this issue: It validates the DC before upgrading the DC level. If the validation finds VMs with old custom compatibility levels, it does not upgrade the DC level: Instead, it displays "Cannot update Data Center compatibility version. Please resume/power off the following VMs before updating the Data Center."


Before this update, some architecture-specific dependencies of VDSM were moved to safelease in order to keep VDSM architecture-agnostic. With this update, those dependencies have been returned to VDSM and removed from safelease.


Previously, engine-setup did not provide enough information about configuring ovirt-provider-ovn. The current release fixes this issue by providing more information in the engine-setup prompt and documentation, which helps users understand their choices and follow-up actions.


Previously, moving a disk resulted in the wrong SIZE/CAP key in the volume metadata. This happened because creating a volume that had a parent overwrote the size of the newly-created volume with the parent size. As a result, the volume metadata contained the wrong volume size value. The current release fixes this issue, so the volume metadata contains the correct value.


In some scenarios, the PCI address of a hotplugged SR-IOV vNIC was overwritten by an empty value, and as a result, the NIC name in the virtual machine was changed following a reboot. In this release, the vNIC PCI address is stored in the database and the NIC name persists following a virtual machine reboot.


Previously, when importing a KVM virtual machine into Red Hat Virtualization, "Hardware Clock Time Offset" was not set. As a result, the Manager machine did not recognize the guest agent installed in the virtual machine. In this release, the Manager machine recognizes the guest agent on a virtual machine imported from KVM, and the "Hardware Clock Time Offset" is not null.


Before this update, there was no way to back up and restore the Cinderlib database. With this update, the engine-backup command includes the Cinderlib database.

For example, to back up the engine including the Cinderlib database:

# engine-backup --scope=all --mode=backup --file=cinderlib_from_old_engine --log=log_cinderlib_from_old_engine

To restore this same database:

# engine-backup --mode=restore --file=/root/cinderlib_from_old_engine --log=/root/log_cinderlib_from_old_engine --provision-all-databases --restore-permissions


In a Red Hat Virtualization (RHV) environment with VDSM version 4.3 and Manager version 4.1, the DiskTypes are parsed as int values. However, in an RHV environment with Manager version > 4.1, the DiskTypes are parsed as strings. That compatibility mismatch produced an error: "VDSM error: Invalid parameter: 'DiskType=2'". The current release fixes this issue by changing the string value back to an int, so the operation succeeds with no error.


Previously, converting a storage domain to the V5 format failed when, following an unsuccessful delete volume operation, partly-deleted volumes with cleared metadata remained in the storage domain. The current release fixes this issue. Converting a storage domain succeeds even when partly-deleted volumes with cleared metadata remain in the storage domain.


Previously, some HTML elements in Cluster Upgrade dialog had missing or duplicated IDs, which impaired automated UI testing. The current release fixes this issue. It provides missing IDs and removes duplicates to improve automated UI testing.


Previously, if you changed a virtual machine’s BIOS Type chipset from one of the Q35 options to Cluster default, or vice versa, while USB Policy or USB Support was Enabled, the change did not update the USB controller to the correct setting. The current release fixes this issue: the same actions now update the USB controller correctly.


Previously, if you hot-unplugged a virtual machine interface shortly after booting the virtual machine, the unplugging action failed with an error. This happened because VM monitoring did not report the alias of the interface soon enough, so VDSM could not identify the vNIC to unplug. The current release fixes this issue: if the alias is missing during hot unplug, the Engine generates one on the fly.


Previously, the python3-ovirt-engine-sdk4 package did not include the all_content attribute of the HostNicService and HostNicsService. As a result, this attribute was effectively unavailable to python3-ovirt-engine-sdk4 users. The current release fixes this issue by adding the all_content parameter to the python3-ovirt-engine-sdk4.


Previously, when creating a virtual machine with the French language selected, the Administration Portal did not accept the memory size using the French abbreviation Mo instead of MB. After setting the value with the Mo suffix, the value was reset to zero. With this update, the value is parsed correctly and the value remains as entered.


Previously, if ovirt-ha-broker restarted while the RHV Manager (engine) was querying the status of the self-hosted engine cluster, the query could get stuck. If that happened, the most straightforward workaround was to restart the RHV Manager.

This happened because the RHV Manager periodically checked the status of the self-hosted engine cluster by querying the VDSM daemon on the cluster host. With each query, VDSM checked the status of the ovirt-ha-broker daemon over a Unix Domain Socket. The communication between VDSM and ovirt-ha-broker wasn’t enforcing a timeout. If ovirt-ha-broker was restarting, such as trying to recover from a storage issue, the VDSM request could get lost, causing VDSM and the RHV Manager to wait indefinitely.

The current release fixes this issue. It enforces a timeout in the communication channel between the VDSM and ovirt-ha-broker. If ovirt-ha-broker cannot reply to VDSM within a certain timeout, VDSM reports a self-hosted engine error to the RHV Manager.


Previously, the Manager searched for guest tools only on ISO domains, not data domains. The current release fixes this issue: Now, if the Manager detects a new tool on data domains or ISO domains, it displays a mark for the Windows VM.


Before this update, libvirt did not support launching virtual machines with names ending with a period, even though the Manager did. This prevented launching virtual machines with names ending with a period. With this update, the Administration Portal and the REST API prevent ending the name of a virtual machine with a period, resolving the issue.


Previously, while VDSM was starting, the definition of the network filter vdsm-no-mac-spoofing was removed and recreated to ensure the filter was up to date. This occasionally resulted in a timeout during the start of VDSM. The current release fixes this issue: instead of being removed and recreated, the vdsm-no-mac-spoofing filter is updated during the start of VDSM. This update takes less than a second, regardless of the number of vNICs using the filter.


Previously, during virtual machine shut down, the VDSM command Get Host Statistics occasionally failed with an Internal JSON-RPC error {'reason': '[Errno 19] vnet<x> is not present in the system'}. This failure happened because the shutdown could make an interface disappear while statistics were being gathered. The current release fixes this issue. It prevents such failures from being reported.


Previously, cloud-init could not be used on hosts with FIPS enabled. With this update, cloud-init can be used on hosts with FIPS enabled.


Previously, the About dialog in the VM Portal provided a link to GitHub for reporting issues. However, RHV customers should use the Customer Portal to report issues. The current release fixes this issue. Now, the About dialog provides a link to the Red Hat Customer Portal.


Previously, the RHV Manager reported network out of sync because the Linux kernel applied the default gateway IPv6 router advertisements, and the IPv6 routing table was not configured by RHV. The current release fixes this issue. The IPv6 routing table is configured by RHV. NetworkManager manages the default gateway IPv6 router advertisements.


Previously, during installation of or upgrade to Red Hat Virtualization 4.3, engine-setup failed if the PKI Organization Name in the CA certificate included non-ASCII characters. In this release, engine-setup completes successfully.


Previously, the guest_cur_user_name of the vm_dynamic database table was limited to 255 characters, not enough for more than approximately 100 user names. As a result, when too many users logged in, updating the table failed with an error. The current release fixes this issue by changing the field type from VARCHAR(255) to TEXT.


Previously, enabling port mirroring on networks whose user-visible name was longer than 15 characters failed. This happened because port mirroring tried to use this long user-visible network name, which was not a valid network name. The current release fixes this issue. Now, instead of the user-visible name, port mirroring uses the VDSM network name. Therefore, you can enable port mirroring for networks whose user-visible name is longer than 15 characters.


Previously, the RHV landing page did not support scrolling. With lower screen resolutions, some users could not use the log in menu option for the Administration Portal or VM Portal. The current release fixes this issue by migrating the landing and login pages to PatternFly 4, which displays horizontal and vertical scroll bars when needed. Users can access the entire screen regardless of their screen resolution or zoom setting.


Before this update, previewing a snapshot of a virtual machine, where the snapshot of one or more of the machine’s disks did not exist or had no image with active set to "true", caused a null pointer exception to appear in the logs, and the virtual machine remained locked. With this update, before a snapshot preview occurs, a database query checks for any damaged images in the set of virtual machine images. If the query finds a damaged image, the preview operation is blocked. After you fix the damaged image, the preview operation should work.


Previously, an issue with the Next button on External Provider Imports prevented users from importing virtual machines (VMs) from external providers such as VMware. The current release fixes this issue and users can import virtual machines from external providers.


Previously, exporting a virtual machine (VM) to an Open Virtual Appliance (OVA) file archive failed if the VM was running on the host performing the export operation. The export process created a virtual machine snapshot, and while the image was in use, the RHV Manager could not tear it down. The current release fixes this issue: if the VM is running, the RHV Manager skips tearing down the image, so exporting the OVA of a running VM succeeds.


Previously, if you sent the RHV Manager an API command to attach a non-existing ISO to a VM, it attached an empty CD or left an existing one intact. The current release fixes this issue. Now, the Manager checks if the specified ISO exists, and throws an error if it doesn’t.


Previously, creating a snapshot did not correctly save the Cloud-Init/Sysprep settings for the guest OS. If you tried to clone a virtual machine from the snapshot, it did not have valid values to initialize the guest OS. The current release fixes this issue. Now, creating a snapshot correctly saves the Cloud-Init/Sysprep configuration for the guest OS.


Previously, using LUKS disk encryption alone was problematic: the RHV Manager could reboot a node using Power Management commands, but the node would not complete the reboot because it was waiting for the user to enter a passphrase to decrypt and unlock the disk. This release fixes the issue by adding clevis RPMs to the Red Hat Virtualization Host (RHVH) image. As a result, the Manager can automatically unlock and decrypt an RHVH using TPM or NBDE.
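As a sketch of how such a binding might be configured on an encrypted host using the clevis tool (the device path and Tang server URL below are placeholder values, not taken from this document):

```shell
# Bind an encrypted volume to a Tang server for NBDE unlocking;
# /dev/sda2 and the URL are hypothetical examples
clevis luks bind -d /dev/sda2 tang '{"url": "http://tang.example.com"}'

# Or bind to the local TPM 2.0 chip instead
clevis luks bind -d /dev/sda2 tpm2 '{}'
```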


Previously, upgrading RHV from version 4.2 to 4.3 made the 10-setup-ovirt-provider-ovn.conf file world-readable. The current release fixes this issue, so the file has no unnecessary permissions.


Before this update, selecting templates or virtual machines did not display the proper details when templates or virtual machines with the same name were saved in different Data Centers, because the machine’s name, instead of its GUID, was used to fetch the machine’s details. With this update, the query uses the virtual machine’s GUID, and the correct details are displayed.


Previously, trying to update the IPv6 gateway in the Setup Networks dialog removed it from the network attachment. The current release fixes this issue: You can update the IPv6 gateway if the related network has the default route role.


Before this update, copying disks created by virt-v2v failed with an Invalid Parameter Exception: "Invalid parameter: 'DiskType=1'". With this release, copying these disks succeeds.


The ovirt-host-deploy package uses otopi. Previously, otopi could not handle non-ASCII text in /root/.ssh/authorized_keys and failed with an error: 'ascii' codec can’t decode byte 0xc3 in position 25: ordinal not in range(128). The new release fixes this issue by adding support for Unicode characters to otopi.


Previously, systemd units from failed conversions were not removed from the host. These could cause collisions and prevent subsequent conversions from starting because the service name was already "in use." The current release fixes this issue. If the conversion fails, the units are explicitly removed so they cannot interfere with subsequent conversions.


Previously, the Administration Portal showed very high memory usage for a host with no virtual machines running because it was not counting slab reclaimable memory. As a result, virtual machines could not be migrated to that host. The current release fixes that issue. The free host memory is evaluated correctly.


Previously, when you tried to delete the snapshot of a virtual machine with a LUN disk, RHV parsed its image ID incorrectly and used "mapper" as its value. This issue produced a null pointer error (NPE) and made the deletion fail. The current release fixes this issue, so the image ID parses correctly and the deletion succeeds.


Previously, when you used the VM Portal to configure a virtual machine (VM) to use Windows OS, it failed with the error, "Invalid time zone for given OS type." This happened because the VM’s timezone for Windows OS was not set properly. The current release fixes this issue. If the time zone in the VM template or VM is not compatible with the VM OS, it uses the default time zone. For Windows, this default is "GMT Standard Time". For other OSs, it is "Etc/GMT". Now, you can use the VM Portal to configure a VM to use Windows OS.


Previously, after upgrading RHV from version 4.1 to 4.3, the Graphical Console for the self-hosted engine virtual machine was locked because the default display in version 4.1 was VGA. The current release fixes this issue: while upgrading to version 4.3, it changes the default display to VNC. As a result, the Graphical Console for the self-hosted engine virtual machine can be changed.


With this release, the number of DNS configuration SQL queries that the Red Hat Virtualization Manager runs is significantly reduced, which improves the Manager’s ability to scale.


Previously, on an IPv4-only host with a .local FQDN, the deployment kept looping, searching for an available IPv6 prefix until it failed. This happened because the hosted-engine setup picked a link-local IP address for the host. The hosted-engine setup could not ensure that the Engine and the host were on the same subnet when one of them used a link-local address, and the Engine must not use a link-local address if it is to be reachable through a routed network. The current release fixes this issue: even if the hostname resolves to a link-local IP address, the hosted-engine setup ignores link-local addresses and tries to use another IP address as the address for the host. As a result, the hosted engine can deploy on hosts even if the hostname resolves to a link-local address.


Previously, ExecStopPost was present in the VDSM service file. This meant that, after stopping VDSM, some of its child processes could continue and, in some cases, lead to data corruption. The current release fixes this issue. It removes ExecStopPost from the VDSM service. As a result, terminating VDSM also stops its child processes.


Previously, some migrations failed because of invalid host certificates whose Common Name (CN) contained an IP address, and because using the CN for hostname matching is obsolete. The current release fixes this issue by filling in the Subject Alternative Name (SAN) during host installation, host upgrade, and certificate enrollment. Periodic certificate validation now includes the SAN field and raises an error if it is not filled in.
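One way to see what a SAN-bearing certificate looks like is the following self-contained sketch: it generates a throwaway certificate with a SAN entry and prints that field. On a real host you would run the final openssl x509 command against the installed host certificate instead (its path is not shown here). The -addext and -ext options assume OpenSSL 1.1.1 or later.

```shell
# Self-contained sketch: create a throwaway certificate carrying a SAN
# entry, then print that field the way you would inspect a host cert.
tmp=$(mktemp -d)

openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=host.example.com" \
    -addext "subjectAltName=DNS:host.example.com" \
    -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null

# An empty result here would indicate a CN-only certificate
openssl x509 -in "$tmp/cert.pem" -noout -ext subjectAltName
```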


Previously, while creating virtual machine snapshots, if the VDSM command to freeze a virtual machine’s file systems exceeded the snapshot command’s 3-minute timeout period, creating snapshots failed, causing virtual machines and disks to lock.

The current release adds two key-value pairs to the engine configuration. You can configure these using the engine-config tool:

  • Setting LiveSnapshotPerformFreezeInEngine to true enables the Manager to freeze VMs' file systems before it creates a snapshot of them.
  • Setting LiveSnapshotAllowInconsistent to true enables the Manager to continue creating snapshots if it fails to freeze VMs' file systems.
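For example, the settings above might be applied on the Manager machine as follows. This is a sketch: engine-config changes generally require restarting the ovirt-engine service to take effect.

```shell
# Enable engine-side freezing of VM file systems before snapshots
engine-config -s LiveSnapshotPerformFreezeInEngine=true

# Allow snapshot creation to continue if the freeze fails
engine-config -s LiveSnapshotAllowInconsistent=true

# Restart the engine so the new values take effect
systemctl restart ovirt-engine
```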


Previously, extending a floating QCOW disk did not work because the user interface and REST API ignored the getNewSize parameter. The current release fixes this issue and validates the settings so you can extend a floating QCOW disk.


Previously, in a large environment, the oVirt REST API responded slowly to requests for the cluster list. The slowness was caused by processing a large amount of surplus data from the engine database about out-of-sync hosts in the cluster, data that was ultimately not included in the response. The current release fixes this issue: The query excludes the surplus data, and the API responds quickly.


Previously, the virtual machine (VM) instance type edit and create dialog displayed a vNIC profile editor. This item gave users the impression they could associate a vNIC profile with an instance type, which is not valid. The current release fixes this issue by removing the vNIC profile editor from the instance edit and create dialog.


Previously, VDSM did not send the Host.getStats message: It did not convert the description field of the Host.getStats message to utf-8, which caused the JSON layer to fail. The current release fixes this issue. It converts the description field to utf-8 so that VDSM can send the Host.getStats message.


Previously, issues with aliases for USB, channel, and PCI devices generated WARN and ERROR messages in engine.log when you started virtual machines.

RHV Manager omitted the GUID from the alias of the USB controller device. This information is required later to correlate the alias with the database instance of the USB device. As a result, duplicate devices were being created. Separately, channel and PCI devices whose aliases did not contain GUIDs also threw exceptions and caused warnings.

The current release fixes these issues. It removes code that prevented the USB controller device from sending the correct alias when launching the VM, and adds the GUID to the USB controller devices' aliases within the domain XML. It also filters channel and PCI controllers out of the GUID conversion code to avoid printing exception warnings for these devices.


Previously, for the list of virtual machine templates in the Administration Portal, a paging bug hid every other page, and the templates on those pages, from view. The current release fixes this issue and displays every page of templates correctly.


Before this update, the engine-cleanup command enabled you to do a partial cleanup by prompting you to select which components to remove, even though partial cleanup is not supported. This resulted in a broken system. With this update, the prompt no longer appears and only full cleanup is possible.


Previously, a problem with AMD EPYC CPUs that were missing the virt-ssbd CPU flag prevented Hosted Engine installation. The current release fixes this issue.


Previously, the rename tool did not renew the websocketproxy certificates and did not update the value of WebSocketProxy in the engine configuration. This caused issues such as the VNC browser console not being able to connect to the server. The current release fixes this issue. Now, ovirt-engine-rename handles the websocket proxy correctly. It regenerates the certificate, restarts the service, and updates the value of WebSocketProxy.


Previously, if a virtual machine (VM) was forcibly shut down by SIGTERM, in some cases the VDSM did not handle the libvirt shutdown event that contained information about why the VM was shut down and evaluated it as if the guest had initiated a clean shutdown. The current release fixes this issue: VDSM handles the shutdown event, and the Manager restarts the high-availability VMs as expected.


Previously, if you ran a virtual machine (VM) with an old operating system such as RHEL 6 and the BIOS Type was a Q35 Chipset, it caused a kernel panic. The current release fixes this issue. If a VM has an old operating system and the BIOS Type is a Q35 Chipset, it uses the VirtIO-transitional model for some devices, which enables the VM to run normally.


Previously, because of a UI regression bug in the Administration Portal, you could not add system permissions to a user. For example, clicking Add System Permissions, selecting a Role to assign, and clicking OK did not work. The current release fixes this issue so you can add system permissions to a user.


Previously, when restoring a backup, engine-setup did not restart ovn-northd, so ovn-northd kept running with an outdated ssl/tls configuration. With this update, engine-setup restarts ovn-northd, which reloads the restored ssl/tls configuration.


Previously, trying to mount an ISO domain (File → Change CD) within the Console generated a "Failed to perform 'Change CD' operation" error due to the deprecation of REST API v3. The current release fixes this issue: It upgrades Remote Viewer to use REST API v4 so mounting an ISO domain within the console works.


Previously, if you disabled the virtio-scsi drive and imported a virtual machine that had a direct LUN attached, the import validation failed with a "Cannot import VM. VirtIO-SCSI is disabled for the VM" error. This happened because the validation tried to verify that the virtio-scsi drive was still attached to the VM. The current release fixes this issue: If the Disk Interface Type is not virtio-scsi, the validation does not search for the virtio-scsi drive; the disk uses an alternative driver, and the validation passes.


Previously, when migrating a virtual machine, information about the running guest agent was not always passed to the destination host. In these cases, the migrated virtual machine on the destination host did not receive an after_migration life cycle event notification. This update fixes this issue. The after_migration notification works as expected now.


Before this update, you could enable a raw format disk for incremental backup from the Administration Portal or using the REST API, but because incremental backup does not support raw format disks, the backup failed.

With this update, you can only enable incremental backup for QCOW2 format disks, preventing inclusion of raw format disks.


Before this update, validation succeeded for an incremental backup operation that included raw format disks, even though incremental backup does not support raw format disks. With this update, validation succeeds for a full backup operation for a virtual machine with a raw format disk, but validation fails for an incremental backup operation for a virtual machine with a raw format disk.


The apache-sshd library is no longer bundled in the rhvm-dependencies package. It is now packaged in its own RPM package.


Previously, due to a regression, importing from KVM failed and threw exceptions because of a missing readinto function on the StreamAdapter. The current release fixes this issue so that KVM importing works.


Previously, importing virtual machines failed when the source version variable was null. With this update, validation of the source compatibility version is removed, enabling the import to succeed even when the source version variable is null.


Previously, a VM in a Pool that was set to support High Availability (HA) could not be run. VM Pools are stateless, but a user could nonetheless set a VM in a Pool as supporting HA, and that VM could then not be launched. The current release fixes this issue: It disables the HA checkbox so that the user can no longer set VM Pools to support HA.


Previously, the ovirt-provider-ovn network provider was non-functional on RHV 4.3.9 Hosted-Engine. This happened because, with FDP 20.A (bug 1791388), the OVS/OVN service no longer had the permissions to read the private SSL/TLS key file. The current release fixes this issue: It updates the private SSL/TLS key file permissions. OVS/OVN reads the key file and works as expected.


Previously, if running a virtual machine with its Run Once configuration failed, the RHV Manager would try to run the virtual machine with its standard configuration on a different host. The current release fixes this issue. Now, if Run Once fails, the RHV Manager tries to run the virtual machine with its Run Once configuration on a different host.


Previously, trying to run a VM failed with an unsupported configuration error if its configuration did not specify a NUMA node. This happened because the domain XML was missing its NUMA node section, and VMs require at least one NUMA node to run. The current release fixes this issue: If the user has not specified any NUMA nodes, a NUMA node section is generated for the VM. As a result, a VM whose NUMA nodes were not specified launches regardless of how many offline CPUs are available.


Before this update, a problem in the per-Data-Center loop that collects image information caused incomplete data for analysis for all but the last Data-Center collected. With this update, the information is properly collected for all Data-Centers, resolving the issue.


Previously, using the Administration Portal to import a storage domain omitted custom mount options for NFS storage servers. The current release fixes this issue by including the custom mount options.


Previously, when the Administration Portal was configured to use French language, the user could not create virtual machines. This was caused by French translations that were missing from the user interface. The current release fixes this issue. It provides the missing translations. Users can configure and create virtual machines while the Administration Portal is configured to use the French language.


Previously, if you exported a virtual machine (VM) as an Open Virtual Appliance (OVA) file from a host that was missing a loop device, and imported the OVA elsewhere, the resulting VM had an empty disk (no OS) and could not run. This was caused by a timing and permissions issue related to the missing loop device. The current release fixes the timing and permission issues. As a result, the VM to OVA export includes the guest OS. Now, when you create a VM from the OVA, the VM can run.


Previously, if you tried to start an already-running virtual machine (VM) on the same host, VDSM failed this operation too late and the VM on the host became hidden from the RHV Manager. The current release fixes the issue: VDSM immediately rejects attempts to start a running VM on the same host.


Previously, when initiating the console from the VM Portal to noVNC, the console did not work due to a missing 'path' parameter. In this release, the 'path' parameter is not mandatory, and the noVNC console can initiate even when 'path' is not provided.


Previously when loading a memory snapshot, the RHV Manager did not load existing device IDs. Instead, it created new IDs for each device. The Manager was unable to correlate the IDs with the devices and treated them as though they were unplugged. The current release fixes this issue. Now, the Manager consumes the device IDs and correlates them with the devices.


Previously, if you used the update template script example of the ovirt-engine-sdk to import a virtual machine or template from an OVF configuration, it failed with a null-pointer exception (NPE). This happened because the script example did not supply the Storage Pool Id and Source Storage Domain Id. The current release fixes this issue. Now, the script gets the correct ID values from the image, so importing a template succeeds.


Previously, with RHV Manager running as a self-hosted engine, the user could hotplug memory on the self-hosted engine virtual machine and exceed the physical memory of the host. In that case, restarting the virtual machine failed due to insufficient memory. The current release fixes this issue. It prevents the user from setting the self-hosted engine virtual machine’s memory to exceed the active host’s physical memory. You can only save configurations where the self-hosted engine virtual machine’s memory is less than the active host’s physical memory.


While the RHV Manager is creating a virtual machine (VM) snapshot, it can time out and fail while trying to freeze the file system. If this happens, more than one VM can write data to the same logical volume and corrupt the data on it. In the current release, you can prevent this condition by configuring the Manager to freeze the VM’s guest file systems before creating a snapshot. To enable this behavior, run the engine-config tool and set the LiveSnapshotPerformFreezeInEngine key-value pair to true.


Previously, when redeploying the RHV Manager as a hosted engine after cleanup, the libvirtd service failed to start. This happened because the libvirtd-tls.socket service was active. The current release fixes this issue. Now, when you run the ovirt-hosted-engine-cleanup tool, it stops the libvirtd-tls.socket service. The libvirtd service starts when you redeploy RHV Manager as a hosted engine.


Previously, the 'Host console SSO' feature did not work with Python 3, which is the default Python on RHEL 8. The code was initially written for Python 2 and was not properly modified for Python 3. The current release fixes this issue: The 'Host console SSO' feature works with Python 3.


Previously, if the DNS query test timed out, it did not produce a log message. The current release fixes this issue: If a DNS query times out, it produces a "DNS query failed" message in the broker.log.


In previous versions, engine-backup --mode=verify passed even if pg_restore emitted errors. The current release fixes this issue. The engine-backup --mode=verify command correctly fails if pg_restore emits errors.


Previously, adding a smart card to, or removing one from, a running virtual machine did not work. The current release fixes this issue. When you add or remove a smart card, the change is saved to the virtual machine’s next run configuration. In the Administration Portal, the virtual machine indicates that a next run configuration exists and lists "Smartcard" as a changed field. When you restart the virtual machine, it applies the new configuration.


Previously, retrieving host capabilities failed for specific non-NUMA CPU topologies. The current release fixes this issue and correctly reports the host capabilities for those topologies.


Previously, if creating a live snapshot failed because of a storage error, the RHV Manager would incorrectly report that it had been successful. The current release fixes this issue. Now, if creating a snapshot fails, the Manager correctly shows that it failed.


Previously, the slot parameter was parsed as a string, causing disk rollback to fail during the creation of a virtual machine from a template when using an Ansible script. Note that there was no such failure when using the Administration Portal to create a virtual machine from a template. With this update, the slot parameter is parsed as an int, so disk rollback and virtual machine creation succeed.


Previously, if you backed up RHV Manager running as a self-hosted engine in RHV version 4.3, restoring it in RHV version 4.4 failed with particular CPU configurations. The current release fixes this issue. Now, restoring the RHV Manager with those CPU configurations succeeds.


Previously, in the beta version of RHV 4.4, after adding a host to a cluster with compatibility version 4.2, editing the cluster reset its BIOS Type from the previous automatically detected value to Cluster default. As a result, virtual machines could not run because a Chip Set does not exist for Cluster Default. The current release fixes this issue. It preserves the original value of BIOS Type and prevents it from being modified when you edit the cluster. As a result, you can create and run virtual machines normally after editing cluster properties.


Previously, creating a live snapshot with memory while LiveSnapshotPerformFreezeInEngine was set to true resulted in a virtual machine file system that remained frozen when previewing or committing the snapshot with memory restore. In this release, the virtual machine runs successfully after creating a preview snapshot from a memory snapshot.


Previously, running ovirt-engine-rename generated errors and failed because Python 3 renamed urlparse to urllib.parse. The current release fixes this issue. Now, ovirt-engine-rename uses urllib.parse and runs successfully.
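
For context, the failure stems from Python 3 moving the urlparse module into the urllib package. A minimal compatibility shim illustrating the rename (an example, not the actual ovirt-engine-rename code):

```python
try:
    # Python 3: the module was renamed to urllib.parse
    from urllib.parse import urlparse
except ImportError:
    # Python 2 fallback: the old top-level module name
    from urlparse import urlparse

# Parse a Manager URL into its components (hostname, port, path)
parts = urlparse("https://manager.example.com:443/ovirt-engine")
print(parts.hostname, parts.port, parts.path)
# manager.example.com 443 /ovirt-engine
```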


Previously, when sending metrics and logs to an Elasticsearch instance that was not on OCP, you could not set usehttps to false while also not using Elasticsearch certificates (use_omelasticsearch_cert: false). As a result, you could not send data to Elasticsearch without https. The current release fixes this issue. Now, you can set the usehttps variable as expected and send data to Elasticsearch without https.


Before this release, local storage pools were created but were not deleted during Self-Hosted Engine deployment, causing storage pool leftovers to remain. In this release, the cleanup is performed properly following Self-Hosted Engine deployment, and there are no storage pool leftovers.


Previously, exporting a virtual machine or template to an OVA file incorrectly set its format in the OVF metadata file to "RAW", which caused problems using the OVA file. The current release fixes this issue. Exporting to OVA sets the format in the OVF metadata file to "COW", which represents the disk’s actual format, qcow2.


When you change the cluster compatibility version, it can also update the compatibility version of the virtual machines. If the update fails, it rolls back the changes. Previously, chipsets and emulated machines were not part of the cluster update. The current release fixes this issue. Now, chipsets and emulated machines are also updated when you update the cluster compatibility version.


Previously, if the block path was unavailable for a storage block device on a host, the RHV Manager could not process host devices from that host. The current release fixes this issue. The Manager can process host devices even though a block path is missing.


Previously, the `hosted-engine --set-shared-config storage` command failed to update the hosted engine storage. With this update, the command works.


Old virtual machines that have not been restarted since user aliases were introduced in RHV version 4.2 use old device aliases created by libvirt. The current release adds support for those old device aliases and links them to the new user-aliases to prevent correlation issues and devices being unplugged.

6.14.2. Enhancements

This release of Red Hat Virtualization features the following enhancements:


The REST API in the current release adds the following updatable disk properties for floating disks:

  • For Image disks: provisioned_size, alias, description, wipe_after_delete, shareable, backup, and disk_profile.
  • For LUN disks: alias, description and shareable.
  • For Cinder and Managed Block disks: provisioned_size, alias, and description.

See Services.
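
For example, a floating image disk's alias and description might be updated with a PUT request along these lines (a sketch; {disk_id} is a placeholder for the disk's ID):

```
PUT /ovirt-engine/api/disks/{disk_id}
Content-Type: application/xml

<disk>
  <alias>db_data</alias>
  <description>Database data disk</description>
</disk>
```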


In this release, it is now possible to edit the properties of a Floating Disk in the Storage > Disks tab of the Administration Portal. For example, the user can edit the Description, Alias, and Size of the disk.


With this enhancement, oVirt uses NetworkManager and NetworkManager Stateful Configuration (nmstate) to configure host networking. The previous implementation used network-scripts, which are deprecated in CentOS 8. This usage of NetworkManager helps to share code with software components. As a result, oVirt integrates better with RHEL-based software. Now, for example, the Cockpit web interface can see the host networking configuration, and oVirt can read the network configuration created by the Anaconda installer.


The VDSM’s ssl_protocol, ssl_excludes, and ssl_ciphers config options have been removed. For details, see: Consistent security by crypto policies in Red Hat Enterprise Linux 8.

To fine-tune your crypto settings, change or create your crypto policy. For example, for your hosts to communicate with legacy systems that still use insecure TLSv1 or TLSv1.1, change your crypto policy to LEGACY with:

# update-crypto-policies --set LEGACY


The floppy device has been replaced by a CDROM device for sysprep installation of Compatibility Versions 4.4 and later.


After a high-availability virtual machine (HA VM) crashes, the RHV Manager tries to restart it indefinitely: at first with a short delay between restarts, and after a specified number of failed retries, with a longer delay.

Also, the Manager starts crashed HA VMs in order of priority, delaying lower-priority VMs until higher-priority VMs are 'Up.'

The current release adds new configuration options:

  • RetryToRunAutoStartVmShortIntervalInSeconds, the short delay, in seconds. The default value is 30.
  • RetryToRunAutoStartVmLongIntervalInSeconds, the long delay, in seconds. The default value is 1800, which equals 30 minutes.
  • NumOfTriesToRunFailedAutoStartVmInShortIntervals, the number of restart tries with short delays before switching to long delays. The default value is 10 tries.
  • MaxTimeAutoStartBlockedOnPriority, the maximum time, in minutes, before starting a lower-priority VM. The default value is 10 minutes.
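
As with other engine options, these can be adjusted with the engine-config tool; for example, a sketch that doubles the number of short-interval retries (an ovirt-engine restart is required for the change to take effect):

```shell
engine-config -s NumOfTriesToRunFailedAutoStartVmInShortIntervals=20
systemctl restart ovirt-engine
```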


Network operations that span multiple hosts may take a long time. This enhancement shows you when these operations finish: It records start and end events in the Events Tab of the Administration Portal and engine.log. If you use the Administration Portal to trigger the network operation, the portal also displays a pop-up notification when the operation is complete.


In the default virtual machine template, the current release changes the default setting for "VM Type" to "server." Previously, it was "desktop."


With this update, you can connect to a Gluster storage network over IPv6, without the need for IPv4.


The current release adds the ability for you to select affinity groups while creating or editing a virtual machine (VM) or host. Previously, you could only add a VM or host by editing an affinity group.


With this update, you can set the reason for shutting down or powering off a virtual machine when using a REST API request to execute the shutdown or power-off.
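
For illustration, the reason might be supplied in the body of the shutdown action (a hedged sketch of the request shape; {vm_id} is a placeholder):

```
POST /ovirt-engine/api/vms/{vm_id}/shutdown
Content-Type: application/xml

<action>
  <reason>Scheduled maintenance window</reason>
</action>
```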


In this release, the default "Optimized for" optimization type for bundled templates is now set to "Server".


Previously, when creating/managing an iSCSI storage domain, there was no indication that the operation may take a long time. In this release, the following message has been added: “Loading…​ A large number of LUNs may slow down the operation.”


With this update, unmanaged networks are viewable by the user on the host NICs page at a glance. Each NIC indicates whether one of its networks is unmanaged by oVirt engine. Previously, to view this indication, the user had to open the setup dialog, which was cumbersome.


With this update, when viewing clusters, you can sort by the Cluster CPU Type and Compatibility Version columns.


The current release adds a new capability: In the "Edit Template" window, you can use the "Sealed" checkbox to indicate whether a template is sealed. The Compute > Templates window has a new "Sealed" column, which displays this information.


With this update, you can check the list of hosts that are not configured for metrics, that is, those hosts on which the Collectd and Rsyslog/Fluentd services are not running.

First, run the playbook 'manage-ovirt-metrics-services.yml' by entering:

# /usr/share/ovirt-engine-metrics/configure_ovirt_machines_for_metrics.sh --playbook=manage-ovirt-metrics-services.yml

Then, check the file /etc/ovirt-engine-metrics/hosts_not_configured_for_metrics.


The current release displays a new warning when you use 'localhost' as an FQDN: "[WARNING] Using the name 'localhost' is not recommended, and may cause problems later on."


This release adds a progress bar for the disk synchronization stage of Live Storage Migration.


This enhancement adds support for OVMF with SecureBoot, which enables UEFI support for Virtual Machines.


The current release adds the VM’s current state and uptime to the Compute > Virtual Machine: General tab.


Previously, it was problematic to put a host into maintenance mode while it was flipping between the connecting and activating states. In this release, after a host is restarted using its power management configuration, it is put into maintenance mode regardless of its state before the restart.


All new clusters with x86 architecture and compatibility version 4.4 or higher now set the BIOS Type to the Q35 Chipset by default, instead of the i440FX chipset.


When creating a new MAC address pool, its ranges must not overlap with each other or with any ranges in existing MAC address pools.


When a host is running in FIPS mode, VNC must use SASL authorization instead of regular passwords because of a weak algorithm inherent to the VNC protocol. The current release facilitates using SASL by providing an Ansible role, ovirt-host-setup-vnc-sasl, which you can run manually on FIPS-enabled hosts. This role does the following:

  • Creates an empty SASL password database.
  • Prepares the SASL config file for qemu.
  • Changes the libvirt config file for qemu.
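
A minimal playbook that applies the role might look like the following sketch (the fips_hosts inventory group is hypothetical):

```
- hosts: fips_hosts
  become: true
  roles:
    - ovirt-host-setup-vnc-sasl
```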


Previously, when High Availability was selected for a new virtual machine, the user had to set the Lease Storage Domain manually; it was not selected automatically. In this release, a bootable Storage Domain is set automatically as the lease Storage Domain for new High Availability virtual machines.


Previously, if you tried to deploy hosted-engine over a teaming device, it would try to proceed and then fail with an error. The current release fixes this issue. It filters out teaming devices. If only teaming devices are available, it rejects the deployment with a clear error message that describes the issue.


With this enhancement, while using cockpit or engine-setup to deploy RHV Manager as a Self-Hosted Engine, the options for specifying the NFS version include two additional versions, 4.0 and 4.2.


Previously, multipath repeatedly logged irrelevant errors for local devices. In this release, local devices are blacklisted and irrelevant errors are no longer logged.


With this update, the API reports extents information for sparse disks: which extents contain data, read as zeros, or are unallocated (holes). This enhancement enables clients to use the imageio REST API to optimize image transfers and minimize storage requirements by skipping zero and unallocated extents.
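
For illustration, a client can decide which extents to transfer based on the JSON shape of such an extents report (start, length, zero, and hole fields); the sample response below is made up:

```python
# Sample extents report; in practice this would come from the imageio
# REST API's extents endpoint for an image transfer.
extents = [
    {"start": 0, "length": 65536, "zero": False, "hole": False},
    {"start": 65536, "length": 1048576, "zero": True, "hole": False},
    {"start": 1114112, "length": 65536, "zero": False, "hole": True},
]

def data_extents(extents):
    """Yield (start, length) only for extents that hold real data;
    zero and unallocated (hole) extents can be skipped."""
    for ext in extents:
        if not ext["zero"] and not ext["hole"]:
            yield ext["start"], ext["length"]

# Only the first extent needs to be transferred
print(list(data_extents(extents)))
```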


Before this update, you could enable FIPS on a host. But because the engine was not aware of FIPS, it did not use the appropriate options with qemu when starting virtual machines, so the virtual machines were not fully operable.

With this update, you can enable FIPS for a host in the Administration Portal, and the engine uses qemu with FIPS-compatible arguments.

To enable FIPS for a host, in the Edit Host window, select the Kernel tab and check the FIPS mode checkbox.


Previously, if there were hundreds of Fibre Channel LUNs, the Administration Portal dialog box for adding or managing storage domains took too long to render and could become unresponsive. This enhancement improves performance: It displays a portion of the LUNs in a table and provides right and left arrows that users can click to see the next or previous set of LUNs. As a result, the window renders normally and remains responsive regardless of how many LUNs are present.


With this update, you can start the self-hosted engine virtual machine in a paused state. To do so, enter the following command:

# hosted-engine --vm-start-paused

To un-pause the virtual machine, enter the following command:

# hosted-engine --vm-start


This update adds support for Hyper-V enlightenment for Windows virtual machines on hosts running RHEL 8.2 with the cluster compatibility level set to 4.4. Specifically, Windows virtual machines now support the following Hyper-V functionality:

  • reset
  • vpindex
  • runtime
  • frequencies
  • reenlightenment
  • tlbflush


The current release adds a new feature: On the VM list page, the tooltip for the VM type icon shows a list of the fields you have changed between the current and the next run of the virtual machine.


The current release enables you to migrate a group of virtual machines (VMs) that are in positive enforcing affinity with each other.


With this enhancement, if a network name contains spaces or is longer than 15 characters, the Administration Portal notifies you that the RHV Manager will rename the network using the host network’s UUID as a basis for the new name.


Suppose a host has a pair of bonded NICs using Active-Backup (Mode 1). Previously, the user had to click Refresh Capabilities to get the current status of this bonded pair. In the current release, if the active NIC changes, the state of the bond refreshes in the Administration Portal and REST API. You do not need to click Refresh Capabilities.


This update adds support for the following virtual CPU models:

  • Intel Cascade Lake Server
  • Intel Ivy Bridge


This enhancement moves the pop-up ("toast") notifications from the upper right corner to the lower right corner, so they no longer cover the action buttons. Now, the notifications rise from the bottom right corner to within 400 px of the top.


This update adds an audit log warning on an out-of-range IPv4 gateway static configuration for a host NIC. The validity of the gateway is assessed against the configured IP address and netmask. This gives users better feedback and helps them notice incorrect configurations.


This release adds a new 'status' column to the affinity group table that shows whether all of an affinity group’s rules are satisfied (status = ok) or not (status = broken). The "Enforcing" option does not affect this status.


Previously, RHV Manager created live virtual machine snapshots synchronously. If creating the snapshot exceeded the timeout period (default 180 seconds), the operation failed. These failures tended to happen with virtual machines that had large memory loads or clusters that had slow storage speeds.

With this enhancement, the live snapshot operation is asynchronous and runs until it is complete, regardless of how long it takes.


With this update, a new configuration variable, AAA_JAAS_ENABLE_DEBUG, has been added to enable Kerberos/GSSAPI debug on AAA. The default value is false.

To enable debugging, create a new configuration file named /etc/ovirt-engine/engine.conf.d/99-kerberos-debug.conf with the following content, then restart the ovirt-engine service:

AAA_JAAS_ENABLE_DEBUG=true

Red Hat Virtualization Manager virtual machines now support ignition configuration, and this feature can be used via the UI or API by any guest OS that supports it, for example, RHCOS or FCOS.


With this update, each host’s boot partition is explicitly stated in the kernel boot parameters. For example: boot=/dev/sda1 or boot=UUID=<id>


Previously, while cloning a virtual machine, you could only edit the name of the virtual machine in the Clone Virtual Machine window. With this enhancement, you can fully customize any of the virtual machine settings in the Clone Virtual Machine window. This means, for example, that you can clone a virtual machine into a different storage domain.


Previously, if a Certificate Authority ca.pem file was not present, the engine-setup tool automatically regenerated all PKI files, requiring you to reinstall or re-enroll certificates for all hosts.

Now, if ca.pem is not present but other PKI files are, engine-setup prompts you to restore ca.pem from backup without regenerating all PKI files. If a backup is present and you select this option, then you no longer need to reinstall or re-enroll certificates for all hosts.


This enhancement adds support for DMTF Redfish to RHV. To use this functionality, use the Administration Portal to edit a Host’s properties. On the Host’s Power Management tab, click + to add a new power management device. In the Edit fence agent window, set Type to redfish and fill in additional details such as login information and the IP/FQDN of the agent.


This enhancement enables you to use the RHV Manager’s REST API to manage subscriptions and receive notifications based on specific events. In previous versions, you could do this only in the Administration Portal.


With this enhancement, an EVENT_ID is logged when a virtual machine’s guest operating system reboots. External systems such as Cloudforms and Manage IQ rely on the EVENT_ID log messages to keep track of the virtual machine’s state.


With this update, when you upgrade RHV, engine-setup notifies you if virtual machines in the environment have snapshots whose cluster levels are incompatible with the RHV version you are upgrading to. It is safe to let engine-setup proceed, but it is not safe to use these snapshots after the upgrade. For example, it is not safe to preview these snapshots.

There is one exception: engine-setup does not notify you if the virtual machine runs the Manager as a self-hosted engine. In that case, it answers "Yes" automatically and upgrades the virtual machine without prompting or notifying you. It is unsafe to use snapshots of the self-hosted engine virtual machine after the upgrade.


With this enhancement, on the "System" tab of the "New Virtual Machine" and "Edit Virtual Machine" windows, the "Serial Number Policy" displays the value of the "Cluster default" setting. If you are adding or editing a VM and are deciding whether to override the cluster-level serial number policy, seeing that information here is convenient. Previously, to see the cluster’s default serial number policy, you had to close the VM window and navigate to the Cluster window.


This enhancement enables you to attach a SCSI host device, scsi_hostdev, to a virtual machine and specify the optimal driver for the type of SCSI device:

  • scsi_generic: (Default) Enables the guest operating system to access OS-supported SCSI host devices attached to the host. Use this driver for SCSI media changers that require raw access, such as tape or CD changers.
  • scsi_block: Similar to scsi_generic, but with better speed and reliability. Use this driver for SCSI disk devices. If you want trim or discard support for the underlying device, and it is a hard disk, use this driver.
  • scsi_hd: Provides high performance with lower overhead. Supports large numbers of devices. Uses the standard SCSI device naming scheme. Can be used with aio-native. Use this driver for high-performance SSDs.
  • virtio_blk_pci: Provides the highest performance without the SCSI overhead. Supports identifying devices by their serial numbers.


The qemu-guest-agent package for OpenSUSE guests has been updated to the qemu-guest-agent-3.1.0-lp151.6.1 build.


With this update, you can select Red Hat CoreOS (RHCOS) as the operating system for a virtual machine. When you do so, the initialization type is set to ignition. RHCOS uses ignition to initialize the virtual machine, differentiating it from RHEL.


Previously, with every security update, a new CPU type was created in the vdc_options table under the key ServerCPUList in the database for all affected architectures. For example, the Intel Skylake Client Family included the following CPU types:

  • Intel Skylake Client Family
  • Intel Skylake Client IBRS Family
  • Intel Skylake Client IBRS SSBD Family
  • Intel Skylake Client IBRS SSBD MDS Family

With this update, only two CPU Types are now supported for any CPU microarchitecture that has security updates, keeping the CPU list manageable. For example:

  • Intel Skylake Client Family
  • Secure Intel Skylake Client Family

The default CPU type will not change. The Secure CPU type will contain the latest updates.


This update modernizes the ovirt-engine software stack to build and run with java-11-openjdk. OpenJDK 11 is the new LTS version from Red Hat.


To transfer virtual machines between data centers, use data storage domains, because export domains are deprecated. However, moving a data storage domain to a data center that has a higher compatibility level (DC level) can upgrade its storage format version, for example, from V3 to V5. The higher format version can prevent you from reattaching the data storage domain to the original data center and transferring additional virtual machines.

In the current release, if you encounter this situation, the Administration Portal asks you to confirm that you want to update the storage domain format, for example, from 'V3' to 'V5'. It also warns that you will not be able to attach it back to an older data center with a lower DC level.

To work around this issue, you can create a destination data center that has the same compatibility level as the source data center. When you finish transferring the virtual machines, you can increase the DC level.


With this update, you can remove an unregistered entity, such as a virtual machine, a template, or a disk, without importing it into the environment.


The current release rebuilds the ovirt-engine-extension-logger-log4j package with OpenJDK version 11 instead of version 8, aligning it with the oVirt engine.


With this update, you can enable encryption for live migration of virtual machines between hosts in the same cluster. This provides more protection to data transferred between hosts. You can enable or disable encryption in the Administration Portal, in the Edit Cluster dialog box, under Migration Policy > Additional Properties. Encryption is disabled by default.


The current release adds a configuration option, VdsmUseNmstate, which you can use to enable nmstate on every new host with cluster compatibility level >= 4.4.


When you import a VM from an older compatibility version, its configuration has to be updated to be compatible with the current cluster compatibility version. This enhancement adds a warning to the audit log that lists the updated parameters.


The current release adds support for running virtual machines on hosts that have an Intel Snow Ridge CPU. There are two ways to enable this capability:

  • Enable a virtual machine’s Pass-Through Host CPU setting and set Start Running On to Specific Host(s) that have a Snow Ridge CPU.
  • Set cpuflags in the virtual machine’s custom properties to +gfni,+cldemote.


In this release, you can edit the properties of a floating virtual disk in the Storage > Disks tab of the Administration Portal. For example, you can edit the Description, Alias, and Size of the disk. You can also update floating virtual disk properties by using a PUT request through the REST API, as described in the Red Hat Virtualization REST API Guide.
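Such a REST update can be sketched as follows. This is a minimal illustration that builds (but does not send) the request; the Manager URL, disk ID, and property values are hypothetical, and authentication headers are omitted:

```python
from urllib import request

# Hypothetical values; substitute your Manager URL and disk ID.
ENGINE_API = "https://engine.example.com/ovirt-engine/api"
DISK_ID = "aaaa1111-bbbb-2222-cccc-333344445555"

# XML body updating the disk's alias and description.
body = b"<disk><alias>db-data</alias><description>database volume</description></disk>"

# Build the PUT request against the disk's URL.
req = request.Request(
    f"{ENGINE_API}/disks/{DISK_ID}",
    data=body,
    method="PUT",
    headers={"Content-Type": "application/xml"},
)
print(req.get_method(), req.full_url)
```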


The current release adds a new Insights section to the RHV welcome or landing page. This section contains two links:

  • "Insights Guide" links to the "Deploying Insights in Red Hat Virtualization Manager" topic in the Administration Guide.
  • "Insights Dashboard" links to the Red Hat Insights Dashboard on the Customer Portal.


With this update, the default action in the VM Portal’s dashboard for a running virtual machine is to open a console. Before this update, the default action was "Suspend".

Specifically, the default operation for a running VM is set to "SPICE Console" if the virtual machine supports SPICE, or "VNC Console" if the virtual machine only supports VNC.

For a virtual machine running in headless mode, the default action is still "Suspend".


This update provides packages required to run oVirt Node and oVirt CentOS Linux hosts based on CentOS Linux 8.


When you remove a host from the RHV Manager, it can create duplicate entries for a host-unreachable event in the RHV Manager database. Later, if you add the host back to the RHV Manager, these entries can cause networking issues. With this enhancement, when this type of event happens, the RHV Manager displays a message in the Events tab and records it in the log. The message notifies users of the issue and explains how to avoid networking issues if they add the host back to the RHV Manager.


The current release moves the Remove button for a virtual machine to the "more" menu (three dots in the upper-right area). This improves usability: too many users pressed the Remove button, mistakenly believing it would remove a selected item in the details view, such as a snapshot, and did not realize it would delete the virtual machine. The new location helps users avoid this kind of mistake.


In this release, Ansible Runner is installed by default and allows running Ansible playbooks directly in the Red Hat Virtualization Manager.


In this release, modifying a MAC address pool, or modifying the range of a MAC address pool, in a way that overlaps with existing MAC address pool ranges is strictly forbidden.


With this enhancement, when you add a host to a cluster, it has the advanced virtualization channel enabled, so the host uses the latest supported libvirt and qemu packages.


With this enhancement, the Administration Portal enables you to copy a host network configuration from one host to another by clicking a button. Copying network configurations this way is faster and easier than configuring each host separately.


In RHV 4.4, NetworkManager manages the host interfaces and static routes. As a result, you can make more robust modifications to static routes using Network Manager Stateful Configuration (nmstate).


This release adds Grafana as a user interface and visualization tool for monitoring the Data Warehouse. You can install and configure Grafana during engine-setup. Grafana includes pre-built dashboards that present data from the ovirt_engine_history PostgreSQL data warehouse database.


The current release updates the Documentation section of the RHV welcome or landing page. This makes it easier to access the current documentation and facilitates access to translated documentation in the future.

  • The links now point to the online documentation on the Red Hat customer portal.
  • The "Introduction to the Administration Portal" guide and "REST API v3 Guide" are now obsolete and have been removed.
  • The rhvm-doc package is obsolete and has been removed.


Previously, a live snapshot of a virtual machine could take an indefinite amount of time, locking the virtual machine. With this release, you can limit how long an asynchronous live snapshot can take by using the command engine-config -s LiveSnapshotTimeoutInMinutes=<time>, where <time> is a value in minutes. After the set time passes, the snapshot is aborted, releasing the lock and enabling you to use the virtual machine. The default value of <time> is 30.


The apache-sshd library is no longer bundled in the rhvm-dependencies package. It is now packaged in its own RPM package.


apache-commons-collections4 has been packaged for Red Hat Virtualization Manager consumption. The package is an extension of the Java Collections Framework.


Previously, the Windows guest tools were delivered as virtual floppy disk (.vfd) files.

With this release, the virtual floppy disk is removed and the Windows guest tools are included as a virtual CD-ROM. To install the Windows guest tools, check the Attach Windows guest tools CD check box when installing a Windows virtual machine.


The current release changes the Huge Pages label to Free Huge Pages so it is easier to understand what the values represent.


This enhancement enables you to remove incremental backup root checkpoints.

Backing up a virtual machine (VM) creates a checkpoint in libvirt and the RHV Manager’s database. In large scale environments, these backups can produce a high number of checkpoints. When you restart virtual machines, the Manager redefines their checkpoints on the host; if there are many checkpoints, this operation can degrade performance. The checkpoints' XML descriptions also consume a lot of storage.

This enhancement provides the following operations:

  • View all the VM checkpoints using the new checkpoints service under the VM service - GET path-to-engine/api/vms/vm-uuid/checkpoints
  • View a specific checkpoint - GET path-to-engine/api/vms/vm-uuid/checkpoints/checkpoint-uuid
  • Remove the oldest (root) checkpoint from the chain - DELETE path-to-engine/api/vms/vm-uuid/checkpoints/checkpoint-uuid
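The endpoints above can be sketched as HTTP requests. The following minimal Python illustration builds (but does not send) the three requests; the Manager URL and IDs are hypothetical, and authentication headers are omitted:

```python
from urllib import request

# Hypothetical Manager URL and IDs; substitute your own values.
ENGINE_API = "https://engine.example.com/ovirt-engine/api"
VM_ID = "123e4567-e89b-12d3-a456-426614174000"
CHECKPOINT_ID = "9f8e7d6c-1111-2222-3333-444455556666"

# List all checkpoints of a VM.
list_req = request.Request(f"{ENGINE_API}/vms/{VM_ID}/checkpoints", method="GET")

# View a specific checkpoint.
show_req = request.Request(
    f"{ENGINE_API}/vms/{VM_ID}/checkpoints/{CHECKPOINT_ID}", method="GET"
)

# Remove the oldest (root) checkpoint from the chain.
delete_req = request.Request(
    f"{ENGINE_API}/vms/{VM_ID}/checkpoints/{CHECKPOINT_ID}", method="DELETE"
)

for req in (list_req, show_req, delete_req):
    print(req.get_method(), req.full_url)
```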


Previously, network tests timed out after 2 seconds. The current release increases the timeout period from 2 seconds to 5 seconds. This reduces unnecessary timeouts when the network tests require more than 2 seconds to pass.


With this enhancement, RHEL 7-based hosts have SPICE encryption enabled during host deployment. Only TLSv1.2 and newer protocols are enabled. Available ciphers are limited as described in BZ1563271.

RHEL 8-based hosts do not have SPICE encryption enabled. Instead, they rely on defined RHEL crypto policies (similar to VDSM BZ1179273).


The usbutils and net-tools packages have been added to the RHV-H optional channel. This eases the installation of "iDRAC Service Module" on Dell PowerEdge systems.


With this update, the maximum memory size for 64-bit virtual machines based on x86_64 or ppc64/ppc64le architectures is now 6 TB. This limit also applies to virtual machines based on x86_64 architecture in 4.2 and 4.3 Cluster Levels.


Starting with this release, the Grafana dashboard for the Data Warehouse is installed by default to enable easy monitoring of Red Hat Virtualization metrics and logs. The Data Warehouse is installed by default at the Basic scale of resource use. To obtain the full benefits of Grafana, it is recommended to update the Data Warehouse scale to Full, so that you can view a larger data collection interval of up to 5 years. Full scaling may require migrating the Data Warehouse to a separate virtual machine. For Data Warehouse scaling instructions, see Changing the Data Warehouse Sampling Scale.

For instructions on migrating to or installing on a separate machine, see Migrating the Data Warehouse to a Separate Machine and Installing and Configuring Data Warehouse on a Separate Machine.


The current release adds a panel to the beginning of each Grafana dashboard describing the reports it displays and their purposes.

6.14.3. Rebase: Bug Fixes and Enhancements

These items are rebases of bug fixes and enhancements included in this release of Red Hat Virtualization:


The makeself package has been rebased to version 2.4.0. Highlights, important fixes, or notable enhancements:

  • v2.3.0: Support for archive encryption via GPG or OpenSSL. Added LZO and LZ4 compression support. Options to set the packaging date and stop the umask from being overridden. Optionally ignore check for available disk space when extracting. New option to check for root permissions before extracting.
  • v2.3.1: Various compatibility updates. Added unit tests for Travis CI in the GitHub repo. New --tar-extra, --untar-extra, --gpg-extra, --gpg-asymmetric-encrypt-sign options.
  • v2.4.0: Added optional support for SHA256 archive integrity checksums.


The ovirt-cockpit-sso package has been rebased to version 0.1.2.

With this update, the ovirt-cockpit-sso package supports RHEL 8.


The spice-qxl-wddm-dod package has been rebased to version 0.19.

Highlights, important fixes, or notable enhancements:

  • Add 800x800 resolution
  • Improve performance vs spice server 14.0 and earlier
  • Fix black screen on driver uninstall on OVMF platforms
  • Fix black screen on return from S3


The Object-Oriented SNMP API for Java Managers and Agents (snmp4j) library has been packaged for RHV-M consumption. The library was previously provided by the rhvm-dependencies package and is now provided as a standalone package.


Upgrade package(s) to version: rhv-4.4.0-23

Highlights and important bug fixes: Enhancements to VM snapshots caused a regression due to inconsistencies between the VDSM and RHV Manager versions. This upgrade fixes the issue by synchronizing the RHV Manager version to match the VDSM version.


Rebase of the apache-commons-digester package to version 2.1. This update is a minor release with new features. See the Apache release notes for more information.


Rebase of the apache-commons-configuration package to version 1.10. This update includes minor bug fixes and enhancements. See the Apache release notes for more information.


With this rebase, the ws-commons-utils package has been updated to version 1.0.2, which provides the following changes:

  • Updated a non-static "newDecoder" method in the Base64 class to be static.
  • Fixed the completely broken CharSetXMLWriter.


The m2crypto package has been built for use with the current version of RHV Manager. This package enables you to call OpenSSL functions from Python scripts.


With this release, Red Hat Virtualization is ported to Python 3. It no longer depends on Python 2.

6.14.4. Rebase: Enhancements Only

These items are rebases of enhancements included in this release of Red Hat Virtualization:


The openstack-java-sdk package has been rebased to version: 3.2.8. Highlights and notable enhancements: Refactored the package to use newer versions of these dependent libraries:

  • Upgraded jackson to com.fasterxml version 2.9.x
  • Upgraded commons-httpclient to org.apache.httpcomponents version 4.5


With this rebase, the ovirt-scheduler-proxy packages have been updated to version 0.1.9, introducing support for RHEL 8 and refactoring the code to support Python 3 and Java 11.

6.14.5. Release Notes

This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.


oVirt 4.4 replaces the ovirt-guest-tools with a new WiX-based installer, included in Virtio-Win. You can download the ISO file containing the Windows guest drivers, agents, and installers from https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/


With this release, you can add hosts to the RHV Manager that provide only rsa-sha2-256/rsa-sha2-512 SSH public keys rather than standard SHA-1-based RSA SSH public keys, such as CentOS 8 hosts with FIPS hardening enabled.


On non-production systems, you can use CentOS Stream as an alternative to CentOS Linux.

6.14.6. Known Issues

These known issues exist in Red Hat Virtualization at this time:


There is currently a known issue: Open vSwitch (OVS) does not work with nmstate-managed hosts. Therefore, OVS clusters cannot contain RHEL 8 hosts. Workaround: In clusters that use OVS, do not upgrade hosts to RHEL 8.


The current release contains a known issue: When the RHV Manager tries to change the mode of an existing bond to mode 5 (balance-tlb) or mode 6 (balance-alb), the host fails to apply the change, and the Manager reports a user-visible error. To work around this issue, remove the bond and create a new one with the desired mode. A fix is in progress and, if successful, is intended for RHEL 8.2.1.


Known issue: If you configure a virtual machine’s BIOS Type and Emulation Machine Type with mismatched settings, the virtual machine fails when you restart it. Workaround: To avoid problems, configure the BIOS Type and Emulation Machine Type with the proper settings for your hardware. The current release helps you avoid this issue: Adding a Host to a new cluster with auto-detect sets the BIOS Type accordingly.


Known issue: Unsubscribed RHVH hosts do not get package updates when you perform a 'Check for upgrade' operation. Instead, you get a 'no updates found' message. This happens because RHVH hosts that are not registered to Red Hat Subscription Management (RHSM) do not have repos enabled. Workaround: To get updates, register the RHVH host with Red Hat Subscription Management (RHSM).


The current release contains a known issue: If a VM has a bond mode 1 (active-backup) over an SR-IOV vNIC and VirtIO vNIC, the bond might stop working after the VM migrates to a host with SR-IOV on a NIC that uses an i40e driver, such as the Intel X710.


Registration fails for user accounts that belong to multiple organizations

Currently, when you attempt to register a system with a user account that belongs to multiple organizations, the registration process fails with the error message You must specify an organization for new units.

To work around this problem, you can either:

  • Use a different user account that does not belong to multiple organizations.
  • Use the Activation Key authentication method available in the Connect to Red Hat feature for GUI and Kickstart installations.
  • Skip the registration step in Connect to Red Hat and use the Subscription Manager to register your system post-installation.


If you create VLANs on virtual functions of SR-IOV NICs, and the VLAN interface names are longer than ten characters, the VLANs fail. This happens because the naming convention for VLAN interfaces, parent_device.VLAN_ID, tends to produce names that exceed the 10-character limit. The workaround for this issue is to create udev rules as described in 1854851.
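To illustrate how the naming convention overruns the limit, here is a small sketch with a hypothetical parent device name:

```python
# Hypothetical SR-IOV virtual function device name and VLAN ID.
parent_device = "enp65s0f0"  # 9 characters
vlan_id = 101

# Default naming convention for VLAN interfaces: parent_device.VLAN_ID
vlan_name = f"{parent_device}.{vlan_id}"

# enp65s0f0.101 is 13 characters, which exceeds the 10-character limit
# described above, so this VLAN would fail without the udev-rule workaround.
print(vlan_name, len(vlan_name))
```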


In RHEL 8.2, Anaconda does not correctly recognize ignoredisk --drives in Kickstart files. Consequently, when installing or reinstalling the host’s operating system, it is strongly recommended that you either detach any existing non-OS storage attached to the host or use ignoredisk --only-use, to avoid accidental initialization of these disks and, with that, potential data loss.


When you upgrade Red Hat Virtualization with a storage domain that is locally mounted on / (root), the data might be lost.

Use a separate logical volume or disk to prevent possible loss of data during upgrades. If you are using / (root) as the locally mounted storage domain, migrate your data to a separate logical volume or disk prior to upgrading.

6.14.7. Removed Functionality


Version 3 of the Python SDK has been deprecated since version 4.0 of oVirt. The current release removes it completely, along with version 3 of the REST API.


Version 3 of the Java SDK has been deprecated since version 4.0 of oVirt. The current release removes it completely, along with version 3 of the REST API.


The current release removes OpenStack Neutron deployment, including the automatic deployment of the neutron agents through the Network Provider tab in the New Host window and the AgentConfiguration in the REST-API. Use the following components instead:

  • To deploy OpenStack hosts, use the OpenStack Platform Director/TripleO.
  • The Open vSwitch interface mappings are already managed automatically by VDSM in Clusters with switch type OVS.
  • To manage the deployment of ovirt-provider-ovn-driver on a cluster, update the cluster’s "Default Network Provider" attribute.


RHV 4.3 shipped drivers for Windows XP and Windows Server 2003. Both of these operating systems are obsolete and unsupported. The current release removes these drivers.


The cockpit-machines-ovirt package was deprecated in Red Hat Virtualization version 4.3 (reference bug #1698014). The current release removes the cockpit-machines-ovirt package from the ovirt-host dependencies and the RHV-H image.


The vdsm-hook-macspoof hook has been dropped from the VDSM code. If you still require the ifacemacspoof hook, you can find and fix the vNIC profiles using a script similar to the one provided in the commit message.


Support for data center and cluster compatibility levels earlier than version 4.2 has been removed.


Previously, the screen package was deprecated in RHEL 7.6. With this update to RHEL 8-based hosts, the screen package is removed. The current release installs the tmux package on RHEL 8-based hosts instead of screen.


The current release removes heat-cfntools, which is not used in rhvm-appliance and RHV. Updates to heat-cfntools are available only through OSP.


With this release, the Application Provisioning Tool service (APT) is removed.

The APT service could cause a Windows virtual machine to reboot without notice, causing possible data loss. With this release, the virtio-win installer replaces the APT service.


In RHV version 4.4, oVirt Engine REST API v3 has been removed. Update your custom scripts to use REST API v4.


The oVirt Engine SDK 3 Java bindings are no longer shipped with the oVirt 4.4 release.


The oVirt Python SDK version 3 has been removed from the project. You need to upgrade your scripts to use Python SDK version 4.


Hystrix monitoring integration has been removed from ovirt-engine due to limited adoption and maintenance difficulties.


The Object-Oriented SNMP API for Java Managers and Agents (snmp4j) library is no longer bundled with the rhvm-dependencies package. It is now provided as a standalone rpm package (Bug #1796815).


The current version of RHV removes libvirt packages that provided non-socket activation. Now it contains only libvirt versions that provide socket activation. Socket activation provides better resource handling: There is no dedicated active daemon; libvirt is activated for certain tasks and then exits.


Metrics Store support has been removed in Red Hat Virtualization 4.4. Administrators can use the Data Warehouse with Grafana dashboards (deployed by default with Red Hat Virtualization 4.4) to view metrics and inventory reports. See the Grafana documentation for information on Grafana. Administrators can also send metrics and logs to a standalone Elasticsearch instance.


In previous versions, the katello-agent package was automatically installed on all hosts as a dependency of the ovirt-host package. The current release, RHV 4.4, removes this dependency to reflect the removal of katello-agent from Satellite 6.7. Instead, you can now use katello-host-tools, which enables you to install the correct agent for your version of Satellite.