
Chapter 4. Release Information

These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat Virtualization.

Notes for updates released during the support lifecycle of this Red Hat Virtualization release will appear in the advisory text associated with each update or the Red Hat Virtualization Technical Notes. This document is available from the following page:

4.1. Red Hat Virtualization 4.3 General Availability (ovirt-4.3.3)

4.1.1. Bug Fixes

The items listed in this section are bugs that were addressed in this release:


Previously, Red Hat Virtualization Manager did not handle hosts added to it over an IPv6-only network.

In the current release, you can use the Manager’s Administration Portal and REST API to add and manage hosts over a statically configured IPv6-only network.


In the current release, the v4 API documentation shows how to retrieve the IP addresses of a virtual machine.
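For reference, a hedged sketch of such a query (the Manager FQDN, credentials, and VM ID below are placeholders): the reporteddevices sub-collection of a VM lists its guest-reported devices, including their IP addresses.

```console
# Hypothetical example: list a VM's reported devices, which include its IPs.
# Replace the FQDN, credentials, and VM ID with real values.
curl -s -k -u admin@internal:password \
  -H 'Accept: application/xml' \
  'https://manager.example.com/ovirt-engine/api/vms/123/reporteddevices'
```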


Previously, memory hot unplug did not work in virtual machines started from snapshots.

This has been fixed in the current release: Memory hot unplug works in virtual machines started from snapshots.


Previously, selecting File > Change CD in the Windows 10 version of virt-viewer did not work. The current release fixes this issue.


This release updates the VM video RAM settings to ensure enough RAM is present for any Linux guest operating system.


Previously, CloudInit passed the dns_search value incorrectly as the dns_nameserver value. For example, after configuring the Network settings of a virtual machine and running it, the dns_search value showed up in the resolv.conf file as the dns_nameserver value. The current release fixes this issue.


Previously, when accessing RHEL 6 virtual machines from a Windows 7 client using virt-viewer, copy/paste sporadically failed. The current release fixes this issue.


Previously, a "Removed device not found in conf" warning appeared in vdsm.log after a successful hot unplug. In the current release, this warning no longer appears in vdsm.log after a successful hot unplug.


This release renames the 'MaxBlockDiskSize' option to 'MaxBlockDiskSizeInGibiBytes'.
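If you read or set this option with the engine-config tool, use the new name. A minimal sketch (assuming the option is exposed through engine-config):

```console
# Show the current value under the renamed option
engine-config -g MaxBlockDiskSizeInGibiBytes
```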


Previously, guest virtual machines with USB support enabled became unresponsive after migration to a different host. The current release fixes this issue and guests work as expected after migration.


Previously, VDSM used stat() to implement islink() checks when using ioprocess to run commands. As a result, if a user or storage system created a recursive symbolic link inside the ISO storage domain, VDSM failed to report file information. In the current release, VDSM uses lstat() to implement islink() so it can report file information from recursive symbolic links.


The serial console was missing in a self-hosted engine VM created with node zero deployment. In this release, the serial console is defined correctly.


Previously, you could manage snapshots through the Administration Portal, but not in the VM Portal. In the current release, you can manage snapshots through the VM portal.


Previously, the ovirt-cockpit-sso configuration file, cockpit.conf, triggered security and integrity alerts during the verification process. In the current release, the ovirt-cockpit-sso configuration file is marked as a configuration file and is excluded from the verification process, which helps prevent false security and integrity alerts.


Previously, on a Windows client machine, if a non-English locale was selected, the spice client (remote-viewer) displayed some translated UI elements in English, not the locale language. The current release fixes this and presents those translated UI elements in the locale language.


Previously, a floppy drive in a virtual machine could prevent the virtual machine from being imported. In the current release, floppy drives are ignored during import.


A VDSM yum plugin named '' was added. Consequently, the Self-Hosted Engine setup imported the wrong vdsm module, causing it to fail. The name of the plugin was changed and now the Self-Hosted Engine setup completes successfully.


The self-hosted engine VM was selected for balancing although the BalanceVM command was not enabled for the self-hosted engine. In this release, balancing is no longer blocked.


When a virtual machine starts, VDSM uses the domain metadata section to store data which is required to configure a virtual machine but which is not adequately represented by the standard libvirt domain. Previously, VDSM stored drive IO tune settings in this metadata that were redundant because they already had proper representation in the libvirt domain. Furthermore, if IO tune settings were enabled, a bug in storing the IO tune settings prevented the virtual machine from starting. The current release removes the redundant information from the domain metadata and fixes the bug that prevented virtual machines from starting.


Do not use a VNC-based connection to deploy Red Hat Virtualization Manager as a self-hosted engine. The VNC protocol does not support password authentication in FIPS mode. As a result, the self-hosted engine will fail to deploy.

Instead, to deploy the Manager as a self-hosted engine, use a SPICE-based connection.


Previously, if a CD-ROM was ejected from a virtual machine and VDSM was fenced or restarted, the virtual machine became unresponsive and/or the Manager reported its status as "Unknown." In the current release, a virtual machine with an ejected CD-ROM recovers after restarting VDSM.


The release improves upon the fix in BZ#1518253 to allow for a faster abort process and a more easily understood error message.


There was a bug in the REST API for non-administrator users related to VNIC Profiles. Consequently, an error message appeared saying "GET_ALL_VNIC_PROFILES failed query execution failed due to insufficient permissions." The code was fixed and the error no longer occurs.


This release ensures that VMs existing in Red Hat Virtualization Manager version 4.2.3 or earlier do not lose their CD-ROM device if the VMs are restarted in 4.2.3 or later versions.


Previously, with some error conditions, the VM Portal displayed a completely white screen with no error message or debugging information. The current release fixes this issue: All error conditions display an error message and stack trace in the browser console.


Vdsm-gluster tries to run heal operations on all volumes. Previously, if the gluster commands got stuck, VDSM started waiting indefinitely for them, exhausting threads, until it timed out. Then it stopped communicating with the Manager and went offline. The current release adds a timeout to the gluster heal info command so the command terminates within a set timeout and threads do not become exhausted. On timeout, the system issues a GlusterCommandTimeoutException, which causes the command to exit and notifies the Manager. As a result, VDSM threads are not stuck, and VDSM does not go offline.


Previously, when a migrating virtual machine was not properly set up on the destination host, it could still start there under certain circumstances, then run unnoticed and without VDSM supervision. This situation sometimes resulted in split-brain. Now migration is always prevented from starting if the virtual machine setup fails on the destination host.


Previously, while a datacenter was enforcing a quota, using the VM Portal to create a virtual machine from a blank template generated an error. The current release fixes this issue.


This release ensures that if a request occurs to disable I/O threads of a running VM, the I/O threads disable when the VM goes down.


This release ensures that if a request occurs to disable I/O threads of a running VM, the I/O threads setting remains disabled when changing unrelated properties of a running VM.


Previously, after importing a guest from an OVA file, the Import Virtual Machine dialog displayed the network type as "Dual-mode rtl8139, VirtIO" when it should have been only "VirtIO". The current release fixes this issue.


This release prevents VM snapshot creation when the VM is in a non-responding state to preclude database corruption due to an inconsistent image structure.


This fix allows the self-hosted engine virtual machine to run on the host.


The previous release changed the system manufacturer of virtual machines from "Red Hat" to "oVirt". This was inconsistent with preceding versions. Some users depended on this field to determine the underlying hypervisor. The current release fixes this issue by setting the SMBIOS manufacturer according to the product being used, which is indicated by the 'OriginType' configuration value. As a result, the manufacturer is set to 'oVirt' when oVirt is being used, and 'Red Hat' when Red Hat Virtualization is being used.


Previously, in the Administration Portal, the "New Pool" window used the "Prestarted" label while the "Edit Pool" window used the "Prestarted VMs" label. Both of these labels refer to the number of VMs prestarted in the pool. The current release fixes this issue.


This release updates the Red Hat Virtualization Manager power saving policy to allow VM migration from over-utilized hosts to under-utilized hosts to ensure proper balancing.


RHVH was missing a package named pam_pkcs11. Consequently, the rule for pam_pkcs11 was added to PAM, but the module did not exist, so users could not log in. The missing pam_pkcs11 package has been added, and users can now log in to RHVH if the correct security profile is applied.


oscap-anaconda-addon was changed to read the datastream file based on the OS name and version. Consequently, the addon looks for a datastream file named "ssg-rhvh4-ds.xml," which does not exist, so no OSCAP profiles are shown. The relevant OSCAP profiles for RHVH reside in ssg-rhel7-ds.xml, so a symlink was added named ssg-rhvh4-ds.xml that references ssg-rhel7-ds.xml.


This release allows users in Red Hat Virtualization Manager to view the full path of the host group in the host group drop-down list to facilitate host group configuration.


This release adds a log entry at the WARN level if an attempt is made to move a disk with a damaged ancestor. A workaround solution is to leverage the REST API to move the disk between storage domains.
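A hedged sketch of such a REST API call (the Manager FQDN, credentials, disk ID, and storage domain ID are placeholders):

```console
# Hypothetical example: move a disk to another storage domain via the REST API.
curl -s -k -u admin@internal:password \
  -H 'Content-Type: application/xml' \
  -d '<action><storage_domain id="target-sd-uuid"/></action>' \
  'https://manager.example.com/ovirt-engine/api/disks/disk-uuid/move'
```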


This release ensures that the VM uptime is cleared during a guest operating system reboot, so that the displayed uptime corresponds to the guest operating system.


Previously, while cloning a virtual machine with a Direct LUN attached, the Administration Portal showed the clone task as red (failed). The current release fixes this issue and displays the clone task as running until it is complete.


Previously, Red Hat Virtualization Host entered emergency mode when it was updated to the latest version and rebooted twice. This was due to the presence of a local disk WWID in /etc/multipath/wwids. In the current release, /etc/multipath/wwids has been removed. During upgrades, imgbased now calls "vdsm-tool configure --force" in the new layer, using the SYSTEMD_IGNORE_CHROOT environment variable.


Previously, trying to update a disk attribute using the /api/disks/{disk_id} API failed without an error. The current release fixes this issue.
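For reference, an update through this endpoint looks roughly like the following (a sketch with placeholder values; here the disk alias is updated):

```console
# Hypothetical example: update a disk attribute via PUT on /api/disks/{disk_id}
curl -s -k -u admin@internal:password \
  -X PUT -H 'Content-Type: application/xml' \
  -d '<disk><alias>new-alias</alias></disk>' \
  'https://manager.example.com/ovirt-engine/api/disks/disk-uuid'
```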


Previously, when deploying the Self-Hosted Engine from the Cockpit, an error message appeared together with an explanation that ping issues were encountered. However, under certain conditions the explanation would disappear, leaving only a generic error message "Please correct errors before moving to the next step." In this release, if ping errors are encountered during deployment, the user will be informed of the issue, and the message will remain in the error window until the issue is resolved.


The self-hosted engine backup and restore flow has been improved, and now works correctly when the self-hosted engine storage domain is defined as the master storage domain.


This release enables VM configuration with memory greater than two terabytes.


Previously, the default ntp.conf file was migrated to chrony even when NTP was disabled, overwriting chrony.conf file with incorrect values. In the current release, ntp.conf is only migrated if NTP is enabled.


This release sets the proper REST API parameters during VM creation so that the VM is available for use immediately.


The default CPU type in Red Hat Virtualization 4.2 is deprecated in Red Hat Virtualization 4.3. Previously, when you used the Edit Cluster dialog to create a new cluster or edit an existing cluster, changing the cluster compatibility version from 4.2 to 4.3 when the CPU Architecture was set to x86_64 caused the CPU Type to be set to an invalid setting, resulting in an exception. Now the CPU Type defaults to a valid entry and no exception occurs.


This release ensures that all values for Quality of Service links are visible.


Previously, after using virt-v2v to import a virtual machine from Xen or VMware environments, the Red Hat Virtualization Manager was incorrectly removing the virtual machine. The Manager was removing the import job too early, causing it to remove the job reference twice.

This issue has been fixed in the current release. The Manager only removes the import job after processing is complete. Using virt-v2v to import virtual machines from VMware and Xen now works as expected.


Migration bandwidth limit was computed incorrectly from the user-defined settings and set to an incorrect value. In this release, the migration bandwidth limit is now set correctly.


This release ensures the value of the migration bandwidth limit is correct.


When performing an upgrade, make sure the ovirt-hosted-engine-ha and ovirt-hosted-engine-setup package versions match.


Previously, after performing an upgrade, packages that shipped files under /var were not updated correctly as /var was not layered. In this release, if an updated file exists on both the new image and the running system, the original file will be saved as ".imgbak" and the new file will be copied over enabling both the original and new files to reside under /var.


Host logs were being filled with vnc_tls errors due to problems with read permissions. In this release, the erroneous logs are no longer recorded by the host.


Previously, incorrect parsing of images named 'rhv-toolssetup_x.x_x.iso' caused a NullPointerException (NPE).

The current release fixes this issue. This image name can be parsed without causing an exception.


Previously, making an API call to Foreman (hosts, host groups, compute resources) returned only 20 entries. The current release fixes this issue and displays all of the entries.


Previously, imgbased failed upon receiving the e2fsck return code 1 when creating a new layer. In the current release, imgbased handles the e2fsck return code 1 as a success, since the new file system is correct and the new layer is installed successfully.


This release ensures that Red Hat Virtualization Manager sets the recommended options when creating a volume from Red Hat Virtualization Manager, to distinguish such volumes from those created from the Cockpit user interface.


Previously, an incorrectly named USB3 controller, "qemu_xhci," prevented virtual machines from booting if they used a host passthrough with this controller. The current release corrects the controller name to "qemu-xhci," which resolves the booting issue.


This release ensures the upgrade process in Red Hat Virtualization Manager sets the configuration value ImageProxyAddress to point to the Red Hat Virtualization Manager FQDN if the configuration value was set to "localhost".
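You can inspect the resulting value with the engine-config tool (a sketch, assuming the option is exposed through engine-config):

```console
# Verify that ImageProxyAddress now points to the Manager FQDN
engine-config -g ImageProxyAddress
```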


Red Hat Virtualization Manager no longer logs messages regarding non-preferred host penalizations if the VM is not configured to have a preferred host.


Previously when converting to OpenStack, failed conversions revealed passwords for accessing OpenStack in the wrapper log. This issue has been fixed and passwords are no longer revealed.


Previously, during an upgrade, dracut running inside chroot did not detect the cpuinfo and kernel config files because /proc was not mounted and /boot was bind-mounted. As a result, the correct microcode was missing from the initramfs.

The current release bind-mounts /proc into the chroot and removes the --hostonly flag. This change inserts both AMD and Intel microcode into the initramfs so the host boots after an upgrade.


Previously, even if lvmetad was disabled in the configuration, the lvmetad service left a pid file hanging. As a result, entering lvm commands displayed warnings.

The current release masks the lvmetad service during build so it never starts and lvm commands do not show warnings.


Previously, if an xlease volume was corrupted, VDSM could not acquire leases and features like high-availability virtual machines did not work. The current release adds rebuild-xleases and format-xleases commands to the VDSM tool. Administrators can use these commands to rebuild or format corrupted xlease volumes.
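A hedged sketch of the new commands (the exact argument names are assumptions; formatting destroys existing lease data, so consult the vdsm-tool help before use):

```console
# Hypothetical usage; <sd_id> is the UUID of the affected storage domain
vdsm-tool rebuild-xleases <sd_id>   # rebuild a corrupted xlease volume
vdsm-tool format-xleases <sd_id>    # reformat the xlease volume (destructive)
```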




Previously, after upgrading to version 4.2 or 4.3, the Compute > Hosts > Network Interfaces page in the Administration Portal did not display host interfaces. Instead, it would throw the following obfuscated exception several times: webadmin-0.js:formatted:176788 Mon Dec 03 11:46:02 GMT+1000 2018 SEVERE: Uncaught exception (TypeError) : Cannot read property 'a' of null

The current release fixes this issue.


In this release, the following view filter names for VMs have changed in the Administration Portal under Compute > Hosts > selected host:

- From “Running on host” to “Running on current host” (the default view)
- From “Pinned to host” to “Pinned to current host”
- From “All” to “Both”. When “Both” is selected, a new column named “Attachment to current host” indicates whether the VM is “Running on current host”, “Pinned to current host”, or “Pinned and Running on current host”.


Previously, when re-importing a virtual machine as an OVA file, duplicate image and disk IDs caused errors while attempting to recreate the image. In addition, after a failure, the Manager continued attempting to attach the image instead of failing immediately, which caused the reported error. Because the identifiers already existed, Red Hat Virtualization Manager could not import the virtual machine OVA file even though the virtual machine name had been changed.

This issue has been fixed in the current release. Red Hat Virtualization Manager now regenerates the identifiers and, when copying the image, uses image mapping to correlate the previous image ID with the new image ID. The attach-image handling has also been moved so that it is not called when creating a new image fails in the database. As a result, importing virtual machines using OVA files works.


Previously, the "Multi Queues enabled" checkbox was missing from the New Instance Type and Edit Instance Type windows in the Administration Portal. The current release fixes this issue.


This bug fix sets the template ID properly to address a null pointer exception during the import of a thin-provisioned VM disk from an Open Virtualization Framework configuration file.


This release ensures Red Hat Virtualization Manager defines the attribute subjectAlternativeName correctly during the renaming of the httpd certificate to prevent browser warnings or a certificate rejection.


During a self-hosted engine deployment, SSO authentication errors may occur stating that a valid profile cannot be found in credentials and to check the logs for more details. The interim workaround is to retry the authentication attempt more than once. See BZ#1695523 for a specific example involving Kerberos SSO and engine-backup.


Previously, when trying to clone a virtual machine from an Active VM snapshot, a 'Failed to get shared "write" lock. Is another process using the image?' error appeared for the following snapshot types: 'ACTIVE', 'STATELESS', 'PREVIEW' and 'NEXT_RUN'. In this release, the cloning operation will be blocked for these snapshot types.


If a user with an invalid sudo configuration uses sudo to run commands, sudo appends a "last login" message to the command output. When this happens, VDSM fails to run lvm commands. Previously, the VDSM log did not contain helpful information about what caused those failures.

The current release improves error handling in the VDSM code running lvm commands. Now, if VDSM fails, an error message clearly states that there was invalid output from the lvm commands, and shows the output added by sudo. Although this change does not fix the root cause, an invalid sudo configuration, it makes it easier to understand the issue.


Self-hosted engine deployment failed when the network interface was defined as other than 'eth0'. In this release, any valid network interface name can be used.


In this release, redirection device types are no longer set to unplugged and can now obtain the proper address from the domain xml when supported or from the host when they are not supported.


Previously, the list of disks in the Storage tab of the Administration Portal was sorted alphabetically by the text of the Creation Date field instead of by time stamp. In this release, the list is sorted by time stamp.


A user with a UserRole or a role with a Change CD permit can now change CDs on running VMs in the VM Portal.


This release updates the Ansible role to configure the Rsyslog Elasticsearch output correctly to ensure the certificate information reaches the Red Hat Virtualization Host.


This release ensures the SR-IOV vNIC profile does not undergo an invalid update while the vNIC is plugged in and running on the VM during the validation process. To update the SR-IOV vNIC profile, unplug the vNIC from the VM. After the updates are complete, replug the vNIC into the VM.


VDSM attempted to collect OpenStack related information, even on hosts that are not connected to OpenStack, and displayed a repeated error message in the system log. In this release, errors originating from OpenStack related information are not recorded in the system log. As a result, the system log is quieter.


Previously, testing of Ansible 2.8 returned deprecation errors and warnings during deployment. The current release fixes this issue.


Previously, the Self-Hosted Engine pane in Cockpit had a few minor typos. The current version fixes these issues.


updated by engine-setup. If an error occurs, engine-setup treats this as a failure and tries to roll back, which is a risky process. To work around this scenario, the package ovirt-engine-setup-plugin-ovirt-engine now requires ovirt-vmconsole 1.0.7-1. Updating the setup packages with yum should also update ovirt-vmconsole. If an error occurs, yum evaluates it as a non-fatal error. See also bug 1665197 for the actual error from ovirt-vmconsole.


Previously, while testing a RHEL 8 build of the virt-v2v daemon that turns a Red Hat Virtualization Host into a conversion host for CloudForms migration, you could not update the network profile of a running virtual machine guest. The current release fixes this issue.


This release allows an Ansible playbook to run on isolated, offline nodes.


In this release, VM migration is supported when both the origin and destination hosts have Pass-Through Host CPU enabled.


This fix includes a signed certificate for rhev-apt.exe until 2022-01-25.


This fix ensures the label of the /etc/hosts file is correct for SELinux on the Red Hat Virtualization Manager virtual machine.


This fix ensures that installing the OVS-2.10 package restarts the OVS/OVN services after the package completes the install and update process.


This release ensures that during self-hosted engine deployments, downloading and installing the rhvm-appliance package does not occur if the corresponding OVA file is present.


Previously, upgrading from RHV 4.0 to 4.2 failed while using "ovirt-fast-forward-upgrade" tool due to 'eap7-jboss*' dependency issues. The current release includes a patch that fixes this bug.

4.1.2. Enhancements

This release of Red Hat Virtualization features the following enhancements:


This release allows you to limit east-west traffic of VMs, to enable traffic only between the VM and a gateway. The new filter 'clean-traffic-gateway' has been added to libvirt. With a parameter called GATEWAY_MAC, a user can specify the MAC address of the gateway that is allowed to communicate with the VM and vice versa. Note that users can specify multiple GATEWAY_MACs. There are two possible configurations of VM:

1) A VM with a static IP. This is the recommended setup. It is also recommended to set the parameter CTRL_IP_LEARNING to 'none'. Any other value will result in a leak of initial traffic, caused by libvirt’s learning mechanism.

2) A VM with DHCP. DHCP support is only partially working and is not currently usable in production.

The filter has a general issue with ARP leaks: peer VMs are able to see that the VM using this feature exists (in their ARP tables), but they are not able to contact the VM, as traffic from peers is still blocked by the filter.
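In libvirt domain XML, applying the filter looks roughly like the following (a sketch; the bridge name, MAC, and IP values are placeholders):

```xml
<!-- Hypothetical sketch: allow VM traffic only to and from one gateway -->
<interface type='bridge'>
  <source bridge='ovirtmgmt'/>
  <filterref filter='clean-traffic-gateway'>
    <parameter name='GATEWAY_MAC' value='52:54:00:11:22:33'/>
    <!-- recommended for the static IP setup described above -->
    <parameter name='CTRL_IP_LEARNING' value='none'/>
    <parameter name='IP' value='192.0.2.10'/>
  </filterref>
</interface>
```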


In the current release, Windows clustering is supported for directly attached LUNs and shared disks.




In this release, users can now export VM templates to OVA files located on shared storage, and import the OVA files from the shared storage into a different data center.


The iptables and iptables-services packages have been removed from the list of dependencies in self-hosted engine deployment.


The current release adds support for memory hot-plug for IBM POWER (ppc64le) virtual machines.


In the current release, the disk alias of a cloned virtual machine is Alias_<Cloned-Virtual-Machine-Name>.


The current release of the self-hosted engine supports deployment with static IPv6.


The current release provides a software hook for the Manager to disable restarting hosts following an outage. For example, this capability would help prevent thermal damage to hardware following an HVAC failure.


Previously, the REST API did not include the CPU type when it returned information about the host. Now the CPU type is included with the rest of the host information that the REST API returns, which is consistent with the Administration Portal.


In this release, VMs converted to oVirt (from VMware, Xen or OVA) now include RNG device and memory balloon device, provided that the guest OS has the necessary drivers installed.


TLSv1 and TLSv1.1 protocols are no longer secure, so they are forcefully disabled, and cannot be enabled, in the VDSM configuration.

Only TLSv1.2 and higher versions of the protocol are enabled. The exact TLS version depends on the underlying OpenSSL version.


The current release of the Administration Portal supports search queries for virtual machines with a specific cluster compatibility override setting or with a different cluster compatibility override setting (or none): Vms: custom_compatibility_version = X.Y or != X.Y.


When renaming a running virtual machine, the new name is now applied immediately, even when the QEMU process is running and is set with the previous name. In this case, the user is provided with a warning that indicates that the running instance of the virtual machine uses the previous name.


Feature: Support default route role on IPv6-only networks, but only for IPv6 static interface configuration.

Reason: oVirt engine should support IPv6 only networks for its existing capabilities.

Result:

- You can set the default route role on an IPv6-only network provided it has an IPv6 gateway.
- For Red Hat Virtualization Manager to correctly report the sync status of the interfaces, configure all of the interfaces with static IPv6 addresses only. Also, configure the IPv6 gateway on the logical network that has the default route role.
- IPv6 dynamic configuration is currently not supported.
- The IPv6 gateway on the default route role network is applied as the default route for the IPv6 routing table on the host.
- You can set an IPv6 gateway on a non-management network. This was previously possible only on the management network.
- If more than one IPv6 gateway is set on the interfaces of a host, the Manager will be in an undefined state: there will be more than one default route entry in the IPv6 routing table on the host, which causes the host to report that there are no IPv6 gateways at all (meaning the interfaces will appear as out of sync in the Manager).


This release adds the ability to manage the MTU of VM networks in a centralized way, enabling oVirt to manage MTU all the way from the host network to the guest in the VM. This feature allows for the consistent use of MTUs in logical networks with small MTU (e.g., tunneled networks) and large MTU (e.g., jumbo frames) in VMs, even without DHCP.


Making large snapshots and other abnormal events can pause virtual machines, impacting their system time, and other functions, such as timestamps. The current release provides Guest Time Synchronization, which, after a snapshot is created and the virtual machine is un-paused, uses VDSM and the guest agent to synchronize the system time of the virtual machine with that of the host. The time_sync_snapshot_enable option enables synchronization for snapshots. The time_sync_cont_enable option enables synchronization for abnormal events that may pause virtual machines. By default, these features are disabled for backward compatibility.
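Assuming these options are set in the VDSM configuration (the file path and section name below are assumptions), enabling both would look like this fragment:

```ini
# Hypothetical /etc/vdsm/vdsm.conf fragment; both options are off by default
[vars]
time_sync_snapshot_enable = true
time_sync_cont_enable = true
```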


The new boot_hostdev hook allows virtual machines to boot from passed-through host devices, such as NIC VFs, PCI-E SAS/RAID cards, and SCSI devices, without requiring a normal bootable disk from a Red Hat Virtualization storage domain or a direct LUN.


Previously, copying volumes to preallocated disks was slower than necessary and did not make optimal use of available network resources. In the current release, qemu-img uses out-of-order writing to improve the speed of write operations by up to six times. These operations include importing, moving, and copying large disks to preallocated storage.


Red Hat Virtualization Manager setup now uses oVirt Task Oriented Pluggable Installer/Implementation (otopi) to generate its answer files to eliminate the need for additional code or manual input on stated questions.


This release enables the export of a VM template to an Open Virtualization Appliance (OVA) file and the import of an OVA file as a VM template to facilitate VM template migration between data centers without using an export domain.


This release adds USB qemu-xhci controller support to SPICE consoles, for Q35 chipset support. Red Hat Virtualization now expects that when a BIOS type using the Q35 chipset is chosen and USB is enabled, the USB controller will be qemu-xhci.


The 'engine-backup' script now has default values for several options, so you do not need to supply values for these options.

To see the default values, run 'engine-backup --help'.
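For example, assuming the mode, scope, and output locations now have defaults, a backup could be as simple as the following sketch (verify the actual defaults with 'engine-backup --help'):

```console
# Relying on the new defaults instead of passing --file and --log explicitly
engine-backup --mode=backup
```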


Previously, virtual machines could only boot from BIOS. The current release adds support for booting virtual machines via UEFI firmware, a free, newer, more modern way to initialize a system.


This feature provides support for adding security groups and rules using the ovirt-provider-ovn package, as described by the OpenStack Networking API.


Feature: Auto-persist changes on SetupNetworks. This instructs VDSM to commit any changes applied during setup networks immediately upon successful completion of the setupNetworks process, provided connectivity is successfully re-established with the Red Hat Virtualization Manager. If this flag is not specified in the request, it is assumed to be false, which is backward compatible with the previous behavior.

When setupNetworks is invoked from the Administration Portal, the default is 'true'. When it is invoked with a REST API call, the default is 'false'. When it is invoked from an ansible script, the default is 'true'.

Reason: When the commit was not part of the setupNetworks request, the following commit request issued by the Manager upon successful re-establishment of the connection with VDSM would sometimes fail, leaving the configuration in a non-persisted state although the intention was to persist it.

Result: The configuration is persisted immediately.


The current release of the User Interface Plugin API supports the updated Administration Portal design with the following changes:

    • Custom secondary menu items can be added to the vertical navigation menu.
    • Some functions have been renamed for consistency with the new Administration Portal design. A deprecation notice is displayed when the old names are used.
    • Some functions no longer support the alignRight parameter because the tabs are aligned horizontally, flowing from left to right.


If a VM does not use virtual NUMA nodes, it is better if its whole memory can fit into a single NUMA node on the host. Otherwise, there may be some performance overhead. There are two additions in this RFE:

  1. A new warning message is shown in the audit log if a VM is run on a host where its memory cannot fit into a single host NUMA node.
  2. A new policy unit is added to the scheduler: 'Fit VM to single host NUMA node'. When starting a VM, this policy prefers hosts where the VM can fit into a single NUMA node. This unit is not active by default, because it can cause undesired edge cases. For example, consider the following setup:

    • 9 hosts with 16 GB per NUMA node
    • 1 host with 4 GB per NUMA node

    When multiple VMs with 6 GB of memory are scheduled, the policy unit would prevent them from starting on the host with 4 GB per NUMA node, no matter how overloaded the other hosts are. It would use that host only when none of the others has enough free memory to run the VM.


In the Administration Portal, it is possible to set a threshold for cluster level monitoring as a percentage or an absolute value, for example, 95% or 2048 MB. When usage exceeds 95% or free memory falls below 2048 MB, a "high memory usage" or "low memory available" event is logged. This reduces log clutter for clusters with large (1.5 TB) amounts of memory.


The current release adds AMD SMT-awareness to VDSM and RHV-M. This change helps meet the constraints of schedulers and software that are licensed per-core. It also improves cache coherency for VMs by presenting a more accurate view of the CPU topology. As a result, SMT works as expected on AMD CPUs.


In the current release of the Red Hat Virtualization Manager, the "Remove" option is disabled if a virtual machine is delete-protected.


A new option, Activate Host After Install, has been added to the Administration Portal under Compute > Hosts, in the New Host or Edit Host screen. This option is selected by default.


An Ansible role, ovirt-host-deploy-spice-encryption, has been added to change the cipher string for SPICE consoles. The default cipher string satisfies FIPS requirements ('TLSv1.2+FIPS:kRSA+FIPS:!eNULL:!aNULL'). The role can be customized with the Ansible variable host_deploy_spice_cipher_string.


This release adds support for external OpenID Connect authentication using Keycloak in both the user interface and the REST API.


The current release of the User Interface Plugin API provides an "unload" handler that can be attached to a primary/secondary menu item or a details tab to perform clean-up when the user navigates away from these interface elements.


This feature enables live migration for high-performance VMs (and, in general, for all VM types with pinning settings). Red Hat Virtualization 4.2 added a High Performance VM profile type, which required configuration settings that include pinning the VM to a host based on the host-specific configuration. Because of these pinning settings, the migration option for high-performance VMs was automatically disabled. Red Hat Virtualization 4.3 now provides live migration of high-performance VMs (and all other VMs with a pinned configuration, such as NUMA pinning, CPU pinning, and CPU passthrough).


Previously, changing log levels required editing libvirt.conf and restarting the libvirtd service. This restart prevented support from collecting data and made reproducing issues more difficult.

The current release adds the libvirt-admin package to the optional channel for Red Hat Virtualization Host. Installing this package enables you to run the virt-admin command to change libvirt logging levels on the fly.
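For example (a sketch; the filter and output strings are illustrative), logging for the QEMU driver can be raised without restarting libvirtd:

```shell
# Show the currently active log filters and outputs
virt-admin daemon-log-filters
virt-admin daemon-log-outputs

# Raise logging for the qemu driver on the fly (1 = debug)
virt-admin daemon-log-filters "1:qemu"
virt-admin daemon-log-outputs "1:file:/var/log/libvirt/libvirtd.log"
```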


High-performance virtual machines require pinning to multiple hosts in order to be highly available. Previously, virtual machines with NUMA pinning enabled could not be configured to run on more than one host. Now, virtual machines with NUMA pinning enabled can be configured to run on one or more hosts. All hosts must support NUMA pinning, and the NUMA pinning configuration must be compatible with all assigned hosts.


The current release of the User Interface Plugin API provides greater control over the placement of action buttons.


This update adds support for bare-metal machines based on IBM POWER9 CPUs running hypervisors on the RHEL-ALT host operating system. These hypervisors can run virtual machines with POWER8 or POWER9 virtual CPUs. This update also adds support for live migration of virtual machines with POWER8 virtual CPUs between hosts based on either POWER8 or POWER9 CPUs.


This release provides an Ansible role to ensure the correct shutdown of Red Hat Virtualization Manager or a Red Hat Hyperconverged Infrastructure environment.


The qemufwcfg driver has been added for the built-in firmware configuration (fw_cfg) system device on Windows 10 and Windows Server 2016 guests. As a result, fw_cfg devices are now identified correctly in the Device Manager on these guests.


The virtio-smbus driver installer for the built-in SMBus device on Windows 2008 guests has been added to the RHV Windows Guest Tools. As a result, SMBus devices are now identified correctly in the Device Manager on these guests.


In this release, the cluster property "set maintenance reason" is enabled by default.


The current release adds a new 'ssl_ciphers' option to VDSM, which enables you to configure available ciphers for encrypted connections (for example, between the Manager and VDSM, or between VDSM and VDSM). The values this option uses conform to the OpenSSL standard. For more information, see
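A minimal sketch of setting the option (the drop-in file name and cipher string are illustrative; values follow the OpenSSL cipher-list format):

```shell
# Restrict VDSM to HIGH-strength ciphers via a vdsm.conf.d drop-in file
cat > /etc/vdsm/vdsm.conf.d/99-ciphers.conf <<'EOF'
[vars]
ssl_ciphers = HIGH:!aNULL
EOF
systemctl restart vdsmd
```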


With this release, the size of the rhvm package has been reduced.


This release adds a feature to control toast notifications. When notifications are showing, "Dismiss" and "Do not disturb" buttons appear that allow the user to silence notifications.


In this release, ovirt-log-collector now supports batch mode.


A new option has been added to the Administration Portal under Compute > Clusters in the Console configuration screen: Enable VNC Encryption


In this release, self-hosted engine installation supports Ansible playbooks that use tags.


The openscap, openscap-utils and scap-security-guide packages have been added to RHVH in order to increase security hardening in RHVH deployments.


Red Hat OpenStack Platform 14’s OVN+neutron is now certified as an external network provider for Red Hat Virtualization 4.3.


Previously, "Power Off" was missing from the virtual machine context menu in the Administration Portal; although it was present in previous versions, it was removed as part of the new user interface in 4.2. Now, "Power Off" is once again present when a running virtual machine is right-clicked.


Previously, you could only assign one vGPU device type (mdev_type) to a virtual machine in the Administration Portal. The current release adds support for assigning multiple Nvidia vGPU device types to a single virtual machine.


This feature allows the user to select the cloud-init protocol with which to create a virtual machine’s network configuration. The protocol can be selected while creating or editing a VM, or while starting a VM with Run Once. For older versions of cloud-init, backward compatibility is maintained with the ENI protocol, whereas newer versions of cloud-init support the OpenStack-Metadata protocol.


In this release, an Ansible playbook enables you to deploy the Metrics Store on a single node or on multiple nodes and to scale out an existing deployment.


The current release replaces Fluentd with Rsyslog, which can collect oVirt logs, engine.log, VDSM logs, and collectd metrics.

Systems upgraded from 4.2 will still have Fluentd installed, but it will be disabled and stopped. After upgrading to 4.3, you can remove the Fluentd packages. Fluentd will not be supported in RHEL 8. Rsyslog offers better performance.

Rsyslog can output to Elasticsearch on Red Hat OpenShift Container Platform. Sending data to your own instance of Elasticsearch is not currently supported.

Collectd is reconfigured to use write_syslog, a new plugin, to send metrics to Rsyslog. When deploying ovirt metrics, Rsyslog is configured on the Red Hat Virtualization Manager and host to collect and ship the data to the requested target.


Virtual machines can be forcibly shut down in the VM Portal.


In the past, high-performance virtual machines were pinned to specific hosts and did not support live migration. The current release enables live migration of high-performance virtual machines, as well as virtual machines with NUMA pinning, CPU pinning, or CPU passthrough enabled.


In the current release, invoking the ovirt-aaa-jdbc-tool logs the following three events to the syslog server: the user who invoked ovirt-aaa-jdbc-tool; the parameters passed to ovirt-aaa-jdbc-tool, with passwords filtered out; and whether the invocation was successful.


QEMU Guest Agent packages for several Linux distributions have been added to make it easier to install the guest agent offline.


In this release, virt-v2v attempts to install the QEMU Guest Agent on Linux guests during VM conversion. For this feature to work properly, a current RHV guest tools ISO must be attached during the conversion.


When importing KVM VMs with sparseness specified, the actual disk size should be preserved to improve import performance and to conserve disk space on the destination storage domain. Previously, when you set thin provisioning for importing a KVM-based VM into a Red Hat Virtualization environment, the disk size of the VM within the Red Hat Virtualization storage domain was inflated to the volume size or larger, even when the original KVM-based VM was much smaller. KVM sparseness is now supported, so that when you import a virtual machine with thin provisioning enabled into a Red Hat Virtualization environment, the disk size of the original virtual machine image is preserved. However, KVM sparseness is not supported for block storage domains.


This release adds support for importing VMware virtual machines that include snapshots.


As part of replacing Fluentd with Rsyslog, the RHEL Ansible role logging, from the linux-system-roles collection of roles, is responsible for deploying Rsyslog configuration files and service handling for multiple projects. This role is maintained by RHEL and makes Rsyslog deployment easier and more maintainable. In this release, the Rsyslog service and configuration are deployed on the oVirt engine and hosts using this role when you deploy oVirt metrics.


During virtual machine live migration, the migration progress bar is now also shown in the host’s Virtual Machine tab.


In this release, the Correlation-Id can be passed to the vdsm-client by using the '--flow-id' argument with the vdsm-client tool.
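For example (a sketch; the flow ID value is illustrative), a call can be tagged so its log lines are easy to correlate across VDSM and the Manager:

```shell
# Pass a correlation ID so the resulting log lines carry flow_id=upgrade-42
vdsm-client --flow-id upgrade-42 Host getCapabilities
```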


In previous versions, it was not possible to limit the number of simultaneous sessions for each user, so the number of active sessions could grow significantly until the sessions expired. Red Hat Virtualization Manager 4.3 introduces the ENGINE_MAX_USER_SESSIONS option, which limits the number of simultaneous sessions per user. The default value is -1, which allows unlimited sessions per user.

To limit the number of simultaneous sessions per user, create the 99-limit-user-sessions.conf file in /etc/ovirt-engine/engine.conf.d and add ENGINE_MAX_USER_SESSIONS=NNN, where NNN is the maximum number of allowed simultaneous sessions per user. Save and restart using: systemctl restart ovirt-engine.
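The steps above can be sketched as follows (the limit of 50 is an example):

```shell
# Limit each user to at most 50 simultaneous sessions
echo 'ENGINE_MAX_USER_SESSIONS=50' \
  > /etc/ovirt-engine/engine.conf.d/99-limit-user-sessions.conf
systemctl restart ovirt-engine
```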


With this release, users can now disable pop-up notifications. When a pop-up notification appears in the Administration Portal, the following options are now available for disabling notifications:

    • Dismiss All
    • Do Not Disturb: for 10 minutes, for 1 hour, for 1 day, or until Next Log In


Version 4.2.0 added support for vGPUs, using only a Consolidated ("depth-first") allocation policy.

The current release adds support for a Separated ("breadth-first") allocation policy. The default policy is the Consolidated allocation policy.


Previously, for virtual machines with a Windows 10 guest, the host CPU load was too high.

The current release reduces the CPU load by adding enlightenments that enable the hypervisor synthetic interrupt controller (SynIC) and stimer.

For example, with this enhancement, the host CPU load of a virtual machine running an idle Windows 10 guest should be approximately 0-5%.


Red Hat Enterprise Linux 8 is fully supported as a guest operating system. Note that GNOME single sign-on functionality, guest application list, and guest-side hooks are not supported.


You can now set the number of IO threads in the new/edit VM dialog in the Administration Portal, instead of just the REST API.


The current release presents the OpenSCAP security profile as an option to users installing and upgrading Red Hat Virtualization Hosts. This feature helps organizations comply with the Security Content Automation Protocol (SCAP) standards.


This release disables the "Remove" button on the Everyone permissions page to prevent misconfiguring Red Hat Virtualization Manager permissions.


Previously, configurable OVN internal connections and the default Red Hat Enterprise Linux 7 OpenSSL configuration allowed insecure ciphers. This release ensures that the Red Hat Virtualization internal OVN database connections and OpenStack REST APIs use TLS 1.2 and HIGH ciphers.


The current release updates the QEMU post-copy migration policy from a Technology Preview to a Supported Feature. As a cautionary note, a network failure during migration results in a virtual machine in an inconsistent state, which cannot be recovered by the Manager. Administrators using this feature should be aware of the potential for data loss.


This release preserves a virtual machine’s time zone setting when moving the virtual machine from one cluster to another.


In this release, the write_syslog collectd plugin is now automatically installed on the system running the ovirt-engine service to provide metrics store support.


In this release, the write_syslog collectd plugin is now automatically installed on managed hosts for metrics store support.


Previously, the background process to migrate virtual machines considered affinity groups. This release updates the background process to migrate virtual machines to consider both affinity groups and affinity labels.


In order to create and use a Managed block storage domain, a new database must be created that is accessible by cinderlib. In this release, a new database can be created during the engine-setup process, using the same procedures described in the documentation for "Configuring the Red Hat Virtualization Manager".


In this release, the available SSL ciphers used in communication between the Red Hat Virtualization Manager and VDSM have been limited, and now exclude weak or anonymous ciphers.


In this release, the IPv6 default route of a host is managed by restricting the IPv6 default gateways so that there is only one such gateway for all host interfaces. Note that:

  1. When the default route role is moved away from a network, its IPv6 gateway is automatically removed from the corresponding interface.
  2. After moving the default route role to a new network, you should set a static IPv6 gateway on this network.
  3. If the host and Red Hat Virtualization Manager are not on the same subnet, the Manager will lose connectivity with the host when moving the default route role between networks (see note 1). You should take precautions to avoid this scenario.


The current release ships a new version of Red Hat Gluster Storage, RHGS 3.4.4, in Red Hat Virtualization Host (RHVH).


This enhancement installs the v2v-conversion-host-wrapper RPM by default on Red Hat Virtualization Host.

4.1.3. Technology Preview

The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to Technology Preview Features Support Scope.


This technology preview includes the flexvolume-driver and volume-provisioner components, which enable dynamic storage provisioning for OpenShift Container Platform deployed on Red Hat Virtualization virtual machines. Containers can use any of the existing storage technologies that Red Hat Virtualization supports.

4.1.4. Rebase: Bug Fixes Only

The items listed in this section are bugs that were originally resolved in the community version and included in this release.


Previously, after importing and removing a Kernel-based Virtual Machine (KVM), trying to re-import the same virtual machine failed with a "Job ID already exists" error. The current release deletes completed import jobs from the VDSM. You can re-import a virtual machine without encountering the same error.

4.1.5. Release Notes

This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.


Large guest operating systems have a significant overhead on the host. The host requires a consecutive non-swapped block of memory that is 1/128th of the virtual machine’s memory size. Previously, this overhead was not accounted for when scheduling the virtual machine. If the memory requirement was not satisfied, the virtual machine failed to start with an error message similar to this one: "libvirtError: internal error: process exited while connecting to monitor: …​ qemu-kvm: Failed to allocate HTAB of requested size, try with smaller maxmem"

The current release fixes this issue by using dynamic hash page table resizing.


This release allows Red Hat Virtualization Manager to set a display network and open a console to a virtual machine over an IPv6-only network.


Previously, an administrator with the ClusterAdmin role was able to modify the self-hosted engine virtual machine, which could cause damage. In the current release, only a SuperUser can modify a self-hosted engine and its storage domain.


The TLSv1 and TLSv1.1 protocols are no longer secure. In the current release, they have been forcefully disabled in the VDSM configuration and cannot be enabled. Only TLSv1.2 and higher versions of the protocol are enabled. The exact version enabled depends on the underlying OpenSSL version.


This release removes the Red Hat Virtualization Manager support for clusters levels 3.6 and 4.0. Customers must upgrade their data centers to Red Hat Virtualization Manager 4.1 or later before upgrading to Red Hat Virtualization Manager 4.3.


This release updates the command sequence for Preparing Local Storage for Red Hat Virtualization Hosts by adding a command to mount the logical volume.


Previously, in the VM Portal, users who did not have permissions to create virtual machines could see the Create VM button. The current release fixes this issue by fetching user permissions and then using them to show or hide the Create VM button.


There are inconsistencies in the following internal configuration options:

    • HotPlugCpuSupported
    • HotUnplugCpuSupported
    • HotPlugMemorySupported
    • HotUnplugMemorySupported
    • IsMigrationSupported
    • IsMemorySnapshotSupported
    • IsSuspendSupported
    • ClusterRequiredRngSourcesDefault

Systems that have upgraded from RHV 4.0 to RHV 4.1/4.2 and are experiencing problems with these features should upgrade to RHV 4.2.5 or later.


Previously, the VM Portal displayed all clusters, regardless of user permissions. The current release fixes this issue by fetching user permissions and displaying only those clusters which the user has permissions to use.


In this release, the oVirt release package for master, ovirt-release-master, enables a new repository hosted on the Cool Other Package Repositories (COPR) service for delivering ovirt-web-ui packages.


The current release replaces Fluentd with Rsyslog for collecting oVirt logs and collectd metrics. Hosts upgraded from 4.2 will still have Fluentd installed, but the service is disabled and stopped. After upgrading to 4.3, you can remove the Fluentd packages.


The current release replaces Fluentd with Rsyslog for collecting oVirt logs and collectd metrics. Systems upgraded from 4.2 will still have Fluentd installed but it will be disabled and stopped. After upgrading to 4.3, you can remove the Fluentd packages.


Red Hat Virtualization Manager now requires JBoss Enterprise Application Platform.


Context-sensitive help has been removed from Red Hat Virtualization (RHV) 4.3. RHV user interfaces no longer include small question mark icons for displaying context-sensitive help information.

To access the RHV documentation, use the RHV welcome page and the Red Hat Documentation tab.


The current release removes the VDSM daemon’s support for cluster levels 3.6/4.0 and Red Hat Virtualization Manager 3.6/4.0. This means that VDSM from RHV 4.3 cannot be used with the Manager from RHV 3.6/4.0. To use the new version of VDSM, upgrade the Manager to version 4.1 or later.


oVirt now requires WildFly version 15.0.1 or later.


Previously, python-openvswitch used a compiled C extension wrapper within the library for faster JSON processing. The memory object used to store the JSON response was never freed, causing a memory leak. The current release fixes this issue by de-allocating the memory used by the JSON parser so that the memory is recovered.

4.1.6. Known Issues

These known issues exist in Red Hat Virtualization at this time:


When the ISO Uploader uploads ISO image files, it sets the file permissions incorrectly to -rw-r-----. Because the permissions for "other" are none, the ISO files are not visible in the Administration Portal.

Although the ISO Uploader has been deprecated, it is still available. To work around the permissions issue, set the ISO file permissions to -rw-r--r-- by entering: chmod 644 filename.iso

Verify that the system is configured as described in the "Preparing and Adding NFS Storage" section of the Administration Guide for Red Hat Virtualization.

The above recommendations may also apply if you encounter permissions or visibility issues while using the following alternatives to the ISO Uploader:

    • Manually copying an ISO file to the ISO storage domain.
    • In version 4.2 of Red Hat Virtualization onward, uploading virtual disk images and ISO images to the data storage domain using the Administration Portal or REST API.


If the same iSCSI target is used to create two or more storage domains, the iSCSI session is not logged out, even if the storage domain is put into maintenance mode. Red Hat recommends using different iSCSI targets to create different storage domains. To work around this issue, restart the hypervisor host.


In the current release, Q35 machines cannot support more than 500 devices.


VDSM uses lldpad. Due to a bug, lldpad confuses NetXtreme II BCM57810 FCoE-enabled cards. When the VDSM configuration enables lldpad to read LLDP data from the card, it renders the card unusable. To work around this issue, set enable_lldp=false in vdsm.conf.d and restart VDSM. Check that LLDP is disabled on all relevant interfaces by entering the command "lldptool get-lldp -i $ifname adminStatus". If LLDP is enabled, disable it by entering "lldptool set-lldp -i $ifname adminStatus=disabled". After ensuring that LLDP support is disabled in VDSM, networking should be unaffected.
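The workaround above can be sketched as follows (the drop-in file name and interface name are illustrative):

```shell
# Disable lldpad support in VDSM via a vdsm.conf.d drop-in file
cat > /etc/vdsm/vdsm.conf.d/99-disable-lldp.conf <<'EOF'
[vars]
enable_lldp = false
EOF
systemctl restart vdsmd

# Verify and, if needed, disable LLDP on the affected interface
lldptool get-lldp -i em1 adminStatus
lldptool set-lldp -i em1 adminStatus=disabled
```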

4.1.7. Deprecated Functionality

The items in this section are either no longer supported or will no longer be supported in a future release.


With this update, ovirt-image-uploader has been retired. In Red Hat Virtualization 4.0, ovirt-image-uploader was deprecated in favor of ovirt-imageio.


The ovirt-shell tool has been deprecated since RHV 4.0 and has not been updated since then. It is included in RHV 4.3 and later so that existing scripts do not break, but the tool is now unsupported.


Version 3 of the REST API has been deprecated as of RHV version 4.0 and is no longer supported as of RHV version 4.3, along with ovirt-shell and version 3 of the Python, Ruby, and Java SDKs.


The "Scan Alignment" feature in the previous versions of the Administration Portal is only relevant to guest OSes that are outdated and unsupported.

The current release removes this "Scan Alignment" feature, along with historical records of disks being aligned or misaligned.


Conroe and Penryn CPU types are no longer supported. They will not appear as options for Compatibility Version 4.3, and a warning is displayed for older versions.


The ovirt-engine-cli package uses the version 3 REST API which is deprecated and unsupported. With this update, ovirt-engine-cli is no longer a dependency and is not installed by default.