Release Notes

Red Hat Virtualization 4.2

Release notes for Red Hat Virtualization 4.2

Red Hat Virtualization Documentation Team

Red Hat Customer Content Services

Abstract

The Release Notes provide high-level coverage of the improvements and additions that have been implemented in Red Hat Virtualization 4.2.

Chapter 1. Introduction

1.1. Introduction to Red Hat Virtualization

Red Hat Virtualization is an enterprise-grade server and desktop virtualization platform built on Red Hat Enterprise Linux. See the Product Guide for more information.

1.2. Subscriptions

To install the Red Hat Virtualization Manager and hosts, your systems must be registered with the Content Delivery Network using Red Hat Subscription Management. This section outlines the subscriptions and repositories required to set up a Red Hat Virtualization environment.

Important

Red Hat is transitioning from the RHN-hosted interface to the Red Hat Subscription Management (RHSM) interfaces by July 31, 2017. If your current systems are registered to RHN Classic, see Migrating from RHN Classic to Red Hat Subscription Management (RHSM) for Red Hat Virtualization for instructions on how to migrate your systems to RHSM.

1.2.1. Required Subscriptions and Repositories

The packages provided in the following repositories are required to install and configure a functioning Red Hat Virtualization environment. When one of these repositories is required to install a package, the steps required to enable the repository are provided in the appropriate location in the Installation Guide or Self-Hosted Engine Guide.

Table 1.1. Red Hat Virtualization Manager

Subscription Pool | Repository Name | Repository Label | Details
Red Hat Enterprise Linux Server | Red Hat Enterprise Linux Server | rhel-7-server-rpms | Provides the Red Hat Enterprise Linux 7 Server.
Red Hat Enterprise Linux Server | RHEL Server Supplementary | rhel-7-server-supplementary-rpms | Provides the virtio-win package, which provides the Windows VirtIO drivers for use in virtual machines.
Red Hat Virtualization | Red Hat Virtualization | rhel-7-server-rhv-4.2-manager-rpms | Provides the Red Hat Virtualization Manager.
Red Hat Virtualization | Red Hat Virtualization Tools | rhel-7-server-rhv-4-manager-tools-rpms | Provides dependencies for the Red Hat Virtualization Manager that are common to all Red Hat Virtualization 4 releases.
Red Hat Ansible Engine | Red Hat Ansible Engine | rhel-7-server-ansible-2.9-rpms | Provides Red Hat Ansible Engine.
Red Hat Virtualization | Red Hat JBoss Enterprise Application Platform | jb-eap-7-for-rhel-7-server-rpms | Provides the supported release of Red Hat JBoss Enterprise Application Platform on which the Manager runs.

Table 1.2. Red Hat Virtualization Host

Subscription Pool | Repository Name | Repository Label | Details
Red Hat Virtualization | Red Hat Virtualization Host | rhel-7-server-rhvh-4-rpms | Provides the rhev-hypervisor7-ng-image-update package, which allows you to update the image installed on the host.

Table 1.3. Red Hat Enterprise Linux 7 Hosts

Subscription Pool | Repository Name | Repository Label | Details
Red Hat Enterprise Linux Server | Red Hat Enterprise Linux Server | rhel-7-server-rpms | Provides the Red Hat Enterprise Linux 7 Server.
Red Hat Virtualization | Red Hat Virtualization Management Agents (RPMs) | rhel-7-server-rhv-4-mgmt-agent-rpms | Provides the QEMU and KVM packages required for using Red Hat Enterprise Linux 7 servers as virtualization hosts.
Red Hat Ansible Engine | Red Hat Ansible Engine | rhel-7-server-ansible-2.9-rpms | Provides Red Hat Ansible Engine.

1.2.2. Optional Subscriptions and Repositories

The packages provided in the following repositories are not required to install and configure a functioning Red Hat Virtualization environment. However, they are required to install packages that provide supporting functionality on virtual machines and client systems such as virtual machine resource monitoring. When one of these repositories is required to install a package, the steps required to enable the repository are provided in the appropriate location in the Installation Guide or Self-Hosted Engine Guide.

Chapter 2. RHV for IBM Power

This release supports Red Hat Enterprise Linux 7 hosts on IBM POWER8, little endian hardware and Red Hat Enterprise Linux 7 virtual machines on emulated IBM POWER8 hardware. From Red Hat Virtualization 4.2.6, Red Hat Enterprise Linux hosts are supported on IBM POWER9, little endian hardware, and Red Hat Enterprise Linux 7 virtual machines are supported on emulated IBM POWER9 hardware.

Important

Previous releases of RHV for IBM Power required Red Hat Enterprise Linux hosts on POWER8 hardware to be installed from an ISO image. These hosts cannot be updated for use with this release. You must reinstall Red Hat Enterprise Linux 7 hosts using the repositories outlined below.

The packages provided in the following repositories are required to install and configure aspects of a Red Hat Virtualization environment on POWER8 hardware.

Table 2.1. Required Subscriptions and Repositories for IBM POWER8, little endian hardware

Component | Subscription Pool | Repository Name | Repository Label | Details
Red Hat Virtualization Manager | Red Hat Virtualization for IBM Power | Red Hat Virtualization for IBM Power | rhel-7-server-rhv-4-power-rpms | Provides the Red Hat Virtualization Manager for use with IBM POWER8 hosts. The Manager itself must be installed on x86_64 architecture.
Red Hat Enterprise Linux 7 hosts, little endian | Red Hat Enterprise Linux for Power, little endian | RHV Management Agent for IBM Power, little endian | rhel-7-server-rhv-4-mgmt-agent-for-power-le-rpms | Provides the QEMU and KVM packages required for using Red Hat Enterprise Linux 7 servers on IBM Power (little endian) hardware as virtualization hosts.
 | Red Hat Enterprise Linux for Power, little endian | Red Hat Enterprise Linux for IBM Power, little endian | rhel-7-for-power-le-rpms | Provides additional packages required for using Red Hat Enterprise Linux 7 servers on IBM Power (little endian) hardware as virtualization hosts.
Red Hat Enterprise Linux 7 virtual machines, big endian | Red Hat Enterprise Linux for Power, big endian | RHV Tools for IBM Power | rhel-7-server-rhv-4-tools-for-power-le-rpms | Provides the ovirt-guest-agent-common package for Red Hat Enterprise Linux 7 virtual machines on emulated IBM Power (big endian) hardware. The guest agents allow you to monitor virtual machine resources on Red Hat Enterprise Linux 7 clients.
Red Hat Enterprise Linux 7 virtual machines, little endian | Red Hat Enterprise Linux for Power, little endian | RHV Tools for IBM Power, little endian | rhel-7-server-rhv-4-tools-for-power-le-rpms | Provides the ovirt-guest-agent-common package for Red Hat Enterprise Linux 7 virtual machines on emulated IBM Power (little endian) hardware. The guest agents allow you to monitor virtual machine resources on Red Hat Enterprise Linux 7 clients.

Table 2.2. Required Subscriptions and Repositories for IBM POWER9, little endian hardware

Component | Subscription Pool | Repository Name | Repository Label | Details
Red Hat Virtualization Manager | Red Hat Virtualization for IBM Power | Red Hat Virtualization for IBM Power | rhel-7-server-rhv-4-power-rpms | Provides the Red Hat Virtualization Manager for use with IBM POWER9 hosts. The Manager itself must be installed on x86_64 architecture.
Red Hat Enterprise Linux 7 hosts, little endian | Red Hat Enterprise Linux for Power, little endian | RHV Management Agent for IBM Power, little endian | rhel-7-server-rhv-4-mgmt-agent-for-power-9-rpms | Provides the QEMU and KVM packages required for using Red Hat Enterprise Linux 7 servers on IBM Power (little endian) hardware as virtualization hosts.
 | Red Hat Enterprise Linux for Power, little endian | Red Hat Enterprise Linux for IBM Power, little endian | rhel-7-for-power-9-rpms | Provides additional packages required for using Red Hat Enterprise Linux 7 servers on IBM Power (little endian) hardware as virtualization hosts.
Red Hat Enterprise Linux 7 virtual machines, big endian | Red Hat Enterprise Linux for Power, big endian | RHV Tools for IBM Power | rhel-7-server-rhv-4-tools-for-power-le-rpms | Provides the ovirt-guest-agent-common package for Red Hat Enterprise Linux 7 virtual machines on emulated IBM Power (big endian) hardware. The guest agents allow you to monitor virtual machine resources on Red Hat Enterprise Linux 7 clients.
Red Hat Enterprise Linux 7 virtual machines, little endian | Red Hat Enterprise Linux for Power, little endian | RHV Tools for IBM Power, little endian | rhel-7-server-rhv-4-tools-for-power-le-rpms | Provides the ovirt-guest-agent-common package for Red Hat Enterprise Linux 7 virtual machines on emulated IBM Power (little endian) hardware. The guest agents allow you to monitor virtual machine resources on Red Hat Enterprise Linux 7 clients.

Unsupported Features

The following Red Hat Virtualization features are not supported:

  • SPICE display
  • SmartCard
  • Sound device
  • Guest SSO
  • Integration with OpenStack Networking (Neutron), OpenStack Image (Glance), and OpenStack Volume (Cinder)
  • Self-hosted engine
  • Red Hat Virtualization Host (RHVH)
  • Disk Block Alignment

For a full list of bugs that affect the RHV for IBM Power release, see https://bugzilla.redhat.com/show_bug.cgi?id=1444027.

Chapter 3. Technology Preview and Deprecated Features

3.1. Technology Preview Features

Important

Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope.

The following features are available as Technology Previews in Red Hat Virtualization:

NoVNC console option
Option for opening a virtual machine console in the browser using HTML5.
Websocket proxy
Allows users to connect to virtual machines through a noVNC console.
VDSM hook for nested virtualization
Allows a virtual machine to serve as a host.
Import Debian and Ubuntu virtual machines from VMware and Xen

Allows virt-v2v to convert Debian and Ubuntu virtual machines from VMware or RHEL 5 Xen to KVM.

Known Issues:

  • virt-v2v cannot change the default kernel in the GRUB2 configuration. The kernel configured on the guest operating system is not changed during the conversion, even if a more optimal version is available.
  • After converting a Debian or Ubuntu virtual machine from VMware to KVM, the name of the virtual machine’s network interface may change, and will need to be configured manually.
Open vSwitch cluster type support
Adds Open vSwitch networking capabilities.
moVirt
Mobile Android app for Red Hat Virtualization.
Shared and local storage in the same data center
Allows the creation of single-brick Gluster volumes to enable local storage to be used as a storage domain in shared data centers.

3.2. Deprecated Features

The following features are deprecated and will be removed in a future version of Red Hat Virtualization:

Version 3 REST API
Use the version 4 REST API.
Export domains

Use a data domain. You can migrate data domains between data centers and import the virtual machines into the new data center.

In Red Hat Virtualization 4.2, some tasks may still require the export domain.

ISO domains

Use a data domain. You can upload images to data domains.

In Red Hat Virtualization 4.2, some tasks may still require the ISO domain.

RHEV-M Shell
Use the Java, Python, or Ruby SDK, or the version 4 REST API.
Iptables
Use the firewalld service.
Conroe, Penryn, Opteron G1, Opteron G2, and Opteron G3 CPU types
Use newer CPU types.
IBRS CPU types
Use newer fixes.
3.6 cluster compatibility version
Use a newer cluster compatibility version. You can upgrade the compatibility version of existing clusters.

Chapter 4. Release Information

These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat Virtualization.

Notes for updates released during the support lifecycle of this Red Hat Virtualization release will appear in the advisory text associated with each update or the Red Hat Virtualization Technical Notes. This document is available from the following page:

https://access.redhat.com/documentation/en-us/red_hat_virtualization

4.1. Red Hat Virtualization 4.2 General Availability

4.1.1. Enhancements

This release of Red Hat Virtualization features the following enhancements:

BZ#659847

You can export virtual machines as named OVF/OVA files and save them to a specific path on a host or mounted NFS shared storage.

BZ#723931

The Manager now displays multipath device alerts, to enable users to troubleshoot storage issues caused by faulty paths.

BZ#856337

Link Layer Discovery Protocol (LLDP) support is integrated into the Manager, so that LLDP information gathered by each host is available through the REST API and the UI. This feature improves and simplifies the detection of network configuration issues in large enterprise environments.

BZ#870884

The ovirt-log-collector tool has been changed to add the ipmitool plugin to the list of plugins executed by sosreport on the hosts. The collected data now includes hardware details provided by ipmitool.

BZ#872530

The default zeroing method has been changed from "dd" to "blkdiscard", which allows storage offloading, if supported by the storage array, and consumes much less network bandwidth. The zeroing method can be reverted by adding "zero_method = dd" to /etc/vdsm/vdsm.conf.

BZ#895356

This release adds a new spice-qxl-wddm-dod package, containing a display-only-driver (DOD) for the QXL (virtual) device. It can be installed on Windows 10 virtual machines. This driver adds support for arbitrary resolution, multiple monitors, and client mouse mode.

BZ#957788

Within the VM, direct LUNs are automatically assigned specific file systems and mount points so that they can be identified in /dev/disk/by-id by the 'lvm-pv-uuid' prefix.

BZ#988285

The Manager supports the ability to define logical networks, ports, and subnets that are not attached to the physical interfaces of the host. This enables the virtual network interfaces to create an isolated network within Red Hat Virtualization, allowing the virtual machines to communicate among themselves. This feature is configured through the REST API and the UI.

BZ#995362

This update allows the user to set the firewall type for the cluster from the Edit Cluster window. The firewall type can now be set to firewalld for all version 4.0 and later clusters. VDSM 4.20 or later is required on the host to enable firewalld support.

After changing the firewall type, all hosts in the cluster need to be reinstalled using the Reinstall action to apply this change.

BZ#1056502

This update allows you to search in Red Hat Virtualization Manager using more than one tag.
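
For illustration, a search-bar query of the following form combines two tags (the tag names are hypothetical):

  Vms: tag=production and tag=linux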

BZ#1066114

collectd's swap plugin has been expanded to provide the following swap statistics: used, free, cached, swap_io-in, and swap_io-out.

BZ#1073370

This update adds the ability to navigate between objects of the same type by using the Up and Down keys. For example, while viewing a virtual machine, press Up to go to the previous virtual machine.

BZ#1096497

Glance images returned by the REST API now list their size and type (iso/disk).

BZ#1122970

In the current release, an ISO image file can be uploaded, via the Manager or the REST API, to a data storage domain and attached to a virtual machine as a CDROM device.

BZ#1138177

Permissions on virtual machines and templates are now part of the OVF, so that when a storage domain is detached from one data center and attached to another, the virtual machines and templates can be imported with their previous permissions.

BZ#1146864

The Notification Drawer displays an icon indicating the status of each event: a yellow exclamation mark for warnings, a red x in a circle for errors, and a blue i for successful or informational events.

BZ#1149579

You can connect to more than one iSCSI target by connecting all the portals of an iSCSI portal group to enable multipathing. This must be done before running the self-hosted engine deployment script. To be as backwards-compatible as possible and to avoid requiring changes in the deployment script, it saves the multiple IP and port values as a string composed of multiple comma-separated values with the same key used by the previous version. It assumes that the login is the same for all the portals.

BZ#1150245

Memory volumes from snapshots can now be moved to other storage domains within the same data center.

BZ#1157372

The oVirt.cluster-upgrade Ansible role is now available as part of the ovirt-ansible-cluster-upgrade package, which is installed as a dependency of the ovirt-ansible-roles package.

BZ#1160667

The engine now supports explicit DNS configuration.

BZ#1168327

After live merge, the base volume is reduced to its optimal size.

BZ#1177628

PostgreSQL is configured to save log entries for a year and to include timestamps.

BZ#1181665

VDSM uses an event in libvirt 3.2.0 to obtain information about the allocation of block chunked drives and improve the efficiency of the thin provisioning implementation. This enables VDSM to consume less system resources.

BZ#1200963

In the current release, a non-management network can be set as the default network route using the Manager, instead of selecting a custom property.

BZ#1205739

A previously imported storage domain that was destroyed or detached can now be imported into an uninitialized Data Center. In the past, this operation failed because the storage domain retained its old metadata.

BZ#1207992

Virtual machines now stay operational when connectivity with CD-ROM images breaks. The error is reported to the guest operating system. Note that the configuration of the storage device may affect the time it takes to detect the error. During this time, the virtual machine is non-operational.

BZ#1228543

Previously, it was only possible to add memory to a running virtual machine (VM). To remove memory, the VM had to be shut down.
In this release, it is now possible to hot unplug memory from a running VM, provided that certain requirements are met and limitations are considered.

BZ#1263602

Additional mount options for the self-hosted engine storage domain, such as NFS mount, are available in the Manager.

BZ#1267807

All portals of an iSCSI portal group are connected to enable iSCSI multipath, saving IP and port values in a string.

BZ#1285456

In the current release, the Manager's column widths, visibility, and order are persistent.

BZ#1291177

In the Administration Portal, on the "Disks" tab, disks can be filtered by content type.

BZ#1304650

With this release, the self-hosted engine deployment script waits for the Manager VM to power up before reporting that the deployment is complete. Now when the deployment is finished, you can connect to the Manager right away instead of waiting for the VM to power up.

BZ#1317450

Previously, when a virtual machine was paused due to IO error, the only resume policy was "Auto Resume", which resumed the virtual machine. "Auto Resume" was problematic because it could interfere with custom HA solutions. In the current release, the "Kill" and "Leave Paused" resume policies have been added. "Leave Paused" was introduced for users who prefer this option because they have their own HA implementation. "Kill" allows virtual machines with a lease to automatically restart on another host in the event of irrecoverable outage.

The speed of IO Error reporting depends on the underlying storage protocol. On FC storage, IO Errors are generally detected quickly, while on NFS mounts with typical default settings, they may not be detected for several minutes.

BZ#1317739

In the current release, during the hosted engine deployment, if only one configured NIC is available on the host, it is offered as the default option.

BZ#1319524

This update enables you to export a virtual machine as an OVA file to a specified path on a host in the data center. The specified path can also be NFS shared storage that is mounted on the host.

BZ#1319758

If an OVA file is accessible to at least one host in a data center, the OVA file can now be imported into the data center as a virtual machine.

BZ#1324532

If a user runs 'engine-upgrade-check' without running 'engine-setup', a warning is displayed, informing the user that the system may not be fully up to date, even if engine-upgrade-check reports that no upgrade is available.

BZ#1330217

Cloud-Init supports IPv6 properties for initializing a virtual machine's network interfaces.

BZ#1334982

Previously, in an emergency, users were required to shut down the hosts to preserve the data center. This caused running virtual machines to be killed by the systemd process without performing a graceful shutdown. As a result, the virtual machines' state became undefined, which led to problematic scenarios for virtual machines running databases such as Oracle and SAP.

In this release, virtual machines can be gracefully shut down by delaying the systemd process. After the virtual machines are shut down, the systemd process takes control and continues the shutdown. VDSM is shut down only after the virtual machines have been gracefully shut down, after passing information to the Manager and waiting 5 seconds for the Manager to acknowledge that the virtual machines have been shut down.

BZ#1335642

A virtual machine can now be sealed, in the Manager, in preparation for deployment as a template.

BZ#1364947

In the current release, the storage domain is prevented from going into maintenance if the OVF update fails. An optional check box to force maintenance mode, if desired, has been added.

BZ#1365834

The rhev-guest-tools-iso package is now named rhv-guest-tools-iso. Also, the ISO filenames were modified to match the re-branding.

BZ#1366905

The engine already supports filtering the network communication of VMs. Now it is possible to configure the filter parameters using the REST-API.

See http://www.ovirt.org/develop/release-management/features/network/networkfilterparameters/#current-implementation-status for details.

BZ#1367806

A new VDSM parameter enables a host to remove a disk/snapshot on block storage, where "Wipe After Delete" is enabled, in much less time than the "dd" command, especially if the storage supports "Write same."

BZ#1372163

When live or cold merge fails, snapshot disks may be left in an illegal state. If VMs with illegal snapshot disks are shut down, they will not re-start. VMs with illegal snapshot disks are now marked with an exclamation mark and a warning message not to shut them down.

BZ#1374007

In this release, a new version of Anaconda now includes storage constraint checks and default settings for Red Hat Virtualization Hosts (RHVH), which require a special partitioning layout. When custom partitioning is selected, LVM-thin is the default for RHVH.

BZ#1376843

The Python JSON-RPC client will reconnect if the connection is interrupted.

BZ#1379309

This update integrates Red Hat Virtualization with the Red Hat Gluster Storage events framework to determine the real-time status of Red Hat Gluster Storage volume entities.

BZ#1388595

The previous default size of the pool of available MAC addresses, 133, was too small; it has been increased to 1024.

BZ#1389673

Previously, administrators had to enter an unencrypted password when invoking 'ovirt-aaa-jdbc-tool user password-reset'. The password was then encrypted inside ovirt-aaa-jdbc-tool and stored in the database.

This update enables administrators to use the new --encrypted option to enter an already encrypted password when invoking 'ovirt-aaa-jdbc-tool user password-reset'.

However, there are some caveats when providing encrypted passwords:

1. Entering an encrypted password means that password validity tests cannot be performed, so they are skipped and the password is accepted even if it does not comply with the password validation policy.

2. A password has to be encrypted using the same configured algorithm. To encrypt passwords, administrators can use the '/usr/share/ovirt-engine/bin/ovirt-engine-crypto-tool.sh' tool, which provides the 'pbe-encode' command to encrypt passwords using the default PBKDF2WithHmacSHA1 algorithm.

BZ#1391859

A new feature greatly reduces the time required to create thick allocated disks on file-based storage.

BZ#1396925

Otopi can now optionally write its own answer files, which are simpler to understand compared to tool-specific files written by existing tools that use otopi. The functionality is also different, imitating more closely the behavior without an answer file and answers provided interactively.

To generate:

 engine-setup --otopi-environment=DIALOG/answerFile=str:/tmp/ans1

To use:

 engine-setup --config-append=/tmp/ans1

BZ#1399609

When migrating virtual machines, the host selection drop-down list filters the results to display only appropriate destination hosts.

BZ#1400890

Previously, although you could use engine-rename to change the FQDN of the Manager, the Manager still passed the old value when deploying new self-hosted engine nodes. Now, you can use hosted-engine --set-shared-config to modify the FQDN in hosted-engine.conf.

BZ#1404389

The engine can update a direct LUN's missing or outdated information.

BZ#1404509

When collecting SOS reports from hosts, chrony and systemd SOS plugins can now collect information about time synchronization. In addition, a new --time-only option has been added to ovirt-log-collector, allowing information about time differences to be gathered from the hosts without gathering full SOS reports, saving a considerable amount of time for the operation.
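
For example, the time-difference data alone can be gathered with the option mentioned above (other options behave as in a normal run):

  # ovirt-log-collector --time-only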

BZ#1405603

Renamed rhevm-setup-plugins to rhv-setup-plugins to match product name.

BZ#1405805

The REST API now supports download/upload of snapshots for backup and restoration.

BZ#1408847

Red Hat Virtualization now supports TLSv1.2.

BZ#1409766

This release adds a maintenance tool to run vacuum actions on the DWH database or specific tables.

The tool optimizes table stats and compacts the tables, resulting in less disk space usage. This allows for more efficient maintenance, and the updated table stats allow for better query planning.

Also provided is an engine-setup dialog that offers to perform vacuum during upgrades. This can be automated by the answer file.

BZ#1411100

From now on, all timestamp records for otopi-based tools logs (including engine-setup, host-deploy, and hosted-engine --deploy) will contain a time zone to ease correlation between logs on the Manager and hosts. They will also now include the fraction of a second. Previously they contained a timestamp without a time zone and fraction of a second, for example:

2017-04-03 09:56:58 DEBUG otopi.context context.dumpEnvironment:760 ENVIRONMENT DUMP - BEGIN

From now on there will always be a comma and fraction of a second after the seconds part, and a timezone identifier at the end of the timestamp part, for example:

2017-04-05 10:41:08,500+0300 DEBUG otopi.context context.dumpEnvironment:760 ENVIRONMENT DUMP - BEGIN

BZ#1413316

For sosreport 3.4 and later, log-collector enables the collectd plugin, which collects the collectd configuration.

BZ#1416141

You can now schedule jobs using JBoss ManagedThreadFactory, ManagedExecutorService, and ManagedScheduledExecutorService instead of Java EE ExecutorService and Quartz.

BZ#1416491

This update adds SSO support for OpenID Connect clients. The following new OpenID Connect discovery endpoint has been added so that clients can discover the authorization endpoints and OpenID Connect capabilities of the Manager:

https://<Manager>/ovirt-engine/sso/openid/.well-known/openid-configuration

The following endpoint is used for client authorization and for obtaining the authentication code:

https://<Manager>/ovirt-engine/sso/openid/authorize

The following endpoint is used by clients to obtain the authentication token from the authentication code:

https://<Manager>/ovirt-engine/sso/openid/token

The following endpoint can be used by clients to get details of the logged-in user:

https://<Manager>/ovirt-engine/sso/openid/userinfo

The following endpoint can be used by clients to get the keys used by SSO to sign the id_token returned from the token and tokeninfo endpoints:

https://<Manager>/ovirt-engine/sso/openid/jwks
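
As an illustration, the discovery document can be fetched with a plain HTTP GET; the Manager host name below is a placeholder, and -k is only needed if the client does not trust the Manager's certificate:

  # curl -sk https://manager.example.com/ovirt-engine/sso/openid/.well-known/openid-configuration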

BZ#1417708

Previously, the ovirt-ha-agent would open a new JSON-RPC connection each time it communicated with VDSM. Now it tries to reuse an open connection, if available, to improve performance.

BZ#1418579

SELinux is now set to Enforcing by default in the ovirt-appliance.

BZ#1420039

Added a manual refresh button to the dashboard UI, which updates the display with the currently available system summary information.
Note that utilization data updates independently, on a 5-minute schedule by default.

BZ#1420068

In this release, Red Hat Virtualization Host supports NIST SP 800-53 partitioning requirements to improve security. Environments upgrading to Red Hat Virtualization 4.2 will also be configured to match NIST SP 800-53 partitioning requirements.

BZ#1420404

The user can now decide whether a virtual machine should be warm or cold rebooted when started as "Run Once" in the Administration Portal. To facilitate this, the "Trap guest reboots" option has been renamed to "Rollback this configuration during reboots".

This enables virtual machines to start on the same host when they are run as "Run Once" and then rebooted.

BZ#1422982

Previously, the default memory allotment for the RHV-M Virtual Appliance was always large enough to include support for user additions.

In this release, the RHV-M Virtual Appliance includes a swap partition that enables the memory to be increased when required.

BZ#1425032

katello-agent is now installed by default on Red Hat Virtualization and Red Hat Virtualization Hosts (RHVH) and is included in the RHVH image.

Katello Agent sends information about installed rpms to Satellite.

BZ#1428498

You can now view virtual machines that are pinned to a host, even when they are shut down.

BZ#1429537

Self-hosted engines now require GlusterFS 3.10 instead of 3.8.

BZ#1430799

vdsm-tool now provides commands for VDSM network cleanup, such as `vdsm-tool clear-nets` and `vdsm-tool dummybr-remove`. You can remove networks configured by VDSM by following the steps below. Note that the VDSM service does not need to be running:

1. To prevent loss of connectivity, it might be necessary to exclude the default route network from the cleanup. Look for a network providing the default route (ovirtmgmt by default):
# vdsm-tool list-nets
...
ovirtmgmt (default route)
...

2. Remove all networks configured by VDSM except for the default network:
# vdsm-tool clear-nets --exclude-net ovirtmgmt

3. Remove the libvirt dummy bridge, ;vdsmdummy;:
# vdsm-tool dummybr-remove

4. Now that the host is clean, you can remove VDSM.

BZ#1433676

The Welcome page displays a link to the CA certificate.

BZ#1434306

A "Force Remove" button has been added to the Administration > Providers screen. Currently, it applies only to volume providers (Cinder). It removes the provider from the database, along with all related entities, such as storage domain, virtual machines, templates, and disks.

BZ#1437145

Previously, it was not possible to exclude from ovirt-guest-agent reports the IPs of NICs that the customer does not want to appear in them.
In this release, the customer can create a list of NICs to exclude from the reports. This is supported for Linux systems only, on a per-virtual-machine basis. A new field called "ignored_nics" has been added to /etc/ovirt-guest-agent.conf for defining space-delimited NICs.

Note that for existing VMs only, there is a known Manager caching issue whereby the NIC information is not removed as required when the NIC is blacklisted.
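
A minimal sketch of the resulting configuration, assuming the option lives in the [general] section (the section name and the NIC names are illustrative):

  # /etc/ovirt-guest-agent.conf
  [general]
  ignored_nics = docker0 virbr0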

BZ#1439332

Previously, engine-backup always used the /tmp directory, which was problematic if the directory was full. In the current release, it is possible to change the engine-backup's temporary directory.
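
A hedged example, assuming the temporary directory is selected with the --tmpdir option (verify with engine-backup --help on your version); the paths are illustrative:

  # engine-backup --mode=backup --scope=all \
        --file=/var/tmp/engine-backup.tar.gz --log=/var/tmp/engine-backup.log \
        --tmpdir=/var/tmp/engine-backup-tmp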

BZ#1441059

The ovirt-engine-notifier tool exposes events that users can subscribe to by using SMTP or SNMP ovirt-engine-notifier providers.

The virtual machine events are:
- VM_PAUSED_EIO: The machine has been paused due to a storage I/O error.
- VM_PAUSED_ENOSPC: The machine has been paused due to lack of storage space.
- VM_PAUSED_EPERM: The machine has been paused due to a storage read/write permissions problem.
- VM_PAUSED_ERROR: The machine has been paused due to an unknown storage error.
- VM_RECOVERED_FROM_PAUSE_ERROR: The machine has recovered from being paused.

BZ#1441501

Previously, the Manager configuration options with user or admin access levels were only available to GWT clients. To make option values available to non-GWT clients, they are now exposed through the following REST API:

  GET https://<ENGINE_FQDN>/ovirt-engine/api/options/<OPTION_NAME>

Where <OPTION_NAME> is an existing configuration option name. The output of that call is a list of values. To obtain a value for only a specific client version, you need to specify the requested version as follows:

  GET https://<ENGINE_FQDN>/ovirt-engine/api/options/<OPTION_NAME>?version=<VERSION>

The Manager configuration options are not backward compatible and may differ for each Manager version, and therefore existing options may not be available. Options are exposed as read-only in the REST APIs, but some of them can be updated by using the engine-config command-line tool.
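
A hedged example of querying an option with curl; the option name MaxNumOfVmCpus and the credentials are illustrative:

  # curl -s -k -u admin@internal:password \
        "https://<ENGINE_FQDN>/ovirt-engine/api/options/MaxNumOfVmCpus?version=4.2"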

BZ#1445681

For REST API clients, the SSO authentication error now appears in the body of the response, for example, "<html><head><title>Error</title></head><body>access_denied: Cannot authenticate user 'admin@internal': The username or password is incorrect.</body></html>".

BZ#1446480

This release introduces the collectd uptime plugin which collects statistics and reports on the system's uptime.

BZ#1447300

Sparsify and sysprep can now be run on POWER hosts.

BZ#1450749

The Administration Portal UI was redesigned and enhanced to match the latest PatternFly standards. Primary navigation has been converted to vertical navigation and replaces the system tree on the left side of the screen. The previous master-detail based view has been replaced with a page-per-object (one level of tabs) design.

BZ#1454368

If you install/configure the OVN provider while running engine-setup interactively, engine-setup saves the OVN responses to the answer file, so that when you run engine-setup with the generated answer file, it will not ask the questions.

BZ#1455169

This release introduces a new alternative deployment flow for self-hosted engines. The new flow will use the current engine code to create all entities needed for successful self-hosted engine deployment.
A local bootstrap VM with the RHV-M Appliance will be created on the host and hosted-engine-setup will use it (via Ansible) to add a host (the same one it is running on), storage domain, storage disks, and finally a VM that will later become the Manager VM in the engine (thus eliminating the need for importing it).
Once all the entities are created, hosted-engine-setup can shut down the bootstrap VM, copy its disk to the disk it created using the engine, create the self-hosted engine configuration files, start the agent and the broker, and the Manager VM will start.

BZ#1456414

The default value of the "Discard After Delete" field of block storage domains has been changed from "false" to "true". The "Discard After Delete" checkbox is selected in the Administration Portal and a block storage domain created with the REST API has "Discard After Delete" enabled by default.

BZ#1457239

Previously, it was difficult to configure a virtual machine (VM) to run with high performance workloads via the Administration Portal as it involved understanding and configuring a large number of settings. In addition, several features that were essential for improving the VM's performance were not supported at all, for example, huge pages and IO thread pinning.

In this release, a new optimization type called High Performance is available when configuring VMs. It is capable of running a VM with the highest possible performance, with performance metrics as close to bare metal as possible. When the customer selects High Performance optimization, some of the VM's settings are automatically configured while other settings are suggested for manual configuration.

BZ#1457471

Previously, ovirt-ha-agent was renewing its JSON RPC connection to VDSM on each message using getHardwareInfo to verify it. Now that the JSON RPC client internally supports a reconnect mechanism, there is no need to always create a fresh connection on the ovirt-ha-agent side.

BZ#1458501

The REST-API now supports LLDP. See http://www.ovirt.org/develop/release-management/features/network/lldp/ for details.

BZ#1459134

With this release, the engine now requires PostgreSQL 9.5 or later, which provides new features and better performance. The engine-setup tool can help upgrade an existing database to Software Collections PostgreSQL 9.5, as well as use that version for new setups.

BZ#1459908

The precision of the rx_rate, tx_rate, rx_drop, and tx_drop of virtual and host network interfaces has been increased, enabling the REST API to generate more accurate network interface statistics.

BZ#1460609

A new package, ovirt-host, has been introduced, consolidating host package requirements into a single meta package.

BZ#1461251

Users can provide a path to a locally saved appliance OVA in cases where the appliance RPM is not available or not wanted.

BZ#1462294

Using the ovirt-engine-extension-aaa-ldap-setup tool, it is possible to configure an Active Directory forest with multi-domain trust, or an Active Directory forest with a single domain. However, it is currently not possible to configure a single domain from a multi-domain Active Directory forest, because this is an advanced configuration that is difficult to perform automatically.

This update provides common advanced Active Directory configuration examples that users can copy and adapt to their local environment. Those examples are bundled within the ovirt-engine-extension-aaa-ldap package, and can be found at /usr/share/ovirt-engine-extension-aaa-ldap/examples/README.md.

The ovirt-engine-extension-aaa-ldap-setup tool user experience has also been improved with the following changes:

- Added more detailed error reporting for various Active Directory forest configuration steps.
- Made the login test mandatory to test the provided configuration.

BZ#1462629

The Network Interfaces tab in the Compute > Host details view has been re-designed so that important NIC information is always visible and more details can be viewed by expanding the Expand/Collapse button of each network interface.

BZ#1462811

The Ansible role ovirt-host-deploy[1] is now executed as part of the host installation/reinstallation flow. This role is included in the ovirt-ansible-roles package and is installed on the Red Hat Virtualization Manager by default.

[1] /usr/share/doc/ansible/roles/ovirt-host-deploy/README.md

BZ#1462821

The ovirt-ansible-roles package contains Ansible roles that help administrators with common Red Hat Virtualization tasks. All roles can be executed from the command line using Ansible, but some of those roles are executed directly from Red Hat Virtualization Manager. More details about the roles can be found in the README.md file included in the package[1] or directly in the source code repository[2].

[1] /usr/share/doc/ovirt-ansible-roles/README.md
[2] https://github.com/ovirt/ovirt-ansible

BZ#1463633

This update enables Red Hat Virtualization API users to request that contents of some of the entity's links be returned inline, inside the requested entity.

Previously, to retrieve multiple related objects from the API, the only alternative was to retrieve the first one, then send additional requests to retrieve the related objects. For example, if you needed a virtual machine and also the disks and NICs, you would need to first send a request like this:

  GET /ovirt-engine/api/vms/123

And then send additional requests to get the disk attachments, the disks, and the NICs:

  GET /ovirt-engine/api/vms/123/diskattachments GET /ovirt-engine/api/disks/456 GET /ovirt-engine/api/disks/789 GET /ovirt-engine/api/vms/123/nics

In environments with high latency, this increases the time required to retrieve the data. In addition, it also means that multiple queries have to be sent to the database to retrieve the data. To improve this, a new follow parameter has been introduced. This parameter is a list of links that the server should follow and populate. For example, the previous scenario can be handled by sending this request:

GET /ovirt-engine/api/vms/123?follow=diskattachments.disks,nics

This will return the virtual machine with the disks and the NICs embedded in the same response, thus avoiding multiple network round-trips. The multiple database queries are avoided only if the server has been modified to retrieve that data with more efficient queries; otherwise the server falls back to calling itself to retrieve the data, which does not reduce the number of queries.

BZ#1463853

Previously, the partitioning scheme for the RHV-M Virtual Appliance included two primary partitions, "/" and swap.

In this release, the disk partitioning scheme has been modified to match the scheme specified by NIST. The updated disk partitions are as follows:

/boot 1G (primary)
/home 1G (lvm)
/tmp 2G (lvm)
/var 20G (lvm)
/var/log 10G (lvm)
/var/log/audit 1G (lvm)
swap 8G (lvm)
/ 6G (primary)

BZ#1464486

Previously, the version tag was used as part of the RPM's naming scheme, for example, "4.1.timestamp", which created differences between the upstream and downstream versioning schemes. In this release, the downstream versioning scheme is aligned with the upstream scheme and the timestamp has moved from the version tag to the release tag.

BZ#1468965

All audit log messages around login or logout now contain not only a username, but also the IP address of the client from which a user is connecting and the ID of the session (if it exists), to be able to distinguish between several connections from a single client.

BZ#1471833

The configure_ovirt_machines_for_metrics script is able to pass optional Ansible parameters to the ansible-playbook, for greater flexibility.

BZ#1472747

You can authenticate ovirt-provider-ovn against Active Directory.

To authenticate via user/password, set ovirt-admin-user-name=<admin_username> in /etc/ovirt-provider-ovn/conf.d, and use <admin_username>@<ad_domain>@<auth_profile> when defining the external provider in the Manager.

To authenticate with an active directory group, set the following in /etc/ovirt-provider-ovn/conf.d:

[AUTH]
auth-plugin=auth.plugins.ovirt:AuthorizationByGroup

[OVIRT]
ovirt-admin-role-id=def00005-0000-0000-0000-def000000005
ovirt-admin-group-attribute-name=AAA_AUTHZ_GROUP_NAME;java.lang.String;0eebe54f-b429-44f3-aa80-4704cbb16835

and use <admin_username>@<ad_domain>@<auth_profile> when defining the external provider in the Manager.

BZ#1474209

Previously, hosted-engine-setup assumed that the user set the same CHAP username and password for both iSCSI discovery and iSCSI login. Now, the user can pass different username and password couples for iSCSI discovery and iSCSI login at setup time.

BZ#1475113

collectd and fluentd are deployed and configured to send data, such as CPU usage, memory, interface metrics, to a remote metrics store as part of the Metrics Store feature.

BZ#1475780

The default zeroing method has been changed from "dd" to "blkdiscard". The new default method performs significantly better than "dd", as it can use storage offloading (if it is supported by the storage array), and it consumes much less network bandwidth. If required, it is possible to locally set the zeroing method back to "dd" by changing the "zero_method" parameter in /etc/vdsm/vdsm.conf to "dd".
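
A minimal sketch of the override, assuming the zero_method option belongs under the [irs] section of /etc/vdsm/vdsm.conf (the section name is an assumption) and that VDSM is restarted afterwards:

  # /etc/vdsm/vdsm.conf
  [irs]
  zero_method = dd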

BZ#1479714

This update provides support for running Self-hosted Engine on replica 1 gluster volumes to enable single node hyperconverged deployments.

BZ#1480433

A host must be reinstalled if you turn Kdump integration on or off, or change the kernel command line parameters.

All hosts in a cluster must be reinstalled if you change the firewall type of the cluster.

An exclamation mark icon in Compute > Hosts indicates that a host needs to be reinstalled. Details appear in the host's details view, in the Events tab.

BZ#1483305

The REST API supports the download of disk snapshots.

BZ#1484058

The Manager supports disk download using the browser.

BZ#1484060

This release adds a progress bar to the status column of a disk during image download. In addition, the user no longer needs to click Download -> Cancel when the download is complete.

BZ#1486006

"SUSE Linux Enterprise Server 11+" has been added to the list of guest operating systems in the Manager.

BZ#1486207

The oVirt.cluster-upgrade role can be used to automate upgrading all hosts in a cluster.
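
A hedged invocation sketch: the playbook name and the variable names (engine_url, engine_user, engine_password, cluster_name) are assumptions based on common oVirt Ansible role conventions; consult the role's README for the authoritative list:

  # ansible-playbook -i localhost, \
        -e "engine_url=https://manager.example.com/ovirt-engine/api" \
        -e "engine_user=admin@internal" \
        -e "engine_password=secret" \
        -e "cluster_name=Default" \
        cluster-upgrade.yml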

BZ#1486237

The oVirt.infra Ansible role can be used to automate data center setup.

BZ#1486239

The oVirt.image-template Ansible role can be used to automate creating a virtual machine template from an external image.

BZ#1486243

The oVirt.manageiq Ansible role can be used to download a ManageIQ or Red Hat CloudForms QCOW2 image and create a virtual machine from it. You can also register a provider in the ManageIQ or Red Hat CloudForms installation.

BZ#1486249

The oVirt.vm-infra Ansible role can be used to automate creating a virtual machine infrastructure.

BZ#1486712

Previously, there was no way to identify external virtual machines in the Administration Portal. Only basic operations can be performed on external machines as they are not managed by the Manager. In this release, external machines can be identified by the prefix "external".

BZ#1488014

Logs of the Ansible metrics run are saved to /var/log/ovirt-engine/ansible/.

BZ#1488466

The taskcleaner and unlock_entity execution scripts have both had their options for user, database, port, and server names removed (-u, -d, -p, -s). There is also no longer any need to export PGPASSWORD. All values are now obtained from the Manager configuration files.

BZ#1489328

Previously, administrators had to pass a specific entity type ("vm", "disk", or "snapshot") using the -q option for each unlock_entity.sh invocation, but there was no way to process all entity types within a single invocation. Now, there is a new "all" value which can be passed using the -q option to process all entity types at once.
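
For illustration, assuming the script's usual location under the Manager's dbutils directory (the path is an assumption), the new value can be passed as described above:

  # /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q all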

BZ#1489567

Red Hat Virtualization Manager now displays the Red Hat Virtualization Host version installed.

BZ#1490041

The ipa-client package is now installed on hosts by default and is included in Red Hat Virtualization Host.

IPA Client allows integration of Cockpit certificate signing and SSO with Red Hat IdM, and adds the host to the IdM realm.

BZ#1490447

Previously, data was collected from all hosts in a cluster, which created an output file that was too large to handle. In this release, the hypervisor-per-cluster option enables you to collect data from a single host (the Storage Pool Manager, if available) per cluster.

BZ#1490866

Iptables has been deprecated in Red Hat Virtualization 4.2 and will be completely removed in version 4.3. Administrators must switch to firewalld, which is introduced in version 4.2. Otherwise, the Manager will review the clusters every 30 days and raise warning events in the audit log. A warning message has been added to engine-config help for all Iptables-related settings.

BZ#1491771

RHV-M allows storage domain creation with a single Gluster brick.

BZ#1492067

An "mdev_type" column now appears in the host devices list. Previously, if a user wanted to pass through an mdev device to a virtual machine, they needed to find the mdev_type of the device by SSHing into the hypervisor and running a VDSM command. Now, this information is exposed in the UI.

BZ#1492706

The engine now requires OVN.

BZ#1496382

It is now possible to hot unplug memory from a running virtual machine using REST APIs. However, there are some limitations:

- Only previously hot plugged memory devices can be removed.
- The guest OS must support memory hot unplug.
- All blocks of the previously hot plugged memory must be onlined as movable.
- It is not recommended to combine memory hot unplug with memory ballooning.
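
A minimal sketch, assuming hot unplug is requested by lowering the virtual machine's memory value through the REST API; the VM ID, memory size, and credentials are illustrative:

  # curl -s -k -u admin@internal:password \
        -H "Content-Type: application/xml" -X PUT \
        -d "<vm><memory>2147483648</memory></vm>" \
        "https://manager.example.com/ovirt-engine/api/vms/123"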

BZ#1497612

This update includes the ovirt-cockpit-sso package to provide Single Sign On (SSO) for the host console (Cockpit) from the Administration Portal.

To enable it on the Manager machine:

1) Install the ovirt-cockpit-sso package:
   # yum install ovirt-cockpit-sso

2) Enable and start the services:
   # systemctl enable cockpit.socket
   # systemctl enable ovirt-cockpit-sso
   # systemctl start ovirt-cockpit-sso

The service listens on port 9986/tcp.

To use the service:

1) Log into the Administration Portal.
2) Select "Host".
3) Click the "Host Console" button.
4) Optionally enable pop-up windows in your browser.
5) Optionally confirm the host's fingerprint, if prompted.

The new browser will appear with a root Cockpit session.

Note: For the best experience, the cockpit-ovirt package must be installed on the selected Red Hat Virtualization Host.

BZ#1498327

Decreased the monitoring loop execution time by no longer searching for OVF storage locations on every loop iteration. The location is now saved and reused, and is invalidated only in case of error.

BZ#1500579

In the current release, the Balloon service, which provides usage information, is installed and enabled in the host by default, wherever the Balloon driver is installed.

BZ#1503148

NTP is deprecated in favor of chrony in RHV 4.2. The updated default configuration allows users upgrading RHV-H to RHV 4.2 to transition seamlessly from NTP to chrony without intervention.

BZ#1505398

If Open vSwitch is already installed, the engine installation will fail and an Open vSwitch conflict warning will appear. (This will not occur if Open vSwitch has never been installed.) The solution is to download the correct version of Open vSwitch (for example, "yumdownloader openvswitch-2.7.2") and to install the RPM ("yum downgrade openvswitch-2.7.2-10.git20170914.el7fdp.x86_64.rpm"). If using the RPM directly causes yum to display a warning, clear the yum cache: "yum clean all".

BZ#1506217

Previously, unresponsive hosts with power management enabled had to be fenced manually. In the current release, the Manager, upon start-up, will automatically attempt to fence the hosts after a configurable period (5 minutes, by default) of inactivity has elapsed.

BZ#1506697

Virtual machine file systems are no longer frozen while taking the temporary snapshots created during live storage migration. This improves the speed of the migration.

BZ#1507277

You can now select vNIC profiles for templates when registering them from a storage domain.

BZ#1507427

This release introduces a hook mechanism on one of the setup Ansible playbooks, allowing you to add custom values for engine-setup to the hosted-engine-setup answer file. You can place custom .yml files with Ansible tasks under /usr/share/ovirt-hosted-engine-setup/ansible/hooks/enginevm_before_engine_setup/ and /usr/share/ovirt-hosted-engine-setup/ansible/hooks/enginevm_after_engine_setup/.
The first set of tasks will be executed on the Manager VM before running engine-setup (example: install custom Manager RPMs or inject something in the answer file for engine-setup); the second set will be executed after engine-setup (example: change a parameter with engine-config).
Two examples are available in /usr/share/ovirt-hosted-engine-setup/ansible/hooks/enginevm_before_engine_setup/enginevm_before_engine_setup.yml.example and /usr/share/ovirt-hosted-engine-setup/ansible/hooks/enginevm_after_engine_setup/enginevm_after_engine_setup.yml.example.
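
For example, a hook can be enabled by copying the bundled example into place and editing it; the target file name is arbitrary:

  # cd /usr/share/ovirt-hosted-engine-setup/ansible/hooks/enginevm_before_engine_setup/
  # cp enginevm_before_engine_setup.yml.example 01_custom_tasks.yml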

BZ#1508480

The mom process is now monitored by the collectd processes plugin.

BZ#1508481

The supervdsm process has been added to the collectd processes plugin so that it can be monitored.

BZ#1508484

The ovirt-engine-dwhd process is now monitored by the collectd processes plugin.

BZ#1509065

Whenever you select a detail view in the application, the browser URL is now updated to match the selected entity. For instance, if you have a VM named MyVM and you click on the name to see the details, the URL of the browser changes to #vms-general;name=MyVM. If you switch to the network interfaces tab, the URL in your browser switches to #vms-network_interfaces;name=MyVM. Changing entity or changing location keeps the browser URL synchronized. This allows you to use your browser's bookmark functionality to store a link to that VM.

As a complementary functionality, you can pass arguments to places to execute some functionality based on the type of argument you have passed in. The following types are available:

    SEARCH: for main views only - this allows you to pre-populate the search string used in the search bar.
    NAME: most entities are uniquely named and you can use their name in a detail view to go directly to that named entity.
    DATACENTER: quota and networks are not uniquely named, but are unique combined with their associated data center. To link directly to either, you need to specify NAME and DATACENTER.
    NETWORK: VNIC profiles are not uniquely named, but need both DATACENTER and NETWORK to be specified to directly link to it.

If you are not already logged in, you are redirected to the SSO login page and then back to the desired place in the application. This allows external applications to directly link to entities in web admin UI.

BZ#1513583

A link to the Engine's CA certificate has been added to the Welcome page.

BZ#1514897

Previously, if a virtual machine was paused due to a storage error, it would be resumed again when the storage domain came back up. This was not always desirable (for example, high availability can complicate the situation). Now, there are three resume behaviors which can be configured per virtual machine:
- KILL: will kill the virtual machine if it was paused for longer than 80 seconds
- LEAVE_PAUSED: leaves the virtual machine paused
- RESUME: resumes the virtual machine

BZ#1514927

Previously, newly deployed hosts had to be individually configured to collect metrics and logs.

In this release, for systems in which the Metrics Store server, the Manager, and the hosts have already been configured to collect metrics and logs, all newly-deployed hosts will be configured automatically.

BZ#1514942

This release adds the ability to specify a host in the ImageTransfer entity. This is useful if you prefer to use a specific host for uploading.
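
A hedged sketch of creating an image transfer pinned to a specific host; the request body layout is an assumption, and the IDs, credentials, and host name are placeholders:

  # curl -s -k -u admin@internal:password \
        -H "Content-Type: application/xml" -X POST \
        -d "<image_transfer><disk id='123'/><host id='456'/><direction>upload</direction></image_transfer>" \
        "https://manager.example.com/ovirt-engine/api/imagetransfers"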

BZ#1515308

Hook evaluation on Gluster events can decrease performance, so this update disables hooks that are irrelevant to hyperconverged deployments, for example SMB and NFS hooks.

BZ#1515698

In order to allow for faster remediation of kernel CVEs and for testing of fixes from newer kernels, RHVH now supports installation of new kernels without a full image update. New kernel installations properly update the bootloader configuration.

BZ#1517774

This feature provides basic router support, as described by the OpenStack Networking API.
Note that the "external_gateway_info" property is not supported by this feature.

BZ#1517832

In the Virtual Machines tab of a host's detail view, clicking the name of a VM takes you directly to the detail view of that VM.

BZ#1518689

Otopi can now log a machine's network connections after failures. This option is enabled by installing the otopi-debug-plugins package. It can help to debug service start failures caused by "Address already in use" errors.

BZ#1520126

Previously, to configure the curator parameter for the metrics index, the user had to manually update it.

In this release, the curator parameter is configured when running the metrics configuration script.

BZ#1520424

After starting up, the Manager will automatically attempt to fence unresponsive hosts that have power management enabled after the configurable quiet time (5 minutes by default) has elapsed. Previously the user needed to fence them manually.

BZ#1524824

Red Hat Virtualization now supports AMD EPYC processors for guest virtual machines.

BZ#1528371

This update enables engine-setup to upgrade PostgreSQL 9.2 to 9.5, even when the locale of the 9.2 database is different from the system locale.

BZ#1530675

You can now create an OVN network that is connected to a physical host network. This feature enables virtual machines on the external network to be on the same network as the virtual machines within the Data Center.

BZ#1530730

The Manager and the REST API support uploading an ISO image to a data storage domain and attaching it to a virtual machine as a CDROM device.

BZ#1530919

There is now a vars directory for Metrics, instead of a single config.yml file, and the path is not hard coded. You can add a variable file to /etc/config.yml.d/, and it will be used in the Ansible playbook.

BZ#1532083

A new script has been added to execute several operations, such as validate, fail-over, fail-back, and generate_vars.

BZ#1534212

In this release, the self-hosted engine can be installed on IBRS-compatible CPUs and the cluster's CPU type is set accordingly.

BZ#1537032

Skylake Server CPU family is now supported by Red Hat Virtualization.

Note: The previous "Skylake" family has been renamed to "Skylake Client" to differentiate it from "Skylake Server".

BZ#1537498

As part of the CloudForms Infrastructure Migration project and to accelerate virt-v2v, nbdkit is now available from the RHV management agent repository on hosts.

BZ#1537501

As part of the CloudForms Infrastructure Migration project and to accelerate virt-v2v, nbdkit is now available from the RHVH repository on hosts.

BZ#1539363

A new configuration option, TransferImageClientInactivityTimeoutInSeconds (default: 60 seconds), is now available. When there is no activity on an image transfer, the Manager monitors the transfer for a period of time equal to this configuration value and then stops it. During an upload, the transfer is paused and can be resumed; during a download, the transfer is canceled.

The configuration option is available for uploads and downloads in the Manager and the REST API.
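
Assuming the option can be set with the engine-config tool like other Manager configuration values (the value 120 below is only an example), the timeout could be changed as follows:

  # engine-config -s TransferImageClientInactivityTimeoutInSeconds=120
  # systemctl restart ovirt-engine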

Note: The configuration option, UploadImageUiInactivityTimeoutInSeconds has been removed.

BZ#1539636

This update adds support for configuring lvmcache on the thin pool used for Red Hat Gluster Storage bricks. This enables you to leverage the superior I/O performance of SSD devices to improve the performance of Gluster volumes whose bricks consist of slower HDDs.

BZ#1540289

The default value of JBoss's jboss.as.management.blocking.timeout option can be changed by creating /etc/ovirt-engine/engine.conf.d/99-jboss-blocking-timeout.conf with "ENGINE_JBOSS_BLOCKING_TIMEOUT=NNN", where "NNN" is the timeout value in seconds.
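
For example, to set a timeout of 600 seconds (the value is illustrative, and a restart of the ovirt-engine service is assumed to be required for the change to take effect):

  # echo 'ENGINE_JBOSS_BLOCKING_TIMEOUT=600' > \
    /etc/ovirt-engine/engine.conf.d/99-jboss-blocking-timeout.conf

  # systemctl restart ovirt-engine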

BZ#1540548

Previously, virtual machines that were paused for too long due to I/O errors were only killed when the engine tried to restart them. The current release adds a default setting, as part of high availability, that kills virtual machines that have been paused too long because of an I/O error, regardless of whether or when they would be resumed. This allows paused highly available virtual machines to be migrated and restarted.

BZ#1542604

Self-hosted engine installation using Ansible now only shows active network interfaces for setting the ovirtmgmt bridge.

BZ#1542973

This release adds an Ansible playbook that:
- Generates the vars.yaml file. The openshift_logging_mux_namespaces value is set according to the ovirt_env_name configured by the user.
- Copies both vars.yaml and the ansible-inventory files to the metrics machine.
This simplifies the Metrics Store installation by placing the files required for the ViaQ logging installation on the metrics machine, and already including the openshift_logging_mux_namespaces value.

BZ#1545559

In this release, the oVirt metrics configuration script automatically assigns an external IP to the Elasticsearch service.

BZ#1546668

You can now enable and disable auto-sync per external network provider. The Automatic Synchronization property of a network provider can be set when creating a new provider, and changed when editing an existing provider.

BZ#1550135

Failed login attempts now appear in the audit log, with details and the user name that failed to log in.

BZ#1550568

You can now manage virtual machine disks and network interfaces in the VM Portal.

BZ#1555268

Previously, Red Hat Enterprise Linux kernels had kernel address space layout randomization (KASLR) enabled by default, which prevented troubleshooting and analysis of a guest's memory dumps. With the current release, "vmcoreinfo" is enabled for all Linux guests, allowing a compatible kernel to export the debugging information needed to analyze the memory image.

BZ#1560240

In this release, it is possible to configure a persistent storage partition other than the default partition (/var) for Elasticsearch, by setting a parameter in the OpenShift Ansible inventory files.

BZ#1568736

The EnableKASLRDump VDC option is now enabled in RHV by default. This option allows core dumps of KASLR-enabled kernels for cluster level >= 4.2.

4.1.2. Technology Preview

The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to https://access.redhat.com/support/offerings/techpreview/.

BZ#1335837

This release includes the ability to import Debian and Ubuntu virtual machines from VMware and Xen, which is available as a Technology Preview feature. From Red Hat Enterprise Linux 7.4, virt-v2v can convert Debian- and Ubuntu-based virtual machines.

Known Issues:

1. virt-v2v cannot change the default kernel in the GRUB2 configuration and the kernel configured on the guest operating system is not changed during the conversion, even if a more optimal version of the kernel is available on the guest.

2. After converting a Debian or Ubuntu VMware guest to KVM, the name of the guest's network interface may change, and will need to be configured manually.

BZ#1421746

In this release, a new VDSM hook that configures nested virtualization has been introduced as a Technology Preview. Support for nested virtualization was introduced in Red Hat Enterprise Linux 7 and enables a virtual machine to serve as a host.
VDSM hooks are a means to insert code, commands, or scripts at a point in the lifecycle of a virtual machine or the VDSM daemon.

4.1.3. Release Notes

This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.

BZ#1420310

With this update, an issue with the way that the API supports the 'filter' parameter, which indicates whether results should be filtered according to the permissions of the user, has been corrected. Previously, due to the way this was implemented, non-admin users needed to set this parameter for almost all operations, because the default value was 'false'. Now, to simplify the task for non-admin users, this patch changes the default value to 'true', but only for non-admin users. If the value is explicitly given in a request, it is honored.

This is a backwards-compatibility-breaking change, as clients that used non-admin users and did *not* explicitly provide the 'filter' parameter will start to behave differently. However, this is unlikely to be a problem in practice, as calls from non-admin users without 'filter=true' are almost useless. For those unlikely cases where this may be a problem, the patch also introduces a new 'ENGINE_API_FILTER_BY_DEFAULT' configuration parameter:

  #
  # This flags indicates if 'filtering' should be enabled by default for
  # users that aren't administrators.
  #
  ENGINE_API_FILTER_BY_DEFAULT="true"

If it is necessary to revert to the behavior of previous versions of the engine, it can be achieved by changing this parameter in a configuration file inside the '/etc/ovirt-engine/engine.conf.d' directory. For example:

  # echo 'ENGINE_API_FILTER_BY_DEFAULT="false"' > \
  /etc/ovirt-engine/engine.conf.d/99-filter-by-default.conf

  # systemctl restart ovirt-engine

BZ#1425935

This update includes the new ovirt-register-sso-client-tool command-line tool to register SSO clients.

When running the tool, the user is prompted to enter the client ID, callback prefix, and certificate location. A new entry is created in the sso_clients table if one does not exist; otherwise, the existing entry with the same client ID is updated. The client_secret, which is written to a temporary file, should be noted and used by the client.

The client secret in the sso_clients table is encrypted and is for SSO internal use only.

BZ#1438822

With this update, support for configuring "Discard After Delete" per host has been dropped. This also removes the "discard_enable" value from the VDSM configuration file. The "Discard After Delete" feature now needs to be configured per block storage domain. For more information, see http://www.ovirt.org/develop/release-management/features/storage/discard-after-delete/

BZ#1463083

Previously, when adding a new data storage domain without specifying the values of Discard After Delete (DAD) and the storage format, their default values were calculated according to the data center version of the host that added the storage domain. Discard After Delete was set to true for a data center version greater than or equal to Red Hat Virtualization 4.1; otherwise, it was set to false. The storage format was set to V4 for a data center version greater than or equal to Red Hat Virtualization 4.1; otherwise, it was set to V3. This was a poor heuristic, because the data center of the host that added the domain is an arbitrary data center and not necessarily the data center that the domain will later be attached to.

Now, the logic has been changed so that the default storage format for new data domains is the latest format, currently V4. For non-data domains, nothing has changed: V1 was, and remains, the default value. The default value of Discard After Delete is calculated according to the storage format and is set to true for storage formats greater than or equal to V4; otherwise, it is set to false.

BZ#1465106

The guest tools ISO now includes spice-qxl-wddm-dod and the latest spice-vdagent.

BZ#1490784

With this update, Red Hat Enterprise Linux versions earlier than 6.9 are not supported in clusters using the machine type pseries-rhel7.4.0 or newer (4.1+). Such guests are now blacklisted and cannot be started, because no compatible CPU architecture is available for them. A new guest operating system definition for Red Hat Enterprise Linux versions 6.9 and greater has been added and should be used for Red Hat Enterprise Linux 6 guests in Red Hat Virtualization 4.1 and greater clusters. Also note that cluster compatibility version 4.2 requires the pseries-rhel7.5.0 machine type, which is introduced by Red Hat Enterprise Linux 7.5 hosts. Until such hosts are available, you need to keep using the 4.1 cluster compatibility level.

BZ#1502716

With this update, the ansible and cockpit plugins are now delivered in the Red Hat Enterprise Linux 7 Server Extras repository. This repository is enabled by default in CentOS but needs to be enabled on Red Hat Enterprise Linux in order to access the required dependencies. To enable the repository on Red Hat Enterprise Linux, run:

 # subscription-manager repos --enable=rhel-7-server-extras-rpms

BZ#1507406

With this update, the Virtual Machine Portal has been updated to version 1.3.1 and adds the following functionality:
- Use of "AlwaysFilterResultsForWebUi" engine configuration parameter
- BnB (Brand new Branding) - aligned with ovirt-engine
- Smartcard Enabled option
- SSH public key
- Guest Operating System names as labels
- i18n improvements, first translations
- multiple fixes and minor UI enhancements

BZ#1511962

Previously, TLSv1.2 support was backported into Red Hat Virtualization 4.1.5 (BZ#1412552), but it was turned off by default and enabling it required manual configuration. Now, TLSv1.2 support is enabled by default and no manual configuration is required.

BZ#1537620

The ovirt-provider-ovn package now provides a LICENSE file (GPLv2).

An AUTHORS file has also been added.

BZ#1562049

Do not upgrade to RHEL 7.5 if you have virtual machines using Direct LUN disks with the "Allow Privileged SCSI I/O" option checked, to avoid unexpected behavior.

4.1.4. Known Issues

These known issues exist in Red Hat Virtualization at this time:

BZ#1454536

Red Hat Virtualization Host generates VDSM certificates at the time of the first boot. This means that if the system clock was not set correctly at install time, chronyd or ntpd may resynchronize the clock after the VDSM certificate has been generated, leading to a certificate that is not yet valid if the time zone is behind UTC. Now, imgbased-configure-vdsm starts after chronyd or ntpd and waits two seconds for the clock to synchronize; however, this does not guarantee that the clock will be synchronized in time, so the best workaround is to set the system clock correctly during installation.

BZ#1518253

Under certain conditions, an issue with a change in SELinux policy, and with the script that converts an SELinux policy from the old format to the new format, causes the engine-setup upgrade to PostgreSQL to fail with the error '[ERROR] Failed to execute stage 'Misc configuration': Failed to start service 'rh-postgresql95-postgresql'. The log in /var/log/messages shows 'postgresql-ctl: postgres cannot access the server configuration file "/var/opt/rh/rh-postgresql95/lib/pgsql/data/postgresql.conf": Permission denied'. To prevent this, reinstall the rh-postgresql95-runtime package by running 'yum reinstall rh-postgresql95-runtime', then run engine-setup again, as shown below.
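
For example, on the Manager machine:

  # yum reinstall rh-postgresql95-runtime
  # engine-setup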

BZ#1523614

Previously, when a user attempted to move a disk with a snapshot that had been created before the disk was extended, the operation failed in storage domains whose data center was 4.0 or earlier. This occurred because "qemu-img convert" with compat=0.10 images interprets the space after the backing file as zeroes, sometimes causing the output disk to be larger than the logical volume created for it. In the current release, an attempt to move such a disk is blocked with an error message stating that the disk's snapshot must be deleted before moving the disk.

BZ#1554028

Previously, with storage domains of data centers with compatibility version 4.0 or earlier, if a virtual disk based on a template disk was extended and moved during live storage migration, the move operation failed because the child image was larger than the parent image (template disk). In the current release, the move operation of such disks displays an error message instead of failing. The resolution for this scenario is to upgrade the data center to compatibility version 4.1 or later.

4.1.5. Deprecated Functionality

The items in this section are either no longer supported or will no longer be supported in a future release.

BZ#1426580

With this update, host deploy no longer installs qemu-kvm-tools on deployed hosts.

BZ#1443989

SPICE HTML5 support has been removed.

BZ#1456558

With this update, iptables has been deprecated in favor of firewalld. In Red Hat Virtualization 4.2 it is still possible to use iptables but iptables will not be supported in future releases.

BZ#1473182

With this update, the ovirt-engine-setup-plugin-dockerc package has been dropped. The ovirt-engine-setup-plugin-dockerc package was previously deprecated in version 4.1.5.

BZ#1486822

With this update, the redhat-support-plugin-rhev package has been deprecated. The documentation functionality of the redhat-support-plugin-rhev package has been merged into the rhvm-doc package in Red Hat Virtualization 4.2.

4.2. Red Hat Virtualization 4.2 Batch Update 1 (ovirt-4.2.4)

4.2.1. Enhancements

This release of Red Hat Virtualization features the following enhancements:

BZ#1366900

This release includes a new utility called ovirt-engine-hyper-upgrade. It can be used to guide the user through upgrading 4.0 or later systems.

BZ#1484532

Starting from version 4.0, Red Hat Virtualization Hosts could not be deployed from Satellite, and therefore could not take advantage of Satellite's tooling features.

In this release, Red Hat Virtualization Hosts can now be deployed from Satellite 6.3.2 and later.

BZ#1511823

When you add an external network provider, a check box allows you to automatically synchronize your cluster networks with the networks imported from the default network provider. The networks are immediately available for the virtual machines.

BZ#1579210

The cockpit-machines-ovirt plugin (https://cockpit-project.org/guide/latest/feature-ovirtvirtualmachines) has been added to Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts.

BZ#1585022

This release adds a new 'ssl_ciphers' option to VDSM, which allows you to configure the ciphers available for encrypted connections (for example, the Manager to VDSM, or VDSM to VDSM). The values of this option conform to the OpenSSL standard.
To set this option:

1. Move the host to Maintenance in the Manager.

2. Create a new /etc/vdsm/vdsm.conf.d/99-custom-ciphers.conf file with the following content:

[vars]
ssl_ciphers = <VALUE>

where <VALUE> is one of the values described in the CIPHER STRINGS section of https://www.openssl.org/docs/man1.0.2/apps/ciphers.html. An example configuration follows this procedure.

3. Restart VDSM.

4. Activate the host in the Manager.
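
For example, to restrict VDSM to high-strength ciphers, the file created in step 2 could contain the following. The HIGH cipher string is one valid OpenSSL value; choose a value appropriate for your environment:

  [vars]
  ssl_ciphers = HIGH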

4.3. Red Hat Virtualization 4.2 Batch Update 2 (ovirt-4.2.5)

4.3.1. Enhancements

This release of Red Hat Virtualization features the following enhancements:

BZ#1511234

The new boot_hostdev hook allows virtual machines to boot from passed-through host devices, such as NIC VFs, PCIe SAS/RAID cards, or SCSI devices, without requiring a normal bootable disk from a Red Hat Virtualization storage domain or a direct LUN.

BZ#1591927

This release introduces a new `rhv-log-collector-analyzer --live` tool that creates an HTML analysis of your Red Hat Virtualization deployment. It lists major entities and warns of common issues, mostly focused on pre-upgrade action items.
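
For example, running the tool on the Manager machine produces the HTML report described above:

  # rhv-log-collector-analyzer --live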

BZ#1592320

This update adds support for IBM POWER9 hypervisors with RHEL-ALT and POWER9 guests. It also adds support for POWER8 guests on a POWER9 hypervisor and live migration of POWER8 guests between POWER8 and POWER9 hypervisors.

BZ#1593676

This feature adds static routes support to ovirt-provider-ovn, as specified in
https://developer.openstack.org/api-ref/network/v2/#routers-routers

The appropriate REST request for this is as follows:
{
  "router": {
    "routes": [
      {
        "destination": "179.24.1.0/24",
        "nexthop": "172.24.3.99"
      },
      ...
    ]
  }
}

BZ#1596315

This update adds a secure AMD EPYC CPU variant for CVE-2018-3639, called "AMD EPYC IBPB SSBD".

BZ#1608362

This update adds a feature to control pop-up notifications. Once three or more notifications are showing, "Dismiss" and "Do not disturb" buttons appear that allow the user to silence notifications.

4.3.2. Release Notes

This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.

BZ#1605076

There were inconsistencies in the following internal configuration options:

* HotPlugCpuSupported
* HotUnplugCpuSupported
* HotPlugMemorySupported
* HotUnplugMemorySupported
* IsMigrationSupported
* IsMemorySnapshotSupported
* IsSuspendSupported
* ClusterRequiredRngSourcesDefault

If you are having issues with these features, upgrade to Red Hat Virtualization 4.2.5+ to resolve the problem.

4.4. Red Hat Virtualization 4.2 Batch Update 3 (ovirt-4.2.6)

4.4.1. Enhancements

This release of Red Hat Virtualization features the following enhancements:

BZ#1134318

In the current release, it is possible to create single-brick Gluster volumes so that local storage can be used as a storage domain, along with shared storage, in shared data centers.

BZ#1295041

Previously, the engine-setup tool did not support host names that only resolved to an IPv6 address. In this release, the engine-setup tool supports host names that only resolve to an IPv6 address.

BZ#1297037

Previously, it was not possible to send a non-maskable interrupt (NMI) to a non-responsive guest operating system. In this release, users can send an NMI with Cockpit. A new menu option called "Send Non-Maskable Interrupt" was added to the "Shut Down" menu that is available from the Virtual Machines tab. It sends a command to the required virtual machine, regardless of the virtual machine's state and without checking the type of operating system or its settings. If the operating system is not installed or configured correctly, no action is taken.

BZ#1360839

Previously, it was only possible to attach a network to a host with an IPv6 address using either autoconf or DHCPv6; the two were mutually exclusive. In the current release, it is possible to enable both modes simultaneously using the Administration Portal or the REST API.

BZ#1558847

Previously, if a host's network definitions became unsynchronized with the definitions on the Manager, there was no way to synchronize all unsynchronized hosts on the cluster level. In this release, a new Sync All Networks button has been added to the Cluster screen in the Administration Portal that enables users to synchronize all unsynchronized hosts with the definitions defined on the Manager.

BZ#1565541

Previously, it was not possible to define long network names when modifying the tunnelling network for OVN controllers. In this release, a script has been provided to enable long network names to be used for tunnelling network definitions.

BZ#1590266

Previously, self-hosted engine deployment failed with an unclear message if the Manager virtual machine could not be reached by its IP address. In this release, a clear message is provided when deployment fails for this reason.

BZ#1607127

The current release always uses the latest Ansible inventory file, so a particular OpenShift version no longer needs to be specified and the documentation is always up to date.

BZ#1608467

Self-hosted engine deployment fails if firewalld is masked on the host. This is now checked earlier in the deployment script.

BZ#1613193

This release provides a shutdown/startup Ansible role for Red Hat Virtualization and Red Hat Hyperconverged Infrastructure environments.

BZ#1617977

VDO savings are displayed as part of the brick properties in the Administration Portal. This feature allows administrators to see the actual space savings resulting from VDO deduplication and compression.

BZ#1618650

Previously, it was not possible to force a virtual machine to power off from the VM Portal. In this release, it is now possible to force a virtual machine to power off from the VM Portal.

BZ#1622700

Previously, multipath repeatedly logged irrelevant errors for local devices. In this release, local devices are blacklisted and irrelevant errors are no longer logged.

BZ#1624923

Memory hotplugging is now available for ppc64le virtual machines.

4.4.2. Technology Preview

The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to https://access.redhat.com/support/offerings/techpreview/.

BZ#1109597

This release allows the creation of single-brick Gluster volumes to enable local storage to be used as a storage domain for shared data centers. As a result, both local storage and shared storage can be used. This feature is a Tech Preview.

4.4.3. Release Notes

This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.

BZ#1619686

The Red Hat Virtualization Manager is a JBoss Application Server application that requires Red Hat JBoss Enterprise Application Platform (JBoss EAP) 7.1 to be installed on the Red Hat Virtualization Manager host. For more information, see the following sections in the Red Hat Virtualization 4.2 Installation Guide: "Enabling the Red Hat Virtualization Manager Repositories" and "Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation".

4.5. Red Hat Virtualization 4.2 Batch Update 4 (ovirt-4.2.7)

4.5.1. Enhancements

This release of Red Hat Virtualization features the following enhancements:

BZ#1590967

The current release adds a 'VDO Savings' field that displays the savings percentage in the Gluster Storage Domain, Volume, and Brick views.

BZ#1610979

In the current release, a filter for VNIC profiles, `clean-traffic-gateway`, supports private VLAN connections.

BZ#1620271

The ovirt-log-collector now includes the RHV Log Collector Analyzer report. This analysis is generated by the rhv-log-collector-analyzer tool, which analyzes the RHV environment and detects anomalies in the system.

BZ#1620573

Large snapshots can result in long pauses of a VM that can affect the accuracy of the System Time, upon which time stamps and other time-related functions depend. Guest Time Synchronization enables synchronization of the VM's System Time during the creation of snapshots. When this feature is enabled and the Guest Agent is running, the VDSM process on the Host attempts to synchronize the System Time of the VM with the Host's System Time when snapshots are completed and the VM is unpaused. To turn on Guest Time Synchronization for snapshots, use the time_sync_snapshot_enable option. For synchronizing the VM's System Time during abnormal scenarios that may cause the VM to pause, you can enable the time_sync_cont_enable option. By default, these features are disabled for backward compatibility.
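
A minimal sketch of enabling both options on a host, assuming they are set in the [vars] section of a VDSM configuration drop-in file in the same way as the ssl_ciphers option shown earlier (the file name and section are assumptions), followed by a restart of VDSM:

  # cat /etc/vdsm/vdsm.conf.d/99-time-sync.conf
  [vars]
  time_sync_snapshot_enable = true
  time_sync_cont_enable = true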

BZ#1621211

Previously, copying volumes to preallocated disks was slower than necessary and did not make optimal use of available network resources. In this release, qemu-img uses out-of-order writing. As a result, write operations, such as importing, moving, or copying large disks to preallocated storage, can be up to six times faster.

BZ#1623259

In the current release, for compatibility versions 4.2 and 4.3, a warning in the Cluster screen indicates that the CPU types currently used are not supported in 4.3. The warning enables the user to change the cluster CPU type to a supported CPU type.

BZ#1628477

When a VM is running, the disk size of the virtual machine should be no larger than was required during the initial allocation of disk space, unless you specify preallocation. Previously, when you set thin provisioning for importing a KVM-based VM into a Red Hat Virtualization environment, the disk size of the VM within the Red Hat Virtualization storage domain was inflated to the volume size or larger, even when the original KVM-based VM was much smaller.
KVM sparseness is now supported, so that when you import a virtual machine with thin provisioning enabled into a Red Hat Virtualization environment, the disk size of the original virtual machine image is preserved.

BZ#1635624

In the current release, ovirt-ansible-hosted-engine-setup, ovirt-ansible-repositories, and ovirt-engine-setup role packages are included in the rhel-7-server-rhvh-4-rpms repository.

BZ#1637534

QEMU Guest Agent packages for several Linux distributions have been added to ease offline installation of the guest agent.

4.5.2. Release Notes

This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.

BZ#1619686

The Red Hat Virtualization Manager 4.2 is a JBoss Application Server application that requires Red Hat JBoss Enterprise Application Platform (JBoss EAP) 7.2 to be installed on the host. To make sure that you install JBoss EAP 7.2 before you upgrade to Red Hat Virtualization Manager 4.2, enable the jb-eap-7-for-rhel-7-server-rpms repository.
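
To enable the repository with Red Hat Subscription Management:

  # subscription-manager repos --enable=jb-eap-7-for-rhel-7-server-rpms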

For more information, see Enabling the Red Hat Virtualization Manager Repositories in the Red Hat Virtualization 4.2 Installation Guide: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html-single/installation_guide/#Enabling_the_Red_Hat_Virtualization_Manager_Repositories_standalone_install

4.5.3. Known Issues

These known issues exist in Red Hat Virtualization at this time:

BZ#1638457

In this release, the snapshot functionality introduced in 4.2.6 (ovirt-web-ui-1.4.3-1) has a problem where a user without any administrator role permissions cannot restore a virtual machine to a previously created snapshot. In such a case, the virtual machine enters the "in preview" state, and only an administrator using the Administration Portal can resolve the virtual machine's state.

4.5.4. Deprecated Functionality

The items in this section are either no longer supported or will no longer be supported in a future release.

BZ#1623266

With this update, the CPU types Conroe, Penryn, Opteron G1, Opteron G2 and Opteron G3 have been deprecated. Red Hat Virtualization 4.3 will not support these CPU types.

4.6. Red Hat Virtualization 4.2 Batch Update 5 (ovirt-4.2.8)

4.6.1. Bug Fix

These bugs were fixed in this release of Red Hat Virtualization:

BZ#1644970

ovirt-web-ui 1.4 uses web technologies that Microsoft Internet Explorer 11 does not support, and as a result, opening ovirt-web-ui in IE11 showed a simple white screen.

Polyfills were added to support using the newer technology, and the base HTML page now enables compatibility mode in IE11 to better support rendering the ovirt-web-ui.

As a result, the 1.4 version of ovirt-web-ui, based on ovirt-engine 4.2, now loads and has base functionality on IE11.

BZ#1646992

This fix addresses an issue where attempting to move a disk with a damaged ancestor (for example, an ancestor that does not have an entry in the 'images' table) led to a null pointer exception (NPE) and an error. If the error persists, a possible workaround is to use the API to move the disk between storage domains. The fix documents the error in the Red Hat Virtualization Manager log.

BZ#1647025

This fix addresses an issue where, when listing disk profiles on a storage domain that have a Quality of Service (QoS) link set, the href attribute value for the QoS link was not displayed for each listing. This release ensures that all QoS link values are visible.

BZ#1655659

This release ensures the correct parsing of the rhv-toolssetup_x.x.x.iso file.

BZ#1658054

Previously, when removing RHVH or RHEL hosts in the Administration Portal, the Remove Host(s) confirmation window incorrectly indicated that the hosts still had self-hosted engines deployed on them.
The self-hosted engine deployment status is now detected correctly.

BZ#1658514

This fix addresses a warning message that appeared while cloning a VM with an attached direct LUN or a shareable disk. This release ensures that the cloning succeeds, informs the user that the task is in progress, and does not display the error message.

BZ#1659096

This release enforces version consistency at the RPM level between the ovirt-hosted-engine-ha and ovirt-hosted-engine-setup packages.

BZ#1659960

This release ensures that during Red Hat Virtualization Manager upgrades from 4.1 to 4.2.8, the network interfaces display properly on 4.1 hosts with bonds defined.

4.6.2. Enhancements

This release of Red Hat Virtualization features the following enhancements:

BZ#1477599

The Administration Portal now provides an "Updating" indicator while network setup takes place on a host, until the setup completes.

BZ#1630861

Neutron from Red Hat OpenStack Platform 13 configured to use Open Virtual Network can be used as an external network provider on Red Hat Virtualization 4.2.8 with one limitation.

If a VM with a vNIC on an external network provided by Red Hat OpenStack Platform 13 Neutron with OVN Modular Layer 2 plugin migrates to another host, the port status displays as ‘down’ despite the port working properly.

BZ#1638746

This release adds support for Windows clustering with iSCSI-based direct attached logical unit numbers (LUNs).

BZ#1639460

This release adds a new WARN message to the Red Hat Virtualization Manager log on startup if overlapping ranges are found within a MAC pool or between MAC pools. Each warning details the outcome as applicable.

BZ#1648624

This release supports Neutron from Red Hat OpenStack Platform 13 configured to use OpenDaylight as an external network provider on RHV 4.2.z with the same port status limitation described in BZ#1630861.

BZ#1651649

This release allows the number of I/O threads to be set in the Administration Portal VM dialog. This enhancement complements the existing REST API to set the number of I/O threads, allowing users the option to use either the REST API or the Administration Portal to set the number of I/O threads.

4.6.3. Rebase: Bug Fixes and Enhancements

These items are rebases of bug fixes and enhancements included in this release of Red Hat Virtualization:

BZ#1662923

Previous Red Hat Virtualization installations of 3.6 ELS and 4.1 introduced "Intel Haswell Family-IBRS" Cluster CPU type for Meltdown/Spectre mitigations. Red Hat Virtualization 4.2 refers to this CPU type as "Intel Haswell IBRS Family." This release updates the CPU type name and the previous name updates automatically on upgrade.

4.6.4. Release Notes

This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.

BZ#1651227

Ansible 2.7.2 or higher is required to run oVirt Ansible roles.

BZ#1657878

Red Hat Virtualization Manager now requires JBoss Enterprise Application Platform 7.2.0.

BZ#1659650

Red Hat Virtualization 4.3 no longer supports the 3.6 and 4.0 data center and cluster compatibility levels. This fix adds a service that runs weekly to evaluate existing data centers and raises an audit log alert if a data center is still at compatibility version 3.6 or 4.0, because such data centers cannot be upgraded to 4.3.

4.6.5. Deprecated Functionality

The items in this section are either no longer supported, or will no longer be supported in a future release.

BZ#1649817

With this update, Cluster CPU types that only partially mitigate the Spectre and Meltdown vulnerabilities are deprecated, and support for them will be removed in Red Hat Virtualization 4.3. In the Red Hat Virtualization Manager, this encompasses CPU types that contain "IBRS" and "IBPB" without "SSBD". The alternative is to use the equivalent SSBD Cluster CPU type.

To see and set the CPU Type in use:

1. Log in to the Administration Portal.
2. Click Compute > Clusters.
3. Select a cluster and click Edit.
4. In the General tab, use the CPU Type dropdown.