
Chapter 3. Release Information

These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat Virtualization.
Notes for updates released during the support lifecycle of this Red Hat Virtualization release will appear in the advisory text associated with each update or in the Red Hat Virtualization Technical Notes.

3.1. Red Hat Virtualization 4.1 GA

3.1.1. Enhancements

This release of Red Hat Virtualization features the following enhancements:
With this update, the ability to sparsify a thin-provisioned disk has been added to Red Hat Virtualization. When a virtual machine is shut down, the user can sparsify the disk to convert the free space within the disk image back to free space on the host.
Previously, if power management fencing was not available, automatic high availability of virtual machines did not work without manual confirmation that the host the virtual machine was running on was rebooted.

In this release, a new option for virtual machine lease on storage domains enables automatic high availability failover of a virtual machine, when the host running the machine is down, without the availability of power management for the host.
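In the REST API, such a lease can be requested by pointing the virtual machine's lease at a storage domain. A sketch (IDs are placeholders):

```
PUT /ovirt-engine/api/vms/123
Content-type: application/xml

<vm>
    <lease>
        <storage_domain id='456'/>
    </lease>
</vm>
```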
This release introduces QCOW2 v3, which has a compatibility level of 1.1. This enables QEMU to use these volumes more efficiently, with improved performance. In addition, QCOW2 v3 is fully backwards-compatible with the QCOW2 feature set, upgrading from QCOW2 v2 to v3 is easy, and the format supports extensibility.
This enhancement allows live migration of virtual machines that have SR-IOV NICs attached. Before the migration, all SR-IOV NICs are hot unplugged; after a successful migration, they are hot plugged back.
This release provides a cleanup script for completely cleaning the host after a failed attempt to install a self-hosted engine.
Previously, it was impossible to reboot a host without enabling Power Management. In this release, it is now possible to shut down and reboot a host without using Power Management. From the Management menu, a new option called SSH Management is available, enabling administrators to select either Restart or Stop.
With this update, if the web console (either noVNC or SPICE HTML5) is unable to connect to the websocket proxy server, a pop-up displays with troubleshooting suggestions. The pop-up contains a link to the default CA certificate.
Previously, almost all data path operations on the hosts could only be performed on the elected Storage Pool Manager. This could potentially cause bottlenecks.

In this release, a new lightweight host jobs management infrastructure was introduced, which enables data path operations to run on any host. In addition, administrators can monitor the progress of Move Disk operations with the aid of a progress indicator.
With this release, when creating virtual machine pools using a template that is present in more than one storage domain, virtual machine disks can be distributed to multiple storage domains by selecting "Auto select target" in New Pool -> Resource Allocation -> Disk Allocation.
Previously, when notification emails were successfully sent to a configured SMTP server, a success message did not appear in the notifier.log file. In this release, when a message is successfully sent to an SMTP server, the following message appears in the notifier.log file: E-mail subject='...' to='...' sent successfully
This release adds support for CPU hot unplug to Red Hat Virtualization. Note that the guest operating system must also support the feature, and only previously hot plugged CPUs can be hot unplugged.
With this update, the code interfacing with VDSM now uses the VDSM API directly instead of using vdsClient and xmlrpc.
Previously, CPU pinning information could be silently lost. Now, when a user saves a virtual machine, a pop-up warns that CPU pinning information will be lost. This means that the user is aware of the loss and can cancel the operation.
With this update, the VDSM thread name is now included in the system monitoring tools. This makes it easier to track the resource usage of the threads.
Power Management alerts are now disabled when fencing is disabled in a cluster.
With this update, virtual machines can now be searched for by the user who created them. Using the REST API, the search query is ".../api/vms?search=created_by_user_id%3D[USER_ID]". The required User ID can be retrieved by using ".../api/users". In addition, the Administration Portal shows the creator's name in the virtual machine general sub-tab. However, it is possible for the user to be removed from the system after the virtual machine is created.
With this update, image disks are now identifiable from within the guest by engine id, for example, by looking under /dev/disk/by-id. The disk id is now passed to the guest as the disk serial.
This update provides a link to the gluster volume when creating a gluster storage domain, and enables a single unified flow.

This enables the backup volfile server mount options to be auto-populated, and paves the way for integration features like Disaster Recovery setup using gluster geo-replication.
Previously, in a hyper-converged cluster environment containing gluster and virt nodes, it was possible to create a replica set containing bricks from the same server. A warning appeared but the action was enabled even though there was a risk of losing data or service. In this release, it will no longer be possible to create a replica set containing multiple bricks from the same server in a hyper-converged environment.
The vioscsi.sys file is now compatible with Microsoft Cluster Services, which enables the Windows virtio-scsi driver to support Windows Server Failover Clustering (WSFC) using shared storage. As a result, vioscsi.sys can pass all tests in the "Validate a Configuration" process.
Previously, in GlusterFS, if a node went down and then returned, GlusterFS would automatically initiate a self-heal operation. During this operation, which could be time-consuming, a subsequent maintenance mode action within the same GlusterFS replica set could result in a split-brain scenario.

In this release, if a Gluster host is performing a self-heal activity, administrators will not be able to move it into maintenance mode. In extreme cases, administrators can use the force option to forcefully move a host into maintenance mode.
Previously, the system performed automatic migrations, when required, without displaying the reason in the Event log or in the Administration Portal. In this release, after an automatic migration is performed, the reason for the migration is displayed.
This update introduced a check in the host maintenance flow to ensure glusterFS quorum can be maintained for all glusterFS volumes that have the 'cluster.quorum-type' option set. Similarly, there is a new check to ensure that the host moving to maintenance does not have a glusterFS brick that is a source of volume self-healing. These checks will be performed by default when moving the host to maintenance.

There is an option in the Manager to skip these checks, but doing so can bring your system to a halt. This option should only be used in extreme cases.
Previously, when importing a virtual machine from a data storage domain, if the virtual machine had a “bad” MAC address, a MAC collision could occur in the target LAN. A “bad” MAC address is an address that is already in use or an address that is out of the range in the target cluster. In this release, it is possible to assign a new MAC address when importing the virtual machine from a data storage domain.
Previously, when restoring a backup of a hosted engine on a different environment, for disaster recovery purposes, administrators were sometimes required to remove the previous hosts from the engine. This was accomplished from within the engine's database, which is a risk-prone procedure.

In this release, a new CLI option can be used during the restore procedure to enable administrators to remove the previous host directly from the engine backup.
Previously, when restoring a backup of a self-hosted engine on a different environment, for disaster recovery purposes, administrators were sometimes required to remove the previous self-hosted engine's storage domain and virtual machine. This was accomplished from within the engine's database, which was a risk-prone procedure.

With this update, a new CLI option enables administrators to remove the previous self-hosted engine's storage domain and virtual machine directly from the backup of the engine during the restore procedure.
Previously, discard commands (UNMAP SCSI commands) that were sent from the guest were ignored by QEMU and were not passed on to the underlying storage. This meant that storage that was no longer in use could not be freed up.

In this release, it is now possible to pass on discard commands to the underlying storage. A new property called Pass Discard was added to the Virtual Disk window. When selected, and if all the restrictions are met, discard commands that are sent from the guest will not be ignored by QEMU and will be passed on to the underlying storage. The reported unused blocks in the thinly provisioned LUNs of the underlying storage will be marked as free, and the reported consumed space will be reduced.
Previously, when the Virtual Machine was powered down, deleting a snapshot could potentially be a very long process. This was due to the need to copy the data from the base snapshot to the top snapshot, where the base snapshot is usually larger than the top snapshot.

Now, when deleting a snapshot when the Virtual Machine is powered down, data is copied from the top snapshot to the base snapshot, which significantly reduces the time required to delete the snapshot.
With this update, support for Gluster arbiter volume creation has been added to Red Hat Virtualization. Arbiter volumes are recommended in place of regular three-way replicated volumes to save storage space.
Previously, deleting a snapshot while the virtual machine was down took a long time. With this release, snapshot deletion while a virtual machine is down uses 'qemu-img commit' instead of 'qemu-img rebase', improving the performance of the operation.
With this update, the user can configure the number of memory slots reserved for spare self-hosted engine hosts. Previously, there was a chance that the self-hosted engine virtual machine would not have anywhere to start on a loaded cluster, which compromised the high availability feature. Now, capacity is reserved so that a backup host is ready to accept the self-hosted engine virtual machine if the current host crashes.
With this update, the option '--accept-defaults' has been added to the engine-setup command. This option causes engine-setup to no longer prompt for answers that have a default. This option saves time for the user as they no longer need to answer the prompts individually, if they are planning to accept the defaults, and also allows other tools to run engine-setup unattended. If the engine-setup command is run using this option and a weak password is provided, the user will be prompted for a stronger password because the default answer to 'Use weak password?' is No. To work around this, add the answer to an answer file.
This feature allows you to map external VNIC profiles that are defined on an imported VM to the ones that are present in the cluster the VM is going to be imported to. The previous solution exchanged all external VNIC profiles that were not present in the target cluster with an empty profile, which removed the imported VM's network functionality. Now, after importing a VM from a data domain, the VM is configured properly according to the VNIC profiles that are defined in the target cluster.
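For example, the mapping can be supplied in the register action of the REST API. This is a sketch; IDs and names are placeholders, and the element names follow the oVirt 4.1 API:

```
POST /ovirt-engine/api/storagedomains/123/vms/456/register HTTP/1.1
Content-type: application/xml

<action>
    <cluster id='789'/>
    <vnic_profile_mappings>
        <vnic_profile_mapping>
            <source_network_name>ovirtmgmt</source_network_name>
            <source_network_profile_name>gold</source_network_profile_name>
            <target_vnic_profile id='012'/>
        </vnic_profile_mapping>
    </vnic_profile_mappings>
</action>
```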
With this update, a new option to check for upgrades has been added when installing a host. In the Administration Portal, this is available in the host installation menu, and it can be triggered by using the upgradecheck endpoint of the hosts service in the REST API.
Previously, the column control menu in the Administration Portal, which is accessed by right-clicking on a column header in any of the tables, contained arrows for controlling the order of the columns. In this release, the order of the columns in this menu can be defined by dragging and dropping the column to the required position within the menu.
In this release, a link has been added to the Edit Fence Agent window, which opens the online help and displays information about the parameters that can be set for fence agents.
With this update, the limit of virtual CPUs has been increased to reflect the capabilities of Red Hat Enterprise Linux 7.3 hosts. It is now possible to configure a virtual machine with up to 288 vCPUs when hosts are in a 4.1 cluster.
Previously, if a live migration was performed with extreme memory write intensive workloads, the migration would never be able to complete because QEMU could not transfer the memory changes fast enough. In this case, the migration could not reach the non-live finishing phase. 

In this release and in these situations, RHV will restrict the amount of CPU given to the guest to reduce the rate at which memory is changed and allow the migration to complete.
With this update, the loading performance of external virtual machines from an external server has been improved for VMware, KVM, and Xen. Previously, when displaying a list of virtual machines, libvirt was asked for the full information for each virtual machine when only the virtual machine names were required. Now, libvirt is only asked for the virtual machine names at the first import dialog and only imports the full virtual machine data list after the user has selected the required virtual machines.
With this update, CPU hotplug is supported on selected guest operating systems on the ppc64le (POWER) architecture, in addition to the existing support on x86_64.
With this update, the user can now customize the virtual machine disk size when using the engine-appliance.
The latest virtio-win release, which includes Windows 10 drivers, is now required by the Manager to ensure that suitable drivers can be supplied to virtual machines during installation of Windows 10.
This feature allows you to edit the configuration stored in the self-hosted engine's shared storage.
Users can now change an initialized Data Center type to Shared or Local. The following updates will be available:

1. Shared to Local - Only for a data center that does not contain more than one host or more than one cluster, since a local data center does not support this.

2. Local to Shared - Only for a data center that does not contain a local storage domain.

The Manager validates these conditions and blocks the operation with an error message if they are not met.
This update is useful for data domains that are used to move virtual machines or templates around.
The weighting for virtual machine scheduling has been updated. The best host for the virtual machine is now selected using a weighted rank algorithm instead of the pure sum of weights. A rank is calculated for the policy unit and host, and the weight multiplier is then used to multiply the ranks for the given policy unit. The host with the highest number is selected.

The reason for the change is that current weight policy units do not use a common result value range. Each unit reports numbers as needed, and this causes issues with user configured preferences. For example, memory (which has high numbers) always wins over CPU (0-100).

This update ensures that the impact of the policy unit multiplier in the scheduling policy configuration is more predictable. However, users with customized scheduling policies should verify their configuration when upgrading.
This update ensures that the Manager signs certificates using the SHA-256 algorithm instead of SHA-1 because SHA-256 is more secure and is expected to have a longer life expectancy.

Only the default for new certificates has changed. To change the certificates of existing hosts, the hosts need to be reinstalled or to have their certificates re-enrolled. Other certificates, including the one for httpd, require a completely new setup using engine-cleanup and engine-setup.
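To check which algorithm signed a given certificate, openssl can be used. The following is a sketch using a throwaway self-signed certificate and illustrative paths; the Manager's own CA files live under /etc/pki/ovirt-engine:

```shell
# Generate a throwaway SHA-256-signed self-signed certificate (illustrative paths).
openssl req -x509 -newkey rsa:2048 -sha256 -nodes -subj "/CN=example" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 1
# Inspect the signature algorithm; a SHA-256 certificate reports sha256WithRSAEncryption.
openssl x509 -in /tmp/demo-cert.pem -noout -text | grep 'Signature Algorithm'
```

The same `openssl x509` inspection can be pointed at a host or httpd certificate to confirm whether it still uses SHA-1 and needs re-enrollment.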
This update adds the ability to download or upload Red Hat Virtualization images (for example, virtual machine images) using the Red Hat Virtualization API.
Previously, high availability could not be enabled for virtual machines in hyper-converged mode. The earlier fencing policies ignored Gluster processes, but in hyper-converged mode, fencing policies are required to ensure that a host is not fenced while a brick process is running, and that quorum is not lost when shutting down a host with an active brick.

The following fencing policies have been added to Hyper-converged clusters:
- SkipFencingIfGlusterBricksUp: Fencing will be skipped if bricks are running and can be reached from other peers.
- SkipFencingIfGlusterQuorumNotMet: Fencing will be skipped if bricks are running and shutting down the host will cause loss of quorum.

Virtual machine high availability can be tested by enabling power management on hyper-converged nodes.
This update adds the ability to acquire a lease per virtual machine on shared storage, without attaching the lease to a disk. This adds the capability to avoid split-brain, and to avoid starting a virtual machine on another host if the original host becomes non-responsive, thereby improving virtual machine high availability.
MAC address pools are now bound to the cluster instead of the data center because certain environments require MAC address pools on the cluster level.

On the REST layer, the mac_pool attribute was added to the cluster, and can be set or queried. The StoragePool resource (represents the data center) was also altered. When updating its mac_pool_id, all clusters in a given data center will be updated to use this MAC address pool. When StoragePool is queried using the REST GET method, the ID of the MAC address pool will be reported only when all clusters in the given data center are using the same MAC address pool. Otherwise, the user needs to use the Cluster resource to get the MAC address pool of each individual cluster.
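For example, assigning a MAC address pool to a single cluster might look like the following (IDs are placeholders):

```
PUT /ovirt-engine/api/clusters/123
Content-type: application/xml

<cluster>
    <mac_pool id='456'/>
</cluster>
```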
This feature allows you to request a new MAC address when importing a virtual machine from a data storage domain. This allows you to avoid importing a virtual machine with a bad MAC address, which might cause a MAC address collision in the target LAN. A MAC address would be considered "bad" if it is already in use in the target cluster or is out of the range of the MAC pool of the target cluster.
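In the REST API, this can be requested on the register action. The following is a sketch; IDs are placeholders, and the reassign_bad_macs flag name follows the oVirt 4.1 API:

```
POST /ovirt-engine/api/storagedomains/123/vms/456/register HTTP/1.1
Content-type: application/xml

<action>
    <cluster id='789'/>
    <reassign_bad_macs>true</reassign_bad_macs>
</action>
```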
The NTP configuration is automatically set when deploying the self-hosted engine.
This update adds support for deploying Gluster storage during the self-hosted engine deployment through the Cockpit UI. Previously, the user needed to first deploy the Gluster storage using gdeploy, then deploy the self-hosted engine using the Cockpit UI, and configuration files had to be manually updated.
This update ensures that a self-hosted engine deployment works without needing to disable NetworkManager.
This update allows you to enable SSH access for the Manager virtual machine when deploying the self-hosted engine. You can choose between yes, no, and without-password. You can also pass a public SSH key for the root user during deployment.
A new 'original_template' property has been introduced for the 'vm' REST API resource. This enables the user to get information about the template the virtual machine was based on before cloning.
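For example, the property appears in the virtual machine representation returned by the API. This is a sketch; IDs are placeholders:

```
GET /ovirt-engine/api/vms/123

<vm id='123'>
    <original_template id='456'/>
    <template id='789'/>
</vm>
```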
Previously, support for Legacy USB was deprecated and the UI displayed three options: Native, Legacy (Deprecated), and Disabled. In this release, the Legacy option has been completely removed and the UI now displays two options: Enabled and Disabled.
With this release, /dev/random is now the default random number generator in clusters with a cluster compatibility level of 4.0 and earlier, and /dev/urandom is now the default random number generator in clusters with a cluster compatibility level of 4.1 and later. Because these random number generators are enabled by default, the option to enable them has now been removed from the New Cluster and Edit Cluster windows. However, you can select the random number generator source for individual virtual machines from the New Virtual Machine and Edit Virtual Machine windows.
With this update, the two "remove template(s)" screens now include the subversion name and subversion number of the template being removed, and display the following:

Are you sure you want to remove the following items?
- template-name (Version: subversion-name(subversion-number))
With this update, it is now possible to configure discard after delete per block storage domain. Previously, a user could get similar functionality by configuring the discard_enable parameter in the VDSM configuration file. This caused each logical volume (disk or snapshot) that was about to be removed by this specific host to be discarded first. Now, discard after delete can be enabled for a block storage domain, rather than a host. This means that if discard after delete is enabled, it no longer matters which host removes the logical volume, as each logical volume under this domain will be discarded before it is removed.
This update ensures that only hosts that have the status Up or NonOperational are checked for upgrades. Previously hosts with the status Maintenance were also checked, but often they were not reachable, which caused errors in Events.
Previously, if the guest agent was not running or was out of date, the hover text next to the exclamation mark for the problematic virtual machine incorrectly reported that the operating system did not match or that the timezone configuration was incorrect. In this release, the hover text correctly informs the user that the guest agent needs to be installed and running in the guest.
With this update, a log file located in /var/log/httpd/ovirt-requests-log now logs all requests made to the Red Hat Virtualization Manager via HTTPS, including how long each request took. There is a 'Correlation-Id' header included to allow for easier comparison of requests with the engine.log. CorrelationIds are now generated for every request automatically and can be passed to the REST API per Correlation-Id header or per correlation_id query parameter.
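For example, an identifier chosen by the client can be passed either as a header or as a query parameter (the value is arbitrary):

```
GET /ovirt-engine/api/vms HTTP/1.1
Correlation-Id: my-trace-42

GET /ovirt-engine/api/vms?correlation_id=my-trace-42 HTTP/1.1
```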
With this update, a user can now save a provider for an external libvirt connection in the Providers tree section. When a user tries to import a virtual machine from libvirt+kvm into the Red Hat Virtualization environment, the saved provider is available, so the address does not have to be re-entered.
The self-hosted engine only supports deployment using the RHV-M Appliance. With this release, the deployment script allows you to download and install the Appliance RPM directly, instead of having to install it before deployment.
When users import virtual machines from Xen on RHEL to Red Hat Virtualization, the import uses the saved provider address instead of the user having to re-enter the address.
Previously, after_hibernation hooks were never executed. With this release, before_hibernation and after_hibernation hooks are always executed on the guest operating system (with the guest agent installed) when suspending and resuming a virtual machine.
Previously, when importing a virtual machine, if the import failed, the output of the virt-v2v tool was not available for investigating the reason for the failure, and the import had to be reproduced manually. In this release, the output of virt-v2v is now stored in the /var/log/vdsm/import directory. All logs older than 30 days are automatically removed.
Previously, a Dashboard tab was introduced to the Administration Portal. However, when loading the Administration Portal the user landed at the Virtual Machines tab followed by an immediate switch to the Dashboard tab. Now, the UI plugin has been improved to allow pre-loading of UI plugins, such as ovirt-engine-dashboard. This means that the user lands directly at the Dashboard tab.
With this update, the debug logging for ovirt-engine-extension-aaa-ldap has been updated. When ovirt-engine-extension-aaa-ldap is enabled the following messages will show in the logs. The LDAP server that authenticated a user is shown as "User 'myuser1' is performing bind request to:" and the LDAP server that performed a search request is shown as "Performing SearchRequest '...' request on server"
This update includes the Post-copy migration policy, which is available as a Technology Preview feature. The policy is similar to the Minimal Downtime policy, and enables the virtual machine to start running on the destination host as soon as possible. During the final phase of the migration (the post-copy phase), the missing parts of the memory content are transferred between the hosts on demand. This guarantees that the migration will eventually converge with very little downtime. The disadvantage of this policy is that the virtual machine may slow down significantly during the post-copy phase as the missing parts of memory are transferred on demand. If anything goes wrong during the post-copy phase, such as a network failure between the hosts, the running virtual machine instance is lost; it is therefore not possible to abort a migration during the post-copy phase.
With this release, if you do not specify any NUMA mapping, Red Hat Virtualization defaults to a NUMA node that contains the host device's memory-mapped I/O (MMIO). This configuration is only preferred, rather than strictly required.
The self-hosted engine setup wizard now warns users if the host is already registered to Red Hat Virtualization Manager. Previously, a host that was registered to the Manager but not running a self-hosted engine would present the option to set up a self-hosted engine, which ran the risk of unregistering the host. Now, hosts that are registered to the Manager present a "Redeploy" button in the Hosted Engine wizard in Cockpit, which must be selected in order to continue.
This update adds the Gluster-related fencing policies described above (SkipFencingIfGlusterBricksUp and SkipFencingIfGlusterQuorumNotMet) to hyper-converged clusters.
Red Hat Virtualization Host (RHVH) 4.0 allows users to install RPMs. However, installed RPMs are lost after upgrading RHVH.

RHVH 4.1 now includes a yum plugin that saves and reinstalls RPM packages across upgrades, so that installed RPMs are no longer lost.

Note that this does not work when upgrading from RHVH 4.0 to RHVH 4.1.
The rng-tools package has been added to oVirt Node NG / RHV-H. This tool is required for the TPM module to be able to work with the Random Number Generator.
This enhancement is a rebase on the jsonrpc Dispatcher APIs to provide better performance and make the code more robust.
A mobile client for Red Hat Enterprise Virtualization, which is compatible with Red Hat Enterprise Virtualization 3.5 onwards, is available for Android devices.
The oVirt release package now provides repository configuration files for enabling GlusterFS 3.8 repositories on Red Hat Enterprise Linux, CentOS Linux, and similar operating systems.
Since Red Hat Virtualization can now deploy additional self-hosted engine hosts from the Manager with host-deploy, the capability to deploy additional self-hosted engine hosts from hosted-engine setup is no longer required and has been removed.

Similarly, the RHV-M Appliance has proved to be the easiest way to set up a working self-hosted engine environment; all other bootstrap flows have been removed.
This release adds support for overlay networks using Open Virtual Network (OVN) as a Technology Preview. This feature allows you to add OVN as an external network provider, and import or create networks from it in the Red Hat Virtualization Manager. You can then provision virtual machines with network interfaces connected using these logical overlays (OVN networks).
Previously, the Python SDK was configured to communicate with the server using uncompressed responses, which caused long response times. In this release, the default configuration requests compressed responses.
Multiple updates were made to the UI for the self-hosted engine.

New icons have been added:
- To virtual machines, to indicate whether the virtual machine is the Manager virtual machine.
- To hosts, to indicate whether the host can run the Manager virtual machine.
- To storage domains, to indicate whether the storage domain contains the Manager virtual machine.

Buttons to enable and disable global maintenance mode have been moved to the context menu of a host that can run the Manager virtual machine. Depending on the current status of global maintenance mode, either the enable or the disable option is enabled.
The "Enable USB Auto-Share" option in the "Console options" dialog is now only available if "USB Support" is enabled on the virtual machine.
Previously, the Java SDK was configured to communicate with the server using uncompressed responses, which caused long response times. In this release, the default configuration requests compressed responses.
In this release, when installing or reinstalling hosts, the collectd and fluentd packages are now installed, including the relevant plugins. These can be used to send statistics to a central metrics store.
Previously, if SPICE USB redirection was disabled, libvirt created a default USB controller. With this update, if SPICE USB redirection is disabled then the virtual machine has a new USB controller, which is configurable per guest operating system and cluster version. This is defined in the configuration file.
The tcpdump package is now included with Red Hat Virtualization Host.
A reinstall-and-restore workflow has been tested and confirmed for moving from version 3.6 Red Hat Enterprise Virtualization Hypervisor hosts to the new implementation, Red Hat Virtualization Host, in 4.0 or 4.1.
With this update, IBM Security (Tivoli) Directory Server has been added to supported LDAP servers in ovirt-engine-extension-aaa-ldap. This allows customers to attach Red Hat Virtualization 4.1 to their IBM Security (Tivoli) Directory Server setup and to use users and groups from this setup in Red Hat Virtualization.
Previously, the ExportVmCommand appeared in the Engine log without the ID of the virtual machine being exported. This information has now been added to the log.

Note: After this change, users must have export permissions for the virtual machine and its disks to export a virtual machine. Previously, permissions to export virtual machine disks were sufficient.
Previously, users who wanted to use Cockpit for system configuration needed to log in to the system and retrieve IP address information manually. Now, Red Hat Virtualization Host provides a message on login informing users of the URL to the Cockpit user interface.
The "screen" package is now available as part of the base RHVH image.
This release introduces a 'force' flag, which can be used to update a storage server connection regardless of the associated storage domain status (allowing updates even when the storage domain is not in Maintenance mode).

For example: PUT /ovirt-engine/api/storageconnections/123;force=true
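As a sketch of how this request might be built and sent with a generic REST client (the Manager FQDN, credentials, connection ID, and XML body below are placeholders, not values from this release):

```shell
# Build the URL for a forced storage connection update.
# ENGINE and CONN_ID are hypothetical placeholder values.
ENGINE="https://engine.example.com/ovirt-engine"
CONN_ID="123"
URL="${ENGINE}/api/storageconnections/${CONN_ID};force=true"
echo "$URL"

# The request would then be sent with a REST client, for example:
#   curl --insecure --user 'admin@internal:password' --request PUT \
#        --header 'Content-Type: application/xml' \
#        --data '<storage_connection><address>new.nfs.server</address></storage_connection>' \
#        "$URL"
```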
This update adds the ability to import partial virtual machines using the REST API.

The Hyper Converged Infrastructure (HCI) Disaster Recovery (DR) solution is based on the concept that only data disks are replicated and system disks are not. Previously, if some of the virtual machine's disks were not replicated, the virtual machine import would fail. Since disks have snapshots, they cannot be imported as floating disks. To allow the DR to work, a virtual machine is forced to import from a storage domain, even if some of its disks are not accessible.

The following is a REST request for importing a partial unregistered virtual machine.

POST /api/storagedomains/xxxxxxx-xxxx-xxxx-xxxxxx/vms/xxxxxxx-xxxx-xxxx-xxxxxx/register HTTP/1.1
Accept: application/xml
Content-type: application/xml

<action>
    <cluster id='bf5a9e9e-5b52-4b0d-aeba-4ee4493f1072'></cluster>
    <allow_partial_import>true</allow_partial_import>
</action>
Red Hat Virtualization now supports headless virtual machines that run without a graphical console and display device. Headless mode is also supported for templates, pools, and instance types. This feature supports running a headless virtual machine from start, or after the initial setup (after "Run Once"). Headless mode can be enabled or disabled for a new or existing virtual machine at any time.
With this update, the Red Hat Virtualization Host includes sysstat as part of the base image.
This feature allows you to request a console ticket for a specific graphics device by means of the REST API. The existing endpoint, /api/vms/{vmId}/ticket, defaulted to SPICE in scenarios when SPICE+VNC was configured as the graphics protocol, making it impossible to request a VNC ticket. Now, a ticket action has been added to the /api/vms/{vmId}/graphicsconsoles/{consoleId} resource, making it possible to request a ticket for a specific console. This specific endpoint is now preferred, and the pre-existing per-VM endpoint is considered deprecated.
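As a sketch of addressing the new per-console resource (the VM ID, console ID, Manager FQDN, and credentials below are placeholders):

```shell
# Hypothetical IDs; substitute real VM and console IDs from your environment.
VM_ID="123"
CONSOLE_ID="456"
TICKET_PATH="/ovirt-engine/api/vms/${VM_ID}/graphicsconsoles/${CONSOLE_ID}/ticket"
echo "$TICKET_PATH"

# A ticket is requested by POSTing an action to that resource, for example:
#   curl --insecure --user 'admin@internal:password' --request POST \
#        --header 'Content-Type: application/xml' \
#        --data '<action><ticket><expiry>120</expiry></ticket></action>' \
#        "https://engine.example.com${TICKET_PATH}"
```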
Previously when integrating the Manager with an LDAP server using the ovirt-engine-extension-aaa-ldap-setup tool, the root of the LDAP tree (base DN) was selected automatically based on the LDAP server defaults. However, sometimes the defaults are incorrect for Manager integrations, and administrators are required to edit configuration files manually after the setup job completes.

Now the ovirt-engine-extension-aaa-ldap-setup tool offers to override the default base DN retrieved from LDAP server, so manual changes are no longer necessary.
This release adds the ability to specify a Maximum Memory value in all VM-like dialogs (Virtual Machine, Template, Pool, and Instance Type). It is accessible in the '{vm, template, instance_type}/memory_policy/max' tag in the REST API. The value defines the upper limit to which memory hot plug can be performed. The default value is 4x memory size.
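A minimal sketch of setting the value through the REST API, assuming a hypothetical VM ID and a limit of 8 GiB expressed in bytes:

```shell
# Hypothetical VM ID; the XML body sets the maximum memory to 8 GiB (in bytes).
VM_ID="123"
BODY='<vm><memory_policy><max>8589934592</max></memory_policy></vm>'
echo "$BODY"

# The body would be sent with a PUT request, for example:
#   curl --insecure --user 'admin@internal:password' --request PUT \
#        --header 'Content-Type: application/xml' \
#        --data "$BODY" \
#        "https://engine.example.com/ovirt-engine/api/vms/${VM_ID}"
```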
This release adds a maintenance tool to run vacuum actions on the engine database (or specific tables). This tool optimizes table stats and compacts the internals of tables, resulting in less disk space usage, more efficient future maintenance work, and updated table stats for better query planning. Also provided is an engine-setup dialog that offers to perform vacuum during upgrades. This can be automated by the answer file.
Previously, it was not possible to install Windows Server 2016 on a virtual machine. In this release, it is now possible to install Windows Server 2016 on a virtual machine. When adding a virtual machine, Windows Server 2016 appears in the list of available operating systems.
Previously, the Networking tab in Cockpit was disabled in Red Hat Virtualization Host (RHVH) images. This is now enabled, meaning that system networking can be configured through Cockpit in RHVH.
Support for virtual machine to host affinity has been added. This enables users to create affinity groups for virtual machines to be associated with designated hosts. Virtual machine host affinity can be disabled or enabled on request.

Virtual machine to host affinity is useful in the following scenarios:
- Hosts with specific hardware are required by certain virtual machines.
- Virtual machines that form a logical management unit can be run on a certain set of hosts for SLA or management, for example, a separate rack for each customer.
- Virtual machines with licensed software must run on specific physical machines.
- Avoiding the scheduling of virtual machines on hosts that are due to be decommissioned or upgraded.
The user experience for HA global maintenance has been improved in the UI by moving the options to a more logical location, and providing a visual indication about the current state of HA global maintenance for a given host.

The "Enable HA Global Maintenance" and "Disable HA Global Maintenance" buttons are now displayed on the right-click menu for hosts instead of virtual machines, and reflect the global maintenance state of the host by disabling the button matching the host's current HA global maintenance state.

The previous method of displaying the options for virtual machines was unintuitive. Additionally, both the enable and disable options remained available regardless of whether or not the host was in HA global maintenance mode.
With this release, Intel Skylake family CPUs are now supported.
This update adds the ability to import partial templates through REST. You can register a template even if some of the storage domains are missing.

The following is a REST request for importing a partial unregistered template:

POST /api/storagedomains/xxxxxxx-xxxx-xxxx-xxxxxx/templates/xxxxxxx-xxxx-xxxx-xxxxxx/register HTTP/1.1
Accept: application/xml
Content-type: application/xml

<action>
    <cluster id='bf5a9e9e-5b52-4b0d-aeba-4ee4493f1072'></cluster>
    <allow_partial_import>true</allow_partial_import>
</action>
During the authorization stage of the login flow, the user's group memberships, including nested groups, are retrieved. Nested group memberships are resolved using recursive LDAP searches, which could take a significant amount of time.

This update uses a special Active Directory feature called LDAP_MATCHING_RULE_IN_CHAIN, which allows you to fetch complete group memberships, including nested groups, in one LDAP search.
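A sketch of the kind of filter this enables, using the published matching-rule OID 1.2.840.113556.1.4.1941; the user DN, server, and bind DN below are hypothetical:

```shell
# LDAP_MATCHING_RULE_IN_CHAIN is matching-rule OID 1.2.840.113556.1.4.1941.
# A single search with this filter returns every group the user belongs to,
# including nested groups, instead of one recursive search per level.
USER_DN="CN=jdoe,CN=Users,DC=example,DC=com"
FILTER="(member:1.2.840.113556.1.4.1941:=${USER_DN})"
echo "$FILTER"

# With OpenLDAP's client tools this could be issued as:
#   ldapsearch -H ldap://ad.example.com -D 'binduser@example.com' -W \
#              -b 'DC=example,DC=com' "$FILTER" cn
```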
With this update, some ancillary self-hosted engine commands that were still based on xmlrpc have been moved to jsonrpc.
Since Red Hat Enterprise Virtualization 3.6, ovirt-ha-agent has read its configuration and the Manager virtual machine specification from shared storage; previously, these were local files replicated on each involved host. This enhancement modifies the output of hosted-engine --vm-status to show, for each reported host, whether the configuration and the Manager virtual machine specification have been correctly read from the shared storage.
Previously, the Java heap size for Data Warehouse was not explicitly set. This resulted in the Java virtual machine using the default size, which could have been as large as a quarter of the machine's total memory. With this release, Data Warehouse's configuration was updated to allocate 1 GB of RAM, with the addition of two new parameters:

The size can be set to a higher value for larger environments using these new parameters.
This feature integrates the setup for data sync to a remote location using geo-replication for Gluster-based storage domains, to improve disaster recovery. A user is able to schedule data sync to a remote location from the Red Hat Virtualization UI.
This release changes the default disk interface type from virtio-blk to virtio-scsi. virtio-blk is still supported, but users are encouraged to use the more modern virtio-scsi. When creating or attaching a disk to a virtual machine, the virtio-scsi interface type will now be selected by default.
This update allows you to change the default network used by the host from the management network (ovirtmgmt) to a non-management network.
The self-hosted engine's machine type has now been upgraded for Red Hat Enterprise Linux 7 compatibility.
With this update, the ability to remove LUNs from a block data domain has been added. This means that LUNs can be removed from a block data domain provided that there is enough free space on the other domain devices to contain the data stored on the LUNs being removed.
Previously, when a USB hub containing a redirected device was unplugged, spice-usbdk-win failed to clean up the redirected device. When the USB hub and its attached device were replugged, the device could not be redirected.

In this release, the issue has been fixed. spice-usbdk-win will now clean up the redirected device as required. When the USB hub and the USB device are re-plugged, the device can be redirected to the guest.
This feature adds rule enforcement support for VM to host affinity. VM to host affinity groups require the affinity rule enforcer to handle them in addition to the existing enforcement of VM to VM affinity. The rule enforcer will now be able to find VM to host affinity violations and choose a VM to migrate according to these violations.
This release adds the VirtIO-RNG driver installer to the guest tools ISO for supported Windows versions.
A script is now supplied to configure collectd and fluentd on hosts to send statistics to a central store.
In this release, the following optional troubleshooting packages have been added to the RHV-H repository:

These packages can be installed on Red Hat Virtualization Host.
With this update, support for Red Hat Virtualization and oVirt has been added to Ansible. For more information about oVirt Ansible modules see
This fix allows administrators to set the engine-config option "HostPackagesUpdateTimeInHours" to 0, which disables automatic periodical checks for host upgrades. Automatic periodical checks are not always needed, for example when managing hosts using Satellite.
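A sketch of how an administrator would apply this on the Manager machine (shown here as the command string to run, followed by the restart the change requires):

```shell
# The engine-config call that disables the periodic host-upgrade check.
# Run on the Manager machine, then restart ovirt-engine for it to take effect:
#   engine-config -s HostPackagesUpdateTimeInHours=0
#   systemctl restart ovirt-engine
CMD="engine-config -s HostPackagesUpdateTimeInHours=0"
echo "$CMD"
```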
From now on, all timestamp records for the engine and engine tools logs will contain a time zone to ease correlation between logs on the Manager and hosts. Previously, engine.log contained a timestamp without a time zone, for example:

2017-02-27 13:35:06,720 INFO  [org.ovirt.engine.core.dal.dbbroker.DbFacade] (ServerService Thread Pool -- 51) [] Initializing the DbFacade

From now on, there will always be a time zone identifier at the end of the timestamp, for example:

2017-02-27 13:35:06,720+01 INFO  [org.ovirt.engine.core.dal.dbbroker.DbFacade] (ServerService Thread Pool -- 51) [] Initializing the DbFacade
This release enables virtual machines to lease areas on a storage domain. If a virtual machine holds a lease on a storage domain, that storage domain cannot be moved into Maintenance mode; attempting to do so displays an error message explaining that a virtual machine currently holds a lease on the storage.
Previously, rhvm-appliance was not available via subscriptions in the RHV-H repositories. In this release, rhvm-appliance is the preferred deployment mechanism used by ovirt-hosted-engine-setup, and is now available in the RHV-H repositories.
Previously, the Networking tab was not available in the Cockpit in Red Hat Virtualization Host, even though the NetworkManager was enabled. With this release, the Networking tab is now available in the Cockpit, and administrators can use it to configure the network.
Previously, when the Manager attempted to connect to VDSM, it tried to negotiate the highest available version of TLS. However, due to earlier issues, negotiation was limited to TLSv1.0 as the highest version. This limit has now been removed, so that TLSv1.1 and TLSv1.2 can be negotiated if they are available on the VDSM side. Removing this limit will allow TLSv1.0 to be dropped from future versions of VDSM.
The Red Hat Virtualization Manager now provides warnings for all data centers and clusters that have not been upgraded to the latest installed version. The compatibility version of all data centers is checked once a week and on Manager startup. If it is not the latest version, an alert is raised and stored in the audit log. The Data Centers and Clusters main tabs now also show an exclamation mark icon for each data center or cluster that is not at the latest version. Hovering over this icon displays a recommendation to upgrade the compatibility version.
In RHV 4.1 a new tools repository containing packages required by the Red Hat Virtualization Manager has been added. See the Release Notes or Installation Guide for repository details.
The 'localdisk' hook adds the ability to use fast local storage instead of shared storage, while using shared storage for managing virtual machine templates. Until now, a user had to choose between fast local storage, where nothing is shared with other hosts, and shared storage, where everything is shared between the hosts and fast local storage cannot be used. This update makes it possible to mix local and shared storage.

The 'localdisk' hook works as follows:

1) A user will create a virtual machine normally on shared storage of any type. To use the virtual machine with local storage, the user will need to pin the virtual machine to a certain host and enable the localdisk hook.

2) When starting the virtual machine on the pinned host, the localdisk hook will copy the virtual machine disks from shared storage into the host's local storage and modify the disk path to use the local copy of the disk.

3) The original disk may be a single volume or a chain of volumes based on a template. The local copy is a raw preallocated volume using an LVM logical volume in the special "ovirt-local" volume group.

To change storage on a virtual machine using local storage, the localdisk hook must first be disabled. The following limitations apply:

- Virtual machines using a local disk must be pinned to a specific host and cannot be migrated between hosts.
- No storage operations are allowed on virtual machines using local disks, for example, creating or deleting snapshots, moving disks, or creating templates from the virtual machine.
- The virtual machine's disks on the shared storage must not be deleted, and the storage domain needs to remain active and accessible.
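A rough enablement sketch, under two explicit assumptions: that the hook ships as the vdsm-hook-localdisk package, and that it is triggered through a 'localdisk' custom VM property whose accepted value is 'lvm' (verify both against the hook's own documentation before use):

```shell
# Hypothetical localdisk-hook enablement sketch; package name and custom
# property regex are assumptions, not values confirmed by this release note.
HOOK_PKG="vdsm-hook-localdisk"
PROP_REGEX='localdisk=^(lvm)$'
# On each pinned host:
echo "yum install ${HOOK_PKG}"
# On the Manager, register the custom property, then restart ovirt-engine:
echo "engine-config -s 'UserDefinedVMProperties=${PROP_REGEX}'"
```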
With this release, the net-snmp package is part of the Red Hat Virtualization Host image by default.
It is now possible to create NFS storage domains with NFS version 4.2 via the REST API.
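A sketch of the request body this enables, assuming the NFS version enum value is 'v4_2'; the domain name, NFS server, export path, and host name below are placeholders:

```shell
# Hypothetical body for creating an NFS 4.2 data domain via the REST API.
BODY='<storage_domain><name>nfs42</name><type>data</type><storage><type>nfs</type><address>nfs.example.com</address><path>/exports/data</path><nfs_version>v4_2</nfs_version></storage><host><name>host1</name></host></storage_domain>'
echo "$BODY"

# The body would be POSTed to the storage domains collection, for example:
#   curl --insecure --user 'admin@internal:password' --request POST \
#        --header 'Content-Type: application/xml' \
#        --data "$BODY" \
#        "https://engine.example.com/ovirt-engine/api/storagedomains"
```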
The API supports the 'filter' parameter to indicate if results should be filtered according to the permissions of the user. Due to the way this is implemented, non-admin users need to set this parameter for almost all operations because the default value is 'false'. To simplify things for non-admin users, a configuration option ENGINE_API_FILTER_BY_DEFAULT has been added, which allows you to change the default value to 'true', but only for non-admin users. If the value is explicitly given in a request, it will be honored.

If you change the value of ENGINE_API_FILTER_BY_DEFAULT to 'true', be aware that this is not backwards compatible: clients that use non-admin users and do not explicitly provide the 'filter' parameter will start to behave differently. However, this is unlikely to cause problems in practice, because calls from non-admin users without 'filter=true' are of little use.

If it is necessary to change the default behavior, it can be achieved by adding the parameter to a configuration file inside the '/etc/ovirt-engine/engine.conf.d' directory (the file name shown here is arbitrary) and restarting the engine. For example:

  # echo 'ENGINE_API_FILTER_BY_DEFAULT="true"' > \
    /etc/ovirt-engine/engine.conf.d/99-api-filter.conf

  # systemctl restart ovirt-engine

3.1.2. Technology Preview

The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to
In this release, a new user interface for the User Portal has been introduced as a Technology Preview. The new user interface offers improved performance. The new User Portal can be accessed from the following link: https://[ENGINE_HOST]/ovirt-engine/web-ui

3.1.3. Release Notes

This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.
Previously, Red Hat Virtualization Host (RHVH) was shipped without an End User License Agreement (EULA). In this release, this bug has been fixed and RHVH now includes an EULA.
Currently, virtual machines with block devices cannot be imported from Xen to Red Hat Virtualization using the Administration Portal. This release adds a workaround to manually import Xen virtual machines with block devices:

1. Create an export domain.
2. Run the following command to copy the files locally:
   $ virt-v2v-copy-to-local -ic xen+ssh:// vmname
3. Run the following command to convert the virtual machine from the generated libvirt XML and place it in the export domain:
   $ virt-v2v -i libvirtxml vmname.xml -o rhev -of raw -os servername:/path/to/export/domain
4. The virtual machine should now appear in your export domain, and can be imported to a data domain.
RHEV Agent has been renamed to oVirt Agent on Windows.
rhevm-spice-client packages were renamed to spice-client-msi.
Previously, the name of the "rhevm-appliance" RPM contained only a timestamp, without versioning information. In this release, the Red Hat Virtualization release is included in the name of the "rhevm-appliance" RPM, and is visible from the node channel.

3.1.4. Known Issues

These known issues exist in Red Hat Virtualization at this time:
When hosted-engine --deploy is run on additional hosts that have multiple FQDNs associated with them, the script will pick the host address that is returned by default.

Ensure that each host's hostname resolves to the required FQDN before deploying the self-hosted engine.
Previously, after deleting a snapshot in a data center, the original volume's allocation policy and size differed from the pre-snapshot state. In this release, if a snapshot is created from a preallocated volume, when the snapshot is deleted, qemu-img is called to copy data from the top volume to the base volume. As a result, the original volume's allocation policy and size are identical to their pre-snapshot state.
Due to an unstable slave order in NetworkManager, DHCP over a bond created by NetworkManager may receive a different IP address after adding it to Red Hat Virtualization (RHV) or after rebooting. The workaround is to avoid using DHCP on a NetworkManager-controlled bond.

NetworkManager may also remove a DHCP-provided host name after a host is added to RHV. To avoid this, persist the host name explicitly via Cockpit or hostnamectl.
In Red Hat Virtualization 4.1, when the Manager deploys a host, collectd is always installed; however, host deployment will fail if you are attempting to deploy a new or reinstalled version 3.y host (in a cluster with 3.6 compatibility level), because collectd is not shipped in the 3.y repositories.

To avoid this, ensure that you install and deploy any version 3.y hosts prior to upgrading the Manager to 4.1.

Note that after the Manager upgrade, these hosts will continue to work, but you will not be able to reinstall them without first upgrading them to version 4.1.
Cockpit is currently available only for x86_64 architecture. As a result, in Red Hat Virtualization, Cockpit is supported only for x86_64 hosts, and is not supported for ppc64le (IBM POWER8) hosts.
File conflicts resulting from package renaming caused direct upgrades of the debuginfo package from version 1.0.12 or earlier to a later version to fail, due to duplicate files in the same location.

The workaround for this problem is to manually uninstall the previous version of the debuginfo package before installing or upgrading the newer ovirt-guest-agent packages, for example:

# yum remove rhevm-guest-agent-debuginfo

This workaround does not introduce any limitations and is simple to execute for users of the debuginfo packages.

3.1.5. Deprecated Functionality

The items in this section are either no longer supported or will no longer be supported in a future release.
This release removes the ability to export Gluster volume profile statistics as a PDF file (a feature that was not widely used) as part of removing the dependency on the avalon-framework package.
The rhevm-guest-agent packages for Red Hat Enterprise Linux have now been renamed to ovirt-guest-agent, to align with upstream.
This release removes a no-longer-needed workaround for the vdsm-jsonrpc deprecation warning.
IFCFG persistence mode has been deprecated. The Unified persistence mode has been the default since version 3.5 and should now be used on all systems.