Chapter 2. RHEA-2018:1489 VDSM 4.2 GA

The bugs in this chapter are addressed by advisory RHEA-2018:1489. Further information about this advisory is available at https://access.redhat.com/errata/RHEA-2018:1489.

vdsm

Previously, an incorrect storage domain procedure could create invalid storage domain LVM metadata. When detected, the system would fail to activate the storage domain. Now, the system logs a warning when invalid storage domain metadata is detected, without failing the activation.
A previously imported storage domain that was destroyed or detached can now be imported into an uninitialized Data Center. In the past, this operation failed because the storage domain retained its old metadata.
Previously, VDSM refreshed active logical volumes that had not changed (or never change) and did not need refreshing, which increased the load on the storage server, delayed other LVM operations, and added noise to the logs. Now, VDSM refreshes only logical volumes that have changed, eliminating these unnecessary refresh operations.
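Under the hood, a refresh corresponds to an LVM refresh operation. As a hedged illustration (the volume group and logical volume names here are hypothetical), this is the kind of command involved:

# lvchange --refresh storage_domain_vg/volume_lv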
Currently, LVM scans and activates raw volumes during boot. Then it scans and activates guest logical volumes created inside a guest on top of the raw volumes. It also scans and activates guest logical volumes inside LUNs which are not part of a Red Hat Virtualization storage domain. As a result, there may be thousands of active logical volumes on a host, which should not be active. This leads to very slow boot time and may lead to data corruption later if a logical volume active on the host was extended on another host.
To avoid this, you can configure an LVM filter using the "vdsm-tool config-lvm-filter" command. The LVM filter prevents scanning and activation of logical volumes not required by the host, which improves boot time.
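For example, a typical interactive run looks like the following. The devices, mount points, and recommended filter shown are illustrative and will differ on your host:

# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/vg0-lv_root
  mountpoint:      /
  devices:         /dev/vda2

This is the recommended LVM filter for this host:

  filter = [ "a|^/dev/vda2$|", "r|.*|" ]

Configure LVM filter? [yes,NO]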
Previously, when copying disks using qcow2 compressed format, the destination disk size was not calculated correctly because it was incorrectly assumed that the disk was not compressed. Copying an uploaded disk using qcow2 compressed format, or cloning a virtual machine using such a disk, would fail. Now, the system estimates the destination disk size based on the actual qcow2 image, so it is possible to copy compressed disks and clone virtual machines that use them.
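As a hedged illustration of this kind of estimation (the image names and sizes are hypothetical, and this is not necessarily the exact call VDSM makes), qemu-img can report the space a copy of a compressed image will require:

# qemu-img convert -c -O qcow2 source.img compressed.qcow2
# qemu-img measure -O qcow2 compressed.qcow2
required size: 327680
fully allocated size: 1074135040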
Previously, incorrect LVM configuration resulted in incorrect LVM output. The LVM configuration has now been fixed so that the correct LVM output is generated. The names of the generated files are as follows:

lvm_lvs_-v_-o_tags_--config_global_locking_type_0_use_lvmetad_0_devices_preferred_names_.dev.mapper._ignore_suspended_devices_1_write_cache_state_0_disable_after_error_count_3_filter_a_.dev.mapper.._r

lvm_pvs_-v_-o_all_--config_global_locking_type_0_use_lvmetad_0_devices_preferred_names_.dev.mapper._ignore_suspended_devices_1_write_cache_state_0_disable_after_error_count_3_filter_a_.dev.mapper.._r

lvm_vgs_-v_-o_tags_--config_global_locking_type_0_use_lvmetad_0_devices_preferred_names_.dev.mapper._ignore_suspended_devices_1_write_cache_state_0_disable_after_error_count_3_filter_a_.dev.mapper.._r
Previously, when a VM was migrating and the source host became non-operational, the VM could end up running on two hosts simultaneously. This has now been fixed.
Previously, VDSM expected the optional object identifier (OID) field in LLDP. If this was absent, parsing of LLDP failed. Now, VDSM no longer expects the OID field in LLDP. LLDP info from the host is presented via the REST API without an OID.
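For example (hedged: the engine host name and the object IDs are placeholders), LLDP information for a host NIC can be retrieved from the REST API:

# curl -s -k -u admin@internal:password \
  https://engine.example.com/ovirt-engine/api/hosts/123/nics/321/linklayerdiscoveryprotocolelements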
Previously, TLSv1.2 support was backported into Red Hat Virtualization 4.1.5 (BZ#1412552), but it was turned off by default and enabling TLSv1.2 required manual configuration. Now, TLSv1.2 support is enabled by default and no manual configuration is required.
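To verify that a host accepts TLSv1.2 connections, you can probe the VDSM port (54321 by default) with openssl; the host name below is a placeholder:

# openssl s_client -connect host.example.com:54321 -tls1_2 < /dev/null 2>/dev/null | grep Protocol
    Protocol  : TLSv1.2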
Previously, VDSM was accessing NFS storage directly when performing lease operations. A bug in Python could cause the entire VDSM process to hang if the NFS storage was not responsive. VDSM could become unkillable (D state) for many hours, until the host was rebooted. Now, VDSM uses an external process to access NFS storage, so it can continue to function correctly even if the NFS storage becomes non-responsive, and can be restarted if needed.
Previously, the after_vm_pause VDSM hook was not executed after I/O errors. This has now been fixed.
LVM scans and activates raw volumes during boot. Then it scans and activates guest logical volumes created inside a guest on top of the raw volumes. It also scans and activates guest logical volumes inside LUNs which are not part of a Red Hat Virtualization storage domain. As a result, it may find logical volumes with the same volume name or volume group name as groups or volumes on the host, causing errors.
To avoid this, you can configure an LVM filter using the "vdsm-tool config-lvm-filter" command. The LVM filter prevents scanning and activation of logical volumes not required by the host, thereby avoiding naming collisions.
In this release, a new VDSM hook that configures nested virtualization has been introduced as a Technology Preview. Support for nested virtualization was introduced in Red Hat Enterprise Linux 7; it enables a virtual machine to serve as a host.
VDSM hooks are a means to insert code, commands, or scripts into a point in the lifecycle of a virtual machine or the VDSM daemon.
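As a hedged example, assuming the hook ships in the vdsm-hook-nestedvt package (verify the package name for your release), it can be installed and activated by restarting VDSM:

# yum install vdsm-hook-nestedvt
# systemctl restart vdsmd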
vdsm-tool now provides commands for VDSM network cleanup, such as `vdsm-tool clear-nets` and `vdsm-tool dummybr-remove`. You can remove networks configured by VDSM by following the steps below. Note that the VDSM service does not need to be running:

1. To prevent loss of connectivity, it might be necessary to exclude the default route network from the cleanup. Look for a network providing the default route (ovirtmgmt by default):
# vdsm-tool list-nets
...
ovirtmgmt (default route)
...

2. Remove all networks configured by VDSM except for the default network:
# vdsm-tool clear-nets --exclude-net ovirtmgmt

3. Remove the libvirt dummy bridge `;vdsmdummy;`:
# vdsm-tool dummybr-remove

4. Now that the host is clean, you can remove VDSM.
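For example (hedged: confirm the package set on your host and review the transaction before confirming), all VDSM packages can then be removed with yum:

# yum remove vdsm\*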
Sparsify and sysprep can now be run on POWER hosts.
Less additional space is now required in /var/tmp during VMware OVA import.
Red Hat Virtualization uses the qemu-img tool, rather than dd, to copy disks during live storage migration. Because qemu-img converts unused space in the image to holes, the destination disk becomes sparse, so raw preallocated disks copied during live storage migration were previously converted to raw sparse disks.
Now, you can use the qemu-img preallocation option when copying raw preallocated disks to file-based storage domains, so that the disks remain preallocated after the migration.
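As a hedged sketch of this kind of invocation (the disk paths are hypothetical, and this is not necessarily the exact command VDSM runs), qemu-img can preallocate the raw destination:

# qemu-img convert -O raw -o preallocation=falloc /path/to/source_disk /path/to/destination_disk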
LVM scans and activates LUNs and raw volumes, and any logical volumes inside them, such as those created within a guest. LVM then displays these unexpected guest logical volumes, or reports confusing errors about them.
You can now use "vdsm-tool config-lvm-filter" to configure an LVM filter so that LVM cannot scan or activate guest logical volumes, preventing the unexpected output.
Previously, VDSM could miss short storage outages that caused a VM to pause, so a VM paused during a short outage was not resumed. Now, multipath queues I/O for longer and fails I/O only if storage remains inaccessible after the timeout, while VDSM uses a shorter timeout to detect inaccessible storage. If the outage is very short, the VM does not pause and does not need to be resumed. If the outage is longer and the VM does pause, VDSM is more likely to detect the outage and resume the VM when storage becomes available again.
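As a hedged way to inspect the queueing behavior (the exact value may differ between releases), you can check the no_path_retry setting in the VDSM-managed multipath configuration:

# grep no_path_retry /etc/multipath.conf
    no_path_retry 4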
In this release, VDSM no longer fails to detect libvirt block jobs that have successfully completed, allowing live merge operations to complete successfully in these cases.
Previously, the libvirt API would report live merges as complete before they had actually completed, resulting in errors. With this release, live merge progress is detected using the libvirt XML, resulting in correct reporting of live merge completion status.
Previously, when a block job was manually aborted during a live merge, the merge operation would fail. In this release, VDSM detects the failure and corrects it.
Leaked clusters on an image are now correctly identified and handled, allowing cold merges to succeed when they are present.
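Leaked clusters can be inspected manually with qemu-img; the image path below is a placeholder, and the counts shown are illustrative:

# qemu-img check /path/to/image.qcow2
1 leaked clusters were found on the image.
This means waste of disk space, but no harm to data.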
The host monitoring task now correctly reports host statistics after the VDSM task execution queue becomes full under extreme load.
Libvirt no longer logs an incorrect error when a virtual machine is shut down correctly.