Release notes

Migration Toolkit for Virtualization 2.5

Version 2.5

Red Hat Modernization and Migration Documentation Team

Abstract

This document describes new features, known issues, and resolved issues for the Migration Toolkit for Virtualization 2.5.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 1. Migration Toolkit for Virtualization 2.5

You can use the Migration Toolkit for Virtualization (MTV) to migrate virtual machines from the following source providers to OpenShift Virtualization destination providers:

  • VMware vSphere
  • Red Hat Virtualization (RHV)
  • OpenStack
  • Open Virtual Appliances (OVAs) that were created by VMware vSphere
  • Remote OpenShift Virtualization clusters

The release notes describe technical changes, new features and enhancements, and known issues for Migration Toolkit for Virtualization.

1.1. Technical changes

This release has the following technical changes:

Migration from OpenStack moves to being a fully supported feature

In this version of MTV, migration using OpenStack source providers graduated from a Technology Preview feature to a fully supported feature.

Disabling FIPS

EMS enforcement is disabled for migrations with VMware vSphere source providers to enable migrations from versions of vSphere that are supported by MTV but do not comply with the 2023 FIPS requirements.

Integration of the create and update provider user interface

The user interface of create and update providers now aligns with the look and feel of the Red Hat OpenShift web console and displays up-to-date data.

Standalone UI

The old UI of MTV 2.3 can no longer be enabled by setting feature_ui: true in the ForkliftController CR.

Errors logged in populator pods are improved

In previous releases of MTV 2.5, populator pods were always restarted on failure, which made it difficult to gather logs from the failed pods. In MTV 2.5.3, populator pods are restarted a maximum of three times. After the third failure, the populator pod remains in the failed state so that its logs can be gathered by must-gather, and the forklift-controller can detect that the step has failed. (MTV-818)

1.2. New features and enhancements

This release has the following features and improvements:

Migration using OVA files created by VMware vSphere

In MTV 2.5, you can migrate using Open Virtual Appliance (OVA) files that were created by VMware vSphere as source providers. (MTV-336)

Note

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by MTV. MTV supports only OVA files created by VMware vSphere.

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

Important

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Migrating VMs between Red Hat OpenShift clusters

In MTV 2.5, you can now use Red Hat OpenShift Virtualization provider as a source provider as well as a destination provider. You can migrate VMs from the cluster that MTV is deployed on to another cluster, or from a remote cluster to the cluster that MTV is deployed on. (MTV-571)

Migration of VMs with direct LUNs from RHV

During the migration from RHV, direct LUNs are detached from the source virtual machines and attached to the target virtual machines. Note that this mechanism does not work yet for Fibre Channel. (MTV-329)

Additional authentication methods for OpenStack

In addition to standard password authentication, the following authentication methods are supported: Token authentication and Application credential authentication. (MTV-539)
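
As a sketch, an OpenStack provider secret using application credential authentication might look like the following. The key names shown (authType, applicationCredentialID, applicationCredentialSecret, and so on) are assumptions based on the MTV provider secret format and should be verified against your MTV version:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openstack-credentials
  namespace: openshift-mtv
type: Opaque
stringData:
  authType: applicationcredential   # or "token" for token authentication
  applicationCredentialID: <credential_id>
  applicationCredentialSecret: <credential_secret>
  regionName: <region>
  projectName: <project>
  url: https://<identity_service>:5000/v3
```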

Validation rules for OpenStack

The validation service includes default validation rules for virtual machines from OpenStack. (MTV-508)

VDDK is now optional for VMware vSphere providers

The VMware vSphere source provider can now be created without specifying a VDDK init image. It is strongly recommended to create a VDDK init image to accelerate migrations.
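
A minimal sketch of a vSphere Provider CR with and without the optional VDDK setting follows. The field name settings.vddkInitImage and the example names are assumptions based on the MTV Provider CR format:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-provider
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://<vcenter_host>/sdk
  secret:
    name: vsphere-credentials
    namespace: openshift-mtv
  settings:
    # Optional as of MTV 2.5; omit this field to migrate without VDDK,
    # but a VDDK init image is strongly recommended for performance.
    vddkInitImage: <registry>/vddk:<tag>
```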

Deployment on OKE enabled

In MTV 2.5.3, deployment on OpenShift Kubernetes Engine (OKE) has been enabled. For more information, see About OpenShift Kubernetes Engine. (MTV-803)

Migration of VMs to destination storage classes with encrypted RBD now supported

In MTV 2.5.4, migration of VMs to destination storage classes that have encrypted RADOS Block Devices (RBD) volumes is now supported.

To make use of this new feature, set the value of the parameter controller_block_overhead to 1Gi, following the procedure in Configuring the MTV Operator. (MTV-851)
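
Following that procedure, the setting lands in the spec section of the ForkliftController CR. A minimal sketch, assuming the resource name and namespace of a default MTV installation:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  # Adds 1Gi of block volume overhead, enabling migration to
  # destination storage classes with encrypted RBD volumes.
  controller_block_overhead: 1Gi
```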

1.3. Known issues

This release has the following known issues:

Deleting migration plan does not remove temporary resources

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)
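
Archiving can be done in the web console or, as a sketch assuming the Plan CR exposes a spec.archived field and the plan lives in the openshift-mtv namespace, by patching the plan from the CLI before deleting it:

```yaml
# Hypothetical plan name "my-plan":
#   oc patch plan my-plan -n openshift-mtv --type merge -p '{"spec":{"archived":true}}'
# which sets the following in the Plan CR before deletion:
spec:
  archived: true
```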

Unclear error status message for VM with no operating system

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

Migration of virtual machines with encrypted partitions fails during conversion

This issue affects migrations from vSphere only. Migrations from RHV and OpenStack do not fail, but the encryption key may be missing on the target Red Hat OpenShift cluster.

Migration fails during precopy/cutover while a snapshot operation is performed on the source VM

Warm migration from RHV fails if a snapshot operation is performed on the source VM. If a user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)

Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target Red Hat OpenShift cluster.

Non-supported guest operating systems in warm migrations

Warm migrations and migrations to remote Red Hat OpenShift clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local Red Hat OpenShift cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in OpenShift Virtualization. (MTV-491)

Import OVA: ConnectionTestFailed message appears when adding OVA provider

When adding an OVA provider, the error message ConnectionTestFailed might appear immediately, although the provider is created successfully. If the message does not disappear after a few minutes and the provider status does not move to Ready, the OVA server pod creation has failed. (MTV-671)

For a complete list of all known issues in this release, see the list of Known Issues in Jira.

1.4. Resolved issues

This release has the following resolved issues:

Flaw was found in jsrsasign package which is vulnerable to Observable Discrepancy

Versions of the package jsrsasign before 11.0.0, used in previous releases of MTV, are vulnerable to Observable Discrepancy in the RSA PKCS1.5 or RSA-OAEP decryption process. This discrepancy means an attacker could decrypt ciphertexts by exploiting this vulnerability. However, exploiting this vulnerability requires the attacker to have access to a large number of ciphertexts encrypted with the same key. This issue has been resolved in MTV 2.5.5 by upgrading the package jsrsasign to version 11.0.0.

For more information, see CVE-2024-21484.

Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

For more information, see CVE-2023-44487 (Rapid Reset Attack) and CVE-2023-39325 (Rapid Reset Attack).

Gin Web Framework does not properly sanitize filename parameter of Context.FileAttachment function

A flaw was found in the Gin-Gonic Gin Web Framework, used by MTV. The filename parameter of the Context.FileAttachment function was not properly sanitized. This flaw in the package could allow a remote attacker to bypass security restrictions caused by improper input validation by the filename parameter of the Context.FileAttachment function. A maliciously created filename could cause the Content-Disposition header to be sent with an unexpected filename value, or otherwise modify the Content-Disposition header.

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

For more information, see CVE-2023-29401 (Gin-Gonic Gin Web Framework) and CVE-2023-26125.

CVE-2023-26144: mtv-console-plugin-container: graphql: Insufficient checks in the OverlappingFieldsCanBeMergedRule.ts

A flaw was found in the package GraphQL from 16.3.0 and before 16.8.1. This flaw means MTV versions before MTV 2.5.2 are vulnerable to Denial of Service (DoS) due to insufficient checks in the OverlappingFieldsCanBeMergedRule.ts file when parsing large queries. This issue may allow an attacker to degrade system performance. (MTV-712)

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

For more information, see CVE-2023-26144.

CVE-2023-45142: Memory leak found in the otelhttp handler of open-telemetry

A flaw was found in otelhttp handler of OpenTelemetry-Go. This flaw means MTV versions before MTV 2.5.3 are vulnerable to a memory leak caused by http.user_agent and http.method having unbound cardinality, which could allow a remote, unauthenticated attacker to exhaust the server’s memory by sending many malicious requests, affecting the availability. (MTV-795)

This issue has been resolved in MTV 2.5.3. It is advised to update to this version of MTV or later.

For more information, see CVE-2023-45142.

CVE-2023-39322: QUIC connections do not set an upper bound on the amount of data buffered when reading post-handshake messages

A flaw was found in Golang. This flaw means MTV versions before MTV 2.5.3 are vulnerable to QUIC connections not setting an upper bound on the amount of data buffered when reading post-handshake messages, allowing a malicious QUIC connection to cause unbounded memory growth. With the fix, connections now consistently reject messages larger than 65KiB in size. (MTV-708)

This issue has been resolved in MTV 2.5.3. It is advised to update to this version of MTV or later.

For more information, see CVE-2023-39322.

CVE-2023-39321: Processing an incomplete post-handshake message for a QUIC connection can cause a panic

A flaw was found in Golang. This flaw means MTV versions before MTV 2.5.3 are vulnerable to processing an incomplete post-handshake message for a QUIC connection, which causes a panic. (MTV-693)

This issue has been resolved in MTV 2.5.3. It is advised to update to this version of MTV or later.

For more information, see CVE-2023-39321.

CVE-2023-39319: Flaw in html/template package

A flaw was found in the Golang html/template package used in MTV. This flaw means MTV versions before MTV 2.5.3 are vulnerable, as the html/template package did not properly handle occurrences of <script, <!--, and </script within JavaScript literals in <script> contexts. This flaw could cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped, which could be leveraged to perform an XSS attack. (MTV-693)

This issue has been resolved in MTV 2.5.3. It is advised to update to this version of MTV or later.

For more information, see CVE-2023-39319.

CVE-2023-39318: Flaw in html/template package

A flaw was found in the Golang html/template package used in MTV. This flaw means MTV versions before MTV 2.5.3 are vulnerable, as the html/template package did not properly handle HTML-like <!-- comment tokens, nor hashbang #! comment tokens, in <script> contexts. This flaw could cause the template parser to improperly interpret the contents of <script> contexts, causing actions to be improperly escaped, which could be leveraged to perform an XSS attack. (MTV-693)

This issue has been resolved in MTV 2.5.3. It is advised to update to this version of MTV or later.

For more information, see CVE-2023-39318.

Logs archive file downloaded from UI includes logs related to deleted migration plan/VM

In previous releases of MTV 2.5, the log files downloaded from UI could contain logs that are related to a previous migration plan. (MTV-783)

This issue has been resolved in MTV 2.5.3.

Extending a VM disk in RHV is not reflected in the MTV inventory

In previous releases of MTV 2.5, the size of disks that are extended in RHV was not adequately monitored. This resulted in the inability to migrate virtual machines with extended disks from a RHV provider. (MTV-830)

This issue has been resolved in MTV 2.5.3.

Filesystem overhead configurable

In previous releases of MTV 2.5, the filesystem overhead for new persistent volumes was hard-coded to 10%. This overhead was insufficient for certain filesystem types, resulting in failures during cold migrations from RHV and OSP to the cluster where MTV is deployed. For other filesystem types, the hard-coded overhead was too high, resulting in excessive storage consumption.

In MTV 2.5.3, the filesystem overhead can be configured and is no longer hard-coded. If your migration allocates persistent volumes without CDI, you can adjust the filesystem overhead by adding the following parameter and value to the spec portion of the forklift-controller CR:

spec:
  controller_filesystem_overhead: <percentage>

where <percentage> is the percentage of overhead. If this parameter is not added, the default value of 10% is used. This setting is valid only if the storageclass is filesystem. (MTV-699)
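
For context, a minimal sketch of the full ForkliftController CR carrying this setting, assuming the resource name and namespace of a default MTV installation:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  # 15% overhead instead of the former hard-coded 10%;
  # applies only to filesystem-mode storage classes.
  controller_filesystem_overhead: 15
```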

Ensure up-to-date data is displayed in the create and update provider forms

In previous releases of MTV, the create and update provider forms could have presented stale data.

This issue is resolved in MTV 2.5: the new create and update provider forms display up-to-date properties of the provider. (MTV-603)

Snapshots that are created during a migration in OpenStack are not deleted

In previous releases of MTV, the Migration Controller service did not delete snapshots that were created during a migration of source virtual machines in OpenStack automatically.

This issue is resolved in MTV 2.5: all snapshots created during the migration are removed after the migration has been completed. (MTV-620)

RHV snapshots are not deleted after a successful migration

In previous releases of MTV, the Migration Controller service did not delete snapshots automatically after a successful warm migration of a VM from RHV.

This issue is resolved in MTV 2.5: the snapshots generated during migration are removed after a successful migration, while the original snapshots are retained. (MTV-349)

Warm migration fails when cutover conflicts with precopy

In previous releases of MTV, the cutover operation failed when it was triggered while precopy was being performed. The VM was locked in RHV and therefore the ovirt-engine rejected the snapshot creation, or disk transfer, operation.

This issue is resolved in MTV 2.5: the cutover operation is triggered but is not performed while the VM is locked. Once the precopy operation completes, the cutover operation runs. (MTV-686)

Warm migration fails when VM is locked

In previous releases of MTV, triggering a warm migration while there was an ongoing operation in RHV that locked the VM caused the migration to fail because the snapshot creation could not be triggered.

This issue is resolved in MTV 2.5: warm migration does not fail when an operation that locks the VM is performed in RHV. The migration does not fail, but starts when the VM is unlocked. (MTV-687)

Deleting migrated VM does not remove PVC and PV

In previous releases of MTV, when removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) were not deleted.

This issue is resolved in MTV 2.5: PVCs and PVs are deleted when the migrated VM is deleted. (MTV-492)

PVC deletion hangs after archiving and deleting migration plan

In previous releases of MTV, when a migration failed, its PVCs and PVs were not deleted as expected when its migration plan was archived and deleted.

This issue is resolved in MTV 2.5: PVCs are deleted when the migration plan is archived and deleted. (MTV-493)

VM with multiple disks may boot from non-bootable disk after migration

In previous releases of MTV, VMs with multiple disks that were migrated might not have been able to boot on the target Red Hat OpenShift cluster.

This issue is resolved in MTV 2.5: VMs with multiple disks that are migrated are able to boot on the target Red Hat OpenShift cluster. (MTV-433)

Transfer network not taken into account for cold migrations from vSphere

In MTV releases 2.4.0-2.5.3, cold migrations from vSphere to the local cluster on which MTV was deployed did not take a specified transfer network into account. This issue is resolved in MTV 2.5.4. (MTV-846)

For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.

1.5. Upgrade notes

It is recommended to upgrade from MTV 2.4.2 to MTV 2.5.

Upgrade from 2.4.0 fails

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field spec.selector of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the Red Hat OpenShift console once the forklift-console-plugin pod runs to load the upgraded MTV web console. (MTV-518)

Chapter 2. Migration Toolkit for Virtualization 2.4

Migrate virtual machines (VMs) from VMware vSphere, Red Hat Virtualization, or OpenStack to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV).

The release notes describe technical changes, new features and enhancements, and known issues.

2.1. Technical changes

This release has the following technical changes:

Faster disk image migration from RHV

Disk images are no longer converted using virt-v2v when migrating from RHV. This change speeds up migrations and also allows migration of guest operating systems that are not supported by virt-v2v. (forklift-controller#403)

Faster disk transfers by ovirt-imageio client (ovirt-img)

Disk transfers use the ovirt-imageio client (ovirt-img) instead of Containerized Data Importer (CDI) when migrating from RHV to the local OpenShift Container Platform cluster, accelerating the migration.

Faster migration using conversion pod disk transfer

When migrating from vSphere to the local OpenShift Container Platform cluster, the conversion pod transfers the disk data instead of Containerized Data Importer (CDI), accelerating the migration.

Migrated virtual machines are not scheduled on the target OCP cluster

The migrated virtual machines are no longer scheduled on the target OpenShift Container Platform cluster. This enables migrating VMs that cannot start due to limit constraints on the target at migration time.

StorageProfile resource needs to be updated for a non-provisioner storage class

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS.
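
A hedged sketch of such an update for an NFS storage class, assuming the CDI StorageProfile format (claimPropertySets with accessModes and volumeMode) and a storage class named nfs:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: nfs            # must match the storage class name
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteMany
    volumeMode: Filesystem
```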

VDDK 8 can be used in the VDDK image

Previous versions of MTV supported only using VDDK version 7 for the VDDK image. MTV supports both versions 7 and 8, as follows:

  • If you are migrating to OCP 4.12 or earlier, use VDDK version 7.
  • If you are migrating to OCP 4.13 or later, use VDDK version 8.

2.2. New features and enhancements

This release has the following features and improvements:

OpenStack migration

MTV now supports migrations with OpenStack as a source provider. This feature is provided as a Technology Preview and supports only cold migrations.

OCP console plugin

The Migration Toolkit for Virtualization Operator now integrates the MTV web console into the Red Hat OpenShift web console. The new UI operates as an OCP Console plugin that adds the sub-menu Migration to the navigation bar. It is implemented in version 2.4, disabling the old UI. You can enable the old UI by setting feature_ui: true in ForkliftController. (MTV-427)
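
As a sketch of re-enabling the old UI in MTV 2.4, assuming the resource name and namespace of a default MTV installation:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  feature_ui: true   # re-enables the old standalone MTV web console
```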

Skip certificate validation option

A skip certificate validation option was added to the VMware and RHV providers. If selected, the provider’s certificate is not validated and the UI does not require specifying a CA certificate.

Only third-party certificate required

Only the third-party certificate needs to be specified when defining an RHV provider that is set with the Manager CA certificate.

Conversion of VMs with RHEL9 guest operating system

Cold migrations from vSphere to a local Red Hat OpenShift cluster use virt-v2v on RHEL 9. (MTV-332)

2.3. Known issues

This release has the following known issues:

Deleting migration plan does not remove temporary resources

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

Unclear error status message for VM with no operating system

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#22008846)

Log archive file includes logs of a deleted migration plan or VM

If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the MTV web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

Migration of virtual machines with encrypted partitions fails during conversion

This issue affects migrations from vSphere only. Migrations from RHV and OpenStack do not fail, but the encryption key may be missing on the target OCP cluster.

Snapshots that are created during the migration in OpenStack are not deleted

The Migration Controller service does not delete snapshots that are created during the migration for source virtual machines in OpenStack automatically. Workaround: the snapshots can be removed manually on OpenStack.

RHV snapshots are not deleted after a successful migration

The Migration Controller service does not delete snapshots automatically after a successful warm migration of a RHV VM. Workaround: Snapshots can be removed from RHV instead. (MTV-349)

Migration fails during precopy/cutover while a snapshot operation is executed on the source VM

Some warm migrations from RHV might fail. When running a migration plan for warm migration of multiple VMs from RHV, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run.

Warm migration from RHV fails if a snapshot operation is performed on the source VM. If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)

Cannot schedule migrated VM with multiple disks to more than one storage class of type hostPath

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OCP cluster.

Deleting migrated VM does not remove PVC and PV

When removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) are not deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-492)

PVC deletion hangs after archiving and deleting migration plan

When a migration fails, its PVCs and PVs are not deleted as expected when its migration plan is archived and deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-493)

VM with multiple disks may boot from non-bootable disk after migration

A VM with multiple disks that was migrated might not be able to boot on the target OCP cluster. Workaround: Set the boot order appropriately to boot from the bootable disk. (MTV-433)

Non-supported guest operating systems in warm migrations

Warm migrations and migrations to remote OCP clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OCP cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, their network interfaces could be disabled when they start in OpenShift Virtualization. (MTV-491)

Upgrade from 2.4.0 fails

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field spec.selector of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OCP console once the forklift-console-plugin pod runs to load the upgraded MTV web console. (MTV-518)

2.4. Resolved issues

This release has the following resolved issues:

Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

This issue has been resolved in MTV 2.4.3 and 2.5.2. It is advised to update to one of these versions of MTV or later.

For more information, see CVE-2023-44487 (Rapid Reset Attack) and CVE-2023-39325 (Rapid Reset Attack).

Improve invalid/conflicting VM name handling

The automatic renaming of VMs during migration to conform to RFC 1123 is improved. This feature, introduced in MTV 2.3.4, is enhanced to cover more special cases. (MTV-212)

Prevent locking user accounts due to incorrect credentials

If a user specifies an incorrect password for a RHV provider, the user account is no longer locked in RHV. If the RHV Manager is accessible, an error is returned when the provider is added. If the RHV Manager is inaccessible, the provider is added, but no further connection attempts are made after failing due to incorrect credentials. (MTV-324)

Users without cluster-admin role can create new providers

Previously, the cluster-admin role was required to browse and create providers. In this release, users with sufficient permissions on MTV resources (providers, plans, migrations, NetworkMaps, StorageMaps, hooks) can operate MTV without cluster-admin permissions. (MTV-334)

Convert i440fx to q35

Migration of virtual machines with i440fx chipset is now supported. The chipset is converted to q35 during the migration. (MTV-430)

Preserve the UUID setting in SMBIOS for a VM that is migrated from RHV

The Universal Unique ID (UUID) number within the System Management BIOS (SMBIOS) no longer changes for VMs that are migrated from RHV. This enhancement enables applications that operate within the guest operating system and rely on this setting, such as for licensing purposes, to operate on the target OCP cluster in a manner similar to that of RHV. (MTV-597)

Do not expose password for RHV in error messages

Previously, the password that was specified for RHV manager appeared in error messages that were displayed in the web console and logs when failing to connect to RHV. In this release, error messages that are generated when failing to connect to RHV do not reveal the password for RHV manager.

QEMU guest agent is now installed on migrated VMs

The QEMU guest agent is installed on VMs during cold migration from vSphere. (BZ#2018062)

Chapter 3. Migration Toolkit for Virtualization 2.3

You can migrate virtual machines (VMs) from VMware vSphere or Red Hat Virtualization to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV).

The release notes describe technical changes, new features and enhancements, and known issues.

3.1. Technical changes

This release has the following technical changes:

Setting the VddkInitImage path is part of the procedure of adding a VMware provider

In the web console, you enter the VddkInitImage path when adding a VMware provider. Alternatively, from the CLI, you add the VddkInitImage path to the Provider CR for VMware migrations.

The StorageProfile resource needs to be updated for a non-provisioner storage class

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS. The documentation includes a link to the relevant procedure.

3.2. New features and enhancements

This release has the following features and improvements:

MTV now supports warm migration from RHV

You can use warm migration to migrate VMs from both VMware and RHV.

The minimal sufficient set of privileges for VMware users is established

VMware users do not have to have full cluster-admin privileges to perform a VM migration. The minimal sufficient set of user’s privileges is established and documented.

MTV documentation is updated with instructions on using hooks

MTV documentation includes instructions on adding hooks to migration plans and running hooks on VMs.

3.3. Known issues

This release has the following known issues:

Some warm migrations from RHV might fail

When you run a migration plan for warm migration of multiple VMs from RHV, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run. (BZ#2063531)

Snapshots are not deleted after warm migration

The Migration Controller service does not delete snapshots automatically after a successful warm migration of a RHV VM. You can delete the snapshots manually. (BZ#2053183)

Warm migration from RHV fails if a snapshot operation is performed on the source VM

If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (BZ#2057459)

QEMU guest agent is not installed on migrated VMs

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

Deleting a migration plan does not remove temporary resources

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it in order to clean up the temporary resources. (BZ#2018974)

Unclear error status message for VM with no operating system

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

Log archive file includes logs of a deleted migration plan or VM

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the MTV web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

Migration of virtual machines with encrypted partitions fails during conversion

The problem occurs for both vSphere and RHV migrations.

MTV 2.3.4 only: When the source provider is RHV, duplicating a migration plan fails in either the network mapping stage or the storage mapping stage.

Possible workaround: Clear the browser cache or restart the browser. (BZ#2143191)

Chapter 4. Migration Toolkit for Virtualization 2.2

You can migrate virtual machines (VMs) from VMware vSphere or Red Hat Virtualization to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV).

The release notes describe technical changes, new features and enhancements, and known issues.

4.1. Technical changes

This release has the following technical changes:

Setting the precopy time interval for warm migration

You can set the time interval between snapshots taken during the precopy stage of warm migration.
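A minimal sketch of this setting, assuming the default ForkliftController name and namespace, is to set the precopy interval, in minutes, in the ForkliftController CR:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  controller_precopy_interval: 60    # minutes between precopy snapshots
```

A shorter interval copies changed blocks more frequently; a longer interval reduces snapshot overhead on the source VM.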

4.2. New features and enhancements

This release has the following features and improvements:

Creating validation rules

You can create custom validation rules to check the suitability of VMs for migration. Validation rules are based on the VM attributes collected by the Provider Inventory service and written in Rego, the Open Policy Agent native query language.
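As an illustrative sketch only, a custom Rego rule that raises a concern for VMs with snapshots might look like the following. The package path and input attribute names are assumptions based on typical inventory data, not a definitive schema; check the inventory attributes exposed by your provider before writing rules:

```rego
package io.konveyor.forklift.vmware    # assumed package path for vSphere rules

default has_snapshot = false

has_snapshot {
    count(input.snapshots) > 0         # assumed inventory attribute
}

concerns[flag] {
    has_snapshot
    flag := {
        "category": "Warning",
        "label": "VM has snapshots",
        "assessment": "Snapshots are not migrated with the VM.",
    }
}
```

Rules that add entries to the concerns set cause the Validation service to flag the matching VMs in the migration plan.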

Downloading logs by using the web console

You can download logs for a migration plan or a migrated VM by using the MTV web console.

Duplicating a migration plan by using the web console

You can duplicate a migration plan by using the web console, including its VMs, mappings, and hooks, in order to edit the copy and run it as a new migration plan.

Archiving a migration plan by using the web console

You can archive a migration plan by using the MTV web console. Archived plans can be viewed or duplicated. They cannot be run, edited, or unarchived.

4.3. Known issues

This release has the following known issues:

Certain Validation service issues do not block migration

Certain Validation service issues, which are marked as Critical and display the assessment text, The VM will not be migrated, do not block migration. (BZ#2025977)

The following Validation service assessments do not block migration:

Table 4.1. Issues that do not block migration

Assessment: The disk interface type is not supported by OpenShift Virtualization (only sata, virtio_scsi and virtio interface types are currently supported).
Result: The migrated VM will have a virtio disk if the source interface is not recognized.

Assessment: The NIC interface type is not supported by OpenShift Virtualization (only e1000, rtl8139 and virtio interface types are currently supported).
Result: The migrated VM will have a virtio NIC if the source interface is not recognized.

Assessment: The VM is using a vNIC profile configured for host device passthrough, which is not currently supported by OpenShift Virtualization.
Result: The migrated VM will have an SR-IOV NIC. The destination network must be set up correctly.

Assessment: One or more of the VM’s disks has an illegal or locked status condition.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has a disk with a storage type other than image, and this is not currently supported by OpenShift Virtualization.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has one or more snapshots with disks in ILLEGAL state. This is not currently supported by OpenShift Virtualization.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has USB support enabled, but USB devices are not currently supported by OpenShift Virtualization.
Result: The migrated VM will not have USB devices.

Assessment: The VM is configured with a watchdog device, which is not currently supported by OpenShift Virtualization.
Result: The migrated VM will not have a watchdog device.

Assessment: The VM’s status is not up or down.
Result: The migration will proceed but it might hang if the VM cannot be powered off.

QEMU guest agent is not installed on migrated VMs

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

Missing resource causes error message in current.log file

If a resource does not exist, for example, if the virt-launcher pod does not exist because the migrated VM is powered off, its log is unavailable.

The following error appears in the missing resource’s current.log file when it is downloaded from the web console or created with the must-gather tool: error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'. (BZ#2023260)

Importer pod log is unavailable after warm migration

Retaining the importer pod for debug purposes causes warm migration to hang during the precopy stage. (BZ#2016290)

As a temporary workaround, the importer pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer pod log is not retained after warm migration is complete. You can only view the importer pod log by using the oc logs -f <cdi-importer_pod> command during the precopy stage.

This issue only affects the importer pod log and warm migration. Cold migration and the virt-v2v logs are not affected.

Deleting a migration plan does not remove temporary resources

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it in order to clean up the temporary resources. (BZ#2018974)

Unclear error status message for VM with no operating system

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

Networks, storage, and VMs referenced by name in the Plan CR are not displayed in the web console

If a Plan CR references storage, networks, or VMs by name instead of by ID, the resources do not appear in the MTV web console. The migration plan cannot be edited or duplicated. (BZ#1986020)

Log archive file includes logs of a deleted migration plan or VM

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the MTV web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

If a target VM is deleted during migration, its migration status is Succeeded in the Plan CR

If you delete a target VirtualMachine CR during the Convert image to kubevirt step of the migration, the Migration details page of the web console displays the state of the step as VirtualMachine CR not found. However, the status of the VM migration is Succeeded in the Plan CR file and in the web console. (BZ#2031529)

Chapter 5. Migration Toolkit for Virtualization 2.1

You can migrate virtual machines (VMs) from VMware vSphere or Red Hat Virtualization to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV).

The release notes describe new features and enhancements, known issues, and technical changes.

5.1. Technical changes

VDDK image added to HyperConverged custom resource

The VMware Virtual Disk Development Kit (VDDK) image must be added to the HyperConverged custom resource. Before this release, it was referenced in the v2v-vmware config map.

5.2. New features and enhancements

This release adds the following features and improvements.

Cold migration from Red Hat Virtualization

You can perform a cold migration of VMs from Red Hat Virtualization.

Migration hooks

You can create migration hooks to run Ansible playbooks or custom code before or after migration.
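A sketch of a Hook CR that runs an Ansible playbook follows. The hook name is hypothetical, the image shown is the upstream hook-runner image (verify the correct image for your release), and the playbook content is a placeholder that you must base64-encode yourself:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: premigration-hook               # hypothetical hook name
  namespace: openshift-mtv
spec:
  image: quay.io/konveyor/hook-runner   # verify the hook-runner image for your release
  playbook: |
    <base64-encoded Ansible playbook>   # placeholder; encode your playbook with base64 -w0
```

The hook is then referenced from the VM entries of a Plan CR, with a step field indicating whether it runs before or after migration.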

Filtered must-gather data collection

You can specify options for the must-gather tool that enable you to filter the data by namespace, migration plan, or VMs.
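For example, targeted data collection for a single migration plan can be invoked along the following lines. The image path and tag are assumptions; check the must-gather image documented for your MTV release, and replace the plan name placeholder:

```shell
$ oc adm must-gather \
  --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.1.0 \
  -- PLAN=<migration_plan> /usr/bin/targeted
```

Similar NS= and VM= variables can scope the collection to a namespace or to a single migrated VM.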

SR-IOV network support

You can migrate VMs with a single root I/O virtualization (SR-IOV) network interface if the OpenShift Virtualization environment has an SR-IOV network.

5.3. Known issues

QEMU guest agent is not installed on migrated VMs

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

Disk copy stage does not progress

The disk copy stage of a RHV VM does not progress and the MTV web console does not display an error message. (BZ#1990596)

The cause of this problem might be one of the following conditions:

  • The storage class does not exist on the target cluster.
  • The VDDK image has not been added to the HyperConverged custom resource.
  • The VM does not have a disk.
  • The VM disk is locked.
  • The VM time zone is not set to UTC.
  • The VM is configured for a USB device.

To disable USB devices, see Configuring USB Devices in the Red Hat Virtualization documentation.

To determine the cause:

  1. Click Workloads → Virtualization in the Red Hat OpenShift web console.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine to open the Virtual Machine Overview screen.
  4. Click Status to view the status of the virtual machine.

VM time zone must be UTC with no offset

The time zone of the source VMs must be UTC with no offset. You can set the time zone to GMT Standard Time after first assessing the potential impact on the workload. (BZ#1993259)

RHV resource UUID causes a "Provider not found" error

If a RHV resource UUID is used in a Host, NetworkMap, StorageMap, or Plan custom resource (CR), a "Provider not found" error is displayed.

You must use the resource name. (BZ#1994037)

Same RHV resource name in different data centers causes ambiguous reference

If a RHV resource name is used in a NetworkMap, StorageMap, or Plan custom resource (CR) and if the same resource name exists in another data center, the Plan CR displays a critical "Ambiguous reference" condition. You must rename the resource or use the resource UUID in the CR.

In the web console, the resource name appears twice in the same list without a data center reference to distinguish them. You must rename the resource. (BZ#1993089)

Snapshots are not deleted after warm migration

Snapshots are not deleted automatically after a successful warm migration of a VMware VM. You must delete the snapshots manually in VMware vSphere. (BZ#2001270)

Chapter 6. Migration Toolkit for Virtualization 2.0

You can migrate virtual machines (VMs) from VMware vSphere with the Migration Toolkit for Virtualization (MTV).

The release notes describe new features and enhancements, known issues, and technical changes.

6.1. New features and enhancements

This release adds the following features and improvements.

Warm migration

Warm migration reduces downtime by copying most of the VM data during a precopy stage while the VMs are running. During the cutover stage, the VMs are stopped and the rest of the data is copied.

Cancel migration

You can cancel an entire migration plan or individual VMs while a migration is in progress. A canceled migration plan can be restarted in order to migrate the remaining VMs.

Migration network

You can select a migration network for the source and target providers for improved performance. By default, data is copied using the VMware administration network and the Red Hat OpenShift pod network.

Validation service

The validation service checks source VMs for issues that might affect migration and flags the VMs with concerns in the migration plan.

Important

The validation service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

6.2. Known issues

This section describes known issues and mitigations.

QEMU guest agent is not installed on migrated VMs

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

Network map displays a "Destination network not found" error

If the network map remains in a NotReady state and the NetworkMap manifest displays a Destination network not found error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. (BZ#1971259)
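As a minimal sketch, a network attachment definition for a Linux bridge destination network might look like the following. The name, namespace, CNI type, and bridge device are placeholders that depend on your environment:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan10-net                      # hypothetical destination network name
  namespace: openshift-mtv
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan10-net",
      "type": "cnv-bridge",             # assumed CNI type for a bridge network
      "bridge": "br1"                   # placeholder bridge device on the node
    }
```

The network map must reference the attachment definition by the same name and namespace.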

Warm migration gets stuck during third precopy

Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. (BZ#1969894)

You can do one of the following to mitigate this issue:

  • Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created.
  • Increase the snapshot interval in the vm-import-controller-config config map to 720 minutes:

    $ oc patch configmap/vm-import-controller-config \
      -n openshift-cnv \
      -p '{"data": {"warmImport.intervalMinutes": "720"}}'

Legal Notice

Copyright © 2024 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.