Chapter 5. Migrating virtual machines from the command line

You can migrate virtual machines to OpenShift Virtualization from the command line.

Important

You must ensure that all prerequisites are met.

5.1. Migrating virtual machines

You migrate virtual machines (VMs) from the command line (CLI) by creating MTV custom resources (CRs).

Important

You must specify a name for cluster-scoped CRs.

You must specify both a name and a namespace for namespace-scoped CRs.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.
  • VMware only: You must have a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters.
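
    A minimal sketch of one way to build and push such an image, assuming that you downloaded the VDDK archive from VMware and extracted it into a vmware-vix-disklib-distrib directory; the registry path and tag are placeholders:

    $ cat > Containerfile << EOF
    FROM registry.access.redhat.com/ubi8/ubi-minimal
    COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    RUN mkdir -p /opt
    ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    EOF
    $ podman build . -t <registry_route_or_server_path>/vddk:<tag>
    $ podman push <registry_route_or_server_path>/vddk:<tag>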

Procedure

  1. VMware only: Add the VDDK image to the HyperConverged CR:

    $ cat << EOF | oc apply -f -
    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      vddkInitImage: <registry_route_or_server_path>/vddk:<tag> 1
    EOF
    1
    Specify the VDDK image that you created.
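
    Optionally, confirm that the value was applied. This is a minimal check, assuming the default HyperConverged name and namespace shown above:

    $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.vddkInitImage}'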
  2. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: openshift-mtv
    type: Opaque
    stringData:
      user: <user> 1
      password: <password> 2
      cacert: | 3
        <engine_ca_certificate>
      thumbprint: <vcenter_fingerprint> 4
    EOF
    1
    Specify the vCenter user or the RHV Manager user. Because the manifest uses the stringData field, specify plain values, not base64-encoded values.
    2
    Specify the user password.
    3
    RHV only: Specify the CA certificate of the Manager in PEM format. You can retrieve it at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA.
    4
    VMware only: Specify the vCenter SHA-1 fingerprint.
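
    VMware only: One way to obtain the SHA-1 fingerprint, assuming direct network access from your workstation to the vCenter host on port 443:

    $ openssl s_client -connect <vcenter_host>:443 < /dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha1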
  3. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <provider>
      namespace: openshift-mtv
    spec:
      type: <provider_type> 1
      url: <api_end_point> 2
      secret:
        name: <secret> 3
        namespace: openshift-mtv
    EOF
    1
    Allowed values are ovirt and vsphere.
    2
    Specify the API endpoint URL, for example, https://<vCenter_host>/sdk for vSphere or https://<engine_host>/ovirt-engine/api/ for RHV.
    3
    Specify the name of the provider Secret CR.
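
    Optionally, inspect the Provider CR after you create it and review its status conditions before proceeding:

    $ oc get provider/<provider> -n openshift-mtv -o yaml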
  4. VMware only: Create a Host manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Host
    metadata:
      name: <vmware_host>
      namespace: openshift-mtv
    spec:
      provider:
        namespace: openshift-mtv
        name: <source_provider> 1
      id: <source_host_mor> 2
      ipAddress: <source_network_ip> 3
    EOF
    1
    Specify the name of the VMware Provider CR.
    2
    Specify the managed object reference (MOR) of the VMware host.
    3
    Specify the IP address of the VMware migration network.
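
    If the govc CLI is available, one possible way to look up the host moRef is to list the cluster inventory with object references; the inventory path is an example, and the flags might differ between govc versions:

    $ govc ls -i /<datacenter>/host/<cluster>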
  5. Create a NetworkMap manifest to map the source and destination networks:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: openshift-mtv
    spec:
      map:
        - destination:
            name: <pod>
            namespace: openshift-mtv
            type: pod 1
          source: 2
            id: <source_network_id> 3
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition> 4
            namespace: <network_attachment_definition_namespace> 5
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: openshift-mtv
        destination:
          name: <destination_cluster>
          namespace: openshift-mtv
    EOF
    1
    Allowed values are pod and multus.
    2
    You can use either the id or the name parameter to specify the source network.
    3
    Specify the VMware network MOR or RHV network UUID.
    4
    Specify a network attachment definition for each additional OpenShift Virtualization network.
    5
    Specify the namespace of the OpenShift Virtualization network attachment definition.
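
    To list the network attachment definitions that are available as Multus destinations, for example:

    $ oc get network-attachment-definitions -A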
  6. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: openshift-mtv
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            id: <source_datastore> 2
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode>
          source:
            id: <source_datastore>
      provider:
        source:
          name: <source_provider>
          namespace: openshift-mtv
        destination:
          name: <destination_cluster>
          namespace: openshift-mtv
    EOF
    1
    Allowed values are ReadWriteOnce and ReadWriteMany.
    2
    Specify the VMware datastore MOR or RHV storage domain UUID, for example, f2737930-b567-451a-9ceb-2887f6207009.
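
    To list the storage classes that are available on the destination cluster, for example:

    $ oc get storageclass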
  7. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: openshift-mtv
    spec:
      image: quay.io/konveyor/hook-runner 1
      playbook: | 2
        LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
        YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
        IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
        cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
        bG9hZAoK
    EOF
    1
    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
    2
    Optional: Base64-encoded Ansible playbook. If you specify a playbook, the image must be hook-runner.
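
    The playbook value is the base64 encoding of a plain Ansible playbook. One way to produce it from a local file, assuming GNU coreutils:

    $ base64 -w0 playbook.yml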
  8. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: openshift-mtv
    spec:
      warm: true 2
      provider:
        source:
          name: <source_provider>
          namespace: openshift-mtv
        destination:
          name: <destination_cluster>
          namespace: openshift-mtv
      map:
        network: 3
          name: <network_map> 4
          namespace: openshift-mtv
        storage:
          name: <storage_map> 5
          namespace: openshift-mtv
      targetNamespace: openshift-mtv
      vms: 6
        - id: <source_vm> 7
        - name: <source_vm>
          hooks: 8
            - hook:
                namespace: openshift-mtv
                name: <hook> 9
              step: <step> 10
    EOF
    1
    Specify the name of the Plan CR.
    2
    VMware only: Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage will run. Warm migration is not supported for RHV.
    3
    You can add multiple network mappings.
    4
    Specify the name of the NetworkMap CR.
    5
    Specify the name of the StorageMap CR.
    6
    You can use either the id or the name parameter to specify the source VMs.
    7
    Specify the VMware VM MOR or RHV VM UUID.
    8
    Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
    9
    Specify the name of the Hook CR.
    10
    Allowed values are PreHook, which runs before the migration plan starts, or PostHook, which runs after the migration is complete.
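
    Optionally, inspect the Plan CR after you create it and check its status conditions to confirm that it was validated before you run it:

    $ oc get plan/<plan> -n openshift-mtv -o yaml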
  9. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration> 1
      namespace: openshift-mtv
    spec:
      plan:
        name: <plan> 2
        namespace: openshift-mtv
      cutover: <cutover_time> 3
    EOF
    1
    Specify the name of the Migration CR.
    2
    Specify the name of the Plan CR that you are running. The Migration CR creates a VirtualMachine CR for each VM that is migrated.
    3
    Optional: Specify a cutover time according to the ISO 8601 format with the UTC time offset, for example, 2021-04-04T01:23:45.678+09:00.

    You can associate multiple Migration CRs with a single Plan CR. If a migration does not complete, you can create a new Migration CR, without changing the Plan CR, to migrate the remaining VMs.

  10. Retrieve the Migration CR to monitor the progress of the migration:

    $ oc get migration/<migration> -n openshift-mtv -o yaml
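
    You can also watch the Migration CRs in the namespace as the migration progresses:

    $ oc get migration -n openshift-mtv -w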

5.2. Canceling a migration

You can cancel an entire migration or the migration of individual virtual machines (VMs) from the command line interface (CLI) while the migration is in progress.

Canceling an entire migration

  • Delete the Migration CR:

    $ oc delete migration <migration> -n openshift-mtv 1
    1
    Specify the name of the Migration CR.

Canceling the migration of individual VMs

  1. Add the individual VMs to the spec.cancel block of the Migration manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration>
      namespace: openshift-mtv
    ...
    spec:
      cancel:
      - id: vm-102 1
      - id: vm-203
      - name: rhel8-vm
    EOF
    1
    You can specify a VM by using the id key or the name key.

    The value of the id key is the managed object reference (MOR) for a VMware VM or the VM UUID for a RHV VM.

  2. Retrieve the Migration CR to monitor the progress of the remaining VMs:

    $ oc get migration/<migration> -n openshift-mtv -o yaml