Chapter 2. Migrating from OpenShift Container Platform 4.1

2.1. Migration tools and prerequisites

You can migrate application workloads from OpenShift Container Platform 4.1 to 4.5 with the Migration Toolkit for Containers (MTC). MTC enables you to control the migration and to minimize application downtime.

Note

You can migrate between OpenShift Container Platform clusters of the same version, for example, from 4.1 to 4.1, as long as the source and target clusters are configured correctly.

The MTC web console and API, based on Kubernetes custom resources, enable you to migrate stateful and stateless application workloads at the granularity of a namespace.

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.

You can use migration hooks to run Ansible playbooks at certain points during the migration. The hooks are added when you create a migration plan.

2.1.1. Migration prerequisites

The Migration Toolkit for Containers (MTC) has the following prerequisites:

  • You must upgrade the source cluster to the latest z-stream release.
  • You must have cluster-admin privileges on all clusters.
  • The source and target clusters must have unrestricted network access to the replication repository.
  • The cluster on which the MigrationController CR is installed must have unrestricted network access to the other clusters.
  • If your application uses images from the openshift namespace, the required versions of the images must be present on the target cluster. You can manually update an image stream tag in order to use a deprecated OpenShift Container Platform 3 image on an OpenShift Container Platform 4.5 cluster.
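
    For example, a minimal sketch of updating an image stream tag with oc tag so that it points to an image that is available to the target cluster; the registry host name, image, and tag are placeholders, not values defined in this procedure:

    $ oc tag <registry_host_name>/<image>:<tag> <image_stream>:<tag> -n openshift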

2.1.2. About the Migration Toolkit for Containers

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images from an OpenShift Container Platform source cluster to an OpenShift Container Platform 4.5 target cluster, using the MTC web console or the Kubernetes API.

Migrating an application with the MTC web console involves the following steps:

  1. Install the Migration Toolkit for Containers Operator on all clusters.

    You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry.

  2. Configure the replication repository, an intermediate object storage that MTC uses to migrate data.

    The source and target clusters must have network access to the replication repository during migration. In a restricted environment, you can use an internally hosted S3 storage repository. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters.

  3. Add the source cluster to the MTC web console.
  4. Add the replication repository to the MTC web console.
  5. Create a migration plan, with one of the following data migration options:

    • Copy: MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster.

      Figure: migration PV copy
    • Move: MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.

      Note

      Although the replication repository does not appear in this diagram, it is required for migration.

      Figure: migration PV move
  6. Run the migration plan, with one of the following options:

    • Stage (optional) copies data to the target cluster without stopping the application.

      Staging can be run multiple times so that most of the data is copied to the target before migration. This minimizes the duration of the migration and application downtime.

    • Migrate stops the application on the source cluster and recreates its resources on the target cluster. Optionally, you can migrate the workload without stopping the application.

2.1.3. About data copy methods

The Migration Toolkit for Containers (MTC) supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.

2.1.3.1. File system copy method

MTC copies data files from the source cluster to the replication repository, and from there to the target cluster.

Table 2.1. File system copy method summary

Benefits:

  • Clusters can have different storage classes
  • Supported for all S3 storage providers
  • Optional data verification with checksum

Limitations:

  • Slower than the snapshot copy method
  • Optional data verification significantly reduces performance

2.1.3.2. Snapshot copy method

MTC copies a snapshot of the source cluster data to the replication repository of a cloud provider. The data is restored on the target cluster.

AWS, Google Cloud Provider, and Microsoft Azure support the snapshot copy method.

Table 2.2. Snapshot copy method summary

Benefits:

  • Faster than the file system copy method

Limitations:

  • Cloud provider must support snapshots.
  • Clusters must be on the same cloud provider.
  • Clusters must be in the same location or region.
  • Clusters must have the same storage class.
  • Storage class must be compatible with snapshots.

2.1.4. About migration hooks

You can use migration hooks to run Ansible playbooks at certain points during a migration with the Migration Toolkit for Containers (MTC). The hooks are added when you create a migration plan.

Note

If you do not want to use Ansible playbooks, you can create a custom container image and add it to a migration plan.

Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration.

A single migration hook runs on a source or target cluster at one of the following migration steps:

  • PreBackup: Before backup tasks are started on the source cluster
  • PostBackup: After backup tasks are complete on the source cluster
  • PreRestore: Before restore tasks are started on the target cluster
  • PostRestore: After restore tasks are complete on the target cluster

    You can assign one hook to each migration step, up to a maximum of four hooks for a single migration plan.

The default hook-runner image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:v1.4.0. This image is based on Ansible Runner and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. You can also create your own hook image with additional Ansible modules or tools.

The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job on a cluster with a specified service account and namespace. The job continues to run until it completes successfully or reaches the default backoff limit of 6 retries, even if the initial pod is evicted or killed.
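
For example, a minimal Ansible playbook sketch for a PostRestore hook that waits until a migrated deployment reports available replicas. The namespace and deployment name are placeholders, and the sketch assumes that the k8s_info module is available in the hook-runner image through its OpenShift Python support:

- hosts: localhost
  gather_facts: false
  tasks:
  - name: Wait for the migrated deployment to become available
    k8s_info:
      api_version: apps/v1
      kind: Deployment
      namespace: my-app    # placeholder namespace
      name: my-app         # placeholder deployment name
    register: deployment
    until:
    - deployment.resources | length > 0
    - (deployment.resources[0].status.availableReplicas | default(0)) > 0
    retries: 30
    delay: 10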

2.2. Deploying the Migration Toolkit for Containers

You can install the Migration Toolkit for Containers Operator on your OpenShift Container Platform 4.5 target cluster and 4.1 source cluster. The Migration Toolkit for Containers Operator installs the Migration Toolkit for Containers (MTC) on the target cluster by default.

Note

Optional: You can configure the Migration Toolkit for Containers Operator to install the MTC on an OpenShift Container Platform 3 cluster or on a remote cluster.

In a restricted environment, you can install the Migration Toolkit for Containers Operator from a local mirror registry.

After you have installed the Migration Toolkit for Containers Operator on your clusters, you can launch the MTC web console.

2.2.1. Installing the Migration Toolkit for Containers Operator

You can install the Migration Toolkit for Containers Operator with the Operator Lifecycle Manager on an OpenShift Container Platform 4.5 target cluster and on an OpenShift Container Platform 4.1 source cluster.

2.2.1.1. Installing the Migration Toolkit for Containers on an OpenShift Container Platform 4.5 target cluster

You can install the Migration Toolkit for Containers (MTC) on an OpenShift Container Platform 4.5 target cluster by using Operator Lifecycle Manager (OLM) to install the Migration Toolkit for Containers Operator.

MTC is installed on the target cluster by default.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Use the Filter by keyword field to find the Migration Toolkit for Containers Operator.
  3. Select the Migration Toolkit for Containers Operator and click Install.
  4. In the Subscription tab, change the Approval option to Automatic.
  5. Click Install.

    On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded.

  6. Click Migration Toolkit for Containers Operator.
  7. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  8. Click Create.
  9. Click Workloads → Pods to verify that the MTC pods are running.

2.2.1.2. Installing the Migration Toolkit for Containers on an OpenShift Container Platform 4.1 source cluster

You can install the Migration Toolkit for Containers (MTC) on an OpenShift Container Platform 4 source cluster by using Operator Lifecycle Manager (OLM) to install the Migration Toolkit for Containers Operator.

Procedure

  1. In the OpenShift Container Platform web console, click Catalog → OperatorHub.
  2. Use the Filter by keyword field to find the Migration Toolkit for Containers Operator.
  3. Select the Migration Toolkit for Containers Operator and click Install.
  4. Click Install.

    On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded.

  5. Click Migration Toolkit for Containers Operator.
  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  7. Update the migration_controller and migration_ui parameters and add the deprecated_cors_configuration parameter to the manifest:

    spec:
    ...
      migration_controller: false
      migration_ui: false
    ...
      deprecated_cors_configuration: true
  8. Click Create.
  9. Click Workloads → Pods to verify that the MTC pods are running.

2.2.2. Installing the Migration Toolkit for Containers Operator in a restricted environment

You can build a custom Operator catalog image for OpenShift Container Platform 4, push it to a local mirror image registry, and configure the Operator Lifecycle Manager to install the Migration Toolkit for Containers Operator from the local registry.

2.2.2.1. Building an Operator catalog image

Cluster administrators can build a custom Operator catalog image based on the Package Manifest Format to be used by Operator Lifecycle Manager (OLM). The catalog image can be pushed to a container image registry that supports Docker v2-2. For a cluster on a restricted network, this registry can be a registry that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation.

Important

The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process.

For this example, the procedure assumes use of a mirror registry that has access to both your network and the Internet.

Prerequisites

  • Workstation with unrestricted network access
  • oc version 4.3.5+
  • podman version 1.4.4+
  • Access to mirror registry that supports Docker v2-2
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json
  • If you are working with private namespaces that your quay.io account has access to, you must set a Quay authentication token. Set the AUTH_TOKEN environment variable for use with the --auth-token flag by making a request against the login API using your quay.io credentials:

    $ AUTH_TOKEN=$(curl -sH "Content-Type: application/json" \
        -XPOST https://quay.io/cnr/api/v1/users/login -d '
        {
            "user": {
                "username": "'"<quay_username>"'",
                "password": "'"<quay_password>"'"
            }
        }' | jq -r '.token')

Procedure

  1. On the workstation with unrestricted network access, authenticate with the target mirror registry:

    $ podman login <registry_host_name>

    Also authenticate with registry.redhat.io so that the base image can be pulled during the build:

    $ podman login registry.redhat.io
  2. Build a catalog image based on the redhat-operators catalog from Quay.io, tagging and pushing it to your mirror registry:

    $ oc adm catalog build \
        --appregistry-org redhat-operators \1
        --from=registry.redhat.io/openshift4/ose-operator-registry:v4.5 \2
        --filter-by-os="linux/amd64" \3
        --to=<registry_host_name>:<port>/olm/redhat-operators:v1 \4
        [-a ${REG_CREDS}] \5
        [--insecure] \6
        [--auth-token "${AUTH_TOKEN}"] 7
    1
    Organization (namespace) to pull from an App Registry instance.
    2
    Set --from to the ose-operator-registry base image using the tag that matches the target OpenShift Container Platform cluster major and minor version.
    3
    Set --filter-by-os to the operating system and architecture to use for the base image, which must match the target OpenShift Container Platform cluster. Valid values are linux/amd64, linux/ppc64le, and linux/s390x.
    4
    Name your catalog image and include a tag, for example, v1.
    5
    Optional: If required, specify the location of your registry credentials file.
    6
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    7
    Optional: If other application registry catalogs are used that are not public, specify a Quay authentication token.

    Example output

    INFO[0013] loading Bundles                               dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605
    ...
    Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v1

    Sometimes invalid manifests are accidentally introduced into catalogs provided by Red Hat; when this happens, you might see some errors:

    Example output with errors

    ...
    INFO[0014] directory                                     dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605 file=4.2 load=package
    W1114 19:42:37.876180   34665 builder.go:141] error building database: error loading package into db: fuse-camel-k-operator.v7.5.0 specifies replacement that couldn't be found
    Uploading ... 244.9kB/s

    These errors are usually non-fatal, and if the Operator package mentioned does not contain an Operator you plan to install or a dependency of one, then they can be ignored.

2.2.2.2. Configuring OperatorHub for restricted networks

Cluster administrators can configure OLM and OperatorHub to use local content in a restricted network environment using a custom Operator catalog image. For this example, the procedure uses a custom redhat-operators catalog image previously built and pushed to a supported registry.

Prerequisites

  • Workstation with unrestricted network access
  • A custom Operator catalog image pushed to a supported registry
  • oc version 4.3.5+
  • podman version 1.4.4+
  • Access to mirror registry that supports Docker v2-2
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json

Procedure

  1. The oc adm catalog mirror command extracts the contents of your custom Operator catalog image to generate the manifests required for mirroring. You can choose to either:

    • Allow the default behavior of the command to automatically mirror all of the image content to your mirror registry after generating manifests, or
    • Add the --manifests-only flag to only generate the manifests required for mirroring, but do not actually mirror the image content to a registry yet. This can be useful for reviewing what will be mirrored, and it allows you to make any changes to the mapping list if you only require a subset of the content. You can then use that file with the oc image mirror command to mirror the modified list of images in a later step.

    On your workstation with unrestricted network access, run the following command:

    $ oc adm catalog mirror \
        <registry_host_name>:<port>/olm/redhat-operators:v1 \1
        <registry_host_name>:<port> \
        [-a ${REG_CREDS}] \2
        [--insecure] \3
        --filter-by-os='.*' \4
        [--manifests-only] 5
    1
    Specify your Operator catalog image.
    2
    Optional: If required, specify the location of your registry credentials file.
    3
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    4
    This flag is currently required due to a known issue with multiple architecture support. If unset or set to any value other than .*, filtering out different architectures changes the digest of the manifest list, also known as a "multi-arch image", which causes deployments of those images and Operators on disconnected clusters to fail. For more information, see BZ#1890951.
    5
    Optional: Only generate the manifests required for mirroring and do not actually mirror the image content to a registry.

    Example output

    using database path mapping: /:/tmp/190214037
    wrote database to /tmp/190214037
    using database at: /tmp/190214037/bundles.db 1
    ...

    1
    Temporary database generated by the command.

    After running the command, a <image_name>-manifests/ directory is created in the current directory and contains the following files:

    • The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry.
    • The mapping.txt file contains all of the source images and where to map them in the target registry. This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration.
  2. If you used the --manifests-only flag in the previous step and want to mirror only a subset of the content:

    1. Modify the list of images in your mapping.txt file to your specifications. If you are unsure of the exact names and versions of the subset of images you want to mirror, use the following steps to find them:

      1. Run the sqlite3 tool against the temporary database that was generated by the oc adm catalog mirror command to retrieve a list of images matching a general search query. The output helps inform how you will later edit your mapping.txt file.

        For example, to retrieve a list of images that are similar to the string clusterlogging.4.3:

        $ echo "select * from related_image \
            where operatorbundle_name like 'clusterlogging.4.3%';" \
            | sqlite3 -line /tmp/190214037/bundles.db 1
        1
        Refer to the previous output of the oc adm catalog mirror command to find the path of the database file.

        Example output

        image = registry.redhat.io/openshift4/ose-logging-kibana5@sha256:aa4a8b2a00836d0e28aa6497ad90a3c116f135f382d8211e3c55f34fb36dfe61
        operatorbundle_name = clusterlogging.4.3.33-202008111029.p0
        
        image = registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6b4db07f6e6c962fc96473d86c44532c93b146bbefe311d0c348117bf759c506
        operatorbundle_name = clusterlogging.4.3.33-202008111029.p0
        ...

      2. Use the results from the previous step to edit the mapping.txt file to only include the subset of images you want to mirror.

        For example, you can use the image values from the previous example output to find that the following matching lines exist in your mapping.txt file:

        Matching image mappings in mapping.txt

        registry.redhat.io/openshift4/ose-logging-kibana5@sha256:aa4a8b2a00836d0e28aa6497ad90a3c116f135f382d8211e3c55f34fb36dfe61=<registry_host_name>:<port>/openshift4-ose-logging-kibana5:a767c8f0
        registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6b4db07f6e6c962fc96473d86c44532c93b146bbefe311d0c348117bf759c506=<registry_host_name>:<port>/openshift4-ose-oauth-proxy:3754ea2b

        In this example, if you only want to mirror these images, you would then remove all other entries in the mapping.txt file and leave only the above two lines.

    2. Still on your workstation with unrestricted network access, use your modified mapping.txt file to mirror the images to your registry using the oc image mirror command:

      $ oc image mirror \
          [-a ${REG_CREDS}] \
          -f ./redhat-operators-manifests/mapping.txt
  3. Apply the ImageContentSourcePolicy object:

    $ oc apply -f ./redhat-operators-manifests/imageContentSourcePolicy.yaml
  4. Create a CatalogSource object that references your catalog image.

    1. Modify the following to your specifications and save it as a catalogsource.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: my-operator-catalog
        namespace: openshift-marketplace
      spec:
        sourceType: grpc
        image: <registry_host_name>:<port>/olm/redhat-operators:v1 1
        displayName: My Operator Catalog
        publisher: grpc
      1
      Specify your custom Operator catalog image.
    2. Use the file to create the CatalogSource object:

      $ oc create -f catalogsource.yaml
  5. Verify the following resources are created successfully.

    1. Check the pods:

      $ oc get pods -n openshift-marketplace

      Example output

      NAME                                    READY   STATUS    RESTARTS  AGE
      my-operator-catalog-6njx6               1/1     Running   0         28s
      marketplace-operator-d9f549946-96sgr    1/1     Running   0         26h

    2. Check the catalog source:

      $ oc get catalogsource -n openshift-marketplace

      Example output

      NAME                  DISPLAY               TYPE PUBLISHER  AGE
      my-operator-catalog   My Operator Catalog   grpc            5s

    3. Check the package manifest:

      $ oc get packagemanifest -n openshift-marketplace

      Example output

      NAME    CATALOG              AGE
      etcd    My Operator Catalog  34s

You can now install the Operators from the OperatorHub page on your restricted network OpenShift Container Platform cluster web console.

2.2.2.3. Installing the Migration Toolkit for Containers on an OpenShift Container Platform 4.5 target cluster in a restricted environment

You can install the Migration Toolkit for Containers (MTC) on an OpenShift Container Platform 4.5 target cluster by using Operator Lifecycle Manager (OLM) to install the Migration Toolkit for Containers Operator.

MTC is installed on the target cluster by default.

Prerequisites

  • You have created a custom Operator catalog and pushed it to a mirror registry.
  • You have configured OLM to install the Migration Toolkit for Containers Operator from the mirror registry.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Use the Filter by keyword field to find the Migration Toolkit for Containers Operator.
  3. Select the Migration Toolkit for Containers Operator and click Install.
  4. Click Install.

    On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded.

  5. Click Migration Toolkit for Containers Operator.
  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  7. Click Create.
  8. Click Workloads → Pods to verify that the MTC pods are running.

2.2.2.4. Installing the Migration Toolkit for Containers on an OpenShift Container Platform 4.1 source cluster in a restricted environment

You can install the Migration Toolkit for Containers (MTC) on an OpenShift Container Platform 4 source cluster by using Operator Lifecycle Manager (OLM) to install the Migration Toolkit for Containers Operator.

Prerequisites

  • You have created a custom Operator catalog and pushed it to a mirror registry.
  • You have configured OLM to install the Migration Toolkit for Containers Operator from the mirror registry.

Procedure

  1. In the OpenShift Container Platform web console, click Catalog → OperatorHub.
  2. Use the Filter by keyword field to find the Migration Toolkit for Containers Operator.
  3. Select the Migration Toolkit for Containers Operator and click Install.
  4. Click Install.

    On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded.

  5. Click Migration Toolkit for Containers Operator.
  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  7. Click Create.
  8. Click Workloads → Pods to verify that the MTC pods are running.

2.2.3. Launching the MTC web console

You can launch the Migration Toolkit for Containers (MTC) web console in a browser.

Procedure

  1. Log in to the OpenShift Container Platform cluster on which you have installed MTC.
  2. Obtain the MTC web console URL by entering the following command:

    $ oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'

    The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com.

  3. Launch a browser and navigate to the MTC web console.

    Note

    If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry.

  4. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. The web page guides you through the process of accepting the remaining certificates.
  5. Log in with your OpenShift Container Platform username and password.

2.3. Upgrading the Migration Toolkit for Containers

You can upgrade the Migration Toolkit for Containers (MTC) by upgrading the Migration Toolkit for Containers Operator.

If you are upgrading MTC 1.3, you must perform an additional procedure to update the MigPlan custom resource (CR).

2.3.1. Upgrading the Migration Toolkit for Containers on an OpenShift Container Platform 4 cluster

You can upgrade the Migration Toolkit for Containers (MTC) on an OpenShift Container Platform 4 cluster using the OpenShift Container Platform console.

If you selected the Automatic approval option when you installed the Migration Toolkit for Containers Operator, the Operator is updated automatically.

The following procedure enables you to change the Manual approval option to Automatic or to change the release channel.

Procedure

  1. In the OpenShift Container Platform console, navigate to Operators → Installed Operators.
  2. Click Migration Toolkit for Containers Operator.
  3. In the Subscription tab, change the Approval option to Automatic.
  4. Optional: Edit the Channel.

    Updating the subscription deploys the updated Migration Toolkit for Containers Operator and updates the MTC components.

  5. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  6. If you are upgrading MTC on a 4.1 source cluster, update the migration_controller and migration_ui parameters and add the deprecated_cors_configuration parameter to the migration_controller manifest:

    spec:
    ...
      migration_controller: false
      migration_ui: false
      deprecated_cors_configuration: true
    Note

    You do not need to update the manifest of the target cluster.

  7. Click Create.
  8. Click Workloads → Pods to verify that the MTC pods are running.

2.3.2. Upgrading MTC 1.3

If you are upgrading Migration Toolkit for Containers (MTC) version 1.3.x, you must manually update the indirectImageMigration and indirectVolumeMigration parameters in the MigPlan custom resource (CR).

Because the indirectImageMigration and indirectVolumeMigration parameters do not exist in version 1.3, their default value in version 1.4 is false, which means that direct image migration and direct volume migration are enabled. Because the direct migration requirements are not fulfilled, the migration plan cannot reach a Ready state unless these parameter values are changed to true.

Prerequisites

  • You must have upgraded MTC from version 1.3.x to 1.4.
  • You must have cluster-admin privileges.

Procedure

  1. Log in to the target cluster.
  2. Get the MigPlan CR and save it as a file:

    $ oc get migplan <migplan> -o yaml -n openshift-migration > <migplan>.yaml
  3. Change the following parameter values to true and save the file:

    ...
    spec:
      indirectImageMigration: true
      indirectVolumeMigration: true
  4. Apply the changes:

    $ oc replace -f <migplan>.yaml -n openshift-migration
  5. Verify the changes by viewing the updated MigPlan CR:

    $ oc get migplan <migplan> -o yaml -n openshift-migration

2.4. Configuring a replication repository

You must configure an object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster.

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.

The following storage providers are supported:

  • Multi-Cloud Object Gateway (MCG)
  • Amazon Web Services (AWS) S3
  • Google Cloud Provider (GCP)
  • Microsoft Azure Blob storage
  • Generic S3 object storage, for example, Minio or Ceph S3

The source and target clusters must have network access to the replication repository during migration.

In a restricted environment, you can create an internally hosted replication repository. If you use a proxy server, you must configure it to allow traffic between the replication repository and the clusters.
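
For example, on OpenShift Container Platform 4 clusters that use a cluster-wide proxy, one way to exempt an internally hosted repository from the proxy is to add its host name to the noProxy field of the cluster Proxy object (oc edit proxy cluster). This is a sketch under that assumption, not a required step for every environment; the proxy and repository host names are placeholders:

apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<proxy_host>:<port>
  httpsProxy: http://<proxy_host>:<port>
  noProxy: .cluster.local,.svc,<replication_repository_host>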

2.4.1. Configuring a Multi-Cloud Object Gateway storage bucket as a replication repository

You can install the OpenShift Container Storage Operator and configure a Multi-Cloud Object Gateway (MCG) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).

2.4.1.1. Installing the OpenShift Container Storage Operator

You can install the OpenShift Container Storage Operator from OperatorHub.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Use Filter by keyword (in this case, OCS) to find the OpenShift Container Storage Operator.
  3. Select the OpenShift Container Storage Operator and click Install.
  4. Select an Update Channel, Installation Mode, and Approval Strategy.
  5. Click Install.

    On the Installed Operators page, the OpenShift Container Storage Operator appears in the openshift-storage project with the status Succeeded.

2.4.1.2. Creating the Multi-Cloud Object Gateway storage bucket

You can create the Multi-Cloud Object Gateway (MCG) storage bucket’s custom resources (CRs).

Procedure

  1. Log in to the OpenShift Container Platform cluster:

    $ oc login
  2. Create the NooBaa CR configuration file, noobaa.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: NooBaa
    metadata:
      name: noobaa
      namespace: openshift-storage
    spec:
      dbResources:
        requests:
          cpu: 0.5 1
          memory: 1Gi
      coreResources:
        requests:
          cpu: 0.5 2
          memory: 1Gi
    1 2
    For a very small cluster, you can change the cpu value to 0.1.
  3. Create the NooBaa object:

    $ oc create -f noobaa.yml
  4. Create the BackingStore CR configuration file, bs.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: BackingStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: mcg-pv-pool-bs
      namespace: openshift-storage
    spec:
      pvPool:
        numVolumes: 3 1
        resources:
          requests:
            storage: 50Gi 2
        storageClass: gp2 3
      type: pv-pool
    1
    Specify the number of volumes in the persistent volume pool.
    2
    Specify the size of the volumes.
    3
    Specify the storage class.
  5. Create the BackingStore object:

    $ oc create -f bs.yml
  6. Create the BucketClass CR configuration file, bc.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: BucketClass
    metadata:
      labels:
        app: noobaa
      name: mcg-pv-pool-bc
      namespace: openshift-storage
    spec:
      placementPolicy:
        tiers:
        - backingStores:
          - mcg-pv-pool-bs
          placement: Spread
  7. Create the BucketClass object:

    $ oc create -f bc.yml
  8. Create the ObjectBucketClaim CR configuration file, obc.yml, with the following content:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: migstorage
      namespace: openshift-storage
    spec:
      bucketName: migstorage 1
      storageClassName: openshift-storage.noobaa.io
      additionalConfig:
        bucketclass: mcg-pv-pool-bc
    1
    Record the bucket name for adding the replication repository to the MTC web console.
  9. Create the ObjectBucketClaim object:

    $ oc create -f obc.yml
  10. Watch the resource creation process to verify that the ObjectBucketClaim status is Bound:

    $ watch -n 30 'oc get -n openshift-storage objectbucketclaim migstorage -o yaml'

    This process can take five to ten minutes.

  11. Obtain and record the following values, which are required when you add the replication repository to the MTC web console:

    • S3 endpoint:

      $ oc get route -n openshift-storage s3
    • S3 provider access key:

      $ oc get secret -n openshift-storage migstorage -o go-template='{{ .data.AWS_ACCESS_KEY_ID }}' | base64 -d
    • S3 provider secret access key:

      $ oc get secret -n openshift-storage migstorage -o go-template='{{ .data.AWS_SECRET_ACCESS_KEY }}' | base64 -d

2.4.2. Configuring an AWS S3 storage bucket as a replication repository

You can configure an AWS S3 storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).

Prerequisites

  • The AWS S3 storage bucket must be accessible to the source and target clusters.
  • You must have the AWS CLI installed.
  • If you are using the snapshot copy method:

    • You must have access to EC2 Elastic Block Storage (EBS).
    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Create an AWS S3 bucket:

    $ aws s3api create-bucket \
        --bucket <bucket_name> \ 1
        --region <bucket_region> 2
    1
    Specify your S3 bucket name.
    2
    Specify your S3 bucket region, for example, us-east-1.
  2. Create the IAM user velero:

    $ aws iam create-user --user-name velero
  3. Create an EC2 EBS snapshot policy:

    $ cat > velero-ec2-snapshot-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeVolumes",
                    "ec2:DescribeSnapshots",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:CreateSnapshot",
                    "ec2:DeleteSnapshot"
                ],
                "Resource": "*"
            }
        ]
    }
    EOF
  4. Create an AWS S3 access policy for one or for all S3 buckets:

    $ cat > velero-s3-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:PutObject",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts"
                ],
                "Resource": [
                    "arn:aws:s3:::<bucket_name>/*" 1
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation",
                    "s3:ListBucketMultipartUploads"
                ],
                "Resource": [
                    "arn:aws:s3:::<bucket_name>" 2
                ]
            }
        ]
    }
    EOF
    1 2
    To grant access to a single S3 bucket, specify the bucket name. To grant access to all AWS S3 buckets, specify * instead of a bucket name as in the following example:

    Example output

    "Resource": [
        "arn:aws:s3:::*"

  5. Attach the EC2 EBS policy to velero:

    $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero-ebs \
      --policy-document file://velero-ec2-snapshot-policy.json
  6. Attach the AWS S3 policy to velero:

    $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero-s3 \
      --policy-document file://velero-s3-policy.json
  7. Create an access key for velero:

    $ aws iam create-access-key --user-name velero
    {
      "AccessKey": {
            "UserName": "velero",
            "Status": "Active",
            "CreateDate": "2017-07-31T22:24:41.576Z",
            "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, 1
            "AccessKeyId": <AWS_ACCESS_KEY_ID> 2
        }
    }
    1 2
    Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID for adding the AWS repository to the MTC web console.

2.4.3. Configuring a Google Cloud Provider storage bucket as a replication repository

You can configure a Google Cloud Provider (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).

Prerequisites

  • The GCP storage bucket must be accessible to the source and target clusters.
  • You must have gsutil installed.
  • If you are using the snapshot copy method:

    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Log in to gsutil:

    $ gsutil init

    Example output

    Welcome! This command will take you through the configuration of gcloud.
    
    Your current configuration has been set to: [default]
    
    To continue, you must login. Would you like to login (Y/n)?

  2. Set the BUCKET variable:

    $ BUCKET=<bucket_name> 1
    1
    Specify your bucket name.
  3. Create a storage bucket:

    $ gsutil mb gs://$BUCKET/
  4. Set the PROJECT_ID variable to your active project:

    $ PROJECT_ID=$(gcloud config get-value project)
  5. Create a velero IAM service account:

    $ gcloud iam service-accounts create velero \
        --display-name "Velero Storage"
  6. Create the SERVICE_ACCOUNT_EMAIL variable:

    $ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
      --filter="displayName:Velero Storage" \
      --format 'value(email)')
  7. Create the ROLE_PERMISSIONS variable:

    $ ROLE_PERMISSIONS=(
        compute.disks.get
        compute.disks.create
        compute.disks.createSnapshot
        compute.snapshots.get
        compute.snapshots.create
        compute.snapshots.useReadOnly
        compute.snapshots.delete
        compute.zones.get
    )
  8. Create the velero.server custom role:

    $ gcloud iam roles create velero.server \
        --project $PROJECT_ID \
        --title "Velero Server" \
        --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
  9. Add IAM policy binding to the project:

    $ gcloud projects add-iam-policy-binding $PROJECT_ID \
        --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
        --role projects/$PROJECT_ID/roles/velero.server
  10. Update the IAM service account:

    $ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
  11. Save the IAM service account keys to the credentials-velero file in the current directory:

    $ gcloud iam service-accounts keys create credentials-velero \
      --iam-account $SERVICE_ACCOUNT_EMAIL

2.4.4. Configuring a Microsoft Azure Blob storage container as a replication repository

You can configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC).

Prerequisites

  • You must have an Azure storage account.
  • You must have the Azure CLI installed.
  • The Azure Blob storage container must be accessible to the source and target clusters.
  • If you are using the snapshot copy method:

    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Set the AZURE_RESOURCE_GROUP variable:

    $ AZURE_RESOURCE_GROUP=Velero_Backups
  2. Create an Azure resource group:

    $ az group create -n $AZURE_RESOURCE_GROUP --location <CentralUS> 1
    1
    Specify your location.
  3. Set the AZURE_STORAGE_ACCOUNT_ID variable:

    $ AZURE_STORAGE_ACCOUNT_ID=velerobackups
  4. Create an Azure storage account:

    $ az storage account create \
      --name $AZURE_STORAGE_ACCOUNT_ID \
      --resource-group $AZURE_RESOURCE_GROUP \
      --sku Standard_GRS \
      --encryption-services blob \
      --https-only true \
      --kind BlobStorage \
      --access-tier Hot
  5. Set the BLOB_CONTAINER variable:

    $ BLOB_CONTAINER=velero
  6. Create an Azure Blob storage container:

    $ az storage container create \
      -n $BLOB_CONTAINER \
      --public-access off \
      --account-name $AZURE_STORAGE_ACCOUNT_ID
  7. Create a service principal and credentials for velero:

    $ AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` \
      AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` \
      AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" --role "Contributor" --query 'password' -o tsv` \
      AZURE_CLIENT_ID=`az ad sp list --display-name "velero" --query '[0].appId' -o tsv`
  8. Save the service principal credentials in the credentials-velero file:

    $ cat << EOF  > ./credentials-velero
    AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
    AZURE_TENANT_ID=${AZURE_TENANT_ID}
    AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
    AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
    AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
    AZURE_CLOUD_NAME=AzurePublicCloud
    EOF

2.5. Migrating your applications

You must add your clusters and a replication repository to the MTC web console. Then, you can create and run a migration plan.

If your cluster or replication repository are secured with self-signed certificates, you can create a CA certificate bundle file or disable SSL verification.

2.5.1. Creating a CA certificate bundle file

If you use a self-signed certificate to secure a cluster or a replication repository for the Migration Toolkit for Containers (MTC), certificate verification might fail with the following error message: Certificate signed by unknown authority.

You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository.

Procedure

Download a CA certificate from a remote endpoint and save it as a CA bundle file:

$ echo -n | openssl s_client -connect <host_FQDN>:<port> \ 1
  | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2
1
Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443.
2
Specify the name of the CA bundle file.

2.5.2. Configuring a migration plan

2.5.2.1. Increasing limits for large migrations

You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC).

Important

You must test these changes before you perform a migration in a production environment.

Procedure

  1. Edit the MigrationController CR manifest:

    $ oc edit migrationcontroller -n openshift-migration
  2. Update the following parameters:

    ...
    mig_controller_limits_cpu: "1" 1
    mig_controller_limits_memory: "10Gi" 2
    ...
    mig_controller_requests_cpu: "100m" 3
    mig_controller_requests_memory: "350Mi" 4
    ...
    mig_pv_limit: 100 5
    mig_pod_limit: 100 6
    mig_namespace_limit: 10 7
    ...
    1
    Specifies the number of CPUs available to the MigrationController CR.
    2
    Specifies the amount of memory available to the MigrationController CR.
    3
    Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3).
    4
    Specifies the amount of memory available for MigrationController CR requests.
    5
    Specifies the number of persistent volumes that can be migrated.
    6
    Specifies the number of pods that can be migrated.
    7
    Specifies the number of namespaces that can be migrated.
  3. Create a migration plan that uses the updated parameters to verify the changes.

    If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan.

2.5.2.2. Excluding resources from a migration plan

You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan in order to reduce the load or to migrate images or PVs with a different tool.

Procedure

  1. Edit the MigrationController CR manifest:

    $ oc edit migrationcontroller -n openshift-migration
  2. Update the spec section by adding a parameter to exclude specific resources or by adding a resource to the excluded_resources parameter if it does not have its own exclusion parameter:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: migration-controller
      namespace: openshift-migration
    spec:
      disable_image_migration: true 1
      disable_pv_migration: true 2
      ...
      excluded_resources: 3
      - imagetags
      - templateinstances
      - clusterserviceversions
      - packagemanifests
      - subscriptions
      - servicebrokers
      - servicebindings
      - serviceclasses
      - serviceinstances
      - serviceplans
    1
    Add disable_image_migration: true to exclude image streams from the migration. Do not edit the excluded_resources parameter. imagestreams is added to excluded_resources when the MigrationController pod restarts.
    2
    Add disable_pv_migration: true to exclude PVs from the migration plan. Do not edit the excluded_resources parameter. persistentvolumes and persistentvolumeclaims are added to excluded_resources when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan.
    3
    You can add OpenShift Container Platform resources to the excluded_resources list. Do not delete any of the default excluded resources. These resources are known to be problematic for migration.
  3. Wait two minutes for the MigrationController pod to restart so that the changes are applied.
  4. Verify that the resource is excluded:

    $ oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1

    The output contains the excluded resources:

    Example output

        - name: EXCLUDED_RESOURCES
          value:
          imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims

2.5.3. Adding a cluster to the Migration Toolkit for Containers web console

You can add a cluster to the Migration Toolkit for Containers (MTC) web console.

Prerequisites

If you are using Azure snapshots to copy data:

  • You must provide the Azure resource group name when you add the source cluster.
  • The source and target clusters must be in the same Azure resource group and in the same location.

Procedure

  1. Log in to the cluster.
  2. Obtain the service account token:

    $ oc sa get-token migration-controller -n openshift-migration

    Example output

    eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ

  3. In the MTC web console, click Clusters.
  4. Click Add cluster.
  5. Fill in the following fields:

    • Cluster name: May contain lower-case letters (a-z) and numbers (0-9). Must not contain spaces or international characters.
    • Url: URL of the cluster’s API server, for example, https://<master1.example.com>:8443.
    • Service account token: String that you obtained from the source cluster.
    • Exposed route to image registry: Optional. You can specify a route to the image registry of your source cluster to enable direct migration for images, for example, docker-registry-default.apps.cluster.com.

      Direct migration is much faster than migration with a replication repository.

    • Azure cluster: Optional. Select it if you are using Azure snapshots to copy your data.
    • Azure resource group: This field appears if Azure cluster is checked.
    • If you use a custom CA bundle, click Browse and browse to the CA bundle file.
  6. Click Add cluster.

    The cluster appears in the Clusters list.
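
Adding a cluster in the web console creates a corresponding MigCluster custom resource on the cluster that runs the MTC controller. The following is a rough sketch, not a required manual step; the field names are based on the MTC API, and the cluster name, URL, secret reference, and registry route are placeholders:

apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: <source_cluster>
  namespace: openshift-migration
spec:
  isHostCluster: false                      # true only for the cluster that runs the MTC controller
  url: https://<master1.example.com>:8443   # API server URL of the source cluster
  serviceAccountSecretRef:                  # secret that contains the service account token
    name: <sa_token_secret>
    namespace: openshift-config
  exposedRegistryPath: <registry_route>     # optional, enables direct image migration
  insecure: false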

2.5.4. Adding a replication repository to the MTC web console

You can add an object storage bucket as a replication repository to the Migration Toolkit for Containers (MTC) web console.

Prerequisites

  • You must configure an object storage bucket for migrating the data.

Procedure

  1. In the MTC web console, click Replication repositories.
  2. Click Add repository.
  3. Select a Storage provider type and fill in the following fields:

    • AWS for AWS S3, MCG, and generic S3 providers:

      • Replication repository name: Specify the replication repository name in the MTC web console.
      • S3 bucket name: Specify the name of the S3 bucket you created.
      • S3 bucket region: Specify the S3 bucket region. Required for AWS S3. Optional for other S3 providers.
      • S3 endpoint: Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com>. Required for a generic S3 provider. You must use the https:// prefix.
      • S3 provider access key: Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG.
      • S3 provider secret access key: Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG.
      • Require SSL verification: Clear this check box if you are using a generic S3 provider.
      • If you use a custom CA bundle, click Browse and browse to the Base64-encoded CA bundle file.
    • GCP:

      • Replication repository name: Specify the replication repository name in the MTC web console.
      • GCP bucket name: Specify the name of the GCP bucket.
      • GCP credential JSON blob: Specify the string in the credentials-velero file.
    • Azure:

      • Replication repository name: Specify the replication repository name in the MTC web console.
      • Azure resource group: Specify the resource group of the Azure Blob storage.
      • Azure storage account name: Specify the Azure Blob storage account name.
      • Azure credentials - INI file contents: Specify the string in the credentials-velero file.
  4. Click Add repository and wait for connection validation.
  5. Click Close.

    The new repository appears in the Replication repositories list.
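
In the background, the repository you add corresponds to a MigStorage custom resource on the cluster that runs the MTC controller. The following is a rough, non-authoritative sketch for an AWS S3 style repository; the field names follow the MTC API, and the repository, bucket, region, and secret names are placeholders:

apiVersion: migration.openshift.io/v1alpha1
kind: MigStorage
metadata:
  name: <replication_repository>
  namespace: openshift-migration
spec:
  backupStorageProvider: aws
  backupStorageConfig:
    awsBucketName: <bucket_name>
    awsRegion: <bucket_region>
    credsSecretRef:               # secret that holds the S3 access key and secret access key
      name: <storage_secret>
      namespace: openshift-config
  volumeSnapshotProvider: aws
  volumeSnapshotConfig:
    credsSecretRef:
      name: <storage_secret>
      namespace: openshift-config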

2.5.5. Creating a migration plan in the MTC web console

You can create a migration plan in the Migration Toolkit for Containers (MTC) web console.

Prerequisites

  • The source and target clusters must have network access to each other and to the replication repository.
  • If you use snapshots to copy data, the source and target clusters must run on the same cloud provider (AWS, GCP, or Azure) and be located in the same region.

Procedure

  1. In the MTC web console, click Migration plans.
  2. Click Add migration plan to launch the Migration Plan wizard.
  3. In the General screen, enter the Plan name.
  4. Select a source cluster, a target cluster, and a replication repository and then click Next.
  5. In the Namespaces screen, select the projects to be migrated and then click Next.
  6. In the Persistent volumes screen, click a Migration type for each PV:

    • The Copy option copies the data from the PV of a source cluster to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster.

      If you specified a route to an image registry when you added the source cluster to the web console, you can migrate images directly from the source cluster to the target cluster.

    • The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
  7. Click Next.
  8. In the Copy options screen, click a Copy method for each PV:

    • The Snapshot copy option backs up and restores the disk using the cloud provider’s snapshot functionality. Copying snapshots is faster than copying the file system.

      Note

      The storage and clusters must be in the same region and the storage classes must be compatible.

    • The Filesystem copy option backs up the files on the source cluster and restores them on the target cluster.
  9. Select Verify copy if you want to verify data migrated with Filesystem copy. Data is verified by generating a checksum for each source file and checking the checksum after restoration. This option significantly reduces performance.
  10. Select a Target storage class for each PV.

    You can change the storage class of data migrated with Filesystem copy.

  11. Click Next.
  12. In the Migration options screen, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy.

    The direct migration options copy images and files directly from the source cluster to the target cluster. This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster.

  13. Click Next.
  14. In the Hooks screen, click Add Hook to add a hook to the migration plan.
  15. Enter the hook name.
  16. If your hook is an Ansible playbook, click Browse to upload the playbook and update the Ansible runtime image field if you are using a custom Ansible image.
  17. If your hook is not an Ansible playbook, click Custom container image and specify the image name and path.
  18. Click Source cluster or Target cluster on which the hook should run.
  19. Enter the Service account name and the Service account namespace of the cluster.
  20. Select the migration step when the hook should run:

    • PreBackup: Before backup tasks are started on the source cluster
    • PostBackup: After backup tasks are complete on the source cluster
    • PreRestore: Before restore tasks are started on the target cluster
    • PostRestore: After restore tasks are complete on the target cluster
  21. Click Add Hook and then click Close.

    You can add up to four hooks to a single migration plan. Each hook runs during a different migration step.

  22. Click Finish and then click Close.

    The migration plan is displayed in the Migration plans list.
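
    You can also confirm the plan state from the CLI. The following is a minimal check, assuming the plan was created in the default openshift-migration namespace and that <plan_name> is the name you entered in the wizard:

    $ oc describe migplan <plan_name> -n openshift-migration

    A plan that is ready to run typically reports a Ready condition in its status.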

2.5.6. Running a migration plan in the MTC web console

You can stage or migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console.

Procedure

  1. Log in to the source cluster.
  2. Delete old images:

    $ oc adm prune images
  3. Log in to the MTC web console and click Migration plans.
  4. Click the Options menu kebab next to a migration plan and select Stage to copy data from the source cluster to the target cluster without stopping the application.

    You can run Stage multiple times to reduce the actual migration time.

  5. When you are ready to migrate the application workload, click the Options menu kebab beside a migration plan and select Migrate.
  6. Optional: In the Migrate window, you can select Do not stop applications on the source cluster during migration.
  7. Click Migrate.
  8. When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console:

    1. Click Home → Projects.
    2. Click the migrated project to view its status.
    3. In the Routes section, click Location to verify that the application is functioning, if applicable.
    4. Click Workloads → Pods to verify that the pods are running in the migrated namespace.
    5. Click Storage → Persistent volumes to verify that the migrated persistent volume is correctly provisioned.
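
    You can also spot-check the migrated resources from the CLI. The following is a sketch, where <namespace> is one of the migrated namespaces on the target cluster:

    $ oc get pods -n <namespace>
    $ oc get pvc -n <namespace>
    $ oc get routes -n <namespace>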

2.6. Troubleshooting

You can view the Migration Toolkit for Containers (MTC) custom resources and download logs to troubleshoot a failed migration.

If the application was stopped during the failed migration, you must roll back the migration in order to prevent data corruption.

Note

Manual rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster.

2.6.1. Viewing migration Custom Resources

The Migration Toolkit for Containers (MTC) creates the following custom resources (CRs):

migration architecture diagram

MigCluster (configuration, MTC cluster): Cluster definition

MigStorage (configuration, MTC cluster): Storage definition

MigPlan (configuration, MTC cluster): Migration plan

The MigPlan CR describes the source and target clusters, replication repository, and namespaces being migrated. It is associated with 0, 1, or many MigMigration CRs.

Note

Deleting a MigPlan CR deletes the associated MigMigration CRs.

BackupStorageLocation (configuration, MTC cluster): Location of Velero backup objects

VolumeSnapshotLocation (configuration, MTC cluster): Location of Velero volume snapshots

MigMigration (action, MTC cluster): Migration, created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR.

Backup (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster:

  • Backup CR #1 for Kubernetes objects
  • Backup CR #2 for PV data

Restore (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster:

  • Restore CR #1 (using Backup CR #2) for PV data
  • Restore CR #2 (using Backup CR #1) for Kubernetes objects
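
You can list the Velero Backup and Restore CRs directly from the CLI. The following is a minimal sketch: the Backup CRs are created on the source cluster and the Restore CRs on the target cluster, both in the openshift-migration namespace, as shown in the example outputs later in this section.

$ oc get backups.velero.io -n openshift-migration
$ oc get restores.velero.io -n openshift-migration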

Procedure

  1. List the MigMigration CRs in the openshift-migration namespace:

    $ oc get migmigration -n openshift-migration

    Example output

    NAME                                   AGE
    88435fe0-c9f8-11e9-85e6-5d593ce65e10   6m42s

  2. Inspect the MigMigration CR:

    $ oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration

    The output is similar to the following examples.

MigMigration example output

name:         88435fe0-c9f8-11e9-85e6-5d593ce65e10
namespace:    openshift-migration
labels:       <none>
annotations:  touch: 3b48b543-b53e-4e44-9d34-33563f0f8147
apiVersion:  migration.openshift.io/v1alpha1
kind:         MigMigration
metadata:
  creationTimestamp:  2019-08-29T01:01:29Z
  generation:          20
  resourceVersion:    88179
  selfLink:           /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10
  uid:                 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
spec:
  migPlanRef:
    name:        socks-shop-mig-plan
    namespace:   openshift-migration
  quiescePods:  true
  stage:         false
status:
  conditions:
    category:              Advisory
    durable:               True
    lastTransitionTime:  2019-08-29T01:03:40Z
    message:               The migration has completed successfully.
    reason:                Completed
    status:                True
    type:                  Succeeded
  phase:                   Completed
  startTimestamp:         2019-08-29T01:01:29Z
events:                    <none>

Velero backup CR #2 example output that describes the PV data

apiVersion: velero.io/v1
kind: Backup
metadata:
  annotations:
    openshift.io/migrate-copy-phase: final
    openshift.io/migrate-quiesce-pods: "true"
    openshift.io/migration-registry: 172.30.105.179:5000
    openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6
  creationTimestamp: "2019-08-29T01:03:15Z"
  generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-
  generation: 1
  labels:
    app.kubernetes.io/part-of: migration
    migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
    migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
    velero.io/storage-location: myrepo-vpzq9
  name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7
  namespace: openshift-migration
  resourceVersion: "87313"
  selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7
  uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6
spec:
  excludedNamespaces: []
  excludedResources: []
  hooks:
    resources: []
  includeClusterResources: null
  includedNamespaces:
  - sock-shop
  includedResources:
  - persistentvolumes
  - persistentvolumeclaims
  - namespaces
  - imagestreams
  - imagestreamtags
  - secrets
  - configmaps
  - pods
  labelSelector:
    matchLabels:
      migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
  storageLocation: myrepo-vpzq9
  ttl: 720h0m0s
  volumeSnapshotLocations:
  - myrepo-wv6fx
status:
  completionTimestamp: "2019-08-29T01:02:36Z"
  errors: 0
  expiration: "2019-09-28T01:02:35Z"
  phase: Completed
  startTimestamp: "2019-08-29T01:02:35Z"
  validationErrors: null
  version: 1
  volumeSnapshotsAttempted: 0
  volumeSnapshotsCompleted: 0
  warnings: 0

Velero restore CR #2 example output that describes the Kubernetes resources

apiVersion: velero.io/v1
kind: Restore
metadata:
  annotations:
    openshift.io/migrate-copy-phase: final
    openshift.io/migrate-quiesce-pods: "true"
    openshift.io/migration-registry: 172.30.90.187:5000
    openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88
  creationTimestamp: "2019-08-28T00:09:49Z"
  generateName: e13a1b60-c927-11e9-9555-d129df7f3b96-
  generation: 3
  labels:
    app.kubernetes.io/part-of: migration
    migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88
    migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88
  name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx
  namespace: openshift-migration
  resourceVersion: "82329"
  selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx
  uid: 26983ec0-c928-11e9-825a-06fa9fb68c88
spec:
  backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f
  excludedNamespaces: null
  excludedResources:
  - nodes
  - events
  - events.events.k8s.io
  - backups.velero.io
  - restores.velero.io
  - resticrepositories.velero.io
  includedNamespaces: null
  includedResources: null
  namespaceMapping: null
  restorePVs: true
status:
  errors: 0
  failureReason: ""
  phase: Completed
  validationErrors: null
  warnings: 15

2.6.2. Using the migration log reader

You can use the migration log reader to display a single filtered view of all the migration logs.

Procedure

  1. Get the mig-log-reader pod:

    $ oc -n openshift-migration get pods | grep log
  2. Enter the following command to display a single migration log:

    $ oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1
    1
    The -c plain option displays the log without colors.
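
    For example, to show only error lines from the combined log, the following sketch reuses the pod name from step 1:

    $ oc -n openshift-migration logs <mig-log-reader-pod> -c plain | grep -i error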

2.6.3. Downloading migration logs

You can download the Velero, Restic, and MigrationController pod logs in the Migration Toolkit for Containers (MTC) web console to troubleshoot a failed migration.

Procedure

  1. In the MTC console, click Migration plans to view the list of migration plans.
  2. Click the Options menu kebab of a specific migration plan and select Logs.
  3. Click Download Logs to download the logs of the MigrationController, Velero, and Restic pods for all clusters.

    You can download a single log by selecting the cluster, log source, and pod source, and then clicking Download Selected.

    You can access a pod log from the CLI by using the oc logs command:

    $ oc logs <pod-name> -f -n openshift-migration 1
    1
    Specify the pod name.
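
    If you do not know the pod name, you can list the candidate pods first. The following is a sketch, assuming the default pod naming:

    $ oc get pods -n openshift-migration | grep -E 'velero|restic|migration-controller'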

2.6.4. Error messages and resolutions

This section describes common error messages you might encounter with the Migration Toolkit for Containers (MTC) and how to resolve their underlying causes.

2.6.4.1. CA certificate error displayed when accessing the MTC console for the first time

If a CA certificate error message is displayed the first time you try to access the MTC console, the likely cause is the use of self-signed CA certificates in one of the clusters.

To resolve this issue, navigate to the oauth-authorization-server URL displayed in the error message and accept the certificate. To resolve this issue permanently, add the certificate to the trust store of your web browser.

If an Unauthorized message is displayed after you have accepted the certificate, navigate to the MTC console and refresh the web page.

2.6.4.2. OAuth timeout error in the MTC console

If a connection has timed out message is displayed in the MTC console after you have accepted a self-signed certificate, you can determine the cause of the timeout by performing the following procedure.

Procedure

  1. Navigate to the MTC console and inspect the elements with the browser web inspector.
  2. Check the MigrationUI pod log:

    $ oc logs <MigrationUI_Pod> -n openshift-migration
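
    If you do not know the pod name, the following sketch lists candidates, assuming the console pod name contains ui:

    $ oc get pods -n openshift-migration | grep ui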

2.6.4.3. PodVolumeBackups timeout error in Velero pod log

If a migration fails because Restic times out, the following error is displayed in the Velero pod log.

Example output

level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1

The default value of restic_timeout is one hour. You can increase this parameter for large migrations, keeping in mind that a higher value may delay the return of error messages.

Procedure

  1. In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
  2. Click Migration Toolkit for Containers Operator.
  3. In the MigrationController tab, click migration-controller.
  4. In the YAML tab, update the following parameter value:

    spec:
      restic_timeout: 1h 1
    1
    Valid units are h (hours), m (minutes), and s (seconds), for example, 3h30m15s.
  5. Click Save.
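
Alternatively, you can update the parameter from the CLI. The following is a sketch, assuming the default MigrationController CR name migration-controller in the openshift-migration namespace:

$ oc patch migrationcontroller migration-controller -n openshift-migration --type merge -p '{"spec":{"restic_timeout":"3h"}}'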

2.6.4.4. ResticVerifyErrors in the MigMigration custom resource

If data verification fails when migrating a persistent volume with the file system data copy method, the following error is displayed in the MigMigration CR.

Example output

status:
  conditions:
  - category: Warn
    durable: true
    lastTransitionTime: 2020-04-16T20:35:16Z
    message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>`
      for details 1
    status: "True"
    type: ResticVerifyErrors 2

1
The error message identifies the Restore CR name.
2
ResticVerifyErrors is a general error warning type that includes verification errors.
Note

A data verification error does not cause the migration process to fail.

You can check the Restore CR to identify the source of the data verification error.

Procedure

  1. Log in to the target cluster.
  2. View the Restore CR:

    $ oc describe restore <registry-example-migration-rvwcm> -n openshift-migration

    The output identifies the persistent volume with PodVolumeRestore errors.

    Example output

    status:
      phase: Completed
      podVolumeRestoreErrors:
      - kind: PodVolumeRestore
        name: <registry-example-migration-rvwcm-98t49>
        namespace: openshift-migration
      podVolumeRestoreResticErrors:
      - kind: PodVolumeRestore
        name: <registry-example-migration-rvwcm-98t49>
        namespace: openshift-migration

  3. View the PodVolumeRestore CR:

    $ oc describe podvolumerestore <registry-example-migration-rvwcm-98t49> -n openshift-migration

    The output identifies the Restic pod that logged the errors.

    Example output

      completionTimestamp: 2020-05-01T20:49:12Z
      errors: 1
      resticErrors: 1
      ...
      resticPod: <restic-nr2v5>

  4. View the Restic pod log to locate the errors:

    $ oc logs -f <restic-nr2v5>
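
    To narrow the output to the verification failures, the following is a sketch, assuming the Restic pods run in the openshift-migration namespace:

    $ oc logs <restic-nr2v5> -n openshift-migration | grep -i error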

2.6.5. Direct volume migration does not complete

If direct volume migration does not complete, the target cluster might not have the same node-selector annotations as the source cluster.

Migration Toolkit for Containers (MTC) migrates namespaces with all annotations in order to preserve security context constraints and scheduling requirements. During direct volume migration, MTC creates Rsync transfer pods on the target cluster in the namespaces that were migrated from the source cluster. If a target cluster namespace does not have the same annotations as the source cluster namespace, the Rsync transfer pods cannot be scheduled. The Rsync pods remain in a Pending state.
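
You can usually spot the stuck transfer pods directly on the target cluster. The following is a sketch, where <namespace> is one of the migrated namespaces, assuming the transfer pod names contain rsync:

$ oc get pods -n <namespace> | grep -i rsync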

You can identify and fix this issue by performing the following procedure.

Procedure

  1. Check the status of the MigMigration CR:

    $ oc describe migmigration <migmigration_name> -n openshift-migration

    The output includes the following status message:

    Example output

    ...
    Some or all transfer pods are not running for more than 10 mins on destination cluster
    ...

  2. On the source cluster, obtain the details of a migrated namespace:

    $ oc get namespace <namespace> -o yaml 1
    1
    Specify the migrated namespace.
  3. On the target cluster, edit the migrated namespace:

    $ oc edit namespace <namespace>
  4. Add missing openshift.io/node-selector annotations to the migrated namespace as in the following example:

    apiVersion: v1
    kind: Namespace
    metadata:
      annotations:
        openshift.io/node-selector: "region=east"
    ...
  5. Run the migration plan again.

2.6.6. Using must-gather to collect data

You must run the must-gather tool if you open a customer support case on the Red Hat Customer Portal for the Migration Toolkit for Containers (MTC).

The openshift-migration-must-gather-rhel8 image for MTC collects migration-specific logs and data that are not collected by the default must-gather image.

Procedure

  1. Navigate to the directory where you want to store the must-gather data.
  2. Run the must-gather command:

    $ oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.4.0
  3. Remove authentication keys and other sensitive information.
  4. Create an archive file containing the contents of the must-gather data directory:

    $ tar cvaf must-gather.tar.gz must-gather.local.<uid>/
  5. Upload the compressed file as an attachment to your customer support case.

2.6.7. Rolling back a migration

You can roll back a migration by using the MTC web console or the CLI.

2.6.7.1. Rolling back a migration in the MTC web console

You can roll back a migration by using the Migration Toolkit for Containers (MTC) web console.

If your application was stopped during a failed migration, you must roll back the migration in order to prevent data corruption in the persistent volume.

Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster.

Procedure

  1. In the MTC web console, click Migration plans.
  2. Click the Options menu kebab beside a migration plan and select Rollback.
  3. Click Rollback and wait for rollback to complete.

    In the migration plan details, Rollback succeeded is displayed.

  4. Verify that rollback was successful in the OpenShift Container Platform web console of the source cluster:

    1. Click Home → Projects.
    2. Click the migrated project to view its status.
    3. In the Routes section, click Location to verify that the application is functioning, if applicable.
    4. Click Workloads → Pods to verify that the pods are running in the migrated namespace.
    5. Click Storage → Persistent volumes to verify that the migrated persistent volume is correctly provisioned.

2.6.7.2. Rolling back a migration from the CLI

You can roll back a migration by using the CLI.

If your application was stopped during a failed migration, you must roll back the migration in order to prevent data corruption in the persistent volume.

Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster.

Procedure

  1. Create a MigMigration CR object based on the following example:

    $ cat << EOF | oc apply -f -
    ---
    apiVersion: migration.openshift.io/v1alpha1
    kind: MigMigration
    metadata:
      labels:
        controller-tools.k8s.io: "1.0"
      name: migration-rollback
      namespace: openshift-migration
    spec:
      # 'canceled: true' cancels the migration
      canceled: false
      # 'rollback: true' rolls back the migration
      rollback: true
      # 'stage: true' runs a stage migration without quiescing the application on the source cluster.
      stage: false
      # 'quiescePods: true' scales the pods on the source cluster to '0' after the 'Backup' stage of a migration has finished
      quiescePods: false
      # 'keepAnnotations: true' retains the labels and annotations applied by the migration
      keepAnnotations: false
    
      migPlanRef:
        name: <migplan-name> 1
        namespace: openshift-migration
    EOF
    1
    Specify the name of the migration plan that you want to roll back.
  2. In the MTC console, verify that the migrated project resources have been removed from the target cluster.
  3. Verify that the migrated project resources are present in the source cluster and that the application is running.
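
    You can also watch the rollback from the CLI. The following is a sketch, using the MigMigration name from the example above; the status conditions indicate when the rollback is complete:

    $ oc describe migmigration migration-rollback -n openshift-migration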

2.6.8. Known issues

This release has the following known issues:

  • During migration, the Migration Toolkit for Containers (MTC) preserves the following namespace annotations:

    • openshift.io/sa.scc.mcs
    • openshift.io/sa.scc.supplemental-groups
    • openshift.io/sa.scc.uid-range

      These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. (BZ#1748440)

  • If an AWS bucket is added to the MTC web console and then deleted, its status remains True because the MigStorage CR is not updated. (BZ#1738564)
  • Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster.
  • If a migration fails, the migration plan does not retain custom PV settings for quiesced pods. You must manually roll back the migration, delete the migration plan, and create a new migration plan with your PV settings. (BZ#1784899)
  • If a large migration fails because Restic times out, you can increase the restic_timeout parameter value (default: 1h) in the MigrationController CR.
  • If you select the data verification option for PVs that are migrated with the file system copy method, performance is significantly slower.