Migration Toolkit for Containers

OpenShift Container Platform 4.6

Migrating to OpenShift Container Platform 4

Red Hat OpenShift Documentation Team

Abstract

This document provides instructions for migrating your OpenShift Container Platform cluster from version 3 to version 4, and also for migrating from an earlier OpenShift Container Platform 4 release to the latest version.

Chapter 1. Migrating from OpenShift Container Platform 3

1.1. About migrating OpenShift Container Platform 3 to 4

OpenShift Container Platform 4 includes new technologies and functionality that result in a cluster that is self-managing, flexible, and automated. The way that OpenShift Container Platform 4 clusters are deployed and managed differs drastically from OpenShift Container Platform 3.

To successfully transition from OpenShift Container Platform 3 to OpenShift Container Platform 4, it is important that you review the following information:

Planning your transition
Learn about the differences between OpenShift Container Platform versions 3 and 4. Prior to transitioning, be sure that you have reviewed and prepared for storage, networking, logging, security, and monitoring considerations.
Performing your migration

Learn about and use the tools to perform your migration:

  • Migration Toolkit for Containers (MTC) to migrate your application workloads
  • Control Plane Migration Assistant (CPMA) to migrate your control plane

1.2. Planning your migration

Before performing your migration to OpenShift Container Platform 4.6, it is important to take the time to properly plan for the transition. OpenShift Container Platform 4 introduces architectural changes and enhancements, so the procedures that you used to manage your OpenShift Container Platform 3 cluster might not apply for OpenShift Container Platform 4.

Note

This planning document assumes that you are transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.6.

This document provides high-level information on the most important differences between OpenShift Container Platform 3 and OpenShift Container Platform 4 and the most noteworthy migration considerations. For detailed information on configuring your OpenShift Container Platform 4 cluster, review the appropriate sections of the OpenShift Container Platform documentation. For detailed information on new features and other notable technical changes, review the OpenShift Container Platform 4.6 release notes.

It is not possible to upgrade your existing OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. You must start with a new OpenShift Container Platform 4 installation. Tools are available to assist in migrating your control plane settings and application workloads.

1.2.1. Comparing OpenShift Container Platform 3 and OpenShift Container Platform 4

With OpenShift Container Platform 3, administrators individually deployed Red Hat Enterprise Linux (RHEL) hosts, and then installed OpenShift Container Platform on top of these hosts to form a cluster. Administrators were responsible for properly configuring these hosts and performing updates.

OpenShift Container Platform 4 represents a significant change in the way that OpenShift Container Platform clusters are deployed and managed. OpenShift Container Platform 4 includes new technologies and functionality, such as Operators, MachineSets, and Red Hat Enterprise Linux CoreOS (RHCOS), which are core to the operation of the cluster. This technology shift enables clusters to self-manage some functions previously performed by administrators. This also ensures platform stability and consistency, and simplifies installation and scaling.

For more information, see OpenShift Container Platform architecture.

1.2.1.1. Architecture differences

Immutable infrastructure

OpenShift Container Platform 4 uses Red Hat Enterprise Linux CoreOS (RHCOS), which is designed to run containerized applications, and provides efficient installation, Operator-based management, and simplified upgrades. RHCOS is an immutable container host, rather than a customizable operating system like RHEL. RHCOS enables OpenShift Container Platform 4 to manage and automate the deployment of the underlying container host. RHCOS is a part of OpenShift Container Platform, which means that everything runs inside a container and is deployed using OpenShift Container Platform.

In OpenShift Container Platform 4, control plane nodes must run RHCOS, ensuring that full-stack automation is maintained for the control plane. This makes rolling out updates and upgrades a much easier process than in OpenShift Container Platform 3.

For more information, see Red Hat Enterprise Linux CoreOS (RHCOS).

Operators

Operators are a method of packaging, deploying, and managing a Kubernetes application. Operators ease the operational complexity of running another piece of software. They watch over your environment and use the current state to make decisions in real time. Advanced Operators are designed to upgrade and react to failures automatically.
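
As a quick illustration of this model, in OpenShift Container Platform 4 you can inspect the Operators that manage cluster components directly from the CLI; the commands below are only a minimal sketch of such a check:

    $ oc get clusteroperators          # list the cluster Operators and their availability
    $ oc describe clusteroperator dns  # show conditions and versions for a single Operator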

For more information, see Understanding Operators.

1.2.1.2. Installation and update differences

Installation process

To install OpenShift Container Platform 3.11, you prepared your Red Hat Enterprise Linux (RHEL) hosts, set all of the configuration values your cluster needed, and then ran an Ansible playbook to install and set up your cluster.

In OpenShift Container Platform 4.6, you use the OpenShift installation program to create a minimum set of resources required for a cluster. Once the cluster is running, you use Operators to further configure your cluster and to install new services. After first boot, Red Hat Enterprise Linux CoreOS (RHCOS) systems are managed by the Machine Config Operator (MCO) that runs in the OpenShift Container Platform cluster.
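
For illustration only, a minimal installer-provisioned deployment is driven by a command along the following lines; the directory name is a placeholder, and an install-config.yaml is assumed to exist in it:

    $ openshift-install create cluster --dir <installation_directory> --log-level=info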

For more information, see Installation process.

If you want to add RHEL worker machines to your OpenShift Container Platform 4.6 cluster, you use an Ansible playbook to join the RHEL worker machines after the cluster is running. For more information, see Adding RHEL compute machines to an OpenShift Container Platform cluster.

Infrastructure options

In OpenShift Container Platform 3.11, you installed your cluster on infrastructure that you prepared and maintained. In addition to providing your own infrastructure, OpenShift Container Platform 4 offers an option to deploy a cluster on infrastructure that the OpenShift Container Platform installation program provisions and the cluster maintains.

For more information, see OpenShift Container Platform installation overview.

Upgrading your cluster

In OpenShift Container Platform 3.11, you upgraded your cluster by running Ansible playbooks. In OpenShift Container Platform 4.6, the cluster manages its own updates, including updates to Red Hat Enterprise Linux CoreOS (RHCOS) on cluster nodes. You can easily upgrade your cluster by using the web console or by using the oc adm upgrade command from the OpenShift CLI, and the Operators automatically upgrade themselves. If your OpenShift Container Platform 4.6 cluster has Red Hat Enterprise Linux worker machines, you still need to run an Ansible playbook to upgrade those worker machines.
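
For example, a hedged sketch of checking for and applying an update from the CLI; the target version shown is illustrative:

    $ oc adm upgrade                     # show the current version and the updates available in the channel
    $ oc adm upgrade --to=4.6.8          # update to a specific available version
    $ oc adm upgrade --to-latest=true    # or update to the latest available version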

For more information, see Updating clusters.

1.2.2. Migration considerations

Review the changes and other considerations that might affect your transition from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.

1.2.2.1. Storage considerations

Review the following storage changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.6.

Local volume persistent storage

In OpenShift Container Platform 4.6, local storage is supported only by using the Local Storage Operator. Using the local provisioner method from OpenShift Container Platform 3.11 is not supported.
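
As an illustration, after the Local Storage Operator is installed, local disks are exposed through a LocalVolume resource similar to the following sketch; the device path, storage class name, and namespace are assumptions here, and a node selector is usually added to target specific nodes:

    $ cat <<EOF | oc apply -f -
    apiVersion: local.storage.openshift.io/v1
    kind: LocalVolume
    metadata:
      name: local-disks
      namespace: openshift-local-storage
    spec:
      storageClassDevices:
      - storageClassName: local-sc        # storage class that the created PVs will use
        volumeMode: Filesystem
        fsType: ext4
        devicePaths:
        - /dev/vdb                        # raw block device present on the selected nodes
    EOF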

For more information, see Persistent storage using local volumes.

FlexVolume persistent storage

The FlexVolume plug-in location changed from OpenShift Container Platform 3.11. The new location in OpenShift Container Platform 4.6 is /etc/kubernetes/kubelet-plugins/volume/exec. Attachable FlexVolume plug-ins are no longer supported.

For more information, see Persistent storage using FlexVolume.

Container Storage Interface (CSI) persistent storage

Persistent storage using the Container Storage Interface (CSI) was Technology Preview in OpenShift Container Platform 3.11. CSI version 1.1.0 is fully supported in OpenShift Container Platform 4.6, but does not ship with any CSI drivers. You must install your own driver.

For more information, see Persistent storage using the Container Storage Interface (CSI).

Red Hat OpenShift Container Storage

Red Hat OpenShift Container Storage 3, which is available for use with OpenShift Container Platform 3.11, uses Red Hat Gluster Storage as the backing storage.

Red Hat OpenShift Container Storage 4, which is available for use with OpenShift Container Platform 4, uses Red Hat Ceph Storage as the backing storage.

For more information, see Persistent storage using Red Hat OpenShift Container Storage and the interoperability matrix article.

Unsupported persistent storage options

Support for the following persistent storage options from OpenShift Container Platform 3.11 has changed in OpenShift Container Platform 4.6:

  • GlusterFS is no longer supported.
  • CephFS as a standalone product is no longer supported.
  • Ceph RBD as a standalone product is no longer supported.

If you used one of these in OpenShift Container Platform 3.11, you must choose a different persistent storage option for full support in OpenShift Container Platform 4.6.

For more information, see Understanding persistent storage.

1.2.2.2. Networking considerations

Review the following networking changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.6.

Network isolation mode

The default network isolation mode for OpenShift Container Platform 3.11 was ovs-subnet, though users frequently switched to use ovs-multitenant. The default network isolation mode for OpenShift Container Platform 4.6 is now NetworkPolicy.

If your OpenShift Container Platform 3.11 cluster used the ovs-subnet or ovs-multitenant mode, it is recommended to switch to the NetworkPolicy mode for your OpenShift Container Platform 4.6 cluster. NetworkPolicy is supported upstream, is more flexible, and also provides the functionality that ovs-multitenant does. If you want to maintain the ovs-multitenant behavior while using NetworkPolicy in OpenShift Container Platform 4.6, follow the steps to configure multitenant isolation using NetworkPolicy.
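
For example, one of the policies used to approximate multitenant isolation restricts ingress to pods in the same project. This is a minimal sketch only; the project name is a placeholder, and the full procedure typically also adds policies that allow ingress and monitoring traffic:

    $ cat <<EOF | oc apply -n <project> -f -
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-same-namespace
    spec:
      podSelector: {}                 # select every pod in the project
      ingress:
      - from:
        - podSelector: {}             # allow traffic only from pods in the same project
    EOF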

For more information, see About network policy.

Encrypting traffic between hosts

In OpenShift Container Platform 3.11, you could use IPsec to encrypt traffic between hosts. OpenShift Container Platform 4.6 does not support IPsec. It is recommended to use Red Hat OpenShift Service Mesh to enable mutual TLS between services.

For more information, see Understanding Red Hat OpenShift Service Mesh.

1.2.2.3. Logging considerations

Review the following logging changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.6.

Deploying cluster logging

OpenShift Container Platform 4 provides a simple deployment mechanism for cluster logging, by using a Cluster Logging custom resource.

For more information, see Installing cluster logging.

Aggregated logging data

You cannot transition your aggregate logging data from OpenShift Container Platform 3.11 into your new OpenShift Container Platform 4 cluster.

For more information, see About cluster logging.

Unsupported logging configurations

Some logging configurations that were available in OpenShift Container Platform 3.11 are no longer supported in OpenShift Container Platform 4.6.

For more information on the explicitly unsupported logging cases, see Maintenance and support.

1.2.2.4. Security considerations

Review the following security changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.6.

Unauthenticated access to discovery endpoints

In OpenShift Container Platform 3.11, an unauthenticated user could access the discovery endpoints (for example, /api/* and /apis/*). For security reasons, unauthenticated access to the discovery endpoints is no longer allowed in OpenShift Container Platform 4.6. If you do need to allow unauthenticated access, you can configure the RBAC settings as necessary; however, be sure to consider the security implications as this can expose internal cluster components to the external network.
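
If you decide to allow it anyway, one possible approach is to bind the system:discovery cluster role to the system:unauthenticated group; treat this as a sketch and review it against your own security policy first:

    $ oc adm policy add-cluster-role-to-group system:discovery system:unauthenticated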

Identity providers

Configuration for identity providers has changed for OpenShift Container Platform 4, including the following notable changes:

  • The request header identity provider in OpenShift Container Platform 4.6 requires mutual TLS, where in OpenShift Container Platform 3.11 it did not.
  • The configuration of the OpenID Connect identity provider was simplified in OpenShift Container Platform 4.6. It now obtains data, which previously had to be specified in OpenShift Container Platform 3.11, from the provider’s /.well-known/openid-configuration endpoint, as illustrated in the sketch after this list.
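
The following is a minimal sketch of an OpenID Connect entry in the cluster OAuth resource; the provider name, client ID, secret name, and issuer URL are placeholders:

    $ cat <<EOF | oc apply -f -
    apiVersion: config.openshift.io/v1
    kind: OAuth
    metadata:
      name: cluster
    spec:
      identityProviders:
      - name: oidc-idp                      # name shown on the login page
        mappingMethod: claim
        type: OpenID
        openID:
          clientID: <client_id>
          clientSecret:
            name: <client_secret_name>      # Secret in the openshift-config namespace
          claims:
            preferredUsername:
            - preferred_username
          issuer: https://<issuer_url>      # /.well-known/openid-configuration is read from here
    EOF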

For more information, see Understanding identity provider configuration.

OAuth token storage format

Newly created OAuth HTTP bearer tokens no longer match the names of their OAuth access token objects. The object names are now a hash of the bearer token and are no longer sensitive. This reduces the risk of leaking sensitive information.

1.2.2.5. Monitoring considerations

Review the following monitoring changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.6.

Alert for monitoring infrastructure availability

The default alert that triggers to ensure the availability of the monitoring infrastructure was called DeadMansSwitch in OpenShift Container Platform 3.11. This was renamed to Watchdog in OpenShift Container Platform 4. If you had PagerDuty integration set up with this alert in OpenShift Container Platform 3.11, you must set up the PagerDuty integration for the Watchdog alert in OpenShift Container Platform 4.

For more information, see Applying custom Alertmanager configuration.

1.3. Migration tools and prerequisites

You can migrate application workloads from OpenShift Container Platform 3.7, 3.9, 3.10, and 3.11 to OpenShift Container Platform 4.6 with the Migration Toolkit for Containers (MTC). MTC enables you to control the migration and to minimize application downtime.

The MTC web console and API, based on Kubernetes Custom Resources, enable you to migrate stateful application workloads at the granularity of a namespace.

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.

You can use migration hooks to run Ansible playbooks at certain points during the migration. The hooks are added when you create a migration plan.

Note

The service catalog is deprecated in OpenShift Container Platform 4. You can migrate workload resources provisioned with the service catalog from OpenShift Container Platform 3 to 4 but you cannot perform service catalog actions such as provision, deprovision, or update on these workloads after migration.

The MTC web console displays a message if the service catalog resources cannot be migrated.

The Control Plane Migration Assistant (CPMA) is a CLI-based tool that assists you in migrating the control plane. The CPMA processes the OpenShift Container Platform 3 configuration files and generates Custom Resource (CR) manifest files, which are consumed by OpenShift Container Platform 4.6 Operators.

Important

Before you begin your migration, be sure to review the information on planning your migration.

1.3.1. Migration prerequisites

  • You must have podman installed.
  • The source cluster must be OpenShift Container Platform 3.7, 3.9, 3.10, or 3.11.
  • You must upgrade the source cluster to the latest z-stream release.
  • You must have cluster-admin privileges on all clusters.
  • The source and target clusters must have unrestricted network access to the replication repository.
  • The cluster on which the Migration controller is installed must have unrestricted access to the other clusters.
  • If your application uses images from the openshift namespace, the required versions of the images must be present on the target cluster.

    If the required images are not present, you must update the imagestreamtag references to use an available version that is compatible with your application. If the imagestreamtags cannot be updated, you can manually upload equivalent images to the application namespaces and update the applications to reference them. Example commands for checking and updating a reference are shown after the following list of removed tags.

The following imagestreamtags have been removed from OpenShift Container Platform 4.2:

  • dotnet:1.0, dotnet:1.1, dotnet:2.0
  • dotnet-runtime:2.0
  • mariadb:10.1
  • mongodb:2.4, mongodb:2.6
  • mysql:5.5, mysql:5.6
  • nginx:1.8
  • nodejs:0.10, nodejs:4, nodejs:6
  • perl:5.16, perl:5.20
  • php:5.5, php:5.6
  • postgresql:9.2, postgresql:9.4, postgresql:9.5
  • python:3.3, python:3.4
  • ruby:2.0, ruby:2.2
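
For example, a hedged sketch of checking which tags exist on the target cluster and pointing a source-strategy build at a supported tag; the build config name, namespace, and the nodejs:12 tag are illustrative:

    $ oc get imagestreamtags -n openshift | grep ^nodejs      # list the nodejs tags available on the target cluster
    $ oc patch bc/<app_build_config> -n <app_namespace> --type=json \
        -p '[{"op": "replace", "path": "/spec/strategy/sourceStrategy/from/name", "value": "nodejs:12"}]'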

1.3.2. About the Migration Toolkit for Containers

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images from an OpenShift Container Platform source cluster to an OpenShift Container Platform 4.6 target cluster, using the MTC web console or the Kubernetes API.

Migrating an application with the MTC web console involves the following steps:

  1. Install the MTC Operator on all clusters.

    You can install the MTC Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry.

  2. Configure the replication repository, an intermediate object storage that MTC uses to migrate data.

    The source and target clusters must have network access to the replication repository during migration. In a restricted environment, you can use an internally hosted S3 storage repository. If you use a proxy server, you must ensure that the replication repository is allowed.

  3. Add the source cluster to the MTC web console.
  4. Add the replication repository to the MTC web console.
  5. Create a migration plan, with one of the following data migration options:

    • Copy: MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster.

      (Figure: Migration PV copy)
    • Move: MTC unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.

      Note

      Although the replication repository does not appear in this diagram, it is required for the actual migration.

      (Figure: Migration PV move)
  6. Run the migration plan, with one of the following options:

    • Stage (optional) copies data to the target cluster without stopping the application.

      Staging can be run multiple times so that most of the data is copied to the target before migration. This minimizes the actual migration time and application downtime.

    • Migrate stops the application on the source cluster and recreates its resources on the target cluster. Optionally, you can migrate the workload without stopping the application.
(Figure: OpenShift Container Platform 3 to 4 application migration)

1.3.3. About data copy methods

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.

1.3.3.1. File system copy method

MTC copies data files from the source cluster to the replication repository, and from there to the target cluster.

Table 1.1. File system copy method summary

Benefits

  • Clusters can have different storage classes
  • Supported for all S3 storage providers
  • Optional data verification with checksum

Limitations

  • Slower than the snapshot copy method
  • Optional data verification significantly reduces performance

1.3.3.2. Snapshot copy method

MTC copies a snapshot of the source cluster’s data to a cloud provider’s object storage, configured as a replication repository. The data is restored on the target cluster.

AWS, Google Cloud Provider, and Microsoft Azure support the snapshot copy method.

Table 1.2. Snapshot copy method summary

Benefits

  • Faster than the file system copy method

Limitations

  • Cloud provider must support snapshots.
  • Clusters must be on the same cloud provider.
  • Clusters must be in the same location or region.
  • Clusters must have the same storage class.
  • Storage class must be compatible with snapshots.

1.3.4. About migration hooks

You can use migration hooks to run Ansible playbooks at certain points during the migration. The hooks are added when you create a migration plan.

Note

If you do not want to use Ansible playbooks, you can create a custom container image and add it to a migration plan.

Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration.

A single migration hook runs on a source or target cluster at one of the following migration steps:

  • PreBackup: Before backup tasks are started on the source cluster
  • PostBackup: After backup tasks are complete on the source cluster
  • PreRestore: Before restore tasks are started on the target cluster
  • PostRestore: After restore tasks are complete on the target cluster

    You can assign one hook to each migration step, up to a maximum of four hooks for a single migration plan.

The default hook-runner image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:v1.3.0. This image is based on Ansible Runner and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. You can also create your own hook image with additional Ansible modules or tools.

The Ansible playbook is mounted on a hook container as a ConfigMap. The hook container runs as a Job on a cluster with a specified service account and namespace. The Job continues to run, even if the initial Pod is evicted or killed, until it reaches the default backoffLimit (6) or completes successfully.
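
While a plan that includes hooks is running, you can check whether a hook Job completed or exhausted its retries by listing the Jobs in the namespace you specified for the hook, for example:

    $ oc get jobs,pods -n <hook_namespace>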

1.3.5. About the Control Plane Migration Assistant

The Control Plane Migration Assistant (CPMA) is a CLI-based tool that assists you in migrating the control plane from OpenShift Container Platform 3.7 (or later) to 4.6. CPMA processes the OpenShift Container Platform 3 configuration files and generates Custom Resource (CR) manifest files, which are consumed by OpenShift Container Platform 4.6 Operators.

Because OpenShift Container Platform 3 and 4 have significant configuration differences, not all parameters are processed. CPMA can generate a report that describes whether features are supported fully, partially, or not at all.

Configuration files

CPMA uses the Kubernetes and OpenShift Container Platform APIs to access the following configuration files on an OpenShift Container Platform 3 cluster:

  • Master configuration file (default: /etc/origin/master/master-config.yaml)
  • CRI-O configuration file (default: /etc/crio/crio.conf)
  • etcd configuration file (default: /etc/etcd/etcd.conf)
  • Image registries file (default: /etc/containers/registries.conf)
  • Dependent configuration files:

    • Password files (for example, HTPasswd)
    • ConfigMaps
    • Secrets

CR Manifests

CPMA generates CR manifests for the following configurations:

  • API server CA certificate: 100_CPMA-cluster-config-APISecret.yaml

    Note

    If you are using an unsigned API server CA certificate, you must add the certificate manually to the target cluster.

  • CRI-O: 100_CPMA-crio-config.yaml
  • Cluster resource quota: 100_CPMA-cluster-quota-resource-x.yaml
  • Project resource quota: 100_CPMA-resource-quota-x.yaml
  • Portable image registry (/etc/registries/registries.conf) and portable image policy (/etc/origin/master/master-config.yaml): 100_CPMA-cluster-config-image.yaml
  • OAuth providers: 100_CPMA-cluster-config-oauth.yaml
  • Project configuration: 100_CPMA-cluster-config-project.yaml
  • Scheduler: 100_CPMA-cluster-config-scheduler.yaml
  • SDN: 100_CPMA-cluster-config-sdn.yaml

1.4. Deploying the Migration Toolkit for Containers

You can deploy the Migration Toolkit for Containers (MTC) on an OpenShift Container Platform 4.6 target cluster and an OpenShift Container Platform 3 source cluster by installing the MTC Operator. The MTC Operator deploys MTC on the target cluster by default.

Note

Optional: You can configure the MTC Operator to install the MTC on an OpenShift Container Platform 3 cluster or on a remote cluster.

In a restricted environment, you can install the MTC Operator from a local mirror registry.

After you have installed the MTC Operator on your clusters, you can launch the MTC web console.

1.4.1. Installing the MTC Operator

You can install the MTC Operator with the Operator Lifecycle Manager (OLM) on an OpenShift Container Platform 4.6 target cluster and manually on an OpenShift Container Platform 3 source cluster.

1.4.1.1. Installing the MTC Operator on an OpenShift Container Platform 4.6 target cluster

You can install the MTC Operator on an OpenShift Container Platform 4.6 target cluster with the Operator Lifecycle Manager (OLM).

The MTC Operator installs the Migration Toolkit for Containers on the target cluster by default.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Use the Filter by keyword field (in this case, Migration) to find the MTC Operator.
  3. Select the MTC Operator and click Install.
  4. On the Install Operator page, click Install.

    On the Installed Operators page, the MTC Operator appears in the openshift-migration project with the status Succeeded.

  5. Click MTC Operator.
  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  7. Click Create.
  8. Click Workloads → Pods to verify that the Controller Manager, Migration UI, Restic, and Velero pods are running.

1.4.1.2. Installing the MTC Operator on an OpenShift Container Platform 3 source cluster

You can install the MTC Operator manually on an OpenShift Container Platform 3 source cluster.

Important

You must install the same MTC version on the OpenShift Container Platform 3 and 4 clusters. The MTC Operator on the OpenShift Container Platform 4 cluster is updated automatically by the Operator Lifecycle Manager.

To ensure that you have the latest version on the OpenShift Container Platform 3 cluster, download the operator.yml and controller-3.yml files when you are ready to create and run the migration plan.

Prerequisites

  • Access to registry.redhat.io
  • OpenShift Container Platform 3 cluster configured to pull images from registry.redhat.io

    To pull images, you must create an imagestreamsecret and copy it to each node in your cluster.

Procedure

  1. Log in to registry.redhat.io with your Red Hat Customer Portal credentials:

    $ sudo podman login registry.redhat.io
  2. Download the operator.yml file:

    $ sudo podman cp $(sudo podman create registry.redhat.io/rhmtc/openshift-migration-rhel7-operator:v1.3.0):/operator.yml ./
  3. Download the controller-3.yml file:

    $ sudo podman cp $(sudo podman create registry.redhat.io/rhmtc/openshift-migration-rhel7-operator:v1.3.0):/controller-3.yml ./
  4. Log in to your OpenShift Container Platform 3 cluster.
  5. Verify that the cluster can authenticate with registry.redhat.io:

    $ oc run test --image registry.redhat.io/ubi8 --command sleep infinity
  6. Create the MTC Operator CR object:

    $ oc create -f operator.yml

    Example output

    namespace/openshift-migration created
    rolebinding.rbac.authorization.k8s.io/system:deployers created
    serviceaccount/migration-operator created
    customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created
    role.rbac.authorization.k8s.io/migration-operator created
    rolebinding.rbac.authorization.k8s.io/migration-operator created
    clusterrolebinding.rbac.authorization.k8s.io/migration-operator created
    deployment.apps/migration-operator created
    Error from server (AlreadyExists): error when creating "./operator.yml":
    rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1
    Error from server (AlreadyExists): error when creating "./operator.yml":
    rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists

    1
    You can ignore Error from server (AlreadyExists) messages. They are caused by the MTC Operator creating resources for earlier versions of OpenShift Container Platform 3 that are provided in later releases.
  7. Create the Migration Controller CR object:

    $ oc create -f controller-3.yml
  8. Verify that the Velero and Restic pods are running:

    $ oc get pods -n openshift-migration

1.4.2. Installing the MTC Operator in a restricted environment

You can install the MTC Operator with the Operator Lifecycle Manager (OLM) on an OpenShift Container Platform 4.6 target cluster and manually on an OpenShift Container Platform 3 source cluster.

For OpenShift Container Platform 4.6, you can build a custom Operator catalog image, push it to a local mirror image registry, and configure OLM to install the MTC Operator from the local registry. A mapping.txt file is created when you run the oc adm catalog mirror command.

On the OpenShift Container Platform 3 cluster, you can create a manifest file based on the Operator image and edit the file to point to your local image registry. The image value in the manifest file uses the sha256 value from the mapping.txt file. Then, you can use the local image to create the MTC Operator.

1.4.2.1. Prerequisites

  • If you want to prune the default catalog and selectively mirror only a subset of Operators, install the opm CLI.

1.4.2.2. Disabling the default OperatorHub sources

Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. Before configuring OperatorHub to instead use local catalog sources in a restricted network environment, you must disable the default catalogs.

Procedure

  • Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub spec:

    $ oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'

1.4.2.3. Pruning an index image

An index image, based on the Operator Bundle Format, is a containerized snapshot of an Operator catalog. You can prune an index of all but a specified list of packages, creating a copy of the source index containing only the Operators that you want.

When configuring Operator Lifecycle Manager (OLM) to use mirrored content on restricted network OpenShift Container Platform clusters, use this pruning method if you want to only mirror a subset of Operators from the default catalogs.

For the steps in this procedure, the target registry is an existing mirror registry that is accessible by both your cluster and a workstation with unrestricted network access. This example also shows pruning the index image for the default redhat-operators catalog, but the process is the same for all index images.

Prerequisites

  • Workstation with unrestricted network access
  • podman version 1.4.4+
  • grpcurl
  • opm version 1.12.3+
  • Access to a registry that supports Docker v2-2

Procedure

  1. Authenticate with registry.redhat.io:

    $ podman login registry.redhat.io
  2. Authenticate with your target registry:

    $ podman login <target_registry>
  3. Determine the list of packages you want to include in your pruned index.

    1. Run the source index image that you want to prune in a container. For example:

      $ podman run -p50051:50051 \
          -it registry.redhat.io/redhat/redhat-operator-index:v4.6

      Example output

      Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.6...
      Getting image source signatures
      Copying blob ae8a0c23f5b1 done
      ...
      INFO[0000] serving registry                              database=/database/index.db port=50051

    2. In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index:

      $ grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out
    3. Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example:

      Example snippets of packages list

      ...
      {
        "name": "advanced-cluster-management"
      }
      ...
      {
        "name": "jaeger-product"
      }
      ...
      {
        "name": "quay-operator"
      }
      ...

    4. In the terminal session where you executed the podman run command, press Ctrl and C to stop the container process.
  4. Run the following command to prune the source index of all but the specified packages:

    $ opm index prune \
        -f registry.redhat.io/redhat/redhat-operator-index:v4.6 \1
        -p advanced-cluster-management,jaeger-product,quay-operator \2
        -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.6 3
    1
    Index to prune.
    2
    Comma-separated list of packages to keep.
    3
    Custom tag for new index image being built.
  5. Run the following command to push the new index image to your target registry:

    $ podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.6

    where <namespace> is any existing namespace on the registry. For example, you might create an olm-mirror namespace to push all mirrored content to.

1.4.2.4. Mirroring an Operator catalog

You can mirror the Operator content of a Red Hat-provided catalog, or a custom catalog, into a container image registry using the oc adm catalog mirror command. The target registry must support Docker v2-2. For a cluster on a restricted network, this registry can be a registry that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation.

You must also mirror the Red Hat-provided index image, or push your own custom-built index image, to the target registry by using the oc image mirror command. You can then use the mirrored index image to create a CatalogSource that allows Operator Lifecycle Manager (OLM) to load the mirrored catalog onto your OpenShift Container Platform cluster.

For the steps in this procedure, the target registry is an existing mirror registry that is accessible by both your cluster and a workstation with unrestricted network access. This example also shows mirroring the default redhat-operators catalog, but the process is the same for all catalogs.

Prerequisites

  • Workstation with unrestricted network access
  • podman version 1.4.4+
  • Access to mirror registry that supports Docker v2-2
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json

Procedure

  1. On your workstation with unrestricted network access, use the podman login command to authenticate with your target mirror registry:

    $ podman login <mirror_registry>
  2. Authenticate with registry.redhat.io:

    $ podman login registry.redhat.io
  3. The oc adm catalog mirror command extracts the contents of an index image to generate the manifests required for mirroring. You can choose either of the following:

    • Allow the default behavior of the command to automatically mirror all of the image content from the index image to your mirror registry after generating manifests.
    • Add the --manifests-only flag to only generate the manifests required for mirroring, but do not actually mirror the image content to the registry yet. This can be useful for reviewing what will be mirrored, and it allows you to make any changes to the mapping list if you only require a subset of packages. You can then use that file with the oc image mirror command to mirror the modified list of images in a later step.

      Note

      The --manifests-only flag is intended for advanced selective mirroring of content from the catalog. If you previously used the opm index prune command to prune the index image, that approach is sufficient for most use cases.

    On your workstation with unrestricted network access, run the following command:

    $ oc adm catalog mirror \
        <index_image> \1
        <mirror_registry>:<port> \2
        [-a ${REG_CREDS}] \3
        [--insecure] \4
        [--filter-by-os="<os>/<arch>"] \5
        [--manifests-only] 6
    1
    Specify the index image for the catalog you want to mirror. For example, this might be a pruned index image that you created previously, or one of the source index images for the default catalogs, such as registry.redhat.io/redhat/redhat-operator-index:v4.6.
    2
    Specify the target registry to mirror the Operator content to.
    3
    Optional: If required, specify the location of your registry credentials file.
    4
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    5
    Optional: Because the catalog might reference images that support multiple architectures and operating systems, you can filter by architecture and operating system to mirror only the images that match. Valid values are linux/amd64, linux/ppc64le, and linux/s390x.
    6
    Optional: Only generate the manifests required for mirroring and do not actually mirror the image content to a registry.

    Example output

    src image has index label for database path: /database/index.db
    using database path mapping: /database/index.db:/tmp/153048078
    wrote database to /tmp/153048078 1
    ...
    wrote mirroring manifests to redhat-operator-index-manifests

    1
    Directory for the temporary index.db database generated by the command.

    After running the command, a <image_name>-manifests/ directory is created in the current directory and generates the following files:

    • The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry.
    • The mapping.txt file contains all of the source images and where to map them in the target registry. This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration.
  4. If you used the --manifests-only flag in the previous step and want to further trim the subset of packages to be mirrored:

    1. Modify the list of images in your mapping.txt file to your specifications. If you are unsure of the exact names and versions of the subset of images you want to mirror, use the following steps to find them:

      1. Run the sqlite3 tool against the temporary database that was generated by the oc adm catalog mirror command to retrieve a list of images matching a general search query. The output helps inform how you will later edit your mapping.txt file.

        For example, to retrieve a list of images that are similar to the string jaeger:

        $ echo "select * from related_image \
            where operatorbundle_name like '%jaeger%';" \
            | sqlite3 -line /tmp/153048078/index.db 1
        1
        Refer to the previous output of the oc adm catalog mirror command to find the path of the database file.

        Example output

        ...
        image = registry.redhat.io/distributed-tracing/jaeger-all-in-one-rhel7@sha256:41f769c2c32f3f050aa42d86f084b739914ff9ba2f0aed2d9b0b69357b48459d
        operatorbundle_name = jaeger-operator.v1.17.6
        
        image = registry.redhat.io/distributed-tracing/jaeger-es-index-cleaner-rhel7@sha256:c64ac461d96523516a199bd132ad4d7148317e503a735028f0d8f7ba063a61cb
        operatorbundle_name = jaeger-operator.v1.17.6
        
        image = registry.redhat.io/distributed-tracing/jaeger-rhel7-operator:1.13.2
        operatorbundle_name = jaeger-operator.v1.13.2-1

      2. Use the results from the previous step to help you edit the mapping.txt file to only include the subset of images you want to mirror.

        For example, you can use the image values from the previous example output to find that the following matching lines exist in your mapping.txt file:

        Matching image mappings in mapping.txt

        ...
        registry.redhat.io/distributed-tracing/jaeger-all-in-one-rhel7@sha256:41f769c2c32f3f050aa42d86f084b739914ff9ba2f0aed2d9b0b69357b48459d=quay.io/adellape/distributed-tracing-jaeger-all-in-one-rhel7:5cf7a033
        ...
        registry.redhat.io/distributed-tracing/jaeger-es-index-cleaner-rhel7@sha256:c64ac461d96523516a199bd132ad4d7148317e503a735028f0d8f7ba063a61cb=quay.io/adellape/distributed-tracing-jaeger-es-index-cleaner-rhel7:ecfd2ca7
        ...
        registry.redhat.io/distributed-tracing/jaeger-rhel7-operator:1.13.2=quay.io/adellape/distributed-tracing-jaeger-rhel7-operator:1.13.2
        ...

        In this example, if you only want to mirror these images, you would then remove all other entries in the mapping.txt file and leave only the above matching image mapping lines.

    2. Still on your workstation with unrestricted network access, use your modified mapping.txt file to mirror the images to your registry using the oc image mirror command:

      $ oc image mirror \
          [-a ${REG_CREDS}] \
          -f ./redhat-operator-index-manifests/mapping.txt
  5. Apply the ImageContentSourcePolicy:

    $ oc apply -f ./redhat-operator-index-manifests/imageContentSourcePolicy.yaml
  6. If you are not using a custom, pruned version of an index image, push the Red Hat-provided index image to your registry:

    $ oc image mirror \
        [-a ${REG_CREDS}] \
        registry.redhat.io/redhat/redhat-operator-index:v4.6 \1
        <mirror_registry>:<port>/<namespace>/redhat-operator-index:v4.6 2
    1
    Specify the index image for the catalog whose content you mirrored in the previous step.
    2
    Specify where to mirror the index image.

You can now create a CatalogSource to reference your mirrored index image and Operator content.

1.4.2.5. Creating a catalog from an index image

You can create an Operator catalog from an index image and apply it to an OpenShift Container Platform cluster for use with Operator Lifecycle Manager (OLM).

Prerequisites

  • An index image built and pushed to a registry.

Procedure

  1. Create a CatalogSource object that references your index image.

    1. Modify the following to your specifications and save it as a catalogsource.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: my-operator-catalog
        namespace: openshift-marketplace
      spec:
        sourceType: grpc
        image: <mirror_registry>:<port>/<namespace>/redhat-operator-index:v4.6 1
        displayName: My Operator Catalog
        publisher: <publisher_name> 2
        updateStrategy:
          registryPoll: 3
            interval: 30m
      1
      Specify your index image.
      2
      Specify your name or an organization name publishing the catalog.
      3
      CatalogSources can automatically check for new versions to keep up to date.
    2. Use the file to create the CatalogSource object:

      $ oc create -f catalogsource.yaml
  2. Verify the following resources are created successfully.

    1. Check the pods:

      $ oc get pods -n openshift-marketplace

      Example output

      NAME                                    READY   STATUS    RESTARTS  AGE
      my-operator-catalog-6njx6               1/1     Running   0         28s
      marketplace-operator-d9f549946-96sgr    1/1     Running   0         26h

    2. Check the CatalogSource:

      $ oc get catalogsource -n openshift-marketplace

      Example output

      NAME                  DISPLAY               TYPE PUBLISHER  AGE
      my-operator-catalog   My Operator Catalog   grpc            5s

    3. Check the PackageManifest:

      $ oc get packagemanifest -n openshift-marketplace

      Example output

      NAME                          CATALOG               AGE
      jaeger-product                My Operator Catalog   93s

You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.

1.4.2.6. Installing the MTC Operator on an OpenShift Container Platform 4.6 target cluster in a restricted environment

You can install the MTC Operator on an OpenShift Container Platform 4.6 target cluster with the Operator Lifecycle Manager (OLM).

The MTC Operator installs the Migration Toolkit for Containers on the target cluster by default.

Prerequisites

  • You have created a custom Operator catalog and pushed it to a mirror registry.
  • You have configured OLM to install the MTC Operator from the mirror registry.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Use the Filter by keyword field (in this case, Migration) to find the MTC Operator.
  3. Select the MTC Operator and click Install.
  4. On the Install Operator page, click Install.

    On the Installed Operators page, the MTC Operator appears in the openshift-migration project with the status Succeeded.

  5. Click MTC Operator.
  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  7. Click Create.
  8. Click Workloads → Pods to verify that the Controller Manager, Migration UI, Restic, and Velero pods are running.

1.4.2.7. Installing the MTC Operator on an OpenShift Container Platform 3 source cluster in a restricted environment

You can create a manifest file based on the MTC Operator image and edit the manifest to point to your local image registry. Then, you can use the local image to create the MTC Operator on an OpenShift Container Platform 3 source cluster.

Important

You must install the same MTC version on the OpenShift Container Platform 3 and 4 clusters. The MTC Operator on the OpenShift Container Platform 4 cluster is updated automatically by the Operator Lifecycle Manager.

To ensure that you have the latest version on the OpenShift Container Platform 3 cluster, download the operator.yml and controller-3.yml files when you are ready to create and run the migration plan.

Prerequisites

  • Access to registry.redhat.io
  • Linux workstation with unrestricted network access
  • Mirror registry that supports Docker v2-2
  • Custom Operator catalog pushed to a mirror registry

Procedure

  1. On the workstation with unrestricted network access, log in to registry.redhat.io with your Red Hat Customer Portal credentials:

    $ sudo podman login registry.redhat.io
  2. Download the operator.yml file:

    $ sudo podman cp $(sudo podman create registry.redhat.io/rhmtc/openshift-migration-rhel7-operator:v1.3.0):/operator.yml ./
  3. Download the controller-3.yml file:

    $ sudo podman cp $(sudo podman create registry.redhat.io/rhmtc/openshift-migration-rhel7-operator:v1.3.0):/controller-3.yml ./
  4. Obtain the Operator image value from the mapping.txt file that was created when you ran the oc adm catalog mirror on the OpenShift Container Platform 4 cluster:

    $ grep openshift-migration-rhel7-operator ./mapping.txt | grep rhmtc

    The output shows the mapping between the registry.redhat.io image and your mirror registry image.

    Example output

    registry.redhat.io/rhmtc/openshift-migration-rhel7-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-rhel7-operator

  5. Update the image and REGISTRY values in the operator.yml file:

    containers:
      - name: ansible
        image: <registry.apps.example.com>/rhmtc/openshift-migration-rhel7-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1
    ...
      - name: operator
        image: <registry.apps.example.com>/rhmtc/openshift-migration-rhel7-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2
    ...
        env:
        - name: REGISTRY
          value: <registry.apps.example.com> 3
    1
    Specify your mirror registry and the sha256 value of the Operator image in the mapping.txt file.
    2
    Specify your mirror registry and the sha256 value of the Operator image in the mapping.txt file.
    3
    Specify your mirror registry.
  6. Log in to your OpenShift Container Platform 3 cluster.
  7. Create the MTC Operator CR object:

    $ oc create -f operator.yml

    Example output

    namespace/openshift-migration created
    rolebinding.rbac.authorization.k8s.io/system:deployers created
    serviceaccount/migration-operator created
    customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created
    role.rbac.authorization.k8s.io/migration-operator created
    rolebinding.rbac.authorization.k8s.io/migration-operator created
    clusterrolebinding.rbac.authorization.k8s.io/migration-operator created
    deployment.apps/migration-operator created
    Error from server (AlreadyExists): error when creating "./operator.yml":
    rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1
    Error from server (AlreadyExists): error when creating "./operator.yml":
    rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists

    1
    You can ignore Error from server (AlreadyExists) messages. They are caused by the MTC Operator creating resources for earlier versions of OpenShift Container Platform 3 that are provided in later releases.
  8. Create the Migration Controller CR object:

    $ oc create -f controller-3.yml
  9. Verify that the Velero and Restic pods are running:

    $ oc get pods -n openshift-migration

1.4.3. Launching the MTC web console

You can launch the MTC web console in a browser.

Procedure

  1. Log in to the OpenShift Container Platform cluster on which you have installed MTC.
  2. Obtain the MTC web console URL by entering the following command:

    $ oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'

    The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com.

  3. Launch a browser and navigate to the MTC web console.

    Note

    If you try to access the MTC web console immediately after installing the MTC Operator, the console may not load because the Operator is still configuring the cluster. Wait a few minutes and retry.

  4. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster’s API server. The web page guides you through the process of accepting the remaining certificates.
  5. Log in with your OpenShift Container Platform username and password.

1.5. Upgrading the Migration Toolkit for Containers

You can upgrade the Migration Toolkit for Containers (MTC) by installing the latest MTC Operator.

1.5.1. Upgrading the MTC Operator on an OpenShift Container Platform 4 cluster

You can upgrade to MTC 1.3 on an OpenShift Container Platform 4 cluster by deleting the MigrationController custom resource (CR), uninstalling the CAM Operator, and then installing the MTC Operator.

Procedure

  1. Delete the MigrationController CR:

    $ oc delete migrationcontroller -n openshift-migration migration-controller
  2. In the OpenShift Container Platform console, navigate to Operators > Installed Operators.
  3. Click CAM Operator.
  4. On the right side of the Operator Details page, select Uninstall Operator from the Actions list.
  5. Select Uninstall. This Operator stops running and no longer receives updates.
  6. Navigate to Operators → OperatorHub.
  7. Use the Filter by keyword field to find the MTC Operator.
  8. Select the MTC Operator and click Install.
  9. On the Install Operator page, click Install.

    On the Installed Operators page, verify that the MTC Operator appears in the openshift-migration project with the status Succeeded.

1.5.2. Upgrading the MTC Operator on an OpenShift Container Platform 3 cluster

You can upgrade MTC on an OpenShift Container Platform 3 cluster by downloading the latest operator.yml file and replacing the existing MTC Operator.

Note

If you remove and recreate the namespace, you must update the cluster’s service account token in the MTC web console.
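
If you need to retrieve the service account token again, a command such as the following can be used; it assumes the default migration-controller service account in the openshift-migration namespace:

    $ oc sa get-token migration-controller -n openshift-migration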

Procedure

  1. Log in to registry.redhat.io with your Red Hat Customer Portal credentials:

    $ sudo podman login registry.redhat.io
  2. Download the latest operator.yml file:

    $ sudo podman cp $(sudo podman create registry.redhat.io/rhmtc/openshift-migration-rhel7-operator:v1.3.0):/operator.yml ./
  3. Replace the MTC Operator:

    $ oc replace --force -f operator.yml
  4. If you are upgrading from version 1.1.2 or earlier, delete the Restic Pod to apply the changes:

    1. Get the Restic Pod name:

      $ oc get pod -n openshift-migration | grep restic
    2. Delete the Restic Pod or Pods:

      $ oc delete pod <restic_pod>
  5. If you are upgrading from version 1.2 or later, scale the migration-operator deployment to apply the changes:

    1. Scale the migration-operator deployment to 0:

      $ oc scale -n openshift-migration --replicas=0 deployment/migration-operator
    2. Scale the migration-operator deployment to 1 to apply the changes:

      $ oc scale -n openshift-migration --replicas=1 deployment/migration-operator
  6. Verify that the operator was upgraded to the latest version:

    $ oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F ":" '{ print $NF }'

1.6. Configuring a replication repository

You must configure an object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster.

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.

The following storage providers are supported:

  • Multi-Cloud Object Gateway (MCG)
  • Amazon Web Services (AWS) S3
  • Google Cloud Provider (GCP)
  • Microsoft Azure
  • Generic S3 object storage, for example, Minio or Ceph S3

The source and target clusters must have network access to the replication repository during migration.

In a restricted environment, you can create an internally hosted replication repository. If you use a proxy server, you must ensure that your replication repository is allowed.

1.6.1. Configuring a Multi-Cloud Object Gateway storage bucket as a replication repository

You can install the OpenShift Container Storage Operator and configure a Multi-Cloud Object Gateway (MCG) storage bucket as a replication repository.

1.6.1.1. Installing the OpenShift Container Storage Operator

You can install the OpenShift Container Storage Operator from OperatorHub.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Use Filter by keyword (in this case, OCS) to find the OpenShift Container Storage Operator.
  3. Select the OpenShift Container Storage Operator and click Install.
  4. Select an Update Channel, Installation Mode, and Approval Strategy.
  5. Click Install.

    On the Installed Operators page, the OpenShift Container Storage Operator appears in the openshift-storage project with the status Succeeded.

1.6.1.2. Creating the Multi-Cloud Object Gateway storage bucket

You can create the Multi-Cloud Object Gateway (MCG) storage bucket’s Custom Resources (CRs).

Procedure

  1. Log in to the OpenShift Container Platform cluster:

    $ oc login
  2. Create the NooBaa CR configuration file, noobaa.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: NooBaa
    metadata:
      name: noobaa
      namespace: openshift-storage
    spec:
      dbResources:
        requests:
          cpu: 0.5 1
          memory: 1Gi
      coreResources:
        requests:
          cpu: 0.5 2
          memory: 1Gi
    1 2
    For a very small cluster, you can change the cpu value to 0.1.
  3. Create the NooBaa object:

    $ oc create -f noobaa.yml
  4. Create the BackingStore CR configuration file, bs.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: BackingStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: mcg-pv-pool-bs
      namespace: openshift-storage
    spec:
      pvPool:
        numVolumes: 3 1
        resources:
          requests:
            storage: 50Gi 2
        storageClass: gp2 3
      type: pv-pool
    1
    Specify the number of volumes in the PV pool.
    2
    Specify the size of the volumes.
    3
    Specify the storage class.
  5. Create the BackingStore object:

    $ oc create -f bs.yml
  6. Create the BucketClass CR configuration file, bc.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: BucketClass
    metadata:
      labels:
        app: noobaa
      name: mcg-pv-pool-bc
      namespace: openshift-storage
    spec:
      placementPolicy:
        tiers:
        - backingStores:
          - mcg-pv-pool-bs
          placement: Spread
  7. Create the BucketClass object:

    $ oc create -f bc.yml
  8. Create the ObjectBucketClaim CR configuration file, obc.yml, with the following content:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: migstorage
      namespace: openshift-storage
    spec:
      bucketName: migstorage 1
      storageClassName: openshift-storage.noobaa.io
      additionalConfig:
        bucketclass: mcg-pv-pool-bc
    1
    Record the bucket name for adding the replication repository to the MTC web console.
  9. Create the ObjectBucketClaim object:

    $ oc create -f obc.yml
  10. Watch the resource creation process to verify that the ObjectBucketClaim status is Bound:

    $ watch -n 30 'oc get -n openshift-storage objectbucketclaim migstorage -o yaml'

    This process can take five to ten minutes.

  11. Obtain and record the following values, which are required when you add the replication repository to the MTC web console:

    • S3 endpoint:

      $ oc get route -n openshift-storage s3
    • S3 provider access key:

      $ oc get secret -n openshift-storage migstorage -o go-template='{{ .data.AWS_ACCESS_KEY_ID }}' | base64 -d
    • S3 provider secret access key:

      $ oc get secret -n openshift-storage migstorage -o go-template='{{ .data.AWS_SECRET_ACCESS_KEY }}' | base64 -d
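
Optionally, you can capture these values in shell variables so that they are easy to paste into the MTC web console. The following is a minimal sketch that reuses only the s3 route, the migstorage secret, and the field names shown above; the jsonpath expression extracts the host name of the route:

$ S3_ENDPOINT=$(oc get route -n openshift-storage s3 -o jsonpath='{.spec.host}')
$ S3_ACCESS_KEY=$(oc get secret -n openshift-storage migstorage -o go-template='{{ .data.AWS_ACCESS_KEY_ID }}' | base64 -d)
$ S3_SECRET_KEY=$(oc get secret -n openshift-storage migstorage -o go-template='{{ .data.AWS_SECRET_ACCESS_KEY }}' | base64 -d)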

1.6.2. Configuring an AWS S3 storage bucket as a replication repository

You can configure an AWS S3 storage bucket as a replication repository.

Prerequisites

  • The AWS S3 storage bucket must be accessible to the source and target clusters.
  • You must have the AWS CLI installed.
  • If you are using the snapshot copy method:

    • You must have access to EC2 Elastic Block Storage (EBS).
    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Create an AWS S3 bucket:

    $ aws s3api create-bucket \
        --bucket <bucket_name> \ 1
        --region <bucket_region> 2
    1
    Specify your S3 bucket name.
    2
    Specify your S3 bucket region, for example, us-east-1.
  2. Create the IAM user velero:

    $ aws iam create-user --user-name velero
  3. Create an EC2 EBS snapshot policy:

    $ cat > velero-ec2-snapshot-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeVolumes",
                    "ec2:DescribeSnapshots",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:CreateSnapshot",
                    "ec2:DeleteSnapshot"
                ],
                "Resource": "*"
            }
        ]
    }
    EOF
  4. Create an AWS S3 access policy for one or for all S3 buckets:

    $ cat > velero-s3-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:PutObject",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts"
                ],
                "Resource": [
                    "arn:aws:s3:::<bucket_name>/*" 1
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation",
                    "s3:ListBucketMultipartUploads"
                ],
                "Resource": [
                    "arn:aws:s3:::<bucket_name>" 2
                ]
            }
        ]
    }
    EOF
    1 2
    To grant access to a single S3 bucket, specify the bucket name. To grant access to all AWS S3 buckets, specify * instead of a bucket name as in the following example:

    Example output

    "Resource": [
        "arn:aws:s3:::*"

  5. Attach the EC2 EBS policy to velero:

    $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero-ebs \
      --policy-document file://velero-ec2-snapshot-policy.json
  6. Attach the AWS S3 policy to velero:

    $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero-s3 \
      --policy-document file://velero-s3-policy.json
  7. Create an access key for velero:

    $ aws iam create-access-key --user-name velero
    {
      "AccessKey": {
            "UserName": "velero",
            "Status": "Active",
            "CreateDate": "2017-07-31T22:24:41.576Z",
            "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, 1
            "AccessKeyId": <AWS_ACCESS_KEY_ID> 2
        }
    }
    1 2
    Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID for adding the AWS repository to the MTC web console.
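
As an alternative to copying the values from the JSON output, you can extract them directly with the --query option of the AWS CLI when you create the access key. This is a sketch only and assumes the velero user created in this procedure:

$ CREDENTIALS=$(aws iam create-access-key --user-name velero \
    --query 'AccessKey.[AccessKeyId,SecretAccessKey]' --output text)
$ AWS_ACCESS_KEY_ID=$(echo $CREDENTIALS | awk '{print $1}')
$ AWS_SECRET_ACCESS_KEY=$(echo $CREDENTIALS | awk '{print $2}')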

1.6.3. Configuring a Google Cloud Provider storage bucket as a replication repository

You can configure a Google Cloud Provider (GCP) storage bucket as a replication repository.

Prerequisites

  • The GCP storage bucket must be accessible to the source and target clusters.
  • You must have gsutil installed.
  • If you are using the snapshot copy method:

    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Run gsutil init to log in:

    Example output

    Welcome! This command will take you through the configuration of gcloud.
    
    Your current configuration has been set to: [default]
    
    To continue, you must login. Would you like to login (Y/n)?

  2. Set the BUCKET variable:

    $ BUCKET=<bucket_name> 1
    1
    Specify your bucket name.
  3. Create a storage bucket:

    $ gsutil mb gs://$BUCKET/
  4. Set the PROJECT_ID variable to your active project:

    $ PROJECT_ID=$(gcloud config get-value project)
  5. Create a velero IAM service account:

    $ gcloud iam service-accounts create velero \
        --display-name "Velero Storage"
  6. Create the SERVICE_ACCOUNT_EMAIL variable:

    $ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
      --filter="displayName:Velero Storage" \
      --format 'value(email)')
  7. Create the ROLE_PERMISSIONS variable:

    $ ROLE_PERMISSIONS=(
        compute.disks.get
        compute.disks.create
        compute.disks.createSnapshot
        compute.snapshots.get
        compute.snapshots.create
        compute.snapshots.useReadOnly
        compute.snapshots.delete
        compute.zones.get
    )
  8. Create the velero.server custom role:

    $ gcloud iam roles create velero.server \
        --project $PROJECT_ID \
        --title "Velero Server" \
        --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
  9. Add IAM policy binding to the project:

    $ gcloud projects add-iam-policy-binding $PROJECT_ID \
        --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
        --role projects/$PROJECT_ID/roles/velero.server
  10. Update the IAM service account:

    $ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
  11. Save the IAM service account keys to the credentials-velero file in the current directory:

    $ gcloud iam service-accounts keys create credentials-velero \
      --iam-account $SERVICE_ACCOUNT_EMAIL
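
Optionally, you can confirm that the service account key was created and review the generated file before adding the repository to the MTC web console. This is a minimal verification sketch that reuses the variables defined in this procedure:

$ gcloud iam service-accounts keys list --iam-account $SERVICE_ACCOUNT_EMAIL
$ cat credentials-velero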

1.6.4. Configuring a Microsoft Azure Blob storage container as a replication repository

You can configure a Microsoft Azure Blob storage container as a replication repository.

Prerequisites

  • You must have an Azure storage account.
  • You must have the Azure CLI installed.
  • The Azure Blob storage container must be accessible to the source and target clusters.
  • If you are using the snapshot copy method:

    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Set the AZURE_RESOURCE_GROUP variable:

    $ AZURE_RESOURCE_GROUP=Velero_Backups
  2. Create an Azure resource group:

    $ az group create -n $AZURE_RESOURCE_GROUP --location <CentralUS> 1
    1
    Specify your location.
  3. Set the AZURE_STORAGE_ACCOUNT_ID variable:

    $ AZURE_STORAGE_ACCOUNT_ID=velerobackups
  4. Create an Azure storage account:

    $ az storage account create \
      --name $AZURE_STORAGE_ACCOUNT_ID \
      --resource-group $AZURE_RESOURCE_GROUP \
      --sku Standard_GRS \
      --encryption-services blob \
      --https-only true \
      --kind BlobStorage \
      --access-tier Hot
  5. Set the BLOB_CONTAINER variable:

    $ BLOB_CONTAINER=velero
  6. Create an Azure Blob storage container:

    $ az storage container create \
      -n $BLOB_CONTAINER \
      --public-access off \
      --account-name $AZURE_STORAGE_ACCOUNT_ID
  7. Create a service principal and credentials for velero:

    $ AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` \
      AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` \
      AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" --role "Contributor" --query 'password' -o tsv` \
      AZURE_CLIENT_ID=`az ad sp list --display-name "velero" --query '[0].appId' -o tsv`
  8. Save the service principal credentials in the credentials-velero file:

    $ cat << EOF  > ./credentials-velero
    AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
    AZURE_TENANT_ID=${AZURE_TENANT_ID}
    AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
    AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
    AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
    AZURE_CLOUD_NAME=AzurePublicCloud
    EOF
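
Optionally, you can verify the storage container and review the generated credentials file before adding the repository to the MTC web console. This is a sketch that reuses the variables defined in this procedure:

$ az storage container show -n $BLOB_CONTAINER --account-name $AZURE_STORAGE_ACCOUNT_ID
$ cat ./credentials-velero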

1.7. Migrating your applications

You must add your clusters and a replication repository to the MTC web console. Then, you can create and run a migration plan.

If your cluster or replication repository are secured with self-signed certificates, you can create a CA certificate bundle file or disable SSL verification.

1.7.1. Creating a CA certificate bundle file

If you use a self-signed certificate to secure a cluster or a replication repository, certificate verification might fail with the following error message: Certificate signed by unknown authority.

You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository.

Procedure

Download a CA certificate from a remote endpoint and save it as a CA bundle file:

$ echo -n | openssl s_client -connect <host_FQDN>:<port> \ 1
  | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2
1
Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443.
2
Specify the name of the CA bundle file.
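
Optionally, you can inspect the downloaded certificate to confirm that it belongs to the expected endpoint before you upload it. This is a sketch; openssl x509 displays only the first certificate in the bundle file:

$ openssl x509 -in <ca_bundle.cert> -noout -subject -issuer -dates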

1.7.2. Configuring a migration plan

You can configure a migration plan to suit your needs by increasing the number of objects migrated or excluding resources from migration.

1.7.2.1. Increasing Migration Controller limits for large migrations

You can increase the Migration Controller limits on migration objects and container resources for large migrations.

Important

You must test these changes before you perform a migration in a production environment.

Procedure

  1. Edit the Migration Controller manifest:

    $ oc edit migrationcontroller -n openshift-migration
  2. Update the following parameters:

    ...
    mig_controller_limits_cpu: "1" 1
    mig_controller_limits_memory: "10Gi" 2
    ...
    mig_controller_requests_cpu: "100m" 3
    mig_controller_requests_memory: "350Mi" 4
    ...
    mig_pv_limit: 100 5
    mig_pod_limit: 100 6
    mig_namespace_limit: 10 7
    ...
    1
    Specifies the number of CPUs available to the Migration Controller.
    2
    Specifies the amount of memory available to the Migration Controller.
    3
    Specifies the number of CPU units available for Migration Controller requests. 100m represents 0.1 CPU units (100 * 1e-3).
    4
    Specifies the amount of memory available for Migration Controller requests.
    5
    Specifies the number of PVs that can be migrated.
    6
    Specifies the number of pods that can be migrated.
    7
    Specifies the number of namespaces that can be migrated.
  3. Create a migration plan that uses the updated parameters to verify the changes.

    If your migration plan exceeds the Migration Controller limits, the MTC console displays a warning message when you save the migration plan.
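
Optionally, you can confirm from the CLI that the new resource limits were applied, assuming that the Operator propagates the values to the migration-controller deployment. This is a verification sketch only:

$ oc -n openshift-migration get deployment migration-controller \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'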

1.7.2.2. Excluding resources from a migration plan

You can exclude resources, for example, ImageStreams, persistent volumes (PVs), or subscriptions, from a migration plan in order to reduce the load or to migrate images or PVs with a different tool.

Procedure

  1. Edit the Migration Controller CR:

    $ oc edit migrationcontroller -n openshift-migration
  2. Update the spec section by adding a parameter to exclude specific resources or by adding a resource to the excluded_resources parameter if it does not have its own exclusion parameter:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: migration-controller
      namespace: openshift-migration
    spec:
      disable_image_migration: true 1
      disable_pv_migration: true 2
      ...
      excluded_resources: 3
      - imagetags
      - templateinstances
      - clusterserviceversions
      - packagemanifests
      - subscriptions
      - servicebrokers
      - servicebindings
      - serviceclasses
      - serviceinstances
      - serviceplans
    1
    Add disable_image_migration: true to exclude imagestreams from the migration. Do not edit the excluded_resources parameter. imagestreams is added to excluded_resources when the Migration Controller Pod restarts.
    2
    Add disable_pv_migration: true to exclude PVs from the migration plan. Do not edit the excluded_resources parameter. persistentvolumes and persistentvolumeclaims are added to excluded_resources when the Migration Controller Pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan.
    3
    You can add OpenShift Container Platform resources to the excluded_resources list. Do not delete any of the default excluded resources. These resources are known to be problematic for migration.
  3. Wait two minutes for the Migration Controller Pod to restart so that the changes are applied.
  4. Verify that the resource is excluded:

    $ oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1

    The output contains the excluded resources, as shown in the following example:

        - name: EXCLUDED_RESOURCES
          value:
          imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims

1.7.3. Adding a cluster to the MTC web console

You can add a cluster to the MTC web console.

Prerequisites

If you are using Azure snapshots to copy data:

  • You must provide the Azure resource group name when you add the source cluster.
  • The source and target clusters must be in the same Azure resource group and in the same location.

Procedure

  1. Log in to the cluster.
  2. Obtain the service account token:

    $ oc sa get-token migration-controller -n openshift-migration

    Example output

    eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ

  3. Log in to the MTC web console.
  4. In the Clusters section, click Add cluster.
  5. Fill in the following fields:

    • Cluster name: May contain lower-case letters (a-z) and numbers (0-9). Must not contain spaces or international characters.
    • Url: URL of the cluster’s API server, for example, https://<master1.example.com>:8443.
    • Service account token: String that you obtained from the source cluster.
    • Azure cluster: Optional. Select it if you are using Azure snapshots to copy your data.
    • Azure resource group: This field appears if Azure cluster is checked.
    • If you use a custom CA bundle, click Browse and browse to the CA bundle file.
  6. Click Add cluster.

    The cluster appears in the Clusters section of the MTC web console.

1.7.4. Adding a replication repository to the MTC web console

You can add an object storage bucket as a replication repository to the MTC web console.

Prerequisites

  • You must configure an object storage bucket for migrating the data.

Procedure

  1. Log in to the MTC web console.
  2. In the Replication repositories section, click Add repository.
  3. Select a Storage provider type and fill in the following fields:

    • AWS for AWS S3, MCG, and generic S3 providers:

      • Replication repository name: Specify the replication repository name in the MTC web console.
      • S3 bucket name: Specify the name of the S3 bucket you created.
      • S3 bucket region: Specify the S3 bucket region. Required for AWS S3. Optional for other S3 providers.
      • S3 endpoint: Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com>. Required for a generic S3 provider. You must use the https:// prefix.
      • S3 provider access key: Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG.
      • S3 provider secret access key: Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG.
      • Require SSL verification: Clear this check box if you are using a generic S3 provider.
      • If you use a custom CA bundle, click Browse and browse to the Base64-encoded CA bundle file.
    • GCP:

      • Replication repository name: Specify the replication repository name in the MTC web console.
      • GCP bucket name: Specify the name of the GCP bucket.
      • GCP credential JSON blob: Specify the string in the credentials-velero file.
    • Azure:

      • Replication repository name: Specify the replication repository name in the MTC web console.
      • Azure resource group: Specify the resource group of the Azure Blob storage.
      • Azure storage account name: Specify the Azure Blob storage account name.
      • Azure credentials - INI file contents: Specify the string in the credentials-velero file.
  4. Click Add repository and wait for connection validation.
  5. Click Close.

    The new repository appears in the Replication repositories section.

1.7.5. Creating a migration plan in the MTC web console

You can create a migration plan in the MTC web console.

Prerequisites

  • The MTC web console must contain the following:

    • Source cluster
    • Target cluster
    • Replication repository
  • The source and target clusters must have network access to each other and to the replication repository.
  • If you use snapshots to copy data, the source and target clusters must run on the same cloud provider (AWS, GCP, or Azure) and be located in the same region.

Procedure

  1. Log in to the MTC web console.
  2. In the Plans section, click Add plan.
  3. Enter the Plan name and click Next.

    The Plan name can contain up to 253 lower-case alphanumeric characters (a-z, 0-9). It must not contain spaces or underscores (_).

  4. Select a Source cluster.
  5. Select a Target cluster.
  6. Select a Replication repository.
  7. Select the projects to be migrated and click Next.
  8. Select Copy or Move for the PVs:

    • Copy copies the data in a source cluster’s PV to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster.

      Optional: You can verify data copied with the file system method by selecting Verify copy. This option generates a checksum for each source file and checks it after restoration. The operation significantly reduces performance.

    • Move unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
  9. Click Next.
  10. Select a Copy method for the PVs:

    • Snapshot backs up and restores the disk using the cloud provider’s snapshot functionality. It is significantly faster than Filesystem.

      Note

      The storage and clusters must be in the same region and the storage class must be compatible.

    • Filesystem copies the data files from the source disk to a newly created target disk.
  11. Select a Storage class for the PVs.

    If you selected the Filesystem copy method, you can change the storage class during migration, for example, from Red Hat Gluster Storage or NFS storage to Red Hat Ceph Storage.

  12. Click Next.
  13. If you want to add a migration hook, click Add Hook and perform the following steps:

    1. Specify the name of the hook.
    2. Select Ansible playbook to use your own playbook or Custom container image for a hook written in another language.
    3. Click Browse to upload the playbook.
    4. Optional: If you are not using the default Ansible runtime image, specify your custom Ansible image.
    5. Specify the cluster on which you want the hook to run.
    6. Specify the service account name.
    7. Specify the namespace.
    8. Select the migration step at which you want the hook to run:

      • PreBackup: Before backup tasks are started on the source cluster
      • PostBackup: After backup tasks are complete on the source cluster
      • PreRestore: Before restore tasks are started on the target cluster
      • PostRestore: After restore tasks are complete on the target cluster
  14. Click Add.

    You can add up to four hooks to a migration plan, assigning each hook to a different migration step.

  15. Click Finish.
  16. Click Close.

    The migration plan appears in the Plans section.

1.7.6. Running a migration plan in the MTC web console

You can stage or migrate applications and data with the migration plan you created in the MTC web console.

Prerequisites

The MTC web console must contain the following:

  • Source cluster
  • Target cluster
  • Replication repository
  • Valid migration plan

Procedure

  1. Log in to the source cluster.
  2. Delete old images:

    $ oc adm prune images
  3. Log in to the MTC web console.
  4. Select a migration plan.
  5. Click Stage to copy data from the source cluster to the target cluster without stopping the application.

    You can run Stage multiple times to reduce the actual migration time.

  6. When you are ready to migrate the application workload, click Migrate.

    Migrate stops the application workload on the source cluster and recreates its resources on the target cluster.

  7. Optional: In the Migrate window, you can select Do not stop applications on the source cluster during migration.
  8. Click Migrate.
  9. Optional: To stop a migration in progress, click the Options menu kebab and select Cancel.
  10. When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console:

    1. Click Home → Projects.
    2. Click the migrated project to view its status.
    3. In the Routes section, click Location to verify that the application is functioning, if applicable.
    4. Click Workloads → Pods to verify that the pods are running in the migrated namespace.
    5. Click Storage → Persistent volumes to verify that the migrated persistent volume is correctly provisioned.

1.8. Migrating your control plane settings

The Control Plane Migration Assistant (CPMA) is a CLI-based tool that assists you in migrating the control plane from OpenShift Container Platform 3.7 (or later) to 4.6. The CPMA processes the OpenShift Container Platform 3 configuration files and generates Custom Resource (CR) manifest files, which are consumed by OpenShift Container Platform 4.6 Operators.

1.8.1. Installing the Control Plane Migration Assistant

You can download the Control Plane Migration Assistant (CPMA) binary file from the Red Hat Customer Portal and install it on Linux, macOS, or Windows operating systems.

Procedure

  1. In the Red Hat Customer Portal, navigate to Downloads → Red Hat OpenShift Container Platform.
  2. On the Download Red Hat OpenShift Container Platform page, select Red Hat OpenShift Container Platform from the Product Variant list.
  3. Select CPMA 1.0 for RHEL 7 from the Version list. This binary works on RHEL 7 and RHEL 8.
  4. Click Download Now to download cpma for Linux and macOS or cpma.exe for Windows.
  5. Save the file in a directory defined as $PATH for Linux and macOS or %PATH% for Windows.
  6. For Linux, make the file executable:

    $ sudo chmod +x cpma

1.8.2. Using the Control Plane Migration Assistant

The Control Plane Migration Assistant (CPMA) generates CR manifests, which are consumed by OpenShift Container Platform 4.6 Operators, and a report that indicates which OpenShift Container Platform 3 features are supported fully, partially, or not at all.

CPMA can run in remote mode, retrieving the configuration files from the source cluster using SSH, or in local mode, using local copies of the source cluster’s configuration files.

Prerequisites

  • The source cluster must be OpenShift Container Platform 3.7 or later.
  • The source cluster must be updated to the latest synchronous release.
  • An environment health check must be run on the source cluster to confirm that there are no diagnostic errors or warnings.
  • The CPMA binary must be executable.
  • You must have cluster-admin privileges for the source cluster.

Procedure

  1. Log in to the OpenShift Container Platform 3 cluster:

    $ oc login https://<master1.example.com> 1
    1
    Specify the master node. You must be logged in to receive a token for the Kubernetes and OpenShift Container Platform APIs.
  2. Run the CPMA:

    $ cpma --manifests=false 1
    1
    The --manifests=false option runs the CPMA without generating CR manifests.

    Each prompt requires you to provide input, as in the following example:

    Example output

    ? Do you wish to save configuration for future use? true
    ? What will be the source for OCP3 config files? Remote host 1
    ? Path to crio config file /etc/crio/crio.conf
    ? Path to etcd config file /etc/etcd/etcd.conf
    ? Path to master config file /etc/origin/master/master-config.yaml
    ? Path to node config file /etc/origin/node/node-config.yaml
    ? Path to registries config file /etc/containers/registries.conf
    ? Do wish to find source cluster using KUBECONFIG or prompt it? KUBECONFIG
    ? Select cluster obtained from KUBECONFIG contexts master1-example-com:443
    ? Select master node master1.example.com
    ? SSH login root 2
    ? SSH Port 22
    ? Path to private SSH key /home/user/.ssh/openshift_key
    ? Path to application data, skip to use current directory .
    INFO[29 Aug 19 00:07 UTC] Starting manifest and report generation
    INFO[29 Aug 19 00:07 UTC] Transform:Starting for - API
    INFO[29 Aug 19 00:07 UTC] APITransform::Extract
    INFO[29 Aug 19 00:07 UTC] APITransform::Transform:Reports
    INFO[29 Aug 19 00:07 UTC] Transform:Starting for - Cluster
    INFO[29 Aug 19 00:08 UTC] ClusterTransform::Transform:Reports
    INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportQuotas
    INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportPVs
    INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportNamespaces
    INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportNodes
    INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportRBAC
    INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportStorageClasses
    INFO[29 Aug 19 00:08 UTC] Transform:Starting for - Crio
    INFO[29 Aug 19 00:08 UTC] CrioTransform::Extract
    WARN[29 Aug 19 00:08 UTC] Skipping Crio: No configuration file available
    INFO[29 Aug 19 00:08 UTC] Transform:Starting for - Docker
    INFO[29 Aug 19 00:08 UTC] DockerTransform::Extract
    INFO[29 Aug 19 00:08 UTC] DockerTransform::Transform:Reports
    INFO[29 Aug 19 00:08 UTC] Transform:Starting for - ETCD
    INFO[29 Aug 19 00:08 UTC] ETCDTransform::Extract
    INFO[29 Aug 19 00:08 UTC] ETCDTransform::Transform:Reports
    INFO[29 Aug 19 00:08 UTC] Transform:Starting for - OAuth
    INFO[29 Aug 19 00:08 UTC] OAuthTransform::Extract
    INFO[29 Aug 19 00:08 UTC] OAuthTransform::Transform:Reports
    INFO[29 Aug 19 00:08 UTC] Transform:Starting for - SDN
    INFO[29 Aug 19 00:08 UTC] SDNTransform::Extract
    INFO[29 Aug 19 00:08 UTC] SDNTransform::Transform:Reports
    INFO[29 Aug 19 00:08 UTC] Transform:Starting for - Image
    INFO[29 Aug 19 00:08 UTC] ImageTransform::Extract
    INFO[29 Aug 19 00:08 UTC] ImageTransform::Transform:Reports
    INFO[29 Aug 19 00:08 UTC] Transform:Starting for - Project
    INFO[29 Aug 19 00:08 UTC] ProjectTransform::Extract
    INFO[29 Aug 19 00:08 UTC] ProjectTransform::Transform:Reports
    INFO[29 Aug 19 00:08 UTC] Flushing reports to disk
    INFO[29 Aug 19 00:08 UTC] Report:Added: report.json
    INFO[29 Aug 19 00:08 UTC] Report:Added: report.html
    INFO[29 Aug 19 00:08 UTC] Successfully finished transformations

    1
    The Remote host option runs the CPMA in remote mode.
    2
    SSH login: The SSH user must have sudo permissions on the OpenShift Container Platform 3 cluster in order to access the configuration files.

    The CPMA creates the following files and directory in the current directory if you did not specify an output directory:

    • cpma.yaml file: Configuration options that you provided when you ran the CPMA
    • master1.example.com/: Configuration files from the master node
    • report.json: JSON-encoded report
    • report.html: HTML-encoded report
  3. Open the report.html file in a browser to view the CPMA report.
  4. If you generated CR manifests by running the CPMA without the --manifests=false option, apply the CR manifests to the OpenShift Container Platform 4.6 cluster, as shown in the following example:

    $ oc apply -f 100_CPMA-cluster-config-secret-htpasswd-secret.yaml

1.9. Troubleshooting

You can view the migration Custom Resources (CRs) and download logs to troubleshoot a failed migration.

If the application was stopped during the failed migration, you must roll it back manually in order to prevent data corruption.

Note

Manual rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster.

1.9.1. Viewing migration Custom Resources

The Migration Toolkit for Containers (MTC) creates the following Custom Resources (CRs):

migration architecture diagram

MigCluster (configuration, MTC cluster): Cluster definition

MigStorage (configuration, MTC cluster): Storage definition

MigPlan (configuration, MTC cluster): Migration plan

The MigPlan CR describes the source and target clusters, replication repository, and namespaces being migrated. It is associated with 0, 1, or many MigMigration CRs.

Note

Deleting a MigPlan CR deletes the associated MigMigration CRs.

BackupStorageLocation (configuration, MTC cluster): Location of Velero backup objects

VolumeSnapshotLocation (configuration, MTC cluster): Location of Velero volume snapshots

MigMigration (action, MTC cluster): Migration, created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR.

Backup (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster:

  • Backup CR #1 for Kubernetes objects
  • Backup CR #2 for PV data

Restore (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster:

  • Restore CR #1 (using Backup CR #2) for PV data
  • Restore CR #2 (using Backup CR #1) for Kubernetes objects

Procedure

  1. View the CR:

    $ oc get <cr> -n openshift-migration 1
    1
    Specify the migration CR, for example, migmigration.

    Example output

    NAME                                   AGE
    88435fe0-c9f8-11e9-85e6-5d593ce65e10   6m42s

  2. Inspect the migmigration CR:

    $ oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration

    The output is similar to the following examples.

MigMigration example output

name:         88435fe0-c9f8-11e9-85e6-5d593ce65e10
namespace:    openshift-migration
labels:       <none>
annotations:  touch: 3b48b543-b53e-4e44-9d34-33563f0f8147
apiVersion:  migration.openshift.io/v1alpha1
kind:         MigMigration
metadata:
  creationTimestamp:  2019-08-29T01:01:29Z
  generation:          20
  resourceVersion:    88179
  selfLink:           /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10
  uid:                 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
spec:
  migPlanRef:
    name:        socks-shop-mig-plan
    namespace:   openshift-migration
  quiescePods:  true
  stage:         false
status:
  conditions:
    category:              Advisory
    durable:               True
    lastTransitionTime:  2019-08-29T01:03:40Z
    message:               The migration has completed successfully.
    reason:                Completed
    status:                True
    type:                  Succeeded
  phase:                   Completed
  startTimestamp:         2019-08-29T01:01:29Z
events:                    <none>

Velero backup CR #2 example output that describes the PV data

apiVersion: velero.io/v1
kind: Backup
metadata:
  annotations:
    openshift.io/migrate-copy-phase: final
    openshift.io/migrate-quiesce-pods: "true"
    openshift.io/migration-registry: 172.30.105.179:5000
    openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6
  creationTimestamp: "2019-08-29T01:03:15Z"
  generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-
  generation: 1
  labels:
    app.kubernetes.io/part-of: migration
    migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
    migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
    velero.io/storage-location: myrepo-vpzq9
  name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7
  namespace: openshift-migration
  resourceVersion: "87313"
  selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7
  uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6
spec:
  excludedNamespaces: []
  excludedResources: []
  hooks:
    resources: []
  includeClusterResources: null
  includedNamespaces:
  - sock-shop
  includedResources:
  - persistentvolumes
  - persistentvolumeclaims
  - namespaces
  - imagestreams
  - imagestreamtags
  - secrets
  - configmaps
  - pods
  labelSelector:
    matchLabels:
      migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
  storageLocation: myrepo-vpzq9
  ttl: 720h0m0s
  volumeSnapshotLocations:
  - myrepo-wv6fx
status:
  completionTimestamp: "2019-08-29T01:02:36Z"
  errors: 0
  expiration: "2019-09-28T01:02:35Z"
  phase: Completed
  startTimestamp: "2019-08-29T01:02:35Z"
  validationErrors: null
  version: 1
  volumeSnapshotsAttempted: 0
  volumeSnapshotsCompleted: 0
  warnings: 0

Velero restore CR #2 example output that describes the Kubernetes resources

apiVersion: velero.io/v1
kind: Restore
metadata:
  annotations:
    openshift.io/migrate-copy-phase: final
    openshift.io/migrate-quiesce-pods: "true"
    openshift.io/migration-registry: 172.30.90.187:5000
    openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88
  creationTimestamp: "2019-08-28T00:09:49Z"
  generateName: e13a1b60-c927-11e9-9555-d129df7f3b96-
  generation: 3
  labels:
    app.kubernetes.io/part-of: migration
    migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88
    migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88
  name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx
  namespace: openshift-migration
  resourceVersion: "82329"
  selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx
  uid: 26983ec0-c928-11e9-825a-06fa9fb68c88
spec:
  backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f
  excludedNamespaces: null
  excludedResources:
  - nodes
  - events
  - events.events.k8s.io
  - backups.velero.io
  - restores.velero.io
  - resticrepositories.velero.io
  includedNamespaces: null
  includedResources: null
  namespaceMapping: null
  restorePVs: true
status:
  errors: 0
  failureReason: ""
  phase: Completed
  validationErrors: null
  warnings: 15

1.9.2. Downloading migration logs

You can download the Velero, Restic, and Migration controller logs in the MTC web console to troubleshoot a failed migration.

Procedure

  1. Log in to the MTC console.
  2. Click Plans to view the list of migration plans.
  3. Click the Options menu kebab of a specific migration plan and select Logs.
  4. Click Download Logs to download the logs of the Migration controller, Velero, and Restic for all clusters.
  5. To download a specific log:

    1. Specify the log options:

      • Cluster: Select the source, target, or MTC host cluster.
      • Log source: Select Velero, Restic, or Controller.
      • Pod source: Select the Pod name, for example, controller-manager-78c469849c-v6wcf

        The selected log is displayed.

        You can clear the log selection settings by changing your selection.

    2. Click Download Selected to download the selected log.

Optionally, you can access the logs by using the CLI, as in the following example:

$ oc logs <pod-name> -f -n openshift-migration 1
1
Specify the Pod name.
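
For example, to follow the Velero log from the CLI, you can locate the Velero Pod and stream its log. This sketch assumes that Velero runs in the openshift-migration namespace, as in the rest of this document:

$ oc get pods -n openshift-migration | grep velero
$ oc logs -f -n openshift-migration <velero_pod_name>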

1.9.3. Updating deprecated API GroupVersionKinds

In OpenShift Container Platform 4.6, some API GroupVersionKinds (GVKs) that are used by OpenShift Container Platform 3.x are deprecated.

If your source cluster uses deprecated GVKs, the following warning is displayed when you create a migration plan: Some namespaces contain GVKs incompatible with destination cluster. You can click See details to view the namespace and the incompatible GVKs.

Note

This warning does not block the migration.

During migration, the deprecated GVKs are saved in the Velero Backup Custom Resource (CR) #1 for Kubernetes objects. You can download the Backup CR contents, extract the files of the deprecated GVKs, and update them with the oc convert command. Then you create the updated GVKs on the target cluster.

Procedure

  1. Run the migration plan.
  2. View the MigPlan CR:

    $ oc describe migplan <migplan_name> -n openshift-migration 1
    1
    Specify the name of the migration plan.

    The output is similar to the following:

    metadata:
      ...
      uid: 79509e05-61d6-11e9-bc55-02ce4781844a 1
    status:
      ...
      conditions:
      - category: Warn
        lastTransitionTime: 2020-04-30T17:16:23Z
        message: 'Some namespaces contain GVKs incompatible with destination cluster.
          See: `incompatibleNamespaces` for details'
        status: "True"
        type: GVKsIncompatible
      incompatibleNamespaces:
      - gvks:
        - group: batch
          kind: cronjobs 2
          version: v2alpha1
        - group: batch
          kind: scheduledjobs 3
          version: v2alpha1
    1
    Record the MigPlan UID.
    2 3
    Record the deprecated GVKs.
  3. Get the MigMigration name associated with the MigPlan UID:

    $ oc get migmigration -o json | jq -r '.items[] | select(.metadata.ownerReferences[].uid=="<migplan_uid>") | .metadata.name' 1
    1
    Specify the MigPlan UID.
  4. Get the MigMigration UID associated with the MigMigration name:

    $ oc get migmigration <migmigration_name> -o jsonpath='{.metadata.uid}' 1
    1
    Specify the MigMigration name.
  5. Get the Velero Backup name associated with the MigMigration UID:

    $ oc get backup.velero.io --selector migration-initial-backup="<migmigration_uid>" -o jsonpath={.items[*].metadata.name} 1
    1
    Specify the MigMigration UID.
  6. Download the contents of the Velero Backup to your local machine:

    • For AWS S3:

      $ aws s3 cp s3://<bucket_name>/velero/backups/<backup_name> <backup_local_dir> --recursive 1
      1
      Specify the bucket, backup name, and your local backup directory name.
    • For GCP:

      $ gsutil cp gs://<bucket_name>/velero/backups/<backup_name> <backup_local_dir> --recursive 1
      1
      Specify the bucket, backup name, and your local backup directory name.
    • For Azure:

      $ azcopy copy 'https://velerobackups.blob.core.windows.net/velero/backups/<backup_name>' '<backup_local_dir>' --recursive 1
      1
      Specify the backup name and your local backup directory name.
  7. Extract the Velero Backup archive file:

    $ tar -xvf <backup_local_dir>/<backup_name>.tar.gz -C <backup_local_dir>
  8. Run oc convert in offline mode on each deprecated GVK:

    $ oc convert -f <backup_local_dir>/resources/<gvk>.json 1
    1
    Specify the deprecated GVK.
  9. Create the converted GVK on the target cluster:

    $ oc create -f <gvk>.json 1
    1
    Specify the converted GVK.
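
If a namespace contains many objects of a deprecated GVK, you can combine steps 8 and 9 in a loop. The following is a sketch only; it assumes that the extracted JSON files for the deprecated GVK are located under the path shown in step 8:

$ for file in <backup_local_dir>/resources/<gvk>*.json; do
    # Convert the deprecated GVK and create it on the target cluster.
    oc convert -f "$file" | oc create -f -
  done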

1.9.4. Error messages and resolutions

This section describes common error messages and how to resolve their underlying causes.

1.9.4.1. CA certificate error in the MTC console

If a CA certificate error message is displayed the first time you try to access the MTC console, the likely cause is the use of self-signed CA certificates in one of the clusters.

To resolve this issue, navigate to the oauth-authorization-server URL displayed in the error message and accept the certificate. To resolve this issue permanently, add the certificate to the trust store of your web browser.

If an Unauthorized message is displayed after you have accepted the certificate, navigate to the MTC console and refresh the web page.

1.9.4.2. OAuth timeout error in the MTC console

If a connection has timed out message is displayed in the MTC console after you have accepted a self-signed certificate, you can determine the cause of the timeout as follows.

Procedure

  1. Navigate to the MTC console and inspect the elements with the browser web inspector.
  2. Check the migration-ui pod log:

    $ oc logs migration-ui-<86b679ffc7-h6l6v> -n openshift-migration

1.9.4.3. PodVolumeBackups timeout error in Velero log

If a migration fails because Restic times out, the following error is displayed in the Velero log.

Example output

level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1

The default value of restic_timeout is one hour. You can increase this parameter for large migrations, keeping in mind that a higher value may delay the return of error messages.

Procedure

  1. In the OpenShift Container Platform web console, navigate to OperatorsInstalled Operators.
  2. Click MTC Operator.
  3. In the MigrationController tab, click migration-controller.
  4. In the YAML tab, update the following parameter value:

    spec:
      restic_timeout: 1h 1
    1
    Valid units are h (hours), m (minutes), and s (seconds), for example, 3h30m15s.
  5. Click Save.
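
Alternatively, you can update the parameter from the CLI by patching the MigrationController CR, as in the following sketch, which sets restic_timeout to three hours:

$ oc patch migrationcontroller migration-controller -n openshift-migration \
  --type merge -p '{"spec":{"restic_timeout":"3h"}}'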

1.9.4.4. ResticVerifyErrors in the MigMigration Custom Resource

If data verification fails when migrating a PV with the file system data copy method, the following error is displayed in the MigMigration Custom Resource (CR).

Example output

status:
  conditions:
  - category: Warn
    durable: true
    lastTransitionTime: 2020-04-16T20:35:16Z
    message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>`
      for details 1
    status: "True"
    type: ResticVerifyErrors 2

1
The error message identifies the Restore CR name.
2
ResticVerifyErrors is a general error warning type that includes verification errors.
Note

A data verification error does not cause the migration process to fail.

You can check the Restore CR to identify the source of the data verification error.

Procedure

  1. Log in to the target cluster.
  2. View the Restore CR:

    $ oc describe restore <registry-example-migration-rvwcm> -n openshift-migration

    The output identifies the PV with PodVolumeRestore errors.

    Example output

    status:
      phase: Completed
      podVolumeRestoreErrors:
      - kind: PodVolumeRestore
        name: <registry-example-migration-rvwcm-98t49>
        namespace: openshift-migration
      podVolumeRestoreResticErrors:
      - kind: PodVolumeRestore
        name: <registry-example-migration-rvwcm-98t49>
        namespace: openshift-migration

  3. View the PodVolumeRestore CR:

    $ oc describe podvolumerestore <registry-example-migration-rvwcm-98t49> -n openshift-migration

    The output identifies the Restic pod that logged the errors.

    Example output

      completionTimestamp: 2020-05-01T20:49:12Z
      errors: 1
      resticErrors: 1
      ...
      resticPod: <restic-nr2v5>

  4. View the Restic pod log to locate the errors:

    $ oc logs -f <restic-nr2v5> -n openshift-migration

1.9.5. Manually rolling back a migration

If your application was stopped during a failed migration, you must roll it back manually in order to prevent data corruption in the PV.

This procedure is not required if the application was not stopped during migration, because the original application is still running on the source cluster.

Procedure

  1. On the target cluster, switch to the migrated project:

    $ oc project <project>
  2. Get the deployed resources:

    $ oc get all
  3. Delete the deployed resources to ensure that the application is not running on the target cluster and accessing data on the PVC:

    $ oc delete <resource_type>
  4. To stop a daemon set without deleting it, update the nodeSelector in the YAML file:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: hello-daemonset
    spec:
      selector:
          matchLabels:
            name: hello-daemonset
      template:
        metadata:
          labels:
            name: hello-daemonset
        spec:
          nodeSelector:
            role: worker 1
    1
    Specify a nodeSelector value that does not exist on any node.
  5. Update each PV’s reclaim policy so that unnecessary data is removed. During migration, the reclaim policy for bound PVs is Retain, to ensure that data is not lost when an application is removed from the source cluster. You can remove these PVs during rollback.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv0001
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain 1
      ...
    status:
      ...
    1
    Specify Recycle or Delete.
  6. On the source cluster, switch to your migrated project:

    $ oc project <project_name>
  7. Obtain the project’s deployed resources:

    $ oc get all
  8. Start one or more replicas of each deployed resource:

    $ oc scale --replicas=1 <resource_type>/<resource_name>
  9. Update the nodeSelector of the DaemonSet resource to its original value, if you changed it during the procedure.
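
If most of your workloads are Deployment or DeploymentConfig resources, you can scale them all back in a single command instead of scaling each resource individually. This is a sketch only; verify that each application starts correctly afterward:

$ oc scale --replicas=1 deployment --all -n <project_name>
$ oc scale --replicas=1 deploymentconfig --all -n <project_name>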

1.9.6. Using must-gather to collect data

You must run the must-gather tool if you open a customer support case on the Red Hat Customer Portal.

The openshift-migration-must-gather-rhel8 image collects migration-specific logs and data that are not collected by the default must-gather image.

Procedure

  1. Navigate to the directory where you want to store the must-gather data.
  2. Run the must-gather command:

    $ oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.3.0
  3. Remove authentication keys and other sensitive information.
  4. Create an archive file containing the contents of the must-gather data directory:

    $ tar cvaf must-gather.tar.gz must-gather.local.<uid>/
  5. Upload the compressed file as an attachment to your customer support case.
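
Optionally, before you upload the file, you can list the contents of the archive to confirm that no sensitive data remains, as in the following sketch:

$ tar -tzf must-gather.tar.gz | less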

1.9.7. Known issues

This release has the following known issues:

  • During migration, the Migration Toolkit for Containers (MTC) preserves the following namespace annotations:

    • openshift.io/sa.scc.mcs
    • openshift.io/sa.scc.supplemental-groups
    • openshift.io/sa.scc.uid-range

      These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. (BZ#1748440)

  • If an AWS bucket is added to the MTC web console and then deleted, its status remains True because the MigStorage CR is not updated. (BZ#1738564)
  • Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you may have to create them manually on the target cluster.
  • If a migration fails, the migration plan does not retain custom PV settings for quiesced pods. You must manually roll back the migration, delete the migration plan, and create a new migration plan with your PV settings. (BZ#1784899)
  • If a large migration fails because Restic times out, you can increase the restic_timeout parameter value (default: 1h) in the Migration Controller CR.
  • If you select the data verification option for PVs that are migrated with the file system copy method, performance is significantly slower.
  • If you are migrating data from NFS storage and root_squash is enabled, Restic maps to nfsnobody. The migration fails and a permission error is displayed in the Restic Pod log. You can resolve this issue by creating a supplemental group for Restic. (BZ#1873641)
  • If Velero has an invalid BackupStorageLocation during start-up, it will crash-loop until the invalid BackupStorageLocation is removed. This scenario is triggered by incorrect credentials, a non-existent S3 bucket, and other configuration errors. (BZ#1881707)

Chapter 2. Migrating from OpenShift Container Platform 4.1

2.1. Migration tools and prerequisites

You can migrate application workloads from OpenShift Container Platform 4.1 to 4.6 with the Migration Toolkit for Containers (MTC). MTC enables you to control the migration and to minimize application downtime.

Note

You can migrate between OpenShift Container Platform clusters of the same version, for example, from 4.1 to 4.1, as long as the source and target clusters are configured correctly.

The MTC web console and API, based on Kubernetes Custom Resources, enable you to migrate stateful and stateless application workloads at the granularity of a namespace.

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.

You can use migration hooks to run Ansible playbooks at certain points during the migration. The hooks are added when you create a migration plan.

2.1.1. Migration prerequisites

  • You must upgrade the source cluster to the latest z-stream release.
  • You must have cluster-admin privileges on all clusters.
  • The source and target clusters must have unrestricted network access to the replication repository.
  • The cluster on which the Migration controller is installed must have unrestricted access to the other clusters.
  • If your application uses images from the openshift namespace, the required versions of the images must be present on the target cluster.

    If the required images are not present, you must update the imagestreamtags references to use an available version that is compatible with your application. If the imagestreamtags cannot be updated, you can manually upload equivalent images to the application namespaces and update the applications to reference them.

The following imagestreamtags have been removed from OpenShift Container Platform 4.2:

  • dotnet:1.0, dotnet:1.1, dotnet:2.0
  • dotnet-runtime:2.0
  • mariadb:10.1
  • mongodb:2.4, mongodb:2.6
  • mysql:5.5, mysql:5.6
  • nginx:1.8
  • nodejs:0.10, nodejs:4, nodejs:6
  • perl:5.16, perl:5.20
  • php:5.5, php:5.6
  • postgresql:9.2, postgresql:9.4, postgresql:9.5
  • python:3.3, python:3.4
  • ruby:2.0, ruby:2.2

2.1.2. About the Migration Toolkit for Containers

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images from an OpenShift Container Platform source cluster to an OpenShift Container Platform 4.6 target cluster, using the MTC web console or the Kubernetes API.

Migrating an application with the MTC web console involves the following steps:

  1. Install the MTC Operator on all clusters.

    You can install the MTC Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry.

  2. Configure the replication repository, an intermediate object storage that MTC uses to migrate data.

    The source and target clusters must have network access to the replication repository during migration. In a restricted environment, you can use an internally hosted S3 storage repository. If you use a proxy server, you must ensure that the replication repository is whitelisted by the proxy.

  3. Add the source cluster to the MTC web console.
  4. Add the replication repository to the MTC web console.
  5. Create a migration plan, with one of the following data migration options:

    • Copy: MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster.

      migration PV copy
    • Move: MTC unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.

      Note

      Although the replication repository does not appear in this diagram, it is required for the actual migration.

      migration PV move
  6. Run the migration plan, with one of the following options:

    • Stage (optional) copies data to the target cluster without stopping the application.

      Staging can be run multiple times so that most of the data is copied to the target before migration. This minimizes the actual migration time and application downtime.

    • Migrate stops the application on the source cluster and recreates its resources on the target cluster. Optionally, you can migrate the workload without stopping the application.
(Diagram: OpenShift Container Platform 3 to 4 application migration)

2.1.3. About data copy methods

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.

2.1.3.1. File system copy method

MTC copies data files from the source cluster to the replication repository, and from there to the target cluster.

Table 2.1. File system copy method summary

Benefits:
  • Clusters can have different storage classes
  • Supported for all S3 storage providers
  • Optional data verification with checksum

Limitations:
  • Slower than the snapshot copy method
  • Optional data verification significantly reduces performance

2.1.3.2. Snapshot copy method

MTC copies a snapshot of the source cluster’s data to a cloud provider’s object storage, configured as a replication repository. The data is restored on the target cluster.

AWS, Google Cloud Provider, and Microsoft Azure support the snapshot copy method.

Table 2.2. Snapshot copy method summary

Benefits:
  • Faster than the file system copy method

Limitations:
  • Cloud provider must support snapshots.
  • Clusters must be on the same cloud provider.
  • Clusters must be in the same location or region.
  • Clusters must have the same storage class.
  • Storage class must be compatible with snapshots.

2.1.4. About migration hooks

You can use migration hooks to run Ansible playbooks at certain points during the migration. The hooks are added when you create a migration plan.

Note

If you do not want to use Ansible playbooks, you can create a custom container image and add it to a migration plan.

Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration.

A single migration hook runs on a source or target cluster at one of the following migration steps:

  • PreBackup: Before backup tasks are started on the source cluster
  • PostBackup: After backup tasks are complete on the source cluster
  • PreRestore: Before restore tasks are started on the target cluster
  • PostRestore: After restore tasks are complete on the target cluster

    You can assign one hook to each migration step, up to a maximum of four hooks for a single migration plan.

The default hook-runner image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:v1.3.0. This image is based on Ansible Runner and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. You can also create your own hook image with additional Ansible modules or tools.

The Ansible playbook is mounted on a hook container as a ConfigMap. The hook container runs as a Job on a cluster with a specified service account and namespace. The Job runs, even if the initial Pod is evicted or killed, until it reaches the default backoffLimit (6) or successful completion.
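
For reference, a hook playbook is ordinary Ansible that runs inside the hook container. The following is a minimal sketch of a playbook that you might upload for a PostRestore hook; the my-app namespace, the Deployment name, and the scaling task are illustrative assumptions, not part of MTC:

$ cat > postrestore-hook.yml <<EOF
- hosts: localhost
  gather_facts: false
  tasks:
  - name: Scale the migrated application up after restore
    k8s_scale:
      api_version: apps/v1
      kind: Deployment
      name: my-app
      namespace: my-app
      replicas: 2
EOF

The k8s_scale module works in the default hook-runner image because the image includes python-openshift. If your playbook requires additional Ansible modules or tools, build a custom hook image.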

2.2. Deploying the Migration Toolkit for Containers

You can install the MTC Operator on your OpenShift Container Platform 4.6 target cluster and 4.1 source cluster. The MTC Operator installs the Migration Toolkit for Containers (MTC) on the target cluster by default.

Note

Optional: You can configure the MTC Operator to install the MTC on an OpenShift Container Platform 3 cluster or on a remote cluster.

In a restricted environment, you can install the MTC Operator from a local mirror registry.

After you have installed the MTC Operator on your clusters, you can launch the MTC web console.

2.2.1. Installing the MTC Operator

You can install the MTC Operator with the Operator Lifecycle Manager (OLM) on an OpenShift Container Platform 4.6 target cluster and on an OpenShift Container Platform 4.1 source cluster.

2.2.1.1. Installing the MTC Operator on an OpenShift Container Platform 4.6 target cluster

You can install the MTC Operator on an OpenShift Container Platform 4.6 target cluster with the Operator Lifecycle Manager (OLM).

The MTC Operator installs the Migration Toolkit for Containers on the target cluster by default.

Procedure

  1. In the OpenShift Container Platform web console, click Operators > OperatorHub.
  2. Use the Filter by keyword field (in this case, Migration) to find the MTC Operator.
  3. Select the MTC Operator and click Install.
  4. On the Install Operator page, click Install.

    On the Installed Operators page, the MTC Operator appears in the openshift-migration project with the status Succeeded.

  5. Click MTC Operator.
  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  7. Click Create.
  8. Click Workloads > Pods to verify that the Controller Manager, Migration UI, Restic, and Velero pods are running.

2.2.1.2. Installing the MTC Operator on an OpenShift Container Platform 4.1 source cluster

You can install the MTC Operator on an OpenShift Container Platform 4 source cluster with the Operator Lifecycle Manager (OLM).

Procedure

  1. In the OpenShift Container Platform web console, click Catalog > OperatorHub.
  2. Use the Filter by keyword field (in this case, Migration) to find the MTC Operator.
  3. Select the MTC Operator and click Install.
  4. On the Install Operator page, click Install.

    On the Installed Operators page, the MTC Operator appears in the openshift-migration project with the status Succeeded.

  5. Click MTC Operator.
  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  7. Set the migration_controller and migration_ui parameters to false and add the deprecated_cors_configuration: true parameter to the spec stanza:

    spec:
      ...
      migration_controller: false
      migration_ui: false
      ...
      deprecated_cors_configuration: true
  8. Click Create.
  9. Click Workloads > Pods to verify that the Restic and Velero pods are running.

2.2.2. Installing the MTC Operator in a restricted environment

You can build a custom Operator catalog image for OpenShift Container Platform 4, push it to a local mirror image registry, and configure the Operator Lifecycle Manager to install the MTC Operator from the local registry.

2.2.2.1. Prerequisites

  • If you want to prune the default catalog and selectively mirror only a subset of Operators, install the opm CLI.

2.2.2.2. Disabling the default OperatorHub sources

Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. Before configuring OperatorHub to instead use local catalog sources in a restricted network environment, you must disable the default catalogs.

Procedure

  • Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub spec:

    $ oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
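
    You can confirm that the default sources are removed by listing the catalog sources in the openshift-marketplace namespace. After disabling, only custom catalog sources, if any, remain:

    $ oc get catalogsources -n openshift-marketplace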

2.2.2.3. Pruning an index image

An index image, based on the Operator Bundle Format, is a containerized snapshot of an Operator catalog. You can prune an index of all but a specified list of packages, creating a copy of the source index containing only the Operators that you want.

When configuring Operator Lifecycle Manager (OLM) to use mirrored content on restricted network OpenShift Container Platform clusters, use this pruning method if you want to only mirror a subset of Operators from the default catalogs.

For the steps in this procedure, the target registry is an existing mirror registry that is accessible by both your cluster and a workstation with unrestricted network access. This example also shows pruning the index image for the default redhat-operators catalog, but the process is the same for all index images.

Prerequisites

  • Workstation with unrestricted network access
  • podman version 1.4.4+
  • grpcurl
  • opm version 1.12.3+
  • Access to a registry that supports Docker v2-2

Procedure

  1. Authenticate with registry.redhat.io:

    $ podman login registry.redhat.io
  2. Authenticate with your target registry:

    $ podman login <target_registry>
  3. Determine the list of packages you want to include in your pruned index.

    1. Run the source index image that you want to prune in a container. For example:

      $ podman run -p50051:50051 \
          -it registry.redhat.io/redhat/redhat-operator-index:v4.6

      Example output

      Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.6...
      Getting image source signatures
      Copying blob ae8a0c23f5b1 done
      ...
      INFO[0000] serving registry                              database=/database/index.db port=50051

    2. In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index:

      $ grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out
    3. Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example:

      Example snippets of packages list

      ...
      {
        "name": "advanced-cluster-management"
      }
      ...
      {
        "name": "jaeger-product"
      }
      ...
      {
        "name": "quay-operator"
      }
      ...

    4. In the terminal session where you executed the podman run command, press Ctrl+C to stop the container process.
  4. Run the following command to prune the source index of all but the specified packages:

    $ opm index prune \
        -f registry.redhat.io/redhat/redhat-operator-index:v4.6 \1
        -p advanced-cluster-management,jaeger-product,quay-operator \2
        -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.6 3
    1
    Index to prune.
    2
    Comma-separated list of packages to keep.
    3
    Custom tag for new index image being built.
  5. Run the following command to push the new index image to your target registry:

    $ podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.6

    where <namespace> is any existing namespace on the registry. For example, you might create an olm-mirror namespace to push all mirrored content to.
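
    Optionally, you can verify the contents of the pruned index before mirroring it by serving it locally and listing its packages, reusing the podman and grpcurl commands from earlier in this procedure:

    $ podman run -p50051:50051 \
        -it <target_registry>:<port>/<namespace>/redhat-operator-index:v4.6
    $ grpcurl -plaintext localhost:50051 api.Registry/ListPackages

    Only the packages that you kept, such as advanced-cluster-management, jaeger-product, and quay-operator, should be listed.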

2.2.2.4. Mirroring an Operator catalog

You can mirror the Operator content of a Red Hat-provided catalog, or a custom catalog, into a container image registry using the oc adm catalog mirror command. The target registry must support Docker v2-2. For a cluster on a restricted network, this registry can be a registry that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation.

You must also mirror the Red Hat-provided index image, or push your own custom-built index image, to the target registry by using the oc image mirror command. You can then use the mirrored index image to create a CatalogSource that allows Operator Lifecycle Manager (OLM) to load the mirrored catalog onto your OpenShift Container Platform cluster.

For the steps in this procedure, the target registry is an existing mirror registry that is accessible by both your cluster and a workstation with unrestricted network access. This example also shows mirroring the default redhat-operators catalog, but the process is the same for all catalogs.

Prerequisites

  • Workstation with unrestricted network access
  • podman version 1.4.4+
  • Access to mirror registry that supports Docker v2-2
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json

Procedure

  1. On your workstation with unrestricted network access, use the podman login command to authenticate with your target mirror registry:

    $ podman login <mirror_registry>
  2. Authenticate with registry.redhat.io:

    $ podman login registry.redhat.io
  3. The oc adm catalog mirror command extracts the contents of an index image to generate the manifests required for mirroring. You can choose either of the following:

    • Allow the default behavior of the command to automatically mirror all of the image content from the index image to your mirror registry after generating manifests.
    • Add the --manifests-only flag to only generate the manifests required for mirroring, but do not actually mirror the image content to the registry yet. This can be useful for reviewing what will be mirrored, and it allows you to make any changes to the mapping list if you only require a subset of packages. You can then use that file with the oc image mirror command to mirror the modified list of images in a later step.

      Note

      The --manifests-only flag is intended for advanced selective mirroring of content from the catalog. For most use cases, pruning the index image with the opm index prune command, as described previously, is sufficient.

    On your workstation with unrestricted network access, run the following command:

    $ oc adm catalog mirror \
        <index_image> \1
        <mirror_registry>:<port> \2
        [-a ${REG_CREDS}] \3
        [--insecure] \4
        [--filter-by-os="<os>/<arch>"] \5
        [--manifests-only] 6
    1
    Specify the index image for the catalog you want to mirror. For example, this might be a pruned index image that you created previously, or one of the source index images for the default catalogs, such as registry.redhat.io/redhat/redhat-operator-index:v4.6.
    2
    Specify the target registry to mirror the Operator content to.
    3
    Optional: If required, specify the location of your registry credentials file.
    4
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    5
    Optional: Because the catalog might reference images that support multiple architectures and operating systems, you can filter by architecture and operating system to mirror only the images that match. Valid values are linux/amd64, linux/ppc64le, and linux/s390x.
    6
    Optional: Only generate the manifests required for mirroring and do not actually mirror the image content to a registry.

    Example output

    src image has index label for database path: /database/index.db
    using database path mapping: /database/index.db:/tmp/153048078
    wrote database to /tmp/153048078 1
    ...
    wrote mirroring manifests to redhat-operator-index-manifests

    1
    Directory for the temporary index.db database generated by the command.

    After running the command, a <image_name>-manifests/ directory is created in the current directory, containing the following files:

    • The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry.
    • The mapping.txt file contains all of the source images and where to map them in the target registry. This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration.
  4. If you used the --manifests-only flag in the previous step and want to further trim the subset of packages to be mirrored:

    1. Modify the list of images in your mapping.txt file to your specifications. If you are unsure of the exact names and versions of the subset of images you want to mirror, use the following steps to find them:

      1. Run the sqlite3 tool against the temporary database that was generated by the oc adm catalog mirror command to retrieve a list of images matching a general search query. The output helps inform how you will later edit your mapping.txt file.

        For example, to retrieve a list of images that are similar to the string jaeger:

        $ echo "select * from related_image \
            where operatorbundle_name like '%jaeger%';" \
            | sqlite3 -line /tmp/153048078/index.db 1
        1
        Refer to the previous output of the oc adm catalog mirror command to find the path of the database file.

        Example output

        ...
        image = registry.redhat.io/distributed-tracing/jaeger-all-in-one-rhel7@sha256:41f769c2c32f3f050aa42d86f084b739914ff9ba2f0aed2d9b0b69357b48459d
        operatorbundle_name = jaeger-operator.v1.17.6
        
        image = registry.redhat.io/distributed-tracing/jaeger-es-index-cleaner-rhel7@sha256:c64ac461d96523516a199bd132ad4d7148317e503a735028f0d8f7ba063a61cb
        operatorbundle_name = jaeger-operator.v1.17.6
        
        image = registry.redhat.io/distributed-tracing/jaeger-rhel7-operator:1.13.2
        operatorbundle_name = jaeger-operator.v1.13.2-1

      2. Use the results from the previous step to help you edit the mapping.txt file to only include the subset of images you want to mirror.

        For example, you can use the image values from the previous example output to find that the following matching lines exist in your mapping.txt file:

        Matching image mappings in mapping.txt

        ...
        registry.redhat.io/distributed-tracing/jaeger-all-in-one-rhel7@sha256:41f769c2c32f3f050aa42d86f084b739914ff9ba2f0aed2d9b0b69357b48459d=quay.io/adellape/distributed-tracing-jaeger-all-in-one-rhel7:5cf7a033
        ...
        registry.redhat.io/distributed-tracing/jaeger-es-index-cleaner-rhel7@sha256:c64ac461d96523516a199bd132ad4d7148317e503a735028f0d8f7ba063a61cb=quay.io/adellape/distributed-tracing-jaeger-es-index-cleaner-rhel7:ecfd2ca7
        ...
        registry.redhat.io/distributed-tracing/jaeger-rhel7-operator:1.13.2=quay.io/adellape/distributed-tracing-jaeger-rhel7-operator:1.13.2
        ...

        In this example, if you only want to mirror these images, you would then remove all other entries in the mapping.txt file and leave only the above matching image mapping lines.

    2. Still on your workstation with unrestricted network access, use your modified mapping.txt file to mirror the images to your registry using the oc image mirror command:

      $ oc image mirror \
          [-a ${REG_CREDS}] \
          -f ./redhat-operator-index-manifests/mapping.txt
  5. Apply the ImageContentSourcePolicy:

    $ oc apply -f ./redhat-operator-index-manifests/imageContentSourcePolicy.yaml
  6. If you are not using a custom, pruned version of an index image, push the Red Hat-provided index image to your registry:

    $ oc image mirror \
        [-a ${REG_CREDS}] \
        registry.redhat.io/redhat/redhat-operator-index:v4.6 \1
        <mirror_registry>:<port>/<namespace>/redhat-operator-index:v4.6 2
    1
    Specify the index image for the catalog whose content you mirrored in the previous step.
    2
    Specify where to mirror the index image.

You can now create a CatalogSource to reference your mirrored index image and Operator content.

2.2.2.5. Creating a catalog from an index image

You can create an Operator catalog from an index image and apply it to an OpenShift Container Platform cluster for use with Operator Lifecycle Manager (OLM).

Prerequisites

  • An index image built and pushed to a registry.

Procedure

  1. Create a CatalogSource object that references your index image.

    1. Modify the following to your specifications and save it as a catalogsource.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: my-operator-catalog
        namespace: openshift-marketplace
      spec:
        sourceType: grpc
        image: <mirror_registry>:<port>/<namespace>/redhat-operator-index:v4.6 1
        displayName: My Operator Catalog
        publisher: <publisher_name> 2
        updateStrategy:
          registryPoll: 3
            interval: 30m
      1
      Specify your index image.
      2
      Specify your name or an organization name publishing the catalog.
      3
      CatalogSources can automatically check for new versions to keep up to date.
    2. Use the file to create the CatalogSource object:

      $ oc create -f catalogsource.yaml
  2. Verify the following resources are created successfully.

    1. Check the pods:

      $ oc get pods -n openshift-marketplace

      Example output

      NAME                                    READY   STATUS    RESTARTS  AGE
      my-operator-catalog-6njx6               1/1     Running   0         28s
      marketplace-operator-d9f549946-96sgr    1/1     Running   0         26h

    2. Check the CatalogSource:

      $ oc get catalogsource -n openshift-marketplace

      Example output

      NAME                  DISPLAY               TYPE   PUBLISHER   AGE
      my-operator-catalog   My Operator Catalog   grpc               5s

    3. Check the PackageManifest:

      $ oc get packagemanifest -n openshift-marketplace

      Example output

      NAME                          CATALOG               AGE
      jaeger-product                My Operator Catalog   93s

You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.

2.2.2.6. Installing the MTC Operator on an OpenShift Container Platform 4.6 target cluster in a restricted environment

You can install the MTC Operator on an OpenShift Container Platform 4.6 target cluster with the Operator Lifecycle Manager (OLM).

The MTC Operator installs the Migration Toolkit for Containers on the target cluster by default.

Prerequisites

  • You have created a custom Operator catalog and pushed it to a mirror registry.
  • You have configured OLM to install the MTC Operator from the mirror registry.

Procedure

  1. In the OpenShift Container Platform web console, click Operators > OperatorHub.
  2. Use the Filter by keyword field (in this case, Migration) to find the MTC Operator.
  3. Select the MTC Operator and click Install.
  4. On the Install Operator page, click Install.

    On the Installed Operators page, the MTC Operator appears in the openshift-migration project with the status Succeeded.

  5. Click MTC Operator.
  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  7. Click Create.
  8. Click Workloads > Pods to verify that the Controller Manager, Migration UI, Restic, and Velero pods are running.

2.2.2.7. Installing the MTC Operator on an OpenShift Container Platform 4.1 source cluster in a restricted environment

You can install the MTC Operator on an OpenShift Container Platform 4 source cluster with the Operator Lifecycle Manager (OLM).

Prerequisites

  • You have created a custom Operator catalog and pushed it to a mirror registry.
  • You have configured OLM to install the MTC Operator from the mirror registry.

Procedure

  1. Use the Filter by keyword field (in this case, Migration) to find the MTC Operator.
  2. Select the MTC Operator and click Install.
  3. On the Install Operator page, click Install.

    On the Installed Operators page, the MTC Operator appears in the openshift-migration project with the status Succeeded.

  4. Click MTC Operator.
  5. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  6. Click Create.

2.2.3. Launching the MTC web console

You can launch the MTC web console in a browser.

Procedure

  1. Log in to the OpenShift Container Platform cluster on which you have installed MTC.
  2. Obtain the MTC web console URL by entering the following command:

    $ oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'

    The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com.

  3. Launch a browser and navigate to the MTC web console.

    Note

    If you try to access the MTC web console immediately after installing the MTC Operator, the console may not load because the Operator is still configuring the cluster. Wait a few minutes and retry.

  4. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster’s API server. The web page guides you through the process of accepting the remaining certificates.
  5. Log in with your OpenShift Container Platform username and password.

2.3. Upgrading the Migration Toolkit for Containers

You can upgrade the Migration Toolkit for Containers (MTC) by upgrading the MTC Operator.

2.3.1. Upgrading the MTC Operator on an OpenShift Container Platform 4 cluster

You can upgrade to MTC 1.3 on an OpenShift Container Platform 4 cluster by deleting the MigrationController custom resource (CR), uninstalling the CAM Operator, and then installing the MTC Operator.

Procedure

  1. Delete the MigrationController CR:

    $ oc delete migrationcontroller -n openshift-migration migration-controller
  2. In the OpenShift Container Platform console, navigate to Operators > Installed Operators.
  3. Click CAM Operator.
  4. On the right side of the Operator Details page, select Uninstall Operator from the Actions list.
  5. Select Uninstall. This Operator stops running and no longer receives updates.
  6. Navigate to Operators > OperatorHub.
  7. Use the Filter by keyword field to find the MTC Operator.
  8. Select the MTC Operator and click Install.
  9. On the Install Operator page, click Install.

    On the Installed Operators page, verify that the MTC Operator appears in the openshift-migration project with the status Succeeded.
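
    You can also confirm the result from the command line by listing the ClusterServiceVersions in the openshift-migration project. The MTC Operator CSV should show the new version with the phase Succeeded:

    $ oc get csv -n openshift-migration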

2.4. Configuring a replication repository

You must configure object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster.

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.

The following storage providers are supported:

  • Multi-Cloud Object Gateway (MCG)
  • Amazon Web Services (AWS) S3
  • Google Cloud Provider (GCP)
  • Microsoft Azure Blob storage
  • Generic S3 object storage

The source and target clusters must have network access to the replication repository during migration.

In a restricted environment, you can create an internally hosted replication repository. If you use a proxy server, you must ensure that your replication repository is allowed.
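
If you use a cluster-wide proxy, one way to check the relevant settings is to review the Proxy object and confirm that nothing blocks access to the repository endpoint. The following is a quick sketch, assuming the default cluster-wide proxy configuration:

$ oc get proxy cluster -o yaml

Review the httpProxy, httpsProxy, and noProxy fields in the output against the endpoint of your replication repository.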

2.4.1. Configuring a Multi-Cloud Object Gateway storage bucket as a replication repository

You can install the OpenShift Container Storage Operator and configure a Multi-Cloud Object Gateway (MCG) storage bucket as a replication repository.

2.4.1.1. Installing the OpenShift Container Storage Operator

You can install the OpenShift Container Storage Operator from OperatorHub.

Procedure

  1. In the OpenShift Container Platform web console, click Operators > OperatorHub.
  2. Use Filter by keyword (in this case, OCS) to find the OpenShift Container Storage Operator.
  3. Select the OpenShift Container Storage Operator and click Install.
  4. Select an Update Channel, Installation Mode, and Approval Strategy.
  5. Click Install.

    On the Installed Operators page, the OpenShift Container Storage Operator appears in the openshift-storage project with the status Succeeded.

2.4.1.2. Creating the Multi-Cloud Object Gateway storage bucket

You can create the custom resources (CRs) for the Multi-Cloud Object Gateway (MCG) storage bucket.

Procedure

  1. Log in to the OpenShift Container Platform cluster:

    $ oc login
  2. Create the NooBaa CR configuration file, noobaa.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: NooBaa
    metadata:
      name: noobaa
      namespace: openshift-storage
    spec:
      dbResources:
        requests:
          cpu: 0.5 1
          memory: 1Gi
      coreResources:
        requests:
          cpu: 0.5 2
          memory: 1Gi
    1 2
    For a very small cluster, you can change the cpu value to 0.1.
  3. Create the NooBaa object:

    $ oc create -f noobaa.yml
  4. Create the BackingStore CR configuration file, bs.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: BackingStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: mcg-pv-pool-bs
      namespace: openshift-storage
    spec:
      pvPool:
        numVolumes: 3 1
        resources:
          requests:
            storage: 50Gi 2
        storageClass: gp2 3
      type: pv-pool
    1
    Specify the number of volumes in the PV pool.
    2
    Specify the size of the volumes.
    3
    Specify the storage class.
  5. Create the BackingStore object:

    $ oc create -f bs.yml
  6. Create the BucketClass CR configuration file, bc.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: BucketClass
    metadata:
      labels:
        app: noobaa
      name: mcg-pv-pool-bc
      namespace: openshift-storage
    spec:
      placementPolicy:
        tiers:
        - backingStores:
          - mcg-pv-pool-bs
          placement: Spread
  7. Create the BucketClass object:

    $ oc create -f bc.yml
  8. Create the ObjectBucketClaim CR configuration file, obc.yml, with the following content:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: migstorage
      namespace: openshift-storage
    spec:
      bucketName: migstorage 1
      storageClassName: openshift-storage.noobaa.io
      additionalConfig:
        bucketclass: mcg-pv-pool-bc
    1
    Record the bucket name for adding the replication repository to the MTC web console.
  9. Create the ObjectBucketClaim object:

    $ oc create -f obc.yml
  10. Watch the resource creation process to verify that the ObjectBucketClaim status is Bound:

    $ watch -n 30 'oc get -n openshift-storage objectbucketclaim migstorage -o yaml'

    This process can take five to ten minutes.

  11. Obtain and record the following values, which are required when you add the replication repository to the MTC web console:

    • S3 endpoint:

      $ oc get route -n openshift-storage s3
    • S3 provider access key:

      $ oc get secret -n openshift-storage migstorage -o go-template='{{ .data.AWS_ACCESS_KEY_ID }}' | base64 -d
    • S3 provider secret access key:

      $ oc get secret -n openshift-storage migstorage -o go-template='{{ .data.AWS_SECRET_ACCESS_KEY }}' | base64 -d

2.4.2. Configuring an AWS S3 storage bucket as a replication repository

You can configure an AWS S3 storage bucket as a replication repository.

Prerequisites

  • The AWS S3 storage bucket must be accessible to the source and target clusters.
  • You must have the AWS CLI installed.
  • If you are using the snapshot copy method:

    • You must have access to EC2 Elastic Block Storage (EBS).
    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Create an AWS S3 bucket:

    $ aws s3api create-bucket \
        --bucket <bucket_name> \ 1
        --region <bucket_region> 2
    1
    Specify your S3 bucket name.
    2
    Specify your S3 bucket region, for example, us-east-1.
  2. Create the IAM user velero:

    $ aws iam create-user --user-name velero
  3. Create an EC2 EBS snapshot policy:

    $ cat > velero-ec2-snapshot-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeVolumes",
                    "ec2:DescribeSnapshots",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:CreateSnapshot",
                    "ec2:DeleteSnapshot"
                ],
                "Resource": "*"
            }
        ]
    }
    EOF
  4. Create an AWS S3 access policy for one or for all S3 buckets:

    $ cat > velero-s3-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:PutObject",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts"
                ],
                "Resource": [
                    "arn:aws:s3:::<bucket_name>/*" 1
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation",
                    "s3:ListBucketMultipartUploads"
                ],
                "Resource": [
                    "arn:aws:s3:::<bucket_name>" 2
                ]
            }
        ]
    }
    EOF
    1 2
    To grant access to a single S3 bucket, specify the bucket name. To grant access to all AWS S3 buckets, specify * instead of a bucket name as in the following example:

    Example output

    "Resource": [
        "arn:aws:s3:::*"

  5. Attach the EC2 EBS policy to velero:

    $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero-ebs \
      --policy-document file://velero-ec2-snapshot-policy.json
  6. Attach the AWS S3 policy to velero:

    $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero-s3 \
      --policy-document file://velero-s3-policy.json
  7. Create an access key for velero:

    $ aws iam create-access-key --user-name velero
    {
      "AccessKey": {
            "UserName": "velero",
            "Status": "Active",
            "CreateDate": "2017-07-31T22:24:41.576Z",
            "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, 1
            "AccessKeyId": <AWS_ACCESS_KEY_ID> 2
        }
    }
    1 2
    Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID for adding the AWS repository to the MTC web console.

2.4.3. Configuring a Google Cloud Provider storage bucket as a replication repository

You can configure a Google Cloud Provider (GCP) storage bucket as a replication repository.

Prerequisites

  • The GCP storage bucket must be accessible to the source and target clusters.
  • You must have gsutil installed.
  • If you are using the snapshot copy method:

    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Run gsutil init to log in:

    Example output

    Welcome! This command will take you through the configuration of gcloud.
    
    Your current configuration has been set to: [default]
    
    To continue, you must login. Would you like to login (Y/n)?

  2. Set the BUCKET variable:

    $ BUCKET=<bucket_name> 1
    1
    Specify your bucket name.
  3. Create a storage bucket:

    $ gsutil mb gs://$BUCKET/
  4. Set the PROJECT_ID variable to your active project:

    $ PROJECT_ID=$(gcloud config get-value project)
  5. Create a velero IAM service account:

    $ gcloud iam service-accounts create velero \
        --display-name "Velero Storage"
  6. Create the SERVICE_ACCOUNT_EMAIL variable:

    $ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
      --filter="displayName:Velero Storage" \
      --format 'value(email)')
  7. Create the ROLE_PERMISSIONS variable:

    $ ROLE_PERMISSIONS=(
        compute.disks.get
        compute.disks.create
        compute.disks.createSnapshot
        compute.snapshots.get
        compute.snapshots.create
        compute.snapshots.useReadOnly
        compute.snapshots.delete
        compute.zones.get
    )
  8. Create the velero.server custom role:

    $ gcloud iam roles create velero.server \
        --project $PROJECT_ID \
        --title "Velero Server" \
        --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
  9. Add IAM policy binding to the project:

    $ gcloud projects add-iam-policy-binding $PROJECT_ID \
        --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
        --role projects/$PROJECT_ID/roles/velero.server
  10. Update the IAM service account:

    $ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
  11. Save the IAM service account keys to the credentials-velero file in the current directory:

    $ gcloud iam service-accounts keys create credentials-velero \
      --iam-account $SERVICE_ACCOUNT_EMAIL

2.4.4. Configuring a Microsoft Azure Blob storage container as a replication repository

You can configure a Microsoft Azure Blob storage container as a replication repository.

Prerequisites

  • You must have an Azure storage account.
  • You must have the Azure CLI installed.
  • The Azure Blob storage container must be accessible to the source and target clusters.
  • If you are using the snapshot copy method:

    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Set the AZURE_RESOURCE_GROUP variable:

    $ AZURE_RESOURCE_GROUP=Velero_Backups
  2. Create an Azure resource group:

    $ az group create -n $AZURE_RESOURCE_GROUP --location <CentralUS> 1
    1
    Specify your location.
  3. Set the AZURE_STORAGE_ACCOUNT_ID variable:

    $ AZURE_STORAGE_ACCOUNT_ID=velerobackups
  4. Create an Azure storage account:

    $ az storage account create \
      --name $AZURE_STORAGE_ACCOUNT_ID \
      --resource-group $AZURE_RESOURCE_GROUP \
      --sku Standard_GRS \
      --encryption-services blob \
      --https-only true \
      --kind BlobStorage \
      --access-tier Hot
  5. Set the BLOB_CONTAINER variable:

    $ BLOB_CONTAINER=velero
  6. Create an Azure Blob storage container:

    $ az storage container create \
      -n $BLOB_CONTAINER \
      --public-access off \
      --account-name $AZURE_STORAGE_ACCOUNT_ID
  7. Create a service principal and credentials for velero:

    $ AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` \
      AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` \
      AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" --role "Contributor" --query 'password' -o tsv` \
      AZURE_CLIENT_ID=`az ad sp list --display-name "velero" --query '[0].appId' -o tsv`
  8. Save the service principal credentials in the credentials-velero file:

    $ cat << EOF  > ./credentials-velero
    AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
    AZURE_TENANT_ID=${AZURE_TENANT_ID}
    AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
    AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
    AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
    AZURE_CLOUD_NAME=AzurePublicCloud
    EOF

2.5. Migrating your applications

You must add your clusters and a replication repository to the MTC web console. Then, you can create and run a migration plan.

If your clusters or replication repository are secured with self-signed certificates, you can create a CA certificate bundle file or disable SSL verification.

2.5.1. Creating a CA certificate bundle file

If you use a self-signed certificate to secure a cluster or a replication repository, certificate verification might fail with the following error message: Certificate signed by unknown authority.

You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository.

Procedure

Download a CA certificate from a remote endpoint and save it as a CA bundle file:

$ echo -n | openssl s_client -connect <host_FQDN>:<port> \ 1
  | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2
1
Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443.
2
Specify the name of the CA bundle file.
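
To confirm that the file contains a usable certificate, you can inspect the first certificate in the bundle with openssl:

$ openssl x509 -in <ca_bundle.cert> -noout -subject -issuer -dates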

2.5.2. Configuring a migration plan

2.5.2.1. Increasing Migration Controller limits for large migrations

You can increase the Migration Controller limits on migration objects and container resources for large migrations.

Important

You must test these changes before you perform a migration in a production environment.

Procedure

  1. Edit the Migration Controller manifest:

    $ oc edit migrationcontroller -n openshift-migration
  2. Update the following parameters:

    ...
    mig_controller_limits_cpu: "1" 1
    mig_controller_limits_memory: "10Gi" 2
    ...
    mig_controller_requests_cpu: "100m" 3
    mig_controller_requests_memory: "350Mi" 4
    ...
    mig_pv_limit: 100 5
    mig_pod_limit: 100 6
    mig_namespace_limit: 10 7
    ...
    1
    Specifies the number of CPUs available to the Migration Controller.
    2
    Specifies the amount of memory available to the Migration Controller.
    3
    Specifies the number of CPU units available for Migration Controller requests. 100m represents 0.1 CPU units (100 * 1e-3).
    4
    Specifies the amount of memory available for Migration Controller requests.
    5
    Specifies the number of PVs that can be migrated.
    6
    Specifies the number of pods that can be migrated.
    7
    Specifies the number of namespaces that can be migrated.
  3. Create a migration plan that uses the updated parameters to verify the changes.

    If your migration plan exceeds the Migration Controller limits, the MTC console displays a warning message when you save the migration plan.

2.5.2.2. Excluding resources from a migration plan

You can exclude resources, for example, ImageStreams, persistent volumes (PVs), or subscriptions, from a migration plan in order to reduce the load or to migrate images or PVs with a different tool.

Procedure

  1. Edit the Migration Controller CR:

    $ oc edit migrationcontroller -n openshift-migration
  2. Update the spec section by adding a parameter to exclude specific resources or by adding a resource to the excluded_resources parameter if it does not have its own exclusion parameter:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: migration-controller
      namespace: openshift-migration
    spec:
      disable_image_migration: true 1
      disable_pv_migration: true 2
      ...
      excluded_resources: 3
      - imagetags
      - templateinstances
      - clusterserviceversions
      - packagemanifests
      - subscriptions
      - servicebrokers
      - servicebindings
      - serviceclasses
      - serviceinstances
      - serviceplans
    1
    Add disable_image_migration: true to exclude imagestreams from the migration. Do not edit the excluded_resources parameter. imagestreams is added to excluded_resources when the Migration Controller Pod restarts.
    2
    Add disable_pv_migration: true to exclude PVs from the migration plan. Do not edit the excluded_resources parameter. persistentvolumes and persistentvolumeclaims are added to excluded_resources when the Migration Controller Pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan.
    3
    You can add OpenShift Container Platform resources to the excluded_resources list. Do not delete any of the default excluded resources. These resources are known to be problematic for migration.
  3. Wait two minutes for the Migration Controller Pod to restart so that the changes are applied.
  4. Verify that the resource is excluded:

    $ oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1

    The output contains the excluded resources, as shown in the following example:

        - name: EXCLUDED_RESOURCES
          value:
          imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims

2.5.3. Adding a cluster to the MTC web console

You can add a cluster to the MTC web console.

Prerequisites

If you are using Azure snapshots to copy data:

  • You must provide the Azure resource group name when you add the source cluster.
  • The source and target clusters must be in the same Azure resource group and in the same location.

Procedure

  1. Log in to the cluster.
  2. Obtain the service account token:

    $ oc sa get-token migration-controller -n openshift-migration

    Example output

    eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ

  3. Log in to the MTC web console.
  4. In the Clusters section, click Add cluster.
  5. Fill in the following fields:

    • Cluster name: May contain lower-case letters (a-z) and numbers (0-9). Must not contain spaces or international characters.
    • Url: URL of the cluster’s API server, for example, https://<master1.example.com>:8443.
    • Service account token: String that you obtained from the source cluster.
    • Azure cluster: Optional. Select it if you are using Azure snapshots to copy your data.
    • Azure resource group: This field appears if Azure cluster is checked.
    • If you use a custom CA bundle, click Browse and browse to the CA bundle file.
  6. Click Add cluster.

    The cluster appears in the Clusters section of the MTC web console.
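
    Behind the scenes, the cluster is stored as a MigCluster custom resource (see the troubleshooting section). You can optionally confirm it from the command line:

    $ oc get migcluster -n openshift-migration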

2.5.4. Adding a replication repository to the MTC web console

You can add an object storage bucket as a replication repository to the MTC web console.

Prerequisites

  • You must configure an object storage bucket for migrating the data.

Procedure

  1. Log in to the MTC web console.
  2. In the Replication repositories section, click Add repository.
  3. Select a Storage provider type and fill in the following fields:

    • AWS for AWS S3, MCG, and generic S3 providers:

      • Replication repository name: Specify the replication repository name in the MTC web console.
      • S3 bucket name: Specify the name of the S3 bucket you created.
      • S3 bucket region: Specify the S3 bucket region. Required for AWS S3. Optional for other S3 providers.
      • S3 endpoint: Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com>. Required for a generic S3 provider. You must use the https:// prefix.
      • S3 provider access key: Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG.
      • S3 provider secret access key: Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG.
      • Require SSL verification: Clear this check box if you are using a generic S3 provider.
      • If you use a custom CA bundle, click Browse and browse to the Base64-encoded CA bundle file.
    • GCP:

      • Replication repository name: Specify the replication repository name in the MTC web console.
      • GCP bucket name: Specify the name of the GCP bucket.
      • GCP credential JSON blob: Specify the string in the credentials-velero file.
    • Azure:

      • Replication repository name: Specify the replication repository name in the MTC web console.
      • Azure resource group: Specify the resource group of the Azure Blob storage.
      • Azure storage account name: Specify the Azure Blob storage account name.
      • Azure credentials - INI file contents: Specify the string in the credentials-velero file.
  4. Click Add repository and wait for connection validation.
  5. Click Close.

    The new repository appears in the Replication repositories section.
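
    Behind the scenes, the repository is stored as a MigStorage custom resource. You can optionally confirm it from the command line:

    $ oc get migstorage -n openshift-migration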

2.5.5. Creating a migration plan in the MTC web console

You can create a migration plan in the MTC web console.

Prerequisites

  • The MTC web console must contain the following:

    • Source cluster
    • Target cluster
    • Replication repository
  • The source and target clusters must have network access to each other and to the replication repository.
  • If you use snapshots to copy data, the source and target clusters must run on the same cloud provider (AWS, GCP, or Azure) and be located in the same region.

Procedure

  1. Log in to the MTC web console.
  2. In the Plans section, click Add plan.
  3. Enter the Plan name and click Next.

    The Plan name can contain up to 253 lower-case alphanumeric characters (a-z, 0-9). It must not contain spaces or underscores (_).

  4. Select a Source cluster.
  5. Select a Target cluster.
  6. Select a Replication repository.
  7. Select the projects to be migrated and click Next.
  8. Select Copy or Move for the PVs:

    • Copy copies the data in a source cluster’s PV to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster.

      Optional: You can verify data copied with the file system method by selecting Verify copy. This option generates a checksum for each source file and checks it after restoration. The operation significantly reduces performance.

    • Move unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
  9. Click Next.
  10. Select a Copy method for the PVs:

    • Snapshot backs up and restores the disk using the cloud provider’s snapshot functionality. It is significantly faster than Filesystem.

      Note

      The storage and clusters must be in the same region and the storage class must be compatible.

    • Filesystem copies the data files from the source disk to a newly created target disk.
  11. Select a Storage class for the PVs.

    If you selected the Filesystem copy method, you can change the storage class during migration, for example, from Red Hat Gluster Storage or NFS storage to Red Hat Ceph Storage.

  12. Click Next.
  13. If you want to add a migration hook, click Add Hook and perform the following steps:

    1. Specify the name of the hook.
    2. Select Ansible playbook to use your own playbook or Custom container image for a hook written in another language.
    3. Click Browse to upload the playbook.
    4. Optional: If you are not using the default Ansible runtime image, specify your custom Ansible image.
    5. Specify the cluster on which you want the hook to run.
    6. Specify the service account name.
    7. Specify the namespace.
    8. Select the migration step at which you want the hook to run:

      • PreBackup: Before backup tasks are started on the source cluster
      • PostBackup: After backup tasks are complete on the source cluster
      • PreRestore: Before restore tasks are started on the target cluster
      • PostRestore: After restore tasks are complete on the target cluster
  14. Click Add.

    You can add up to four hooks to a migration plan, assigning each hook to a different migration step.

  15. Click Finish.
  16. Click Close.

    The migration plan appears in the Plans section.
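
    Behind the scenes, the plan is stored as a MigPlan custom resource. You can optionally list it and review its status conditions before you run it:

    $ oc get migplan -n openshift-migration
    $ oc describe migplan <plan_name> -n openshift-migration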

2.5.6. Running a migration plan in the MTC web console

You can stage or migrate applications and data with the migration plan you created in the MTC web console.

Prerequisites

The MTC web console must contain the following:

  • Source cluster
  • Target cluster
  • Replication repository
  • Valid migration plan

Procedure

  1. Log in to the source cluster.
  2. Delete old images:

    $ oc adm prune images
  3. Log in to the MTC web console.
  4. Select a migration plan.
  5. Click Stage to copy data from the source cluster to the target cluster without stopping the application.

    You can run Stage multiple times to reduce the actual migration time.

  6. When you are ready to migrate the application workload, click Migrate.

    Migrate stops the application workload on the source cluster and recreates its resources on the target cluster.

  7. Optional: In the Migrate window, you can select Do not stop applications on the source cluster during migration.
  8. Click Migrate.
  9. Optional: To stop a migration in progress, click the Options menu and select Cancel.
  10. When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console:

    1. Click Home > Projects.
    2. Click the migrated project to view its status.
    3. In the Routes section, click Location to verify that the application is functioning, if applicable.
    4. Click Workloads > Pods to verify that the pods are running in the migrated namespace.
    5. Click Storage > Persistent Volumes to verify that the migrated persistent volume is correctly provisioned.

2.6. Troubleshooting

You can view the migration Custom Resources (CRs) and download logs to troubleshoot a failed migration.

If the application was stopped during the failed migration, you must roll it back manually in order to prevent data corruption.

Note

Manual rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster.

2.6.1. Viewing migration Custom Resources

The Migration Toolkit for Containers (MTC) creates the following Custom Resources (CRs):

(Diagram: MTC migration architecture)

MigCluster (configuration, MTC cluster): Cluster definition

MigStorage (configuration, MTC cluster): Storage definition

MigPlan (configuration, MTC cluster): Migration plan

The MigPlan CR describes the source and target clusters, replication repository, and namespaces being migrated. It is associated with 0, 1, or many MigMigration CRs.

Note

Deleting a MigPlan CR deletes the associated MigMigration CRs.

BackupStorageLocation (configuration, MTC cluster): Location of Velero backup objects

VolumeSnapshotLocation (configuration, MTC cluster): Location of Velero volume snapshots

MigMigration (action, MTC cluster): Migration, created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR.

Backup (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster:

  • Backup CR #1 for Kubernetes objects
  • Backup CR #2 for PV data

Restore (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster:

  • Restore CR #1 (using Backup CR #2) for PV data
  • Restore CR #2 (using Backup CR #1) for Kubernetes objects

Procedure

  1. View the CR:

    $ oc get <cr> -n openshift-migration 1
    1
    Specify the migration CR, for example, migmigration.

    Example output

    NAME                                   AGE
    88435fe0-c9f8-11e9-85e6-5d593ce65e10   6m42s

  2. Inspect the migmigration CR:

    $ oc describe <migmigration> <88435fe0-c9f8-11e9-85e6-5d593ce65e10> -n openshift-migration

    The output is similar to the following examples.

MigMigration example output

name:         88435fe0-c9f8-11e9-85e6-5d593ce65e10
namespace:    openshift-migration
labels:       <none>
annotations:  touch: 3b48b543-b53e-4e44-9d34-33563f0f8147
apiVersion:  migration.openshift.io/v1alpha1
kind:         MigMigration
metadata:
  creationTimestamp:  2019-08-29T01:01:29Z
  generation:          20
  resourceVersion:    88179
  selfLink:           /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10
  uid:                 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
spec:
  migPlanRef:
    name:        socks-shop-mig-plan
    namespace:   openshift-migration
  quiescePods:  true
  stage:         false
status:
  conditions:
    category:              Advisory
    durable:               True
    lastTransitionTime:  2019-08-29T01:03:40Z
    message:               The migration has completed successfully.
    reason:                Completed
    status:                True
    type:                  Succeeded
  phase:                   Completed
  startTimestamp:         2019-08-29T01:01:29Z
events:                    <none>

Velero backup CR #2 example output that describes the PV data

apiVersion: velero.io/v1
kind: Backup
metadata:
  annotations:
    openshift.io/migrate-copy-phase: final
    openshift.io/migrate-quiesce-pods: "true"
    openshift.io/migration-registry: 172.30.105.179:5000
    openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6
  creationTimestamp: "2019-08-29T01:03:15Z"
  generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-
  generation: 1
  labels:
    app.kubernetes.io/part-of: migration
    migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
    migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
    velero.io/storage-location: myrepo-vpzq9
  name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7
  namespace: openshift-migration
  resourceVersion: "87313"
  selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7
  uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6
spec:
  excludedNamespaces: []
  excludedResources: []
  hooks:
    resources: []
  includeClusterResources: null
  includedNamespaces:
  - sock-shop
  includedResources:
  - persistentvolumes
  - persistentvolumeclaims
  - namespaces
  - imagestreams
  - imagestreamtags
  - secrets
  - configmaps
  - pods
  labelSelector:
    matchLabels:
      migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
  storageLocation: myrepo-vpzq9
  ttl: 720h0m0s
  volumeSnapshotLocations:
  - myrepo-wv6fx
status:
  completionTimestamp: "2019-08-29T01:02:36Z"
  errors: 0
  expiration: "2019-09-28T01:02:35Z"
  phase: Completed
  startTimestamp: "2019-08-29T01:02:35Z"
  validationErrors: null
  version: 1
  volumeSnapshotsAttempted: 0
  volumeSnapshotsCompleted: 0
  warnings: 0

Velero restore CR #2 example output that describes the Kubernetes resources

apiVersion: velero.io/v1
kind: Restore
metadata:
  annotations:
    openshift.io/migrate-copy-phase: final
    openshift.io/migrate-quiesce-pods: "true"
    openshift.io/migration-registry: 172.30.90.187:5000
    openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88
  creationTimestamp: "2019-08-28T00:09:49Z"
  generateName: e13a1b60-c927-11e9-9555-d129df7f3b96-
  generation: 3
  labels:
    app.kubernetes.io/part-of: migration
    migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88
    migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88
  name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx
  namespace: openshift-migration
  resourceVersion: "82329"
  selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx
  uid: 26983ec0-c928-11e9-825a-06fa9fb68c88
spec:
  backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f
  excludedNamespaces: null
  excludedResources:
  - nodes
  - events
  - events.events.k8s.io
  - backups.velero.io
  - restores.velero.io
  - resticrepositories.velero.io
  includedNamespaces: null
  includedResources: null
  namespaceMapping: null
  restorePVs: true
status:
  errors: 0
  failureReason: ""
  phase: Completed
  validationErrors: null
  warnings: 15
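
You can also list the Velero Backup and Restore CRs for a migration directly from the CLI, for example:

$ oc get backups.velero.io,restores.velero.io -n openshift-migration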

2.6.2. Downloading migration logs

You can download the Velero, Restic, and Migration controller logs in the MTC web console to troubleshoot a failed migration.

Procedure

  1. Log in to the MTC console.
  2. Click Plans to view the list of migration plans.
  3. Click the Options menu of a specific migration plan and select Logs.
  4. Click Download Logs to download the logs of the Migration controller, Velero, and Restic for all clusters.
  5. To download a specific log:

    1. Specify the log options:

      • Cluster: Select the source, target, or MTC host cluster.
      • Log source: Select Velero, Restic, or Controller.
      • Pod source: Select the Pod name, for example, controller-manager-78c469849c-v6wcf.

        The selected log is displayed.

        You can clear the log selection settings by changing your selection.

    2. Click Download Selected to download the selected log.

Optionally, you can access the logs by using the CLI, as in the following example:

$ oc logs <pod-name> -f -n openshift-migration 1
1
Specify the Pod name.
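
To find the pod names, you can first list the pods in the openshift-migration namespace:

$ oc get pods -n openshift-migration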

2.6.3. Error messages and resolutions

This section describes common error messages and how to resolve their underlying causes.

2.6.3.1. CA certificate error in the MTC console

If a CA certificate error message is displayed the first time you try to access the MTC console, the likely cause is the use of self-signed CA certificates in one of the clusters.

To resolve this issue, navigate to the oauth-authorization-server URL displayed in the error message and accept the certificate. To resolve this issue permanently, add the certificate to the trust store of your web browser.

If an Unauthorized message is displayed after you have accepted the certificate, navigate to the MTC console and refresh the web page.

2.6.3.2. OAuth timeout error in the MTC console

If a connection has timed out message is displayed in the MTC console after you have accepted a self-signed certificate, the likely causes are interrupted network access to the OAuth server or to the OpenShift Container Platform console, or a proxy configuration that blocks access to the oauth-authorization-server URL.

You can determine the cause of the timeout.

Procedure

  1. Navigate to the MTC console and inspect the elements with the browser web inspector.
  2. Check the migration-ui pod log:

    $ oc logs migration-ui-<86b679ffc7-h6l6v> -n openshift-migration

2.6.3.3. PodVolumeBackups timeout error in Velero log

If a migration fails because Restic times out, the following error is displayed in the Velero log.

Example output

level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1

The default value of restic_timeout is one hour. You can increase this parameter for large migrations, keeping in mind that a higher value may delay the return of error messages.

Procedure

  1. In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
  2. Click MTC Operator.
  3. In the MigrationController tab, click migration-controller.
  4. In the YAML tab, update the following parameter value:

    spec:
      restic_timeout: 1h 1
    1
    Valid units are h (hours), m (minutes), and s (seconds), for example, 3h30m15s.
  5. Click Save.
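
Optionally, you can make the same change from the CLI. The following is a minimal sketch that assumes the default MigrationController CR name, migration-controller:

$ oc patch migrationcontroller migration-controller -n openshift-migration \
    --type merge -p '{"spec":{"restic_timeout":"3h"}}'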

2.6.3.4. ResticVerifyErrors in the MigMigration Custom Resource

If data verification fails when migrating a PV with the file system data copy method, the following error is displayed in the MigMigration Custom Resource (CR).

Example output

status:
  conditions:
  - category: Warn
    durable: true
    lastTransitionTime: 2020-04-16T20:35:16Z
    message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>`
      for details 1
    status: "True"
    type: ResticVerifyErrors 2

1
The error message identifies the Restore CR name.
2
ResticVerifyErrors is a general error warning type that includes verification errors.
Note

A data verification error does not cause the migration process to fail.

You can check the Restore CR to identify the source of the data verification error.

Procedure

  1. Log in to the target cluster.
  2. View the Restore CR:

    $ oc describe <registry-example-migration-rvwcm> -n openshift-migration

    The output identifies the PV with PodVolumeRestore errors.

    Example output

    status:
      phase: Completed
      podVolumeRestoreErrors:
      - kind: PodVolumeRestore
        name: <registry-example-migration-rvwcm-98t49>
        namespace: openshift-migration
      podVolumeRestoreResticErrors:
      - kind: PodVolumeRestore
        name: <registry-example-migration-rvwcm-98t49>
        namespace: openshift-migration

  3. View the PodVolumeRestore CR:

    $ oc describe <migration-example-rvwcm-98t49>

    The output identifies the Restic pod that logged the errors.

    Example output

      completionTimestamp: 2020-05-01T20:49:12Z
      errors: 1
      resticErrors: 1
      ...
      resticPod: <restic-nr2v5>

  4. View the Restic pod log to locate the errors:

    $ oc logs -f <restic-nr2v5>
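
Because the Restic pod log can be long, you can filter the output for errors, for example:

$ oc logs <restic-nr2v5> -n openshift-migration | grep -i error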

2.6.4. Manually rolling back a migration

If your application was stopped during a failed migration, you must roll it back manually in order to prevent data corruption in the PV.

This procedure is not required if the application was not stopped during migration, because the original application is still running on the source cluster.

Procedure

  1. On the target cluster, switch to the migrated project:

    $ oc project <project>
  2. Get the deployed resources:

    $ oc get all
  3. Delete the deployed resources to ensure that the application is not running on the target cluster and accessing data on the PVC:

    $ oc delete <resource_type>
  4. To stop a daemon set without deleting it, update the nodeSelector in the YAML file:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: hello-daemonset
    spec:
      selector:
          matchLabels:
            name: hello-daemonset
      template:
        metadata:
          labels:
            name: hello-daemonset
        spec:
          nodeSelector:
            role: worker 1
    1
    Specify a nodeSelector value that does not exist on any node.
  5. Update each PV’s reclaim policy so that unnecessary data is removed. During migration, the reclaim policy for bound PVs is Retain, to ensure that data is not lost when an application is removed from the source cluster. You can remove these PVs during rollback. An example patch command is shown after this procedure.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv0001
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain 1
      ...
    status:
      ...
    1
    Specify Recycle or Delete.
  6. On the source cluster, switch to your migrated project:

    $ oc project <project_name>
  7. Obtain the project’s deployed resources:

    $ oc get all
  8. Start one or more replicas of each deployed resource:

    $ oc scale --replicas=1 <resource_type>/<resource_name>
  9. Update the nodeSelector of the DaemonSet resource to its original value, if you changed it during the procedure.
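
For step 5, you can change the persistent volume reclaim policy from the CLI instead of editing the YAML manually. The following is a minimal sketch that uses the example PV name pv0001:

$ oc patch pv pv0001 -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'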

2.6.5. Using must-gather to collect data

You must run the must-gather tool if you open a customer support case on the Red Hat Customer Portal.

The openshift-migration-must-gather-rhel8 image collects migration-specific logs and data that are not collected by the default must-gather image.

Procedure

  1. Navigate to the directory where you want to store the must-gather data.
  2. Run the must-gather command:

    $ oc adm must-gather --image=openshift-migration-must-gather-rhel8:v1.3.0
  3. Remove authentication keys and other sensitive information.
  4. Create an archive file containing the contents of the must-gather data directory:

    $ tar cvaf must-gather.tar.gz must-gather.local.<uid>/
  5. Upload the compressed file as an attachment to your customer support case.

2.6.6. Known issues

This release has the following known issues:

  • During migration, the Migration Toolkit for Containers (MTC) preserves the following namespace annotations:

    • openshift.io/sa.scc.mcs
    • openshift.io/sa.scc.supplemental-groups
    • openshift.io/sa.scc.uid-range

      These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. (BZ#1748440)

  • If an AWS bucket is added to the MTC web console and then deleted, its status remains True because the MigStorage CR is not updated. (BZ#1738564)
  • Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you may have to create them manually on the target cluster.
  • If a migration fails, the migration plan does not retain custom PV settings for quiesced pods. You must manually roll back the migration, delete the migration plan, and create a new migration plan with your PV settings. (BZ#1784899)
  • If a large migration fails because Restic times out, you can increase the restic_timeout parameter value (default: 1h) in the Migration Controller CR.
  • If you select the data verification option for PVs that are migrated with the file system copy method, performance is significantly slower.
  • If you are migrating data from NFS storage and root_squash is enabled, Restic maps to nfsnobody. The migration fails and a permission error is displayed in the Restic Pod log. You can resolve this issue by creating a supplemental group for Restic. (BZ#1873641)
  • If Velero has an invalid BackupStorageLocation during start-up, it will crash-loop until the invalid BackupStorageLocation is removed. This scenario is triggered by incorrect credentials, a non-existent S3 bucket, and other configuration errors. (BZ#1881707)

Chapter 3. Migrating from OpenShift Container Platform 4.2 and later

3.1. Migration tools and prerequisites

You can migrate application workloads from OpenShift Container Platform 4.2 to 4.6 with the Migration Toolkit for Containers (MTC). MTC enables you to control the migration and to minimize application downtime.

Note

You can migrate between OpenShift Container Platform clusters of the same version, for example, from 4.2 to 4.2 or from 4.3 to 4.3, as long as the source and target clusters are configured correctly.

The MTC web console and API, based on Kubernetes Custom Resources, enable you to migrate stateful and stateless application workloads at the granularity of a namespace.

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.

You can use migration hooks to run Ansible playbooks at certain points during the migration. The hooks are added when you create a migration plan.

3.1.1. Migration prerequisites

  • You must upgrade the source cluster to the latest z-stream release.
  • You must have cluster-admin privileges on all clusters.
  • The source and target clusters must have unrestricted network access to the replication repository.
  • The cluster on which the Migration controller is installed must have unrestricted access to the other clusters.
  • If your application uses images from the openshift namespace, the required versions of the images must be present on the target cluster.

    If the required images are not present, you must update the imagestreamtags references to use an available version that is compatible with your application. If the imagestreamtags cannot be updated, you can manually upload equivalent images to the application namespaces and update the applications to reference them. An example command for checking the available tags on the target cluster follows the list of removed image stream tags.

The following imagestreamtags have been removed from OpenShift Container Platform 4.2:

  • dotnet:1.0, dotnet:1.1, dotnet:2.0
  • dotnet-runtime:2.0
  • mariadb:10.1
  • mongodb:2.4, mongodb:2.6
  • mysql:5.5, mysql:5.6
  • nginx:1.8
  • nodejs:0.10, nodejs:4, nodejs:6
  • perl:5.16, perl:5.20
  • php:5.5, php:5.6
  • postgresql:9.2, postgresql:9.4, postgresql:9.5
  • python:3.3, python:3.4
  • ruby:2.0, ruby:2.2
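
You can check which tags are available for an image stream on the target cluster before migration. The following is a minimal sketch; mysql is an illustrative image stream name:

$ oc get imagestreamtags -n openshift | grep mysql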

3.1.2. About the Migration Toolkit for Containers

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images from an OpenShift Container Platform source cluster to an OpenShift Container Platform 4.6 target cluster, using the MTC web console or the Kubernetes API.

Migrating an application with the MTC web console involves the following steps:

  1. Install the MTC Operator on all clusters.

    You can install the MTC Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry.

  2. Configure the replication repository, an intermediate object storage that MTC uses to migrate data.

    The source and target clusters must have network access to the replication repository during migration. In a restricted environment, you can use an internally hosted S3 storage repository. If you use a proxy server, you must ensure that the replication repository is allowed.

  3. Add the source cluster to the MTC web console.
  4. Add the replication repository to the MTC web console.
  5. Create a migration plan, with one of the following data migration options:

    • Copy: MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster.

      migration PV copy
    • Move: MTC unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.

      Note

      Although the replication repository does not appear in this diagram, it is required for the actual migration.

      migration PV move
  6. Run the migration plan, with one of the following options:

    • Stage (optional) copies data to the target cluster without stopping the application.

      Staging can be run multiple times so that most of the data is copied to the target before migration. This minimizes the actual migration time and application downtime.

    • Migrate stops the application on the source cluster and recreates its resources on the target cluster. Optionally, you can migrate the workload without stopping the application.
OCP 3 to 4 App migration

3.1.3. About data copy methods

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.

3.1.3.1. File system copy method

MTC copies data files from the source cluster to the replication repository, and from there to the target cluster.

Table 3.1. File system copy method summary

Benefits:
  • Clusters can have different storage classes
  • Supported for all S3 storage providers
  • Optional data verification with checksum

Limitations:
  • Slower than the snapshot copy method
  • Optional data verification significantly reduces performance

3.1.3.2. Snapshot copy method

MTC copies a snapshot of the source cluster’s data to a cloud provider’s object storage, configured as a replication repository. The data is restored on the target cluster.

AWS, Google Cloud Provider, and Microsoft Azure support the snapshot copy method.

Table 3.2. Snapshot copy method summary

Benefits:
  • Faster than the file system copy method

Limitations:
  • Cloud provider must support snapshots.
  • Clusters must be on the same cloud provider.
  • Clusters must be in the same location or region.
  • Clusters must have the same storage class.
  • Storage class must be compatible with snapshots.

3.1.4. About migration hooks

You can use migration hooks to run Ansible playbooks at certain points during the migration. The hooks are added when you create a migration plan.

Note

If you do not want to use Ansible playbooks, you can create a custom container image and add it to a migration plan.

Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration.

A single migration hook runs on a source or target cluster at one of the following migration steps:

  • PreBackup: Before backup tasks are started on the source cluster
  • PostBackup: After backup tasks are complete on the source cluster
  • PreRestore: Before restore tasks are started on the target cluster
  • PostRestore: After restore tasks are complete on the target cluster

    You can assign one hook to each migration step, up to a maximum of four hooks for a single migration plan.

The default hook-runner image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:v1.3.0. This image is based on Ansible Runner and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. You can also create your own hook image with additional Ansible modules or tools.

The Ansible playbook is mounted on a hook container as a ConfigMap. The hook container runs as a Job on a cluster with a specified service account and namespace. The Job runs, even if the initial Pod is evicted or killed, until it reaches the default backoffLimit (6) or successful completion.
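
For example, a playbook added to a migration plan as a PreBackup hook might quiesce an application by scaling down its deployment before the backup starts. The following is a minimal sketch; the deployment name and namespace are illustrative, and it assumes that the k8s_scale Ansible module is available in your hook image:

- hosts: localhost
  gather_facts: false
  tasks:
  - name: Scale down the example application before backup
    k8s_scale:
      api_version: apps/v1
      kind: Deployment
      name: my-app            # illustrative deployment name
      namespace: my-namespace # illustrative namespace
      replicas: 0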

3.2. Deploying the Migration Toolkit for Containers

You can install the MTC Operator on your OpenShift Container Platform 4.6 target cluster and 4.2 source cluster. The MTC Operator installs the Migration Toolkit for Containers (MTC) on the target cluster by default.

Note

Optional: You can configure the MTC Operator to install the MTC on an OpenShift Container Platform 3 cluster or on a remote cluster.

In a restricted environment, you can install the MTC Operator from a local mirror registry.

After you have installed the MTC Migration Operator on your clusters, you can launch the MTC web console.

3.2.1. Installing the MTC Operator

You can install the MTC Operator with the Operator Lifecycle Manager (OLM) on an OpenShift Container Platform 4.6 target cluster and on an OpenShift Container Platform 4.2 source cluster.

3.2.1.1. Installing the MTC Operator on an OpenShift Container Platform 4.6 target cluster

You can install the MTC Operator on an OpenShift Container Platform 4.6 target cluster with the Operator Lifecycle Manager (OLM).

The MTC Operator installs the Migration Toolkit for Containers on the target cluster by default.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Use the Filter by keyword field (in this case, Migration) to find the MTC Operator.
  3. Select the MTC Operator and click Install.
  4. On the Install Operator page, click Install.

    On the Installed Operators page, the MTC Operator appears in the openshift-migration project with the status Succeeded.

  5. Click MTC Operator.
  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  7. Click Create.
  8. Click Workloads → Pods to verify that the Controller Manager, Migration UI, Restic, and Velero pods are running.

3.2.1.2. Installing the MTC Operator on an OpenShift Container Platform 4.2 source cluster

You can install the MTC Operator on an OpenShift Container Platform 4 source cluster with the Operator Lifecycle Manager (OLM).

Procedure

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Use the Filter by keyword field (in this case, Migration) to find the MTC Operator.
  3. Select the MTC Operator and click Install.
  4. On the Install Operator page, click Install.

    On the Installed Operators page, the MTC Operator appears in the openshift-migration project with the status Succeeded.

  5. Click MTC Operator.
  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  7. Set the migration_controller and migration_ui parameters to false in the spec stanza:

    spec:
      ...
      migration_controller: false
      migration_ui: false
      ...
  8. Click Create.
  9. Click Workloads → Pods to verify that the Restic and Velero pods are running.

3.2.2. Installing the MTC Operator in a restricted environment

You can build a custom Operator catalog image for OpenShift Container Platform 4, push it to a local mirror image registry, and configure OLM to install the Operator from the local registry.

3.2.2.1. Prerequisites

  • If you want to prune the default catalog and selectively mirror only a subset of Operators, install the opm CLI.

3.2.2.2. Disabling the default OperatorHub sources

Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. Before configuring OperatorHub to instead use local catalog sources in a restricted network environment, you must disable the default catalogs.

Procedure

  • Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub spec:

    $ oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'

3.2.2.3. Pruning an index image

An index image, based on the Operator Bundle Format, is a containerized snapshot of an Operator catalog. You can prune an index of all but a specified list of packages, creating a copy of the source index containing only the Operators that you want.

When configuring Operator Lifecycle Manager (OLM) to use mirrored content on restricted network OpenShift Container Platform clusters, use this pruning method if you want to only mirror a subset of Operators from the default catalogs.

For the steps in this procedure, the target registry is an existing mirror registry that is accessible by both your cluster and a workstation with unrestricted network access. This example also shows pruning the index image for the default redhat-operators catalog, but the process is the same for all index images.

Prerequisites

  • Workstation with unrestricted network access
  • podman version 1.4.4+
  • grpcurl
  • opm version 1.12.3+
  • Access to a registry that supports Docker v2-2

Procedure

  1. Authenticate with registry.redhat.io:

    $ podman login registry.redhat.io
  2. Authenticate with your target registry:

    $ podman login <target_registry>
  3. Determine the list of packages you want to include in your pruned index.

    1. Run the source index image that you want to prune in a container. For example:

      $ podman run -p50051:50051 \
          -it registry.redhat.io/redhat/redhat-operator-index:v4.6

      Example output

      Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.6...
      Getting image source signatures
      Copying blob ae8a0c23f5b1 done
      ...
      INFO[0000] serving registry                              database=/database/index.db port=50051

    2. In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index:

      $ grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out
    3. Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example:

      Example snippets of packages list

      ...
      {
        "name": "advanced-cluster-management"
      }
      ...
      {
        "name": "jaeger-product"
      }
      ...
      {
        "name": "quay-operator"
      }
      ...

    4. In the terminal session where you executed the podman run command, press Ctrl and C to stop the container process.
  4. Run the following command to prune the source index of all but the specified packages:

    $ opm index prune \
        -f registry.redhat.io/redhat/redhat-operator-index:v4.6 \1
        -p advanced-cluster-management,jaeger-product,quay-operator \2
        -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.6 3
    1
    Index to prune.
    2
    Comma-separated list of packages to keep.
    3
    Custom tag for new index image being built.
  5. Run the following command to push the new index image to your target registry:

    $ podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.6

    where <namespace> is any existing namespace on the registry. For example, you might create an olm-mirror namespace to push all mirrored content to.

3.2.2.4. Mirroring an Operator catalog

You can mirror the Operator content of a Red Hat-provided catalog, or a custom catalog, into a container image registry using the oc adm catalog mirror command. The target registry must support Docker v2-2. For a cluster on a restricted network, this registry can be a registry that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation.

You must also mirror the Red Hat-provided index image, or push your own custom-built index image, to the target registry by using the oc image mirror command. You can then use the mirrored index image to create a CatalogSource that allows Operator Lifecycle Manager (OLM) to load the mirrored catalog onto your OpenShift Container Platform cluster.

For the steps in this procedure, the target registry is an existing mirror registry that is accessible by both your cluster and a workstation with unrestricted network access. This example also shows mirroring the default redhat-operators catalog, but the process is the same for all catalogs.

Prerequisites

  • Workstation with unrestricted network access
  • podman version 1.4.4+
  • Access to mirror registry that supports Docker v2-2
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json

Procedure

  1. On your workstation with unrestricted network access, use the podman login command to authenticate with your target mirror registry:

    $ podman login <mirror_registry>
  2. Authenticate with registry.redhat.io:

    $ podman login registry.redhat.io
  3. The oc adm catalog mirror command extracts the contents of an index image to generate the manifests required for mirroring. You can choose either of the following:

    • Allow the default behavior of the command to automatically mirror all of the image content from the index image to your mirror registry after generating manifests.
    • Add the --manifests-only flag to only generate the manifests required for mirroring, but do not actually mirror the image content to the registry yet. This can be useful for reviewing what will be mirrored, and it allows you to make any changes to the mapping list if you only require a subset of packages. You can then use that file with the oc image mirror command to mirror the modified list of images in a later step.

      Note

      The --manifests-only flag is intended for advanced selective mirroring of content from the catalog. The opm index prune command, if you used it previously to prune the index image, is suitable for most use cases.

    On your workstation with unrestricted network access, run the following command:

    $ oc adm catalog mirror \
        <index_image> \1
        <mirror_registry>:<port> \2
        [-a ${REG_CREDS}] \3
        [--insecure] \4
        [--filter-by-os="<os>/<arch>"] \5
        [--manifests-only] 6
    1
    Specify the index image for the catalog you want to mirror. For example, this might be a pruned index image that you created previously, or one of the source index images for the default catalogs, such as registry.redhat.io/redhat/redhat-operator-index:v4.6.
    2
    Specify the target registry to mirror the Operator content to.
    3
    Optional: If required, specify the location of your registry credentials file.
    4
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    5
    Optional: Because the catalog might reference images that support multiple architectures and operating systems, you can filter by architecture and operating system to mirror only the images that match. Valid values are linux/amd64, linux/ppc64le, and linux/s390x.
    6
    Optional: Only generate the manifests required for mirroring and do not actually mirror the image content to a registry.

    Example output

    src image has index label for database path: /database/index.db
    using database path mapping: /database/index.db:/tmp/153048078
    wrote database to /tmp/153048078 1
    ...
    wrote mirroring manifests to redhat-operator-index-manifests

    1
    Directory for the temporary index.db database generated by the command.

    After running the command, a <image_name>-manifests/ directory is created in the current directory and generates the following files:

    • The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry.
    • The mapping.txt file contains all of the source images and where to map them in the target registry. This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration.
  4. If you used the --manifests-only flag in the previous step and want to further trim the subset of packages to be mirrored:

    1. Modify the list of images in your mapping.txt file to your specifications. If you are unsure of the exact names and versions of the subset of images you want to mirror, use the following steps to find them:

      1. Run the sqlite3 tool against the temporary database that was generated by the oc adm catalog mirror command to retrieve a list of images matching a general search query. The output helps inform how you will later edit your mapping.txt file.

        For example, to retrieve a list of images that are similar to the string jaeger:

        $ echo "select * from related_image \
            where operatorbundle_name like '%jaeger%';" \
            | sqlite3 -line /tmp/153048078/index.db 1
        1
        Refer to the previous output of the oc adm catalog mirror command to find the path of the database file.

        Example output

        ...
        image = registry.redhat.io/distributed-tracing/jaeger-all-in-one-rhel7@sha256:41f769c2c32f3f050aa42d86f084b739914ff9ba2f0aed2d9b0b69357b48459d
        operatorbundle_name = jaeger-operator.v1.17.6
        
        image = registry.redhat.io/distributed-tracing/jaeger-es-index-cleaner-rhel7@sha256:c64ac461d96523516a199bd132ad4d7148317e503a735028f0d8f7ba063a61cb
        operatorbundle_name = jaeger-operator.v1.17.6
        
        image = registry.redhat.io/distributed-tracing/jaeger-rhel7-operator:1.13.2
        operatorbundle_name = jaeger-operator.v1.13.2-1

      2. Use the results from the previous step to help you edit the mapping.txt file to only include the subset of images you want to mirror.

        For example, you can use the image values from the previous example output to find that the following matching lines exist in your mapping.txt file:

        Matching image mappings in mapping.txt

        ...
        registry.redhat.io/distributed-tracing/jaeger-all-in-one-rhel7@sha256:41f769c2c32f3f050aa42d86f084b739914ff9ba2f0aed2d9b0b69357b48459d=quay.io/adellape/distributed-tracing-jaeger-all-in-one-rhel7:5cf7a033
        ...
        registry.redhat.io/distributed-tracing/jaeger-es-index-cleaner-rhel7@sha256:c64ac461d96523516a199bd132ad4d7148317e503a735028f0d8f7ba063a61cb=quay.io/adellape/distributed-tracing-jaeger-es-index-cleaner-rhel7:ecfd2ca7
        ...
        registry.redhat.io/distributed-tracing/jaeger-rhel7-operator:1.13.2=quay.io/adellape/distributed-tracing-jaeger-rhel7-operator:1.13.2
        ...

        In this example, if you only want to mirror these images, you would then remove all other entries in the mapping.txt file and leave only the above matching image mapping lines.

    2. Still on your workstation with unrestricted network access, use your modified mapping.txt file to mirror the images to your registry using the oc image mirror command:

      $ oc image mirror \
          [-a ${REG_CREDS}] \
          -f ./redhat-operator-index-manifests/mapping.txt
  5. Apply the ImageContentSourcePolicy:

    $ oc apply -f ./redhat-operator-index-manifests/imageContentSourcePolicy.yaml
  6. If you are not using a custom, pruned version of an index image, push the Red Hat-provided index image to your registry:

    $ oc image mirror \
        [-a ${REG_CREDS}] \
        registry.redhat.io/redhat/redhat-operator-index:v4.6 \1
        <mirror_registry>:<port>/<namespace>/redhat-operator-index:v4.6 2
    1
    Specify the index image for the catalog that you mirrored content for in the previous step.
    2
    Specify where to mirror the index image.

You can now create a CatalogSource to reference your mirrored index image and Operator content.

3.2.2.5. Creating a catalog from an index image

You can create an Operator catalog from an index image and apply it to an OpenShift Container Platform cluster for use with Operator Lifecycle Manager (OLM).

Prerequisites

  • An index image built and pushed to a registry.

Procedure

  1. Create a CatalogSource object that references your index image.

    1. Modify the following to your specifications and save it as a catalogsource.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: my-operator-catalog
        namespace: openshift-marketplace
      spec:
        sourceType: grpc
        image: <mirror_registry>:<port>/<namespace>/redhat-operator-index:v4.6 1
        displayName: My Operator Catalog
        publisher: <publisher_name> 2
        updateStrategy:
          registryPoll: 3
            interval: 30m
      1
      Specify your index image.
      2
      Specify your name or an organization name publishing the catalog.
      3
      CatalogSources can automatically check for new versions to keep up to date.
    2. Use the file to create the CatalogSource object:

      $ oc create -f catalogsource.yaml
  2. Verify the following resources are created successfully.

    1. Check the pods:

      $ oc get pods -n openshift-marketplace

      Example output

      NAME                                    READY   STATUS    RESTARTS  AGE
      my-operator-catalog-6njx6               1/1     Running   0         28s
      marketplace-operator-d9f549946-96sgr    1/1     Running   0         26h

    2. Check the CatalogSource:

      $ oc get catalogsource -n openshift-marketplace

      Example output

      NAME                  DISPLAY               TYPE PUBLISHER  AGE
      my-operator-catalog   My Operator Catalog   grpc            5s

    3. Check the PackageManifest:

      $ oc get packagemanifest -n openshift-marketplace

      Example output

      NAME                          CATALOG               AGE
      jaeger-product                My Operator Catalog   93s

You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.

3.2.2.6. Installing the MTC Operator on an OpenShift Container Platform 4.6 target cluster in a restricted environment

You can install the MTC Operator on an OpenShift Container Platform 4.6 target cluster with the Operator Lifecycle Manager (OLM).

The MTC Operator installs the Migration Toolkit for Containers on the target cluster by default.

Prerequisites

  • You have created a custom Operator catalog and pushed it to a mirror registry.
  • You have configured OLM to install the MTC Operator from the mirror registry.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Use the Filter by keyword field (in this case, Migration) to find the MTC Operator.
  3. Select the MTC Operator and click Install.
  4. On the Install Operator page, click Install.

    On the Installed Operators page, the MTC Operator appears in the openshift-migration project with the status Succeeded.

  5. Click MTC Operator.
  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  7. Click Create.
  8. Click Workloads → Pods to verify that the Controller Manager, Migration UI, Restic, and Velero pods are running.

3.2.2.7. Installing the MTC Operator on an OpenShift Container Platform 4.2 source cluster in a restricted environment

You can install the MTC Operator on an OpenShift Container Platform 4 source cluster with the Operator Lifecycle Manager (OLM).

Prerequisites

  • You have created a custom Operator catalog and pushed it to a mirror registry.
  • You have configured OLM to install the MTC Operator from the mirror registry.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Use the Filter by keyword field (in this case, Migration) to find the MTC Operator.
  3. Select the MTC Operator and click Install.
  4. On the Install Operator page, click Install.

    On the Installed Operators page, the MTC Operator appears in the openshift-migration project with the status Succeeded.

  5. Click MTC Operator.
  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  7. Click Create.

3.2.3. Launching the MTC web console

You can launch the MTC web console in a browser.

Procedure

  1. Log in to the OpenShift Container Platform cluster on which you have installed MTC.
  2. Obtain the MTC web console URL by entering the following command:

    $ oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'

    The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com.

  3. Launch a browser and navigate to the MTC web console.

    Note

    If you try to access the MTC web console immediately after installing the MTC Operator, the console may not load because the Operator is still configuring the cluster. Wait a few minutes and retry.

  4. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster’s API server. The web page guides you through the process of accepting the remaining certificates.
  5. Log in with your OpenShift Container Platform username and password.

3.3. Upgrading the Migration Toolkit for Containers

You can upgrade the Migration Toolkit for Containers (MTC) by upgrading the MTC Operator.

3.3.1. Upgrading the MTC Operator on an OpenShift Container Platform 4 cluster

You can upgrade to MTC 1.3 on an OpenShift Container Platform 4 cluster by deleting the MigrationController custom resource (CR), uninstalling the CAM Operator, and then installing the MTC Operator.

Procedure

  1. Delete the MigrationController CR:

    $ oc delete migrationcontroller -n openshift-migration migration-controller
  2. In the OpenShift Container Platform console, navigate to Operators → Installed Operators.
  3. Click CAM Operator.
  4. On the right side of the Operator Details page, select Uninstall Operator from the Actions list.
  5. Select Uninstall. This Operator stops running and no longer receives updates.
  6. Navigate to Operators → OperatorHub.
  7. Use the Filter by keyword field to find the MTC Operator.
  8. Select the MTC Operator and click Install.
  9. On the Install Operator page, click Install.

    On the Installed Operators page, verify that the MTC Operator appears in the openshift-migration project with the status Succeeded.

3.4. Configuring a replication repository

You must configure an object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster.

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.

The following storage providers are supported:

  • Multi-Cloud Object Gateway (MCG)
  • Amazon Web Services (AWS) S3
  • Google Cloud Provider (GCP)
  • Microsoft Azure Blob

The source and target clusters must have network access to the replication repository during migration.

In a restricted environment, you can create an internally hosted replication repository. If you use a proxy server, you must ensure that your replication repository is allowed.

3.4.1. Configuring a Multi-Cloud Object Gateway storage bucket as a replication repository

You can install the OpenShift Container Storage Operator and configure a Multi-Cloud Object Gateway (MCG) storage bucket as a replication repository.

3.4.1.1. Installing the OpenShift Container Storage Operator

You can install the OpenShift Container Storage Operator from OperatorHub.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Use Filter by keyword (in this case, OCS) to find the OpenShift Container Storage Operator.
  3. Select the OpenShift Container Storage Operator and click Install.
  4. Select an Update Channel, Installation Mode, and Approval Strategy.
  5. Click Install.

    On the Installed Operators page, the OpenShift Container Storage Operator appears in the openshift-storage project with the status Succeeded.

3.4.1.2. Creating the Multi-Cloud Object Gateway storage bucket

You can create the Multi-Cloud Object Gateway (MCG) storage bucket’s Custom Resources (CRs).

Procedure

  1. Log in to the OpenShift Container Platform cluster:

    $ oc login
  2. Create the NooBaa CR configuration file, noobaa.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: NooBaa
    metadata:
      name: noobaa
      namespace: openshift-storage
    spec:
     dbResources:
       requests:
         cpu: 0.5 1
         memory: 1Gi
     coreResources:
       requests:
         cpu: 0.5 2
         memory: 1Gi
    1 2
    For a very small cluster, you can change the cpu value to 0.1.
  3. Create the NooBaa object:

    $ oc create -f noobaa.yml
  4. Create the BackingStore CR configuration file, bs.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: BackingStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: mcg-pv-pool-bs
      namespace: openshift-storage
    spec:
      pvPool:
        numVolumes: 3 1
        resources:
          requests:
            storage: 50Gi 2
        storageClass: gp2 3
      type: pv-pool
    1
    Specify the number of volumes in the PV pool.
    2
    Specify the size of the volumes.
    3
    Specify the storage class.
  5. Create the BackingStore object:

    $ oc create -f bs.yml
  6. Create the BucketClass CR configuration file, bc.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: BucketClass
    metadata:
      labels:
        app: noobaa
      name: mcg-pv-pool-bc
      namespace: openshift-storage
    spec:
      placementPolicy:
        tiers:
        - backingStores:
          - mcg-pv-pool-bs
          placement: Spread
  7. Create the BucketClass object:

    $ oc create -f bc.yml
  8. Create the ObjectBucketClaim CR configuration file, obc.yml, with the following content:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: migstorage
      namespace: openshift-storage
    spec:
      bucketName: migstorage 1
      storageClassName: openshift-storage.noobaa.io
      additionalConfig:
        bucketclass: mcg-pv-pool-bc
    1
    Record the bucket name for adding the replication repository to the MTC web console.
  9. Create the ObjectBucketClaim object:

    $ oc create -f obc.yml
  10. Watch the resource creation process to verify that the ObjectBucketClaim status is Bound:

    $ watch -n 30 'oc get -n openshift-storage objectbucketclaim migstorage -o yaml'

    This process can take five to ten minutes.

  11. Obtain and record the following values, which are required when you add the replication repository to the MTC web console:

    • S3 endpoint:

      $ oc get route -n openshift-storage s3
    • S3 provider access key:

      $ oc get secret -n openshift-storage migstorage -o go-template='{{ .data.AWS_ACCESS_KEY_ID }}' | base64 -d
    • S3 provider secret access key:

      $ oc get secret -n openshift-storage migstorage -o go-template='{{ .data.AWS_SECRET_ACCESS_KEY }}' | base64 -d

3.4.2. Configuring an AWS S3 storage bucket as a replication repository

You can configure an AWS S3 storage bucket as a replication repository.

Prerequisites

  • The AWS S3 storage bucket must be accessible to the source and target clusters.
  • You must have the AWS CLI installed.
  • If you are using the snapshot copy method:

    • You must have access to EC2 Elastic Block Storage (EBS).
    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Create an AWS S3 bucket:

    $ aws s3api create-bucket \
        --bucket <bucket_name> \ 1
        --region <bucket_region> 2
    1
    Specify your S3 bucket name.
    2
    Specify your S3 bucket region, for example, us-east-1.
  2. Create the IAM user velero:

    $ aws iam create-user --user-name velero
  3. Create an EC2 EBS snapshot policy:

    $ cat > velero-ec2-snapshot-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeVolumes",
                    "ec2:DescribeSnapshots",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:CreateSnapshot",
                    "ec2:DeleteSnapshot"
                ],
                "Resource": "*"
            }
        ]
    }
    EOF
  4. Create an AWS S3 access policy for one or for all S3 buckets:

    $ cat > velero-s3-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:PutObject",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts"
                ],
                "Resource": [
                    "arn:aws:s3:::<bucket_name>/*" 1
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation",
                    "s3:ListBucketMultipartUploads"
                ],
                "Resource": [
                    "arn:aws:s3:::<bucket_name>" 2
                ]
            }
        ]
    }
    EOF
    1 2
    To grant access to a single S3 bucket, specify the bucket name. To grant access to all AWS S3 buckets, specify * instead of a bucket name as in the following example:

    Example output

    "Resource": [
        "arn:aws:s3:::*"

  5. Attach the EC2 EBS policy to velero:

    $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero-ebs \
      --policy-document file://velero-ec2-snapshot-policy.json
  6. Attach the AWS S3 policy to velero:

    $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero-s3 \
      --policy-document file://velero-s3-policy.json
  7. Create an access key for velero:

    $ aws iam create-access-key --user-name velero
    {
      "AccessKey": {
            "UserName": "velero",
            "Status": "Active",
            "CreateDate": "2017-07-31T22:24:41.576Z",
            "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, 1
            "AccessKeyId": <AWS_ACCESS_KEY_ID> 2
        }
    }
    1 2
    Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID for adding the AWS repository to the MTC web console.
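
Optionally, you can confirm that both inline policies are attached to the velero user before proceeding:

$ aws iam list-user-policies --user-name velero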

3.4.3. Configuring a Google Cloud Provider storage bucket as a replication repository

You can configure a Google Cloud Provider (GCP) storage bucket as a replication repository.

Prerequisites

  • The GCP storage bucket must be accessible to the source and target clusters.
  • You must have gsutil installed.
  • If you are using the snapshot copy method:

    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Run gsutil init to log in:

    Example output

    Welcome! This command will take you through the configuration of gcloud.
    
    Your current configuration has been set to: [default]
    
    To continue, you must login. Would you like to login (Y/n)?

  2. Set the BUCKET variable:

    $ BUCKET=<bucket_name> 1
    1
    Specify your bucket name.
  3. Create a storage bucket:

    $ gsutil mb gs://$BUCKET/
  4. Set the PROJECT_ID variable to your active project:

    $ PROJECT_ID=$(gcloud config get-value project)
  5. Create a velero IAM service account:

    $ gcloud iam service-accounts create velero \
        --display-name "Velero Storage"
  6. Create the SERVICE_ACCOUNT_EMAIL variable:

    $ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
      --filter="displayName:Velero Storage" \
      --format 'value(email)')
  7. Create the ROLE_PERMISSIONS variable:

    $ ROLE_PERMISSIONS=(
        compute.disks.get
        compute.disks.create
        compute.disks.createSnapshot
        compute.snapshots.get
        compute.snapshots.create
        compute.snapshots.useReadOnly
        compute.snapshots.delete
        compute.zones.get
    )
  8. Create the velero.server custom role:

    $ gcloud iam roles create velero.server \
        --project $PROJECT_ID \
        --title "Velero Server" \
        --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
  9. Add IAM policy binding to the project:

    $ gcloud projects add-iam-policy-binding $PROJECT_ID \
        --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
        --role projects/$PROJECT_ID/roles/velero.server
  10. Update the IAM service account:

    $ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
  11. Save the IAM service account keys to the credentials-velero file in the current directory:

    $ gcloud iam service-accounts keys create credentials-velero \
      --iam-account $SERVICE_ACCOUNT_EMAIL

3.4.4. Configuring a Microsoft Azure Blob storage container as a replication repository

You can configure a Microsoft Azure Blob storage container as a replication repository.

Prerequisites

  • You must have an Azure storage account.
  • You must have the Azure CLI installed.
  • The Azure Blob storage container must be accessible to the source and target clusters.
  • If you are using the snapshot copy method:

    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Set the AZURE_RESOURCE_GROUP variable:

    $ AZURE_RESOURCE_GROUP=Velero_Backups
  2. Create an Azure resource group:

    $ az group create -n $AZURE_RESOURCE_GROUP --location <CentralUS> 1
    1
    Specify your location.
  3. Set the AZURE_STORAGE_ACCOUNT_ID variable:

    $ AZURE_STORAGE_ACCOUNT_ID=velerobackups
  4. Create an Azure storage account:

    $ az storage account create \
      --name $AZURE_STORAGE_ACCOUNT_ID \
      --resource-group $AZURE_RESOURCE_GROUP \
      --sku Standard_GRS \
      --encryption-services blob \
      --https-only true \
      --kind BlobStorage \
      --access-tier Hot
  5. Set the BLOB_CONTAINER variable:

    $ BLOB_CONTAINER=velero
  6. Create an Azure Blob storage container:

    $ az storage container create \
      -n $BLOB_CONTAINER \
      --public-access off \
      --account-name $AZURE_STORAGE_ACCOUNT_ID
  7. Create a service principal and credentials for velero:

    $ AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` \
      AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` \
      AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" --role "Contributor" --query 'password' -o tsv` \
      AZURE_CLIENT_ID=`az ad sp list --display-name "velero" --query '[0].appId' -o tsv`
  8. Save the service principal credentials in the credentials-velero file:

    $ cat << EOF  > ./credentials-velero
    AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
    AZURE_TENANT_ID=${AZURE_TENANT_ID}
    AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
    AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
    AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
    AZURE_CLOUD_NAME=AzurePublicCloud
    EOF

3.5. Migrating your applications

You must add your clusters and a replication repository to the MTC web console. Then, you can create and run a migration plan.

If your cluster or replication repository is secured with self-signed certificates, you can create a CA certificate bundle file or disable SSL verification.

3.5.1. Creating a CA certificate bundle file

If you use a self-signed certificate to secure a cluster or a replication repository, certificate verification might fail with the following error message: Certificate signed by unknown authority.

You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository.

Procedure

Download a CA certificate from a remote endpoint and save it as a CA bundle file:

$ echo -n | openssl s_client -connect <host_FQDN>:<port> \ 1
  | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2
1
Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443.
2
Specify the name of the CA bundle file.
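
Optionally, you can inspect the downloaded certificate before uploading it to confirm that it belongs to the expected endpoint and has not expired. This is an informal check using standard openssl options; if the bundle contains more than one certificate, only the first one is displayed:

$ openssl x509 -in <ca_bundle.cert> -noout -subject -issuer -enddate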

3.5.2. Configuring a migration plan

3.5.2.1. Increasing Migration Controller limits for large migrations

You can increase the Migration Controller limits on migration objects and container resources for large migrations.

Important

You must test these changes before you perform a migration in a production environment.

Procedure

  1. Edit the Migration Controller manifest:

    $ oc edit migrationcontroller -n openshift-migration
  2. Update the following parameters:

    ...
    mig_controller_limits_cpu: "1" 1
    mig_controller_limits_memory: "10Gi" 2
    ...
    mig_controller_requests_cpu: "100m" 3
    mig_controller_requests_memory: "350Mi" 4
    ...
    mig_pv_limit: 100 5
    mig_pod_limit: 100 6
    mig_namespace_limit: 10 7
    ...
    1
    Specifies the number of CPUs available to the Migration Controller.
    2
    Specifies the amount of memory available to the Migration Controller.
    3
    Specifies the number of CPU units available for Migration Controller requests. 100m represents 0.1 CPU units (100 * 1e-3).
    4
    Specifies the amount of memory available for Migration Controller requests.
    5
    Specifies the number of PVs that can be migrated.
    6
    Specifies the number of pods that can be migrated.
    7
    Specifies the number of namespaces that can be migrated.
  3. Create a migration plan that uses the updated parameters to verify the changes.

    If your migration plan exceeds the Migration Controller limits, the MTC console displays a warning message when you save the migration plan.
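
If you prefer to confirm the updated values from the command line, the following example prints the limit parameters. It assumes that the MigrationController custom resource is named migration-controller, as shown elsewhere in this document:

$ oc get migrationcontroller migration-controller -n openshift-migration -o yaml \
  | grep -E 'mig_controller_(limits|requests)_(cpu|memory)|mig_(pv|pod|namespace)_limit'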

3.5.2.2. Excluding resources from a migration plan

You can exclude resources, for example, ImageStreams, persistent volumes (PVs), or subscriptions, from a migration plan in order to reduce the load or to migrate images or PVs with a different tool.

Procedure

  1. Edit the Migration Controller CR:

    $ oc edit migrationcontroller -n openshift-migration
  2. Update the spec section by adding a parameter to exclude specific resources or by adding a resource to the excluded_resources parameter if it does not have its own exclusion parameter:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: migration-controller
      namespace: openshift-migration
    spec:
      disable_image_migration: true 1
      disable_pv_migration: true 2
      ...
      excluded_resources: 3
      - imagetags
      - templateinstances
      - clusterserviceversions
      - packagemanifests
      - subscriptions
      - servicebrokers
      - servicebindings
      - serviceclasses
      - serviceinstances
      - serviceplans
    1
    Add disable_image_migration: true to exclude imagestreams from the migration. Do not edit the excluded_resources parameter. imagestreams is added to excluded_resources when the Migration Controller Pod restarts.
    2
    Add disable_pv_migration: true to exclude PVs from the migration plan. Do not edit the excluded_resources parameter. persistentvolumes and persistentvolumeclaims are added to excluded_resources when the Migration Controller Pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan.
    3
    You can add OpenShift Container Platform resources to the excluded_resources list. Do not delete any of the default excluded resources. These resources are known to be problematic for migration.
  3. Wait two minutes for the Migration Controller Pod to restart so that the changes are applied.
  4. Verify that the resource is excluded:

    $ oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1

    The output contains the excluded resources, as shown in the following example:

        - name: EXCLUDED_RESOURCES
          value:
          imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims
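
If the EXCLUDED_RESOURCES variable does not yet contain the resource, the Migration Controller Pod might not have restarted. Optionally, you can watch the pods in the openshift-migration namespace and rerun the verification command after the new pod is running:

$ oc get pods -n openshift-migration -w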

3.5.3. Adding a cluster to the MTC web console

You can add a cluster to the MTC web console.

Prerequisites

If you are using Azure snapshots to copy data:

  • You must provide the Azure resource group name when you add the source cluster.
  • The source and target clusters must be in the same Azure resource group and in the same location.

Procedure

  1. Log in to the cluster.
  2. Obtain the service account token:

    $ oc sa get-token migration-controller -n openshift-migration

    Example output

    eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ

  3. Log in to the MTC web console.
  4. In the Clusters section, click Add cluster.
  5. Fill in the following fields:

    • Cluster name: May contain lower-case letters (a-z) and numbers (0-9). Must not contain spaces or international characters.
    • Url: URL of the cluster’s API server, for example, https://<master1.example.com>:8443.
    • Service account token: String that you obtained from the source cluster.
    • Azure cluster: Optional. Select it if you are using Azure snapshots to copy your data.
    • Azure resource group: This field appears if Azure cluster is checked.
    • If you use a custom CA bundle, click Browse and browse to the CA bundle file.
  6. Click Add cluster.

    The cluster appears in the Clusters section of the MTC web console.
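
Optionally, you can verify from the CLI that a corresponding MigCluster custom resource was created in the openshift-migration namespace of the cluster on which MTC is installed. This is an informal check, not a required step:

$ oc get migcluster -n openshift-migration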

3.5.4. Adding a replication repository to the MTC web console

You can add an object storage bucket as a replication repository to the MTC web console.

Prerequisites

  • You must configure an object storage bucket for migrating the data.

Procedure

  1. Log in to the MTC web console.
  2. In the Replication repositories section, click Add repository.
  3. Select a Storage provider type and fill in the following fields:

    • AWS for AWS S3, MCG, and generic S3 providers:

      • Replication repository name: Specify the replication repository name in the MTC web console.
      • S3 bucket name: Specify the name of the S3 bucket you created.
      • S3 bucket region: Specify the S3 bucket region. Required for AWS S3. Optional for other S3 providers.
      • S3 endpoint: Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com>. Required for a generic S3 provider. You must use the https:// prefix.
      • S3 provider access key: Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG.
      • S3 provider secret access key: Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG.
      • Require SSL verification: Clear this check box if you are using a generic S3 provider.
      • If you use a custom CA bundle, click Browse and browse to the Base64-encoded CA bundle file.
    • GCP:

      • Replication repository name: Specify the replication repository name in the MTC web console.
      • GCP bucket name: Specify the name of the GCP bucket.
      • GCP credential JSON blob: Specify the string in the credentials-velero file.
    • Azure:

      • Replication repository name: Specify the replication repository name in the MTC web console.
      • Azure resource group: Specify the resource group of the Azure Blob storage.
      • Azure storage account name: Specify the Azure Blob storage account name.
      • Azure credentials - INI file contents: Specify the string in the credentials-velero file.
  4. Click Add repository and wait for connection validation.
  5. Click Close.

    The new repository appears in the Replication repositories section.
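
To check the repository from the CLI, you can list the MigStorage custom resources in the openshift-migration namespace; the new repository should appear there after the connection is validated. This is an informal check, not a required step:

$ oc get migstorage -n openshift-migration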

3.5.5. Creating a migration plan in the MTC web console

You can create a migration plan in the MTC web console.

Prerequisites

  • The MTC web console must contain the following:

    • Source cluster
    • Target cluster
    • Replication repository
  • The source and target clusters must have network access to each other and to the replication repository.
  • If you use snapshots to copy data, the source and target clusters must run on the same cloud provider (AWS, GCP, or Azure) and be located in the same region.

Procedure

  1. Log in to the MTC web console.
  2. In the Plans section, click Add plan.
  3. Enter the Plan name and click Next.

    The Plan name can contain up to 253 lower-case alphanumeric characters (a-z, 0-9). It must not contain spaces or underscores (_).

  4. Select a Source cluster.
  5. Select a Target cluster.
  6. Select a Replication repository.
  7. Select the projects to be migrated and click Next.
  8. Select Copy or Move for the PVs:

    • Copy copies the data in a source cluster’s PV to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster.

      Optional: You can verify data copied with the file system method by selecting Verify copy. This option generates a checksum for each source file and checks it after restoration. The operation significantly reduces performance.

    • Move unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
  9. Click Next.
  10. Select a Copy method for the PVs:

    • Snapshot backs up and restores the disk using the cloud provider’s snapshot functionality. It is significantly faster than Filesystem.

      Note

      The storage and clusters must be in the same region and the storage class must be compatible.

    • Filesystem copies the data files from the source disk to a newly created target disk.
  11. Select a Storage class for the PVs.

    If you selected the Filesystem copy method, you can change the storage class during migration, for example, from Red Hat Gluster Storage or NFS storage to Red Hat Ceph Storage.

  12. Click Next.
  13. If you want to add a migration hook, click Add Hook and perform the following steps:

    1. Specify the name of the hook.
    2. Select Ansible playbook to use your own playbook or Custom container image for a hook written in another language.
    3. Click Browse to upload the playbook.
    4. Optional: If you are not using the default Ansible runtime image, specify your custom Ansible image.
    5. Specify the cluster on which you want the hook to run.
    6. Specify the service account name.
    7. Specify the namespace.
    8. Select the migration step at which you want the hook to run:

      • PreBackup: Before backup tasks are started on the source cluster
      • PostBackup: After backup tasks are complete on the source cluster
      • PreRestore: Before restore tasks are started on the target cluster
      • PostRestore: After restore tasks are complete on the target cluster
  14. Click Add.

    You can add up to four hooks to a migration plan, assigning each hook to a different migration step.

  15. Click Finish.
  16. Click Close.

    The migration plan appears in the Plans section.
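
Optionally, you can review the plan from the CLI by describing its MigPlan custom resource and checking its conditions. The placeholder <migration_plan_name> is the plan name that you entered in the console:

$ oc describe migplan <migration_plan_name> -n openshift-migration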

3.5.6. Running a migration plan in the MTC web console

You can stage or migrate applications and data with the migration plan you created in the MTC web console.

Prerequisites

The MTC web console must contain the following:

  • Source cluster
  • Target cluster
  • Replication repository
  • Valid migration plan

Procedure

  1. Log in to the source cluster.
  2. Delete old images:

    $ oc adm prune images
  3. Log in to the MTC web console.
  4. Select a migration plan.
  5. Click Stage to copy data from the source cluster to the target cluster without stopping the application.

    You can run Stage multiple times to reduce the actual migration time.

  6. When you are ready to migrate the application workload, click Migrate.

    Migrate stops the application workload on the source cluster and recreates its resources on the target cluster.

  7. Optional: In the Migrate window, you can select Do not stop applications on the source cluster during migration.
  8. Click Migrate.
  9. Optional: To stop a migration in progress, click the Options menu kebab and select Cancel.
  10. When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console:

    1. Click HomeProjects.
    2. Click the migrated project to view its status.
    3. In the Routes section, click Location to verify that the application is functioning, if applicable.
    4. Click WorkloadsPods to verify that the pods are running in the migrated namespace.
    5. Click StoragePersistent volumes to verify that the migrated persistent volume is correctly provisioned.
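
You can also follow the progress of a migration from the CLI by watching the MigMigration custom resources on the cluster on which MTC is installed. This is optional; the placeholder <migmigration_name> is the name shown in the list output:

$ oc get migmigration -n openshift-migration
$ oc describe migmigration <migmigration_name> -n openshift-migration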

3.6. Troubleshooting

You can view the migration Custom Resources (CRs) and download logs to troubleshoot a failed migration.

If the application was stopped during the failed migration, you must roll it back manually in order to prevent data corruption.

Note

Manual rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster.

3.6.1. Viewing migration Custom Resources

The Migration Toolkit for Containers (MTC) creates the following Custom Resources (CRs):

migration architecture diagram

  • MigCluster (configuration, MTC cluster): Cluster definition
  • MigStorage (configuration, MTC cluster): Storage definition
  • MigPlan (configuration, MTC cluster): Migration plan

    The MigPlan CR describes the source and target clusters, replication repository, and namespaces being migrated. It is associated with 0, 1, or many MigMigration CRs.

    Note

    Deleting a MigPlan CR deletes the associated MigMigration CRs.

  • BackupStorageLocation (configuration, MTC cluster): Location of Velero backup objects
  • VolumeSnapshotLocation (configuration, MTC cluster): Location of Velero volume snapshots
  • MigMigration (action, MTC cluster): Migration, created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR.
  • Backup (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster:

    • Backup CR #1 for Kubernetes objects
    • Backup CR #2 for PV data

  • Restore (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster:

    • Restore CR #1 (using Backup CR #2) for PV data
    • Restore CR #2 (using Backup CR #1) for Kubernetes objects

Procedure

  1. View the CR:

    $ oc get <cr> -n openshift-migration 1
    1
    Specify the migration CR, for example, migmigration.

    Example output

    NAME                                   AGE
    88435fe0-c9f8-11e9-85e6-5d593ce65e10   6m42s

  2. Inspect the migmigration CR:

    $ oc describe <migmigration> <88435fe0-c9f8-11e9-85e6-5d593ce65e10> -n openshift-migration

    The output is similar to the following examples.

MigMigration example output

name:         88435fe0-c9f8-11e9-85e6-5d593ce65e10
namespace:    openshift-migration
labels:       <none>
annotations:  touch: 3b48b543-b53e-4e44-9d34-33563f0f8147
apiVersion:  migration.openshift.io/v1alpha1
kind:         MigMigration
metadata:
  creationTimestamp:  2019-08-29T01:01:29Z
  generation:          20
  resourceVersion:    88179
  selfLink:           /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10
  uid:                 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
spec:
  migPlanRef:
    name:        socks-shop-mig-plan
    namespace:   openshift-migration
  quiescePods:  true
  stage:         false
status:
  conditions:
    category:              Advisory
    durable:               True
    lastTransitionTime:  2019-08-29T01:03:40Z
    message:               The migration has completed successfully.
    reason:                Completed
    status:                True
    type:                  Succeeded
  phase:                   Completed
  startTimestamp:         2019-08-29T01:01:29Z
events:                    <none>

Velero backup CR #2 example output that describes the PV data

apiVersion: velero.io/v1
kind: Backup
metadata:
  annotations:
    openshift.io/migrate-copy-phase: final
    openshift.io/migrate-quiesce-pods: "true"
    openshift.io/migration-registry: 172.30.105.179:5000
    openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6
  creationTimestamp: "2019-08-29T01:03:15Z"
  generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-
  generation: 1
  labels:
    app.kubernetes.io/part-of: migration
    migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
    migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
    velero.io/storage-location: myrepo-vpzq9
  name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7
  namespace: openshift-migration
  resourceVersion: "87313"
  selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7
  uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6
spec:
  excludedNamespaces: []
  excludedResources: []
  hooks:
    resources: []
  includeClusterResources: null
  includedNamespaces:
  - sock-shop
  includedResources:
  - persistentvolumes
  - persistentvolumeclaims
  - namespaces
  - imagestreams
  - imagestreamtags
  - secrets
  - configmaps
  - pods
  labelSelector:
    matchLabels:
      migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
  storageLocation: myrepo-vpzq9
  ttl: 720h0m0s
  volumeSnapshotLocations:
  - myrepo-wv6fx
status:
  completionTimestamp: "2019-08-29T01:02:36Z"
  errors: 0
  expiration: "2019-09-28T01:02:35Z"
  phase: Completed
  startTimestamp: "2019-08-29T01:02:35Z"
  validationErrors: null
  version: 1
  volumeSnapshotsAttempted: 0
  volumeSnapshotsCompleted: 0
  warnings: 0

Velero restore CR #2 example output that describes the Kubernetes resources

apiVersion: velero.io/v1
kind: Restore
metadata:
  annotations:
    openshift.io/migrate-copy-phase: final
    openshift.io/migrate-quiesce-pods: "true"
    openshift.io/migration-registry: 172.30.90.187:5000
    openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88
  creationTimestamp: "2019-08-28T00:09:49Z"
  generateName: e13a1b60-c927-11e9-9555-d129df7f3b96-
  generation: 3
  labels:
    app.kubernetes.io/part-of: migration
    migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88
    migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88
  name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx
  namespace: openshift-migration
  resourceVersion: "82329"
  selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx
  uid: 26983ec0-c928-11e9-825a-06fa9fb68c88
spec:
  backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f
  excludedNamespaces: null
  excludedResources:
  - nodes
  - events
  - events.events.k8s.io
  - backups.velero.io
  - restores.velero.io
  - resticrepositories.velero.io
  includedNamespaces: null
  includedResources: null
  namespaceMapping: null
  restorePVs: true
status:
  errors: 0
  failureReason: ""
  phase: Completed
  validationErrors: null
  warnings: 15

3.6.2. Downloading migration logs

You can download the Velero, Restic, and Migration controller logs in the MTC web console to troubleshoot a failed migration.

Procedure

  1. Log in to the MTC console.
  2. Click Plans to view the list of migration plans.
  3. Click the Options menu kebab of a specific migration plan and select Logs.
  4. Click Download Logs to download the logs of the Migration controller, Velero, and Restic for all clusters.
  5. To download a specific log:

    1. Specify the log options:

      • Cluster: Select the source, target, or MTC host cluster.
      • Log source: Select Velero, Restic, or Controller.
      • Pod source: Select the Pod name, for example, controller-manager-78c469849c-v6wcf.

        The selected log is displayed.

        You can clear the log selection settings by changing your selection.

    2. Click Download Selected to download the selected log.

Optionally, you can access the logs by using the CLI, as in the following example:

$ oc logs <pod-name> -f -n openshift-migration 1
1
Specify the Pod name.
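
If you do not know the Pod name, you can list the pods in the openshift-migration namespace first and then retrieve the logs of the Velero, Restic, or Migration controller pod that you are interested in:

$ oc get pods -n openshift-migration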

3.6.3. Error messages and resolutions

This section describes common error messages and how to resolve their underlying causes.

3.6.3.1. CA certificate error in the MTC console

If a CA certificate error message is displayed the first time you try to access the MTC console, the likely cause is the use of self-signed CA certificates in one of the clusters.

To resolve this issue, navigate to the oauth-authorization-server URL displayed in the error message and accept the certificate. To resolve this issue permanently, add the certificate to the trust store of your web browser.

If an Unauthorized message is displayed after you have accepted the certificate, navigate to the MTC console and refresh the web page.

3.6.3.2. OAuth timeout error in the MTC console

If a connection has timed out message is displayed in the MTC console after you have accepted a self-signed certificate, you can determine the cause of the timeout by inspecting the console with the browser web inspector and by checking the migration-ui pod log.

Procedure

  1. Navigate to the MTC console and inspect the elements with the browser web inspector.
  2. Check the migration-ui pod log:

    $ oc logs migration-ui-<86b679ffc7-h6l6v> -n openshift-migration

3.6.3.3. PodVolumeBackups timeout error in Velero log

If a migration fails because Restic times out, the following error is displayed in the Velero log.

Example output

level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1

The default value of restic_timeout is one hour. You can increase this parameter for large migrations, keeping in mind that a higher value may delay the return of error messages.

Procedure

  1. In the OpenShift Container Platform web console, navigate to OperatorsInstalled Operators.
  2. Click MTC Operator.
  3. In the MigrationController tab, click migration-controller.
  4. In the YAML tab, update the following parameter value:

    spec:
      restic_timeout: 1h 1
    1
    Valid units are h (hours), m (minutes), and s (seconds), for example, 3h30m15s.
  5. Click Save.
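
As an alternative to editing the custom resource in the web console, you can apply the same change from the CLI. The following example assumes the default MigrationController name, migration-controller, and sets the timeout to three hours:

$ oc patch migrationcontroller migration-controller -n openshift-migration \
  --type merge -p '{"spec":{"restic_timeout":"3h"}}'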

3.6.3.4. ResticVerifyErrors in the MigMigration Custom Resource

If data verification fails when migrating a PV with the file system data copy method, the following error is displayed in the MigMigration Custom Resource (CR).

Example output

status:
  conditions:
  - category: Warn
    durable: true
    lastTransitionTime: 2020-04-16T20:35:16Z
    message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>`
      for details 1
    status: "True"
    type: ResticVerifyErrors 2

1
The error message identifies the Restore CR name.
2
ResticVerifyErrors is a general error warning type that includes verification errors.
Note

A data verification error does not cause the migration process to fail.

You can check the Restore CR to identify the source of the data verification error.

Procedure

  1. Log in to the target cluster.
  2. View the Restore CR:

    $ oc describe <registry-example-migration-rvwcm> -n openshift-migration

    The output identifies the PV with PodVolumeRestore errors.

    Example output

    status:
      phase: Completed
      podVolumeRestoreErrors:
      - kind: PodVolumeRestore
        name: <registry-example-migration-rvwcm-98t49>
        namespace: openshift-migration
      podVolumeRestoreResticErrors:
      - kind: PodVolumeRestore
        name: <registry-example-migration-rvwcm-98t49>
        namespace: openshift-migration

  3. View the PodVolumeRestore CR:

    $ oc describe <migration-example-rvwcm-98t49>

    The output identifies the Restic pod that logged the errors.

    Example output

      completionTimestamp: 2020-05-01T20:49:12Z
      errors: 1
      resticErrors: 1
      ...
      resticPod: <restic-nr2v5>

  4. View the Restic pod log to locate the errors:

    $ oc logs -f <restic-nr2v5>
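
Because Restic pod logs can be long, you might find it easier to filter them for error lines. This is an informal example, not part of the required procedure:

$ oc logs <restic-nr2v5> -n openshift-migration | grep -i error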

3.6.4. Manually rolling back a migration

If your application was stopped during a failed migration, you must roll it back manually in order to prevent data corruption in the PV.

This procedure is not required if the application was not stopped during migration, because the original application is still running on the source cluster.

Procedure

  1. On the target cluster, switch to the migrated project:

    $ oc project <project>
  2. Get the deployed resources:

    $ oc get all
  3. Delete the deployed resources to ensure that the application is not running on the target cluster and accessing data on the PVC:

    $ oc delete <resource_type>
  4. To stop a daemon set without deleting it, update the nodeSelector in the YAML file:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: hello-daemonset
    spec:
      selector:
          matchLabels:
            name: hello-daemonset
      template:
        metadata:
          labels:
            name: hello-daemonset
        spec:
          nodeSelector:
            role: worker 1
    1
    Specify a nodeSelector value that does not exist on any node.
  5. Update each PV’s reclaim policy so that unnecessary data is removed. During migration, the reclaim policy for bound PVs is Retain, to ensure that data is not lost when an application is removed from the source cluster. You can remove these PVs during rollback.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv0001
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain 1
      ...
    status:
      ...
    1
    Specify Recycle or Delete.
  6. On the source cluster, switch to your migrated project:

    $ oc project <project_name>
  7. Obtain the project’s deployed resources:

    $ oc get all
  8. Start one or more replicas of each deployed resource:

    $ oc scale --replicas=1 <resource_type>/<resource_name>
  9. Update the nodeSelector of the DaemonSet resource to its original value, if you changed it during the procedure.
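
If you prefer the CLI for steps 4 and 5, you can patch the resources directly. The following examples reuse the names shown above (hello-daemonset and pv0001) and are illustrative only; substitute your own resource names and project:

$ oc patch daemonset hello-daemonset -n <project> --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"role":"worker"}}}}}'
$ oc patch pv pv0001 -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'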

3.6.5. Using must-gather to collect data

You must run the must-gather tool if you open a customer support case on the Red Hat Customer Portal.

The openshift-migration-must-gather-rhel8 image collects migration-specific logs and data that are not collected by the default must-gather image.

Procedure

  1. Navigate to the directory where you want to store the must-gather data.
  2. Run the must-gather command:

    $ oc adm must-gather --image=openshift-migration-must-gather-rhel8:v1.3.0
  3. Remove authentication keys and other sensitive information.
  4. Create an archive file containing the contents of the must-gather data directory:

    $ tar cvaf must-gather.tar.gz must-gather.local.<uid>/
  5. Upload the compressed file as an attachment to your customer support case.

3.6.6. Known issues

This release has the following known issues:

  • During migration, the Migration Toolkit for Containers (MTC) preserves the following namespace annotations:

    • openshift.io/sa.scc.mcs
    • openshift.io/sa.scc.supplemental-groups
    • openshift.io/sa.scc.uid-range

      These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. (BZ#1748440)

  • If an AWS bucket is added to the MTC web console and then deleted, its status remains True because the MigStorage CR is not updated. (BZ#1738564)
  • Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you may have to create them manually on the target cluster.
  • If a migration fails, the migration plan does not retain custom PV settings for quiesced pods. You must manually roll back the migration, delete the migration plan, and create a new migration plan with your PV settings. (BZ#1784899)
  • If a large migration fails because Restic times out, you can increase the restic_timeout parameter value (default: 1h) in the Migration Controller CR.
  • If you select the data verification option for PVs that are migrated with the file system copy method, performance is significantly slower.
  • If you are migrating data from NFS storage and root_squash is enabled, Restic maps to nfsnobody. The migration fails and a permission error is displayed in the Restic Pod log. You can resolve this issue by creating a supplemental group for Restic. (BZ#1873641)
  • If Velero has an invalid BackupStorageLocation during start-up, it will crash-loop until the invalid BackupStorageLocation is removed. This scenario is triggered by incorrect credentials, a non-existent S3 bucket, and other configuration errors. (BZ#1881707)

Legal Notice

Copyright © 2020 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.