Chapter 2. Migrating a non-containerized Red Hat Ceph Storage cluster to a containerized environment

Red Hat Ceph Storage 5 supports only containerized daemons; it does not support non-containerized storage clusters. If you are upgrading a non-containerized bare-metal storage cluster from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, the upgrade process includes the conversion to a containerized deployment. You can manually migrate a non-containerized bare-metal cluster to a containerized cluster using the command-line interface.

Prerequisites

  • A running Red Hat Ceph Storage non-containerized cluster.
  • Root-level access to all the nodes.

Procedure

  1. Optional: If two-way RBD mirroring is configured using the command-line interface in a bare-metal storage cluster, the migration does not carry the RBD mirroring configuration over. For such a configuration, follow these steps before migrating the non-containerized storage cluster to a containerized storage cluster:

    1. Export the keyring of the RBD mirror user on the Ceph client node:

      Syntax

      ceph auth get client.PRIMARY_CLUSTER_NAME -o /etc/ceph/ceph.client.PRIMARY_CLUSTER_NAME.keyring

      Example

      [root@rbd-client-site-a ~]# ceph auth get client.rbd-mirror.site-a -o /etc/ceph/ceph.client.rbd-mirror.site-a.keyring

    2. Change the username in the auth file in the /etc/ceph directory:

      Example

      [client.rbd-mirror.rbd-client-site-a]
          key = AQCbKbVg+E7POBAA7COSZCodvOrg2LWIFc9+3g==
          caps mds = "allow *"
          caps mgr = "allow *"
          caps mon = "allow *"
          caps osd = "allow *"
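The rename in this step can also be scripted. The following sketch is illustrative only: it operates on a scratch copy of the keyring rather than the real file under /etc/ceph, and assumes the exported file initially contains the section name from the previous step, [client.rbd-mirror.site-a].

```shell
# Illustrative only: rename the client section in a scratch copy of the
# keyring, rather than editing the real /etc/ceph file by hand.
keyring=$(mktemp)
cat > "$keyring" <<'EOF'
[client.rbd-mirror.site-a]
    key = AQCbKbVg+E7POBAA7COSZCodvOrg2LWIFc9+3g==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
EOF
# Rewrite the section header so the client name includes the host name
sed -i 's/^\[client\.rbd-mirror\.site-a\]/[client.rbd-mirror.rbd-client-site-a]/' "$keyring"
head -n 1 "$keyring"
```

Only the section header changes; the key and caps lines are left untouched, so the imported user keeps the same permissions.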

    3. Import the auth file to add relevant permissions:

      Syntax

      ceph auth import -i PATH_TO_KEYRING

      Example

      [root@rbd-client-site-a ~]# ceph auth import -i /etc/ceph/ceph.client.rbd-mirror.rbd-client-site-a.keyring

    4. Check the service name on the RBD mirror node, then stop and disable the service running under the old name, and start and enable it under the new name:

      Example

      [root@rbd-client-site-a ~]# systemctl list-units --all
      
      [root@rbd-client-site-a ~]# systemctl stop ceph-rbd-mirror@rbd-client-site-a.service
      [root@rbd-client-site-a ~]# systemctl disable ceph-rbd-mirror@rbd-client-site-a.service
      [root@rbd-client-site-a ~]# systemctl reset-failed ceph-rbd-mirror@rbd-client-site-a.service
      [root@rbd-client-site-a ~]# systemctl start ceph-rbd-mirror@rbd-mirror.rbd-client-site-a.service
      [root@rbd-client-site-a ~]# systemctl enable ceph-rbd-mirror@rbd-mirror.rbd-client-site-a.service
      [root@rbd-client-site-a ~]# systemctl status ceph-rbd-mirror@rbd-mirror.rbd-client-site-a.service

    5. Add the rbd-mirror node to the /etc/ansible/hosts file:

      Example

      [rbdmirrors]
      ceph.client.rbd-mirror.rbd-client-site-a
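To double-check the group membership without touching the live file, you can extract the hosts listed under [rbdmirrors] from an INI-style inventory. This sketch uses a scratch copy; the hypothetical [mons] entry exists only to show that the filter stops at the next group header.

```shell
# Scratch inventory for illustration; the real file is /etc/ansible/hosts
inv=$(mktemp)
cat > "$inv" <<'EOF'
[mons]
mon-node-01

[rbdmirrors]
ceph.client.rbd-mirror.rbd-client-site-a
EOF
# Print non-empty lines between [rbdmirrors] and the next [group] header
awk '/^\[rbdmirrors\]/{f=1;next} /^\[/{f=0} f && NF' "$inv"
```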

  2. Edit the group_vars/all.yml file to include configuration for containers:

    ceph_docker_image_tag: "latest"
    ceph_docker_image: rhceph/rhceph-4-rhel8
    containerized_deployment: true
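As a quick sanity check before running the playbook, the three settings can be verified with grep. This sketch writes them to a scratch file for illustration; in practice you would run the grep against group_vars/all.yml itself.

```shell
# Scratch copy for illustration; in practice check group_vars/all.yml
allyml=$(mktemp)
cat > "$allyml" <<'EOF'
ceph_docker_image_tag: "latest"
ceph_docker_image: rhceph/rhceph-4-rhel8
containerized_deployment: true
EOF
# Count the container-related keys; all three must be present
grep -Ec '^(ceph_docker_image|ceph_docker_image_tag|containerized_deployment):' "$allyml"
```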
  3. Navigate to the /usr/share/ceph-ansible directory:

    [ansible@admin ~]$ cd /usr/share/ceph-ansible
  4. On the Ansible administration node, run the Ansible migration playbook:

    Syntax

    ansible-playbook ./infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml -i INVENTORY_FILE

    Example

    [ansible@admin ceph-ansible]$ ansible-playbook ./infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml -i hosts

    Verify that the cluster has switched to the containerized environment.

  5. On the monitor node, list all running containers:

    Example

    [root@mon ~]# podman ps

Additional Resources