Chapter 5. Upgrade a Red Hat Ceph Storage cluster using cephadm

As a storage administrator, you can use the cephadm Orchestrator to upgrade Red Hat Ceph Storage 5.0 and later.

The automated upgrade process follows Ceph best practices. For example:

  • The upgrade order starts with Ceph Managers, then Ceph Monitors, and then the other daemons.
  • Each daemon is restarted only after Ceph indicates that the cluster will remain available.

The storage cluster health status is likely to switch to HEALTH_WARN during the upgrade. When the upgrade is complete, the health status should switch back to HEALTH_OK.
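
For example, you can poll the health status at any point during the upgrade from within the cephadm shell:

ceph health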

Note

ceph-ansible is currently not supported with Red Hat Ceph Storage 5. This means that once you have migrated your storage cluster to Red Hat Ceph Storage 5, you must use cephadm and cephadm-ansible to perform subsequent updates.

Note

You do not get a message when the upgrade completes. Run the ceph versions and ceph orch ps commands to verify the new image ID and the version of the storage cluster.

5.1. Upgrading the Red Hat Ceph Storage cluster

You can use the ceph orch upgrade command to upgrade a Red Hat Ceph Storage 5.0 cluster.

Prerequisites

  • A running Red Hat Ceph Storage 5.0 cluster.
  • Root-level access to all the nodes.
  • At least two Ceph Manager nodes in the storage cluster: one active and one standby.
Note

Red Hat Ceph Storage 5 includes a health check function that returns a DAEMON_OLD_VERSION warning if it detects that any of the daemons in the storage cluster are running multiple versions of Red Hat Ceph Storage. The warning triggers when daemons continue to run multiple versions beyond the time value set in the mon_warn_older_version_delay option. By default, mon_warn_older_version_delay is set to one week, which allows most upgrades to proceed without spuriously raising the warning. If the upgrade process is paused for an extended time period, you can mute the health warning:

ceph health mute DAEMON_OLD_VERSION --sticky

After the upgrade has finished, unmute the health warning:

ceph health unmute DAEMON_OLD_VERSION
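
If you expect the upgrade to remain paused for longer than the default delay, you can also raise the delay instead of muting the check. The option takes a value in seconds; the two-week value below is only an illustration:

ceph config set mon mon_warn_older_version_delay 1209600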

Procedure

  1. Update the cephadm-ansible package.

    Example

    [root@host01 ~]# dnf update cephadm-ansible

  2. Run the preflight playbook with the upgrade_ceph_packages parameter set to true on the bootstrapped host in the storage cluster:

    Syntax

    ansible-playbook -i INVENTORY-FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"

    Example

    [root@host01 ~]# ansible-playbook -i /etc/ansible/hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"

    This playbook upgrades cephadm on all the nodes.

    Important

    After migrating a storage cluster from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, the cephadm package does not update.

    To work around this issue, manually remove the cephadm binary and then reinstall it:

    Example

    [root@host01 /]# rm /usr/sbin/cephadm && dnf install cephadm
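
    You can verify the reinstalled binary by checking its version. The reported version depends on your installed build:

    Example

    [root@host01 /]# cephadm version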

  3. Log into the cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  4. Ensure all the hosts are online and that the storage cluster is healthy:

    Example

    [ceph: root@host01 /]# ceph -s

  5. Check service versions and the available target containers:

    Syntax

    ceph orch upgrade check --image IMAGE_NAME

    Example

    [ceph: root@host01 /]# ceph orch upgrade check --image registry.redhat.io/rhceph/rhceph-5-rhel8:latest
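
    The check prints a JSON report comparing each running daemon with the target image. The output below is a representative sketch only; image IDs, versions, and daemon names will differ on your cluster:

    {
      "target_name": "registry.redhat.io/rhceph/rhceph-5-rhel8:latest",
      "target_id": "...",
      "needs_update": {
        "osd.0": {
          "current_id": "...",
          "current_name": "...",
          "current_version": "16.2.0-..."
        }
      },
      "up_to_date": []
    }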

  6. Upgrade the storage cluster:

    Syntax

    ceph orch upgrade start --image IMAGE_NAME

    Example

    [ceph: root@host01 /]# ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-5-rhel8:latest

    While the upgrade is underway, a progress bar appears in the ceph status output.

    Example

    [ceph: root@host01 /]# ceph status
    [...]
    progress:
        Upgrade to 16.2.0-115.el8cp (1s)
          [............................]
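
    You can also follow the orchestrator log messages as each daemon is upgraded and restarted:

    Example

    [ceph: root@host01 /]# ceph -W cephadm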

  7. Verify the new IMAGE_ID and VERSION of the Ceph cluster:

    Example

    [ceph: root@host01 /]# ceph versions
    [ceph: root@host01 /]# ceph orch ps
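
    When the upgrade has completed, ceph versions reports a single version for every daemon type. The output below is a representative sketch; the build hash shown as (...) and the daemon counts will differ on your cluster:

    {
        "mon": {
            "ceph version 16.2.0-115.el8cp (...) pacific (stable)": 3
        },
        "mgr": {
            "ceph version 16.2.0-115.el8cp (...) pacific (stable)": 2
        },
        "osd": {
            "ceph version 16.2.0-115.el8cp (...) pacific (stable)": 12
        },
        "overall": {
            "ceph version 16.2.0-115.el8cp (...) pacific (stable)": 17
        }
    }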

5.2. Installing and configuring cephadm-ansible

cephadm-ansible is an Ansible playbook that allows you to deploy a new standalone Red Hat Ceph Storage cluster or upgrade an existing storage cluster.

The default location for cephadm-ansible is /usr/share/cephadm-ansible.

Prerequisites

  • Ansible is installed on all nodes in the storage cluster.
  • Podman or docker is installed on all nodes in the storage cluster.
  • Root-level access to all nodes in the storage cluster.
  • The Monitor node has an IP address in the public_network in the ceph-ansible configuration.

Procedure

  1. Navigate to the /usr/share/cephadm-ansible directory.
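
    Example

    [root@admin ~]# cd /usr/share/cephadm-ansible
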
  2. Run the cephadm-ansible playbook on the initial host in the storage cluster:

    Syntax

    ansible-playbook -i INVENTORY-FILE cephadm-ansible.yml

    Example

    [root@admin ~]# ansible-playbook -i /usr/share/cephadm-ansible/hosts cephadm-ansible.yml
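
    The inventory file passed with the -i option lists the hosts that the playbook configures. A minimal sketch with hypothetical host names; the [admin] group is assumed to mark the node that holds the admin keyring:

    Example

    host01
    host02
    host03

    [admin]
    host01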

  3. After installation is complete, cephadm-ansible resides in the /usr/share/cephadm-ansible directory.

5.3. Upgrading the Red Hat Ceph Storage cluster in a disconnected environment

You can upgrade the storage cluster in a disconnected environment by using the --image tag.

You can use the ceph orch upgrade command to upgrade a Red Hat Ceph Storage 5.0 cluster.

Prerequisites

  • A running Red Hat Ceph Storage 5.0 cluster.
  • Root-level access to all the nodes.
  • At least two Ceph Manager nodes in the storage cluster: one active and one standby.
  • Register the nodes to CDN and attach subscriptions.
  • Check for the custom container images in a disconnected environment and change the configuration, if required. See the Configuring a custom registry for disconnected installation section in the Red Hat Ceph Storage Installation Guide for more details.
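
If the target image is not yet available in your local registry, you can mirror it from a connected host before you start the upgrade. A minimal sketch using podman; the local registry host name and port are placeholders for your environment:

podman pull registry.redhat.io/rhceph/rhceph-5-rhel8:latest
podman tag registry.redhat.io/rhceph/rhceph-5-rhel8:latest LOCAL_NODE_FQDN:5000/rhceph/rhceph-5-rhel8
podman push LOCAL_NODE_FQDN:5000/rhceph/rhceph-5-rhel8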

Procedure

  1. Update the cephadm-ansible package.

    Example

    [root@host01 ~]# dnf update cephadm-ansible

  2. Run the preflight playbook with the upgrade_ceph_packages parameter set to true on the bootstrapped host in the storage cluster:

    Syntax

    ansible-playbook -i INVENTORY-FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"

    Example

    [root@host01 ~]# ansible-playbook -i /etc/ansible/hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"

    This playbook upgrades cephadm on all the nodes.

  3. Log into the cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  4. Ensure all the hosts are online and that the storage cluster is healthy:

    Example

    [ceph: root@host01 /]# ceph -s

  5. Check service versions and the available target containers:

    Syntax

    ceph orch upgrade check --image IMAGE_NAME

    Example

    [ceph: root@host01 /]# ceph orch upgrade check --image LOCAL_NODE_FQDN:5000/rhceph/rhceph-5-rhel8

  6. Upgrade the storage cluster:

    Syntax

    ceph orch upgrade start --image IMAGE_NAME

    Example

    [ceph: root@host01 /]# ceph orch upgrade start --image LOCAL_NODE_FQDN:5000/rhceph/rhceph-5-rhel8

    While the upgrade is underway, a progress bar appears in the ceph status output.

    Example

    [ceph: root@host01 /]# ceph status
    [...]
    progress:
        Upgrade to 16.2.0-115.el8cp (1s)
          [............................]

  7. Verify the new IMAGE_ID and VERSION of the Ceph cluster:

    Example

    [ceph: root@host01 /]# ceph versions
    [ceph: root@host01 /]# ceph orch ps

5.4. Monitoring and managing upgrade of the storage cluster

After running the ceph orch upgrade start command to upgrade the Red Hat Ceph Storage cluster, you can check the status, pause, resume, or stop the upgrade process.

Prerequisites

  • A running Red Hat Ceph Storage 5.0 cluster.
  • Root-level access to all the nodes.
  • At least two Ceph Manager nodes in the storage cluster: one active and one standby.
  • An upgrade of the storage cluster has been initiated.

Procedure

  1. Determine whether an upgrade is in progress and the version to which the cluster is upgrading:

    Example

    [ceph: root@host01 /]# ceph orch upgrade status

    Note

    You do not get a message when the upgrade completes. Run the ceph versions and ceph orch ps commands to verify the new image ID and the version of the storage cluster.
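
    The status is reported as JSON. The output below is a representative sketch; the values depend on your cluster and the target image:

    {
        "target_image": "registry.redhat.io/rhceph/rhceph-5-rhel8:latest",
        "in_progress": true,
        "services_complete": [
            "mgr",
            "mon"
        ],
        "message": ""
    }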

  2. Optional: Pause the upgrade process:

    Example

    [ceph: root@host01 /]# ceph orch upgrade pause

  3. Optional: Resume a paused upgrade process:

    Example

    [ceph: root@host01 /]# ceph orch upgrade resume

  4. Optional: Stop the upgrade process:

    Example

    [ceph: root@host01 /]# ceph orch upgrade stop