Chapter 7. Upgrading a Red Hat Ceph Storage cluster

As a storage administrator, you can upgrade a Red Hat Ceph Storage cluster to a new major or minor version, or apply asynchronous updates to the current version. The rolling_update.yml Ansible playbook performs upgrades for bare-metal or containerized deployments of Red Hat Ceph Storage. Ansible upgrades the Ceph nodes in the following order:

  • Monitor nodes
  • MGR nodes
  • OSD nodes
  • MDS nodes
  • Ceph Object Gateway nodes
  • All other Ceph client nodes
Note

Starting with Red Hat Ceph Storage 3.1, new Ansible playbooks were added to optimize storage for performance when using Object Gateway and high-speed NVMe-based SSDs (and SATA SSDs). The playbooks do this by placing journals and bucket indexes together on SSDs, which increases performance compared to having all journals on one device. These playbooks are designed to be used when installing Ceph. Existing OSDs continue to work and need no extra steps during an upgrade. There is no way to upgrade a Ceph cluster while simultaneously reconfiguring OSDs to optimize storage in this way. To use different devices for journals or bucket indexes requires reprovisioning the OSDs. For more information, see Using NVMe with LVM optimally in the Ceph Object Gateway for Production Guide.

Important

When upgrading a Red Hat Ceph Storage cluster from a previous supported version to version 4.2z2, the upgrade completes with the storage cluster in a HEALTH_WARN state stating that monitors are allowing insecure global_id reclaim. This warning is triggered by the fix for CVE-2021-20288, which is included in Red Hat Ceph Storage 4.2z2.

Recommendations to mute health warnings:

  1. Identify clients that are not updated by checking the ceph health detail output for the AUTH_INSECURE_GLOBAL_ID_RECLAIM alert.
  2. Upgrade all clients to Red Hat Ceph Storage 4.2z2 release.
  3. After validating all clients have been updated and the AUTH_INSECURE_GLOBAL_ID_RECLAIM alert is no longer present for any client, set auth_allow_insecure_global_id_reclaim to false. When this option is set to false, an unpatched client cannot reconnect to the storage cluster after an intermittent network disruption breaks its connection to a monitor, and cannot renew its authentication ticket when it expires, which happens after 72 hours by default.

    Syntax

    ceph config set mon auth_allow_insecure_global_id_reclaim false

  4. Ensure that no clients are listed with the AUTH_INSECURE_GLOBAL_ID_RECLAIM alert.
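The check in steps 1 and 4 can be scripted. The sketch below runs the grep against a saved sample report (the sample text is illustrative, not real cluster output); on a live cluster you would pipe `ceph health detail` instead:

```shell
# Illustrative sample of `ceph health detail` output (not real cluster output)
report='HEALTH_WARN client is using insecure global_id reclaim
[WRN] AUTH_INSECURE_GLOBAL_ID_RECLAIM: client is using insecure global_id reclaim
    client.admin at 10.0.0.5:0/123 is using insecure global_id reclaim'

# Count alert lines; zero means no unpatched clients remain
count=$(printf '%s\n' "$report" | grep -c 'AUTH_INSECURE_GLOBAL_ID_RECLAIM')
echo "$count"
# On a live cluster: ceph health detail | grep AUTH_INSECURE_GLOBAL_ID_RECLAIM
```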
Important

The rolling_update.yml playbook includes the serial variable that adjusts the number of nodes to be updated simultaneously. Red Hat strongly recommends using the default value (1), which ensures that Ansible upgrades the cluster nodes one by one.
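For reference, the play batching in rolling_update.yml uses the standard Ansible serial keyword; a fragment similar to the following (shown with the recommended default) controls how many nodes are upgraded at once:

```yaml
# rolling_update.yml (fragment): upgrade one node at a time
serial: 1
```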

Important

If the upgrade fails at any point, check the cluster status with the ceph status command to understand the reason for the upgrade failure. If you are not sure of the failure reason or how to resolve it, contact Red Hat Support for assistance.

Important

When upgrading a Red Hat Ceph Storage cluster from a previous version to version 4, the Ceph Ansible configuration will default the object store type to BlueStore. If you still want to use FileStore as the OSD object store, then explicitly set the Ceph Ansible configuration to FileStore. This ensures newly deployed and replaced OSDs are using FileStore.

Warning

If upgrading a multisite setup from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, heed the following recommendations or replication may break:

  • Set rgw_multisite: false in all.yml before running rolling_update.yml. Do not re-enable rgw_multisite after the upgrade; use it only if you need to add new gateways after the upgrade.
  • Only upgrade a Red Hat Ceph Storage 3 cluster at version 3.3z5 or higher to Red Hat Ceph Storage 4. If you cannot update to 3.3z5 or higher, disable synchronization between sites before upgrading the clusters. To disable synchronization, set rgw_run_sync_thread = false and restart the RADOS Gateway daemon.
  • Upgrade the primary cluster first.
  • Upgrade to Red Hat Ceph Storage 4.1 or later.

To see the package versions that correlate to 3.3z5, see What are the Red Hat Ceph Storage releases and corresponding Ceph package versions? For instructions on how to disable synchronization, see How to disable RGW Multisite sync temporarily?

Warning

When using Ceph Object Gateway and upgrading from Red Hat Ceph Storage 3.x to Red Hat Ceph Storage 4.x, the front end is automatically changed from CivetWeb to Beast, which is the new default. For more information, see Configuration in the Object Gateway Configuration and Administration Guide.

Warning

If using RADOS Gateway, Ansible switches the front end from CivetWeb to Beast. As part of this change, the RGW instance names change from rgw.HOSTNAME to rgw.HOSTNAME.rgw0. Because of the name change, Ansible does not update the existing RGW configuration in ceph.conf; instead, it appends a default configuration, leaving the old CivetWeb-based RGW setup intact but unused. Custom RGW configuration changes would then be lost, which could cause an RGW service interruption. To avoid this, before upgrading, add the existing RGW configuration to the ceph_conf_overrides section of all.yml, but change the RGW instance names by appending .rgw0, then restart the RGW service. This preserves non-default RGW configuration changes after the upgrade. For information on ceph_conf_overrides, see Overriding Ceph Default Settings.
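As an illustrative sketch only (the hostname rgw-node01 and the option values shown are placeholders, not values from your cluster), the carried-over section in all.yml might look like:

```yaml
# all.yml (fragment): existing RGW settings carried over under the new
# instance name, with .rgw0 appended (example values are hypothetical)
ceph_conf_overrides:
  client.rgw.rgw-node01.rgw0:
    rgw_enable_usage_log: true
    rgw_thread_pool_size: 512
```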

7.1. Supported Red Hat Ceph Storage upgrade scenarios

Red Hat supports the following upgrade scenarios.

Read the tables for bare-metal and containerized deployments to understand what pre-upgrade state your cluster must be in to move to certain post-upgrade states.

Use ceph-ansible to perform bare-metal and containerized upgrades where the bare-metal or host operating system does not change major versions. Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 is not supported with ceph-ansible. To upgrade the bare-metal operating system from Red Hat Enterprise Linux 7.9 to Red Hat Enterprise Linux 8.4 as a part of upgrading Red Hat Ceph Storage, see the Manually upgrading a Red Hat Ceph Storage cluster and operating system section in the Red Hat Ceph Storage Installation Guide.

Note

To upgrade your cluster to Red Hat Ceph Storage 4, Red Hat recommends that your cluster be on the latest version of Red Hat Ceph Storage 3. To find the latest version of Red Hat Ceph Storage, see the What are the Red Hat Ceph Storage releases? Knowledgebase article.

Table 7.1. Supported upgrade scenarios for Bare-metal deployments

| Pre-upgrade: Red Hat Enterprise Linux version | Pre-upgrade: Red Hat Ceph Storage version | Post-upgrade: Red Hat Enterprise Linux version | Post-upgrade: Red Hat Ceph Storage version |
| --- | --- | --- | --- |
| 7.6 | 3.3 | 7.9 | 4.2 |
| 7.6 | 3.3 | 8.4 | 4.2 |
| 7.7 | 3.3 | 7.9 | 4.2 |
| 7.7 | 4.0 | 7.9 | 4.2 |
| 7.8 | 3.3 | 7.9 | 4.2 |
| 7.8 | 3.3 | 8.4 | 4.2 |
| 7.9 | 3.3 | 8.4 | 4.2 |
| 8.1 | 4.0 | 8.4 | 4.2 |
| 8.2 | 4.1 | 8.4 | 4.2 |
| 8.3 | 4.1 | 8.4 | 4.2 |

Table 7.2. Supported upgrade scenarios for Containerized deployments

| Pre-upgrade: Host Red Hat Enterprise Linux version | Pre-upgrade: Container Red Hat Enterprise Linux version | Pre-upgrade: Red Hat Ceph Storage version | Post-upgrade: Host Red Hat Enterprise Linux version | Post-upgrade: Container Red Hat Enterprise Linux version | Post-upgrade: Red Hat Ceph Storage version |
| --- | --- | --- | --- | --- | --- |
| 7.6 | 7.8 | 3.3 | 7.9 | 8.4 | 4.2 |
| 7.7 | 7.8 | 3.3 | 7.9 | 8.4 | 4.2 |
| 7.7 | 8.1 | 4.0 | 7.9 | 8.4 | 4.2 |
| 7.8 | 7.8 | 3.3 | 7.9 | 8.4 | 4.2 |
| 8.1 | 8.1 | 4.0 | 8.4 | 8.4 | 4.2 |
| 8.2 | 8.2 | 4.1 | 8.4 | 8.4 | 4.2 |
| 8.3 | 8.3 | 4.1 | 8.4 | 8.4 | 4.2 |

7.2. Preparing for an upgrade

There are a few things to complete before you can start an upgrade of a Red Hat Ceph Storage cluster. These steps apply to both bare-metal and container deployments of a Red Hat Ceph Storage cluster, unless specified for one or the other.

Important

You can only upgrade to the latest version of Red Hat Ceph Storage 4. For example, if version 4.1 is available, you cannot upgrade from 3 to 4.0; you must go directly to 4.1.

Important

If using the FileStore object store, after upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, you must migrate to BlueStore.

Important

You cannot use ceph-ansible to upgrade Red Hat Ceph Storage while also upgrading Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8. You must stay on Red Hat Enterprise Linux 7. To upgrade the operating system as well, see Manually upgrading a Red Hat Ceph Storage cluster and operating system.

Important

The option bluefs_buffered_io is set to True by default for Red Hat Ceph Storage 4.2z2 and later versions. This option enables BlueFS to perform buffered reads in some cases and enables the kernel page cache to act as a secondary cache for reads like RocksDB block reads. For example, if the RocksDB block cache is not large enough to hold all blocks during the OMAP iteration, it may be possible to read them from the page cache instead of the disk. This can dramatically improve performance when osd_memory_target is too small to hold all entries in the block cache. Currently, enabling bluefs_buffered_io and disabling system-level swap prevents performance degradation.

Prerequisites

  • Root-level access to all nodes in the storage cluster.
  • If upgrading from version 3, the version 3 cluster is upgraded to the latest version of Red Hat Ceph Storage 3.
  • Before upgrading to version 4, if the Prometheus node exporter service is running, then stop the service:

    Example

    [root@mon ~]# systemctl stop prometheus-node-exporter.service

    Important

    This is a known issue that will be fixed in an upcoming Red Hat Ceph Storage release. See the Red Hat Knowledgebase article for more details regarding this issue.

    Note

    For bare-metal or container Red Hat Ceph Storage cluster nodes that cannot access the internet during an upgrade, follow the procedure provided in the section Registering Red Hat Ceph Storage nodes to the CDN and attaching subscriptions in the Red Hat Ceph Storage Installation Guide.

Procedure

  1. Log in as the root user on all nodes in the storage cluster.
  2. If the Ceph nodes are not connected to the Red Hat Content Delivery Network (CDN), you can use an ISO image to upgrade Red Hat Ceph Storage by updating the local repository with the latest version of Red Hat Ceph Storage.
  3. If upgrading Red Hat Ceph Storage from version 3 to version 4, remove an existing Ceph dashboard installation.

    1. On the Ansible administration node, change to the cephmetrics-ansible directory:

      [root@admin ~]# cd /usr/share/cephmetrics-ansible
    2. Run the purge.yml playbook to remove an existing Ceph dashboard installation:

      [root@admin cephmetrics-ansible]# ansible-playbook -v purge.yml
  4. If upgrading Red Hat Ceph Storage from version 3 to version 4, enable the Ceph and Ansible repositories on the Ansible administration node:

    Red Hat Enterprise Linux 7

    [root@admin ~]# subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms

    Red Hat Enterprise Linux 8

    [root@admin ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms

  5. On the Ansible administration node, ensure the latest versions of the ansible and ceph-ansible packages are installed.

    Red Hat Enterprise Linux 7

    [root@admin ~]# yum update ansible ceph-ansible

    Red Hat Enterprise Linux 8

    [root@admin ~]# dnf update ansible ceph-ansible

  6. Edit the infrastructure-playbooks/rolling_update.yml playbook and change the health_osd_check_retries and health_osd_check_delay values to 50 and 30 respectively:

    health_osd_check_retries: 50
    health_osd_check_delay: 30

    For each OSD node, these values cause Ansible to check the storage cluster health every 30 seconds, up to 50 times, waiting up to 25 minutes in total before continuing the upgrade process.

    Note

    Adjust the health_osd_check_retries option value up or down based on the used storage capacity of the storage cluster. For example, if you are using 218 TB out of 436 TB, that is, 50% of the storage capacity, set the health_osd_check_retries option to 50.
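The retry and delay values multiply into the per-node wait budget; a quick sketch of the arithmetic:

```shell
# 50 retries x 30-second delay = 1500 seconds (25 minutes) per OSD node
health_osd_check_retries=50
health_osd_check_delay=30
total=$(( health_osd_check_retries * health_osd_check_delay ))
echo "up to ${total}s ($(( total / 60 )) minutes) per OSD node"
```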

  7. If the storage cluster you want to upgrade contains Ceph Block Device images that use the exclusive-lock feature, ensure that all Ceph Block Device users have permissions to blacklist clients:

    ceph auth caps client.ID mon 'allow r, allow command "osd blacklist"' osd 'EXISTING_OSD_USER_CAPS'
  8. If the storage cluster was originally installed using Cockpit, create a symbolic link in the /usr/share/ceph-ansible directory to the inventory file where Cockpit created it, at /usr/share/ansible-runner-service/inventory/hosts:

    1. Change to the /usr/share/ceph-ansible directory:

      # cd /usr/share/ceph-ansible
    2. Create the symbolic link:

      # ln -s /usr/share/ansible-runner-service/inventory/hosts hosts
  9. To upgrade the cluster using ceph-ansible, create a symbolic link named hosts in the /usr/share/ceph-ansible directory that points to the /etc/ansible/hosts inventory file:

    # ln -s /etc/ansible/hosts hosts
  10. If the storage cluster was originally installed using Cockpit, copy the Cockpit generated SSH keys to the Ansible user’s ~/.ssh directory:

    1. Copy the keys:

      # cp /usr/share/ansible-runner-service/env/ssh_key.pub /home/ANSIBLE_USERNAME/.ssh/id_rsa.pub
      # cp /usr/share/ansible-runner-service/env/ssh_key /home/ANSIBLE_USERNAME/.ssh/id_rsa

      Replace ANSIBLE_USERNAME with the username for Ansible, usually admin.

      Example

      # cp /usr/share/ansible-runner-service/env/ssh_key.pub /home/admin/.ssh/id_rsa.pub
      # cp /usr/share/ansible-runner-service/env/ssh_key /home/admin/.ssh/id_rsa

    2. Set the appropriate owner, group, and permissions on the key files:

      # chown ANSIBLE_USERNAME:ANSIBLE_USERNAME /home/ANSIBLE_USERNAME/.ssh/id_rsa.pub
      # chown ANSIBLE_USERNAME:ANSIBLE_USERNAME /home/ANSIBLE_USERNAME/.ssh/id_rsa
      # chmod 644 /home/ANSIBLE_USERNAME/.ssh/id_rsa.pub
      # chmod 600 /home/ANSIBLE_USERNAME/.ssh/id_rsa

      Replace ANSIBLE_USERNAME with the username for Ansible, usually admin.

      Example

      # chown admin:admin /home/admin/.ssh/id_rsa.pub
      # chown admin:admin /home/admin/.ssh/id_rsa
      # chmod 644 /home/admin/.ssh/id_rsa.pub
      # chmod 600 /home/admin/.ssh/id_rsa
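To double-check the resulting modes, stat can be run against the key files. The sketch below rehearses the same chmod values in a scratch directory (the paths are stand-ins for the Ansible user's ~/.ssh, not the real key locations):

```shell
# Scratch-directory rehearsal of the key permissions (600 private, 644 public)
tmp=$(mktemp -d)
touch "$tmp/id_rsa" "$tmp/id_rsa.pub"
chmod 600 "$tmp/id_rsa"
chmod 644 "$tmp/id_rsa.pub"
priv_mode=$(stat -c '%a' "$tmp/id_rsa")
pub_mode=$(stat -c '%a' "$tmp/id_rsa.pub")
echo "$priv_mode $pub_mode"
rm -rf "$tmp"
```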

Additional Resources

7.3. Upgrading the storage cluster using Ansible

Using the Ansible deployment tool, you can upgrade a Red Hat Ceph Storage cluster by doing a rolling upgrade. These steps apply to both bare-metal and container deployment, unless otherwise noted.

Prerequisites

  • Root-level access to the Ansible administration node.
  • An ansible user account.

Procedure

  1. Navigate to the /usr/share/ceph-ansible/ directory:

    Example

    [root@admin ~]# cd /usr/share/ceph-ansible/

  2. If upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, make backup copies of the group_vars/all.yml, group_vars/osds.yml, and group_vars/clients.yml files:

    [root@admin ceph-ansible]# cp group_vars/all.yml group_vars/all_old.yml
    [root@admin ceph-ansible]# cp group_vars/osds.yml group_vars/osds_old.yml
    [root@admin ceph-ansible]# cp group_vars/clients.yml group_vars/clients_old.yml
  3. If upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, create new copies of the group_vars/all.yml.sample, group_vars/osds.yml.sample and group_vars/clients.yml.sample files, and rename them to group_vars/all.yml, group_vars/osds.yml, and group_vars/clients.yml respectively. Open and edit them accordingly, basing the changes on your previously backed up copies.

    [root@admin ceph-ansible]# cp group_vars/all.yml.sample group_vars/all.yml
    [root@admin ceph-ansible]# cp group_vars/osds.yml.sample group_vars/osds.yml
    [root@admin ceph-ansible]# cp group_vars/clients.yml.sample group_vars/clients.yml
  4. Edit the group_vars/osds.yml file. Add and set the following options:

    nb_retry_wait_osd_up: 60
    delay_wait_osd_up: 10
    Note

    These are the default values; you can modify them as per your use case.

  5. If upgrading to a new minor version of Red Hat Ceph Storage 4, verify the value for grafana_container_image in group_vars/all.yml is the same as in group_vars/all.yml.sample. If it is not the same, edit it so it is.

    Example

    grafana_container_image: registry.redhat.io/rhceph/rhceph-4-dashboard-rhel8:4

    Note

    The image path shown is included in ceph-ansible version 4.0.23-1.
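One way to compare the two values is to grep both files and compare the results. The sketch below uses stand-in files created on the fly for illustration; on a real node you would run the grep against group_vars/all.yml and group_vars/all.yml.sample:

```shell
# Stand-in copies of the two files, created for illustration only
tmp=$(mktemp -d)
img='grafana_container_image: registry.redhat.io/rhceph/rhceph-4-dashboard-rhel8:4'
echo "$img" > "$tmp/all.yml"
echo "$img" > "$tmp/all.yml.sample"

# Compare only the grafana_container_image lines from each file
a=$(grep '^grafana_container_image' "$tmp/all.yml")
b=$(grep '^grafana_container_image' "$tmp/all.yml.sample")
if [ "$a" = "$b" ]; then result=match; else result=mismatch; fi
echo "$result"
rm -rf "$tmp"
```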

  6. Copy the latest site.yml or site-container.yml file from the sample files:

    1. For bare-metal deployments:

      [root@admin ceph-ansible]# cp site.yml.sample site.yml
    2. For container deployments:

      [root@admin ceph-ansible]# cp site-container.yml.sample site-container.yml
  7. Open the group_vars/all.yml file and edit the following options.

    1. Add the fetch_directory option:

      fetch_directory: FULL_DIRECTORY_PATH
      Replace
      • FULL_DIRECTORY_PATH with a writable location, such as the Ansible user’s home directory.
    2. If the cluster you want to upgrade contains any Ceph Object Gateway nodes, add the radosgw_interface option:

      radosgw_interface: INTERFACE
      Replace
      • INTERFACE with the interface that the Ceph Object Gateway nodes listen to.
    3. If your current setup has SSL certificates configured, you need to edit the following:

      radosgw_frontend_ssl_certificate: /etc/pki/ca-trust/extracted/CERTIFICATE_NAME
      radosgw_frontend_port: 443
    4. The default OSD object store is BlueStore. To keep the traditional OSD object store, you must explicitly set the osd_objectstore option to filestore:

      osd_objectstore: filestore
      Note

      With the osd_objectstore option set to filestore, replacing an OSD will use FileStore, instead of BlueStore.

      Important

      Starting with Red Hat Ceph Storage 4, FileStore is a deprecated feature. Red Hat recommends migrating the FileStore OSDs to BlueStore OSDs.

    5. Starting with Red Hat Ceph Storage 4.1, you must uncomment or set dashboard_admin_password and grafana_admin_password in /usr/share/ceph-ansible/group_vars/all.yml. Set secure passwords for each. Also set custom user names for dashboard_admin_user and grafana_admin_user.
    6. For both bare-metal and container deployments:

      1. Uncomment the upgrade_ceph_packages option and set it to True:

        upgrade_ceph_packages: True
      2. Set the ceph_rhcs_version option to 4:

        ceph_rhcs_version: 4
        Note

        Setting the ceph_rhcs_version option to 4 will pull in the latest version of Red Hat Ceph Storage 4.

      3. Add the ceph_docker_registry information to all.yml:

        Syntax

        ceph_docker_registry: registry.redhat.io
        ceph_docker_registry_username: SERVICE_ACCOUNT_USER_NAME
        ceph_docker_registry_password: TOKEN

        Note

        If you do not have a Red Hat Registry Service Account, create one using the Registry Service Account webpage. See the Red Hat Container Registry Authentication Knowledgebase article for more details.

        Note

        In addition to using a Service Account for the ceph_docker_registry_username and ceph_docker_registry_password parameters, you can also use your Customer Portal credentials, but to ensure security, encrypt the ceph_docker_registry_password parameter. For more information, see Encrypting Ansible password variables with ansible-vault.

    7. For container deployments:

      1. Change the ceph_docker_image option to point to the Ceph 4 container version:

        ceph_docker_image: rhceph/rhceph-4-rhel8
      2. Change the ceph_docker_image_tag option to point to the latest version of rhceph/rhceph-4-rhel8:

        ceph_docker_image_tag: latest
  8. If upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, open the Ansible inventory file for editing, /etc/ansible/hosts by default, and add the Ceph dashboard node name or IP address under the [grafana-server] section. If this section does not exist, then also add this section along with the node name or IP address.
  9. Switch to or log in as the Ansible user, then run the rolling_update.yml playbook:

    [ansible@admin ceph-ansible]$ ansible-playbook infrastructure-playbooks/rolling_update.yml -i hosts
    Important

    Do not use the --limit Ansible option with the rolling_update.yml playbook.

  10. As the root user on the RBD mirroring daemon node, upgrade the rbd-mirror package manually:

    [root@rbd ~]# yum upgrade rbd-mirror
  11. Restart the rbd-mirror daemon:

    systemctl restart ceph-rbd-mirror@CLIENT_ID
  12. If upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, import any configuration options in ceph.conf file into the storage cluster’s configuration database.

    Example

    [root@mon ~]# ceph config assimilate-conf -i /etc/ceph/ceph.conf

    1. Check the storage cluster’s configuration database:

      Example

      [root@mon ~]# ceph config dump

    2. Repeat the above steps on all Ceph nodes to import any configuration options in the ceph.conf file into the storage cluster’s configuration database.
  13. Verify the health status of the storage cluster.

    1. For bare-metal deployments, log into a monitor node as the root user and run the Ceph status command:

      [root@mon ~]# ceph -s
    2. For container deployments, log into a Ceph Monitor node as the root user.

      1. List all running containers:

        Red Hat Enterprise Linux 7

        [root@mon ~]# docker ps

        Red Hat Enterprise Linux 8

        [root@mon ~]# podman ps

      2. Check health status:

        Red Hat Enterprise Linux 7

        [root@mon ~]# docker exec ceph-mon-MONITOR_NAME ceph -s

        Red Hat Enterprise Linux 8

        [root@mon ~]# podman exec ceph-mon-MONITOR_NAME ceph -s

        Replace
        • MONITOR_NAME with the name of the Ceph Monitor container found in the previous step.

          Example

          [root@mon ~]# podman exec ceph-mon-mon01 ceph -s

  14. Optional: If upgrading from Red Hat Ceph Storage 3.x to Red Hat Ceph Storage 4.x, you might see this health warning: Legacy BlueStore stats reporting detected on 336 OSD(s). This is caused by newer code calculating pool stats differently. To ensure accurate reporting and eliminate the warning, run the repair function on all OSDs:

    1. Stop the OSD service:

      [root@osd ~]# systemctl stop ceph-osd.target
    2. Run the repair function on the OSD, specifying its actual OSD ID:

      Syntax

      ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-OSDID repair

      Example

      [root@osd ~]# ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-2 repair

    3. Start the OSD service:

      [root@osd ~]# systemctl start ceph-osd.target
  15. Once the upgrade finishes, you can migrate the FileStore OSDs to BlueStore OSDs, by running the Ansible playbook:

    Syntax

    ansible-playbook infrastructure-playbooks/filestore-to-bluestore.yml --limit OSD_NODE_TO_MIGRATE

    Example

    [ansible@admin ceph-ansible]$ ansible-playbook infrastructure-playbooks/filestore-to-bluestore.yml --limit osd01

    Once the migration completes, do the following substeps.

    1. Open for editing the group_vars/osds.yml file, and set the osd_objectstore option to bluestore, for example:

      osd_objectstore: bluestore
    2. If you are using the lvm_volumes variable, then change the journal and journal_vg options to db and db_vg respectively, for example:

      Before

      lvm_volumes:
        - data: /dev/sdb
          journal: /dev/sdc1
        - data: /dev/sdd
          journal: journal1
          journal_vg: journals

      After converting to Bluestore

      lvm_volumes:
        - data: /dev/sdb
          db: /dev/sdc1
        - data: /dev/sdd
          db: journal1
          db_vg: journals
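The journal-to-db key rename can be done mechanically with sed. The sketch below rehearses it on a throwaway copy; edit your real group_vars/osds.yml the same way after backing it up:

```shell
# Throwaway copy of an lvm_volumes section using the FileStore key names
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
lvm_volumes:
  - data: /dev/sdb
    journal: /dev/sdc1
  - data: /dev/sdd
    journal: journal1
    journal_vg: journals
EOF

# Rename journal_vg -> db_vg first, then journal -> db
sed -i -e 's/journal_vg:/db_vg:/' -e 's/journal:/db:/' "$tmp"
converted=$(cat "$tmp")
echo "$converted"
rm -f "$tmp"
```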

  16. If working in an OpenStack environment, update all the cephx users to use the RBD profile for pools. The following commands must be run as the root user:

    1. Glance users:

      Syntax

      ceph auth caps client.glance mon 'profile rbd' osd 'profile rbd pool=GLANCE_POOL_NAME'

      Example

      [root@mon ~]# ceph auth caps client.glance mon 'profile rbd' osd 'profile rbd pool=images'

    2. Cinder users:

      Syntax

      ceph auth caps client.cinder mon 'profile rbd' osd 'profile rbd pool=CINDER_VOLUME_POOL_NAME, profile rbd pool=NOVA_POOL_NAME, profile rbd-read-only pool=GLANCE_POOL_NAME'

      Example

      [root@mon ~]# ceph auth caps client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'

    3. OpenStack general users:

      Syntax

      ceph auth caps client.openstack mon 'profile rbd' osd 'profile rbd-read-only pool=CINDER_VOLUME_POOL_NAME, profile rbd pool=NOVA_POOL_NAME, profile rbd-read-only pool=GLANCE_POOL_NAME'

      Example

      [root@mon ~]# ceph auth caps client.openstack mon 'profile rbd' osd 'profile rbd-read-only pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'

      Important

      Do these CAPS updates before performing any live client migrations. This allows clients to use the new libraries running in memory, causes the old CAPS settings to drop from cache, and applies the new RBD profile settings.

  17. Optional: On client nodes, restart any applications that depend on the Ceph client-side libraries.

    Note

    If you are upgrading OpenStack Nova compute nodes that have running QEMU or KVM instances or use a dedicated QEMU or KVM client, stop and start the QEMU or KVM instance because restarting the instance does not work in this case.

Additional Resources

7.4. Upgrading the storage cluster using the command-line interface

You can upgrade from Red Hat Ceph Storage 3.3 to Red Hat Ceph Storage 4 while the storage cluster is running. An important difference between these versions is that Red Hat Ceph Storage 4 uses the msgr2 protocol by default, which uses port 3300. If it is not open, the cluster will issue a HEALTH_WARN error.

Here are the constraints to consider when upgrading the storage cluster:

  • Red Hat Ceph Storage 4 uses the msgr2 protocol by default. Ensure port 3300 is open on Ceph Monitor nodes.
  • Once you upgrade the ceph-monitor daemons from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, the Red Hat Ceph Storage 3 ceph-osd daemons cannot create new OSDs until you upgrade them to Red Hat Ceph Storage 4.
  • Do not create any pools while the upgrade is in progress.

Prerequisites

  • Root-level access to the Ceph Monitor, OSD, and Object Gateway nodes.

Procedure

  1. Ensure that the cluster has completed at least one full scrub of all PGs while running Red Hat Ceph Storage 3. Failure to do so can cause your monitor daemons to refuse to join the quorum on start, leaving them non-functional. To ensure the cluster has completed at least one full scrub of all PGs, execute the following:

    # ceph osd dump | grep ^flags

    To proceed with an upgrade from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, the OSD map must include the recovery_deletes and purged_snapdirs flags.
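A small hypothetical helper can make the flag check explicit; the function name and the sample flags line below are illustrative, not real cluster output:

```shell
# Return success only when both required flags are present in the flags line
check_upgrade_flags() {
    case "$1" in
        *recovery_deletes*)
            case "$1" in
                *purged_snapdirs*) return 0 ;;
            esac ;;
    esac
    return 1
}

# Sample flags line; on a live cluster use: ceph osd dump | grep ^flags
flags='flags sortbitwise,recovery_deletes,purged_snapdirs'
if check_upgrade_flags "$flags"; then
    verdict="safe to proceed"
else
    verdict="complete a full scrub of all PGs first"
fi
echo "$verdict"
```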

  2. Ensure the cluster is in a healthy and clean state.

    # ceph health
    HEALTH_OK
  3. For nodes running ceph-mon and ceph-manager, execute:

    # subscription-manager repos --enable=rhel-7-server-rhceph-4-mon-rpms

    Once the Red Hat Ceph Storage 4 package is enabled, execute the following on each of the ceph-mon and ceph-manager nodes:

    # firewall-cmd --add-port=3300/tcp
    # firewall-cmd --add-port=3300/tcp --permanent
    # yum update -y
    # systemctl restart ceph-mon@<mon-hostname>
    # systemctl restart ceph-mgr@<mgr-hostname>

    Replace <mon-hostname> and <mgr-hostname> with the hostname of the target host.

  4. Before upgrading OSDs, set the norebalance flag on a Ceph Monitor node to prevent OSDs from rebalancing during upgrade.

    # ceph osd set norebalance
  5. On each OSD node, execute:

    # subscription-manager repos --enable=rhel-7-server-rhceph-4-osd-rpms

    Once the Red Hat Ceph Storage 4 package is enabled, update the OSD node:

    # yum update -y

    For each OSD daemon running on the node, execute:

    # systemctl restart ceph-osd@<osd-num>

    Replace <osd-num> with the OSD number to restart. Ensure all OSDs on the node have restarted before proceeding to the next OSD node.
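A hypothetical helper for restarting every OSD on a node could build the unit names from the OSD IDs (the helper name is illustrative; pipe its output to systemctl on a real OSD node):

```shell
# Build systemd unit names for the given OSD IDs
osd_units() {
    for id in "$@"; do
        printf 'ceph-osd@%s\n' "$id"
    done
}

units=$(osd_units 0 3 7)
echo "$units"
# On a real OSD node: osd_units 0 3 7 | xargs -n1 systemctl restart
```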

  6. If there are any OSDs in the storage cluster deployed with ceph-disk, instruct ceph-volume to start the daemons.

    # ceph-volume simple scan
    # ceph-volume simple activate --all
  7. Enable the Nautilus only functionality:

    # ceph osd require-osd-release nautilus
    Important

    Failure to execute this step will make it impossible for OSDs to communicate after msgr2 is enabled.

  8. After upgrading all OSD nodes, unset the norebalance flag on a Ceph Monitor node.

    # ceph osd unset norebalance
  9. Switch any existing CRUSH buckets to the latest bucket type straw2.

    # ceph osd getcrushmap -o backup-crushmap
    # ceph osd crush set-all-straw-buckets-to-straw2
  10. Enable the messenger v2 protocol, msgr2:

    # ceph mon enable-msgr2

    This instructs all Ceph Monitors that bind to the old default port of 6789 to also bind to the new port of 3300.

    1. Verify the status of the monitor:

      # ceph mon dump
  11. On Ceph Object Gateway nodes, execute:

    # subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms

    Once the Red Hat Ceph Storage 4 package is enabled, update the node and restart the ceph-rgw daemon:

    # yum update -y
    # systemctl restart ceph-rgw@<rgw-target>

    Replace <rgw-target> with the rgw target to restart.

  12. For the administration node, execute:

    # subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms
    # yum update -y
  13. Ensure the cluster is in a healthy and clean state.

    # ceph health
    HEALTH_OK
  14. Optional: On client nodes, restart any applications that depend on the Ceph client-side libraries.

    Note

    If you are upgrading OpenStack Nova compute nodes that have running QEMU or KVM instances or use a dedicated QEMU or KVM client, stop and start the QEMU or KVM instance because restarting the instance does not work in this case.

7.5. Manually upgrading the Ceph File System Metadata Server nodes

You can manually upgrade the Ceph File System (CephFS) Metadata Server (MDS) software on a Red Hat Ceph Storage cluster running either Red Hat Enterprise Linux 7 or 8.

Important

Before you upgrade the storage cluster, reduce the number of active MDS ranks to one per file system. This eliminates any possible version conflicts between multiple MDS. In addition, take all standby nodes offline before upgrading.

This is because the MDS cluster does not possess built-in versioning or file system flags. Without these features, multiple MDS daemons might communicate using different versions of the MDS software, which could cause assertions or other faults to occur.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • The nodes are using Red Hat Ceph Storage version 3.3z64 or 4.1.
  • Root-level access to all nodes in the storage cluster.
Important

The underlying XFS filesystem must be formatted with ftype=1 or with d_type support. Run the command xfs_info /var to ensure the ftype is set to 1. If the value of ftype is not 1, attach a new disk or create a volume. On top of this new device, create a new XFS filesystem and mount it on /var/lib/containers.

Starting with Red Hat Enterprise Linux 8.0, mkfs.xfs enables ftype=1 by default.
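A hedged sketch of parsing the ftype value; the sample line below mimics xfs_info output and is not from a real host:

```shell
# Sample xfs_info naming line (illustrative)
sample='naming   =version 2              bsize=4096   ascii-ci=0, ftype=1'

# Extract the ftype value; on a real host: xfs_info /var | grep -o 'ftype=[01]'
ftype=$(printf '%s\n' "$sample" | grep -o 'ftype=[01]' | cut -d= -f2)
echo "ftype=$ftype"
```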

Procedure

  1. Reduce the number of active MDS ranks to 1:

    Syntax

    ceph fs set FILE_SYSTEM_NAME max_mds 1

    Example

    [root@mds ~]# ceph fs set fs1 max_mds 1

  2. Wait for the cluster to stop all of the MDS ranks. When all of the MDS have stopped, only rank 0 should be active. The rest should be in standby mode. Check the status of the file system:

    [root@mds ~]# ceph status
  3. Use systemctl to take all standby MDS offline:

    [root@mds ~]# systemctl stop ceph-mds.target
  4. Confirm that only one MDS is online, and that it has rank 0 for your file system:

    [root@mds ~]# ceph status
  5. If you are upgrading from Red Hat Ceph Storage 3 on RHEL 7, disable the Red Hat Ceph Storage 3 tools repository and enable the Red Hat Ceph Storage 4 tools repository:

    [root@mds ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms
    [root@mds ~]# subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms
  6. Update the node and restart the ceph-mds daemon:

    [root@mds ~]# yum update -y
    [root@mds ~]# systemctl restart ceph-mds.target
  7. Follow the same processes for the standby daemons. Disable and enable the tools repositories, and then upgrade and restart each standby MDS:

    [root@mds ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms
    [root@mds ~]# subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms
    [root@mds ~]# yum update -y
    [root@mds ~]# systemctl restart ceph-mds.target
  8. When you have finished restarting all of the MDS in standby, restore the previous value of max_mds for the storage cluster:

    Syntax

    ceph fs set FILE_SYSTEM_NAME max_mds ORIGINAL_VALUE

    Example

    [root@mds ~]# ceph fs set fs1 max_mds 5

7.6. Additional Resources