Chapter 8. Manually upgrading a Red Hat Ceph Storage cluster and operating system

Normally, it is not possible to use ceph-ansible to upgrade Red Hat Ceph Storage and Red Hat Enterprise Linux to a new major release at the same time. For example, if you are on Red Hat Enterprise Linux 7 and use ceph-ansible, you must stay on that version. As a system administrator, however, you can perform this upgrade manually.

Use this chapter to manually upgrade a Red Hat Ceph Storage cluster at version 4.1 or 3.3z6 running on Red Hat Enterprise Linux 7.9, to a Red Hat Ceph Storage cluster at version 4.2 running on Red Hat Enterprise Linux 8.4.

Important

To upgrade a containerized Red Hat Ceph Storage cluster from version 3.x or 4.x to version 4.2, see the Supported Red Hat Ceph Storage upgrade scenarios, Preparing for an upgrade, and Upgrading the storage cluster using Ansible sections in the Red Hat Ceph Storage Installation Guide.

To migrate the existing systemd templates, run the docker-to-podman playbook:

[user@admin ceph-ansible]$ ansible-playbook infrastructure-playbooks/docker-to-podman.yml -i hosts

Where user is the Ansible user.

Important

If a node is colocated with more than one daemon, follow the relevant section in this chapter for each of the daemons colocated on the node. For example, for a node colocated with the Ceph Monitor daemon and the OSD daemon, see Manually upgrading Ceph Monitor nodes and their operating systems and Manually upgrading Ceph OSD nodes and their operating systems.

Important

The procedure in Manually upgrading Ceph OSD nodes and their operating systems does not work with encrypted OSD partitions because the Leapp upgrade utility does not support upgrading with OSD encryption.

8.1. Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • The nodes are running Red Hat Enterprise Linux 7.9.
  • The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
  • Access to the installation source for Red Hat Enterprise Linux 8.4.

8.2. Manually upgrading Ceph Monitor nodes and their operating systems

As a system administrator, you can manually upgrade the Ceph Monitor software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time.

Important

Perform the procedure on only one Monitor node at a time. To prevent cluster access issues, ensure that the Monitor node you have just upgraded has returned to normal operation before proceeding to the next node.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • The nodes are running Red Hat Enterprise Linux 7.9.
  • The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
  • Access to the installation source for Red Hat Enterprise Linux 8.4.

Procedure

  1. Stop the monitor service:

    Syntax

    systemctl stop ceph-mon@MONITOR_ID

    Replace MONITOR_ID with the ID of the Monitor, which is typically the short host name of the node.
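
    For example, for a Monitor whose ID is ceph4-mon (an illustrative name, matching the quorum example later in this procedure):

    [root@mon ~]# systemctl stop ceph-mon@ceph4-mon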

  2. If using Red Hat Ceph Storage 3, disable the Red Hat Ceph Storage 3 repositories.

    1. Disable the tools repository:

      [root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms
    2. Disable the mon repository:

      [root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-mon-rpms
  3. If using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 repositories.

    1. Disable the tools repository:

      [root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms
    2. Disable the mon repository:

      [root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-mon-rpms
  4. Install the leapp utility. See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
  5. Run through the leapp preupgrade checks. See Assessing upgradability from the command line.
  6. Set PermitRootLogin yes in /etc/ssh/sshd_config.
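
    For example, you can update the setting in place with sed (a minimal sketch; review /etc/ssh/sshd_config afterward to confirm the change):

    [root@mon ~]# sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config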
  7. Restart the OpenSSH SSH daemon:

    [root@mon ~]# systemctl restart sshd.service
  8. Remove the iSCSI module from the Linux kernel:

    [root@mon ~]# modprobe -r iscsi
  9. Perform the upgrade by following Performing the upgrade from RHEL 7 to RHEL 8.
  10. Reboot the node.
  11. Enable the repositories for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8.

    1. Enable the tools repository:

      [root@mon ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
    2. Enable the mon repository:

      [root@mon ~]# subscription-manager repos --enable=rhceph-4-mon-for-rhel-8-x86_64-rpms
  12. Install the ceph-mon package:

    [root@mon ~]# dnf install ceph-mon
  13. If the manager service is colocated with the monitor service, install the ceph-mgr package:

    [root@mon ~]# dnf install ceph-mgr
  14. Restore the ceph.client.admin.keyring and ceph.conf files from a Monitor node which has not been upgraded yet or from a node that has already had those files restored.
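
    For example, assuming root SSH access between nodes and a Monitor node named ceph4-mon2 that has not been upgraded yet (the host name is illustrative):

    [root@mon ~]# scp root@ceph4-mon2:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
    [root@mon ~]# scp root@ceph4-mon2:/etc/ceph/ceph.conf /etc/ceph/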
  15. Switch any existing CRUSH buckets to the latest bucket type straw2.

    # ceph osd getcrushmap -o backup-crushmap
    # ceph osd crush set-all-straw-buckets-to-straw2
  16. Once all the daemons are updated after upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, run the following steps:

    1. Enable the messenger v2 protocol, msgr2:

      ceph mon enable-msgr2

      This instructs all Ceph Monitors that bind to the old default port of 6789 to also bind to the new port of 3300.

      Important

      Ensure all the Ceph Monitors are upgraded from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4 before performing any further Ceph Monitor configuration.

    2. Verify the status of the monitor:

      ceph mon dump
      Note

      OSDs that are already running Nautilus do not bind to their v2 address automatically; they must be restarted.
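
      In the ceph mon dump output, each Monitor should report both a v2 and a v1 address, similar to the following (addresses and names are illustrative):

      0: [v2:192.168.0.10:3300/0,v1:192.168.0.10:6789/0] mon.ceph4-mon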

  17. For each host upgraded from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, update the ceph.conf file so that it either does not specify any monitor port or references both the v2 and v1 addresses and ports. Then import any configuration options in the ceph.conf file into the storage cluster’s configuration database.
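
    For example, a mon_host entry that references both address types might look like the following (the address is illustrative):

    mon_host = [v2:192.168.0.10:3300,v1:192.168.0.10:6789]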

    Example

    [root@mon ~]# ceph config assimilate-conf -i /etc/ceph/ceph.conf

    1. Check the storage cluster’s configuration database.

      Example

      [root@mon ~]# ceph config dump

    2. Optional: After upgrading to Red Hat Ceph Storage 4, create a minimal ceph.conf file for each host:

      Example

      [root@mon ~]# ceph config generate-minimal-conf > /etc/ceph/ceph.conf.new
      [root@mon ~]# mv /etc/ceph/ceph.conf.new /etc/ceph/ceph.conf

  18. Install the leveldb package:

    [root@mon ~]# dnf install leveldb
  19. Start the monitor service:

    [root@mon ~]# systemctl start ceph-mon.target
  20. If the manager service is colocated with the monitor service, start the manager service too:

    [root@mon ~]# systemctl start ceph-mgr.target
  21. Verify the monitor service came back up and is in quorum.

    [root@mon ~]# ceph -s

    On the mon: line under services:, ensure the node is listed as in quorum and not as out of quorum.

    Example

    mon: 3 daemons, quorum ceph4-mon,ceph4-mon2,ceph4-mon3 (age 2h)

  22. If the manager service is colocated with the monitor service, verify it is up too:

    [root@mon ~]# ceph -s

    Look for the manager’s node name on the mgr: line under services.

    Example

    mgr: ceph4-mon(active, since 2h), standbys: ceph4-mon3, ceph4-mon2

  23. Repeat the above steps on all Monitor nodes until they have all been upgraded.

8.3. Manually upgrading Ceph OSD nodes and their operating systems

As a system administrator, you can manually upgrade the Ceph OSD software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time.

Important

This procedure should be performed for each OSD node in the Ceph cluster, but typically only for one OSD node at a time. A maximum of one failure domain’s worth of OSD nodes can be upgraded in parallel. For example, if per-rack replication is in use, you can upgrade the OSD nodes of one entire rack in parallel. To prevent data access issues, ensure that the OSDs on the current node have returned to normal operation and that all of the cluster’s PGs are in the active+clean state before proceeding to the next OSD node.

Important

This procedure will not work with encrypted OSD partitions as the Leapp upgrade utility does not support upgrading with OSD encryption.

Important

If the OSDs were created using ceph-disk, and are still managed by ceph-disk, you must use ceph-volume to take over management of them. This is covered in an optional step below.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • The nodes are running Red Hat Enterprise Linux 7.9.
  • The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
  • Access to the installation source for Red Hat Enterprise Linux 8.4.

Procedure

  1. Set the OSD noout flag to prevent OSDs from getting marked down during the migration:

    ceph osd set noout
  2. Set the OSD nobackfill, norecover, norebalance, noscrub, and nodeep-scrub flags to avoid unnecessary load on the cluster and to avoid any data reshuffling when the node goes down for migration:

    ceph osd set nobackfill
    ceph osd set norecover
    ceph osd set norebalance
    ceph osd set noscrub
    ceph osd set nodeep-scrub
  3. Gracefully shut down all the OSD processes on the node:

    [root@mon ~]# systemctl stop ceph-osd.target
  4. If using Red Hat Ceph Storage 3, disable the Red Hat Ceph Storage 3 repositories.

    1. Disable the tools repository:

      [root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms
    2. Disable the osd repository:

      [root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-osd-rpms
  5. If using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 repositories.

    1. Disable the tools repository:

      [root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms
    2. Disable the osd repository:

      [root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-osd-rpms
  6. Install the leapp utility. See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
  7. Run through the leapp preupgrade checks. See Assessing upgradability from the command line.
  8. Set PermitRootLogin yes in /etc/ssh/sshd_config.
  9. Restart the OpenSSH SSH daemon:

    [root@mon ~]# systemctl restart sshd.service
  10. Remove the iSCSI module from the Linux kernel:

    [root@mon ~]# modprobe -r iscsi
  11. Perform the upgrade by following Performing the upgrade from RHEL 7 to RHEL 8.
  12. Reboot the node.
  13. Enable the repositories for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8.

    1. Enable the tools repository:

      [root@mon ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
    2. Enable the osd repository:

      [root@mon ~]# subscription-manager repos --enable=rhceph-4-osd-for-rhel-8-x86_64-rpms
  14. Install the ceph-osd package:

    [root@mon ~]# dnf install ceph-osd
  15. Install the leveldb package:

    [root@mon ~]# dnf install leveldb
  16. Restore the ceph.conf file from a node which has not been upgraded yet or from a node that has already had the file restored.
  17. Unset the noout, nobackfill, norecover, norebalance, noscrub, and nodeep-scrub flags:

    # ceph osd unset noout
    # ceph osd unset nobackfill
    # ceph osd unset norecover
    # ceph osd unset norebalance
    # ceph osd unset noscrub
    # ceph osd unset nodeep-scrub
  18. Switch any existing CRUSH buckets to the latest bucket type straw2.

    # ceph osd getcrushmap -o backup-crushmap
    # ceph osd crush set-all-straw-buckets-to-straw2
  19. Optional: If the OSDs were created using ceph-disk, and are still managed by ceph-disk, you must use ceph-volume to take over management of them.

    1. Mount each object storage device:

      Syntax

      mount /dev/DRIVE /var/lib/ceph/osd/ceph-OSD_ID

      Replace DRIVE with the storage device name and partition number.

      Replace OSD_ID with the OSD ID.

      Example

      [root@mon ~]# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0

      Verify that the OSD_ID is correct.

      Syntax

      cat /var/lib/ceph/osd/ceph-OSD_ID/whoami

      Replace OSD_ID with the OSD ID.

      Example

      [root@mon ~]# cat /var/lib/ceph/osd/ceph-0/whoami
      0

      Repeat the above steps for any additional object store devices.

    2. Scan the newly mounted devices:

      Syntax

      ceph-volume simple scan /var/lib/ceph/osd/ceph-OSD_ID

      Replace OSD_ID with the OSD ID.

      Example

      [root@mon ~]# ceph-volume simple scan /var/lib/ceph/osd/ceph-0
       stderr: lsblk: /var/lib/ceph/osd/ceph-0: not a block device
       stderr: lsblk: /var/lib/ceph/osd/ceph-0: not a block device
       stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
      Running command: /usr/sbin/cryptsetup status /dev/sdb1
      --> OSD 0 got scanned and metadata persisted to file: /etc/ceph/osd/0-0c9917f7-fce8-42aa-bdec-8c2cf2d536ba.json
      --> To take over management of this scanned OSD, and disable ceph-disk and udev, run:
      -->     ceph-volume simple activate 0 0c9917f7-fce8-42aa-bdec-8c2cf2d536ba

      Repeat the above step for any additional object store devices.

    3. Activate the device:

      Syntax

      ceph-volume simple activate OSD_ID UUID

      Replace OSD_ID with the OSD ID and UUID with the UUID printed in the scan output from earlier.

      Example

      [root@mon ~]# ceph-volume simple activate 0 0c9917f7-fce8-42aa-bdec-8c2cf2d536ba
      Running command: /usr/bin/ln -snf /dev/sdb2 /var/lib/ceph/osd/ceph-0/journal
      Running command: /usr/bin/chown -R ceph:ceph /dev/sdb2
      Running command: /usr/bin/systemctl enable ceph-volume@simple-0-0c9917f7-fce8-42aa-bdec-8c2cf2d536ba
       stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@simple-0-0c9917f7-fce8-42aa-bdec-8c2cf2d536ba.service → /usr/lib/systemd/system/ceph-volume@.service.
      Running command: /usr/bin/ln -sf /dev/null /etc/systemd/system/ceph-disk@.service
      --> All ceph-disk systemd units have been disabled to prevent OSDs getting triggered by UDEV events
      Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
       stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
      Running command: /usr/bin/systemctl start ceph-osd@0
      --> Successfully activated OSD 0 with FSID 0c9917f7-fce8-42aa-bdec-8c2cf2d536ba

      Repeat the above step for any additional object store devices.

  20. Optional: If your OSDs were created with ceph-volume and you did not complete the previous step, start the OSD service now:

    [root@mon ~]# systemctl start ceph-osd.target
  21. Activate the OSDs:

    BlueStore

    [root@mon ~]# ceph-volume lvm activate --all

  22. Verify that the OSDs are up and in, and that the placement groups are in the active+clean state.

    [root@mon ~]# ceph -s

    On the osd: line under services:, ensure that all OSDs are up and in:

    Example

    osd: 3 osds: 3 up (since 8s), 3 in (since 3M)

  23. Repeat the above steps on all OSD nodes until they have all been upgraded.
  24. If upgrading from Red Hat Ceph Storage 3, disallow pre-Nautilus OSDs and enable the Nautilus-only functionality:

    [root@mon ~]# ceph osd require-osd-release nautilus
    Note

    Failure to execute this step makes it impossible for OSDs to communicate after msgr2 is enabled.

  25. Once all the daemons are updated after upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, run the following steps:

    1. Enable the messenger v2 protocol, msgr2:

      [root@mon ~]# ceph mon enable-msgr2

      This instructs all Ceph Monitors that bind to the old default port of 6789 to also bind to the new port of 3300.

    2. On every node, import any configuration options in the ceph.conf file into the storage cluster’s configuration database:

      Example

      [root@mon ~]# ceph config assimilate-conf -i /etc/ceph/ceph.conf

      Note

      When you assimilate the ceph.conf file from multiple nodes and different values are set for the same options, the end result depends on the order in which the files are assimilated.

    3. Check the storage cluster’s configuration database:

      Example

      [root@mon ~]# ceph config dump

8.4. Manually upgrading Ceph Object Gateway nodes and their operating systems

As a system administrator, you can manually upgrade the Ceph Object Gateway (RGW) software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time.

Important

This procedure should be performed for each RGW node in the Ceph cluster, but only for one RGW node at a time. To prevent any client access issues, ensure that the RGW daemon you have just upgraded has returned to normal operation before proceeding to the next node.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • The nodes are running Red Hat Enterprise Linux 7.9.
  • The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
  • Access to the installation source for Red Hat Enterprise Linux 8.4.

Procedure

  1. Stop the Ceph Object Gateway service:

    # systemctl stop ceph-radosgw.target
  2. If using Red Hat Ceph Storage 3, disable the Red Hat Ceph Storage 3 tools repository:

    # subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms
  3. If using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 tools repository:

    # subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms
  4. Install the leapp utility. See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
  5. Run through the leapp preupgrade checks. See Assessing upgradability from the command line.
  6. Set PermitRootLogin yes in /etc/ssh/sshd_config.
  7. Restart the OpenSSH SSH daemon:

    # systemctl restart sshd.service
  8. Remove the iSCSI module from the Linux kernel:

    # modprobe -r iscsi
  9. Perform the upgrade by following Performing the upgrade from RHEL 7 to RHEL 8.
  10. Reboot the node.
  11. Enable the tools repository for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8:

    # subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
  12. Install the ceph-radosgw package:

    # dnf install ceph-radosgw
  13. Optional: Install the packages for any Ceph services that are colocated on this node. Enable additional Ceph repositories if needed.
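
    For example, if a Ceph Monitor is colocated on this node, enable the Ceph Monitor repository and install the ceph-mon package (adjust this to the daemons that are actually colocated on your node):

    # subscription-manager repos --enable=rhceph-4-mon-for-rhel-8-x86_64-rpms
    # dnf install ceph-mon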
  14. Optional: Install the leveldb package, which is needed by other Ceph services:

    # dnf install leveldb
  15. Restore the ceph.client.admin.keyring and ceph.conf files from a node which has not been upgraded yet or from a node that has already had those files restored.
  16. Start the RGW service:

    # systemctl start ceph-radosgw.target
  17. Switch any existing CRUSH buckets to the latest bucket type straw2.

    # ceph osd getcrushmap -o backup-crushmap
    # ceph osd crush set-all-straw-buckets-to-straw2
  18. Verify the daemon is active:

    # ceph -s

    Ensure there is an rgw: line under services:.

    Example

    rgw: 1 daemon active (jb-ceph4-rgw.rgw0)

  19. Repeat the above steps on all Ceph Object Gateway nodes until they have all been upgraded.

8.5. Manually upgrading the Ceph Dashboard node and its operating system

As a system administrator, you can manually upgrade the Ceph Dashboard software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • The node is running Red Hat Enterprise Linux 7.9.
  • The node is running Red Hat Ceph Storage version 3.3z6 or 4.1.
  • Access to the installation source for Red Hat Enterprise Linux 8.4.

Procedure

  1. Uninstall the existing dashboard from the cluster.

    1. Change to the /usr/share/cephmetrics-ansible directory:

      # cd /usr/share/cephmetrics-ansible
    2. Run the purge.yml Ansible playbook:

      # ansible-playbook -v purge.yml
  2. If using Red Hat Ceph Storage 3, disable the Red Hat Ceph Storage 3 tools repository:

    # subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms
  3. If using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 tools repository:

    # subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms
  4. Install the leapp utility. See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
  5. Run through the leapp preupgrade checks. See Assessing upgradability from the command line.
  6. Set PermitRootLogin yes in /etc/ssh/sshd_config.
  7. Restart the OpenSSH SSH daemon:

    # systemctl restart sshd.service
  8. Remove the iSCSI module from the Linux kernel:

    # modprobe -r iscsi
  9. Perform the upgrade by following Performing the upgrade from RHEL 7 to RHEL 8.
  10. Reboot the node.
  11. Enable the tools repository for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8:

    # subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
  12. Enable the Ansible repository:

    # subscription-manager repos --enable=ansible-2.9-for-rhel-8-x86_64-rpms
  13. Configure ceph-ansible to manage the cluster. It will install the dashboard. Follow the instructions in Installing Red Hat Ceph Storage using Ansible, including the prerequisites.
  14. After you run ansible-playbook site.yml as a part of the above procedures, the URL for the dashboard will be printed. See Installing dashboard using Ansible in the Dashboard guide for more information on locating the URL and accessing the dashboard.

8.6. Manually upgrading Ceph Ansible nodes and reconfiguring settings

Manually upgrade the Ceph Ansible software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time. This procedure applies to both bare-metal and container deployments, unless specified.

Important

Before upgrading the host operating system on the Ceph Ansible node, back up the group_vars directory and the hosts file. Use this backup when you reconfigure the Ceph Ansible node.
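
A minimal backup sketch, assuming the default /usr/share/ceph-ansible location for the group_vars directory and the hosts inventory file (adjust the paths to match your deployment):

# mkdir -p /root/ceph-ansible-backup
# cp -r /usr/share/ceph-ansible/group_vars /root/ceph-ansible-backup/
# cp /usr/share/ceph-ansible/hosts /root/ceph-ansible-backup/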

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • The node is running Red Hat Enterprise Linux 7.9.
  • The node is running Red Hat Ceph Storage version 3.3z6 or 4.1.
  • Access to the installation source for Red Hat Enterprise Linux 8.4.

Procedure

  1. Enable the tools repository for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8:

    [root@ansible ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
  2. Enable the Ansible repository:

    [root@ansible ~]# subscription-manager repos --enable=ansible-2.9-for-rhel-8-x86_64-rpms
  3. Configure ceph-ansible to manage the storage cluster. It will install the dashboard. Follow the instructions in Installing Red Hat Ceph Storage using Ansible, including the prerequisites.
  4. After you run ansible-playbook site.yml as a part of the above procedures, the URL for the dashboard will be printed. See Installing dashboard using Ansible in the Dashboard guide for more information on locating the URL and accessing the dashboard.

8.7. Manually upgrading the Ceph File System Metadata Server nodes and their operating systems

You can manually upgrade the Ceph File System (CephFS) Metadata Server (MDS) software on a Red Hat Ceph Storage cluster and the Red Hat Enterprise Linux operating system to a new major release at the same time.

Important

Before you upgrade the storage cluster, reduce the number of active MDS ranks to one per file system. This eliminates any possible version conflicts between multiple MDS daemons. In addition, take all standby nodes offline before upgrading.

This is because the MDS cluster does not possess built-in versioning or file system flags. Without these features, multiple MDS daemons might communicate using different versions of the MDS software, which could cause assertions or other faults to occur.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • The nodes are running Red Hat Enterprise Linux 7.9.
  • The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
  • Access to the installation source for Red Hat Enterprise Linux 8.4.
  • Root-level access to all nodes in the storage cluster.
Important

The underlying XFS filesystem must be formatted with ftype=1 or with d_type support. Run the command xfs_info /var to ensure the ftype is set to 1. If the value of ftype is not 1, attach a new disk or create a volume. On top of this new device, create a new XFS filesystem and mount it on /var/lib/containers.

Starting with Red Hat Enterprise Linux 8, mkfs.xfs enables ftype=1 by default.
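
For example, to check the value on an existing file system (the output is trimmed to the relevant line; the other fields vary by system):

[root@mds ~]# xfs_info /var | grep ftype
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1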

Procedure

  1. Reduce the number of active MDS ranks to 1:

    Syntax

    ceph fs set FILE_SYSTEM_NAME max_mds 1

    Example

    [root@mds ~]# ceph fs set fs1 max_mds 1

  2. Wait for the cluster to stop the extra MDS ranks. When the extra ranks have stopped, only rank 0 should be active and the remaining MDS daemons should be in standby mode. Check the status of the file system:

    [root@mds ~]# ceph status
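
    In the ceph status output, the mds: line under services: should show one active rank 0 and the remaining daemons as standbys, similar to the following (the file system and daemon names are illustrative):

    mds: fs1:1 {0=mds1=up:active} 2 up:standby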
  3. Use systemctl to take all standby MDS offline:

    [root@mds ~]# systemctl stop ceph-mds.target
  4. Confirm that only one MDS is online, and that it has rank 0 for the file system:

    [root@mds ~]# ceph status
  5. Disable the tools repository for your current version of Red Hat Ceph Storage:

    1. If you are upgrading from Red Hat Ceph Storage 3 on RHEL 7, disable the Red Hat Ceph Storage 3 tools repository:

      [root@mds ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms
    2. If you are using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 tools repository:

      [root@mds ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms
  6. Install the leapp utility. For more information about leapp, refer to Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
  7. Run through the leapp preupgrade checks. For more information, refer to Assessing upgradability from the command line.
  8. Edit /etc/ssh/sshd_config and set PermitRootLogin to yes.
  9. Restart the OpenSSH SSH daemon:

    [root@mds ~]# systemctl restart sshd.service
  10. Remove the iSCSI module from the Linux kernel:

    [root@mds ~]# modprobe -r iscsi
  11. Perform the upgrade. See Performing the upgrade from RHEL 7 to RHEL 8.
  12. Reboot the MDS node.
  13. Enable the tools repository for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8:

    [root@mds ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
  14. Install the ceph-mds package:

    [root@mds ~]# dnf install ceph-mds -y
  15. Optional: Install the packages for any Ceph services that are colocated on this node. Enable additional Ceph repositories, if needed.
  16. Optional: Install the leveldb package, which is needed by other Ceph services:

    [root@mds ~]# dnf install leveldb
  17. Restore the ceph.client.admin.keyring and ceph.conf files from a node that has not been upgraded yet, or from a node that has already had those files restored.
  18. Switch any existing CRUSH buckets to the latest bucket type straw2.

    # ceph osd getcrushmap -o backup-crushmap
    # ceph osd crush set-all-straw-buckets-to-straw2
  19. Start the MDS service:

    [root@mds ~]# systemctl restart ceph-mds.target
  20. Verify that the daemon is active:

    [root@mds ~]# ceph -s
  21. Follow the same process for the standby daemons.
  22. When you have finished restarting all of the standby MDS daemons, restore the previous value of max_mds for your cluster:

    Syntax

    ceph fs set FILE_SYSTEM_NAME max_mds ORIGINAL_VALUE

    Example

    [root@mds ~]# ceph fs set fs1 max_mds 5

8.8. Recovering from an operating system upgrade failure on an OSD node

As a system administrator, if you encounter a failure when using the procedure Manually upgrading Ceph OSD nodes and their operating systems, you can recover from the failure by using the following procedure. In this procedure, you perform a fresh installation of Red Hat Enterprise Linux 8.4 on the node and can still recover the OSDs without any major backfilling of data, apart from the writes made to the OSDs while they were down.

Important

DO NOT touch the media backing the OSDs or their respective wal.db or block.db databases.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • An OSD node that failed to upgrade.
  • Access to the installation source for Red Hat Enterprise Linux 8.4.

Procedure

  1. Perform a standard installation of Red Hat Enterprise Linux 8.4 on the failed node and enable the Red Hat Enterprise Linux repositories.
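
    For example, to enable the base Red Hat Enterprise Linux 8 repositories after registering the node (the repository names shown assume an x86_64 system):

    # subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms
    # subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms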

  2. Enable the repositories for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8.

    1. Enable the tools repository:

      # subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
    2. Enable the osd repository:

      # subscription-manager repos --enable=rhceph-4-osd-for-rhel-8-x86_64-rpms
  3. Install the ceph-osd package:

    # dnf install ceph-osd
  4. Restore the ceph.conf file to /etc/ceph from a node which has not been upgraded yet or from a node that has already had those files restored.
  5. Start the OSD service:

    # systemctl start ceph-osd.target
  6. Activate the object store devices:

    # ceph-volume lvm activate --all
  7. Watch the recovery of the OSDs and cluster backfill writes to recovered OSDs:

    # ceph -w

    Monitor the output until all PGs are in state active+clean.
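
    A healthy end state shows all placement groups as active+clean, similar to the following line in the status output (the PG count is illustrative):

    pgs:     192 active+clean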

8.9. Additional resources