Chapter 8. Manually upgrading a Red Hat Ceph Storage cluster and operating system
Normally, using ceph-ansible, it is not possible to upgrade Red Hat Ceph Storage and Red Hat Enterprise Linux to a new major release at the same time. For example, if you are on Red Hat Enterprise Linux 7, using ceph-ansible, you must stay on that version. As a system administrator, however, you can perform the upgrade manually.
Use this chapter to manually upgrade a Red Hat Ceph Storage cluster at version 3.3z6 or 4.1 running on Red Hat Enterprise Linux 7.9 to a Red Hat Ceph Storage cluster at version 4.2 running on Red Hat Enterprise Linux 8.4.
To upgrade a containerized Red Hat Ceph Storage cluster at version 3.x or 4.x to version 4.2, see the following three sections in the Red Hat Ceph Storage Installation Guide: Supported Red Hat Ceph Storage upgrade scenarios, Preparing for an upgrade, and Upgrading the storage cluster using Ansible.
To migrate existing systemd templates, run the docker-to-podman playbook:
[user@admin ceph-ansible]$ ansible-playbook infrastructure-playbooks/docker-to-podman.yml -i hosts
Where user is the Ansible user.
If a node has more than one daemon colocated on it, follow the sections in this chapter that are specific to the daemons colocated on that node. For example, for a node with colocated Ceph Monitor and OSD daemons, see Manually upgrading Ceph Monitor nodes and their operating systems and Manually upgrading Ceph OSD nodes and their operating systems.
Manually upgrading Ceph OSD nodes and their operating systems will not work with encrypted OSD partitions as the Leapp upgrade utility does not support upgrading with OSD encryption.
8.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- The nodes are running Red Hat Enterprise Linux 7.9.
- The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
- Access to the installation source for Red Hat Enterprise Linux 8.3.
8.2. Manually upgrading Ceph Monitor nodes and their operating systems
As a system administrator, you can manually upgrade the Ceph Monitor software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time.
Perform the procedure on only one Monitor node at a time. To prevent cluster access issues, ensure the current upgraded Monitor node has returned to normal operation prior to proceeding to the next node.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- The nodes are running Red Hat Enterprise Linux 7.9.
- The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
- Access to the installation source for Red Hat Enterprise Linux 8.3.
Procedure
Stop the monitor service:
Syntax
systemctl stop ceph-mon@MONITOR_ID
Replace MONITOR_ID with the Monitor's ID, which is typically the short host name of the node.
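For example, assuming the Monitor's ID is ceph4-mon (substitute the ID of the Monitor running on your node):
[root@mon ~]# systemctl stop ceph-mon@ceph4-mon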
If using Red Hat Ceph Storage 3, disable the Red Hat Ceph Storage 3 repositories.
Disable the tools repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms
Disable the mon repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-mon-rpms
If using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 repositories.
Disable the tools repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms
Disable the mon repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-mon-rpms
- Install the leapp utility. See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
- Run through the leapp preupgrade checks. See Assessing upgradability from the command line.
- Set PermitRootLogin yes in /etc/ssh/sshd_config.
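One way to make this change is with sed. This is a sketch that assumes the stock sshd_config layout; review the file afterwards and confirm that a single uncommented PermitRootLogin yes line remains:
[root@mon ~]# sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config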
Restart the OpenSSH SSH daemon:
[root@mon ~]# systemctl restart sshd.service
Remove the iSCSI module from the Linux kernel:
[root@mon ~]# modprobe -r iscsi
- Perform the upgrade by following Performing the upgrade from RHEL 7 to RHEL 8.
- Reboot the node.
Enable the repositories for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8.
Enable the tools repository:
[root@mon ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
Enable the mon repository:
[root@mon ~]# subscription-manager repos --enable=rhceph-4-mon-for-rhel-8-x86_64-rpms
Install the ceph-mon package:
[root@mon ~]# dnf install ceph-mon
If the manager service is colocated with the monitor service, install the ceph-mgr package:
[root@mon ~]# dnf install ceph-mgr
- Restore the ceph.client.admin.keyring and ceph.conf files from a Monitor node which has not been upgraded yet, or from a node that has already had those files restored.
Once all the daemons are updated after upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, run the following steps:
Enable the messenger v2 protocol, msgr2:
ceph mon enable-msgr2
This instructs all Ceph Monitors that bind to the old default port of 6789 to also bind to the new port of 3300.
Important
Ensure all the Ceph Monitors are upgraded from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4 before performing any further Ceph Monitor configuration.
Verify the status of the monitor:
ceph mon dump
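After msgr2 is enabled, the output should show both a v2 and a v1 address for each Monitor. The following excerpt is illustrative only; the epoch, fsid, addresses, and Monitor names on your cluster will differ:
epoch 2
fsid 84edda3d-2b43-4543-9db6-c34e9e66d027
min_mon_release 14 (nautilus)
0: [v2:192.168.0.10:3300/0,v1:192.168.0.10:6789/0] mon.ceph4-mon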
Note
OSDs running Nautilus do not bind to their v2 address automatically; they must be restarted.
For each host upgraded from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, update the ceph.conf file so that it either does not specify any monitor port or references both the v2 and v1 addresses and ports; a sketch of such an entry is shown after the next example.
Import any configuration options in the ceph.conf file into the storage cluster's configuration database.
Example
[root@mon ~]# ceph config assimilate-conf -i /etc/ceph/ceph.conf
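The following is a minimal sketch of a ceph.conf monitor entry that references both the v2 and v1 addresses and ports. The IP address is a placeholder; substitute the addresses of your own Monitors (3300 and 6789 are the default v2 and v1 ports):
[global]
mon_host = [v2:192.168.0.10:3300,v1:192.168.0.10:6789]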
Check the storage cluster’s configuration database.
Example
[root@mon ~]# ceph config dump
Optional: After upgrading to Red Hat Ceph Storage 4, create a minimal ceph.conf file for each host:
Example
[root@mon ~]# ceph config generate-minimal-conf > /etc/ceph/ceph.conf.new
[root@mon ~]# mv /etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Install the leveldb package:
[root@mon ~]# dnf install leveldb
Start the monitor service:
[root@mon ~]# systemctl start ceph-mon.target
If the manager service is colocated with the monitor service, start the manager service too:
[root@mon ~]# systemctl start ceph-mgr.target
Verify the monitor service came back up and is in quorum.
[root@mon ~]# ceph -s
On the mon: line under services:, ensure the node is listed as in quorum and not as out of quorum.
Example
mon: 3 daemons, quorum ceph4-mon,ceph4-mon2,ceph4-mon3 (age 2h)
If the manager service is colocated with the monitor service, verify it is up too:
[root@mon ~]# ceph -s
Look for the manager’s node name on the mgr: line under services.
Example
mgr: ceph4-mon(active, since 2h), standbys: ceph4-mon3, ceph4-mon2
- Repeat the above steps on all Monitor nodes until they have all been upgraded.
Additional Resources
- See Manually upgrading a Red Hat Ceph Storage cluster and operating system in the Installation Guide for more information.
- See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 for more information.
8.3. Manually upgrading Ceph OSD nodes and their operating systems
As a system administrator, you can manually upgrade the Ceph OSD software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time.
This procedure should be performed for each OSD node in the Ceph cluster, but typically only for one OSD node at a time. A maximum of one failure domain's worth of OSD nodes may be upgraded in parallel. For example, if per-rack replication is in use, one entire rack's OSD nodes can be upgraded in parallel. To prevent data access issues, ensure that the current OSD node's OSDs have returned to normal operation and that all of the cluster's PGs are in the active+clean state before proceeding to the next OSD node.
This procedure will not work with encrypted OSD partitions as the Leapp upgrade utility does not support upgrading with OSD encryption.
If the OSDs were created using ceph-disk, and are still managed by ceph-disk, you must use ceph-volume to take over management of them. This is covered in an optional step below.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- The nodes are running Red Hat Enterprise Linux 7.9.
- The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
- Access to the installation source for Red Hat Enterprise Linux 8.3.
Procedure
Set the OSD noout flag to prevent OSDs from getting marked down during the migration:
ceph osd set noout
Set the OSD nobackfill, norecover, norebalance, noscrub and nodeep-scrub flags to avoid unnecessary load on the cluster and to avoid any data reshuffling when the node goes down for migration:
ceph osd set nobackfill
ceph osd set norecover
ceph osd set norebalance
ceph osd set noscrub
ceph osd set nodeep-scrub
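Optionally, verify that the flags are set before shutting down the OSDs. The following command and its output are illustrative; the exact wording of the health summary can vary between releases:
[root@mon ~]# ceph health
HEALTH_WARN noout,nobackfill,norecover,norebalance,noscrub,nodeep-scrub flag(s) set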
Gracefully shut down all the OSD processes on the node:
[root@mon ~]# systemctl stop ceph-osd.target
If using Red Hat Ceph Storage 3, disable the Red Hat Ceph Storage 3 repositories.
Disable the tools repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms
Disable the osd repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-osd-rpms
If using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 repositories.
Disable the tools repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms
Disable the osd repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-osd-rpms
- Install the leapp utility. See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
- Run through the leapp preupgrade checks. See Assessing upgradability from the command line.
- Set PermitRootLogin yes in /etc/ssh/sshd_config.
Restart the OpenSSH SSH daemon:
[root@mon ~]# systemctl restart sshd.service
Remove the iSCSI module from the Linux kernel:
[root@mon ~]# modprobe -r iscsi
- Perform the upgrade by following Performing the upgrade from RHEL 7 to RHEL 8.
- Reboot the node.
Enable the repositories for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8.
Enable the tools repository:
[root@mon ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
Enable the osd repository:
[root@mon ~]# subscription-manager repos --enable=rhceph-4-osd-for-rhel-8-x86_64-rpms
Install the ceph-osd package:
[root@mon ~]# dnf install ceph-osd
Install the leveldb package:
[root@mon ~]# dnf install leveldb
- Restore the ceph.conf file from a node which has not been upgraded yet, or from a node that has already had those files restored.
Unset the noout, nobackfill, norecover, norebalance, noscrub and nodeep-scrub flags:
# ceph osd unset noout
# ceph osd unset nobackfill
# ceph osd unset norecover
# ceph osd unset norebalance
# ceph osd unset noscrub
# ceph osd unset nodeep-scrub
Optional: If the OSDs were created using ceph-disk, and are still managed by ceph-disk, you must use ceph-volume to take over management of them.
Mount each object storage device:
Syntax
mount /dev/DRIVE /var/lib/ceph/osd/ceph-OSD_ID
Replace DRIVE with the storage device name and partition number.
Replace OSD_ID with the OSD ID.
Example
[root@mon ~]# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
Verify that the OSD_ID is correct.
Syntax
cat /var/lib/ceph/osd/ceph-OSD_ID/whoami
Replace OSD_ID with the OSD ID.
Example
[root@mon ~]# cat /var/lib/ceph/osd/ceph-0/whoami
0
Repeat the above steps for any additional object store devices.
Scan the newly mounted devices:
Syntax
ceph-volume simple scan /var/lib/ceph/osd/ceph-OSD_ID
Replace OSD_ID with the OSD ID.
Example
[root@mon ~]# ceph-volume simple scan /var/lib/ceph/osd/ceph-0
 stderr: lsblk: /var/lib/ceph/osd/ceph-0: not a block device
 stderr: lsblk: /var/lib/ceph/osd/ceph-0: not a block device
 stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
Running command: /usr/sbin/cryptsetup status /dev/sdb1
--> OSD 0 got scanned and metadata persisted to file: /etc/ceph/osd/0-0c9917f7-fce8-42aa-bdec-8c2cf2d536ba.json
--> To take over management of this scanned OSD, and disable ceph-disk and udev, run:
-->     ceph-volume simple activate 0 0c9917f7-fce8-42aa-bdec-8c2cf2d536ba
Repeat the above step for any additional object store devices.
Activate the device:
Syntax
ceph-volume simple activate OSD_ID UUID
Replace OSD_ID with the OSD ID and UUID with the UUID printed in the scan output from earlier.
Example
[root@mon ~]# ceph-volume simple activate 0 0c9917f7-fce8-42aa-bdec-8c2cf2d536ba
Running command: /usr/bin/ln -snf /dev/sdb2 /var/lib/ceph/osd/ceph-0/journal
Running command: /usr/bin/chown -R ceph:ceph /dev/sdb2
Running command: /usr/bin/systemctl enable ceph-volume@simple-0-0c9917f7-fce8-42aa-bdec-8c2cf2d536ba
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@simple-0-0c9917f7-fce8-42aa-bdec-8c2cf2d536ba.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/ln -sf /dev/null /etc/systemd/system/ceph-disk@.service
--> All ceph-disk systemd units have been disabled to prevent OSDs getting triggered by UDEV events
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
--> Successfully activated OSD 0 with FSID 0c9917f7-fce8-42aa-bdec-8c2cf2d536ba
Repeat the above step for any additional object store devices.
Optional: If your OSDs were created with ceph-volume and you did not complete the previous step, start the OSD service now:
[root@mon ~]# systemctl start ceph-osd.target
Activate the OSDs:
BlueStore
[root@mon ~]# ceph-volume lvm activate --all
Verify that the OSDs are up and in, and that they are in the active+clean state:
[root@mon ~]# ceph -s
On the osd: line under services:, ensure that all OSDs are up and in:
Example
osd: 3 osds: 3 up (since 8s), 3 in (since 3M)
- Repeat the above steps on all OSD nodes until they have all been upgraded.
If upgrading from Red Hat Ceph Storage 3, disallow pre-Nautilus OSDs and enable the Nautilus-only functionality:
[root@mon ~]# ceph osd require-osd-release nautilus
Note
Failure to execute this step makes it impossible for OSDs to communicate after msgr2 is enabled.
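You can optionally confirm that the setting took effect by checking the OSD map. The command and output excerpt below are illustrative:
[root@mon ~]# ceph osd dump | grep require_osd_release
require_osd_release nautilus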
Once all the daemons are updated after upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, run the following steps:
Enable the messenger v2 protocol, msgr2:
[root@mon ~]# ceph mon enable-msgr2
This instructs all Ceph Monitors that bind to the old default port of 6789 to also bind to the new port of 3300.
On every node, import any configuration options in the ceph.conf file into the storage cluster's configuration database:
Example
[root@mon ~]# ceph config assimilate-conf -i /etc/ceph/ceph.conf
Note
When you assimilate configuration files into your Monitors, if you have different values set for the same options in different files, the end result depends on the order in which the files are assimilated.
Check the storage cluster’s configuration database:
Example
[root@mon ~]# ceph config dump
Additional Resources
- See Manually upgrading a Red Hat Ceph Storage cluster and operating system in the Installation Guide for more information.
- See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 for more information.
8.4. Manually upgrading Ceph Object Gateway nodes and their operating systems
As a system administrator, you can manually upgrade the Ceph Object Gateway (RGW) software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time.
This procedure should be performed for each RGW node in the Ceph cluster, but only for one RGW node at a time. Ensure the current upgraded RGW has returned to normal operation prior to proceeding to the next node to prevent any client access issues.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- The nodes are running Red Hat Enterprise Linux 7.9.
- The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
- Access to the installation source for Red Hat Enterprise Linux 8.3.
Procedure
Stop the Ceph Object Gateway service:
# systemctl stop ceph-radosgw.target
If using Red Hat Ceph Storage 3, disable the Red Hat Ceph Storage 3 tool repository:
# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms
If using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 tools repository:
# subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms
- Install the leapp utility. See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
- Run through the leapp preupgrade checks. See Assessing upgradability from the command line.
- Set PermitRootLogin yes in /etc/ssh/sshd_config.
Restart the OpenSSH SSH daemon:
# systemctl restart sshd.service
Remove the iSCSI module from the Linux kernel:
# modprobe -r iscsi
- Perform the upgrade by following Performing the upgrade from RHEL 7 to RHEL 8.
- Reboot the node.
Enable the tools repository for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8:
# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
Install the ceph-radosgw package:
# dnf install ceph-radosgw
- Optional: Install the packages for any Ceph services that are colocated on this node. Enable additional Ceph repositories if needed.
Optional: Install the leveldb package, which is needed by other Ceph services:
# dnf install leveldb
- Restore the ceph.client.admin.keyring and ceph.conf files from a node which has not been upgraded yet, or from a node that has already had those files restored.
Start the RGW service:
# systemctl start ceph-radosgw.target
Verify the daemon is active:
# ceph -s
Ensure there is an rgw: line under services:.
Example
rgw: 1 daemon active (jb-ceph4-rgw.rgw0)
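Optionally, confirm that the gateway is answering requests. This check is a sketch that assumes the Ceph Object Gateway instance on this node listens on the default port 8080; adjust the host and port to match your configuration:
# curl http://localhost:8080
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">...</ListAllMyBucketsResult>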
- Repeat the above steps on all Ceph Object Gateway nodes until they have all been upgraded.
Additional Resources
- See Manually upgrading a Red Hat Ceph Storage cluster and operating system in the Installation Guide for more information.
- See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 for more information.
8.5. Manually upgrading the Ceph Dashboard node and its operating system
As a system administrator, you can manually upgrade the Ceph Dashboard software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- The node is running Red Hat Enterprise Linux 7.9.
- The node is running Red Hat Ceph Storage version 3.3z6 or 4.1.
- Access to the installation source for Red Hat Enterprise Linux 8.3.
Procedure
Uninstall the existing dashboard from the cluster.
Change to the /usr/share/cephmetrics-ansible directory:
# cd /usr/share/cephmetrics-ansible
Run the purge.yml Ansible playbook:
# ansible-playbook -v purge.yml
If using Red Hat Ceph Storage 3, disable the Red Hat Ceph Storage 3 tools repository:
# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms
If using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 tools repository:
# subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms
- Install the leapp utility. See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
- Run through the leapp preupgrade checks. See Assessing upgradability from the command line.
- Set PermitRootLogin yes in /etc/ssh/sshd_config.
Restart the OpenSSH SSH daemon:
# systemctl restart sshd.service
Remove the iSCSI module from the Linux kernel:
# modprobe -r iscsi
- Perform the upgrade by following Performing the upgrade from RHEL 7 to RHEL 8.
- Reboot the node.
Enable the tools repository for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8:
# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
Enable the Ansible repository:
# subscription-manager repos --enable=ansible-2.9-for-rhel-8-x86_64-rpms
- Configure ceph-ansible to manage the cluster. It will install the dashboard. Follow the instructions in Installing Red Hat Ceph Storage using Ansible, including the prerequisites.
- After you run ansible-playbook site.yml as a part of the above procedures, the URL for the dashboard will be printed. See Installing dashboard using Ansible in the Dashboard guide for more information on locating the URL and accessing the dashboard.
Additional Resources
- See Manually upgrading a Red Hat Ceph Storage cluster and operating system in the Installation Guide for more information.
- See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 for more information.
- See Installing dashboard using Ansible in the Dashboard guide for more information.
8.6. Manually upgrading Ceph Ansible nodes and reconfiguring settings
Manually upgrade the Ceph Ansible software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time. This procedure applies to both bare-metal and container deployments, unless specified.
Before upgrading the host operating system on the Ceph Ansible node, back up the group_vars directory and the hosts file. Use the backup when you re-configure the Ceph Ansible node.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- The node is running Red Hat Enterprise Linux 7.9.
- The node is running Red Hat Ceph Storage version 3.3z6 or 4.1.
- Access to the installation source for Red Hat Enterprise Linux 8.3.
Procedure
Enable the tools repository for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8:
[root@dashboard ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
Enable the Ansible repository:
[root@dashboard ~]# subscription-manager repos --enable=ansible-2.9-for-rhel-8-x86_64-rpms
- Configure ceph-ansible to manage the storage cluster. It will install the dashboard. Follow the instructions in Installing Red Hat Ceph Storage using Ansible, including the prerequisites.
- After you run ansible-playbook site.yml as a part of the above procedures, the URL for the dashboard will be printed. See Installing dashboard using Ansible in the Dashboard guide for more information on locating the URL and accessing the dashboard.
Additional Resources
- See Manually upgrading a Red Hat Ceph Storage cluster and operating system in the Installation Guide for more information.
- See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 for more information.
- See Installing dashboard using Ansible in the Dashboard guide for more information.
8.7. Manually upgrading the Ceph File System Metadata Server nodes and their operating systems
You can manually upgrade the Ceph File System (CephFS) Metadata Server (MDS) software on a Red Hat Ceph Storage cluster and the Red Hat Enterprise Linux operating system to a new major release at the same time.
Before you upgrade the storage cluster, reduce the number of active MDS ranks to one per file system. This eliminates any possible version conflicts between multiple MDS. In addition, take all standby nodes offline before upgrading.
This is because the MDS cluster does not possess built-in versioning or file system flags. Without these features, multiple MDS daemons might communicate using different versions of the MDS software, which could cause assertions or other faults to occur.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- The nodes are running Red Hat Enterprise Linux 7.9.
- The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
- Access to the installation source for Red Hat Enterprise Linux 8.3.
- Root-level access to all nodes in the storage cluster.
The underlying XFS filesystem must be formatted with ftype=1 or with d_type support. Run the command xfs_info /var to ensure the ftype is set to 1. If the value of ftype is not 1, attach a new disk or create a volume. On top of this new device, create a new XFS filesystem and mount it on /var/lib/containers.
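For example, the following illustrative output shows ftype=1 in the naming section; the block sizes and other values will differ on your system:
[root@mds ~]# xfs_info /var | grep ftype
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1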
Starting with Red Hat Enterprise Linux 8, mkfs.xfs enables ftype=1 by default.
Procedure
Reduce the number of active MDS ranks to 1:
Syntax
ceph fs set FILE_SYSTEM_NAME max_mds 1
Example
[root@mds ~]# ceph fs set fs1 max_mds 1
Wait for the cluster to stop all of the MDS ranks. When all of the MDS have stopped, only rank 0 should be active. The rest should be in standby mode. Check the status of the file system:
[root@mds ~]# ceph status
Use systemctl to take all standby MDS offline:
[root@mds ~]# systemctl stop ceph-mds.target
Confirm that only one MDS is online, and that it has rank 0 for the file system:
[root@mds ~]# ceph status
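On the mds: line under services:, there should be one daemon with rank 0 in the up:active state and no standby daemons. The file system and node names below are illustrative:
mds: fs1:1 {0=ceph4-mds1=up:active}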
Disable the tools repository for the operating system version:
If you are upgrading from Red Hat Ceph Storage 3 on RHEL 7, disable the Red Hat Ceph Storage 3 tools repository:
[root@mds ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms
If you are using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 tools repository:
[root@mds ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms
- Install the leapp utility. For more information about leapp, refer to Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
- Run through the leapp preupgrade checks. For more information, refer to Assessing upgradability from the command line.
- Edit /etc/ssh/sshd_config and set PermitRootLogin to yes.
Restart the OpenSSH SSH daemon:
[root@mds ~]# systemctl restart sshd.service
Remove the iSCSI module from the Linux kernel:
[root@mds ~]# modprobe -r iscsi
- Perform the upgrade. See Performing the upgrade from RHEL 7 to RHEL 8.
- Reboot the MDS node.
Enable the tools repository for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8:
[root@mds ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
Install the ceph-mds package:
[root@mds ~]# dnf install ceph-mds -y
- Optional: Install the packages for any Ceph services that are colocated on this node. Enable additional Ceph repositories, if needed.
Optional: Install the leveldb package, which is needed by other Ceph services:
[root@mds ~]# dnf install leveldb
- Restore the ceph.client.admin.keyring and ceph.conf files from a node that has not been upgraded yet, or from a node that has already had those files restored.
Start the MDS service:
[root@mds ~]# systemctl restart ceph-mds.target
Verify that the daemon is active:
[root@mds ~]# ceph -s
- Follow the same processes for the standby daemons.
When you have finished restarting all of the MDS in standby, restore the previous value of max_mds for your cluster:
Syntax
ceph fs set FILE_SYSTEM_NAME max_mds ORIGINAL_VALUE
Example
[root@mds ~]# ceph fs set fs1 max_mds 5
8.8. Recovering from an operating system upgrade failure on an OSD node
As a system administrator, if you encounter a failure while using the procedure Manually upgrading Ceph OSD nodes and their operating systems, you can recover from the failure using the following procedure. In this procedure, you perform a fresh installation of Red Hat Enterprise Linux 8.4 on the node and can still recover the OSDs without any major backfilling of data, apart from the writes made while the OSDs were down.
DO NOT touch the media backing the OSDs or their respective wal.db or block.db databases.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- An OSD node that failed to upgrade.
- Access to the installation source for Red Hat Enterprise Linux 8.4.
Procedure
Perform a standard installation of Red Hat Enterprise Linux 8.4 on the failed node and enable the Red Hat Enterprise Linux repositories.
Enable the repositories for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8.
Enable the tools repository:
# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
Enable the osd repository:
# subscription-manager repos --enable=rhceph-4-osd-for-rhel-8-x86_64-rpms
Install the ceph-osd package:
# dnf install ceph-osd
- Restore the ceph.conf file to /etc/ceph from a node which has not been upgraded yet, or from a node that has already had those files restored.
Start the OSD service:
# systemctl start ceph-osd.target
Activate the object store devices:
ceph-volume lvm activate --all
Watch the OSDs recover and the cluster backfill writes to the recovered OSDs:
# ceph -w
Monitor the output until all PGs are in the active+clean state.
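Optionally, you can also check the placement group summary directly. The command and output here are illustrative; the counts will reflect your own cluster:
# ceph pg stat
128 pgs: 128 active+clean; 2.1 GiB data, 6.4 GiB used, 293 GiB / 300 GiB avail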
Additional Resources
- See Manually upgrading a Red Hat Ceph Storage cluster and operating system in the Installation Guide for more information.
- See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 for more information.
8.9. Additional Resources
- If you do not need to upgrade the operating system to a new major release, see Upgrading a Red Hat Ceph Storage cluster.