Appendix G. Manually Upgrading from Red Hat Ceph Storage 2 to 3
This appendix provides information on how to manually upgrade from Red Hat Ceph Storage version 2.4 to 3.0.
You can upgrade the Ceph Storage Cluster in a rolling fashion and while the cluster is running. Upgrade each node in the cluster sequentially, only proceeding to the next node after the previous node is done.
Red Hat recommends upgrading the Ceph components in the following order:
- Monitor nodes
- OSD nodes
- Ceph Object Gateway nodes
- All other Ceph client nodes
Two methods are available for upgrading a Red Hat Ceph Storage 2.3 cluster to 3:
- Using Red Hat’s Content Delivery Network (CDN)
- Using a Red Hat provided ISO image file
After upgrading the storage cluster you might have a health warning regarding the CRUSH map using legacy tunables. For details, see the CRUSH Tunables section in the Storage Strategies guide for Red Hat Ceph Storage 3.
Example
$ ceph -s
cluster 848135d7-cdb9-4084-8df2-fb5e41ae60bd
health HEALTH_WARN
crush map has legacy tunables (require bobtail, min is firefly)
monmap e1: 1 mons at {ceph1=192.168.0.121:6789/0}
election epoch 2, quorum 0 ceph1
osdmap e83: 2 osds: 2 up, 2 in
pgmap v1864: 64 pgs, 1 pools, 38192 kB data, 17 objects
10376 MB used, 10083 MB / 20460 MB avail
64 active+clean
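If you see this warning, you can display the current tunables and, when ready, raise the tunables profile. The following is a minimal sketch that uses the firefly profile reported as the minimum in the example above; changing tunables triggers data movement, so review the CRUSH Tunables section before choosing a profile:
# ceph osd crush show-tunables
# ceph osd crush tunables firefly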
Red Hat recommends that all Ceph clients run the same version as the Ceph storage cluster.
Prerequisites
If the cluster you want to upgrade contains Ceph Block Device images that use the exclusive-lock feature, ensure that all Ceph Block Device users have permissions to blacklist clients:
ceph auth caps client.<ID> mon 'allow r, allow command "osd blacklist"' osd '<existing-OSD-user-capabilities>'
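For example, for a hypothetical block device user client.rbd-user whose existing OSD capabilities are allow rwx pool=rbd, the command would look like the following. Both the client name and the OSD capabilities here are placeholders; substitute the users and capabilities from your own cluster:
# ceph auth caps client.rbd-user mon 'allow r, allow command "osd blacklist"' osd 'allow rwx pool=rbd'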
Upgrading Monitor Nodes
This section describes steps to upgrade a Ceph Monitor node to a later version. There must be an odd number of Monitors. While you are upgrading one Monitor, the storage cluster will still have quorum.
Procedure
Do the following steps on each Monitor node in the storage cluster. Upgrade only one Monitor node at a time.
If you installed Red Hat Ceph Storage 2 by using software repositories, disable the repositories:
# subscription-manager repos --disable=rhel-7-server-rhceph-2-mon-rpms --disable=rhel-7-server-rhceph-2-installer-rpms
Enable the Red Hat Ceph Storage 3 Monitor repository:
[root@monitor ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-mon-rpms
As root, stop the Monitor process:
Syntax
# service ceph stop <daemon_type>.<monitor_host_name>
Example
# service ceph stop mon.node1
As root, update the ceph-mon package:
# yum update ceph-mon
As root, update the owner and group permissions:
Syntax
# chown -R <owner>:<group> <path_to_directory>
Example
# chown -R ceph:ceph /var/lib/ceph/mon
# chown -R ceph:ceph /var/log/ceph
# chown -R ceph:ceph /var/run/ceph
# chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
# chown ceph:ceph /etc/ceph/ceph.conf
# chown ceph:ceph /etc/ceph/rbdmap
Note
If the Ceph Monitor node is colocated with an OpenStack Controller node, then the Glance and Cinder keyring files must be owned by glance and cinder respectively. For example:
# ls -l /etc/ceph/
...
-rw-------. 1 glance glance 64 <date> ceph.client.glance.keyring
-rw-------. 1 cinder cinder 64 <date> ceph.client.cinder.keyring
...
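If the package update reset the ownership of these keyrings, a correction along the following lines restores it; the file names assume the default keyring paths shown above:
# chown glance:glance /etc/ceph/ceph.client.glance.keyring
# chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring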
If SELinux is in enforcing or permissive mode, relabel the SELinux context on the next reboot.
# touch /.autorelabel
Warning
Relabeling can take a long time to complete because SELinux must traverse every file system and fix any mislabeled files. To exclude directories from being relabeled, add the directories to the /etc/selinux/fixfiles_exclude_dirs file before rebooting.
As root, enable the ceph-mon process:
# systemctl enable ceph-mon.target
# systemctl enable ceph-mon@<monitor_host_name>
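For example, using the monitor host name node1 from the earlier stop step:
# systemctl enable ceph-mon@node1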
As root, reboot the Monitor node:
# shutdown -r now
Once the Monitor node is up, check the health of the Ceph storage cluster before moving to the next Monitor node:
# ceph -s
Upgrading OSD Nodes
This section describes steps to upgrade a Ceph OSD node to a later version.
Prerequisites
When upgrading an OSD node, some placement groups will become degraded because the OSD might be down or restarting. To prevent Ceph from starting the recovery process, on a Monitor node, set the noout and norebalance OSD flags:
[root@monitor ~]# ceph osd set noout
[root@monitor ~]# ceph osd set norebalance
Procedure
Do the following steps on each OSD node in the storage cluster. Upgrade only one OSD node at a time. If an ISO-based installation was performed for Red Hat Ceph Storage 2.3, then skip this first step.
As root, disable the Red Hat Ceph Storage 2 repositories:
# subscription-manager repos --disable=rhel-7-server-rhceph-2-osd-rpms --disable=rhel-7-server-rhceph-2-installer-rpms
Enable the Red Hat Ceph Storage 3 OSD repository:
[root@osd ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-osd-rpms
As root, stop any running OSD process:
Syntax
# service ceph stop <daemon_type>.<osd_id>
Example
# service ceph stop osd.0
As root, update the ceph-osd package:
# yum update ceph-osd
As root, update the owner and group permissions on the newly created directory and files:
Syntax
# chown -R <owner>:<group> <path_to_directory>
Example
# chown -R ceph:ceph /var/lib/ceph/osd
# chown -R ceph:ceph /var/log/ceph
# chown -R ceph:ceph /var/run/ceph
# chown -R ceph:ceph /etc/ceph
Note
Using the following find command might quicken the process of changing ownership by running the chown command in parallel on a Ceph storage cluster with a large number of disks; the -P12 option runs up to 12 chown processes at once:
# find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | xargs -P12 -n1 chown -R ceph:ceph
If SELinux is set to enforcing or permissive mode, then set a relabelling of the SELinux context on files for the next reboot:
# touch /.autorelabel
Warning
Relabeling will take a long time to complete because SELinux must traverse every file system and fix any mislabeled files. To exclude directories from being relabeled, add them to the /etc/selinux/fixfiles_exclude_dirs file before rebooting.
Note
In environments with a large number of objects per placement group (PG), directory enumeration speed decreases, negatively impacting performance. This is caused by the addition of xattr queries that verify the SELinux context. Setting the context at mount time removes the xattr queries for context and helps overall disk performance, especially on slower disks.
Add the following line to the [osd] section in the /etc/ceph/ceph.conf file:
osd_mount_options_xfs=rw,noatime,inode64,context="system_u:object_r:ceph_var_lib_t:s0"
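After the node reboots later in this procedure, you can verify that the OSD file systems were mounted with the SELinux context option; this check assumes the default /var/lib/ceph/osd mount points:
# mount | grep /var/lib/ceph/osd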
As root, replay device events from the kernel:
# udevadm trigger
As root, enable the ceph-osd process:
# systemctl enable ceph-osd.target
# systemctl enable ceph-osd@<osd_id>
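For example, for the OSD with ID 0 from the earlier stop step:
# systemctl enable ceph-osd@0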
As root, reboot the OSD node:
# shutdown -r now
Move to the next OSD node.
Note
If the noout and norebalance flags are set, the storage cluster is in the HEALTH_WARN state:
$ ceph health
HEALTH_WARN noout,norebalance flag(s) set
Once you are done upgrading the Ceph Storage Cluster, unset the previously set OSD flags and verify the storage cluster status.
On a Monitor node, and after all OSD nodes have been upgraded, unset the noout and norebalance flags:
# ceph osd unset noout
# ceph osd unset norebalance
In addition, execute the ceph osd require-osd-release <release> command. This command ensures that no more OSDs with Red Hat Ceph Storage 2.3 can be added to the storage cluster. If you do not run this command, the storage cluster status will be HEALTH_WARN.
# ceph osd require-osd-release luminous
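To verify that the flags are unset and the storage cluster is healthy, run the health command again; it should report HEALTH_OK once recovery completes:
# ceph health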
Additional Resources
- To expand the storage capacity by adding new OSDs to the storage cluster, see the Add an OSD section in the Administration Guide for Red Hat Ceph Storage 3.
Upgrading the Ceph Object Gateway Nodes
This section describes steps to upgrade a Ceph Object Gateway node to a later version.
Prerequisites
- Red Hat recommends putting a Ceph Object Gateway behind a load balancer, such as HAProxy. If you use a load balancer, remove the Ceph Object Gateway from the load balancer and verify that no requests are being served before you proceed; see the example after this list.
If you use a custom name for the region pool, specified in the rgw_region_root_pool parameter, add the rgw_zonegroup_root_pool parameter to the [global] section of the Ceph configuration file. Set the value of rgw_zonegroup_root_pool to be the same as rgw_region_root_pool, for example:
[global]
rgw_zonegroup_root_pool = .us.rgw.root
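If you use HAProxy, the following is a minimal sketch of taking a gateway out of rotation through the HAProxy runtime API. The backend name rgw_backend, the server name gateway-node, and the admin-level stats socket path /var/run/haproxy.sock are assumptions that depend on your HAProxy configuration:
# echo "disable server rgw_backend/gateway-node" | socat stdio /var/run/haproxy.sock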
Procedure
Do the following steps on each Ceph Object Gateway node in the storage cluster. Upgrade only one node at a time.
If you used online repositories to install Red Hat Ceph Storage, disable the Red Hat Ceph Storage 2 repositories:
# subscription-manager repos --disable=rhel-7-server-rhceph-2-tools-rpms --disable=rhel-7-server-rhceph-2-installer-rpms
Enable the Red Hat Ceph Storage 3 Tools repository:
[root@gateway ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
Stop the Ceph Object Gateway process (ceph-radosgw):
# service ceph-radosgw stop
Update the ceph-radosgw package:
# yum update ceph-radosgw
Change the owner and group permissions on the newly created /var/lib/ceph/radosgw/ and /var/log/ceph/ directories and their content to ceph:
# chown -R ceph:ceph /var/lib/ceph/radosgw
# chown -R ceph:ceph /var/log/ceph
If SELinux is set to run in enforcing or permissive mode, instruct it to relabel SELinux context on the next boot.
# touch /.autorelabel
Important
Relabeling takes a long time to complete because SELinux must traverse every file system and fix any mislabeled files. To exclude directories from being relabeled, add them to the /etc/selinux/fixfiles_exclude_dirs file before rebooting.
Enable the ceph-radosgw process:
# systemctl enable ceph-radosgw.target
# systemctl enable ceph-radosgw@rgw.<hostname>
Replace <hostname> with the name of the Ceph Object Gateway host, for example gateway-node:
# systemctl enable ceph-radosgw.target
# systemctl enable ceph-radosgw@rgw.gateway-node
Reboot the Ceph Object Gateway node.
# shutdown -r now
- If you use a load balancer, add the Ceph Object Gateway node back to the load balancer.
Upgrading a Ceph Client Node
Ceph clients are:
- Ceph Block Devices
- OpenStack Nova compute nodes
- QEMU/KVM hypervisors
- Any custom application that uses the Ceph client-side libraries
Red Hat recommends that all Ceph clients run the same version as the Ceph storage cluster.
Prerequisites
- Stop all I/O requests against a Ceph client node while upgrading the packages to prevent unexpected errors from occurring; see the example after this list.
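For example, if the client has kernel-mapped Ceph Block Devices, you can list and unmap them before upgrading. The device name /dev/rbd0 is only an example of what rbd showmapped might report on your client:
# rbd showmapped
# rbd unmap /dev/rbd0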
Procedure
If you installed Red Hat Ceph Storage 2 clients by using software repositories, disable the repositories:
# subscription-manager repos --disable=rhel-7-server-rhceph-2-tools-rpms --disable=rhel-7-server-rhceph-2-installer-rpms
Note
If an ISO-based installation was performed for Red Hat Ceph Storage 2 clients, skip this first step.
On the client node, enable the Red Hat Ceph Storage Tools 3 repository:
[root@client ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
On the client node, update the ceph-common package:
# yum update ceph-common
Restart any application that depends on the Ceph client-side libraries after upgrading the ceph-common package.
If you are upgrading OpenStack Nova compute nodes that have running QEMU/KVM instances or use a dedicated QEMU/KVM client, stop and start the QEMU/KVM instance because restarting the instance does not work in this case.
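After upgrading and restarting the clients, you can confirm that the client version matches the storage cluster. Run the first command on the client node and the second on a Monitor node; the ceph versions command is available starting with Red Hat Ceph Storage 3:
# ceph --version
# ceph versions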
