Appendix G. Manually Upgrading from Red Hat Ceph Storage 2 to 3
You can upgrade the Ceph Storage Cluster from version 2 to 3 in a rolling fashion while the cluster is running. Upgrade each node in the cluster sequentially, proceeding to the next node only after the previous node is done.
Red Hat recommends upgrading the Ceph components in the following order:
- Monitor nodes
- OSD nodes
- Ceph Object Gateway nodes
- All other Ceph client nodes
Red Hat Ceph Storage 3 introduces a new daemon, Ceph Manager (ceph-mgr). Install ceph-mgr after upgrading the Monitor nodes.
Two methods are available to upgrade Red Hat Ceph Storage 2 to 3:
- Using Red Hat’s Content Delivery Network (CDN)
- Using a Red Hat provided ISO image file
After upgrading the storage cluster, you might see a health warning about the CRUSH map using legacy tunables. For details, see the CRUSH Tunables section in the Storage Strategies guide for Red Hat Ceph Storage 3.
Example
$ ceph -s
    cluster 848135d7-cdb9-4084-8df2-fb5e41ae60bd
     health HEALTH_WARN
            crush map has legacy tunables (require bobtail, min is firefly)
     monmap e1: 1 mons at {ceph1=192.168.0.121:6789/0}
            election epoch 2, quorum 0 ceph1
     osdmap e83: 2 osds: 2 up, 2 in
      pgmap v1864: 64 pgs, 1 pools, 38192 kB data, 17 objects
            10376 MB used, 10083 MB / 20460 MB avail
                  64 active+clean
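If you decide to update the tunables to clear this warning, one possible approach, sketched here under the assumption that you want the optimal profile and can accept the data movement the change triggers, is:
# ceph osd crush tunables optimal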
Red Hat recommends that all Ceph clients run the same version as the Ceph storage cluster.
Prerequisites
If the cluster you want to upgrade contains Ceph Block Device images that use the exclusive-lock feature, ensure that all Ceph Block Device users have permissions to blacklist clients:
ceph auth caps client.<ID> mon 'allow r, allow command "osd blacklist"' osd '<existing-OSD-user-capabilities>'
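For example, for a hypothetical user client.rbd-user whose existing OSD capabilities are 'allow rwx pool=rbd', the command might look like this (the user name and capabilities are illustrative only):
# ceph auth caps client.rbd-user mon 'allow r, allow command "osd blacklist"' osd 'allow rwx pool=rbd'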
Upgrading Monitor Nodes
This section describes steps to upgrade a Ceph Monitor node to a later version. There must be an odd number of Monitors. While you are upgrading one Monitor, the storage cluster will still have quorum.
Procedure
Do the following steps on each Monitor node in the storage cluster. Upgrade only one Monitor node at a time.
If you installed Red Hat Ceph Storage 2 by using software repositories, disable the repositories:
# subscription-manager repos --disable=rhel-7-server-rhceph-2-mon-rpms --disable=rhel-7-server-rhceph-2-installer-rpms
Enable the Red Hat Ceph Storage 3 Monitor repository:
[root@monitor ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-mon-els-rpms
As root, stop the Monitor process:
Syntax
# service ceph stop <daemon_type>.<monitor_host_name>
Example
# service ceph stop mon.node1
As root, update the ceph-mon package:
# yum update ceph-mon
As root, update the owner and group permissions:
Syntax
# chown -R <owner>:<group> <path_to_directory>
Example
# chown -R ceph:ceph /var/lib/ceph/mon
# chown -R ceph:ceph /var/log/ceph
# chown -R ceph:ceph /var/run/ceph
# chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
# chown ceph:ceph /etc/ceph/ceph.conf
# chown ceph:ceph /etc/ceph/rbdmap
Note: If the Ceph Monitor node is colocated with an OpenStack Controller node, then the Glance and Cinder keyring files must be owned by glance and cinder respectively. For example:
# ls -l /etc/ceph/
...
-rw-------. 1 glance glance 64 <date> ceph.client.glance.keyring
-rw-------. 1 cinder cinder 64 <date> ceph.client.cinder.keyring
...
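If the ownership of these keyring files needs to be corrected after the update, commands along the following lines could restore it (the file names are illustrative and depend on the deployment):
# chown glance:glance /etc/ceph/ceph.client.glance.keyring
# chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring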
If SELinux is in enforcing or permissive mode, schedule a relabeling of the SELinux context for the next reboot:
# touch /.autorelabel
Warning: Relabeling can take a long time to complete because SELinux must traverse every file system and fix any mislabeled files. To exclude directories from being relabeled, add the directories to the /etc/selinux/fixfiles_exclude_dirs file before rebooting.
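For example, to exclude a hypothetical directory from relabeling, append its path to that file:
# echo "/path/to/exclude" >> /etc/selinux/fixfiles_exclude_dirs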
As root, enable the ceph-mon process:
# systemctl enable ceph-mon.target
# systemctl enable ceph-mon@<monitor_host_name>
As root, reboot the Monitor node:
# shutdown -r now
Once the Monitor node is up, check the health of the Ceph storage cluster before moving to the next Monitor node:
# ceph -s
G.1. Manually installing Ceph Manager
Usually, the Ansible automation utility installs the Ceph Manager daemon (ceph-mgr) when you deploy the Red Hat Ceph Storage cluster. However, if you do not use Ansible to manage Red Hat Ceph Storage, you can install Ceph Manager manually. Red Hat recommends colocating the Ceph Manager and Ceph Monitor daemons on the same node.
Prerequisites
- A working Red Hat Ceph Storage cluster
- root or sudo access
- The rhel-7-server-rhceph-3-mon-els-rpms repository enabled
- Open ports 6800-7300 on the public network if a firewall is used (see the example after this list)
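If firewalld is in use, a command sketch along these lines could open the Ceph Manager port range (the public zone is an assumption; adjust it to your network layout):
# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
# firewall-cmd --reload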
Procedure
Run the following commands on the node where ceph-mgr will be deployed, either as the root user or by using the sudo utility.
Install the ceph-mgr package:
[root@node1 ~]# yum install ceph-mgr
Create the /var/lib/ceph/mgr/ceph-hostname/ directory:
mkdir /var/lib/ceph/mgr/ceph-hostname
Replace hostname with the host name of the node where the ceph-mgr daemon will be deployed, for example:
[root@node1 ~]# mkdir /var/lib/ceph/mgr/ceph-node1
In the newly created directory, create an authentication key for the ceph-mgr daemon:
[root@node1 ~]# ceph auth get-or-create mgr.`hostname -s` mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-node1/keyring
Change the owner and group of the /var/lib/ceph/mgr/ directory to ceph:ceph:
[root@node1 ~]# chown -R ceph:ceph /var/lib/ceph/mgr
Enable the ceph-mgr target:
[root@node1 ~]# systemctl enable ceph-mgr.target
Enable and start the ceph-mgr instance:
systemctl enable ceph-mgr@hostname
systemctl start ceph-mgr@hostname
Replace hostname with the host name of the node where the ceph-mgr daemon will be deployed, for example:
[root@node1 ~]# systemctl enable ceph-mgr@node1
[root@node1 ~]# systemctl start ceph-mgr@node1
Verify that the ceph-mgr daemon started successfully:
ceph -s
The output will include a line similar to the following one under the services: section:
mgr: node1(active)
Install more ceph-mgr daemons to serve as standby daemons that become active if the current active daemon fails.
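With one or more standby daemons installed, the mgr line in the ceph -s output might then resemble the following sketch (the standby host name is hypothetical):
mgr: node1(active), standbys: node2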
Upgrading OSD Nodes
This section describes steps to upgrade a Ceph OSD node to a later version.
Prerequisites
When upgrading an OSD node, some placement groups will become degraded because the OSD might be down or restarting. To prevent Ceph from starting the recovery process, on a Monitor node, set the noout and norebalance OSD flags:
[root@monitor ~]# ceph osd set noout
[root@monitor ~]# ceph osd set norebalance
Procedure
Do the following steps on each OSD node in the storage cluster. Upgrade only one OSD node at a time. If an ISO-based installation was performed for Red Hat Ceph Storage 2.3, then skip this first step.
As root, disable the Red Hat Ceph Storage 2 repositories:
# subscription-manager repos --disable=rhel-7-server-rhceph-2-osd-rpms --disable=rhel-7-server-rhceph-2-installer-rpms
Enable the Red Hat Ceph Storage 3 OSD repository:
[root@osd ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-osd-els-rpms
As root, stop any running OSD process:
Syntax
# service ceph stop <daemon_type>.<osd_id>
Example
# service ceph stop osd.0
As root, update the ceph-osd package:
# yum update ceph-osd
As root, update the owner and group permissions on the newly created directory and files:
Syntax
# chown -R <owner>:<group> <path_to_directory>
Example
# chown -R ceph:ceph /var/lib/ceph/osd
# chown -R ceph:ceph /var/log/ceph
# chown -R ceph:ceph /var/run/ceph
# chown -R ceph:ceph /etc/ceph
Note: Using the following find command might quicken the process of changing ownership by using the chown command in parallel on a Ceph storage cluster with a large number of disks:
# find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | xargs -P12 -n1 chown -R ceph:ceph
If SELinux is set to enforcing or permissive mode, schedule a relabeling of the SELinux context on files for the next reboot:
# touch /.autorelabel
Warning: Relabeling will take a long time to complete, because SELinux must traverse every file system and fix any mislabeled files. To exclude directories from being relabeled, add the directories to the /etc/selinux/fixfiles_exclude_dirs file before rebooting.
Note: In environments with a large number of objects per placement group (PG), the directory enumeration speed will decrease, causing a negative impact on performance. This is caused by the addition of xattr queries that verify the SELinux context. Setting the context at mount time removes the xattr queries for context and helps overall disk performance, especially on slower disks.
Add the following line to the [osd] section in the /etc/ceph/ceph.conf file:
osd_mount_options_xfs=rw,noatime,inode64,context="system_u:object_r:ceph_var_lib_t:s0"
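For context, the relevant fragment of a hypothetical /etc/ceph/ceph.conf might then look like this:
[osd]
osd_mount_options_xfs=rw,noatime,inode64,context="system_u:object_r:ceph_var_lib_t:s0"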
As root, replay device events from the kernel:
# udevadm trigger
As root, enable the ceph-osd process:
# systemctl enable ceph-osd.target
# systemctl enable ceph-osd@<osd_id>
As root, reboot the OSD node:
# shutdown -r now
Move to the next OSD node.
Note: If the noout and norebalance flags are set, the storage cluster is in the HEALTH_WARN state:
$ ceph health
HEALTH_WARN noout,norebalance flag(s) set
Once you are done upgrading the Ceph Storage Cluster, unset the previously set OSD flags and verify the storage cluster status.
On a Monitor node, and after all OSD nodes have been upgraded, unset the noout and norebalance flags:
# ceph osd unset noout
# ceph osd unset norebalance
In addition, execute the ceph osd require-osd-release <release> command. This command ensures that no more OSDs with Red Hat Ceph Storage 2.3 can be added to the storage cluster. If you do not run this command, the storage status will be HEALTH_WARN.
# ceph osd require-osd-release luminous
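To confirm that the requirement took effect, a check along these lines could be used; the exact output format may vary between releases:
# ceph osd dump | grep require_osd_release
require_osd_release luminous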
Additional Resources
- To expand the storage capacity by adding new OSDs to the storage cluster, see the Add an OSD section in the Administration Guide for Red Hat Ceph Storage 3
Upgrading the Ceph Object Gateway Nodes
This section describes steps to upgrade a Ceph Object Gateway node to a later version.
Prerequisites
- Red Hat recommends putting a Ceph Object Gateway behind a load balancer, such as HAProxy. If you use a load balancer, remove the Ceph Object Gateway from the load balancer once no requests are being served.
If you use a custom name for the region pool, specified in the rgw_region_root_pool parameter, add the rgw_zonegroup_root_pool parameter to the [global] section of the Ceph configuration file. Set the value of rgw_zonegroup_root_pool to be the same as rgw_region_root_pool, for example:
[global]
rgw_zonegroup_root_pool = .us.rgw.root
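When both parameters are present, the [global] section might then contain something along these lines (the pool name is taken from the example above and is illustrative only):
[global]
rgw_region_root_pool = .us.rgw.root
rgw_zonegroup_root_pool = .us.rgw.root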
Procedure
Do the following steps on each Ceph Object Gateway node in the storage cluster. Upgrade only one node at a time.
If you used online repositories to install Red Hat Ceph Storage, disable the Red Hat Ceph Storage 2 repositories:
# subscription-manager repos --disable=rhel-7-server-rhceph-2.3-tools-rpms --disable=rhel-7-server-rhceph-2-installer-rpms
Enable the Red Hat Ceph Storage 3 Tools repository:
[root@gateway ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-els-rpms
Stop the Ceph Object Gateway process (ceph-radosgw):
# service ceph-radosgw stop
Update the ceph-radosgw package:
# yum update ceph-radosgw
Change the owner and group permissions on the newly created /var/lib/ceph/radosgw/ and /var/log/ceph/ directories and their content to ceph:
# chown -R ceph:ceph /var/lib/ceph/radosgw
# chown -R ceph:ceph /var/log/ceph
If SELinux is set to run in enforcing or permissive mode, instruct it to relabel the SELinux context on the next boot:
# touch /.autorelabel
Important: Relabeling takes a long time to complete, because SELinux must traverse every file system and fix any mislabeled files. To exclude directories from being relabeled, add them to the /etc/selinux/fixfiles_exclude_dirs file before rebooting.
Enable the ceph-radosgw process:
# systemctl enable ceph-radosgw.target
# systemctl enable ceph-radosgw@rgw.<hostname>
Replace <hostname> with the name of the Ceph Object Gateway host, for example gateway-node:
# systemctl enable ceph-radosgw.target
# systemctl enable ceph-radosgw@rgw.gateway-node
Reboot the Ceph Object Gateway node.
# shutdown -r now
- If you use a load balancer, add the Ceph Object Gateway node back to the load balancer.
Upgrading a Ceph Client Node
Ceph clients are:
- Ceph Block Devices
- OpenStack Nova compute nodes
- QEMU/KVM hypervisors
- Any custom application that uses the Ceph client-side libraries
Red Hat recommends that all Ceph clients run the same version as the Ceph storage cluster.
Prerequisites
- Stop all I/O requests against a Ceph client node while upgrading the packages to prevent unexpected errors from occurring
Procedure
If you installed Red Hat Ceph Storage 2 clients by using software repositories, disable the repositories:
# subscription-manager repos --disable=rhel-7-server-rhceph-2-tools-rpms --disable=rhel-7-server-rhceph-2-installer-rpms
Note: If an ISO-based installation was performed for Red Hat Ceph Storage 2 clients, skip this first step.
On the client node, enable the Red Hat Ceph Storage Tools 3 repository:
[root@gateway ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-els-rpms
On the client node, update the ceph-common package:
# yum update ceph-common
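After the update, you might confirm the installed client version, for example:
# ceph --version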
Restart any application that depends on the Ceph client-side libraries after upgrading the ceph-common package.
If you are upgrading OpenStack Nova compute nodes that have running QEMU/KVM instances or use a dedicated QEMU/KVM client, stop and start the QEMU/KVM instance because restarting the instance does not work in this case.
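As an illustration only, for an instance managed with the OpenStack command-line client, a stop followed by a start might look like the following (the instance name is hypothetical):
# openstack server stop instance-0001
# openstack server start instance-0001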