Appendix H. Manually Upgrading from Red Hat Ceph Storage 2 to 3

This chapter provides information on how to manually upgrade from Red Hat Ceph Storage version 2.4 to 3.0.

You can upgrade the Ceph Storage Cluster in a rolling fashion while the cluster is running. Upgrade each node in the cluster sequentially, proceeding to the next node only after the previous node has finished.

Red Hat recommends upgrading the Ceph components in the following order:

  • Monitor nodes
  • OSD nodes
  • Ceph Object Gateway nodes
  • All other Ceph client nodes

Two methods are available to upgrade Red Hat Ceph Storage 2.3 to 3:

  • Using Red Hat’s Content Delivery Network (CDN)
  • Using a Red Hat provided ISO image file

After upgrading the storage cluster, it might report a health warning about the CRUSH map using legacy tunables. For details, see the CRUSH Tunables section in the Storage Strategies guide for Red Hat Ceph Storage 3.

Example

$ ceph -s
    cluster 848135d7-cdb9-4084-8df2-fb5e41ae60bd
     health HEALTH_WARN
            crush map has legacy tunables (require bobtail, min is firefly)
     monmap e1: 1 mons at {ceph1=192.168.0.121:6789/0}
            election epoch 2, quorum 0 ceph1
     osdmap e83: 2 osds: 2 up, 2 in
      pgmap v1864: 64 pgs, 1 pools, 38192 kB data, 17 objects
            10376 MB used, 10083 MB / 20460 MB avail
                  64 active+clean

Important

Red Hat recommends that all Ceph clients run the same version as the Ceph storage cluster.

Prerequisites

  • If the cluster you want to upgrade contains Ceph Block Device images that use the exclusive-lock feature, ensure that all Ceph Block Device users have permissions to blacklist clients:

    ceph auth caps client.<ID> mon 'allow r, allow command "osd blacklist"' osd '<existing-OSD-user-capabilities>'
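As an illustration, assume a hypothetical client named rbd-user whose existing OSD capabilities are allow rwx pool=rbd. The sketch below builds the full command as a string so it can be inspected before being applied to a live cluster:

```shell
# Hypothetical values; substitute your own client ID and the OSD
# capabilities reported by `ceph auth get client.<ID>`.
CLIENT_ID="rbd-user"
EXISTING_OSD_CAPS="allow rwx pool=rbd"

# Build the full command as a string for inspection.
CMD="ceph auth caps client.${CLIENT_ID} mon 'allow r, allow command \"osd blacklist\"' osd '${EXISTING_OSD_CAPS}'"

echo "$CMD"        # review the expanded command
# eval "$CMD"      # uncomment to apply it on a live cluster
```

Note that the existing OSD capabilities must be repeated verbatim; `ceph auth caps` replaces all capabilities for the client rather than appending to them.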

Upgrading Monitor Nodes

This section describes the steps to upgrade a Ceph Monitor node to a later version. A storage cluster must have an odd number of Monitors, so the cluster retains quorum while one Monitor is being upgraded.

Procedure

Do the following steps on each Monitor node in the storage cluster. Upgrade only one Monitor node at a time.

  1. If you installed Red Hat Ceph Storage 2 by using software repositories, disable the repositories:

    1. If the following lines exist in the /etc/apt/sources.list or /etc/apt/sources.list.d/ceph.list files, comment out the online repositories for Red Hat Ceph Storage 2 by adding a hash sign (#) to the beginning of the line.

      deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Installer
      deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Tools
    2. Remove the following files from the /etc/apt/sources.list.d/ directory:

      Installer.list
      Tools.list
  2. Enable the Red Hat Ceph Storage 3 Monitor repository:

    $ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/MON $(lsb_release -sc) main | tee /etc/apt/sources.list.d/MON.list'
    $ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
    $ sudo apt-get update
  3. As root, stop the Monitor process:

    Syntax

    $ sudo stop ceph-mon id=<monitor_host_name>

    Example

    $ sudo stop ceph-mon id=node1

  4. As root, update the ceph-mon package:

    $ sudo apt-get update
    $ sudo apt-get dist-upgrade
    $ sudo apt-get install ceph-mon
    1. Verify the latest Red Hat version is installed. For Red Hat Ceph Storage 3, the version must begin with 12.2 (luminous):

      $ dpkg -s ceph-base | grep Version
      Version: 12.2.<x>-<build>
  5. As root, update the owner and group permissions:

    Syntax

    # chown -R <owner>:<group> <path_to_directory>

    Example

    # chown -R ceph:ceph /var/lib/ceph/mon
    # chown -R ceph:ceph /var/log/ceph
    # chown -R ceph:ceph /var/run/ceph
    # chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
    # chown ceph:ceph /etc/ceph/ceph.conf
    # chown ceph:ceph /etc/ceph/rbdmap

    Note

    If the Ceph Monitor node is colocated with an OpenStack Controller node, then the Glance and Cinder keyring files must be owned by glance and cinder respectively. For example:

    # ls -l /etc/ceph/
    ...
    -rw-------.  1 glance glance      64 <date> ceph.client.glance.keyring
    -rw-------.  1 cinder cinder      64 <date> ceph.client.cinder.keyring
    ...
  6. Remove packages that are no longer needed:

    $ sudo apt-get purge ceph ceph-osd
  7. As root, replay device events from the kernel:

    # udevadm trigger
  8. As root, enable the ceph-mon process:

    $ sudo systemctl enable ceph-mon.target
    $ sudo systemctl enable ceph-mon@<monitor_host_name>
  9. As root, reboot the Monitor node:

    # shutdown -r now
  10. Once the Monitor node is up, check the health of the Ceph storage cluster before moving to the next Monitor node:

    # ceph -s

Upgrading OSD Nodes

This section describes steps to upgrade a Ceph OSD node to a later version.

Prerequisites

When upgrading an OSD node, some placement groups will become degraded because the OSD might be down or restarting. To prevent Ceph from starting the recovery process, on a Monitor node, set the noout and norebalance OSD flags:

[root@monitor ~]# ceph osd set noout
[root@monitor ~]# ceph osd set norebalance

Procedure

Do the following steps on each OSD node in the storage cluster. Upgrade only one OSD node at a time. If an ISO-based installation was performed for Red Hat Ceph Storage 2.3, then skip this first step.

  1. As root, disable the Red Hat Ceph Storage 2 repositories:

    1. If the following lines exist in the /etc/apt/sources.list or /etc/apt/sources.list.d/ceph.list files, comment out the online repositories for Red Hat Ceph Storage 2 by adding a hash sign (#) to the beginning of the line.

      deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Installer
      deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Tools
    2. Remove the following files from the /etc/apt/sources.list.d/ directory:

      Installer.list
      Tools.list
      Note

      Remove any reference to Red Hat Ceph Storage 2 in the APT source files.

  2. Enable the Red Hat Ceph Storage 3 OSD repository:

    $ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/OSD $(lsb_release -sc) main | tee /etc/apt/sources.list.d/OSD.list'
    $ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
    $ sudo apt-get update
  3. As root, stop any running OSD process:

    Syntax

    $ sudo stop ceph-osd id=<osd_id>

    Example

    $ sudo stop ceph-osd id=0

  4. As root, update the ceph-osd package:

    $ sudo apt-get update
    $ sudo apt-get dist-upgrade
    $ sudo apt-get install ceph-osd
    1. Verify the latest Red Hat version is installed. For Red Hat Ceph Storage 3, the version must begin with 12.2 (luminous):

      $ dpkg -s ceph-base | grep Version
      Version: 12.2.<x>-<build>
  5. As root, update the owner and group permissions on the newly created directory and files:

    Syntax

    # chown -R <owner>:<group> <path_to_directory>

    Example

    # chown -R ceph:ceph /var/lib/ceph/osd
    # chown -R ceph:ceph /var/log/ceph
    # chown -R ceph:ceph /var/run/ceph
    # chown -R ceph:ceph /etc/ceph

    Note

    On a Ceph storage cluster with a large number of disks, the following find command can speed up the ownership change by running chown in parallel:

    # find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | xargs -P12 -n1 chown -R ceph:ceph
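To see what the parallel form does without touching a real OSD directory, the same pattern can be dry-run against a throwaway tree; the echo in front of chown makes each worker print the command it would run instead of executing it:

```shell
# Build a throwaway tree that mimics /var/lib/ceph/osd/ceph-<N> directories.
tmp=$(mktemp -d)
mkdir -p "$tmp/ceph-0" "$tmp/ceph-1" "$tmp/ceph-2"

# Same find/xargs pattern as above, with `echo` in front of chown so the
# parallel workers only print the chown invocations they would run.
find "$tmp" -maxdepth 1 -mindepth 1 -print | xargs -P12 -n1 echo chown -R ceph:ceph

rm -rf "$tmp"
```

Each top-level OSD directory becomes one chown invocation, and `-P12` runs up to twelve of them concurrently.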
  6. Remove packages that are no longer needed:

    $ sudo apt-get purge ceph ceph-mon
    Note

    The ceph package is now a meta-package. Only the ceph-mon package is needed on the Monitor nodes, only the ceph-osd package is needed on the OSD nodes, and only the ceph-radosgw package is needed on the RADOS Gateway nodes.

  7. As root, replay device events from the kernel:

    # udevadm trigger
  8. As root, enable the ceph-osd process:

    $ sudo systemctl enable ceph-osd.target
    $ sudo systemctl enable ceph-osd@<osd_id>
  9. As root, reboot the OSD node:

    # shutdown -r now
  10. Move to the next OSD node.

    Note

    If the noout and norebalance flags are set, the storage cluster is in the HEALTH_WARN state:

    $ ceph health
    HEALTH_WARN noout,norebalance flag(s) set
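Between nodes, the health check therefore has to distinguish the warning caused by the flags set above from a real problem. A minimal sketch of that check, assuming the only acceptable warning is the noout,norebalance flags set earlier:

```shell
# Return 0 (safe to proceed to the next OSD node) only when the cluster
# is HEALTH_OK or the sole warning is the flags set for this upgrade.
safe_to_proceed() {
    health="$1"    # one line of `ceph health` output
    case "$health" in
        HEALTH_OK*) return 0 ;;
        "HEALTH_WARN noout,norebalance flag(s) set") return 0 ;;
        *) return 1 ;;
    esac
}

# Example usage against live output:
#   safe_to_proceed "$(ceph health)" && echo "proceed to next OSD node"
```

Any other warning or error should be investigated before upgrading the next node.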

Once you are done upgrading the Ceph Storage Cluster, unset the previously set OSD flags and verify the storage cluster status.

On a Monitor node, and after all OSD nodes have been upgraded, unset the noout and norebalance flags:

# ceph osd unset noout
# ceph osd unset norebalance

In addition, execute the ceph osd require-osd-release <release> command. This command ensures that no more OSDs running Red Hat Ceph Storage 2.3 can be added to the storage cluster. If you do not run this command, the storage cluster status will be HEALTH_WARN.

# ceph osd require-osd-release luminous

Additional Resources

  • To expand the storage capacity by adding new OSDs to the storage cluster, see the Add an OSD section in the Administration Guide for Red Hat Ceph Storage 3

Upgrading the Ceph Object Gateway Nodes

This section describes steps to upgrade a Ceph Object Gateway node to a later version.

Important

Red Hat recommends backing up the system before proceeding with this upgrade procedure.

Prerequisites

  • Red Hat recommends putting a Ceph Object Gateway behind a load balancer, such as HAProxy. If you use a load balancer, remove the Ceph Object Gateway from the load balancer, and begin the upgrade once it is no longer serving requests.
  • If you use a custom name for the region pool, specified in the rgw_region_root_pool parameter, add the rgw_zonegroup_root_pool parameter to the [global] section of the Ceph configuration file. Set the value of rgw_zonegroup_root_pool to be the same as rgw_region_root_pool, for example:

    [global]
    rgw_zonegroup_root_pool = .us.rgw.root

Procedure

Do the following steps on each Ceph Object Gateway node in the storage cluster. Upgrade only one node at a time.

  1. If you used online repositories to install Red Hat Ceph Storage, disable the Red Hat Ceph Storage 2 repositories.

    1. Comment out the following lines in the /etc/apt/sources.list and /etc/apt/sources.list.d/ceph.list files.

      # deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Installer
      # deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Tools
    2. Remove the following files from the /etc/apt/sources.list.d/ directory.

      # rm /etc/apt/sources.list.d/Installer.list
      # rm /etc/apt/sources.list.d/Tools.list
  2. Enable the Red Hat Ceph Storage 3 Tools repository:

    $ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
    $ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
    $ sudo apt-get update
  3. Stop the Ceph Object Gateway process (ceph-radosgw):

    Syntax

    $ sudo stop radosgw id=rgw.<hostname>

    Replace <hostname> with the name of the Ceph Object Gateway host, for example gateway-node.

    Example

    $ sudo stop radosgw id=rgw.gateway-node
  4. Update the radosgw package:

    $ sudo apt-get update
    $ sudo apt-get dist-upgrade
    $ sudo apt-get install radosgw
  5. Change the owner and group permissions on the newly created /var/lib/ceph/radosgw/ and /var/log/ceph/ directories and their content to ceph.

    # chown -R ceph:ceph /var/lib/ceph/radosgw
    # chown -R ceph:ceph /var/log/ceph
  6. Remove packages that are no longer needed.

    $ sudo apt-get purge ceph
    Note

    The ceph package is now a meta-package. Only the ceph-mon, ceph-osd, and ceph-radosgw packages are required on the Monitor, OSD, and Ceph Object Gateway nodes respectively.

  7. Enable the ceph-radosgw process:

    Syntax

    $ sudo systemctl enable ceph-radosgw.target
    $ sudo systemctl enable ceph-radosgw@rgw.<hostname>

    Replace <hostname> with the name of the Ceph Object Gateway host, for example gateway-node.

    Example

    $ sudo systemctl enable ceph-radosgw.target
    $ sudo systemctl enable ceph-radosgw@rgw.gateway-node
  8. Reboot the Ceph Object Gateway node:

    # shutdown -r now
  9. If you use a load balancer, add the Ceph Object Gateway node back to the load balancer.
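For steps 1 and 9, assuming an HAProxy front end, draining and restoring the gateway can be done by toggling the disabled keyword on its server line and reloading HAProxy. The backend name rgw, the server names, and the addresses below are all hypothetical:

```
backend rgw
    balance roundrobin
    # Add `disabled` before the upgrade so HAProxy stops routing new
    # requests to this node; remove it afterwards and reload HAProxy.
    server gateway-node 192.168.0.50:8080 check disabled
    server gateway-node2 192.168.0.51:8080 check
```

The health check (`check`) lets HAProxy confirm the gateway is answering again before the node returns to full rotation.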


Upgrading a Ceph Client Node

Ceph clients are:

  • Ceph Block Devices
  • OpenStack Nova compute nodes
  • QEMU/KVM hypervisors
  • Any custom application that uses the Ceph client-side libraries

Red Hat recommends that all Ceph clients run the same version as the Ceph storage cluster.

Prerequisites

  • Stop all I/O requests against a Ceph client node while upgrading the packages to prevent unexpected errors from occurring

Procedure

  1. If you installed Red Hat Ceph Storage 2 clients by using software repositories, disable the repositories:

    1. If the following lines exist in the /etc/apt/sources.list or /etc/apt/sources.list.d/ceph.list files, comment out the online repositories for Red Hat Ceph Storage 2 by adding a hash sign (#) to the beginning of the line.

      deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Installer
      deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Tools
    2. Remove the following files from the /etc/apt/sources.list.d/ directory:

      Installer.list
      Tools.list
      Note

      Remove any reference to Red Hat Ceph Storage 2 in the APT source file(s).

  2. On the client node, enable the Red Hat Ceph Storage Tools 3 repository:

    $ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
    $ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
    $ sudo apt-get update
  3. On the client node, update the ceph-common package:

    $ sudo apt-get install ceph-common

Restart any application that depends on the Ceph client-side libraries after upgrading the ceph-common package.

Note

If you are upgrading OpenStack Nova compute nodes that have running QEMU/KVM instances, or if you use a dedicated QEMU/KVM client, stop and then start the QEMU/KVM instances, because restarting the instances does not work in this case.