Chapter 3. Upgrading the Storage Cluster
To keep your administration server and your Ceph Storage cluster running optimally, upgrade them when Red Hat provides bug fixes or delivers major updates.
There is only one supported upgrade path for bringing your cluster to the latest 1.3 version.
If your cluster nodes run Ubuntu Precise 12.04, you must first upgrade the operating system to Ubuntu Trusty 14.04, because Red Hat Ceph Storage 1.3 is only supported on Ubuntu Trusty. See the separate Upgrade Ceph Cluster on Ubuntu Precise to Ubuntu Trusty document for that procedure.
3.1. Upgrading 1.3.x to 1.3.3
There are two ways to upgrade Red Hat Ceph Storage 1.3.x to 1.3.3:
- CDN or online-based installations
- ISO-based installations
For upgrading Ceph with an online or an ISO-based installation method, Red Hat recommends upgrading in the following order:
- Administration Node
- Monitor Nodes
- OSD Nodes
- Object Gateway Nodes
Important: Due to changes in the encoding of the OSD map in ceph package version 0.94.7, upgrading Monitor nodes to Red Hat Ceph Storage 1.3.3 before OSD nodes can lead to serious performance issues on large clusters that contain hundreds of OSDs.
To work around this issue, upgrade the OSD nodes before the Monitor nodes when upgrading to Red Hat Ceph Storage 1.3.3 from previous versions.
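If you are unsure which version a node currently runs before planning this order, one way to check, using standard tooling, is:
$ ceph --version
$ apt-cache policy ceph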
3.1.1. Administration Node
Using the Online Repositories
To upgrade the administration node, remove the Calamari, Installer, and Tools repositories under /etc/apt/sources.list.d/, remove cephdeploy.conf from the working directory, for example /home/example/ceph/, remove .cephdeploy.conf from the home directory, set the Installer (ceph-deploy) online repository, upgrade ceph-deploy, enable the Calamari and Tools online repositories, upgrade calamari-server and calamari-clients, re-initialize Calamari and Salt, and upgrade Ceph.
Remove existing Ceph repositories:
$ cd /etc/apt/sources.list.d/
$ sudo rm -rf Calamari.list Installer.list Tools.list
Remove the existing cephdeploy.conf file from the Ceph working directory:
Syntax
$ rm -rf <directory>/cephdeploy.conf
Example
$ rm -rf /home/example/ceph/cephdeploy.conf
Remove the existing .cephdeploy.conf file from the home directory:
Syntax
$ rm -rf <directory>/.cephdeploy.conf
Example
$ rm -rf /home/example/.cephdeploy.conf
Set the Installer (ceph-deploy) repository, then use ceph-deploy to enable the Calamari and Tools repositories:
$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/ubuntu/1.3-updates/Installer $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Installer.list'
$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
$ sudo apt-get update
$ sudo apt-get install ceph-deploy
$ ceph-deploy repo --repo-url 'https://customername:customerpasswd@rhcs.download.redhat.com/ubuntu/1.3-updates/Calamari' Calamari `hostname -f`
$ ceph-deploy repo --repo-url 'https://customername:customerpasswd@rhcs.download.redhat.com/ubuntu/1.3-updates/Tools' Tools `hostname -f`
$ sudo apt-get update
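After the first command above, the new repository definition should look something like the following sketch, assuming an Ubuntu Trusty node (so that $(lsb_release -sc) expands to trusty), with customername and customerpasswd standing in for your Customer Portal credentials:
# /etc/apt/sources.list.d/Installer.list
deb https://customername:customerpasswd@rhcs.download.redhat.com/ubuntu/1.3-updates/Installer trusty main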
Upgrade Calamari:
$ sudo apt-get install calamari-server calamari-clients
Re-initialize Calamari:
$ sudo calamari-ctl initialize
Update existing cluster nodes that report to Calamari:
$ sudo salt '*' state.highstate
Upgrade Ceph:
$ ceph-deploy install --no-adjust-repos --cli <admin-node>
$ sudo apt-get upgrade
$ sudo restart ceph-all
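For example, if the administration node's short hostname were admin-node (a hypothetical name), the first command would read:
$ ceph-deploy install --no-adjust-repos --cli admin-node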
Using an ISO
To upgrade the administration node using an ISO, remove the Calamari, Installer, and Tools repositories under /etc/apt/sources.list.d/, remove cephdeploy.conf from the working directory, for example ~/ceph-config, remove .cephdeploy.conf from the home directory, download and mount the latest Ceph ISO, run ice_setup, re-initialize Calamari, and upgrade Ceph.
To support upgrading the other Ceph daemons, you must upgrade the Administration node first.
Remove existing Ceph repositories:
$ cd /etc/apt/sources.list.d/
$ sudo rm -rf Calamari.list Installer.list Tools.list
Remove the existing cephdeploy.conf file from the Ceph working directory:
Syntax
$ rm -rf <directory>/cephdeploy.conf
Example
$ rm -rf /home/example/ceph/cephdeploy.conf
Remove the existing .cephdeploy.conf file from the home directory:
Syntax
$ rm -rf <directory>/.cephdeploy.conf
Example
$ rm -rf /home/example/.cephdeploy.conf
- Visit the Red Hat Customer Portal to obtain the Red Hat Ceph Storage ISO image file.
- Download the rhceph-1.3.3-ubuntu-x86_64-dvd.iso file.
Using sudo, mount the image:
$ sudo mount /<path_to_iso>/rhceph-1.3.3-ubuntu-x86_64-dvd.iso /mnt
Using sudo, install the setup program:
$ sudo dpkg -i /mnt/ice-setup_*.deb
Note: If you receive an error about missing python-pkg-resources, run sudo apt-get -f install to install the missing python-pkg-resources dependency.
Navigate to the working directory:
$ cd ~/ceph-config
Using sudo, run the setup script in the working directory:
$ sudo ice_setup -d /mnt
The ice_setup program installs an upgraded version of ceph-deploy, calamari-server, and calamari-clients, creates new local repositories, and writes a .cephdeploy.conf file.
Initialize Calamari and update existing cluster nodes that report to Calamari:
$ sudo calamari-ctl initialize
$ sudo salt '*' state.highstate
Upgrade Ceph:
$ ceph-deploy install --no-adjust-repos --cli <admin-node>
$ sudo apt-get upgrade
$ sudo restart ceph-all
3.1.2. Monitor Nodes
To upgrade a Monitor node, log in to the node, remove the ceph-mon repository under /etc/apt/sources.list.d/, set the online Monitor repository from the administration node, reinstall Ceph, and reconnect the Monitor node to Calamari. Finally, upgrade and restart the Ceph Monitor daemon.
Only upgrade one Monitor node at a time, and allow the Monitor to come up and rejoin the quorum before proceeding to upgrade the next Monitor.
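One way to confirm that an upgraded Monitor has come back up and rejoined the quorum is to query the quorum status from the administration node or any Monitor node, for example:
$ sudo ceph quorum_status --format json-pretty
The quorum_names field should list every Monitor, including the one that was just restarted.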
Using the Online Repositories
Remove the existing Ceph repository on the Monitor node:
$ cd /etc/apt/sources.list.d/
$ sudo rm -rf ceph-mon.list
Set the online Monitor repository on the Monitor node from the administration node:
$ ceph-deploy repo --repo-url 'https://customername:customerpasswd@rhcs.download.redhat.com/ubuntu/1.3-updates/MON' --gpg-url https://www.redhat.com/security/fd431d51.txt ceph-mon <monitor-node>
Reinstall Ceph on the Monitor node from the administration node:
$ ceph-deploy install --no-adjust-repos --mon <monitor-node>
Note: You need to specify --no-adjust-repos with ceph-deploy so that ceph-deploy does not create a ceph.list file on the Monitor node.
Reconnect the Monitor node to Calamari. From the administration node, execute:
$ ceph-deploy calamari connect --master '<FQDN for the Calamari admin node>' <monitor-node>
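For example, with a Calamari admin node named admin.example.com and a Monitor node named mon1 (both hypothetical names):
$ ceph-deploy calamari connect --master 'admin.example.com' mon1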
Upgrade and restart the Ceph Monitor daemon. From the Monitor node, execute:
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo restart ceph-mon id={hostname}
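The Monitor id is typically the node's short hostname. For example, on a Monitor host named mon1 (a hypothetical name):
$ sudo restart ceph-mon id=mon1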
Using an ISO
To upgrade a Monitor node using an ISO, log in to the node, remove the ceph-mon repository under /etc/apt/sources.list.d/, reinstall Ceph from the administration node, and reconnect the Monitor node to Calamari. Finally, upgrade and restart the Monitor daemon.
Only upgrade one Monitor node at a time, and allow the Monitor to come up and rejoin the quorum before proceeding to upgrade the next Monitor.
Execute on the Monitor node:
$ cd /etc/apt/sources.list.d/
$ sudo rm -rf ceph-mon.list
From the administration node, execute:
$ ceph-deploy repo ceph-mon <monitor-node>
$ ceph-deploy install --no-adjust-repos --mon <monitor-node>
Reconnect the Monitor node to Calamari. From the administration node, execute:
$ ceph-deploy calamari connect --master '<FQDN for the Calamari admin node>' <monitor-node>
Upgrade and restart the Ceph Monitor daemon. From the Monitor node, execute:
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo restart ceph-mon id={hostname}
3.1.3. OSD Nodes
To upgrade a Ceph OSD node, reinstall the OSD daemon from the administration node and reconnect the OSD node to Calamari. Finally, upgrade the OSD node and restart the OSDs.
Only upgrade one OSD node at a time, and preferably within the same CRUSH hierarchy. Allow the OSDs to come up and in, and the cluster to achieve the active+clean state, before proceeding to upgrade the next OSD node.
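You can watch the cluster state from the administration node or any Monitor node while the OSDs recover, for example:
$ sudo ceph -s
Proceed to the next OSD node only once all placement groups report active+clean.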
Before starting the upgrade of the OSD nodes, set the noout and the norebalance flags:
# ceph osd set noout
# ceph osd set norebalance
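One way to confirm that both flags are set is to check the flags line of the OSD map, for example:
# ceph osd dump | grep flags
The output should include noout,norebalance.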
Once all the OSD nodes are upgraded in the storage cluster, unset the noout and the norebalance flags:
# ceph osd unset noout
# ceph osd unset norebalance
Using the Online Repositories
Remove the existing Ceph repository on the OSD node:
$ cd /etc/apt/sources.list.d/
$ sudo rm -rf ceph-osd.list
Set the online OSD repository on the OSD node from the administration node:
$ ceph-deploy repo --repo-url 'https://customername:customerpasswd@rhcs.download.redhat.com/ubuntu/1.3-updates/OSD' --gpg-url https://www.redhat.com/security/fd431d51.txt ceph-osd <osd-node>
Reinstall Ceph on the OSD node from the administration node:
$ ceph-deploy install --no-adjust-repos --osd <osd-node>
Note: You need to specify --no-adjust-repos with ceph-deploy so that ceph-deploy does not create a ceph.list file on the OSD node.
Reconnect the OSD node to Calamari. From the administration node, execute:
$ ceph-deploy calamari connect --master '<FQDN for the Calamari admin node>' <osd-node>
Upgrade and restart the Ceph OSD daemon. From the OSD node, execute:
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo restart ceph-osd id={id}
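The OSD ids hosted on a node can be read from the data directories under /var/lib/ceph/osd/, which are named ceph-<id>. For example, if the node hosted osd.2 and osd.5 (hypothetical ids):
$ ls /var/lib/ceph/osd/
ceph-2  ceph-5
$ sudo restart ceph-osd id=2
$ sudo restart ceph-osd id=5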
Using an ISO
To upgrade an OSD node using an ISO, log in to the node, remove the ceph-osd repository under /etc/apt/sources.list.d/, reinstall Ceph from the administration node, and reconnect the OSD node to Calamari. Finally, upgrade and restart the OSD daemons.
Execute on the OSD node:
$ cd /etc/apt/sources.list.d/
$ sudo rm -rf ceph-osd.list
From the administration node, execute:
$ ceph-deploy repo ceph-osd <osd-node>
$ ceph-deploy install --no-adjust-repos --osd <osd-node>
Reconnect the OSD node to Calamari. From the administration node, execute:
$ ceph-deploy calamari connect --master '<FQDN for the Calamari admin node>' <osd-node>
Upgrade and restart the Ceph OSD daemon. From the OSD node, execute:
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo restart ceph-osd id=<id>
3.1.4. Object Gateway Nodes
To upgrade a Ceph Object Gateway node, log in to the node and remove the ceph-mon or ceph-osd repository under /etc/apt/sources.list.d/, whichever was installed for the radosgw package in Red Hat Ceph Storage 1.3.0 or 1.3.1. Then set the online Tools repository from the administration node and reinstall the Ceph Object Gateway daemon. Finally, upgrade and restart the Ceph Object Gateway.
Using the Online Repositories
Remove the existing Ceph repository on the Object Gateway node:
$ cd /etc/apt/sources.list.d/
$ sudo rm -rf ceph-mon.list
OR
$ sudo rm -rf ceph-osd.list
Note: For Red Hat Ceph Storage v1.3.0 or v1.3.1, you had to install either the ceph-mon or the ceph-osd repository for the radosgw package. Remove whichever repository was previously installed before setting the Tools repository for Red Hat Ceph Storage v1.3.3. If you are upgrading from Red Hat Ceph Storage 1.3.2, this step can be skipped.
Set the online Tools repository from the administration node:
$ ceph-deploy repo --repo-url 'https://customername:customerpasswd@rhcs.download.redhat.com/ubuntu/1.3-updates/Tools' --gpg-url https://www.redhat.com/security/fd431d51.txt Tools <rgw-node>
Reinstall Object Gateway from the administration node:
$ ceph-deploy install --no-adjust-repos --rgw <rgw-node>
For federated deployments, from the Object Gateway node, execute:
$ sudo apt-get install radosgw-agent
Upgrade and restart the Object Gateway:
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo service radosgw restart id=rgw.<short-hostname>
Note: If you modify the ceph.conf file for radosgw to run on port 80, run sudo service apache2 stop before restarting the gateway.
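As a minimal sketch of such a ceph.conf change, assuming a gateway instance named client.rgw.gateway1 (a hypothetical name) and the civetweb frontend available in this release, the relevant section might look like:
[client.rgw.gateway1]
rgw_frontends = "civetweb port=80"
Because civetweb would then bind to port 80, a running Apache instance on the same port must be stopped first, which is why the note above applies.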
Using an ISO
To upgrade a Ceph Object Gateway node using an ISO, log in to the node, remove the ceph repository under /etc/apt/sources.list.d/, and stop the Ceph Object Gateway daemon (radosgw) and the Apache/FastCGI instance. From the administration node, reinstall the Ceph Object Gateway daemon. Finally, restart the Ceph Object Gateway.
Remove the existing Ceph repository on the Ceph Object Gateway node:
$ cd /etc/apt/sources.list.d/
$ sudo rm -rf ceph.list
Stop Apache/Radosgw:
$ sudo service apache2 stop
$ sudo /etc/init.d/radosgw stop
From the administration node, execute:
$ ceph-deploy repo ceph-mon <rgw-node>
$ ceph-deploy install --no-adjust-repos --rgw <rgw-node>
Note: Both the ceph-mon and ceph-osd repositories contain the radosgw package, so you can use either of them for the Object Gateway upgrade.
For federated deployments, from the Ceph Object Gateway node, execute:
$ sudo apt-get install radosgw-agent
Finally, from the Ceph Object Gateway node, restart the gateway:
$ sudo service radosgw restart
If the node instead has the ceph-mon or ceph-osd repository that was previously installed for the radosgw package in Red Hat Ceph Storage 1.3.0 or 1.3.1, log in to the node and remove that repository under /etc/apt/sources.list.d/. From the administration node, reinstall the Ceph Object Gateway daemon. Finally, upgrade and restart the Ceph Object Gateway.
Remove the existing Ceph repository on the Ceph Object Gateway node:
$ cd /etc/apt/sources.list.d/
$ sudo rm -rf ceph-mon.list
OR
$ sudo rm -rf ceph-osd.list
Note: For Red Hat Ceph Storage v1.3.1, you had to install either the ceph-mon or the ceph-osd repository for the radosgw package. Remove whichever repository was previously installed before setting the new repository for Red Hat Ceph Storage v1.3.3.
From the administration node, execute:
$ ceph-deploy repo ceph-mon <rgw-node>
$ ceph-deploy install --no-adjust-repos --rgw <rgw-node>
Note: Both the ceph-mon and ceph-osd repositories contain the radosgw package, so you can use either of them for the gateway upgrade.
For federated deployments, from the Object Gateway node, execute:
$ sudo apt-get install radosgw-agent
Upgrade and restart the Object Gateway:
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo service radosgw restart id=rgw.<short-hostname>
Note: If you modify the ceph.conf file for radosgw to run on port 80, run sudo service apache2 stop before restarting the gateway.
3.2. Reviewing CRUSH Tunables
If you have been using Ceph for a while and you are using an older CRUSH tunables setting such as bobtail, you should investigate and set your CRUSH tunables to optimal.
Resetting your CRUSH tunables may result in significant rebalancing. See the Storage Strategies Guide, Chapter 9, Tunables for additional details on CRUSH tunables.
For example:
# ceph osd crush tunables optimal
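To review the tunables currently in effect before changing them, you can dump them first, for example:
# ceph osd crush show-tunables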
