Chapter 5. Upgrading Ceph Storage Cluster

There are two main upgrading paths:

  • from Red Hat Ceph Storage 1.3 to 2
  • between minor versions of Red Hat Ceph Storage 2 or between asynchronous updates

5.1. Upgrading from Red Hat Ceph Storage 1.3 to 2

Important

Contact Red Hat support before upgrading if you have a large Ceph Object Gateway storage cluster with millions of objects in its buckets.

For more details, see the Red Hat Ceph Storage 2.5 Release Notes, under the Slow OSD startup after upgrading to Red Hat Ceph Storage 2.5 heading.

You can upgrade the Ceph Storage Cluster in a rolling fashion and while the cluster is running. Upgrade each node in the cluster sequentially, only proceeding to the next node after the previous node is done.
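Before moving to the next node, it helps to confirm that the cluster has recovered. A minimal sketch using the standard ceph CLI:

```shell
# Check overall cluster health before proceeding to the next node.
ceph health detail

# Optionally, watch cluster log messages in real time while daemons restart.
ceph -w
```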

Red Hat recommends upgrading the Ceph components in the following order:

  • Monitor nodes
  • OSD nodes
  • Ceph Object Gateway nodes
  • All other Ceph client nodes

Two methods are available to upgrade Red Hat Ceph Storage 1.3.2 to 2.0:

  • Using Red Hat’s Content Delivery Network (CDN)
  • Using a Red Hat provided ISO image file

After upgrading the storage cluster, it might report a health warning stating that the CRUSH map uses legacy tunables. See the Red Hat Ceph Storage Strategies Guide for more information.

Example

$ ceph -s
    cluster 848135d7-cdb9-4084-8df2-fb5e41ae60bd
     health HEALTH_WARN
            crush map has legacy tunables (require bobtail, min is firefly)
     monmap e1: 1 mons at {ceph1=192.168.0.121:6789/0}
            election epoch 2, quorum 0 ceph1
     osdmap e83: 2 osds: 2 up, 2 in
      pgmap v1864: 64 pgs, 1 pools, 38192 kB data, 17 objects
            10376 MB used, 10083 MB / 20460 MB avail
                  64 active+clean
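One way to clear this warning is to raise the CRUSH tunables profile. Note that changing tunables can trigger substantial data movement, so review the Strategies Guide and plan a maintenance window first; a minimal sketch:

```shell
# Move the cluster to the recommended tunables profile for this release.
# WARNING: this can trigger significant rebalancing I/O.
ceph osd crush tunables optimal
```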

Important

Red Hat recommends that all Ceph clients run the same version as the Ceph storage cluster.

5.1.1. Upgrading a Ceph Monitor Node

Red Hat recommends a minimum of three Monitors for a production storage cluster, and the number of Monitors must be odd. With at least three Monitors, the storage cluster keeps quorum while one Monitor is being upgraded.
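You can confirm that quorum holds while a Monitor is down, for example:

```shell
# Show which monitors are currently in quorum.
ceph quorum_status --format json-pretty

# A shorter summary of monitor status and quorum membership.
ceph mon stat
```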

Upgrading Red Hat Ceph Storage from version 1.3.2 to version 2 on Ubuntu involves two main tasks: first upgrade the Red Hat Ceph Storage packages, and then upgrade the operating system from Ubuntu 14.04 Trusty to Ubuntu 16.04 Xenial. Perform the following steps on each Monitor node in the storage cluster, upgrading one Monitor node at a time.

Important

Red Hat does not support running Red Hat Ceph Storage 2 clusters on Ubuntu 14.04 Trusty in a production environment. This is only a transitional step to get to Red Hat Ceph Storage 2 on Ubuntu 16.04 Xenial, which is the supported platform. Red Hat recommends having a full system backup before proceeding with these upgrade procedures.

  1. As root, disable any Red Hat Ceph Storage 1.3.x repositories:

    If the following lines exist in the /etc/apt/sources.list or in the /etc/apt/sources.list.d/ceph.list files, then comment out the online repositories for Red Hat Ceph Storage 1.3 by adding a # to the beginning of the line.

    deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/1.3-updates/Installer
    deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/1.3-updates/Calamari
    deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/1.3-updates/Tools

    Also, check for the following files in /etc/apt/sources.list.d/:

    Calamari.list
    ceph-mon.list
    ceph-osd.list
    Installer.list
    Tools.list

    Note

    Remove any reference to Red Hat Ceph Storage 1.3.x in the APT source file(s). If an ISO-based installation was performed for Red Hat Ceph Storage 1.3.x, then skip this first step.

  2. Enable the Red Hat Ceph Storage 2 Monitor repository. For ISO-based installations, see the ISO installation section.
  3. As root, stop the Monitor process:

    Syntax

    $ sudo stop ceph-mon id=<monitor_host_name>

    Example

    $ sudo stop ceph-mon id=node1

  4. As root, update the ceph-mon package:

    $ sudo apt-get update
    $ sudo apt-get dist-upgrade
    $ sudo apt-get install ceph-mon

    1. Verify the latest Red Hat version is installed:

      $ dpkg -s ceph-base | grep Version
      Version: 10.2.2-19redhat1trusty
  5. As root, update the owner and group permissions:

    Syntax

    # chown -R <owner>:<group> <path_to_directory>

    Example

    # chown -R ceph:ceph /var/lib/ceph/mon
    # chown -R ceph:ceph /var/log/ceph
    # chown -R ceph:ceph /var/run/ceph
    # chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
    # chown ceph:ceph /etc/ceph/ceph.conf
    # chown ceph:ceph /etc/ceph/rbdmap

    Note

    If the Ceph Monitor node is co-located with an OpenStack Controller node, then the Glance and Cinder keyring files must be owned by glance and cinder respectively. For example:

    # ls -l /etc/ceph/
    ...
    -rw-------.  1 glance glance      64 <date> ceph.client.glance.keyring
    -rw-------.  1 cinder cinder      64 <date> ceph.client.cinder.keyring
    ...
  6. Remove packages that are no longer needed:

    $ sudo apt-get purge ceph ceph-osd

    Note

    The ceph package is now a meta-package. Only the ceph-mon package is needed on the Monitor nodes, only the ceph-osd package is needed on the OSD nodes, and only the ceph-radosgw package is needed on the RADOS Gateway nodes.

  7. As root, replay device events from the kernel:

    # udevadm trigger
  8. Upgrade from Ubuntu 14.04 Trusty to Ubuntu 16.04 Xenial.

    1. Configure update-manager for the Red Hat Ceph Storage packages:

      1. Create a new file

        $ sudo touch /etc/update-manager/release-upgrades.d/rhcs.cfg
      2. Add the following lines to the new file

        [Sources]
        AllowThirdParty=yes
    2. Start the Ubuntu upgrade:

      $ sudo do-release-upgrade -d
    3. Follow the on-screen instructions.
    4. Verify the Ceph package versions:

      $ dpkg -s ceph-base | grep Version
      Version: 10.2.2-19redhat1xenial
    5. As root, enable the ceph-mon process:

      $ sudo systemctl enable ceph-mon.target
      $ sudo systemctl enable ceph-mon@<monitor_host_name>
    6. As root, reboot the Monitor node:

      # shutdown -r now
    7. Once the Monitor node is up, check the health of the Ceph storage cluster before moving to the next Monitor node:

      # ceph -s

To add more Red Hat Ceph Storage Monitors to the storage cluster, see the Red Hat Ceph Storage Administration Guide.

5.1.2. Upgrading a Ceph OSD Node

Upgrading Red Hat Ceph Storage from version 1.3.2 to version 2 on Ubuntu involves two main tasks: first upgrade the Red Hat Ceph Storage packages, and then upgrade the operating system from Ubuntu 14.04 Trusty to Ubuntu 16.04 Xenial. Perform the following steps on each OSD node in the storage cluster, upgrading one OSD node at a time.

Important

Red Hat does not support running Red Hat Ceph Storage 2 clusters on Ubuntu 14.04 Trusty in a production environment. This is only a transitional step to get to Red Hat Ceph Storage 2 on Ubuntu 16.04 Xenial, which is the supported platform. Red Hat recommends having a full system backup before proceeding with these upgrade procedures.

During the upgrade of an OSD node, some placement groups will become degraded because the OSD might be down or restarting. You need to tell the storage cluster not to mark an OSD out, so that the restart does not trigger a recovery. By default, the storage cluster marks an OSD out of the CRUSH map after it has been down for five minutes.
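The five-minute default comes from the mon_osd_down_out_interval setting, in seconds. As a sketch, you can inspect the running value on a Monitor node through the admin socket (assuming the monitor ID matches the short hostname):

```shell
# Query the running monitor for its down-out interval (300 s = 5 min).
# Replace $(hostname -s) with the monitor ID if it differs.
ceph daemon mon.$(hostname -s) config get mon_osd_down_out_interval
```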

On a Monitor node, set noout and norebalance flags for the OSDs:

# ceph osd set noout
# ceph osd set norebalance

Perform the following steps on each OSD node in the storage cluster, sequentially upgrading one OSD node at a time. If an ISO-based installation was performed for Red Hat Ceph Storage 1.3, then skip the first step.

  1. As root, disable the Red Hat Ceph Storage 1.3 repositories:

    If the following lines exist in the /etc/apt/sources.list or in the /etc/apt/sources.list.d/ceph.list files, then comment out the online repositories for Red Hat Ceph Storage 1.3 by adding a # to the beginning of the line.

    deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/1.3-updates/Installer
    deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/1.3-updates/Calamari
    deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/1.3-updates/Tools

    Also, check for the following files in /etc/apt/sources.list.d/:

    Calamari.list
    ceph-mon.list
    ceph-osd.list
    Installer.list
    Tools.list

    Note

    Remove any reference to Red Hat Ceph Storage 1.3.x in the APT source file(s). If an ISO-based installation was performed for Red Hat Ceph Storage 1.3.x, then skip this first step.

  2. Enable the Red Hat Ceph Storage 2 OSD repository. For ISO-based installations, see the ISO installation section.
  3. As root, stop any running OSD process:

    Syntax

    $ sudo stop ceph-osd id=<osd_id>

    Example

    $ sudo stop ceph-osd id=0

  4. As root, update the ceph-osd package:

    $ sudo apt-get update
    $ sudo apt-get dist-upgrade
    $ sudo apt-get install ceph-osd

    1. Verify the latest Red Hat version is installed:

      $ dpkg -s ceph-base | grep Version
      Version: 10.2.2-19redhat1trusty
  5. As root, update the owner and group permissions on the newly created directory and files:

    Syntax

    # chown -R <owner>:<group> <path_to_directory>

    Example

    # chown -R ceph:ceph /var/lib/ceph/osd
    # chown -R ceph:ceph /var/log/ceph
    # chown -R ceph:ceph /var/run/ceph
    # chown -R ceph:ceph /etc/ceph

    Note

    Running the following find command might speed up the process of changing ownership by running the chown command in parallel on a Ceph storage cluster with a large number of disks:

    # find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | xargs -P12 -n1 chown -R ceph:ceph
  6. Remove packages that are no longer needed:

    $ sudo apt-get purge ceph ceph-mon

    Note

    The ceph package is now a meta-package. Only the ceph-mon package is needed on the Monitor nodes, only the ceph-osd package is needed on the OSD nodes, and only the ceph-radosgw package is needed on the RADOS Gateway nodes.

  7. As root, replay device events from the kernel:

    # udevadm trigger
  8. Upgrade from Ubuntu 14.04 Trusty to Ubuntu 16.04 Xenial.

    1. Configure update-manager for the Red Hat Ceph Storage packages:

      1. Create a new file

        $ sudo touch /etc/update-manager/release-upgrades.d/rhcs.cfg
      2. Add the following lines to the new file

        [Sources]
        AllowThirdParty=yes
    2. Start the Ubuntu upgrade:

      $ sudo do-release-upgrade -d
    3. Follow the on-screen instructions.
    4. Verify the Ceph package versions:

      $ dpkg -s ceph-base | grep Version
      Version: 10.2.2-19redhat1xenial
    5. As root, enable the ceph-osd process:

      $ sudo systemctl enable ceph-osd.target
      $ sudo systemctl enable ceph-osd@<osd_id>
    6. As root, reboot the OSD node:

      # shutdown -r now
  9. Move to the next OSD node.

    Note

    While the noout and norebalance flags are set, the storage cluster will have the HEALTH_WARN status:

    $ ceph health
    HEALTH_WARN noout,norebalance flag(s) set

Once you are done upgrading the Ceph storage cluster, unset the previously set OSD flags and verify the storage cluster status.

On a Monitor node, and after all OSD nodes have been upgraded, unset the noout and norebalance flags:

# ceph osd unset noout
# ceph osd unset norebalance

In addition, set the require_jewel_osds flag. This flag ensures that no more OSDs running Red Hat Ceph Storage 1.3 can be added to the storage cluster. If you do not set this flag, the storage cluster status will be HEALTH_WARN.

# ceph osd set require_jewel_osds
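You can verify that the flag is recorded in the OSD map, for example:

```shell
# The flags line of the OSD map should now include require_jewel_osds.
ceph osd dump | grep ^flags
```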

To expand the storage capacity by adding new OSDs to the storage cluster, see the Red Hat Ceph Storage Administration Guide for more details.

5.1.3. Upgrading the Ceph Object Gateway Nodes

This section describes steps to upgrade a Ceph Object Gateway node to a later version.

Important

Red Hat does not support running Red Hat Ceph Storage 2 clusters on Ubuntu 14.04 in a production environment. Therefore, upgrading Red Hat Ceph Storage from 1.3 to 2 includes:

  • Upgrading the Red Hat Ceph Storage packages from 1.3 to 2
  • Upgrading the Ubuntu operating system from 14.04 to 16.04

Perform these steps on each Ceph Object Gateway node in use, sequentially upgrading one node at a time.

Red Hat recommends backing up the system before proceeding with these upgrade procedures.

Before You Start
  • Red Hat recommends putting a Ceph Object Gateway behind a load balancer, such as HAProxy. If you use a load balancer, remove the Ceph Object Gateway node from the load balancer and wait until it is serving no requests before you upgrade it.
  • If you use a custom name for the region pool, specified in the rgw_region_root_pool parameter, add the rgw_zonegroup_root_pool parameter to the [global] section of the Ceph configuration file. Set the value of rgw_zonegroup_root_pool to be the same as rgw_region_root_pool, for example:

    [global]
    rgw_zonegroup_root_pool = .us.rgw.root
Procedure: Upgrading the Ceph Object Gateway Node
  1. If you used online repositories to install Red Hat Ceph Storage, disable the 1.3 repositories.

    1. Comment out the following lines in the /etc/apt/sources.list and /etc/apt/sources.list.d/ceph.list files.

      # deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/1.3-updates/Installer
      # deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/1.3-updates/Calamari
      # deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/1.3-updates/Tools
    2. Remove the following files from the /etc/apt/sources.list.d/ directory.

      # rm /etc/apt/sources.list.d/Calamari.list
      # rm /etc/apt/sources.list.d/ceph-mon.list
      # rm /etc/apt/sources.list.d/ceph-osd.list
      # rm /etc/apt/sources.list.d/Installer.list
      # rm /etc/apt/sources.list.d/Tools.list
  2. Enable the Red Hat Ceph Storage 2 Tools repository. For ISO-based installations, see the ISO Installation section.
  3. Stop the Ceph Object Gateway process (ceph-radosgw):

    $ sudo stop radosgw id=rgw.<hostname>

    Replace <hostname> with the name of the Ceph Object Gateway host, for example gateway-node.

    $ sudo stop radosgw id=rgw.gateway-node
  4. Update the ceph-radosgw package:

    $ sudo apt-get update
    $ sudo apt-get dist-upgrade
    $ sudo apt-get install radosgw
  5. Change the owner and group permissions on the newly created /var/lib/ceph/radosgw/ and /var/log/ceph/ directories and their content to ceph.

    # chown -R ceph:ceph /var/lib/ceph/radosgw
    # chown -R ceph:ceph /var/log/ceph
  6. Remove packages that are no longer needed.

    $ sudo apt-get purge ceph

    Note

    The ceph package is now a meta-package. Only the ceph-mon, ceph-osd, and ceph-radosgw packages are required on the Monitor, OSD, and Ceph Object Gateway nodes respectively.

  7. Upgrade from Ubuntu 14.04 to 16.04.

    1. Configure the update-manager utility for the Red Hat Ceph Storage packages:

      1. Create a new rhcs.cfg file in the /etc/update-manager/release-upgrades.d/ directory.

        $ sudo touch /etc/update-manager/release-upgrades.d/rhcs.cfg
      2. Add the following lines to the file.

        [Sources]
        AllowThirdParty=yes
    2. Start the upgrading process and follow the instructions on the screen.

      $ sudo do-release-upgrade -d
    3. Verify the Ceph package versions:

      $ dpkg -s ceph-base | grep Version
      Version: 10.2.2-19redhat1xenial
  8. Enable the ceph-radosgw process:

    $ sudo systemctl enable ceph-radosgw.target
    $ sudo systemctl enable ceph-radosgw@rgw.<hostname>

    Replace <hostname> with the name of the Ceph Object Gateway host, for example gateway-node.

    $ sudo systemctl enable ceph-radosgw.target
    $ sudo systemctl enable ceph-radosgw@rgw.gateway-node
  9. Reboot the Ceph Object Gateway node:

    # shutdown -r now
  10. If you use a load balancer, add the Ceph Object Gateway node back to the load balancer.
  11. Repeat these steps on the next Ceph Object Gateway node.

5.1.4. Upgrading a Ceph Client Node

Ceph clients can be the RADOS Gateway, RADOS block devices, the Ceph command-line interface (CLI), Nova compute nodes, qemu-kvm, or any custom application using the Ceph client-side libraries. Red Hat recommends that all Ceph clients run the same version as the Ceph storage cluster.

Important

Red Hat recommends stopping all I/O against a Ceph client node while the packages are being upgraded. Not stopping all I/O might cause unexpected errors.

  1. As root, disable any Red Hat Ceph Storage 1.3 repositories:

    If the following lines exist in the /etc/apt/sources.list or in the /etc/apt/sources.list.d/ceph.list files, then comment out the online repositories for Red Hat Ceph Storage 1.3 by adding a # to the beginning of the line.

    deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/1.3-updates/Installer
    deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/1.3-updates/Calamari
    deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/1.3-updates/Tools

    Also, check for the following files in /etc/apt/sources.list.d/:

    Calamari.list
    ceph-mon.list
    ceph-osd.list
    Installer.list
    Tools.list

    Note

    Remove any reference to Red Hat Ceph Storage 1.3.x in the APT source file(s).

  2. On the client node, enable the Tools repository.
  3. On the client node, update the ceph-common package:

    $ sudo apt-get install ceph-common

Any application that depends on the Ceph client-side libraries must be restarted after upgrading the Ceph client package.

Note

For Nova compute nodes with running qemu-kvm instances, or when using a dedicated qemu-kvm client, stop and start the qemu-kvm instance processes. A simple restart will not work here.
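The reason a plain restart is not enough is that a running qemu-kvm process keeps the old librbd shared library mapped until the process itself exits; on Linux, such replaced files show up as "(deleted)" in the process memory map. As an illustrative check (the process name qemu-system-x86_64 is an assumption and may differ on your system):

```shell
# List any replaced (upgraded-underneath) librbd mappings still held
# by a running guest; a non-empty result means the guest must be
# stopped and started, not merely restarted.
sudo grep 'librbd.*(deleted)' /proc/$(pidof -s qemu-system-x86_64)/maps
```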

5.2. Upgrading Between Minor Versions and Applying Asynchronous Updates

Important

Contact Red Hat support before upgrading if you have a large Ceph Object Gateway storage cluster with millions of objects in its buckets.

For more details, see the Red Hat Ceph Storage 2.5 Release Notes, under the Slow OSD startup after upgrading to Red Hat Ceph Storage 2.5 heading.

In Red Hat Ceph Storage version 2.5 and later, ceph-ansible is available on Ubuntu nodes and can be used to upgrade your cluster.

Use the Ansible rolling_update.yml playbook located in the infrastructure-playbooks directory from the administration node to upgrade between two minor versions of Red Hat Ceph Storage 2 or to apply asynchronous updates.

Currently, this is the only supported way to upgrade to a minor version. If your cluster was not deployed by using Ansible, see Taking Over an Existing Cluster for details on configuring Ansible to use a cluster that was deployed without it.

Note

The administration node must use Red Hat Enterprise Linux because the ceph-ansible package is not supported on Ubuntu. See the Installing Red Hat Ceph Storage using Ansible chapter in the Red Hat Ceph Storage 2 Installation Guide for Red Hat Enterprise Linux.

Ansible upgrades the Ceph nodes in the following order:

  • Monitor nodes
  • OSD nodes
  • MDS nodes
  • Ceph Object Gateway nodes
  • All other Ceph client nodes

Note

Upgrading encrypted OSD nodes is the same as upgrading OSD nodes that are not encrypted.

Before you Start

  • On the Ansible Administration node, enable the Red Hat Ceph Storage 2 Tools repository:

    $ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/2-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
    $ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
    $ sudo apt-get update
  • On the Ansible Administration node, ensure the latest version of ceph-ansible is installed:

    $ sudo apt-get install ceph-ansible
  • In the rolling_update.yml playbook, verify the health_osd_check_retries and health_osd_check_delay values, and tune them if needed. With the default values, Ansible checks the cluster health every 30 seconds and retries up to 40 times, waiting up to 20 minutes for each OSD node before continuing the upgrade process. The default values are:

    health_osd_check_retries: 40
    health_osd_check_delay: 30
  • If the Ceph nodes are not connected to the Red Hat Content Delivery Network (CDN) and you used an ISO image to install Red Hat Ceph Storage, update the local repository with the latest version of Red Hat Ceph Storage. See Section 2.2, “Enabling the Red Hat Ceph Storage Repositories” for details.
  • If the Ansible administration node has been changed from Red Hat Enterprise Linux to Ubuntu, copy all the old variables in the group_vars/all.yml file to the new Ansible node.
  • If you upgrade from Red Hat Ceph Storage 2.1 to 2.2, review Section 5.2.1, “Changes Between Ansible 2.1 and 2.2” first. Ansible 2.2 uses slightly different file names and settings.
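As a side note on the health_osd_check_* values above, the per-node wait budget is simply retries × delay. A quick sanity check for any tuned values:

```shell
# Maximum wait per OSD node = retries * delay seconds (defaults shown).
retries=40
delay=30
echo "$(( retries * delay / 60 )) minutes"   # prints "20 minutes"
```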

Procedure: Updating the Ceph Storage Cluster by using Ansible

  1. On the Ansible administration node, edit the /etc/ansible/hosts file with custom osd_scenarios if your cluster has any.
  2. On the Ansible administration node, navigate to the /usr/share/ceph-ansible/ directory:

    # cd /usr/share/ceph-ansible
  3. In the group_vars/all.yml file, uncomment the upgrade_ceph_packages option and set it to True:

    upgrade_ceph_packages: True
  4. In the group_vars/all.yml file, set generate_fsid to false.
  5. Get the current cluster fsid by executing ceph fsid. Set the retrieved fsid in group_vars/all.yml.
  6. If the cluster you want to upgrade contains any Ceph Object Gateway nodes, add the radosgw_interface parameter to the group_vars/all.yml file.

    radosgw_interface: <interface>

    Replace:

    • <interface> with the interface that the Ceph Object Gateway nodes listen to
  7. For RHCS 2.5 and later versions, uncomment and set ceph_rhcs_cdn_debian_repo and ceph_rhcs_cdn_debian_repo_version in the group_vars/all.yml file so that Ansible can automatically enable and access the Ubuntu online repositories:

    ceph_rhcs_cdn_debian_repo: <repo-path>
    ceph_rhcs_cdn_debian_repo_version: <repo-version>

    Example

    ceph_rhcs_cdn_debian_repo: https://<login>:<pwd>@rhcs.download.redhat.com
    ceph_rhcs_cdn_debian_repo_version: /2-release/

    Where <login> is the RHN user login and <pwd> is the RHN user’s password.

  8. Run the rolling_update.yml playbook:

    # cp infrastructure-playbooks/rolling_update.yml .
    $ ansible-playbook rolling_update.yml

    When upgrading from version 2.4 to 2.5, run the playbook using the following command:

    $ ansible-playbook rolling_update.yml -e jewel_minor_update=true

    Note that the jewel_minor_update=true option skips the mgrs tasks.
  9. From the RBD mirroring daemon node, upgrade rbd-mirror manually:

    $ sudo apt-get install rbd-mirror

    Restart the daemon:

    # systemctl restart ceph-rbd-mirror@<client-id>
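After the playbook finishes, it is worth confirming that the cluster is healthy and that the daemons report the new version; for example:

```shell
# Overall cluster status after the rolling update.
ceph -s

# Ask every OSD daemon for the version it is actually running.
ceph tell osd.* version
```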
Important

The rolling_update.yml playbook includes the serial variable that adjusts the number of nodes to be updated simultaneously. Red Hat strongly recommends using the default value (1), which ensures that hosts will be upgraded one by one.

5.2.1. Changes Between Ansible 2.1 and 2.2

Red Hat Ceph Storage 2.2 includes Ansible 2.2 that introduces the following changes:

  • Files in the group_vars directory have the .yml extension. Before updating to 2.2, you must rename them. To do so:

    Navigate to the Ansible directory:

    # cd /usr/share/ceph-ansible

    Change the names of the files in group_vars:

    # mv group_vars/all group_vars/all.yml
    # mv group_vars/mons group_vars/mons.yml
    # mv group_vars/osds group_vars/osds.yml
    # mv group_vars/mdss group_vars/mdss.yml
    # mv group_vars/rgws group_vars/rgws.yml
  • Ansible 2.2 uses different variable names and handles this change automatically when updating to version 2.2. See Table 5.1, “Differences in Variable Names Between Ansible 2.1 and 2.2” for details.

    Table 5.1. Differences in Variable Names Between Ansible 2.1 and 2.2

    Ansible 2.1 variable name                Ansible 2.2 variable name
    ceph_stable_rh_storage                   ceph_rhcs
    ceph_stable_rh_storage_version           ceph_rhcs_version
    ceph_stable_rh_storage_cdn_install      ceph_rhcs_cdn_install
    ceph_stable_rh_storage_iso_install      ceph_rhcs_iso_install
    ceph_stable_rh_storage_iso_path          ceph_rhcs_iso_path
    ceph_stable_rh_storage_mount_path        ceph_rhcs_mount_path
    ceph_stable_rh_storage_repository_path   ceph_rhcs_repository_path