Chapter 3. Deploying Red Hat Ceph Storage

This chapter describes how to use the Ansible application to deploy a Red Hat Ceph Storage cluster and other components, such as Metadata Servers or the Ceph Object Gateway.

3.1. Prerequisites

3.2. Installing a Red Hat Ceph Storage Cluster

Use the Ansible application with the ceph-ansible playbook to install Red Hat Ceph Storage 3.

Production Ceph storage clusters start with a minimum of three monitor hosts and three OSD nodes containing multiple OSD daemons.

Figure: Ceph storage cluster


  • On the Ansible administration node, install the ceph-ansible package:

    [root@admin ~]# yum install ceph-ansible


Use the following commands from the Ansible administration node if not instructed otherwise.

  1. In the user’s home directory, create the ceph-ansible-keys directory where Ansible stores temporary values generated by the ceph-ansible playbook.

    [user@admin ~]$ mkdir ~/ceph-ansible-keys
  2. Create a symbolic link to the /usr/share/ceph-ansible/group_vars directory in the /etc/ansible/ directory:

    [root@admin ~]# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars
  3. Navigate to the /usr/share/ceph-ansible/ directory:

    [user@admin ~]$ cd /usr/share/ceph-ansible
  4. Create new copies of the yml.sample files:

    [root@admin ceph-ansible]# cp group_vars/all.yml.sample group_vars/all.yml
    [root@admin ceph-ansible]# cp group_vars/osds.yml.sample group_vars/osds.yml
    [root@admin ceph-ansible]# cp site.yml.sample site.yml
  5. Edit the copied files.

    1. Edit the group_vars/all.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

      Table 3.1. General Ansible Settings

      ceph_repository_type
        Value: cdn or iso
        Required: Yes

      ceph_rhcs_iso_path
        Value: The path to the ISO image
        Required: Yes, if using an ISO image

      monitor_interface
        Value: The interface that the Monitor nodes listen to
        Required: monitor_interface, monitor_address, or monitor_address_block is required

      monitor_address
        Value: The address that the Monitor nodes listen to
        Required: monitor_interface, monitor_address, or monitor_address_block is required

      monitor_address_block
        Value: The subnet of the Ceph public network
        Required: monitor_interface, monitor_address, or monitor_address_block is required
        Notes: Use when the IP addresses of the nodes are unknown, but the subnet is known

      ip_version
        Value: ipv6
        Required: Yes, if using IPv6 addressing

      public_network
        Value: The IP address and netmask of the Ceph public network
        Required: Yes
        Notes: See Section 2.7, “Verifying the Network Configuration for Red Hat Ceph Storage”

      cluster_network
        Value: The IP address and netmask of the Ceph cluster network
        Required: No, defaults to public_network

      For example, the all.yml file can look like this:

      ceph_origin: repository
      ceph_repository: rhcs
      ceph_repository_type: cdn
      ceph_rhcs_version: 3
      monitor_interface: eth0

      For additional details, see the all.yml file.

    2. Edit the group_vars/osds.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

      Table 3.2. OSD Ansible Settings

      osd_scenario
        Value: collocated to use the same device for journal and OSD data;
               non-collocated to use a dedicated device to store journal data;
               lvm to use the Logical Volume Manager to store OSD data
        Required: Yes
        Notes: When using osd_scenario: non-collocated, ceph-ansible expects the
               variables devices and dedicated_devices to match. For example, if you
               specify 10 disks in devices, you must specify 10 entries in
               dedicated_devices. Currently, Red Hat only supports dedicated journals
               when using osd_scenario: lvm, not collocated journals.

      osd_auto_discovery
        Value: true to automatically discover OSDs
        Required: Yes, if using osd_scenario: collocated
        Notes: Cannot be used when the devices setting is used

      devices
        Value: List of devices where ceph data is stored
        Required: Yes, to specify the list of devices
        Notes: Cannot be used when the osd_auto_discovery setting is used

      dedicated_devices
        Value: List of dedicated devices for non-collocated OSDs where ceph journal is stored
        Required: Yes, if osd_scenario: non-collocated
        Notes: Should be nonpartitioned devices

      dmcrypt
        Value: true to encrypt OSDs
        Required: No
        Notes: Defaults to false

      lvm_volumes
        Value: a list of dictionaries
        Required: Yes, if using osd_scenario: lvm
        Notes: Each dictionary must contain data, journal, and data_vg keys. The data
               key must be a logical volume. The journal key can be a logical volume
               (LV), device, or partition, but do not use one journal for multiple
               data LVs. The data_vg key must be the volume group containing the data
               LV. Optionally, the journal_vg key can be used to specify the volume
               group containing the journal LV, if applicable.

      The following are examples of the osds.yml file using the three osd_scenario options: collocated, non-collocated, and lvm.

      osd_scenario: collocated
      devices:
        - /dev/sda
        - /dev/sdb
      dmcrypt: true

      osd_scenario: non-collocated
      devices:
        - /dev/sda
        - /dev/sdb
        - /dev/sdc
        - /dev/sdd
      dedicated_devices:
        - /dev/nvme0n1
        - /dev/nvme0n1
        - /dev/nvme0n1
        - /dev/nvme0n1

      osd_scenario: lvm
      lvm_volumes:
        - data: data-lv1
          data_vg: vg1
          journal: journal-lv1
          journal_vg: vg2
        - data: data-lv2
          journal: /dev/sda
          data_vg: vg1

      For additional details, see the comments in the osds.yml file.


      Currently, ceph-ansible does not create the volume groups or the logical volumes. Create them before running the Ansible playbook.
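      Because ceph-ansible expects the volume groups and logical volumes referenced by lvm_volumes to already exist, they can be created up front with standard LVM commands. The device and volume names below are illustrative only and must match your hardware and your osds.yml:

      ```shell
      # Create a volume group for OSD data on an example data disk
      vgcreate vg1 /dev/sdb

      # Create the data logical volume referenced by lvm_volumes
      lvcreate -n data-lv1 -L 100G vg1

      # Optionally, create a separate volume group and logical volume
      # for the journal on an example NVMe device
      vgcreate vg2 /dev/nvme0n1
      lvcreate -n journal-lv1 -L 10G vg2
      ```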

  6. Edit the Ansible inventory file located by default at /etc/ansible/hosts. Remember to comment out example hosts.

    1. Add the Monitor nodes under the [mons] section:
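      For example, with placeholder Monitor host names:

      ```ini
      [mons]
      <monitor-host-name>
      <monitor-host-name>
      <monitor-host-name>
      ```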

    2. Add OSD nodes under the [osds] section. If the nodes have sequential naming, consider using a range:
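      For example, assuming OSD nodes with the hypothetical sequential names ceph-osd-01 through ceph-osd-03:

      ```ini
      [osds]
      ceph-osd-[01:03]
      ```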


      Optionally, use the devices parameter to specify devices that the OSD nodes will use. Use a comma-separated list to list multiple devices.

      <ceph-host-name> devices="[ '<device_1>', '<device_2>' ]"

      For example:

      ceph-osd-01 devices="[ '/dev/sdb', '/dev/sdc' ]"
      ceph-osd-02 devices="[ '/dev/sdb', '/dev/sdc', '/dev/sdd' ]"

      If you do not specify any devices, set the osd_auto_discovery option to true in the osds.yml file.


      Using the devices parameter is useful when OSDs use devices with different names or when one of the devices failed on one of the OSDs.

    3. Add the Ceph Manager (ceph-mgr) nodes under the [mgrs] section. Colocate the Ceph Manager daemon with Monitor nodes.
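      For example, listing the same placeholder host names used in the [mons] section:

      ```ini
      [mgrs]
      <monitor-host-name>
      <monitor-host-name>
      <monitor-host-name>
      ```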

  7. As the Ansible user, ensure that Ansible can reach the Ceph hosts:

    [user@admin ~]$ ansible all -m ping
  8. Add the following line to the /etc/ansible/ansible.cfg file:

    retry_files_save_path = ~/
  9. As the Ansible user, run the ceph-ansible playbook.

    [user@admin ceph-ansible]$ ansible-playbook site.yml
  10. From a Monitor node, verify the status of the Ceph cluster.

    [root@monitor ~]# ceph health
  11. Verify the cluster is functioning using rados.

    1. From a monitor node, create a test pool with eight placement groups:


      [root@monitor ~]# ceph osd pool create <pool-name> <pg-number>


      [root@monitor ~]# ceph osd pool create test 8
    2. Create a file called hello-world.txt:


      [root@monitor ~]# vim <file-name>


      [root@monitor ~]# vim hello-world.txt
    3. Upload hello-world.txt to the test pool using the object name hello-world:


      [root@monitor ~]# rados --pool <pool-name> put <object-name> <object-file>


      [root@monitor ~]# rados --pool test put hello-world hello-world.txt
    4. Download hello-world from the test pool as file name fetch.txt:


      [root@monitor ~]# rados --pool <pool-name> get <object-name> <object-file>


      [root@monitor ~]# rados --pool test get hello-world fetch.txt
    5. Check the contents of fetch.txt:

      [root@monitor ~]# cat fetch.txt

      The output should be:

      "Hello World!"

    In addition to verifying the cluster status, you can use the ceph-medic utility to diagnose the overall health of the Ceph Storage Cluster. See the Installing and Using ceph-medic to Diagnose a Ceph Storage Cluster chapter in the Red Hat Ceph Storage 3 Troubleshooting Guide.

3.3. Installing Metadata Servers

Use the Ansible automation application to install a Ceph Metadata Server (MDS). Metadata Server daemons are necessary for deploying a Ceph File System.


Perform the following steps on the Ansible administration node.

  1. Add a new section [mdss] to the /etc/ansible/hosts file:
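    For example, with placeholder host names:

    ```ini
    [mdss]
    <hostname>
    <hostname>
    <hostname>
    ```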


    Replace <hostname> with the host names of the nodes where you want to install the Ceph Metadata Servers.

  2. Navigate to the /usr/share/ceph-ansible directory:

    [root@admin ~]# cd /usr/share/ceph-ansible
  3. Create a copy of the group_vars/mdss.yml.sample file named mdss.yml:

    [root@admin ceph-ansible]# cp group_vars/mdss.yml.sample group_vars/mdss.yml
  4. Optionally, edit parameters in mdss.yml. See mdss.yml for details.
  5. Run the Ansible playbook:

    [user@admin ceph-ansible]$ ansible-playbook site.yml --limit mdss
  6. After installing Metadata Servers, configure them. For details, see the Configuring Metadata Server Daemons chapter in the Ceph File System Guide for Red Hat Ceph Storage 3.

Additional Resources

3.4. Installing the Ceph Client Role

The ceph-ansible utility provides the ceph-client role that copies the Ceph configuration file and the administration keyring to nodes. In addition, you can use this role to create custom pools and clients.



Perform the following tasks on the Ansible administration node.

  1. Add a new section [clients] to the /etc/ansible/hosts file:
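    For example, with a placeholder host name:

    ```ini
    [clients]
    <client-hostname>
    ```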


    Replace <client-hostname> with the host name of the node where you want to install the ceph-client role.

  2. Navigate to the /usr/share/ceph-ansible directory:

    [root@admin ~]# cd /usr/share/ceph-ansible
  3. Create a new copy of the clients.yml.sample file named clients.yml:

    [root@admin ceph-ansible]# cp group_vars/clients.yml.sample group_vars/clients.yml
  4. Optionally, instruct ceph-client to create pools and clients.

    1. Update clients.yml.

      • Uncomment the user_config setting and set it to true.
      • Uncomment the pools and keys sections and update them as required. You can define custom pools and client names, together with their cephx capabilities.
    2. Add the osd_pool_default_pg_num setting to the ceph_conf_overrides section in the all.yml file:

            osd_pool_default_pg_num: <number>

      Replace <number> with the default number of placement groups.
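      In the all.yml file, the setting nests under a configuration section within ceph_conf_overrides; the global section and the value 128 below are examples only:

      ```yaml
      ceph_conf_overrides:
        global:
          osd_pool_default_pg_num: 128
      ```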

  5. Run the Ansible playbook:

    [user@admin ceph-ansible]$ ansible-playbook site.yml --limit clients

Additional Resources

3.5. Installing the Ceph Object Gateway

The Ceph Object Gateway, also known as the RADOS gateway, is an object storage interface built on top of the librados API to provide applications with a RESTful gateway to Ceph storage clusters.



Perform the following tasks on the Ansible administration node.

  1. Create the rgws.yml file from the sample file:

    [root@ansible ~]# cd /etc/ansible/group_vars
    [root@ansible ~]# cp rgws.yml.sample rgws.yml
  2. Add gateway hosts to the /etc/ansible/hosts file under the [rgws] section to identify their roles to Ansible. If the hosts have sequential naming, use a range. For example:
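    For example, assuming gateway hosts with the hypothetical sequential names ceph-rgw-01 through ceph-rgw-03:

    ```ini
    [rgws]
    ceph-rgw-[01:03]
    ```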

  3. Navigate to the /usr/share/ceph-ansible directory:

    [root@ansible ~]# cd /usr/share/ceph-ansible
  4. To copy the administrator key to the Ceph Object Gateway node, uncomment the copy_admin_key setting in the /usr/share/ceph-ansible/group_vars/rgws.yml file:

    copy_admin_key: true
  5. By default, the Ceph Object Gateway listens on port 7480. To use a different port, specify it with the ceph_rgw_civetweb_port setting in the rgws.yml file. For example:

    ceph_rgw_civetweb_port: 80
  6. The all.yml file MUST specify a radosgw_interface. For example:

    radosgw_interface: eth0

    Specifying the interface prevents Civetweb from binding to the same IP address as another Civetweb instance when running multiple instances on the same host.

  7. Generally, to change default settings, uncomment the settings in the rgws.yml file, and make changes accordingly. To make additional changes to settings that are not in the rgws.yml file, use ceph_conf_overrides: in the all.yml file. For example, set rgw_dns_name: to the host name of the DNS server, and ensure that the cluster’s DNS server is configured for wildcards to enable S3 subdomains.

          rgw_dns_name: <host_name>
          rgw_override_bucket_index_max_shards: 16
          rgw_bucket_default_quota_max_objects: 1638400

    For advanced configuration details, see the Red Hat Ceph Storage 3 Ceph Object Gateway for Production guide. Advanced topics include:

  8. Uncomment the radosgw_interface parameter in the group_vars/all.yml file.

    radosgw_interface: <interface>


    Replace <interface> with the interface that the Ceph Object Gateway nodes listen to.

    For additional details, see the all.yml file.

  9. Run the Ansible playbook:

    [user@admin ceph-ansible]$ ansible-playbook site.yml --limit rgws

Ansible ensures that each Ceph Object Gateway is running.

For a single site configuration, add Ceph Object Gateways to the Ansible configuration.

For multi-site deployments, you should have an Ansible configuration for each zone. That is, Ansible will create a Ceph storage cluster and gateway instances for that zone.

After installation for a multi-site cluster is complete, proceed to the Multi-site chapter in the Object Gateway Guide for Red Hat Enterprise Linux for details on configuring a cluster for multi-site.

Additional Resources

3.6. Understanding the limit option

This section contains information about the Ansible --limit option.

Ansible supports the --limit option that enables you to use the site, site-docker, and rolling_upgrade Ansible playbooks for a particular section of the inventory file.

$ ansible-playbook site.yml|rolling_upgrade.yml|site-docker.yml --limit osds|rgws|clients|mdss|nfss

For example, to redeploy only OSDs:

$ ansible-playbook /usr/share/ceph-ansible/site.yml --limit osds

If you colocate Ceph components on one node, Ansible applies a playbook to all components on the node, even though only one component type was specified with the --limit option. For example, if you run the rolling_update playbook with the --limit osds option on a node that contains OSDs and Metadata Servers (MDS), Ansible will upgrade both components, OSDs and MDSs.

3.7. Additional Resources