Chapter 3. Deploying Red Hat Ceph Storage

This chapter describes how to use the Ansible application to deploy a Red Hat Ceph Storage cluster and other components, such as Metadata Servers or the Ceph Object Gateway.

Previously, Red Hat did not provide the ceph-ansible package for Ubuntu. In Red Hat Ceph Storage version 3 and later, you can use the Ansible automation application to deploy a Ceph cluster from an Ubuntu node.

3.1. Prerequisites

3.2. Installing a Red Hat Ceph Storage Cluster

Use the Ansible application with the ceph-ansible playbook to install Red Hat Ceph Storage 3.

Production Ceph storage clusters start with a minimum of three monitor hosts and three OSD nodes containing multiple OSD daemons.

Prerequisites

  • On the Ansible administration node, install the ceph-ansible package:

    [user@admin ~]$ sudo apt-get install ceph-ansible

Procedure

Unless instructed otherwise, run the following commands from the Ansible administration node.

  1. In the user’s home directory, create the ceph-ansible-keys directory where Ansible stores temporary values generated by the ceph-ansible playbook.

    [user@admin ~]$ mkdir ~/ceph-ansible-keys
  2. Create a symbolic link to the /usr/share/ceph-ansible/group_vars directory in the /etc/ansible/ directory:

    [root@admin ~]# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars
  3. Navigate to the /usr/share/ceph-ansible/ directory:

    [user@admin ~]$ cd /usr/share/ceph-ansible
  4. Create new copies of the yml.sample files:

    [root@admin ceph-ansible]# cp group_vars/all.yml.sample group_vars/all.yml
    [root@admin ceph-ansible]# cp group_vars/osds.yml.sample group_vars/osds.yml
    [root@admin ceph-ansible]# cp site.yml.sample site.yml
  5. Edit the copied files.

    1. Edit the group_vars/all.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

      Table 3.1. General Ansible Settings

      ceph_repository_type
        Value: cdn or iso
        Required: Yes

      ceph_rhcs_iso_path
        Value: the path to the ISO image
        Required: Yes, if using an ISO image

      ceph_rhcs_cdn_debian_repo
        Value: the credentials to access the online Ubuntu Ceph repositories, for example, https://username:password@rhcs.download.redhat.com
        Required: Yes

      monitor_interface
        Value: the interface that the Monitor nodes listen to
        Required: one of monitor_interface, monitor_address, or monitor_address_block is required

      monitor_address
        Value: the address that the Monitor nodes listen to
        Required: one of monitor_interface, monitor_address, or monitor_address_block is required

      monitor_address_block
        Value: the subnet of the Ceph public network
        Required: one of monitor_interface, monitor_address, or monitor_address_block is required
        Notes: use when the IP addresses of the nodes are unknown, but the subnet is known

      ip_version
        Value: ipv6
        Required: Yes, if using IPv6 addressing

      public_network
        Value: the IP address and netmask of the Ceph public network
        Required: Yes
        Notes: see Section 2.7, “Verifying the Network Configuration for Red Hat Ceph Storage”

      cluster_network
        Value: the IP address and netmask of the Ceph cluster network
        Required: No, defaults to public_network

      For example, the all.yml file can look like this:

      ceph_origin: repository
      ceph_repository: rhcs
      ceph_repository_type: cdn
      ceph_rhcs_version: 3
      monitor_interface: eth0
      public_network: 192.168.0.0/24

      For additional details, see the all.yml file.
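
      When installing from the CDN on an Ubuntu node, the repository credentials listed in the table above are also required. The following is a minimal sketch only; the user name and password are placeholders for your Customer Portal credentials:

      ceph_origin: repository
      ceph_repository: rhcs
      ceph_repository_type: cdn
      ceph_rhcs_version: 3
      ceph_rhcs_cdn_debian_repo: https://<username>:<password>@rhcs.download.redhat.com
      monitor_interface: eth0
      public_network: 192.168.0.0/24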

    2. Edit the group_vars/osds.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

      Table 3.2. OSD Ansible Settings

      osd_scenario
        Value: collocated to use the same device for the journal and OSD data, or non-collocated to use a dedicated device to store the journal data
        Required: Yes

      osd_auto_discovery
        Value: true to automatically discover OSDs
        Required: Yes, if osd_scenario: collocated

      devices
        Value: a list of devices
        Required: Yes, to specify the list of devices

      dedicated_devices
        Value: a list of dedicated devices for non-collocated OSDs
        Required: Yes, if osd_scenario: non-collocated

      dmcrypt
        Value: true to encrypt OSDs
        Required: No
        Notes: defaults to false

      For example, the osds.yml file can look like this:

      osd_scenario: collocated
      devices:
        - /dev/sda
        - /dev/sdb
      dmcrypt: true

      For additional details, see the osds.yml file.
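
      If you use the non-collocated scenario instead, journal data is written to the devices listed under dedicated_devices. The following is an illustrative sketch only; the device names are placeholders, and the same faster device can be repeated to back the journals of more than one OSD:

      osd_scenario: non-collocated
      devices:
        - /dev/sda
        - /dev/sdb
      dedicated_devices:
        - /dev/sdx
        - /dev/sdx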

  6. Edit the Ansible inventory file located by default at /etc/ansible/hosts. Remember to comment out example hosts.

    1. Add the Monitor nodes under the [mons] section:

      [mons]
      <monitor-host-name>
      <monitor-host-name>
      <monitor-host-name>
    2. Add OSD nodes under the [osds] section. If the nodes have sequential naming, consider using a range:

      [osds]
      <osd-host-name[1:10]>

      Optionally, use the devices parameter to specify the devices that the OSD nodes will use. To specify multiple devices, use a comma-separated list.

      [osds]
      <ceph-host-name> devices="[ '<device_1>', '<device_2>' ]"

      For example:

      [osds]
      ceph-osd-01 devices="[ '/dev/sdb', '/dev/sdc' ]"
      ceph-osd-02 devices="[ '/dev/sdb', '/dev/sdc', '/dev/sdd' ]"

      If you do not specify any devices, set the osd_auto_discovery option to true in the osds.yml file.
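
      For example, with no devices set in the inventory file, the corresponding settings in the osds.yml file would be:

      osd_scenario: collocated
      osd_auto_discovery: true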

      Note

      Using the devices parameter is useful when the OSD nodes use devices with different names, or when one of the devices has failed on one of the OSD nodes.

    3. Add the Ceph Manager (ceph-mgr) nodes under the [mgrs] section. Colocate the Ceph Manager daemon with Monitor nodes.

      [mgrs]
      <monitor-host-name>
      <monitor-host-name>
      <monitor-host-name>
  7. As the Ansible user, ensure that Ansible can reach the Ceph hosts:

    [user@admin ~]$ ansible all -m ping
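
    Each reachable host replies with a ping/pong result. The output looks similar to the following; the exact formatting depends on the Ansible version:

    <monitor-host-name> | SUCCESS => {
        "changed": false,
        "ping": "pong"
    }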
  8. Add the following line to the /etc/ansible/ansible.cfg file:

    retry_files_save_path = ~/
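
    The setting belongs in the [defaults] section of the file, for example:

    [defaults]
    retry_files_save_path = ~/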
  9. As the Ansible user, run the ceph-ansible playbook.

    [user@admin ceph-ansible]$ ansible-playbook site.yml
  10. From a Monitor node, verify the status of the Ceph cluster.

    [root@monitor ~]# ceph health
    HEALTH_OK
  11. Verify that the cluster is functioning by using the rados utility.

    1. From a monitor node, create a test pool with eight placement groups:

      Syntax

      [root@monitor ~]# ceph osd pool create <pool-name> <pg-number>

      Example

      [root@monitor ~]# ceph osd pool create test 8
    2. Create a file called hello-world.txt containing the text "Hello World!":

      Syntax

      [root@monitor ~]# vim <file-name>

      Example

      [root@monitor ~]# vim hello-world.txt
    3. Upload hello-world.txt to the test pool using the object name hello-world:

      Syntax

      [root@monitor ~]# rados --pool <pool-name> put <object-name> <object-file>

      Example

      [root@monitor ~]# rados --pool test put hello-world hello-world.txt
    4. Download hello-world from the test pool as file name fetch.txt:

      Syntax

      [root@monitor ~]# rados --pool <pool-name> get <object-name> <object-file>

      Example

      [root@monitor ~]# rados --pool test get hello-world fetch.txt
    5. Check the contents of fetch.txt:

      [root@monitor ~]# cat fetch.txt

      The output should be:

      "Hello World!"
    Note

    In addition to verifying the cluster status, you can use the ceph-medic utility to diagnose the overall health of the Ceph Storage Cluster. See the Installing and Using ceph-medic to Diagnose a Ceph Storage Cluster chapter in the Red Hat Ceph Storage 3 Troubleshooting Guide.

3.3. Installing Metadata Servers

Use the Ansible automation application to install a Ceph Metadata Server (MDS). Metadata Server daemons are necessary for deploying a Ceph File System.

Procedure

Perform the following steps on the Ansible administration node.

  1. Add a new section [mdss] to the /etc/ansible/hosts file:

    [mdss]
    <hostname>
    <hostname>
    <hostname>

    Replace <hostname> with the host names of the nodes where you want to install the Ceph Metadata Servers.

  2. Navigate to the /usr/share/ceph-ansible directory:

    [root@admin ~]# cd /usr/share/ceph-ansible
  3. Create a copy of the group_vars/mdss.yml.sample file named mdss.yml:

    [root@admin ceph-ansible]# cp group_vars/mdss.yml.sample group_vars/mdss.yml
  4. Optionally, edit parameters in mdss.yml. See mdss.yml for details.
  5. Run the Ansible playbook:

    [user@admin ceph-ansible]$ ansible-playbook site.yml --limit mdss
  6. After installing Metadata Servers, configure them. For details, see the Configuring Metadata Server Daemons chapter in the Ceph File System Guide for Red Hat Ceph Storage 3.
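
  Optionally, verify that the Metadata Server daemons are active by running the ceph mds stat command from a Monitor node; the output depends on whether a Ceph File System has already been created:

    [root@monitor ~]# ceph mds stat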

Additional Resources

3.4. Installing the Ceph Client Role

The ceph-ansible utility provides the ceph-client role that copies the Ceph configuration file and the administration keyring to nodes. In addition, you can use this role to create custom pools and clients.

Prerequisites

Procedure

Perform the following tasks on the Ansible administration node.

  1. Add a new section [clients] to the /etc/ansible/hosts file:

    [clients]
    <client-hostname>

    Replace <client-hostname> with the host name of the node where you want to install the ceph-client role.

  2. Navigate to the /usr/share/ceph-ansible directory:

    [root@admin ~]# cd /usr/share/ceph-ansible
  3. Create a new copy of the clients.yml.sample file named clients.yml:

    [root@admin ceph-ansible ~]# cp group_vars/clients.yml.sample group_vars/clients.yml
  4. Optionally, instruct ceph-client to create pools and clients.

    1. Update clients.yml.

      • Uncomment the user_config setting and set it to true.
      • Uncomment the pools and keys sections and update them as required. You can define custom pools and client names together with their cephx capabilities.
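
      The exact structure of the pools and keys entries is defined by the clients.yml.sample file shipped with your version of ceph-ansible, so treat the following only as an illustrative sketch; the pool name, client name, capabilities, and field names are placeholders that may differ in your sample file:

      user_config: true
      pools:
        - { name: test, pgs: "{{ osd_pool_default_pg_num }}" }
      keys:
        - { name: client.test, caps: { mon: "allow r", osd: "allow rwx pool=test" }, mode: "0600" }
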
    2. Add the osd_pool_default_pg_num setting to the ceph_conf_overrides section in the all.yml file:

      ceph_conf_overrides:
         global:
            osd_pool_default_pg_num: <number>

      Replace <number> with the default number of placement groups.

  5. Run the Ansible playbook:

    [user@admin ceph-ansible]$ ansible-playbook site.yml --limit clients

Additional Resources

3.5. Installing the Ceph Object Gateway

The Ceph Object Gateway, also known as the RADOS Gateway, is an object storage interface built on top of the librados API that provides applications with a RESTful gateway to Ceph storage clusters.

Prerequisites

Procedure

Perform the following tasks on the Ansible administration node.

  1. Create the rgws.yml file from the sample file:

    [root@ansible ~]# cd /etc/ansible/group_vars
    [root@ansible ~]# cp rgws.yml.sample rgws.yml
  2. Add gateway hosts to the /etc/ansible/hosts file under the [rgws] section to identify their roles to Ansible. If the hosts have sequential naming, use a range. For example:

    [rgws]
    <rgw_host_name_1>
    <rgw_host_name_2>
    <rgw_host_name[3:10]>
  3. Navigate to the /usr/share/ceph-ansible/ directory:

    [root@ansible ~]# cd /usr/share/ceph-ansible
  4. To copy the administrator key to the Ceph Object Gateway node, uncomment the copy_admin_key setting in the /usr/share/ceph-ansible/group_vars/rgws.yml file:

    copy_admin_key: true
  5. By default, the Ceph Object Gateway listens on port 7480. To use a different port, set the ceph_rgw_civetweb_port option in the rgws.yml file. For example, to use port 80:

    ceph_rgw_civetweb_port: 80
  6. Generally, to change default settings, uncomment the settings in the rgws.yml file and make changes accordingly. To make additional changes to settings that aren’t in the rgws.yml file, use ceph_conf_overrides: in the all.yml file. For example, set rgw_dns_name: to the host name of the DNS server, and ensure that the cluster’s DNS server is configured for wildcards to enable S3 subdomains.

    ceph_conf_overrides:
       client.rgw.rgw1:
          rgw_dns_name: <host_name>
          rgw_override_bucket_index_max_shards: 16
          rgw_bucket_default_quota_max_objects: 1638400

    For advanced configuration details, see the Red Hat Ceph Storage 3 Ceph Object Gateway for Production guide. Advanced topics include:

  7. Uncomment the radosgw_interface parameter in the group_vars/all.yml file.

    radosgw_interface: <interface>

    Replace <interface> with the interface that the Ceph Object Gateway nodes listen to.

    For additional details, see the all.yml file.

  8. Run the Ansible playbook:

    [user@admin ceph-ansible]$ ansible-playbook site.yml --limit rgws
Note

Ansible ensures that each Ceph Object Gateway is running.

For a single-site configuration, add the Ceph Object Gateways to the Ansible configuration.

For multi-site deployments, create an Ansible configuration for each zone. That is, Ansible creates a Ceph storage cluster and gateway instances for each zone.

After installation for a multi-site cluster is complete, proceed to the Multi-site chapter in the Object Gateway Guide for Ubuntu for details on configuring a cluster for multi-site.

Additional Resources

3.6. Understanding the limit option

This section contains information about the Ansible --limit option.

Ansible supports the --limit option, which enables you to run the site, site-docker, and rolling_upgrade Ansible playbooks against a particular section of the inventory file.

$ ansible-playbook site.yml|rolling_upgrade.yml|site-docker.yml --limit osds|rgws|clients|mdss|nfss

For example, to redeploy only OSDs:

$ ansible-playbook /usr/share/ceph-ansible/site.yml --limit osds
Important

If you colocate Ceph components on one node, Ansible applies the playbook to all components on that node, even though only one component type was specified with the --limit option. For example, if you run the rolling_update playbook with the --limit osds option on a node that contains both OSDs and Metadata Servers (MDS), Ansible upgrades both components, the OSDs and the MDSs.

3.7. Additional Resources