
Chapter 3. Deploying Red Hat Ceph Storage

This chapter describes how to use the Ansible application to deploy a Red Hat Ceph Storage cluster and other components, such as Metadata Servers or the Ceph Object Gateway.

Previously, Red Hat did not provide the ceph-ansible package for Ubuntu. In Red Hat Ceph Storage version 3 and later, you can use the Ansible automation application to deploy a Ceph cluster from an Ubuntu node.

3.1. Prerequisites

3.2. Installing a Red Hat Ceph Storage Cluster

Use the Ansible application with the ceph-ansible playbook to install Red Hat Ceph Storage 3.

Production Ceph storage clusters start with a minimum of three monitor hosts and three OSD nodes containing multiple OSD daemons.


Prerequisites

  • On the Ansible administration node, install the ceph-ansible package:

    [user@admin ~]$ sudo apt-get install ceph-ansible
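    To optionally confirm that the package installed correctly, you can query dpkg; the version string in the output depends on your repositories:

    [user@admin ~]$ dpkg -s ceph-ansible | grep Version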

Procedure

Run the following commands from the Ansible administration node unless instructed otherwise.

  1. As the Ansible user, create the ceph-ansible-keys directory where Ansible stores temporary values generated by the ceph-ansible playbook.

    [user@admin ~]$ mkdir ~/ceph-ansible-keys
  2. As root, create a symbolic link to the /usr/share/ceph-ansible/group_vars directory in the /etc/ansible/ directory:

    [root@admin ~]# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars
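    Optionally, verify that the link points to the expected directory:

    [root@admin ~]# ls -l /etc/ansible/group_vars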
  3. Navigate to the /usr/share/ceph-ansible/ directory:

    [root@admin ~]$ cd /usr/share/ceph-ansible
  4. Create new copies of the yml.sample files:

    [root@admin ceph-ansible]# cp group_vars/all.yml.sample group_vars/all.yml
    [root@admin ceph-ansible]# cp group_vars/osds.yml.sample group_vars/osds.yml
    [root@admin ceph-ansible]# cp site.yml.sample site.yml
  5. Edit the copied files.

    1. Edit the group_vars/all.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

      Important

      Do not set the cluster parameter to any value other than ceph, because custom cluster names are not supported.

      Table 3.1. General Ansible Settings

      ceph_origin
        Value: repository, distro, or local
        Required: Yes
        Notes: The repository value means Ceph will be installed through a new repository. The distro value means that no separate repository file will be added, and you will get whatever version of Ceph is included with the Linux distribution. The local value means the Ceph binaries will be copied from the local machine.

      ceph_repository_type
        Value: cdn or iso
        Required: Yes

      ceph_rhcs_iso_path
        Value: The path to the ISO image
        Required: Yes, if using an ISO image

      ceph_rhcs_cdn_debian_repo
        Value: The credentials to access the online Ubuntu Ceph repositories. For example, https://username:password@rhcs.download.redhat.com.
        Required: Yes

      ceph_rhcs_cdn_debian_repo_version
        Value: Use /3-release/ for new installations; use /3-updates/ for updates.
        Required: Yes

      monitor_interface
        Value: The interface that the Monitor nodes listen to
        Required: One of monitor_interface, monitor_address, or monitor_address_block is required

      monitor_address
        Value: The address that the Monitor nodes listen to
        Required: One of monitor_interface, monitor_address, or monitor_address_block is required

      monitor_address_block
        Value: The subnet of the Ceph public network
        Required: One of monitor_interface, monitor_address, or monitor_address_block is required
        Notes: Use when the IP addresses of the nodes are unknown, but the subnet is known

      ip_version
        Value: ipv6
        Required: Yes, if using IPv6 addressing

      public_network
        Value: The IP address and netmask of the Ceph public network, or the corresponding IPv6 address if using IPv6
        Required: Yes
        Notes: See Section 2.7, “Verifying the Network Configuration for Red Hat Ceph Storage”

      cluster_network
        Value: The IP address and netmask of the Ceph cluster network
        Required: No, defaults to public_network

      An example all.yml file can look like this:

      ceph_origin: distro
      ceph_repository: rhcs
      ceph_repository_type: cdn
      ceph_rhcs_version: 3
      monitor_interface: eth0
      public_network: 192.168.0.0/24
      Note

      Be sure to set ceph_origin to distro in the all.yml file. This ensures that the installation process uses the correct download repository.

      Note

      Setting the ceph_rhcs_version option to 3 pulls in the latest version of Red Hat Ceph Storage 3.

      For additional details, see the all.yml file.

    2. Edit the group_vars/osds.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

      Important

      Install OSDs on a different physical device than the device where the operating system is installed. Sharing the same device between the operating system and OSDs causes performance issues.

      Table 3.2. OSD Ansible Settings

      osd_scenario
        Value: collocated to use the same device for write-ahead logging and key/value data (BlueStore) or journal (FileStore) and OSD data; non-collocated to use a dedicated device, such as SSD or NVMe media, to store write-ahead log and key/value data (BlueStore) or journal data (FileStore); lvm to use the Logical Volume Manager to store OSD data
        Required: Yes
        Notes: When using osd_scenario: non-collocated, ceph-ansible expects the number of entries in devices and dedicated_devices to match. For example, if you specify 10 disks in devices, you must specify 10 entries in dedicated_devices.

      osd_auto_discovery
        Value: true to automatically discover OSDs
        Required: Yes, if using osd_scenario: collocated
        Notes: Cannot be used when the devices setting is used

      devices
        Value: List of devices where Ceph data is stored
        Required: Yes, to specify the list of devices
        Notes: Cannot be used when the osd_auto_discovery setting is used. When using lvm as the osd_scenario and setting the devices option, ceph-volume lvm batch mode creates the optimized OSD configuration.

      dedicated_devices
        Value: List of dedicated devices for non-collocated OSDs where the Ceph journal is stored
        Required: Yes, if osd_scenario: non-collocated
        Notes: Should be nonpartitioned devices

      dmcrypt
        Value: true to encrypt OSDs
        Required: No
        Notes: Defaults to false

      lvm_volumes
        Value: A list of FileStore or BlueStore dictionaries
        Required: Yes, if using osd_scenario: lvm and storage devices are not defined using devices
        Notes: Each dictionary must contain the data, journal, and data_vg keys. Any logical volume or volume group must be specified by name, not by full path. The data and journal keys can be a logical volume (LV) or partition, but do not use one journal for multiple data LVs. The data_vg key must be the volume group containing the data LV. Optionally, the journal_vg key can be used to specify the volume group containing the journal LV, if applicable. See the examples below for various supported configurations.

      osds_per_device
        Value: The number of OSDs to create per device
        Required: No
        Notes: Defaults to 1

      osd_objectstore
        Value: The Ceph object store type for the OSDs
        Required: No
        Notes: Defaults to bluestore. The other option is filestore. Required for upgrades.

      The following are examples of the osds.yml file for the three OSD scenarios: collocated, non-collocated, and lvm. If osd_objectstore is not specified, the default object store format is BlueStore.

      Collocated

      osd_objectstore: filestore
      osd_scenario: collocated
      devices:
        - /dev/sda
        - /dev/sdb

      Non-collocated - BlueStore

      osd_objectstore: bluestore
      osd_scenario: non-collocated
      devices:
       - /dev/sda
       - /dev/sdb
       - /dev/sdc
       - /dev/sdd
      dedicated_devices:
       - /dev/nvme0n1
       - /dev/nvme0n1
       - /dev/nvme1n1
       - /dev/nvme1n1

      This non-collocated example will create four BlueStore OSDs, one per device. In this example, the traditional hard drives (sda, sdb, sdc, sdd) are used for object data, and the solid state drives (SSDs) (/dev/nvme0n1, /dev/nvme1n1) are used for the BlueStore databases and write-ahead logs. This configuration pairs the /dev/sda and /dev/sdb devices with the /dev/nvme0n1 device, and pairs the /dev/sdc and /dev/sdd devices with the /dev/nvme1n1 device.

      Non-collocated - FileStore

      osd_objectstore: filestore
      osd_scenario: non-collocated
      devices:
        - /dev/sda
        - /dev/sdb
        - /dev/sdc
        - /dev/sdd
      dedicated_devices:
         - /dev/nvme0n1
         - /dev/nvme0n1
         - /dev/nvme1n1
         - /dev/nvme1n1

      LVM simple

      osd_objectstore: bluestore
      osd_scenario: lvm
      devices:
        - /dev/sda
        - /dev/sdb

      or

      osd_objectstore: bluestore
      osd_scenario: lvm
      devices:
        - /dev/sda
        - /dev/sdb
        - /dev/nvme0n1

      With these simple configurations ceph-ansible uses batch mode (ceph-volume lvm batch) to create the OSDs.

      In the first scenario, if the devices are traditional hard drives or SSDs, then one OSD per device is created.

      In the second scenario, when there is a mix of traditional hard drives and SSDs, the data is placed on the traditional hard drives (sda, sdb) and the BlueStore database (block.db) is created as large as possible on the SSD (nvme0n1).
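      If you want to preview the OSD layout that batch mode would create before running the playbook, you can run ceph-volume directly on an OSD node with the --report option; the device names below are only examples:

      [root@osd ~]# ceph-volume lvm batch --report /dev/sda /dev/sdb /dev/nvme0n1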

      LVM advanced

      osd_objectstore: filestore
      osd_scenario: lvm
      lvm_volumes:
         - data: data-lv1
           data_vg: vg1
           journal: journal-lv1
           journal_vg: vg2
         - data: data-lv2
           journal: /dev/sda
           data_vg: vg1

      or

      osd_objectstore: bluestore
      osd_scenario: lvm
      lvm_volumes:
        - data: data-lv1
          data_vg: data-vg1
          db: db-lv1
          db_vg: db-vg1
          wal: wal-lv1
          wal_vg: wal-vg1
        - data: data-lv2
          data_vg: data-vg2
          db: db-lv2
          db_vg: db-vg2
          wal: wal-lv2
          wal_vg: wal-vg2

      With these advanced scenario examples, the volume groups and logical volumes must be created beforehand; ceph-ansible does not create them.
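      For example, a volume group and logical volume similar to the ones referenced above could be created with standard LVM commands on the OSD node before running the playbook; the device and volume names are only illustrative:

      [root@osd ~]# pvcreate /dev/sdb
      [root@osd ~]# vgcreate vg1 /dev/sdb
      [root@osd ~]# lvcreate -n data-lv1 -l 100%FREE vg1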

      Note

      If using all NVMe SSDs, set the osd_scenario: lvm and osds_per_device: 4 options. For more information, see Configuring OSD Ansible settings for all NVMe Storage for Red Hat Enterprise Linux or Configuring OSD Ansible settings for all NVMe Storage for Ubuntu in the Red Hat Ceph Storage Installation Guides.

      For additional details, see the comments in the osds.yml file.

  6. Edit the Ansible inventory file located by default at /etc/ansible/hosts. Remember to comment out example hosts.

    1. Add the Monitor nodes under the [mons] section:

      [mons]
      MONITOR_NODE_NAME1
      MONITOR_NODE_NAME2
      MONITOR_NODE_NAME3
    2. Add OSD nodes under the [osds] section. If the nodes have sequential naming, consider using a range:

      [osds]
      OSD_NODE_NAME1[1:10]
      Note

      For OSDs in a new installation, the default object store format is BlueStore.

      1. Optionally, use the devices and dedicated_devices options to specify devices that the OSD nodes will use. Use a comma-separated list to list multiple devices.

        Syntax

        [osds]
        CEPH_NODE_NAME devices="['DEVICE_1', 'DEVICE_2']" dedicated_devices="['DEVICE_3', 'DEVICE_4']"

        Example

        [osds]
        ceph-osd-01 devices="['/dev/sdc', '/dev/sdd']" dedicated_devices="['/dev/sda', '/dev/sdb']"
        ceph-osd-02 devices="['/dev/sdc', '/dev/sdd', '/dev/sde']" dedicated_devices="['/dev/sdf', '/dev/sdg']"

        If you do not specify any devices, set the osd_auto_discovery option to true in the osds.yml file.
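        A minimal osds.yml sketch for that case, assuming the collocated scenario, could look like this:

        osd_scenario: collocated
        osd_auto_discovery: true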

        Note

        Using the devices and dedicated_devices parameters is useful when OSDs use devices with different names or when one of the devices has failed on one of the OSDs.

  7. Optionally, for all deployments, bare-metal or in containers, you can set host-specific parameters by creating host files in the host_vars directory that include any parameters specific to particular hosts.

    1. Create a new file for each new Ceph OSD node added to the storage cluster, under the /etc/ansible/host_vars/ directory:

      Syntax

      touch /etc/ansible/host_vars/OSD_NODE_NAME

      Example

      [root@admin ~]# touch /etc/ansible/host_vars/osd07

    2. Update the file with any host-specific parameters. In bare-metal deployments, you can add the devices: and dedicated_devices: sections to the file.

      Example

      devices:
        - /dev/sdc
        - /dev/sdd
        - /dev/sde
        - /dev/sdf
      
      dedicated_devices:
        - /dev/sda
        - /dev/sdb

  8. Optionally, for all deployments, bare-metal or in containers, you can create a custom CRUSH hierarchy using ansible-playbook:

    1. Set up your Ansible inventory file. Specify where you want the OSD hosts to be in the CRUSH map’s hierarchy by using the osd_crush_location parameter. You must specify at least two CRUSH bucket types to specify the location of the OSD, and one bucket type must be host. By default, these include root, datacenter, room, row, pod, pdu, rack, chassis, and host.

      Syntax

      [osds]
      CEPH_OSD_NAME osd_crush_location="{ 'root': 'ROOT_BUCKET', 'rack': 'RACK_BUCKET', 'pod': 'POD_BUCKET', 'host': 'CEPH_HOST_NAME' }"

      Example

      [osds]
      ceph-osd-01 osd_crush_location="{ 'root': 'default', 'rack': 'rack1', 'pod': 'monpod', 'host': 'ceph-osd-01' }"

    2. Set the crush_rule_config and create_crush_tree parameters to True, and create at least one CRUSH rule if you do not want to use the default CRUSH rules. For example, if you are using HDD devices, edit the parameters as follows:

      crush_rule_config: True
      crush_rule_hdd:
          name: replicated_hdd_rule
          root: root-hdd
          type: host
          class: hdd
          default: True
      crush_rules:
        - "{{ crush_rule_hdd }}"
      create_crush_tree: True

      If you are using SSD devices, then edit the parameters as follows:

      crush_rule_config: True
      crush_rule_ssd:
          name: replicated_ssd_rule
          root: root-ssd
          type: host
          class: ssd
          default: True
      crush_rules:
        - "{{ crush_rule_ssd }}"
      create_crush_tree: True
      Note

      The default CRUSH rules fail if both ssd and hdd OSDs are not deployed, because the default rules now include the class parameter, which must be defined.

      Note

      Additionally, for this configuration to work, add the custom CRUSH hierarchy to the OSD files in the host_vars directory, as described in the earlier step.

    3. Create pools with the created crush_rules in the group_vars/clients.yml file.

      Example


      copy_admin_key: True
      user_config: True
      pool1:
        name: "pool1"
        pg_num: 128
        pgp_num: 128
        rule_name: "HDD"
        type: "replicated"
        device_class: "hdd"
      pools:
        - "{{ pool1 }}"
    4. View the tree.

      [root@mon ~]# ceph osd tree
    5. Validate the pools.

      # for i in $(rados lspools);do echo "pool: $i"; ceph osd pool get $i crush_rule;done
      
      pool: pool1
      crush_rule: HDD
  9. For all deployments, bare-metal or in containers, open the Ansible inventory file for editing, by default the /etc/ansible/hosts file, and comment out the example hosts.

    1. Add the Ceph Manager (ceph-mgr) nodes under the [mgrs] section. Colocate the Ceph Manager daemon with Monitor nodes.

      [mgrs]
      <monitor-host-name>
      <monitor-host-name>
      <monitor-host-name>
  10. As the Ansible user, ensure that Ansible can reach the Ceph hosts:

    [user@admin ~]$ ansible all -m ping
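    Each reachable host returns output similar to the following; the host name shown is an example:

    ceph-osd-01 | SUCCESS => {
        "changed": false,
        "ping": "pong"
    }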
  11. Add the following line to the /etc/ansible/ansible.cfg file:

    retry_files_save_path = ~/
  12. As root, create the /var/log/ansible/ directory and assign the appropriate permissions for the ansible user:

    [root@admin ~]# mkdir /var/log/ansible
    [root@admin ~]# chown ansible:ansible  /var/log/ansible
    [root@admin ~]# chmod 755 /var/log/ansible
    1. Edit the /usr/share/ceph-ansible/ansible.cfg file, updating the log_path value as follows:

      log_path = /var/log/ansible/ansible.log
  13. As the Ansible user, change to the /usr/share/ceph-ansible/ directory:

    [user@admin ~]$ cd /usr/share/ceph-ansible/
  14. Run the ceph-ansible playbook:

    [user@admin ceph-ansible]$ ansible-playbook site.yml
    Note

    To increase the deployment speed, use the --forks option to ansible-playbook. By default, ceph-ansible sets forks to 20. With this setting, up to twenty nodes will be installed at the same time. To install up to thirty nodes at a time, run ansible-playbook --forks 30 PLAYBOOK FILE. The resources on the admin node must be monitored to ensure they are not overused. If they are, lower the number passed to --forks.

  15. Using the root account on a Monitor node, verify the status of the Ceph cluster:

    [root@monitor ~]# ceph health
    HEALTH_OK
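    For a more detailed summary that includes the monitor quorum, OSD count, and placement group states, you can also run:

    [root@monitor ~]# ceph -s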
  16. Verify the cluster is functioning using rados.

    1. From a monitor node, create a test pool with eight placement groups:

      Syntax

      [root@monitor ~]# ceph osd pool create <pool-name> <pg-number>

      Example

      [root@monitor ~]# ceph osd pool create test 8
    2. Create a file called hello-world.txt and add the text "Hello World!" to it:

      Syntax

      [root@monitor ~]# vim <file-name>

      Example

      [root@monitor ~]# vim hello-world.txt
    3. Upload hello-world.txt to the test pool using the object name hello-world:

      Syntax

      [root@monitor ~]# rados --pool <pool-name> put <object-name> <object-file>

      Example

      [root@monitor ~]# rados --pool test put hello-world hello-world.txt
    4. Download hello-world from the test pool as file name fetch.txt:

      Syntax

      [root@monitor ~]# rados --pool <pool-name> get <object-name> <object-file>

      Example

      [root@monitor ~]# rados --pool test get hello-world fetch.txt
    5. Check the contents of fetch.txt:

      [root@monitor ~]# cat fetch.txt

      The output should be:

      "Hello World!"
      Note

      In addition to verifying the cluster status, you can use the ceph-medic utility to diagnose the overall health of the Ceph Storage Cluster. See the Using ceph-medic to diagnose a Ceph Storage Cluster chapter in the Red Hat Ceph Storage 3 Administration Guide.

3.3. Configuring OSD Ansible settings for all NVMe storage

To optimize performance when using only non-volatile memory express (NVMe) devices for storage, configure four OSDs on each NVMe device. Normally only one OSD is configured per device, which underutilizes the throughput of an NVMe device.

Note

If you mix SSDs and HDDs, then SSDs will be used for either journals or block.db, not OSDs.

Note

In testing, configuring four OSDs on each NVMe device was found to provide optimal performance. It is recommended to set osds_per_device: 4, but it is not required. Other values may provide better performance in your environment.

Prerequisites

  • Satisfying all software and hardware requirements for a Ceph cluster.

Procedure

  1. Set osd_scenario: lvm and osds_per_device: 4 in group_vars/osds.yml:

    osd_scenario: lvm
    osds_per_device: 4
  2. List the NVMe devices under devices:

    devices:
      - /dev/nvme0n1
      - /dev/nvme1n1
      - /dev/nvme2n1
      - /dev/nvme3n1
  3. The settings in group_vars/osds.yml will look similar to this example:

    osd_scenario: lvm
    osds_per_device: 4
    devices:
      - /dev/nvme0n1
      - /dev/nvme1n1
      - /dev/nvme2n1
      - /dev/nvme3n1
Note

You must use devices with this configuration, not lvm_volumes. This is because lvm_volumes is generally used with pre-created logical volumes and osds_per_device implies automatic logical volume creation by Ceph.
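After updating group_vars/osds.yml, apply the configuration by running the ceph-ansible playbook again; as described in Section 3.8, one option is to limit the run to the OSD nodes:

[user@admin ceph-ansible]$ ansible-playbook site.yml --limit osds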

3.4. Installing Metadata Servers

Use the Ansible automation application to install a Ceph Metadata Server (MDS). Metadata Server daemons are necessary for deploying a Ceph File System.

Prerequisites

  • A working Red Hat Ceph Storage cluster.

Procedure

Perform the following steps on the Ansible administration node.

  1. Add a new section [mdss] to the /etc/ansible/hosts file:

    [mdss]
    hostname
    hostname
    hostname

    Replace hostname with the host names of the nodes where you want to install the Ceph Metadata Servers.

  2. Navigate to the /usr/share/ceph-ansible directory:

    [root@admin ~]# cd /usr/share/ceph-ansible
  3. Optional. Change the default variables.

    1. Create a copy of the group_vars/mdss.yml.sample file named mdss.yml:

      [root@admin ceph-ansible]# cp group_vars/mdss.yml.sample group_vars/mdss.yml
    2. Optionally, edit parameters in mdss.yml. See mdss.yml for details.
  4. As the Ansible user, run the Ansible playbook:

    [user@admin ceph-ansible]$ ansible-playbook site.yml --limit mdss
  5. After installing Metadata Servers, configure them. For details, see the Configuring Metadata Server Daemons chapter in the Ceph File System Guide for Red Hat Ceph Storage 3.
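To optionally confirm that the Metadata Server daemons were deployed, check the MDS status from a Monitor node; until a Ceph File System is created, the daemons are typically reported as standby:

[root@monitor ~]# ceph mds stat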

Additional Resources

3.5. Installing the Ceph Client Role

The ceph-ansible utility provides the ceph-client role that copies the Ceph configuration file and the administration keyring to nodes. In addition, you can use this role to create custom pools and clients.

Prerequisites

Procedure

Perform the following tasks on the Ansible administration node.

  1. Add a new section [clients] to the /etc/ansible/hosts file:

    [clients]
    <client-hostname>

    Replace <client-hostname> with the host name of the node where you want to install the ceph-client role.

  2. Navigate to the /usr/share/ceph-ansible directory:

    [root@admin ~]# cd /usr/share/ceph-ansible
  3. Create a new copy of the clients.yml.sample file named clients.yml:

    [root@admin ceph-ansible ~]# cp group_vars/clients.yml.sample group_vars/clients.yml
  4. Open the group_vars/clients.yml file, and uncomment the following lines:

    keys:
      - { name: client.test, caps: { mon: "allow r", osd: "allow class-read object_prefix rbd_children, allow rwx pool=test" },  mode: "{{ ceph_keyring_permissions }}" }
    1. Replace client.test with the real client name, and add the client key to the client definition line, for example:

      key: "ADD-KEYRING-HERE=="

      Now the whole line example would look similar to this:

      - { name: client.test, key: "AQAin8tUMICVFBAALRHNrV0Z4MXupRw4v9JQ6Q==", caps: { mon: "allow r", osd: "allow class-read object_prefix rbd_children, allow rwx pool=test" },  mode: "{{ ceph_keyring_permissions }}" }
      Note

      The ceph-authtool --gen-print-key command can generate a new client key.
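      For example, running the command prints a new random key that you can paste into the key field; this assumes the ceph-common package is installed on the node, and the key shown is only illustrative:

      [root@admin ~]# ceph-authtool --gen-print-key
      AQAin8tUMICVFBAALRHNrV0Z4MXupRw4v9JQ6Q==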

  5. Optionally, instruct ceph-client to create pools and clients.

    1. Update clients.yml.

      • Uncomment the user_config setting and set it to true.
      • Uncomment the pools and keys sections and update them as required. You can define custom pools and client names together with the cephx capabilities.
    2. Add the osd_pool_default_pg_num setting to the ceph_conf_overrides section in the all.yml file:

      ceph_conf_overrides:
         global:
            osd_pool_default_pg_num: <number>

      Replace <number> with the default number of placement groups.

  6. Run the Ansible playbook:

    [user@admin ceph-ansible]$ ansible-playbook site.yml --limit clients

Additional Resources

3.6. Installing the Ceph Object Gateway

The Ceph Object Gateway, also known as the RADOS Gateway, is an object storage interface built on top of the librados API to provide applications with a RESTful gateway to Ceph storage clusters.

Prerequisites

Procedure

Perform the following tasks on the Ansible administration node.

  1. Add gateway hosts to the /etc/ansible/hosts file under the [rgws] section to identify their roles to Ansible. If the hosts have sequential naming, use a range, for example:

    [rgws]
    <rgw_host_name_1>
    <rgw_host_name_2>
    <rgw_host_name[3..10]>
  2. Navigate to the Ansible configuration directory:

    [root@ansible ~]# cd /usr/share/ceph-ansible
  3. Create the rgws.yml file from the sample file:

    [root@ansible ~]# cp group_vars/rgws.yml.sample group_vars/rgws.yml
  4. Open and edit the group_vars/rgws.yml file. To copy the administrator key to the Ceph Object Gateway node, uncomment the copy_admin_key option:

    copy_admin_key: true
  5. The rgws.yml file can specify a port other than the default port of 7480. For example:

    ceph_rgw_civetweb_port: 80
  6. The all.yml file MUST specify a radosgw_interface. For example:

    radosgw_interface: eth0

    Specifying the interface prevents Civetweb from binding to the same IP address as another Civetweb instance when running multiple instances on the same host.

  7. Generally, to change default settings, uncomment the settings in the rgws.yml file and make changes accordingly. To change settings that are not in the rgws.yml file, use ceph_conf_overrides: in the all.yml file. For example, set rgw_dns_name: to the host name of the DNS server, and ensure that the cluster’s DNS server is configured for wildcards to enable S3 subdomains.

    ceph_conf_overrides:
       client.rgw.rgw1:
          rgw_dns_name: <host_name>
          rgw_override_bucket_index_max_shards: 16
          rgw_bucket_default_quota_max_objects: 1638400

    For advanced configuration details, see the Red Hat Ceph Storage 3 Ceph Object Gateway for Production guide.

  8. Uncomment the radosgw_interface parameter in the group_vars/all.yml file.

    radosgw_interface: <interface>

    Replace:

    • <interface> with the interface that the Ceph Object Gateway nodes listen to

    For additional details, see the all.yml file.

  9. Run the Ansible playbook:

    [user@admin ceph-ansible]$ ansible-playbook site.yml --limit rgws
Note

Ansible ensures that each Ceph Object Gateway is running.
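To optionally confirm that a gateway instance is responding, send an anonymous request to it from any host; the host name is a placeholder, and the port must match your configuration (7480 unless you changed ceph_rgw_civetweb_port). An S3 ListAllMyBucketsResult XML response indicates the gateway is up:

[user@admin ~]$ curl http://<rgw_host_name>:7480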

For a single site configuration, add Ceph Object Gateways to the Ansible configuration.

For multi-site deployments, you should have an Ansible configuration for each zone. That is, Ansible will create a Ceph storage cluster and gateway instances for that zone.

After installation for a multi-site cluster is complete, proceed to the Multi-site chapter in the Object Gateway Guide for Ubuntu for details on configuring a cluster for multi-site.

Additional Resources

3.6.1. Configuring a multisite Ceph Object Gateway

Ansible configures the realm and zonegroup, along with the master and secondary zones, for a Ceph Object Gateway in a multisite environment.

Prerequisites

  • Two running Red Hat Ceph Storage clusters.
  • On the Ceph Object Gateway node, perform the tasks listed in the Requirements for Installing Red Hat Ceph Storage found in the Red Hat Ceph Storage Installation Guide.
  • Install and configure one Ceph Object Gateway per storage cluster.

Procedure

  1. Do the following steps on the Ansible node for the primary storage cluster:

    1. Generate the system keys and capture their output in the multi-site-keys.txt file:

      [root@ansible ~]# echo system_access_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1) > multi-site-keys.txt
      [root@ansible ~]# echo system_secret_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 40 | head -n 1) >> multi-site-keys.txt
    2. Navigate to the Ansible configuration directory, /usr/share/ceph-ansible:

      [root@ansible ~]# cd /usr/share/ceph-ansible
    3. Open and edit the group_vars/all.yml file. Enable multisite support by adding the following options, along with updating the $ZONE_NAME, $ZONE_GROUP_NAME, $REALM_NAME, $ACCESS_KEY, and $SECRET_KEY values accordingly.

      When more than one Ceph Object Gateway is in the master zone, then the rgw_multisite_endpoints option needs to be set. The value for the rgw_multisite_endpoints option is a comma separated list, with no spaces.

      Example

      rgw_multisite: true
      rgw_zone: $ZONE_NAME
      rgw_zonemaster: true
      rgw_zonesecondary: false
      rgw_multisite_endpoint_addr: "{{ ansible_fqdn }}"
      rgw_multisite_endpoints: http://foo.example.com:8080,http://bar.example.com:8080,http://baz.example.com:8080
      rgw_zonegroup: $ZONE_GROUP_NAME
      rgw_zone_user: zone.user
      rgw_realm: $REALM_NAME
      system_access_key: $ACCESS_KEY
      system_secret_key: $SECRET_KEY

      Note

      The ansible_fqdn domain name must be resolvable from the secondary storage cluster.

      Note

      When adding a new Object Gateway, append it to the end of the rgw_multisite_endpoints list with the endpoint URL of the new Object Gateway before running the Ansible playbook.

    4. Run the Ansible playbook:

      [user@ansible ceph-ansible]$ ansible-playbook site.yml --limit rgws
    5. Restart the Ceph Object Gateway daemon:

      [root@rgw ~]# systemctl restart ceph-radosgw@rgw.`hostname -s`
  2. Do the following steps on the Ansible node for the secondary storage cluster:

    1. Navigate to the Ansible configuration directory, /usr/share/ceph-ansible:

      [root@ansible ~]# cd /usr/share/ceph-ansible
    2. Open and edit the group_vars/all.yml file. Enable multisite support by adding the following options, along with updating the $ZONE_NAME, $ZONE_GROUP_NAME, $REALM_NAME, $ACCESS_KEY, and $SECRET_KEY values accordingly. The rgw_zone_user, system_access_key, and system_secret_key values must be the same as those used in the master zone configuration. The rgw_pullhost option must be the Ceph Object Gateway node for the master zone.

      When more than one Ceph Object Gateway is in the secondary zone, then the rgw_multisite_endpoints option needs to be set. The value for the rgw_multisite_endpoints option is a comma separated list, with no spaces.

      Example

      rgw_multisite: true
      rgw_zone: $ZONE_NAME
      rgw_zonemaster: false
      rgw_zonesecondary: true
      rgw_multisite_endpoint_addr: "{{ ansible_fqdn }}"
      rgw_multisite_endpoints: http://foo.example.com:8080,http://bar.example.com:8080,http://baz.example.com:8080
      rgw_zonegroup: $ZONE_GROUP_NAME
      rgw_zone_user: zone.user
      rgw_realm: $REALM_NAME
      system_access_key: $ACCESS_KEY
      system_secret_key: $SECRET_KEY
      rgw_pull_proto: http
      rgw_pull_port: 8080
      rgw_pullhost: $MASTER_RGW_NODE_NAME

      Note

      The ansible_fqdn domain name must be resolvable from the primary storage cluster.

      Note

      When adding a new Object Gateway, append it to the end of the rgw_multisite_endpoints list with the endpoint URL of the new Object Gateway before running the Ansible playbook.

    3. Run the Ansible playbook:

      [user@ansible ceph-ansible]$ ansible-playbook site.yml --limit rgws
    4. Restart the Ceph Object Gateway daemon:

      [root@rgw ~]# systemctl restart ceph-radosgw@rgw.`hostname -s`
  3. After running the Ansible playbook on the master and secondary storage clusters, you will have a running active-active Ceph Object Gateway configuration.
  4. Verify the multisite Ceph Object Gateway configuration:

    1. The Ceph Monitor and Object Gateway nodes at each site, primary and secondary, must be able to curl the other site.
    2. Run the radosgw-admin sync status command on both sites.
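For example, from a Monitor node in the primary storage cluster you might check connectivity to a secondary zone gateway, and then review replication from a gateway node; the host name and port are placeholders:

[root@mon ~]# curl http://<secondary_rgw_host_name>:8080
[root@rgw ~]# radosgw-admin sync status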

3.7. Installing the NFS-Ganesha Gateway

The Ceph NFS Ganesha Gateway is an NFS interface built on top of the Ceph Object Gateway. It provides applications with a POSIX filesystem interface to the Ceph Object Gateway for migrating files within filesystems to Ceph Object Storage.

Prerequisites

  • A running Ceph storage cluster, preferably in the active + clean state.
  • At least one node running a Ceph Object Gateway.
  • Perform the Before You Start procedure.

Procedure

Perform the following tasks on the Ansible administration node.

  1. Create the nfss file from the sample file:

    [root@ansible ~]# cd /usr/share/ceph-ansible/group_vars
    [root@ansible ~]# cp nfss.yml.sample nfss.yml
  2. Add gateway hosts to the /etc/ansible/hosts file under an [nfss] group to identify their group membership to Ansible. If the hosts have sequential naming, use a range. For example:

    [nfss]
    <nfs_host_name_1>
    <nfs_host_name_2>
    <nfs_host_name[3..10]>
  3. Navigate to the Ansible configuration directory, /usr/share/ceph-ansible/:

    [root@ansible ~]# cd /usr/share/ceph-ansible
  4. To copy the administrator key to the Ceph Object Gateway node, uncomment the copy_admin_key setting in the /usr/share/ceph-ansible/group_vars/nfss.yml file:

    copy_admin_key: true
  5. Configure the FSAL (File System Abstraction Layer) sections of the /usr/share/ceph-ansible/group_vars/nfss.yml file. Provide an ID, S3 user ID, S3 access key and secret. For NFSv4, it should look something like this:

    ###################
    # FSAL RGW Config #
    ###################
    #ceph_nfs_rgw_export_id: <replace-w-numeric-export-id>
    #ceph_nfs_rgw_pseudo_path: "/"
    #ceph_nfs_rgw_protocols: "3,4"
    #ceph_nfs_rgw_access_type: "RW"
    #ceph_nfs_rgw_user: "cephnfs"
    # Note: keys are optional and can be generated, but not on containerized deployments,
    # where they must be configured.
    #ceph_nfs_rgw_access_key: "<replace-w-access-key>"
    #ceph_nfs_rgw_secret_key: "<replace-w-secret-key>"
    Warning

    Access and secret keys are optional, and can be generated.

  6. Run the Ansible playbook:

    [user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit nfss
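To optionally verify the export from an NFS client, mount it over NFSv4; the gateway host name and mount point are placeholders, and the nfs-common package must be installed on the client:

[root@client ~]# mkdir -p /mnt/nfs-ganesha
[root@client ~]# mount -t nfs -o nfsvers=4.1,proto=tcp <nfs_host_name>:/ /mnt/nfs-ganesha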

Additional Resources

3.8. Understanding the limit option

This section contains information about the Ansible --limit option.

Ansible supports the --limit option that enables you to use the site, site-docker, and rolling_upgrade Ansible playbooks for a particular section of the inventory file.

$ ansible-playbook site.yml|rolling_upgrade.yml|site-docker.yml --limit osds|rgws|clients|mdss|nfss|iscsigws

For example, to redeploy only OSDs on bare metal, run the following command as the Ansible user:

$ ansible-playbook /usr/share/ceph-ansible/site.yml --limit osds
Important

If you colocate Ceph components on one node, Ansible applies a playbook to all components on the node, even though only one component type was specified with the limit option. For example, if you run the rolling_update playbook with the --limit osds option on a node that contains OSDs and Metadata Servers (MDS), Ansible will upgrade both components, OSDs and MDSs.

3.9. Additional Resources