Chapter 1. Deploying Red Hat Ceph Storage in Containers

This chapter describes how to use the Ansible application with the ceph-ansible playbook to deploy Red Hat Ceph Storage 3 in containers.

1.1. Prerequisites

1.1.1. Registering Red Hat Ceph Storage Nodes to the CDN and Attaching Subscriptions

Register each Red Hat Ceph Storage (RHCS) node to the Content Delivery Network (CDN) and attach the appropriate subscription so that the node has access to software repositories. Each RHCS node must be able to access the full Red Hat Enterprise Linux 7 base content and the extras repository content.

Prerequisites
  • A valid Red Hat subscription
  • RHCS nodes must be able to connect to the Internet.
  • For RHCS nodes that cannot access the internet during installation, you must first follow these steps on a system with internet access:

    1. Start a local Docker registry:

      # docker run -d -p 5000:5000 --restart=always --name registry registry:2
    2. Pull the Red Hat Ceph Storage 3.x image from the Red Hat Customer Portal:

      # docker pull registry.access.redhat.com/rhceph/rhceph-3-rhel7
    3. Tag the image:

       # docker tag registry.access.redhat.com/rhceph/rhceph-3-rhel7 <local-host-fqdn>:5000/cephimageinlocalreg

      Replace <local-host-fqdn> with your local host FQDN.

    4. Push the image to the local Docker registry you started:

      # docker push <local-host-fqdn>:5000/cephimageinlocalreg

      Replace <local-host-fqdn> with your local host FQDN.
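
    After pushing the image, you can optionally confirm that the local registry is serving it by querying the Docker Registry v2 API. This check is not part of the official procedure. Replace <local-host-fqdn> with your local host FQDN:

      # curl http://<local-host-fqdn>:5000/v2/_catalog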

Procedure

Perform the following steps on all nodes in the storage cluster as the root user.

  1. Register the node. When prompted, enter your Red Hat Customer Portal credentials:

    # subscription-manager register
  2. Pull the latest subscription data from the CDN:

    # subscription-manager refresh
  3. List all available subscriptions for Red Hat Ceph Storage:

    # subscription-manager list --available --all --matches="*Ceph*"

    Identify the appropriate subscription and retrieve its Pool ID.

  4. Attach the subscription:

    # subscription-manager attach --pool=$POOL_ID
    Replace
    • $POOL_ID with the Pool ID identified in the previous step.
  5. Disable the default software repositories. Then, enable the Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux 7 Server Extras repositories:

    # subscription-manager repos --disable=*
    # subscription-manager repos --enable=rhel-7-server-rpms
    # subscription-manager repos --enable=rhel-7-server-extras-rpms
  6. Update the system to receive the latest packages:

    # yum update
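
Optionally, verify that only the Red Hat Enterprise Linux 7 Server and Extras repositories are enabled. This check is not part of the official procedure:

    # yum repolist enabled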

1.1.2. Creating an Ansible user with sudo access

Ansible must be able to log into all the Red Hat Ceph Storage (RHCS) nodes as a user that has root privileges to install software and create configuration files without prompting for a password. You must create an Ansible user with password-less root access on all nodes in the storage cluster when deploying and configuring a Red Hat Ceph Storage cluster with Ansible.

Prerequisite

  • Having root or sudo access to all nodes in the storage cluster.

Procedure

  1. Log in to a Ceph node as the root user:

    ssh root@$HOST_NAME
    Replace
    • $HOST_NAME with the host name of the Ceph node.

    Example

    # ssh root@mon01

    Enter the root password when prompted.

  2. Create a new Ansible user:

    adduser $USER_NAME
    Replace
    • $USER_NAME with the new user name for the Ansible user.

    Example

    # adduser admin

    Important

    Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks.

  3. Set a new password for this user:

    # passwd $USER_NAME
    Replace
    • $USER_NAME with the new user name for the Ansible user.

    Example

    # passwd admin

    Enter the new password twice when prompted.

  4. Configure sudo access for the newly created user:

    cat << EOF >/etc/sudoers.d/$USER_NAME
    $USER_NAME ALL = (root) NOPASSWD:ALL
    EOF
    Replace
    • $USER_NAME with the new user name for the Ansible user.

    Example

    # cat << EOF >/etc/sudoers.d/admin
    admin ALL = (root) NOPASSWD:ALL
    EOF

  5. Assign the correct file permissions to the new file:

    chmod 0440 /etc/sudoers.d/$USER_NAME
    Replace
    • $USER_NAME with the new user name for the Ansible user.

    Example

    # chmod 0440 /etc/sudoers.d/admin
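
As an optional check that is not part of the official procedure, verify that the new user has password-less sudo access. The admin user name is the example used above:

    # su - admin
    $ sudo whoami
    root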

Additional Resources

  • The Adding a New User section in the System Administrator’s Guide for Red Hat Enterprise Linux 7.

1.1.3. Enabling Password-less SSH for Ansible

Generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password.

Prerequisites

  • The Ansible user created in Section 1.1.2, Creating an Ansible user with sudo access, exists on all nodes in the storage cluster.

Procedure

Perform the following steps from the Ansible administration node as the Ansible user.

  1. Generate the SSH key pair, accept the default file name and leave the passphrase empty:

    [user@admin ~]$ ssh-keygen
  2. Copy the public key to all nodes in the storage cluster:

    ssh-copy-id $USER_NAME@$HOST_NAME
    Replace
    • $USER_NAME with the new user name for the Ansible user.
    • $HOST_NAME with the host name of the Ceph node.

    Example

    [user@admin ~]$ ssh-copy-id ceph-admin@ceph-mon01

  3. Create and edit the ~/.ssh/config file.

    Important

    By creating and editing the ~/.ssh/config file you do not have to specify the -u $USER_NAME option each time you execute the ansible-playbook command.

    1. Create the SSH config file:

      [user@admin ~]$ touch ~/.ssh/config
    2. Open the config file for editing. Set the Hostname and User options for each node in the storage cluster:

      Host node1
         Hostname $HOST_NAME
         User $USER_NAME
      Host node2
         Hostname $HOST_NAME
         User $USER_NAME
      ...
      Replace
      • $HOST_NAME with the host name of the Ceph node.
      • $USER_NAME with the new user name for the Ansible user.

      Example

      Host node1
         Hostname monitor
         User admin
      Host node2
         Hostname osd
         User admin
      Host node3
         Hostname gateway
         User admin

  4. Set the correct file permissions for the ~/.ssh/config file:

    [admin@admin ~]$ chmod 600 ~/.ssh/config
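
As an optional check that is not part of the official procedure, confirm that password-less SSH and the ~/.ssh/config file work by connecting to one of the configured hosts. The node1 alias is the example used above; the command must complete without prompting for a password:

    [user@admin ~]$ ssh node1 hostname
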
Additional Resources
  • The ssh_config(5) manual page
  • The OpenSSH chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7

1.1.4. Configuring a firewall for Red Hat Ceph Storage

Red Hat Ceph Storage (RHCS) uses the firewalld service.

The Monitor daemons use port 6789 for communication within the Ceph storage cluster.

On each Ceph OSD node, the OSD daemons use several ports in the range 6800-7300:

  • One for communicating with clients and monitors over the public network
  • One for sending data to other OSDs over a cluster network, if available; otherwise, over the public network
  • One for exchanging heartbeat packets over a cluster network, if available; otherwise, over the public network

The Ceph Manager (ceph-mgr) daemons use ports in the range 6800-7300. Consider colocating the ceph-mgr daemons with the Ceph Monitors on the same nodes.

The Ceph Metadata Server nodes (ceph-mds) use port 6800.

The Ceph Object Gateway nodes use port 7480 by default. However, you can change the default port, for example to port 80.

To use the SSL/TLS service, open port 443.

Prerequisite

  • Network hardware is connected.

Procedure

  1. On all RHCS nodes, start the firewalld service. Enable it to run on boot, and ensure that it is running:

    # systemctl enable firewalld
    # systemctl start firewalld
    # systemctl status firewalld
  2. On all Monitor nodes, open port 6789 on the public network:

    [root@monitor ~]# firewall-cmd --zone=public --add-port=6789/tcp
    [root@monitor ~]# firewall-cmd --zone=public --add-port=6789/tcp --permanent

    To limit access based on the source address:

    firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
    source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
    port="6789" accept"
    firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
    source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
    port="6789" accept" --permanent
    Replace
    • $IP_ADDR with the network address of the Monitor node.
    • $NETMASK_PREFIX with the netmask in CIDR notation.

    Example

    [root@monitor ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
    source address="192.168.0.11/24" port protocol="tcp" \
    port="6789" accept"

    [root@monitor ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
    source address="192.168.0.11/24" port protocol="tcp" \
    port="6789" accept" --permanent
  3. On all OSD nodes, open ports 6800-7300 on the public network:

    [root@osd ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp
    [root@osd ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent

    If you have a separate cluster network, repeat the commands with the appropriate zone.
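
    For example, assuming the cluster network interface is assigned to the firewalld internal zone (substitute the zone that you actually use):

    [root@osd ~]# firewall-cmd --zone=internal --add-port=6800-7300/tcp
    [root@osd ~]# firewall-cmd --zone=internal --add-port=6800-7300/tcp --permanent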

  4. On all Ceph Manager (ceph-mgr) nodes (usually the same nodes as Monitor ones), open ports 6800-7300 on the public network:

    [root@monitor ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp
    [root@monitor ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent

    If you have a separate cluster network, repeat the commands with the appropriate zone.

  5. On all Ceph Metadata Server (ceph-mds) nodes, open port 6800 on the public network:

    [root@monitor ~]# firewall-cmd --zone=public --add-port=6800/tcp
    [root@monitor ~]# firewall-cmd --zone=public --add-port=6800/tcp --permanent

    If you have a separate cluster network, repeat the commands with the appropriate zone.

  6. On all Ceph Object Gateway nodes, open the relevant port or ports on the public network.

    1. To open the default port 7480:

      [root@gateway ~]# firewall-cmd --zone=public --add-port=7480/tcp
      [root@gateway ~]# firewall-cmd --zone=public --add-port=7480/tcp --permanent

      To limit access based on the source address:

      firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
      port="7480" accept"
      firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
      port="7480" accept" --permanent
      Replace
      • $IP_ADDR with the network address of the object gateway node.
      • $NETMASK_PREFIX with the netmask in CIDR notation.

      Example

      [root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="192.168.0.31/24" port protocol="tcp" \
      port="7480" accept"

      [root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="192.168.0.31/24" port protocol="tcp" \
      port="7480" accept" --permanent
    2. Optional. If you changed the default Ceph Object Gateway port, for example, to port 80, open this port:

      [root@gateway ~]# firewall-cmd --zone=public --add-port=80/tcp
      [root@gateway ~]# firewall-cmd --zone=public --add-port=80/tcp --permanent

      To limit access based on the source address, run the following commands:

      firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
      port="80" accept"
      firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
      port="80" accept" --permanent
      Replace
      • $IP_ADDR with the network address of the object gateway node.
      • $NETMASK_PREFIX with the netmask in CIDR notation.

      Example

      [root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="192.168.0.31/24" port protocol="tcp" \
      port="80" accept"

      [root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="192.168.0.31/24" port protocol="tcp" \
      port="80" accept" --permanent
    3. Optional. To use SSL/TLS, open port 443:

      [root@gateway ~]# firewall-cmd --zone=public --add-port=443/tcp
      [root@gateway ~]# firewall-cmd --zone=public --add-port=443/tcp --permanent

      To limit access based on the source address, run the following commands:

      firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
      port="443" accept"
      firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
      port="443" accept" --permanent
      Replace
      • $IP_ADDR with the network address of the object gateway node.
      • $NETMASK_PREFIX with the netmask in CIDR notation.

      Example

      [root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="192.168.0.31/24" port protocol="tcp" \
      port="443" accept"
      [root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="192.168.0.31/24" port protocol="tcp" \
      port="443" accept" --permanent


1.2. Installing a Red Hat Ceph Storage Cluster in Containers

Use the Ansible application with the ceph-ansible playbook to install Red Hat Ceph Storage 3 in containers.

A Ceph cluster used in production usually consists of ten or more nodes. To deploy Red Hat Ceph Storage as a container image, Red Hat recommends using a Ceph cluster that consists of at least three OSD nodes and three Monitor nodes.

Important

Ceph can run with one monitor; however, to ensure high availability in a production cluster, Red Hat will only support deployments with at least three monitor nodes.

Prerequisites

  • On the Ansible administration node, enable the Red Hat Ceph Storage 3 Tools repository and Ansible repository:

    [root@admin ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms --enable=rhel-7-server-ansible-2.4-rpms
  • On the Ansible administration node, install the ceph-ansible package:

    [root@admin ~]# yum install ceph-ansible

Procedure

Use the following commands from the Ansible administration node unless instructed otherwise.

  1. In the user’s home directory, create the ceph-ansible-keys directory where Ansible stores temporary values generated by the ceph-ansible playbook.

    [user@admin ~]$ mkdir ~/ceph-ansible-keys
  2. Create a symbolic link to the /usr/share/ceph-ansible/group_vars directory in the /etc/ansible/ directory:

    [root@admin ~]# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars
  3. Navigate to the /usr/share/ceph-ansible/ directory:

    [user@admin ~]$ cd /usr/share/ceph-ansible
  4. Create new copies of the yml.sample files:

    [root@admin ceph-ansible]# cp group_vars/all.yml.sample group_vars/all.yml
    [root@admin ceph-ansible]# cp group_vars/osds.yml.sample group_vars/osds.yml
    [root@admin ceph-ansible]# cp site-docker.yml.sample site-docker.yml
  5. Edit the copied files.

    1. Edit the group_vars/all.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

      Option: monitor_interface
      Value: The interface that the Monitor nodes listen to
      Required: One of monitor_interface, monitor_address, or monitor_address_block is required

      Option: monitor_address
      Value: The address that the Monitor nodes listen to
      Required: One of monitor_interface, monitor_address, or monitor_address_block is required

      Option: monitor_address_block
      Value: The subnet of the Ceph public network
      Required: One of monitor_interface, monitor_address, or monitor_address_block is required
      Notes: Use when the IP addresses of the nodes are unknown, but the subnet is known

      Option: ip_version
      Value: ipv6
      Required: Yes, if using IPv6 addressing

      Option: journal_size
      Value: The required size of the journal in MB
      Required: No

      Option: public_network
      Value: The IP address and netmask of the Ceph public network
      Required: Yes
      Notes: See the Verifying the Network Configuration for Red Hat Ceph Storage section in the Installation Guide for Red Hat Enterprise Linux

      Option: cluster_network
      Value: The IP address and netmask of the Ceph cluster network
      Required: No

      Option: ceph_docker_image
      Value: rhceph/rhceph-3-rhel7, or cephimageinlocalreg if using a local Docker registry
      Required: Yes

      Option: containerized_deployment
      Value: true
      Required: Yes

      Option: ceph_docker_registry
      Value: registry.access.redhat.com, or <local-host-fqdn> if using a local Docker registry
      Required: Yes

      An example of the all.yml file can look like:

      monitor_interface: eth0
      journal_size: 5120
      public_network: 192.168.0.0/24
      ceph_docker_image: rhceph/rhceph-3-rhel7
      containerized_deployment: true
      ceph_docker_registry: registry.access.redhat.com

      For additional details, see the all.yml file.
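
      If you deploy nodes that cannot access the internet and use the local Docker registry from Section 1.1.1, the image-related entries might instead look like the following sketch. Whether the ceph_docker_registry value must include the :5000 port depends on how you tagged and pushed the image, so adjust it to match your registry:

      ceph_docker_image: cephimageinlocalreg
      ceph_docker_registry: <local-host-fqdn>:5000
      containerized_deployment: true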

    2. Edit the group_vars/osds.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

      Table 1.1. OSD Ansible Settings

      Option: osd_scenario
      Value: collocated to use the same device for journal and OSD data; non-collocated to use a dedicated device to store journal data; lvm to use the Logical Volume Manager to store OSD data
      Required: Yes
      Notes: When using osd_scenario: non-collocated, ceph-ansible expects the variables devices and dedicated_devices to match. For example, if you specify 10 disks in devices, you must specify 10 entries in dedicated_devices. Currently, Red Hat only supports dedicated journals when using osd_scenario: lvm, not collocated journals.

      Option: osd_auto_discovery
      Value: true to automatically discover OSDs
      Required: Yes, if using osd_scenario: collocated
      Notes: Cannot be used when the devices setting is used

      Option: devices
      Value: List of devices where Ceph data is stored
      Required: Yes, to specify the list of devices
      Notes: Cannot be used when the osd_auto_discovery setting is used

      Option: dedicated_devices
      Value: List of dedicated devices for non-collocated OSDs where the Ceph journal is stored
      Required: Yes, if osd_scenario: non-collocated
      Notes: Should be nonpartitioned devices

      Option: dmcrypt
      Value: true to encrypt OSDs
      Required: No
      Notes: Defaults to false

      Option: lvm_volumes
      Value: A list of dictionaries
      Required: Yes, if using osd_scenario: lvm
      Notes: Each dictionary must contain the data, journal, and data_vg keys. The data key must be a logical volume. The journal key can be a logical volume (LV), device, or partition, but do not use one journal for multiple data LVs. The data_vg key must be the volume group containing the data LV. Optionally, the journal_vg key can be used to specify the volume group containing the journal LV, if applicable.

      The following are examples of the osds.yml file using the three osd_scenario values: collocated, non-collocated, and lvm.

      osd_scenario: collocated
      devices:
        - /dev/sda
        - /dev/sdb
        - /dev/sdc
        - /dev/sdd

      osd_scenario: non-collocated
      devices:
        - /dev/sda
        - /dev/sdb
        - /dev/sdc
        - /dev/sdd
      dedicated_devices:
         - /dev/nvme0n1
         - /dev/nvme0n1
         - /dev/nvme0n1
         - /dev/nvme0n1

      osd_scenario: lvm
      lvm_volumes:
         - data: data-lv1
           data_vg: vg1
           journal: journal-lv1
           journal_vg: vg2
         - data: data-lv2
           journal: /dev/sda
           data_vg: vg1

      For additional details, see the comments in the osds.yml file.

      Note

      Currently, ceph-ansible does not create the volume groups or the logical volumes. This must be done before running the Ansible playbook.
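
      For example, a minimal sketch of creating a volume group and logical volumes that match the lvm example above, assuming /dev/sdb and /dev/sdc are unused devices; the device names and sizes are placeholders:

      # vgcreate vg1 /dev/sdb
      # lvcreate -n data-lv1 -L 50G vg1
      # vgcreate vg2 /dev/sdc
      # lvcreate -n journal-lv1 -L 5G vg2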

  6. Edit the Ansible inventory file located by default at /etc/ansible/hosts. Remember to comment out example hosts.

    1. Add the Monitor nodes under the [mons] section:

      [mons]
      <monitor-host-name>
      <monitor-host-name>
      <monitor-host-name>
    2. Add OSD nodes under the [osds] section. If the nodes have sequential naming, consider using a range:

      [osds]
      <osd-host-name[1:10]>

      Alternatively, you can colocate Monitors with the OSD daemons on one node by adding the same node under the [mons] and [osds] sections. See Chapter 2, Colocation of Containerized Ceph Daemons for details.

    3. Add the Ceph Manager (ceph-mgr) nodes under the [mgrs] section. Colocate the Ceph Manager daemon with Monitor nodes.

      [mgrs]
      <monitor-host-name>
      <monitor-host-name>
      <monitor-host-name>
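
      For reference, a complete inventory for a small cluster might look like the following sketch. The host names are hypothetical, and the Ceph Manager daemons are colocated with the Monitors:

      [mons]
      mon01
      mon02
      mon03

      [osds]
      osd[1:3]

      [mgrs]
      mon01
      mon02
      mon03
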
  7. As the Ansible user, ensure that Ansible can reach the Ceph hosts:

    [user@admin ~]$ ansible all -m ping
  8. As root, create the /var/log/ansible/ directory and assign the appropriate permissions for the ansible user:

    [root@admin ceph-ansible]# mkdir /var/log/ansible
    [root@admin ceph-ansible]# chown ansible:ansible  /var/log/ansible
    [root@admin ceph-ansible]# chmod 755 /var/log/ansible
    1. Edit the /usr/share/ceph-ansible/ansible.cfg file, updating the log_path value as follows:

      log_path = /var/log/ansible/ansible.log
  9. As the Ansible user, run the ceph-ansible playbook.

    [user@admin ceph-ansible]$ ansible-playbook site-docker.yml
    Note

    If you deploy Red Hat Ceph Storage to Red Hat Enterprise Linux Atomic Host hosts, use the --skip-tags=with_pkg option:

    [user@admin ceph-ansible]$ ansible-playbook --skip-tags=with_pkg site-docker.yml
  10. From a Monitor node, verify the status of the Ceph cluster.

    docker exec ceph-<mon|mgr>-<id> ceph health

    Replace:

    • <id> with the host name of the Monitor node.

    For example:

    [root@monitor ~]# docker exec ceph-mon-mon0 ceph health
    HEALTH_OK
    Note

    In addition to verifying the cluster status, you can use the ceph-medic utility to diagnose the overall health of the Ceph Storage Cluster. See the Installing and Using ceph-medic to Diagnose a Ceph Storage Cluster chapter in the Red Hat Ceph Storage 3 Troubleshooting Guide.

1.3. Installing the Ceph Object Gateway in a Container

Use the Ansible application with the ceph-ansible playbook to install the Ceph Object Gateway in a container.

Prerequisites

  • A running Red Hat Ceph Storage cluster installed as described in Section 1.2, Installing a Red Hat Ceph Storage Cluster in Containers.

Procedure

Use the following commands from the Ansible administration node.

  1. Navigate to the /usr/share/ceph-ansible/ directory.

    [user@admin ~]$ cd /usr/share/ceph-ansible/
  2. Uncomment the radosgw_interface parameter in the group_vars/all.yml file.

    radosgw_interface: <interface>

    Replace:

    • <interface> with the interface that the Ceph Object Gateway nodes listen to

    For additional details, see the all.yml file.

  3. Create a new copy of the rgws.yml.sample file located in the group_vars directory.

    [root@admin ceph-ansible]# cp group_vars/rgws.yml.sample group_vars/rgws.yml
  4. Optional. Edit the group_vars/rgws.yml file. For additional details, see the rgws.yml file.
  5. Add the host name of the Ceph Object Gateway node to the [rgws] section of the Ansible inventory file located by default at /etc/ansible/hosts.

    [rgws]
        gateway01

    Alternatively, you can colocate the Ceph Object Gateway with the OSD daemon on one node by adding the same node under the [osds] and [rgws] sections. See Chapter 2, Colocation of Containerized Ceph Daemons for details.

  6. Run the ceph-ansible playbook.

    [user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit rgws
    Note

    If you deploy Red Hat Ceph Storage to Red Hat Enterprise Linux Atomic Host hosts, use the --skip-tags=with_pkg option:

    [user@admin ceph-ansible]$ ansible-playbook --skip-tags=with_pkg site-docker.yml
  7. Verify that the Ceph Object Gateway node was deployed successfully.

    1. Connect to a Monitor node:

      ssh <hostname>

      Replace <hostname> with the host name of the Monitor node, for example:

      [user@admin ~]$ ssh root@monitor
    2. Verify that the Ceph Object Gateway pools were created properly:

      [root@monitor ~]# docker exec ceph-mon-mon1 rados lspools
      rbd
      cephfs_data
      cephfs_metadata
      .rgw.root
      default.rgw.control
      default.rgw.data.root
      default.rgw.gc
      default.rgw.log
      default.rgw.users.uid
    3. From any client on the same network as the Ceph cluster, for example the Monitor node, use the curl command to send an HTTP request on port 8080 using the IP address of the Ceph Object Gateway host:

      curl http://<ip-address>:8080

      Replace:

      • <ip-address> with the IP address of the Ceph Object Gateway node. To determine the IP address of the Ceph Object Gateway host, use the ifconfig or ip commands.
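
      For example, using the gateway address from the firewall examples in Section 1.1.4; your address will differ:

      [root@monitor ~]# curl http://192.168.0.31:8080
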
    4. List buckets:

      [root@monitor ~]# docker exec ceph-mon-mon1 radosgw-admin bucket list


1.4. Installing Metadata Servers

Use the Ansible automation application to install a Ceph Metadata Server (MDS). Metadata Server daemons are necessary for deploying a Ceph File System.

Procedure

Perform the following steps on the Ansible administration node.

  1. Add a new section [mdss] to the /etc/ansible/hosts file:

    [mdss]
    <hostname>
    <hostname>
    <hostname>

    Replace <hostname> with the host names of the nodes where you want to install the Ceph Metadata Servers.

    Alternatively, you can colocate the Metadata Server with the OSD daemon on one node by adding the same node under the [osds] and [mdss] sections. See Chapter 2, Colocation of Containerized Ceph Daemons for details.

  2. Navigate to the /usr/share/ceph-ansible directory:

    [root@admin ~]# cd /usr/share/ceph-ansible
  3. Create a copy of the group_vars/mdss.yml.sample file named mdss.yml:

    [root@admin ceph-ansible]# cp group_vars/mdss.yml.sample group_vars/mdss.yml
  4. Optionally, edit parameters in mdss.yml. See mdss.yml for details.
  5. Run the Ansible playbook:

    [user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit mdss
  6. After installing Metadata Servers, configure them. For details, see the Configuring Metadata Server Daemons chapter in the Ceph File System Guide for Red Hat Ceph Storage 3.
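
As an optional check that is not part of the official procedure, you can query the Metadata Server state from a Monitor node. The ceph-mon-mon0 container name follows the example in Section 1.2; replace it with the name of a Monitor container in your cluster:

    [root@monitor ~]# docker exec ceph-mon-mon0 ceph mds stat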


1.5. Understanding the limit option

This section contains information about the Ansible --limit option.

Ansible supports the --limit option, which enables you to run the site, site-docker, and rolling_update Ansible playbooks against a particular section of the inventory file.

$ ansible-playbook site.yml|rolling_update.yml|site-docker.yml --limit osds|rgws|clients|mdss|nfss

For example, to redeploy only OSDs:

$ ansible-playbook /usr/share/ceph-ansible/site.yml --limit osds
Important

If you colocate Ceph components on one node, Ansible applies a playbook to all components on that node, even though only one component type was specified with the --limit option. For example, if you run the rolling_update playbook with the --limit osds option on a node that contains both OSDs and Metadata Servers (MDS), Ansible upgrades both components, the OSDs and the MDSs.
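
The --limit option also accepts individual host names and patterns from the inventory file. For example, to run the containerized site playbook against a single hypothetical OSD node named osd04:

$ ansible-playbook /usr/share/ceph-ansible/site-docker.yml --limit osd04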

1.6. Additional Resources