Container Guide

Red Hat Ceph Storage 3

Deploying and Managing Red Hat Ceph Storage in Containers

Red Hat Ceph Storage Documentation Team

Abstract

This document describes how to deploy and manage Red Hat Ceph Storage in containers.

Chapter 1. Deploying Red Hat Ceph Storage in Containers

This chapter describes how to use the Ansible application with the ceph-ansible playbook to deploy Red Hat Ceph Storage 3 in containers.

1.1. Prerequisites

1.1.1. Registering Red Hat Ceph Storage Nodes to CDN and Attaching Subscriptions

Register each Red Hat Ceph Storage (RHCS) node to the Content Delivery Network (CDN) and attach the appropriate subscription so that the node has access to software repositories. Each RHCS node must be able to access the full Red Hat Enterprise Linux 7 base content and content from the extras repository.

Note

For RHCS nodes that cannot access the Internet during the installation, provide the software content by using the Red Hat Satellite server, or by mounting a local Red Hat Enterprise Linux 7 Server ISO image and pointing the RHCS nodes to it. For additional details, contact Red Hat Support.

For more information on registering Ceph nodes with the Red Hat Satellite server, see the How to Register Ceph with Satellite 6 and How to Register Ceph with Satellite 5 articles on the Red Hat Customer Portal.

Prerequisites
  • Valid Red Hat subscription
  • RHCS nodes can connect to the Internet.
Procedure

Perform the following steps on all nodes in the storage cluster as the root user.

  1. Register the node and when prompted, enter your Red Hat Customer Portal credentials:

    # subscription-manager register
  2. Pull the latest subscription data from CDN:

    # subscription-manager refresh
  3. List all available subscriptions for Red Hat Ceph Storage and note the Pool ID:

    # subscription-manager list --available --all --matches="*Ceph*"
  4. Attach the subscriptions:

    subscription-manager attach --pool=$POOL_ID
    Replace
    • $POOL_ID with the Pool ID determined in the previous step.
  5. Disable the default software repositories, and enable the Red Hat Enterprise Linux 7 Server base and Red Hat Enterprise Linux 7 Server extras repositories:

    # subscription-manager repos --disable=*
    # subscription-manager repos --enable=rhel-7-server-rpms
    # subscription-manager repos --enable=rhel-7-server-extras-rpms
  6. Update to the latest packages:

    # yum update
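
Optionally, verify that only the Red Hat Enterprise Linux 7 Server base and extras repositories are enabled. This is a standard subscription-manager query, shown here only as a sanity check:

    # subscription-manager repos --list-enabled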
Additional Resources

1.1.2. Creating an Ansible User with sudo Access

Ansible must be able to log in to the Red Hat Ceph Storage (RHCS) nodes as a user that has root privileges to install software and create configuration files without prompting for a password.

Prerequisites
  • Root or sudo access to all nodes in the storage cluster.
Procedure

Perform the following steps on all nodes in the storage cluster as the root user.

  1. Log in to a Ceph node as the root user:

    ssh root@$HOST_NAME
    Replace
    • $HOST_NAME with the host name of the Ceph node.

    Example

    # ssh root@mon01

    Enter the root password when prompted.

  2. Create a new Ansible user:

    useradd $USER_NAME
    Replace
    • $USER_NAME with the new user name for the Ansible user.

    Example

    # useradd admin


    Important

    Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks.

  3. Set a new password for this user:

    passwd $USER_NAME
    Replace
    • $USER_NAME with the new user name for the Ansible user.

    Example

    # passwd admin

    Enter the new password twice when prompted.

  4. Configure sudo access for the newly created user:

    cat << EOF >/etc/sudoers.d/$USER_NAME
    $USER_NAME ALL = (root) NOPASSWD:ALL
    EOF
    Replace
    • $USER_NAME with the new user name for the Ansible user.

    Example

    # cat << EOF >/etc/sudoers.d/admin
    admin ALL = (root) NOPASSWD:ALL
    EOF

  5. Assign the correct file permissions to the new file:

    chmod 0440 /etc/sudoers.d/$USER_NAME
    Replace
    • $USER_NAME with the new user name for the Ansible user.

    Example

    # chmod 0440 /etc/sudoers.d/admin
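
As an optional check, confirm the sudo configuration for the new user from the same node. The admin user name matches the example above; substitute your own Ansible user name:

    # sudo -l -U admin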

Additional Resources
  • The Adding a New User section in the System Administrator’s Guide for Red Hat Enterprise Linux 7.

1.1.3. Enabling Password-less SSH for Ansible

Generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password.

Prerequisites
Procedure

Perform the following steps from the Ansible administration node as the Ansible user.

  1. Generate the SSH key pair. Accept the default file name and leave the passphrase empty:

    [user@admin ~]$ ssh-keygen
  2. Copy the public key to all nodes in the storage cluster:

    ssh-copy-id $USER_NAME@$HOST_NAME
    Replace
    • $USER_NAME with the new user name for the Ansible user.
    • $HOST_NAME with the host name of the Ceph node.

    Example

    [user@admin ~]$ ssh-copy-id ceph-admin@ceph-mon01

  3. Create and edit the ~/.ssh/config file.

    Important

    By creating and editing the ~/.ssh/config file, you do not have to specify the -u $USER_NAME option each time you run the ansible-playbook command.

    1. Create the SSH config file:

      [user@admin ~]$ touch ~/.ssh/config
    2. Open the config file for editing. Set the Hostname and User options for each node in the storage cluster:

      Host node1
         Hostname $HOST_NAME
         User $USER_NAME
      Host node2
         Hostname $HOST_NAME
         User $USER_NAME
      ...
      Replace
      • $HOST_NAME with the host name of the Ceph node.
      • $USER_NAME with the new user name for the Ansible user.

      Example

      Host node1
         Hostname monitor
         User admin
      Host node2
         Hostname osd
         User admin
      Host node3
         Hostname gateway
         User admin

  4. Set the correct file permissions for the ~/.ssh/config file:

    [user@admin ~]$ chmod 600 ~/.ssh/config
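
As an optional check, confirm that you can log in to a Ceph node without being prompted for a password. The node1 host alias comes from the example ~/.ssh/config file above; substitute one of your own entries:

    [user@admin ~]$ ssh node1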
Additional Resources
  • The ssh_config(5) manual page
  • The OpenSSH chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7

1.1.4. Configuring a Firewall for Red Hat Ceph Storage (optional)

Red Hat Ceph Storage (RHCS) uses the firewalld service.

The Monitor daemons use port 6789 for communication within the Ceph storage cluster.

On each Ceph OSD node, the OSD daemons use several ports in the range 6800-7300:

  • One for communicating with clients and monitors over the public network
  • One for sending data to other OSDs over a cluster network, if available; otherwise, over the public network
  • One for exchanging heartbeat packets over a cluster network, if available; otherwise, over the public network

The Ceph Manager (ceph-mgr) daemons use ports in the range 6800-7300. Consider colocating the ceph-mgr daemons with the Monitors on the same nodes.

The Ceph Metadata Server nodes (ceph-mds) use port 6800.

The Ceph Object Gateway nodes use port 7480 by default. However, you can change the default port, for example to port 80.

To use the SSL/TLS service, open port 443.

Prerequisites
  • Network hardware is connected.
Procedure

Perform the following steps on each node as specified, as the root user.

  1. On all RHCS nodes, start the firewalld service. Enable it to run on boot, and ensure that it is running:

    # systemctl enable firewalld
    # systemctl start firewalld
    # systemctl status firewalld
  2. On all Monitor nodes, open port 6789 on the public network:

    [root@monitor ~]# firewall-cmd --zone=public --add-port=6789/tcp
    [root@monitor ~]# firewall-cmd --zone=public --add-port=6789/tcp --permanent

    To limit access based on the source address:

    firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
    source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
    port="6789" accept"
    firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
    source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
    port="6789" accept" --permanent
    Replace
    • $IP_ADDR with the network address of the Monitor node.
    • $NETMASK_PREFIX with the netmask in CIDR notation.

    Example

    [root@monitor ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
    source address="192.168.0.11/24" port protocol="tcp" \
    port="6789" accept"

    [root@monitor ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
    source address="192.168.0.11/24" port protocol="tcp" \
    port="6789" accept" --permanent
  3. On all OSD nodes, open ports 6800-7300 on the public network:

    [root@osd ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp
    [root@osd ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent

    If you have a separate cluster network, repeat the commands with the appropriate zone.

  4. On all Ceph Manager (ceph-mgr) nodes (usually the same nodes as Monitor ones), open ports 6800-7300 on the public network:

    [root@monitor ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp
    [root@monitor ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent

    If you have a separate cluster network, repeat the commands with the appropriate zone.

  5. On all Ceph Metadata Server (ceph-mds) nodes, open port 6800 on the public network:

    [root@monitor ~]# firewall-cmd --zone=public --add-port=6800/tcp
    [root@monitor ~]# firewall-cmd --zone=public --add-port=6800/tcp --permanent

    If you have a separate cluster network, repeat the commands with the appropriate zone.

  6. On all Ceph Object Gateway nodes, open the relevant port or ports on the public network.

    1. To open the default port 7480:

      [root@gateway ~]# firewall-cmd --zone=public --add-port=7480/tcp
      [root@gateway ~]# firewall-cmd --zone=public --add-port=7480/tcp --permanent

      To limit access based on the source address:

      firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
      port="7480" accept"
      firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
      port="7480" accept" --permanent
      Replace
      • $IP_ADDR with the network address of the object gateway node.
      • $NETMASK_PREFIX with the netmask in CIDR notation.

      Example

      [root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="192.168.0.31/24" port protocol="tcp" \
      port="7480" accept"

      [root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="192.168.0.31/24" port protocol="tcp" \
      port="7480" accept" --permanent
    2. Optional. If you changed the default Ceph Object Gateway port, for example, to port 80, open this port:

      [root@gateway ~]# firewall-cmd --zone=public --add-port=80/tcp
      [root@gateway ~]# firewall-cmd --zone=public --add-port=80/tcp --permanent

      To limit access based on the source address, run the following commands:

      firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
      port="80" accept"
      firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
      port="80" accept" --permanent
      Replace
      • $IP_ADDR with the network address of the object gateway node.
      • $NETMASK_PREFIX with the netmask in CIDR notation.

      Example

      [root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="192.168.0.31/24" port protocol="tcp" \
      port="80" accept"

      [root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="192.168.0.31/24" port protocol="tcp" \
      port="80" accept" --permanent
    3. Optional. To use SSL/TLS, open port 443:

      [root@gateway ~]# firewall-cmd --zone=public --add-port=443/tcp
      [root@gateway ~]# firewall-cmd --zone=public --add-port=443/tcp --permanent

      To limit access based on the source address, run the following commands:

      firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
      port="443" accept"
      firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
      port="443" accept" --permanent
      Replace
      • $IP_ADDR with the network address of the object gateway node.
      • $NETMASK_PREFIX with the netmask in CIDR notation.

      Example

      [root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="192.168.0.31/24" port protocol="tcp" \
      port="443" accept"
      [root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
      source address="192.168.0.31/24" port protocol="tcp" \
      port="443" accept" --permanent

Additional Resources

1.2. Installing a Red Hat Ceph Storage Cluster in Containers

Use the Ansible application with the ceph-ansible playbook to install Red Hat Ceph Storage 3 in containers.

A Ceph cluster used in production usually consists of ten or more nodes. To deploy Red Hat Ceph Storage as a container image, Red Hat recommends using a Ceph cluster that consists of at least three OSD and three Monitor nodes.

Prerequisites

  • On the Ansible administration node, enable the Red Hat Ceph Storage 3 Tools repository:

    [root@admin ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
  • On the Ansible administration node, install the ceph-ansible package:

    [root@admin ~]# yum install ceph-ansible

Procedure

Use the following commands from the Ansible administration node unless instructed otherwise.

  1. In the user’s home directory, create the ceph-ansible-keys directory where Ansible stores temporary values generated by the ceph-ansible playbook:

    [user@admin ~]$ mkdir ~/ceph-ansible-keys
  2. Create a symbolic link to the /usr/share/ceph-ansible/group_vars directory in the /etc/ansible/ directory:

    [root@admin ~]# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars
  3. Navigate to the /usr/share/ceph-ansible/ directory:

    [user@admin ~]$ cd /usr/share/ceph-ansible
  4. Create new copies of the yml.sample files:

    [root@admin ceph-ansible]# cp group_vars/all.yml.sample group_vars/all.yml
    [root@admin ceph-ansible]# cp group_vars/osds.yml.sample group_vars/osds.yml
    [root@admin ceph-ansible]# cp site-docker.yml.sample site-docker.yml
  5. Edit the copied files.

    1. Edit the group_vars/all.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

      Option | Value | Required | Notes
      monitor_interface | The interface that the Monitor nodes listen to | One of monitor_interface, monitor_address, or monitor_address_block is required |
      monitor_address | The address that the Monitor nodes listen to | One of monitor_interface, monitor_address, or monitor_address_block is required |
      monitor_address_block | The subnet of the Ceph public network | One of monitor_interface, monitor_address, or monitor_address_block is required | Use when the IP addresses of the nodes are unknown, but the subnet is known
      ip_version | ipv6 | Yes if using IPv6 addressing |
      journal_size | The required size of the journal in MB | No |
      public_network | The IP address and netmask of the Ceph public network | Yes | See the Verifying the Network Configuration for Red Hat Ceph Storage section in the Installation Guide for Red Hat Enterprise Linux
      cluster_network | The IP address and netmask of the Ceph cluster network | No |
      ceph_docker_image | rhceph/rhceph-3-rhel7 | Yes |
      containerized_deployment | true | Yes |
      ceph_docker_registry | registry.access.redhat.com | Yes |

      An example of the all.yml file might look like this:

      monitor_interface: eth0
      journal_size: 5120
      public_network: 192.168.0.0/24
      ceph_docker_image: rhceph/rhceph-3-rhel7
      containerized_deployment: true
      ceph_docker_registry: registry.access.redhat.com

      For additional details, see the all.yml file.

    2. Edit the group_vars/osds.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

      Table 1.1. OSD Ansible Settings

      Option | Value | Required | Notes
      osd_scenario | collocated to use the same device for journal and OSD data; non-collocated to use a dedicated device to store journal data | Yes |
      osd_auto_discovery | true to automatically discover OSDs | Yes if osd_scenario: collocated |
      devices | A list of devices | Yes to specify the list of devices |
      dedicated_devices | A list of dedicated devices for non-collocated OSDs | Yes if osd_scenario: non-collocated |
      dmcrypt | true to encrypt OSDs | No | Defaults to false

      An example of the osds.yml file might look like this:

      osd_scenario: collocated
      devices:
        - /dev/sda
        - /dev/sdb
      dmcrypt: true

      For additional details, see the osds.yml file.

  6. Edit the Ansible inventory file located by default at /etc/ansible/hosts. Remember to comment out the example hosts. A combined example inventory is shown after the following sub-steps.

    1. Add the Monitor nodes under the [mons] section:

      [mons]
      <monitor-host-name>
      <monitor-host-name>
      <monitor-host-name>
    2. Add OSD nodes under the [osds] section. If the nodes have sequential naming, consider using a range:

      [osds]
      <osd-host-name[1:10]>

      Alternatively, you can colocate Monitors with the OSD daemons on one node by adding the same node under the [mons] and [osds] sections. See Chapter 2, Colocation of Containerized Ceph Daemons for details.

    3. Add the Ceph Manager (ceph-mgr) nodes under the [mgrs] section. Colocate the Ceph Manager daemon with Monitor nodes.

      [mgrs]
      <monitor-host-name>
      <monitor-host-name>
      <monitor-host-name>
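
    Putting the sub-steps together, a complete inventory for a minimal containerized deployment might look like the following. The host names are placeholders only; replace them with your own:

      [mons]
      mon01
      mon02
      mon03

      [osds]
      osd01
      osd02
      osd03

      [mgrs]
      mon01
      mon02
      mon03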
  7. As the Ansible user, ensure that Ansible can reach the Ceph hosts:

    [user@admin ~]$ ansible all -m ping
  8. As the Ansible user, run the ceph-ansible playbook.

    [user@admin ceph-ansible]$ ansible-playbook site-docker.yml
    Note

    If you deploy Red Hat Ceph Storage to Red Hat Enterprise Linux Atomic Host hosts, use the --skip-tags=with_pkg option:

    [user@admin ceph-ansible]$ ansible-playbook --skip-tags=with_pkg site-docker.yml
  9. From a Monitor node, verify the status of the Ceph cluster.

    docker exec ceph-<mon|mgr>-<id> ceph health

    Replace:

    • <id> with the host name of the Monitor node.

    For example:

    [root@monitor ~]# docker exec ceph-mon-mon0 ceph health
    HEALTH_OK
    Note

    In addition to verifying the cluster status, you can use the ceph-medic utility to diagnose the overall health of the Ceph Storage Cluster. See the Installing and Using ceph-medic to Diagnose a Ceph Storage Cluster chapter in the Red Hat Ceph Storage 3 Troubleshooting Guide.

1.3. Installing the Ceph Object Gateway in a Container

Use the Ansible application with the ceph-ansible playbook to install the Ceph Object Gateway in a container.

Prerequisites

Procedure

Use the following commands from the Ansible administration node.

  1. Navigate to the /usr/share/ceph-ansible/ directory.

    [user@admin ~]$ cd /usr/share/ceph-ansible/
  2. Uncomment the radosgw_interface parameter in the group_vars/all.yml file.

    radosgw_interface: <interface>

    Replace:

    • <interface> with the interface that the Ceph Object Gateway nodes listen to

    For additional details, see the all.yml file.

  3. Create a new copy of the rgws.yml.sample file located in the group_vars directory.

    [root@admin ceph-ansible]# cp group_vars/rgws.yml.sample group_vars/rgws.yml
  4. Optional. Edit the group_vars/rgws.yml file. For additional details, see the rgws.yml file.
  5. Add the host name of the Ceph Object Gateway node to the [rgws] section of the Ansible inventory file located by default at /etc/ansible/hosts.

    [rgws]
        gateway01

    Alternatively, you can colocate the Ceph Object Gateway with the OSD daemon on one node by adding the same node under the [osds] and [rgws] sections. See Chapter 2, Colocation of Containerized Ceph Daemons for details.

  6. Run the ceph-ansible playbook.

    [user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit rgws
    Note

    If you deploy Red Hat Ceph Storage to Red Hat Enterprise Linux Atomic Host hosts, use the --skip-tags=with_pkg option:

    [user@admin ceph-ansible]$ ansible-playbook --skip-tags=with_pkg site-docker.yml
  7. Verify that the Ceph Object Gateway node was deployed successfully.

    1. Connect to a Monitor node:

      ssh <hostname>

      Replace <hostname> with the host name of the Monitor node, for example:

      [user@admin ~]$ ssh root@monitor
    2. Verify that the Ceph Object Gateway pools were created properly:

      [root@monitor ~]# docker exec ceph-mon-mon1 rados lspools
      rbd
      cephfs_data
      cephfs_metadata
      .rgw.root
      default.rgw.control
      default.rgw.data.root
      default.rgw.gc
      default.rgw.log
      default.rgw.users.uid
    3. From any client on the same network as the Ceph cluster, for example the Monitor node, use the curl command to send an HTTP request on port 8080 using the IP address of the Ceph Object Gateway host:

      curl http://<ip-address>:8080

      Replace:

      • <ip-address> with the IP address of the Ceph Object Gateway node. To determine the IP address of the Ceph Object Gateway host, use the ifconfig or ip commands.
    4. List buckets:

      [root@monitor ~]# docker exec ceph-mon-mon1 radosgw-admin bucket list
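
    Optionally, to exercise the gateway further, you can create a test user for the S3 interface from the same Monitor container. The user ID and display name below are arbitrary examples:

      [root@monitor ~]# docker exec ceph-mon-mon1 radosgw-admin user create --uid="testuser" --display-name="Test User"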

Additional Resources

1.4. Installing Metadata Servers

Use the Ansible automation application to install a Ceph Metadata Server (MDS). Metadata Server daemons are necessary for deploying a Ceph File System.

Procedure

Perform the following steps on the Ansible administration node.

  1. Add a new section [mdss] to the /etc/ansible/hosts file:

    [mdss]
    <hostname>
    <hostname>
    <hostname>

    Replace <hostname> with the host names of the nodes where you want to install the Ceph Metadata Servers.

    Alternatively, you can colocate the Metadata Server with the OSD daemon on one node by adding the same node under the [osds] and [mdss] sections. See Chapter 2, Colocation of Containerized Ceph Daemons for details.

  2. Navigate to the /usr/share/ceph-ansible directory:

    [root@admin ~]# cd /usr/share/ceph-ansible
  3. Create a copy of the group_vars/mdss.yml.sample file named mdss.yml:

    [root@admin ceph-ansible]# cp group_vars/mdss.yml.sample group_vars/mdss.yml
  4. Optionally, edit parameters in mdss.yml. See mdss.yml for details.
  5. Run the Ansible playbook:

    [user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit mdss
  6. After installing Metadata Servers, configure them. For details, see the Configuring Metadata Server Daemons chapter in the Ceph File System Guide for Red Hat Ceph Storage 3.
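
Optionally, verify that the Metadata Server daemons registered with the cluster by querying a Monitor node. The container name below is only an example; use docker ps on the Monitor node to find the actual name:

    [root@monitor ~]# docker exec ceph-mon-mon0 ceph mds stat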

Additional Resources

1.5. Understanding the limit option

This section contains information about the Ansible --limit option.

Ansible supports the --limit option that enables you to use the site, site-docker, and rolling_update Ansible playbooks for a particular section of the inventory file.

$ ansible-playbook site.yml|rolling_update.yml|site-docker.yml --limit osds|rgws|clients|mdss|nfss

For example, to redeploy only OSDs:

$ ansible-playbook /usr/share/ceph-ansible/site.yml --limit osds
Important

If you colocate Ceph components on one node, Ansible applies a playbook to all components on the node despite that only one component type was specified with the limit option. For example, if you run the rolling_update playbook with the --limit osds option on a node that contains OSDs and Metadata Servers (MDS), Ansible will upgrade both components, OSDs and MDSs.
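
Because the --limit option accepts any Ansible host pattern, you can also restrict a playbook run to a single host from the inventory file. The host name below is an example only:

$ ansible-playbook /usr/share/ceph-ansible/site-docker.yml --limit osd06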

1.6. Additional Resources

Chapter 2. Colocation of Containerized Ceph Daemons

This chapter describes how colocation of containerized Ceph daemons works and its advantages, and how to set dedicated resources for colocated daemons.

2.1. How Colocation Works and Its Advantages

You can colocate containerized Ceph daemons on the same node. This section describes how colocation works and its advantages.

How Colocation Works

You can colocate one daemon from the following list with an OSD daemon by adding the same node to the appropriate sections in the Ansible inventory file.

  • The Ceph Object Gateway (radosgw)
  • Metadata Server (MDS)
  • RBD mirror (rbd-mirror)
  • Monitor and the Ceph Manager daemon (ceph-mgr)
  • NFS Ganesha

The following example shows what an inventory file with colocated daemons can look like:

Example 2.1. Ansible inventory file with colocated daemons

[mons]
<hostname1>
<hostname2>
<hostname3>

[mgrs]
<hostname1>
<hostname2>
<hostname3>

[osds]
<hostname4>
<hostname5>
<hostname6>

[rgws]
<hostname4>
<hostname5>

Figure 2.1, “Colocated Daemons” and Figure 2.2, “Non-colocated Daemons” show the difference between clusters with colocated and non-colocated daemons.

Figure 2.1. Colocated Daemons

Figure 2.2. Non-colocated Daemons

When you colocate two containerized Ceph daemons on the same node, the ceph-ansible playbook reserves dedicated CPU and RAM resources for each. By default, ceph-ansible uses the values listed in the Recommended Minimum Hardware chapter in the Red Hat Ceph Storage Hardware Selection Guide 3. To learn how to change the default values, see Section 2.2, “Setting Dedicated Resources for Colocated Daemons”.

Advantages of Colocation

  • Significant improvement in total cost of ownership (TCO) at small scale
  • Reduction from six nodes to three for the minimum configuration
  • Easier upgrade
  • Better resource isolation

2.2. Setting Dedicated Resources for Colocated Daemons

When colocating two Ceph daemons on the same node, the ceph-ansible playbook reserves CPU and RAM resources for each. By default, ceph-ansible uses the values listed in the Recommended Minimum Hardware chapter in the Red Hat Ceph Storage Hardware Selection Guide. This section describes how to change the default values.

Procedure

  • To change the default RAM and CPU limit for a daemon, set the ceph_<daemon-type>_docker_memory_limit and ceph_<daemon-type>_docker_cpu_limit parameters in the appropriate .yml configuration file when deploying the daemon.

    For example, to change the default RAM limit to 2 GB and the CPU limit to 2 for the Ceph Object Gateway, edit the /usr/share/ceph-ansible/group_vars/rgws.yml file as follows:

    ceph_rgw_docker_memory_limit: 2g
    ceph_rgw_docker_cpu_limit: 2
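
    To confirm that the limits were applied to a running container, you can inspect it with standard docker options. The container name is deployment-specific; use docker ps on the node to find the name of the Ceph Object Gateway container. The memory limit is reported in bytes:

    # docker inspect --format '{{ .HostConfig.Memory }}' <container-name>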

Additional Resources

  • The sample configuration files in the /usr/share/ceph-ansible/group_vars/ directory

2.3. Additional Resources

Chapter 3. Administering Ceph Clusters That Run in Containers

This chapter describes basic administration tasks to perform on Ceph clusters that run in containers, such as starting, stopping, and restarting the Ceph daemons, viewing their log files, purging a cluster deployed by Ansible, and upgrading the cluster.

3.1. Starting, Stopping, and Restarting Ceph Daemons That Run in Containers

This section describes how to start, stop, or restart Ceph daemons that run in containers.

Procedure

  • To start, stop, or restart a Ceph daemon running in a container:

    systemctl <action> ceph-<daemon>@<ID>

    Where:

    • <action> is the action to perform: start, stop, or restart
    • <daemon> is the daemon: osd, mon, mds, or rgw
    • <ID> is either

      • The device name that the ceph-osd daemon uses
      • The short host name where the ceph-mon, ceph-mds, or ceph-rgw daemons are running

    For example, to restart a ceph-osd daemon that uses the /dev/sdb device:

    # systemctl restart ceph-osd@sdb

    To start a ceph-mon daemon that runs on the ceph-monitor01 host:

    # systemctl start ceph-mon@ceph-monitor01

    To stop a ceph-rgw daemon that runs on the ceph-rgw01 host:

    # systemctl stop ceph-rgw@ceph-rgw01
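
    To check the current state of a daemon without changing it, you can use systemctl status in the same way. For example, to inspect the ceph-osd daemon that uses the /dev/sdb device:

    # systemctl status ceph-osd@sdb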

Additional Resources

3.2. Viewing Log Files of Ceph Daemons That Run in Containers

Use the journald daemon from the container host to view a log file of a Ceph daemon from a container.

Procedure: Viewing Log Files of Ceph Daemons That Run in Containers

  • To view the entire Ceph log file:

    journalctl -u ceph-<daemon>@<ID>

    Where:

    • <daemon> is the Ceph daemon: osd, mon, or rgw
    • <ID> is either

      • The device name that the ceph-osd daemon uses
      • The short host name where the ceph-mon or ceph-rgw daemons are running

    For example, to view the entire log for the ceph-osd daemon that uses the /dev/sdb device:

    # journalctl -u ceph-osd@sdb
  • To show only the recent journal entries and follow new entries as they arrive, use the -f option:

    journalctl -fu ceph-<daemon>@<ID>

    For example, to view only recent journal entries for the ceph-mon daemon that runs on the ceph-monitor01 host:

    # journalctl -fu ceph-mon@ceph-monitor01
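  • Optionally, to narrow the output to a time window, use the standard journalctl --since and --until options. For example, to view the last hour of entries for the ceph-osd daemon that uses the /dev/sdb device:

    # journalctl -u ceph-osd@sdb --since "1 hour ago"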
Note

You can also use the sosreport utility to view the journald logs. For more details about SOS reports, see the What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later? solution on the Red Hat Customer Portal.

Additional Resources

  • The journalctl(1) manual page

3.3. Purging Clusters Deployed by Ansible

If you no longer want to use a Ceph cluster, use the purge-docker-cluster.yml playbook to purge the cluster. Purging a cluster is also useful when the installation process failed and you want to start over.

Warning

After purging a Ceph cluster, all data on the OSDs is lost.

Prerequisites

  • Ensure that the /var/log/ansible.log file is writable.

Procedure

Use the following commands from the Ansible administration node.

  1. Navigate to the /usr/share/ceph-ansible/ directory.

    [user@admin ~]$ cd /usr/share/ceph-ansible
  2. Copy the purge-docker-cluster.yml playbook from the /usr/share/ceph-ansible/infrastructure-playbooks/ directory to the current directory:

    [root@admin ceph-ansible]# cp infrastructure-playbooks/purge-docker-cluster.yml .
  3. Use the purge-docker-cluster.yml playbook to purge the Ceph cluster.

    • To remove all packages, containers, configuration files, and all the data created by the ceph-ansible playbook:

      [user@admin ceph-ansible]$ ansible-playbook purge-docker-cluster.yml
    • To specify a different inventory file than the default one (/etc/ansible/hosts), use the -i parameter:

      ansible-playbook purge-docker-cluster.yml -i [inventory-file]

      Replace [inventory-file] with the path to the inventory file.

      For example:

      [user@admin ceph-ansible]$ ansible-playbook purge-docker-cluster.yml -i ~/ansible/hosts
    • To skip the removal of the Ceph container image, use the --skip-tags="remove_img" option:

      [user@admin ceph-ansible]$ ansible-playbook --skip-tags="remove_img" purge-docker-cluster.yml
    • To skip the removal of the packages that were installed during the installation, use the --skip-tags="with_pkg" option:

      [user@admin ceph-ansible]$ ansible-playbook --skip-tags="with_pkg" purge-docker-cluster.yml

3.4. Upgrading a Red Hat Ceph Storage Cluster That Runs in Containers

This section describes how to upgrade to a newer minor or major version of the Red Hat Ceph Storage container image.

Use the Ansible rolling_update.yml playbook located in the /usr/share/ceph-ansible/infrastructure-playbooks/ directory from the administration node to upgrade between two major or minor versions of Red Hat Ceph Storage, or to apply asynchronous updates.

Ansible upgrades the Ceph nodes in the following order:

  • Monitor nodes
  • MGR nodes
  • OSD nodes
  • MDS nodes
  • Ceph Object Gateway nodes
  • All other Ceph client nodes
Note

Red Hat Ceph Storage 3 introduces several changes in Ansible configuration files located in the /usr/share/ceph-ansible/group_vars/ directory; certain parameters were renamed or removed. Therefore, create new copies of the all.yml.sample and osds.yml.sample files after upgrading to version 3. For more details about the changes, see Appendix A, Changes in Ansible Variables Between Version 2 and 3.

Important

The rolling_update.yml playbook includes the serial variable that adjusts the number of nodes to be updated simultaneously. Red Hat strongly recommends using the default value (1), which ensures that Ansible upgrades the cluster nodes one by one.

Prerequisites

  • On all nodes in the cluster, enable the rhel-7-server-extras-rpms repository.

    # subscription-manager repos --enable=rhel-7-server-extras-rpms
  • On the Ansible administration node, ensure the latest version of the ceph-ansible package is installed.

    [root@admin ~]# yum update ceph-ansible

Procedure

Use the following commands from the Ansible administration node.

  1. Navigate to the /usr/share/ceph-ansible/ directory:

    [user@admin ~]$ cd /usr/share/ceph-ansible/
  2. In the group_vars/all.yml file, change the ceph_docker_image_tag parameter to point to the newer Ceph container version, for example:

    ceph_docker_image_tag: 3
  3. If the cluster you want to upgrade contains any Ceph Object Gateway nodes, add the radosgw_interface parameter to the group_vars/all.yml file.

    radosgw_interface: <interface>

    Replace:

    • <interface> with the interface that the Ceph Object Gateway nodes listen to
  4. In the Ansible inventory file located at /etc/ansible/hosts, add the Ceph Manager (ceph-mgr) nodes under the [mgrs] section. Colocate the Ceph Manager daemon with Monitor nodes.

    [mgrs]
    <monitor-host-name>
    <monitor-host-name>
    <monitor-host-name>
  5. Copy rolling_update.yml from the infrastructure-playbooks directory to the current directory.

    [root@admin ceph-ansible]# cp infrastructure-playbooks/rolling_update.yml .
  6. Run the playbook:

    [user@admin ceph-ansible]$ ansible-playbook rolling_update.yml

    To use the playbook only for a particular group of nodes from the Ansible inventory file, use the --limit option. For details, see Section 1.5, “Understanding the limit option”.

  7. Back up the group_vars/all.yml and group_vars/osds.yml files.

    [root@admin ceph-ansible]# cp group_vars/all.yml group_vars/all_old.yml
    [root@admin ceph-ansible]# cp group_vars/osds.yml group_vars/osds_old.yml
  8. Create new copies of the group_vars/all.yml.sample and group_vars/osds.yml.sample files, named group_vars/all.yml and group_vars/osds.yml respectively, and edit them according to your deployment. For details, see Appendix A, Changes in Ansible Variables Between Version 2 and 3 and Section 1.2, “Installing a Red Hat Ceph Storage Cluster in Containers”.

    [root@admin ceph-ansible]# cp group_vars/all.yml.sample group_vars/all.yml
    [root@admin ceph-ansible]# cp group_vars/osds.yml.sample group_vars/osds.yml
  9. Verify that the cluster health is OK.

    1. From a Monitor node, list all running containers:

      [root@monitor ~]# docker ps
    2. Verify that the cluster health is OK.

      docker exec ceph-mon-<container-name> ceph -s

      Replace:

      • <container-name> with the name of the Monitor container found in the first step.

      For example:

      [root@monitor ~]# docker exec ceph-mon-mon0 ceph -s
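
      As an additional check after the upgrade, you can confirm that all daemons report the expected Ceph version. The ceph versions command summarizes the versions of the running daemons; the container name below matches the earlier example:

      [root@monitor ~]# docker exec ceph-mon-mon0 ceph versions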

Appendix A. Changes in Ansible Variables Between Version 2 and 3

With Red Hat Ceph Storage 3, certain variables in the configuration files located in the /usr/share/ceph-ansible/group_vars/ directory have changed or have been removed. The following table lists all the changes. After upgrading to version 3, copy the all.yml.sample and osds.yml.sample files again to reflect these changes. See Section 3.4, “Upgrading a Red Hat Ceph Storage Cluster That Runs in Containers” for details.

Old Option | New Option | File
mon_containerized_deployment | containerized_deployment | all.yml
ceph_mon_docker_interface | monitor_interface | all.yml
ceph_rhcs_cdn_install | ceph_repository_type: cdn | all.yml
ceph_rhcs_iso_install | ceph_repository_type: iso | all.yml
ceph_rhcs | ceph_origin: repository and ceph_repository: rhcs (enabled by default) | all.yml
journal_collocation | osd_scenario: collocated | osds.yml
raw_multi_journal | osd_scenario: non-collocated | osds.yml
raw_journal_devices | dedicated_devices | osds.yml
dmcrytpt_journal_collocation | dmcrypt: true + osd_scenario: collocated | osds.yml
dmcrypt_dedicated_journal | dmcrypt: true + osd_scenario: non-collocated | osds.yml

Legal Notice

Copyright © 2017 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.