Chapter 2. Prerequisites

Figure 2.1. Prerequisite Workflow (Red Hat Ceph Storage 2 installation workflow diagram)

Before installing Red Hat Ceph Storage, review the following prerequisites and prepare each Ceph Monitor, OSD, and client node accordingly.

Table 2.1. Prerequisites Checks

Task | Required | Section | Recommendation
Verifying the operating system version | Yes | Section 2.1, “Operating System” | Verify the PID count.
Registering Ceph nodes | Yes | Section 2.2, “Registering Red Hat Ceph Storage Nodes to CDN and Attaching Subscriptions” |
Enabling Ceph software repositories | Yes | Section 2.3, “Enabling the Red Hat Ceph Storage Repositories” | Two installation methods: CDN or local repository (ISO).
Using a RAID controller | No | Section 2.4, “Configuring RAID Controllers” | For OSD nodes only.
Configuring a network interface | Yes | Section 2.5, “Configuring Network” | Using a public network is required. Having a private network for cluster communication is optional, but recommended.
Configuring a firewall | No | Section 2.6, “Configuring Firewall” |
Configuring the Network Time Protocol | Yes | Section 2.7, “Configuring Network Time Protocol” |
Creating an Ansible user | No | Section 2.8, “Creating an Ansible User with Sudo Access” | Ansible deployment only. Creating the Ansible user is required on all Ceph nodes.
Enabling password-less SSH | No | Section 2.9, “Enabling Password-less SSH (Ansible Deployment Only)” | Ansible deployment only.

2.1. Operating System

Red Hat Ceph Storage 2 and later requires Red Hat Enterprise Linux 7 Server running on the AMD64 and Intel 64 architectures, with the same version, for example Red Hat Enterprise Linux 7.2, on all Ceph nodes.
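
To confirm the installed release and architecture on a node, you can run the following commands; this is a minimal check, and the output shown is only an example:

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 (Maipo)
# uname -m
x86_64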

Important

Red Hat does not support clusters with heterogeneous operating systems and versions.

Return to prerequisite checklist

2.1.1. Adjusting the PID Count

Hosts with a high number of OSDs, that is, more than 12, can spawn a large number of threads, especially during recovery and re-balancing events. The kernel defaults to a relatively small maximum number of threads, typically 32768.

  1. Check the current pid_max settings:

    # cat /proc/sys/kernel/pid_max
  2. As root, consider setting kernel.pid_max to a higher number of threads. The theoretical maximum is 4,194,303 threads. For example, add the following to the /etc/sysctl.conf file to set it to the maximum value:

    kernel.pid_max = 4194303
  3. As root, load the changes without rebooting:

    # sysctl -p
  4. As root, verify the changes:

    # sysctl -a | grep kernel.pid_max
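
Alternatively, on Red Hat Enterprise Linux 7 you can persist the setting in a drop-in file under the /etc/sysctl.d/ directory instead of editing /etc/sysctl.conf directly. This is a minimal sketch; the file name is an example:

# echo "kernel.pid_max = 4194303" > /etc/sysctl.d/10-pid-max.conf
# sysctl -p /etc/sysctl.d/10-pid-max.conf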

2.2. Registering Red Hat Ceph Storage Nodes to CDN and Attaching Subscriptions

Ceph relies on packages in the Red Hat Enterprise Linux 7 Base content set. Each Ceph node must be able to access the full Red Hat Enterprise Linux 7 Base content.

To do so, register Ceph nodes that can connect to the Internet to the Red Hat Content Delivery Network (CDN) and attach appropriate Ceph subscriptions to the nodes:

Registering Ceph Nodes to CDN

Run all commands in this procedure as root.

  1. Register a node with the Red Hat Subscription Manager. Run the following command and when prompted, enter your Red Hat Customer Portal credentials:

    # subscription-manager register
  2. Pull the latest subscription data from the CDN server:

    # subscription-manager refresh
  3. List all available subscriptions, find the appropriate Red Hat Ceph Storage subscription, and determine its Pool ID. A filtering example follows this procedure.

    # subscription-manager list --available
  4. Attach the subscriptions:

    # subscription-manager attach --pool=<pool-id>

    Replace <pool-id> with the Pool ID determined in the previous step.

  5. Enable the Red Hat Enterprise Linux 7 Server Base repository:

    # subscription-manager repos --enable=rhel-7-server-rpms
  6. Enable the Red Hat Enterprise Linux 7 Server Extras repository:

    # subscription-manager repos --enable=rhel-7-server-extras-rpms
  7. Update the node:

    # yum update
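
If the list of available subscriptions is long, you can narrow it down with a search pattern, provided your version of subscription-manager supports the --matches option. This is a minimal sketch and the pattern is an example; see step 3 above:

# subscription-manager list --available --matches="*Ceph*"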

Once you register the nodes, enable repositories that provide the Red Hat Ceph Storage packages.

Note

For nodes that cannot access the Internet during the installation, provide the Base content by other means. Either use the Red Hat Satellite server in your environment or mount a local Red Hat Enterprise Linux 7 Server ISO image and point the Ceph cluster nodes to it. For additional details, contact Red Hat Support.

For more information on registering Ceph nodes with the Red Hat Satellite server, see the How to Register Ceph with Satellite 6 and How to Register Ceph with Satellite 5 articles on the Customer Portal.

Return to prerequisite checklist

2.3. Enabling the Red Hat Ceph Storage Repositories

Before you can install Red Hat Ceph Storage, you must choose an installation method. Red Hat Ceph Storage supports two installation methods:

  • Content Delivery Network (CDN)

    For Ceph Storage clusters with Ceph nodes that can connect directly to the Internet, use Red Hat Subscription Manager to enable the required Ceph repositories on each node.

  • Local Repository

    For Ceph Storage clusters where security measures preclude nodes from accessing the Internet, install Red Hat Ceph Storage 2 from a single software build delivered as an ISO image, which you can use to set up local repositories.

Important

Some Ceph package dependencies require versions that differ from the package versions included in the Extra Packages for Enterprise Linux (EPEL) repository. Disable the EPEL repository to ensure that only the Red Hat Ceph Storage packages are installed.

As root, disable the EPEL repository:

# yum-config-manager --disable epel

This command disables the EPEL repository defined in the /etc/yum.repos.d/epel.repo file.

2.3.1. Content Delivery Network (CDN)

CDN Installations for:

  • Ansible administration node

    As root, enable the Red Hat Ceph Storage 2 Tools repository:

    # subscription-manager repos --enable=rhel-7-server-rhceph-2-tools-rpms
  • Monitor Nodes

    As root, enable the Red Hat Ceph Storage 2 Monitor repository:

    # subscription-manager repos --enable=rhel-7-server-rhceph-2-mon-rpms
  • OSD Nodes

    As root, enable the Red Hat Ceph Storage 2 OSD repository:

    # subscription-manager repos --enable=rhel-7-server-rhceph-2-osd-rpms
  • RADOS Gateway and Client Nodes

    As root, enable the Red Hat Ceph Storage 2 Tools repository:

    # subscription-manager repos --enable=rhel-7-server-rhceph-2-tools-rpms

Return to prerequisite checklist

2.3.2. Local Repository

For ISO Installations:

  • Download the Red Hat Ceph Storage ISO

    1. Log in to the Red Hat Customer Portal.
    2. Click Downloads to visit the Software & Download center.
    3. In the Red Hat Ceph Storage area, click Download Software to download the latest version of the software.
    4. Copy the ISO image to the node.
    5. As root, mount the copied ISO image to the /mnt/rhcs2/ directory:

      # mkdir -p /mnt/rhcs2
      # mount -o loop /<path_to_iso>/rhceph-2.0-rhel-7-x86_64.iso /mnt/rhcs2
      Note

      For ISO installations using Ansible to install Red Hat Ceph Storage 2, mounting the ISO and creating a local repository is not required.

  • Create a Local Repository

    1. Copy the ISO image to the node.
    2. Follow the steps in this Knowledgebase solution. A hypothetical example of the resulting repository file is sketched after this list.
    Note

    If you are completely disconnected from the Internet, then you must use ISO images to receive any updates.
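
The Knowledgebase solution referenced in step 2 describes creating Yum repository definitions that point to the mounted ISO image. The following is a hypothetical sketch only; the file name is an example, and the repository directory depends on the layout of the ISO and on which component you are installing. Save a file such as /etc/yum.repos.d/rhcs2-local.repo with content similar to:

[rhcs2-local]
name=Red Hat Ceph Storage 2 local repository
# Replace <repo_directory> with the directory on the mounted ISO that
# contains the repodata/ directory for the component you are installing.
baseurl=file:///mnt/rhcs2/<repo_directory>/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release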

Return to prerequisite checklist

2.4. Configuring RAID Controllers

If a RAID controller with 1-2 GB of cache is installed on a host, enabling the write-back cache might increase small I/O write throughput. However, to enable it safely, the cache must be non-volatile.

Modern RAID controllers usually have super capacitors that provide enough power to drain volatile memory to non-volatile NAND memory during a power loss event. It is important to understand how a particular controller and firmware behave after power is restored.

Some RAID controllers require manual intervention. Hard drives typically advertise to the operating system whether their disk caches should be enabled or disabled by default. However, some RAID controllers and firmware do not provide such information, so verify that disk-level caches are disabled to avoid file system corruption.
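
To check whether a drive's volatile write cache is enabled, and to disable it if necessary, you can use the hdparm utility on drives that the operating system sees directly. This is a minimal sketch; the device name is an example, and drives hidden behind a RAID volume usually must be managed through the controller's own management utility instead:

# hdparm -W /dev/sda
# hdparm -W 0 /dev/sda

The first command reports the current write-caching setting, and the second command disables the drive's volatile write cache.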

Create a single RAID 0 volume with write-back cache enabled for each OSD data drive.

If Serial Attached SCSI (SAS) or SATA connected Solid-state Drive (SSD) disks are also present on the controller, investigate whether your controller and firmware support passthrough mode. Passthrough mode helps avoid caching logic, and generally results in much lower latency for fast media.

Return to prerequisite checklist

2.5. Configuring Network

All Ceph clusters require a public network. You must have a network interface card configured to a public network where Ceph clients can reach Ceph monitors and Ceph OSD nodes.

You might have a network interface card for a cluster network so that Ceph can conduct heart-beating, peering, replication, and recovery on a network separate from the public network.

Important

Red Hat does not recommend using a single network interface card for both a public and private network.

Configure the network interfaces and make the changes persistent so that the settings are identical after a reboot. Configure the following settings:

  • The BOOTPROTO parameter is usually set to none for static IP addresses.
  • The ONBOOT parameter must be set to yes. If it is set to no, Ceph might fail to peer on reboot.
  • If you intend to use IPv6 addressing, the IPv6 parameters, for example IPV6INIT, must be set to yes, with the exception of the IPV6_FAILURE_FATAL parameter. Also, edit the Ceph configuration file to instruct Ceph to use IPv6; otherwise, Ceph uses IPv4.

Navigate to the /etc/sysconfig/network-scripts/ directory and ensure that the ifcfg-<iface> settings for the public and cluster interfaces are properly configured.
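
As a minimal sketch, a static configuration for a public interface might look like the following, for example in an /etc/sysconfig/network-scripts/ifcfg-enp2s0 file. The interface name, IP address, prefix, and gateway are examples only; adjust them for your environment:

TYPE=Ethernet
BOOTPROTO=none
NAME=enp2s0
DEVICE=enp2s0
ONBOOT=yes
IPADDR=192.168.122.11
PREFIX=24
GATEWAY=192.168.122.1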

For details on configuring network interface scripts for Red Hat Enterprise Linux 7, see the Configuring a Network Interface Using ifcfg Files chapter in the Networking Guide for Red Hat Enterprise Linux 7.

For additional information on network configuration see the Network Configuration Reference chapter in the Configuration Guide for Red Hat Ceph Storage 2.

Return to prerequisite checklist

2.6. Configuring Firewall

Red Hat Ceph Storage 2 uses the firewalld service, which you must configure to suit your environment.

Monitor nodes use port 6789 for communication within the Ceph cluster. The Monitor node where calamari-lite is running uses port 8002 for access to the Calamari REST-based API.

On each Ceph OSD node, the OSD daemon uses several ports in the range 6800-7300:

  • One for communicating with clients and monitors over the public network
  • One for sending data to other OSDs over a cluster network, if available; otherwise, over the public network
  • One for exchanging heartbeat packets over a cluster network, if available; otherwise, over the public network

Ceph object gateway nodes use port 7480 by default. However, you can change the default port, for example to port 80. To use the SSL/TLS service, open port 443.

For more information about the public and cluster networks, see Section 2.5, “Configuring Network”.

Configuring Access

  1. On all Ceph nodes, as root, start the firewalld service, enable it to run on boot, and ensure that it is running:

    # systemctl enable firewalld
    # systemctl start firewalld
    # systemctl status firewalld
  2. As root, on all Ceph Monitor nodes, open port 6789 on the public network:

    # firewall-cmd --zone=public --add-port=6789/tcp
    # firewall-cmd --zone=public --add-port=6789/tcp --permanent

    To limit access based on the source address, run the following commands:

    # firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
    source address="<IP-address>/<prefix>" port protocol="tcp" \
    port="6789" accept"
    # firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
    source address="<IP-address>/<prefix>" port protocol="tcp" \
    port="6789" accept" --permanent
  3. If calamari-lite is running on the Ceph Monitor node, as root, open port 8002 on the public network:

    # firewall-cmd --zone=public --add-port=8002/tcp
    # firewall-cmd --zone=public --add-port=8002/tcp --permanent

    To limit access based on the source address, run the following commands:

    # firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
    source address="<IP-address>/<prefix>" port protocol="tcp" \
    port="8002" accept"
    # firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
    source address="<IP-address>/<prefix>" port protocol="tcp" \
    port="8002" accept" --permanent
  4. As root, on all OSD nodes, open ports 6800-7300:

    # firewall-cmd --zone=public --add-port=6800-7300/tcp
    # firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent

    If you have a separate cluster network, repeat the commands with the appropriate zone.

  5. As root, on all object gateway nodes, open the relevant port or ports on the public network.

    1. To open the default port 7480:

      # firewall-cmd --zone=public --add-port=7480/tcp
      # firewall-cmd --zone=public --add-port=7480/tcp --permanent

      To limit access based on the source address, run the following commands:

      # firewall-cmd --zone=public \
      --add-rich-rule="rule family="ipv4" \
      source address="<IP-address>/<prefix>" \
      port protocol="tcp" port="7480" accept"
      # firewall-cmd --zone=public \
      --add-rich-rule="rule family="ipv4" \
      source address="<IP-address>/<prefix>" \
      port protocol="tcp" port="7480" accept" --permanent
    2. Optionally, as root, if you changed the default Ceph object gateway port, for example to port 80, open this port:

      # firewall-cmd --zone=public --add-port=80/tcp
      # firewall-cmd --zone=public --add-port=80/tcp --permanent

      To limit access based on the source address, run the following commands:

      # firewall-cmd --zone=public \
      --add-rich-rule="rule family="ipv4" \
      source address="<IP-address>/<prefix>" \
      port protocol="tcp" port="80" accept"
      # firewall-cmd --zone=public \
      --add-rich-rule="rule family="ipv4" \
      source address="<IP-address>/<prefix>" \
      port protocol="tcp" port="80" accept" --permanent
    3. Optionally, as root, to use SSL/TLS, open port 443:

      # firewall-cmd --zone=public --add-port=443/tcp
      # firewall-cmd --zone=public --add-port=443/tcp --permanent

      To limit access based on the source address, run the following commands:

      # firewall-cmd --zone=public \
      --add-rich-rule="rule family="ipv4" \
      source address="<IP-address>/<prefix>" \
      port protocol="tcp" port="443" accept"
      # firewall-cmd --zone=public \
      --add-rich-rule="rule family="ipv4" \
      source address="<IP-address>/<prefix>" \
      port protocol="tcp" port="443" accept" --permanent
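
After applying the rules, you can verify the runtime configuration of the public zone on each node; the output varies with the ports and rules you have opened:

# firewall-cmd --zone=public --list-all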

For additional details on firewalld, see the Using Firewalls chapter in the Security Guide for Red Hat Enterprise Linux 7.

Return to prerequisite checklist

2.7. Configuring Network Time Protocol

Note

If you use Ansible to deploy a Red Hat Ceph Storage cluster, NTP is installed, configured, and enabled automatically during the deployment.

You must configure the Network Time Protocol (NTP) on all Ceph Monitor and OSD nodes. Ensure that Ceph nodes are NTP peers. NTP helps preempt issues that arise from clock drift.

  1. As root, install the ntp package:

    # yum install ntp
  2. As root, enable the NTP service to be persistent across a reboot:

    # systemctl enable ntpd
  3. As root, start the NTP service and ensure it is running:

    # systemctl start ntpd
    # systemctl status ntpd
  4. Ensure that NTP is synchronizing Ceph monitor node clocks properly:

    $ ntpq -p
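
If your environment has dedicated time sources, point all Ceph Monitor and OSD nodes at the same NTP servers so that their clocks stay closely synchronized. The following is a minimal sketch of server entries in the /etc/ntp.conf file; the host names are examples only:

server ntp1.example.com iburst
server ntp2.example.com iburst

After editing /etc/ntp.conf, restart the ntpd service for the change to take effect:

# systemctl restart ntpd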

For additional details on NTP for Red Hat Enterprise Linux 7, see the Configuring NTP Using ntpd chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7.

Return to prerequisite checklist

2.8. Creating an Ansible User with Sudo Access

Ansible must log in to Ceph nodes as a user that has passwordless root privileges, because Ansible needs to install software and configuration files without prompting for passwords.

Red Hat recommends creating an Ansible user on all Ceph nodes in the cluster.

Important

Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons.

A uniform user name across the cluster can improve ease of use, but avoid obvious user names, because intruders typically use them for brute-force attacks. For example, root, admin, or the product name are not advised.

The following procedure describes how to create an Ansible user with passwordless root privileges on a Ceph node. Substitute <username> with the user name you define.

  1. Use the ssh command to log in to a Ceph node:

    $ ssh <user_name>@<hostname>

    Replace <hostname> with the host name of the Ceph node.

  2. Create a new Ansible user and set a new password for this user:

    # adduser <username>
    # passwd <username>
  3. Ensure that the user you added has root privileges:

    # cat << EOF >/etc/sudoers.d/<username>
    <username> ALL = (root) NOPASSWD:ALL
    EOF
  4. Ensure the correct file permissions:

    # chmod 0440 /etc/sudoers.d/<username>
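
To confirm that passwordless sudo works for the new user, you can switch to that user and run a command through sudo; no password prompt should appear. The output shown is an example:

# su - <username>
$ sudo whoami
root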

Return to prerequisite checklist

2.9. Enabling Password-less SSH (Ansible Deployment Only)

Since Ansible will not prompt for a password, you must generate SSH keys on the administration node and distribute the public key to each Ceph node.

  1. Generate the SSH keys, but do not use sudo or the root user. Instead, use the Ansible user you created in Creating an Ansible User with Sudo Access. Leave the passphrase empty:

    $ ssh-keygen
    
    Generating public/private key pair.
    Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /ceph-admin/.ssh/id_rsa.
    Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
  2. Copy the key to each Ceph Node, replacing <username> with the user name you created in Creating an Ansible User with Sudo Access and <hostname> with a host name of a Ceph node:

    $ ssh-copy-id <username>@<hostname>
  3. Modify or create (using a utility such as vi) the ~/.ssh/config file on the Ansible administration node so that Ansible can log in to Ceph nodes as the user you created, without requiring you to specify the -u <username> option each time you run the ansible-playbook command. Replace <username> with the name of the user you created and <hostname> with the host name of a Ceph node:

    Host node1
       Hostname <hostname>
       User <username>
    Host node2
       Hostname <hostname>
       User <username>
    Host node3
       Hostname <hostname>
       User <username>

    After editing the ~/.ssh/config file on the Ansible administration node, ensure the permissions are correct:

    $ chmod 600 ~/.ssh/config
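
To verify that password-less SSH works through the aliases defined above, run a remote command from the Ansible administration node as the Ansible user; it should complete without prompting for a password. Here, node1 refers to the Host alias in the ~/.ssh/config file:

$ ssh node1 hostname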

Return to prerequisite checklist