
Chapter 2. Requirements for Installing Red Hat Ceph Storage

Figure 2.1. Prerequisite Workflow


Before installing Red Hat Ceph Storage (RHCS), review the following requirements and prepare each Monitor, OSD, Metadata Server, and client node accordingly.

2.1. Prerequisites

  • Verify the hardware meets the minimum requirements. For details, see the Hardware Guide for Red Hat Ceph Storage 3.

2.2. Requirements Checklist for Installing Red Hat Ceph Storage

| Task | Required | Section | Recommendation |
|------|----------|---------|----------------|
| Verifying the operating system version | Yes | Section 2.3, “Operating system requirements for Red Hat Ceph Storage” |  |
| Enabling Ceph software repositories | Yes | Section 2.4, “Enabling the Red Hat Ceph Storage Repositories” |  |
| Using a RAID controller with OSD nodes | No | Section 2.5, “Considerations for Using a RAID Controller with OSD Nodes (optional)” | Enabling write-back caches on a RAID controller might result in increased small I/O write throughput for OSD nodes. |
| Configuring the network | Yes | Section 2.7, “Verifying the Network Configuration for Red Hat Ceph Storage” | At minimum, a public network is required. However, a private network for cluster communication is recommended. |
| Configuring a firewall | No | Section 2.8, “Configuring a firewall for Red Hat Ceph Storage” | A firewall can increase the level of trust for a network. |
| Creating an Ansible user | Yes | Section 2.9, “Creating an Ansible user with sudo access” | Creating the Ansible user is required on all Ceph nodes. |
| Enabling password-less SSH | Yes | Section 2.10, “Enabling Password-less SSH for Ansible” | Required for Ansible. |

Note

By default, ceph-ansible installs NTP as a requirement. If NTP is customized, refer to Configuring the Network Time Protocol for Red Hat Ceph Storage in Manually Installing Red Hat Ceph Storage to understand how NTP must be configured to function properly with Ceph.
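If you want to confirm that time synchronization is working on a node after deployment, you can query the NTP daemon. This is an optional check, not part of the official procedure; ntpq is available once the ntp package that ceph-ansible installs is present:

$ ntpq -p
$ timedatectl | grep -i 'ntp synchronized'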

2.3. Operating system requirements for Red Hat Ceph Storage

Red Hat Ceph Storage 3 requires Ubuntu 16.04.04 running on the AMD64 or Intel 64 architecture, with the same operating system version on all Ceph nodes in the storage cluster.

Important

Red Hat does not support clusters with heterogeneous operating systems or versions.
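To confirm that a node meets these requirements, you can check the release and CPU architecture on each Ceph node. This is an informal check rather than part of the official procedure; on Ubuntu 16.04.4 on a 64-bit Intel or AMD system the output looks similar to the following:

$ lsb_release -d
Description:	Ubuntu 16.04.4 LTS
$ dpkg --print-architecture
amd64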

Additional Resources

Return to requirements checklist

2.4. Enabling the Red Hat Ceph Storage Repositories

Before you can install Red Hat Ceph Storage, you must choose an installation method. Red Hat Ceph Storage supports two installation methods:

  • Content Delivery Network (CDN)

    For Ceph Storage clusters with Ceph nodes that can connect directly to the internet, use Red Hat Subscription Manager to enable the required Ceph repository.

  • Local Repository

    For Ceph Storage clusters where security measures preclude nodes from accessing the internet, install Red Hat Ceph Storage 3.3 from a single software build delivered as an ISO image, which you can use to set up a local software repository.

Access to the RHCS software repositories requires a valid Red Hat login and password on the Red Hat Customer Portal.

Important

Contact your account manager to obtain credentials for https://rhcs.download.redhat.com.

Prerequisites

  • Valid customer subscription.
  • For CDN installations, RHCS nodes must be able to connect to the internet.

Procedure

For CDN installations:

On the Ansible administration node, enable the Red Hat Ceph Storage 3 Tools repository:

$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'

$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'

$ sudo apt-get update
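As an optional check, not part of the official procedure, you can confirm that apt now sees the Tools repository after the update:

$ apt-cache policy | grep rhcs.download.redhat.com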

For ISO installations:

  1. Log in to the Red Hat Customer Portal.
  2. Click Downloads to visit the Software & Download center.
  3. In the Red Hat Ceph Storage area, click Download Software to download the latest version of the software.
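After downloading the ISO, one possible way to expose its contents as a local repository is to loop-mount the image and point apt at it with a file: source. The following is only a sketch: the mount point, the ISO file name, and the repository path inside the image are illustrative assumptions, so adjust them to match the actual layout of the ISO and the documentation delivered with it.

# The ISO file name, mount point, and in-image path below are examples only.
sudo mkdir -p /mnt/rhcs3
sudo mount -o loop /path/to/rhceph-3-ubuntu-x86_64.iso /mnt/rhcs3
# Add the mounted Tools repository as a local apt source, then refresh the package index.
sudo bash -c 'echo "deb [trusted=yes] file:/mnt/rhcs3/Tools $(lsb_release -sc) main" | tee /etc/apt/sources.list.d/Tools.list'
sudo apt-get update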

Additional Resources

Return to the requirements checklist

2.5. Considerations for Using a RAID Controller with OSD Nodes (optional)

If an OSD node has a RAID controller with 1-2GB of cache installed, enabling the write-back cache might result in increased small I/O write throughput. However, the cache must be non-volatile.

Modern RAID controllers usually have super capacitors that provide enough power to drain volatile memory to non-volatile NAND memory during a power loss event. It is important to understand how a particular controller and its firmware behave after power is restored.

Some RAID controllers require manual intervention after such an event. Hard drives typically advertise to the operating system whether their disk caches should be enabled or disabled by default, but certain RAID controllers and firmware do not provide this information. Verify that disk-level caches are disabled to avoid file system corruption.
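For drives that the operating system sees directly, the hdparm utility can report and change the on-disk write cache setting. This is a generic illustration rather than part of the official procedure; the device name is a placeholder, and drives hidden behind a RAID controller usually have to be inspected with the controller's own management tool instead:

# /dev/sda is a placeholder; repeat the check for each OSD data drive.
sudo hdparm -W /dev/sda    # report whether the drive write cache is enabled
sudo hdparm -W0 /dev/sda   # disable the drive write cache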

Create a single RAID 0 volume with write-back cache enabled for each Ceph OSD data drive.

If Serial Attached SCSI (SAS) or SATA connected Solid-state Drive (SSD) disks are also present on the RAID controller, then investigate whether the controller and firmware support pass-through mode. Enabling pass-through mode helps avoid caching logic, and generally results in much lower latency for fast media.

Return to requirements checklist

2.6. Considerations for Using NVMe with Object Gateway (optional)

If you plan to use the Object Gateway feature of Red Hat Ceph Storage and your OSD nodes have NVMe-based or SATA SSDs, consider following the procedures in Ceph Object Gateway for Production to use NVMe with LVM optimally. Those procedures explain how to use specially designed Ansible playbooks that place journals and bucket indexes together on SSDs, which can increase performance compared to having all journals on one device. Use that information in combination with this Installation Guide.

Return to requirements checklist

2.7. Verifying the Network Configuration for Red Hat Ceph Storage

All Red Hat Ceph Storage (RHCS) nodes require a public network. You must have a network interface card configured to a public network where Ceph clients can reach Ceph monitors and Ceph OSD nodes.

You might have a network interface card for a cluster network so that Ceph can conduct heartbeating, peering, replication, and recovery on a network separate from the public network.

Configure the network interface settings and make sure that the changes persist across reboots.

Important

Red Hat does not recommend using a single network interface card for both a public and private network.
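On Ubuntu 16.04, interface settings are typically made persistent in the /etc/network/interfaces file. The following sketch shows one possible layout with separate public and cluster interfaces; the interface names, addresses, and netmasks are placeholders that you must replace with values for your environment:

# /etc/network/interfaces (excerpt); names and addresses are examples only
# Public network interface
auto enp6s0
iface enp6s0 inet static
    address 192.168.0.11
    netmask 255.255.255.0
    gateway 192.168.0.1

# Optional cluster (private) network interface
auto enp7s0
iface enp7s0 inet static
    address 192.168.1.11
    netmask 255.255.255.0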

Additional Resources

Return to requirements checklist

2.8. Configuring a firewall for Red Hat Ceph Storage

Red Hat Ceph Storage (RHCS) uses the iptables service.

The Monitor daemons use port 6789 for communication within the Ceph storage cluster.

On each Ceph OSD node, the OSD daemons use several ports in the range 6800-7300:

  • One for communicating with clients and monitors over the public network
  • One for sending data to other OSDs over a cluster network, if available; otherwise, over the public network
  • One for exchanging heartbeat packets over a cluster network, if available; otherwise, over the public network

The Ceph Manager (ceph-mgr) daemons use ports in the range 6800-7300. Consider colocating the ceph-mgr daemons with the Ceph Monitors on the same nodes.

The Ceph Metadata Server nodes (ceph-mds) use ports in the range 6800-7300.

The Ceph Object Gateway nodes are configured by Ansible to use port 8080 by default. However, you can change the default port, for example to port 80.

To use the SSL/TLS service, open port 443.
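After the daemons are deployed and running, you can list the ports they are actually listening on, which is useful when verifying the rules below. This is an optional check, not part of the official procedure:

$ sudo ss -tlnp | grep ceph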

Prerequisite

  • Network hardware is connected.

Procedure

Run the following commands as the root user.

  1. On all Monitor nodes, open port 6789 on the public network:

    iptables -I INPUT 1 -i iface -p tcp -s IP_address/netmask_prefix --dport 6789 -j ACCEPT
    Replace
    • iface with the name of network interface card on the public network.
    • IP_address with the network address of the Monitor node.
    • netmask_prefix with the netmask in Classless Inter-domain Routing (CIDR) notation.

    Example

    $ sudo iptables -I INPUT 1 -i enp6s0 -p tcp -s 192.168.0.11/24 --dport 6789 -j ACCEPT

  2. On all OSD nodes, open ports 6800-7300 on the public network:

    iptables -I INPUT 1 -i iface -m multiport -p tcp -s IP_address/netmask_prefix --dports 6800:7300 -j ACCEPT
    Replace
    • iface with the name of network interface card on the public network.
    • IP_address with the network address of the OSD nodes.
    • netmask_prefix with the netmask in CIDR notation.

    Example

    $ sudo iptables -I INPUT 1 -i enp6s0 -m multiport -p tcp -s 192.168.0.21/24 --dports 6800:7300 -j ACCEPT

  3. On all Ceph Manager (ceph-mgr) nodes (usually the same nodes as Monitor ones), open ports 6800-7300 on the public network:

    iptables -I INPUT 1 -i iface -m multiport -p tcp -s IP_address/netmask_prefix --dports 6800:7300 -j ACCEPT
    Replace
    • iface with the name of network interface card on the public network.
    • IP_address with the network address of the Ceph Manager node.
    • netmask_prefix with the netmask in CIDR notation.

    Example

    $ sudo iptables -I INPUT 1 -i enp6s0 -m multiport -p tcp -s 192.168.0.21/24 --dports 6800:7300 -j ACCEPT

  4. On all Ceph Metadata Server (ceph-mds) nodes, open port 6800 on the public network:

    iptables -I INPUT 1 -i iface -m multiport -p tcp -s IP_address/netmask_prefix --dports 6800 -j ACCEPT
    Replace
    • iface with the name of network interface card on the public network.
    • IP_address with the network address of the Metadata Server node.
    • netmask_prefix with the netmask in CIDR notation.

    Example

    $ sudo iptables -I INPUT 1 -i enp6s0 -m multiport -p tcp -s 192.168.0.21/24 --dports 6800 -j ACCEPT

  5. On all Ceph Object Gateway nodes, open the relevant port or ports on the public network.

    1. To open the default Ansible configured port of 8080:

      iptables -I INPUT 1 -i iface -p tcp -s IP_address/netmask_prefix --dport 8080 -j ACCEPT
      Replace
      • iface with the name of the network interface card on the public network.
      • IP_address with the network address of the object gateway node.
      • netmask_prefix with the netmask in CIDR notation.

      Example

      $ sudo iptables -I INPUT 1 -i enp6s0 -p tcp -s 192.168.0.31/24 --dport 8080 -j ACCEPT

    2. Optional. If you installed the Ceph Object Gateway using Ansible and changed the default port from 8080, for example to port 80, open that port instead:

      iptables -I INPUT 1 -i iface -p tcp -s IP_address/netmask_prefix --dport 80 -j ACCEPT
      Replace
      • iface with the name of network interface card on the public network.
      • IP_address with the network address of the object gateway node.
      • netmask_prefix with the netmask in CIDR notation.

      Example

      $ sudo iptables -I INPUT 1 -i enp6s0 -p tcp -s 192.168.0.31/24 --dport 80 -j ACCEPT

    3. Optional. To use SSL/TLS, open port 443:

      iptables -I INPUT 1 -i iface -p tcp -s IP_address/netmask_prefix --dport 443 -j ACCEPT
      Replace
      • iface with the name of network interface card on the public network.
      • IP_address with the network address of the object gateway node.
      • netmask_prefix with the netmask in CIDR notation.

      Example

      $ sudo iptables -I INPUT 1 -i enp6s0 -p tcp -s 192.168.0.31/24 --dport 443 -j ACCEPT

  6. Make the changes persistent on all RHCS nodes in the storage cluster.

    1. Install the iptables-persistent package:

      $ sudo apt-get install iptables-persistent
    2. In the terminal UI that appears, select yes to save current IPv4 iptables rules to the /etc/iptables/rules.v4 file and current IPv6 iptables rules to the /etc/iptables/rules.v6 file.

      Note

      If you add a new iptables rule after installing iptables-persistent, add the new rule to the rules file:

      $ sudo iptables-save >> /etc/iptables/rules.v4
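To review the rules that are currently loaded, including the rule positions used by the -I INPUT 1 form above, list the INPUT chain:

$ sudo iptables -L INPUT -v -n --line-numbers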

Additional Resources

Return to requirements checklist

2.9. Creating an Ansible user with sudo access

Ansible must be able to log into all the Red Hat Ceph Storage (RHCS) nodes as a user that has root privileges to install software and create configuration files without prompting for a password. You must create an Ansible user with password-less root access on all nodes in the storage cluster when deploying and configuring a Red Hat Ceph Storage cluster with Ansible.

Prerequisite

  • Root or sudo access to all nodes in the storage cluster.

Procedure

  1. Log in to a Ceph node as the root user:

    ssh root@$HOST_NAME
    Replace
    • $HOST_NAME with the host name of the Ceph node.

    Example

    # ssh root@mon01

    Enter the root password when prompted.

  2. Create a new Ansible user:

    adduser $USER_NAME
    Replace
    • $USER_NAME with the new user name for the Ansible user.

    Example

    $ sudo adduser admin

    Enter the password for this user twice when prompted.

    Important

    Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks.

  3. Configure sudo access for the newly created user:

    cat << EOF >/etc/sudoers.d/$USER_NAME
    $USER_NAME ALL = (root) NOPASSWD:ALL
    EOF
    Replace
    • $USER_NAME with the new user name for the Ansible user.

    Example

    $ sudo bash -c 'cat << EOF > /etc/sudoers.d/admin
    admin ALL = (root) NOPASSWD:ALL
    EOF'

  4. Assign the correct file permissions to the new file:

    chmod 0440 /etc/sudoers.d/$USER_NAME
    Replace
    • $USER_NAME with the new user name for the Ansible user.

    Example

    $ sudo chmod 0440 /etc/sudoers.d/admin
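As an optional check, not part of the official procedure, confirm that the new user can run commands as root without being prompted for a password:

$ su - admin
$ sudo whoami
root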

Additional Resources

  • The Adding a New User section in the System Administrator’s Guide for Red Hat Enterprise Linux 7.

Return to the requirements checklist

2.10. Enabling Password-less SSH for Ansible

Generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password.

Prerequisites

  • Create an Ansible user with sudo access to all nodes, as described in Section 2.9, “Creating an Ansible user with sudo access”.

Procedure

Perform the following steps from the Ansible administration node as the Ansible user.

  1. Generate the SSH key pair, accept the default file name and leave the passphrase empty:

    [user@admin ~]$ ssh-keygen
  2. Copy the public key to all nodes in the storage cluster:

    ssh-copy-id $USER_NAME@$HOST_NAME
    Replace
    • $USER_NAME with the new user name for the Ansible user.
    • $HOST_NAME with the host name of the Ceph node.

    Example

    [user@admin ~]$ ssh-copy-id admin@ceph-mon01

  3. Create and edit the ~/.ssh/config file.

    Important

    By creating and editing the ~/.ssh/config file you do not have to specify the -u $USER_NAME option each time you execute the ansible-playbook command.

    1. Create the SSH config file:

      [user@admin ~]$ touch ~/.ssh/config
    2. Open the config file for editing. Set the Hostname and User options for each node in the storage cluster:

      Host node1
         Hostname $HOST_NAME
         User $USER_NAME
      Host node2
         Hostname $HOST_NAME
         User $USER_NAME
      ...
      Replace
      • $HOST_NAME with the host name of the Ceph node.
      • $USER_NAME with the new user name for the Ansible user.

      Example

      Host node1
         Hostname monitor
         User admin
      Host node2
         Hostname osd
         User admin
      Host node3
         Hostname gateway
         User admin

  4. Set the correct file permissions for the ~/.ssh/config file:

    [user@admin ~]$ chmod 600 ~/.ssh/config
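To confirm that key-based login and the ~/.ssh/config entries work, connect to one of the configured hosts. Using the node1 alias from the example above, the command should print the remote host name without prompting for a password:

[user@admin ~]$ ssh node1 hostname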

Additional Resources

  • The ssh_config(5) manual page
  • The OpenSSH chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7

Return to requirements checklist