Chapter 2. Prerequisites
Figure 2.1. Prerequisite Workflow

Before installing Red Hat Ceph Storage, review the following prerequisites and prepare each Ceph Monitor, OSD, and client node accordingly.
Table 2.1. Prerequisites Checks
| Task | Required | Section | Recommendation |
|---|---|---|---|
| Verifying the operating system version | Yes | Section 2.1, “Operating System” | |
| Registering Ceph nodes | Yes | Section 2.2, “Registering Red Hat Ceph Storage Nodes to CDN and Attaching Subscriptions” | |
| Enabling Ceph software repositories | Yes | Section 2.3, “Enabling the Red Hat Ceph Storage Repositories” | Two installation methods: CDN or local repository (ISO). |
| Using a RAID controller | No | Section 2.4, “Configuring RAID Controllers” | For OSD nodes only. |
| Configuring the network interface | Yes | Section 2.5, “Configuring Network” | Using a public network is required. Having a private network for cluster communication is optional, but recommended. |
| Configuring a firewall | No | Section 2.6, “Configuring Firewall” | |
| Configuring the Network Time Protocol | Yes | Section 2.7, “Configuring Network Time Protocol” | |
| Creating an Ansible user | No | Section 2.8, “Creating an Ansible User with Sudo Access” | Ansible deployment only. Creating the Ansible user is required on all Ceph nodes. |
| Enabling password-less SSH | No | Section 2.9, “Enabling Password-less SSH (Ansible Deployment Only)” | Ansible deployment only. |
2.1. Operating System
Red Hat Ceph Storage 2 and later requires Red Hat Enterprise Linux 7 Server running on AMD64 and Intel 64 architectures, with the same version, for example Red Hat Enterprise Linux 7.2, on all Ceph nodes.
Red Hat does not support clusters with heterogeneous operating systems and versions.
2.1.1. Adjusting the PID Count
Hosts with a high number of OSDs, that is, more than 12, can spawn a large number of threads, especially during recovery and rebalancing events. The kernel defaults to a relatively small maximum number of threads, typically 32768.
Check the current pid_max setting:
# cat /proc/sys/kernel/pid_max
As root, consider setting kernel.pid_max to a higher number of threads. The theoretical maximum is 4,194,303 threads. For example, add the following to the /etc/sysctl.conf file to set it to the maximum value:
kernel.pid_max = 4194303
As root, load the changes without rebooting:
# sysctl -p
As root, verify the changes:
# sysctl -a | grep kernel.pid_max
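If you prefer not to edit /etc/sysctl.conf directly, a drop-in file under /etc/sysctl.d/ achieves the same result. This is only a sketch; the file name is an arbitrary example:
# echo "kernel.pid_max = 4194303" > /etc/sysctl.d/90-ceph-pid-max.conf  # example file name
# sysctl --system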
2.2. Registering Red Hat Ceph Storage Nodes to CDN and Attaching Subscriptions
Ceph relies on packages in the Red Hat Enterprise Linux 7 Base content set. Each Ceph node must be able to access the full Red Hat Enterprise Linux 7 Base content.
To do so, register Ceph nodes that can connect to the Internet to the Red Hat Content Delivery Network (CDN) and attach appropriate Ceph subscriptions to the nodes:
Registering Ceph Nodes to CDN
Run all commands in this procedure as root.
Register a node with the Red Hat Subscription Manager. Run the following command and when prompted, enter your Red Hat Customer Portal credentials:
# subscription-manager register
Pull the latest subscription data from the CDN server:
# subscription-manager refresh
List all available subscriptions, find the appropriate Red Hat Ceph Storage subscription, and determine its Pool ID:
# subscription-manager list --available
Attach the subscriptions:
# subscription-manager attach --pool=<pool-id>
Replace <pool-id> with the Pool ID determined in the previous step.
Enable the Red Hat Enterprise Linux 7 Server Base repository:
# subscription-manager repos --enable=rhel-7-server-rpms
Enable the Red Hat Enterprise Linux 7 Server Extras repository:
# subscription-manager repos --enable=rhel-7-server-extras-rpms
Update the node:
# yum update
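Optionally, to confirm which repositories are enabled on a node after completing these steps, list them:
# subscription-manager repos --list-enabled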
Once you register the nodes, enable repositories that provide the Red Hat Ceph Storage packages.
For nodes that cannot access the Internet during the installation, provide the Base content by other means. Either use the Red Hat Satellite server in your environment, or mount a local Red Hat Enterprise Linux 7 Server ISO image and point the Ceph cluster nodes to it. For additional details, contact Red Hat Support.
For more information on registering Ceph nodes with the Red Hat Satellite server, see the How to Register Ceph with Satellite 6 and How to Register Ceph with Satellite 5 articles on the Customer Portal.
2.3. Enabling the Red Hat Ceph Storage Repositories
Before you can install Red Hat Ceph Storage, you must choose an installation method. Red Hat Ceph Storage supports two installation methods:
Content Delivery Network (CDN)
For Ceph Storage clusters with Ceph nodes that can connect directly to the Internet, use Red Hat Subscription Manager to enable the required Ceph repositories on each node.
Local Repository
For Ceph Storage clusters where security measures preclude nodes from accessing the Internet, install Red Hat Ceph Storage 2 from a single software build delivered as an ISO image, which allows you to install local repositories.
Some Ceph package dependencies require versions that differ from the package versions included in the Extra Packages for Enterprise Linux (EPEL) repository. Disable the EPEL repository to ensure that only the Red Hat Ceph Storage packages are installed.
As root, disable the EPEL repository:
# yum-config-manager --disable epel
This command disables the epel.repo file in /etc/yum.repos.d/.
2.3.1. Content Delivery Network (CDN)
CDN Installations for…
Ansible administration node
As root, enable the Red Hat Ceph Storage 2 Tools repository:
# subscription-manager repos --enable=rhel-7-server-rhceph-2-tools-rpms
Monitor Nodes
As root, enable the Red Hat Ceph Storage 2 Monitor repository:
# subscription-manager repos --enable=rhel-7-server-rhceph-2-mon-rpms
OSD Nodes
As root, enable the Red Hat Ceph Storage 2 OSD repository:
# subscription-manager repos --enable=rhel-7-server-rhceph-2-osd-rpms
RADOS Gateway and Client Nodes
As root, enable the Red Hat Ceph Storage 2 Tools repository:
# subscription-manager repos --enable=rhel-7-server-rhceph-2-tools-rpms
2.3.2. Local Repository
For ISO Installations…
Download the Red Hat Ceph Storage ISO
- Log in to the Red Hat Customer Portal.
- Click Downloads to visit the Software & Download center.
- In the Red Hat Ceph Storage area, click Download Software to download the latest version of the software.
- Copy the ISO image to the node.
As root, mount the copied ISO image to the /mnt/rhcs2/ directory:
# mkdir -p /mnt/rhcs2
# mount -o loop /<path_to_iso>/rhceph-2.0-rhel-7-x86_64.iso /mnt/rhcs2
Note: For ISO installations using Ansible to install Red Hat Ceph Storage 2, mounting the ISO and creating a local repository is not required.
Create a Local Repository
- Copy the ISO image to the node.
- Follow the steps in this Knowledgebase solution.
Note: If you are completely disconnected from the Internet, then you must use ISO images to receive any updates.
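For illustration only, a local repository definition for content from the mounted ISO might look like the following sketch. The repository ID, name, and baseurl directory are assumptions; check the directory layout of the mounted ISO and adjust the baseurl to point at a directory that contains repodata for the packages you need.
# /etc/yum.repos.d/rhcs2-local.repo  (hypothetical file; adjust paths to match the mounted ISO)
[rhcs2-local]
name=Red Hat Ceph Storage 2 (local ISO)
baseurl=file:///mnt/rhcs2/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release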
2.4. Configuring RAID Controllers
If a RAID controller with 1-2 GB of cache is installed on a host, enabling write-back caching might increase small I/O write throughput. However, to avoid data loss during a power failure, the cache must be non-volatile.
Modern RAID controllers usually have super capacitors that provide enough power to drain volatile memory to non-volatile NAND memory during a power loss event. It is important to understand how a particular controller and firmware behave after power is restored.
Some RAID controllers require manual intervention. Hard drives typically advertise to the operating system whether their disk caches should be enabled or disabled by default. However, certain RAID controllers or some firmware do not provide such information, so verify that disk level caches are disabled to avoid file system corruption.
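One way to check, and if necessary disable, the on-disk write cache of a drive that the controller exposes directly to the operating system is the hdparm utility. This is only a sketch; /dev/sdX is a placeholder for the actual device, and some RAID controllers do not pass these commands through to the physical drives:
# hdparm -W /dev/sdX     # report the current drive write-cache setting
# hdparm -W0 /dev/sdX    # disable the drive write cache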
Create a single RAID 0 volume with write-back cache enabled for each OSD data drive.
If Serial Attached SCSI (SAS) or SATA connected Solid-state Drive (SSD) disks are also present on the controller, investigate whether your controller and firmware support passthrough mode. Passthrough mode helps avoid caching logic, and generally results in much lower latency for fast media.
2.5. Configuring Network
All Ceph clusters require a public network. You must have a network interface card configured to a public network where Ceph clients can reach Ceph monitors and Ceph OSD nodes.
You might have a network interface card for a cluster network so that Ceph can conduct heartbeating, peering, replication, and recovery on a network separate from the public network.
Red Hat does not recommend using a single network interface card for both a public and private network.
Configure the network interfaces and make the changes persistent so that the settings are identical on reboot. Configure the following settings:
- The BOOTPROTO parameter is usually set to none for static IP addresses.
- The ONBOOT parameter must be set to yes. If it is set to no, Ceph might fail to peer on reboot.
- If you intend to use IPv6 addressing, the IPv6 parameters, for example IPV6INIT, must be set to yes, except for the IPV6_FAILURE_FATAL parameter. Also, edit the Ceph configuration file to instruct Ceph to use IPv6. Otherwise, Ceph will use IPv4.
Navigate to the /etc/sysconfig/network-scripts/ directory and ensure that the ifcfg-<iface> settings for the public and cluster interfaces are properly configured.
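For reference only, a minimal static-IP configuration for a public interface might look like the following sketch. The interface name eth0 and all address values are placeholders; adjust them to your environment:
# /etc/sysconfig/network-scripts/ifcfg-eth0  (example values only)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.0.10
PREFIX=24
GATEWAY=192.168.0.1
If you use IPv6, setting ms bind ipv6 = true in the [global] section of the Ceph configuration file is the usual way to instruct Ceph to use IPv6; verify this option against the Configuration Guide for your release.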
For details on configuring network interface scripts for Red Hat Enterprise Linux 7, see the Configuring a Network Interface Using ifcfg Files chapter in the Networking Guide for Red Hat Enterprise Linux 7.
For additional information on network configuration see the Network Configuration Reference chapter in the Configuration Guide for Red Hat Ceph Storage 2.
2.6. Configuring Firewall
Red Hat Ceph Storage 2 uses the firewalld service, which you must configure to suit your environment.
Monitor nodes use port 6789 for communication within the Ceph cluster. The monitor node where calamari-lite is running uses port 8002 for access to the Calamari REST-based API.
On each Ceph OSD node, the OSD daemon uses several ports in the range 6800-7300:
- One for communicating with clients and monitors over the public network
- One for sending data to other OSDs over a cluster network, if available; otherwise, over the public network
- One for exchanging heartbeat packets over a cluster network, if available; otherwise, over the public network
Ceph object gateway nodes use port 7480 by default. However, you can change the default port, for example to port 80. To use the SSL/TLS service, open port 443.
For more information about the public and cluster networks, see Section 2.5, “Configuring Network”.
Configuring Access
On all Ceph nodes, as root, start the firewalld service, enable it to run on boot, and ensure that it is running:
# systemctl enable firewalld
# systemctl start firewalld
# systemctl status firewalld
As root, on all Ceph Monitor nodes, open port 6789 on the public network:
# firewall-cmd --zone=public --add-port=6789/tcp
# firewall-cmd --zone=public --add-port=6789/tcp --permanent
To limit access based on the source address, run the following commands:
# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="<IP-address>/<prefix>" port protocol="tcp" \
port="6789" accept"
# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="<IP-address>/<prefix>" port protocol="tcp" \
port="6789" accept" --permanent
If calamari-lite is running on the Ceph Monitor node, as root, open port 8002 on the public network:
# firewall-cmd --zone=public --add-port=8002/tcp
# firewall-cmd --zone=public --add-port=8002/tcp --permanent
To limit access based on the source address, run the following commands:
# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="<IP-address>/<prefix>" port protocol="tcp" \
port="8002" accept"
# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="<IP-address>/<prefix>" port protocol="tcp" \
port="8002" accept" --permanent
As root, on all OSD nodes, open ports 6800-7300:
# firewall-cmd --zone=public --add-port=6800-7300/tcp
# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
If you have a separate cluster network, repeat the commands with the appropriate zone.
As root, on all object gateway nodes, open the relevant port or ports on the public network.
To open the default port 7480:
# firewall-cmd --zone=public --add-port=7480/tcp
# firewall-cmd --zone=public --add-port=7480/tcp --permanent
To limit access based on the source address, run the following commands:
# firewall-cmd --zone=public \
--add-rich-rule="rule family="ipv4" \
source address="<IP-address>/<prefix>" \
port protocol="tcp" port="7480" accept"
# firewall-cmd --zone=public \
--add-rich-rule="rule family="ipv4" \
source address="<IP-address>/<prefix>" \
port protocol="tcp" port="7480" accept" --permanent
Optionally, as root, if you changed the default Ceph object gateway port, for example to port 80, open this port:
# firewall-cmd --zone=public --add-port=80/tcp
# firewall-cmd --zone=public --add-port=80/tcp --permanent
To limit access based on the source address, run the following commands:
# firewall-cmd --zone=public \
--add-rich-rule="rule family="ipv4" \
source address="<IP-address>/<prefix>" \
port protocol="tcp" port="80" accept"
# firewall-cmd --zone=public \
--add-rich-rule="rule family="ipv4" \
source address="<IP-address>/<prefix>" \
port protocol="tcp" port="80" accept" --permanent
Optionally, as root, to use SSL/TLS, open port 443:
# firewall-cmd --zone=public --add-port=443/tcp
# firewall-cmd --zone=public --add-port=443/tcp --permanent
To limit access based on the source address, run the following commands:
# firewall-cmd --zone=public \
--add-rich-rule="rule family="ipv4" \
source address="<IP-address>/<prefix>" \
port protocol="tcp" port="443" accept"
# firewall-cmd --zone=public \
--add-rich-rule="rule family="ipv4" \
source address="<IP-address>/<prefix>" \
port protocol="tcp" port="443" accept" --permanent
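Optionally, to confirm the active firewall configuration after making these changes, list the rules for the public zone on each node:
# firewall-cmd --zone=public --list-all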
For additional details on firewalld, see the Using Firewalls chapter in the Security Guide for Red Hat Enterprise Linux 7.
2.7. Configuring Network Time Protocol
If you use Ansible to deploy a Red Hat Ceph Storage cluster, NTP is installed, configured, and enabled automatically during the deployment.
You must configure the Network Time Protocol (NTP) on all Ceph Monitor and OSD nodes. Ensure that Ceph nodes are NTP peers. NTP helps preempt issues that arise from clock drift.
As root, install the ntp package:
# yum install ntp
As root, enable the NTP service to be persistent across a reboot:
# systemctl enable ntpd
As root, start the NTP service and ensure it is running:
# systemctl start ntpd
# systemctl status ntpd
Ensure that NTP is synchronizing Ceph monitor node clocks properly:
$ ntpq -p
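If the default servers listed in /etc/ntp.conf are not reachable from your environment, point ntpd at time servers of your own before starting the service. The host names below are placeholders only:
server ntp1.example.com iburst    # placeholder; replace with your NTP server
server ntp2.example.com iburst    # placeholder; replace with your NTP server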
For additional details on NTP for Red Hat Enterprise Linux 7, see the Configuring NTP Using ntpd chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7.
2.8. Creating an Ansible User with Sudo Access
Ansible must log in to Ceph nodes as a user that has passwordless root privileges, because Ansible needs to install software and configuration files without prompting for passwords.
Red Hat recommends creating an Ansible user on all Ceph nodes in the cluster.
Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons.
A uniform user name across the cluster can improve ease of use, but avoid obvious user names, because intruders typically use them for brute-force attacks. For example, root, admin, or <productname> are not advised.
The following procedure, substituting <username> for the user name you define, describes how to create an Ansible user with passwordless root privileges on a Ceph node.
Use the ssh command to log in to a Ceph node:
$ ssh <user_name>@<hostname>
Replace <hostname> with the host name of the Ceph node.
Create a new Ansible user and set a new password for this user:
# adduser <username>
# passwd <username>
Ensure that the user you added has root privileges:
# cat << EOF >/etc/sudoers.d/<username>
<username> ALL = (root) NOPASSWD:ALL
EOF
Ensure the correct file permissions:
# chmod 0440 /etc/sudoers.d/<username>
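Optionally, to confirm that the new user has passwordless root privileges, log in as that user and run a harmless command through sudo; it should print root without prompting for a password:
$ sudo whoami
root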
2.9. Enabling Password-less SSH (Ansible Deployment Only)
Since Ansible will not prompt for a password, you must generate SSH keys on the administration node and distribute the public key to each Ceph node.
Generate the SSH keys, but do not use sudo or the root user. Instead, use the Ansible user you created in Creating an Ansible User with Sudo Access. Leave the passphrase empty:
$ ssh-keygen
Generating public/private key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ceph-admin/.ssh/id_rsa.
Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
Copy the key to each Ceph node, replacing <username> with the user name you created in Creating an Ansible User with Sudo Access and <hostname> with a host name of a Ceph node:
$ ssh-copy-id <username>@<hostname>
Modify or create (using a utility such as vi) the ~/.ssh/config file of the Ansible administration node so that Ansible can log in to Ceph nodes as the user you created without requiring you to specify the -u <username> option each time you execute the ansible-playbook command. Replace <username> with the name of the user you created and <hostname> with a host name of a Ceph node:
Host node1
   Hostname <hostname>
   User <username>
Host node2
   Hostname <hostname>
   User <username>
Host node3
   Hostname <hostname>
   User <username>
After editing the ~/.ssh/config file on the Ansible administration node, ensure the permissions are correct:
$ chmod 600 ~/.ssh/config
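Optionally, as a quick check before running any playbooks, connect to one of the nodes from the administration node. Here node1 refers to a Host entry in the ~/.ssh/config example above, and the login should complete without a password prompt:
$ ssh node1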
