Chapter 2. Pre-Installation Requirements
If you are installing Red Hat Ceph Storage v1.2.3 for the first time, you should review the pre-installation requirements first. Depending on your Linux distribution, you may need to adjust default settings and install required software before setting up a local repository and installing Calamari and Ceph.
2.1. Operating System
Red Hat Ceph Storage v1.2.3 and beyond requires a homogeneous operating system distribution and version (e.g., RHEL 6, RHEL 7) on x86_64 architecture for all Ceph nodes, including the Calamari cluster. We do not support clusters with heterogeneous operating systems and versions.
2.2. DNS Name Resolution
Ceph nodes must be able to resolve short host names, not just fully qualified domain names. Set up a default search domain to resolve short host names. To retrieve a Ceph node’s short host name, execute:
hostname -s
Each Ceph node MUST be able to ping every other Ceph node in the cluster by its short host name.
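For example, assuming a hypothetical search domain of example.com, /etc/resolv.conf (or the DOMAIN setting in your ifcfg-<iface> file) would contain a line such as:

search example.com

Then verify resolution by pinging a peer node (node2 below is a placeholder) by its short host name:

ping -c 1 node2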
2.3. NICs
All Ceph clusters require a public network. You MUST have a network interface card configured to a public network where Ceph clients can reach Ceph Monitors and Ceph OSDs. You SHOULD have a network interface card for a cluster network so that Ceph can conduct heartbeating, peering, replication and recovery on a network separate from the public network.
We DO NOT RECOMMEND using a single NIC for both a public and private network.
2.4. Network
Ensure that you configure your network interfaces and make them persistent so that the settings are identical on reboot. For example:
- BOOTPROTO will usually be none for static IP addresses.
- IPV6{opt} settings MUST be set to yes, except for FAILURE_FATAL, if you intend to use IPv6. You must also set your Ceph configuration file to tell Ceph to use IPv6 if you intend to use it (see the example after this list). Otherwise, Ceph will use IPv4.
- ONBOOT MUST be set to yes. If it is set to no, Ceph may fail to peer on reboot.
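For example, to tell Ceph to use IPv6, you would typically set the ms bind ipv6 option in the [global] section of your Ceph configuration file. This is a minimal sketch; verify the option against your Ceph version:

[global]
ms bind ipv6 = true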
Navigate to /etc/sysconfig/network-scripts and ensure that the ifcfg-<iface> settings for your public and cluster interfaces (assuming you will use a cluster network too [RECOMMENDED]) are properly configured.
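For example, a statically addressed public interface file such as ifcfg-eth0 might look like the following. The interface name, addresses and netmask are placeholders; substitute your own values:

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.0.2
NETMASK=255.255.255.0
GATEWAY=192.168.0.1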
For details on configuring network interface scripts for RHEL 6, see Ethernet Interfaces.
For details on configuring network interface scripts for RHEL 7, see Configuring a Network Interface Using ifcfg Files.
2.5. Firewall for RHEL 6
The default firewall configuration for RHEL is fairly strict. You MUST adjust your firewall settings on the Calamari node to allow inbound requests on port 80 so that clients in your network can access the Calamari web user interface.
Calamari also communicates with Ceph nodes via ports 2003, 4505 and 4506. You MUST open ports 80, 2003, and 4505-4506 on your Calamari node.
sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 80 -j ACCEPT
sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 2003 -j ACCEPT
sudo iptables -I INPUT 1 -i <iface> -m multiport -p tcp -s <ip-address>/<netmask> --dports 4505:4506 -j ACCEPT
You MUST open port 6789 on your public network on ALL Ceph monitor nodes.
sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 6789 -j ACCEPT
Finally, you MUST also open ports for OSD traffic (e.g., 6800-7100). Each OSD on each Ceph node needs three ports: one for talking to clients and monitors (public network); one for sending data to other OSDs (cluster network, if available; otherwise, public network); and one for heartbeating (cluster network, if available; otherwise, public network). For example, if you have 4 OSDs, open 4 x 3 = 12 ports.
sudo iptables -I INPUT 1 -i <iface> -m multiport -p tcp -s <ip-address>/<netmask> --dports 6800:6811 -j ACCEPT
Once you have finished configuring iptables, ensure that you make the changes persistent on each node so that they will be in effect when your nodes reboot. For example:
/sbin/service iptables save
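If the iptables service is not already enabled at boot on RHEL 6, you may also need to turn it on so that the saved rules are loaded when the node starts, for example:

sudo chkconfig iptables on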
2.6. Firewall For RHEL 7
The default firewall configuration for RHEL is fairly strict. You MUST adjust your firewall settings on the Calamari node to allow inbound requests on port 80 so that clients in your network can access the Calamari web user interface.
Calamari also communicates with Ceph nodes via ports 2003, 4505 and 4506. For firewalld, add ports 80, 2003, 4505 and 4506 to the public zone and ensure that you make the settings permanent so that they are enabled on reboot.
You MUST open ports 80, 2003, and 4505-4506 on your Calamari node.
sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=2003/tcp --permanent
sudo firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
You MUST open port 6789 on your public network on ALL Ceph monitor nodes.
sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
Finally, you MUST also open ports for OSD traffic (e.g., 6800-7100). Each OSD on each Ceph node needs three ports: one for talking to clients and monitors (public network); one for sending data to other OSDs (cluster network, if available; otherwise, public network); and one for heartbeating (cluster network, if available; otherwise, public network). For example, if you have 4 OSDs, open 4 x 3 = 12 ports.
sudo firewall-cmd --zone=public --add-port=6800-6811/tcp --permanent
Once the foregoing procedures are complete, reload the firewall configuration to ensure that the changes take effect.
sudo firewall-cmd --reload
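To confirm that the expected ports are open in the public zone after the reload, you can list them, for example:

sudo firewall-cmd --zone=public --list-ports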
For additional details on firewalld, see Using Firewalls.
2.7. NTP
You MUST install Network Time Protocol (NTP) on all Ceph monitor hosts and ensure that monitor hosts are NTP peers. You SHOULD consider installing NTP on Ceph OSD nodes, but it is not required. NTP helps preempt issues that arise from clock drift.
Install NTP
sudo yum install ntp
Make sure NTP starts on reboot.
For RHEL 6, execute:
sudo chkconfig ntpd on
For RHEL 7, execute:
sudo systemctl enable ntpd.service
Start the NTP service and ensure it’s running.
For RHEL 6, execute:
sudo /etc/init.d/ntpd start
For RHEL 7, execute:
sudo systemctl start ntpd
Then, check its status.
For RHEL 6, execute:
sudo /etc/init.d/ntpd status
For RHEL 7, execute:
sudo systemctl status ntpd
Ensure that NTP is synchronizing Ceph monitor node clocks properly.
ntpq -p
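If you also want monitor hosts to peer with one another as required above, you could add peer entries to /etc/ntp.conf on each monitor host. The host names mon1, mon2 and mon3 below are placeholders; for example, on mon1 you might add:

peer mon2
peer mon3

After editing the file, restart the ntpd service and re-run ntpq -p to confirm that the peers appear.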
For additional details on NTP for RHEL 6, see Network Time Protocol Setup.
For additional details on NTP for RHEL 7, see Configuring NTP Using ntpd.
2.8. Install SSH Server
For ALL Ceph Nodes perform the following steps:
Install an SSH server (if necessary) on each Ceph Node:
sudo yum install openssh-server
Ensure the SSH server is running on ALL Ceph nodes.
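For example, to check that the sshd service is running:

For RHEL 6, execute:

sudo service sshd status

For RHEL 7, execute:

sudo systemctl status sshd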
For additional details on OpenSSH for RHEL 6, see OpenSSH.
For additional details on OpenSSH for RHEL 7, see OpenSSH.
2.9. Create a Ceph User
The ceph-deploy utility must log in to a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.
ceph-deploy supports a --username option so you can specify any user that has password-less sudo (including root, although this is NOT recommended). To use ceph-deploy --username <username>, the user you specify must have password-less SSH access to the Ceph node, because ceph-deploy will not prompt you for a password.
We recommend creating a Ceph user on ALL Ceph nodes in the cluster. A uniform user name across the cluster may improve ease of use (not required), but you should avoid obvious user names, because hackers typically use them with brute force hacks (e.g., root, admin, <productname>). The following procedure, substituting <username> for the user name you define, describes how to create a user with passwordless sudo on a node called ceph-server.
Create a user on each Ceph node:

ssh user@ceph-server
sudo useradd -d /home/<username> -m <username>
sudo passwd <username>
For the user you added to each Ceph node, ensure that the user has sudo privileges and has requiretty disabled for the Ceph user.

cat << EOF >/etc/sudoers.d/<username>
<username> ALL = (root) NOPASSWD:ALL
Defaults:<username> !requiretty
EOF
Ensure the file permissions are correct.
sudo chmod 0440 /etc/sudoers.d/<username>
2.10. Enable Password-less SSH
Since ceph-deploy will not prompt for a password, you must generate SSH keys on the admin node and distribute the public key to each Ceph node. ceph-deploy will attempt to generate the SSH keys for initial monitors.
Generate the SSH keys, but do not use sudo or the root user. Leave the passphrase empty:

ssh-keygen

Generating public/private key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ceph-admin/.ssh/id_rsa.
Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
Copy the key to each Ceph node, replacing <username> with the user name you created in Create a Ceph User:

ssh-copy-id <username>@node1
ssh-copy-id <username>@node2
ssh-copy-id <username>@node3
(Recommended) Modify the ~/.ssh/config file of your ceph-deploy admin node so that ceph-deploy can log in to Ceph nodes as the user you created without requiring you to specify --username <username> each time you execute ceph-deploy. This has the added benefit of streamlining ssh and scp usage. Replace <username> with the user name you created:

Host node1
    Hostname node1
    User <username>
Host node2
    Hostname node2
    User <username>
Host node3
    Hostname node3
    User <username>
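Note that ssh may refuse to use this file if its permissions are too open, so you may also need to restrict them, for example:

chmod 600 ~/.ssh/config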
2.11. Adjust ulimit on Large Clusters
For users that will run Ceph administrator commands on large clusters (e.g., 1024 OSDs or more), create an /etc/security/limits.d/50-ceph.conf file on your admin node with the following contents:
<username> soft nproc unlimited
Replace <username> with the name of the non-root account that you will use to run Ceph administrator commands.
The root user’s ulimit is already set to "unlimited" by default on RHEL.
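To verify the new limit, log in as that user and check the maximum number of user processes, for example:

ulimit -u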
2.12. Disable RAID
If you have a RAID controller (not recommended), configure it to present each OSD drive as a separate device, for example as a single-drive RAID 0 volume or in JBOD mode.
2.13. Adjust PID Count
Hosts with high numbers of OSDs (e.g., > 20) may spawn a lot of threads, especially during recovery and re-balancing. Many Linux kernels default to a relatively small maximum number of threads (e.g., 32768). Check your default settings to see if they are suitable.
cat /proc/sys/kernel/pid_max
Consider setting kernel.pid_max to a higher number of threads. The theoretical maximum is 4,194,303 threads. For example, you could add the following to the /etc/sysctl.conf file to set it to the maximum:
kernel.pid_max = 4194303
To see the changes you made without a reboot, execute:
sudo sysctl -p
To verify the changes, execute:
sudo sysctl -a | grep kernel.pid_max
2.14. Hard Drive Prep on RHEL 6
Ceph aims for data safety, which means that when the Ceph Client receives notice that data was written to a storage drive, that data has actually been written to the storage drive (i.e., it is not sitting in a journal or drive cache still waiting to be written). On RHEL 6, disable the write cache if the journal is on a raw drive.
Use hdparm to disable write caching on OSD storage drives:
sudo hdparm -W 0 /<path-to>/<disk>
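To verify the current write-cache setting for a drive, you can query it without a value, for example:

sudo hdparm -W /<path-to>/<disk>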
RHEL 7 has a newer kernel that handles this automatically.
2.15. SELinux
SELinux is set to Enforcing by default. For Ceph Storage v1.2.3, set SELinux to Permissive or disable it entirely and ensure that your installation and cluster are working properly. To set SELinux to Permissive, execute the following:
sudo setenforce 0
To configure SELinux persistently, modify the configuration file at /etc/selinux/config.
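For example, to keep SELinux in Permissive mode across reboots, the SELINUX line in /etc/selinux/config would read:

SELINUX=permissive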
2.16. Disable EPEL on Ceph Cluster Nodes
Some Ceph package dependencies require versions that differ from the package versions from EPEL. Disable EPEL to ensure that you install the packages required for use with Ceph.
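For example, assuming the repository is installed under its usual repository ID of epel and the yum-utils package is available, you might disable it with:

sudo yum-config-manager --disable epel

Alternatively, set enabled=0 in the repository's file under /etc/yum.repos.d/.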
2.17. Install XFSProgs (RHEL 6)
Red Hat Ceph Storage for RHEL 6 requires xfsprogs for OSD nodes.
You should ensure that your Calamari node has already run subscription-manager to enable the Red Hat Ceph Storage repositories before enabling the Scalable File System repository.
As part of the Red Hat Ceph Storage product, Red Hat includes an entitlement to the Scalable File System set of packages for RHEL 6, which includes xfsprogs. On each Ceph Node, using sudo, enable the Scalable File System repo and install xfsprogs:
sudo subscription-manager repos --enable=rhel-scalefs-for-rhel-6-server-rpms
sudo yum install xfsprogs
