Chapter 2. Requirements for Installing Red Hat Ceph Storage
Figure 2.1. Prerequisite Workflow

Before installing Red Hat Ceph Storage (RHCS), review the following requirements and prepare each Monitor, OSD, Metadata Server, and client node accordingly.
2.1. Prerequisites
- Verify the hardware meets the minimum requirements. For details, see the Hardware Guide for Red Hat Ceph Storage 3.
2.2. Requirements Checklist for Installing Red Hat Ceph Storage
| Task | Required | Section | Recommendation |
|---|---|---|---|
| Verifying the operating system version | Yes | Section 2.3, “Operating system requirements for Red Hat Ceph Storage” | |
| Enabling Ceph software repositories | Yes | Section 2.4, “Enabling the Red Hat Ceph Storage Repositories” | |
| Using a RAID controller with OSD nodes | No | Section 2.5, “Considerations for Using a RAID Controller with OSD Nodes (optional)” | Enabling write-back caches on a RAID controller might result in increased small I/O write throughput for OSD nodes. |
| Configuring the network | Yes | Section 2.7, “Verifying the Network Configuration for Red Hat Ceph Storage” | At minimum, a public network is required. However, a private network for cluster communication is recommended. |
| Configuring a firewall | No | Section 2.8, “Configuring a firewall for Red Hat Ceph Storage” | A firewall can increase the level of trust for a network. |
| Creating an Ansible user | Yes | Section 2.9, “Creating an Ansible user with sudo access” | Creating the Ansible user is required on all Ceph nodes. |
| Enabling password-less SSH | Yes | Section 2.10, “Enabling Password-less SSH for Ansible” | Required for Ansible. |
2.3. Operating system requirements for Red Hat Ceph Storage
Red Hat Ceph Storage 3 requires Ubuntu 16.04.04 running on AMD64 or Intel 64 architectures, with the same operating system version on all Ceph nodes in the storage cluster.
Red Hat does not support clusters with heterogeneous operating systems and versions.
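To confirm the installed operating system version and architecture on a node, you can use the standard Ubuntu commands shown below. This check is not part of the original procedure and is provided only as a quick verification.
$ lsb_release -d
$ uname -m
The first command prints the Ubuntu release description, and the second prints the machine architecture, which reports x86_64 on AMD64 or Intel 64 systems.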
Additional Resources
- The Installation Guide for Red Hat Enterprise Linux 7.
- The System Administrator’s Guide for Red Hat Enterprise Linux 7.
2.4. Enabling the Red Hat Ceph Storage Repositories
Before installing Red Hat Ceph Storage (RHCS), enable the appropriate software repositories on each node in the storage cluster. Access to the RHCS software repositories requires a valid Red Hat login and password on the Red Hat Customer Portal.
Contact your account manager to obtain credentials for https://rhcs.download.redhat.com.
Prerequisites
- Valid customer subscription.
- RHCS nodes can connect to the Internet.
Procedure
On the Ansible administration node, enable the Red Hat Ceph Storage 3 Tools repository:
$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
$ sudo apt-get update
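As an optional check that is not part of the original procedure, you can confirm that APT now sees the new repository by querying a package expected to be provided by the Tools repository, for example ceph-common:
$ apt-cache policy ceph-common
The output should list rhcs.download.redhat.com as an available source for the package.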
Additional Resources
- The Registering and Managing Subscriptions chapter in the System Administrator’s Guide for Red Hat Enterprise Linux.
2.5. Considerations for Using a RAID Controller with OSD Nodes (optional)
If an OSD node has a RAID controller with 1-2GB of cache installed, enabling the write-back cache might result in increased small I/O write throughput. However, the cache must be non-volatile.
Modern RAID controllers usually have super capacitors that provide enough power to drain volatile memory to non-volatile NAND memory during a power loss event. It is important to understand how a particular controller and its firmware behave after power is restored.
Some RAID controllers require manual intervention. Hard drives typically advertise to the operating system whether their disk caches should be enabled or disabled by default. However, certain RAID controllers and some firmware do not provide such information. Verify that disk level caches are disabled to avoid file system corruption.
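As a hedged example that is not part of the original guide, on drives that expose their cache setting directly to the operating system you can query and disable the on-disk write cache with hdparm. The device name /dev/sdb is a placeholder:
$ sudo hdparm -W /dev/sdb
$ sudo hdparm -W0 /dev/sdb
The first command reports whether write-caching is enabled, and the second disables it. Drives presented through some RAID controllers do not accept these commands, so consult the controller documentation in that case.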
With the write-back cache enabled, create a single RAID 0 volume for each Ceph OSD data drive.
If Serial Attached SCSI (SAS) or SATA connected Solid-state Drive (SSD) disks are also present on the RAID controller, then investigate whether the controller and firmware support pass-through mode. Enabling pass-through mode helps avoid caching logic, and generally results in much lower latency for fast media.
2.6. Considerations for Using NVMe with Object Gateway (optional)
If you plan to use the Object Gateway feature of Red Hat Ceph Storage and your OSD nodes have NVMe based SSDs or SATA SSDs, consider following the procedures in Ceph Object Gateway for Production to use NVMe with LVM optimally. These procedures explain how to use specially designed Ansible playbooks that place journals and bucket indexes together on SSDs, which can increase performance compared to having all journals on one device. The information on using NVMe with LVM optimally should be referenced in combination with this Installation Guide.
2.7. Verifying the Network Configuration for Red Hat Ceph Storage
All Red Hat Ceph Storage (RHCS) nodes require a public network. You must have a network interface card configured to a public network where Ceph clients can reach Ceph monitors and Ceph OSD nodes.
You might have a network interface card for a cluster network so that Ceph can conduct heart-beating, peering, replication, and recovery on a network separate from the public network.
Configure the network interface settings and ensure that the changes persist across reboots.
Red Hat does not recommend using a single network interface card for both a public and private network.
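As an illustration only, on Ubuntu 16.04 a persistent static configuration can be written to the /etc/network/interfaces file. The interface names and addresses below are placeholders, not values from this guide:
auto enp6s0
iface enp6s0 inet static
    address 192.168.0.11
    netmask 255.255.255.0
    gateway 192.168.0.1

auto enp7s0
iface enp7s0 inet static
    address 10.0.0.11
    netmask 255.255.255.0
In this sketch, enp6s0 carries the public network and enp7s0 carries the optional cluster network.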
Additional Resources
- For more information on network configuration, see the Network Configuration Reference chapter in the Configuration Guide for Red Hat Ceph Storage 3.
2.8. Configuring a firewall for Red Hat Ceph Storage
Red Hat Ceph Storage (RHCS) uses the iptables service.
The Monitor daemons use port 6789 for communication within the Ceph storage cluster.
On each Ceph OSD node, the OSD daemons use several ports in the range 6800-7300:
- One for communicating with clients and monitors over the public network
- One for sending data to other OSDs over a cluster network, if available; otherwise, over the public network
- One for exchanging heartbeat packets over a cluster network, if available; otherwise, over the public network
The Ceph Manager (ceph-mgr) daemons use ports in the range 6800-7300. Consider colocating the ceph-mgr daemons with the Ceph Monitors on the same nodes.
The Ceph Metadata Server nodes (ceph-mds) use port 6800.
The Ceph Object Gateway nodes use port 7480 by default. However, you can change the default port, for example to port 80.
To use the SSL/TLS service, open port 443.
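If you change the default Ceph Object Gateway port, the port itself is set in the Ceph configuration rather than in the firewall. As a hedged sketch, assuming the default civetweb frontend of Red Hat Ceph Storage 3 and a placeholder gateway instance name, the setting in ceph.conf looks similar to the following:
[client.rgw.gateway01]
rgw frontends = "civetweb port=80"
The firewall steps later in this procedure must then open the same port.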
Prerequisite
- Network hardware is connected.
Procedure
On all Monitor nodes, open port 6789 on the public network:
iptables -I INPUT 1 -i $NIC_NAME -p tcp -s $IP_ADDR/$NETMASK_PREFIX --dport 6789 -j ACCEPT
Replace:
- $NIC_NAME with the name of the network interface card on the public network.
- $IP_ADDR with the network address of the Monitor node.
- $NETMASK_PREFIX with the netmask in Classless Inter-Domain Routing (CIDR) notation.
Example
$ sudo iptables -I INPUT 1 -i enp6s0 -p tcp -s 192.168.0.11/24 --dport 6789 -j ACCEPT
On all OSD nodes, open ports 6800-7300 on the public network:
iptables -I INPUT 1 -i $NIC_NAME -m multiport -p tcp -s $IP_ADDR/$NETMASK_PREFIX --dports 6800:7300 -j ACCEPT
Replace:
- $NIC_NAME with the name of the network interface card on the public network.
- $IP_ADDR with the network address of the OSD nodes.
- $NETMASK_PREFIX with the netmask in CIDR notation.
Example
$ sudo iptables -I INPUT 1 -i enp6s0 -m multiport -p tcp -s 192.168.0.21/24 --dports 6800:7300 -j ACCEPT
On all Ceph Manager (ceph-mgr) nodes (usually the same nodes as the Monitor nodes), open ports 6800-7300 on the public network:
iptables -I INPUT 1 -i $NIC_NAME -m multiport -p tcp -s $IP_ADDR/$NETMASK_PREFIX --dports 6800:7300 -j ACCEPT
Replace:
- $NIC_NAME with the name of the network interface card on the public network.
- $IP_ADDR with the network address of the Ceph Manager nodes.
- $NETMASK_PREFIX with the netmask in CIDR notation.
Example
$ sudo iptables -I INPUT 1 -i enp6s0 -m multiport -p tcp -s 192.168.0.21/24 --dports 6800:7300 -j ACCEPT
On all Ceph Metadata Server (ceph-mds) nodes, open port 6800 on the public network:
iptables -I INPUT 1 -i $NIC_NAME -m multiport -p tcp -s $IP_ADDR/$NETMASK_PREFIX --dports 6800 -j ACCEPT
Replace:
- $NIC_NAME with the name of the network interface card on the public network.
- $IP_ADDR with the network address of the Metadata Server nodes.
- $NETMASK_PREFIX with the netmask in CIDR notation.
Example
$ sudo iptables -I INPUT 1 -i enp6s0 -m multiport -p tcp -s 192.168.0.21/24 --dports 6800 -j ACCEPT
On all Ceph Object Gateway nodes, open the relevant port or ports on the public network.
To open the default port 7480:
iptables -I INPUT 1 -i $NIC_NAME -p tcp -s $IP_ADDR/$NETMASK_PREFIX --dport 7480 -j ACCEPT
Replace:
- $NIC_NAME with the name of the network interface card on the public network.
- $IP_ADDR with the network address of the Object Gateway node.
- $NETMASK_PREFIX with the netmask in CIDR notation.
Example
$ sudo iptables -I INPUT 1 -i enp6s0 -p tcp -s 192.168.0.31/24 --dport 7480 -j ACCEPT
Optional. If you changed the default Ceph Object Gateway port, for example, to port 80, open this port:
iptables -I INPUT 1 -i $NIC_NAME -p tcp -s $IP_ADDR/$NETMASK_PREFIX --dport 80 -j ACCEPT
Replace:
- $NIC_NAME with the name of the network interface card on the public network.
- $IP_ADDR with the network address of the Object Gateway node.
- $NETMASK_PREFIX with the netmask in CIDR notation.
Example
$ sudo iptables -I INPUT 1 -i enp6s0 -p tcp -s 192.168.0.31/24 --dport 80 -j ACCEPT
Optional. To use SSL/TLS, open port 443:
iptables -I INPUT 1 -i $NIC_NAME -p tcp -s $IP_ADDR/$NETMASK_PREFIX --dport 443 -j ACCEPT
Replace:
- $NIC_NAME with the name of the network interface card on the public network.
- $IP_ADDR with the network address of the Object Gateway node.
- $NETMASK_PREFIX with the netmask in CIDR notation.
Example
$ sudo iptables -I INPUT 1 -i enp6s0 -p tcp -s 192.168.0.31/24 --dport 443 -j ACCEPT
Make the changes persistent on all RHCS nodes in the storage cluster.
Install the iptables-persistent package:
$ sudo apt-get install iptables-persistent
In the terminal UI that appears, select yes to save the current IPv4 iptables rules to the /etc/iptables/rules.v4 file and the current IPv6 iptables rules to the /etc/iptables/rules.v6 file.
Note: If you add a new iptables rule after installing iptables-persistent, add the new rule to the rules file:
$ sudo iptables-save >> /etc/iptables/rules.v4
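As an optional check that is not part of the original procedure, you can list the active INPUT rules to confirm that the Ceph ports are open before saving them:
$ sudo iptables -L INPUT -n --line-numbers
The output should include ACCEPT rules for port 6789, the 6800:7300 range, and any Object Gateway ports you opened.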
Additional Resources
- For more information about the public and cluster networks, see Verifying the Network Configuration for Red Hat Ceph Storage.
2.9. Creating an Ansible user with sudo access
Ansible must be able to log into all the Red Hat Ceph Storage (RHCS) nodes as a user that has root privileges to install software and create configuration files without prompting for a password. You must create an Ansible user with password-less root access on all nodes in the storage cluster when deploying and configuring a Red Hat Ceph Storage cluster with Ansible.
Prerequisite
- Having root or sudo access to all nodes in the storage cluster.
Procedure
Log in to a Ceph node as the root user:
ssh root@$HOST_NAME
Replace:
- $HOST_NAME with the host name of the Ceph node.
Example
# ssh root@mon01
Enter the root password when prompted.
Create a new Ansible user:
adduser $USER_NAME
Replace:
- $USER_NAME with the new user name for the Ansible user.
Example
$ sudo adduser admin
Enter the password for this user twice when prompted.
Important: Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks.
Configure sudo access for the newly created user:
cat << EOF >/etc/sudoers.d/$USER_NAME
$USER_NAME ALL = (root) NOPASSWD:ALL
EOF
Replace:
- $USER_NAME with the new user name for the Ansible user.
Example
# cat << EOF >/etc/sudoers.d/admin
admin ALL = (root) NOPASSWD:ALL
EOF
Assign the correct file permissions to the new file:
chmod 0440 /etc/sudoers.d/$USER_NAME
Replace:
- $USER_NAME with the new user name for the Ansible user.
Example
$ sudo chmod 0440 /etc/sudoers.d/admin
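As an optional check that is not part of the original procedure, switch to the new user and confirm that sudo does not prompt for a password. The admin user name below is the example name used above:
# su - admin
$ sudo true
$ exit
If sudo true returns without a password prompt, the sudoers entry is working.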
Additional Resources
- The Adding a New User section in the System Administrator’s Guide for Red Hat Enterprise Linux 7.
2.10. Enabling Password-less SSH for Ansible
Generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password.
Prerequisites
- An Ansible user with sudo access exists on all nodes in the storage cluster. See Section 2.9, “Creating an Ansible user with sudo access”.
Procedure
Perform the following steps from the Ansible administration node as the Ansible user.
Generate the SSH key pair, accept the default file name and leave the passphrase empty:
[user@admin ~]$ ssh-keygen
Copy the public key to all nodes in the storage cluster:
ssh-copy-id $USER_NAME@$HOST_NAME
Replace:
- $USER_NAME with the new user name for the Ansible user.
- $HOST_NAME with the host name of the Ceph node.
Example
[user@admin ~]$ ssh-copy-id ceph-admin@ceph-mon01
Create and edit the ~/.ssh/config file.
Important: By creating and editing the ~/.ssh/config file, you do not have to specify the -u $USER_NAME option each time you execute the ansible-playbook command.
Create the SSH config file:
[user@admin ~]$ touch ~/.ssh/config
Open the config file for editing. Set the Hostname and User options for each node in the storage cluster:
Host node1
   Hostname $HOST_NAME
   User $USER_NAME
Host node2
   Hostname $HOST_NAME
   User $USER_NAME
...
Replace:
- $HOST_NAME with the host name of the Ceph node.
- $USER_NAME with the new user name for the Ansible user.
Example
Host node1
   Hostname monitor
   User admin
Host node2
   Hostname osd
   User admin
Host node3
   Hostname gateway
   User admin
Set the correct file permissions for the ~/.ssh/config file:
[admin@admin ~]$ chmod 600 ~/.ssh/config
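As an optional check that is not part of the original procedure, confirm password-less access by using one of the host aliases defined in ~/.ssh/config, for example node1:
[user@admin ~]$ ssh node1 hostname
The command prints the remote host name without prompting for a password.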
Additional Resources
- The ssh_config(5) manual page
- The OpenSSH chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7
