Chapter 2. Requirements for Installing Red Hat Ceph Storage
Figure 2.1. Prerequisite Workflow
Before installing Red Hat Ceph Storage (RHCS), review the following requirements and prepare each Monitor, OSD, Metadata Server, and client node accordingly.
2.1. Prerequisites
- Verify the hardware meets the minimum requirements. For details, see the Hardware Guide for Red Hat Ceph Storage 3.
2.2. Requirements Checklist for Installing Red Hat Ceph Storage
Task | Required | Section | Recommendation |
---|---|---|---|
Verifying the operating system version | Yes | Section 2.3, “Operating system requirements for Red Hat Ceph Storage” | |
Enabling Ceph software repositories | Yes | Section 2.4, “Enabling the Red Hat Ceph Storage Repositories” | |
Using a RAID controller with OSD nodes | No | Section 2.5, “Considerations for Using a RAID Controller with OSD Nodes (optional)” | Enabling write-back caches on a RAID controller might result in increased small I/O write throughput for OSD nodes. |
Configuring the network | Yes | Section 2.7, “Verifying the Network Configuration for Red Hat Ceph Storage” | At minimum, a public network is required. However, a private network for cluster communication is recommended. |
Configuring a firewall | No | Section 2.8, “Configuring a firewall for Red Hat Ceph Storage” | A firewall can increase the level of trust for a network. |
Creating an Ansible user | Yes | Section 2.9, “Creating an Ansible user with sudo access” | Creating the Ansible user is required on all Ceph nodes. |
Enabling password-less SSH | Yes | Section 2.10, “Enabling Password-less SSH for Ansible” | Required for Ansible. |
By default, ceph-ansible installs NTP as a requirement. If NTP is customized, refer to Configuring the Network Time Protocol for Red Hat Ceph Storage in Manually Installing Red Hat Ceph Storage to understand how NTP must be configured to function properly with Ceph.
2.3. Operating system requirements for Red Hat Ceph Storage
Red Hat Ceph Storage 3 requires Ubuntu 16.04.04 on the AMD64 or Intel 64 architecture, with the same operating system version running on all Ceph nodes in the storage cluster.
Red Hat does not support clusters with heterogeneous operating systems or versions.
Additional Resources
- The Installation Guide for Red Hat Enterprise Linux 7.
- The System Administrator’s Guide for Red Hat Enterprise Linux 7.
2.4. Enabling the Red Hat Ceph Storage Repositories
Before you can install Red Hat Ceph Storage, you must choose an installation method. Red Hat Ceph Storage supports two installation methods:
Content Delivery Network (CDN)
For Ceph Storage clusters with Ceph nodes that can connect directly to the internet, use Red Hat Subscription Manager to enable the required Ceph repository.
Local Repository
For Ceph Storage clusters where security measures preclude nodes from accessing the internet, install Red Hat Ceph Storage 3.3 from a single software build delivered as an ISO image, which allows you to set up local repositories.
Access to the RHCS software repositories requires a valid Red Hat login and password on the Red Hat Customer Portal.
Contact your account manager to obtain credentials for https://rhcs.download.redhat.com.
Prerequisites
- Valid customer subscription.
- For CDN installations, RHCS nodes must be able to connect to the internet.
Procedure
For CDN installations:
On the Ansible administration node, enable the Red Hat Ceph Storage 3 Tools repository:
$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
$ sudo apt-get update
For ISO installations:
- Log in to the Red Hat Customer Portal.
- Click Downloads to visit the Software & Download center.
- In the Red Hat Ceph Storage area, click Download Software to download the latest version of the software.
Additional Resources
- The Registering and Managing Subscriptions chapter in the System Administrator’s Guide for Red Hat Enterprise Linux.
2.5. Considerations for Using a RAID Controller with OSD Nodes (optional)
If an OSD node has a RAID controller with 1-2GB of cache installed, enabling the write-back cache might result in increased small I/O write throughput. However, the cache must be non-volatile.
Modern RAID controllers usually have super capacitors that provide enough power to drain volatile memory to non-volatile NAND memory during a power loss event. It is important to understand how a particular controller and its firmware behave after power is restored.
Some RAID controllers require manual intervention. Hard drives typically advertise to the operating system whether their disk caches should be enabled or disabled by default. However, certain RAID controllers and some firmware do not provide such information. Verify that disk level caches are disabled to avoid file system corruption.
Create a single RAID 0 volume with the write-back cache enabled for each Ceph OSD data drive.
If Serial Attached SCSI (SAS) or SATA connected Solid-state Drive (SSD) disks are also present on the RAID controller, then investigate whether the controller and firmware support pass-through mode. Enabling pass-through mode helps avoid caching logic, and generally results in much lower latency for fast media.
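The disk-level cache check described above can be rehearsed as a dry run. The sketch below only prints the hdparm commands it would use; the device names are placeholders for your OSD data drives, and on a real node the printed commands would be run as root.

```shell
# Dry-run sketch: build and print (rather than execute) the hdparm commands
# that report and disable each drive's volatile on-disk write cache.
# Device names are placeholders; drives behind a RAID controller in
# pass-through mode appear as ordinary block devices.
cmds=""
for dev in /dev/sda /dev/sdb; do
  cmds="$cmds
hdparm -W  $dev    # report the current write-cache setting
hdparm -W0 $dev    # turn the drive's volatile write cache off"
done
echo "$cmds"
```

Printing the commands first makes it easy to review which drives will be touched before disabling any caches.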
2.6. Considerations for Using NVMe with Object Gateway (optional)
If you plan to use the Object Gateway feature of Red Hat Ceph Storage and your OSD nodes have NVMe-based SSDs or SATA SSDs, consider following the procedures in Ceph Object Gateway for Production to use NVMe with LVM optimally. These procedures explain how to use specially designed Ansible playbooks that place journals and bucket indexes together on SSDs, which can increase performance compared to having all journals on one device. Use the information on using NVMe with LVM optimally in combination with this Installation Guide.
2.7. Verifying the Network Configuration for Red Hat Ceph Storage
All Red Hat Ceph Storage (RHCS) nodes require a public network. You must have a network interface card configured to a public network where Ceph clients can reach Ceph monitors and Ceph OSD nodes.
You might have a network interface card for a cluster network so that Ceph can conduct heartbeating, peering, replication, and recovery on a network separate from the public network.
Configure the network interface settings and ensure the changes are persistent.
Red Hat does not recommend using a single network interface card for both a public and private network.
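The persistence requirement above can be sketched for the ifupdown system that Ubuntu 16.04 uses. The snippet below generates an example /etc/network/interfaces stanza for a public and an optional cluster interface into a scratch file; the interface names and addresses are illustrative, not prescribed values.

```shell
# Sketch only: write an ifupdown configuration (Ubuntu 16.04 style) for a
# public and a cluster interface into a scratch file. On a real node the
# target is /etc/network/interfaces; names and addresses are examples.
NETCFG=$(mktemp)
cat > "$NETCFG" <<'EOF'
# public network
auto enp6s0
iface enp6s0 inet static
    address 192.168.0.11
    netmask 255.255.255.0
    gateway 192.168.0.1

# cluster (private) network, recommended
auto enp7s0
iface enp7s0 inet static
    address 10.0.0.11
    netmask 255.255.255.0
EOF
cat "$NETCFG"
```

Keeping the public and cluster networks on separate stanzas mirrors the recommendation not to share one interface card between them.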
Additional Resources
- For more information on network configuration see the Network Configuration Reference chapter in the Configuration Guide for Red Hat Ceph Storage 3.
2.8. Configuring a firewall for Red Hat Ceph Storage
Red Hat Ceph Storage (RHCS) uses the iptables service.
The Monitor daemons use port 6789 for communication within the Ceph storage cluster.
On each Ceph OSD node, the OSD daemons use several ports in the range 6800-7300:
- One for communicating with clients and monitors over the public network
- One for sending data to other OSDs over a cluster network, if available; otherwise, over the public network
- One for exchanging heartbeat packets over a cluster network, if available; otherwise, over the public network
The Ceph Manager (ceph-mgr) daemons use ports in the range 6800-7300. Consider colocating the ceph-mgr daemons with Ceph Monitors on the same nodes.
The Ceph Metadata Server nodes (ceph-mds) use ports in the range 6800-7300.
The Ceph Object Gateway nodes are configured by Ansible to use port 8080 by default. However, you can change the default port, for example to port 80.
To use the SSL/TLS service, open port 443.
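The port plan above can be summarized in a small helper function. The function name is hypothetical and only restates the ports listed in this section; for the Object Gateway it shows the Ansible default of 8080, which would be 80 or 443 if you change the port or enable SSL/TLS.

```shell
# Hypothetical helper: map a Ceph daemon type to the port or port range this
# section says to open in the firewall.
ceph_ports() {
  case "$1" in
    mon)          echo "6789" ;;        # Monitor daemons
    osd|mgr|mds)  echo "6800:7300" ;;   # OSD, Manager, Metadata Server
    rgw)          echo "8080" ;;        # Object Gateway, Ansible default
    *)            echo "unknown daemon type: $1" >&2; return 1 ;;
  esac
}
ceph_ports mon
ceph_ports osd
ceph_ports rgw
```

A summary like this is handy when scripting firewall rules across node roles.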
Prerequisite
- Network hardware is connected.
Procedure
Run the following commands as the root user.

On all Monitor nodes, open port 6789 on the public network:

iptables -I INPUT 1 -i iface -p tcp -s IP_address/netmask_prefix --dport 6789 -j ACCEPT

Replace:
- iface with the name of the network interface card on the public network.
- IP_address with the network address of the Monitor node.
- netmask_prefix with the netmask in Classless Inter-Domain Routing (CIDR) notation.

Example

$ sudo iptables -I INPUT 1 -i enp6s0 -p tcp -s 192.168.0.11/24 --dport 6789 -j ACCEPT
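With several Monitor nodes, the rule above is repeated once per node. The following dry-run sketch only prints the commands it would run; the interface name and addresses are placeholders for your environment.

```shell
# Dry-run sketch: build and print the Monitor firewall rule for each node
# instead of executing it. Interface and IP addresses are placeholders.
IFACE=enp6s0
rules=""
for ip in 192.168.0.11 192.168.0.12 192.168.0.13; do
  rules="$rules
iptables -I INPUT 1 -i $IFACE -p tcp -s $ip/24 --dport 6789 -j ACCEPT"
done
echo "$rules"
```

Reviewing the generated rules before applying them helps catch a wrong interface or subnet early.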
On all OSD nodes, open ports 6800-7300 on the public network:

iptables -I INPUT 1 -i iface -m multiport -p tcp -s IP_address/netmask_prefix --dports 6800:7300 -j ACCEPT

Replace:
- iface with the name of the network interface card on the public network.
- IP_address with the network address of the OSD node.
- netmask_prefix with the netmask in CIDR notation.

Example

$ sudo iptables -I INPUT 1 -i enp6s0 -m multiport -p tcp -s 192.168.0.21/24 --dports 6800:7300 -j ACCEPT
On all Ceph Manager (ceph-mgr) nodes (usually the same nodes as Monitor ones), open ports 6800-7300 on the public network:

iptables -I INPUT 1 -i iface -m multiport -p tcp -s IP_address/netmask_prefix --dports 6800:7300 -j ACCEPT

Replace:
- iface with the name of the network interface card on the public network.
- IP_address with the network address of the Ceph Manager node.
- netmask_prefix with the netmask in CIDR notation.

Example

$ sudo iptables -I INPUT 1 -i enp6s0 -m multiport -p tcp -s 192.168.0.21/24 --dports 6800:7300 -j ACCEPT
On all Ceph Metadata Server (ceph-mds) nodes, open port 6800 on the public network:

iptables -I INPUT 1 -i iface -m multiport -p tcp -s IP_address/netmask_prefix --dports 6800 -j ACCEPT

Replace:
- iface with the name of the network interface card on the public network.
- IP_address with the network address of the Metadata Server node.
- netmask_prefix with the netmask in CIDR notation.

Example

$ sudo iptables -I INPUT 1 -i enp6s0 -m multiport -p tcp -s 192.168.0.21/24 --dports 6800 -j ACCEPT
On all Ceph Object Gateway nodes, open the relevant port or ports on the public network.

To open the default Ansible configured port of 8080:

iptables -I INPUT 1 -i iface -p tcp -s IP_address/netmask_prefix --dport 8080 -j ACCEPT

Replace:
- iface with the name of the network interface card on the public network.
- IP_address with the network address of the object gateway node.
- netmask_prefix with the netmask in CIDR notation.

Example

$ sudo iptables -I INPUT 1 -i enp6s0 -p tcp -s 192.168.0.31/24 --dport 8080 -j ACCEPT
Optional. If you installed Ceph Object Gateway using Ansible and changed the default port that Ansible configures Ceph Object Gateway to use from 8080, for example, to port 80, open this port:

iptables -I INPUT 1 -i iface -p tcp -s IP_address/netmask_prefix --dport 80 -j ACCEPT

Replace:
- iface with the name of the network interface card on the public network.
- IP_address with the network address of the object gateway node.
- netmask_prefix with the netmask in CIDR notation.

Example

$ sudo iptables -I INPUT 1 -i enp6s0 -p tcp -s 192.168.0.31/24 --dport 80 -j ACCEPT
Optional. To use SSL/TLS, open port 443:

iptables -I INPUT 1 -i iface -p tcp -s IP_address/netmask_prefix --dport 443 -j ACCEPT

Replace:
- iface with the name of the network interface card on the public network.
- IP_address with the network address of the object gateway node.
- netmask_prefix with the netmask in CIDR notation.

Example

$ sudo iptables -I INPUT 1 -i enp6s0 -p tcp -s 192.168.0.31/24 --dport 443 -j ACCEPT
Make the changes persistent on all RHCS nodes in the storage cluster.

Install the iptables-persistent package:

$ sudo apt-get install iptables-persistent

In the terminal UI that appears, select yes to save current IPv4 iptables rules to the /etc/iptables/rules.v4 file and current IPv6 iptables rules to the /etc/iptables/rules.v6 file.

Note: If you add a new iptables rule after installing iptables-persistent, add the new rule to the rules file:

$ sudo iptables-save >> /etc/iptables/rules.v4
Additional Resources
- For more information about public and cluster network, see Verifying the Network Configuration for Red Hat Ceph Storage.
2.9. Creating an Ansible user with sudo access
Ansible must be able to log in to all the Red Hat Ceph Storage (RHCS) nodes as a user that has root privileges to install software and create configuration files without prompting for a password. You must create an Ansible user with password-less root access on all nodes in the storage cluster when deploying and configuring a Red Hat Ceph Storage cluster with Ansible.
Prerequisite
- root or sudo access to all nodes in the storage cluster.
Procedure
Log in to a Ceph node as the root user:

ssh root@$HOST_NAME

Replace:
- $HOST_NAME with the host name of the Ceph node.

Example

# ssh root@mon01

Enter the root password when prompted.

Create a new Ansible user:
adduser $USER_NAME
Replace:
- $USER_NAME with the new user name for the Ansible user.

Example

$ sudo adduser admin

Enter the password for this user twice when prompted.

Important: Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks.

Configure sudo access for the newly created user:

cat << EOF > /etc/sudoers.d/$USER_NAME
$USER_NAME ALL = (root) NOPASSWD:ALL
EOF

Replace:
- $USER_NAME with the new user name for the Ansible user.

Example

# cat << EOF > /etc/sudoers.d/admin
admin ALL = (root) NOPASSWD:ALL
EOF
Assign the correct file permissions to the new file:
chmod 0440 /etc/sudoers.d/$USER_NAME
Replace:
- $USER_NAME with the new user name for the Ansible user.

Example

$ sudo chmod 0440 /etc/sudoers.d/admin
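The sudoers steps above can be rehearsed against a scratch directory before touching a real node. The user name admin follows the example; only the target path differs from the real procedure, where the file is /etc/sudoers.d/admin and the commands run as root.

```shell
# Sketch: write the NOPASSWD sudoers entry into a scratch directory and set
# the 0440 mode, mirroring the procedure. On a real node the target file is
# /etc/sudoers.d/admin.
USER_NAME=admin
SCRATCH=$(mktemp -d)
printf '%s ALL = (root) NOPASSWD:ALL\n' "$USER_NAME" > "$SCRATCH/$USER_NAME"
chmod 0440 "$SCRATCH/$USER_NAME"
cat "$SCRATCH/$USER_NAME"
```

The 0440 mode matters: sudo refuses world-writable files under /etc/sudoers.d, and a malformed entry can lock out sudo entirely, so verifying the generated content first is cheap insurance.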
Additional Resources
- The Adding a New User section in the System Administrator’s Guide for Red Hat Enterprise Linux 7.
2.10. Enabling Password-less SSH for Ansible
Generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password.
Prerequisites
- An Ansible user with sudo access on all nodes. See Section 2.9, “Creating an Ansible user with sudo access”.
Procedure
Perform the following steps from the Ansible administration node, as the Ansible user.
Generate the SSH key pair, accept the default file name and leave the passphrase empty:
[user@admin ~]$ ssh-keygen
Copy the public key to all nodes in the storage cluster:
ssh-copy-id $USER_NAME@$HOST_NAME
Replace:
- $USER_NAME with the new user name for the Ansible user.
- $HOST_NAME with the host name of the Ceph node.

Example

[user@admin ~]$ ssh-copy-id admin@ceph-mon01
Create and edit the ~/.ssh/config file.

Important: By creating and editing the ~/.ssh/config file you do not have to specify the -u $USER_NAME option each time you execute the ansible-playbook command.

Create the SSH config file:

[user@admin ~]$ touch ~/.ssh/config

Open the config file for editing. Set the Hostname and User options for each node in the storage cluster:

Host node1
    Hostname $HOST_NAME
    User $USER_NAME
Host node2
    Hostname $HOST_NAME
    User $USER_NAME
...
Replace:
- $HOST_NAME with the host name of the Ceph node.
- $USER_NAME with the new user name for the Ansible user.

Example

Host node1
    Hostname monitor
    User admin
Host node2
    Hostname osd
    User admin
Host node3
    Hostname gateway
    User admin
Set the correct file permissions for the ~/.ssh/config file:

[admin@admin ~]$ chmod 600 ~/.ssh/config
Additional Resources
- The ssh_config(5) manual page
- The OpenSSH chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7