Chapter 2. Prerequisites
Figure 2.1. Prerequisite Workflow
Before installing Red Hat Ceph Storage, review the following prerequisites and prepare each Ceph Monitor, OSD, and client node accordingly.
Table 2.1. Prerequisites Checks
| Task | Required | Section | Recommendation |
|---|---|---|---|
| Verifying the operating system version | Yes | Section 2.1, “Operating System” | |
| Enabling Ceph software repositories | Yes | Section 2.2, “Enabling the Red Hat Ceph Storage Repositories” | Two installation methods: online repositories or a local ISO repository. |
| Using a RAID controller | No | Section 2.3, “Configuring RAID Controllers” | For OSD nodes only. |
| Configuring a network interface | Yes | Section 2.4, “Configuring Network” | Using a public network is required. Having a private network for cluster communication is optional, but recommended. |
| Configuring a firewall | No | Section 2.5, “Configuring Firewall” | |
| Configuring the Network Time Protocol | Yes | Section 2.6, “Configuring Network Time Protocol” | |
| Creating an Ansible user | No | Section 2.7, “Creating an Ansible User with Sudo Access” | Ansible deployment only. Creating the Ansible user is required on all Ceph nodes. |
| Enabling password-less SSH | No | Section 2.8, “Enabling Password-less SSH (Ansible Deployment Only)” | Ansible deployment only. |
2.1. Operating System
Red Hat Ceph Storage 2 and later requires Ubuntu 16.04 running on the AMD64 or Intel 64 architecture, with the same operating system version on all Ceph nodes.
Red Hat does not support clusters with heterogeneous operating systems or versions.
2.1.1. Adjusting the PID Count
Hosts with a high number of OSDs (more than 12) can spawn a large number of threads, especially during recovery and re-balancing events. The kernel default maximum number of threads is relatively small, typically `32768`.

Check the current `pid_max` setting:

```
# cat /proc/sys/kernel/pid_max
```

As `root`, consider setting `kernel.pid_max` to a higher number of threads. The theoretical maximum is 4,194,303 threads. For example, add the following to the `/etc/sysctl.conf` file to set it to the maximum value:

```
kernel.pid_max = 4194303
```

As `root`, load the changes without rebooting:

```
# sysctl -p
```

As `root`, verify the changes:

```
# sysctl -a | grep kernel.pid_max
```
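The check-before-raising decision above can be sketched as a small read-only script. This is an illustrative sketch, not Red Hat tooling; the `at_ceiling` helper is an assumption for the example:

```shell
# Read-only sketch: report whether kernel.pid_max is already at the
# theoretical ceiling (4194303). Raising it still requires editing
# /etc/sysctl.conf and running `sysctl -p` as root, as described above.
PID_MAX_CEILING=4194303

at_ceiling() {  # usage: at_ceiling <current_pid_max>
    [ "$1" -ge "$PID_MAX_CEILING" ]
}

current=$(cat /proc/sys/kernel/pid_max 2>/dev/null || echo 0)
if at_ceiling "$current"; then
    echo "kernel.pid_max ($current) is already at the maximum"
else
    echo "kernel.pid_max is $current; consider raising it on dense OSD hosts"
fi
```

Because the script only reads `/proc`, it is safe to run on any node before deciding whether to change `sysctl.conf`.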
2.2. Enabling the Red Hat Ceph Storage Repositories
Before you can install Red Hat Ceph Storage, you must choose an installation method. Red Hat Ceph Storage supports two installation methods:
Online Repositories
For Ceph Storage clusters with Ceph nodes that can connect directly to the Internet, you can use online repositories from the https://rhcs.download.redhat.com/ubuntu site. You will need the `Customer Name` and `Customer Password` received from https://rhcs.download.redhat.com to be able to use the repositories.

Important: Contact your account manager to obtain credentials for https://rhcs.download.redhat.com.

Local Repository
For Ceph Storage clusters where security measures preclude nodes from accessing the Internet, install Red Hat Ceph Storage 2 from a single software build delivered as an ISO image, which allows you to install local repositories.
2.2.1. Online Repositories
Online Installations for…
Ansible Administration Node
As `root`, enable the Red Hat Ceph Storage 2 Tools repository:

```
$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/2-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
$ sudo apt-get update
```
Monitor Nodes
As `root`, enable the Red Hat Ceph Storage 2 Monitor repository:

```
$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/2-updates/MON $(lsb_release -sc) main | tee /etc/apt/sources.list.d/MON.list'
$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
$ sudo apt-get update
```
OSD Nodes
As `root`, enable the Red Hat Ceph Storage 2 OSD repository:

```
$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/2-updates/OSD $(lsb_release -sc) main | tee /etc/apt/sources.list.d/OSD.list'
$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
$ sudo apt-get update
```
Ceph Object Gateway and Client Nodes
As `root`, enable the Red Hat Ceph Storage 2 Tools repository:

```
$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/2-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
$ sudo apt-get update
```
If you install Red Hat Ceph Storage manually, create an APT preferences file on all nodes. The file ensures that the `apt-get` utility uses Red Hat Ceph Storage packages from the Red Hat repositories and not from the Ubuntu Xenial repositories, which can include a newer version of the `ceph` package. Using the `ceph` packages from the Ubuntu Xenial repositories causes the installation to fail with an `unmet dependencies` error.

To create an APT preferences file on all nodes:
Create a new file named `rhcs2` in the `/etc/apt/preferences.d/` directory:

```
$ sudo touch /etc/apt/preferences.d/rhcs2
```
Add the following content to the file:

```
Explanation: Prefer Red Hat packages
Package: *
Pin: release o=/Red Hat/
Pin-Priority: 999
```
2.2.2. Local Repository
ISO Installations
Download the Red Hat Ceph Storage ISO
- Visit the Red Hat Ceph Storage for Ubuntu page on the Customer Portal to obtain the Red Hat Ceph Storage installation ISO image files.
- Copy the ISO image to the node.
As `root`, mount the copied ISO image to the `/mnt/rhcs2/` directory:

```
$ sudo mkdir -p /mnt/rhcs2
$ sudo mount -o loop /<path_to_iso>/rhceph-2.0-ubuntu-x86_64.iso /mnt/rhcs2
```

Note: For ISO installations using Ansible to install Red Hat Ceph Storage 2, mounting the ISO and creating a local repository is not required.
Create a Local Repository
- Copy the ISO image to the node.
As `root`, mount the copied ISO image:

```
$ sudo mkdir -p /mnt/<new_directory>
$ sudo mount -o loop /<path_to_iso_file> /mnt/<new_directory>
```

As `root`, add the ISO image as a software source:

```
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository "deb file:/mnt/<new_directory> $(lsb_release -sc) main"
```

Note: If you are completely disconnected from the Internet, you must use ISO images to receive any updates.
2.3. Configuring RAID Controllers
If a RAID controller with 1-2 GB of cache is installed on a host, enabling write-back caching might increase small I/O write throughput. However, to prevent data loss during a power failure, the cache must be non-volatile.
Modern RAID controllers usually have super capacitors that provide enough power to drain volatile memory to non-volatile NAND memory during a power loss event. It is important to understand how a particular controller and firmware behave after power is restored.
Some RAID controllers require manual intervention. Hard drives typically advertise to the operating system whether their disk caches should be enabled or disabled by default. However, certain RAID controllers or some firmware do not provide such information, so verify that disk level caches are disabled to avoid file system corruption.
Create a single RAID 0 volume with write-back cache enabled for each OSD data drive.
If Serial Attached SCSI (SAS) or SATA connected Solid-state Drive (SSD) disks are also present on the controller, investigate whether your controller and firmware support passthrough mode. Passthrough mode bypasses the caching logic and generally results in much lower latency for fast media.
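As a starting point for verifying disk-level caches, modern Linux kernels expose the cache mode the kernel believes a block device is using under sysfs. The helper below is an illustrative sketch, not Red Hat tooling; drives behind a RAID controller must still be checked with the controller's own CLI:

```shell
# Sketch: report the kernel's view of a block device's write-cache mode.
# Modern kernels expose it at /sys/block/<dev>/queue/write_cache, with the
# values "write back" or "write through".
write_cache_mode() {  # usage: write_cache_mode <sysfs_queue_dir>
    if [ -r "$1/write_cache" ]; then
        cat "$1/write_cache"
    else
        echo "unknown"   # attribute missing: older kernel or virtual device
    fi
}

write_cache_mode /sys/block/sda/queue
```

A result of `write back` on a drive without a non-volatile cache is the case to investigate before putting OSD data on it.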
2.4. Configuring Network
All Ceph clusters require a public network. You must have a network interface card configured to a public network where Ceph clients can reach Ceph monitors and Ceph OSD nodes.
You might have a network interface card for a cluster network so that Ceph can conduct heart-beating, peering, replication, and recovery on a network separate from the public network.
Red Hat does not recommend using a single network interface card for both a public and private network.
For additional information on network configuration see the Network Configuration Reference chapter in the Configuration Guide for Red Hat Ceph Storage 2.
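For illustration, the public and cluster networks are typically declared in the `[global]` section of the Ceph configuration file; the subnets below are placeholders for your own ranges:

```
[global]
public network = 192.168.0.0/24
cluster network = 10.0.0.0/24
```

With this in place, OSDs use the cluster network for replication and heartbeat traffic, while clients and monitors communicate over the public network.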
2.5. Configuring Firewall
Red Hat Ceph Storage 2 uses the `iptables` service, which you must configure to suit your environment.

Monitor nodes use port `6789` for communication within the Ceph cluster. The monitor node where `calamari-lite` is running uses port `8002` for access to the Calamari REST-based API.

On each Ceph OSD node, the OSD daemons use several ports in the range `6800-7300`:
- One for communicating with clients and monitors over the public network
- One for sending data to other OSDs over a cluster network, if available; otherwise, over the public network
- One for exchanging heartbeat packets over a cluster network, if available; otherwise, over the public network
Ceph object gateway nodes use port `7480` by default. However, you can change the default port, for example to port `80`. To use the SSL/TLS service, open port `443`.
For more information about the public and cluster networks, see the Network Configuration Reference chapter in the Configuration Guide for Red Hat Ceph Storage 2.
Configuring Access
As `root`, on all Ceph Monitor nodes, open port `6789` on the public network:

```
$ sudo iptables -I INPUT 1 -i <iface> -p tcp -s <IP-address>/<prefix> --dport 6789 -j ACCEPT
```
If `calamari-lite` is running on the Ceph Monitor node, as `root`, open port `8002` on the public network:

```
$ sudo iptables -I INPUT 1 -i <iface> -p tcp -s <IP-address>/<prefix> --dport 8002 -j ACCEPT
```
As `root`, on all OSD nodes, open ports `6800-7300`:

```
$ sudo iptables -I INPUT 1 -i <iface> -m multiport -p tcp -s <IP-address>/<prefix> --dports 6800:7300 -j ACCEPT
```

Where `<IP-address>` is the network address of the OSD nodes.
root
, on all object gateway nodes, open the relevant port or ports on the public network.To open the default port
7480
:$ sudo iptables -I INPUT 1 -i <iface> -p tcp -s <IP-address>/<prefix> --dport 7480 -j ACCEPT
Optionally, as `root`, if you changed the default Ceph object gateway port, for example to port `80`, open this port:

```
$ sudo iptables -I INPUT 1 -i <iface> -p tcp -s <IP-address>/<prefix> --dport 80 -j ACCEPT
```
Optionally, as `root`, to use SSL/TLS, open port `443`:

```
$ sudo iptables -I INPUT 1 -i <iface> -p tcp -s <IP-address>/<prefix> --dport 443 -j ACCEPT
```
As `root`, make the changes persistent on each node:

Install the `iptables-persistent` package:

```
$ sudo apt-get install iptables-persistent
```

In the terminal UI that appears, select `yes` to save the current IPv4 `iptables` rules to the `/etc/iptables/rules.v4` file and the current IPv6 `iptables` rules to the `/etc/iptables/rules.v6` file.

Note: If you add a new `iptables` rule after installing `iptables-persistent`, add the new rule to the rules file:

```
$ sudo iptables-save > /etc/iptables/rules.v4
```
2.6. Configuring Network Time Protocol
If you use Ansible to deploy a Red Hat Ceph Storage cluster, NTP is installed, configured, and enabled automatically during the deployment.
You must configure the Network Time Protocol (NTP) on all Ceph Monitor and OSD nodes. Ensure that Ceph nodes are NTP peers. NTP helps preempt issues that arise from clock drift.
As `root`, install the `ntp` package:

```
$ sudo apt-get install ntp
```

As `root`, start the NTP service and ensure it is running:

```
$ sudo service ntp start
$ sudo service ntp status
```

Ensure that NTP is synchronizing the Ceph Monitor node clocks properly:

```
$ ntpq -p
```
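The time sources the service peers with are listed in `/etc/ntp.conf`. On Ubuntu the defaults are pool entries similar to the following; replace them with your internal NTP servers if your site provides them:

```
pool 0.ubuntu.pool.ntp.org iburst
pool 1.ubuntu.pool.ntp.org iburst
```

Pointing all Ceph nodes at the same sources keeps clock drift between monitors to a minimum.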
2.7. Creating an Ansible User with Sudo Access
Ansible must log in to Ceph nodes as a user that has passwordless `root` privileges, because Ansible needs to install software and configuration files without prompting for passwords.
Red Hat recommends creating an Ansible user on all Ceph nodes in the cluster.
Do not use ceph
as the user name. The ceph
user name is reserved for the Ceph daemons.
A uniform user name across the cluster can improve ease of use, but avoid obvious user names, because intruders typically use them for brute-force attacks. For example, `root`, `admin`, or `<productname>` are not advised.
The following procedure, substituting <username>
for the user name you define, describes how to create an Ansible user with passwordless root
privileges on a Ceph node.
Use the `ssh` command to log in to a Ceph node:

```
$ ssh <user_name>@<hostname>
```

Replace `<hostname>` with the host name of the Ceph node.

Create a new Ansible user and set a new password for this user:

```
$ sudo adduser <username>
```
Ensure that the user you added has `root` privileges:

```
$ echo "<username> ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/<username>
```

Using `tee` ensures the file is written with `root` privileges; a plain shell redirection would be performed by the unprivileged user and fail.
Ensure the correct file permissions:

```
$ sudo chmod 0440 /etc/sudoers.d/<username>
```
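The sudoers entry follows a fixed pattern, so it can be generated by a small helper when preparing many nodes. This is an illustrative sketch; the `sudoers_entry` helper and the `ceph-admin` name are placeholders, not part of the product:

```shell
# Sketch: print the sudoers line for a given Ansible user name. Writing it
# to /etc/sudoers.d/<username> still requires root, as in the steps above.
sudoers_entry() {  # usage: sudoers_entry <username>
    printf '%s ALL = (root) NOPASSWD:ALL\n' "$1"
}

sudoers_entry ceph-admin
```

Generating the line once and piping it into `sudo tee` on each node avoids typos in a file that `sudo` parses strictly.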
2.8. Enabling Password-less SSH (Ansible Deployment Only)
Since Ansible will not prompt for a password, you must generate SSH keys on the administration node and distribute the public key to each Ceph node.
Generate the SSH keys, but do not use `sudo` or the `root` user. Instead, use the Ansible user you created in Creating an Ansible User with Sudo Access. Leave the passphrase empty:

```
$ ssh-keygen
Generating public/private key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ceph-admin/.ssh/id_rsa.
Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
```
Copy the key to each Ceph node, replacing `<username>` with the user name you created in Creating an Ansible User with Sudo Access and `<hostname>` with a host name of a Ceph node:

```
$ ssh-copy-id <username>@<hostname>
```
Modify or create (using a utility such as `vi`) the `~/.ssh/config` file of the Ansible administration node so that Ansible can log in to Ceph nodes as the user you created without requiring you to specify the `-u <username>` option each time you execute the `ansible-playbook` command. Replace `<username>` with the name of the user you created and `<hostname>` with a host name of a Ceph node:

```
Host node1
    Hostname <hostname>
    User <username>
Host node2
    Hostname <hostname>
    User <username>
Host node3
    Hostname <hostname>
    User <username>
```
After editing the `~/.ssh/config` file on the Ansible administration node, ensure the permissions are correct:

```
$ chmod 600 ~/.ssh/config
```