Appendix B. Manually Installing Red Hat Ceph Storage
Red Hat does not support or test upgrading manually deployed clusters. Therefore, Red Hat recommends using Ansible to deploy a new cluster with Red Hat Ceph Storage 3. See Chapter 3, Deploying Red Hat Ceph Storage for details.
You can use command-line utilities, such as Yum, to upgrade manually deployed clusters, but Red Hat does not support or test this.
All Ceph clusters require at least one monitor, and at least as many OSDs as copies of an object stored on the cluster. Red Hat recommends using three monitors for production environments and a minimum of three Object Storage Devices (OSDs).
Bootstrapping the initial monitor is the first step in deploying a Ceph storage cluster. Ceph monitor deployment also sets important criteria for the entire cluster, such as:
- The number of replicas for pools
- The number of placement groups per OSD
- The heartbeat intervals
- Any authentication requirement
Most of these values are set by default, so it is useful to know about them when setting up the cluster for production.
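These criteria correspond to settings in the [global] section of the Ceph configuration file. The following snippet is only an illustrative sketch of that mapping; the option names are standard Ceph settings, but the values shown are placeholders, not recommended defaults:
[global]
osd pool default size = 3        # number of replicas for pools
osd pool default pg num = 128    # default placement group count for new pools
osd heartbeat interval = 6       # OSD heartbeat interval, in seconds
auth cluster required = cephx    # authentication requirement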
Installing a Ceph storage cluster by using the command line interface involves these steps:
- Bootstrapping the initial Monitor node
- Adding an Object Storage Device (OSD) node
Configuring the Network Time Protocol for Red Hat Ceph Storage
All Ceph Monitor and OSD nodes require configuring the Network Time Protocol (NTP). Ensure that Ceph nodes are NTP peers. NTP helps preempt issues that arise from clock drift.
When using Ansible to deploy a Red Hat Ceph Storage cluster, Ansible automatically installs, configures, and enables NTP.
Prerequisites
- Network access to a valid time source.
Procedure: Configuring the Network Time Protocol for RHCS
Perform the following steps on all RHCS nodes in the storage cluster, as the root user.
Install the ntp package:
# yum install ntp
Start and enable the NTP service to be persistent across a reboot:
# systemctl start ntpd
# systemctl enable ntpd
Ensure that NTP is synchronizing clocks properly:
$ ntpq -p
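In the ntpq -p output, an asterisk (*) marks the time source the node is currently synchronized to. The output below is purely illustrative; the server name and timing values are placeholders and will differ on your systems:
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*clock.example.com  .GPS.        1 u   63  128  377    0.542    0.012   0.026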
Additional Resources
- The Configuring NTP Using ntpd chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7.
Monitor Bootstrapping
Bootstrapping a Monitor, and by extension a Ceph storage cluster, requires the following data:
- Unique Identifier
The File System Identifier (fsid) is a unique identifier for the cluster. The fsid was originally used when the Ceph storage cluster was principally used for the Ceph file system. Ceph now supports native interfaces, block devices, and object storage gateway interfaces too, so fsid is a bit of a misnomer.
- Cluster Name
Ceph clusters have a cluster name, which is a simple string without spaces. The default cluster name is ceph, but you can specify a different cluster name. Overriding the default cluster name is especially useful when you work with multiple clusters.
When you run multiple clusters in a multi-site architecture, the cluster name, for example, us-west or us-east, identifies the cluster for the current command-line session.
Note: To identify the cluster name on the command-line interface, specify the Ceph configuration file with the cluster name, for example, ceph.conf, us-west.conf, us-east.conf, and so on.
Example:
# ceph --cluster us-west.conf ...
- Monitor Name
Each Monitor instance within a cluster has a unique name. In common practice, the Ceph Monitor name is the node name. Red Hat recommends one Ceph Monitor per node, and not co-locating the Ceph OSD daemons with the Ceph Monitor daemon. To retrieve the short node name, use the hostname -s command.
- Monitor Map
Bootstrapping the initial Monitor requires you to generate a Monitor map. The Monitor map requires:
- The File System Identifier (fsid)
- The cluster name, or the default cluster name of ceph is used
- At least one host name and its IP address.
- Monitor Keyring
Monitors communicate with each other by using a secret key. You must generate a keyring with a Monitor secret key and provide it when bootstrapping the initial Monitor.
- Administrator Keyring
To use the ceph command-line interface utilities, create the client.admin user and generate its keyring. Also, you must add the client.admin user to the Monitor keyring.
The foregoing requirements do not imply the creation of a Ceph configuration file. However, as a best practice, Red Hat recommends creating a Ceph configuration file and populating it with the fsid, the mon initial members and the mon host settings at a minimum.
You can get and set all of the Monitor settings at runtime as well. However, the Ceph configuration file might contain only those settings that override the default values. When you add settings to a Ceph configuration file, these settings override the default settings. Maintaining those settings in a Ceph configuration file makes it easier to maintain the cluster.
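For example, once the cluster is running, you can query a setting through a daemon's admin socket or inject a new value at runtime. This is only a sketch; mon.node1 and the option names shown are placeholders matching the examples later in this appendix:
# ceph daemon mon.node1 config get mon_host
# ceph daemon mon.node1 config show
# ceph tell mon.* injectargs '--osd_pool_default_size=2'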
To bootstrap the initial Monitor, perform the following steps:
Enable the Red Hat Ceph Storage 3 Monitor repository:
[root@monitor ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-mon-rpms
On your initial Monitor node, install the ceph-mon package as root:
# yum install ceph-mon
As root, create a Ceph configuration file in the /etc/ceph/ directory. By default, Ceph uses ceph.conf, where ceph reflects the cluster name:
Syntax
# touch /etc/ceph/<cluster_name>.conf
Example
# touch /etc/ceph/ceph.conf
As root, generate the unique identifier for your cluster and add the unique identifier to the [global] section of the Ceph configuration file:
Syntax
# echo "[global]" > /etc/ceph/<cluster_name>.conf # echo "fsid = `uuidgen`" >> /etc/ceph/<cluster_name>.conf
Example
# echo "[global]" > /etc/ceph/ceph.conf # echo "fsid = `uuidgen`" >> /etc/ceph/ceph.conf
View the current Ceph configuration file:
$ cat /etc/ceph/ceph.conf
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
As root, add the initial Monitor to the Ceph configuration file:
Syntax
# echo "mon initial members = <monitor_host_name>[,<monitor_host_name>]" >> /etc/ceph/<cluster_name>.conf
Example
# echo "mon initial members = node1" >> /etc/ceph/ceph.conf
As root, add the IP address of the initial Monitor to the Ceph configuration file:
Syntax
# echo "mon host = <ip-address>[,<ip-address>]" >> /etc/ceph/<cluster_name>.conf
Example
# echo "mon host = 192.168.0.120" >> /etc/ceph/ceph.conf
Note: To use IPv6 addresses, set the ms bind ipv6 option to true. For details, see the Bind section in the Configuration Guide for Red Hat Ceph Storage 3.
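For example, the option can be appended to the [global] section of the configuration file in the same way as the other settings in this procedure. This is a sketch that assumes the default ceph.conf path:
# echo "ms bind ipv6 = true" >> /etc/ceph/ceph.conf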
As root, create the keyring for the cluster and generate the Monitor secret key:
Syntax
# ceph-authtool --create-keyring /tmp/<cluster_name>.mon.keyring --gen-key -n mon. --cap mon '<capabilities>'
Example
# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /tmp/ceph.mon.keyring
As root, generate an administrator keyring, generate a <cluster_name>.client.admin.keyring user, and add the user to the keyring:
Syntax
# ceph-authtool --create-keyring /etc/ceph/<cluster_name>.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon '<capabilities>' --cap osd '<capabilities>' --cap mds '<capabilities>'
Example
# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
creating /etc/ceph/ceph.client.admin.keyring
As root, add the <cluster_name>.client.admin.keyring key to the <cluster_name>.mon.keyring:
Syntax
# ceph-authtool /tmp/<cluster_name>.mon.keyring --import-keyring /etc/ceph/<cluster_name>.client.admin.keyring
Example
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring
Generate the Monitor map. Specify the node name, IP address, and fsid of the initial Monitor, and save it as /tmp/monmap:
Syntax
$ monmaptool --create --add <monitor_host_name> <ip-address> --fsid <uuid> /tmp/monmap
Example
$ monmaptool --create --add node1 192.168.0.120 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
monmaptool: monmap file /tmp/monmap
monmaptool: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993
monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
As root on the initial Monitor node, create a default data directory:
Syntax
# mkdir /var/lib/ceph/mon/<cluster_name>-<monitor_host_name>
Example
# mkdir /var/lib/ceph/mon/ceph-node1
As root, populate the initial Monitor daemon with the Monitor map and keyring:
Syntax
# ceph-mon [--cluster <cluster_name>] --mkfs -i <monitor_host_name> --monmap /tmp/monmap --keyring /tmp/<cluster_name>.mon.keyring
Example
# ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ceph-mon: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1
View the current Ceph configuration file:
# cat /etc/ceph/ceph.conf
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon_initial_members = node1
mon_host = 192.168.0.120
For more details on the various Ceph configuration settings, see the Configuration Guide for Red Hat Ceph Storage 3. The following example of a Ceph configuration file lists some of the most common configuration settings:
Example
[global]
fsid = <cluster-id>
mon initial members = <monitor_host_name>[, <monitor_host_name>]
mon host = <ip-address>[, <ip-address>]
public network = <network>[, <network>]
cluster network = <network>[, <network>]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = <n>
osd pool default size = <n>      # Write an object n times.
osd pool default min size = <n>  # Allow writing n copy in a degraded state.
osd pool default pg num = <n>
osd pool default pgp num = <n>
osd crush chooseleaf type = <n>
As root, create the done file:
Syntax
# touch /var/lib/ceph/mon/<cluster_name>-<monitor_host_name>/done
Example
# touch /var/lib/ceph/mon/ceph-node1/done
As root, update the owner and group permissions on the newly created directory and files:
Syntax
# chown -R <owner>:<group> <path_to_directory>
Example
# chown -R ceph:ceph /var/lib/ceph/mon
# chown -R ceph:ceph /var/log/ceph
# chown -R ceph:ceph /var/run/ceph
# chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
# chown ceph:ceph /etc/ceph/ceph.conf
# chown ceph:ceph /etc/ceph/rbdmap
Note: If the Ceph Monitor node is co-located with an OpenStack Controller node, then the Glance and Cinder keyring files must be owned by glance and cinder respectively. For example:
# ls -l /etc/ceph/
...
-rw-------. 1 glance glance 64 <date> ceph.client.glance.keyring
-rw-------. 1 cinder cinder 64 <date> ceph.client.cinder.keyring
...
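If the ownership is wrong, it can be corrected with chown. This is a sketch; the keyring file names follow the listing above:
# chown glance:glance /etc/ceph/ceph.client.glance.keyring
# chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring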
For storage clusters with custom names, as root, add the following line to the /etc/sysconfig/ceph file:
Syntax
# echo "CLUSTER=<custom_cluster_name>" >> /etc/sysconfig/ceph
Example
# echo "CLUSTER=test123" >> /etc/sysconfig/ceph
As root, start and enable the ceph-mon process on the initial Monitor node:
Syntax
# systemctl enable ceph-mon.target
# systemctl enable ceph-mon@<monitor_host_name>
# systemctl start ceph-mon@<monitor_host_name>
Example
# systemctl enable ceph-mon.target
# systemctl enable ceph-mon@node1
# systemctl start ceph-mon@node1
As root, verify the monitor daemon is running:
Syntax
# systemctl status ceph-mon@<monitor_host_name>
Example
# systemctl status ceph-mon@node1
● ceph-mon@node1.service - Ceph cluster monitor daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-06-27 11:31:30 PDT; 5min ago
 Main PID: 1017 (ceph-mon)
   CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@node1.service
           └─1017 /usr/bin/ceph-mon -f --cluster ceph --id node1 --setuser ceph --setgroup ceph

Jun 27 11:31:30 node1 systemd[1]: Started Ceph cluster monitor daemon.
Jun 27 11:31:30 node1 systemd[1]: Starting Ceph cluster monitor daemon...
To add more Red Hat Ceph Storage Monitors to the storage cluster, see the Adding a Monitor section in the Administration Guide for Red Hat Ceph Storage 3.
OSD Bootstrapping
Once you have your initial monitor running, you can start adding the Object Storage Devices (OSDs). Your cluster cannot reach an active + clean state until you have enough OSDs to handle the number of copies of an object.
The default number of copies for an object is three. You need a minimum of three OSD nodes. However, if you want only two copies of an object, and therefore only two OSD nodes, update the osd pool default size and osd pool default min size settings in the Ceph configuration file.
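For example, a two-copy setup might add the following to the [global] section of the Ceph configuration file. This is a sketch for the two-OSD scenario described above; choose the min size value according to how much degradation you are willing to tolerate:
osd pool default size = 2       # Write each object two times.
osd pool default min size = 1   # Allow I/O with a single copy in a degraded state.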
For more details, see the OSD Configuration Reference section in the Configuration Guide for Red Hat Ceph Storage 3.
After bootstrapping the initial monitor, the cluster has a default CRUSH map. However, the CRUSH map does not have any Ceph OSD daemons mapped to a Ceph node.
To add an OSD to the cluster and update the default CRUSH map, execute the following steps on each OSD node:
Enable the Red Hat Ceph Storage 3 OSD repository:
[root@osd ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-osd-rpms
As root, install the ceph-osd package on the Ceph OSD node:
# yum install ceph-osd
Copy the Ceph configuration file and administration keyring file from the initial Monitor node to the OSD node:
Syntax
# scp <user_name>@<monitor_host_name>:<path_on_remote_system> <path_to_local_file>
Example
# scp root@node1:/etc/ceph/ceph.conf /etc/ceph
# scp root@node1:/etc/ceph/ceph.client.admin.keyring /etc/ceph
Generate the Universally Unique Identifier (UUID) for the OSD:
$ uuidgen
b367c360-b364-4b1d-8fc6-09408a9cda7a
As root, create the OSD instance:
Syntax
# ceph osd create <uuid> [<osd_id>]
Example
# ceph osd create b367c360-b364-4b1d-8fc6-09408a9cda7a
0
Note: This command outputs the OSD number identifier needed for subsequent steps.
As root, create the default directory for the new OSD:
Syntax
# mkdir /var/lib/ceph/osd/<cluster_name>-<osd_id>
Example
# mkdir /var/lib/ceph/osd/ceph-0
As root, prepare the drive for use as an OSD, and mount it to the directory you just created. Create a partition for the Ceph data and journal. The journal and the data partitions can be located on the same disk. This example uses a 15 GB disk:
Syntax
# parted <path_to_disk> mklabel gpt
# parted <path_to_disk> mkpart primary 1 10000
# mkfs -t <fstype> <path_to_partition>
# mount -o noatime <path_to_partition> /var/lib/ceph/osd/<cluster_name>-<osd_id>
# echo "<path_to_partition> /var/lib/ceph/osd/<cluster_name>-<osd_id> xfs defaults,noatime 1 2" >> /etc/fstab
Example
# parted /dev/sdb mklabel gpt
# parted /dev/sdb mkpart primary 1 10000
# parted /dev/sdb mkpart primary 10001 15000
# mkfs -t xfs /dev/sdb1
# mount -o noatime /dev/sdb1 /var/lib/ceph/osd/ceph-0
# echo "/dev/sdb1 /var/lib/ceph/osd/ceph-0 xfs defaults,noatime 1 2" >> /etc/fstab
As root, initialize the OSD data directory:
Syntax
# ceph-osd -i <osd_id> --mkfs --mkkey --osd-uuid <uuid>
Example
# ceph-osd -i 0 --mkfs --mkkey --osd-uuid b367c360-b364-4b1d-8fc6-09408a9cda7a
...
auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
...
created new key in keyring /var/lib/ceph/osd/ceph-0/keyring
Note: The directory must be empty before you run ceph-osd with the --mkkey option. If you have a custom cluster name, the ceph-osd utility requires the --cluster option.
As root, register the OSD authentication key. If your cluster name differs from ceph, insert your cluster name instead:
Syntax
# ceph auth add osd.<osd_id> osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/<cluster_name>-<osd_id>/keyring
Example
# ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring
added key for osd.0
As root, add the OSD node to the CRUSH map:
Syntax
# ceph [--cluster <cluster_name>] osd crush add-bucket <host_name> host
Example
# ceph osd crush add-bucket node2 host
As root, place the OSD node under the default CRUSH tree:
Syntax
# ceph [--cluster <cluster_name>] osd crush move <host_name> root=default
Example
# ceph osd crush move node2 root=default
As root, add the OSD disk to the CRUSH map:
Syntax
# ceph [--cluster <cluster_name>] osd crush add osd.<osd_id> <weight> [<bucket_type>=<bucket-name> ...]
Example
# ceph osd crush add osd.0 1.0 host=node2
add item id 0 name 'osd.0' weight 1 at location {host=node2} to crush map
Note: You can also decompile the CRUSH map, add the OSD to the device list, add the OSD node as a bucket, add the device as an item in the OSD node, assign the OSD a weight, recompile the CRUSH map, and set the CRUSH map. For more details, see the Editing a CRUSH map section in the Storage Strategies Guide for Red Hat Ceph Storage 3.
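As a rough illustration of the decompile-and-recompile workflow mentioned in the note, the following sequence uses the ceph and crushtool utilities. The file names are placeholders, and the edit step (adding the OSD device, bucket, and weight to crushmap.txt) is done by hand in a text editor before recompiling:
# ceph osd getcrushmap -o crushmap.bin
# crushtool -d crushmap.bin -o crushmap.txt
# crushtool -c crushmap.txt -o crushmap-new.bin
# ceph osd setcrushmap -i crushmap-new.bin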
As root, update the owner and group permissions on the newly created directory and files:
Syntax
# chown -R <owner>:<group> <path_to_directory>
Example
# chown -R ceph:ceph /var/lib/ceph/osd
# chown -R ceph:ceph /var/log/ceph
# chown -R ceph:ceph /var/run/ceph
# chown -R ceph:ceph /etc/ceph
For storage clusters with custom names, as root, add the following line to the /etc/sysconfig/ceph file:
Syntax
# echo "CLUSTER=<custom_cluster_name>" >> /etc/sysconfig/ceph
Example
# echo "CLUSTER=test123" >> /etc/sysconfig/ceph
The OSD node is in your Ceph storage cluster configuration. However, the OSD daemon is down and in. The new OSD must be up before it can begin receiving data. As root, enable and start the OSD process:
Syntax
# systemctl enable ceph-osd.target
# systemctl enable ceph-osd@<osd_id>
# systemctl start ceph-osd@<osd_id>
Example
# systemctl enable ceph-osd.target
# systemctl enable ceph-osd@0
# systemctl start ceph-osd@0
Once you start the OSD daemon, it is up and in.
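You can also confirm the state from a Monitor node, for example with the following command. The output shown is only illustrative for a cluster with a single OSD:
# ceph osd stat
1 osds: 1 up, 1 in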
Now you have the monitors and some OSDs up and running. You can watch the placement groups peer by executing the following command:
$ ceph -w
To view the OSD tree, execute the following command:
$ ceph osd tree
Example
ID WEIGHT  TYPE NAME        UP/DOWN  REWEIGHT  PRIMARY-AFFINITY
-1 2       root default
-2 2           host node2
 0 1               osd.0    up       1         1
-3 1           host node3
 1 1               osd.1    up       1         1
To expand the storage capacity by adding new OSDs to the storage cluster, see the Adding an OSD section in the Administration Guide for Red Hat Ceph Storage 3.
