Chapter 3. Deploying Red Hat Ceph Storage
This chapter describes how to use the Ansible application to deploy a Red Hat Ceph Storage cluster and other components, such as Metadata Servers or the Ceph Object Gateway.
- To install a Red Hat Ceph Storage cluster, see Section 3.2, “Installing a Red Hat Ceph Storage Cluster”.
- To install Metadata Servers, see Section 3.3, “Installing Metadata Servers”.
- To install the ceph-client role, see Section 3.4, “Installing the Ceph Client Role”.
- To install the Ceph Object Gateway, see Section 3.5, “Installing the Ceph Object Gateway”.
- To learn about the Ansible --limit option, see Section 3.7, “Understanding the limit option”.
3.1. Prerequisites
- Obtain a valid customer subscription.
- Prepare the cluster nodes. On each node, perform the tasks listed in Chapter 2, Requirements for Installing Red Hat Ceph Storage.
3.2. Installing a Red Hat Ceph Storage Cluster
Use the Ansible application with the ceph-ansible playbook to install Red Hat Ceph Storage 3.
Production Ceph storage clusters start with a minimum of three monitor hosts and three OSD nodes containing multiple OSD daemons.

Prerequisites
On the Ansible administration node, install the ceph-ansible package:
[root@admin ~]# yum install ceph-ansible
Procedure
Run the following commands from the Ansible administration node unless instructed otherwise.
In the user’s home directory, create the ceph-ansible-keys directory, where Ansible stores temporary values generated by the ceph-ansible playbook:
[user@admin ~]$ mkdir ~/ceph-ansible-keys
Create a symbolic link to the /usr/share/ceph-ansible/group_vars directory in the /etc/ansible/ directory:
[root@admin ~]# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars
Navigate to the /usr/share/ceph-ansible/ directory:
[user@admin ~]$ cd /usr/share/ceph-ansible
Create new copies of the yml.sample files:
[root@admin ceph-ansible]# cp group_vars/all.yml.sample group_vars/all.yml
[root@admin ceph-ansible]# cp group_vars/osds.yml.sample group_vars/osds.yml
[root@admin ceph-ansible]# cp site.yml.sample site.yml
Edit the copied files.
Edit the group_vars/all.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

Table 3.1. General Ansible Settings

Option: ceph_repository_type
Value: cdn or iso
Required: Yes

Option: ceph_rhcs_version
Value: 3
Required: Yes

Option: ceph_rhcs_iso_path
Value: The path to the ISO image
Required: Yes, if using an ISO image

Option: monitor_interface
Value: The interface that the Monitor nodes listen to
Required: monitor_interface, monitor_address, or monitor_address_block is required

Option: monitor_address
Value: The address that the Monitor nodes listen to
Required: monitor_interface, monitor_address, or monitor_address_block is required

Option: monitor_address_block
Value: The subnet of the Ceph public network
Required: monitor_interface, monitor_address, or monitor_address_block is required
Notes: Use when the IP addresses of the nodes are unknown, but the subnet is known

Option: ip_version
Value: ipv6
Required: Yes, if using IPv6 addressing

Option: public_network
Value: The IP address and netmask of the Ceph public network
Required: Yes
Notes: See Section 2.8, “Verifying the Network Configuration for Red Hat Ceph Storage”

Option: cluster_network
Value: The IP address and netmask of the Ceph cluster network
Required: No, defaults to public_network

An example of the all.yml file can look like this:
ceph_origin: repository
ceph_repository: rhcs
ceph_repository_type: cdn
ceph_rhcs_version: 3
monitor_interface: eth0
public_network: 192.168.0.0/24
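If you install from an ISO image instead of the CDN, the same parameters from Table 3.1 combine as in the following sketch; the ISO path shown is only a placeholder for wherever you stored the downloaded image:
ceph_origin: repository
ceph_repository: rhcs
ceph_repository_type: iso
ceph_rhcs_iso_path: /home/rhceph-3-rhel-7-x86_64.iso
ceph_rhcs_version: 3
monitor_interface: eth0
public_network: 192.168.0.0/24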
Note: Having the ceph_rhcs_version option set to 3 will pull in the latest version of Red Hat Ceph Storage 3.
For additional details, see the all.yml file.
Edit the group_vars/osds.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

Table 3.2. OSD Ansible Settings

Option: osd_scenario
Value: collocated to use the same device for journal and OSD data; non-collocated to use a dedicated device to store journal data; lvm to use the Logical Volume Manager to store OSD data
Required: Yes
Notes: When using osd_scenario: non-collocated, ceph-ansible expects the variables devices and dedicated_devices to match. For example, if you specify 10 disks in devices, you must specify 10 entries in dedicated_devices. Currently, Red Hat only supports dedicated journals when using osd_scenario: lvm, not collocated journals.

Option: osd_auto_discovery
Value: true to automatically discover OSDs
Required: Yes, if using osd_scenario: collocated
Notes: Cannot be used when the devices setting is used

Option: devices
Value: List of devices where ceph data is stored
Required: Yes, to specify the list of devices
Notes: Cannot be used when the osd_auto_discovery setting is used

Option: dedicated_devices
Value: List of dedicated devices for non-collocated OSDs where ceph journal is stored
Required: Yes, if osd_scenario: non-collocated
Notes: Should be nonpartitioned devices

Option: dmcrypt
Value: true to encrypt OSDs
Required: No
Notes: Defaults to false

Option: lvm_volumes
Value: A list of dictionaries
Required: Yes, if using osd_scenario: lvm
Notes: Each dictionary must contain the data, journal, and data_vg keys. The data key must be a logical volume. The journal key can be a logical volume (LV), device, or partition, but do not use one journal for multiple data LVs. The data_vg key must be the volume group containing the data LV. Optionally, the journal_vg key can be used to specify the volume group containing the journal LV, if applicable.

The following are examples of the osds.yml file for the non-collocated and lvm scenarios; a sketch of the collocated scenario follows them.
osd_scenario: non-collocated
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
dedicated_devices:
  - /dev/nvme0n1
  - /dev/nvme0n1
  - /dev/nvme0n1
  - /dev/nvme0n1

osd_scenario: lvm
lvm_volumes:
  - data: data-lv1
    data_vg: vg1
    journal: journal-lv1
    journal_vg: vg2
  - data: data-lv2
    journal: /dev/sda
    data_vg: vg1
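For the collocated scenario, a minimal osds.yml sketch, assuming /dev/sda and /dev/sdb are the devices that hold both the OSD data and the journals, could look like this:
osd_scenario: collocated
devices:
  - /dev/sda
  - /dev/sdb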
For additional details, see the comments in the osds.yml file.
Note: Currently, ceph-ansible does not create the volume groups or the logical volumes. Create them before running the Ansible playbook.
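For example, the volume groups and logical volumes referenced in the lvm example above could be prepared on the OSD node with standard LVM commands similar to the following sketch; the underlying disks (/dev/sdb and /dev/sdc) and the sizes are assumptions, not requirements:
[root@osd ~]# vgcreate vg1 /dev/sdb
[root@osd ~]# lvcreate -n data-lv1 -L 100G vg1
[root@osd ~]# lvcreate -n data-lv2 -L 100G vg1
[root@osd ~]# vgcreate vg2 /dev/sdc
[root@osd ~]# lvcreate -n journal-lv1 -L 10G vg2
Repeat equivalent commands on each OSD node that uses the lvm scenario.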
Edit the Ansible inventory file, located by default at /etc/ansible/hosts. Remember to comment out example hosts.
Add the Monitor nodes under the [mons] section:
[mons]
<monitor-host-name>
<monitor-host-name>
<monitor-host-name>
Add OSD nodes under the [osds] section. If the nodes have sequential naming, consider using a range:
[osds]
<osd-host-name[1:10]>
Optionally, use the devices parameter to specify the devices that the OSD nodes will use. Use a comma-separated list to specify multiple devices.
[osds]
<ceph-host-name> devices="[ '<device_1>', '<device_2>' ]"
For example:
[osds]
ceph-osd-01 devices="[ '/dev/sdb', '/dev/sdc' ]"
ceph-osd-02 devices="[ '/dev/sdb', '/dev/sdc', '/dev/sdd' ]"
When specifying no devices, set the osd_auto_discovery option to true in the osds.yml file.
Note: Using the devices parameter is useful when OSDs use devices with different names or when one of the devices failed on one of the OSDs.
Add the Ceph Manager (ceph-mgr) nodes under the [mgrs] section. Colocate the Ceph Manager daemon with Monitor nodes.
[mgrs]
<monitor-host-name>
<monitor-host-name>
<monitor-host-name>
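Putting the inventory snippets above together, a complete /etc/ansible/hosts file for a small cluster might look like the following sketch; the host names are placeholders for your own nodes:
[mons]
ceph-mon-01
ceph-mon-02
ceph-mon-03

[osds]
ceph-osd-01 devices="[ '/dev/sdb', '/dev/sdc' ]"
ceph-osd-02 devices="[ '/dev/sdb', '/dev/sdc' ]"
ceph-osd-03 devices="[ '/dev/sdb', '/dev/sdc' ]"

[mgrs]
ceph-mon-01
ceph-mon-02
ceph-mon-03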
As the Ansible user, ensure that Ansible can reach the Ceph hosts:
[user@admin ~]$ ansible all -m ping
Add the following line to the /etc/ansible/ansible.cfg file:
retry_files_save_path = ~/
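The retry_files_save_path option belongs to the [defaults] section of ansible.cfg, so after the edit the relevant part of the file looks similar to this:
[defaults]
retry_files_save_path = ~/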
As root, create the /var/log/ansible/ directory and assign the appropriate permissions for the ansible user:
[root@admin ceph-ansible]# mkdir /var/log/ansible
[root@admin ceph-ansible]# chown ansible:ansible /var/log/ansible
[root@admin ceph-ansible]# chmod 755 /var/log/ansible
Edit the /usr/share/ceph-ansible/ansible.cfg file, updating the log_path value as follows:
log_path = /var/log/ansible/ansible.log
As the Ansible user, run the ceph-ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site.yml
From a Monitor node, verify the status of the Ceph cluster.
[root@monitor ~]# ceph health
HEALTH_OK
Verify that the cluster is functioning by using rados.
From a Monitor node, create a test pool with eight placement groups:
Syntax
[root@monitor ~]# ceph osd pool create <pool-name> <pg-number>
Example
[root@monitor ~]# ceph osd pool create test 8
Create a file called hello-world.txt:
Syntax
[root@monitor ~]# vim <file-name>
Example
[root@monitor ~]# vim hello-world.txt
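Alternatively, because the verification step below expects the file to contain "Hello World!", you can create it in a single command:
[root@monitor ~]# echo "Hello World!" > hello-world.txt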
Upload hello-world.txt to the test pool using the object name hello-world:
Syntax
[root@monitor ~]# rados --pool <pool-name> put <object-name> <object-file>
Example
[root@monitor ~]# rados --pool test put hello-world hello-world.txt
Download the hello-world object from the test pool to a file named fetch.txt:
Syntax
[root@monitor ~]# rados --pool <pool-name> get <object-name> <object-file>
Example
[root@monitor ~]# rados --pool test get hello-world fetch.txt
Check the contents of fetch.txt:
[root@monitor ~]# cat fetch.txt
The output should be:
"Hello World!"
Note: In addition to verifying the cluster status, you can use the ceph-medic utility to diagnose the overall health of the Ceph Storage Cluster. See the Installing and Using ceph-medic to Diagnose a Ceph Storage Cluster chapter in the Red Hat Ceph Storage 3 Troubleshooting Guide.
3.3. Installing Metadata Servers
Use the Ansible automation application to install a Ceph Metadata Server (MDS). Metadata Server daemons are necessary for deploying a Ceph File System.
Procedure
Perform the following steps on the Ansible administration node.
Add a new section [mdss] to the /etc/ansible/hosts file:
[mdss]
<hostname>
<hostname>
<hostname>
Replace <hostname> with the host names of the nodes where you want to install the Ceph Metadata Servers.
Navigate to the /usr/share/ceph-ansible directory:
[root@admin ~]# cd /usr/share/ceph-ansible
Create a copy of the group_vars/mdss.yml.sample file named mdss.yml:
[root@admin ceph-ansible]# cp group_vars/mdss.yml.sample group_vars/mdss.yml
- Optionally, edit parameters in mdss.yml. See mdss.yml for details.
- Run the Ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site.yml --limit mdss
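Optionally, to confirm that the Metadata Server daemons registered with the cluster, run the following command from a Monitor node; until a Ceph File System is created, the daemons are reported as standby:
[root@monitor ~]# ceph mds stat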
- After installing Metadata Servers, configure them. For details, see the Configuring Metadata Server Daemons chapter in the Ceph File System Guide for Red Hat Ceph Storage 3.
Additional Resources
- The Ceph File System Guide for Red Hat Ceph Storage 3
- Section 3.7, “Understanding the limit option”
3.4. Installing the Ceph Client Role
The ceph-ansible utility provides the ceph-client role that copies the Ceph configuration file and the administration keyring to nodes. In addition, you can use this role to create custom pools and clients.
Prerequisites
- A running Ceph storage cluster, preferably in the active + clean state.
- Perform the tasks listed in Chapter 2, Requirements for Installing Red Hat Ceph Storage.
Procedure
Perform the following tasks on the Ansible administration node.
Add a new section [clients] to the /etc/ansible/hosts file:
[clients]
<client-hostname>
Replace <client-hostname> with the host name of the node where you want to install the ceph-client role.
Navigate to the /usr/share/ceph-ansible directory:
[root@admin ~]# cd /usr/share/ceph-ansible
Create a new copy of the clients.yml.sample file named clients.yml:
[root@admin ceph-ansible]# cp group_vars/clients.yml.sample group_vars/clients.yml
Optionally, instruct ceph-client to create pools and clients.
Update clients.yml:
- Uncomment the user_config setting and set it to true.
- Uncomment the pools and keys sections and update them as required. You can define custom pools and client names together with the cephx capabilities.
Add the osd_pool_default_pg_num setting to the ceph_conf_overrides section in the all.yml file:
ceph_conf_overrides:
  global:
    osd_pool_default_pg_num: <number>
Replace <number> with the default number of placement groups.
Run the Ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site.yml --limit clients
Additional Resources
3.5. Installing the Ceph Object Gateway
The Ceph Object Gateway, also known as the RADOS gateway, is an object storage interface built on top of the librados API to provide applications with a RESTful gateway to Ceph storage clusters.
Prerequisites
- A running Ceph storage cluster, preferably in the active + clean state.
- On the Ceph Object Gateway node, perform the tasks listed in Chapter 2, Requirements for Installing Red Hat Ceph Storage.
Procedure
Perform the following tasks on the Ansible administration node.
Create the rgws.yml file from the sample file:
[root@ansible ~]# cd /etc/ansible/group_vars
[root@ansible ~]# cp rgws.yml.sample rgws.yml
Add gateway hosts to the /etc/ansible/hosts file under the [rgws] section to identify their roles to Ansible. If the hosts have sequential naming, use a range. For example:
[rgws]
<rgw_host_name_1>
<rgw_host_name_2>
<rgw_host_name[3..10]>
Navigate to the /usr/share/ceph-ansible directory:
[root@ansible ~]# cd /usr/share/ceph-ansible
To copy the administrator key to the Ceph Object Gateway node, uncomment the copy_admin_key setting in the /usr/share/ceph-ansible/group_vars/rgws.yml file:
copy_admin_key: true
The rgws.yml file may specify a port other than the default port of 7480. For example:
ceph_rgw_civetweb_port: 80
The all.yml file must specify a radosgw_interface. For example:
radosgw_interface: eth0
Specifying the interface prevents Civetweb from binding to the same IP address as another Civetweb instance when running multiple instances on the same host.
Generally, to change default settings, uncomment the settings in the rgws.yml file and make changes accordingly. To make additional changes to settings that are not in the rgws.yml file, use ceph_conf_overrides: in the all.yml file. For example, set rgw_dns_name: to the host name of the DNS server, and configure the cluster’s DNS server to use wildcards to enable S3 subdomains.
ceph_conf_overrides:
  client.rgw.rgw1:
    rgw_dns_name: <host_name>
    rgw_override_bucket_index_max_shards: 16
    rgw_bucket_default_quota_max_objects: 1638400
For advanced configuration details, see the Red Hat Ceph Storage 3 Ceph Object Gateway for Production guide. Advanced topics include:
- Configuring Ansible Groups
- Developing Storage Strategies. See the Creating the Root Pool, Creating System Pools, and Creating Data Placement Strategies sections for additional details on how to create and configure the pools.
- See Bucket Sharding for configuration details on bucket sharding.
Uncomment the radosgw_interface parameter in the group_vars/all.yml file:
radosgw_interface: <interface>
Replace:
- <interface> with the interface that the Ceph Object Gateway nodes listen to
For additional details, see the all.yml file.
Run the Ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site.yml --limit rgws
Ansible ensures that each Ceph Object Gateway is running.
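As an additional check, you can query a gateway directly on its configured Civetweb port (7480 by default, or the port set in rgws.yml); the host name below is a placeholder, and an anonymous request returns an empty S3 bucket listing in XML:
[user@admin ~]$ curl http://<rgw_host_name>:7480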
For a single site configuration, add Ceph Object Gateways to the Ansible configuration.
For multi-site deployments, you should have an Ansible configuration for each zone. That is, Ansible will create a Ceph storage cluster and gateway instances for that zone.
After installation for a multi-site cluster is complete, proceed to the Multi-site chapter in the Object Gateway Guide for Red Hat Enterprise Linux for details on configuring a cluster for multi-site.
Additional Resources
3.6. Installing the NFS-Ganesha Gateway
The Ceph NFS Ganesha Gateway is an NFS interface built on top of the Ceph Object Gateway. It provides applications with a POSIX file system interface to the Ceph Object Gateway, so that files in existing file systems can be migrated to Ceph Object Storage.
Prerequisites
- A running Ceph storage cluster, preferably in the active + clean state.
- At least one node running a Ceph Object Gateway.
- At least one S3 user with an access key and secret.
- Perform the Before You Start procedure.
Procedure
Perform the following tasks on the Ansible administration node.
Create the nfss.yml file from the sample file:
[root@ansible ~]# cd /etc/ansible/group_vars
[root@ansible ~]# cp nfss.yml.sample nfss.yml
Add gateway hosts to the /etc/ansible/hosts file under an [nfss] group to identify their group membership to Ansible. If the hosts have sequential naming, use a range. For example:
[nfss]
<nfs_host_name_1>
<nfs_host_name_2>
<nfs_host_name[3..10]>
Navigate to the /usr/share/ceph-ansible directory:
[root@ansible ~]# cd /usr/share/ceph-ansible
To copy the administrator key to the Ceph Object Gateway node, uncomment the copy_admin_key setting in the /usr/share/ceph-ansible/group_vars/nfss.yml file:
copy_admin_key: true
Configure the FSAL (File System Abstraction Layer) sections of the /usr/share/ceph-ansible/group_vars/nfss.yml file. Provide an ID, S3 user ID, S3 access key, and secret. For NFSv4, it should look something like this:
###################
# FSAL RGW Config #
###################
#ceph_nfs_rgw_export_id: <replace-w-numeric-export-id>
#ceph_nfs_rgw_pseudo_path: "/"
#ceph_nfs_rgw_protocols: "3,4"
#ceph_nfs_rgw_access_type: "RW"
#ceph_nfs_rgw_user: "cephnfs"
# Note: keys are optional and can be generated, but not on containerized, where
# they must be configured.
#ceph_nfs_rgw_access_key: "<replace-w-access-key>"
#ceph_nfs_rgw_secret_key: "<replace-w-secret-key>"
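As an illustration, an edited (uncommented) version of that block might look like the following sketch; the export ID of 100 is an arbitrary example value, and the access key and secret must come from an existing S3 user:
ceph_nfs_rgw_export_id: 100
ceph_nfs_rgw_pseudo_path: "/"
ceph_nfs_rgw_protocols: "3,4"
ceph_nfs_rgw_access_type: "RW"
ceph_nfs_rgw_user: "cephnfs"
ceph_nfs_rgw_access_key: "<replace-w-access-key>"
ceph_nfs_rgw_secret_key: "<replace-w-secret-key>"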
Run the Ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site.yml --limit nfss
Additional Resources
3.7. Understanding the limit option
This section contains information about the Ansible --limit option.
Ansible supports the --limit option that enables you to use the site, site-docker, and rolling_upgrade Ansible playbooks for a particular section of the inventory file.
$ ansible-playbook site.yml|rolling_upgrade.yml|site-docker.yml --limit osds|rgws|clients|mdss|nfss
For example, to redeploy only OSDs:
$ ansible-playbook /usr/share/ceph-ansible/site.yml --limit osds
If you colocate Ceph components on one node, Ansible applies a playbook to all components on the node even though only one component type was specified with the limit option. For example, if you run the rolling_update playbook with the --limit osds option on a node that contains OSDs and Metadata Servers (MDS), Ansible will upgrade both components, the OSDs and the MDSs.
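Because --limit accepts Ansible host patterns as well as group names, you can narrow a run further. For example, to apply the OSD portion of the playbook only to a single, hypothetical host that belongs to the osds group:
$ ansible-playbook /usr/share/ceph-ansible/site.yml --limit 'osds:&ceph-osd-01'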
