Chapter 3. Deploying Red Hat Ceph Storage
This chapter describes how to use the Ansible application to deploy a Red Hat Ceph Storage cluster and other components, such as Metadata Servers or the Ceph Object Gateway.
- To install a Red Hat Ceph Storage cluster, see Section 3.2, “Installing a Red Hat Ceph Storage Cluster”.
- To install Metadata Servers, see Section 3.4, “Installing Metadata Servers”.
- To install the ceph-client role, see Section 3.5, “Installing the Ceph Client Role”.
- To install the Ceph Object Gateway, see Section 3.6, “Installing the Ceph Object Gateway”.
- To configure a multisite Ceph Object Gateway, see Section 3.6.1, “Configuring a multisite Ceph Object Gateway”.
-
To learn about the Ansible
--limit
option, see Section 3.8, “Understanding thelimit
option”.
Previously, Red Hat did not provide the ceph-ansible
package for Ubuntu. In Red Hat Ceph Storage version 3 and later, you can use the Ansible automation application to deploy a Ceph cluster from an Ubuntu node.
3.1. Prerequisites
- Obtain a valid customer subscription.
- Prepare the cluster nodes. For the tasks to perform on each node, see Chapter 2, Requirements for Installing Red Hat Ceph Storage.
3.2. Installing a Red Hat Ceph Storage Cluster
Use the Ansible application with the ceph-ansible
playbook to install Red Hat Ceph Storage 3.
Production Ceph storage clusters start with a minimum of three monitor hosts and three OSD nodes containing multiple OSD daemons.

Prerequisites
On the Ansible administration node, install the ceph-ansible package:

[user@admin ~]$ sudo apt-get install ceph-ansible
Procedure
Run the following commands from the Ansible administration node unless instructed otherwise.
As the Ansible user, create the ceph-ansible-keys directory where Ansible stores temporary values generated by the ceph-ansible playbook:

[user@admin ~]$ mkdir ~/ceph-ansible-keys
As root, create a symbolic link to the /usr/share/ceph-ansible/group_vars directory in the /etc/ansible/ directory:

[root@admin ~]# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars
Navigate to the /usr/share/ceph-ansible/ directory:

[root@admin ~]$ cd /usr/share/ceph-ansible
Create new copies of the yml.sample files:

[root@admin ceph-ansible]# cp group_vars/all.yml.sample group_vars/all.yml
[root@admin ceph-ansible]# cp group_vars/osds.yml.sample group_vars/osds.yml
[root@admin ceph-ansible]# cp site.yml.sample site.yml
Edit the copied files.
Edit the group_vars/all.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

Important: Using a custom storage cluster name is not supported. Do not set the cluster parameter to any value other than ceph. Using a custom storage cluster name is only supported with Ceph clients, such as librados, the Ceph Object Gateway, and RADOS block device mirroring.

Table 3.1. General Ansible Settings
ceph_origin
  Value: repository, distro, or local
  Required: Yes
  Notes: The repository value means Ceph is installed through a new repository. The distro value means that no separate repository file is added, and you get whatever version of Ceph is included with the Linux distribution. The local value means the Ceph binaries are copied from the local machine.

ceph_repository_type
  Value: cdn or iso
  Required: Yes

ceph_rhcs_iso_path
  Value: The path to the ISO image
  Required: Yes, if ceph_repository_type is set to iso

ceph_rhcs_cdn_debian_repo
  Value: The credentials to access the online Ubuntu Ceph repositories. For example, https://username:password@rhcs.download.redhat.com.
  Required: Yes

ceph_rhcs_cdn_debian_repo_version
  Value: Use /3-release/ for new installations; use /3-updates/ for updates.
  Required: Yes

monitor_interface
  Value: The interface that the Monitor nodes listen to
  Required: One of monitor_interface, monitor_address, or monitor_address_block is required

monitor_address
  Value: The address that the Monitor nodes listen to
  Required: One of monitor_interface, monitor_address, or monitor_address_block is required

monitor_address_block
  Value: The subnet of the Ceph public network
  Required: One of monitor_interface, monitor_address, or monitor_address_block is required
  Notes: Use when the IP addresses of the nodes are unknown, but the subnet is known

ip_version
  Value: ipv6
  Required: Yes, if using IPv6 addressing

public_network
  Value: The IP address and netmask of the Ceph public network, or the corresponding IPv6 address if using IPv6
  Required: Yes
  Notes: See Section 2.7, “Verifying the Network Configuration for Red Hat Ceph Storage”

cluster_network
  Value: The IP address and netmask of the Ceph cluster network
  Required: No, defaults to public_network
An example of the all.yml file for a CDN installation can look like:

ceph_origin: repository
ceph_repository: rhcs
ceph_repository_type: cdn
ceph_rhcs_version: 3
monitor_interface: eth0
public_network: 192.168.0.0/24
An example of the all.yml file for an ISO installation can look like:

ceph_origin: repository
ceph_repository: rhcs
ceph_repository_type: iso
ceph_rhcs_iso_path: /home/rhceph-3.1-rhel-7-x86_64.iso
ceph_rhcs_version: 3
monitor_interface: eth0
public_network: 192.168.0.0/24
Note: Setting the ceph_rhcs_version option to 3 pulls in the latest version of Red Hat Ceph Storage 3.

For additional details, see the all.yml file.

Edit the group_vars/osds.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

Important: Use a different physical device to install an OSD than the device where the operating system is installed. Sharing the same device between the operating system and OSDs causes performance issues.
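Before filling in the device lists below, it can help to confirm which device holds the operating system. As a sketch, the lsblk command lists the block devices on a node (the output and device names shown are illustrative):

[root@osd ~]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk
└─sda1   8:1    0 465.8G  0 part /
sdb      8:16   0   1.8T  0 disk
sdc      8:32   0   1.8T  0 disk

Here sda holds the root file system, so only sdb and sdc would be listed as OSD devices.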
Table 3.2. OSD Ansible Settings
osd_scenario
  Value: collocated to use the same device for write-ahead logging and key/value data (BlueStore) or journal (FileStore) and OSD data; non-collocated to use a dedicated device, such as SSD or NVMe media, to store write-ahead log and key/value data (BlueStore) or journal data (FileStore); lvm to use the Logical Volume Manager to store OSD data
  Required: Yes
  Notes: When using osd_scenario: non-collocated, ceph-ansible expects the devices and dedicated_devices variables to match. For example, if you specify 10 disks in devices, you must specify 10 entries in dedicated_devices.

osd_auto_discovery
  Value: true to automatically discover OSDs
  Required: Yes, if using osd_scenario: collocated or lvm
  Notes: Cannot be used when the devices setting is used

devices
  Value: List of devices where ceph data is stored
  Required: Yes, to specify the list of devices
  Notes: Cannot be used when the osd_auto_discovery setting is used. When using lvm as the osd_scenario and setting the devices option, ceph-volume lvm batch mode creates the optimized OSD configuration.

dedicated_devices
  Value: List of dedicated devices for non-collocated OSDs where ceph journal is stored
  Required: Yes, if osd_scenario: non-collocated
  Notes: Should be nonpartitioned devices

dmcrypt
  Value: true to encrypt OSDs
  Required: No
  Notes: Defaults to false

lvm_volumes
  Value: A list of FileStore or BlueStore dictionaries
  Required: Yes, if using osd_scenario: lvm and storage devices are not defined using devices
  Notes: Each dictionary must contain the data, journal, and data_vg keys. Any logical volume or volume group must be specified by name, not by full path. The data and journal keys can be a logical volume (LV) or partition, but do not use one journal for multiple data LVs. The data_vg key must be the volume group containing the data LV. Optionally, the journal_vg key can be used to specify the volume group containing the journal LV, if applicable. See the examples below for various supported configurations.

osds_per_device
  Value: The number of OSDs to create per device
  Required: No
  Notes: Defaults to 1

osd_objectstore
  Value: The Ceph object store type for the OSDs
  Required: No
  Notes: Defaults to bluestore. The other option is filestore. Required for upgrades.

The following are examples of the osds.yml file when using the three OSD scenarios: collocated, non-collocated, and lvm. The default OSD object store format is BlueStore, if not specified.

Collocated
osd_objectstore: filestore
osd_scenario: collocated
devices:
  - /dev/sda
  - /dev/sdb
Non-collocated - BlueStore
osd_objectstore: bluestore
osd_scenario: non-collocated
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
dedicated_devices:
  - /dev/nvme0n1
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme1n1
This non-collocated example creates four BlueStore OSDs, one per device. In this example, the traditional hard drives (sda, sdb, sdc, sdd) are used for object data, and the solid state drives (SSDs) (/dev/nvme0n1, /dev/nvme1n1) are used for the BlueStore databases and write-ahead logs. This configuration pairs the /dev/sda and /dev/sdb devices with the /dev/nvme0n1 device, and pairs the /dev/sdc and /dev/sdd devices with the /dev/nvme1n1 device.

Non-collocated - FileStore
osd_objectstore: filestore
osd_scenario: non-collocated
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
dedicated_devices:
  - /dev/nvme0n1
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme1n1
LVM simple
osd_objectstore: bluestore
osd_scenario: lvm
devices:
  - /dev/sda
  - /dev/sdb
or
osd_objectstore: bluestore
osd_scenario: lvm
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/nvme0n1
  - /dev/sdc
  - /dev/sdd
  - /dev/nvme1n1
With these simple configurations, ceph-ansible uses batch mode (ceph-volume lvm batch) to create the OSDs.

In the first scenario, if the devices are traditional hard drives or SSDs, then one OSD per device is created.

In the second scenario, when there is a mix of traditional hard drives and SSDs, the data is placed on the traditional hard drives (sda, sdb) and the BlueStore database (block.db) is created as large as possible on the SSD (nvme0n1). Similarly, the data is placed on the traditional hard drives (sdc, sdd) and the BlueStore database (block.db) is created on the SSD nvme1n1, irrespective of the order in which the devices are listed.

Important: ceph-volume lvm batch mode creates the optimized OSD configuration by placing the data on the traditional hard drives and the block.db on the SSD. If you want to specify the logical volumes and volume groups to use, create them yourself by following the LVM advanced scenario below.

LVM advanced
osd_objectstore: filestore
osd_scenario: lvm
lvm_volumes:
  - data: data-lv1
    data_vg: vg1
    journal: journal-lv1
    journal_vg: vg2
  - data: data-lv2
    journal: /dev/sda1
    data_vg: vg1
or
osd_objectstore: bluestore
osd_scenario: lvm
lvm_volumes:
  - data: data-lv1
    data_vg: data-vg1
    db: db-lv1
    db_vg: db-vg1
    wal: wal-lv1
    wal_vg: wal-vg1
  - data: data-lv2
    data_vg: data-vg2
    db: db-lv2
    db_vg: db-vg2
    wal: wal-lv2
    wal_vg: wal-vg2
With these advanced scenario examples, the volume groups and logical volumes must be created beforehand; ceph-ansible does not create them. For a sketch of this pre-creation step, see the example after the following note.

Note: If using all NVMe SSDs, set the osd_scenario: lvm and osds_per_device: 2 options. For more information, see Configuring OSD Ansible settings for all NVMe Storage for Red Hat Enterprise Linux or Configuring OSD Ansible settings for all NVMe Storage for Ubuntu in the Red Hat Ceph Storage Installation Guides.

For additional details, see the comments in the osds.yml file.
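As a sketch of the pre-creation step for the first FileStore dictionary in the advanced example above, the standard LVM tools can be used on the OSD node (device names and sizes are illustrative; adjust them to your hardware):

# Prepare the physical volumes and the volume groups named in lvm_volumes
[root@osd ~]# pvcreate /dev/sdb /dev/sdc
[root@osd ~]# vgcreate vg1 /dev/sdb
[root@osd ~]# vgcreate vg2 /dev/sdc
# Create the data and journal logical volumes referenced by the dictionaries
[root@osd ~]# lvcreate -L 500G -n data-lv1 vg1
[root@osd ~]# lvcreate -L 500G -n data-lv2 vg1
[root@osd ~]# lvcreate -L 10G -n journal-lv1 vg2

The second dictionary uses the /dev/sda1 partition as its journal, so no logical volume is needed for it.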
Edit the Ansible inventory file, located by default at /etc/ansible/hosts. Remember to comment out the example hosts.

Add the Monitor nodes under the [mons] section:

[mons]
<monitor-host-name>
<monitor-host-name>
<monitor-host-name>
Add OSD nodes under the [osds] section. If the nodes have sequential naming, consider using a range:

[osds]
<osd-host-name[1:10]>
Note: For OSDs in a new installation, the default object store format is BlueStore.
Optionally, use the devices parameter to specify devices that the OSD nodes will use. Use a comma-separated list to list multiple devices.

[osds]
<ceph-host-name> devices="[ '<device_1>', '<device_2>' ]"
For example:
[osds]
ceph-osd-01 devices="[ '/dev/sdb', '/dev/sdc' ]"
ceph-osd-02 devices="[ '/dev/sdb', '/dev/sdc', '/dev/sdd' ]"
When specifying no devices, set the osd_auto_discovery option to true in the osds.yml file.

Note: Using the devices parameter is useful when OSDs use devices with different names or when one of the devices failed on one of the OSDs.

Optionally, if you want ansible-playbook to create a custom CRUSH hierarchy, specify where you want the OSD hosts to be in the CRUSH map’s hierarchy by using the osd_crush_location parameter. You must specify at least two CRUSH bucket types to specify the location of the OSD, and one bucket type must be host. By default, these include root, datacenter, room, row, pod, pdu, rack, chassis, and host.

[osds]
<ceph-host-name> osd_crush_location="{ 'root': '<root-bucket>', 'rack': '<rack-bucket>', 'pod': '<pod-bucket>', 'host': '<ceph-host-name>' }"
For example:
[osds]
ceph-osd-01 osd_crush_location="{ 'root': 'my-root', 'rack': 'my-rack', 'pod': 'my-pod', 'host': 'ceph-osd-01' }"
Add the Ceph Manager (ceph-mgr) nodes under the [mgrs] section. Colocate the Ceph Manager daemon with Monitor nodes.

[mgrs]
<monitor-host-name>
<monitor-host-name>
<monitor-host-name>
As the Ansible user, ensure that Ansible can reach the Ceph hosts:
[user@admin ~]$ ansible all -m ping
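If Ansible can reach every host, each one reports success. A sketch of the expected output, with host names matching your inventory:

ceph-osd-01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}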
Add the following line to the /etc/ansible/ansible.cfg file:

retry_files_save_path = ~/
As root, create the /var/log/ansible/ directory and assign the appropriate permissions for the Ansible user that you created in Creating an Ansible user with sudo access. For example, for a user named ansible:

[root@admin ~]# mkdir /var/log/ansible
[root@admin ~]# chown ansible:ansible /var/log/ansible
[root@admin ~]# chmod 755 /var/log/ansible
Edit the /usr/share/ceph-ansible/ansible.cfg file, updating the log_path value as follows:

log_path = /var/log/ansible/ansible.log
As the Ansible user, change to the /usr/share/ceph-ansible/ directory:

[user@admin ~]$ cd /usr/share/ceph-ansible/
Run the ceph-ansible playbook:

[user@admin ceph-ansible]$ ansible-playbook site.yml
Note: To increase the deployment speed, use the --forks option to ansible-playbook. By default, ceph-ansible sets forks to 20, so up to twenty nodes are installed at the same time. To install up to thirty nodes at a time, run ansible-playbook --forks 30 PLAYBOOK FILE. Monitor the resources on the admin node to ensure they are not overused; if they are, lower the number passed to --forks.

Using the root account on a Monitor node, verify the status of the Ceph cluster:
[root@monitor ~]# ceph health
HEALTH_OK
Verify the cluster is functioning using rados.

From a monitor node, create a test pool with eight placement groups:
Syntax
[root@monitor ~]# ceph osd pool create <pool-name> <pg-number>
Example
[root@monitor ~]# ceph osd pool create test 8
Create a file called hello-world.txt and add the text "Hello World!" to it:
Syntax
[root@monitor ~]# vim <file-name>
Example
[root@monitor ~]# vim hello-world.txt
Upload hello-world.txt to the test pool using the object name hello-world:
Syntax
[root@monitor ~]# rados --pool <pool-name> put <object-name> <object-file>
Example
[root@monitor ~]# rados --pool test put hello-world hello-world.txt
Download hello-world from the test pool as a file named fetch.txt:
Syntax
[root@monitor ~]# rados --pool <pool-name> get <object-name> <object-file>
Example
[root@monitor ~]# rados --pool test get hello-world fetch.txt
Check the contents of fetch.txt:

[root@monitor ~]# cat fetch.txt
The output should be:
"Hello World!"
Note: In addition to verifying the cluster status, you can use the ceph-medic utility to diagnose the overall health of the Ceph Storage Cluster. See the Installing and Using ceph-medic to Diagnose a Ceph Storage Cluster chapter in the Red Hat Ceph Storage 3 Troubleshooting Guide.
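As a sketch, once ceph-medic is installed, the basic check runs from the Ansible administration node and reports per-node results (assuming the default inventory at /etc/ansible/hosts):

[root@admin ~]# ceph-medic check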
3.3. Configuring OSD Ansible settings for all NVMe storage
To optimize performance when using only non-volatile memory express (NVMe) devices for storage, configure two OSDs on each NVMe device. Normally only one OSD is configured per device, which underutilizes the throughput of an NVMe device.
If you mix SSDs and HDDs, then the SSDs will be used for either journals or block.db, not for OSDs.
In testing, configuring two OSDs on each NVMe device was found to provide optimal performance. Setting osds_per_device: 2 is recommended, but not required; other values may provide better performance in your environment.
Prerequisites
- Satisfying all software and hardware requirements for a Ceph cluster.
Procedure
Set osd_scenario: lvm and osds_per_device: 2 in group_vars/osds.yml:

osd_scenario: lvm
osds_per_device: 2
List the NVMe devices under devices:

devices:
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme2n1
  - /dev/nvme3n1
The settings in group_vars/osds.yml will look similar to this example:

osd_scenario: lvm
osds_per_device: 2
devices:
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme2n1
  - /dev/nvme3n1
You must use devices with this configuration, not lvm_volumes, because lvm_volumes is generally used with pre-created logical volumes, whereas osds_per_device implies automatic logical volume creation by Ceph.
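For reference, this configuration is roughly equivalent to the following ceph-volume invocation, which ceph-ansible runs for you; it is shown only to illustrate the automatic logical volume creation, not as a step to perform manually:

[root@osd ~]# ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1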
3.4. Installing Metadata Servers
Use the Ansible automation application to install a Ceph Metadata Server (MDS). Metadata Server daemons are necessary for deploying a Ceph File System.
Prerequisites
- A working Red Hat Ceph Storage cluster.
Procedure
Perform the following steps on the Ansible administration node.
Add a new section [mdss] to the /etc/ansible/hosts file:

[mdss]
hostname
hostname
hostname
Replace hostname with the host names of the nodes where you want to install the Ceph Metadata Servers.
Navigate to the /usr/share/ceph-ansible directory:

[root@admin ~]# cd /usr/share/ceph-ansible
Optional. Change the default variables.
Create a copy of the group_vars/mdss.yml.sample file named mdss.yml:

[root@admin ceph-ansible]# cp group_vars/mdss.yml.sample group_vars/mdss.yml
- Optionally, edit parameters in mdss.yml. See the comments in the mdss.yml file for details.
As the Ansible user, run the Ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site.yml --limit mdss
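To confirm that the Metadata Server daemons started, you can run a quick check from a Monitor node; the reported daemon names will correspond to your MDS hosts:

[root@monitor ~]# ceph mds stat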
- After installing Metadata Servers, configure them. For details, see the Configuring Metadata Server Daemons chapter in the Ceph File System Guide for Red Hat Ceph Storage 3.
Additional Resources
- The Ceph File System Guide for Red Hat Ceph Storage 3
- Understanding the limit option
3.5. Installing the Ceph Client Role
The ceph-ansible
utility provides the ceph-client
role that copies the Ceph configuration file and the administration keyring to nodes. In addition, you can use this role to create custom pools and clients.
Prerequisites
- A running Ceph storage cluster, preferably in the active + clean state.
- Perform the tasks listed in Chapter 2, Requirements for Installing Red Hat Ceph Storage.
Procedure
Perform the following tasks on the Ansible administration node.
Add a new section [clients] to the /etc/ansible/hosts file:

[clients]
<client-hostname>
Replace <client-hostname> with the host name of the node where you want to install the ceph-client role.

Navigate to the /usr/share/ceph-ansible directory:

[root@admin ~]# cd /usr/share/ceph-ansible
Create a new copy of the clients.yml.sample file named clients.yml:

[root@admin ceph-ansible]# cp group_vars/clients.yml.sample group_vars/clients.yml
Open the group_vars/clients.yml file, and uncomment the following lines:

keys:
  - { name: client.test, caps: { mon: "allow r", osd: "allow class-read object_prefix rbd_children, allow rwx pool=test" }, mode: "{{ ceph_keyring_permissions }}" }
Replace client.test with the real client name, and add the client key to the client definition line, for example:

key: "ADD-KEYRING-HERE=="
Now the whole line example would look similar to this:
- { name: client.test, key: "AQAin8tUMICVFBAALRHNrV0Z4MXupRw4v9JQ6Q==", caps: { mon: "allow r", osd: "allow class-read object_prefix rbd_children, allow rwx pool=test" }, mode: "{{ ceph_keyring_permissions }}" }
Note: The ceph-authtool --gen-print-key command can generate a new client key.
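For example, running the command prints a freshly generated key that you can paste into the key field (the key shown is illustrative):

[root@admin ~]# ceph-authtool --gen-print-key
AQAin8tUMICVFBAALRHNrV0Z4MXupRw4v9JQ6Q==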
Optionally, instruct ceph-client to create pools and clients.

Update clients.yml:

- Uncomment the user_config setting and set it to true.
- Uncomment the pools and keys sections and update them as required. You can define custom pools and client names altogether with the cephx capabilities, as shown in the sketch after this list.
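A minimal sketch of what the edited section might look like, assuming a hypothetical pool named test; the exact attribute names for pool entries can vary between ceph-ansible versions, so follow the commented sample in clients.yml:

user_config: true

pools:
  - { name: test, pgs: "{{ osd_pool_default_pg_num }}" }

keys:
  - { name: client.test, key: "AQAin8tUMICVFBAALRHNrV0Z4MXupRw4v9JQ6Q==", caps: { mon: "allow r", osd: "allow rwx pool=test" }, mode: "{{ ceph_keyring_permissions }}" }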
Add the osd_pool_default_pg_num setting to the ceph_conf_overrides section in the all.yml file:

ceph_conf_overrides:
  global:
    osd_pool_default_pg_num: <number>

Replace <number> with the default number of placement groups.
Run the Ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site.yml --limit clients
Additional Resources
3.6. Installing the Ceph Object Gateway
The Ceph Object Gateway, also known as the RADOS Gateway, is an object storage interface built on top of the librados API to provide applications with a RESTful gateway to Ceph storage clusters.
Prerequisites
- A running Red Hat Ceph Storage cluster, preferably in the active + clean state.
- On the Ceph Object Gateway node, perform the tasks listed in Chapter 2, Requirements for Installing Red Hat Ceph Storage.
Procedure
Perform the following tasks on the Ansible administration node.
Add gateway hosts to the /etc/ansible/hosts file under the [rgws] section to identify their roles to Ansible. If the hosts have sequential naming, use a range, for example:

[rgws]
<rgw_host_name_1>
<rgw_host_name_2>
<rgw_host_name[3..10]>
Navigate to the Ansible configuration directory:
[root@ansible ~]# cd /usr/share/ceph-ansible
Create the rgws.yml file from the sample file:

[root@ansible ~]# cp group_vars/rgws.yml.sample group_vars/rgws.yml
Open and edit the group_vars/rgws.yml file. To copy the administrator key to the Ceph Object Gateway node, uncomment the copy_admin_key option:

copy_admin_key: true
The rgws.yml file may specify a different port than the default of 7480. For example:

ceph_rgw_civetweb_port: 80
The all.yml file MUST specify a radosgw_interface. For example:

radosgw_interface: eth0
Specifying the interface prevents Civetweb from binding to the same IP address as another Civetweb instance when running multiple instances on the same host.
Generally, to change default settings, uncomment the settings in the rgws.yml file and make changes accordingly. To make additional changes to settings that are not in the rgws.yml file, use ceph_conf_overrides: in the all.yml file. For example, set rgw_dns_name: to the host name of the DNS server, and ensure the cluster’s DNS server is configured for wildcards to enable S3 subdomains.

ceph_conf_overrides:
  client.rgw.rgw1:
    rgw_dns_name: <host_name>
    rgw_override_bucket_index_max_shards: 16
    rgw_bucket_default_quota_max_objects: 1638400
For advanced configuration details, see the Red Hat Ceph Storage 3 Ceph Object Gateway for Production guide. Advanced topics include:

- Configuring Ansible Groups
- Developing Storage Strategies. See the Creating the Root Pool, Creating System Pools, and Creating Data Placement Strategies sections for additional details on how to create and configure the pools.
- Bucket Sharding. See Bucket Sharding for configuration details.
Uncomment the radosgw_interface parameter in the group_vars/all.yml file:

radosgw_interface: <interface>
Replace <interface> with the interface that the Ceph Object Gateway nodes listen to.
For additional details, see the all.yml file.
Run the Ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site.yml --limit rgws
Ansible ensures that each Ceph Object Gateway is running.
For a single site configuration, add Ceph Object Gateways to the Ansible configuration.
For multi-site deployments, you should have an Ansible configuration for each zone. That is, Ansible will create a Ceph storage cluster and gateway instances for that zone.
After installation for a multi-site cluster is complete, proceed to the Multi-site chapter in the Object Gateway Guide for Ubuntu for details on configuring a cluster for multi-site.
Additional Resources
3.6.1. Configuring a multisite Ceph Object Gateway
Ansible configures the realm and zonegroup, along with the master and secondary zones, for a Ceph Object Gateway in a multisite environment.
By default, ceph-ansible creates new clusters for a multisite configuration. If you have an existing cluster that you want to use as the secondary site, see Migrating a Single Site System to Multi-Site in the Red Hat Ceph Storage 3 Object Gateway Guide for Red Hat Enterprise Linux or Ubuntu.
Prerequisites
- Two running Red Hat Ceph Storage clusters.
- On the Ceph Object Gateway node, perform the tasks listed in the Requirements for Installing Red Hat Ceph Storage found in the Red Hat Ceph Storage Installation Guide.
- Install and configure one Ceph Object Gateway per storage cluster.
Procedure
Do the following steps on the Ansible node for the primary storage cluster:
Generate the system keys and capture their output in the multi-site-keys.txt file:

[root@ansible ~]# echo system_access_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1) > multi-site-keys.txt
[root@ansible ~]# echo system_secret_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 40 | head -n 1) >> multi-site-keys.txt
Navigate to the Ansible configuration directory, /usr/share/ceph-ansible:

[root@ansible ~]# cd /usr/share/ceph-ansible
Open and edit the group_vars/all.yml file. Enable multisite support by adding the following options, updating the $ZONE_NAME, $ZONE_GROUP_NAME, $REALM_NAME, $ACCESS_KEY, and $SECRET_KEY values accordingly.

When more than one Ceph Object Gateway is in the master zone, the rgw_multisite_endpoints_list option needs to be set. Its value is a comma-separated list, with no spaces.

Syntax
rgw_multisite: true
rgw_zone: $ZONE_NAME
rgw_zonemaster: true
rgw_zonesecondary: false
rgw_multisite_endpoint_addr: "{{ ansible_fqdn }}"
rgw_multisite_endpoints_list: "{{ rgw_multisite_proto }}://{{ ansible_fqdn }}:{{ radosgw_frontend_port }}"
rgw_zonegroup: $ZONE_GROUP_NAME
rgw_zone_user: zone.user
rgw_realm: $REALM_NAME
system_access_key: $ACCESS_KEY
system_secret_key: $SECRET_KEY
Example
rgw_multisite: true
rgw_zone: us-east
rgw_zonemaster: true
rgw_zonesecondary: false
rgw_multisite_endpoint_addr: "primary_rgw_host"
rgw_multisite_endpoints_list: http://foo.example.com:8080,http://bar.example.com:8080,http://baz.example.com:8080
rgw_zonegroup: us
rgw_zone_user: synchronization-user
rgw_realm: test
system_access_key: test123
system_secret_key: test1234
Note: The ansible_fqdn domain name must be resolvable from the secondary storage cluster.

Note: When adding a new Object Gateway, append its endpoint URL to the end of the rgw_multisite_endpoints_list before running the Ansible playbook.

Run the Ansible playbook:
[user@ansible ceph-ansible]$ ansible-playbook site.yml --limit rgws
Restart the Ceph Object Gateway daemon:
[root@rgw ~]# systemctl restart ceph-radosgw@rgw.`hostname -s`
Do the following steps on the Ansible node for the secondary storage cluster:
Navigate to the Ansible configuration directory, /usr/share/ceph-ansible:

[root@ansible ~]# cd /usr/share/ceph-ansible
Open and edit the group_vars/all.yml file. Enable multisite support by adding the following options, updating the $ZONE_NAME, $ZONE_GROUP_NAME, $REALM_NAME, $ACCESS_KEY, and $SECRET_KEY values accordingly. The rgw_zone_user, system_access_key, and system_secret_key values must be the same as those used in the master zone configuration. The rgw_pullhost option must be the Ceph Object Gateway for the master zone.

When more than one Ceph Object Gateway is in the secondary zone, the rgw_multisite_endpoints_list option needs to be set. Its value is a comma-separated list, with no spaces.

Syntax
rgw_multisite: true
rgw_zone: $ZONE_NAME
rgw_zonemaster: false
rgw_zonesecondary: true
rgw_multisite_endpoint_addr: "{{ ansible_fqdn }}"
rgw_multisite_endpoints_list: "{{ rgw_multisite_proto }}://{{ ansible_fqdn }}:{{ radosgw_frontend_port }}"
rgw_zonegroup: $ZONE_GROUP_NAME
rgw_zone_user: zone.user
rgw_realm: $REALM_NAME
system_access_key: $ACCESS_KEY
system_secret_key: $SECRET_KEY
rgw_pull_proto: http
rgw_pull_port: 8080
rgw_pullhost: $MASTER_RGW_NODE_NAME
Example
rgw_multisite: true
rgw_zone: us-west
rgw_zonemaster: false
rgw_zonesecondary: true
rgw_multisite_endpoint_addr: "secondary_rgw_host"
rgw_multisite_endpoints_list: http://foo.example.com:8080,http://bar.example.com:8080,http://baz.example.com:8080
rgw_zonegroup: us
rgw_zone_user: synchronization-user
rgw_realm: test
system_access_key: test123
system_secret_key: test1234
rgw_pull_proto: http
rgw_pull_port: 8080
rgw_pullhost: primary_rgw_host
Note: The ansible_fqdn domain name must be resolvable from the primary storage cluster.

Note: When adding a new Object Gateway, append its endpoint URL to the end of the rgw_multisite_endpoints_list before running the Ansible playbook.

Run the Ansible playbook:
[user@ansible ceph-ansible]$ ansible-playbook site.yml --limit rgws
Restart the Ceph Object Gateway daemon:
[root@rgw ~]# systemctl restart ceph-radosgw@rgw.`hostname -s`
- After running the Ansible playbook on the master and secondary storage clusters, you will have a running active-active Ceph Object Gateway configuration.
Verify the multisite Ceph Object Gateway configuration:

- The Ceph Monitor and Object Gateway nodes at each site, primary and secondary, must be able to curl the other site.
- Run the radosgw-admin sync status command on both sites.
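For example, using the gateway endpoints from the earlier examples, an anonymous request from a node at the other site should return an XML response rather than a connection error (host name and port are illustrative):

[root@monitor ~]# curl http://bar.example.com:8080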
3.7. Installing the NFS-Ganesha Gateway
The Ceph NFS Ganesha Gateway is an NFS interface built on top of the Ceph Object Gateway that provides applications with a POSIX file system interface to the Ceph Object Gateway, for migrating files from existing file systems to Ceph Object Storage.
Prerequisites
- A running Ceph storage cluster, preferably in the active + clean state.
- At least one node running a Ceph Object Gateway.
- At least one S3 user with an access key and secret.
- Perform the Before You Start procedure.
Procedure
Perform the following tasks on the Ansible administration node.
Create the nfss.yml file from the sample file:

[root@ansible ~]# cd /etc/ansible/group_vars
[root@ansible ~]# cp nfss.yml.sample nfss.yml
Add gateway hosts to the /etc/ansible/hosts file under an [nfss] group to identify their group membership to Ansible. If the hosts have sequential naming, use a range. For example:

[nfss]
<nfs_host_name_1>
<nfs_host_name_2>
<nfs_host_name[3..10]>
Navigate to the Ansible configuration directory, /usr/share/ceph-ansible:

[root@ansible ~]# cd /usr/share/ceph-ansible
To copy the administrator key to the Ceph Object Gateway node, uncomment the copy_admin_key setting in the /usr/share/ceph-ansible/group_vars/nfss.yml file:

copy_admin_key: true
Configure the FSAL (File System Abstraction Layer) sections of the /usr/share/ceph-ansible/group_vars/nfss.yml file. Provide an ID, an S3 user ID, and an S3 access key and secret. For NFSv4, it should look something like this:

###################
# FSAL RGW Config #
###################
#ceph_nfs_rgw_export_id: <replace-w-numeric-export-id>
#ceph_nfs_rgw_pseudo_path: "/"
#ceph_nfs_rgw_protocols: "3,4"
#ceph_nfs_rgw_access_type: "RW"
#ceph_nfs_rgw_user: "cephnfs"
# Note: keys are optional and can be generated, but not on containerized, where
# they must be configured.
#ceph_nfs_rgw_access_key: "<replace-w-access-key>"
#ceph_nfs_rgw_secret_key: "<replace-w-secret-key>"
Run the Ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site.yml --limit nfss
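Once the playbook completes, clients can mount the NFS export. A minimal sketch, assuming a hypothetical gateway host named nfs-gw1 and the pseudo path / shown in the FSAL configuration above (the host name and mount point are illustrative):

[root@client ~]# mkdir -p /mnt/nfs-ganesha
[root@client ~]# mount -t nfs -o nfsvers=4.1,proto=tcp nfs-gw1:/ /mnt/nfs-ganesha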
Additional Resources
3.8. Understanding the limit option
This section contains information about the Ansible --limit
option.
Ansible supports the --limit option, which enables you to run the site and site-docker Ansible playbooks for a particular group in the inventory file.
$ ansible-playbook site.yml|site-docker.yml --limit osds|rgws|clients|mdss|nfss|iscsigws
For example, to redeploy only OSDs on bare metal, run the following command as the Ansible user:
$ ansible-playbook /usr/share/ceph-ansible/site.yml --limit osds
If you colocate Ceph components on one node, Ansible applies a playbook to all components on the node, even though only one component type was specified with the --limit option. For example, if you run the site playbook with the --limit osds option on a node that is listed under both the OSD and Metadata Server (MDS) groups in the inventory file, Ansible runs the tasks for both components, OSDs and MDSs, on that node.
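For example, with an inventory like the following (the host name is hypothetical), running ansible-playbook site.yml --limit osds also runs the MDS tasks on ceph-node-01, because the host appears in both groups:

[osds]
ceph-node-01

[mdss]
ceph-node-01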