Chapter 5. Installing Red Hat Ceph Storage using Ansible
This chapter describes how to use the Ansible application to deploy a Red Hat Ceph Storage cluster and other components, such as Metadata Servers or the Ceph Object Gateway.
- To install a Red Hat Ceph Storage cluster, see Section 5.2, “Installing a Red Hat Ceph Storage cluster”.
- To install Metadata Servers, see Section 5.4, “Installing Metadata servers”.
- To install the ceph-client role, see Section 5.5, “Installing the Ceph Client Role”.
- To install the Ceph Object Gateway, see Section 5.6, “Installing the Ceph Object Gateway”.
- To configure a multisite Ceph Object Gateway, see Section 5.7, “Configuring multisite Ceph Object Gateways”.
- To learn about the Ansible --limit option, see Section 5.10, “Understanding the limit option”.
5.1. Prerequisites
- Obtain a valid customer subscription.
- Prepare the cluster nodes by performing the tasks listed in Chapter 3, Requirements for Installing Red Hat Ceph Storage, on each node.
5.2. Installing a Red Hat Ceph Storage cluster
Use the Ansible application with the ceph-ansible playbook to install Red Hat Ceph Storage on bare-metal or in containers. A Ceph storage cluster in production must have a minimum of three monitor nodes and three OSD nodes containing multiple OSD daemons. A typical Ceph storage cluster running in production usually consists of ten or more nodes.
In the following procedure, run the commands from the Ansible administration node, unless instructed otherwise. This procedure applies to both bare-metal and container deployments, unless specified.
Ceph can run with one monitor; however, to ensure high availability in a production cluster, Red Hat will only support deployments with at least three monitor nodes.
Deploying Red Hat Ceph Storage 4 in containers on Red Hat Enterprise Linux 7.7 will deploy Red Hat Ceph Storage 4 on a Red Hat Enterprise Linux 8 container image.
Prerequisites
- A valid customer subscription.
- Root-level access to the Ansible administration node.
- The ansible user account for use with the Ansible application.
- Enabled Red Hat Ceph Storage Tools and Ansible repositories.
- For ISO installation, download the latest ISO image on the Ansible node. See the section For ISO Installations in the Enabling the Red Hat Ceph Storage repositories chapter in the Red Hat Ceph Storage Installation Guide.
Procedure
Log in as the root user on the Ansible administration node.
For all deployments, bare-metal or in containers, install the ceph-ansible package:
Red Hat Enterprise Linux 7
[root@admin ~]# yum install ceph-ansible
Red Hat Enterprise Linux 8
[root@admin ~]# dnf install ceph-ansible
Navigate to the /usr/share/ceph-ansible/ directory:
[root@admin ~]# cd /usr/share/ceph-ansible
Create new yml files:
[root@admin ceph-ansible]# cp group_vars/all.yml.sample group_vars/all.yml
[root@admin ceph-ansible]# cp group_vars/osds.yml.sample group_vars/osds.yml
Bare-metal deployments:
[root@admin ceph-ansible]# cp site.yml.sample site.yml
Container deployments:
[root@admin ceph-ansible]# cp site-container.yml.sample site-container.yml
Edit the new files.
Open the group_vars/all.yml file for editing.
Important: Using a custom storage cluster name is not supported. Do not set the cluster parameter to any value other than ceph. Using a custom storage cluster name is only supported with Ceph clients, such as librados, the Ceph Object Gateway, and RADOS block device mirroring.
Warning: By default, Ansible attempts to restart an installed but masked firewalld service, which can cause the Red Hat Ceph Storage deployment to fail. To work around this issue, set the configure_firewall option to false in the all.yml file, as shown in the example after these notes. If you are running the firewalld service, there is no requirement to use the configure_firewall option in the all.yml file.
Note: Having the ceph_rhcs_version option set to 4 pulls in the latest version of Red Hat Ceph Storage 4.
Note: Red Hat recommends leaving the dashboard_enabled option set to True in the group_vars/all.yml file, and not changing it to False. If you want to disable the dashboard, see Disabling the Ceph Dashboard.
Note: Dashboard-related components are containerized. Therefore, for bare-metal or container deployments, the ceph_docker_registry_username and ceph_docker_registry_password parameters must be included so that ceph-ansible can fetch the container images required for the dashboard.
Note: If you do not have a Red Hat Registry Service Account, create one using the Registry Service Account webpage. See the Red Hat Container Registry Authentication Knowledgebase article for details on how to create and manage tokens.
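A minimal all.yml fragment applying the workaround from the warning above:
configure_firewall: false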
Note: In addition to using a Service Account for the ceph_docker_registry_username and ceph_docker_registry_password parameters, you can also use your Customer Portal credentials, but to ensure security, encrypt the ceph_docker_registry_password parameter. For more information, see Encrypting Ansible password variables with ansible-vault.
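As a sketch, one way to produce the encrypted value is with the ansible-vault encrypt_string command, pasting the token when prompted:
[ansible@admin ceph-ansible]$ ansible-vault encrypt_string --stdin-name 'ceph_docker_registry_password'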
Bare-metal example of the all.yml file for CDN installation:
fetch_directory: ~/ceph-ansible-keys
ceph_origin: repository
ceph_repository: rhcs
ceph_repository_type: cdn
ceph_rhcs_version: 4
monitor_interface: eth0 1
public_network: 192.168.0.0/24
ceph_docker_registry: registry.redhat.io
ceph_docker_registry_auth: true
ceph_docker_registry_username: SERVICE_ACCOUNT_USER_NAME
ceph_docker_registry_password: TOKEN
dashboard_admin_user:
dashboard_admin_password:
node_exporter_container_image: registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6
grafana_admin_user:
grafana_admin_password:
grafana_container_image: registry.redhat.io/rhceph/rhceph-4-dashboard-rhel8
prometheus_container_image: registry.redhat.io/openshift4/ose-prometheus:v4.6
alertmanager_container_image: registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6
1 This is the interface on the public network.
Important: Starting with Red Hat Ceph Storage 4.1, you must uncomment or set dashboard_admin_password and grafana_admin_password in /usr/share/ceph-ansible/group_vars/all.yml. Set secure passwords for each. Also set custom user names for dashboard_admin_user and grafana_admin_user.
Note: For Red Hat Ceph Storage 4.2, if you have used a local registry for installation, use 4.6 for the Prometheus image tags.
Bare-metal example of the all.yml file for ISO installation:
fetch_directory: ~/ceph-ansible-keys
ceph_origin: repository
ceph_repository: rhcs
ceph_repository_type: iso
ceph_rhcs_iso_path: /home/rhceph-4-rhel-8-x86_64.iso
ceph_rhcs_version: 4
monitor_interface: eth0 1
public_network: 192.168.0.0/24
ceph_docker_registry: registry.redhat.io
ceph_docker_registry_auth: true
ceph_docker_registry_username: SERVICE_ACCOUNT_USER_NAME
ceph_docker_registry_password: TOKEN
dashboard_admin_user:
dashboard_admin_password:
node_exporter_container_image: registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6
grafana_admin_user:
grafana_admin_password:
grafana_container_image: registry.redhat.io/rhceph/rhceph-4-dashboard-rhel8
prometheus_container_image: registry.redhat.io/openshift4/ose-prometheus:v4.6
alertmanager_container_image: registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6
1 This is the interface on the public network.
Containers example of the all.yml file:
fetch_directory: ~/ceph-ansible-keys
monitor_interface: eth0 1
public_network: 192.168.0.0/24
ceph_docker_image: rhceph/rhceph-4-rhel8
ceph_docker_image_tag: latest
containerized_deployment: true
ceph_docker_registry: registry.redhat.io
ceph_docker_registry_auth: true
ceph_docker_registry_username: SERVICE_ACCOUNT_USER_NAME
ceph_docker_registry_password: TOKEN
ceph_origin: repository
ceph_repository: rhcs
ceph_repository_type: cdn
ceph_rhcs_version: 4
dashboard_admin_user:
dashboard_admin_password:
node_exporter_container_image: registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6
grafana_admin_user:
grafana_admin_password:
grafana_container_image: registry.redhat.io/rhceph/rhceph-4-dashboard-rhel8
prometheus_container_image: registry.redhat.io/openshift4/ose-prometheus:v4.6
alertmanager_container_image: registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6
1 This is the interface on the public network.
Important: Look up the latest container image tags on the Red Hat Ecosystem Catalog to install the latest container images with all the latest patches applied.
Containers example of the all.yml file, when the Red Hat Ceph Storage nodes do NOT have access to the Internet during deployment:
fetch_directory: ~/ceph-ansible-keys
monitor_interface: eth0 1
public_network: 192.168.0.0/24
ceph_docker_image: rhceph/rhceph-4-rhel8
ceph_docker_image_tag: latest
containerized_deployment: true
ceph_docker_registry: LOCAL_NODE_FQDN:5000
ceph_docker_registry_auth: false
ceph_origin: repository
ceph_repository: rhcs
ceph_repository_type: cdn
ceph_rhcs_version: 4
dashboard_admin_user:
dashboard_admin_password:
node_exporter_container_image: LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-node-exporter:v4.6
grafana_admin_user:
grafana_admin_password:
grafana_container_image: LOCAL_NODE_FQDN:5000/rhceph/rhceph-4-dashboard-rhel8
prometheus_container_image: LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus:4.6
alertmanager_container_image: LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-alertmanager:4.6
1 This is the interface on the public network.
Replace LOCAL_NODE_FQDN with your local host FQDN.
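If you still need to populate the local registry, one way to mirror an image is with podman; this is a sketch only, reusing the image names from the example above:
[root@admin ~]# podman pull registry.redhat.io/rhceph/rhceph-4-rhel8:latest
[root@admin ~]# podman tag registry.redhat.io/rhceph/rhceph-4-rhel8:latest LOCAL_NODE_FQDN:5000/rhceph/rhceph-4-rhel8:latest
[root@admin ~]# podman push LOCAL_NODE_FQDN:5000/rhceph/rhceph-4-rhel8:latest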
From Red Hat Ceph Storage 4.2, dashboard_protocol is set to https and Ansible generates the dashboard and Grafana keys and certificates. For custom certificates, in the all.yml file, update the path on the Ansible installer host for dashboard_crt, dashboard_key, grafana_crt, and grafana_key, for bare-metal or container deployments.
Syntax
dashboard_protocol: https
dashboard_port: 8443
dashboard_crt: 'DASHBOARD_CERTIFICATE_PATH'
dashboard_key: 'DASHBOARD_KEY_PATH'
dashboard_tls_external: false
dashboard_grafana_api_no_ssl_verify: "{{ True if dashboard_protocol == 'https' and not grafana_crt and not grafana_key else False }}"
grafana_crt: 'GRAFANA_CERTIFICATE_PATH'
grafana_key: 'GRAFANA_KEY_PATH'
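If you need a certificate pair for testing, a self-signed one can be generated with openssl; this is a sketch, and the file paths and common name are illustrative:
[root@admin ~]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ceph/dashboard.key -out /etc/ceph/dashboard.crt -subj "/CN=ceph-dashboard"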
To install Red Hat Ceph Storage using a container registry reachable through an HTTP or HTTPS proxy, set the ceph_docker_http_proxy or ceph_docker_https_proxy variables in the group_vars/all.yml file.
Example
ceph_docker_http_proxy: http://192.168.42.100:8080
ceph_docker_https_proxy: https://192.168.42.100:8080
If you need to exclude some hosts from the proxy configuration, use the ceph_docker_no_proxy variable in the group_vars/all.yml file.
Example
ceph_docker_no_proxy: "localhost,127.0.0.1"
In addition to editing the all.yml file for a proxy installation of Red Hat Ceph Storage, edit the /etc/environment file:
Example
HTTP_PROXY=http://192.168.42.100:8080
HTTPS_PROXY=https://192.168.42.100:8080
NO_PROXY="localhost,127.0.0.1"
This triggers podman to start the containerized services, such as prometheus, grafana-server, alertmanager, and node-exporter, and to download the required images.
For all deployments, bare-metal or in containers, edit the group_vars/osds.yml file.
Important: Do not install an OSD on the device the operating system is installed on. Sharing the same device between the operating system and OSDs causes performance issues.
Ceph-ansible uses the ceph-volume tool to prepare storage devices for Ceph usage. You can configure osds.yml to use your storage devices in different ways to optimize performance for your particular workload.
Important: All the examples below use the BlueStore object store, which is the format Ceph uses to store data on devices. Previously, Ceph used FileStore as the object store. This format is deprecated for new Red Hat Ceph Storage 4.0 installs because BlueStore offers more features and improved performance. It is still possible to use FileStore, but using it requires a Red Hat support exception. For more information on BlueStore, see Ceph BlueStore in the Red Hat Ceph Storage Architecture Guide.
Auto discovery
osd_auto_discovery: true
The above example uses all empty storage devices on the system to create the OSDs, so you do not have to specify them explicitly. The ceph-volume tool checks for empty devices, so devices which are not empty will not be used.
Note: If you later decide to remove the cluster using purge-docker-cluster.yml or purge-cluster.yml, you must comment out osd_auto_discovery and declare the OSD devices in the osds.yml file. For more information, see Purging storage clusters deployed by Ansible.
Simple configuration
First Scenario
devices:
  - /dev/sda
  - /dev/sdb
or
Second Scenario
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/nvme0n1
  - /dev/sdc
  - /dev/sdd
  - /dev/nvme1n1
or
Third Scenario
lvm_volumes:
  - data: /dev/sdb
  - data: /dev/sdc
or
Fourth Scenario
lvm_volumes:
  - data: /dev/sdb
  - data: /dev/nvme0n1
When using the devices option alone, ceph-volume lvm batch mode automatically optimizes the OSD configuration.
In the first scenario, if the devices are traditional hard drives or SSDs, then one OSD per device is created.
In the second scenario, when there is a mix of traditional hard drives and SSDs, the data is placed on the traditional hard drives (sda, sdb) and the BlueStore database is created as large as possible on the SSD (nvme0n1). Similarly, the data is placed on the traditional hard drives (sdc, sdd), and the BlueStore database is created on the SSD nvme1n1, irrespective of the order of the devices listed.
Note: By default, ceph-ansible does not override the default values of bluestore_block_db_size and bluestore_block_wal_size. You can set bluestore_block_db_size using ceph_conf_overrides in the group_vars/all.yml file, as sketched below. The value of bluestore_block_db_size should be greater than 2 GB.
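A minimal all.yml fragment for this override might look like the following; the 4 GB value is illustrative only:
ceph_conf_overrides:
  osd:
    bluestore_block_db_size: 4294967296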
In the third scenario, data is placed on the traditional hard drives (sdb, sdc), and the BlueStore database is collocated on the same devices.
In the fourth scenario, data is placed on the traditional hard drive (sdb) and on the SSD (nvme0n1), and the BlueStore database is collocated on the same devices. This is different from using the devices directive, where the BlueStore database is placed on the SSD.
Important: The ceph-volume lvm batch mode command creates the optimized OSD configuration by placing data on the traditional hard drives and the BlueStore database on the SSD. If you want to specify the logical volumes and volume groups to use, you can create them directly by following the Advanced configuration scenarios below.
Advanced configuration
First Scenario
devices:
  - /dev/sda
  - /dev/sdb
dedicated_devices:
  - /dev/sdx
  - /dev/sdy
or
Second Scenario
devices:
  - /dev/sda
  - /dev/sdb
dedicated_devices:
  - /dev/sdx
  - /dev/sdy
bluestore_wal_devices:
  - /dev/nvme0n1
  - /dev/nvme0n2
In the first scenario, there are two OSDs. The sda and sdb devices each have their own data segments and write-ahead logs. The additional dictionary dedicated_devices is used to isolate their databases, also known as block.db, on sdx and sdy, respectively.
In the second scenario, another additional dictionary, bluestore_wal_devices, is used to isolate the write-ahead log on NVMe devices nvme0n1 and nvme0n2. Using the devices, dedicated_devices, and bluestore_wal_devices options together allows you to isolate all components of an OSD onto separate devices. Laying out the OSDs like this can increase overall performance.
Pre-created logical volumes
First Scenario
lvm_volumes:
  - data: data-lv1
    data_vg: data-vg1
    db: db-lv1
    db_vg: db-vg1
    wal: wal-lv1
    wal_vg: wal-vg1
  - data: data-lv2
    data_vg: data-vg2
    db: db-lv2
    db_vg: db-vg2
    wal: wal-lv2
    wal_vg: wal-vg2
or
Second Scenario
lvm_volumes:
  - data: /dev/sdb
    db: db-lv1
    db_vg: db-vg1
    wal: wal-lv1
    wal_vg: wal-vg1
By default, Ceph uses the Logical Volume Manager to create logical volumes on the OSD devices. In the Simple configuration and Advanced configuration examples above, Ceph creates logical volumes on the devices automatically. You can use previously created logical volumes with Ceph by specifying the lvm_volumes dictionary.
In the first scenario, the data is placed on dedicated logical volumes, database, and WAL. You can also specify just data, data and WAL, or data and database. The data: line must specify the logical volume name where data is to be stored, and data_vg: must specify the name of the volume group the data logical volume is contained in. Similarly, db: is used to specify the logical volume the database is stored on, and db_vg: is used to specify the volume group its logical volume is in. The wal: line specifies the logical volume the WAL is stored on, and the wal_vg: line specifies the volume group that contains it.
In the second scenario, the actual device name is set for the data: option, and doing so does not require specifying the data_vg: option. You must specify the logical volume name and the volume group details for the BlueStore database and WAL devices.
Important: With lvm_volumes:, the volume groups and logical volumes must be created beforehand. They will not be created by ceph-ansible.
Note: If using all NVMe SSDs, then set osds_per_device: 2. For more information, see Configuring OSD Ansible settings for all NVMe Storage in the Red Hat Ceph Storage Installation Guide.
Note: After rebooting a Ceph OSD node, there is a possibility that the block device assignments will change. For example, sdc might become sdd. You can use persistent device names, such as the /dev/disk/by-path/ device path, instead of the traditional block device name.
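For example, a devices list using persistent by-path names might look like this; the exact paths are illustrative and vary by hardware:
devices:
  - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0
  - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0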
For all deployments, bare-metal or in containers, create the Ansible inventory file and then open it for editing:
[root@admin ~]# cd /usr/share/ceph-ansible/
[root@admin ceph-ansible]# touch hosts
Edit the hosts file accordingly.
Note: For information about editing the Ansible inventory location, see Configuring Ansible inventory location.
Add a node under [grafana-server]. This role installs Grafana and Prometheus to provide real-time insights into the performance of the Ceph cluster. These metrics are presented in the Ceph Dashboard, which allows you to monitor and manage the cluster. The installation of the dashboard, Grafana, and Prometheus is required. You can colocate the metrics functions on the Ansible administration node. If you do, ensure the system resources of the node are greater than what is required for a standalone metrics node.
[grafana-server]
GRAFANA-SERVER_NODE_NAME
Add the monitor nodes under the [mons] section:
[mons]
MONITOR_NODE_NAME_1
MONITOR_NODE_NAME_2
MONITOR_NODE_NAME_3
Add OSD nodes under the [osds] section:
[osds]
OSD_NODE_NAME_1
OSD_NODE_NAME_2
OSD_NODE_NAME_3
Note: You can add a range specifier ([1:10]) to the end of the node name, if the node names are numerically sequential. For example:
[osds]
example-node[1:10]
Note: For OSDs in a new installation, the default object store format is BlueStore.
Optionally, in container deployments, colocate Ceph Monitor daemons with the Ceph OSD daemons on one node by adding the same node under the [mon] and [osd] sections. In the Additional Resources section below, see the link on colocating Ceph daemons for more information.
Add the Ceph Manager (ceph-mgr) nodes under the [mgrs] section. This colocates the Ceph Manager daemon with the Ceph Monitor daemon:
[mgrs]
MONITOR_NODE_NAME_1
MONITOR_NODE_NAME_2
MONITOR_NODE_NAME_3
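Putting the preceding sections together, a complete minimal inventory file might look like the following; the host names are illustrative:
[grafana-server]
grafana1

[mons]
mon1
mon2
mon3

[osds]
osd1
osd2
osd3

[mgrs]
mon1
mon2
mon3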
Optionally, if you want to use host-specific parameters, for all deployments, bare-metal or in containers, create the host_vars directory with host files to include any parameters specific to hosts.
Create the host_vars directory:
[ansible@admin ~]$ mkdir /usr/share/ceph-ansible/host_vars
Change to the host_vars directory:
[ansible@admin ~]$ cd /usr/share/ceph-ansible/host_vars
Create the host files. Use the host-name-short-name format for the name of the files, for example:
[ansible@admin host_vars]$ touch tower-osd6
Update the file with any host-specific parameters, for example:
In bare-metal deployments, use the devices parameter to specify the devices that the OSD nodes will use. Using devices is useful when OSDs use devices with different names, or when one of the devices failed on one of the OSDs.
devices:
  - DEVICE_1
  - DEVICE_2
Example
devices:
  - /dev/sdb
  - /dev/sdc
Note: When specifying no devices, set the osd_auto_discovery parameter to true in the group_vars/osds.yml file.
Optionally, for all deployments, bare-metal or in containers, you can create a custom CRUSH hierarchy using Ceph Ansible:
Set up your Ansible inventory file. Specify where you want the OSD hosts to be in the CRUSH map's hierarchy by using the osd_crush_location parameter. You must specify at least two CRUSH bucket types to specify the location of the OSD, and one bucket type must be host. By default, these include root, datacenter, room, row, pod, pdu, rack, chassis, and host.
Syntax
[osds]
CEPH_OSD_NAME osd_crush_location="{ 'root': 'ROOT_BUCKET', 'rack': 'RACK_BUCKET', 'pod': 'POD_BUCKET', 'host': 'CEPH_HOST_NAME' }"
Example
[osds]
ceph-osd-01 osd_crush_location="{ 'root': 'default', 'rack': 'rack1', 'pod': 'monpod', 'host': 'ceph-osd-01' }"
Edit the group_vars/osds.yml file, and set the crush_rule_config and create_crush_tree parameters to True. Create at least one CRUSH rule if you do not want to use the default CRUSH rules, for example:
crush_rule_config: True
crush_rule_hdd:
  name: replicated_hdd_rule
  root: root-hdd
  type: host
  class: hdd
  default: True
crush_rules:
  - "{{ crush_rule_hdd }}"
create_crush_tree: True
If you are using faster SSD devices, then edit the parameters as follows:
crush_rule_config: True
crush_rule_ssd:
  name: replicated_ssd_rule
  root: root-ssd
  type: host
  class: ssd
  default: True
crush_rules:
  - "{{ crush_rule_ssd }}"
create_crush_tree: True
Note: The default CRUSH rules fail if both ssd and hdd OSDs are not deployed, because the default rules now include the class parameter, which must be defined.
Create pools, with the created crush_rules, in the group_vars/clients.yml file:
Example
copy_admin_key: True
user_config: True
pool1:
  name: "pool1"
  pg_num: 128
  pgp_num: 128
  rule_name: "HDD"
  type: "replicated"
  device_class: "hdd"
pools:
  - "{{ pool1 }}"
View the tree:
[root@mon ~]# ceph osd tree
Validate the pools:
[root@mon ~]# for i in $(rados lspools); do echo "pool: $i"; ceph osd pool get $i crush_rule; done
pool: pool1
crush_rule: HDD
For all deployments, bare-metal or in containers, log in with or switch to the ansible user.
Create the ceph-ansible-keys directory, where Ansible stores temporary values generated by the ceph-ansible playbook:
[ansible@admin ~]$ mkdir ~/ceph-ansible-keys
Change to the /usr/share/ceph-ansible/ directory:
[ansible@admin ~]$ cd /usr/share/ceph-ansible/
Verify that Ansible can reach the Ceph nodes:
[ansible@admin ceph-ansible]$ ansible all -m ping -i hosts
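For each reachable host, Ansible prints output similar to the following; the host name is illustrative:
mon1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}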
Run the ceph-ansible playbook.
Bare-metal deployments:
[ansible@admin ceph-ansible]$ ansible-playbook site.yml -i hosts
Container deployments:
[ansible@admin ceph-ansible]$ ansible-playbook site-container.yml -i hosts
Note: If you deploy Red Hat Ceph Storage to Red Hat Enterprise Linux Atomic Host hosts, use the --skip-tags=with_pkg option:
[user@admin ceph-ansible]$ ansible-playbook site-container.yml --skip-tags=with_pkg -i hosts
Note: To increase the deployment speed, use the --forks option to ansible-playbook. By default, ceph-ansible sets forks to 20. With this setting, up to twenty nodes will be installed at the same time. To install up to thirty nodes at a time, run ansible-playbook --forks 30 PLAYBOOK_FILE -i hosts. Monitor the resources on the admin node to ensure they are not overused. If they are, lower the number passed to --forks.
Wait for the Ceph deployment to finish.
Example output
INSTALLER STATUS *******************************
Install Ceph Monitor        : Complete (0:00:30)
Install Ceph Manager        : Complete (0:00:47)
Install Ceph OSD            : Complete (0:00:58)
Install Ceph RGW            : Complete (0:00:34)
Install Ceph Dashboard      : Complete (0:00:58)
Install Ceph Grafana        : Complete (0:00:50)
Install Ceph Node Exporter  : Complete (0:01:14)
Verify the status of the Ceph storage cluster.
Bare-metal deployments:
[root@mon ~]# ceph health
HEALTH_OK
Container deployments:
Red Hat Enterprise Linux 7
[root@mon ~]# docker exec ceph-mon-ID ceph health
Red Hat Enterprise Linux 8
[root@mon ~]# podman exec ceph-mon-ID ceph health
- Replace ID with the host name of the Ceph Monitor node:
Example
[root@mon ~]# podman exec ceph-mon-mon0 ceph health
HEALTH_OK
For all deployments, bare-metal or in containers, verify the storage cluster is functioning using rados.
From a Ceph Monitor node, create a test pool with eight placement groups (PGs):
Syntax
[root@mon ~]# ceph osd pool create POOL_NAME PG_NUMBER
Example
[root@mon ~]# ceph osd pool create test 8
Create a file called hello-world.txt:
Syntax
[root@mon ~]# vim FILE_NAME
Example
[root@mon ~]# vim hello-world.txt
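Alternatively, the file can be created non-interactively with a shell one-liner; this is an equivalent sketch:
[root@mon ~]# echo "Hello World!" > hello-world.txt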
Upload hello-world.txt to the test pool using the object name hello-world:
Syntax
[root@mon ~]# rados --pool POOL_NAME put OBJECT_NAME OBJECT_FILE_NAME
Example
[root@mon ~]# rados --pool test put hello-world hello-world.txt
Download hello-world from the test pool as file name fetch.txt:
Syntax
[root@mon ~]# rados --pool POOL_NAME get OBJECT_NAME OBJECT_FILE_NAME
Example
[root@mon ~]# rados --pool test get hello-world fetch.txt
Check the contents of fetch.txt:
[root@mon ~]# cat fetch.txt
"Hello World!"
Note: In addition to verifying the storage cluster status, you can use the ceph-medic utility to diagnose the overall Ceph Storage cluster. See the Installing and Using ceph-medic to Diagnose a Ceph Storage Cluster chapter in the Red Hat Ceph Storage 4 Troubleshooting Guide.
Additional Resources
- List of the common Ansible settings.
- List of the common OSD settings.
- See Colocation of containerized Ceph daemons for details.
5.3. Configuring OSD Ansible settings for all NVMe storage
To increase overall performance, you can configure Ansible to use only non-volatile memory express (NVMe) devices for storage. Normally only one OSD is configured per device, which underutilizes the throughput potential of an NVMe device.
If you mix SSDs and HDDs, then SSDs will be used for the database, or block.db, not for data in OSDs.
In testing, configuring two OSDs on each NVMe device was found to provide optimal performance. Red Hat recommends setting the osds_per_device option to 2, but it is not required. Other values might provide better performance in your environment.
Prerequisites
- Access to an Ansible administration node.
- Installation of the ceph-ansible package.
Procedure
Set osds_per_device: 2 in group_vars/osds.yml:
osds_per_device: 2
List the NVMe devices under devices:
devices:
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme2n1
  - /dev/nvme3n1
The settings in group_vars/osds.yml will look similar to this example:
osds_per_device: 2
devices:
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme2n1
  - /dev/nvme3n1
You must use devices with this configuration, not lvm_volumes. This is because lvm_volumes is generally used with pre-created logical volumes, and osds_per_device implies automatic logical volume creation by Ceph.
Additional Resources
- See the Installing a Red Hat Ceph Storage Cluster in the Red Hat Ceph Storage Installation Guide for more details.
5.4. Installing Metadata servers
Use the Ansible automation application to install a Ceph Metadata Server (MDS). Metadata Server daemons are necessary for deploying a Ceph File System.
Prerequisites
- A working Red Hat Ceph Storage cluster.
- Enable passwordless SSH access.
Procedure
Perform the following steps on the Ansible administration node.
Add a new section [mdss] to the /etc/ansible/hosts file:
[mdss]
MDS_NODE_NAME1
MDS_NODE_NAME2
MDS_NODE_NAME3
Replace MDS_NODE_NAME with the host names of the nodes where you want to install the Ceph Metadata Servers.
Alternatively, you can colocate the Metadata Server with the OSD daemon on one node by adding the same node under the [osds] and [mdss] sections.
Navigate to the /usr/share/ceph-ansible directory:
[root@admin ~]# cd /usr/share/ceph-ansible
Optionally, you can change the default variables.
Create a copy of the group_vars/mdss.yml.sample file named mdss.yml:
[root@admin ceph-ansible]# cp group_vars/mdss.yml.sample group_vars/mdss.yml
- Optionally, edit the parameters in mdss.yml. See mdss.yml for details.
As the ansible user, run the Ansible playbook:
Bare-metal deployments:
[user@admin ceph-ansible]$ ansible-playbook site.yml --limit mdss -i hosts
Container deployments:
[ansible@admin ceph-ansible]$ ansible-playbook site-container.yml --limit mdss -i hosts
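After the playbook completes, one quick sanity check is to query the MDS state from a Ceph Monitor node; this is a sketch, and the daemons report as standby until a file system is created:
[root@mon ~]# ceph mds stat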
- After installing the Metadata Servers, you can now configure them. For details, see The Ceph File System Metadata Server chapter in the Red Hat Ceph Storage File System Guide.
Additional Resources
- The Ceph File System Guide for Red Hat Ceph Storage 4
- See Colocation of containerized Ceph daemons for details.
- See Understanding the limit option for details.
5.5. Installing the Ceph Client Role
The ceph-ansible utility provides the ceph-client role, which copies the Ceph configuration file and the administration keyring to nodes. In addition, you can use this role to create custom pools and clients.
Prerequisites
- A running Ceph storage cluster, preferably in the active + clean state.
- Perform the tasks listed in Chapter 3, Requirements for Installing Red Hat Ceph Storage.
- Enable passwordless SSH access.
Procedure
Perform the following tasks on the Ansible administration node.
Add a new section [clients] to the /etc/ansible/hosts file:
[clients]
CLIENT_NODE_NAME
Replace CLIENT_NODE_NAME with the host name of the node where you want to install the ceph-client role.
Navigate to the /usr/share/ceph-ansible directory:
[root@admin ~]# cd /usr/share/ceph-ansible
Create a new copy of the clients.yml.sample file named clients.yml:
[root@admin ceph-ansible]# cp group_vars/clients.yml.sample group_vars/clients.yml
Open the group_vars/clients.yml file, and uncomment the following lines:
keys:
  - { name: client.test, caps: { mon: "allow r", osd: "allow class-read object_prefix rbd_children, allow rwx pool=test" }, mode: "{{ ceph_keyring_permissions }}" }
Replace client.test with the real client name, and add the client key to the client definition line, for example:
key: "ADD-KEYRING-HERE=="
Now the whole line example would look similar to this:
- { name: client.test, key: "AQAin8tUMICVFBAALRHNrV0Z4MXupRw4v9JQ6Q==", caps: { mon: "allow r", osd: "allow class-read object_prefix rbd_children, allow rwx pool=test" }, mode: "{{ ceph_keyring_permissions }}" }
Note: The ceph-authtool --gen-print-key command can generate a new client key.
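For example, run the command on any node with the Ceph packages installed; it prints a fresh base64-encoded key:
[root@admin ~]# ceph-authtool --gen-print-key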
Optionally, instruct ceph-client to create pools and clients.
Update clients.yml:
- Uncomment the user_config setting and set it to true.
- Uncomment the pools and keys sections and update them as required. You can define custom pools and client names altogether with the cephx capabilities.
Add the osd_pool_default_pg_num setting to the ceph_conf_overrides section in the all.yml file:
ceph_conf_overrides:
  global:
    osd_pool_default_pg_num: NUMBER
Replace NUMBER with the default number of placement groups.
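For instance, a concrete fragment might look like the following; the value 128 is illustrative and depends on your cluster size:
ceph_conf_overrides:
  global:
    osd_pool_default_pg_num: 128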
As the ansible user, run the Ansible playbook:
Bare-metal deployments:
[ansible@admin ceph-ansible]$ ansible-playbook site.yml --limit clients -i hosts
Container deployments:
[ansible@admin ceph-ansible]$ ansible-playbook site-container.yml --limit clients -i hosts
Additional Resources
- See Understanding the limit option for details.
5.6. Installing the Ceph Object Gateway
The Ceph Object Gateway, also known as the RADOS Gateway, is an object storage interface built on top of the librados API to provide applications with a RESTful gateway to Ceph storage clusters.
Prerequisites
- A running Red Hat Ceph Storage cluster, preferably in the active + clean state.
- Enable passwordless SSH access.
- On the Ceph Object Gateway node, perform the tasks listed in Chapter 3, Requirements for Installing Red Hat Ceph Storage.
If you intend to use Ceph Object Gateway in a multisite configuration, only complete steps 1 - 6. Do not run the Ansible playbook before configuring multisite as this will start the Object Gateway in a single site configuration. Ansible cannot reconfigure the gateway to a multisite setup after it has already been started in a single site configuration. After you complete steps 1 - 6, proceed to the Configuring multisite Ceph Object Gateways section to set up multisite.
Procedure
Perform the following tasks on the Ansible administration node.
Add gateway hosts to the /etc/ansible/hosts file under the [rgws] section to identify their roles to Ansible. If the hosts have sequential naming, use a range, for example:
[rgws]
<rgw_host_name_1>
<rgw_host_name_2>
<rgw_host_name[3..10]>
Navigate to the Ansible configuration directory:
[root@ansible ~]# cd /usr/share/ceph-ansible
Create the rgws.yml file from the sample file:
[root@ansible ~]# cp group_vars/rgws.yml.sample group_vars/rgws.yml
Open and edit the group_vars/rgws.yml file. To copy the administrator key to the Ceph Object Gateway node, uncomment the copy_admin_key option:
copy_admin_key: true
In the all.yml file, you MUST specify a radosgw_interface:
radosgw_interface: <interface>
Replace <interface> with the interface that the Ceph Object Gateway nodes listen on.
For example:
radosgw_interface: eth0
Specifying the interface prevents Civetweb from binding to the same IP address as another Civetweb instance when running multiple instances on the same host.
For additional details, see the all.yml file.
- Generally, to change default settings, uncomment the settings in the rgws.yml file, and make changes accordingly. To make additional changes to settings that are not in the rgws.yml file, use ceph_conf_overrides: in the all.yml file:
ceph_conf_overrides:
  client.rgw.rgw1:
    rgw_override_bucket_index_max_shards: 16
    rgw_bucket_default_quota_max_objects: 1638400
For advanced configuration details, see the Red Hat Ceph Storage 4 Ceph Object Gateway for Production guide. Advanced topics include:
- Configuring Ansible Groups
Developing Storage Strategies. See the Creating the Root Pool, Creating System Pools, and Creating Data Placement Strategies sections for additional details on how to create and configure the pools.
See Bucket Sharding for configuration details on bucket sharding.
Run the Ansible playbook:
Warning: Do not run the Ansible playbook if you intend to set up multisite. Proceed to the Configuring multisite Ceph Object Gateways section to set up multisite.
Bare-metal deployments:
[user@admin ceph-ansible]$ ansible-playbook site.yml --limit rgws -i hosts
Container deployments:
[user@admin ceph-ansible]$ ansible-playbook site-container.yml --limit rgws -i hosts
Ansible ensures that each Ceph Object Gateway is running.
For a single site configuration, add Ceph Object Gateways to the Ansible configuration.
For multi-site deployments, you should have an Ansible configuration for each zone. That is, Ansible will create a Ceph storage cluster and gateway instances for that zone.
After installation for a multi-site cluster is complete, proceed to the Multi-site chapter in the Red Hat Ceph Storage 4 Object Gateway Guide for details on configuring a cluster for multi-site.
Additional Resources
- See Understanding the limit option for details.
- The Red Hat Ceph Storage 4 Object Gateway Guide
5.7. Configuring multisite Ceph Object Gateways
As a system administrator, you can configure multisite Ceph Object Gateways to mirror data across clusters for disaster recovery purposes.
You can configure multisite with one or more RGW realms. A realm allows the RGWs inside of it to be independent and isolated from RGWs outside of the realm. This way, data written to an RGW in one realm cannot be accessed by an RGW in another realm.
Ceph-ansible cannot reconfigure gateways to a multisite setup after they have already been used in single site configurations. You can deploy this configuration manually. Contact Red Hat Support for assistance.
From Red Hat Ceph Storage 4.1, you do not need to set the value of rgw_multisite_endpoints_list in the group_vars/all.yml file.
See the Multisite section in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for more information.
5.7.1. Prerequisites
- Two Red Hat Ceph Storage clusters.
- On the Ceph Object Gateway nodes, perform the tasks listed in the Requirements for Installing Red Hat Ceph Storage section in the Red Hat Ceph Storage Installation Guide.
- For each Object Gateway node, perform steps 1 - 6 in Installing the Ceph Object Gateway section in the Red Hat Ceph Storage Installation Guide.
5.7.2. Configuring a multi-site Ceph Object Gateway with one realm
Ceph-ansible configures Ceph Object Gateways to mirror data in one realm across multiple storage clusters with multiple Ceph Object Gateway instances.
Ceph-ansible cannot reconfigure gateways to a multisite setup after they have already been used in single site configurations. You can deploy this configuration manually. Contact Red Hat Support for assistance.
Prerequisites
- Two running Red Hat Ceph Storage clusters.
- On the Ceph Object Gateway nodes, perform the tasks listed in the Requirements for Installing Red Hat Ceph Storage section in the Red Hat Ceph Storage Installation Guide.
- For each Object Gateway node, perform steps 1 - 6 in the Installing the Ceph Object Gateway section in the Red Hat Ceph Storage Installation Guide.
Procedure
Generate the system keys and capture their output in the multi-site-keys.txt file:
[root@ansible ~]# echo system_access_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1) > multi-site-keys.txt
[root@ansible ~]# echo system_secret_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 40 | head -n 1) >> multi-site-keys.txt
Primary storage cluster
Navigate to the Ceph-ansible configuration directory:
[root@ansible ~]# cd /usr/share/ceph-ansible
Open and edit the group_vars/all.yml file. Uncomment the rgw_multisite line and set it to true. Uncomment the rgw_multisite_proto parameter:
rgw_multisite: true
rgw_multisite_proto: "http"
Create a host_vars directory in /usr/share/ceph-ansible:
[root@ansible ceph-ansible]# mkdir host_vars
Create a file in host_vars for each of the Object Gateway nodes on the primary storage cluster. The file name should be the same name as used in the Ansible inventory file. For example, if the Object Gateway node is named rgw-primary, create the file host_vars/rgw-primary.
Syntax
touch host_vars/NODE_NAME
Example
[root@ansible ceph-ansible]# touch host_vars/rgw-primary
Note: If there are multiple Ceph Object Gateway nodes in the cluster used for the multi-site configuration, then create separate files for each of the nodes.
Edit the files and add the configuration details for all the instances on the respective Object Gateway nodes. Configure the following settings, along with updating ZONE_NAME, ZONE_GROUP_NAME, ZONE_USER_NAME, ZONE_DISPLAY_NAME, and REALM_NAME accordingly. Use the random strings saved in the multi-site-keys.txt file for ACCESS_KEY and SECRET_KEY.
Syntax
rgw_instances:
  - instance_name: 'INSTANCE_NAME'
    rgw_multisite: true
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_zone: ZONE_NAME_1
    rgw_zonegroup: ZONE_GROUP_NAME_1
    rgw_realm: REALM_NAME_1
    rgw_zone_user: ZONE_USER_NAME_1
    rgw_zone_user_display_name: "ZONE_DISPLAY_NAME_1"
    system_access_key: ACCESS_KEY_1
    system_secret_key: SECRET_KEY_1
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: RGW_PRIMARY_PORT_NUMBER_1
Example
rgw_instances:
  - instance_name: 'rgw0'
    rgw_multisite: true
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_zone: paris
    rgw_zonegroup: idf
    rgw_realm: france
    rgw_zone_user: jacques.chirac
    rgw_zone_user_display_name: "Jacques Chirac"
    system_access_key: P9Eb6S8XNyo4dtZZUUMy
    system_secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8080
Optional: For creating multiple instances, edit the files and add the configuration details for all the instances on the respective Object Gateway nodes. Configure the following settings, along with updating the items under rgw_instances. Use the random strings saved in the multi-site-keys-realm-1.txt file for ACCESS_KEY_1 and SECRET_KEY_1.
Syntax
rgw_instances:
  - instance_name: 'INSTANCE_NAME_1'
    rgw_multisite: true
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_zone: ZONE_NAME_1
    rgw_zonegroup: ZONE_GROUP_NAME_1
    rgw_realm: REALM_NAME_1
    rgw_zone_user: ZONE_USER_NAME_1
    rgw_zone_user_display_name: "ZONE_DISPLAY_NAME_1"
    system_access_key: ACCESS_KEY_1
    system_secret_key: SECRET_KEY_1
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: PORT_NUMBER_1
  - instance_name: 'INSTANCE_NAME_2'
    rgw_multisite: true
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_zone: ZONE_NAME_1
    rgw_zonegroup: ZONE_GROUP_NAME_1
    rgw_realm: REALM_NAME_1
    rgw_zone_user: ZONE_USER_NAME_1
    rgw_zone_user_display_name: "ZONE_DISPLAY_NAME_1"
    system_access_key: ACCESS_KEY_1
    system_secret_key: SECRET_KEY_1
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: PORT_NUMBER_2
Example
rgw_instances:
  - instance_name: 'rgw0'
    rgw_multisite: true
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_zone: paris
    rgw_zonegroup: idf
    rgw_realm: france
    rgw_zone_user: jacques.chirac
    rgw_zone_user_display_name: "Jacques Chirac"
    system_access_key: P9Eb6S8XNyo4dtZZUUMy
    system_secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8080
  - instance_name: 'rgw1'
    rgw_multisite: true
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_zone: paris
    rgw_zonegroup: idf
    rgw_realm: france
    rgw_zone_user: jacques.chirac
    rgw_zone_user_display_name: "Jacques Chirac"
    system_access_key: P9Eb6S8XNyo4dtZZUUMy
    system_secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8081
Secondary storage cluster
Navigate to the Ceph-ansible configuration directory:
[root@ansible ~]# cd /usr/share/ceph-ansible
Open and edit the group_vars/all.yml file. Uncomment the rgw_multisite line and set it to true. Uncomment the rgw_multisite_proto parameter:
rgw_multisite: true
rgw_multisite_proto: "http"
Create a host_vars directory in /usr/share/ceph-ansible:
[root@ansible ceph-ansible]# mkdir host_vars
Create a file in host_vars for each of the Object Gateway nodes on the secondary storage cluster. The file name should be the same name as used in the Ansible inventory file. For example, if the Object Gateway node is named rgw-secondary, create the file host_vars/rgw-secondary.
Syntax
touch host_vars/NODE_NAME
Example
[root@ansible ceph-ansible]# touch host_vars/rgw-secondary
Note: If there are multiple Ceph Object Gateway nodes in the cluster used for the multi-site configuration, then create files for each of the nodes.
Configure the following settings. Use the same values as used on the first cluster for ZONE_USER_NAME, ZONE_DISPLAY_NAME, ACCESS_KEY, SECRET_KEY, REALM_NAME, and ZONE_GROUP_NAME. Use a different value for ZONE_NAME from the primary storage cluster. Set MASTER_RGW_NODE_NAME to the Ceph Object Gateway node for the master zone. Note that, compared to the primary storage cluster, the settings for rgw_zonemaster, rgw_zonesecondary, and rgw_zonegroupmaster are reversed.
Syntax
rgw_instances:
  - instance_name: 'INSTANCE_NAME_1'
    rgw_multisite: true
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_zone: ZONE_NAME_2
    rgw_zonegroup: ZONE_GROUP_NAME_1
    rgw_realm: REALM_NAME_1
    rgw_zone_user: ZONE_USER_NAME_1
    rgw_zone_user_display_name: "ZONE_DISPLAY_NAME_1"
    system_access_key: ACCESS_KEY_1
    system_secret_key: SECRET_KEY_1
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: PORT_NUMBER_1
    endpoint: RGW_PRIMARY_HOSTNAME_ENDPOINT:RGW_PRIMARY_PORT_NUMBER_1
Example
rgw_instances:
  - instance_name: 'rgw0'
    rgw_multisite: true
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_zone: lyon
    rgw_zonegroup: idf
    rgw_realm: france
    rgw_zone_user: jacques.chirac
    rgw_zone_user_display_name: "Jacques Chirac"
    system_access_key: P9Eb6S8XNyo4dtZZUUMy
    system_secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8080
    endpoint: http://rgw-primary:8081
Optional: For creating multiple instances, edit the files and add the configuration details for all the instances on the respective Object Gateway nodes. Configure the following settings, along with updating the items under rgw_instances. Use the random strings saved in the multi-site-keys-realm-1.txt file for ACCESS_KEY_1 and SECRET_KEY_1. Set RGW_PRIMARY_HOSTNAME to the Object Gateway node in the primary storage cluster.
Syntax
rgw_instances:
  - instance_name: 'INSTANCE_NAME_1'
    rgw_multisite: true
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_zone: ZONE_NAME_2
    rgw_zonegroup: ZONE_GROUP_NAME_1
    rgw_realm: REALM_NAME_1
    rgw_zone_user: ZONE_USER_NAME_1
    rgw_zone_user_display_name: "ZONE_DISPLAY_NAME_1"
    system_access_key: ACCESS_KEY_1
    system_secret_key: SECRET_KEY_1
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: PORT_NUMBER_1
    endpoint: RGW_PRIMARY_HOSTNAME:RGW_PRIMARY_PORT_NUMBER_1
  - instance_name: 'INSTANCE_NAME_2'
    rgw_multisite: true
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_zone: ZONE_NAME_2
    rgw_zonegroup: ZONE_GROUP_NAME_1
    rgw_realm: REALM_NAME_1
    rgw_zone_user: ZONE_USER_NAME_1
    rgw_zone_user_display_name: "ZONE_DISPLAY_NAME_1"
    system_access_key: ACCESS_KEY_1
    system_secret_key: SECRET_KEY_1
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: PORT_NUMBER_1
    endpoint: RGW_PRIMARY_HOSTNAME:RGW_PRIMARY_PORT_NUMBER_2
Example
rgw_instances:
  - instance_name: 'rgw0'
    rgw_multisite: true
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_zone: lyon
    rgw_zonegroup: idf
    rgw_realm: france
    rgw_zone_user: jacques.chirac
    rgw_zone_user_display_name: "Jacques Chirac"
    system_access_key: P9Eb6S8XNyo4dtZZUUMy
    system_secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8080
    endpoint: http://rgw-primary:8080
  - instance_name: 'rgw1'
    rgw_multisite: true
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_zone: lyon
    rgw_zonegroup: idf
    rgw_realm: france
    rgw_zone_user: jacques.chirac
    rgw_zone_user_display_name: "Jacques Chirac"
    system_access_key: P9Eb6S8XNyo4dtZZUUMy
    system_secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8081
    endpoint: http://rgw-primary:8081
On both sites, run the following steps:
Run the Ansible playbook on the primary storage cluster:
Bare-metal deployments:
[user@ansible ceph-ansible]$ ansible-playbook site.yml -i hosts
Container deployments:
[user@ansible ceph-ansible]$ ansible-playbook site-container.yml -i hosts
Verify the secondary storage cluster can access the API on the primary storage cluster.
From the Object Gateway nodes on the secondary storage cluster, use curl or another HTTP client to connect to the API on the primary cluster. Compose the URL using the information used to configure rgw_pull_proto, rgw_pullhost, and rgw_pull_port in all.yml. Following the example above, the URL is http://rgw-primary:8080. If you cannot access the API, verify the URL is right and update all.yml if required. Once the URL works and any network issues are resolved, continue with the next step to run the Ansible playbook on the secondary storage cluster.
Run the Ansible playbook on the secondary storage cluster:
Note: If the cluster is deployed and you are making changes to the Ceph Object Gateway, then use the --limit rgws option.
Bare-metal deployments:
[user@ansible ceph-ansible]$ ansible-playbook site.yml -i hosts
Container deployments:
[user@ansible ceph-ansible]$ ansible-playbook site-container.yml -i hosts
After running the Ansible playbook on the primary and secondary storage clusters, the Ceph Object Gateways run in an active-active state.
Verify the multisite Ceph Object Gateway configuration on both sites:
Syntax
radosgw-admin sync status
5.7.3. Configuring a multi-site Ceph Object Gateway with multiple realms and multiple instances
Ceph-ansible configures Ceph Object Gateways to mirror data in multiple realms across multiple storage clusters with multiple Ceph Object Gateway instances.
Ceph-ansible cannot reconfigure gateways to a multi-site setup after they have already been used in single site configurations. You can deploy this configuration manually. Contact Red Hat Support for assistance.
Prerequisites
- Two running Red Hat Ceph Storage clusters.
- At least two Object Gateway nodes in each storage cluster.
- On the Ceph Object Gateway nodes, perform the tasks listed in the Requirements for Installing Red Hat Ceph Storage section in the Red Hat Ceph Storage Installation Guide.
- For each Object Gateway node, perform steps 1 - 6 in Installing the Ceph Object Gateway section in the Red Hat Ceph Storage Installation Guide.
Procedure
On any node, generate the system access keys and secret keys for realms one and two, and save them in files named multi-site-keys-realm-1.txt and multi-site-keys-realm-2.txt, respectively:
[root@ansible ~]# echo system_access_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1) > multi-site-keys-realm-1.txt
[root@ansible ~]# echo system_secret_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 40 | head -n 1) >> multi-site-keys-realm-1.txt
[root@ansible ~]# echo system_access_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1) > multi-site-keys-realm-2.txt
[root@ansible ~]# echo system_secret_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 40 | head -n 1) >> multi-site-keys-realm-2.txt
Site-A storage cluster
Navigate to the Ansible configuration directory:
[root@ansible ~]# cd /usr/share/ceph-ansible
Open and edit the group_vars/all.yml file. Uncomment the rgw_multisite line and set it to true. Uncomment the rgw_multisite_proto parameter:
rgw_multisite: true
rgw_multisite_proto: "http"
Create a host_vars directory in /usr/share/ceph-ansible:
[root@ansible ceph-ansible]# mkdir host_vars
Create a file in host_vars for each of the Object Gateway nodes on the site-A storage cluster. The file name should be the same name as used in the Ansible inventory file. For example, if the Object Gateway node is named rgw-site-a, create the file host_vars/rgw-site-a.
Syntax
touch host_vars/NODE_NAME
Example
[root@ansible ceph-ansible]# touch host_vars/rgw-site-a
Note: If there are multiple Ceph Object Gateway nodes in the cluster used for the multi-site configuration, then create separate files for each of the nodes.
For creating multiple instances for the first realm, edit the files and add the configuration details for all the instances on the respective Object Gateway nodes. Configure the following settings, along with updating the items under rgw_instances for the first realm. Use the random strings saved in the multi-site-keys-realm-1.txt file for ACCESS_KEY_1 and SECRET_KEY_1.
Syntax
rgw_instances:
  - instance_name: 'INSTANCE_NAME_1'
    rgw_multisite: true
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_zone: ZONE_NAME_1
    rgw_zonegroup: ZONE_GROUP_NAME_1
    rgw_realm: REALM_NAME_1
    rgw_zone_user: ZONE_USER_NAME_1
    rgw_zone_user_display_name: "ZONE_DISPLAY_NAME_1"
    system_access_key: ACCESS_KEY_1
    system_secret_key: SECRET_KEY_1
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: PORT_NUMBER_1
  - instance_name: 'INSTANCE_NAME_2'
    rgw_multisite: true
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_zone: ZONE_NAME_1
    rgw_zonegroup: ZONE_GROUP_NAME_1
    rgw_realm: REALM_NAME_1
    rgw_zone_user: ZONE_USER_NAME_1
    rgw_zone_user_display_name: "ZONE_DISPLAY_NAME_1"
    system_access_key: ACCESS_KEY_1
    system_secret_key: SECRET_KEY_1
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: PORT_NUMBER_1
Example
rgw_instances:
  - instance_name: 'rgw0'
    rgw_multisite: true
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_zone: paris
    rgw_zonegroup: idf
    rgw_realm: france
    rgw_zone_user: jacques.chirac
    rgw_zone_user_display_name: "Jacques Chirac"
    system_access_key: P9Eb6S8XNyo4dtZZUUMy
    system_secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8080
  - instance_name: 'rgw1'
    rgw_multisite: true
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_zone: paris
    rgw_zonegroup: idf
    rgw_realm: france
    rgw_zone_user: jacques.chirac
    rgw_zone_user_display_name: "Jacques Chirac"
    system_access_key: P9Eb6S8XNyo4dtZZUUMy
    system_secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8081
Note: Skip the next step now. Run it, followed by running the Ansible playbook, after configuring all the realms on site-B, because site-A is secondary to those realms.
For multiple instances for the other realms, configure the following settings, along with updating the items under rgw_instances. Use the random strings saved in the multi-site-keys-realm-2.txt file for ACCESS_KEY_2 and SECRET_KEY_2.
Syntax
rgw_instances:
  - instance_name: 'INSTANCE_NAME_1'
    rgw_multisite: true
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_zone: ZONE_NAME_2
    rgw_zonegroup: ZONE_GROUP_NAME_2
    rgw_realm: REALM_NAME_2
    rgw_zone_user: ZONE_USER_NAME_2
    rgw_zone_user_display_name: "ZONE_DISPLAY_NAME_2"
    system_access_key: ACCESS_KEY_2
    system_secret_key: SECRET_KEY_2
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: PORT_NUMBER_1
    endpoint: RGW_SITE_B_PRIMARY_HOSTNAME_ENDPOINT:RGW_SITE_B_PORT_NUMBER_1
  - instance_name: 'INSTANCE_NAME_2'
    rgw_multisite: true
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_zone: ZONE_NAME_2
    rgw_zonegroup: ZONE_GROUP_NAME_2
    rgw_realm: REALM_NAME_2
    rgw_zone_user: ZONE_USER_NAME_2
    rgw_zone_user_display_name: "ZONE_DISPLAY_NAME_2"
    system_access_key: ACCESS_KEY_2
    system_secret_key: SECRET_KEY_2
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: PORT_NUMBER_1
    endpoint: RGW_SITE_B_PRIMARY_HOSTNAME_ENDPOINT:RGW_SITE_B_PORT_NUMBER_1
Example
rgw_instances:
  - instance_name: 'rgw0'
    rgw_multisite: true
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_zone: fairbanks
    rgw_zonegroup: alaska
    rgw_realm: usa
    rgw_zone_user: edward.lewis
    rgw_zone_user_display_name: "Edward Lewis"
    system_access_key: yu17wkvAx3B8Wyn08XoF
    system_secret_key: 5YZfaSUPqxSNIkZQQA3lBZ495hnIV6k2HAz710BY
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8080
    endpoint: http://rgw-site-b:8081
  - instance_name: 'rgw1'
    rgw_multisite: true
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_zone: fairbanks
    rgw_zonegroup: alaska
    rgw_realm: usa
    rgw_zone_user: edward.lewis
    rgw_zone_user_display_name: "Edward Lewis"
    system_access_key: yu17wkvAx3B8Wyn08XoF
    system_secret_key: 5YZfaSUPqxSNIkZQQA3lBZ495hnIV6k2HAz710BY
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8081
    endpoint: http://rgw-site-b:8081
Run the Ansible playbook on the site-A storage cluster:
Bare-metal deployments:
[user@ansible ceph-ansible]$ ansible-playbook site.yml -i hosts
Container deployments:
[user@ansible ceph-ansible]$ ansible-playbook site-container.yml -i hosts
Site-B Storage Cluster
Navigate to the Ceph-ansible configuration directory:
[root@ansible ~]# cd /usr/share/ceph-ansible
Open and edit the group_vars/all.yml file. Uncomment the rgw_multisite line and set it to true. Uncomment the rgw_multisite_proto parameter:
rgw_multisite: true
rgw_multisite_proto: "http"
Create a host_vars directory in /usr/share/ceph-ansible:
[root@ansible ceph-ansible]# mkdir host_vars
Create a file in host_vars for each of the Object Gateway nodes on the site-B storage cluster. The file name should be the same name as used in the Ansible inventory file. For example, if the Object Gateway node is named rgw-site-b, create the file host_vars/rgw-site-b.
Syntax
touch host_vars/NODE_NAME
Example
[root@ansible ceph-ansible]# touch host_vars/rgw-site-b
Note: If there are multiple Ceph Object Gateway nodes in the cluster used for the multi-site configuration, then create files for each of the nodes.
For creating multiple instances for the first realm, edit the files and add the configuration details for all the instances on the respective Object Gateway nodes. Configure the following settings, along with updating the items under rgw_instances for the first realm. Use the random strings saved in the multi-site-keys-realm-1.txt file for ACCESS_KEY_1 and SECRET_KEY_1. Set RGW_SITE_A_PRIMARY_HOSTNAME_ENDPOINT to the Object Gateway node in the site-A storage cluster.
Syntax
rgw_instances:
  - instance_name: 'INSTANCE_NAME_1'
    rgw_multisite: true
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_zone: ZONE_NAME_1
    rgw_zonegroup: ZONE_GROUP_NAME_1
    rgw_realm: REALM_NAME_1
    rgw_zone_user: ZONE_USER_NAME_1
    rgw_zone_user_display_name: "ZONE_DISPLAY_NAME_1"
    system_access_key: ACCESS_KEY_1
    system_secret_key: SECRET_KEY_1
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: PORT_NUMBER_1
    endpoint: RGW_SITE_A_HOSTNAME_ENDPOINT:RGW_SITE_A_PORT_NUMBER_1
  - instance_name: 'INSTANCE_NAME_2'
    rgw_multisite: true
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_zone: ZONE_NAME_1
    rgw_zonegroup: ZONE_GROUP_NAME_1
    rgw_realm: REALM_NAME_1
    rgw_zone_user: ZONE_USER_NAME_1
    rgw_zone_user_display_name: "ZONE_DISPLAY_NAME_1"
    system_access_key: ACCESS_KEY_1
    system_secret_key: SECRET_KEY_1
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: PORT_NUMBER_1
    endpoint: RGW_SITE_A_PRIMARY_HOSTNAME_ENDPOINT:RGW_SITE_A_PORT_NUMBER_1
Example
rgw_instances:
  - instance_name: 'rgw0'
    rgw_multisite: true
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_zone: paris
    rgw_zonegroup: idf
    rgw_realm: france
    rgw_zone_user: jacques.chirac
    rgw_zone_user_display_name: "Jacques Chirac"
    system_access_key: P9Eb6S8XNyo4dtZZUUMy
    system_secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8080
    endpoint: http://rgw-site-a:8080
  - instance_name: 'rgw1'
    rgw_multisite: true
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_zone: paris
    rgw_zonegroup: idf
    rgw_realm: france
    rgw_zone_user: jacques.chirac
    rgw_zone_user_display_name: "Jacques Chirac"
    system_access_key: P9Eb6S8XNyo4dtZZUUMy
    system_secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8081
    endpoint: http://rgw-site-a:8081
To create multiple instances for the other realms, configure the following settings, along with updating the items under rgw_instances. Use the random strings saved in the multi-site-keys-realm-2.txt file for ACCESS_KEY_2 and SECRET_KEY_2. Because site-B is the master zone for this realm, no endpoint setting is needed.

Syntax
rgw_instances:
  - instance_name: 'INSTANCE_NAME_1'
    rgw_multisite: true
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_zone: ZONE_NAME_2
    rgw_zonegroup: ZONE_GROUP_NAME_2
    rgw_realm: REALM_NAME_2
    rgw_zone_user: ZONE_USER_NAME_2
    rgw_zone_user_display_name: "ZONE_DISPLAY_NAME_2"
    system_access_key: ACCESS_KEY_2
    system_secret_key: SECRET_KEY_2
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: PORT_NUMBER_1
  - instance_name: 'INSTANCE_NAME_2'
    rgw_multisite: true
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_zone: ZONE_NAME_2
    rgw_zonegroup: ZONE_GROUP_NAME_2
    rgw_realm: REALM_NAME_2
    rgw_zone_user: ZONE_USER_NAME_2
    rgw_zone_user_display_name: "ZONE_DISPLAY_NAME_2"
    system_access_key: ACCESS_KEY_2
    system_secret_key: SECRET_KEY_2
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: PORT_NUMBER_2
Example
rgw_instances:
  - instance_name: 'rgw0'
    rgw_multisite: true
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_zone: fairbanks
    rgw_zonegroup: alaska
    rgw_realm: usa
    rgw_zone_user: edward.lewis
    rgw_zone_user_display_name: "Edward Lewis"
    system_access_key: yu17wkvAx3B8Wyn08XoF
    system_secret_key: 5YZfaSUPqxSNIkZQQA3lBZ495hnIV6k2HAz710BY
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8080
  - instance_name: 'rgw1'
    rgw_multisite: true
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_zone: fairbanks
    rgw_zonegroup: alaska
    rgw_realm: usa
    rgw_zone_user: edward.lewis
    rgw_zone_user_display_name: "Edward Lewis"
    system_access_key: yu17wkvAx3B8Wyn08XoF
    system_secret_key: 5YZfaSUPqxSNIkZQQA3lBZ495hnIV6k2HAz710BY
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8081
Run the Ansible playbook on the site-B storage cluster:
Bare-metal deployments:
[user@ansible ceph-ansible]$ ansible-playbook site.yml -i hosts
Container deployments:
[user@ansible ceph-ansible]$ ansible-playbook site-container.yml -i hosts
Run the Ansible playbook again on the site-A storage cluster to configure the other realms of site-A.
After running the Ansible playbook on the site-A and site-B storage clusters, the Ceph Object Gateways run in an active-active state.
Verification
Verify the multisite Ceph Object Gateway configuration:
- From the Ceph Monitor and Object Gateway nodes at each site, site-A and site-B, use curl or another HTTP client to verify that the APIs are accessible from the other site. See the curl sketch after the example below.
- Run the radosgw-admin sync status command on both sites.

Syntax
radosgw-admin sync status
radosgw-admin sync status --rgw-realm=REALM_NAME 1
- 1
- Use this option for multiple realms on the respective nodes of the storage cluster.
Example
[user@ansible ceph-ansible]$ radosgw-admin sync status
[user@ansible ceph-ansible]$ radosgw-admin sync status --rgw-realm=usa
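For the HTTP check mentioned above, a minimal sketch, assuming the example gateway host names and frontend ports used in this procedure (rgw-site-a and rgw-site-b):

[user@site-a ~]$ curl http://rgw-site-b:8080
[user@site-b ~]$ curl http://rgw-site-a:8080

An anonymous ListAllMyBuckets XML response indicates that the Ceph Object Gateway API on the remote site is reachable.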
5.8. Deploying OSDs with different hardware on the same host
You can deploy mixed OSDs, for example, HDDs and SSDs, on the same host with the device_class feature in Ansible.
Prerequisites
- A valid customer subscription.
- Root-level access to the Ansible administration node.
- Enable Red Hat Ceph Storage Tools and Ansible repositories.
- The ansible user account for use with the Ansible application.
- OSDs are deployed.
Procedure
Create crush_rules in the group_vars/mons.yml file:

Example

crush_rule_config: true
crush_rule_hdd:
  name: HDD
  root: default
  type: host
  class: hdd
  default: true
crush_rule_ssd:
  name: SSD
  root: default
  type: host
  class: ssd
  default: true
crush_rules:
  - "{{ crush_rule_hdd }}"
  - "{{ crush_rule_ssd }}"
create_crush_tree: true
Note
If you are not using SSD or HDD devices in the cluster, do not define the crush_rules for that device.

Create pools, with the created crush_rules, in the group_vars/clients.yml file.

Example

copy_admin_key: True
user_config: True
pool1:
  name: "pool1"
  pg_num: 128
  pgp_num: 128
  rule_name: "HDD"
  type: "replicated"
  device_class: "hdd"
pools:
  - "{{ pool1 }}"
Edit the inventory file to assign CRUSH roots and locations to the OSDs:
Example
[mons]
mon1

[osds]
osd1 osd_crush_location="{ 'root': 'default', 'rack': 'rack1', 'host': 'osd1' }"
osd2 osd_crush_location="{ 'root': 'default', 'rack': 'rack1', 'host': 'osd2' }"
osd3 osd_crush_location="{ 'root': 'default', 'rack': 'rack2', 'host': 'osd3' }"
osd4 osd_crush_location="{ 'root': 'default', 'rack': 'rack2', 'host': 'osd4' }"
osd5 devices="['/dev/sda', '/dev/sdb']" osd_crush_location="{ 'root': 'default', 'rack': 'rack3', 'host': 'osd5' }"
osd6 devices="['/dev/sda', '/dev/sdb']" osd_crush_location="{ 'root': 'default', 'rack': 'rack3', 'host': 'osd6' }"

[mgrs]
mgr1

[clients]
client1
View the CRUSH tree:
Syntax
[root@mon ~]# ceph osd tree
Example
TYPE NAME
root default
    rack rack1
        host osd1
            osd.0
            osd.10
        host osd2
            osd.3
            osd.7
            osd.12
    rack rack2
        host osd3
            osd.1
            osd.6
            osd.11
        host osd4
            osd.4
            osd.9
            osd.13
    rack rack3
        host osd5
            osd.2
            osd.8
        host osd6
            osd.14
            osd.15
Validate the pools.
Example
# for i in $(rados lspools); do echo "pool: $i"; ceph osd pool get $i crush_rule; done
pool: pool1
crush_rule: HDD
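To confirm that the device classes were detected and that the per-class shadow hierarchies exist, you can also run the following standard Ceph commands; the exact output depends on your hardware:

[root@mon ~]# ceph osd crush rule ls
[root@mon ~]# ceph osd crush tree --show-shadow

The --show-shadow option lists the shadow trees that CRUSH maintains for each device class, for example default~hdd and default~ssd.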
Additional Resources
- See the Installing a Red Hat Ceph Storage Cluster in the Red Hat Ceph Storage Installation Guide for more details.
- See Device Classes in Red Hat Ceph Storage Storage Strategies Guide for more details.
5.9. Installing the NFS-Ganesha Gateway
The Ceph NFS Ganesha gateway is an NFS interface built on top of the Ceph Object Gateway. It provides applications with a POSIX-like file system interface to the Ceph Object Gateway, which is useful for migrating files from existing file systems to Ceph object storage.
Prerequisites
- A running Ceph storage cluster, preferably in the active + clean state.
- At least one node running a Ceph Object Gateway.
- Disable any running kernel NFS service instances on any host that will run NFS-Ganesha before attempting to run NFS-Ganesha. NFS-Ganesha will not start if another NFS instance is running.
- Enable passwordless SSH access.
Ensure the rpcbind service is running:
# systemctl start rpcbind
Note
The rpcbind package that provides the rpcbind service is usually installed by default. If that is not the case, install the package first.
If the nfs-server service is running, stop and disable it:
# systemctl stop nfs-server.service
# systemctl disable nfs-server.service
Procedure
Perform the following tasks on the Ansible administration node.
Create the nfss.yml file from the sample file:

[root@ansible ~]# cd /usr/share/ceph-ansible/group_vars
[root@ansible ~]# cp nfss.yml.sample nfss.yml
Add gateway hosts to the /etc/ansible/hosts file under an [nfss] group to identify their group membership to Ansible.

[nfss]
NFS_HOST_NAME_1
NFS_HOST_NAME_2
NFS_HOST_NAME[3..10]
If the hosts have sequential naming, you can use a range specifier, for example: [3..10].

Navigate to the Ansible configuration directory:

[root@ansible ~]# cd /usr/share/ceph-ansible
To copy the administrator key to the Ceph Object Gateway node, uncomment the copy_admin_key setting in the /usr/share/ceph-ansible/group_vars/nfss.yml file:

copy_admin_key: true
Configure the FSAL (File System Abstraction Layer) sections of the /usr/share/ceph-ansible/group_vars/nfss.yml file. Provide an export ID (NUMERIC_EXPORT_ID), S3 user ID (S3_USER), S3 access key (ACCESS_KEY), and secret key (SECRET_KEY):

# FSAL RGW Config #
ceph_nfs_rgw_export_id: NUMERIC_EXPORT_ID
#ceph_nfs_rgw_pseudo_path: "/"
#ceph_nfs_rgw_protocols: "3,4"
#ceph_nfs_rgw_access_type: "RW"
ceph_nfs_rgw_user: "S3_USER"
ceph_nfs_rgw_access_key: "ACCESS_KEY"
ceph_nfs_rgw_secret_key: "SECRET_KEY"
Warning
Access and secret keys are optional, and can be generated.
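If you need to generate keys, one option is to create or inspect the S3 user with radosgw-admin on a Ceph Object Gateway node. A sketch, with S3_USER as a placeholder:

# radosgw-admin user create --uid=S3_USER --display-name="NFS-Ganesha user"
# radosgw-admin user info --uid=S3_USER

The user create output includes auto-generated access_key and secret_key values, and user info retrieves them for an existing user.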
Run the Ansible playbook:
Bare-metal deployments:
[ansible@admin ceph-ansible]$ ansible-playbook site.yml --limit nfss -i hosts
Container deployments:
[ansible@admin ceph-ansible]$ ansible-playbook site-container.yml --limit nfss -i hosts
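After the playbook completes, you can verify the export from an NFS client. A minimal sketch, assuming an NFS-Ganesha gateway host named nfs-host and an NFSv4 mount of the pseudo root:

# mkdir -p /mnt/rgw
# mount -t nfs -o nfsvers=4.1,proto=tcp nfs-host:/ /mnt/rgw

Buckets owned by the configured S3 user then appear as directories under the mount point.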
5.10. Understanding the limit option

This section contains information about the Ansible --limit option.

Ansible supports the --limit option, which enables you to use the site and site-container Ansible playbooks for a particular section of the inventory file.
ansible-playbook site.yml|site-container.yml --limit osds|rgws|clients|mdss|nfss|iscsigws -i hosts
Bare-metal
For example, to redeploy only OSDs on bare-metal, run the following command as the Ansible user:
[ansible@ansible ceph-ansible]$ ansible-playbook site.yml --limit osds -i hosts
Containers
For example, to redeploy only OSDs on containers, run the following command as the Ansible user:
[ansible@ansible ceph-ansible]$ ansible-playbook site-container.yml --limit osds -i hosts
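The --limit option also accepts ordinary Ansible host patterns, so a run can be restricted to specific nodes rather than a whole group. For example, assuming an inventory host named osd1 (a hypothetical name):

[ansible@ansible ceph-ansible]$ ansible-playbook site.yml --limit osd1 -i hosts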
5.11. The placement group autoscaler

Placement group (PG) tuning used to be a manual process of plugging in numbers for pg_num by using the PG calculator. Starting with Red Hat Ceph Storage 4.1, PG tuning can be done automatically by enabling the pg_autoscaler Ceph Manager module. The PG autoscaler is configured on a per-pool basis, and scales pg_num by powers of two. The PG autoscaler only proposes a change to pg_num if the suggested value differs from the actual value by more than a factor of three.
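For example, if a pool's current pg_num is 32 and the autoscaler calculates an ideal value of 128, that is a factor of four, so a change is proposed; an ideal value of 64, a factor of two, would not trigger one.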
The PG autoscaler has three modes:

warn
- The default mode for new and existing pools. A health warning is generated if the suggested pg_num value varies too much from the current pg_num value.

on
- The pool’s pg_num is adjusted automatically.

off
- The autoscaler can be turned off for any pool, but storage administrators will need to manually set the pg_num value for the pool.
Once the PG autoscaler is enabled for a pool, you can view the value adjustments by running the ceph osd pool autoscale-status command. The autoscale-status command displays the current state of the pools. Here are the autoscale-status column descriptions:
SIZE
- Reports the total amount of data, in bytes, stored in the pool. This size includes object data and OMAP data.
TARGET SIZE
- Reports the expected size of the pool as provided by the storage administrator. This value is used to calculate the pool’s ideal number of PGs.
RATE
- The replication factor for replicated pools or the ratio for erasure-coded pools.
RAW CAPACITY
- The raw storage capacity of the storage devices that the pool is mapped to, based on CRUSH.
RATIO
- The ratio of total storage being consumed by the pool.
TARGET RATIO
- A ratio, provided by the storage administrator, that specifies the fraction of the total storage cluster’s space the pool is expected to consume.
PG_NUM
- The current number of placement groups for the pool.
NEW PG_NUM
- The proposed value. This value might not be set.
AUTOSCALE
- The PG autoscaler mode set for the pool.
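An illustrative excerpt of the autoscale-status output; the values below are hypothetical, and the exact set of columns can vary between releases:

POOL   SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  PG_NUM  NEW PG_NUM  AUTOSCALE
pool1  742.5M               3.0   297.9G        0.0000                32      128         warn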
5.11.1. Configuring the placement group autoscaler
You can configure Ceph Ansible to enable and configure the PG autoscaler for new pools in the Red Hat Ceph Storage cluster. By default, the placement group (PG) autoscaler is off.
Currently, you can configure the placement group autoscaler only on new Red Hat Ceph Storage deployments, not on existing Red Hat Ceph Storage installations.
Prerequisites
- Access to the Ansible administration node.
- Access to a Ceph Monitor node.
Procedure
- On the Ansible administration node, open the group_vars/all.yml file for editing.

Set the pg_autoscale_mode option to True, and set the target_size_ratio value for a new or existing pool:

Example

openstack_pools:
  - {"name": backups, "target_size_ratio": 0.1, "pg_autoscale_mode": True, "application": rbd}
  - {"name": volumes, "target_size_ratio": 0.5, "pg_autoscale_mode": True, "application": rbd}
  - {"name": vms, "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}
  - {"name": images, "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}
Note
The target_size_ratio value is the weight percentage relative to other pools in the storage cluster.

- Save the changes to the group_vars/all.yml file.

Run the appropriate Ansible playbook:
Bare-metal deployments
[ansible@admin ceph-ansible]$ ansible-playbook site.yml -i hosts
Container deployments
[ansible@admin ceph-ansible]$ ansible-playbook site-container.yml -i hosts
Once the Ansible playbook finishes, check the autoscaler status from a Ceph Monitor node:
[user@mon ~]$ ceph osd pool autoscale-status
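To spot-check the mode on an individual pool, for example the volumes pool from the earlier example:

[user@mon ~]$ ceph osd pool get volumes pg_autoscale_mode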