Chapter 1. Deploying Red Hat Ceph Storage in Containers
This chapter describes how to use the Ansible application with the ceph-ansible playbook to deploy Red Hat Ceph Storage 3 in containers.
- To install a Red Hat Ceph Storage cluster, see Section 1.2, “Installing a Red Hat Ceph Storage Cluster in Containers”.
- To install the Ceph Object Gateway, see Section 1.3, “Installing the Ceph Object Gateway in a Container”.
- To install Metadata Servers, see Section 1.4, “Installing Metadata Servers”.
- To learn about the Ansible --limit option, see Section 1.5, “Understanding the limit option”.
1.1. Prerequisites
- Obtain a valid customer subscription.
- Prepare the cluster nodes. On each node:
  - Register the node to the Content Delivery Network (CDN) and attach subscriptions. See Section 1.1.1.
  - Create an Ansible user with sudo access. See Section 1.1.2.
  - Enable password-less SSH for Ansible. See Section 1.1.3.
  - Optionally, configure a firewall. See Section 1.1.4.
1.1.1. Registering Red Hat Ceph Storage Nodes to the CDN and Attaching Subscriptions
Register each Red Hat Ceph Storage (RHCS) node to the Content Delivery Network (CDN) and attach the appropriate subscription so that the node has access to software repositories. Each RHCS node must be able to access the full Red Hat Enterprise Linux 7 base content and the extras repository content.
Prerequisites
- A valid Red Hat subscription
- RHCS nodes must be able to connect to the Internet.
For RHCS nodes that cannot access the Internet during installation, first follow these steps on a system with Internet access:
Start a local Docker registry:
# docker run -d -p 5000:5000 --restart=always --name registry registry:2
Pull the Red Hat Ceph Storage 3.x image from the Red Hat Customer Portal:
# docker pull registry.access.redhat.com/rhceph/rhceph-3-rhel7
Tag the image:
# docker tag registry.access.redhat.com/rhceph/rhceph-3-rhel7 <local-host-fqdn>:5000/cephimageinlocalreg
Replace <local-host-fqdn> with your local host FQDN.
Push the image to the local Docker registry you started:
# docker push <local-host-fqdn>:5000/cephimageinlocalreg
Replace <local-host-fqdn> with your local host FQDN.
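To confirm that the image is available in the local registry before installing from it, you can query the registry’s v2 API. This is an optional sanity check and assumes the registry listens on port 5000 as started above:
# curl http://<local-host-fqdn>:5000/v2/_catalog
The response lists the repositories the registry holds, for example: {"repositories":["cephimageinlocalreg"]}.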
Procedure
Perform the following steps on all nodes in the storage cluster as the root user.
Register the node. When prompted, enter your Red Hat Customer Portal credentials:
# subscription-manager register
Pull the latest subscription data from the CDN:
# subscription-manager refresh
List all available subscriptions for Red Hat Ceph Storage:
# subscription-manager list --available --all --matches="*Ceph*"
Identify the appropriate subscription and retrieve its Pool ID.
Attach the subscription:
# subscription-manager attach --pool=$POOL_ID
Replace $POOL_ID with the Pool ID identified in the previous step.
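If you prefer not to copy the Pool ID by hand, a small shell sketch like the following can capture it. The awk pattern assumes the default subscription-manager output format, where the ID appears on a line beginning with "Pool ID:", and that the first matching pool is the one you want:
# POOL_ID=$(subscription-manager list --available --all --matches="*Ceph*" | awk '/^Pool ID:/ {print $3; exit}')
# subscription-manager attach --pool=$POOL_ID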
Disable the default software repositories. Then, enable the Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux 7 Server Extras repositories:
# subscription-manager repos --disable=*
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-extras-rpms
Update the system to receive the latest packages:
# yum update
Additional Resources
- See the Registering a System and Managing Subscriptions chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7.
1.1.2. Creating an Ansible user with sudo access
Ansible must be able to log into all the Red Hat Ceph Storage (RHCS) nodes as a user that has root privileges to install software and create configuration files without prompting for a password. You must create an Ansible user with password-less root access on all nodes in the storage cluster when deploying and configuring a Red Hat Ceph Storage cluster with Ansible.
Prerequisite
- root or sudo access to all nodes in the storage cluster
Procedure
Log in to a Ceph node as the root user:
ssh root@$HOST_NAME
Replace $HOST_NAME with the host name of the Ceph node.
Example
# ssh root@mon01
Enter the root password when prompted.
Create a new Ansible user:
adduser $USER_NAME
Replace $USER_NAME with the new user name for the Ansible user.
Example
# adduser admin
Important
Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks.
Set a new password for this user:
# passwd $USER_NAME
Replace $USER_NAME with the new user name for the Ansible user.
Example
# passwd admin
Enter the new password twice when prompted.
Configure sudo access for the newly created user:
cat << EOF > /etc/sudoers.d/$USER_NAME
$USER_NAME ALL = (root) NOPASSWD:ALL
EOF
Replace $USER_NAME with the new user name for the Ansible user.
Example
# cat << EOF > /etc/sudoers.d/admin
admin ALL = (root) NOPASSWD:ALL
EOF
Assign the correct file permissions to the new file:
chmod 0440 /etc/sudoers.d/$USER_NAME
Replace $USER_NAME with the new user name for the Ansible user.
Example
# chmod 0440 /etc/sudoers.d/admin
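As a quick check that the sudoers drop-in works, you can list the new user’s sudo privileges; this sketch assumes the example user name admin:
# sudo -l -U admin
The output should include a line similar to (root) NOPASSWD: ALL.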
Additional Resources
- The Adding a New User section in the System Administrator’s Guide for Red Hat Enterprise Linux 7.
1.1.3. Enabling Password-less SSH for Ansible
Generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password.
Prerequisites
- Section 1.1.2, “Creating an Ansible user with sudo access”
Procedure
Perform the following steps from the Ansible administration node as the Ansible user.
Generate the SSH key pair, accept the default file name and leave the passphrase empty:
[user@admin ~]$ ssh-keygen
Copy the public key to all nodes in the storage cluster:
ssh-copy-id $USER_NAME@$HOST_NAME
Replace:
- $USER_NAME with the new user name for the Ansible user.
- $HOST_NAME with the host name of the Ceph node.
Example
[user@admin ~]$ ssh-copy-id ceph-admin@ceph-mon01
Create and edit the ~/.ssh/config file.
Important
By creating and editing the ~/.ssh/config file, you do not have to specify the -u $USER_NAME option each time you execute the ansible-playbook command.
Create the SSH config file:
[user@admin ~]$ touch ~/.ssh/config
Open the config file for editing. Set the Hostname and User options for each node in the storage cluster:
Host node1
   Hostname $HOST_NAME
   User $USER_NAME
Host node2
   Hostname $HOST_NAME
   User $USER_NAME
...
Replace:
- $HOST_NAME with the host name of the Ceph node.
- $USER_NAME with the new user name for the Ansible user.
Example
Host node1
   Hostname monitor
   User admin
Host node2
   Hostname osd
   User admin
Host node3
   Hostname gateway
   User admin
Set the correct file permissions for the ~/.ssh/config file:
[admin@admin ~]$ chmod 600 ~/.ssh/config
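To confirm that key-based login works before running any playbooks, try a single command over SSH from the Ansible administration node; the host alias node1 is taken from the example config above, and you should not be prompted for a password:
[user@admin ~]$ ssh node1 hostname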
Additional Resources
- The ssh_config(5) manual page
- The OpenSSH chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7
1.1.4. Configuring a firewall for Red Hat Ceph Storage
Red Hat Ceph Storage (RHCS) uses the firewalld service.
The Monitor daemons use port 6789 for communication within the Ceph storage cluster.
On each Ceph OSD node, the OSD daemons use several ports in the range 6800-7300:
- One for communicating with clients and monitors over the public network
- One for sending data to other OSDs over a cluster network, if available; otherwise, over the public network
- One for exchanging heartbeat packets over a cluster network, if available; otherwise, over the public network
The Ceph Manager (ceph-mgr) daemons use ports in the range 6800-7300. Consider colocating the ceph-mgr daemons with Ceph Monitors on the same nodes.
The Ceph Metadata Server nodes (ceph-mds) use port 6800.
The Ceph Object Gateway nodes use port 7480 by default. However, you can change the default port, for example to port 80.
To use the SSL/TLS service, open port 443.
Prerequisite
- Network hardware is connected.
Procedure
On all RHCS nodes, start the firewalld service, enable it to run on boot, and ensure that it is running:
# systemctl enable firewalld
# systemctl start firewalld
# systemctl status firewalld
On all Monitor nodes, open port 6789 on the public network:
[root@monitor ~]# firewall-cmd --zone=public --add-port=6789/tcp
[root@monitor ~]# firewall-cmd --zone=public --add-port=6789/tcp --permanent
To limit access based on the source address:
firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
port="6789" accept"

firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
port="6789" accept" --permanent
Replace:
- $IP_ADDR with the network address of the Monitor node.
- $NETMASK_PREFIX with the netmask in CIDR notation.
Example
[root@monitor ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.11/24" port protocol="tcp" \
port="6789" accept"

[root@monitor ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.11/24" port protocol="tcp" \
port="6789" accept" --permanent
On all OSD nodes, open ports 6800-7300 on the public network:
[root@osd ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp
[root@osd ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
If you have a separate cluster network, repeat the commands with the appropriate zone.
On all Ceph Manager (ceph-mgr) nodes (usually the same nodes as the Monitor nodes), open ports 6800-7300 on the public network:
[root@monitor ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp
[root@monitor ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
If you have a separate cluster network, repeat the commands with the appropriate zone.
On all Ceph Metadata Server (ceph-mds) nodes, open port 6800 on the public network:
[root@monitor ~]# firewall-cmd --zone=public --add-port=6800/tcp
[root@monitor ~]# firewall-cmd --zone=public --add-port=6800/tcp --permanent
If you have a separate cluster network, repeat the commands with the appropriate zone.
On all Ceph Object Gateway nodes, open the relevant port or ports on the public network.
To open the default port 7480:
[root@gateway ~]# firewall-cmd --zone=public --add-port=7480/tcp
[root@gateway ~]# firewall-cmd --zone=public --add-port=7480/tcp --permanent
To limit access based on the source address:
firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
port="7480" accept"

firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
port="7480" accept" --permanent
Replace:
- $IP_ADDR with the network address of the object gateway node.
- $NETMASK_PREFIX with the netmask in CIDR notation.
Example
[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="7480" accept"

[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="7480" accept" --permanent
Optional. If you changed the default Ceph Object Gateway port, for example to port 80, open this port:
[root@gateway ~]# firewall-cmd --zone=public --add-port=80/tcp
[root@gateway ~]# firewall-cmd --zone=public --add-port=80/tcp --permanent
To limit access based on the source address, run the following commands:
firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
port="80" accept"

firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
port="80" accept" --permanent
Replace:
- $IP_ADDR with the network address of the object gateway node.
- $NETMASK_PREFIX with the netmask in CIDR notation.
Example
[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="80" accept"

[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="80" accept" --permanent
Optional. To use SSL/TLS, open port 443:
[root@gateway ~]# firewall-cmd --zone=public --add-port=443/tcp
[root@gateway ~]# firewall-cmd --zone=public --add-port=443/tcp --permanent
To limit access based on the source address, run the following commands:
firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
port="443" accept"

firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="$IP_ADDR/$NETMASK_PREFIX" port protocol="tcp" \
port="443" accept" --permanent
Replace:
- $IP_ADDR with the network address of the object gateway node.
- $NETMASK_PREFIX with the netmask in CIDR notation.
Example
[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="443" accept"

[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="443" accept" --permanent
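After opening the ports, you can verify the runtime and permanent configuration on each node; for example, on a Monitor node:
[root@monitor ~]# firewall-cmd --zone=public --list-ports
[root@monitor ~]# firewall-cmd --zone=public --list-rich-rules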
Additional Resources
- For more information about the public and cluster networks, see Verifying the Network Configuration for Red Hat Ceph Storage.
- For additional details on firewalld, see the Using Firewalls chapter in the Security Guide for Red Hat Enterprise Linux 7.
1.2. Installing a Red Hat Ceph Storage Cluster in Containers
Use the Ansible application with the ceph-ansible playbook to install Red Hat Ceph Storage 3 in containers.
A Ceph cluster used in production usually consists of ten or more nodes. To deploy Red Hat Ceph Storage as a container image, Red Hat recommends using a Ceph cluster that consists of at least three OSD and three Monitor nodes.
Ceph can run with one monitor; however, to ensure high availability in a production cluster, Red Hat will only support deployments with at least three monitor nodes.
Prerequisites
On the Ansible administration node, enable the Red Hat Ceph Storage 3 Tools repository and Ansible repository:
[root@admin ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms --enable=rhel-7-server-ansible-2.4-rpms
On the Ansible administration node, install the ceph-ansible package:
[root@admin ~]# yum install ceph-ansible
Procedure
Run the following commands from the Ansible administration node unless instructed otherwise.
In the user’s home directory, create the ceph-ansible-keys directory, where Ansible stores temporary values generated by the ceph-ansible playbook:
[user@admin ~]$ mkdir ~/ceph-ansible-keys
Create a symbolic link to the /usr/share/ceph-ansible/group_vars directory in the /etc/ansible/ directory:
[root@admin ~]# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars
Navigate to the /usr/share/ceph-ansible/ directory:
[user@admin ~]$ cd /usr/share/ceph-ansible
Create new copies of the yml.sample files:
[root@admin ceph-ansible]# cp group_vars/all.yml.sample group_vars/all.yml
[root@admin ceph-ansible]# cp group_vars/osds.yml.sample group_vars/osds.yml
[root@admin ceph-ansible]# cp site-docker.yml.sample site-docker.yml
Edit the copied files.
Edit the group_vars/all.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.

| Option | Value | Required | Notes |
| --- | --- | --- | --- |
| monitor_interface | The interface that the Monitor nodes listen to | monitor_interface, monitor_address, or monitor_address_block is required | |
| monitor_address | The address that the Monitor nodes listen to | | |
| monitor_address_block | The subnet of the Ceph public network | | Use when the IP addresses of the nodes are unknown, but the subnet is known |
| ip_version | ipv6 | Yes if using IPv6 addressing | |
| journal_size | The required size of the journal in MB | No | |
| public_network | The IP address and netmask of the Ceph public network | Yes | See the Verifying the Network Configuration for Red Hat Ceph Storage section in the Installation Guide for Red Hat Enterprise Linux |
| cluster_network | The IP address and netmask of the Ceph cluster network | No | |
| ceph_docker_image | rhceph/rhceph-3-rhel7, or cephimageinlocalreg if using a local Docker registry | Yes | |
| containerized_deployment | true | Yes | |
| ceph_docker_registry | registry.access.redhat.com, or <local-host-fqdn> if using a local Docker registry | Yes | |
An example of the all.yml file can look like this:
monitor_interface: eth0
journal_size: 5120
public_network: 192.168.0.0/24
ceph_docker_image: rhceph/rhceph-3-rhel7
containerized_deployment: true
ceph_docker_registry: registry.access.redhat.com
For additional details, see the all.yml file.
Edit the group_vars/osds.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.
Table 1.1. OSD Ansible Settings
| Option | Value | Required | Notes |
| --- | --- | --- | --- |
| osd_scenario | collocated to use the same device for journal and OSD data; non-collocated to use a dedicated device to store journal data; lvm to use the Logical Volume Manager to store OSD data | Yes | When using osd_scenario: non-collocated, ceph-ansible expects the variables devices and dedicated_devices to match. For example, if you specify 10 disks in devices, you must specify 10 entries in dedicated_devices. Currently, Red Hat only supports dedicated journals when using osd_scenario: lvm, not collocated journals. |
| osd_auto_discovery | true to automatically discover OSDs | Yes if using osd_scenario: collocated | Cannot be used when the devices setting is used |
| devices | List of devices where ceph data is stored | Yes to specify the list of devices | Cannot be used when the osd_auto_discovery setting is used |
| dedicated_devices | List of dedicated devices for non-collocated OSDs where ceph journal is stored | Yes if osd_scenario: non-collocated | Should be nonpartitioned devices |
| dmcrypt | true to encrypt OSDs | No | Defaults to false |
| lvm_volumes | a list of dictionaries | Yes if using osd_scenario: lvm | Each dictionary must contain data, journal, and data_vg keys. The data key must be a logical volume. The journal key can be a logical volume (LV), device, or partition, but do not use one journal for multiple data LVs. The data_vg key must be the volume group containing the data LV. Optionally, the journal_vg key can be used to specify the volume group containing the journal LV, if applicable. |

The following are examples of the osds.yml file using the three osd_scenario settings: collocated, non-collocated, and lvm.

osd_scenario: collocated
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
osd_scenario: non-collocated
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
dedicated_devices:
  - /dev/nvme0n1
  - /dev/nvme0n1
  - /dev/nvme0n1
  - /dev/nvme0n1
osd_scenario: lvm
lvm_volumes:
  - data: data-lv1
    data_vg: vg1
    journal: journal-lv1
    journal_vg: vg2
  - data: data-lv2
    journal: /dev/sda
    data_vg: vg1
For additional details, see the comments in the osds.yml file.
osds.ymlfile.NoteCurrently,
ceph-ansibledoes not create the volume groups or the logical volumes. This must be done before running the Anisble playbook.
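As a sketch of that preparation, the following standard LVM commands would create volume groups and logical volumes matching the lvm_volumes example above. The device names /dev/sdb and /dev/sdc and the sizes are assumptions; choose devices and sizes for your own hardware:
# pvcreate /dev/sdb /dev/sdc
# vgcreate vg1 /dev/sdb
# vgcreate vg2 /dev/sdc
# lvcreate -n data-lv1 -L 100G vg1
# lvcreate -n data-lv2 -L 100G vg1
# lvcreate -n journal-lv1 -L 5G vg2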
Edit the Ansible inventory file, located by default at /etc/ansible/hosts. Remember to comment out the example hosts.
Add the Monitor nodes under the [mons] section:
[mons]
<monitor-host-name>
<monitor-host-name>
<monitor-host-name>
Add the OSD nodes under the [osds] section. If the nodes have sequential naming, consider using a range:
[osds]
<osd-host-name[1:10]>
Alternatively, you can colocate Monitors with the OSD daemons on one node by adding the same node under the [mons] and [osds] sections. See Chapter 2, Colocation of Containerized Ceph Daemons for details.
Add the Ceph Manager (ceph-mgr) nodes under the [mgrs] section. Colocate the Ceph Manager daemon with the Monitor nodes. A complete example inventory follows this list.
[mgrs]
<monitor-host-name>
<monitor-host-name>
<monitor-host-name>
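Putting these sections together, a minimal inventory for a cluster with three Monitors, three OSD nodes, and colocated Ceph Manager daemons might look like the following; all host names are placeholders:
[mons]
mon01
mon02
mon03

[osds]
osd01
osd02
osd03

[mgrs]
mon01
mon02
mon03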
As the Ansible user, ensure that Ansible can reach the Ceph hosts:
[user@admin ~]$ ansible all -m ping
As root, create the /var/log/ansible/ directory and assign the appropriate permissions for the ansible user:
[root@admin ceph-ansible]# mkdir /var/log/ansible
[root@admin ceph-ansible]# chown ansible:ansible /var/log/ansible
[root@admin ceph-ansible]# chmod 755 /var/log/ansible
Edit the /usr/share/ceph-ansible/ansible.cfg file, updating the log_path value as follows:
log_path = /var/log/ansible/ansible.log
As the Ansible user, run the ceph-ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site-docker.yml
Note
If you deploy Red Hat Ceph Storage to Red Hat Enterprise Linux Atomic Host hosts, use the --skip-tags=with_pkg option:
[user@admin ceph-ansible]$ ansible-playbook --skip-tags=with_pkg site-docker.yml
From a Monitor node, verify the status of the Ceph cluster.
docker exec ceph-<mon|mgr>-<id> ceph health
Replace <id> with the host name of the Monitor node.
For example:
[root@monitor ~]# docker exec ceph-mon-mon0 ceph health
HEALTH_OK
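For a more detailed view than ceph health, you can query the full cluster status from the same container; the container name ceph-mon-mon0 follows the example above:
[root@monitor ~]# docker exec ceph-mon-mon0 ceph -s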
Note
In addition to verifying the cluster status, you can use the ceph-medic utility to diagnose the overall health of the Ceph Storage Cluster. See the Installing and Using ceph-medic to Diagnose a Ceph Storage Cluster chapter in the Red Hat Ceph Storage 3 Troubleshooting Guide.
1.3. Installing the Ceph Object Gateway in a Container
Use the Ansible application with the ceph-ansible playbook to install the Ceph Object Gateway in a container.
Prerequisites
- A working Red Hat Ceph Storage cluster. See Section 1.2, “Installing a Red Hat Ceph Storage Cluster in Containers” for details.
Procedure
Use the following commands from the Ansible administration node.
Navigate to the /usr/share/ceph-ansible/ directory:
[user@admin ~]$ cd /usr/share/ceph-ansible/
Uncomment the radosgw_interface parameter in the group_vars/all.yml file:
radosgw_interface: <interface>
Replace <interface> with the interface that the Ceph Object Gateway nodes listen to.
For additional details, see the all.yml file.
Create a new copy of the rgws.yml.sample file located in the group_vars directory:
[root@admin ceph-ansible]# cp group_vars/rgws.yml.sample group_vars/rgws.yml
Optional. Edit the group_vars/rgws.yml file. For additional details, see the rgws.yml file.
Add the host name of the Ceph Object Gateway node to the [rgws] section of the Ansible inventory file, located by default at /etc/ansible/hosts:
[rgws]
gateway01
Alternatively, you can colocate the Ceph Object Gateway with the OSD daemon on one node by adding the same node under the [osds] and [rgws] sections. See Chapter 2, Colocation of Containerized Ceph Daemons for details.
Run the ceph-ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit rgws
Note
If you deploy Red Hat Ceph Storage to Red Hat Enterprise Linux Atomic Host hosts, use the --skip-tags=with_pkg option:
[user@admin ceph-ansible]$ ansible-playbook --skip-tags=with_pkg site-docker.yml
Verify that the Ceph Object Gateway node was deployed successfully.
Connect to a Monitor node:
ssh <hostname>
Replace <hostname> with the host name of the Monitor node, for example:
[user@admin ~]$ ssh root@monitor
Verify that the Ceph Object Gateway pools were created properly:
[root@monitor ~]# docker exec ceph-mon-mon1 rados lspools
rbd
cephfs_data
cephfs_metadata
.rgw.root
default.rgw.control
default.rgw.data.root
default.rgw.gc
default.rgw.log
default.rgw.users.uid
From any client on the same network as the Ceph cluster, for example the Monitor node, use the curl command to send an HTTP request on port 8080 using the IP address of the Ceph Object Gateway host:
curl http://<ip-address>:8080
Replace <ip-address> with the IP address of the Ceph Object Gateway node. To determine the IP address of the Ceph Object Gateway host, use the ifconfig or ip commands.
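If the gateway is reachable, an unauthenticated request typically returns an S3-style XML bucket listing similar to the following; the exact contents can vary by version:
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>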
List buckets:
[root@monitor ~]# docker exec ceph-mon-mon1 radosgw-admin bucket list
Additional Resources
- The Red Hat Ceph Storage 3 Ceph Object Gateway Guide for Red Hat Enterprise Linux
- Section 1.5, “Understanding the limit option”
1.4. Installing Metadata Servers
Use the Ansible automation application to install a Ceph Metadata Server (MDS). Metadata Server daemons are necessary for deploying a Ceph File System.
Procedure
Perform the following steps on the Ansible administration node.
Add a new [mdss] section to the /etc/ansible/hosts file:
[mdss]
<hostname>
<hostname>
<hostname>
Replace <hostname> with the host names of the nodes where you want to install the Ceph Metadata Servers.
Alternatively, you can colocate the Metadata Server with the OSD daemon on one node by adding the same node under the [osds] and [mdss] sections. See Chapter 2, Colocation of Containerized Ceph Daemons for details.
Navigate to the /usr/share/ceph-ansible directory:
[root@admin ~]# cd /usr/share/ceph-ansible
Create a copy of the group_vars/mdss.yml.sample file named mdss.yml:
[root@admin ceph-ansible]# cp group_vars/mdss.yml.sample group_vars/mdss.yml
Optionally, edit parameters in mdss.yml. See mdss.yml for details.
Run the Ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit mdss
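To confirm that the Metadata Server daemons started, you can check the MDS map from a Monitor node. The container name ceph-mon-mon1 is carried over from the earlier examples; before a Ceph File System is created, the daemons are expected to report as standby:
[root@monitor ~]# docker exec ceph-mon-mon1 ceph mds stat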
- After installing Metadata Servers, configure them. For details, see the Configuring Metadata Server Daemons chapter in the Ceph File System Guide for Red Hat Ceph Storage 3.
Additional Resources
- The Ceph File System Guide for Red Hat Ceph Storage 3
- Section 1.5, “Understanding the limit option”
1.5. Understanding the limit option
This section contains information about the Ansible --limit option.
Ansible supports the --limit option that enables you to use the site, site-docker, and rolling_upgrade Ansible playbooks for a particular section of the inventory file.
$ ansible-playbook site.yml|rolling_upgrade.yml|site-docker.yml --limit osds|rgws|clients|mdss|nfss
For example, to redeploy only OSDs:
$ ansible-playbook /usr/share/ceph-ansible/site.yml --limit osds
If you colocate Ceph components on one node, Ansible applies a playbook to all components on the node, even though only one component type was specified with the limit option. For example, if you run the rolling_update playbook with the --limit osds option on a node that contains OSDs and Metadata Servers (MDS), Ansible upgrades both components, the OSDs and the MDSs.
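The --limit option also accepts ordinary Ansible host patterns, not just group names, so you can target a single node; in this sketch, osd06 is a hypothetical host name from the inventory:
$ ansible-playbook /usr/share/ceph-ansible/site-docker.yml --limit osd06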
1.6. Additional Resources
- The Getting Started with Containers guide for Red Hat Enterprise Linux Atomic Host
