Chapter 3. Management of hosts using the Ceph Orchestrator
As a storage administrator, you can use the Ceph Orchestrator with Cephadm in the backend to add, list, and remove hosts in an existing Red Hat Ceph Storage cluster.
You can also add labels to hosts. Labels are free-form and have no specific meanings. Each host can have multiple labels. For example, apply the mon label to all hosts that have monitor daemons deployed, mgr for all hosts with manager daemons deployed, rgw for Ceph object gateways, and so on.
Labeling all the hosts in the storage cluster helps to simplify system management tasks by allowing you to quickly identify the daemons running on each host. In addition, you can use the Ceph Orchestrator or a YAML file to deploy or remove daemons on hosts that have specific host labels.
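For example, after labeling hosts you can target them in a placement specification when deploying a service. The following is a minimal, illustrative sketch that assumes the mon label has already been applied to the intended hosts:
Example
[ceph: root@host01 /]# ceph orch apply mon --placement="label:mon"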
This section covers the following administrative tasks:
- Adding hosts using the Ceph Orchestrator
- Setting the initial CRUSH location of a host
- Adding multiple hosts using the Ceph Orchestrator
- Listing hosts using the Ceph Orchestrator
- Adding labels to hosts using the Ceph Orchestrator
- Removing hosts using the Ceph Orchestrator
- Placing hosts in maintenance mode using the Ceph Orchestrator
3.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- The IP addresses of the new hosts should be updated in the /etc/hosts file.
3.2. Adding hosts using the Ceph Orchestrator
You can use the Ceph Orchestrator with Cephadm in the backend to add hosts to an existing Red Hat Ceph Storage cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all nodes in the storage cluster.
- Register the nodes to the CDN and attach subscriptions.
- Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster.
Procedure
From the Ceph administration node, log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Extract the cluster’s public SSH keys to a folder:
Syntax
ceph cephadm get-pub-key > ~/PATH
Example
[ceph: root@host01 /]# ceph cephadm get-pub-key > ~/ceph.pub
Copy the Ceph cluster’s public SSH keys to the root user’s authorized_keys file on the new host:
Syntax
ssh-copy-id -f -i ~/PATH root@HOST_NAME_2
Example
[ceph: root@host01 /]# ssh-copy-id -f -i ~/ceph.pub root@host02
From the Ansible administration node, add the new host to the Ansible inventory file. The default location for the file is /usr/share/cephadm-ansible/hosts. The following example shows the structure of a typical inventory file:
Example
host01
host02
host03

[admin]
host00
Note
If you have previously added the new host to the Ansible inventory file and run the preflight playbook on the host, skip to step 6.
Run the preflight playbook with the --limit option:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit NEWHOST
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit host02
The preflight playbook installs podman, lvm2, chronyd, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.
From the Ceph administration node, log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Use the cephadm orchestrator to add hosts to the storage cluster:
Syntax
ceph orch host add HOST_NAME IP_ADDRESS_OF_HOST [--labels=LABEL_NAME_1,LABEL_NAME_2]
The --labels option is optional and adds the labels when adding the host. You can add multiple labels to the host.
Example
[ceph: root@host01 /]# ceph orch host add host02 10.10.128.70 --labels=mon,mgr
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
Additional Resources
- See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide.
- For more information about the cephadm-preflight playbook, see the Running the preflight playbook section in the Red Hat Ceph Storage Installation Guide.
- See the Registering Red Hat Ceph Storage nodes to the CDN and attaching subscriptions section in the Red Hat Ceph Storage Installation Guide.
- See the Creating an Ansible user with sudo access section in the Red Hat Ceph Storage Installation Guide.
3.3. Setting the initial CRUSH location of a host
You can add the location identifier to the host, which instructs cephadm to create a new CRUSH host located in the specified hierarchy.
The location attribute only affects the initial CRUSH location. Subsequent changes to the location property are ignored. Also, removing a host does not remove any CRUSH buckets.
Prerequisites
- A running Red Hat Ceph Storage cluster.
Procedure
Edit the hosts.yaml file to include the following details:
Example
service_type: host
hostname: host01
addr: 192.168.0.11
location:
  rack: rack1
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount hosts.yaml:/var/lib/ceph/hosts.yaml
Navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
Deploy the hosts using service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 ceph]# ceph orch apply -i hosts.yaml
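To confirm the resulting CRUSH hierarchy, you can optionally inspect the CRUSH tree; the rack and host names follow the example above and are illustrative:
Example
[ceph: root@host01 ceph]# ceph osd tree
The output should show the host01 bucket nested under the rack1 bucket.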
Additional Resources
- See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide.
3.4. Adding multiple hosts using the Ceph Orchestrator
You can use the Ceph Orchestrator to add multiple hosts to a Red Hat Ceph Storage cluster at the same time using the service specification in YAML file format.
Prerequisites
- A running Red Hat Ceph Storage cluster.
Procedure
Create the hosts.yaml file:
Example
[root@host01 ~]# touch hosts.yaml
Edit the hosts.yaml file to include the following details:
Example
service_type: host
addr: host01
hostname: host01
labels:
- mon
- osd
- mgr
---
service_type: host
addr: host02
hostname: host02
labels:
- mon
- osd
- mgr
---
service_type: host
addr: host03
hostname: host03
labels:
- mon
- osd
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount hosts.yaml:/var/lib/ceph/hosts.yaml
Navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
Deploy the hosts using service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 hosts]# ceph orch apply -i hosts.yaml
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
Additional Resources
- See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide.
3.5. Listing hosts using the Ceph Orchestrator
You can list the hosts of a Ceph cluster with the Ceph Orchestrator.
The STATUS of the hosts is blank in the output of the ceph orch host ls command.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the storage cluster.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
List the hosts of the cluster:
Example
[ceph: root@host01 /]# ceph orch host ls
You will see that the STATUS of the hosts is blank, which is expected.
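If you need machine-readable output, for example for scripting, you can optionally request a structured format:
Example
[ceph: root@host01 /]# ceph orch host ls --format yaml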
3.6. Adding labels to hosts using the Ceph Orchestrator
You can use the Ceph Orchestrator to add labels to hosts in an existing Red Hat Ceph Storage cluster. A few examples of labels are mgr, mon, and osd based on the service deployed on the hosts.
You can also add the following host labels, which have special meaning to cephadm. They begin with _:
- _no_schedule: This label prevents cephadm from scheduling or deploying daemons on the host. If it is added to an existing host that already contains Ceph daemons, it causes cephadm to move those daemons elsewhere, except OSDs, which are not removed automatically. When a host is added with the _no_schedule label, no daemons are deployed on it. When the daemons are drained before the host is removed, the _no_schedule label is set on that host.
- _no_autotune_memory: This label prevents memory autotuning on the host. Daemon memory is not tuned even when the osd_memory_target_autotune option or other similar options are enabled for one or more daemons on that host.
- _admin: By default, the _admin label is applied to the bootstrapped host in the storage cluster, and the client.admin key is set to be distributed to that host with the ceph orch client-keyring {ls|set|rm} function. Adding this label to additional hosts normally causes cephadm to deploy the configuration and keyring files in the /etc/ceph directory, as shown in the example after this list.
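For example, to give an additional host the admin configuration and keyring files, apply the _admin label to it. This is a minimal sketch; host03 is an illustrative hostname:
Example
[ceph: root@host01 /]# ceph orch host label add host03 _admin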
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the storage cluster.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Add labels to the hosts:
Syntax
ceph orch host label add HOST_NAME LABEL_NAME
Example
[ceph: root@host01 /]# ceph orch host label add host02 mon
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
3.7. Removing hosts using the Ceph Orchestrator
You can remove hosts of a Ceph cluster with the Ceph Orchestrator. All the daemons are removed with the drain option, which adds the _no_schedule label to ensure that no daemons can be deployed on the host until the operation is complete.
If you are removing the bootstrap host, be sure to copy the admin keyring and the configuration file to another host in the storage cluster before you remove the host.
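A minimal sketch of copying these files, assuming the default file names and that host02 is the host that retains admin access; alternatively, applying the _admin label to another host causes cephadm to distribute the files for you:
Example
[root@host01 ~]# scp /etc/ceph/ceph.conf root@host02:/etc/ceph/
[root@host01 ~]# scp /etc/ceph/ceph.client.admin.keyring root@host02:/etc/ceph/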
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts are added to the storage cluster.
- All the services are deployed.
- Cephadm is deployed on the nodes where the services have to be removed.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Fetch the host details:
Example
[ceph: root@host01 /]# ceph orch host ls
Drain all the daemons from the host:
Syntax
ceph orch host drain HOSTNAME
Example
[ceph: root@host01 /]# ceph orch host drain host02
The _no_schedule label is automatically applied to the host, which blocks deployment.
Check the status of OSD removal:
Example
[ceph: root@host01 /]# ceph orch osd rm status
When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster.
Check if all the daemons are removed from the storage cluster:
Syntax
ceph orch ps HOSTNAME
Example
[ceph: root@host01 /]# ceph orch ps host02
Remove the host:
Syntax
ceph orch host rm HOSTNAME
Example
[ceph: root@host01 /]# ceph orch host rm host02
Additional Resources
- See the Adding hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
- See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
3.8. Placing hosts in maintenance mode using the Ceph Orchestrator
You can use the Ceph Orchestrator to place hosts in and out of maintenance mode. The ceph orch host maintenance enter command stops the systemd target, which causes all the Ceph daemons on the host to stop. Similarly, the ceph orch host maintenance exit command restarts the systemd target and the Ceph daemons restart on their own.
The orchestrator adopts the following workflow when the host is placed in maintenance:
- Confirms that the removal of hosts does not impact data availability by running the orch host ok-to-stop command.
- If the host has Ceph OSD daemons, it applies noout to the host subtree to prevent data migration from triggering during the planned maintenance slot.
- Stops the Ceph target, thereby stopping all the daemons.
- Disables the ceph target on the host, to prevent a reboot from automatically starting Ceph services.
Exiting maintenance reverses the above sequence.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts added to the cluster.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
You can either place the host in maintenance mode or take it out of maintenance mode:
Place the host in maintenance mode:
Syntax
ceph orch host maintenance enter HOST_NAME [--force]
Example
[ceph: root@host01 /]# ceph orch host maintenance enter host02 --force
The --force flag allows the user to bypass warnings, but not alerts.
Take the host out of maintenance mode:
Syntax
ceph orch host maintenance exit HOST_NAME
Example
[ceph: root@host01 /]# ceph orch host maintenance exit host02
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
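Hosts that are currently in maintenance mode are typically flagged in the STATUS column of the ceph orch host ls output, and the STATUS becomes blank again after the host exits maintenance. To check the effect on the daemons of a specific host, you can also list them directly; host02 is an illustrative hostname:
Example
[ceph: root@host01 /]# ceph orch ps host02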