Chapter 4. Management of hosts using the Ceph Orchestrator

As a storage administrator, you can use the Ceph Orchestrator with Cephadm in the backend to add, list, and remove hosts in an existing Red Hat Ceph Storage cluster.

You can also add labels to hosts. Labels are free-form and have no specific meanings. Each host can have multiple labels. For example, apply the mon label to all hosts that have monitor daemons deployed, mgr for all hosts with manager daemons deployed, rgw for Ceph object gateways, and so on.

Labeling all the hosts in the storage cluster helps to simplify system management tasks by allowing you to quickly identify the daemons running on each host. In addition, you can use the Ceph Orchestrator or a YAML file to deploy or remove daemons on hosts that have specific host labels.
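
For example, assuming the mon label has already been applied to the intended hosts, a placement of the following form deploys Ceph Monitor daemons only on the labeled hosts. This is a minimal sketch; the service type and label name are illustrative:

Example

[ceph: root@host01 /]# ceph orch apply mon --placement="label:mon"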

This section covers the following administrative tasks:

  • Adding hosts using the Ceph Orchestrator.
  • Adding multiple hosts using the Ceph Orchestrator.
  • Listing hosts using the Ceph Orchestrator.
  • Adding labels to hosts using the Ceph Orchestrator.
  • Removing hosts using the Ceph Orchestrator.
  • Placing hosts in maintenance mode using the Ceph Orchestrator.

4.1. Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • The IP addresses of the new hosts must be updated in the /etc/hosts file, for example as shown below.
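
    For example, a minimal /etc/hosts entry for a new host might look like the following; the IP address and host name are illustrative:

    Example

    10.69.65.25    host02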

4.2. Adding hosts using the Ceph Orchestrator

You can use the Ceph Orchestrator with Cephadm in the backend to add hosts to an existing Red Hat Ceph Storage cluster.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Extract the cluster’s public SSH key to a file:

    Syntax

    ceph cephadm get-pub-key > ~/PATH

    Example

    [ceph: root@host01 /]# ceph cephadm get-pub-key > ~/ceph.pub

  3. Copy the Ceph cluster’s public SSH key to the root user’s authorized_keys file on the new host:

    Syntax

    ssh-copy-id -f -i ~/PATH root@HOST_NAME_2

    Example

    [root@host01 ~]# ssh-copy-id -f -i ~/ceph.pub root@host02
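
    Note

    Optionally, you can verify that passwordless SSH to the new host works before adding it. The host name here is illustrative:

    [root@host01 ~]# ssh root@host02 hostname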

  4. Add hosts to the cluster:

    Syntax

    ceph orch host add HOST_NAME IP_ADDRESS_OF_HOST

    Example

    [ceph: root@host01 /]# ceph orch host add host02 10.69.65.25

Verification

  • List the hosts:

    Example

    [ceph: root@host01 /]# ceph orch host ls

4.3. Adding multiple hosts using the Ceph Orchestrator

You can use the Ceph Orchestrator with Cephadm in the backend to add multiple hosts to a Red Hat Ceph Storage cluster at the same time by using a service specification in YAML format.

Prerequisites

  • A running Red Hat Ceph Storage cluster.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Navigate to the following directory:

    Syntax

    cd /var/lib/ceph/DAEMON_PATH/

    Example

    [ceph: root@host01 /]# cd /var/lib/ceph/hosts/

  3. Create the hosts.yml file:

    Example

    [ceph: root@host01 hosts]# touch hosts.yml

  4. Edit the hosts.yml file to include the following details:

    Example

    service_type: host
    addr: host01
    hostname: host01
    labels:
    - mon
    - osd
    - mgr
    ---
    service_type: host
    addr: host02
    hostname: host02
    labels:
    - mon
    - osd
    - mgr
    ---
    service_type: host
    addr: host03
    hostname: host03
    labels:
    - mon
    - osd

  5. Deploy the hosts using the service specification:

    Syntax

    ceph orch apply -i FILE_NAME.yml

    Example

    [ceph: root@host01 hosts]# ceph orch apply -i hosts.yml

Verification

  • List the hosts:

    Example

    [ceph: root@host01 /]# ceph orch host ls

4.4. Listing hosts using the Ceph Orchestrator

You can list the hosts of a Ceph cluster with the Ceph Orchestrator.

Note

The STATUS of the hosts is blank in the output of the ceph orch host ls command.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the storage cluster.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. List the hosts of the cluster:

    Example

    [ceph: root@host01 /]# ceph orch host ls

    You will see that the STATUS of the hosts is blank, which is expected.
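
    The output resembles the following; the host names and addresses are illustrative:

    HOST    ADDR    LABELS  STATUS
    host01  host01
    host02  host02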

4.5. Adding labels to hosts using the Ceph Orchestrator

You can use the Ceph Orchestrator with Cephadm in the backend to add labels to hosts in an existing Red Hat Ceph Storage cluster. A few examples of labels are mgr, mon, and osd, based on the services deployed on the hosts.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the storage cluster.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Add labels to the hosts:

    Syntax

    ceph orch host label add HOST_NAME LABEL_NAME

    Example

    [ceph: root@host01 /]# ceph orch host label add host02 mon

  3. Optional: You can add multiple labels to hosts:

    Syntax

    ceph orch host label add HOSTNAME_1 LABEL_1,LABEL_2

    Example

    [ceph: root@host01 /]# ceph orch host label add host01 mon,mgr
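
    Note

    If a label is no longer needed, you can remove it with the ceph orch host label rm command. The host and label names here are illustrative:

    [ceph: root@host01 /]# ceph orch host label rm host02 mon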

Verification

  • List the hosts:

    Example

    [ceph: root@host01 /]# ceph orch host ls

4.6. Removing hosts using the Ceph Orchestrator

You can remove hosts of a Ceph cluster with the Ceph Orchestrator. When removing hosts, you also need to remove the node-exporter and crash services to avoid leaving containers behind on the host.

Note

Manually remove all the services, including managers, monitors, OSDs, and others, before removing the hosts from the storage cluster.

Important

If you are removing the bootstrap host, be sure to copy the admin keyring and the configuration file to another host in the storage cluster before you remove the host.
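
For example, assuming the default file names and paths and host02 as the target host, you can copy the admin keyring and the configuration file as follows:

Example

[root@host01 ~]# scp /etc/ceph/ceph.client.admin.keyring root@host02:/etc/ceph/
[root@host01 ~]# scp /etc/ceph/ceph.conf root@host02:/etc/ceph/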

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • Hosts are added to the storage cluster.
  • All the services are deployed.
  • Cephadm is deployed on the nodes where the services have to be removed.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. For all Ceph services except node-exporter and crash, remove the host from the placement specification file:

    Example

    service_type: rgw
    placement:
      hosts:
      - host01
      - host02

    In this example, host03 is removed from the placement specification of the Ceph Object Gateway service. Follow the above step for all the services deployed on the host.
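
    After updating the placement specification, reapply it for the change to take effect. The file name rgw.yml here is illustrative:

    Example

    [ceph: root@host01 /]# ceph orch apply -i rgw.yml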

    1. To prepare for removing OSDs, set the OSD service to unmanaged by reapplying it with --unmanaged=true:

      Example

      [ceph: root@host01 /]# ceph orch apply osd --all-available-devices --unmanaged=true

      Note

      This prevents auto-deployment of OSDs on the devices.
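
      If you want cephadm to resume the automatic deployment of OSDs later, you can reapply the service with --unmanaged=false. This is a sketch of the reverse operation:

      Example

      [ceph: root@host01 /]# ceph orch apply osd --all-available-devices --unmanaged=false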

    2. Remove the OSDs:

      Syntax

      for osd_id in $(ceph orch ps HOST_NAME --daemon_type osd | grep osd | awk '{print $1}' | cut -c 5-) ; do ceph orch osd rm $osd_id; done

      Example

      [ceph: root@host01 /]# for osd_id in $(ceph orch ps host03 --daemon_type osd | grep osd | awk '{print $1}' | cut -c 5-) ; do ceph orch osd rm $osd_id; done
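
      You can monitor the progress of the OSD removal before proceeding to the next step:

      Example

      [ceph: root@host01 /]# ceph orch osd rm status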

    3. Remove the hosts:

      Syntax

      ceph orch host rm HOST_NAME

      Example

      [ceph: root@host01 /]# ceph orch host rm host03

  3. On the node from which you want to remove the node-exporter and crash services, run the following commands:

    1. As a root user, outside the Cephadm shell, fetch the fsid of the cluster and the name of the service:

      Example

      [root@host03 ~]# cephadm ls

    2. Copy the fsid and the name of the node-exporter service.
    3. Remove the service:

      Syntax

      cephadm rm-daemon --fsid CLUSTER_ID --name SERVICE_NAME

      Example

      [root@host03 ~]# cephadm rm-daemon --fsid a34c81a0-889b-11eb-af98-001a4a00063d --name node-exporter.host03

Verification

  • Run the cephadm ls command to verify the removal of the service:

    Example

    [root@host03 ~]# cephadm ls

  • List the hosts, daemons, and processes:

    Example

    [ceph: root@host01 /]# ceph orch ps

4.7. Placing hosts in maintenance mode using the Ceph Orchestrator

You can use the Ceph Orchestrator to place hosts in and out of maintenance mode. Placing a host in maintenance mode stops all the Ceph daemons on the host.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • Hosts added to the cluster.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Place the host in or out of maintenance mode as needed:

    • Place the host in maintenance mode:

      Syntax

      ceph orch host maintenance enter HOST_NAME [--force]

      Example

      [ceph: root@host01 /]# ceph orch host maintenance enter host02 --force

      The --force flag allows the user to bypass warnings, but not alerts.

    • Take the host out of maintenance mode:

      Syntax

      ceph orch host maintenance exit HOST_NAME

      Example

      [ceph: root@host01 /]# ceph orch host maintenance exit host02

Verification

  • List the hosts:

    Example

    [ceph: root@host01 /]# ceph orch host ls