Chapter 3. Red Hat Ceph Storage installation

As a storage administrator, you can use the cephadm utility to deploy new Red Hat Ceph Storage clusters.

The cephadm utility manages the entire life cycle of a Ceph cluster. Installation and management tasks comprise two types of operations:

  • Day One operations involve installing and bootstrapping a bare-minimum, containerized Ceph storage cluster, running on a single node. Day One also includes deploying the Monitor and Manager daemons and adding Ceph OSDs.
  • Day Two operations use the Ceph orchestration interface, cephadm orch, or the Red Hat Ceph Storage Dashboard to expand the storage cluster by adding other Ceph services to the storage cluster.

3.1. Prerequisites

  • At least one running virtual machine (VM) or bare-metal server with an active internet connection.
  • Red Hat Enterprise Linux 8.4 or later.
  • Ansible 2.9 or later.
  • A valid Red Hat subscription with the appropriate entitlements.
  • Root-level access to all nodes.
  • An active Red Hat Network (RHN) or service account to access the Red Hat Registry.

3.2. The cephadm utility

The cephadm utility deploys and manages a Ceph storage cluster. It is tightly integrated with both the command-line interface (CLI) and the Red Hat Ceph Storage Dashboard web interface, so that you can manage storage clusters from either environment. cephadm uses SSH to connect to hosts from the manager daemon to add, remove, or update Ceph daemon containers. It does not rely on external configuration or orchestration tools such as Ansible or Rook.

The cephadm utility consists of two main components:

  • The cephadm shell.
  • The cephadm orchestrator.

The cephadm shell

The cephadm shell launches a bash shell within a container. This enables you to perform “Day One” cluster setup tasks, such as installation and bootstrapping, and to invoke ceph commands.

There are two ways to invoke the cephadm shell:

  • Enter cephadm shell at the system prompt:

    Example

    [root@node00 ~]# cephadm shell
    [cephadm@cephadm ~]# ceph -s

  • At the system prompt, type cephadm shell and the command you want to execute:

    Example

    [root@node00 ~]# cephadm shell ceph -s

Note

If the node contains configuration and keyring files in /etc/ceph/, the container environment uses the values in those files as defaults for the cephadm shell. However, if you execute the cephadm shell on a Ceph Monitor node, the cephadm shell inherits its default configuration from the Ceph Monitor container, instead of using the default configuration.

The cephadm orchestrator

The cephadm orchestrator enables you to perform “Day Two” Ceph functions, such as expanding the storage cluster and provisioning Ceph daemons and services. You can use the cephadm orchestrator through either the command-line interface (CLI) or the web-based Red Hat Ceph Storage Dashboard. Orchestrator commands take the form ceph orch.

The cephadm script interacts with the Ceph orchestration module used by the Ceph Manager.
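
For example, after a cluster is bootstrapped, you can run orchestrator commands from within the cephadm shell to inspect the cluster. The commands below are a brief illustration only, and the host name node00 is an example:

Example

[root@node00 ~]# cephadm shell
[ceph: root@node00 /]# ceph orch status
[ceph: root@node00 /]# ceph orch host ls
[ceph: root@node00 /]# ceph orch ls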

3.3. How cephadm works

The cephadm command manages the full lifecycle of a Red Hat Ceph Storage cluster. The cephadm command can perform the following operations:

  • Bootstrap a new Red Hat Ceph Storage cluster.
  • Launch a containerized shell that works with the Red Hat Ceph Storage command-line interface (CLI).
  • Aid in debugging containerized daemons.

The cephadm command uses SSH to communicate with the nodes in the storage cluster. This allows you to add, remove, or update Red Hat Ceph Storage containers without using external tools. You can generate the SSH key pair during the bootstrapping process, or use your own SSH key.

The cephadm bootstrapping process creates a small storage cluster on a single node, consisting of one Ceph Monitor and one Ceph Manager, as well as any required dependencies. You then use the orchestrator CLI or the Red Hat Ceph Storage Dashboard to expand the storage cluster to include nodes, and to provision all of the Red Hat Ceph Storage daemons and services. You can perform management functions through the CLI or from the Red Hat Ceph Storage Dashboard web interface.

Note

The cephadm utility is a new feature in Red Hat Ceph Storage 5.0. It does not support older versions of Red Hat Ceph Storage.

Figure: Cephadm deployment

3.4. Registering the Red Hat Ceph Storage nodes to the CDN and attaching subscriptions

Note

Red Hat Ceph Storage supports Red Hat Enterprise Linux 8.4 and later.

Prerequisites

  • At least one running virtual machine (VM) or bare-metal server with an active internet connection.
  • Red Hat Enterprise Linux 8.4 or later.
  • A valid Red Hat subscription with the appropriate entitlements.
  • Root-level access to all nodes.
  • An active Red Hat Network (RHN) or service account to access the Red Hat Registry.
Note

The Red Hat Registry is located at https://registry.redhat.io/. Nodes require connectivity to the registry.

Procedure

  1. Register the node, and when prompted, enter your Red Hat Customer Portal credentials:

    Syntax

    subscription-manager register

  2. Pull the latest subscription data from the CDN:

    Syntax

    subscription-manager refresh

  3. List all available subscriptions for Red Hat Ceph Storage:

    Syntax

    subscription-manager list --available --matches 'Red Hat Ceph Storage'

  4. Identify the appropriate subscription and retrieve its Pool ID.
  5. Attach a pool ID to gain access to the software entitlements. Use the Pool ID you identified in the previous step.

    Syntax

    subscription-manager attach --pool=POOL_ID

  6. Disable the default software repositories, and then enable the Red Hat Enterprise Linux 8 BaseOS and AppStream repositories:

    Syntax

    subscription-manager repos --disable=*
    subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms
    subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms

  7. Update the system to receive the latest packages for Red Hat Enterprise Linux 8:

    Syntax

    dnf update

  8. Subscribe to Red Hat Ceph Storage 5.0 content. Follow the instructions in How to Register Ceph with Red Hat Satellite 6.
  9. Enable the ceph-tools repository:

    Syntax

    subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms

  10. Enable the ansible repository:

    Syntax

    subscription-manager repos --enable=ansible-2.9-for-rhel-8-x86_64-rpms

  11. Install cephadm-ansible:

    Syntax

    dnf install cephadm-ansible

3.5. Configuring Ansible inventory location

You can configure inventory location files for the cephadm-ansible staging and production environments.

Note

To use the cephadm-purge-cluster.yml or cephadm-client.yml playbooks, you need to have hosts configured in the [admin] and [clients] groups of the inventory hosts file. The [admin] group is defined in the inventory file with a node where the admin keyring is present at /etc/ceph/ceph.client.admin.keyring.

Prerequisites

  • An Ansible administration node.
  • Root-level access to the Ansible administration node.
  • The cephadm-ansible package is installed on the node.

Procedure

  1. Navigate to the /usr/share/cephadm-ansible/ directory:

    [root@admin ~]# cd /usr/share/cephadm-ansible
  2. Optional: Create subdirectories for staging and production:

    [root@admin cephadm-ansible]# mkdir -p inventory/staging inventory/production
  3. Optional: Edit the ansible.cfg file and add the following line to assign a default inventory location:

    [defaults]
    inventory = ./inventory/staging
  4. Optional: Create an inventory hosts file for each environment:

    [root@admin cephadm-ansible]# touch inventory/staging/hosts
    [root@admin cephadm-ansible]# touch inventory/production/hosts
  5. Open and edit each hosts file and add the admin and client nodes:

    [admin]
    ADMIN_NODE_NAME_1
    
    [clients]
    CLIENT_NAME_1
    CLIENT_NAME_2
    CLIENT_NAME_3

    Example

    [admin]
    node00
    
    [clients]
    client01
    client02
    client03

    Note

    By default, playbooks run in the staging environment. To run the playbook in the production environment:

    [root@admin cephadm-ansible]# ansible-playbook -i inventory/production playbook.yml

3.6. Enabling password-less SSH for Ansible

Generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password.

Prerequisites

  • Access to the Ansible administration node.

Procedure

  1. Generate the SSH key pair, accept the default file name and leave the passphrase empty:

    [ansible@admin ~]$ ssh-keygen
  2. Copy the public key to all nodes in the storage cluster:

    ssh-copy-id USER_NAME@HOST_NAME
    Replace
    • USER_NAME with the user name of the Ansible user.
    • HOST_NAME with the host name of the Ceph node.

      Example

      [ansible@admin ~]$ ssh-copy-id ceph-admin@ceph-mon01

  3. Create the user’s SSH config file:

    [ansible@admin ~]$ touch ~/.ssh/config
  4. Open the config file for editing, and set values for the Hostname and User options for each node in the storage cluster:

    Host node1
       Hostname HOST_NAME
       User USER_NAME
    Host node2
       Hostname HOST_NAME
       User USER_NAME
    ...
    Replace
    • HOST_NAME with the host name of the Ceph node.
    • USER_NAME with the user name of the Ansible user.

      Example

      Host node1
         Hostname monitor
         User admin
      Host node2
         Hostname osd
         User admin
      Host node3
         Hostname gateway
         User admin

      Important

      By configuring the ~/.ssh/config file you do not have to specify the -u USER_NAME option each time you execute the ansible-playbook command.

  5. Set the correct file permissions for the ~/.ssh/config file:

    [ansible@admin ~]$ chmod 600 ~/.ssh/config
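
Optionally, you can confirm that password-less SSH works before running any playbooks. The following check uses the Ansible ping module and assumes the default inventory location described in this chapter; it is a convenience check, not a required step:

Example

[ansible@admin ~]$ ansible all -i /usr/share/cephadm-ansible/hosts -m ping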

3.7. Running the preflight playbook

This Ansible playbook configures the Ceph repository and prepares the storage cluster for bootstrapping. It also installs some prerequisites, such as podman, lvm2, chronyd, and cephadm. The default location for cephadm-ansible and cephadm-preflight.yml is /usr/share/cephadm-ansible.

The preflight playbook uses the cephadm-ansible inventory file to identify all the admin and client nodes in the storage cluster.

The default location for the inventory file is /usr/share/cephadm-ansible/hosts. The following example shows the structure of a typical inventory file:

Example

[admin]
node00

[clients]
client01
client02
client03

The [admin] group in the inventory file contains the name of the node where the admin keyring is stored.

Note

Run the preflight playbook before you bootstrap the initial host.

Prerequisites

  • Ansible is installed on the host.
  • Root-level access to all nodes in the storage cluster.

Procedure

  1. Navigate to the /usr/share/cephadm-ansible directory.
  2. Run the preflight playbook on the initial host in the storage cluster:

    Syntax

     ansible-playbook -i INVENTORY-FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"

    Example

    [root@admin ~]# ansible-playbook -i /usr/share/cephadm-ansible/hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"

    After installation is complete, cephadm resides in the /usr/sbin/ directory.

    • Use the --limit option to run the preflight playbook on a selected set of hosts in the storage cluster:

      Syntax

      ansible-playbook -i INVENTORY-FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit OSD_GROUP|NODE_NAME

      Example

      [root@admin ~]# ansible-playbook -i /usr/share/cephadm-ansible/hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit my-osd-group
      [root@admin ~]# ansible-playbook -i /usr/share/cephadm-ansible/hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit my-nodes

    • When you run the preflight playbook, cephadm-ansible automatically installs chronyd and ceph-common on the client nodes.
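
Optionally, you can confirm that cephadm was installed on a host after the playbook completes. This is a quick sanity check rather than a documented step, and the version output depends on your release:

Example

[root@node00 ~]# which cephadm
/usr/sbin/cephadm
[root@node00 ~]# cephadm version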

3.8. Bootstrapping a new storage cluster

The cephadm utility performs the following tasks during the bootstrap process:

  • Installs and starts a Ceph Monitor daemon and a Ceph Manager daemon for a new Red Hat Ceph Storage cluster on the local node as containers.
  • Creates the /etc/ceph directory.
  • Writes a copy of the public key to /etc/ceph/ceph.pub for the Red Hat Ceph Storage cluster and adds the SSH key to the root user’s /root/.ssh/authorized_keys file.
  • Writes a minimal configuration file needed to communicate with the new cluster to /etc/ceph/ceph.conf.
  • Writes a copy of the client.admin administrative secret key to /etc/ceph/ceph.client.admin.keyring.
  • Deploys a basic monitoring stack with prometheus, grafana, and other tools such as node-exporter and alert-manager.
Important

If you are performing a disconnected installation, see Performing a disconnected installation.

Note

If you have existing prometheus services that you want to run with the new storage cluster, or if you are running Ceph with Rook, use the --skip-monitoring-stack option with the cephadm bootstrap command. This option bypasses the basic monitoring stack so that you can manually configure it later.

Important

Bootstrapping provides the default user name and password for initial login to the Dashboard. Bootstrap requires you to change the password after you log in.

Important

Before you begin the bootstrapping process, make sure that the container image that you want to use has the same version of Red Hat Ceph Storage as cephadm. If the two versions do not match, bootstrapping fails at the Creating initial admin user stage.

Prerequisites

  • An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster.
  • Login to registry.redhat.io on all the nodes of the storage cluster.
  • A minimum of 10 GB of free space for /var/lib/containers/.
  • Root-level access to all nodes.
Note

If the storage cluster includes multiple networks and interfaces, be sure to choose a network that is accessible by any node that uses the storage cluster.

Note

If the local node uses fully-qualified domain names (FQDN), then add the --allow-fqdn-hostname option to cephadm bootstrap on the command line.

Important

Run cephadm bootstrap on the node that you want to be the initial Monitor node in the cluster. The IP_ADDRESS option should be the IP address of the node you are using to run cephadm bootstrap.

Note

If you want to deploy a storage cluster using IPv6 addresses, then use the IPv6 address format for the --mon-ip IP-ADDRESS option. For example: cephadm bootstrap --mon-ip 2620:52:0:880:225:90ff:fefc:2536 --registry-json /etc/mylogin.json

Procedure

  1. Bootstrap a storage cluster:

    Syntax

    cephadm bootstrap --mon-ip IP_ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD

    Example

    [root@vm00 ~]# cephadm bootstrap --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1

    The script takes a few minutes to complete. Once the script completes, it provides the credentials to the Red Hat Ceph Storage Dashboard URL, a command to access the Ceph command-line interface (CLI), and a request to enable telemetry.

    Ceph Dashboard is now available at:
    
                 URL: https://rh8-3.storage.lab:8443/
                User: admin
            Password: i8nhu7zham
    
    You can access the Ceph CLI with:
    
            sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
    
    Please consider enabling telemetry to help improve Ceph:
    
            ceph telemetry on
    
    For more information see:
    
            https://docs.ceph.com/docs/master/mgr/telemetry/
    
    Bootstrap complete.

3.8.2. Using a JSON file to protect login information

As a storage administrator, you might choose to add login and password information to a JSON file, and then refer to the JSON file for bootstrapping. This protects the login credentials from exposure.

Note

You can also use a JSON file with the cephadm registry-login command.
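
For example, a registry login performed separately from bootstrap might look like the following. This is a sketch that reuses the mylogin.json file created in the procedure below; check the cephadm help output for the exact registry-login syntax in your release:

Example

[root@node00 ~]# cephadm registry-login --registry-json /etc/mylogin.json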

Prerequisites

  • An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster.
  • Login access to registry.redhat.io.
  • A minimum of 10 GB of free space for /var/lib/containers/.
  • Root-level access to all nodes.

Procedure

  1. Create the JSON file. In this example, the file is named mylogin.json.

    Syntax

    {
     "url":"REGISTRY-URL",
     "username":"USER-NAME",
     "password":"PASSWORD"
    }

    Example

    {
     "url":"registry.redhat.io",
     "username":"myuser1",
     "password":"mypassword1"
    }

  2. Bootstrap a storage cluster:

    Syntax

    cephadm bootstrap --mon-ip IP_ADDRESS --registry-json /etc/mylogin.json

    Example

    [root@vm00 ~]# cephadm bootstrap --mon-ip 10.10.128.68 --registry-json /etc/mylogin.json

3.8.3. Bootstrapping a storage cluster using a service configuration file

As a storage administrator, you can use a service configuration file and the --apply-spec option to bootstrap the storage cluster and configure additional hosts and daemons. The configuration file is a .yaml file that contains the service type, placement, and designated nodes for services that you want to deploy.

Note

If you want to use a non-default realm or zone for applications such as multisite, configure your RGW daemons after you bootstrap the storage cluster, instead of adding them to the configuration file and using the --apply-spec option. This gives you the opportunity to create the realm or zone you need for the Ceph Object Gateway daemons before deploying them. Refer to the Red Hat Ceph Storage Operations Guide for more information.

Prerequisites

  • At least one running virtual machine (VM) or server.
  • Red Hat Enterprise Linux 8.4 or later.
  • Root-level access to all nodes.
  • Passwordless ssh is set up on all hosts in the storage cluster.
  • cephadm is installed on the node that you want to be the initial Monitor node in the storage cluster.

Procedure

  1. Log in to the bootstrap host.
  2. Create the service configuration .yaml file for your storage cluster. The example file directs cephadm bootstrap to configure the initial host and two additional hosts, and it specifies that OSDs be created on all available disks.

    Example

    service_type: host
    addr: node-00
    hostname: node-00
    ---
    service_type: host
    addr: node-01
    hostname: node-01
    ---
    service_type: host
    addr: node-02
    hostname: node-02
    ---
    service_type: osd
    placement:
      host_pattern: "*"
    data_devices:
      all: true

  3. Bootstrap the storage cluster with the --apply-spec option:

    Syntax

    cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME --mon-ip MONITOR-IP-ADDRESS

    Example

    [root@vm00 ~]# cephadm bootstrap --apply-spec initial-config.yaml --mon-ip 10.10.128.68

    The script takes a few minutes to complete. Once the script completes, it provides the credentials to the Red Hat Ceph Storage Dashboard URL, a command to access the Ceph command-line interface (CLI), and a request to enable telemetry.

  4. Once your storage cluster is up and running, refer to the Red Hat Ceph Storage Operations Guide for more information about configuring additional daemons and services.

3.8.4. Bootstrapping the storage cluster as a non-root user

To bootstrap the Red Hat Ceph Storage cluster as a non-root user on the bootstrap node, use the --ssh-user option with the cephadm bootstrap command. --ssh-user specifies a user for SSH connections to cluster nodes.

Non-root users must have passwordless sudo access.

Prerequisites

  • An IP address for the first Ceph Monitor container, which is also the IP address for the initial Monitor node in the storage cluster.
  • Login to registry.redhat.io on all the nodes of the storage cluster.
  • A minimum of 10 GB of free space for /var/lib/containers/.
  • SSH public and private keys.
  • Passwordless sudo access to the bootstrap node.

Procedure

  1. Switch to the SSH user on the bootstrap node:

    Syntax

    su - SSH-USER-NAME

    Example

    [root@vm00 ~]# su - ceph
    Last login: Tue Sep 14 12:00:29 EST 2021 on pts/0

  2. Establish the SSH connection to the bootstrap node:

    Example

    [ceph@vm00 ~]$ ssh vm00
    Last login: Tue Sep 14 12:03:29 EST 2021 on pts/0

  3. Invoke the cephadm bootstrap command. Include the --ssh-private-key and --ssh-public-key options:

    Syntax

    cephadm bootstrap --ssh-user USER-NAME --mon-ip IP-ADDRESS --ssh-private-key PRIVATE-KEY --ssh-public-key PUBLIC-KEY --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD

    Example

    cephadm bootstrap --ssh-user ceph --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1

3.8.5. Bootstrap command options

The cephadm bootstrap command bootstraps a Ceph storage cluster on the local host. It deploys a MON daemon and a MGR daemon on the bootstrap node, automatically deploys the monitoring stack on the local host, and calls ceph orch host add HOSTNAME.

The following list describes the available options for cephadm bootstrap.

--config CONFIG-FILE, -c CONFIG-FILE

CONFIG-FILE is the ceph.conf file to use with the bootstrap command.

--mon-id MON-ID

Bootstraps on the host named MON-ID. Default value is the local host.

--mon-addrv MON-ADDRV

mon IPs (e.g., [v2:localipaddr:3300,v1:localipaddr:6789])

--mon-ip IP-ADDRESS

IP address of the node you are using to run cephadm bootstrap.

--mgr-id MGR_ID

Host ID where a MGR node should be installed. Default: randomly generated.

--fsid FSID

cluster FSID

--output-dir OUTPUT_DIR

Use this directory to write config, keyring, and pub key files.

--output-keyring OUTPUT_KEYRING

Use this location to write the keyring file with the new cluster admin and mon keys.

--output-config OUTPUT_CONFIG

Use this location to write the configuration file to connect to the new cluster.

--output-pub-ssh-key OUTPUT_PUB_SSH_KEY

Use this location to write the public SSH key for the cluster.

--skip-ssh

Skip the setup of the ssh key on the local host.

--initial-dashboard-user INITIAL_DASHBOARD_USER

Initial user for the dashboard.

--initial-dashboard-password INITIAL_DASHBOARD_PASSWORD

Initial password for the initial dashboard user.

--ssl-dashboard-port SSL_DASHBOARD_PORT

Port number used to connect with the dashboard using SSL.

--dashboard-key DASHBOARD_KEY

Dashboard key.

--dashboard-crt DASHBOARD_CRT

Dashboard certificate.

--ssh-config SSH_CONFIG

SSH config.

--ssh-private-key SSH_PRIVATE_KEY

SSH private key.

--ssh-public-key SSH_PUBLIC_KEY

SSH public key.

--ssh-user SSH_USER

Sets the user for SSH connections to cluster hosts. Passwordless sudo access is required for non-root users.

--skip-mon-network

Sets mon public_network based on the bootstrap mon ip.

--skip-dashboard

Do not enable the Ceph Dashboard.

--dashboard-password-noupdate

Disable forced dashboard password change.

--no-minimize-config

Do not assimilate and minimize the configuration file.

--skip-ping-check

Do not verify that the mon IP is pingable.

--skip-pull

Do not pull the latest image before bootstrapping.

--skip-firewalld

Do not configure firewalld.

--allow-overwrite

Allow the overwrite of existing –output-* config/keyring/ssh files.

--allow-fqdn-hostname

Allow fully qualified host name.

--skip-prepare-host

Do not prepare host.

--orphan-initial-daemons

Do not create initial mon, mgr, and crash service specs.

--skip-monitoring-stack

Do not automatically provision the monitoring stack (prometheus, grafana, alertmanager, node-exporter).

--apply-spec APPLY_SPEC

Apply cluster spec file after bootstrap (copy ssh key, add hosts and apply services).

--registry-url REGISTRY_URL

Specifies the URL of the custom registry to log into. For example: registry.redhat.io.

--registry-username REGISTRY_USERNAME

User name of the login account to the custom registry.

--registry-password REGISTRY_PASSWORD

Password of the login account to the custom registry.

--registry-json REGISTRY_JSON

JSON file containing registry login information.
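
These options can be combined in a single invocation. The following sketch uses only options listed above, with illustrative values such as admin1 and mypassword that you would replace for your environment:

Example

[root@vm00 ~]# cephadm bootstrap --mon-ip 10.10.128.68 --registry-json /etc/mylogin.json --initial-dashboard-user admin1 --initial-dashboard-password mypassword --dashboard-password-noupdate --allow-fqdn-hostname --skip-monitoring-stack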

Additional Resources

  • For more information about the --skip-monitoring-stack option, see Adding hosts.
  • For more information about logging into the registry with the registry-json option, see help for the registry-login command.
  • For more information about cephadm options, see help for cephadm.

3.8.6. Configuring a custom registry for disconnected installation

You can use a disconnected installation procedure to install cephadm and bootstrap your cluster on a private network. A disconnected installation uses a custom container registry for installation.

Prerequisites

  • At least one running virtual machine (VM) or server.
  • Red Hat Enterprise Linux 8.4 or later.
  • Root-level access to all nodes.
  • Passwordless ssh is set up on all hosts in the storage cluster.
  • A Red Hat Ceph Storage container image.
  • The container image resides in the custom registry.
  • Docker for Red Hat Enterprise Linux 7 or podman for Red Hat Enterprise Linux 8 is installed. For Red Hat Enterprise Linux 7, the docker service is running.

Procedure

Use this procedure when the Red Hat Ceph Storage nodes do NOT have access to the Internet during deployment. Perform these steps on a node that has both Internet access and access to the local cluster.

  1. Log in to the node that has access to both public network and the cluster nodes.
  2. Register the node, and when prompted, enter the appropriate Red Hat Customer Portal credentials:

    Syntax

    subscription-manager register

  3. Pull the latest subscription data:

    Syntax

    subscription-manager refresh

  4. List all available subscriptions for Red Hat Ceph Storage:

    subscription-manager list --available --all --matches="*Ceph*"

    Copy the Pool ID from the list of available subscriptions for Red Hat Ceph Storage.

  5. Attach the subscription to get access to the software entitlements:

    Syntax

    subscription-manager attach --pool=POOL_ID

    Replace
    • POOL_ID with the Pool ID identified in the previous step.
  6. Disable the default software repositories, and then enable the appropriate repositories for your version of Red Hat Enterprise Linux:

    Red Hat Enterprise Linux 7

    subscription-manager repos --disable=*
    subscription-manager repos --enable=rhel-7-server-rpms
    subscription-manager repos --enable=rhel-7-server-extras-rpms

    Red Hat Enterprise Linux 8

    subscription-manager repos --disable=*
    subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms
    subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms

  7. Install the container runtimes:

    Red Hat Enterprise Linux 7:

    yum install docker

    Red Hat Enterprise Linux 8:

    dnf install podman

  8. Update the system to receive the latest packages.

    Red Hat Enterprise Linux 7:

    yum update

    Red Hat Enterprise Linux 8:

    dnf update

  9. Start a local registry. Use docker for Red Hat Enterprise Linux 7 or podman for Red Hat Enterprise Linux 8:

    Red Hat Enterprise Linux 7

    docker run -d -p 5000:5000 --restart=always --name registry registry:2

    Red Hat Enterprise Linux 8

    podman run -d -p 5000:5000 --restart=always --name registry registry:2

  10. Verify that registry.redhat.io is in the container registry search path.

    1. Edit the /etc/containers/registries.conf file:

      Example

      [registries.search]
      registries = [ 'registry.access.redhat.com', 'registry.fedoraproject.org', 'registry.centos.org', 'docker.io']

    2. If registry.redhat.io is not included in the file, add it:

      Example

      [registries.search]
      registries = ['registry.redhat.io', 'registry.access.redhat.com', 'registry.fedoraproject.org', 'registry.centos.org', 'docker.io']

  11. Pull the Red Hat Ceph Storage 5.0 image, Prometheus image, and Dashboard image from the Red Hat Customer Portal:

    Red Hat Enterprise Linux 7

    # docker pull registry.redhat.io/rhceph/rhceph-5-rhel8:latest
    # docker pull registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6
    # docker pull registry.redhat.io/rhceph/rhceph-5-dashboard-rhel8:latest
    # docker pull registry.redhat.io/openshift4/ose-prometheus:v4.6
    # docker pull registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6

    Red Hat Enterprise Linux 8

    # podman pull registry.redhat.io/rhceph/rhceph-5-rhel8:latest
    # podman pull registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6
    # podman pull registry.redhat.io/rhceph/rhceph-5-dashboard-rhel8:latest
    # podman pull registry.redhat.io/openshift4/ose-prometheus:v4.6
    # podman pull registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6

  12. Tag the image. Replace LOCAL_NODE_FQDN with your local host Fully Qualified Domain Name (FQDN):

    Red Hat Enterprise Linux 7

    # docker tag registry.redhat.io/rhceph/rhceph-5-rhel8:latest LOCAL_NODE_FQDN:5000/rhceph/rhceph-5-rhel8:latest
    # docker tag registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6 LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-node-exporter:v4.6
    # docker tag registry.redhat.io/rhceph/rhceph-5-dashboard-rhel8:latest LOCAL_NODE_FQDN:5000/rhceph/rhceph-5-dashboard-rhel8:latest
    # docker tag registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6 LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-alertmanager:v4.6
    # docker tag registry.redhat.io/openshift4/ose-prometheus:v4.6 LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus:v4.6

    Red Hat Enterprise Linux 8

    # podman tag registry.redhat.io/rhceph/rhceph-5-rhel8:latest LOCAL_NODE_FQDN:5000/rhceph/rhceph-5-rhel8:latest
    # podman tag registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6 LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-node-exporter:v4.6
    # podman tag registry.redhat.io/rhceph/rhceph-5-dashboard-rhel8:latest LOCAL_NODE_FQDN:5000/rhceph/rhceph-5-dashboard-rhel8:latest
    # podman tag registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6 LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-alertmanager:v4.6
    # podman tag registry.redhat.io/openshift4/ose-prometheus:v4.6 LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus:v4.6

  13. Edit the /etc/containers/registries.conf file. Add the node’s FQDN with the port into the file in place of LOCAL_NODE_FQDN, and then save it:

    Syntax

    [registries.insecure]
    registries = ['LOCAL_NODE_FQDN:5000']

    Note

    You must perform this step on all storage cluster nodes that access the local container registry.

  14. Push the image to the local container registry you started. Use the local node’s FQDN in place of LOCAL_NODE_FQDN:

    Red Hat Enterprise Linux 7

    # docker push LOCAL_NODE_FQDN:5000/rhceph/rhceph-5-rhel8
    # docker push LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-node-exporter:v4.6
    # docker push LOCAL_NODE_FQDN:5000/rhceph/rhceph-5-dashboard-rhel8
    # docker push LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-alertmanager:v4.6
    # docker push LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus:v4.6

    Red Hat Enterprise Linux 8

    # podman push LOCAL_NODE_FQDN:5000/rhceph/rhceph-5-rhel8
    # podman push LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-node-exporter:v4.6
    # podman push LOCAL_NODE_FQDN:5000/rhceph/rhceph-5-dashboard-rhel8
    # podman push LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-alertmanager:v4.6
    # podman push LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus:v4.6

  15. Restart the docker service:

    Red Hat Enterprise Linux 7

    systemctl restart docker
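
Optionally, you can verify that the images were pushed to the local registry before proceeding. This check uses the registry's standard /v2/_catalog endpoint and is not part of the documented procedure; replace LOCAL_NODE_FQDN with your local host FQDN:

Example

# curl http://LOCAL_NODE_FQDN:5000/v2/_catalog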

3.8.7. Performing a disconnected installation

Before you can perform the installation, you must obtain a Red Hat Ceph Storage container image, either from a proxy host that has access to the Red Hat registry or by copying the image to your local registry.

Note

Red Hat Ceph Storage supports Red Hat Enterprise Linux 8.4 and later.

Important

Before you begin the bootstrapping process, make sure that the container image that you want to use has the same version of Red Hat Ceph Storage as cephadm. If the two versions do not match, bootstrapping fails at the Creating initial admin user stage.

Prerequisites

  • At least one running virtual machine (VM) or server.
  • Red Hat Enterprise Linux 8.4 or later.
  • Root-level access to all nodes.
  • Passwordless ssh is set up on all hosts in the storage cluster.
  • A Red Hat Ceph Storage container image.
  • The container image resides in the custom registry.

Procedure

  1. Log in to the bootstrap host.
  2. Bootstrap the storage cluster:

    Syntax

    cephadm --image CUSTOM-CONTAINER-REGISTRY-NAME:PORT/CUSTOM-IMAGE-NAME:IMAGE-TAG bootstrap --mon-ip IP-ADDRESS

    Example

    [root@vm00 ~]# cephadm --image my-private-registry.com:5000/myimage:mytag1 bootstrap --mon-ip 10.0.127.0

    The script takes a few minutes to complete. Once the script completes, it provides the credentials to the Red Hat Ceph Storage Dashboard URL, a command to access the Ceph command-line interface (CLI), and a request to enable telemetry.

    Ceph Dashboard is now available at:
    
                 URL: https://rh8-3.storage.lab:8443/
                User: admin
            Password: i8nhu7zham
    
    You can access the Ceph CLI with:
    
            sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
    
    Please consider enabling telemetry to help improve Ceph:
    
            ceph telemetry on
    
    For more information see:
    
            https://docs.ceph.com/docs/master/mgr/telemetry/
    
    Bootstrap complete.

After the bootstrap process is complete, see Changing configurations of custom container images for disconnected installations to configure the container images.

3.8.8. Changing configurations of custom container images for disconnected installations

After you perform the initial bootstrap for disconnected nodes, you must specify custom container images for monitoring stack daemons. You can override the default container images for monitoring stack daemons, since the nodes do not have access to the default container registry.

Note

Make sure that the bootstrap process on the initial host is complete before making any configuration changes.

Prerequisites

  • At least one running virtual machine (VM) or server.
  • Red Hat Enterprise Linux 8.4 or later.
  • Root-level access to all nodes.
  • Passwordless ssh is set up on all hosts in the storage cluster.

Procedure

  1. Set the custom container images with the ceph config command:

    Syntax

    ceph config set mgr mgr/cephadm/OPTION-NAME CUSTOM-REGISTRY-NAME/CONTAINER-NAME

    Use the following options for OPTION-NAME:

    container_image_prometheus
    container_image_grafana
    container_image_alertmanager
    container_image_node_exporter

    Example

    [root@vm00 ~]# ceph config set mgr mgr/cephadm/container_image_prometheus myregistry/mycontainer
    [root@vm00 ~]# ceph config set mgr mgr/cephadm/container_image_grafana myregistry/mycontainer
    [root@vm00 ~]# ceph config set mgr mgr/cephadm/container_image_alertmanager myregistry/mycontainer
    [root@vm00 ~]# ceph config set mgr mgr/cephadm/container_image_node_exporter myregistry/mycontainer

  2. Redeploy node-exporter:

    Syntax

    ceph orch redeploy node-exporter

Note

If any of the services do not deploy, you can redeploy them with the ceph orch redeploy command.

Note

By setting a custom image, the default values for the configuration image name and tag will be overridden, but not overwritten. The default values change when updates become available. By setting a custom image, you will not be able to configure the component for which you have set the custom image for automatic updates. You will need to manually update the configuration image name and tag to be able to install updates.

  • If you choose to revert to using the default configuration, you can reset the custom container image. Use ceph config rm to reset the configuration option:

    Syntax

    ceph config rm mgr mgr/cephadm/OPTION-NAME

    Example

    ceph config rm mgr mgr/cephadm/container_image_prometheus
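
To confirm the current value of any of these options, or to verify that a reset took effect, you can query the option. The ceph config get command shown here is standard Ceph tooling rather than a step from this procedure:

Example

[root@vm00 ~]# ceph config get mgr mgr/cephadm/container_image_prometheus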

3.9. Launching the cephadm shell

The cephadm shell command launches a bash shell in a container with all of the Ceph packages installed. This enables you to perform “Day One” cluster setup tasks, such as installation and bootstrapping, and to invoke ceph commands.

Prerequisites

  • A storage cluster that has been installed and bootstrapped.
  • Root-level access to all nodes in the storage cluster.

Procedure

There are two ways to launch the cephadm shell:

  • Enter cephadm shell at the system prompt. This example invokes the ceph -s command from within the shell.

    Example

    [root@vm00 ~]# cephadm shell
    [cephadm@cephadm ~]# ceph -s

  • At the system prompt, type cephadm shell and the command you want to execute:

    Example

    [root@vm00 ~]# cephadm shell ceph -s

Note

If the node contains configuration and keyring files in /etc/ceph/, the container environment uses the values in those files as defaults for the cephadm shell. If you execute the cephadm shell on a MON node, the cephadm shell inherits its default configuration from the MON container, instead of using the default configuration.

3.10. Verifying the cluster installation

Once the cluster installation is complete, you can verify that the Red Hat Ceph Storage 5 installation is running properly.

There are two ways of verifying the storage cluster installation as a root user:

  • Run the podman ps command.
  • Run the cephadm shell ceph -s command.

Prerequisites

  • Root-level access to all nodes in the storage cluster.

Procedure

  • Run the podman ps command:

    Example

    [root@host01 ~]# podman ps

    Note

    In Red Hat Ceph Storage 5, the format of the systemd units has changed. In the NAMES column, the unit files now include the FSID.

  • Run the cephadm shell ceph -s command:

    Example

    [root@host01 ~]# cephadm shell ceph -s

    Note

    The health of the storage cluster is in HEALTH_WARN status as the hosts and the daemons are not added.

3.11. Adding hosts

Bootstrapping the Red Hat Ceph Storage installation creates a working storage cluster, consisting of one Monitor daemon and one Manager daemon within the same container. As a storage administrator, you can add additional hosts to the storage cluster and configure them.

Note

Running the preflight playbook installs podman, lvm2, chronyd, and cephadm on all hosts listed in the Ansible inventory file.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all nodes in the storage cluster.
  • Login to registry.redhat.io on all the nodes of the storage cluster.

Procedure

  1. Add the new host to the Ansible inventory file. The default location for the file is /usr/share/cephadm-ansible/hosts.
  2. Switch to the root user and install the storage cluster’s public SSH key in the root user’s authorized_keys file on the new host:

    Syntax

    ssh-copy-id -f -i /etc/ceph/ceph.pub root@NEWHOST

    Example

    [root@node00 /]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node01
    [root@node00 /]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node02

  3. Run the preflight playbook with the --limit option:

    Syntax

    ansible-playbook -i INVENTORY-FILE cephadm-preflight.yml --limit NEWHOST

    Example

    [root@admin ~]# ansible-playbook -i /usr/share/cephadm-ansible/hosts cephadm-preflight.yml --limit host01

    The preflight playbook installs podman, lvm2, chronyd, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.

  4. Use the cephadm orchestrator to add the new host to the storage cluster:

    Syntax

    ceph orch host add NEWHOST IP-ADDRESS

    Example

    [ceph: root@host01 /]# ceph orch host add host02 10.0.127.0
    Added host 'host02'
    [ceph: root@host01 /]# ceph orch host add host03 10.0.127.1
    Added host 'host03'

  5. Use the ceph orch host ls command to view the status of the storage cluster, and to verify that the new host has been added.
Note

The STATUS of the hosts is blank in the output of the ceph orch host ls command.

Note

You can also add nodes by IP address. If you do not have DNS configured in your storage cluster environment, you can add the hosts by IP address, along with the host names.

Syntax

ceph orch host add HOSTNAME IP-ADDRESS LABELS

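
For example, to add a hypothetical host node04 at 10.0.127.2 and label it for Monitor and OSD placement in a single command (the host name, address, and labels are illustrative):

Example

[ceph: root@host01 /]# ceph orch host add node04 10.0.127.2 mon osd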

3.11.1. Using the addr option to identify hosts

The addr option offers an additional way to contact a host. Add the IP address of the host to the addr option. If ssh cannot connect to the host by its hostname, then it uses the value stored in addr to reach the host by its IP address.

Prerequisites

  • A storage cluster that has been installed and bootstrapped.
  • Root-level access to all nodes in the storage cluster.

Procedure

Run this procedure from inside the cephadm shell.

  1. Add the IP address:

    Syntax

    ceph orch host add HOSTNAME ADDR

    Example

    [cephadm@cephadm /]# ceph orch host add node00 192.168.1.128

Note

If adding a host by hostname results in that host being added with an IPv6 address instead of an IPv4 address, use ceph orch host to specify the IP address of that host:

ceph orch host set-addr HOSTNAME IP-ADDRESS

To convert the IP address from IPv6 format to IPv4 format for a host you have added, use the following command:

ceph orch host set-addr HOSTNAME IPV4-ADDRESS

3.11.2. Labeling Hosts

The Ceph orchestrator supports assigning labels to hosts. Labels are free-form and have no specific meanings. This means that you can use mon, monitor, mycluster_monitor, or any other text string. Each host can have multiple labels.

For example, apply the mon label to all hosts on which you want to deploy Monitor daemons, mgr for all hosts on which you want to deploy Manager daemons, rgw for RADOS gateways, and so on.

Labeling all the hosts in the storage cluster helps to simplify system management tasks by allowing you to quickly identify the daemons running on each host. In addition, you can use the Ceph orchestrator or a YAML file to deploy or remove daemons on hosts that have specific host labels.

Prerequisites

  • A storage cluster that has been installed and bootstrapped.

Procedure

  1. Launch the cephadm shell:

    [root@vm00 ~]# cephadm shell
    [cephadm@cephadm ~]#
  2. Add a label to a host:

    Syntax

    ceph orch host label add HOSTNAME LABEL

    Example

    [cephadm@cephadm ~]# ceph orch host label add node00 mon

3.11.2.1. Removing a label from a host

  1. Use the ceph orchestrator to remove a label from a host:

    Syntax

    ceph orch host label rm HOSTNAME LABEL

    Example

    [cephadm@cephadm ~]# ceph orch host label rm node00 mon

3.11.2.2. Using host labels to deploy daemons on specific hosts

There are two ways to use host labels to deploy daemons on specific hosts: by using the --placement option from the command line, and by using a YAML file.

  • Use the --placement option to deploy a daemon from the command line:

    Example

    [cephadm@cephadm ~]# ceph orch apply prometheus --placement="label:mylabel"

  • To assign the daemon to a specific host label in a YAML file, specify the service type and label in the YAML file:

    Example

    service_type: prometheus
    placement:
      label: "mylabel"

3.11.3. Adding multiple hosts

Use a YAML file to add multiple hosts to the storage cluster at the same time.

Note

Be sure to create the hosts.yaml file within a host container, or create the file on the local host and then use the cephadm shell to mount the file within the container. The cephadm shell automatically places mounted files in /mnt. If you create the file directly on the local host and then apply the hosts.yaml file instead of mounting it, you might see a File does not exist error.

Prerequisites

  • A storage cluster that has been installed and bootstrapped.
  • Root-level access to all nodes in the storage cluster.

Procedure

  1. Copy over the public ssh key to each of the hosts that you want to add.
  2. Use a text editor to create a hosts.yaml file.
  3. Add the host descriptions to the hosts.yaml file, as shown in the following example. Include the labels to identify placements for the daemons that you want to deploy on each host. Separate each host description with three dashes (---).

    Example

    service_type: host
    addr:
    hostname: host00
    labels:
    - mon
    - osd
    - mgr
    ---
    service_type: host
    addr:
    hostname: host01
    labels:
    - mon
    - osd
    - mgr
    ---
    service_type: host
    addr:
    hostname: host02
    labels:
    - mon
    - osd

  4. If you created the hosts.yaml file within the host container, invoke the ceph orch apply command:

    Example

    [root@vm00 ~]# ceph orch apply -i hosts.yaml
    Added host 'host00'
    Added host 'host01'
    Added host 'host02'

  5. If you created the hosts.yaml file directly on the local host, use the cephadm shell to mount the file:

    Example

    [root@vm00 ~]# cephadm shell --mount hosts.yaml -- ceph orch apply -i /mnt/hosts.yaml

  6. View the list of hosts and their labels:

    Example

    [root@vm00 ~]# ceph orch host ls
    HOST      ADDR      LABELS          STATUS
    host00    host00    mon osd mgr
    host01    host01    mon osd mgr
    host02    host02    mon osd

    Note

    If a host is online and operating normally, its status is blank. An offline host shows a status of OFFLINE, and a host in maintenance mode shows a status of MAINTENANCE.

3.11.4. Adding hosts in disconnected deployments

If you are running a storage cluster on a private network and your host domain name server (DNS) cannot be reached through private IP, you must include both the host name and the IP address for each host you want to add to the storage cluster.

Prerequisites

  • A running storage cluster.
  • Root-level access to all hosts in the storage cluster.

Procedure

  1. Invoke the cephadm shell.

    Syntax

    [root@vm00 ~]# cephadm shell

  2. Add the host:

    Syntax

    ceph orch host add HOST_NAME HOST_ADDRESS

    Example

    [ceph:root@node00 /]# ceph orch host add node03 172.20.20.9

3.11.5. Removing hosts

There are two ways to remove hosts from a storage cluster. The method that you use depends upon whether the host is running the node-exporter or crash services.

Important

If the host that you want to remove is running OSDs, remove them from the host before removing the host.
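
For example, OSDs can be removed with the orchestrator before you remove the host itself. The OSD IDs shown are hypothetical; see the Red Hat Ceph Storage Operations Guide for the full OSD removal workflow:

Example

[ceph: root@host01 /]# ceph orch osd rm 0 1 2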

Prerequisites

  • A storage cluster that has been installed and bootstrapped.
  • Root-level access to all nodes in the storage cluster.

Procedure

  1. If the host is not running node-exporter or the crash service, edit the placement specification file and remove all instances of the host name. By default, the placement specification file is named cluster.yml.

    Example

    Update:
    
    service_type: rgw
    placement:
      hosts:
      - host01
      - host02
    
    To:
    
    service_type: rgw
    placement:
      hosts:
      - host01

  2. Remove the host from the cephadm environment:

    Example

    [root@vm00 ~]# ceph orch host rm host02

    • If the host that you want to remove is running node-exporter or crash services, run the following command on the host to remove them:

      Syntax

      cephadm rm-daemon --fsid CLUSTER-ID --name SERVICE-NAME

      Example

      [root@host02 ~]# cephadm rm-daemon --fsid cluster00 --name node-exporter

3.12. Adding Monitor service

A typical Red Hat Ceph Storage storage cluster has three or five monitor daemons deployed on different hosts. If your storage cluster has five or more hosts, Red Hat recommends that you deploy five Monitor nodes.

Note

The bootstrap node is the initial monitor of the storage cluster. Be sure to include the bootstrap node in the list of hosts to which you want to deploy.

Note

If you want to apply Monitor service to more than one specific host, be sure to specify all of the host names within the same ceph orch apply command. If you specify ceph orch apply mon --placement host1 and then specify ceph orch apply mon --placement host2, the second command removes the Monitor service on host1 and applies a Monitor service to host2.

If your Monitor nodes or your entire cluster are located on a single subnet, then cephadm automatically adds up to five Monitor daemons as you add new hosts to the cluster. cephadm automatically configures the Monitor daemons on the new hosts. The new hosts reside on the same subnet as the first (bootstrap) host in the storage cluster. cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster.

Prerequisites

  • Root-level access to all hosts in the storage cluster.
  • A running storage cluster.

Procedure

  1. Apply the five Monitor daemons to five random hosts in the storage cluster:

    ceph orch apply mon 5
    • Disable automatic Monitor deployment:

      ceph orch apply mon --unmanaged
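
As noted above, to place Monitor daemons on specific hosts, list all of the hosts in a single command. The following sketch uses example host names; see the next section for a label-based alternative:

Example

[ceph: root@host01 /]# ceph orch apply mon --placement="host01 host02 host03"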

3.12.1. Adding Monitor nodes to specific hosts

Use host labels to identify the hosts that contain Monitor nodes.

Prerequisites

  • Root-level access to all nodes in the storage cluster.
  • A running storage cluster.

Procedure

  1. Assign the mon label to the host:

    Syntax

    ceph orch host label add HOSTNAME mon

    Example

    [ceph: root@host01 ~]# ceph orch host label add host01 mon

  2. View the current hosts and labels:

    Syntax

    ceph orch host ls

    Example

    [ceph: root@host01 ~]# ceph orch host label add host01 mon
    [ceph: root@host01 ~]# ceph orch host label add host02 mon
    [ceph: root@host01 ~]# ceph orch host ls
    HOST   ADDR   LABELS  STATUS
    host01         mon
    host02         mon
    host03
    host04
    host05

  3. Deploy monitors based on the host label:

    Syntax

    ceph orch apply mon label:mon

  4. Deploy monitors on a specific set of hosts:

    Example

    [root@mon ~]# ceph orch apply mon host01,host02,host03,...

3.13. Setting up the admin node

Use an admin node to administer the storage cluster.

An admin node contains both the cluster configuration file and the admin keyring. Both of these files are stored in the directory /etc/ceph and use the name of the storage cluster as a prefix.

For example, the default ceph cluster name is ceph. In a cluster using the default name, the admin keyring is named /etc/ceph/ceph.client.admin.keyring. The corresponding cluster configuration file is named /etc/ceph/ceph.conf.

To set up a host in the storage cluster as the admin node, apply the _admin label to the host you want to designate as the admin node.

Note

Make sure that you copy the ceph.conf file and admin keyring to the admin node after you apply the _admin label.
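
One way to copy those files is with scp from the bootstrap node. This is a sketch that assumes host01 is the bootstrap node and host02 is the host you label as the admin node in the procedure below:

Example

[root@host01 ~]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@host02:/etc/ceph/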

Prerequisites

  • A running storage cluster with cephadm installed.
  • The storage cluster has running Monitor and Manager nodes.
  • Root-level access to all nodes in the cluster.

Procedure

  1. Use ceph orch host ls to view the hosts in your storage cluster:

    Example

    [cephadm@cephadm /]# ceph orch host ls
    HOST   ADDR   LABELS  STATUS
    host01         mon
    host02         mon,mgr
    host03
    host04
    host05

  2. Use the _admin label to designate the admin host in your storage cluster. For best results, this host should have both Monitor and Manager daemons running.

    Syntax

    ceph orch host label add HOSTNAME _admin

    Example

    [cephadm@cephadm /]# ceph orch host label add host02 _admin

  3. Verify that the admin host has the _admin label.

    Example

    [cephadm@cephadm /]# ceph orch host ls
    HOST   ADDR   LABELS  STATUS
    host01         mon
    host02         mon,mgr,_admin
    host03
    host04
    host05

  4. Log in to the admin node to manage the storage cluster.

3.13.1. Deploying Ceph monitor nodes using host labels

A typical Red Hat Ceph Storage storage cluster has three or five Ceph Monitor daemons deployed on different hosts. If your storage cluster has five or more hosts, Red Hat recommends that you deploy five Ceph Monitor nodes.

If your Ceph Monitor nodes or your entire cluster are located on a single subnet, then cephadm automatically adds up to five Ceph Monitor daemons as you add new nodes to the cluster. cephadm automatically configures the Ceph Monitor daemons on the new nodes. The new nodes reside on the same subnet as the first (bootstrap) node in the storage cluster. cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster.

Note

Use host labels to identify the hosts that contain Ceph Monitor nodes.

Prerequisites

  • Root-level access to all nodes in the storage cluster.
  • A running storage cluster.

Procedure

  1. Assign the mon label to the host:

    Syntax

    ceph orch host label add HOSTNAME mon

    Example

    [ceph: root@host01 ~]# ceph orch host label add host01 mon
    [ceph: root@host01 ~]# ceph orch host label add host02 mon

  2. View the current hosts and labels:

    Syntax

    ceph orch host ls

    [ceph: root@host01 ~]# ceph orch host ls
    HOST   ADDR   LABELS  STATUS
    host01         mon
    host02         mon
    host03
    host04
    host05
    • Deploy Ceph Monitor daemons based on the host label:

      Syntax

      ceph orch apply mon label:mon

    • Deploy Ceph Monitor daemons on a specific set of hosts:

      Example

      [ceph: root@host01 ~]# ceph orch apply mon host01,host02,host03,...

      Note

      Be sure to include the bootstrap node in the list of hosts to which you want to deploy.

3.13.2. Adding Ceph Monitor nodes by IP address or network name

A typical Red Hat Ceph Storage storage cluster has three or five monitor daemons deployed on different hosts. If your storage cluster has five or more hosts, Red Hat recommends that you deploy five Monitor nodes.

If your Monitor nodes or your entire cluster are located on a single subnet, then cephadm automatically adds up to five Monitor daemons as you add new nodes to the cluster. You do not need to configure the Monitor daemons on the new nodes. The new nodes reside on the same subnet as the first node in the storage cluster. The first node in the storage cluster is the bootstrap node. cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster.

Prerequisites

  • Root-level access to all nodes in the storage cluster.
  • A running storage cluster.

Procedure

  1. To deploy each additional Ceph Monitor node:

    Syntax

    ceph orch apply mon NODE:IP-ADDRESS-OR-NETWORK-NAME [NODE:IP-ADDRESS-OR-NETWORK-NAME...]

    Example

    [ceph: root@node00 ~]# ceph orch apply mon node01:10.1.2.0 node02:mynetwork

3.14. Adding Manager service

cephadm automatically installs a Manager daemon on the bootstrap node during the bootstrapping process. Use the Ceph orchestrator to deploy additional Manager daemons.

The Ceph orchestrator deploys two Manager daemons by default. To deploy a different number of Manager daemons, specify a different number. If you do not specify the hosts where the Manager daemons should be deployed, the Ceph orchestrator randomly selects the hosts and deploys the Manager daemons to them.

Note

If you want to apply Manager daemons to more than one specific host, be sure to specify all of the host names within the same ceph orch apply command. If you specify ceph orch apply mgr --placement host1 and then specify ceph orch apply mgr --placement host2, the second command removes the Manager daemon on host1 and applies a Manager daemon to host2.

Red Hat recommends that you use the --placement option to deploy to specific hosts.

Prerequisites

  • A running storage cluster.

Procedure

  1. To specify that you want to apply a certain number of Manager daemons to randomly selected hosts:

    Syntax

    ceph orch apply mgr NUMBER-OF-DAEMONS

    Example

    [ceph: root@node01 ~]# ceph orch apply mgr 3

    • To add Manager daemons to specific hosts in your storage cluster:

      Syntax

      ceph orch apply mgr --placement "HOSTNAME1 HOSTNAME2 HOSTNAME3"

      Example

      [ceph: root@node01 /]# ceph orch apply mgr --placement "node01 node02 node03"

3.15. Adding OSDs

Cephadm will not provision an OSD on a device that is not available. A storage device is considered available if it meets all of the following conditions:

  • The device must have no partitions.
  • The device must not have any LVM state.
  • The device must not be mounted.
  • The device must not contain a file system.
  • The device must not contain a Ceph BlueStore OSD.
  • The device must be larger than 5 GB.

Prerequisites

  • A running Red Hat Ceph Storage cluster.

Procedure

  1. List the available devices to deploy OSDs:

    Syntax

    ceph orch device ls [--hostname=HOSTNAME_1 HOSTNAME_2] [--wide] [--refresh]

    Example

    [ceph: root@host01 /]# ceph orch device ls --wide --refresh

  2. You can either deploy the OSDs on specific hosts or on all the available devices:

    • To create an OSD from a specific device on a specific host:

      Syntax

      ceph orch daemon add osd HOSTNAME:DEVICE-PATH

      Example

      [ceph: root@host01 /]# ceph orch daemon add osd host01:/dev/sdb

    • To deploy OSDs on any available and unused devices, use the --all-available-devices option.

      Example

      [ceph: root@host01 /]# ceph orch apply osd --all-available-devices
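
Before applying the specification, you can optionally preview what the orchestrator would do. The --dry-run option shown here reports the planned OSD placement without creating the OSDs; it is a convenience check, not a required step:

Example

[ceph: root@host01 /]# ceph orch apply osd --all-available-devices --dry-run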

3.16. Running the cephadm-clients playbook

The cephadm-clients.yml playbook handles the distribution of configuration and admin keyring files to a group of Ceph clients.

Note

If you do not specify a configuration file when you run the playbook, the playbook will generate and distribute a minimal configuration file. By default, the generated file is located at /etc/ceph/ceph.conf.

Prerequisites

  • An admin keyring exists on the admin node.
  • Root-level access to all client nodes.
  • ceph-ansible and cephadm-ansible are installed.
  • The preflight playbook has been run on the initial host in the storage cluster. For more information, see Running the preflight playbook.
  • The client_group variable must be specified in the Ansible inventory file.

Procedure

  1. Navigate to the /usr/share/cephadm-ansible directory.
  2. Run the cephadm-clients.yml playbook on the initial host in the group of clients. Use the full path name to the admin keyring on the admin host for PATH-TO-KEYRING. Optional: If you want to specify an existing configuration file to use, specify the full path to the configuration file for CONFIG-FILE. Use the Ansible group name for the group of clients for ANSIBLE_GROUP_NAME. Use the FSID of the cluster where the admin keyring and configuration files are stored for FSID. The default path for the FSID is /var/lib/ceph/.

    Syntax

    ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{"fsid":"FSID", "client_group":"ANSIBLE_GROUP_NAME", "keyring":"PATH-TO-KEYRING", "conf":"CONFIG-FILE"}'

    Example

    [root@admin ~]# ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{"fsid":"be3ca2b2-27db-11ec-892b-005056833d58","client_group":"fs_clients","keyring":"/etc/ceph/fs.keyring", "conf": "/etc/ceph/ceph.conf"}'

After installation is complete, the specified clients in the group have the admin keyring. If you did not specify a configuration file, cephadm-ansible creates a minimal default configuration file on each client.

3.17. Purging the Ceph storage cluster

Purging the Ceph storage cluster clears any data or connections that remain from previous deployments on your server. This Ansible script removes all daemons, logs, and data that belong to the fsid passed to the script from all hosts in the storage cluster.

Important

This process works only if the cephadm binary is installed on all hosts in the storage cluster.

The Ansible inventory file lists all the hosts in your cluster and what roles each host plays in your Ceph storage cluster. The default location for an inventory file is /usr/share/cephadm-ansible/hosts, but this file can be placed anywhere.

The following example shows the structure of an inventory file:

Example

[root@node00 ~]# cat hosts

[admin]
node1

[mons]
node1
node2
node3

[mgrs]
node1
node2
node3

[osds]
node1
node2
node3

Prerequisites

  • A running bootstrap node.
  • Ansible 2.9 or later is installed on the bootstrap node.
  • Root-level access to all nodes in the cluster.
  • The [admin] group is defined in the inventory file with a node where the admin keyring is present at /etc/ceph/ceph.client.admin.keyring.

Procedure

  1. Use the cephadm orchestrator to halt cephadm on the bootstrap node:

    Syntax

    ceph orch pause

  2. As an Ansible user, run the purge script:

    Syntax

    ansible-playbook -i hosts cephadm-purge-cluster.yml -e fsid=FSID -vvv

    Example

    [root@node00 cephadm-ansible]# ansible-playbook -i hosts cephadm-purge-cluster.yml -e fsid=a6ca415a-cde7-11eb-a41a-002590fc2544 -vvv

When the script has completed, the entire storage cluster will have been removed from all hosts in the cluster.