Chapter 11. Scaling the Ceph Storage cluster

11.1. Scaling up the Ceph Storage cluster

You can scale up the number of Ceph Storage nodes in your overcloud by re-running the deployment with the number of Ceph Storage nodes you need.

Before doing so, ensure that you have enough nodes for the updated deployment. These nodes must be registered with the director and tagged accordingly.

Registering new Ceph Storage nodes

To register new Ceph Storage nodes with director, complete the following steps.

Procedure

  1. Log in to the undercloud as the stack user and initialize your director configuration:

    $ source ~/stackrc
  2. Define the hardware and power management details for the new nodes in a new node definition template, for example, instackenv-scale.json. A minimal example of this file is shown after this procedure.
  3. Import this file into director:

    $ openstack overcloud node import ~/instackenv-scale.json

    Importing the node definition template registers each node that is defined in the template with director.

  4. Assign the kernel and ramdisk images to all nodes:

    $ openstack overcloud node configure
Note

For more information about registering new nodes, see Section 2.2, “Registering nodes”.
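
The exact schema of the node definition template depends on your hardware and director version. The following is a minimal sketch of an instackenv-scale.json file for one IPMI-managed node; all values are placeholders that you must replace with details from your environment:

    {
        "nodes": [
            {
                "name": "ceph-storage-3",
                "mac": ["bb:bb:bb:bb:bb:bb"],
                "cpu": "4",
                "memory": "6144",
                "disk": "40",
                "arch": "x86_64",
                "pm_type": "ipmi",
                "pm_user": "admin",
                "pm_password": "p@55w0rd!",
                "pm_addr": "192.168.24.205"
            }
        ]
    }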

Manually tagging new nodes

After you register each node, you must inspect the hardware and tag the node into a specific profile. Use profile tags to match your nodes to flavors, and then assign flavors to deployment roles.

Procedure

  1. Trigger hardware introspection to retrieve the hardware attributes of each node:

    $ openstack overcloud node introspect --all-manageable --provide
    • The --all-manageable option introspects only the nodes that are in a manageable state. In this example, all nodes are in a manageable state.
    • The --provide option resets all nodes to an available state after introspection.

      Important

      Ensure that this process completes successfully. This process usually takes 15 minutes for bare metal nodes.

  2. Retrieve a list of your nodes to identify their UUIDs:

    $ openstack baremetal node list
  3. Add a profile option to the properties/capabilities parameter of each node to manually tag the node to a specific profile. The profile option tags each node into its respective profile. For example, the following commands tag three additional nodes with the ceph-storage profile. You can verify the assignments with the check that follows this procedure.

    $ openstack baremetal node set --property capabilities='profile:ceph-storage,boot_option:local' 551d81f5-4df2-4e0f-93da-6c5de0b868f7
    $ openstack baremetal node set --property capabilities='profile:ceph-storage,boot_option:local' 5e735154-bd6b-42dd-9cc2-b6195c4196d7
    $ openstack baremetal node set --property capabilities='profile:ceph-storage,boot_option:local' 1a2b090c-299d-4c20-a25d-57dd21a7085b

    Note

    As an alternative to manual tagging, you can use the Automated Health Check (AHC) Tools to automatically tag larger numbers of nodes based on benchmarking data.

Tip

If the nodes you tagged and registered use multiple disks, you can set director to use a specific root disk on each node. For more information, see Section 2.5, “Defining the root disk for multi-disk clusters”.
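
After you tag the new nodes, you can optionally confirm the profile assignments before you redeploy. This check assumes the default profile-matching workflow:

    $ openstack overcloud profiles list

The new nodes should appear with ceph-storage in the Current Profile column.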

Redeploying the overcloud with additional Ceph Storage nodes

After you register and tag the new nodes, you can scale up the number of Ceph Storage nodes by redeploying the overcloud.

Procedure

  1. Before you redeploy the overcloud, set the CephStorageCount parameter in the parameter_defaults section of your environment file, in this case, ~/templates/storage-config.yaml. In Section 7.1, “Assigning nodes and flavors to roles”, the overcloud is configured to deploy with three Ceph Storage nodes. The following example scales the overcloud to six Ceph Storage nodes:

    parameter_defaults:
      ControllerCount: 3
      OvercloudControlFlavor: control
      ComputeCount: 3
      OvercloudComputeFlavor: compute
      CephStorageCount: 6
      OvercloudCephStorageFlavor: ceph-storage
      CephMonCount: 3
      OvercloudCephMonFlavor: ceph-mon
  2. Redeploy the overcloud, as shown in the example that follows this procedure. The overcloud then contains six Ceph Storage nodes instead of three.
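
The exact redeployment command depends on your environment. The following sketch assumes that ~/templates/storage-config.yaml is one of your environment files; include every environment file and option that you used for the initial overcloud deployment:

    $ openstack overcloud deploy --templates \
        -e <existing environment files> \
        -e ~/templates/storage-config.yaml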

11.2. Scaling down and replacing Ceph Storage nodes

In some cases, you might need to scale down your Ceph cluster, or even replace a Ceph Storage node, for example, if a Ceph Storage node is faulty. In either situation, you must disable and rebalance any Ceph Storage node that you want to remove from the overcloud to avoid data loss.

Note

This procedure uses steps from the Red Hat Ceph Storage Administration Guide to manually remove Ceph Storage nodes. For more in-depth information about manual removal of Ceph Storage nodes, see Starting, stopping, and restarting Ceph daemons that run in containers and Removing a Ceph OSD using the command-line interface.

Procedure

  1. Log in to a Controller node as the heat-admin user. The director stack user has an SSH key to access the heat-admin user.
  2. List the OSD tree and find the OSDs for your node. For example, the node you want to remove might contain the following OSDs:

    -2 0.09998     host overcloud-cephstorage-0
    0 0.04999         osd.0                         up  1.00000          1.00000
    1 0.04999         osd.1                         up  1.00000          1.00000
  3. Disable the OSDs on the Ceph Storage node. In this case, the OSD IDs are 0 and 1.

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd out 0
    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd out 1
  4. The Ceph Storage cluster begins rebalancing. Wait for this process to complete. Monitor the status by using the following command:

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph -w
  5. After the Ceph cluster completes rebalancing, log in to the Ceph Storage node that you are removing, in this case, overcloud-cephstorage-0, as the heat-admin user, and disable the OSD services so that they do not start again on reboot.

    [heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl disable ceph-osd@0
    [heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl disable ceph-osd@1
  6. Stop the OSDs.

    [heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl stop ceph-osd@0
    [heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl stop ceph-osd@1
  7. While logged in to the Controller node, remove the OSDs from the CRUSH map so that they no longer receive data.

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd crush remove osd.0
    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd crush remove osd.1
  8. Remove the OSD authentication key.

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph auth del osd.0
    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph auth del osd.1
  9. Remove the OSD from the cluster.

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd rm 0
    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd rm 1
  10. Remove the Storage node from the CRUSH map:

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd crush rm <NODE>

    You can confirm the <NODE> name as defined in the CRUSH map by searching the CRUSH tree:

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd crush tree | grep overcloud-cephstorage-0 -A 4
                    "name": "overcloud-cephstorage-0",
                    "type": "host",
                    "type_id": 1,
                    "items": []
                },
    [heat-admin@overcloud-controller-0 ~]$

    In the CRUSH tree, ensure that the items list is empty. If the list is not empty, revisit step 7.

  11. Leave the node and return to the undercloud as the stack user.

    [heat-admin@overcloud-controller-0 ~]$ exit
    [stack@director ~]$
  12. Disable the Ceph Storage node so that director does not reprovision it.

    [stack@director ~]$ openstack baremetal node list
    [stack@director ~]$ openstack baremetal node maintenance set <UUID>
  13. Removing a Ceph Storage node requires an update to the overcloud stack in director with the local template files. First identify the UUID of the overcloud stack:

    $ openstack stack list
  14. Identify the UUIDs of the Ceph Storage node you want to delete:

    $ openstack server list
  15. Delete the node from the stack:

    (undercloud)$ openstack overcloud node delete --stack <overcloud> <node>
    • Replace <overcloud> with the name or UUID of the overcloud stack.
    • Replace <node> with the host name or UUID of the node that you want to delete.
  16. Wait until the stack completes its update. Use the openstack stack list --nested command to monitor the stack update.
  17. Add new nodes to the director node pool and deploy them as Ceph Storage nodes. Use the CephStorageCount parameter in parameter_defaults of your environment file, in this case, ~/templates/storage-config.yaml, to define the total number of Ceph Storage nodes in the overcloud.

    parameter_defaults:
      ControllerCount: 3
      OvercloudControlFlavor: control
      ComputeCount: 3
      OvercloudComputeFlavor: compute
      CephStorageCount: 3
      OvercloudCephStorageFlavor: ceph-storage
      CephMonCount: 3
      OvercloudCephMonFlavor: ceph-mon
    Note

    For more information about how to define the number of nodes per role, see Section 7.1, “Assigning nodes and flavors to roles”.

  18. After you update your environment file, redeploy the overcloud:

    $ openstack overcloud deploy --templates -e <ENVIRONMENT_FILE>

    Director provisions the new node and updates the entire stack with the details of the new node.

  19. Log in to a Controller node as the heat-admin user and check the status of the Ceph Storage node:

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph status
  20. Confirm that the value in the osdmap section matches the number of nodes that you want in your cluster. The Ceph Storage node that you removed is replaced with a new node; example output is shown after this procedure.
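
For reference, the OSD summary in the ceph status output looks similar to the following illustrative line. The exact layout depends on your Ceph release; the OSD count appears in the osdmap line in older releases and under services in newer releases:

    osd: 6 osds: 6 up, 6 in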

11.3. Adding an OSD to a Ceph Storage node

This procedure demonstrates how to add an OSD to a node. For more information about Ceph OSDs, see Ceph OSDs in the Red Hat Ceph Storage Operations Guide.

Procedure

  1. The following example heat template deploys Ceph Storage with three OSD devices:

    parameter_defaults:
      CephAnsibleDisksConfig:
        devices:
          - /dev/sdb
          - /dev/sdc
          - /dev/sdd
        osd_scenario: lvm
        osd_objectstore: bluestore
  2. To add an OSD, update the node disk layout as described in Section 5.3, “Mapping the Ceph Storage node disk layout”. In this example, add /dev/sde to the template:

    parameter_defaults:
      CephAnsibleDisksConfig:
        devices:
          - /dev/sdb
          - /dev/sdc
          - /dev/sdd
          - /dev/sde
        osd_scenario: lvm
        osd_objectstore: bluestore
  3. Run openstack overcloud deploy to update the overcloud. An optional verification check follows this procedure.
Note

In this example, all hosts with OSDs have a new device called /dev/sde. If you do not want all nodes to have the new device, update the heat template. For information about how to define hosts with a differing devices list, see Section 5.5, “Overriding parameters for dissimilar Ceph Storage nodes” and Section 5.5.1.2, “Altering the disk layout in Ceph Storage nodes”.
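
After the deployment completes, you can optionally confirm that the new OSDs are up. This check assumes the same containerized Ceph Monitor naming convention that is used elsewhere in this chapter:

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd tree

Each Ceph Storage host in the tree should list one additional OSD that is backed by the new device.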

11.4. Removing an OSD from a Ceph Storage node

This procedure demonstrates how to remove an OSD from a node. It assumes the following about the environment:

  • A server (ceph-storage0) has an OSD (ceph-osd@4) running on /dev/sde.
  • The Ceph monitor service (ceph-mon) is running on controller0.
  • There are enough available OSDs to ensure the storage cluster is not at its near-full ratio.

For more information about Ceph OSDs, see Ceph OSDs in the Red Hat Ceph Storage Operations Guide.

Procedure

  1. SSH into ceph-storage0 and log in as root.
  2. Disable and stop the OSD service:

    [root@ceph-storage0 ~]# systemctl disable ceph-osd@4
    [root@ceph-storage0 ~]# systemctl stop ceph-osd@4
  3. Disconnect from ceph-storage0.
  4. SSH into controller0 and log in as root.
  5. Identify the name of the Ceph monitor container:

    [root@controller0 ~]# podman ps | grep ceph-mon
    ceph-mon-controller0
    [root@controller0 ~]#
  6. Use the Ceph monitor container to mark the undesired OSD as out:

    [root@controller0 ~]# podman exec ceph-mon-controller0 ceph osd out 4
    Note

    This command causes Ceph to rebalance the storage cluster and copy data to other OSDs in the cluster. The cluster temporarily leaves the active+clean state until rebalancing is complete.

  7. Run the following command and wait for the storage cluster state to become active+clean:

    [root@controller0 ~]# podman exec ceph-mon-controller0 ceph -w
  8. Remove the OSD from the CRUSH map so that it no longer receives data:

    [root@controller0 ~]# podman exec ceph-mon-controller0 ceph osd crush remove osd.4
  9. Remove the OSD authentication key:

    [root@controller0 ~]# podman exec ceph-mon-controller0 ceph auth del osd.4
  10. Remove the OSD:

    [root@controller0 ~]# podman exec ceph-mon-controller0 ceph osd rm 4
  11. Disconnect from controller0.
  12. SSH into the undercloud as the stack user and locate the heat environment file in which you defined the CephAnsibleDisksConfig parameter.
  13. Note that the heat template lists four OSD devices:

    parameter_defaults:
      CephAnsibleDisksConfig:
        devices:
          - /dev/sdb
          - /dev/sdc
          - /dev/sdd
          - /dev/sde
        osd_scenario: lvm
        osd_objectstore: bluestore
  14. Modify the template to remove /dev/sde:

    parameter_defaults:
      CephAnsibleDisksConfig:
        devices:
          - /dev/sdb
          - /dev/sdc
          - /dev/sdd
        osd_scenario: lvm
        osd_objectstore: bluestore
  15. Run openstack overcloud deploy to update the overcloud. An optional verification check follows this procedure.

    Note

    In this example, you remove the /dev/sde device from all hosts with OSDs. If you do not remove the same device from all nodes, update the heat template. For information about how to define hosts with a differing devices list, see Section 5.5, “Overriding parameters for dissimilar Ceph Storage nodes”.
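
After the overcloud update completes, you can optionally confirm that the OSD count decreased by one. This check assumes the same ceph-mon-controller0 container name that is used earlier in this procedure:

    [root@controller0 ~]# podman exec ceph-mon-controller0 ceph osd stat

The output reports the total number of OSDs and how many are up and in; osd.4 must no longer be counted.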