8.5. Replacing Ceph Storage Nodes
A Ceph Storage node might fail. If it does, you must disable and rebalance the faulty node before removing it from the Overcloud to ensure no data is lost. This procedure explains the process for replacing a Ceph Storage node.
Note
This procedure uses steps from the Red Hat Ceph Storage Administration Guide to manually remove Ceph Storage nodes. For more in-depth information about manual removal of Ceph Storage nodes, see Chapter 15. Removing OSDs (Manual) from the Red Hat Ceph Storage Administration Guide.
- Log into either a Controller node or a Ceph Storage node as the heat-admin user. The director's stack user has an SSH key to access the heat-admin user.
- List the OSD tree and find the OSDs for your node. For example, the node to remove might contain the following OSDs:
-2 0.09998 host overcloud-cephstorage-0
 0 0.04999         osd.0  up  1.00000          1.00000
 1 0.04999         osd.1  up  1.00000          1.00000
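To produce this listing, run the ceph osd tree command from the node you are logged into; the exact column layout can vary slightly between Ceph releases, so treat this as an illustrative invocation:
[heat-admin@overcloud-controller-0 ~]$ sudo ceph osd tree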
- Disable the OSDs on the Ceph Storage node. In this case, the OSD IDs are 0 and 1.
[heat-admin@overcloud-controller-0 ~]$ sudo ceph osd out 0
[heat-admin@overcloud-controller-0 ~]$ sudo ceph osd out 1
The Ceph Storage cluster begins rebalancing. Wait for this process to complete. You can follow the status using the following command:
[heat-admin@overcloud-controller-0 ~]$ sudo ceph -w
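If you prefer a one-off check over the continuous event stream, the cluster health summary also indicates whether recovery is still in progress; this is an optional check rather than part of the documented procedure:
[heat-admin@overcloud-controller-0 ~]$ sudo ceph health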
- Once the Ceph cluster completes rebalancing, log into the faulty Ceph Storage node as the heat-admin user and stop the OSDs on it.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo /etc/init.d/ceph stop osd.0
[heat-admin@overcloud-cephstorage-0 ~]$ sudo /etc/init.d/ceph stop osd.1
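If the OSD daemons on your node are managed by systemd rather than the SysV init script shown above (an assumption about your environment), the equivalent commands would be similar to the following:
[heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl stop ceph-osd@0
[heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl stop ceph-osd@1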
- Remove the Ceph Storage node from the CRUSH map so that it no longer receives data.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph osd crush remove osd.0
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph osd crush remove osd.1
- Remove the OSD authentication key.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph auth del osd.0
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph auth del osd.1
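As an optional check that is not part of the original procedure, list the cluster's remaining authentication entries and confirm that the osd.0 and osd.1 keys are gone:
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph auth list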
- Remove the OSD from the cluster.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph osd rm 0
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph osd rm 1
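As an optional sanity check that is not part of the original procedure, list the OSD tree again and confirm that osd.0 and osd.1 no longer appear under the host:
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph osd tree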
- Leave the node and return to the director host as the stack user.
[heat-admin@overcloud-cephstorage-0 ~]$ exit
[stack@director ~]$
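If your shell on the director host does not already have the undercloud credentials loaded (an assumption about your session), source the stackrc file before running the ironic and heat commands in the following steps:
[stack@director ~]$ source ~/stackrc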
- Disable the Ceph Storage node so the director does not reprovision it.
[stack@director ~]$ ironic node-list
[stack@director ~]$ ironic node-set-maintenance [UUID] true
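As an optional check that is not part of the original procedure, show the node's details and confirm that its maintenance field reads True:
[stack@director ~]$ ironic node-show [UUID]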
- Removing a Ceph Storage node requires an update to the overcloud stack in the director using the local template files. First identify the UUID of the Overcloud stack:
$ heat stack-list
Identify the UUIDs of the Ceph Storage nodes to delete:
$ nova list
Run the following command to delete the nodes from the stack and update the plan accordingly:
$ openstack overcloud node delete --stack [STACK_UUID] --templates -e [ENVIRONMENT_FILE] [NODE1_UUID] [NODE2_UUID] [NODE3_UUID]
Important
If you passed any extra environment files when you created the Overcloud, pass them again here using the -e or --environment-file option to avoid making undesired changes to the Overcloud.
Wait until the stack completes its update. Monitor the stack update using the heat stack-list --show-nested command.
- Follow the procedure in Section 8.1, “Adding Compute or Ceph Storage Nodes” to add new nodes to the director's node pool and deploy them as Ceph Storage nodes. Use the --ceph-storage-scale option to define the total number of Ceph Storage nodes in the Overcloud. For example, if you removed a faulty node from a three-node cluster and you want to replace it, use --ceph-storage-scale 3 to return the number of Ceph Storage nodes to its original value:
$ openstack overcloud deploy --templates --ceph-storage-scale 3 -e [ENVIRONMENT_FILES]
Important
If you passed any extra environment files when you created the Overcloud, pass them again here using the -e or --environment-file option to avoid making undesired changes to the Overcloud.
The director provisions the new node and updates the entire stack with the new node's details.
- Log into a Controller node as the heat-admin user and check the status of the Ceph Storage cluster. For example:
[heat-admin@overcloud-controller-0 ~]$ sudo ceph status
Confirm that the value in the osdmap section matches the desired number of OSDs in your cluster.
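For illustration only (hypothetical epoch and counts, assuming three Ceph Storage nodes with two OSDs each), the osdmap line might look similar to:
osdmap e64: 6 osds: 6 up, 6 in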
The failed Ceph Storage node has now been replaced with a new node.
