Integrate an existing Ceph storage cluster with an existing Red Hat OpenStack Platform 13
I have RHOSP 13 with 3 controller and 3 compute nodes. I ran the overcloud deploy without a Ceph storage cluster. I have now deployed a new 3-node Ceph storage cluster and want to integrate it with the existing OpenStack platform, without running overcloud deploy again. Is there any way to do this? I found the Red Hat documentation for Ceph integration, but its steps include an overcloud deploy as well. So is it mandatory to perform the overcloud deploy steps again? Any help is highly appreciated.
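For reference, my understanding of the Red Hat procedure is that the existing (external) cluster is described in an environment file that is then passed to `openstack overcloud deploy` together with the shipped `environments/ceph-ansible/ceph-external.yaml`. A minimal sketch of such a file (the filename is my own; the FSID matches the `ceph -s` output below, while the mon host IP and client key are placeholders for the real cluster values):

```yaml
# ceph-external-params.yaml -- hypothetical filename; substitute the
# values from your own cluster before use.
parameter_defaults:
  # FSID as reported by `ceph -s` on the external cluster
  CephClusterFSID: '944f59c4-3c9c-49b4-b508-19c947091b24'
  # Placeholder: key from `ceph auth get-key client.openstack`
  CephClientKey: 'AQD...replace-with-client-key...=='
  # Placeholder: IP(s) of the external Ceph monitor host(s)
  CephExternalMonHost: '192.168.126.xxx'
```

The documented flow then reruns the deploy with both files, e.g. `openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-external.yaml -e ceph-external-params.yaml`, which is exactly the step I was hoping to avoid.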
(undercloud) stack@director.lab.local:/home/stack> openstack server list
+--------------------------------------+------------------------+--------+--------------------------+----------------+---------+
| ID                                   | Name                   | Status | Networks                 | Image          | Flavor  |
+--------------------------------------+------------------------+--------+--------------------------+----------------+---------+
| 0f51537d-4a56-4bb7-b1a4-f882c80a4322 | overcloud-compute-2    | ACTIVE | ctlplane=192.168.126.110 | overcloud-full | compute |
| aa537344-a797-4e72-8d0e-3ede0355c04b | overcloud-compute-1    | ACTIVE | ctlplane=192.168.126.115 | overcloud-full | compute |
| 16ca88fc-8788-4543-85d6-4b764b4abcf7 | overcloud-controller-1 | ACTIVE | ctlplane=192.168.126.101 | overcloud-full | control |
| d9cd617c-dad9-447b-8f0a-c484f8ab6b9a | overcloud-compute-0    | ACTIVE | ctlplane=192.168.126.105 | overcloud-full | compute |
| d1368c09-b20e-492d-87dc-88c65189671c | overcloud-controller-2 | ACTIVE | ctlplane=192.168.126.108 | overcloud-full | control |
| fae4193d-a4fb-4ba4-9e2b-ac30656b143c | overcloud-controller-0 | ACTIVE | ctlplane=192.168.126.102 | overcloud-full | control |
+--------------------------------------+------------------------+--------+--------------------------+----------------+---------+
(undercloud) stack@director.lab.local:/home/stack>
root@overcloud-cephmon:/root> ceph -s
  cluster:
    id:     944f59c4-3c9c-49b4-b508-19c947091b24
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum overcloud-cephmon
    mgr: overcloud-cephmon(active)
    osd: 9 osds: 9 up, 9 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   9.09GiB used, 440GiB / 449GiB avail
    pgs:

root@overcloud-cephmon:/root>
root@overcloud-cephmon:/root> ceph osd tree
ID CLASS WEIGHT  TYPE NAME                  STATUS REWEIGHT PRI-AFF
-1       0.43822 root default
-7       0.14607     host overcloud-ceph-0
 2   hdd 0.04869         osd.2              up     1.00000  1.00000
 7   hdd 0.04869         osd.7              up     1.00000  1.00000
 8   hdd 0.04869         osd.8              up     1.00000  1.00000
-3       0.14607     host overcloud-ceph-1
 0   hdd 0.04869         osd.0              up     1.00000  1.00000
 3   hdd 0.04869         osd.3              up     1.00000  1.00000
 5   hdd 0.04869         osd.5              up     1.00000  1.00000
-5       0.14607     host overcloud-ceph-2
 1   hdd 0.04869         osd.1              up     1.00000  1.00000
 4   hdd 0.04869         osd.4              up     1.00000  1.00000
 6   hdd 0.04869         osd.6              up     1.00000  1.00000
root@overcloud-cephmon:/root>
Regards,
Arunabha