The Ceph Ansible shrink-osd.yml playbook does not clean the Ceph OSD properly
Issue
- After running the shrink-osd.yml playbook (ansible-playbook infrastructure-playbooks/shrink-osd.yml from the /usr/share/ceph-ansible directory) to remove one or more OSDs, and then trying to add the same OSDs back with the add-osd.yml playbook (ansible-playbook infrastructure-playbooks/add-osd.yml) or the site-docker.yml playbook, nothing happens and the OSDs are not added to the cluster again.
- Running the lsblk command on the OSD server shows that the devices previously in use were not completely cleaned:

    # lsblk
    NAME                                                                                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda                                                                                                    8:0    0   20G  0 disk
    ├─sda1                                                                                                 8:1    0    1M  0 part
    ├─sda2                                                                                                 8:2    0  100M  0 part /boot/efi
    └─sda3                                                                                                 8:3    0 19,9G  0 part /
    sdb                                                                                                    8:16   0   20G  0 disk
    └─ceph--f4ef2331--cc60--4484--952e--058bc0ce06ce-osd--data--f907df66--b710--4d1d--97c4--4b81b320e019 253:0    0   19G  0 lvm
    sdc                                                                                                    8:32   0   20G  0 disk
    └─ceph--d8787373--1d52--4a1c--aa6c--86567b9cfef6-osd--data--e844aa7e--e025--486b--a069--19fd88e16fe8 253:1    0   19G  0 lvm
    sr0                                                                                                   11:0    1  492K  0 rom
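The leftover ceph--*-osd--data--* LVM volumes in the lsblk output are what block redeployment. A minimal cleanup sketch is shown below, using the standard ceph-volume lvm zap command to wipe the LVM metadata; the device list (/dev/sdb, /dev/sdc) is taken from the example output above and must be adjusted for your host. The DRY_RUN guard is an illustrative safety default that only prints the commands; unset it only when you intend to destroy the data on those devices.

```shell
#!/bin/sh
# Hedged cleanup sketch: wipe the LVM volumes that ceph-volume left behind
# so the devices can be reused by add-osd.yml. DRY_RUN defaults to "echo",
# so by default this script only prints what it would run.
DRY_RUN="${DRY_RUN:-echo}"

# Device names taken from the lsblk output above; adjust for your host.
for dev in /dev/sdb /dev/sdc; do
    # "ceph-volume lvm zap --destroy" removes the LVM volume group and
    # logical volume created on the device and wipes its data.
    $DRY_RUN ceph-volume lvm zap --destroy "$dev"
done
```

Run lsblk again afterwards: the ceph--* lvm entries under sdb and sdc should be gone before re-running the add-osd.yml playbook.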
Environment
- Red Hat Ceph Storage (RHCS) 4.0
- Red Hat Ceph Storage (RHCS) 4.1