ceph-ansible: What factors trigger the add-osd.yml playbook to restart all the OSDs on a node?
Issue
- When we scale up the environment by adding one OSD on each node using the add-osd.yml playbook, all OSDs on all nodes are restarted. This behavior is not always reproducible.
- Which factors trigger this restart handler, and what specific change caused the OSDs to restart in the customer environment?
Environment
- Red Hat Ceph Storage 3.3.z2
- Red Hat Ceph Storage 4.2
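For background on the restart mechanism: ceph-ansible restarts daemons through Ansible handlers. Any task whose result is "changed" (for example, re-templating /etc/ceph/ceph.conf with different content than what is currently on disk) notifies the restart handler, which then runs at the end of the play. A minimal sketch of that pattern follows; the task, handler, and variable names here are illustrative, not ceph-ansible's actual ones:

```yaml
# Illustrative sketch only: ceph-ansible's real task and handler names differ.
# The key mechanism: a task that changes a file reports "changed" and notifies
# a restart handler, which runs once at the end of the play.
- hosts: osds
  tasks:
    - name: Template the ceph configuration
      template:
        src: ceph.conf.j2
        dest: /etc/ceph/ceph.conf
      notify: restart ceph osds        # fires only if the rendered file changed

  handlers:
    - name: restart ceph osds
      service:
        name: "ceph-osd@{{ item }}"
        state: restarted
      loop: "{{ local_osd_ids }}"      # hypothetical variable: OSD ids on this host
```

The practical implication is that any drift between the inventory/group_vars used for the scale-up run and the values used in the original deployment (or anything else that changes the rendered ceph.conf) can mark the config task "changed" on every node and so restart every OSD, not just the new one.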