ceph-ansible: What factors trigger the add-osd.yml playbook to restart all the OSDs on a node?

Solution In Progress

Issue

  • When we scale up the environment by adding one OSD on each node with the add-osd.yml playbook, all of the OSDs on all nodes are restarted. This behavior is not always reproducible.
  • What factors trigger this restart handler, and what specific change caused the OSDs to restart in this environment? (See the sketch after this list for the general mechanism.)
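
In general terms, ceph-ansible restarts daemons through Ansible handlers: a handler runs only when a task that notifies it reports a change. A common trigger is the templated /etc/ceph/ceph.conf, which ceph-ansible regenerates on every run; any difference in the rendered file, such as a changed monitor host list or modified ceph_conf_overrides values, marks the task as changed and notifies the OSD restart handler on that host. The fragment below is a minimal, hypothetical sketch of this notify/handler pattern, not the actual ceph-ansible code; the handler name and the osd_ids variable are illustrative only.

    # Hypothetical fragment showing the notify/handler pattern that can
    # lead to cluster-wide OSD restarts; names are illustrative.
    - hosts: osds
      tasks:
        - name: Render ceph.conf from a template
          ansible.builtin.template:
            src: ceph.conf.j2
            dest: /etc/ceph/ceph.conf
            mode: "0644"
          # Any difference in the rendered file (e.g. a new mon host,
          # changed ceph_conf_overrides) reports "changed" and queues
          # the handler on this host.
          notify: restart ceph osds

      handlers:
        - name: restart ceph osds
          ansible.builtin.service:
            name: "ceph-osd@{{ item }}"
            state: restarted
          # osd_ids is a placeholder for however the OSD service IDs
          # are discovered on the host.
          loop: "{{ osd_ids | default([]) }}"

Because the play targets all OSD hosts and the configuration file is re-rendered everywhere, any variable that differs from the original deployment run (for example, an edited group_vars entry) can report a change on every node at once, restarting all OSDs cluster-wide rather than only the newly added one. This would also explain why the behavior is intermittent: the handler only fires when the rendered configuration actually differs between runs.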

Environment

  • Red Hat Ceph Storage 3.3.z2
  • Red Hat Ceph Storage 4.2
