How to prevent a Ceph cluster from automatically replicating data to other OSDs while removing one of the OSDs manually
Issue
- An OSD disk was found faulty and needs to be removed from the Ceph cluster and replaced with a new disk.
- How can I prevent a Ceph cluster from automatically replicating data to other OSDs while removing one of the OSDs manually?
- While the problematic disk is being removed, the Ceph cluster automatically re-creates replicas of its objects on other OSDs, according to the value of 'osd_pool_default_size' in /etc/ceph/ceph.conf. This generates extra traffic and I/O that is better avoided, since a new OSD is about to be added and the replicas are better written to the new disk.
- In such a case, how can the automatic replication be stopped? (See the sketches after this list.)
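For reference, the replica count that drives this behaviour is the pool-creation default set in the [global] section of /etc/ceph/ceph.conf. A minimal sketch, where the value 3 is only an illustrative example and not taken from this cluster:

    [global]
    # Default number of object replicas for newly created pools.
    # Pools that already exist keep the 'size' they were created with.
    osd_pool_default_size = 3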
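As a general illustration, and not necessarily the exact procedure for these releases, Ceph exposes cluster-wide flags that pause recovery and backfill while an OSD is removed by hand. The sketch below uses osd.5 as a hypothetical OSD ID:

    # Pause automatic data movement before touching the failed disk
    ceph osd set norecover     # suspend recovery of degraded objects
    ceph osd set nobackfill    # suspend backfilling of data onto other OSDs
    ceph osd set noout         # do not auto-mark stopped OSDs "out"

    # Remove the failed OSD (osd.5 is hypothetical)
    ceph osd out 5
    service ceph stop osd.5          # systemctl stop ceph-osd@5 on systemd hosts
    ceph osd crush remove osd.5
    ceph auth del osd.5
    ceph osd rm 5

    # Once the replacement OSD has been added, let the cluster
    # rebuild the missing replicas onto the new disk
    ceph osd unset noout
    ceph osd unset nobackfill
    ceph osd unset norecover

Note that while 'nobackfill' and 'norecover' are set, the affected placement groups remain degraded, so the flags should be cleared as soon as the replacement OSD is in place.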
Environment
- Red Hat Ceph Storage 1.3
- Red Hat Ceph Storage 1.2.3
