How to prevent a Ceph cluster from automatically replicating data to other OSDs, while removing one of the OSDs manually.


Issue

  • An OSD disk has been found faulty and needs to be removed from the Ceph cluster and replaced with a new disk.
  • How can I prevent a Ceph cluster from automatically replicating data to other OSDs, while removing one of the OSDs manually?
  • While the problematic disk is being removed, the Ceph cluster will automatically re-create replicas of its objects on other OSDs to maintain the replica count set by 'osd_pool_default_size' in /etc/ceph/ceph.conf (an illustrative excerpt is shown after this list). This generates additional traffic and I/O that is better avoided, since a new OSD is about to be added and the data is better replicated to the new disk instead.

  • In such a case, how can the automatic replication be stopped? (See the command sketch after this list.)
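For reference, the replica count mentioned above lives in the [global] section of ceph.conf. The excerpt below is a minimal illustration; the value 3 is an assumption for the example, not taken from this article:

    [global]
    # Default number of object replicas for newly created pools.
    # 3 is only an example value; check your own ceph.conf.
    osd_pool_default_size = 3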
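The verified resolution of this article is subscriber-only. As a rough sketch of the approach commonly used in this situation (an assumption, not the confirmed resolution from this article): Ceph exposes cluster-wide flags that pause recovery and backfill, so no objects are re-replicated to other OSDs while the faulty OSD is being removed:

    # Pause automatic recovery/backfill before removing the faulty OSD
    ceph osd set noout        # stopped OSDs are not marked "out"
    ceph osd set norecover    # pause recovery of degraded objects
    ceph osd set nobackfill   # pause backfill onto other OSDs

    # ... stop the OSD daemon, remove the old disk, add the new OSD ...

    # Re-enable recovery so the replicas are rebuilt onto the new disk
    ceph osd unset nobackfill
    ceph osd unset norecover
    ceph osd unset noout

While these flags are set, 'ceph status' reports them as health warnings; remember to unset them once the replacement OSD is in place, or the cluster will remain degraded.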

Environment

  • Red Hat Ceph Enterprise 1.3
  • Red Hat Ceph Enterprise 1.2.3
