OSD Scale Up without heavy rebalancing

Solution In Progress

Issue

  • Our scale-up documentation suggests increasing the CephStorage node count and letting director call ceph-ansible to add the OSDs. When ceph-ansible adds OSDs, it waits for them to be marked "in" before the playbooks can complete.

  • When OSDs are marked "in", a data rebalance begins that consumes Ceph cluster resources. The larger the number of OSDs added, relative to the total number of OSDs, the more data is rebalanced.

  • How can we add new OSDs but delay the data rebalancing, so that we can manually execute it in smaller increments during limited maintenance windows? (See the sketch after this list.)
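
As an illustration of the general technique (a sketch only, not necessarily the resolution of this in-progress solution), new OSDs can be brought up with an initial CRUSH weight of 0. They are then marked "in" (so the ceph-ansible playbooks can complete) but receive no placement groups until they are deliberately reweighted. The sketch below assumes a director-deployed cluster; the environment file name is hypothetical, and the weight increments are examples only.

    # ceph-weights.yaml (hypothetical file name): pass this as an
    # additional environment file to "openstack overcloud deploy" so
    # that newly added OSDs start with a CRUSH weight of 0.
    parameter_defaults:
      CephConfigOverrides:
        osd_crush_initial_weight: 0

Once the scale-up completes, the data can be rebalanced in small increments during maintenance windows by raising each new OSD's CRUSH weight step by step:

    # Confirm the new OSDs are up and in, but still at weight 0:
    ceph osd tree

    # Raise the weight of one new OSD by a small increment, then wait
    # for the cluster to return to HEALTH_OK before the next step:
    ceph osd crush reweight osd.<id> 0.2
    ceph -s

    # Repeat until each new OSD reaches its full weight (by
    # convention, its capacity in TiB).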

Environment

  • Red Hat OpenStack Platform 13.0 (RHOSP)
  • Red Hat Ceph Storage 3.y
