Ceph: How do I increase the Placement Group (PG) count in a Ceph cluster?

Issue

Note:

  • This is the most intensive process that can be performed on a Ceph cluster, and it can have a drastic performance impact if not done in a slow and methodical fashion.
  • Once data starts moving for a chunk of Placement Groups (PGs) (in the increasing pgp_num section), it cannot be stopped or reversed and must be allowed to complete; see the command sketch after this note.
  • It is advised that this process be performed off-hours, and that all clients be alerted to the potential performance impact well ahead of time.
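
For illustration only, the following is a minimal sketch of the kind of incremental pg_num / pgp_num change the note above refers to. The pool name (pool-a) and the values shown are placeholder assumptions, not recommendations for any particular cluster; the actual target values should come from the Ceph PG calc tool and your change plan.

    # Raise pg_num to the planned target; PGs are split, but data is not
    # rebalanced across OSDs until pgp_num is raised
    ceph osd pool set pool-a pg_num 2048

    # Raise pgp_num in small steps; each step starts data movement that
    # cannot be stopped or reversed once it begins
    ceph osd pool set pool-a pgp_num 1280

    # Wait until all PGs are active+clean before the next pgp_num step
    ceph -s

    # Final step once the cluster has settled
    ceph osd pool set pool-a pgp_num 2048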

Overview:

Having the proper Placement Group (PG) count is a critical part of ensuring top performance and the best possible data distribution in your Ceph cluster.

  • The Ceph PG calc tool should be referenced for optimal values.
  • Care should be taken to maintain a ratio of between 100 and 200 PGs per OSD, as detailed in the Ceph PG calc tool.
    • The current PG count per OSD can be viewed in the PGS column of the ceph osd df tree output (see the example after this list).
  • Increasing the PG count is only required if you expand your cluster with more OSDs, such that the ratio drops to 100 PGs per OSD or below, or if the initial PG count was not properly planned.
  • Because of the high intensity of this operation, careful planning should be used to ensure that, if a PG increase is needed, the new count covers any possible cluster expansion for the foreseeable future.
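
As a quick sanity check of the current ratio and pool settings before planning any change, the commands below can be used; the pool name (pool-a) is a placeholder assumption.

    # Show PGs per OSD (PGS column) along with per-OSD utilization
    ceph osd df tree

    # Query the current pg_num and pgp_num for a pool
    ceph osd pool get pool-a pg_num
    ceph osd pool get pool-a pgp_num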

Environment

All versions of Red Hat Ceph Storage
