Why doesn't my Ceph cluster fully redistribute PGs after an OSD is marked down and out?

Solution In Progress

Issue

  • Why doesn't my Ceph cluster fully redistribute PGs after an OSD is marked down and out?
  • When an OSD goes offline or is marked down and out, most PGs and objects redistribute to other OSDs as expected, but some Placement Groups (PGs) occasionally remain "stuck" and/or degraded until the OSD is manually removed from the OSD map (see the command sketch after this list).
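
The commands below are a minimal sketch of the situation described above, assuming a single OSD that is already down and out; they show how to identify stuck or degraded PGs and how to manually remove the OSD from the CRUSH and OSD maps so the remaining PGs can be remapped. The OSD id (<id>) is a placeholder, and whether removal is the right remediation depends on the cluster.

    # Identify stuck or degraded PGs and check overall cluster health
    ceph health detail
    ceph pg dump_stuck unclean

    # Confirm the down/out OSD still appears in the CRUSH hierarchy
    ceph osd tree

    # Manually remove the OSD from the CRUSH map, delete its auth key,
    # and remove it from the OSD map so stuck PGs can be remapped
    ceph osd crush remove osd.<id>
    ceph auth del osd.<id>
    ceph osd rm <id>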

Environment

  • Red Hat Ceph Storage
  • Inktank Ceph Enterprise
