Ceph: Status shows "Reduced data availability: xx pgs inactive, xx pgs peering"


Issue

Ceph status returns "[WRN] PG_AVAILABILITY: Reduced data availability: xx pgs inactive, xx pgs peering"

Example:

# ceph -s
  cluster:
    id:     5b3c2fd{Cluster ID Obfuscated}16bfb00
    health: HEALTH_WARN
            1 MDSs report slow metadata IOs
            1 MDSs report slow requests
            1 MDSs behind on trimming
            Reduced data availability: 6 pgs inactive, 6 pgs peering     <--- Here

  services:
    mon: 3 daemons, quorum e,f,h (age 22h)
    mgr: a(active, since 25h)
    mds: 1/1 daemons up, 1 hot standby
    osd: 6 osds: 6 up (since 3h), 6 in (since 3h); 60 remapped pgs

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 193 pgs
    objects: 378.73k objects, 98 GiB
    usage:   315 GiB used, 2.6 TiB / 2.9 TiB avail
    pgs:     3.109% pgs not active
             243503/1136184 objects misplaced (21.432%)
             127 active+clean
             59  active+remapped+backfill_wait
             6   peering                                                <--- Here
             1   active+remapped+backfilling

  io:
    client:   102 B/s wr, 0 op/s rd, 0 op/s wr
    recovery: 1.7 MiB/s, 8 objects/s

From Ceph Health Detail

# ceph health detail
HEALTH_WARN 1 MDSs report slow metadata IOs; 1 MDSs report slow requests; 1 MDSs behind on trimming; Reduced data availability: 6 pgs inactive, 6 pgs peering
[WRN] MDS_SLOW_METADATA_IO: 1 MDSs report slow metadata IOs
    mds.ocs-storagecluster-cephfilesystem-b(mds.0): 100+ slow metadata IOs are blocked > 30 secs, oldest blocked for 13158 secs
[WRN] MDS_SLOW_REQUEST: 1 MDSs report slow requests
    mds.ocs-storagecluster-cephfilesystem-b(mds.0): 84356 slow requests are blocked > 30 secs
[WRN] MDS_TRIM: 1 MDSs behind on trimming
    mds.ocs-storagecluster-cephfilesystem-b(mds.0): Behind on trimming (261/128) max_segments: 128, num_segments: 261
[WRN] PG_AVAILABILITY: Reduced data availability: 6 pgs inactive, 6 pgs peering   <--- Here
    pg 1.25 is stuck peering for 86m, current state peering, last acting [2,3,1]
    pg 1.43 is stuck peering for 74m, current state peering, last acting [2,1,3]
    pg 1.44 is stuck peering for 86m, current state peering, last acting [2,3,5]
    pg 1.63 is stuck peering for 97m, current state peering, last acting [2,5,3]
    pg 3.4 is stuck peering for 74m, current state peering, last acting [2,5,3]
    pg 3.1f is stuck peering for 86m, current state peering, last acting [2,5,0]
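
Note that in this example osd.2 appears in the acting set of every stuck PG, which is often a useful clue when investigating. To confirm which PGs are stuck and which OSDs are blocking peering, the following commands can be run (a minimal diagnostic sketch; the PG ID 1.25 is taken from the example output above):

# ceph pg dump_stuck inactive     <--- list PGs that are not active, including those stuck peering
# ceph osd blocked-by             <--- show OSDs that are blocking peering and how many PGs each blocks
# ceph pg 1.25 query              <--- dump the detailed state of one of the stuck PGs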

Environment

Red Hat Ceph Storage (RHCS) 4.x
Red Hat Ceph Storage (RHCS) 5.x
Red Hat OpenShift Container Platform (OCP) 4.x
Red Hat OpenShift Container Storage (OCS) 4.x
Red Hat OpenShift Data Foundation (ODF) 4.x
