Ceph HEALTH_WARN pgs not deep-scrubbed in time.
Issue
- ceph status shows HEALTH_WARN xxx pgs not deep-scrubbed in time.
- The count of PGs not deep-scrubbed in time is rising over time.
- There are no PGs actively deep-scrubbing.
# ceph status
  cluster:
    id:     e2bf4810-8eb8-11ee-92a7-fa163e01bab3
    health: HEALTH_WARN
            85 pgs not deep-scrubbed in time
            25 pgs not scrubbed in time

  services:
    mon: 5 daemons, quorum mgmt-0,osds-0,mons-0,osds-1,mdss-0 (age 13h)
    mgr: mgmt-0.ewastw(active, since 2w), standbys: osds-0.ivuqwu
    osd: 24 osds: 24 up (since 2w), 24 in (since 2w)
    rgw: 2 daemons active (2 hosts, 1 zones)

  data:
    pools:   6 pools, 161 pgs
    objects: 241 objects, 456 KiB
    usage:   7.8 GiB used, 232 GiB / 240 GiB avail
    pgs:     161 active+clean
- If PGs are both "not deep-scrubbed in time" and actively "scrubbing+deep", see KCS Article 7083351.
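To see which PGs are behind, the timestamps reported by ceph pg dump can be compared against the deep-scrub interval. The sketch below is a minimal illustration, not Red Hat tooling: it assumes the JSON from ceph pg dump pgs --format json exposes a top-level pg_stats array with pgid and last_deep_scrub_stamp fields (the exact nesting and timestamp format vary by release), and it uses the default osd_deep_scrub_interval of 7 days (604800 seconds).

```python
import json
from datetime import datetime, timedelta, timezone

# Default osd_deep_scrub_interval (7 days); adjust if the cluster overrides it.
DEEP_SCRUB_INTERVAL = timedelta(seconds=604800)

def overdue_pgs(pg_dump_json, now=None):
    """Return [(pgid, last_deep_scrub_stamp)] for PGs past the deep-scrub interval.

    Assumes timestamps like "2023-11-28T09:15:02.123456+0000"; older
    releases may format them differently.
    """
    now = now or datetime.now(timezone.utc)
    overdue = []
    for pg in json.loads(pg_dump_json)["pg_stats"]:
        stamp = datetime.strptime(pg["last_deep_scrub_stamp"],
                                  "%Y-%m-%dT%H:%M:%S.%f%z")
        if now - stamp > DEEP_SCRUB_INTERVAL:
            overdue.append((pg["pgid"], pg["last_deep_scrub_stamp"]))
    return overdue
```

In practice, ceph health detail already lists the affected PG IDs; a script like this is only useful for sorting the backlog by how stale each PG's last deep scrub is.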
Environment
- Red Hat Ceph Storage (RHCS) 5.x
- Red Hat Ceph Storage (RHCS) 6.x