Ceph - 'ceph osd reweight-by-utilization' was used and the cluster is now showing PGs stuck unclean with degraded objects
Issue
- 'ceph osd reweight-by-utilization' was used in an attempt to rebalance data away from over-utilized OSDs in the cluster (a representative invocation is shown below). After the data movement finished and the cluster settled down, placement groups (PGs) remain stuck unclean and remapped, with objects reported as degraded.
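For reference, 'ceph osd reweight-by-utilization' accepts an optional utilization threshold, expressed as a percentage of the average OSD utilization; OSDs above the threshold are given a lower reweight value, which causes CRUSH to remap PGs away from them. A representative invocation, with an illustrative threshold of 120 that is not taken from the affected cluster, would be:
[ceph@admin ~]$ ceph osd reweight-by-utilization 120
The cluster state reported after the rebalance completed is shown below.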
[ceph@admin ~]$ ceph -s
    cluster xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
     health HEALTH_WARN 677 pgs stuck unclean; recovery 392/46131948 objects degraded (0.001%)
     monmap e1: 3 mons at {ceph1=10.0.0.1:6789/0,ceph2-b=10.0.0.2:6789/0,ceph3=10.0.0.3:6789/0}, election epoch 58, quorum 0,1,2 ceph1,ceph2,ceph3
     osdmap e6101: 112 osds: 112 up, 112 in
      pgmap v14994116: 36720 pgs, 18 pools, 59256 GB data, 15016 kobjects
            173 TB used, 168 TB / 341 TB avail
            392/46131948 objects degraded (0.001%)
               36043 active+clean
                 677 active+remapped
  client io 22293 B/s rd, 36927 kB/s wr, 246 op/s
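With all 112 OSDs up and in, the remapped PGs and the reweight overrides behind them can be inspected with standard commands; a minimal diagnostic sketch, assuming the same admin node as above, is:
[ceph@admin ~]$ ceph health detail
[ceph@admin ~]$ ceph pg dump_stuck unclean
[ceph@admin ~]$ ceph osd tree
'ceph health detail' expands the HEALTH_WARN summary into the individual PG IDs, 'ceph pg dump_stuck unclean' lists the stuck PGs together with their up and acting OSD sets, and the REWEIGHT column of 'ceph osd tree' shows which OSDs were given an override value below 1 by 'ceph osd reweight-by-utilization'.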
Environment
- Red Hat Enterprise Linux 6.x
- Red Hat Enterprise Linux 7.x
- Red Hat Ceph Storage 1.2.x
