Why does an erasure-coded pool with "4+2" chunks, created on a ruleset spanning only four OSD nodes, move the cluster status to HEALTH_WARN?
Issue
- An EC pool created with "4+2" chunks on a ruleset mapped to four OSD nodes shifts the cluster from HEALTH_OK to HEALTH_WARN (see the example profile and pool creation commands after the status output below).
- A 'ceph -s' shows:
# ceph -s
    cluster <cluster-id>
     health HEALTH_WARN
            600 pgs degraded
            600 pgs stuck degraded
            600 pgs stuck unclean
            600 pgs stuck undersized
            600 pgs undersized
     monmap e1: 3 mons at {mon1:6789/0,mon2:6789/0,mon3:6789/0}
            election epoch 6, quorum 0,1,2 mon1,mon2,mon3
     osdmap e361: 81 osds: 81 up, 81 in; 600 remapped pgs
      pgmap v1083: 864 pgs, 3 pools, 533 MB data, 181 objects
            5156 MB used, 61124 GB / 61129 GB avail
                 600 active+undersized+degraded+remapped
                 264 active+clean
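
For reference, a 4+2 erasure-coded pool of this kind is typically created along the following lines. This is a minimal sketch: the profile name, pool name, and PG count are illustrative (600 PGs is taken from the undersized count shown above), and it assumes the default erasure-code-profile failure domain of 'host' used in these releases:

# ceph osd erasure-code-profile set ec42profile k=4 m=2 ruleset-failure-domain=host
# ceph osd erasure-code-profile get ec42profile
# ceph osd pool create ecpool 600 600 erasure ec42profile

With k=4 data chunks and m=2 coding chunks, every placement group needs k+m = 6 OSDs, and a failure domain of 'host' requires those 6 OSDs to come from 6 different OSD nodes. A ruleset mapped to only four nodes can therefore place at most four of the six chunks per PG, which is why all 600 PGs of the pool report active+undersized+degraded.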
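
To confirm that the undersized PGs cannot find enough distinct hosts, the CRUSH hierarchy, the rule used by the pool, and one of the stuck PGs can be inspected; the PG id below is a placeholder:

# ceph osd tree
# ceph osd crush rule dump
# ceph pg dump_stuck unclean
# ceph pg <pg-id> query

In the 'ceph pg query' output, the 'up' and 'acting' sets of an affected PG are expected to list fewer than the six OSDs required by the 4+2 profile.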
Environment
- Red Hat Ceph Storage 1.2.3
- Red Hat Ceph Storage 1.3
