Ceph - Adding OSDs with an initial CRUSH weight of 0 causes 'ceph df' output to report invalid MAX AVAIL on pools
Issue
- When adding OSDs to a Ceph cluster using 'ceph-deploy' with the OSD initial CRUSH weight set to 0, the output of 'ceph df' reports 'MAX AVAIL' as 0 instead of the proper numerical value for all Ceph pools. This causes problems for OpenStack Cinder, which concludes there is no available space to provision new volumes. A configuration sketch and verification commands are shown after the output below.
- Before adding an OSD:
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    589T     345T      243T         41.32
POOLS:
    NAME          ID     USED       %USED     MAX AVAIL     OBJECTS
    data          0      816M       0         102210G       376
    metadata      1      120M       0         102210G       94
    images        5      11990G     1.99      68140G        1536075
    volumes       6      63603G     10.54     68140G        16462022
    instances     8      5657G      0.94      68140G        1063602
    rbench        12     260M       0         68140G        22569
    scratch       13     40960      0         68140G        10
- After adding an OSD:
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    590T     346T      243T         41.24
POOLS:
    NAME          ID     USED       %USED     MAX AVAIL     OBJECTS
    data          0      816M       0         0             376
    metadata      1      120M       0         0             94
    images        5      11990G     1.98      0             1536075
    volumes       6      63603G     10.52     0             16462022
    instances     8      5657G      0.94      0             1063602
    rbench        12     260M       0         0             22569
    scratch       13     40960      0         0             10
- MAX AVAIL is showing 0 for all pools.
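- This behavior is typically seen when the initial CRUSH weight is forced to 0 in ceph.conf before the new OSDs are deployed. A minimal sketch of such a configuration is shown below; the section placement and the option spelling can vary by deployment, so treat it as illustrative rather than a copy of any particular cluster's file.

    [global]
    osd crush initial weight = 0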
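- To confirm that a newly added zero-weight OSD is involved, the CRUSH weights can be inspected and, if appropriate, a nonzero weight assigned manually. The commands below are a sketch; 'osd.57' and the weight '1.09' are illustrative placeholders, not values taken from this cluster.

    # Show the CRUSH hierarchy and each OSD's CRUSH weight;
    # a newly added OSD deployed with the setting above will show a weight of 0.
    ceph osd tree

    # Assign a nonzero CRUSH weight to the new OSD (the weight is
    # conventionally the size of the underlying disk in TiB).
    ceph osd crush reweight osd.57 1.09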
Environment
- Red Hat Ceph Storage 1.3.x
- Red Hat Enterprise Linux 7.x