Ceph - RHCS 3.x OSD deployed without the '--filestore' option does not show the correct disk size in 'ceph osd df' output
Environment
- Red Hat Ceph Storage 3.x
Issue
- After an OSD was redeployed on RHCS 3.x, it shows incorrect size information and looks different from the other OSDs in the cluster.
# ceph osd df:
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
0 ssd 3.63199 1.00000 10240M 9467M 772M 92.45 20.78 131 <------ osd.0
.....
5 ssd 3.63199 1.00000 3719G 1025G 2693G 27.57 6.20 419
10 ssd 3.63199 1.00000 3719G 1220G 2498G 32.81 7.38 458
16 ssd 3.63199 1.00000 3719G 1114G 2604G 29.98 6.74 428
21 ssd 3.63199 1.00000 3719G 1004G 2714G 27.02 6.07 417
# ceph osd tree:
-9 18.15994 host storage-001
0 ssd 3.63199 osd.0 up 1.00000 1.00000
5 ssd 3.63199 osd.5 up 1.00000 1.00000
10 ssd 3.63199 osd.10 up 1.00000 1.00000
16 ssd 3.63199 osd.16 up 1.00000 1.00000
21 ssd 3.63199 osd.21 up 1.00000 1.00000
Resolution
- Verify the OSD type (filestore vs. bluestore) with the following:
[root@storage-001 ~]# cat /var/lib/ceph/osd/ceph-*/type
bluestore
filestore
filestore
filestore
filestore
- Here we can see that this OSD reports different stats because it was deployed as a bluestore OSD, while the remaining OSDs are filestore.
- Remove the OSD device from the cluster, zap the disk, and redeploy using the method of your choice as outlined by the Ceph documentation.
- For a specific example of how to add an OSD in RHCS 3.x with the '--filestore' option using the command line, see https://access.redhat.com/solutions/1979363.
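The removal and redeploy steps above can be sketched as follows. This is a hedged outline, not an exact procedure: it assumes osd.0 is the affected OSD and /dev/sdb is its backing device on storage-001, both of which are placeholders you must adjust for your environment, and it assumes you allow the cluster to rebalance and return to HEALTH_OK between steps.

```shell
# Mark the OSD out and let data migrate off it before removal
ceph osd out 0

# Stop the OSD daemon on the OSD node (storage-001 in this example)
systemctl stop ceph-osd@0

# Remove the OSD from the CRUSH map, delete its auth key, and remove the OSD entry
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd rm 0

# Unmount the OSD data directory and zap the disk
# (/dev/sdb is an assumed device name; verify before zapping)
umount /var/lib/ceph/osd/ceph-0
ceph-disk zap /dev/sdb
```

After the zap completes, redeploy the OSD with the '--filestore' option using ceph-ansible or the command line, as covered in the linked solution.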
Root Cause
- In RHCS 3.x the default OSD type is bluestore. To manually deploy a filestore OSD, the '--filestore' option must be specified; otherwise the OSD is deployed with the bluestore default.
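As an illustration of the root cause, the two ceph-volume invocations below differ only in the objectstore option. This is a sketch assuming /dev/sdb as the data device and /dev/sdc1 as a journal partition; both device names are placeholders for your environment.

```shell
# Omitting the objectstore option deploys a bluestore OSD (the RHCS 3.x default)
ceph-volume lvm create --data /dev/sdb

# To deploy a filestore OSD, '--filestore' must be given explicitly,
# along with a journal device or partition
ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/sdc1
```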
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.