Ceph status in HEALTH_ERR: all pools are full
Issue
- Ceph status shows HEALTH_ERR and all pools are currently full:
sh-5.1$ ceph -s
  cluster:
    id:     [...]
    health: HEALTH_ERR
            1 backfillfull osd(s)
            1 full osd(s)
            1 nearfull osd(s)
            12 pool(s) full
- Even after deleting the most impactful PersistentVolume objects, the HEALTH_ERR status remains even though the pools are no longer at capacity:
--- POOLS ---
POOL                                                   ID  PGS  STORED   OBJECTS  USED     %USED   MAX AVAIL
ocs-storagecluster-cephblockpool                        1  128  530 GiB  142.79k  1.6 TiB  100.00        0 B
ocs-storagecluster-cephobjectstore.rgw.otp              2    8      0 B        0      0 B       0        0 B
ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec   3    8      0 B        0      0 B       0        0 B
ocs-storagecluster-cephobjectstore.rgw.control          4    8      0 B        8      0 B       0        0 B
ocs-storagecluster-cephobjectstore.rgw.log              5    8  3.3 MiB      348   12 MiB  100.00        0 B
ocs-storagecluster-cephobjectstore.rgw.meta             6    8  5.9 KiB       26  240 KiB  100.00        0 B
.rgw.root                                               7    8  5.4 KiB       17  192 KiB  100.00        0 B
ocs-storagecluster-cephobjectstore.rgw.buckets.index    8    8   31 MiB       44   93 MiB  100.00        0 B
ocs-storagecluster-cephfilesystem-metadata              9   32  331 MiB  147.12k  992 MiB  100.00        0 B
ocs-storagecluster-cephobjectstore.rgw.buckets.data    10   32   22 GiB  171.61k   68 GiB  100.00        0 B
ocs-storagecluster-cephfilesystem-data0                11   32   31 GiB  621.84k   97 GiB  100.00        0 B
.mgr                                                   12    1  5.8 MiB        3   17 MiB  100.00        0 B
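The table above shows every pool at 100.00 %USED with 0 B MAX AVAIL, even pools storing only a few MiB. A plausible reading is that the pool %USED figure is derived roughly as USED / (USED + MAX AVAIL): once a full OSD drives MAX AVAIL to 0 B, every pool reports 100% regardless of how little it actually stores. The sketch below illustrates that arithmetic; the helper name pool_percent_used is hypothetical and not part of Ceph.

```python
# Rough sketch (an assumption, not Ceph source) of how a pool's %USED
# figure can collapse to 100 when MAX AVAIL reaches 0 B.
def pool_percent_used(used_bytes: float, max_avail_bytes: float) -> float:
    """%USED ~= USED / (USED + MAX AVAIL), expressed as a percentage."""
    total = used_bytes + max_avail_bytes
    if total == 0:
        return 0.0  # pool with no data and no headroom shows 0, not 100
    return 100.0 * used_bytes / total

# With a full OSD, MAX AVAIL is 0 B, so even the 12 MiB rgw.log pool
# from the table reports 100.00 %USED:
print(pool_percent_used(12 * 2**20, 0))             # 100.0
# Once capacity is freed and MAX AVAIL recovers, the same pool's
# percentage drops to a tiny fraction:
print(pool_percent_used(12 * 2**20, 500 * 2**30))
```

This would explain why the %USED column alone is misleading here: it reflects the lack of available capacity on the full OSDs, not the amount of data in each pool.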
Environment
- Red Hat OpenShift Data Foundation 4
- Red Hat OpenShift Container Platform 4