ODF - MDS pods in constant CrashLoopBackOff (Bad Session)

Solution Verified

Issue

Database workloads were placed on PVCs backed by CephFS rather than Ceph RBD (ceph-block-pool). The MDS pods are in constant CrashLoopBackOff (CLBO), and performing the steps in the following article did not return them to a healthy state: OCS 4.x - mds pods in constant CrashLoopBackOff with FAILED ceph_assert

NAME                                                             READY  STATUS   RESTARTS  AGE
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-b9bd569fbdkk5  1/2    Running  384       1d
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-5bbccc9d88zs5  1/2    Running  383       1d
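
The listing above can be reproduced, and the storage class backing the database PVCs confirmed, with commands along the following lines. This is a minimal sketch; the storage class names shown are the default OCS/ODF class names and may differ in your cluster.

# List the MDS pods and their restart counts (produces output like the listing above).
oc get pods -n openshift-storage | grep rook-ceph-mds

# Check which storage class backs the database PVCs. The class names below are the
# default OCS/ODF names (CephFS vs. Ceph RBD) and may differ in your environment.
oc get pvc -A | grep -E 'ocs-storagecluster-cephfs|ocs-storagecluster-ceph-rbd'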

Message:

Pod openshift-storage/rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-5bbccc9d88zs5 (mds) is in waiting state (reason: "CrashLoopBackOff").
Pod openshift-storage/rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-b9bd569fbdkk5 (mds) is in waiting state (reason: "CrashLoopBackOff").
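
To see the overall cluster and filesystem state while the MDS pods are crashing, the Ceph toolbox can be used. A minimal sketch, assuming the rook-ceph-tools deployment is enabled in the openshift-storage namespace:

# Open a shell in the toolbox pod (requires the toolbox deployment to be enabled).
oc rsh -n openshift-storage deploy/rook-ceph-tools

# Inside the toolbox, inspect cluster health, the CephFS MDS state, and recent daemon crashes.
ceph health detail
ceph fs status
ceph crash ls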

Environment

Red Hat OpenShift Container Storage (OCS) 4.6 and higher
Red Hat OpenShift Data Foundation (ODF) 4.9 and higher
