Ceph: How to failover an MDS daemon
Issue
How to failover an MDS daemon
In a Ceph cluster running more than one MDS daemon, promoting a standby MDS to active requires failing one of the currently active MDS daemons; the cluster then promotes a standby to take over the failed rank.
Some examples:
[user@edon-0 ~]# ceph fs status
root - 88 clients
====
RANK  STATE   MDS                 ACTIVITY       DNS    INOS   DIRS   CAPS
 0    active  root.edon-1.amvgfe  Reqs:    8 /s  76.1k  75.0k  1501   38.9k
 1    active  root.edon-3.wnboxv  Reqs:  121 /s  5321k  5309k  12.1k   494k
 2    active  root.edon-7.oqqvka  Reqs:    0 /s   5370   3575   289    3710
 3    active  root.edon-9.qvfdrs  Reqs:    1 /s  8184k  8178k  84.2k   434k
 4    active  root.edon-5.lbrqru  Reqs:  556 /s   319k   307k  23.2k  45.9k
    POOL        TYPE      USED   AVAIL
cephfs.meta   metadata   46.2G   12.8T
cephfs.data     data      124T    246T
STANDBY MDS
root.edon-2.rdnzhn
root.edon-6.kfqjor
root.edon-4.pjrzki
MDS version: ceph version 17.2.6-100.el9cp (ea4e3ef8df2cf26540aae06479df031dcfc80343) quincy (stable)
[user@edon0 ~]$ sudo cephadm shell -- ceph fs status
cephfs - 207 clients
======
RANK  STATE   MDS                  ACTIVITY        DNS    INOS   DIRS   CAPS
 0    active  cephfs.edon1.bnsdfe  Reqs:  1804 /s  17.9M  17.7M  6856k  1811k
 1    active  cephfs.edon3.ipwxcw  Reqs:    57 /s    942    301   1063     67
      POOL         TYPE      USED   AVAIL
cephfs_metadata  metadata   40.9G    117T
cephfs_data        data     39.8T    117T
STANDBY MDS
cephfs.edon2.gruurg
MDS version: ceph version 16.2.10-187.el8cp (5d6355e2bccd18b5c6457a34cb666d773f21823d) pacific (stable)
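The failover itself is performed with the ceph mds fail command, which accepts either an MDS rank (optionally qualified as filesystem:rank) or a daemon name. A minimal sketch, using the rank and daemon name from the first example above:

```shell
# Identify the active MDS ranks and daemon names first:
ceph fs status

# Fail the active MDS holding rank 1 of the "root" filesystem;
# a standby MDS will be promoted to take over the failed rank:
ceph mds fail root:1

# Alternatively, fail the daemon by its name:
ceph mds fail root.edon-3.wnboxv

# Confirm that a standby has taken over the rank:
ceph fs status
```

On a cephadm-deployed cluster (as in the second example), run the same commands inside the admin shell, e.g. sudo cephadm shell -- ceph mds fail root:1. Note that clients with caps on the failed rank will briefly pause I/O while the replacement MDS replays the journal.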
Environment
Red Hat Ceph Storage (RHCS) 4
Red Hat Ceph Storage (RHCS) 5
Red Hat Ceph Storage (RHCS) 6