Some Ceph daemons are in an error state after a Ceph node reboot


Issue

  • Some Ceph daemons are in an error state after the Ceph nodes reboot:
# ceph status 
  cluster:
    id:     36086a52-bbf5-11ed-89c6-001a4a0002f7
    health: HEALTH_WARN
            3 failed cephadm daemon(s)

# ceph health detail
HEALTH_WARN 3 failed cephadm daemon(s)
[WRN] CEPHADM_FAILED_DAEMON: 3 failed cephadm daemon(s)
    daemon node-exporter.ceph-node1 on ceph-node1 is in error state
    daemon node-exporter.ceph-node2 on ceph-node2 is in error state
    daemon node-exporter.ceph-node3 on ceph-node3 is in error state

Environment

  • Red Hat Ceph Storage 5.x
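
Diagnostic Steps

The following commands are a general sketch for investigating the failed daemons, not the verified resolution; the daemon names and the cluster FSID are taken from the example output above.

  • List the node-exporter daemons and their current state through the orchestrator:

# ceph orch ps --daemon-type node-exporter

  • On an affected host, check the daemon's systemd unit and recent logs (the cephadm unit name embeds the cluster FSID):

# systemctl status ceph-36086a52-bbf5-11ed-89c6-001a4a0002f7@node-exporter.ceph-node1.service
# journalctl -u ceph-36086a52-bbf5-11ed-89c6-001a4a0002f7@node-exporter.ceph-node1.service

  • Attempt to restart a failed daemon through the orchestrator:

# ceph orch daemon restart node-exporter.ceph-node1

If the daemons restart cleanly, the CEPHADM_FAILED_DAEMON warning clears and ceph status returns to HEALTH_OK.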
