ODF/CEPH: MGR not getting updates about PG states from OSDs and Ceph reports many PGs in "Unknown" state

Solution Verified

Issue

  • Ceph status shows a large number of unknown PGs, even though all the OSDs are up and in
  • This is usually accompanied by CephFS issues such as '1 filesystem is degraded' and 'MDSs behind on trimming'
$ ceph -s
  cluster:
    id:     740391ec-b7dc-4d73-944a
    health: HEALTH_WARN
            1 filesystem is degraded
            1 MDSs behind on trimming
            Reduced data availability: 273 pgs inactive
  services:
    mon: 3 daemons, quorum i,k,l (age 59s)
    mgr: a(active, since 3m)
    mds: 1/1 daemons up, 1 standby
    osd: 3 osds: 3 up (since 3d), 3 in (since 4M)
  data:
    volumes: 0/1 healthy, 1 recovering
    pools:   11 pools, 273 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             273 unknown
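
The PG states reported by ceph -s come from the active MGR, so when the MGR stops receiving PG updates from the OSDs, the cluster-wide view goes stale even while the OSDs themselves remain healthy. A quick way to confirm this mismatch (a diagnostic sketch, not the full solution) is to compare the OSD and MGR views:

    # Confirm all OSDs are actually up and in
    $ ceph osd stat
    3 osds: 3 up (since 3d), 3 in (since 4M)

    # Summarize PG states as seen by the active MGR
    $ ceph pg stat
    273 pgs: 273 unknown; 0 B data, 0 B used, 0 B / 0 B avail

    # Identify the active MGR daemon
    $ ceph mgr stat

If the OSDs report up/in while every PG stays unknown, the MGR's view is stale rather than the data being unavailable.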

Environment

  • Red Hat Ceph Storage 5.2 and above
  • Red Hat OpenShift Data Foundation 4.10 and above
