A "ceph -s" or "ceph status" shows a negative number for degraded objects.

Solution In Progress

Issue

  • Why does shuffling OSDs around result in a negative number of degraded objects?

  • Moving a few OSDs in and out, as well as setting a pool to triple replication and back to double, resulted in 'ceph -s' showing a negative number for degraded objects (example commands are shown after the status output below).

# ceph -s
    cluster dc4904ea-3e0d-40cc-b083-80bb42e4783c
     health HEALTH_WARN 14 pgs stuck unclean; recovery -82/91468 objects degraded (-0.090%)
     monmap e10: 3 mons at {mon1=10.10.20.77:6789/0,mon2=10.10.20.78:6789/0,mon3=10.10.20.79:6789/0}, election epoch 430, quorum 0,1,2 mon1,mon2,mon3
     osdmap e15362: 24 osds: 24 up, 24 in
      pgmap v10420714: 2048 pgs, 1 pools, 110 GB data, 45734 objects
            226 GB used, 14827 GB / 15053 GB avail
            -82/91468 objects degraded (-0.090%)
                  14 active
                2034 active+clean
  client io 64868 kB/s rd, 69308 kB/s wr, 5955 op/s
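
For reference, a command sequence along the following lines can lead to the state shown above. The pool name ("rbd") and the OSD id are placeholders for illustration, not values taken from this cluster:

# ceph osd out 7
# ceph osd in 7
# ceph osd pool set rbd size 3
# ceph osd pool set rbd size 2
# ceph -s

In the status output above, the total of 91468 object copies is consistent with 45734 objects at double replication (45734 × 2), and -82/91468 works out to the reported -0.090%.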

Environment

  • Red Hat Ceph Storage 1.2.3
