Ceph: Cluster is "FULL", xx full osd(s)

Issue

  • Cluster is FULL: what is the workaround?
  • Cluster is FULL: how do I fix it?
  • Cluster is FULL and all I/O to the cluster is paused: how do I fix it?

For Red Hat OpenShift Container Storage (OCS) and Red Hat OpenShift Data Foundation (ODF) deployments with internal Ceph clusters, see the KCS solution OCS/ODF Ceph Cluster Full.
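
Before making any changes, confirm exactly how full each OSD and pool is. The commands below are a minimal diagnostic sketch using standard Ceph CLI calls; they illustrate the first inspection step only and are not the complete resolution.

    # Overall cluster capacity and per-pool usage
    ceph df detail

    # Per-OSD utilization laid out along the CRUSH tree
    ceph osd df tree

    # Which OSDs are flagged full or nearfull, and at what percentage
    ceph health detail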

Example

[root@edon ~]# ceph -s
  cluster:
    id:     1803fbda-2567-11ee-a4d7-fa163e87aaba
    health: HEALTH_ERR
            1 full osd(s)
            6 nearfull osd(s)
            Degraded data redundancy: 96/75108 objects degraded (0.128%), 7 pgs degraded
            Full OSDs blocking recovery: 7 pgs recovery_toofull
            9 pool(s) full

  services:
    mon: 5 daemons, quorum mgmt-0.icemanny01.lab.psi.pnq2.redhat.com,mons-0,mons-1,osds-0,osds-1 (age 18m)
    mgr: mgmt-0.icemanny01.lab.psi.pnq2.redhat.com.dccitq(active, since 9d), standbys: mons-0.lfkata
    mds: 1/1 daemons up, 1 standby
    osd: 24 osds: 24 up (since 16s), 24 in (since 7d)
    rgw: 2 daemons active (2 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   9 pools, 628 pgs
    objects: 25.04k objects, 97 GiB
    usage:   295 GiB used, 185 GiB / 480 GiB avail
    pgs:     96/75108 objects degraded (0.128%)
             621 active+clean
             7   active+recovery_toofull+degraded 


[root@edon ~]# ceph health detail 
HEALTH_ERR 1 full osd(s); 6 near full osd(s)
osd.60 is full at 95%
osd.0 is near full at 86%
osd.4 is near full at 91%
osd.8 is near full at 92%
osd.10 is near full at 94%
osd.25 is near full at 87%
osd.46 is near full at 92%
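
If client I/O is blocked because an OSD has reached the full ratio, a common short-term workaround (assumed from standard Ceph behaviour, not quoted from this article's verified resolution) is to raise the full ratio slightly so that I/O and recovery can resume while space is freed or capacity is added. Raising the ratio does not create any space, so it must be reverted once the OSDs are back below their thresholds.

    # Show the currently configured nearfull/backfillfull/full ratios
    ceph osd dump | grep -i ratio

    # Temporarily raise the full ratio from the default 0.95
    ceph osd set-full-ratio 0.97

    # Free space (delete unneeded data/snapshots) or add OSDs,
    # then restore the default ratio
    ceph osd set-full-ratio 0.95

The long-term fix is to reduce the amount of stored data or to add OSD capacity so that no OSD remains above the nearfull threshold.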

Environment

  • Red Hat Ceph Storage 4.x
  • Red Hat Ceph Storage 5.x
  • Red Hat Ceph Storage 6.x
