Is it possible to delete data from Ceph pools when the OSD disks become full, or hit the full_ratio of 95%?

Solution In Progress

Issue

  • Is it possible to delete data from Ceph pools when the OSD disks become full or hit the full_ratio of 95%?

  • This is especially important for OpenStack deployments, since there is no way to delete volumes from OpenStack once the backend OSDs reach a full state.

  • A Ceph OSD stops accepting incoming I/O when it reaches 95% of its capacity. This blocks all I/O, including I/O intended to delete existing data and free up space.

  • Working around this requires temporarily raising the full_ratio of the OSDs.

  • It would be useful to have a feature that allows removing data from pools when the OSD disks become full, at least for RBD.
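The workaround described above can be sketched as follows. This is a hedged example, not an official procedure: the exact command depends on the Ceph release (pre-Luminous releases such as the Jewel base of RHCS 2.x use `ceph pg set_full_ratio`, while later releases use `ceph osd set-full-ratio`), and 0.97 is an illustrative value chosen to stay below 100% while unblocking I/O.

```shell
# Identify which OSDs are full or near-full
ceph health detail

# Temporarily raise the cluster full ratio just enough to unblock I/O
# (pre-Luminous syntax; on later releases: ceph osd set-full-ratio 0.97)
ceph pg set_full_ratio 0.97

# Delete unneeded data while I/O is unblocked, e.g. remove an RBD image
# ("mypool/myimage" is a placeholder, not a name from this article):
# rbd rm mypool/myimage

# Restore the default once enough space has been reclaimed
ceph pg set_full_ratio 0.95

# Confirm the cluster is no longer reporting full OSDs
ceph health
```

Raise the ratio only as far as needed and restore the default promptly: an OSD that reaches 100% of its capacity can be much harder to recover.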

Environment

  • Red Hat Ceph Storage 2.x

  • Red Hat OpenStack (using Red Hat Ceph Storage as a backend for its instances)
