Is it possible to delete data from Ceph pools when the OSD disks become full, or hit the full_ratio of 95%?
Issue

- Is it possible to delete data from Ceph pools when the OSD disks become full or hit the full_ratio of 95%?
- This is especially important for OpenStack implementations, since there is no way to delete volumes from OpenStack once the backend OSDs reach a full state.
- A Ceph OSD disk stops accepting incoming I/O when it reaches 95% of its capacity. This blocks all I/O, even I/O intended to remove existing data from the disk to free up space.
- Working around this requires changing the full_ratio of the OSD.
- It would be useful to have a feature for removing data from pools when the OSD disks become full, at least for RBD.
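The workaround described above (raising the full_ratio so that deletions can proceed) might look roughly like the following. This is a sketch, not an authoritative procedure: the command syntax differs between releases (Red Hat Ceph Storage 2.x / Jewel uses `ceph pg set_full_ratio`, while Luminous and later use `ceph osd set-full-ratio`), and the 0.96 value and the `<pool>/<image>` names are illustrative assumptions only.

```shell
# Check which OSDs are near-full or full, and current utilization
ceph health detail
ceph osd df

# Temporarily raise the full ratio slightly above the 0.95 default
# (RHCS 2.x / Jewel syntax; 0.96 is an assumed value that leaves
#  a small amount of headroom for the deletions themselves)
ceph pg set_full_ratio 0.96

# On Luminous (RHCS 3.x) and later, the equivalent is:
# ceph osd set-full-ratio 0.96

# With I/O unblocked, remove unneeded RBD images/volumes, e.g.:
# rbd rm <pool>/<image>

# Once utilization has dropped below the threshold, restore the default
ceph pg set_full_ratio 0.95
```

Raise the ratio only as far as needed and restore it promptly: an OSD that fills to 100% can be difficult to recover.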
Environment

- Red Hat Ceph Storage 2.x
- Red Hat OpenStack (using Red Hat Ceph Storage as a backend for its instances)
