Why does Red Hat Ceph Storage report high USED space after a fresh installation?
Issue
- After a fresh Ceph installation, the `usage` field in the output of `ceph -s` shows 12 TiB (a quick per-OSD calculation follows this list):

  ```
  [root@node01 ~]# ceph -s
    cluster:
      id:     <cluster-id>
      health: HEALTH_OK

    services:
      mon: 3 daemons, quorum node02,node03,node01 (age 2h)
      mgr: node02.qkbcsi(active, since 2h), standbys: node03.svhevu, node01.dlmmxp
      osd: 96 osds: 96 up (since 88m), 96 in (since 89m)

    data:
      pools:   1 pools, 1 pgs
      objects: 0 objects, 0 B
      usage:   12 TiB used, 1.4 PiB / 1.4 PiB avail
      pgs:     1 active+clean
  ```
- Outputs of `ceph df` and `rados df` (a per-OSD view is sketched after this list):

  ```
  [root@node01 ~]# ceph df
  --- RAW STORAGE ---
  CLASS  SIZE     AVAIL    USED    RAW USED  %RAW USED
  hdd    1.4 PiB  1.4 PiB  12 TiB  12 TiB    0.83
  TOTAL  1.4 PiB  1.4 PiB  12 TiB  12 TiB    0.83

  --- POOLS ---
  POOL                   ID  PGS  STORED  OBJECTS  USED  %USED  MAX AVAIL
  device_health_metrics  1   1    0 B     0        0 B   0      442 TiB

  [root@node01 ~]# rados df
  POOL_NAME              USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS  RD   WR_OPS  WR   USED COMPR  UNDER COMPR
  device_health_metrics  0 B   0        0       0       0                   0        0         0       0 B  0       0 B  0 B         0 B

  total_objects    0
  total_used       12 TiB
  total_avail      1.4 PiB
  total_space      1.4 PiB
  ```
- Why is this occurring?
- Where is this high amount of `USED` space coming from?
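
As a quick back-of-the-envelope check (an illustration added here, not part of the original report), the reported raw usage divides evenly across the 96 OSDs:

```
# 12 TiB of raw usage spread evenly across 96 OSDs, expressed in GiB per OSD
echo $((12 * 1024 / 96))   # prints 128
```

Roughly 128 GiB of overhead per OSD, with 0 objects stored, suggests the space is being consumed by per-OSD internals (typically BlueStore's metadata/DB allocation) rather than by user data.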
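To see how that raw usage is actually distributed, the per-OSD breakdown can be inspected. `ceph osd df` is a standard command, though the exact columns vary by release; this is a sketch, not output captured from this cluster:

```
# Per-OSD view of raw usage; on a fresh BlueStore cluster the RAW USE
# column is expected to be almost entirely metadata (OMAP/META), with
# DATA close to 0, since no objects have been written yet
ceph osd df
```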
Environment
- All Red Hat Ceph Storage versions.