
The mystical Ceph available space and disk usage


    Hello,

    I am running a Ceph cluster for testing purposes and it's looking very nice, except for some information that I have been unable to clarify...

    I am running Mimic (the latest version) on 10 servers, each with 22 SSDs of 1 TB.

    I've started adding some data to see the load, IO speeds and so on, but after a closer look I cannot understand how the used and free space is calculated.

    I have only one RBD pool with replication 2, CephFS with replication 2, and another RBD pool, also with replication 2.

    My question is: how is the space calculated? I've added everything up and there is something missing that I cannot account for.

    This is what ceph df looks like:

    ceph df
    GLOBAL:
        SIZE        AVAIL      RAW USED     %RAW USED 
        141 TiB     61 TiB       80 TiB         56.54 
    POOLS:
        NAME                           ID     USED        %USED     MAX AVAIL     OBJECTS   
        rbd                            1       23 TiB     51.76        22 TiB       6139492 
        .rgw.root                      7      1.1 KiB         0        22 TiB             4 
        default.rgw.control            8          0 B         0        22 TiB             8 
        default.rgw.meta               9      1.7 KiB         0        22 TiB            10 
        default.rgw.log                10         0 B         0        22 TiB           207 
        default.rgw.buckets.index      11         0 B         0        22 TiB             3 
        default.rgw.buckets.data       12      61 KiB         0        22 TiB            11 
        default.rgw.buckets.non-ec     13         0 B         0        22 TiB             0 
        cephfs_data                    14      12 TiB     35.42        22 TiB     296419491
        cephfs_metadata                15     174 MiB         0        11 TiB       1043395
        rbd2                           16     2.7 TiB     11.18        22 TiB        720971
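
    Here is roughly how I tried adding things up (just a back-of-the-envelope sketch on my side, assuming the data pools really are replicated with size 2 and ignoring any BlueStore/metadata or allocation overhead; the numbers come from the ceph df output above and from my hardware description):

    # Rough sanity check of the numbers reported by `ceph df` above.
    # Assumptions (mine, not anything Ceph reports): every pool is
    # replicated with size 2, and on-disk overhead is ignored.

    pool_used_tib = {                       # "USED" column per pool, in TiB
        "rbd": 23,
        "cephfs_data": 12,
        "cephfs_metadata": 174 / (1024 * 1024),  # 174 MiB expressed in TiB
        "rbd2": 2.7,
    }
    replication = 2

    logical_used = sum(pool_used_tib.values())
    expected_raw = logical_used * replication

    # GLOBAL line from `ceph df`
    size_tib, avail_tib, raw_used_tib = 141, 61, 80

    # Nominal raw capacity from the hardware: 10 servers x 22 SSDs x 1 TB
    nominal_raw_tib = 10 * 22 * 1e12 / 2**40

    print(f"logical used across pools  : {logical_used:.2f} TiB")
    print(f"expected raw used (x{replication})      : {expected_raw:.2f} TiB")
    print(f"reported RAW USED          : {raw_used_tib} TiB")
    print(f"SIZE - AVAIL               : {size_tib - avail_tib} TiB")
    print(f"gap I cannot explain       : {raw_used_tib - expected_raw:.2f} TiB")
    print(f"nominal raw capacity       : {nominal_raw_tib:.0f} TiB (ceph df SIZE is {size_tib} TiB)")

    Even with this rough math I end up a few TiB away from the reported RAW USED, and that is the part I cannot explain.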
    
