Chapter 12. Add OSDs

Before creating OSDs, consider the following:

  • We recommend using the XFS filesystem (default).
  • We recommend using SSDs for journals. It is common to partition an SSD to serve journals for multiple OSDs. Ensure that the combined throughput of the journals sharing an SSD does not exceed the SSD’s sequential write bandwidth, and ensure that the partitions are properly aligned, or their write performance will suffer. (See the partitioning sketch after this list.)
  • We recommend using ceph-deploy disk zap on a Ceph OSD drive before executing ceph-deploy osd prepare or ceph-deploy osd create. For example:
ceph-deploy disk zap <ceph-node>:<data-drive>
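
As a sketch of aligned SSD journal partitioning, assuming the journal SSD appears as /dev/sdc and will serve three OSDs (the device name and partition count here are illustrative, not taken from the examples below), parted can create the partitions:

parted -a optimal /dev/sdc mklabel gpt
parted -a optimal /dev/sdc mkpart journal-0 0% 33%
parted -a optimal /dev/sdc mkpart journal-1 33% 66%
parted -a optimal /dev/sdc mkpart journal-2 66% 100%

Using percentage boundaries together with the -a optimal option lets parted choose partition boundaries that satisfy the device’s optimal alignment.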

From your admin node, use ceph-deploy to prepare the OSDs.

ceph-deploy osd prepare <ceph-node>:<data-drive>[:<journal-partition>] [<ceph-node>:<data-drive>[:<journal-partition>]]

For example:

ceph-deploy osd prepare node2:sdb:ssdb node3:sdd:ssdb node4:sdd:ssdb

In the preceding example, sdb is a spinning hard drive; Ceph will use the entire drive for OSD data. ssdb is a partition on an SSD drive, which Ceph will use to store the journal for that OSD.
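
To confirm how ceph-deploy sees the drives on each node, before or after preparing them, you can list them from the admin node (node names follow the example above):

ceph-deploy disk list node2 node3 node4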

Once you have prepared the OSDs, use ceph-deploy to activate them.

ceph-deploy osd activate <ceph-node>:<data-drive>:<journal-partition> [<ceph-node>:<data-drive>:<journal-partition>]

For example:

ceph-deploy osd activate node2:sdb:ssdb node3:sdd:ssdb node4:sdd:ssdb
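
After activation, the new OSDs should start and join the cluster. From a node that has the admin keyring, the standard ceph CLI can confirm this, for example:

ceph osd stat
ceph osd tree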

To achieve an active+clean state, you must add at least as many OSDs as the value of osd pool default size = <n> in your Ceph configuration file.
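
For example, with the default replication size set to 3 in the [global] section of ceph.conf, placement groups cannot reach active+clean until at least three OSDs are in the cluster:

[global]
osd pool default size = 3

You can watch the placement groups reach active+clean with ceph -s, or continuously with ceph -w.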