Chapter 12. Add OSDs
Before creating OSDs, consider the following:
- We recommend using the XFS filesystem (default).
- We recommend using SSDs for journals. It is common to partition an SSD to serve multiple OSDs. Ensure that the number of journal partitions does not exceed what the SSD's sequential write throughput can sustain, and ensure that the partitions are properly aligned, or their write performance will suffer. A partitioning sketch follows this list.
- We recommend using ceph-deploy disk zap on a Ceph OSD drive before executing ceph-deploy osd create. For example:
ceph-deploy disk zap <ceph-node>:<data-drive>
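As an illustration of the journal partitioning advice above, the following is a minimal sketch that creates two aligned 10 GB journal partitions on an SSD. The device name /dev/ssd, the partition labels, and the sizes are hypothetical placeholders; substitute values appropriate to your hardware:
parted -s /dev/ssd mklabel gpt
parted -s -a optimal /dev/ssd mkpart journal-0 1MiB 10GiB
parted -s -a optimal /dev/ssd mkpart journal-1 10GiB 20GiB
Starting the first partition at 1MiB and using -a optimal keeps the partitions aligned to the device's optimal I/O boundaries, which addresses the alignment concern noted above.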
From your admin node, use ceph-deploy to prepare the OSDs.
ceph-deploy osd prepare <ceph-node>:<data-drive>[:<journal-partition>] [<ceph-node>:<data-drive>[:<journal-partition>]]
For example:
ceph-deploy osd prepare node2:sdb:ssdb node3:sdd:ssdb node4:sdd:ssdb
In the foregoing example, sdb is a spinning hard drive; Ceph will use the entire drive for OSD data. ssdb is a partition on an SSD, which Ceph will use to store the journal for that OSD.
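To verify how ceph-deploy sees the disks and partitions on a node, you can list them from your admin node. For example, for node2:
ceph-deploy disk list node2
The output shows each device and any existing Ceph roles on its partitions, which is a quick way to confirm that the data drive and journal partition you intend to use are the ones ceph-deploy will act on.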
Once you have prepared the OSDs, use ceph-deploy to activate them.
ceph-deploy osd activate <ceph-node>:<data-drive>:<journal-partition> [<ceph-node>:<data-drive>:<journal-partition>]
For example:
ceph-deploy osd activate node2:sdb:ssdb node3:sdd:ssdb node4:sdd:ssdb
To achieve an active + clean state, you must add at least as many OSDs as the value of osd pool default size = <n> in your Ceph configuration file.
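For illustration, assume your Ceph configuration file contains the following in its [global] section (the value 3 is only an example; use your cluster's actual setting):
osd pool default size = 3
With that setting, at least three OSDs must be up and in before placement groups can reach active + clean. You can watch the cluster converge from a monitor or admin node:
ceph health
ceph osd tree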
