Chapter 6. Using the ceph-volume Utility to Deploy OSDs
The ceph-volume utility is a single-purpose command-line tool to deploy logical volumes as OSDs. It uses a plugin-type framework for deploying OSDs with different device technologies. The ceph-volume utility follows a similar workflow to the ceph-disk utility for deploying OSDs, with a predictable and robust way of preparing, activating, and starting OSDs. Currently, the ceph-volume utility only supports the lvm plugin, with plans to support other technologies in the future.
The ceph-disk command is deprecated.
6.1. Using the ceph-volume LVM Plugin
By making use of LVM tags, the lvm subcommand is able to store and later re-discover the devices associated with OSDs by querying those tags, so that the OSDs can be activated. This includes support for lvm-based technologies such as dm-cache.
When using ceph-volume, the use of dm-cache is transparent; ceph-volume treats a dm-cache device like any other logical volume. The performance gains and losses when using dm-cache depend on the specific workload. Generally, random and sequential reads see an increase in performance at smaller block sizes, while random and sequential writes see a decrease in performance at larger block sizes.
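As a minimal sketch of preparing a dm-cache backed logical volume before handing it to ceph-volume, assuming /dev/sdb is a slow data disk and /dev/sdc is a fast cache device such as an SSD (device names, sizes, and the example_vg naming are illustrative only):

```shell
# Assumed devices: /dev/sdb (slow data disk), /dev/sdc (fast SSD cache).
pvcreate /dev/sdb /dev/sdc
vgcreate example_vg /dev/sdb /dev/sdc

# Origin logical volume placed on the slow device.
lvcreate -n data_lv -L 100G example_vg /dev/sdb

# Cache pool on the fast device, then attach it to the origin volume.
lvcreate --type cache-pool -n cache_pool -L 10G example_vg /dev/sdc
lvconvert --type cache --cachepool example_vg/cache_pool example_vg/data_lv
```

The resulting example_vg/data_lv can then be passed to ceph-volume like any plain logical volume.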
To use the LVM plugin, add lvm as a subcommand to the ceph-volume command:
ceph-volume lvm
There are three subcommands to the lvm subcommand: prepare, activate, and create. Using the create subcommand combines the prepare and activate subcommands into one subcommand. See the create subcommand section for more details.
6.1.1. Preparing OSDs
The prepare subcommand prepares an OSD with the default filestore backend and consumes logical volumes for both the OSD data and journal. The prepare subcommand will not create or modify the logical volumes, except for adding some extra metadata using LVM tags.
LVM tags make volumes easier to discover later, and help identify them as part of a Ceph system and what role they have. The ceph-volume lvm prepare command adds the following list of LVM tags:
- cluster_fsid
- data_device
- journal_device
- encrypted
- osd_fsid
- osd_id
- journal_uuid
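After prepare has run, these tags can be inspected directly with standard LVM tooling. A sketch, assuming a volume named example_vg/data_lv as in the examples later in this section:

```shell
# Show the LVM tags that ceph-volume added to the data volume.
lvs -o lv_tags example_vg/data_lv

# ceph-volume can also report the devices it manages and their tags.
ceph-volume lvm list
```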
Currently, only the filestore backend is supported, and it is the default OSD backend. The prepare process is very strict: it requires two logical volumes that are ready for use, and it requires the minimum size for the OSD data and journal. The journal device can be either a logical volume or a partition.
Here is the prepare workflow process:
- Accept logical volumes for data and journal
- Generate a UUID for the OSD
- Ask the Ceph Monitor to get an OSD identifier reusing the generated UUID
- OSD data directory is created and data volume mounted
- Journal is symlinked from data volume to journal location
- The monmap is fetched for activation
- Device is mounted and the data directory is populated by ceph-osd
- LVM tags are assigned to the OSD data and journal volumes
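The prepare subcommand expects the logical volumes to exist already. A minimal sketch of creating them, assuming a single spare device /dev/sdb and the volume group and volume names used in the examples below (the sizes are illustrative, not recommendations):

```shell
# Assumed device: /dev/sdb. Create the volume group for this OSD.
pvcreate /dev/sdb
vgcreate example_vg /dev/sdb

# Journal first; it is typically much smaller than the data volume.
lvcreate -n journal_lv -L 5G example_vg
lvcreate -n data_lv -l 100%FREE example_vg
```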
As the root user, run the following command on an OSD node to prepare an OSD for deployment using LVM:
ceph-volume lvm prepare --filestore --data $VG_NAME/$LV_NAME --journal $JOURNAL_DEVICE
For example:
# ceph-volume lvm prepare --filestore --data example_vg/data_lv --journal example_vg/journal_lv
For example, using a partition as a journal device:
# ceph-volume lvm prepare --filestore --data example_vg/data_lv --journal /dev/sdc1
When using a partition, it must contain a PARTUUID discoverable by the blkid command, so that it can be identified correctly regardless of the device name or path.
The ceph-volume LVM plugin does not create partitions on a raw disk device. Create the partition before using it as the OSD journal device.
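As a sketch, a partition created on a GPT-labeled disk receives a PARTUUID automatically; the device name /dev/sdc is an assumption:

```shell
# Create a GPT label and a single partition spanning the disk.
parted --script /dev/sdc mklabel gpt
parted --script /dev/sdc mkpart primary 0% 100%

# Verify the partition has a PARTUUID before using it as a journal device.
blkid -s PARTUUID -o value /dev/sdc1
```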
6.1.2. Activating OSDs
Once the prepare process is done, the OSD is ready for activation. The activation process enables a systemd unit at boot time, which allows the correct OSD identifier and its UUID to be enabled and mounted.
Here is the activate workflow process:
- Requires both the OSD id and the OSD uuid
- Enable the systemd unit with the matching OSD id and OSD uuid
- The systemd unit will ensure all devices are ready and mounted
- The matching ceph-osd systemd unit will get started
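After activation, the enabled units can be checked with systemctl. A sketch using the OSD id and UUID from the example below:

```shell
# The ceph-volume unit ties the OSD id to its UUID and handles mounting.
systemctl status ceph-volume@lvm-0-0263644D-0BF1-4D6D-BC34-28BD98AE3BC8

# The corresponding ceph-osd unit runs the OSD daemon itself.
systemctl status ceph-osd@0
```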
As the root user, run the following command on an OSD node to activate an OSD:
ceph-volume lvm activate --filestore $OSD_ID $OSD_UUID
For example:
# ceph-volume lvm activate --filestore 0 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8
There are no side-effects when running this command multiple times.
6.1.3. Creating OSDs
The create subcommand wraps the two-step process of deploying a new OSD, calling the prepare subcommand and then the activate subcommand, into a single subcommand. The reason to use prepare and then activate separately is to gradually introduce new OSDs into a storage cluster, avoiding large amounts of data being rebalanced. There is nothing different about the process, except that the OSD becomes up and in immediately after completion.
As the root user, run the following command on an OSD node:
ceph-volume lvm create --filestore --data $VG_NAME/$LV_NAME --journal $JOURNAL_DEVICE
For example:
# ceph-volume lvm create --filestore --data example_vg/data_lv --journal example_vg/journal_lv
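Because create brings the OSD up and in immediately, the result can be verified from any node with an admin keyring; a sketch:

```shell
# The new OSD should appear with status "up" in the CRUSH tree.
ceph osd tree

# Cluster-wide status, including how many OSDs are up and in.
ceph -s
```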