Chapter 9. OSD BlueStore (Technology Preview)

OSD BlueStore is a new back end for the OSD daemons. Compared to the currently used FileStore back end, BlueStore allows for storing objects directly on raw block devices without any file system interface.

Important

BlueStore is provided as a Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.

9.1. About BlueStore

BlueStore is a new back end for the OSD daemons. Compared to the currently used FileStore back end, BlueStore allows for storing objects directly on the block devices without any file system interface, which improves the performance of the cluster.

The following are some of the main features of using BlueStore:

Direct management of storage devices
BlueStore consumes raw block devices or partitions. This avoids any intervening layers of abstraction, such as local file systems like XFS, that might limit performance or add complexity.
Metadata management with RocksDB
BlueStore uses the RocksDB key-value database to manage internal metadata, such as the mapping from object names to block locations on a disk.
Full data and metadata checksumming
By default, all data and metadata written to BlueStore are protected by one or more checksums. No data or metadata is read from disk or returned to the user without verification.
Inline compression
Data can optionally be compressed inline before it is written to disk. See the configuration sketch after this feature list for an example.
Efficient copy-on-write
The Ceph Block Device and Ceph File System snapshots rely on a copy-on-write clone mechanism that is implemented efficiently in BlueStore. This results in efficient I/O both for regular snapshots and for erasure coded pools which rely on cloning to implement efficient two-phase commits.
No large double-writes
BlueStore first writes any new data to unallocated space on a block device, and then commits a RocksDB transaction that updates the object metadata to reference the new region of the disk. Only when the write operation is below a configurable size threshold does BlueStore fall back to a write-ahead journaling scheme, similar to how FileStore operates.
Multi-device support
BlueStore can use multiple block devices for storing different data, for example: a Hard Disk Drive (HDD) for the data, a Solid-State Drive (SSD) for metadata, and Non-volatile Memory (NVM), Non-volatile Random-Access Memory (NVRAM), or persistent memory for the RocksDB write-ahead log (WAL). See Section 9.2, “BlueStore Devices” for details.

Note

The ceph-disk utility does not yet provision multiple devices. To use multiple devices, OSDs must be set up manually.

Efficient block device usage
Because BlueStore does not use any file system, it minimizes the need to clear the storage device cache.
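
Both checksumming and inline compression are controlled through OSD configuration options. The following ceph.conf sketch illustrates the relevant options with typical values; it is an example only, and the option names and defaults should be verified against the configuration reference for your release.

    [osd]
    # Checksum algorithm applied to all data and metadata (crc32c is the default).
    bluestore_csum_type = crc32c
    # Inline compression: "aggressive" compresses data unless a write hint advises otherwise.
    bluestore_compression_mode = aggressive
    bluestore_compression_algorithm = snappy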

9.2. BlueStore Devices

This section explains what block devices the BlueStore back end uses.

BlueStore manages either one, two, or (in certain cases) three storage devices:

  • primary
  • WAL
  • DB

In the simplest case, BlueStore consumes a single (primary) storage device. The storage device is partitioned into two parts that contain:

  • OSD metadata: A small partition formatted with XFS that contains basic metadata for the OSD. This data directory includes information about the OSD, such as its identifier, which cluster it belongs to, and its private keyring.
  • Data: A large partition occupying the rest of the device that is managed directly by BlueStore and that contains all of the OSD data. This primary device is identified by a block symbolic link in the data directory.

You can also use two additional devices:

  • A WAL (write-ahead-log) device: A device that stores the BlueStore internal journal or write-ahead log. It is identified by the block.wal symbolic link in the data directory. Consider using a WAL device only if the device is faster than the primary device, for example, when the WAL device uses an SSD disk and the primary device uses an HDD disk.
  • A DB device: A device that stores BlueStore internal metadata. The embedded RocksDB database puts as much metadata as it can on the DB device instead of on the primary device to improve performance. If the DB device becomes full, metadata is added to the primary device instead. Consider using a DB device only if the device is faster than the primary device. See the example directory listing after this list for how these symbolic links appear on a deployed OSD.
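
For illustration, on a node where a BlueStore OSD has been deployed with separate WAL and DB devices, the OSD data directory contains the symbolic links described above. The following listing is an example only: the OSD ID and device names are placeholders, and the ownership and permission columns are omitted from the output.

    [root@osd ~]# ls -l /var/lib/ceph/osd/ceph-0/
    ...
    block -> /dev/sdb2
    block.db -> /dev/sdd1
    block.wal -> /dev/sdc1
    ...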

If you have less than a gigabyte of storage available on fast devices, Red Hat recommends using it as a WAL device. If you have more fast storage available, consider using it as a DB device. The BlueStore journal is always placed on the fastest available device, so using a DB device provides the same benefit as a WAL device while also allowing additional metadata to be stored.

9.3. Adding OSDs That Use BlueStore

This section describes how to install a new Ceph OSD node with the BlueStore back end.

Prerequisites

  • A running Red Hat Ceph Storage cluster deployed by using the ceph-ansible playbooks.

Procedure

Use the following commands on the Ansible administration node.

  1. Add a new OSD node to the [osds] section in the Ansible inventory file, by default located at /etc/ansible/hosts.

    [osds]
    node1
    node2
    node3
    <hostname>

    Replace:

    • <hostname> with the name of the OSD node

    For example:

    [osds]
    node1
    node2
    node3
    node4
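
    Optionally, confirm that Ansible can reach the newly added node. This check is not part of the original procedure and assumes the node4 host name from the example above.

    [user@admin ~]$ ansible node4 -m ping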
  2. Navigate to the /usr/share/ceph-ansible/ directory.

    [user@admin ~]$ cd /usr/share/ceph-ansible
  3. Create the host_vars directory.

    [root@admin ceph-ansible]# mkdir host_vars
  4. Create the configuration file for the newly added OSD in host_vars.

    [root@admin ceph-ansible]# touch host_vars/<hostname>.yml

    Replace:

    • <hostname> with the host name of the newly added OSD

    For example:

    [root@admin ceph-ansible]# touch host_vars/node4.yml
  5. Add the following setting to the newly created file:

    osd_objectstore: bluestore
    Note

    To use BlueStore for all OSDs, add osd_objectstore: bluestore to the group_vars/all.yml file.

  6. Optional. If you want to store the block.wal and block.db partitions on dedicated devices, edit the host_vars/<hostname>.yml file as follows.

    1. To use dedicated devices for block.wal:

      osd_scenario: non-collocated
      
      bluestore_wal_devices:
         - <device>
         - <device>

      Replace:

      • <device> with the path to the device

      For example:

      osd_scenario: non-collocated
      
      bluestore_wal_devices:
         - /dev/sdf
         - /dev/sdg
    2. To use dedicated devices for block.db:

      osd_scenario: non-collocated
      
      dedicated_devices:
         - <device>
         - <device>

      Replace:

      • <device> with the path to the device

      For example:

      osd_scenario: non-collocated
      
      dedicated_devices:
         - /dev/sdh
         - /dev/sdi
      Note

      If you use the osd_scenario: collocated parameter, the block.wal and block.db partitions will use the same device as specified with the devices parameter. For details, see the Installing a Red Hat Ceph Storage Cluster section in the Red Hat Ceph Storage 3 Installation Guide for Red Hat Enterprise Linux or Ubuntu.

      Note

      To use BlueStore for all OSDs, add the aforementioned parameters to the group_vars/osds.yml file.
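
    Putting these settings together, a complete host_vars/<hostname>.yml file for the non-collocated scenario might look similar to the following sketch. The devices parameter lists the data disks; all device paths are placeholders and must match the drives present on the OSD node.

      osd_objectstore: bluestore
      osd_scenario: non-collocated

      devices:
         - /dev/sdb
         - /dev/sdc

      dedicated_devices:
         - /dev/sdh
         - /dev/sdi

      bluestore_wal_devices:
         - /dev/sdf
         - /dev/sdg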

  7. To configure LVM-based BlueStore OSDs, use osd_scenario: lvm in host_vars/<hostname>.yml:

    osd_scenario: lvm
    lvm_volumes:
      - data: <datalv>
        data_vg: <datavg>

    Replace:

    • <datalv> with the data logical volume name
    • <datavg> with the data logical volume group name

    For example:

    osd_scenario: lvm
    lvm_volumes:
      - data: data-lv1
        data_vg: vg1
  8. Optional. If you want to store the block.wal and block.db on dedicated logical volumes, edit the host_vars/<hostname>.yml file as follows:

    osd_scenario: lvm
    lvm_volumes:
      - data: <datalv>
        data_vg: <datavg>
        wal: <wallv>
        wal_vg: <vg>
        db: <dblv>
        db_vg: <vg>

    Replace:

    • <datalv> with the logical volume that contains the data
    • <datavg> with the volume group that contains the data logical volume
    • <wallv> with the logical volume that contains the write-ahead log
    • <vg> with the volume group that contains the WAL and/or DB logical volumes
    • <dblv> with the logical volume that contains the BlueStore internal metadata

    For example:

    osd_scenario: lvm
    lvm_volumes:
      - data: data-lv3
        data_vg: vg3
        wal: wal-lv1
        wal_vg: vg3
        db: db-lv3
        db_vg: vg3
    Note

    When you use lvm_volumes: with osd_objectstore: bluestore, the lvm_volumes YAML dictionary must contain at least data. When defining wal or db, both the logical volume name and the volume group name must be given; the db and wal entries themselves are not required. This allows four combinations: data only; data and wal; data, wal, and db; or data and db. The data can be a raw device, a logical volume, or a partition. The wal and db can be a logical volume or a partition. When you specify a raw device or partition, ceph-volume puts logical volumes on top of it.

    Note

    Currently, ceph-ansible does not create the volume groups or the logical volumes. They must be created before running the Ansible playbook.
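
    Because ceph-ansible does not create them, prepare the volume group and the logical volumes on the OSD node before running the playbook. The following is a minimal sketch that matches the vg3, data-lv3, wal-lv1, and db-lv3 names used in the example above; the device path and the sizes are placeholders that you must adapt to your hardware.

    [root@osd ~]# pvcreate /dev/sdb
    [root@osd ~]# vgcreate vg3 /dev/sdb
    [root@osd ~]# lvcreate -n db-lv3 -L 10G vg3
    [root@osd ~]# lvcreate -n wal-lv1 -L 1G vg3
    [root@osd ~]# lvcreate -n data-lv3 -l 100%FREE vg3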

  9. Run the ansible-playbook command:

    [user@admin ceph-ansible]$ ansible-playbook site.yml
  10. From a Monitor node, verify that the new OSD has been successfully added:

    [root@monitor ~]# ceph osd tree
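
    Optionally, to confirm that a new OSD uses the BlueStore back end, inspect its metadata. This check is not part of the original procedure; replace <id> with the ID of the OSD as shown in the ceph osd tree output. The osd_objectstore field in the output is expected to report bluestore.

    [root@monitor ~]# ceph osd metadata <id> | grep osd_objectstore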

Additional Resources