Chapter 9. BlueStore

BlueStore is a new back-end object store for the OSD daemons. The original object store, FileStore, requires a file system on top of raw block devices. Objects are then written to the file system. BlueStore does not require an initial file system, because BlueStore puts objects directly on the block device.

Important

BlueStore provides a high-performance backend for OSD daemons in a production environment. By default, BlueStore is configured to be self-tuning. If you determine that your environment performs better with BlueStore tuned manually, please contact Red Hat support and share the details of your configuration to help us improve the auto-tuning capability. Red Hat looks forward to your feedback and appreciates your recommendations.

9.1. About BlueStore

BlueStore is a new back end for the OSD daemons. Unlike the original FileStore back end, BlueStore stores objects directly on the block devices without any file system interface, which improves the performance of the cluster.

The following are some of the main features of using BlueStore:

Direct management of storage devices
BlueStore consumes raw block devices or partitions. This avoids any intervening layers of abstraction, such as local file systems like XFS, that might limit performance or add complexity.
Metadata management with RocksDB
BlueStore uses the RocksDB key-value database to manage internal metadata, such as the mapping from object names to block locations on a disk.
Full data and metadata checksumming
By default, all data and metadata written to BlueStore are protected by one or more checksums. No data or metadata is read from disk or returned to the user without verification.
Efficient copy-on-write
The Ceph Block Device and Ceph File System snapshots rely on a copy-on-write clone mechanism that is implemented efficiently in BlueStore. This results in efficient I/O both for regular snapshots and for erasure-coded pools, which rely on cloning to implement efficient two-phase commits.
No large double-writes
BlueStore first writes any new data to unallocated space on a block device, and then commits a RocksDB transaction that updates the object metadata to reference the new region of the disk. Only when the write operation is below a configurable size threshold does BlueStore fall back to a write-ahead journaling scheme, similar to how FileStore operates.
Multi-device support

BlueStore can use multiple block devices for storing different data, for example: Hard Disk Drive (HDD) for the data, Solid-state Drive (SSD) for metadata, Non-volatile Memory (NVM) or Non-volatile random-access memory (NVRAM) or persistent memory for the RocksDB write-ahead log (WAL). See Section 9.2, “BlueStore Devices” for details.

Note

The ceph-disk utility does not yet provision multiple devices. To use multiple devices, OSDs must be set up manually.

Efficient block device usage
Because BlueStore does not use any file system, it minimizes the need to clear the storage device cache.

9.2. BlueStore Devices

This section explains what block devices the BlueStore back end uses.

BlueStore manages either one, two, or (in certain cases) three storage devices.

  • primary
  • WAL
  • DB

In the simplest case, BlueStore consumes a single (primary) storage device. The storage device is partitioned into two parts that contain:

  • OSD metadata: A small partition formatted with XFS that contains basic metadata for the OSD. This data directory includes information about the OSD, such as its identifier, which cluster it belongs to, and its private keyring.
  • Data: A large partition occupying the rest of the device that is managed directly by BlueStore and that contains all of the OSD data. This primary device is identified by a block symbolic link in the data directory.

You can also use two additional devices:

  • A WAL (write-ahead-log) device: A device that stores the BlueStore internal journal or write-ahead log. It is identified by the block.wal symbolic link in the data directory. Consider using a WAL device only if the device is faster than the primary device, for example, when the WAL device uses an SSD disk and the primary device uses an HDD disk.
  • A DB device: A device that stores BlueStore internal metadata. The embedded RocksDB database puts as much metadata as it can on the DB device instead of on the primary device to improve performance. If the DB device becomes full, RocksDB starts adding metadata to the primary device. Consider using a DB device only if the device is faster than the primary device.

If you have less than a gigabyte of storage available on fast devices, Red Hat recommends using it as a WAL device. If you have more fast storage available, consider using it as a DB device. The BlueStore journal is always placed on the fastest device, so using a DB device provides the same benefit as a WAL device while also allowing additional metadata to be stored.
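
To check which devices an existing OSD uses, you can list its data directory; the block symbolic link points to the primary device, and the block.wal and block.db links appear only when dedicated WAL and DB devices are configured. The OSD ID in the following command is a placeholder.

[root@osd ~]# ls -l /var/lib/ceph/osd/ceph-0/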

9.3. BlueStore caching

The BlueStore cache is a collection of buffers that, depending on configuration, can be populated with data as the OSD daemon reads from or writes to the disk. By default in Red Hat Ceph Storage, BlueStore caches data on reads, but not on writes. This is because the bluestore_default_buffered_write option is set to false to avoid potential overhead associated with cache eviction.

If the bluestore_default_buffered_write option is set to true, data is written to the buffer first, and then committed to disk. Afterwards, a write acknowledgement is sent to the client. Subsequent reads then have faster access to the data that is already in the cache, until that data is evicted.

Read-heavy workloads will not see an immediate benefit from BlueStore caching. As more reading is done, the cache will grow over time and subsequent reads will see an improvement in performance. How fast the cache populates depends on the BlueStore block and database disk type, and the client’s workload requirements.
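
As an illustration, you can check the current value of this option on a running OSD through its admin socket, using the same ceph daemon ... config get pattern shown later in this chapter. The OSD ID is a placeholder, and the output format matches the config get examples elsewhere in this guide:

[root@osd ~]# ceph daemon osd.0 config get bluestore_default_buffered_write
{
    "bluestore_default_buffered_write": "false"
}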

Important

Please contact Red Hat support before enabling the bluestore_default_buffered_write option.

9.4. Sizing considerations for BlueStore

When mixing traditional and solid state drives using BlueStore OSDs, it is important to size the RocksDB logical volume (block.db) appropriately. Red Hat recommends that the RocksDB logical volume be no less than 4% of the block size with object, file and mixed workloads. Red Hat supports 1% of the BlueStore block size with RocksDB and OpenStack block workloads. For example, if the block size is 1 TB for an object workload, then at a minimum, create a 40 GB RocksDB logical volume.

When not mixing drive types, there is no requirement to have a separate RocksDB logical volume. BlueStore will automatically manage the sizing of RocksDB.

BlueStore’s cache memory is used for the key-value pair metadata for RocksDB, BlueStore metadata and object data.

Note

The BlueStore cache memory values are in addition to the memory footprint already being consumed by the OSD.

9.5. Adding OSDs That Use BlueStore

This section describes how to install a new Ceph OSD node with the BlueStore back end.


Procedure

Use the following commands on the Ansible administration node.

  1. Add a new OSD node to the [osds] section in the Ansible inventory file, by default located at /etc/ansible/hosts.

    [osds]
    node1
    node2
    node3
    <hostname>

    Replace:

    • <hostname> with the name of the OSD node

    For example:

    [osds]
    node1
    node2
    node3
    node4
  2. Navigate to the /usr/share/ceph-ansible/ directory.

    [user@admin ~]$ cd /usr/share/ceph-ansible
  3. Create the host_vars directory.

    [root@admin ceph-ansible]# mkdir host_vars
  4. Create the configuration file for the newly added OSD in host_vars.

    [root@admin ceph-ansible]# touch host_vars/<hostname>.yml

    Replace:

    • <hostname> with the host name of the newly added OSD

    For example:

    [root@admin ceph-ansible]# touch host_vars/node4.yml
  5. Add the following setting to the newly created file:

    osd_objectstore: bluestore
    Note

    To use BlueStore for all OSDs, add osd_objectstore: bluestore to the group_vars/all.yml file.

  6. Optional. If you want to store the block.wal and block.db partitions on dedicated devices, edit the host_vars/<hostname>.yml file as follows.

    1. To use dedicated devices for block.wal:

      osd_scenario: non-collocated
      
      bluestore_wal_devices:
         - <device>
         - <device>

      Replace:

      • <device> with the path to the device

      For example:

      osd_scenario: non-collocated
      
      bluestore_wal_devices:
         - /dev/sdf
         - /dev/sdg
    2. To use dedicated devices for block.db:

      osd_scenario: non-collocated
      
      dedicated_devices:
         - <device>
         - <device>

      Replace:

      • <device> with the path to the device

      For example:

      osd_scenario: non-collocated
      
      dedicated_devices:
         - /dev/sdh
         - /dev/sdi
      Note

      If you use the osd_scenario: collocated parameter, the block.wal and block.db partitions will use the same device as specified with the devices parameter. For details, see the Installing a Red Hat Ceph Storage Cluster section in the Red Hat Ceph Storage 3 Installation Guide for Red Hat Enterprise Linux or Ubuntu.

      Note

      To use BlueStore for all OSDs, add the aforementioned parameters to the group_vars/osds.yml file.

    3. To override the block.db and block.wal default size in the group_vars/all.yml file:

      ceph_conf_overrides:
        osd:
          bluestore_block_db_size: <value>
          bluestore_block_wal_size: <value>

      Replace:

      • <value> with the size in bytes.

      For example:

      ceph_conf_overrides:
        osd:
          bluestore_block_db_size: 14336000000
          bluestore_block_wal_size: 2048000000
  7. To configure LVM-based BlueStore OSDs, use osd_scenario: lvm in host_vars/<hostname>.yml:

    osd_scenario: lvm
    lvm_volumes:
      - data: <datalv>
        data_vg: <datavg>

    Replace:

    • <datalv> with the data logical volume name
    • <datavg> with the data logical volume group name

    For example:

    osd_scenario: lvm
    lvm_volumes:
      - data: data-lv1
        data_vg: vg1
  8. Optional. If you want to store the block.wal and block.db on dedicated logical volumes, edit the host_vars/<hostname>.yml file as follows:

    osd_scenario: lvm
    lvm_volumes:
      - data: <datalv>
        wal: <wallv>
        wal_vg: <vg>
        db: <dblv>
        db_vg: <vg>

    Replace:

    • <datalv> with the logical volume where the data should be contained
    • <wallv> with the logical volume where the write-ahead-log should be contained
    • <vg> with the volume group the WAL and/or DB device LVs are on
    • <dblv> with the logical volume where the BlueStore internal metadata should be contained

    For example:

    osd_scenario: lvm
    lvm_volumes:
      - data: data-lv3
        wal: wal-lv1
        wal_vg: vg3
        db: db-lv3
        db_vg: vg3
    Note

    When using lvm_volumes: with osd_objectstore: bluestore, the lvm_volumes YAML dictionary must contain at least data. When defining wal or db, it must have both the LV name and the VG name (db and wal are not required). This allows for four combinations: data only; data and wal; data, wal, and db; or data and db. The data value can be a raw device, an LV, or a partition. The wal and db values can be an LV or a partition. When a raw device or partition is specified, ceph-volume puts a logical volume on top of it.

    Note

    Currently, ceph-ansible does not create the volume groups or the logical volumes. This must be done before running the Ansible playbook.
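
    For example, to pre-create a volume group and the logical volumes used in the example above, you could run commands similar to the following on the OSD node before running the playbook. The device paths are placeholders and the logical volume sizes are illustrative only, not recommendations:

    # /dev/sdb is the data disk and /dev/nvme0n1 is the fast device (placeholders)
    vgcreate vg3 /dev/sdb /dev/nvme0n1
    lvcreate -l 100%PVS -n data-lv3 vg3 /dev/sdb
    lvcreate -L 2G -n wal-lv1 vg3 /dev/nvme0n1
    lvcreate -L 40G -n db-lv3 vg3 /dev/nvme0n1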

  9. Open and edit the group_vars/all.yml file, and uncomment the osd_memory_target option. Adjust the value based on how much memory you want the OSD to consume.

    Note

    The default value for the osd_memory_target option is 4000000000, which is 4 GB. This option pins the BlueStore cache in memory.

    Important

    The osd_memory_target option only applies to BlueStore-backed OSDs.
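
    For example, to allow each OSD roughly 6 GB of memory (an illustrative value, not a recommendation), set:

    osd_memory_target: 6442450944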

  10. Run the Ansible playbook:

    [user@admin ceph-ansible]$ ansible-playbook site.yml
  11. From a Monitor node, verify that the new OSD has been successfully added:

    [root@monitor ~]# ceph osd tree
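
    The command prints the CRUSH hierarchy of hosts and OSDs. A trimmed, illustrative listing (the IDs, weights, and host names are placeholders) showing the new node with its OSD in the up state looks similar to the following:

    ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
    -1       3.63799 root default
    -9       0.90950     host node4
     3   hdd 0.90950         osd.3      up  1.00000 1.00000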


9.6. Tuning BlueStore for small writes with bluestore_min_alloc_size

In BlueStore, the raw partition is allocated and managed in chunks of bluestore_min_alloc_size. By default, bluestore_min_alloc_size is 64 KB for HDDs and 16 KB for SSDs. The unwritten area in each chunk is filled with zeroes when it is written to the raw partition. This can lead to wasted space when the chunk size is not properly matched to your workload, for example when writing small objects.

It is best practice to set bluestore_min_alloc_size to match the smallest write, so that this write amplification penalty can be avoided.

For example, if your client writes 4 KB objects frequently, use ceph-ansible to configure the following setting on OSD nodes:

bluestore_min_alloc_size = 4096

Note

The settings bluestore_min_alloc_size_ssd and bluestore_min_alloc_size_hdd are specific to SSDs and HDDs, respectively, but setting them is not necessary because setting bluestore_min_alloc_size overrides them.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • New servers that can be freshly provisioned as OSD nodes, or OSD nodes that can be redeployed.

Procedure

  1. Optional: If redeploying an existing OSD node, copy the admin keyring from /etc/ceph/ on the Ceph Monitor node to the node from which you want to remove the OSD.
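
    For example, assuming the Monitor node is reachable as mon1 and the OSD node as osd1 (hypothetical host names):

    [root@mon1 ~]# scp /etc/ceph/ceph.client.admin.keyring root@osd1:/etc/ceph/
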
  2. Optional: If redeploying an existing OSD node, use the shrink-osd.yml Ansible playbook to remove the OSD from the cluster.

    ansible-playbook -v infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=OSD_ID

    Example

    [admin@admin ceph-ansible]$ ansible-playbook -v infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=1

  3. If redeploying an existing OSD node, wipe the OSD drives and reinstall the OS.
  4. Prepare the node for OSD provisioning using Ansible, for example by enabling Red Hat Ceph Storage repositories, adding an Ansible user, and enabling password-less SSH login.
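
    A minimal sketch of that preparation, with the repository ID, user name, and host name left as placeholders because they depend on your version and environment:

    # On the OSD node: enable the appropriate repositories and create the Ansible user
    subscription-manager repos --enable=REPO_ID
    useradd ANSIBLE_USER_NAME
    # On the Ansible administration node: enable password-less SSH login for that user
    ssh-copy-id ANSIBLE_USER_NAME@OSD_NODE_NAME
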
  5. Add bluestore_min_alloc_size to the ceph_conf_overrides section of the group_vars/all.yml file:

    ceph_conf_overrides:
      osd:
        bluestore_min_alloc_size: 4096
  6. If deploying a new node, add it to the Ansible inventory file, normally /etc/ansible/hosts:

    [osds]
    OSD_NODE_NAME

    Example

    [osds]
    osd1 devices="[ '/dev/sdb' ]"

  7. Provision the OSD node using Ansible:

    ansible-playbook -v site.yml -l OSD_NODE_NAME

    Example

    [admin@admin ceph-ansible]$ ansible-playbook -v site.yml -l osd1

  8. After the playbook finishes, verify the setting using the ceph daemon command:

    ceph daemon OSD.ID config get bluestore_min_alloc_size

    Example

    [root@osd1 ~]# ceph daemon osd.1 config get bluestore_min_alloc_size
    {
        "bluestore_min_alloc_size": "4096"
    }

    You can see bluestore_min_alloc_size is set to 4096 bytes, which is equivalent to 4 KB.


9.7. The BlueStore fragmentation tool

As a storage administrator, you will want to periodically check the fragmentation level of your BlueStore OSDs. You can check fragmentation levels with one simple command for offline or online OSDs.

9.7.1. Prerequisites

  • A running Red Hat Ceph Storage 3.3 or higher storage cluster.
  • BlueStore OSDs.

9.7.2. What is the BlueStore fragmentation tool?

For BlueStore OSDs, the free space gets fragmented over time on the underlying storage device. Some fragmentation is normal, but excessive fragmentation causes poor performance.

The BlueStore fragmentation tool generates a score on the fragmentation level of the BlueStore OSD. This fragmentation score is given as a range, 0 through 1. A score of 0 means no fragmentation, and a score of 1 means severe fragmentation.

Table 9.1. Fragmentation score meanings

  Score       Fragmentation Amount
  0.0 - 0.4   None to tiny fragmentation.
  0.4 - 0.7   Small and acceptable fragmentation.
  0.7 - 0.9   Considerable, but safe fragmentation.
  0.9 - 1.0   Severe fragmentation that causes performance issues.

Important

If you have severe fragmentation, and need some help in resolving the issue, contact Red Hat Support.

9.7.3. Checking for fragmentation

Checking the fragmentation level of BlueStore OSDs can be done either online or offline.

Prerequisites

  • A running Red Hat Ceph Storage 3.3 or higher storage cluster.
  • BlueStore OSDs.

Online BlueStore fragmentation score

  1. Inspect a running BlueStore OSD process:

    1. Simple report:

      Syntax

      ceph daemon OSD_ID bluestore allocator score block

      Example

      [root@osd ~]# ceph daemon osd.123 bluestore allocator score block

    2. A more detailed report:

      Syntax

      ceph daemon OSD_ID bluestore allocator dump block

      Example

      [root@osd ~]# ceph daemon osd.123 bluestore allocator dump block
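
The score commands return a single number between 0 and 1, as described in Table 9.1. The output below is illustrative only; the key name and value are an assumption based on the upstream tool and may vary between releases:

    {
        "fragmentation_rating": 0.018148539540343698
    }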

Offline BlueStore fragmentation score

  1. Inspect a non-running BlueStore OSD process:

    1. Simple report:

      Syntax

      ceph-bluestore-tool --path PATH_TO_OSD_DATA_DIRECTORY --allocator block free-score

      Example

      [root@osd ~]# ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-123 --allocator block free-score

    2. A more detailed report:

      Syntax

      ceph-bluestore-tool --path PATH_TO_OSD_DATA_DIRECTORY --allocator block free-dump

      Example

      [root@osd ~]# ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-123 --allocator block free-dump
