5.3.3.3. Using pre-created LVMs for increased control

In the previous advanced scenarios, ceph-volume uses different types of device lists to create logical volumes for OSDs. You can also create the logical volumes before ceph-volume runs and then pass ceph-volume an lvm_volumes list of those logical volumes. Although this approach requires you to create the logical volumes in advance, it gives you more precise control over how they are laid out. Because director is also responsible for hardware provisioning, you must create these logical volumes in advance by using a first-boot script.
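For contrast, the devices-based configurations from the previous scenarios pass raw devices and let ceph-volume create the logical volumes itself, as in the following sketch (the device paths are illustrative):

    parameter_defaults:
      CephAnsibleDisksConfig:
        osd_objectstore: bluestore
        osd_scenario: lvm
        devices:
          - /dev/sda
          - /dev/sdb

With lvm_volumes, the procedure below replaces that devices list with references to volumes that you create yourself.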

Procedure

  1. Create an environment file, /home/stack/templates/firstboot.yaml, that registers your heat template as the OS::TripleO::NodeUserData resource type and contains the following content:

    resource_registry:
      OS::TripleO::NodeUserData: /home/stack/templates/ceph-lvm.yaml
  2. Create a heat template, /home/stack/templates/ceph-lvm.yaml. Add content similar to the following example, which creates three physical volumes. If your device list is longer, expand the example according to your requirements; a verification check follows the template.

    heat_template_version: 2014-10-16
    
    description: >
      First-boot LVM configuration for Ceph OSDs
    
    resources:
      userdata:
        type: OS::Heat::MultipartMime
        properties:
          parts:
          - config: {get_resource: ceph_lvm_config}
    
      ceph_lvm_config:
        type: OS::Heat::SoftwareConfig
        properties:
          config: |
            #!/bin/bash -x
            # Create a physical volume and a dedicated volume group on each
            # device class: HDD, SSD, and NVMe
            pvcreate /dev/sda
            vgcreate ceph_vg_hdd /dev/sda
            pvcreate /dev/sdb
            vgcreate ceph_vg_ssd /dev/sdb
            pvcreate /dev/nvme0n1
            vgcreate ceph_vg_nvme /dev/nvme0n1
            # Create one logical volume each for the OSD WAL, DB, and data
            lvcreate -n ceph_lv_wal1 -L 50G ceph_vg_nvme
            lvcreate -n ceph_lv_db1 -L 500G ceph_vg_ssd
            lvcreate -n ceph_lv_data1 -L 5T ceph_vg_hdd
            # Log the resulting logical volumes for later verification
            lvs
    
    outputs:
      OS::stack_id:
        value: {get_resource: userdata}
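    After the node is provisioned and the first-boot script runs, you can confirm that the volumes exist before you continue. The following is a minimal check, assuming that you can reach the node over SSH as the default heat-admin user (the IP address is a placeholder):

      # Run against the provisioned node; vgs and lvs list the volume
      # groups and logical volumes that the first-boot script created
      ssh heat-admin@<node_ip> "sudo vgs && sudo lvs"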
  3. Use the lvm_volumes parameter instead of the devices list, as shown in the following example. This assumes that the volume groups and logical volumes are already created. A typical use case is to place the data LV on HDDs and the DB and WAL LVs on faster devices; in this example, the DB LV is on an SSD and the WAL LV is on an NVMe device. To configure more than one OSD per node, see the sketch after the example:

    parameter_defaults:
      CephAnsibleDisksConfig:
        osd_objectstore: bluestore
        osd_scenario: lvm
        lvm_volumes:
          - data: ceph_lv_data1
            data_vg: ceph_vg_hdd
            db: ceph_lv_db1
            db_vg: ceph_vg_ssd
            wal: ceph_lv_wal1
            wal_vg: ceph_vg_nvme
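    If a node hosts more than one OSD, add one lvm_volumes entry per OSD. The following sketch assumes that the first-boot script also created a second set of logical volumes (ceph_lv_data2, ceph_lv_db2, and ceph_lv_wal2) in the same volume groups:

      lvm_volumes:
        - data: ceph_lv_data1
          data_vg: ceph_vg_hdd
          db: ceph_lv_db1
          db_vg: ceph_vg_ssd
          wal: ceph_lv_wal1
          wal_vg: ceph_vg_nvme
        - data: ceph_lv_data2
          data_vg: ceph_vg_hdd
          db: ceph_lv_db2
          db_vg: ceph_vg_ssd
          wal: ceph_lv_wal2
          wal_vg: ceph_vg_nvme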
  4. When you deploy the overcloud, include the environment files that contain your new content in the deployment command with the -e option, as in the following sketch.
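    The file name ceph-config.yaml is an assumption for the environment file that contains your CephAnsibleDisksConfig settings; substitute the names and paths that you use:

      $ openstack overcloud deploy --templates \
        -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
        -e /home/stack/templates/firstboot.yaml \
        -e /home/stack/templates/ceph-config.yaml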

    Note
    Specify a separate WAL device only if that device resides on hardware that performs better than the DB device. Creating a separate DB device is usually sufficient; the same device then also serves the WAL function.
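    For example, a DB-only layout omits the wal and wal_vg keys, and BlueStore places the WAL on the DB device. The following sketch assumes that you did not create ceph_lv_wal1:

      parameter_defaults:
        CephAnsibleDisksConfig:
          osd_objectstore: bluestore
          osd_scenario: lvm
          lvm_volumes:
            - data: ceph_lv_data1
              data_vg: ceph_vg_hdd
              db: ceph_lv_db1
              db_vg: ceph_vg_ssd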