Appendix A. Sample environment file: creating a Ceph Storage cluster

The following custom environment file uses many of the options described throughout Chapter 2, Preparing Ceph Storage nodes for overcloud deployment. This sample does not include any commented-out options. For an overview of environment files, see Environment Files (from the Advanced Overcloud Customization guide).

/home/stack/templates/storage-config.yaml

parameter_defaults: 1
  CinderBackupBackend: ceph 2
  CephAnsibleDisksConfig: 3
    osd_scenario: lvm
    osd_objectstore: bluestore
    dmcrypt: true
    devices:
      - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:10:0
      - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:11:0
      - /dev/nvme0n1
  ControllerCount: 3 4
  OvercloudControlFlavor: control
  ComputeCount: 3
  OvercloudComputeFlavor: compute
  CephStorageCount: 3
  OvercloudCephStorageFlavor: ceph-storage
  CephMonCount: 3
  OvercloudCephMonFlavor: ceph-mon
  CephMdsCount: 3
  OvercloudCephMdsFlavor: ceph-mds
  NeutronNetworkType: vxlan 5

1
The parameter_defaults section modifies the default values for parameters in all templates. Most of the entries listed here are described in Chapter 4, Customizing the Storage service.
2
The CinderBackupBackend parameter sets Ceph as the back end for the Block Storage (cinder) backup service. If you are deploying the Ceph Object Gateway, you can use Ceph Object Storage (ceph-rgw) as a backup target instead. To configure this, set CinderBackupBackend to swift. See Section 4.2, “Enabling the Ceph Object Gateway” for details.
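For example, if you enable the Ceph Object Gateway and want to use it as the backup target, the relevant entry changes as follows. This is a minimal sketch; only this parameter differs from the sample above.

parameter_defaults:
  # Use Ceph Object Storage (ceph-rgw) as the backup target instead of
  # the Ceph block back end shown in the sample above.
  CinderBackupBackend: swift
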
3
The CephAnsibleDisksConfig section defines a custom disk layout for deployments using BlueStore.
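If faster devices are available for BlueStore metadata, the same section can also place the block.db and block.wal on dedicated devices. The following is only a sketch: the dedicated_devices and bluestore_wal_devices variables, and all device paths shown, are assumptions about your ceph-ansible version and hardware; verify them against your ceph-ansible documentation before use.

parameter_defaults:
  CephAnsibleDisksConfig:
    osd_scenario: lvm
    osd_objectstore: bluestore
    dmcrypt: true
    devices:
      - /dev/sdb
      - /dev/sdc
    # Assumed ceph-ansible variables: place BlueStore block.db and block.wal
    # on dedicated, faster devices (illustrative paths only).
    dedicated_devices:
      - /dev/nvme0n1
      - /dev/nvme0n1
    bluestore_wal_devices:
      - /dev/nvme1n1
      - /dev/nvme1n1
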
4
For each role, the *Count parameters assign a number of nodes while the Overcloud*Flavor parameters assign a flavor. For example, ControllerCount: 3 assigns 3 nodes to the Controller role, and OvercloudControlFlavor: control sets each of those nodes to use the control flavor. See Section 7.1, “Assigning nodes and flavors to roles” for details.
Note

The CephMonCount, CephMdsCount, OvercloudCephMonFlavor, and OvercloudCephMdsFlavor parameters (along with the ceph-mon and ceph-mds flavors) are valid only if you created the custom CephMon and CephMds roles, as described in Chapter 3, Deploying Ceph services on dedicated nodes.
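If you did not create those custom roles, omit the CephMon and CephMds parameters and assign counts and flavors only to the default roles, for example:

parameter_defaults:
  # Default roles only; no CephMonCount, CephMdsCount, or their flavors.
  ControllerCount: 3
  OvercloudControlFlavor: control
  ComputeCount: 3
  OvercloudComputeFlavor: compute
  CephStorageCount: 3
  OvercloudCephStorageFlavor: ceph-storage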

5
The NeutronNetworkType parameter sets the network type that the Networking service (neutron) uses; in this case, vxlan.