Chapter 5. Customizing the Storage Service

The Heat template collection provided by the director already contains the necessary templates and environment files to enable a basic Ceph Storage configuration.

The /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment file will create a Ceph cluster and integrate it with your overcloud upon deployment. This cluster will feature containerized Ceph Storage nodes. For more information about containerized services in OpenStack, see "Configuring a Basic Overcloud with the CLI Tools" in the Director Installation and Usage Guide.

The Red Hat OpenStack director will also apply basic, default settings to the deployed Ceph cluster. You need a custom environment file to pass custom settings to your Ceph cluster. To create one:

  1. Create the file storage-config.yaml in /home/stack/templates/. For the purposes of this document, ~/templates/storage-config.yaml will contain most of the overcloud-related custom settings for your environment. The settings in this file override the corresponding defaults that the director applies to your overcloud.
  2. Add a parameter_defaults section to ~/templates/storage-config.yaml. This section will contain custom settings for your overcloud. For example, to set vxlan as the network type of the networking service (neutron):

    parameter_defaults:
      NeutronNetworkType: vxlan
  3. If needed, set the following options under parameter_defaults as you see fit; a combined example follows this list. Each option is listed with its default value:

    CinderEnableIscsiBackend
    Enables the iSCSI back end. Default: false

    CinderEnableRbdBackend
    Enables the Ceph Storage back end. Default: true

    CinderBackupBackend
    Sets ceph or swift as the back end for volume backups; see Section 5.3, “Configuring the Backup Service to Use Ceph” for related details. Default: ceph

    NovaEnableRbdBackend
    Enables Ceph Storage for Nova ephemeral storage. Default: true

    GlanceBackend
    Defines which back end the Image service should use: rbd (Ceph), swift, or file. Default: rbd

    GnocchiBackend
    Defines which back end the Telemetry service should use: rbd (Ceph), swift, or file. Default: rbd

    Note

    You can omit an option from ~/templates/storage-config.yaml if you intend to use the default setting.
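
Putting these together, a sketch of ~/templates/storage-config.yaml that sets several of these options explicitly (the values shown here are the defaults, so in practice you would list only the options you change):

    parameter_defaults:
      CinderEnableIscsiBackend: false
      CinderEnableRbdBackend: true
      CinderBackupBackend: ceph
      NovaEnableRbdBackend: true
      GlanceBackend: rbd
      GnocchiBackend: rbd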

The contents of your environment file will change depending on the settings you apply in the sections that follow. See Appendix A, Sample Environment File: Creating a Ceph Cluster for a finished example.

The following subsections explain how to override common default storage service settings applied by the director.

5.1. Enabling the Ceph Metadata Server

The Ceph Metadata Server (MDS) runs the ceph-mds daemon, which manages metadata related to files stored on CephFS. CephFS can be consumed via NFS. For related information about using CephFS via NFS, see Ceph File System Guide and CephFS via NFS Back End Guide for the Shared File System Service.

Note

Red Hat only supports deploying Ceph MDS with the CephFS via NFS back end for the Shared File System service.

To enable the Ceph Metadata Server, invoke the following environment file when creating your overcloud:

  • /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml

See Section 8.2, “Initiating Overcloud Deployment” for more details. For more information about the Ceph Metadata Server, see Configuring Metadata Server Daemons.
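
For illustration, a deployment command that enables the Ceph Metadata Server alongside the base Ceph environment might look like the following sketch. The same -e pattern applies to the other environment files referenced in this chapter (such as ceph-rgw.yaml and cinder-backup.yaml), and your actual command will include any other environment files and options your deployment requires:

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml \
      -e /home/stack/templates/storage-config.yaml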

Note

By default, the Ceph Metadata Server will be deployed on the Controller node. You can deploy the Ceph Metadata Server on its own dedicated node; for instructions, see Section 4.2, “Creating a Custom Role and Flavor for the Ceph MDS Service”.

5.2. Enabling the Ceph Object Gateway

The Ceph Object Gateway provides applications with an interface to object storage capabilities within a Ceph storage cluster. Upon deploying the Ceph Object Gateway, you can then replace the default Object Storage service (swift) with Ceph. For more information, see Object Gateway Guide for Red Hat Enterprise Linux.

To enable a Ceph Object Gateway in your deployment, invoke the following environment file when creating your overcloud:

  • /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-rgw.yaml

See Section 8.2, “Initiating Overcloud Deployment” for details.

The Ceph Object Gateway acts as a drop-in replacement for the default Object Storage service. As such, all other services that normally use swift can seamlessly start using the Ceph Object Gateway instead without further configuration.

5.3. Configuring the Backup Service to Use Ceph

The Block Storage Backup service (cinder-backup) is disabled by default. To enable it, invoke the following environment file when creating your overcloud:

  • /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml

Refer to the Block Storage Backup Guide for instructions.
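
With the backup service enabled, you can also set its target back end through the CinderBackupBackend parameter described earlier in this chapter. A sketch of the relevant entry in ~/templates/storage-config.yaml, assuming you want volume backups to go to Ceph:

    parameter_defaults:
      CinderBackupBackend: ceph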

5.4. Configuring Multiple Bonded Interfaces Per Ceph Node

Using a bonded interface allows you to combine multiple NICs to add redundancy to a network connection. If you have enough NICs on your Ceph nodes, you can take this a step further by creating multiple bonded interfaces per node.

You can then use a bonded interface for each network connection required by the node, providing both redundancy and a dedicated connection for each network.

The simplest implementation of this involves the use of two bonds, one for each storage network used by the Ceph nodes. These networks are the following:

Front-end storage network (StorageNet)
The Ceph client uses this network to interact with its Ceph cluster.
Back-end storage network (StorageMgmtNet)
The Ceph cluster uses this network to balance data in accordance with the placement group policy of the cluster. For more information, see Placement Groups (PG) (from the Red Hat Ceph Architecture Guide).

Configuring this involves customizing a network interface template, as the director does not provide any sample templates that deploy multiple bonded NICs. However, the director does provide a template that deploys a single bonded interface, namely /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml. You can add a bonded interface for your additional NICs by defining it in this template.

Note

For more detailed instructions on how to do this, see Creating Custom Interface Templates (from the Advanced Overcloud Customization guide). That section also explains the different components of a bridge and bonding definition.

The following snippet contains the default definition for the single bonded interface defined by /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml:

  type: ovs_bridge # 1
  name: br-bond
  members:
    -
      type: ovs_bond # 2
      name: bond1 # 3
      ovs_options: {get_param: BondInterfaceOvsOptions} # 4
      members: # 5
        -
          type: interface
          name: nic2
          primary: true
        -
          type: interface
          name: nic3
    -
      type: vlan # 6
      device: bond1 # 7
      vlan_id: {get_param: StorageNetworkVlanID}
      addresses:
        -
          ip_netmask: {get_param: StorageIpSubnet}
    -
      type: vlan
      device: bond1
      vlan_id: {get_param: StorageMgmtNetworkVlanID}
      addresses:
        -
          ip_netmask: {get_param: StorageMgmtIpSubnet}
1
A single bridge named br-bond holds the bond defined by this template. This line defines the bridge type, namely OVS.
2
The first member of the br-bond bridge is the bonded interface itself, named bond1. This line defines the bond type of bond1, which is also OVS.
3
The default bond is named bond1, as defined in this line.
4
The ovs_options entry instructs the director to use a specific set of bonding module directives. Those directives are passed through the BondInterfaceOvsOptions parameter, which you can also configure in this same file. For instructions on how to configure this, see Section 5.4.1, “Configuring Bonding Module Directives”.
5
The members section of the bond defines which network interfaces are bonded by bond1. In this case, the bonded interface uses nic2 (set as the primary interface) and nic3.
6
The br-bond bridge has two other members: namely, a VLAN for both front-end (StorageNetwork) and back-end (StorageMgmtNetwork) storage networks.
7
The device parameter defines what device a VLAN should use. In this case, both VLANs will use the bonded interface bond1.

With at least two more NICs, you can define an additional bridge and bonded interface. Then, you can move one of the VLANs to the new bonded interface. This results in added throughput and reliability for both storage network connections.

When customizing /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml for this purpose, it is advisable to use Linux bonds (type: linux_bond) instead of the default OVS bonds (type: ovs_bond). This bond type is more suitable for enterprise production deployments.

The following edited snippet defines an additional OVS bridge (br-bond2) which houses a new Linux bond named bond2. The bond2 interface uses two additional NICs (namely, nic4 and nic5) and will be used solely for back-end storage network traffic:

  type: ovs_bridge
  name: br-bond
  members:
    -
      type: linux_bond
      name: bond1
      bonding_options: {get_param: BondInterfaceOvsOptions} # 1
      members:
        -
          type: interface
          name: nic2
          primary: true
        -
          type: interface
          name: nic3
    -
      type: vlan
      device: bond1
      vlan_id: {get_param: StorageNetworkVlanID}
      addresses:
        -
          ip_netmask: {get_param: StorageIpSubnet}
-
  type: ovs_bridge
  name: br-bond2
  members:
    -
      type: linux_bond
      name: bond2
      bonding_options: {get_param: BondInterfaceOvsOptions}
      members:
        -
          type: interface
          name: nic4
          primary: true
        -
          type: interface
          name: nic5
    -
      type: vlan
      device: bond2
      vlan_id: {get_param: StorageMgmtNetworkVlanID}
      addresses:
        -
          ip_netmask: {get_param: StorageMgmtIpSubnet}
1
As bond1 and bond2 are both Linux bonds (instead of OVS), they use bonding_options instead of ovs_options to set bonding directives. For related information, see Section 5.4.1, “Configuring Bonding Module Directives”.

For the full contents of this customized template, see Appendix B, Sample Custom Interface Template: Multiple Bonded Interfaces.
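
To have the director apply your customized copy instead of the default template, map it in the resource_registry of an environment file that you include at deployment time. The following is a sketch, assuming a hypothetical copy saved at /home/stack/templates/nic-configs/ceph-storage.yaml:

    resource_registry:
      OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/ceph-storage.yaml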

5.4.1. Configuring Bonding Module Directives

After adding and configuring the bonded interfaces, use the BondInterfaceOvsOptions parameter to set the directives that each bond should use. You can find this parameter in the parameters: section of /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml. The following snippet shows its default definition (an empty string):

BondInterfaceOvsOptions:
    default: ''
    description: The ovs_options string for the bond interface. Set
                 things like lacp=active and/or bond_mode=balance-slb
                 using this option.
    type: string

Define the options you need in the default: line. For example, to use 802.3ad (mode 4) and a LACP rate of 1 (fast), use 'mode=4 lacp_rate=1', as in:

BondInterfaceOvsOptions:
    default: 'mode=4 lacp_rate=1'
    description: The bonding_options string for the bond interface. Set
                 things like lacp=active and/or bond_mode=balance-slb
                 using this option.
    type: string
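
If your switches do not support LACP, active-backup bonding is a common alternative for Linux bonds; the following is a sketch, assuming mode 1 with standard MII link monitoring suits your network:

    BondInterfaceOvsOptions:
        default: 'mode=1 miimon=100'
        description: The bonding_options string for the bond interface.
        type: string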

See Appendix C, Open vSwitch Bonding Options (from the Advanced Overcloud Customization guide) for other supported bonding options. For the full contents of the customized /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml template, see Appendix B, Sample Custom Interface Template: Multiple Bonded Interfaces.