Chapter 4. Customizing the Storage Service
The Heat template collection provided by the director already contains the necessary templates and environment files to enable a basic Ceph Storage configuration.
The /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment file will create a Ceph cluster and integrate it with your overcloud upon deployment. This cluster will feature containerized Ceph Storage nodes. For more information about containerized services in OpenStack, see "Configuring a Basic Overcloud with the CLI Tools" in the Director Installation and Usage Guide.
The Red Hat OpenStack director will also apply basic, default settings to the deployed Ceph cluster. You need a custom environment file to pass custom settings to your Ceph cluster. To create one:
-
Create the file storage-config.yaml in /home/stack/templates/. For the purposes of this document, ~/templates/storage-config.yaml will contain most of the overcloud-related custom settings for your environment. The settings in this file override the defaults that the director applies to your overcloud.
-
Add a parameter_defaults section to ~/templates/storage-config.yaml. This section will contain the custom settings for your overcloud. For example, to set vxlan as the network type of the Networking service (neutron):

    parameter_defaults:
      NeutronNetworkType: vxlan
If needed, set the following options under parameter_defaults as you see fit:
- CinderEnableIscsiBackend: enables the iSCSI back end. Default: false.
- CinderEnableRbdBackend: enables the Ceph Storage back end. Default: true.
- CinderBackupBackend: sets ceph or swift as the back end for volume backups; see Section 4.3, “Configuring the Backup Service to Use Ceph” for related details. Default: ceph.
- NovaEnableRbdBackend: enables Ceph Storage for Nova ephemeral storage. Default: true.
- GlanceBackend: defines which back end the Image service should use: rbd (Ceph), swift, or file. Default: rbd.
- GnocchiBackend: defines which back end the Telemetry service should use: rbd (Ceph), swift, or file. Default: rbd.
Note: You can omit an option from ~/templates/storage-config.yaml if you intend to use its default setting.
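For illustration, a ~/templates/storage-config.yaml that keeps the Ceph-backed defaults while setting each option explicitly might look like the following. The values shown are examples drawn from the table above, not recommendations for any particular environment:

```yaml
parameter_defaults:
  # Networking service (neutron) network type
  NeutronNetworkType: vxlan
  # Block Storage back ends: disable iSCSI, use Ceph RBD
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  # Store volume backups in Ceph (see Section 4.3)
  CinderBackupBackend: ceph
  # Use Ceph for Nova ephemeral storage
  NovaEnableRbdBackend: true
  # Image and Telemetry services use Ceph RBD
  GlanceBackend: rbd
  GnocchiBackend: rbd
```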
The contents of your environment file will change depending on the settings you apply in the sections that follow. See Appendix A, Sample Environment File: Creating a Ceph Cluster for a finished example.
The following subsections explain how to override common default storage service settings applied by the director.
4.1. Enabling the Ceph Metadata Server (MDS) [Technology Preview]
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
The Ceph Metadata Server (MDS) runs the ceph-mds daemon, which manages metadata related to files stored on the Ceph File System (CephFS). For related information about CephFS, see Ceph File System Guide and CephFS Back End Guide for the Shared File System Service.
To enable the Ceph Metadata Server, invoke the following environment file when creating your overcloud:
-
/usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml
See Section 6.2, “Initiating Overcloud Deployment” for more details. For more information about the Ceph Metadata Server, see Configuring Metadata Server Daemons.
By default, the Ceph Metadata Server will be deployed on the Controller node. You can deploy the Ceph Metadata Server on its own dedicated node; for instructions, see Section 3.2, “Creating a Custom Role and Flavor for the Ceph MDS Service”.
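As a sketch of how such an environment file is invoked (see Section 6.2, “Initiating Overcloud Deployment” for the full procedure), each file is passed to the deployment command with the -e option:

```shell
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml \
  -e /home/stack/templates/storage-config.yaml
```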
4.2. Enabling the Ceph Object Gateway
The Ceph Object Gateway provides applications with an interface to object storage capabilities within a Ceph storage cluster. After deploying the Ceph Object Gateway, you can replace the default Object Storage service (swift) with Ceph. For more information, see Object Gateway Guide for Red Hat Enterprise Linux.
To enable a Ceph Object Gateway in your deployment, invoke the following environment file when creating your overcloud:
-
/usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-rgw.yaml
See Section 6.2, “Initiating Overcloud Deployment” for more details.
The Ceph Object Gateway acts as a drop-in replacement for the default Object Storage service. As such, all other services that normally use swift can seamlessly start using the Ceph Object Gateway instead without further configuration. For example, when configuring the Block Storage Backup service (cinder-backup) to use the Ceph Object Gateway, set ceph as the target back end (see Section 4.3, “Configuring the Backup Service to Use Ceph”).
4.3. Configuring the Backup Service to Use Ceph
The Block Storage Backup service (cinder-backup) is disabled by default. To enable it, invoke the following environment file when creating your overcloud:
-
/usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml
See Section 6.2, “Initiating Overcloud Deployment” for more details.
After enabling cinder-backup, you can configure it to store backups in Ceph. To do so, add the following line to the parameter_defaults section of your environment file (namely, ~/templates/storage-config.yaml):
CinderBackupBackend: ceph
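Putting the sections of this chapter together, a deployment that enables the Ceph cluster, the Ceph Object Gateway, the Backup service, and your custom settings might be invoked as follows. This is a sketch; the exact set of -e files depends on your deployment, and openstack overcloud deploy accepts many other options (see Section 6.2, “Initiating Overcloud Deployment”):

```shell
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-rgw.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml \
  -e /home/stack/templates/storage-config.yaml
```

Environment files passed later on the command line take precedence over earlier ones, so pass your custom storage-config.yaml last.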
4.4. Configuring Multiple Bonded Interfaces Per Ceph Node
Using a bonded interface allows you to combine multiple NICs to add redundancy to a network connection. If you have enough NICs on your Ceph nodes, you can take this a step further and create multiple bonded interfaces per node.
Each network connection required by the node can then use its own bonded interface, providing both redundancy and a dedicated connection for each network.
The simplest implementation of this involves the use of two bonds, one for each storage network used by the Ceph nodes. These networks are the following:
- Front-end storage network (StorageNet): the Ceph client uses this network to interact with its Ceph cluster.
- Back-end storage network (StorageMgmtNet): the Ceph cluster uses this network to balance data in accordance with the placement group policy of the cluster. For more information, see Placement Groups (PG) (from the Red Hat Ceph Architecture Guide).
Configuring this involves customizing a network interface template, as the director does not provide any sample templates that deploy multiple bonded NICs. However, the director does provide a template that deploys a single bonded interface — namely, /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml. You can add a bonded interface for your additional NICs by defining it there.
For more detailed instructions on how to do this, see Creating Custom Interface Templates (from the Advanced Overcloud Customization guide). That section also explains the different components of a bridge and bonding definition.
The following snippet contains the default definition for the single bonded interface defined by /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml:
    type: ovs_bridge # 1
    name: br-bond
    members:
    - type: ovs_bond # 2
      name: bond1 # 3
      ovs_options: {get_param: BondInterfaceOvsOptions} # 4
      members: # 5
      - type: interface
        name: nic2
        primary: true
      - type: interface
        name: nic3
    - type: vlan # 6
      device: bond1 # 7
      vlan_id: {get_param: StorageNetworkVlanID}
      addresses:
      - ip_netmask: {get_param: StorageIpSubnet}
    - type: vlan
      device: bond1
      vlan_id: {get_param: StorageMgmtNetworkVlanID}
      addresses:
      - ip_netmask: {get_param: StorageMgmtIpSubnet}
1. A single bridge named br-bond holds the bond defined by this template. This line defines the bridge type, namely OVS.
2. The first member of the br-bond bridge is the bonded interface itself, named bond1. This line defines the bond type of bond1, which is also OVS.
3. The default bond is named bond1, as defined in this line.
4. The ovs_options entry instructs the director to use a specific set of bonding module directives. Those directives are passed through BondInterfaceOvsOptions, which you can also configure in this same file. For instructions on how to configure this, see Section 4.4.1, “Configuring Bonding Module Directives”.
5. The members section of the bond defines which network interfaces are bonded by bond1. In this case, the bonded interface uses nic2 (set as the primary interface) and nic3.
6. The br-bond bridge has two other members: namely, a VLAN for each of the front-end (StorageNetwork) and back-end (StorageMgmtNetwork) storage networks.
7. The device parameter defines which device a VLAN should use. In this case, both VLANs use the bonded interface bond1.
With at least two more NICs, you can define an additional bridge and bonded interface. Then, you can move one of the VLANs to the new bonded interface. This results in added throughput and reliability for both storage network connections.
When customizing /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml for this purpose, it is also advisable to use Linux bonds (type: linux_bond) instead of the default OVS bond (type: ovs_bond), as this bond type is more suitable for enterprise production deployments.
The following edited snippet defines an additional OVS bridge (br-bond2) which houses a new Linux bond named bond2. The bond2 interface uses two additional NICs (namely, nic4 and nic5) and will be used solely for back-end storage network traffic:
    - type: ovs_bridge
      name: br-bond
      members:
      - type: linux_bond
        name: bond1
        bonding_options: {get_param: BondInterfaceOvsOptions} # 1
        members:
        - type: interface
          name: nic2
          primary: true
        - type: interface
          name: nic3
      - type: vlan
        device: bond1
        vlan_id: {get_param: StorageNetworkVlanID}
        addresses:
        - ip_netmask: {get_param: StorageIpSubnet}
    - type: ovs_bridge
      name: br-bond2
      members:
      - type: linux_bond
        name: bond2
        bonding_options: {get_param: BondInterfaceOvsOptions}
        members:
        - type: interface
          name: nic4
          primary: true
        - type: interface
          name: nic5
      - type: vlan
        device: bond2
        vlan_id: {get_param: StorageMgmtNetworkVlanID}
        addresses:
        - ip_netmask: {get_param: StorageMgmtIpSubnet}
1. Because bond1 and bond2 are both Linux bonds (instead of OVS bonds), they use bonding_options instead of ovs_options to set bonding directives. For related information, see Section 4.4.1, “Configuring Bonding Module Directives”.
For the full contents of this customized template, see Appendix B, Sample Custom Interface Template: Multiple Bonded Interfaces.
4.4.1. Configuring Bonding Module Directives
After adding and configuring the bonded interfaces, use the BondInterfaceOvsOptions parameter to set what directives each should use. You can find this in the parameters: section of /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml. The following snippet shows the default definition of this parameter (namely, empty):
BondInterfaceOvsOptions:
default: ''
description: The ovs_options string for the bond interface. Set
things like lacp=active and/or bond_mode=balance-slb
using this option.
type: string
Define the options you need in the default: line. For example, to use 802.3ad (mode 4) and a LACP rate of 1 (fast), use 'mode=4 lacp_rate=1', as in:
BondInterfaceOvsOptions:
default: 'mode=4 lacp_rate=1'
description: The bonding_options string for the bond interface. Set
things like lacp=active and/or bond_mode=balance-slb
using this option.
type: string
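Instead of editing the template's default: value, you can also override the parameter from your custom environment file, which keeps all of your custom settings in one place. A sketch, assuming the ~/templates/storage-config.yaml file used throughout this chapter:

```yaml
parameter_defaults:
  # Use 802.3ad (mode 4) with a fast LACP rate for the Linux bonds
  BondInterfaceOvsOptions: 'mode=4 lacp_rate=1'
```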
See Appendix C. Open vSwitch Bonding Options (from the Advanced Overcloud Customization guide) for other supported bonding options. For the full contents of the customized /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml template, see Appendix B, Sample Custom Interface Template: Multiple Bonded Interfaces.
