Chapter 4. Customizing the Storage service
The heat template collection provided by the director already contains the necessary templates and environment files to enable a basic Ceph Storage configuration.
The director uses the /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml
environment file to create a Ceph cluster and integrate it with your overcloud during deployment. This cluster features containerized Ceph Storage nodes. For more information about containerized services in OpenStack, see Configuring a basic overcloud with the CLI tools in the Director Installation and Usage guide.
The director also applies basic default settings to the deployed Ceph cluster. You must define any additional configuration in a custom environment file:
Procedure
- Create the file storage-config.yaml in /home/stack/templates/. In this example, the ~/templates/storage-config.yaml file contains most of the overcloud-related custom settings for your environment. Parameters that you include in the custom environment file override the corresponding default settings from the /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml file.
- Add a parameter_defaults section to ~/templates/storage-config.yaml. This section contains custom settings for your overcloud. For example, to set vxlan as the network type of the Networking service (neutron), add the following snippet to your custom environment file:

```yaml
parameter_defaults:
  NeutronNetworkType: vxlan
```
If necessary, set the following options under parameter_defaults according to your requirements:

| Option | Description | Default value |
| --- | --- | --- |
| CinderEnableIscsiBackend | Enables the iSCSI back end | false |
| CinderEnableRbdBackend | Enables the Ceph Storage back end | true |
| CinderBackupBackend | Sets ceph or swift as the back end for volume backups. For more information, see Section 4.3, “Configuring the Backup Service to use Ceph”. | ceph |
| NovaEnableRbdBackend | Enables Ceph Storage for Nova ephemeral storage | true |
| GlanceBackend | Defines which back end the Image service should use: rbd (Ceph), swift, or file | rbd |
| GnocchiBackend | Defines which back end the Telemetry service should use: rbd (Ceph), swift, or file | rbd |

Note: You can omit an option from ~/templates/storage-config.yaml if you intend to use the default setting.
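Taken together, these options form a single parameter_defaults section in your custom environment file. The following sketch shows one possible ~/templates/storage-config.yaml; the values shown simply restate the defaults from the table plus the earlier vxlan example, and are illustrative only:

```yaml
parameter_defaults:
  NeutronNetworkType: vxlan
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  GnocchiBackend: rbd
```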
The contents of your custom environment file change depending on the settings that you apply in the following sections. See Appendix A, Sample environment file: creating a Ceph Storage cluster for a completed example.
The following subsections contain information about overriding the common default storage service settings that the director applies.
4.1. Enabling the Ceph Metadata Server
The Ceph Metadata Server (MDS) runs the ceph-mds
daemon, which manages metadata related to files stored on CephFS. CephFS can be consumed through NFS. For more information about using CephFS through NFS, see File System Guide and CephFS via NFS Back End Guide for the Shared File System Service.
Red Hat supports deploying Ceph MDS only with the CephFS through NFS back end for the Shared File System service.
Procedure
To enable the Ceph Metadata Server, invoke the following environment file when you create your overcloud:
- /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml
For more information, see Section 7.2, “Initiating overcloud deployment”. For more information about the Ceph Metadata Server, see Configuring Metadata Server Daemons.
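For example, a deployment command that enables the Ceph Metadata Server might look like the following sketch. The storage-config.yaml path refers to the custom environment file created at the start of this chapter; include any other environment files that your deployment requires as additional -e arguments:

```shell
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml \
  -e /home/stack/templates/storage-config.yaml
```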
By default, the Ceph Metadata Server is deployed on the Controller node. You can also deploy the Ceph Metadata Server on its own dedicated node. For more information, see Section 3.3, “Creating a custom role and flavor for the Ceph MDS service”.
4.2. Enabling the Ceph Object Gateway
The Ceph Object Gateway (RGW) provides applications with an interface to object storage capabilities within a Ceph Storage cluster. When you deploy RGW, you can replace the default Object Storage service (swift
) with Ceph. For more information, see Object Gateway Configuration and Administration Guide.
Procedure
To enable RGW in your deployment, invoke the following environment file when you create the overcloud:
- /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-rgw.yaml
For more information, see Section 7.2, “Initiating overcloud deployment”.
By default, Ceph Storage allows 250 placement groups per OSD. When you enable RGW, Ceph Storage creates six additional pools that are required by RGW. The new pools are:
- .rgw.root
- default.rgw.control
- default.rgw.meta
- default.rgw.log
- default.rgw.buckets.index
- default.rgw.buckets.data
In your deployment, default is replaced with the name of the zone to which the pools belong.
Therefore, when you enable RGW, be sure to set the default pg_num
using the CephPoolDefaultPgNum
parameter to account for the new pools. For more information about how to calculate the number of placement groups for Ceph pools, see Section 5.3, “Assigning custom attributes to different Ceph pools”.
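Per-pool pg_num is commonly sized from the total OSD count, the replica count, and the number of pools sharing the cluster. The following Python sketch applies the widely used rule of thumb (total PG budget ≈ OSDs × 100 ÷ replicas, split evenly across pools and rounded up to a power of two); the helper name and input values are illustrative, not part of director:

```python
def suggested_pg_num(num_osds, num_pools, replicas=3, target_pgs_per_osd=100):
    """Rule-of-thumb placement group count per pool.

    The total PG budget is num_osds * target_pgs_per_osd / replicas,
    divided evenly across pools, then rounded up to a power of two
    (the usual convention for pg_num values).
    """
    total_pgs = num_osds * target_pgs_per_osd / replicas
    per_pool = max(1, total_pgs / num_pools)
    pg_num = 1
    while pg_num < per_pool:
        pg_num *= 2
    return pg_num

# Example: 12 OSDs, 3-way replication, and twelve pools sharing the
# cluster (for instance, the six RGW pools plus six OpenStack pools).
print(suggested_pg_num(num_osds=12, num_pools=12))  # → 64
```

Whatever method you use, the point is that the six extra RGW pools consume part of the same per-OSD PG budget, so CephPoolDefaultPgNum must be chosen with them included.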
The Ceph Object Gateway is a direct replacement for the default Object Storage service. As such, all other services that normally use swift
can seamlessly start using the Ceph Object Gateway instead without further configuration. For more information, see the Block Storage Backup Guide.
4.3. Configuring the Backup Service to use Ceph
The Block Storage Backup service (cinder-backup
) is disabled by default. To enable the Block Storage Backup service, complete the following steps:
Procedure
Invoke the following environment file when you create your overcloud:
- /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml
For more information, see the Block Storage Backup Guide.
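If you want to state the backup target explicitly, you can also set the CinderBackupBackend parameter described in the table at the start of this chapter in your custom environment file. This fragment is illustrative; ceph is already the default value:

```yaml
parameter_defaults:
  CinderBackupBackend: ceph
```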
4.4. Configuring multiple bonded interfaces for Ceph nodes
Use a bonded interface to combine multiple NICs and add redundancy to a network connection. If you have enough NICs on your Ceph nodes, you can create multiple bonded interfaces on each node to expand redundancy capability.
You can then use a bonded interface for each network connection that the node requires. This provides both redundancy and a dedicated connection for each network.
The simplest implementation of bonded interfaces involves the use of two bonds, one for each storage network used by the Ceph nodes. These networks are the following:
- Front-end storage network (StorageNet) - The Ceph client uses this network to interact with the corresponding Ceph cluster.
- Back-end storage network (StorageMgmtNet) - The Ceph cluster uses this network to balance data in accordance with the placement group policy of the cluster. For more information, see Placement Groups (PG) in the Red Hat Ceph Architecture Guide.
To configure multiple bonded interfaces, you must create a new network interface template, because the director does not provide any sample templates that deploy multiple bonded NICs. However, the director does provide a template that deploys a single bonded interface: /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml. You can define an additional bonded interface for your additional NICs in this template.
For more information about creating custom interface templates, see Creating Custom Interface Templates in the Advanced Overcloud Customization guide.
The following snippet contains the default definition for the single bonded interface defined in the /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml
file:
```yaml
type: ovs_bridge  # 1
name: br-bond
members:
- type: ovs_bond  # 2
  name: bond1  # 3
  ovs_options: {get_param: BondInterfaceOvsOptions}  # 4
  members:  # 5
  - type: interface
    name: nic2
    primary: true
  - type: interface
    name: nic3
- type: vlan  # 6
  device: bond1  # 7
  vlan_id: {get_param: StorageNetworkVlanID}
  addresses:
  - ip_netmask: {get_param: StorageIpSubnet}
- type: vlan
  device: bond1
  vlan_id: {get_param: StorageMgmtNetworkVlanID}
  addresses:
  - ip_netmask: {get_param: StorageMgmtIpSubnet}
```
1. A single bridge named br-bond holds the bond defined in this template. This line defines the bridge type, namely OVS.
2. The first member of the br-bond bridge is the bonded interface itself, named bond1. This line defines the bond type of bond1, which is also OVS.
3. The default bond is named bond1.
4. The ovs_options entry instructs director to use a specific set of bonding module directives. Those directives are passed through the BondInterfaceOvsOptions parameter, which you can also configure in this file. For more information about configuring bonding module directives, see Section 4.4.1, “Configuring bonding module directives”.
5. The members section of the bond defines which network interfaces are bonded by bond1. In this example, the bonded interface uses nic2 (set as the primary interface) and nic3.
6. The br-bond bridge has two other members: a VLAN for each of the front-end (StorageNetwork) and back-end (StorageMgmtNetwork) storage networks.
7. The device parameter defines which device a VLAN should use. In this example, both VLANs use the bonded interface, bond1.
With at least two more NICs, you can define an additional bridge and bonded interface. Then, you can move one of the VLANs to the new bonded interface, which increases throughput and reliability for both storage network connections.
When you customize the /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml
file for this purpose, Red Hat recommends that you use Linux bonds (type: linux_bond
) instead of the default OVS (type: ovs_bond
). This bond type is more suitable for enterprise production deployments.
The following edited snippet defines an additional OVS bridge (br-bond2), which houses a new Linux bond named bond2. The bond2 interface uses two additional NICs, nic4 and nic5, and is used solely for back-end storage network traffic:
```yaml
- type: ovs_bridge
  name: br-bond
  members:
  - type: linux_bond
    name: bond1
    bonding_options: {get_param: BondInterfaceOvsOptions}  # 1
    members:
    - type: interface
      name: nic2
      primary: true
    - type: interface
      name: nic3
  - type: vlan
    device: bond1
    vlan_id: {get_param: StorageNetworkVlanID}
    addresses:
    - ip_netmask: {get_param: StorageIpSubnet}
- type: ovs_bridge
  name: br-bond2
  members:
  - type: linux_bond
    name: bond2
    bonding_options: {get_param: BondInterfaceOvsOptions}
    members:
    - type: interface
      name: nic4
      primary: true
    - type: interface
      name: nic5
  - type: vlan
    device: bond2
    vlan_id: {get_param: StorageMgmtNetworkVlanID}
    addresses:
    - ip_netmask: {get_param: StorageMgmtIpSubnet}
```
1. As bond1 and bond2 are both Linux bonds (instead of OVS), they use bonding_options instead of ovs_options to set bonding directives. For more information, see Section 4.4.1, “Configuring bonding module directives”.
For the full contents of this customized template, see Appendix B, Sample custom interface template: multiple bonded interfaces.
4.4.1. Configuring bonding module directives
After you add and configure the bonded interfaces, use the BondInterfaceOvsOptions
parameter to set the directives that you want each bonded interface to use. You can find this information in the parameters:
section of the /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml
file. The following snippet shows the default definition of this parameter (namely, empty):
```yaml
BondInterfaceOvsOptions:
  default: ''
  description: The ovs_options string for the bond interface. Set
    things like lacp=active and/or bond_mode=balance-slb using this
    option.
  type: string
```
Define the options you need in the default:
line. For example, to use 802.3ad (mode 4) and a LACP rate of 1 (fast), use 'mode=4 lacp_rate=1'
:
```yaml
BondInterfaceOvsOptions:
  default: 'mode=4 lacp_rate=1'
  description: The bonding_options string for the bond interface. Set
    things like lacp=active and/or bond_mode=balance-slb using this
    option.
  type: string
```
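As an alternative to editing the template in place, you can usually override a template parameter from an environment file passed at deployment time, because parameter_defaults entries take precedence over the default defined in the template. A sketch of such an environment file fragment, assuming you keep the stock parameter name:

```yaml
parameter_defaults:
  BondInterfaceOvsOptions: 'mode=4 lacp_rate=1'
```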
For more information about other supported bonding options, see Open vSwitch Bonding Options in the Advanced Overcloud Optimization guide. For the full contents of the customized /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml
template, see Appendix B, Sample custom interface template: multiple bonded interfaces.