Chapter 4. Customizing the Storage service

The heat template collection provided by director already contains the necessary templates and environment files to enable a basic Ceph Storage configuration.

The /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment file creates a Ceph cluster and integrates it with your overcloud at deployment. This cluster features containerized Ceph Storage nodes. For more information about containerized services in OpenStack, see Configuring a basic overcloud with the CLI tools in the Director Installation and Usage Guide.
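For example, a deployment command that includes this environment file might look like the following sketch. This is for illustration only; the exact set of environment files depends on your deployment, and the complete procedure is described in Section 7.2, “Initiating overcloud deployment”.

openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml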

Red Hat OpenStack Platform director also applies basic default settings to the deployed Ceph cluster. You need a custom environment file to pass custom settings to your Ceph cluster.

Procedure

  1. Create the file storage-config.yaml in /home/stack/templates/. For the purposes of this document, ~/templates/storage-config.yaml contains most of the overcloud-related custom settings for your environment; the settings that you define in it override the defaults that director applies to your overcloud.
  2. Add a parameter_defaults section to ~/templates/storage-config.yaml. This section contains custom settings for your overcloud. For example, to set vxlan as the network type of the Networking service (neutron):

    parameter_defaults:
      NeutronNetworkType: vxlan
  3. Optional: You can set the following options under parameter_defaults, depending on your needs (a combined example follows this procedure):

    CinderEnableIscsiBackend
    Enables the iSCSI back end. Default value: false

    CinderEnableRbdBackend
    Enables the Ceph Storage back end. Default value: true

    CinderBackupBackend
    Sets ceph or swift as the back end for volume backups; see Section 4.4, “Configuring the Backup Service to use Ceph” for related details. Default value: ceph

    NovaEnableRbdBackend
    Enables Ceph Storage for Nova ephemeral storage. Default value: true

    GlanceBackend
    Defines which back end the Image service should use: rbd (Ceph), swift, or file. Default value: rbd

    GnocchiBackend
    Defines which back end the Telemetry service should use: rbd (Ceph), swift, or file. Default value: rbd

    Note

    You can omit an option from ~/templates/storage-config.yaml if you want to use the default setting.
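For illustration, the following sketch shows how several of these options might be combined in the parameter_defaults section of ~/templates/storage-config.yaml. The values shown simply restate the defaults from the preceding list; include only the options that you want to change:

parameter_defaults:
  NeutronNetworkType: vxlan
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  GnocchiBackend: rbd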

The contents of your environment file change depending on the settings you apply in the sections that follow. See Appendix A, Sample environment file: Creating a Ceph cluster for a finished example.

The following subsections explain how to override common default storage service settings applied by the director.

4.1. Enabling the Ceph Metadata server

The Ceph Metadata Server (MDS) runs the ceph-mds daemon, which manages metadata related to files stored on CephFS. CephFS can be consumed through NFS. For more information about using CephFS through NFS, see the Ceph File System Guide and the CephFS via NFS Back End Guide for the Shared File System Service.

Note

Red Hat only supports deploying Ceph MDS with the CephFS through NFS back end for the Shared File Systems service.

To enable the Ceph Metadata Server, invoke the following environment file when you create your overcloud:

  • /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml

See Section 7.2, “Initiating overcloud deployment” for more details. For more information about the Ceph Metadata Server, see Configuring Metadata Server Daemons.

Note

By default, the Ceph Metadata Server is deployed on the Controller node. You can deploy the Ceph Metadata Server on its own dedicated node instead; for more information, see Section 3.2, “Creating a custom role and flavor for the Ceph MDS service”.

4.2. Enabling the Ceph Object Gateway

The Ceph Object Gateway (RGW) provides applications with an interface to object storage capabilities within a Ceph Storage cluster. When you deploy RGW, you can replace the default Object Storage service (swift) with Ceph. For more information, see Object Gateway Guide for Red Hat Enterprise Linux.

To enable RGW in your deployment, invoke the following environment file when creating your overcloud:

  • /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-rgw.yaml

For more information, see Section 7.2, “Initiating overcloud deployment”.

By default, Ceph Storage allows 250 placement groups per OSD. When you enable RGW, Ceph Storage creates six additional pools that are required by RGW. The new pools are:

  • .rgw.root
  • default.rgw.control
  • default.rgw.meta
  • default.rgw.log
  • default.rgw.buckets.index
  • default.rgw.buckets.data
Note

In your deployment, default is replaced with the name of the zone to which the pools belong.

Therefore, when you enable RGW, be sure to set the default pg_num using the CephPoolDefaultPgNum parameter to account for the new pools. For more information about how to calculate the number of placement groups for Ceph pools, see Section 6.9, “Assigning custom attributes to different Ceph pools”.
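For illustration, such an override might look like the following sketch in the parameter_defaults section of your custom environment file. The value 64 is purely illustrative; calculate a suitable value for your deployment as described in that section:

parameter_defaults:
  CephPoolDefaultPgNum: 64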

The Ceph Object Gateway acts as a drop-in replacement for the default Object Storage service. As such, all other services that normally use swift can seamlessly start using the Ceph Object Gateway instead without further configuration. For example, when configuring the Block Storage Backup service (cinder-backup) to use the Ceph Object Gateway, set ceph as the target back end (see Section 4.4, “Configuring the Backup Service to use Ceph”).

4.3. Configuring Ceph Object Store to use external Ceph Object Gateway

Red Hat OpenStack Platform (RHOSP) director supports configuring an external Ceph Object Gateway (RGW) as an Object Store service. To authenticate with the external RGW service, you must configure RGW to verify users and their roles in the Identity service (keystone).

For more information about how to configure an external Ceph Object Gateway, see Configuring the Ceph Object Gateway in the Using Keystone with the Ceph Object Gateway Guide.

Procedure

  1. Add the following parameter_defaults to a custom environment file, for example, swift-external-params.yaml, and adjust the values to suit your deployment:

    parameter_defaults:
       ExternalPublicUrl: 'http://<Public RGW endpoint or loadbalancer>:8080/swift/v1/AUTH_%(project_id)s'
       ExternalInternalUrl: 'http://<Internal RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s'
       ExternalAdminUrl: 'http://<Admin RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s'
       ExternalSwiftUserTenant: 'service'
       SwiftPassword: 'choose_a_random_password'
    Note

    The example code snippet contains parameter values that might differ from values that you use in your environment:

    • The default port where the remote RGW instance listens is 8080. The port might be different depending on how the external RGW is configured.
    • The swift user created in the overcloud uses the password defined by the SwiftPassword parameter. You must configure the external RGW instance to use the same password to authenticate with the Identity service, by using the rgw_keystone_admin_password setting.
  2. Add the following code to the Ceph config file to configure RGW to use the Identity service. Adjust the variable values to suit your environment.

        rgw_keystone_api_version: 3
        rgw_keystone_url: http://<public Keystone endpoint>:5000/
        rgw_keystone_accepted_roles: 'member, Member, admin'
        rgw_keystone_accepted_admin_roles: ResellerAdmin, swiftoperator
        rgw_keystone_admin_domain: default
        rgw_keystone_admin_project: service
        rgw_keystone_admin_user: swift
        rgw_keystone_admin_password: <Password as defined in the environment parameters>
        rgw_keystone_implicit_tenants: 'true'
        rgw_keystone_revocation_interval: '0'
        rgw_s3_auth_use_keystone: 'true'
        rgw_swift_versioning_enabled: 'true'
        rgw_swift_account_in_url: 'true'
    Note

    Director creates the following roles and users in the Identity service by default:

    • rgw_keystone_accepted_admin_roles: ResellerAdmin, swiftoperator
    • rgw_keystone_admin_domain: default
    • rgw_keystone_admin_project: service
    • rgw_keystone_admin_user: swift
  3. Deploy the overcloud with the additional environment files:

    openstack overcloud deploy --templates \
    -e <your environment files> \
    -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
    -e swift-external-params.yaml

4.4. Configuring the Backup Service to use Ceph

The Block Storage Backup service (cinder-backup) is disabled by default. To enable it, invoke the following environment file when creating your overcloud:

  • /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml

See Section 7.2, “Initiating overcloud deployment” for more details.

When you enable cinder-backup, you can configure it to store backups in Ceph (see also Section 4.2, “Enabling the Ceph Object Gateway”). To do so, add the following line to the parameter_defaults section of your environment file (namely, ~/templates/storage-config.yaml):

CinderBackupBackend: ceph
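In context, this line sits under the parameter_defaults section, as in this minimal sketch:

parameter_defaults:
  CinderBackupBackend: ceph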

4.5. Configuring multiple bonded interfaces per Ceph node

You can use a bonded interface to combine multiple NICs to add redundancy to a network connection. If you have enough NICs on your Ceph nodes, you can take this a step further by creating multiple bonded interfaces per node.

You can then use a bonded interface for each network connection that the node requires. This provides both redundancy and a dedicated connection for each network.

The simplest implementation involves two bonds, one for each storage network used by the Ceph nodes. These networks are the following:

Front-end storage network (StorageNet)
The Ceph client uses this network to interact with its Ceph cluster.
Back-end storage network (StorageMgmtNet)
The Ceph cluster uses this network to balance data in accordance with the placement group policy of the cluster. For more information, see Placement Groups (PG) in the Red Hat Ceph Architecture Guide.

Configuring this involves customizing a network interface template, as the director does not provide any sample templates that deploy multiple bonded NICs. However, the director does provide a template that deploys a single bonded interface — namely, /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml. You can add a bonded interface for your additional NICs by defining it there.

Note

For more information about creating custom interface templates, see Creating Custom Interface Templates in the Advanced Overcloud Customization guide.

The following snippet contains the default definition for the single bonded interface defined by /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml:

  type: ovs_bridge # 1
  name: br-bond
  members:
    -
      type: ovs_bond # 2
      name: bond1 # 3
      ovs_options: {get_param: BondInterfaceOvsOptions} # 4
      members: # 5
        -
          type: interface
          name: nic2
          primary: true
        -
          type: interface
          name: nic3
    -
      type: vlan # 6
      device: bond1 # 7
      vlan_id: {get_param: StorageNetworkVlanID}
      addresses:
        -
          ip_netmask: {get_param: StorageIpSubnet}
    -
      type: vlan
      device: bond1
      vlan_id: {get_param: StorageMgmtNetworkVlanID}
      addresses:
        -
          ip_netmask: {get_param: StorageMgmtIpSubnet}
1. A single bridge named br-bond holds the bond defined by this template. This line defines the bridge type, namely OVS.
2. The first member of the br-bond bridge is the bonded interface itself, named bond1. This line defines the bond type of bond1, which is also OVS.
3. The default bond is named bond1, as defined in this line.
4. The ovs_options entry instructs director to use a specific set of bonding module directives. Those directives are passed through the BondInterfaceOvsOptions parameter, which you can also configure in this same file. For instructions, see Section 4.5.1, “Configuring bonding module directives”.
5. The members section of the bond defines which network interfaces bond1 bonds. In this case, the bonded interface uses nic2 (set as the primary interface) and nic3.
6. The br-bond bridge has two other members: a VLAN for each of the front-end (StorageNetwork) and back-end (StorageMgmtNetwork) storage networks.
7. The device parameter defines the device that a VLAN uses. In this case, both VLANs use the bonded interface bond1.

With at least two more NICs, you can define an additional bridge and bonded interface. Then, you can move one of the VLANs to the new bonded interface. This results in added throughput and reliability for both storage network connections.

When you customize /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml for this purpose, use Linux bonds (type: linux_bond) instead of the default OVS bonds (type: ovs_bond). This bond type is more suitable for enterprise production deployments.

The following edited snippet defines an additional OVS bridge (br-bond2), which houses a new Linux bond named bond2. The bond2 interface uses two additional NICs (namely, nic4 and nic5) and carries only back-end storage network traffic:

  type: ovs_bridge
  name: br-bond
  members:
    -
      type: linux_bond
      name: bond1
      bonding_options: {get_param: BondInterfaceOvsOptions} # 1
      members:
        -
          type: interface
          name: nic2
          primary: true
        -
          type: interface
          name: nic3
    -
      type: vlan
      device: bond1
      vlan_id: {get_param: StorageNetworkVlanID}
      addresses:
        -
          ip_netmask: {get_param: StorageIpSubnet}
-
  type: ovs_bridge
  name: br-bond2
  members:
    -
      type: linux_bond
      name: bond2
      bonding_options: {get_param: BondInterfaceOvsOptions}
      members:
        -
          type: interface
          name: nic4
          primary: true
        -
          type: interface
          name: nic5
    -
      type: vlan
      device: bond2
      vlan_id: {get_param: StorageMgmtNetworkVlanID}
      addresses:
        -
          ip_netmask: {get_param: StorageMgmtIpSubnet}
1. Because bond1 and bond2 are both Linux bonds (instead of OVS bonds), they use bonding_options instead of ovs_options to set bonding directives. For related information, see Section 4.5.1, “Configuring bonding module directives”.

For the full contents of this customized template, see Appendix B, Sample custom interface template: Multiple bonded interfaces.

4.5.1. Configuring bonding module directives

After you add and configure the bonded interfaces, use the BondInterfaceOvsOptions parameter to set the directives that each bonded interface uses. You can find this parameter in the parameters: section of /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml. The following snippet shows the default definition of this parameter (namely, empty):

BondInterfaceOvsOptions:
    default: ''
    description: The ovs_options string for the bond interface. Set
                 things like lacp=active and/or bond_mode=balance-slb
                 using this option.
    type: string

Define the options you need in the default: line. For example, to use 802.3ad (mode 4) and a LACP rate of 1 (fast), use 'mode=4 lacp_rate=1', as in:

BondInterfaceOvsOptions:
    default: 'mode=4 lacp_rate=1'
    description: The bonding_options string for the bond interface. Set
                 things like lacp=active and/or bond_mode=balance-slb
                 using this option.
    type: string

For more information about other supported bonding options, see Open vSwitch Bonding Options in the Advanced Overcloud Customization guide. For the full contents of the customized /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml template, see Appendix B, Sample custom interface template: Multiple bonded interfaces.