Chapter 3. Integrate with an existing Ceph Storage cluster
To integrate the Red Hat OpenStack Platform with an existing Ceph Storage cluster, you must install the
ceph-ansible package. After that, you can create custom environment files and assign nodes and flavors to roles.
3.1. Installing the ceph-ansible package
The Red Hat OpenStack Platform director uses
ceph-ansible to integrate with an existing Ceph Storage cluster, but
ceph-ansible is not installed by default on the undercloud.
Enter the following command to install the
ceph-ansible package on the undercloud:
sudo dnf install -y ceph-ansible
3.2. Creating a custom environment file
Director supplies parameters to ceph-ansible to integrate with an external Ceph Storage cluster through the environment file /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml.
If you deploy the Shared File Systems service with external CephFS, separate environment files supply additional parameters: one for native CephFS and one for CephFS through NFS.
Director invokes these environment files during deployment to integrate an existing Ceph Storage cluster with the overcloud. For more information, see Section 3.5, “Deploying the overcloud”.
To configure integration, you must supply the details of your Ceph Storage cluster to director. To do this, use a custom environment file to override the default settings.
Create a custom environment file, for example, /home/stack/templates/ceph-config.yaml.
Add a parameter_defaults: section to the file.
Use parameter_defaults to set all of the parameters that you want to override in /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml. You must set the following parameters at a minimum:
CephClientKey: the Ceph client key of your Ceph Storage cluster. This is the value of key that you retrieved in Section 2.2, “Configuring the existing Ceph Storage cluster”.
CephClusterFSID: the file system ID of your Ceph Storage cluster. This is the value of fsid in your Ceph Storage cluster configuration file, which you retrieved in Section 2.2, “Configuring the existing Ceph Storage cluster”.
CephExternalMonHost: a comma-delimited list of the IPs of all MON hosts in your Ceph Storage cluster.
For example:

parameter_defaults:
  CephClientKey: AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==
  CephClusterFSID: 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
  CephExternalMonHost: 172.16.1.7, 172.16.1.8
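The steps above can be sketched as a short script: it writes the custom environment file to the stack user's templates directory and then confirms that each of the three required parameters is present. The key, FSID, and MON IPs are the example values from this section; replace them with the values from your own cluster.

```shell
# Write the custom environment file described above (values are the
# example values from the text, not real credentials).
mkdir -p ~/templates
cat > ~/templates/ceph-config.yaml <<'EOF'
parameter_defaults:
  CephClientKey: AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==
  CephClusterFSID: 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
  CephExternalMonHost: 172.16.1.7, 172.16.1.8
EOF

# Sanity check: report any of the required parameters that are missing
# before you pass the file to the deploy command.
for key in CephClientKey CephClusterFSID CephExternalMonHost; do
  grep -q "$key" ~/templates/ceph-config.yaml || echo "missing: $key"
done
```

If any `missing:` line is printed, fix the file before deploying.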
If necessary, override the default pool names or the name of the Red Hat OpenStack Platform client user to match your Ceph Storage cluster:
If you are deploying the Shared File Systems service backed by CephFS, set the names of the data and metadata pools:
ManilaCephFSDataPoolName: manila_data
ManilaCephFSMetadataPoolName: manila_metadata

Note: Ensure that these names match the names of the pools you created.
Set the client key that you created for manila and the name of the Ceph user for that key:
ManilaCephFSCephFSAuthId: 'manila'
CephManilaClientKey: 'AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg=='

Note: The default client user name is manila, unless you override it. CephManilaClientKey is always required.
You can also add overcloud parameters to your custom environment file, for example, to set the neutron network type.
After you create the custom environment file, you must include it when you deploy the overcloud. For more information about deploying the overcloud, see Section 3.5, “Deploying the overcloud”.
3.3. Assigning nodes and flavors to roles
Planning an overcloud deployment involves specifying how many nodes and which flavors to assign to each role. Like all heat template parameters, these role specifications are declared in the parameter_defaults section of your custom environment file, in this case, /home/stack/templates/ceph-config.yaml.
For this purpose, use the following parameters:
Table 3.1. Roles and flavors for overcloud nodes
| Heat template parameter | Description |
| --- | --- |
| ControllerCount | The number of Controller nodes to scale out |
| OvercloudControlFlavor | The flavor to use for Controller nodes (control) |
| ComputeCount | The number of Compute nodes to scale out |
| OvercloudComputeFlavor | The flavor to use for Compute nodes (compute) |
For example, to configure the overcloud to deploy three nodes for each role, Controller and Compute, add the following to your custom environment file:

parameter_defaults:
  ControllerCount: 3
  ComputeCount: 3
  OvercloudControlFlavor: control
  OvercloudComputeFlavor: compute
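Because heat merges everything under parameter_defaults, the role counts and flavors can live in the same custom environment file as the Ceph parameters. A minimal sketch, with the file path from this chapter and illustrative values:

```shell
# One parameter_defaults section can carry both the Ceph integration
# parameters and the role specifications (FSID is the example value
# from the text).
mkdir -p ~/templates
cat > ~/templates/ceph-config.yaml <<'EOF'
parameter_defaults:
  CephClusterFSID: 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
  ControllerCount: 3
  ComputeCount: 3
  OvercloudControlFlavor: control
  OvercloudComputeFlavor: compute
EOF

# Quick read-back of the role parameters before deploying
awk -F': ' '/Count|Flavor/ {print $1 "=" $2}' ~/templates/ceph-config.yaml
```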
For more information and a more complete list of heat template parameters, see Creating the Overcloud with the CLI Tools in the Director Installation and Usage guide.
3.4. Ceph containers for Red Hat OpenStack Platform with Ceph Storage
A Ceph container is required to configure OpenStack Platform to use Ceph, even with an external Ceph cluster. To be compatible with Red Hat Enterprise Linux 8, Red Hat OpenStack Platform (RHOSP) 15 requires Red Hat Ceph Storage 4. The Ceph Storage 4 container is hosted at registry.redhat.io, a registry that requires authentication.
You can use the heat environment parameter
ContainerImageRegistryCredentials to authenticate at
registry.redhat.io. For more information, see Container image preparation parameters.
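A sketch of the ContainerImageRegistryCredentials structure, assuming the usual map of registry host to user name and password; the credentials shown are placeholders for your own registry.redhat.io service account:

```yaml
parameter_defaults:
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      # Placeholder credentials: substitute your registry service
      # account user name (key) and password (value)
      my_username: my_password
```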
3.5. Deploying the overcloud
During undercloud installation, set
generate_service_certificate=false in the
undercloud.conf file. Otherwise, you must inject a trust anchor when you deploy the overcloud. For more information about how to inject a trust anchor, see Enabling SSL/TLS on Overcloud Public Endpoints in the Advanced Overcloud Customization guide.
The creation of the overcloud requires additional arguments for the openstack overcloud deploy command:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \
  -e /home/stack/templates/ceph-config.yaml \
  --ntp-server pool.ntp.org
This example command uses the following options:
--templates - Creates the overcloud from the default heat template collection, /usr/share/openstack-tripleo-heat-templates.
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml - Sets director to integrate an existing Ceph cluster with the overcloud.
-e /home/stack/templates/ceph-config.yaml - Adds a custom environment file to override the defaults set by ceph-ansible-external.yaml. In this case, it is the custom environment file that you created in Section 3.2, “Creating a custom environment file”.
--ntp-server pool.ntp.org - Sets the NTP server.
3.5.2. Adding an additional environment file for external Ceph Object Gateway (RGW) for Object storage
If you deploy an overcloud that uses an already existing RGW service for Object storage, you must add an additional environment file.
Add the following parameter_defaults to a custom environment file, for example, swift-external-params.yaml. Replace the values to suit your deployment:
parameter_defaults:
  ExternalSwiftPublicUrl: 'http://<Public RGW endpoint or loadbalancer>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftInternalUrl: 'http://<Internal RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftAdminUrl: 'http://<Admin RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftUserTenant: 'service'
  SwiftPassword: 'choose_a_random_password'

Note: The example code snippet contains parameter values that might differ from values that you use in your environment:
- The default port where the remote RGW instance listens is 8080. The port might be different depending on how the external RGW is configured.
- The swift user created in the overcloud uses the password defined by the SwiftPassword parameter. You must configure the external RGW instance to use the same password to authenticate with the Identity service, by using the rgw_keystone_admin_password setting in the Ceph configuration file.
Add the following code to the Ceph config file to configure RGW to use the Identity service. Replace the variable values to suit your environment:
rgw_keystone_api_version = 3
rgw_keystone_url = http://<public Keystone endpoint>:5000/
rgw_keystone_accepted_roles = member, Member, admin
rgw_keystone_accepted_admin_roles = ResellerAdmin, swiftoperator
rgw_keystone_admin_domain = default
rgw_keystone_admin_project = service
rgw_keystone_admin_user = swift
rgw_keystone_admin_password = <password_as_defined_in_the_environment_parameters>
rgw_keystone_implicit_tenants = true
rgw_keystone_revocation_interval = 0
rgw_s3_auth_use_keystone = true
rgw_swift_versioning_enabled = true
rgw_swift_account_in_url = true

Note: Director creates the following roles and users in the Identity service by default:
- rgw_keystone_accepted_admin_roles: ResellerAdmin, swiftoperator
- rgw_keystone_admin_domain: default
- rgw_keystone_admin_project: service
- rgw_keystone_admin_user: swift
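Because the SwiftPassword parameter and the rgw_keystone_admin_password setting must carry the same value, a quick consistency check before deploying can save a failed authentication later. This is a sketch only: the file paths are illustrative, and the password is the placeholder value from the example above.

```shell
# Minimal stand-ins for the two files described in this section
# (replace with your real environment file and Ceph config paths).
cat > /tmp/swift-external-params.yaml <<'EOF'
parameter_defaults:
  SwiftPassword: 'choose_a_random_password'
EOF
cat > /tmp/ceph.conf <<'EOF'
rgw_keystone_admin_password = choose_a_random_password
EOF

# Extract both passwords and compare them.
env_pw=$(grep SwiftPassword /tmp/swift-external-params.yaml | sed "s/.*: *'\(.*\)'.*/\1/")
conf_pw=$(grep rgw_keystone_admin_password /tmp/ceph.conf | awk -F' = ' '{print $2}')
if [ "$env_pw" = "$conf_pw" ]; then
  echo "passwords match"
else
  echo "MISMATCH: SwiftPassword and rgw_keystone_admin_password differ"
fi
```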
Deploy the overcloud with these additional environment files and with any other environment files that are relevant to your deployment:
$ openstack overcloud deploy --templates \
  -e <your_environment_files> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
  -e swift-external-params.yaml
3.5.3. Invoking templates and environment files
You can also use an answers file to invoke all your templates and environment files. For example, you can use the following command to deploy an identical overcloud:
$ openstack overcloud deploy \
  --answers-file /home/stack/templates/answers.yaml \
  --ntp-server pool.ntp.org
In this case, the answers file /home/stack/templates/answers.yaml contains:

templates: /usr/share/openstack-tripleo-heat-templates/
environments:
  - /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml
  - /home/stack/templates/ceph-config.yaml
For more information, see Including environment files in an overcloud deployment in the Director Installation and Usage guide.
3.5.4. OpenStack overcloud deploy command options
You can enter the following command to see a full list of options that are available to use with the
openstack overcloud deploy command:
$ openstack help overcloud deploy
For more information, see Configuring a basic overcloud with the CLI tools in the Director Installation and Usage guide.
3.5.5. Viewing the status of overcloud creation
The overcloud creation process begins and director provisions your nodes. This process takes some time to complete.
To view the status of the overcloud creation, open a separate terminal as the
stack user and enter the following commands:
$ source ~/stackrc
$ openstack stack list --nested