Chapter 3. Configuring the Red Hat Ceph Storage cluster
To deploy the Red Hat Ceph Storage cluster for your Red Hat OpenStack Platform environment, you must first configure the Red Hat Ceph Storage cluster options for your environment.
Configure the Red Hat Ceph Storage cluster options:
- Configuring time synchronization
- Configuring a top level domain suffix
- Configuring the Red Hat Ceph Storage cluster name
- Configuring network options with the network data file
- Configuring network options with a configuration file
- Configuring a CRUSH hierarchy for an OSD
- Configuring Ceph service placement options
- Configuring SSH user options for Ceph nodes
- Configuring the container registry
Prerequisites
Before you can configure and deploy the Red Hat Ceph Storage cluster, use the Bare Metal Provisioning service (ironic) to provision the bare metal instances and networks. For more information, see Configuring the Bare Metal Provisioning service.
3.1. The openstack overcloud ceph deploy command
If you deploy the Ceph cluster using director, you must use the openstack overcloud ceph deploy command. For a complete listing of command options and parameters, see openstack overcloud ceph deploy in the Command line interface reference.
The command openstack overcloud ceph deploy --help provides the current options and parameters available in your environment.
3.2. Ceph configuration file
A standard format initialization file is one way to perform Ceph cluster configuration. Use one of the following commands with this file:
- cephadm bootstrap --config <file_name>
- openstack overcloud ceph deploy --config <file_name>
Example
The following example creates a simple initialization file called initial-ceph.conf
and then uses the openstack overcloud ceph deploy
command to configure the Ceph cluster with it. It demonstrates how to configure the messenger v2 protocol to use a secure mode that encrypts all data passing over the network.
$ cat <<EOF > initial-ceph.conf
[global]
ms_cluster_mode: secure
ms_service_mode: secure
ms_client_mode: secure
EOF
$ openstack overcloud ceph deploy --config initial-ceph.conf ...
3.3. Configuring time synchronization
The Time Synchronization Service (chrony) is enabled for time synchronization by default. You can perform the following tasks to configure the service.
Time synchronization is configured using either a delimited list or an environment file. Use the procedure that is best suited to your administrative practices.
3.3.1. Configuring time synchronization with a delimited list
You can configure the Time Synchronization Service (chrony) to use a delimited list to configure NTP servers.
Procedure
- Log in to the undercloud node as the stack user.
- Configure NTP servers with a delimited list:

  openstack overcloud ceph deploy \
    --ntp-server "<ntp_server_list>"

  Replace <ntp_server_list> with a comma-delimited list of servers.

  Example:

  openstack overcloud ceph deploy \
    --ntp-server "0.pool.ntp.org,1.pool.ntp.org"
3.3.2. Configuring time synchronization with an environment file
You can configure the Time Synchronization Service (chrony) to use an environment file that defines NTP servers.
Procedure
- Log in to the undercloud node as the stack user.
- Create an environment file, such as /home/stack/templates/ntp-parameters.yaml, to contain the NTP server configuration.
- Add the NtpServer parameter. The NtpServer parameter contains a comma-delimited list of NTP servers.

  parameter_defaults:
    NtpServer: 0.pool.ntp.org,1.pool.ntp.org

- Configure NTP servers with the environment file:

  openstack overcloud ceph deploy \
    --ntp-heat-env-file "<ntp_file_name>"

  Replace <ntp_file_name> with the name of the environment file you created.

  Example:

  openstack overcloud ceph deploy \
    --ntp-heat-env-file "/home/stack/templates/ntp-parameters.yaml"
3.3.3. Disabling time synchronization
The Time Synchronization Service (chrony) is enabled by default. You can disable the service if you do not want to use it.
Procedure
- Log in to the undercloud node as the stack user.
- Disable the Time Synchronization Service (chrony):

  openstack overcloud ceph deploy \
    --skip-ntp
3.4. Configuring a top level domain suffix
You can configure a top level domain (TLD) suffix. This suffix is added to the short hostname to create a fully qualified domain name for overcloud nodes.
A fully qualified domain name is required for TLS-e configuration.
Procedure
- Log in to the undercloud node as the stack user.
- Configure the top level domain suffix:

  openstack overcloud ceph deploy \
    --tld "<domain_name>"

  Replace <domain_name> with the required domain name.

  Example:

  openstack overcloud ceph deploy \
    --tld "example.local"
3.5. Configuring the Red Hat Ceph Storage cluster name
You can deploy the Red Hat Ceph Storage cluster with a name that you configure. The default name is ceph.
Procedure
- Log in to the undercloud node as the stack user.
- Configure the name of the Ceph Storage cluster by using the following command:

  openstack overcloud ceph deploy \
    --cluster <cluster_name>

  Example:

  $ openstack overcloud ceph deploy \
    --cluster central
Keyring files are not created at this time. Keyring files are created during the overcloud deployment and inherit the cluster name configured during this procedure. For more information about overcloud deployment, see Section 8.1, “Initiating overcloud deployment”.
In the example above, the Ceph cluster is named central. The configuration and keyring files for the central Ceph cluster would be created in /etc/ceph
during the deployment process.
[root@oc0-controller-0 ~]# ls -l /etc/ceph/
total 16
-rw-------. 1 root root  63 Mar 26 21:49 central.client.admin.keyring
-rw-------. 1 167  167  201 Mar 26 22:17 central.client.openstack.keyring
-rw-------. 1 167  167  134 Mar 26 22:17 central.client.radosgw.keyring
-rw-r--r--. 1 root root 177 Mar 26 21:49 central.conf
Troubleshooting
The following error may be displayed if you configure a custom name for the Ceph Storage cluster:
monclient: get_monmap_and_config cannot identify monitors to contact because
If this error is displayed, use the following command after Ceph deployment:
cephadm shell --config <configuration_file> --keyring <keyring_file>
For example, if this error was displayed when you configured the cluster name to central, you would use the following command:
cephadm shell --config /etc/ceph/central.conf \
  --keyring /etc/ceph/central.client.admin.keyring
The following commands could also be used as an alternative:

cephadm shell --mount /etc/ceph:/etc/ceph
export CEPH_ARGS='--cluster central'
3.6. Configuring network options with the network data file
The network data file describes the networks used by the Red Hat Ceph Storage cluster.
Procedure
- Log in to the undercloud node as the stack user.
- Create a YAML format file called network_data.yaml that defines the custom network attributes.

Important: With network isolation, the standard network deployment consists of two storage networks that map to the two Ceph networks:
- The storage network, storage, maps to the Ceph network public_network. This network handles storage traffic, such as the RBD traffic from the Compute nodes to the Ceph cluster.
- The storage network, storage_mgmt, maps to the Ceph network cluster_network. This network handles storage management traffic, such as data replication between Ceph OSDs.
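As a sketch of the file this procedure creates, the two storage networks can be defined as follows. This assumes the network data v2 format used by recent director releases; the VLAN IDs, MTU, and subnet ranges are example values only, while the name_lower values match the storage and storage_mgmt names the deploy command expects by default:

```yaml
- name: Storage
  name_lower: storage
  vip: true
  mtu: 1500
  subnets:
    storage_subnet:
      ip_subnet: 172.16.1.0/24
      allocation_pools:
        - start: 172.16.1.4
          end: 172.16.1.250
      vlan: 30
- name: StorageMgmt
  name_lower: storage_mgmt
  vip: true
  mtu: 1500
  subnets:
    storage_mgmt_subnet:
      ip_subnet: 172.16.3.0/24
      allocation_pools:
        - start: 172.16.3.4
          end: 172.16.3.250
      vlan: 40
```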
- Use the openstack overcloud ceph deploy command with the --network-data option to deploy the configuration:

  openstack overcloud ceph deploy \
    deployed_metal.yaml \
    -o deployed_ceph.yaml \
    --network-data network_data.yaml
Important: The openstack overcloud ceph deploy command uses the network data file specified by the --network-data option to determine the networks to be used as the public_network and cluster_network. The command assumes these networks are named storage and storage_mgmt in the network data file unless a different name is specified by the --public-network-name and --cluster-network-name options.

You must use the --network-data option when deploying with network isolation. The default undercloud network (192.168.24.0/24) is used for both the public_network and cluster_network if you do not use this option.
3.7. Configuring network options with a configuration file
Network options can be specified with a configuration file as an alternative to the network data file.
Using this method to configure network options overwrites automatically generated values from network_data.yaml. Ensure you set all four values when using this network configuration method.
Procedure
- Log in to the undercloud node as the stack user.
- Create a standard format initialization file to configure the Ceph cluster. If you have already created a file to include other configuration options, you can add the network configuration to it.
- Add the following parameters to the [global] section of the file:
  - public_network
  - cluster_network
  - ms_bind_ipv4
  - ms_bind_ipv6
Important: Ensure the public_network and cluster_network map to the same networks as storage and storage_mgmt.

The following is an example of a configuration file entry for a network configuration with multiple subnets and custom networking names:
[global]
public_network = 172.16.14.0/24,172.16.15.0/24
cluster_network = 172.16.12.0/24,172.16.13.0/24
ms_bind_ipv4 = True
ms_bind_ipv6 = False
- Use the openstack overcloud ceph deploy command with the --config option to deploy the configuration file:

  $ openstack overcloud ceph deploy \
    --config initial-ceph.conf --network-data network_data.yaml
3.8. Configuring a CRUSH hierarchy for an OSD
You can configure a custom Controlled Replication Under Scalable Hashing (CRUSH) hierarchy during OSD deployment to add the OSD location attribute to the Ceph Storage cluster hosts specification. The location attribute configures where the OSD is placed within the CRUSH hierarchy.

The location attribute sets only the initial CRUSH location. Subsequent changes of the attribute are ignored.
Procedure
- Log in to the undercloud node as the stack user.
- Source the stackrc undercloud credentials file:

  $ source ~/stackrc
- Create a configuration file to define the custom CRUSH hierarchy, for example, crush_hierarchy.yaml.
- Add the following configuration to the file:

  <osd_host>:
    root: default
    rack: <rack_num>
  <osd_host>:
    root: default
    rack: <rack_num>
  <osd_host>:
    root: default
    rack: <rack_num>
- Replace <osd_host> with the hostnames of the nodes where the OSDs are deployed, for example, ceph-0.
- Replace <rack_num> with the number of the rack where the OSDs are deployed, for example, r0.
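As a filled-in sketch, assuming three OSD hosts named ceph-node-00 through ceph-node-02 placed in racks r0 through r2 (the hostnames and rack numbers are illustrative):

```yaml
ceph-node-00:
  root: default
  rack: r0
ceph-node-01:
  root: default
  rack: r1
ceph-node-02:
  root: default
  rack: r2
```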
- Deploy the Ceph cluster with your custom OSD layout:

  openstack overcloud ceph deploy \
    deployed_metal.yaml \
    -o deployed_ceph.yaml \
    --osd-spec osd_spec.yaml \
    --crush-hierarchy crush_hierarchy.yaml
The Ceph cluster is created with the custom OSD layout.
The example file above would result in the following OSD layout.
ID  CLASS  WEIGHT   TYPE NAME                    STATUS  REWEIGHT  PRI-AFF
-1         0.02939  root default
-3         0.00980      rack r0
-2         0.00980          host ceph-node-00
 0    hdd  0.00980              osd.0            up      1.00000   1.00000
-5         0.00980      rack r1
-4         0.00980          host ceph-node-01
 1    hdd  0.00980              osd.1            up      1.00000   1.00000
-7         0.00980      rack r2
-6         0.00980          host ceph-node-02
 2    hdd  0.00980              osd.2            up      1.00000   1.00000
Device classes are automatically detected by Ceph, but CRUSH rules are associated with pools. Pools are still defined and created using the CephCrushRules parameter during the overcloud deployment.
Additional resources
See Red Hat Ceph Storage workload considerations in the Red Hat Ceph Storage Installation Guide for additional information.
3.9. Configuring Ceph service placement options
You can define which nodes run which Ceph services by using a custom roles file. A custom roles file is only necessary when the default role assignments do not suit your environment. For example, when you deploy hyperconverged nodes, the predeployed Compute nodes should be labeled osd with a service type of osd so that they have a placement list containing a list of Compute instances.
Service definitions in the roles_data.yaml file determine which bare metal instance runs which service. By default, the Controller role has the CephMon and CephMgr services and the CephStorage role has the CephOSD service. Unlike most composable services, Ceph services do not require heat output to determine how services are configured. The roles_data.yaml file always determines Ceph service placement even though the Ceph deployment process occurs before heat runs.
Procedure
- Log in to the undercloud node as the stack user.
- Create a YAML format file that defines the custom roles.
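A minimal sketch of one entry in such a custom roles file, assuming a hyperconverged role that co-locates Compute and OSD services; the role name is illustrative and the service list is abbreviated, since a real roles file carries the full default service list for the role:

```yaml
- name: ComputeHCI
  description: Compute node with co-located Ceph OSDs
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::CephOSD
```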
- Deploy the configuration file:

  $ openstack overcloud ceph deploy \
    deployed_metal.yaml \
    -o deployed_ceph.yaml \
    --roles-data custom_roles.yaml
3.10. Configuring SSH user options for Ceph nodes
The openstack overcloud ceph deploy command creates the user and keys and distributes them to the hosts, so it is not necessary to perform the procedures in this section. However, doing so is a supported option.
Cephadm connects to all managed remote Ceph nodes using SSH. The Red Hat Ceph Storage cluster deployment process creates an account and SSH key pair on all overcloud Ceph nodes. The key pair is then given to Cephadm so it can communicate with the nodes.
3.10.1. Creating the SSH user before Red Hat Ceph Storage cluster creation
You can create the SSH user before Ceph cluster creation with the openstack overcloud ceph user enable command.
Procedure
- Log in to the undercloud node as the stack user.
- Create the SSH user:

  $ openstack overcloud ceph user enable

Note: The default user name is ceph-admin. To specify a different user name, use the --cephadm-ssh-user option:

  openstack overcloud ceph user enable --cephadm-ssh-user <custom_user_name>

It is recommended to use the default name and not use the --cephadm-ssh-user parameter.

If the user is created in advance, use the --skip-user-create parameter when executing openstack overcloud ceph deploy.
3.10.2. Disabling the SSH user
Disabling the SSH user disables Cephadm. Disabling Cephadm removes the ability of the service to administer the Ceph cluster and prevents associated commands from working. It also prevents Ceph node overcloud scaling operations and removes all public and private SSH keys.
Procedure
- Log in to the undercloud node as the stack user.
- Use the command openstack overcloud ceph user disable --fsid <FSID> ceph_spec.yaml to disable the SSH user.

Note: The FSID is located in the deployed_ceph.yaml environment file.

Important: The openstack overcloud ceph user disable command is not recommended unless it is necessary to disable Cephadm.

Important: To enable the SSH user and the Cephadm service after they are disabled, use the openstack overcloud ceph user enable --fsid <FSID> ceph_spec.yaml command.

Note: This command requires the path to a Ceph specification file to determine:
- Which hosts require the SSH user.
- Which hosts have the _admin label and require the private SSH key.
- Which hosts require the public SSH key.
For more information about specification files and how to generate them, see Generating the service specification.
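For orientation, host entries in a Ceph specification file follow the cephadm service specification format. A minimal sketch with assumed hostnames and addresses, where the _admin label marks the host that receives the private SSH key:

```yaml
service_type: host
hostname: ceph-0
addr: 192.168.24.11
labels:
  - _admin
---
service_type: host
hostname: ceph-1
addr: 192.168.24.12
```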
3.11. Accessing Ceph Storage containers
Obtaining and modifying container images in the Introduction to containerized services in Red Hat OpenStack Platform guide contains procedures and information on how to prepare the registry and your undercloud and overcloud configuration to use container images. Use the information in this section to adapt these procedures to access Ceph Storage containers.
There are two options for accessing Ceph Storage containers from the overcloud.
3.11.1. Downloading containers directly from a remote registry
You can configure Ceph to download containers directly from a remote registry.
Procedure
- Create a containers-prepare-parameter.yaml file using the procedure Preparing container images.
- Add the remote registry credentials to the containers-prepare-parameter.yaml file using the ContainerImageRegistryCredentials parameter, as described in Obtaining container images from private registries.
- When you deploy Ceph, pass the containers-prepare-parameter.yaml file using the openstack overcloud ceph deploy command:

  openstack overcloud ceph deploy \
    --container-image-prepare containers-prepare-parameter.yaml

Note: If you do not cache the containers on the undercloud, as described in Caching containers on the undercloud, pass the same containers-prepare-parameter.yaml file to the openstack overcloud ceph deploy command when you deploy Ceph. This caches containers on the undercloud.
Result
The credentials in the containers-prepare-parameter.yaml file are used by the cephadm command to authenticate to the remote registry and download the Ceph Storage container.
3.11.2. Caching containers on the undercloud
The procedure Modifying images during preparation describes using the following command:
sudo openstack tripleo container image prepare \
  -e ~/containers-prepare-parameter.yaml \
If you do not use the --container-image-prepare
option to provide authentication credentials to the openstack overcloud ceph deploy
command and directly download the Ceph containers from a remote registry, as described in Downloading containers directly from a remote registry, you must run the sudo openstack tripleo container image prepare
command before deploying Ceph.