Chapter 2. Preparing Ceph Storage nodes for overcloud deployment
All nodes in this scenario are bare metal systems that use IPMI for power management. These nodes do not require an operating system because director copies a Red Hat Enterprise Linux 8 image to each node. Additionally, the Ceph Storage services on these nodes are containerized. Director communicates with each node through the Provisioning network during the introspection and provisioning processes. All nodes connect to this network through the native VLAN.
2.1. Cleaning Ceph Storage node disks
The Ceph Storage OSDs and journal partitions require GPT disk labels. This means that the additional disks on Ceph Storage nodes require conversion to GPT before you install the Ceph OSD services. You must delete all metadata from the disks so that director can set GPT labels on them.
You can configure director to delete all disk metadata by default. With this option, the Bare Metal Provisioning service runs an additional step to boot the nodes and clean the disks each time a node is set to available. This process adds an additional power cycle after the first introspection and before each deployment. The Bare Metal Provisioning service uses the wipefs --force --all command to perform the clean.
Procedure
Add the following setting to your /home/stack/undercloud.conf file:

clean_nodes=true

After you set this option, run the openstack undercloud install command to apply this configuration change.

Warning: The wipefs --force --all command deletes all data and metadata on the disk, but does not perform a secure erase. A secure erase takes much longer.
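If you do not want to enable automated cleaning globally, you can clean the disk metadata of a single node instead. The following is a minimal sketch that assumes the node is already registered; <node> is a placeholder for the node name or UUID:

$ # Manual cleaning requires the node to be in the manageable state
$ openstack baremetal node manage <node>
$ openstack baremetal node clean <node> --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'
$ openstack baremetal node provide <node>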
2.2. Registering nodes
Procedure
Import a node inventory file (instackenv.json) in JSON format to director so that director can communicate with the nodes. This inventory file contains hardware and power management details that director can use to register nodes:

{
  "nodes": [
    { "mac": ["b1:b1:b1:b1:b1:b1"], "cpu": "4", "memory": "6144", "disk": "40", "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin", "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.205" },
    { "mac": ["b2:b2:b2:b2:b2:b2"], "cpu": "4", "memory": "6144", "disk": "40", "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin", "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.206" },
    { "mac": ["b3:b3:b3:b3:b3:b3"], "cpu": "4", "memory": "6144", "disk": "40", "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin", "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.207" },
    { "mac": ["c1:c1:c1:c1:c1:c1"], "cpu": "4", "memory": "6144", "disk": "40", "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin", "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.208" },
    { "mac": ["c2:c2:c2:c2:c2:c2"], "cpu": "4", "memory": "6144", "disk": "40", "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin", "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.209" },
    { "mac": ["c3:c3:c3:c3:c3:c3"], "cpu": "4", "memory": "6144", "disk": "40", "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin", "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.210" },
    { "mac": ["d1:d1:d1:d1:d1:d1"], "cpu": "4", "memory": "6144", "disk": "40", "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin", "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.211" },
    { "mac": ["d2:d2:d2:d2:d2:d2"], "cpu": "4", "memory": "6144", "disk": "40", "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin", "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.212" },
    { "mac": ["d3:d3:d3:d3:d3:d3"], "cpu": "4", "memory": "6144", "disk": "40", "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin", "pm_password": "p@55w0rd!", "pm_addr": "192.0.2.213" }
  ]
}
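Before you import the file, you can optionally check it for formatting errors. The --validate-only option, if it is available in your release of the import command, parses the inventory file and exits without registering any nodes:

$ openstack overcloud node import --validate-only ~/instackenv.json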
After you create the inventory file, save the file to the home directory of the stack user (/home/stack/instackenv.json). Initialize the stack user, then import the instackenv.json inventory file into director:

$ source ~/stackrc
$ openstack overcloud node import ~/instackenv.json

The openstack overcloud node import command imports the inventory file and registers each node with director.

Assign the kernel and ramdisk images to each node:
$ openstack overcloud node configure <node>
The nodes are registered and configured in director.
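To confirm that registration succeeded, list the nodes and their provision states. The field selection in this sketch is one option among many:

$ openstack baremetal node list --fields uuid name provision_state power_state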
2.3. Verifying available Red Hat Ceph Storage packages
To help avoid overcloud deployment failures, verify that the required packages exist on your servers.
2.3.1. Verifying the ceph-ansible package version
The undercloud contains Ansible-based validations that you can run to identify potential problems before you deploy the overcloud. These validations can help you avoid overcloud deployment failures by identifying common problems before they happen.
Procedure
Verify that the ceph-ansible package version you want is installed:

$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-ansible-installed.yaml
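You can also query the installed version directly on the undercloud. The rpm -q command prints the full name, version, and release of the package:

$ rpm -q ceph-ansible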
2.3.2. Verifying packages for pre-provisioned nodes
Red Hat Ceph Storage (RHCS) can service only overcloud nodes that have a certain set of packages installed. When you use pre-provisioned nodes, you can verify the presence of these packages.
For more information about pre-provisioned nodes, see Configuring a basic overcloud with pre-provisioned nodes.
Procedure
Verify that the pre-provisioned nodes contain the required packages:
$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-dependencies-installed.yaml
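If you want to check only a subset of hosts, ansible-playbook accepts a --limit pattern. The group name overcloud in this sketch is an assumption; use a group or host name that exists in your generated inventory:

$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory --limit overcloud /usr/share/ansible/validation-playbooks/ceph-dependencies-installed.yaml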
2.4. Manually tagging nodes into profiles
After you register each node, you must inspect the hardware and tag the node into a specific profile. Use profile tags to match your nodes to flavors, and then assign flavors to deployment roles.
Procedure
Trigger hardware introspection to retrieve the hardware attributes of each node:
$ openstack overcloud node introspect --all-manageable --provide
The --all-manageable option introspects only the nodes that are in a manageable state. In this example, all nodes are in a manageable state. The --provide option resets all nodes to an available state after introspection.

Important: Ensure that this process completes successfully. This process usually takes 15 minutes for bare metal nodes.
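To watch introspection progress from a separate terminal, you can poll the introspection status. This is an optional check, not a required step:

$ openstack baremetal introspection list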
Retrieve a list of your nodes to identify their UUIDs:
$ openstack baremetal node list
Add a profile option to the properties/capabilities parameter for each node to manually tag a node to a specific profile. The addition of the profile option tags the nodes into each respective profile.

Note: As an alternative to manual tagging, use the Automated Health Check (AHC) Tools to automatically tag larger numbers of nodes based on benchmarking data.
For example, a typical deployment contains three profiles: control, compute, and ceph-storage. Enter the following commands to tag three nodes for each profile:

$ openstack baremetal node set --property capabilities='profile:control,boot_option:local' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0
$ openstack baremetal node set --property capabilities='profile:control,boot_option:local' 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a
$ openstack baremetal node set --property capabilities='profile:control,boot_option:local' 5e3b2f50-fcd9-4404-b0a2-59d79924b270
$ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' 484587b2-b3b3-40d5-925b-a26a2fa3036f
$ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' d010460b-38f2-4800-9cc4-d69f0d067efe
$ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' d930e613-3e14-44b9-8240-4f3559801ea6
$ openstack baremetal node set --property capabilities='profile:ceph-storage,boot_option:local' 2f341a4c-8c4a-42a2-b21a-6dbabe4c3d9f
$ openstack baremetal node set --property capabilities='profile:ceph-storage,boot_option:local' 7dbb6c84-51c0-4f82-9f6a-0c3e1b24d7aa
$ openstack baremetal node set --property capabilities='profile:ceph-storage,boot_option:local' 91d6f2b8-3ce1-4fd2-a2b4-8f7e5d90c1be
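After you tag the nodes, you can verify the profile assignments. The following command, a quick check rather than a required step, lists each flavor together with the nodes that match it:

$ openstack overcloud profiles list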
You can also configure a new custom profile to tag a node for the Ceph MON and Ceph MDS services. For more information, see Chapter 3, Deploying Ceph services on dedicated nodes.
2.5. Defining the root disk for multi-disk clusters
When a node has multiple disks, director must identify the root disk during provisioning. For example, most Ceph Storage nodes use multiple disks. By default, director writes the overcloud image to the root disk during the provisioning process.
There are several properties that you can define to help director identify the root disk:
- model (String): Device identifier.
- vendor (String): Device vendor.
- serial (String): Disk serial number.
- hctl (String): Host:Channel:Target:Lun for SCSI.
- size (Integer): Size of the device in GB.
- wwn (String): Unique storage identifier.
- wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
- wwn_vendor_extension (String): Unique vendor storage identifier.
- rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
- name (String): The name of the device, for example: /dev/sdb1.
Use the name property only for devices with persistent names. Do not use name to set the root disk for any other devices because this value can change when the node boots.
You can specify the root device using its serial number.
Procedure
Check the disk information from the hardware introspection of each node. Run the following command to display the disk information of a node:
(undercloud)$ openstack baremetal introspection data save 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 | jq ".inventory.disks"
For example, the data for one node might show three disks:
[ { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sda", "wwn_vendor_extension": "0x1ea4dcc412a9632b", "wwn_with_extension": "0x61866da04f3807001ea4dcc412a9632b", "model": "PERC H330 Mini", "wwn": "0x61866da04f380700", "serial": "61866da04f3807001ea4dcc412a9632b" } { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sdb", "wwn_vendor_extension": "0x1ea4e13c12e36ad6", "wwn_with_extension": "0x61866da04f380d001ea4e13c12e36ad6", "model": "PERC H330 Mini", "wwn": "0x61866da04f380d00", "serial": "61866da04f380d001ea4e13c12e36ad6" } { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sdc", "wwn_vendor_extension": "0x1ea4e31e121cfb45", "wwn_with_extension": "0x61866da04f37fc001ea4e31e121cfb45", "model": "PERC H330 Mini", "wwn": "0x61866da04f37fc00", "serial": "61866da04f37fc001ea4e31e121cfb45" } ]
Enter the openstack baremetal node set --property root_device= command to set the root disk for a node. Include the most appropriate hardware attribute value to define the root disk:

(undercloud)$ openstack baremetal node set --property root_device='{"serial":"<serial_number>"}' <node-uuid>

For example, to set the root device to disk 2, which has the serial number 61866da04f380d001ea4e13c12e36ad6, enter the following command:

(undercloud)$ openstack baremetal node set --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0
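You can use any of the properties listed earlier as the hint. For example, the following sketch selects the same disk by the wwn value from the introspection data shown above:

(undercloud)$ openstack baremetal node set --property root_device='{"wwn": "0x61866da04f380d00"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0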
Note: Ensure that you configure the BIOS of each node to include booting from the root disk that you choose. Configure the boot order to boot from the network first, then to boot from the root disk.
Director identifies the specific disk to use as the root disk. When you run the openstack overcloud deploy command, director provisions and writes the overcloud image to the root disk.
2.6. Using the overcloud-minimal image to avoid using a Red Hat subscription entitlement
By default, director writes the QCOW2 overcloud-full image to the root disk during the provisioning process. The overcloud-full image requires a valid Red Hat subscription. However, you can also use the overcloud-minimal image, for example, to provision a bare OS where you do not want to run any other OpenStack services or consume your subscription entitlements.
A common use case for this occurs when you want to provision nodes with only Ceph daemons. For this and similar use cases, you can use the overcloud-minimal
image option to avoid reaching the limit of your paid Red Hat subscriptions. For information about how to obtain the overcloud-minimal
image, see Obtaining images for overcloud nodes.
A Red Hat OpenStack Platform (RHOSP) subscription contains Open vSwitch (OVS), but core services, such as OVS, are not available when you use the overcloud-minimal
image. OVS is not required to deploy Ceph Storage nodes. Use linux_bond
instead of ovs_bond
to define bonds. For more information about linux_bond
, see Linux bonding options.
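The following is a minimal sketch of a linux_bond definition in a TripleO NIC configuration template. The bond name, bonding options, and member interface names (nic2, nic3) are assumptions that you must adapt to your environment:

- type: linux_bond
  name: bond1
  # mode=active-backup is one example; choose bonding options for your network
  bonding_options: "mode=active-backup"
  members:
  - type: interface
    name: nic2  # assumed first member interface
    primary: true
  - type: interface
    name: nic3  # assumed second member interface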
Procedure
To configure director to use the overcloud-minimal image, create an environment file that contains the following image definition:

parameter_defaults:
  <roleName>Image: overcloud-minimal

Replace <roleName> with the name of the role, and append Image to the name of the role. The following example shows an overcloud-minimal image for Ceph Storage nodes:

parameter_defaults:
  CephStorageImage: overcloud-minimal

In the roles_data.yaml role definition file, set the rhsm_enforce parameter to False:

rhsm_enforce: False

Pass the environment file to the openstack overcloud deploy command.
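For example, assuming that you saved the image definition in an environment file named /home/stack/templates/overcloud-minimal-image.yaml (a hypothetical path), the deployment command might look like the following sketch:

$ # -e passes the custom environment file; include your other environment files as usual
$ openstack overcloud deploy --templates -e /home/stack/templates/overcloud-minimal-image.yaml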
The overcloud-minimal image supports only standard Linux bridges and not OVS because OVS is an OpenStack service that requires a Red Hat OpenStack Platform subscription entitlement.