Chapter 2. Preparing Overcloud Nodes
The scenario described in this chapter consists of six nodes in the Overcloud:
- Three Controller nodes with high availability.
- Three Compute nodes.
The director integrates a separate Ceph Storage cluster with its own nodes into the Overcloud. You manage this cluster independently of the Overcloud; for example, you scale the Ceph Storage cluster using the Ceph management tools, not through the OpenStack Platform director. Consult the Red Hat Ceph Storage documentation for more information.
2.1. Configuring the Existing Ceph Storage Cluster
Create the following pools in your Ceph cluster, as relevant to your environment:
- volumes: Storage for OpenStack Block Storage (cinder)
- images: Storage for OpenStack Image Storage (glance)
- vms: Storage for instances
- backups: Storage for OpenStack Block Storage Backup (cinder-backup)
- metrics: Storage for OpenStack Telemetry Metrics (gnocchi)
Use the following commands as a guide:
[root@ceph ~]# ceph osd pool create volumes PGNUM
[root@ceph ~]# ceph osd pool create images PGNUM
[root@ceph ~]# ceph osd pool create vms PGNUM
[root@ceph ~]# ceph osd pool create backups PGNUM
[root@ceph ~]# ceph osd pool create metrics PGNUM
Replace PGNUM with the number of placement groups. We recommend approximately 100 placement groups per OSD: multiply the total number of OSDs by 100 and divide by the number of replicas (osd pool default size). You can also use the Ceph Placement Groups (PGs) per Pool Calculator to determine a suitable value.
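As a worked example of this calculation (the OSD count and replica count here are hypothetical, not values from this scenario): a cluster with 12 OSDs and an osd pool default size of 3 gives 12 * 100 / 3 = 400, which is typically rounded up to the next power of two, so you would set PGNUM to 512.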
Create a client.openstack user in your Ceph cluster with the following capabilities:
- cap_mon: allow r
- cap_osd: allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=backups, allow rwx pool=metrics
Use the following command as a guide:
[root@ceph ~]# ceph auth add client.openstack mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=backups, allow rwx pool=metrics'
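To confirm that the user was created with the intended capabilities, you can optionally display it afterwards (this verification step is an addition to the documented procedure):
[root@ceph ~]# ceph auth get client.openstack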
Next, note the Ceph client key created for the client.openstack user:
[root@ceph ~]# ceph auth list
...
client.openstack
    key: AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=backups, allow rwx pool=metrics
...
The key value here (AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==) is your Ceph client key.
Finally, note the file system ID of your Ceph Storage cluster. This value is specified with the fsid setting in the configuration file of your cluster (under the [global] section):
[global]
fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
...
Note: For more information about the Ceph Storage cluster configuration file, see Configuration Reference (from the Red Hat Ceph Storage Configuration Guide).
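As an optional shortcut (not part of the documented procedure), you can also print the file system ID directly from any host with client access to the cluster:
[root@ceph ~]# ceph fsid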
The Ceph client key and file system ID will both be used later in Chapter 3, Integrating with the Existing Ceph Cluster.
2.2. Initializing the Stack User
Log into the director host as the stack user and run the following command to initialize your director configuration:
$ source ~/stackrc
This sets up environment variables containing authentication details to access the director’s CLI tools.
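To confirm that the environment was sourced correctly, you can optionally check that the authentication variables are now set (an informal check, not part of the documented procedure):
$ env | grep OS_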
2.3. Registering Nodes
A node definition template (instackenv.json) is a JSON-format file that contains the hardware and power management details for registering nodes. For example:
{
"nodes":[
{
"mac":[
"bb:bb:bb:bb:bb:bb"
],
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"pxe_ipmitool",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.0.2.205"
},
{
"mac":[
"cc:cc:cc:cc:cc:cc"
],
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"pxe_ipmitool",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.0.2.206"
},
{
"mac":[
"dd:dd:dd:dd:dd:dd"
],
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"pxe_ipmitool",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.0.2.207"
},
{
"mac":[
"ee:ee:ee:ee:ee:ee"
],
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"pxe_ipmitool",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.0.2.208"
    },
{
"mac":[
"ff:ff:ff:ff:ff:ff"
],
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"pxe_ipmitool",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.0.2.209"
    },
{
"mac":[
"gg:gg:gg:gg:gg:gg"
],
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"pxe_ipmitool",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.0.2.210"
}
]
}
After creating the template, save the file to the stack user’s home directory (/home/stack/instackenv.json). Initialize the stack user, then import instackenv.json into the director:
$ source ~/stackrc
$ openstack overcloud node import ~/instackenv.json
This imports the template and registers each node from the template into the director.
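If the import fails because the template is malformed (for example, a missing comma between node entries), you can check the JSON syntax with a standard tool (this troubleshooting tip is an addition to the documented procedure):
$ python -m json.tool ~/instackenv.json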
Assign the kernel and ramdisk images to each node:
$ openstack overcloud node configure <node>
The nodes are now registered and configured in the director.
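To verify that the kernel and ramdisk images were assigned, you can optionally inspect a node's driver information; the exact field contents depend on your Ironic configuration, so treat this as an assumption rather than part of the documented procedure:
$ openstack baremetal node show <node> --fields driver_info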
2.4. Manually Tagging the Nodes
After registering each node, you need to inspect its hardware and tag the node into a specific profile. Profile tags match your nodes to flavors, and each flavor is in turn assigned to a deployment role.
To inspect and tag new nodes, follow these steps:
Trigger hardware introspection to retrieve the hardware attributes of each node:
$ openstack overcloud node introspect --all-manageable --provide
- The --all-manageable option introspects only nodes in a managed state. In this example, it is all of them.
- The --provide option resets all nodes to an active state after introspection.
Important: Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes.
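While introspection runs, you can optionally monitor its progress from a separate terminal. Depending on your client version, a command along the following lines lists the introspection status of each node (this monitoring step is an addition to the documented procedure):
$ source ~/stackrc
$ openstack baremetal introspection list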
Retrieve a list of your nodes to identify their UUIDs:
$ openstack baremetal node list
Add a profile option to the properties/capabilities parameter for each node to manually tag it to a specific profile.
For example, to tag three nodes to use the control profile and another three nodes to use the compute profile, run:
$ ironic node-update 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 5e3b2f50-fcd9-4404-b0a2-59d79924b38e add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 484587b2-b3b3-40d5-925b-a26a2fa3036f add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update d010460b-38f2-4800-9cc4-d69f0d067efe add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update d930e613-3e14-44b9-8240-4f3559801ea6 add properties/capabilities='profile:compute,boot_option:local'
The addition of the profile option tags the nodes into their respective profiles.
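To check the assignments, you can optionally list the profile that each node now reports (this verification step is an addition to the documented procedure):
$ openstack overcloud profiles list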
As an alternative to manual tagging, use the Automated Health Check (AHC) Tools to automatically tag larger numbers of nodes based on benchmarking data.
