Chapter 2. Preparing overcloud nodes

The scenario described in this chapter consists of six nodes in the overcloud:

  • Three Controller nodes with high availability.
  • Three Compute nodes.

The director integrates a separate Ceph Storage cluster with its own nodes into the overcloud. You manage this cluster independently from the overcloud. For example, you scale the Ceph Storage cluster using the Ceph management tools, not through the OpenStack Platform director. For more information, see the Red Hat Ceph Storage documentation library.

2.1. Configuring the existing Ceph Storage cluster

  1. Create the following pools in your Ceph cluster, as relevant to your environment:

    • volumes: Storage for OpenStack Block Storage (cinder)
    • images: Storage for OpenStack Image Storage (glance)
    • vms: Storage for instances
    • backups: Storage for OpenStack Block Storage Backup (cinder-backup)
    • metrics: Storage for OpenStack Telemetry Metrics (gnocchi)

      Use the following commands as a guide:

      [root@ceph ~]# ceph osd pool create volumes PGNUM
      [root@ceph ~]# ceph osd pool create images PGNUM
      [root@ceph ~]# ceph osd pool create vms PGNUM
      [root@ceph ~]# ceph osd pool create backups PGNUM
      [root@ceph ~]# ceph osd pool create metrics PGNUM

      If your overcloud deploys the Shared File System (manila) backed by CephFS, create CephFS data and metadata pools as well:

      [root@ceph ~]# ceph osd pool create manila_data PGNUM
      [root@ceph ~]# ceph osd pool create manila_metadata PGNUM

      Replace PGNUM with the number of placement groups. Red Hat recommends approximately 100 placement groups per OSD: multiply the total number of OSDs by 100, then divide by the number of replicas (osd pool default size). You can also use the Ceph Placement Groups (PGs) per Pool Calculator to determine a suitable value.
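
      For instance, a minimal worked calculation, assuming a hypothetical cluster with 30 OSDs and a replica count of 3:

      [root@ceph ~]# echo $(( 30 * 100 / 3 ))
      1000

      A common convention, and the one the calculator follows, is to round the result to the nearest power of two (1024 in this case).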

  2. Create a client.openstack user in your Ceph cluster with the following capabilities:

    • cap_mgr: allow *
    • cap_mon: profile rbd
    • cap_osd: profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics

      Use the following command as a guide:

      [root@ceph ~]# ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics'

  3. Note the Ceph client key created for the client.openstack user:

    [root@ceph ~]# ceph auth list
    ...
    [client.openstack]
    	key = AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==
    	caps mgr = "allow *"
    	caps mon = "profile rbd"
    	caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics"
    ...

    The key value in the example, AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==, is your Ceph client key.
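
    If you prefer to capture only the key, the ceph auth get-key subcommand (the same subcommand that step 5 uses for the manila user) prints just the key value:

    [root@ceph ~]# ceph auth get-key client.openstack
    AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==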

  4. If your overcloud deploys the Shared File System backed by CephFS, create the client.manila user in your Ceph cluster with the following capabilities:

    • cap_mds: allow *
    • cap_mgr: allow *
    • cap_mon: allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"
    • cap_osd: allow rw

      Use the following command as a guide:

      [root@ceph ~]# ceph auth add client.manila mon 'allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"' osd 'allow rw' mds 'allow *' mgr 'allow *'

  5. Note the manila client name and the key value to use in overcloud deployment templates:

    [root@ceph ~]# ceph auth get-key client.manila
    AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==

  6. Note the file system ID of your Ceph Storage cluster. This value is specified with the fsid setting in the [global] section of your cluster configuration file:

    [global]
    fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
    ...
    Note

    For more information about the Ceph Storage cluster configuration file, see Ceph configuration in the Red Hat Ceph Storage Configuration Guide.
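
    As an alternative to reading the configuration file, the ceph CLI can report the same value directly. A quick check, assuming you run it from a node with access to the cluster:

    [root@ceph ~]# ceph fsid
    4b5c8c0a-ff60-454b-a1b4-9747aa737d19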

The Ceph client key and file system ID, as well as the manila client ID and key, will all be used later in Chapter 3, Integrating with the existing Ceph Storage cluster.

2.2. Initializing the stack user

Log into the director host as the stack user and run the following command to initialize your director configuration:

$ source ~/stackrc

This sets up environment variables containing authentication details to access the director’s CLI tools.
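
To confirm that the credentials are loaded, you can list the OpenStack client variables that stackrc exports. The exact set varies between releases, but they share the OS_ prefix:

$ env | grep '^OS_'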

2.3. Registering nodes

A node definition template (instackenv.json) is a JSON-format file that contains the hardware and power management details for registering nodes. For example:

{
    "nodes":[
        {
            "mac":[
                "bb:bb:bb:bb:bb:bb"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.205"
        },
        {
            "mac":[
                "cc:cc:cc:cc:cc:cc"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.206"
        },
        {
            "mac":[
                "dd:dd:dd:dd:dd:dd"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.207"
        },
        {
            "mac":[
                "ee:ee:ee:ee:ee:ee"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.208"
        },
        {
            "mac":[
                "ff:ff:ff:ff:ff:ff"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.209"
        },
        {
            "mac":[
                "gg:gg:gg:gg:gg:gg"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.210"
        }
    ]
}
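
Before you import the file, it is worth validating the JSON syntax, because a missing or stray comma causes the import to fail. A minimal check, using the json module that ships with Python (substitute python3 if that is the interpreter on your system):

$ python -m json.tool ~/instackenv.json > /dev/null && echo "Syntax OK"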

Procedure

  1. After you create the inventory file, save the file to the home directory of the stack user (/home/stack/instackenv.json).
  2. Initialize the stack user, then import the instackenv.json inventory file into the director:

    $ source ~/stackrc
    $ openstack overcloud node import ~/instackenv.json

    The openstack overcloud node import command imports the inventory file and registers each node with the director.

  3. Assign the kernel and ramdisk images to each node:

    $ openstack overcloud node configure <node>

The nodes are now registered and configured in the director.

2.4. Manually tagging the nodes

After you register each node, you must inspect the hardware and tag the node into a specific profile. Use profile tags to match your nodes to flavors, and then assign flavors to deployment roles.

To inspect and tag new nodes, complete the following steps:

  1. Trigger hardware introspection to retrieve the hardware attributes of each node:

    $ openstack overcloud node introspect --all-manageable --provide

    • The --all-manageable option introspects only the nodes that are in a manageable state. In this example, all nodes are in a manageable state.
    • The --provide option resets all nodes to an available state after introspection.

      Important

      Ensure that this process completes successfully. This process usually takes 15 minutes for bare metal nodes.
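
    To confirm that introspection finished for every node, one option is to check the status reported by the inspector client; a quick check sketch:

    $ openstack baremetal introspection list

    Each node should show a finish time and no error.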

  2. Retrieve a list of your nodes to identify their UUIDs:

    $ openstack baremetal node list

  3. Add a profile option to the properties/capabilities parameter for each node to manually tag a node to a specific profile. The addition of the profile option tags the nodes into each respective profile.

    Note

    As an alternative to manual tagging, use the Automated Health Check (AHC) Tools to automatically tag larger numbers of nodes based on benchmarking data.

    For example, to tag three nodes to use the control profile and another three nodes to use the compute profile, run:

    $ openstack baremetal node set 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 --property capabilities="profile:control,boot_option:local"
    $ openstack baremetal node set 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a --property capabilities="profile:control,boot_option:local"
    $ openstack baremetal node set 5e3b2f50-fcd9-4404-b0a2-59d79924b38e --property capabilities="profile:control,boot_option:local"
    $ openstack baremetal node set 484587b2-b3b3-40d5-925b-a26a2fa3036f --property capabilities="profile:compute,boot_option:local"
    $ openstack baremetal node set d010460b-38f2-4800-9cc4-d69f0d067efe --property capabilities="profile:compute,boot_option:local"
    $ openstack baremetal node set d930e613-3e14-44b9-8240-4f3559801ea6 --property capabilities="profile:compute,boot_option:local"
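
    To confirm the assignments, you can list the profile that each node now reports; a quick check using the director CLI:

    $ openstack overcloud profiles list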
