Chapter 2. Preparing overcloud nodes

The overcloud deployed in this scenario consists of six nodes:

  • Three Controller nodes with high availability.
  • Three Compute nodes.

Director integrates a separate Ceph Storage cluster with its own nodes into the overcloud. You manage this cluster independently from the overcloud. For example, you scale the Ceph Storage cluster with the Ceph management tools, not through director. For more information, see the Red Hat Ceph Storage documentation library.

2.1. Pre-deployment validations for Ceph Storage

To help avoid overcloud deployment failures, verify that the required packages exist on your servers.

2.1.1. Verifying the ceph-ansible package version

The undercloud contains Ansible-based validations that you can run to identify potential problems before you deploy the overcloud. These validations can help you avoid overcloud deployment failures by identifying common problems before they happen.

Procedure

Verify that the correct version of the ceph-ansible package is installed:

$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-ansible-installed.yaml

2.1.2. Verifying packages for pre-provisioned nodes

Ceph can serve only overcloud nodes that have a certain set of packages installed. When you use pre-provisioned nodes, you can verify that these packages are present.

For more information about pre-provisioned nodes, see Configuring a basic overcloud with pre-provisioned nodes.

Procedure

Verify that the servers contain the required packages:

$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-dependencies-installed.yaml

2.2. Configuring the existing Ceph Storage cluster

Create OSD pools, define capabilities, and create keys and IDs for your Ceph Storage cluster.

Procedure

  1. Create the following pools in your Ceph cluster, as relevant to your environment:

    • volumes: Storage for OpenStack Block Storage (cinder)
    • images: Storage for OpenStack Image Storage (glance)
    • vms: Storage for instances
    • backups: Storage for OpenStack Block Storage Backup (cinder-backup)
    • metrics: Storage for OpenStack Telemetry Metrics (gnocchi)

      Use the following commands as a guide:

      [root@ceph ~]# ceph osd pool create volumes <pgnum>
      [root@ceph ~]# ceph osd pool create images <pgnum>
      [root@ceph ~]# ceph osd pool create vms <pgnum>
      [root@ceph ~]# ceph osd pool create backups <pgnum>
      [root@ceph ~]# ceph osd pool create metrics <pgnum>

      If your overcloud deploys the Shared File Systems service (manila) backed by CephFS, also create CephFS data and metadata pools:

      [root@ceph ~]# ceph osd pool create manila_data <pgnum>
      [root@ceph ~]# ceph osd pool create manila_metadata <pgnum>

    Replace <pgnum> with the number of placement groups. As a guideline, use approximately 100 placement groups per OSD: multiply the total number of OSDs by 100 and divide by the number of replicas (the osd pool default size). You can also use the Ceph Placement Groups (PGs) per Pool Calculator to determine a suitable value.
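
    For example, with 30 OSDs and 3 replicas (hypothetical values), the calculation is (30 * 100) / 3 = 1000, which you typically round up to the next power of two:

      [root@ceph ~]# ceph osd pool create volumes 1024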

  2. Create a client.openstack user in your Ceph cluster with the following capabilities:

    • cap_mgr: allow *
    • cap_mon: profile rbd
    • cap_osd: profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics

      Use the following command as a guide:

      [root@ceph ~]# ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics'
  3. Note the Ceph client key created for the client.openstack user:

    [root@ceph ~]# ceph auth list
    ...
    [client.openstack]
    	key = AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==
    	caps mgr = "allow *"
    	caps mon = "profile rbd"
    	caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics"
    ...

    The key value in the example, AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==, is your Ceph client key.
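
    To print only the key for the client.openstack user, you can also query it directly:

    [root@ceph ~]# ceph auth get-key client.openstack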

  4. If your overcloud deploys the Shared File Systems service backed by CephFS, create the client.manila user in your Ceph cluster with the following capabilities:

    • cap_mds: allow *
    • cap_mgr: allow *
    • cap_mon: allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"
    • cap_osd: allow rw

      Use the following command as a guide:

      [root@ceph ~]# ceph auth add client.manila mon 'allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"' osd 'allow rw' mds 'allow *' mgr 'allow *'
  5. Note the manila client name and the key value to use in overcloud deployment templates:

    [root@ceph ~]# ceph auth get-key client.manila
         AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==
  6. Note the file system ID of your Ceph Storage cluster. This value is specified with the fsid setting in the configuration file of your cluster in the [global] section:

    [global]
    fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
    ...
Note

For more information about the Ceph Storage cluster configuration file, see Ceph configuration in the Red Hat Ceph Storage Configuration Guide.
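
You can also print the file system ID directly on a Ceph node; the output must match the fsid value in the cluster configuration file:

    [root@ceph ~]# ceph fsid
    4b5c8c0a-ff60-454b-a1b4-9747aa737d19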

Use the Ceph client key and file system ID, as well as the manila client name and key, in the following procedure: Section 3.1, “Installing the ceph-ansible package”.

2.3. Initializing the stack user

Initialize the stack user to configure the authentication details used to access director CLI tools.

Procedure

  1. Log in to the director host as the stack user.
  2. Enter the following command to initialize your director configuration:

    $ source ~/stackrc
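
    As an optional check, confirm that the director authentication variables are now set in your environment (the exact variable names vary between releases):

    $ env | grep -E '^OS_'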

2.4. Registering nodes

An inventory file contains hardware and power management details about nodes. Create an inventory file to configure and register nodes in director.

Procedure

  1. Create an inventory file. Use the example node definition template, instackenv.json, as a reference:

    {
        "nodes":[
            {
                "mac":[
                    "bb:bb:bb:bb:bb:bb"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"pxe_ipmitool",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.205"
            },
            {
                "mac":[
                    "cc:cc:cc:cc:cc:cc"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"pxe_ipmitool",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.206"
            },
            {
                "mac":[
                    "dd:dd:dd:dd:dd:dd"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"pxe_ipmitool",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.207"
            },
            {
                "mac":[
                    "ee:ee:ee:ee:ee:ee"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"pxe_ipmitool",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.208"
            },
            {
                "mac":[
                    "ff:ff:ff:ff:ff:ff"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"pxe_ipmitool",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.209"
            },
            {
                "mac":[
                    "gg:gg:gg:gg:gg:gg"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"pxe_ipmitool",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.210"
            }
        ]
    }
  2. Save the file to the home directory of the stack user: /home/stack/instackenv.json.
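
    Before you import the file, you can optionally confirm that it is valid JSON, for example (assuming python3 is available on the undercloud):

    $ python3 -m json.tool ~/instackenv.json
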
  3. Initialize the stack user, then import the instackenv.json inventory file into director:

    $ source ~/stackrc
    $ openstack overcloud node import ~/instackenv.json

    The openstack overcloud node import command imports the inventory file and registers each node with the director.

  4. Assign the kernel and ramdisk images to each node:

    $ openstack overcloud node configure <node>

Result

The nodes are registered and configured in director.

2.5. Manually tagging nodes

After you register each node, you must inspect the hardware and tag the node into a specific profile. Use profile tags to match your nodes to flavors and then assign flavors to deployment roles.

Procedure

  1. Trigger hardware introspection to retrieve the hardware attributes of each node:

    $ openstack overcloud node introspect --all-manageable --provide

    • The --all-manageable option introspects only the nodes that are in the manageable state. In this example, all nodes are in the manageable state.
    • The --provide option resets all nodes to the available state after introspection.

      Important

      Ensure that this process completes successfully. This process usually takes 15 minutes for bare metal nodes.
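
    To monitor progress while introspection runs, you can check the status from a separate terminal (assuming the ironic inspector client is installed on the undercloud):

    $ openstack baremetal introspection list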

  2. Retrieve a list of your nodes to identify their UUIDs:

    $ openstack baremetal node list
  3. To manually tag a node to a specific profile, add a profile option to the properties/capabilities parameter of that node.

    As an alternative to manual tagging, you can configure the Automated Health Check (AHC) Tools to automatically tag larger numbers of nodes based on benchmarking data.

    For example, to tag three nodes to use the control profile and another three nodes to use the compute profile, create the following profile options:

    $ ironic node-update 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 add properties/capabilities='profile:control,boot_option:local'
    $ ironic node-update 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a add properties/capabilities='profile:control,boot_option:local'
    $ ironic node-update 5e3b2f50-fcd9-4404-b0a2-59d79924b38e add properties/capabilities='profile:control,boot_option:local'
    $ ironic node-update 484587b2-b3b3-40d5-925b-a26a2fa3036f add properties/capabilities='profile:compute,boot_option:local'
    $ ironic node-update d010460b-38f2-4800-9cc4-d69f0d067efe add properties/capabilities='profile:compute,boot_option:local'
    $ ironic node-update d930e613-3e14-44b9-8240-4f3559801ea6 add properties/capabilities='profile:compute,boot_option:local'
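
    If you use the unified openstack client instead of the legacy ironic client, the equivalent command for the first node is as follows (a sketch; substitute your own node UUIDs):

    $ openstack baremetal node set 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 --property capabilities='profile:control,boot_option:local'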