Chapter 6. Configuring overcloud software with the director Operator

You can configure your overcloud after you have provisioned its virtual and bare metal nodes. You must create an OpenStackConfigGenerator resource to generate your Ansible playbooks, register your nodes to either the Red Hat Customer Portal or Red Hat Satellite Server, and then create an OpenStackDeploy resource to apply the configuration to your nodes.

6.1. Creating Ansible playbooks for overcloud configuration with OpenStackConfigGenerator

After you provision the overcloud infrastructure, you must create a set of Ansible playbooks to configure the Red Hat OpenStack Platform (RHOSP) software on the overcloud nodes. You create these playbooks with the OpenStackConfigGenerator resource, which uses the config-download feature in RHOSP director to convert heat configuration to playbooks.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Create the OpenStackControlPlane and any required OpenStackBaremetalSet resources.
  • Configure a git-secret Secret that contains authentication details for your remote Git repository.
  • Configure a tripleo-tarball-config ConfigMap that contains your custom heat templates.
  • Configure a heat-env-config ConfigMap that contains your custom environment files. For an example of creating these resources, see the sketch after this list.
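
The following commands are a minimal sketch of creating the git-secret Secret and the ConfigMaps with oc. The file names, the SSH key path, and the git_ssh_identity key name are assumptions that you must adapt to your environment:

$ oc create secret generic git-secret -n openstack --from-file=git_ssh_identity=<path_to_git_ssh_private_key>
$ oc create configmap tripleo-tarball-config -n openstack --from-file=tarball-config.tar.gz
$ oc create configmap heat-env-config -n openstack --from-file=<dir_with_custom_environment_files>/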

Procedure

  1. Create a file named openstack-config-generator.yaml on your workstation and include the resource specification to generate the Ansible playbooks. For example:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackConfigGenerator
    metadata:
      name: default
      namespace: openstack
    spec:
      enableFencing: true
      gitSecret: git-secret
      imageURL: registry.redhat.io/rhosp-rhel8/openstack-tripleoclient:16.2
      heatEnvConfigMap: heat-env-config
      # List of heat environment files to include from tripleo-heat-templates/environments
      heatEnvs:
      - ssl/tls-endpoints-public-dns.yaml
      - ssl/enable-tls.yaml
      tarballConfigMap: tripleo-tarball-config

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the config generator, by default default.
    metadata.namespace
    Set to the director Operator namespace, by default openstack.
    spec.enableFencing
    Enable the automatic creation of required heat environment files to enable fencing.
    Note

    Production RHOSP environments must have fencing enabled. Virtual machines that run pacemaker require the fence-agents-kubevirt package.

    spec.gitSecret
    Set to the Secret that contains the Git authentication credentials, by default git-secret.
    spec.heatEnvs
    A list of default tripleo environment files used to generate the playbooks.
    spec.heatEnvConfigMap
    Set to the ConfigMap that contains your custom environment files, by default heat-env-config.
    spec.tarballConfigMap
    Set to the ConfigMap that contains the tarball with your custom heat templates, by default tripleo-tarball-config.

    For descriptions of the values that you can use in the spec section, view the specification schema in the openstackconfiggenerator custom resource definition (CRD):

    $ oc describe crd openstackconfiggenerator

    Save the file when you have finished configuring the Ansible config generator specification.

  2. Create the Ansible config generator:

    $ oc create -f openstack-config-generator.yaml -n openstack

Verification

  1. View the resource for the config generator:

    $ oc get openstackconfiggenerator/default -n openstack
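
  2. Optional: Follow the logs of the playbook rendering job. This is a minimal sketch; the job-name=generate-config-default label selector assumes the default resource name default that is used in this chapter:

    $ oc logs -f -l job-name=generate-config-default -n openstack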

6.2. Ephemeral heat container image source parameters

To create an ephemeral heat service, the OpenStackConfigGenerator resource requires the following four container images from registry.redhat.io:

  • openstack-heat-api
  • openstack-heat-engine
  • openstack-mariadb
  • openstack-rabbitmq

You can change the source location of these images with the spec.ephemeralHeatSettings parameter. For example, if you host these images on a Red Hat Satellite Server, you can set the spec.ephemeralHeatSettings parameter and its sub-parameters to use the Red Hat Satellite Server as the source for these images.

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  …​
  ephemeralHeatSettings:
    heatAPIImageURL: <heat_api_image_location>
    heatEngineImageURL: <heat_engine_image_location>
    mariadbImageURL: <mariadb_image_location>
    rabbitImageURL: <rabbitmq_image_location>

Set the following values in the resource specification:

spec.ephemeralHeatSettings.heatAPIImageURL
Image location for the heat API.
spec.ephemeralHeatSettings.heatEngineImageURL
Image location for the heat engine.
spec.ephemeralHeatSettings.mariadbImageURL
Image location for MariaDB.
spec.ephemeralHeatSettings.rabbitImageURL
Image location for RabbitMQ.
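
For example, the following specification is a minimal sketch that sources the four images from a Red Hat Satellite Server. The registry host satellite.example.com and the image paths are placeholder assumptions that you must replace with the values for your environment:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  …
  ephemeralHeatSettings:
    heatAPIImageURL: satellite.example.com/rhosp-rhel8-openstack-heat-api:16.2
    heatEngineImageURL: satellite.example.com/rhosp-rhel8-openstack-heat-engine:16.2
    mariadbImageURL: satellite.example.com/rhosp-rhel8-openstack-mariadb:16.2
    rabbitImageURL: satellite.example.com/rhosp-rhel8-openstack-rabbitmq:16.2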

6.3. Config generation interactive mode

To debug config generation operations, you can set the OpenStackConfigGenerator resource to use interactive mode.

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  …​
  interactive: true

In this mode, the OpenStackConfigGenerator resource creates the environment to start rendering the playbooks but does not automatically render them. When the OpenStackConfigGenerator pod, which has the generate-config prefix, starts, you can use the oc rsh command to access the pod and inspect the files and the playbook rendering:

$ oc rsh $(oc get pod -o name -l job-name=generate-config-default)
$ ls -la /home/cloud-admin/
...
config 1
config-custom 2
config-passwords 3
create-playbooks.sh 4
process-heat-environment.py 5
tht-tars 6

1 Directory that stores the files automatically rendered by the director Operator.
2 Directory that stores the environment files provided through the heatEnvConfigMap.
3 Directory that stores the overcloud service passwords created by the director Operator.
4 Script that renders the Ansible playbooks.
5 Internal script used by create-playbooks.sh to replicate the undocumented heat client merging of map parameters.
6 Directory that stores the tarball from the tarballConfigMap.
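
After you inspect the environment, you can render the playbooks manually from inside the pod. This is a minimal sketch that assumes you run the script from the cloud-admin home directory:

$ oc rsh $(oc get pod -o name -l job-name=generate-config-default)
$ cd /home/cloud-admin
$ ./create-playbooks.sh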

6.4. Using the heat environment from tripleo-heat-templates/environments

TripleO is delivered with heat environment files for different deployment scenarios, for example, TLS for public endpoints. You can include heat environment files in the playbook generation by using the heatEnvs parameter:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  …
  heatEnvs:
  - ssl/tls-endpoints-public-dns.yaml
  - ssl/enable-tls.yaml
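
To see which environment files you can reference, list the environments directory inside the openstackclient pod. This is a minimal sketch that assumes the tripleo-heat-templates are installed in the pod at their default location:

$ oc rsh -n openstack openstackclient
$ ls /usr/share/openstack-tripleo-heat-templates/environments/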

6.5. Registering the operating system of your overcloud

Before the director Operator configures the overcloud software on nodes, you must register the operating system of all nodes to either the Red Hat Customer Portal or Red Hat Satellite Server, and enable repositories for your nodes.

As a part of the OpenStackControlPlane resource, the director Operator creates an OpenStackClient pod that you access through a remote shell to run Red Hat OpenStack Platform (RHOSP) commands. This pod also contains an Ansible inventory script named /home/cloud-admin/ctlplane-ansible-inventory.

To register your nodes, you can use the redhat_subscription Ansible module with the inventory script from the OpenStackClient pod.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Use the OpenStackControlPlane resource to create a control plane.
  • Use the OpenStackBareMetalSet resource to create bare metal Compute nodes.

Procedure

  1. Access the remote shell for openstackclient:

    $ oc rsh -n openstack openstackclient
  2. Change to the cloud-admin home directory:

    $ cd /home/cloud-admin
  3. Create a playbook named rhsm.yaml that uses the redhat_subscription module to register your nodes. For example, the following playbook registers Controller nodes:

    ---
    - name: Register Controller nodes
      hosts: Controller
      become: yes
      vars:
        repos:
          - rhel-8-for-x86_64-baseos-eus-rpms
          - rhel-8-for-x86_64-appstream-eus-rpms
          - rhel-8-for-x86_64-highavailability-eus-rpms
          - ansible-2.9-for-rhel-8-x86_64-rpms
          - openstack-16.2-for-rhel-8-x86_64-rpms
          - fast-datapath-for-rhel-8-x86_64-rpms
      tasks:
        - name: Register system
          redhat_subscription:
            username: myusername
            password: p@55w0rd!
            org_id: 1234567
            release: 8.4
            pool_ids: 1a85f9223e3d5e43013e3d6e8ff506fd
        - name: Disable all repos
          command: "subscription-manager repos --disable *"
        - name: Enable Controller node repos
          command: "subscription-manager repos --enable {{ item }}"
          with_items: "{{ repos }}"

    This play contains the following three tasks:

    • Register the node.
    • Disable any auto-enabled repositories.
    • Enable only the repositories relevant to the Controller node. The repositories are listed with the repos variable.
  4. Register the overcloud nodes to the required repositories:

    $ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory ./rhsm.yaml
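
  5. Optional: Verify the registration and the enabled repositories on the nodes. This is a minimal sketch that uses an ad hoc Ansible command against the same inventory:

    $ ansible -i /home/cloud-admin/ctlplane-ansible-inventory Controller -b -m command -a "subscription-manager repos --list-enabled"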

6.6. Obtaining the latest OpenStackConfigVersion

Different versions of the Ansible playbooks are stored in the Git repository. For each version, an OpenStackConfigVersion object exists that references the hash/digest of the corresponding Git commit.

Procedure

  1. Select the hash/digest of the latest OpenStackConfigVersion:

    $ oc get -n openstack --sort-by {.metadata.creationTimestamp} osconfigversions -o json
Note

OpenStackConfigVersion objects also have a git diff attribute that you can use to compare the changes between Ansible playbook versions.
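
For example, the following command is a minimal sketch that prints only the name of the newest object, which carries the hash/digest. The jsonpath expression is an assumption that you might need to adjust for your output:

$ oc get osconfigversions -n openstack --sort-by {.metadata.creationTimestamp} -o jsonpath='{.items[-1:].metadata.name}'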

6.7. Applying overcloud configuration with the director Operator

You can configure the overcloud with the director Operator only after you have created your control plane, provisioned your bare metal Compute nodes, and generated the Ansible playbooks to configure software on each node. When you create an OpenStackDeploy resource, the director Operator creates a job that runs the Ansible playbooks to configure the overcloud.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Use the OpenStackControlPlane resource to create a control plane.
  • Use the OpenStackBareMetalSet resource to create bare metal Compute nodes.
  • Use the OpenStackConfigGenerator resource to create the Ansible playbook configuration for your overcloud.
  • Use the OpenStackConfigVersion resource to select the hash/digest of the Ansible playbooks that you want to use to configure the overcloud.

Procedure

  1. Create a file named openstack-deployment.yaml on your workstation and include the resource specification for deploying the Ansible playbooks. For example:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: default
    spec:
      configVersion: n5fch96h548h75hf4hbdhb8hfdh676h57bh96h5c5h59hf4h88h…
      configGenerator: default

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the deployment, by default default.
    metadata.namespace
    Set to the director Operator namespace, by default openstack.
    spec.configVersion
    The config version/git hash of the playbooks to deploy.
    spec.configGenerator
    The name of the configGenerator.

    For descriptions of the values that you can use in the spec section, view the specification schema in the openstackdeploy custom resource definition (CRD):

    $ oc describe crd openstackdeploy

    Save the file when you have finished configuring the OpenStackDeploy specification.

  2. Create the OpenStackDeploy resource:

    $ oc create -f openstack-deployment.yaml -n openstack

    As the deployment runs, it creates a Kubernetes job to execute the Ansible playbooks. You can tail the logs of the job to watch the Ansible playbooks run:

    $ oc logs -f jobs/deploy-openstack-default

    Additionally, you can access the executed Ansible playbooks manually by logging in to the openstackclient pod. In the /home/cloud-admin/work/ directory, you can find the Ansible playbooks and the ansible.log file for the current deployment.
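
    For example, the following commands are a minimal sketch of inspecting that directory; the exact layout under /home/cloud-admin/work/ can vary between deployments:

    $ oc rsh -n openstack openstackclient
    $ ls -la /home/cloud-admin/work/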