Chapter 8. Containerized Services

The director installs the core OpenStack Platform services as containers on the overcloud. This section provides some background information on how containerized services work.

8.1. Containerized Service Architecture

The director installs the core OpenStack Platform services as containers on the overcloud. The templates for the containerized services are located in the /usr/share/openstack-tripleo-heat-templates/docker/services/ directory. These templates reference their respective composable service templates. For example, the OpenStack Identity (keystone) containerized service template (docker/services/keystone.yaml) includes the following resource:

  KeystoneBase:
    type: ../../puppet/services/keystone.yaml
    properties:
      EndpointMap: {get_param: EndpointMap}
      ServiceData: {get_param: ServiceData}
      ServiceNetMap: {get_param: ServiceNetMap}
      DefaultPasswords: {get_param: DefaultPasswords}
      RoleName: {get_param: RoleName}
      RoleParameters: {get_param: RoleParameters}

The type property refers to the respective OpenStack Identity (keystone) composable service template and pulls the outputs data from that template. The containerized service merges this data with its own container-specific data.
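
For reference, the outputs section of the containerized template typically merges this inherited data using map_merge and get_attr calls against the KeystoneBase resource. The following abbreviated sketch is illustrative only; check docker/services/keystone.yaml on your undercloud for the exact contents:

outputs:
  role_data:
    description: Role data for the Keystone API role.
    value:
      service_name: keystone
      config_settings:
        map_merge:
          # configuration inherited from the composable service template
          - get_attr: [KeystoneBase, role_data, config_settings]
          # container-specific settings are layered on top (placeholder)
          - {}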

All nodes using containerized services must enable the OS::TripleO::Services::Docker service. When you create a roles_data.yaml file for your custom roles configuration, include the OS::TripleO::Services::Docker service along with the base composable services and the containerized services. For example, the Keystone role uses the following role definition:

- name: Keystone
  ServicesDefault:
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::Snmp
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::TripleoPackages
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::SensuClient
    - OS::TripleO::Services::Fluentd
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::Collectd
    - OS::TripleO::Services::MySQLClient
    - OS::TripleO::Services::Docker
    - OS::TripleO::Services::Keystone
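
To include a custom roles file such as this in your deployment, pass it to the deployment command with the -r option. The path below is only an example; substitute the location of your own roles_data.yaml and include your usual environment files:

$ openstack overcloud deploy --templates -r /home/stack/templates/roles_data.yaml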

8.2. Containerized Service Parameters

Each containerized service template contains an outputs section that defines a data set passed to the director’s OpenStack Orchestration (heat) service. In addition to the standard composable service parameters (see Section 7.2.4, “Examining Role Parameters”), the template contains a set of parameters specific to the container configuration, which are described below and illustrated in the sketch that follows the list.

puppet_config

Data to pass to Puppet when configuring the service. In the initial overcloud deployment steps, the director creates a set of containers used to configure the service before the actual containerized service runs. This parameter includes the following sub-parameters:

  • config_volume - The mounted docker volume that stores the configuration.
  • puppet_tags - Tags to pass to Puppet during configuration. These tags are used in OpenStack Platform to restrict the Puppet run to a particular service’s configuration resource. For example, the OpenStack Identity (keystone) containerized service uses the keystone_config tag to ensure that only the keystone_config Puppet resource runs on the configuration container.
  • step_config - The configuration data passed to Puppet. This is usually inherited from the referenced composable service.
  • config_image - The container image used to configure the service.
kolla_config
A set of container-specific data that defines configuration file locations, directory permissions, and the command to run on the container to launch the service.
docker_config

Tasks to run on the service’s configuration container. All tasks are grouped into steps to help the director perform a staged deployment. The steps are:

  • Step 1 - Load balancer configuration
  • Step 2 - Core services (Database, Redis)
  • Step 3 - Initial configuration of OpenStack Platform service
  • Step 4 - General OpenStack Platform services configuration
  • Step 5 - Service activation
host_prep_tasks
Preparation tasks for the bare metal node to accommodate the containerized service.
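
Taken together, these parameters appear under role_data in a containerized service template’s outputs section. The following abbreviated sketch shows roughly how they might look for the OpenStack Identity (keystone) service; the image parameter names, file paths, and commands shown here are illustrative assumptions, so refer to the actual templates under /usr/share/openstack-tripleo-heat-templates/docker/services/ for authoritative values:

outputs:
  role_data:
    value:
      service_name: keystone
      puppet_config:
        config_volume: keystone
        puppet_tags: keystone_config
        step_config: {get_attr: [KeystoneBase, role_data, step_config]}
        config_image: {get_param: DockerKeystoneConfigImage}
      kolla_config:
        /var/lib/kolla/config_files/keystone.json:
          command: /usr/sbin/httpd -DFOREGROUND
      docker_config:
        # tasks are grouped by deployment step
        step_3:
          keystone:
            image: {get_param: DockerKeystoneImage}
            restart: always
      host_prep_tasks:
        - name: create persistent log directory on the host
          file:
            path: /var/log/containers/keystone
            state: directory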

8.3. Modifying OpenStack Platform Containers

Red Hat provides a set of pre-built container images through the Red Hat Container Catalog (registry.redhat.io). It is possible to modify these images and add additional layers to them. This is useful for adding RPMs for certified 3rd party drivers to the containers.

Note

To ensure continued support for modified OpenStack Platform container images, ensure that the resulting images comply with the "Red Hat Container Support Policy".

This example shows how to customize the latest openstack-keystone image. However, these instructions can also apply to other images:

  1. Pull the image you aim to modify. For example, for the openstack-keystone image:

    $ sudo docker pull registry.redhat.io/rhosp13/openstack-keystone:latest
  2. Check the default user on the original image. For example, for the openstack-keystone image:

    $ sudo docker run -it registry.redhat.io/rhosp13/openstack-keystone:latest whoami
    root
    Note

    The openstack-keystone image uses root as the default user. Other images use specific users. For example, openstack-glance-api uses glance as the default user.

  3. Create a Dockerfile to build an additional layer on an existing container image. The following is an example that pulls the latest OpenStack Identity (keystone) image from the Container Catalog and installs a custom RPM file to the image:

    FROM registry.redhat.io/rhosp13/openstack-keystone
    MAINTAINER Acme
    LABEL name="rhosp13/openstack-keystone-acme" vendor="Acme" version="2.1" release="1"
    
    # switch to root and install a custom RPM, etc.
    USER root
    COPY custom.rpm /tmp
    RUN rpm -ivh /tmp/custom.rpm
    
    # switch the container back to the default user
    USER root
  4. Build and tag the new image. For example, to build with a local Dockerfile stored in the /home/stack/keystone directory and tag it to your undercloud’s local registry:

    $ docker build /home/stack/keystone -t "192.168.24.1:8787/rhosp13/openstack-keystone-acme:rev1"
  5. Push the resulting image to the undercloud’s local registry:

    $ docker push 192.168.24.1:8787/rhosp13/openstack-keystone-acme:rev1
  6. Edit your overcloud container images environment file (usually overcloud_images.yaml) and change the appropriate parameter to use the custom container image.
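
    For example, the Identity (keystone) image parameter in this file is typically DockerKeystoneImage; verify the exact parameter name in your overcloud_images.yaml before you change it:

    parameter_defaults:
      DockerKeystoneImage: 192.168.24.1:8787/rhosp13/openstack-keystone-acme:rev1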
Warning

The Container Catalog publishes container images with a complete software stack built into them. When the Container Catalog releases a container image with updates and security fixes, your existing custom container image does not include these updates, and you must rebuild it using the new image version from the Catalog.

8.4. Deploying a Vendor Plugin

To use third-party hardware as a Block Storage back end, you must deploy a vendor plugin. The following example demonstrates how to deploy a vendor plugin to use Dell EMC hardware as a Block Storage back end.

  1. Log in to the registry.connect.redhat.com catalog:

    $ docker login registry.connect.redhat.com
  2. Download the plugin:

    $ docker pull registry.connect.redhat.com/dellemc/openstack-cinder-volume-dellemc-rhosp13
  3. Tag and push the image to the local undercloud registry using the undercloud IP address relevant to your OpenStack deployment:

    $ docker tag registry.connect.redhat.com/dellemc/openstack-cinder-volume-dellemc-rhosp13 192.168.24.1:8787/dellemc/openstack-cinder-volume-dellemc-rhosp13
    
    $ docker push 192.168.24.1:8787/dellemc/openstack-cinder-volume-dellemc-rhosp13
  4. Deploy the overcloud with an additional environment file that contains the following parameter:

    parameter_defaults:
      DockerCinderVolumeImage: 192.168.24.1:8787/dellemc/openstack-cinder-volume-dellemc-rhosp13
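
    For example, if you save these parameter defaults in a file such as /home/stack/templates/dellemc-cinder-backend.yaml (the file name here is illustrative), include it in the deployment command with the -e option:

    $ openstack overcloud deploy --templates -e /home/stack/templates/dellemc-cinder-backend.yaml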