Chapter 7. Director operator deployment scenario: Overcloud with Hyper-Converged Infrastructure (HCI)

You can use the director Operator to deploy an overcloud with Hyper-Converged Infrastructure (HCI). This scenario installs both Compute and Ceph Storage OSD services on the same nodes.

Prerequisites

  • Your Compute HCI nodes require extra disks to use as OSDs.

7.1. Creating a data volume for the base operating system

You must create a data volume with the OpenShift Container Platform (OCP) cluster to store the base operating system image for your Controller virtual machines.

Prerequisites

  • Download a Red Hat Enterprise Linux 8.4 QCOW2 image to your workstation. You can download this image from the Product Download section of the Red Hat Customer Portal.
  • Install the virtctl client tool on your workstation. You can install this tool on a Red Hat Enterprise Linux workstation using the following commands:

    $ sudo subscription-manager repos --enable=cnv-4.10-for-rhel-8-x86_64-rpms
    $ sudo dnf install -y kubevirt-virtctl
  • Install the virt-customize client tool on your workstation. You can install this tool on a Red Hat Enterprise Linux workstation using the following command:

    $ dnf install -y libguestfs-tools-c

Procedure

  1. The default QCOW2 image that you have downloaded from access.redhat.com does not use biosdev predictable network interface names. Modify the image with virt-customize to use biosdev predictable network interface names:

    $ sudo virt-customize -a <local path to image> --run-command 'sed -i -e "s/^\(kernelopts=.*\)net.ifnames=0 \(.*\)/\1\2/" /boot/grub2/grubenv'
    $ sudo virt-customize -a <local path to image> --run-command 'sed -i -e "s/^\(GRUB_CMDLINE_LINUX=.*\)net.ifnames=0 \(.*\)/\1\2/" /etc/default/grub' --truncate /etc/machine-id
  2. Upload the image to OpenShift Virtualization with virtctl:

    $ virtctl image-upload dv <datavolume_name> -n openstack \
     --size=<size> --image-path=<local_path_to_image> \
     --storage-class <storage_class> --access-mode <access_mode> --insecure
    • Replace <datavolume_name> with the name of the data volume, for example, openstack-base-img.
    • Replace <size> with the size of the data volume required for your environment, for example, 500Gi. The minimum size is 500GB.
    • Replace <storage_class> with the required storage class from your cluster. Use the following command to retrieve the available storage classes:

      $ oc get storageclass
    • Replace <access_mode> with the access mode for the PVC. The default value is ReadWriteOnce. For a filled-in example with sample values, see the command after this list.
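
    For example, a filled-in command with sample values: a 500Gi data volume named openstack-base-img, an example local image path, and the host-nfs-storageclass storage class with ReadWriteMany access. Adjust these values to your environment.

    $ virtctl image-upload dv openstack-base-img -n openstack \
     --size=500Gi --image-path=./rhel-8.4-x86_64-kvm.qcow2 \
     --storage-class host-nfs-storageclass --access-mode ReadWriteMany --insecure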
  3. When you create the OpenStackControlPlane resource and individual OpenStackVMSet resources, set the baseImageVolumeName parameter to the data volume name:

    ...
    spec:
      ...
      baseImageVolumeName: openstack-base-img
    ...

7.2. Adding authentication details for your remote Git repository

The director Operator stores rendered Ansible playbooks to a remote Git repository and uses this repository to track changes to the overcloud configuration. You can use any Git repository that supports SSH authentication. You must provide details for the Git repository as an OpenShift Secret resource named git-secret.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Prepare a remote Git repository for the director Operator to store the generated configuration for your overcloud.
  • Prepare an SSH key pair: upload the public key to the Git repository and keep the private key available to add to the git-secret Secret resource. For an example of generating a key pair, see the command after this list.
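
    For example, the following command generates a passphrase-less ED25519 key pair; the file name git_id is an arbitrary example. Upload ~/git_id.pub to your Git server and use ~/git_id as the private key when you create the git-secret Secret resource.

    $ ssh-keygen -t ed25519 -f ~/git_id -N ''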

Procedure

  1. Create the Secret resource:

    $ oc create secret generic git-secret -n openstack --from-file=git_ssh_identity=<path_to_private_SSH_key> --from-literal=git_url=<git_server_URL>

    The git-secret Secret resource contains two key-value pairs:

    git_ssh_identity
    The private key to access the Git repository. The --from-file option stores the content of the private SSH key file.
    git_url
    The SSH URL of the git repository to store the configuration. The --from-literal option stores the URL that you enter for this key.

Verification

  1. View the Secret resource:

    $ oc get secret/git-secret -n openstack

7.3. Setting the root password for nodes

To access the root user with a password on each node, you can set a root password in a Secret resource named userpassword.

Note

Setting the root password for nodes is optional. If you do not set a root password, you can still log into nodes with the SSH keys defined in the osp-controlplane-ssh-keys Secret.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.

Procedure

  1. Convert your chosen password to a base64 value:

    $ echo -n "p@ssw0rd!" | base64
    cEBzc3cwcmQh
    Note

    The -n option removes the trailing newline from the echo output.

  2. Create a file named openstack-userpassword.yaml on your workstation. Include the following resource specification for the Secret in the file:

    apiVersion: v1
    kind: Secret
    metadata:
      name: userpassword
      namespace: openstack
    data:
      NodeRootPassword: "cEBzc3cwcmQh"

    Set the NodeRootPassword parameter to your base64 encoded password.

  3. Create the userpassword Secret:

    $ oc create -f openstack-userpassword.yaml -n openstack
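
    Optionally, confirm that the stored value decodes to the password that you chose:

    $ oc get secret userpassword -n openstack -o jsonpath='{.data.NodeRootPassword}' | base64 -d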
Note

Specify the userpassword Secret in the passwordSecret field when you create the OpenStackControlPlane or OpenStackBaremetalSet resources:

apiVersion: osp-director.openstack.org/v1beta2
kind: OpenStackControlPlane
metadata:
  name: overcloud
  namespace: openstack
spec:
  passwordSecret: <userpassword>
  • Replace <userpassword> with the name of your userpassword Secret.

7.4. Creating an overcloud control plane network with OpenStackNetConfig

You must define at least one control plane network for your overcloud in OpenStackNetConfig. In addition to IP address assignment, the network definition includes the mapping information for OpenStackNetAttachment. OpenShift Virtualization uses this information to attach any virtual machines to the network.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.

Procedure

  1. Create a file named osnetconfig.yaml on your workstation. Include the resource specification for the control plane network, which is named ctlplane. For example, the specification for a control plane that uses a Linux bridge connected to the enp6s0 Ethernet device on each worker node is as follows:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackNetConfig
    metadata:
      name: openstacknetconfig
    spec:
      attachConfigurations:
        br-osp:
          nodeNetworkConfigurationPolicy:
            nodeSelector:
              node-role.kubernetes.io/worker: ""
            desiredState:
              interfaces:
              - bridge:
                  options:
                    stp:
                      enabled: false
                  port:
                  - name: enp6s0
                description: Linux bridge with enp6s0 as a port
                name: br-osp
                state: up
                type: linux-bridge
                mtu: 1500
      # optional DnsServers list
      dnsServers:
      - 192.168.25.1
      # optional DnsSearchDomains list
      dnsSearchDomains:
      - osptest.test.metalkube.org
      - some.other.domain
      # DomainName of the OSP environment
      domainName: osptest.test.metalkube.org
      networks:
      - name: Control
        nameLower: ctlplane
        subnets:
        - name: ctlplane
          ipv4:
            allocationEnd: 172.22.0.250
            allocationStart: 172.22.0.100
            cidr: 172.22.0.0/24
            gateway: 172.22.0.1
          attachConfiguration: br-osp
      # Optional: configure static IP address mappings per node for each network. If none are set, addresses are assigned automatically from the allocation range.
      reservations:
        controller-0:
          ipReservations:
            ctlplane: 172.22.0.120
        compute-0:
          ipReservations:
            ctlplane: 172.22.0.140

    Set the following values in the networks specification:

    name
    Set to the name of the control plane network, which is Control.
    nameLower
    Set to the lower name of the control plane network, which is ctlplane.
    subnets
    Set the subnet specifications.
    subnets.name
    Set the name of the control plane subnet, which is ctlplane.
    subnets.attachConfiguration
    Set to the attachConfiguration that the subnet uses, for example br-osp.
    subnets.ipv4
    Set the details of the IPv4 subnet: allocationStart, allocationEnd, cidr, gateway, and an optional list of routes, each with a destination and a nexthop.

    For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstacknetconfig CRD:

    $ oc describe crd openstacknetconfig

    Save the file when you have finished configuring the network specification.

  2. Create the control plane network:

    $ oc create -f osnetconfig.yaml -n openstack

Verification

  1. View the resource for the control plane network:

    $ oc get openstacknetconfig/openstacknetconfig
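
    Optionally, confirm that the director Operator created the underlying network and attachment resources; these are the same checks that are used for the isolated networks in the next section:

    $ oc get openstacknetattachment -n openstack
    $ oc get openstacknet -n openstack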

7.5. Creating VLAN networks for network isolation with OpenStackNetConfig

You must create additional networks to implement network isolation for your composable networks. To accomplish this network isolation, you can place your composable networks on individual VLAN networks. In addition to IP address assignment, the OpenStackNetConfig resource includes information to define the network configuration policy that OpenShift Virtualization uses to attach any virtual machines to VLAN networks.

To use the default Red Hat OpenStack Platform networks, you must create an OpenStackNetConfig resource which defines each network.

Table 7.1. Default Red Hat OpenStack Platform networks

Network       VLAN   CIDR            Allocation
External      10     10.0.0.0/24     10.0.0.10 - 10.0.0.250
InternalApi   20     172.17.0.0/24   172.17.0.10 - 172.17.0.250
Storage       30     172.18.0.0/24   172.18.0.10 - 172.18.0.250
StorageMgmt   40     172.19.0.0/24   172.19.0.10 - 172.19.0.250
Tenant        50     172.20.0.0/24   172.20.0.10 - 172.20.0.250

Important

To use different networking details for each network, you must create a custom network_data.yaml file.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.

Procedure

  1. Create a file named openstacknetconfig.yaml on your workstation. Include the resource specification for the VLAN networks. For example, the following specification defines the internal API, storage, storage management, tenant, and external networks, which carry VLAN-tagged traffic over the Linux bridges br-osp and br-ex connected to the enp7s0 and enp6s0 Ethernet devices on each worker node:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackNetConfig
    metadata:
      name: openstacknetconfig
    spec:
      attachConfigurations:
        br-osp:
          nodeNetworkConfigurationPolicy:
            nodeSelector:
              node-role.kubernetes.io/worker: ""
            desiredState:
              interfaces:
              - bridge:
                  options:
                    stp:
                      enabled: false
                  port:
                  - name: enp7s0
                description: Linux bridge with enp7s0 as a port
                name: br-osp
                state: up
                type: linux-bridge
                mtu: 1500
        br-ex:
          nodeNetworkConfigurationPolicy:
            nodeSelector:
              node-role.kubernetes.io/worker: ""
            desiredState:
              interfaces:
              - bridge:
                  options:
                    stp:
                      enabled: false
                  port:
                  - name: enp6s0
                description: Linux bridge with enp6s0 as a port
                name: br-ex
                state: up
                type: linux-bridge
                mtu: 1500
      # optional DnsServers list
      dnsServers:
      - 172.22.0.1
      # optional DnsSearchDomains list
      dnsSearchDomains:
      - osptest.test.metalkube.org
      - some.other.domain
      # DomainName of the OSP environment
      domainName: osptest.test.metalkube.org
      networks:
      - name: Control
        nameLower: ctlplane
        subnets:
        - name: ctlplane
          ipv4:
            allocationEnd: 172.22.0.250
            allocationStart: 172.22.0.10
            cidr: 172.22.0.0/24
            gateway: 172.22.0.1
          attachConfiguration: br-osp
      - name: InternalApi
        nameLower: internal_api
        mtu: 1350
        subnets:
        - name: internal_api
          attachConfiguration: br-osp
          vlan: 20
          ipv4:
            allocationEnd: 172.17.0.250
            allocationStart: 172.17.0.10
            cidr: 172.17.0.0/24
      - name: External
        nameLower: external
        subnets:
        - name: external
          ipv4:
            allocationEnd: 10.0.0.250
            allocationStart: 10.0.0.10
            cidr: 10.0.0.0/24
            gateway: 10.0.0.1
          attachConfiguration: br-ex
      - name: Storage
        nameLower: storage
        mtu: 1500
        subnets:
        - name: storage
          ipv4:
            allocationEnd: 172.18.0.250
            allocationStart: 172.18.0.10
            cidr: 172.18.0.0/24
          vlan: 30
          attachConfiguration: br-osp
      - name: StorageMgmt
        nameLower: storage_mgmt
        mtu: 1500
        subnets:
        - name: storage_mgmt
          ipv4:
            allocationEnd: 172.19.0.250
            allocationStart: 172.19.0.10
            cidr: 172.19.0.0/24
          vlan: 40
          attachConfiguration: br-osp
      - name: Tenant
        nameLower: tenant
        vip: False
        mtu: 1500
        subnets:
        - name: tenant
          ipv4:
            allocationEnd: 172.20.0.250
            allocationStart: 172.20.0.10
            cidr: 172.20.0.0/24
          vlan: 50
          attachConfiguration: br-osp

    When you use VLANs for network isolation with linux-bridge, the director Operator does the following (you can inspect the generated resources with the commands shown after this list):

    • The director Operator creates a Node Network Configuration Policy for the bridge interface specified in the resource, which uses nmstate to configure the bridge on worker nodes.
    • The director Operator creates a Network Attachment Definition for each network, which defines the Multus CNI plugin configuration. When you specify the VLAN ID on the Network Attachment Definition, the Multus CNI plugin enables vlan-filtering on the bridge.
    • The director Operator attaches a dedicated interface for each network on a virtual machine. This means that the network template for the OpenStackVMSet is a multi-NIC network template.
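
    For example, to inspect the rendered node network configuration policies and network attachment definitions in detail, view the resources in YAML format; the exact output depends on your network definitions:

    $ oc get nncp -o yaml
    $ oc get network-attachment-definitions -n openstack -o yaml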

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the OpenStackNetConfig.
    spec

    Set the network configuration for attaching the networks and the network specifics. For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstacknetconfig CRD:

    $ oc describe crd openstacknetconfig

    Save the file when you have finished configuring the network specification.

  2. Create the network configuration:

    $ oc apply -f openstacknetconfig.yaml -n openstack

Verification

  1. View the OpenStackNetConfig API and created child resources:

    $ oc get openstacknetconfig/openstacknetconfig -n openstack
    $ oc get openstacknetattachment -n openstack
    $ oc get openstacknet -n openstack

    If you see errors, check the underlying network-attachment-definitions and node network configuration policies:

    $ oc get network-attachment-definitions -n openstack
    $ oc get nncp

7.6. Creating a control plane with OpenStackControlPlane

The overcloud control plane contains the main Red Hat OpenStack Platform services that manage overcloud functionality. The control plane usually consists of 3 Controller nodes and can scale to other control plane-based composable roles. When you use composable roles, each service must run on exactly 3 additional dedicated nodes and the total number of nodes in the control plane must be odd to maintain Pacemaker quorum.

The OpenStackControlPlane custom resource creates control plane-based nodes as virtual machines within OpenShift Virtualization.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Use the OpenStackNetConfig resource to create a control plane network and any additional isolated networks.

Procedure

  1. Create a file named openstack-controller.yaml on your workstation. Include the resource specification for the Controller nodes. For example, the specification for a control plane that consists of 3 Controller nodes is as follows:

    apiVersion: osp-director.openstack.org/v1beta2
    kind: OpenStackControlPlane
    metadata:
      name: overcloud
      namespace: openstack
    spec:
      openStackClientNetworks:
            - ctlplane
            - internal_api
            - external
      openStackClientStorageClass: host-nfs-storageclass
      passwordSecret: userpassword
      virtualMachineRoles:
        Controller:
          roleName: Controller
          roleCount: 3
          networks:
            - ctlplane
            - internal_api
            - external
            - tenant
            - storage
            - storage_mgmt
          cores: 12
          memory: 64
          rootDisk:
            diskSize: 500
            baseImageVolumeName: openstack-base-img
            # storageClass must support RWX to be able to live migrate VMs
            storageClass: host-nfs-storageclass
            storageAccessMode:  ReadWriteMany
            # When using OpenShift Virtualization with OpenShift Container Platform Container Storage,
            # specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks.
            # With virtual machine disks, RBD block mode volumes are more efficient and provide better
            # performance than Ceph FS or RBD filesystem-mode PVCs.
            # To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and
            # VolumeMode: Block.
            storageVolumeMode: Filesystem
          # Optional: configure additional disks to attach to the VMs. You must
          # configure the disks manually inside the VMs before you use them.
          additionalDisks:
            - name: datadisk
              diskSize: 500
              storageClass: host-nfs-storageclass
              storageAccessMode:  ReadWriteMany
              storageVolumeMode: Filesystem
      openStackRelease: "16.2"

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the overcloud control plane, which is overcloud.
    metadata.namespace
    Set to the director Operator namespace, which is openstack.
    spec

    Set the configuration for the control plane. For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstackcontrolplane CRD:

    $ oc describe crd openstackcontrolplane

    Save the file when you have finished configuring the control plane specification.
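
    If you use OpenShift Virtualization with OpenShift Container Platform Container Storage, as described in the comments in the example above, you can use RBD block mode PVCs for the virtual machine disks. The following sketch shows the corresponding rootDisk settings; the ocs-storagecluster-ceph-rbd storage class name assumes a default deployment and might differ in your cluster:

    rootDisk:
      diskSize: 500
      baseImageVolumeName: openstack-base-img
      storageClass: ocs-storagecluster-ceph-rbd
      storageAccessMode: ReadWriteMany
      storageVolumeMode: Block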

  2. Create the control plane:

    $ oc create -f openstack-controller.yaml -n openstack

    Wait until OCP creates the resources related to the OpenStackControlPlane resource.

    As a part of the OpenStackControlPlane resource, the director Operator also creates an OpenStackClient pod that you can access through a remote shell to run RHOSP commands.

Verification

  1. View the resource for the control plane:

    $ oc get openstackcontrolplane/overcloud -n openstack
  2. View the OpenStackVMSet resources to verify the creation of the control plane virtual machine set:

    $ oc get openstackvmsets -n openstack
  3. View the virtual machine resources to verify the creation of the control plane virtual machines in OpenShift Virtualization:

    $ oc get virtualmachines -n openstack
  4. Test access to the openstackclient remote shell:

    $ oc rsh -n openstack openstackclient

7.7. Creating directories for templates and environment files

Create directories on your workstation to store your custom templates and environment files, which you upload to ConfigMaps in OpenShift Container Platform (OCP).

Procedure

  1. Create a directory for your custom templates:

    $ mkdir custom_templates
  2. Create a directory for your custom environment files:

    $ mkdir custom_environment_files

7.8. Custom NIC heat template for HCI Compute nodes

The following example is a heat template that contains NIC configuration for the HCI Compute bare metal nodes.

heat_template_version: rocky
description: >
  Software Config to drive os-net-config to configure VLANs for the Compute role.
parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ControlPlaneSubnetCidr:
    default: ''
    description: >
      The subnet CIDR of the control plane network. (The parameter is
      automatically resolved from the ctlplane subnet's cidr attribute.)
    type: string
  ControlPlaneDefaultRoute:
    default: ''
    description: The default route of the control plane network. (The parameter
      is automatically resolved from the ctlplane subnet's gateway_ip attribute.)
    type: string
  ControlPlaneStaticRoutes:
    default: []
    description: >
      Routes for the ctlplane network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  ControlPlaneMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the network.
      (The parameter is automatically resolved from the ctlplane network's mtu attribute.)
    type: number
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageNetworkVlanID:
    default: 30
    description: Vlan ID for the storage network traffic.
    type: number
  StorageMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the
      Storage network.
    type: number
  StorageInterfaceRoutes:
    default: []
    description: >
      Routes for the storage network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage_mgmt network
    type: string
  StorageMgmtNetworkVlanID:
    default: 40
    description: Vlan ID for the storage_mgmt network traffic.
    type: number
  StorageMgmtMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the
      StorageMgmt network.
    type: number
  StorageMgmtInterfaceRoutes:
    default: []
    description: >
      Routes for the storage_mgmt network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal_api network
    type: string
  InternalApiNetworkVlanID:
    default: 20
    description: Vlan ID for the internal_api network traffic.
    type: number
  InternalApiMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the
      InternalApi network.
    type: number
  InternalApiInterfaceRoutes:
    default: []
    description: >
      Routes for the internal_api network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  TenantNetworkVlanID:
    default: 50
    description: Vlan ID for the tenant network traffic.
    type: number
  TenantMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the
      Tenant network.
    type: number
  TenantInterfaceRoutes:
    default: []
    description: >
      Routes for the tenant network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  ExternalMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the
      External network.
    type: number
  DnsServers: # Override this via parameter_defaults
    default: []
    description: >
      DNS servers to use for the Overcloud (2 max for some implementations).
      If not set the nameservers configured in the ctlplane subnet's
      dns_nameservers attribute will be used.
    type: comma_delimited_list
  DnsSearchDomains: # Override this via parameter_defaults
    default: []
    description: A list of DNS search domains to be added (in order) to resolv.conf.
    type: comma_delimited_list

resources:

  MinViableMtu:
    # This resource resolves the minimum viable MTU for interfaces, bonds and
    # bridges that carry multiple VLANs. Each VLAN may have different MTU. The
    # bridge, bond or interface must have an MTU to allow the VLAN with the
    # largest MTU.
    type: OS::Heat::Value
    properties:
      type: number
      value:
        yaql:
          expression: $.data.max()
          data:
            - {get_param: ControlPlaneMtu}
            - {get_param: StorageMtu}
            - {get_param: StorageMgmtMtu}
            - {get_param: InternalApiMtu}
            - {get_param: TenantMtu}
            - {get_param: ExternalMtu}

  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
              - type: interface
                name: nic4
                mtu:
                  get_attr: [MinViableMtu, value]
                use_dhcp: false
                dns_servers:
                  get_param: DnsServers
                domain:
                  get_param: DnsSearchDomains
                addresses:
                - ip_netmask:
                    list_join:
                    - /
                    - - get_param: ControlPlaneIp
                      - get_param: ControlPlaneSubnetCidr
                routes:
                  list_concat_unique:
                    - get_param: ControlPlaneStaticRoutes
                    - - default: true
                        next_hop:
                          get_param: ControlPlaneDefaultRoute
              - type: vlan
                mtu:
                  get_param: StorageMtu
                device: nic4
                vlan_id:
                  get_param: StorageNetworkVlanID
                addresses:
                - ip_netmask:
                    get_param: StorageIpSubnet
                routes:
                  list_concat_unique:
                    - get_param: StorageInterfaceRoutes
              - type: vlan
                mtu:
                  get_param: InternalApiMtu
                device: nic4
                vlan_id:
                  get_param: InternalApiNetworkVlanID
                addresses:
                - ip_netmask:
                    get_param: InternalApiIpSubnet
                routes:
                  list_concat_unique:
                    - get_param: InternalApiInterfaceRoutes

              - type: ovs_bridge
                # This will default to br-ex, anything else   requires specific
                # bridge mapping entries for it to be used.
                name: bridge_name
                mtu:
                  get_param: ExternalMtu
                use_dhcp: false
                members:
                - type: interface
                  name: nic3
                  mtu:
                    get_param: ExternalMtu
                  use_dhcp: false
                  primary: true
                - type: vlan
                  mtu:
                    get_param: TenantMtu
                  vlan_id:
                    get_param: TenantNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: TenantIpSubnet
                  routes:
                    list_concat_unique:
                      - get_param: TenantInterfaceRoutes
outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value:
      get_resource: OsNetConfigImpl

This configuration maps the networks to the following bridges and interfaces:

Networks                               Bridge   Interface
Control Plane, Storage, Internal API   N/A      nic4
External, Tenant                       br-ex    nic3

Note

You can modify this configuration to suit the NIC configuration of your bare metal nodes.

To use this template in your deployment, copy the contents of the example to net-config-two-nic-vlan-computehci.yaml in your custom_templates directory on your workstation.
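
For example, assuming that you saved the template in your current working directory:

    $ cp net-config-two-nic-vlan-computehci.yaml ~/custom_templates/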

7.9. Creating a roles_data.yaml file with the Compute HCI role for the director Operator

To include configuration for the Compute HCI role in your overcloud, you must include the Compute HCI role in the roles_data.yaml file that you include with your overcloud deployment.

Note

Ensure you use roles_data.yaml as the file name.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Use the OpenStackControlPlane resource to create a control plane.

Procedure

  1. Access the remote shell for openstackclient:

    $ oc rsh -n openstack openstackclient
  2. Unset the OS_CLOUD environment variable:

    $ unset OS_CLOUD
  3. Change to the cloud-admin directory:

    $ cd /home/cloud-admin/
  4. Generate a new roles_data.yaml file with the Controller and ComputeHCI roles:

    $ openstack overcloud roles generate Controller ComputeHCI > roles_data.yaml
  5. Exit the openstackclient pod:

    $ exit
  6. Copy the custom roles_data.yaml file from the openstackclient pod to your custom templates directory:

    $ oc cp openstackclient:/home/cloud-admin/roles_data.yaml custom_templates/roles_data.yaml -n openstack
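
    Optionally, confirm that the generated file contains both roles; each role is defined in a - name: entry:

    $ grep '^- name:' custom_templates/roles_data.yaml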

7.10. Adding custom templates to the overcloud configuration

Archive your custom templates into a tarball file so that you can include these templates as a part of your overcloud deployment.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Create the custom templates that you want to apply to provisioned nodes.

Procedure

  1. Navigate to the location of your custom templates:

    $ cd ~/custom_templates
  2. Archive the templates into a tarball:

    $ tar -cvzf custom-config.tar.gz *.yaml
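
    Optionally, list the contents of the tarball to confirm that it includes the expected templates before you upload it:

    $ tar -tzf custom-config.tar.gz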
  3. Create the tripleo-tarball-config ConfigMap and use the tarball as data:

    $ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack

Verification

  1. View the ConfigMap:

    $ oc get configmap/tripleo-tarball-config -n openstack

7.11. Custom environment file for configuring HCI networking in the director Operator

The following example is an environment file that maps the network software configuration resources to the NIC templates for your overcloud.

resource_registry:
  OS::TripleO::ComputeHCI::Net::SoftwareConfig: net-config-two-nic-vlan-computehci.yaml
Note

Add any additional network configuration in a parameter_defaults section.

To use this template in your deployment, copy the contents of the example to network-environment.yaml in your custom_environment_files directory on your workstation.
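
For example, a sketch of the same file with a parameter_defaults section added; the bridge mapping and VLAN range values are hypothetical and must match your environment:

resource_registry:
  OS::TripleO::ComputeHCI::Net::SoftwareConfig: net-config-two-nic-vlan-computehci.yaml

parameter_defaults:
  NeutronBridgeMappings: datacentre:br-ex
  NeutronNetworkVLANRanges: datacentre:1:1000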

7.12. Custom environment file for configuring Hyper-Converged Infrastructure (HCI) storage in the director Operator

The following example is an environment file that contains Ceph Storage configuration for the Compute HCI nodes.

resource_registry:
  OS::TripleO::Services::CephMgr: deployment/ceph-ansible/ceph-mgr.yaml
  OS::TripleO::Services::CephMon: deployment/ceph-ansible/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: deployment/ceph-ansible/ceph-osd.yaml
  OS::TripleO::Services::CephClient: deployment/ceph-ansible/ceph-client.yaml

parameter_defaults:
  # needed for now because of the repo used to create tripleo-deploy image
  CephAnsibleRepo: "rhelosp-ceph-4-tools"
  CephAnsiblePlaybookVerbosity: 3
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  CinderEnableNfsBackend: false
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  CinderRbdPoolName: "volumes"
  NovaRbdPoolName: "vms"
  GlanceRbdPoolName: "images"
  CephPoolDefaultPgNum: 32
  CephPoolDefaultSize: 2
  CephAnsibleDisksConfig:
    devices:
      - '/dev/sdb'
      - '/dev/sdc'
      - '/dev/sdd'
    osd_scenario: lvm
    osd_objectstore: bluestore
  CephAnsibleExtraConfig:
    is_hci: true
  CephConfigOverrides:
    rgw_swift_enforce_content_length: true
    rgw_swift_versioning_enabled: true

This configuration uses the sdb, sdc, and sdd devices on the Compute HCI nodes as OSDs and enables HCI-specific tuning with the is_hci option.

Note

You can modify this configuration to suit the storage configuration of your bare metal nodes. Use the "Ceph Placement Groups (PGs) per Pool Calculator" to determine the value for the CephPoolDefaultPgNum parameter.

To use this template in your deployment, copy the contents of the example to compute-hci.yaml in your custom_environment_files directory on your workstation.

7.13. Adding custom environment files to the overcloud configuration

Upload a set of custom environment files from a directory to a ConfigMap that you can include as a part of your overcloud deployment.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Create custom environment files for your overcloud deployment.

Procedure

  1. Create the heat-env-config ConfigMap and use the directory that contains the environment files as data:

    $ oc create configmap -n openstack heat-env-config --from-file=~/custom_environment_files/ --dry-run=client -o yaml | oc apply -f -

Verification

  1. View the ConfigMap:

    $ oc get configmap/heat-env-config -n openstack
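
    Optionally, describe the ConfigMap to list the environment files that it contains:

    $ oc describe configmap/heat-env-config -n openstack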

7.14. Creating HCI Compute nodes with OpenStackBaremetalSet

Compute nodes provide computing resources to your Red Hat OpenStack Platform environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.

The OpenStackBaremetalSet custom resource creates Compute nodes from bare metal machines that OpenShift Container Platform manages.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Use the OpenStackNetConfig resource to create a control plane network and any additional isolated networks.

Procedure

  1. Create a file named openstack-hcicompute.yaml on your workstation. Include the resource specification for the Compute HCI nodes. For example, the specification for 3 Compute HCI nodes is as follows:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackBaremetalSet
    metadata:
      name: computehci
      namespace: openstack
    spec:
      count: 3
      baseImageUrl: http://host/images/rhel-image-8.4.x86_64.qcow2
      deploymentSSHSecret: osp-controlplane-ssh-keys
      ctlplaneInterface: enp8s0
      networks:
        - ctlplane
        - internal_api
        - tenant
        - storage
        - storage_mgmt
      roleName: ComputeHCI
      passwordSecret: userpassword

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the Compute node bare metal set, which is computehci.
    metadata.namespace
    Set to the director Operator namespace, which is openstack.
    spec

    Set the configuration for the Compute nodes. For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstackbaremetalset CRD:

    $ oc describe crd openstackbaremetalset

    Save the file when you have finished configuring the Compute node specification.

  2. Create the Compute nodes:

    $ oc create -f openstack-hcicompute.yaml -n openstack

Verification

  1. View the resource for the Compute HCI nodes:

    $ oc get openstackbaremetalset/computehci -n openstack
  2. View the bare metal machines that OpenShift manages to verify the creation of the Compute HCI nodes:

    $ oc get baremetalhosts -n openshift-machine-api
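
    Provisioning the bare metal nodes can take some time. Optionally, watch the hosts until they reach the provisioned state:

    $ oc get baremetalhosts -n openshift-machine-api -w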

7.15. Creating Ansible playbooks for overcloud configuration with OpenStackConfigGenerator

After you provision the overcloud infrastructure, you must create a set of Ansible playbooks to configure the Red Hat OpenStack Platform (RHOSP) software on the overcloud nodes. You create these playbooks with the OpenStackConfigGenerator resource, which uses the config-download feature in RHOSP director to convert heat configuration to playbooks.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Create the OpenStackControlPlane and OpenStackBaremetalSet resources as required.
  • Configure a git-secret Secret that contains authentication details for your remote Git repository.
  • Configure a tripleo-tarball-config ConfigMap that contains your custom heat templates.
  • Configure a heat-env-config ConfigMap that contains your custom environment files.

Procedure

  1. Create a file named openstack-config-generator.yaml on your workstation. Include the resource specification to generate the Ansible playbooks. For example, the specification to generate the playbooks is as follows:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackConfigGenerator
    metadata:
      name: default
      namespace: openstack
    spec:
      enableFencing: true
      gitSecret: git-secret
      imageURL: registry.redhat.io/rhosp-rhel8/openstack-tripleoclient:16.2
      heatEnvConfigMap: heat-env-config
      # List of heat environment files to include from tripleo-heat-templates/environments
      heatEnvs:
      - ssl/tls-endpoints-public-dns.yaml
      - ssl/enable-tls.yaml
      tarballConfigMap: tripleo-tarball-config

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the OpenStackConfigGenerator resource, by default default.
    metadata.namespace
    Set to the director Operator namespace, by default openstack.
    spec.enableFencing
    Enable the automatic creation of required heat environment files to enable fencing.
    Note

    Production OSP environments must have fencing enabled. Virtual machines running pacemaker require the fence-agents-kubevirt package.

    spec.gitSecret
    Set to the Secret that contains the Git authentication credentials, by default git-secret.
    spec.heatEnvs
    A list of default tripleo environment files used to generate the playbooks.
    spec.heatEnvConfigMap
    Set to the ConfigMap that contains your custom environment files, by default heat-env-config.
    spec.tarballConfigMap
    Set to the ConfigMap that contains the tarball with your custom heat templates, by default tripleo-tarball-config.

    For more descriptions of the values you can use in the spec section, view the specification schema in the custom resource definition for the openstackconfiggenerator CRD:

    $ oc describe crd openstackconfiggenerator

    Save the file when you have finished configuring the Ansible config generator specification.

  2. Create the Ansible config generator:

    $ oc create -f openstack-config-generator.yaml -n openstack

Verification

  1. View the resource for the config generator:

    $ oc get openstackconfiggenerator/default -n openstack
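
    When the playbook generation completes, the director Operator creates OpenStackConfigVersion resources that record the hash/digest of each rendered set of playbooks. You use one of these hashes as the configVersion value when you create the OpenStackDeploy resource. For example, to list the available config versions:

    $ oc get openstackconfigversion -n openstack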

7.16. Registering the operating system of your overcloud

Before the director Operator configures the overcloud software on nodes, you must register the operating system of all nodes to either the Red Hat Customer Portal or Red Hat Satellite Server, and enable repositories for your nodes.

As a part of the OpenStackControlPlane resource, the director Operator creates an OpenStackClient pod that you can access through a remote shell to run Red Hat OpenStack Platform (RHOSP) commands. This pod also contains an Ansible inventory script named /home/cloud-admin/ctlplane-ansible-inventory.

To register your nodes, you can use the redhat_subscription Ansible module with the inventory script from the OpenStackClient pod.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Use the OpenStackControlPlane resource to create a control plane.
  • Use the OpenStackBareMetalSet resource to create bare metal Compute nodes.

Procedure

  1. Access the remote shell for openstackclient:

    $ oc rsh -n openstack openstackclient
  2. Change to the cloud-admin home directory:

    $ cd /home/cloud-admin
  3. Create a playbook named rhsm.yaml that uses the redhat_subscription module to register your nodes. For example, the following playbook registers Controller nodes:

    ---
    - name: Register Controller nodes
      hosts: Controller
      become: yes
      vars:
        repos:
          - rhel-8-for-x86_64-baseos-eus-rpms
          - rhel-8-for-x86_64-appstream-eus-rpms
          - rhel-8-for-x86_64-highavailability-eus-rpms
          - ansible-2.9-for-rhel-8-x86_64-rpms
          - openstack-16.2-for-rhel-8-x86_64-rpms
          - fast-datapath-for-rhel-8-x86_64-rpms
      tasks:
        - name: Register system
          redhat_subscription:
            username: myusername
            password: p@55w0rd!
            org_id: 1234567
            release: 8.4
            pool_ids: 1a85f9223e3d5e43013e3d6e8ff506fd
        - name: Disable all repos
          command: "subscription-manager repos --disable *"
        - name: Enable Controller node repos
          command: "subscription-manager repos --enable {{ item }}"
          with_items: "{{ repos }}"

    This play contains the following three tasks:

    • Register the node.
    • Disable any auto-enabled repositories.
    • Enable only the repositories relevant to the Controller node. The repositories are listed with the repos variable.
  4. Run the playbook to register the overcloud nodes and enable the required repositories:

    $ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory ./rhsm.yaml

7.17. Applying overcloud configuration with the director Operator

You can configure the overcloud with the director Operator only after you have created your control plane, provisioned your bare metal Compute nodes, and generated the Ansible playbooks that configure the software on each node. When you create an OpenStackDeploy resource, the director Operator creates a job that runs the Ansible playbooks to configure the overcloud.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Use the OpenStackControlPlane resource to create a control plane.
  • Use the OpenStackBareMetalSet resource to create bare metal Compute nodes.
  • Use the OpenStackConfigGenerator resource to create the Ansible playbook configuration for your overcloud.
  • Use the OpenStackConfigVersion resource to select the hash/digest of the Ansible playbooks to use to configure the overcloud.

Procedure

  1. Create a file named openstack-deployment.yaml on your workstation. Include the resource specification that references the Ansible playbook configuration. For example:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: default
    spec:
      configVersion: n5fch96h548h75hf4hbdhb8hfdh676h57bh96h5c5h59hf4h88h…
      configGenerator: default

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the OpenStackDeploy resource, by default default.
    metadata.namespace
    Set to the director Operator namespace, by default openstack.
    spec.configVersion
    The config version/git hash of the playbooks to deploy.
    spec.configGenerator
    The name of the configGenerator.

    For more descriptions of the values you can use in the spec section, view the specification schema in the custom resource definition of the openstackdeploy CRD:

    $ oc describe crd openstackdeploy

    Save the file when you have finished configuring the OpenStackDeploy specification.

  2. Create the OpenStackDeploy resource:

    $ oc create -f openstack-deployment.yaml -n openstack

    As the deployment runs it creates a Kubernetes job to execute the Ansible playbooks. You can tail the logs of the job to watch the Ansible playbooks running:

    $ oc logs -f jobs/deploy-openstack-default

    Additionally, you can manually access the executed Ansible playbooks by logging into the openstackclient pod. In the /home/cloud-admin/work/ directory, you can find the Ansible playbooks and the ansible.log file for the current deployment.
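
    To check the overall result after the job finishes, view the OpenStackDeploy resource; this example assumes the default resource name default:

    $ oc get openstackdeploy/default -n openstack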