Chapter 8. Director operator deployment scenario: Overcloud with external Ceph Storage

You can use the director Operator to deploy an overcloud that connects to an external Red Hat Ceph Storage cluster.

Prerequisites

  • An external Red Hat Ceph Storage cluster.

8.1. Creating a data volume for the base operating system

You must create a data volume with the OpenShift Container Platform (OCP) cluster to store the base operating system image for your Controller virtual machines.

Prerequisites

  • Download a Red Hat Enterprise Linux 8 QCOW2 image to your workstation. You can download this image from the Product Downloads section of the Red Hat Customer Portal.
  • Install the virtctl client tool on your workstation. You can install this tool on a Red Hat Enterprise Linux workstation using the following commands:

    $ sudo subscription-manager repos --enable=cnv-2.6-for-rhel-8-x86_64-rpms
    $ sudo dnf install -y kubevirt-virtctl
  • Install the virt-customize client tool on your workstation. You can install this tool on a Red Hat Enterprise Linux workstation using the following command:

    $ sudo dnf install -y libguestfs-tools-c

Procedure

  1. The default QCOW2 image that you download from access.redhat.com does not use biosdev predictable network interface names. Use virt-customize to remove the net.ifnames=0 kernel argument and enable biosdev predictable network interface names:

    $ sudo virt-customize -a <local path to image> --run-command 'sed -i -e "s/^\(kernelopts=.*\)net.ifnames=0 \(.*\)/\1\2/" /boot/grub2/grubenv'
    $ sudo virt-customize -a <local path to image> --run-command 'sed -i -e "s/^\(GRUB_CMDLINE_LINUX=.*\)net.ifnames=0 \(.*\)/\1\2/" /etc/default/grub'
  2. Upload the image to OpenShift Virtualization with virtctl:

    $ virtctl image-upload dv openstack-base-img -n openstack --size=50Gi --image-path=<local path to image> --storage-class <storage class> --insecure

    For the --storage-class option, choose a storage class from your cluster. View a list of storage classes with the following command:

    $ oc get storageclass
  3. When you create the OpenStackControlPlane resource and individual OpenStackVMSet resources, set the baseImageVolumeName parameter to the data volume name:

    ...
    spec:
      ...
      baseImageVolumeName: openstack-base-img
    ...
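
You can confirm that the upload created the expected data volume before you reference it in later resources. A minimal check, assuming the openstack namespace and the openstack-base-img name used above, and that the containerized data importer that ships with OpenShift Virtualization provides the datavolume resource:

    $ oc get datavolume openstack-base-img -n openstack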

8.2. Adding authentication details for your remote Git repository

The director Operator stores rendered Ansible playbooks to a remote Git repository and uses this repository to track changes to the overcloud configuration. You can use any Git repository that supports SSH authentication. You must provide details for the Git repository as an OpenShift Secret resource named git-secret.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Prepare a remote Git repository for the director Operator to store the generated configuration for your overcloud.
  • Prepare an SSH key pair. Upload the public key to the Git repository and keep the private key available to add to the git-secret Secret resource.
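
If you need to create a key pair, you can generate one with ssh-keygen. A minimal sketch, where the file path and comment are placeholders:

    $ ssh-keygen -t ed25519 -f ~/.ssh/git_id -C "director-operator-git" -N ""

Upload ~/.ssh/git_id.pub to your Git server, and pass ~/.ssh/git_id as the private key path when you create the git-secret Secret resource in the following procedure.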

Procedure

  • Create the Secret resource:

    $ oc create secret generic git-secret -n openstack --from-file=git_ssh_identity=<path_to_private_SSH_key> --from-literal=git_url=<git_server_URL>

    The git-secret Secret resource contains two key-value pairs:

    git_ssh_identity
    The private key to access the Git repository. The --from-file option stores the content of the private SSH key file.
    git_url
    The SSH URL of the git repository to store the configuration. The --from-literal option stores the URL that you enter for this key.

Verification

  • View the Secret resource:

    $ oc get secret/git-secret -n openstack

8.3. Setting the root password for nodes

To access the root user with a password on each node, you can set a root password in a Secret resource named userpassword.

Note

Setting the root password for nodes is optional. If you do not set a root password, you can still log into nodes with the SSH keys defined in the osp-controlplane-ssh-keys Secret.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.

Procedure

  1. Convert your chosen password to a base64 value:

    $ echo -n "p@ssw0rd!" | base64
    cEBzc3cwcmQh
    Note

    The -n option removes the trailing newline from the echo output.

  2. Create a file named openstack-userpassword.yaml on your workstation. Include the following resource specification for the Secret in the file:

    apiVersion: v1
    kind: Secret
    metadata:
      name: userpassword
      namespace: openstack
    data:
      NodeRootPassword: "cEBzc3cwcmQh"

    Set the NodeRootPassword parameter to your base64 encoded password.

  3. Create the userpassword Secret:

    $ oc create -f openstack-userpassword.yaml -n openstack
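
To confirm that the Secret stores the password you expect, you can decode the value again. A quick check, assuming the resource names used above:

    $ oc get secret/userpassword -n openstack -o jsonpath='{.data.NodeRootPassword}' | base64 -d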

8.4. Creating an overcloud control plane network with OpenStackNet

You must create one control plane network for your overcloud. In addition to IP address assignment, the OpenStackNet resource includes information to define the network configuration policy that OpenShift Virtualization uses to attach any virtual machines to the network.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.

Procedure

  1. Create a file named ctlplane-network.yaml on your workstation. Include the resource specification for the control plane network, which is named ctlplane. For example, the specification for a control plane that uses a Linux bridge connected to the enp6s0 Ethernet device on each worker node is as follows:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackNet
    metadata:
      name: ctlplane
    spec:
      cidr: 192.168.25.0/24
      allocationStart: 192.168.25.100
      allocationEnd: 192.168.25.250
      gateway: 192.168.25.1
      attachConfiguration:
        nodeNetworkConfigurationPolicy:
          nodeSelector:
            node-role.kubernetes.io/worker: ""
          desiredState:
            interfaces:
            - bridge:
                options:
                  stp:
                    enabled: false
                port:
                - name: enp6s0
              description: Linux bridge with enp6s0 as a port
              name: br-osp
              state: up
              type: linux-bridge

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the control plane network, which is ctlplane.
    spec

    Set the network configuration for the control plane network. For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstacknet CRD:

    $ oc describe crd openstacknet

    Save the file when you have finished configuring the network specification.

  2. Create the control plane network:

    $ oc create -f ctlplane-network.yaml -n openstack

Verification

  • View the resource for the control plane network:

    $ oc get openstacknet/ctlplane -n openstack
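
You can also inspect the lower-level resources that the director Operator derives from the attachConfiguration. A quick check, assuming the nmstate and Multus components that OpenShift Virtualization uses are installed:

    $ oc get nncp
    $ oc get network-attachment-definitions -n openstack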

8.5. Creating VLAN networks for network isolation with OpenStackNet

You must create additional networks to implement network isolation for your composable networks. To accomplish this network isolation, you can place your composable networks on individual VLAN networks. In addition to IP address assignment, the OpenStackNet resource includes information to define the network configuration policy that OpenShift Virtualization uses to attach any virtual machines to VLAN networks.

To use the default Red Hat OpenStack Platform networks, you must create an OpenStackNet resource for each network.

Table 8.1. Default Red Hat OpenStack Platform networks

Network       VLAN   CIDR             Allocation
External      10     10.0.0.0/24      10.0.0.4 - 10.0.0.250
InternalApi   20     172.16.2.0/24    172.16.2.4 - 172.16.2.250
Storage       30     172.16.1.0/24    172.16.1.4 - 172.16.1.250
StorageMgmt   40     172.16.3.0/24    172.16.3.4 - 172.16.3.250
Tenant        50     172.16.0.0/24    172.16.0.4 - 172.16.0.250

Important

To use different networking details for each network, you must create a custom network_data.yaml file.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.

Procedure

  1. Create a file for your new network. For example, for the internal API network, create a file named internalapi-network.yaml on your workstation. Include the resource specification for the VLAN network. For example, the specification for an internal API network that manages VLAN-tagged traffic over a Linux bridge connected to the enp6s0 Ethernet device on each worker node is as follows:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackNet
    metadata:
      name: internalapi
    spec:
      cidr: 172.16.2.0/24
      vlan: 20
      allocationStart: 172.16.2.4
      allocationEnd: 172.16.2.250
      attachConfiguration:
        nodeNetworkConfigurationPolicy:
          nodeSelector:
            node-role.kubernetes.io/worker: ""
          desiredState:
            interfaces:
            - bridge:
                options:
                  stp:
                    enabled: false
                port:
                - name: enp6s0
              description: Linux bridge with enp6s0 as a port
              name: br-osp
              state: up
              type: linux-bridge

    When you use a VLAN for network isolation with a Linux bridge, the following happens:

    • The director Operator creates a Node Network Configuration Policy for the bridge interface specified in the resource, which uses nmstate to configure the bridge on worker nodes.
    • The director Operator creates a Network Attachment Definition for each network, which defines the Multus CNI plugin configuration. When you specify the VLAN ID on the Network Attachment Definition, the Multus CNI plugin enables vlan-filtering on the bridge.
    • The director Operator attaches a dedicated interface for each network on a virtual machine. This means the network template for the OpenStackVMSet is a multi-NIC network template.

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the isolated network, for example internalapi.
    spec

    Set the network configuration for the isolated network, including the VLAN ID. For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstacknet CRD:

    $ oc describe crd openstacknet

    Save the file when you have finished configuring the network specification.

  2. Create the internal API network:

    $ oc create -f internalapi-network.yaml -n openstack

Verification

  • View the resource for the internal API network:

    $ oc get openstacknet/internalapi -n openstack
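
Repeat this procedure for each network in Table 8.1. For example, a sketch of an OpenStackNet resource for the Storage network that reuses the VLAN and addressing defaults from the table and the same bridge attachment as the internal API example:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackNet
    metadata:
      name: storage
    spec:
      cidr: 172.16.1.0/24
      vlan: 30
      allocationStart: 172.16.1.4
      allocationEnd: 172.16.1.250
      attachConfiguration:
        nodeNetworkConfigurationPolicy:
          nodeSelector:
            node-role.kubernetes.io/worker: ""
          desiredState:
            interfaces:
            - bridge:
                options:
                  stp:
                    enabled: false
                port:
                - name: enp6s0
              description: Linux bridge with enp6s0 as a port
              name: br-osp
              state: up
              type: linux-bridge

Save this specification to a file, for example storage-network.yaml, and create it with oc create -f storage-network.yaml -n openstack.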

8.6. Creating a control plane with OpenStackControlPlane

The overcloud control plane contains the main Red Hat OpenStack Platform services that manage overcloud functionality. The control plane usually consists of 3 Controller nodes and can scale to other control plane-based composable roles. When you use composable roles, each service must run on exactly 3 additional dedicated nodes and the total number of nodes in the control plane must be odd to maintain Pacemaker quorum.

The OpenStackControlPlane custom resource creates control plane-based nodes as virtual machines within OpenShift Virtualization.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Use the OpenStackNet resource to create a control plane network and any additional isolated networks.

Procedure

  1. Create a file named openstack-controller.yaml on your workstation. Include the resource specification for the Controller nodes. For example, the specification for a control plane that consists of 3 Controller nodes is as follows:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: overcloud
      namespace: openstack
    spec:
      openStackClientImageURL: registry.redhat.io/rhosp-beta/openstack-tripleoclient:16.2
      openStackClientNetworks:
            - ctlplane
            - external
            - internalapi
      openStackClientStorageClass: host-nfs-storageclass
      passwordSecret: userpassword
      gitSecret: git-secret
      virtualMachineRoles:
        controller:
          roleName: Controller
          roleCount: 3
          networks:
            - ctlplane
            - internalapi
            - external
            - tenant
            - storage
            - storagemgmt
          cores: 6
          memory: 12
          diskSize: 50
          baseImageVolumeName: openstack-base-img
          storageClass: host-nfs-storageclass

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the overcloud control plane, which is overcloud.
    metadata.namespace
    Set to the director Operator namespace, which is openstack.
    spec

    Set the configuration for the control plane. For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstackcontrolplane CRD:

    $ oc describe crd openstackcontrolplane

    Save the file when you have finished configuring the control plane specification.

  2. Create the control plane:

    $ oc create -f openstack-controller.yaml -n openstack

    Wait until OCP creates the resources related to the OpenStackControlPlane resource.

    As part of the OpenStackControlPlane resource, the director Operator also creates an OpenStackClient pod that you can access through a remote shell to run RHOSP commands.

Verification

  1. View the resource for the control plane:

    $ oc get openstackcontrolplane/overcloud -n openstack
  2. View the OpenStackVMSet resources to verify the creation of the control plane virtual machine set:

    $ oc get openstackvmsets -n openstack
  3. View the virtual machine resources to verify the creation of the control plane virtual machines in OpenShift Virtualization:

    $ oc get virtualmachines -n openstack
  4. Test access to the openstackclient remote shell:

    $ oc rsh -n openstack openstackclient
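
If the remote shell does not open immediately, the pod or the Controller virtual machines might still be starting. A quick way to check their progress:

    $ oc get pods -n openstack
    $ oc get vmi -n openstack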

8.7. Creating directories for templates and environment files

Create directories on your workstation to store your custom templates and environment files, which you upload to ConfigMaps in OpenShift Container Platform (OCP).

Procedure

  1. Create a directory for your custom templates:

    $ mkdir custom_templates
  2. Create a directory for your custom environment files:

    $ mkdir custom_environment_files

8.8. Custom NIC heat template for Controller nodes

The following example is a heat template that contains NIC configuration for the Controller virtual machines.

heat_template_version: rocky
description: >
  Software Config to drive os-net-config to configure multiple interfaces for the Controller role.
parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ControlPlaneSubnetCidr:
    default: ''
    description: >
      The subnet CIDR of the control plane network. (The parameter is
      automatically resolved from the ctlplane subnet's cidr attribute.)
    type: string
  ControlPlaneDefaultRoute:
    default: ''
    description: The default route of the control plane network. (The parameter
      is automatically resolved from the ctlplane subnet's gateway_ip attribute.)
    type: string
  ControlPlaneStaticRoutes:
    default: []
    description: >
      Routes for the ctlplane network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  ControlPlaneMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the network.
      (The parameter is automatically resolved from the ctlplane network's mtu attribute.)
    type: number
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageNetworkVlanID:
    default: 30
    description: Vlan ID for the storage network traffic.
    type: number
  StorageMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the
      Storage network.
    type: number
  StorageInterfaceRoutes:
    default: []
    description: >
      Routes for the storage network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage_mgmt network
    type: string
  StorageMgmtNetworkVlanID:
    default: 40
    description: Vlan ID for the storage_mgmt network traffic.
    type: number
  StorageMgmtMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the
      StorageMgmt network.
    type: number
  StorageMgmtInterfaceRoutes:
    default: []
    description: >
      Routes for the storage_mgmt network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal_api network
    type: string
  InternalApiNetworkVlanID:
    default: 20
    description: Vlan ID for the internal_api network traffic.
    type: number
  InternalApiMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the
      InternalApi network.
    type: number
  InternalApiInterfaceRoutes:
    default: []
    description: >
      Routes for the internal_api network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  TenantNetworkVlanID:
    default: 50
    description: Vlan ID for the tenant network traffic.
    type: number
  TenantMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the
      Tenant network.
    type: number
  TenantInterfaceRoutes:
    default: []
    description: >
      Routes for the tenant network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  ExternalNetworkVlanID:
    default: 10
    description: Vlan ID for the external network traffic.
    type: number
  ExternalMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the
      External network.
    type: number
  ExternalInterfaceDefaultRoute:
    default: ''
    description: default route for the external network
    type: string
  ExternalInterfaceRoutes:
    default: []
    description: >
      Routes for the external network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  DnsServers: # Override this via parameter_defaults
    default: []
    description: >
      DNS servers to use for the Overcloud (2 max for some implementations).
      If not set the nameservers configured in the ctlplane subnet's
      dns_nameservers attribute will be used.
    type: comma_delimited_list
  DnsSearchDomains: # Override this via parameter_defaults
    default: []
    description: A list of DNS search domains to be added (in order) to resolv.conf.
    type: comma_delimited_list
resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:

              ## NIC2 - Control Plane ##
              - type: interface
                name: nic2
                mtu:
                  get_param: ControlPlaneMtu
                use_dhcp: false
                dns_servers:
                  get_param: DnsServers
                domain:
                  get_param: DnsSearchDomains
                addresses:
                - ip_netmask:
                    list_join:
                    - /
                    - - get_param: ControlPlaneIp
                      - get_param: ControlPlaneSubnetCidr
                routes:
                  list_concat_unique:
                    - get_param: ControlPlaneStaticRoutes

              ## NIC3 - External ##
              - type: ovs_bridge
                name: bridge_name
                mtu:
                  get_param: ExternalMtu
                dns_servers:
                  get_param: DnsServers
                use_dhcp: false
                addresses:
                - ip_netmask:
                    get_param: ExternalIpSubnet
                routes:
                  list_concat_unique:
                    - get_param: ExternalInterfaceRoutes
                    - - default: true
                        next_hop:
                          get_param: ExternalInterfaceDefaultRoute
                members:
                - type: interface
                  name: nic3
                  mtu:
                    get_param: ExternalMtu
                  use_dhcp: false
                  primary: true

              ## NIC4 - Internal API ##
              - type: interface
                name: nic4
                mtu:
                  get_param: InternalApiMtu
                use_dhcp: false
                addresses:
                - ip_netmask:
                    get_param: InternalApiIpSubnet
                routes:
                  list_concat_unique:
                    - get_param: InternalApiInterfaceRoutes

              ## NIC5 - Storage ##
              - type: interface
                name: nic5
                mtu:
                  get_param: StorageMtu
                use_dhcp: false
                addresses:
                - ip_netmask:
                    get_param: StorageIpSubnet
                routes:
                  list_concat_unique:
                    - get_param: StorageInterfaceRoutes

              ## NIC6 - StorageMgmt ##
              - type: interface
                name: nic6
                mtu:
                  get_param: StorageMgmtMtu
                use_dhcp: false
                addresses:
                - ip_netmask:
                    get_param: StorageMgmtIpSubnet
                routes:
                  list_concat_unique:
                    - get_param: StorageMgmtInterfaceRoutes

              ## NIC7 - Tenant ##
              - type: ovs_bridge
                name: br-isolated
                mtu:
                  get_param: TenantMtu
                dns_servers:
                  get_param: DnsServers
                use_dhcp: false
                addresses:
                - ip_netmask:
                    get_param: TenantIpSubnet
                routes:
                  list_concat_unique:
                    - get_param: TenantInterfaceRoutes
                members:
                - type: interface
                  name: nic7
                  mtu:
                    get_param: TenantMtu
                  use_dhcp: false
                  primary: true

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value:
      get_resource: OsNetConfigImpl

This configuration maps the networks to the following bridges and interfaces:

Network        Bridge        Interface
Control Plane  -             nic2
External       br-ex         nic3
Internal API   -             nic4
Storage        -             nic5
StorageMgmt    -             nic6
Tenant         br-isolated   nic7

To use this template in your deployment, copy the contents of the example to net-config-multi-nic-controller.yaml in your custom_templates directory on your workstation.
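
Before you archive the file, you can run a basic YAML syntax check on the copy. A minimal sketch, assuming Python 3 with the PyYAML module is available on your workstation; this validates only the YAML syntax, not the heat schema:

    $ python3 -c 'import yaml; yaml.safe_load(open("custom_templates/net-config-multi-nic-controller.yaml"))'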

8.9. Custom NIC heat template for Compute nodes

The following example is a heat template that contains NIC configuration for the Compute bare metal nodes.

heat_template_version: rocky
description: >
  Software Config to drive os-net-config to configure VLANs for the Compute role.
parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ControlPlaneSubnetCidr:
    default: ''
    description: >
      The subnet CIDR of the control plane network. (The parameter is
      automatically resolved from the ctlplane subnet's cidr attribute.)
    type: string
  ControlPlaneDefaultRoute:
    default: ''
    description: The default route of the control plane network. (The parameter
      is automatically resolved from the ctlplane subnet's gateway_ip attribute.)
    type: string
  ControlPlaneStaticRoutes:
    default: []
    description: >
      Routes for the ctlplane network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  ControlPlaneMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the network.
      (The parameter is automatically resolved from the ctlplane network's mtu attribute.)
    type: number
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageNetworkVlanID:
    default: 30
    description: Vlan ID for the storage network traffic.
    type: number
  StorageMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the
      Storage network.
    type: number
  StorageInterfaceRoutes:
    default: []
    description: >
      Routes for the storage network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal_api network
    type: string
  InternalApiNetworkVlanID:
    default: 20
    description: Vlan ID for the internal_api network traffic.
    type: number
  InternalApiMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the
      InternalApi network.
    type: number
  InternalApiInterfaceRoutes:
    default: []
    description: >
      Routes for the internal_api network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  TenantNetworkVlanID:
    default: 50
    description: Vlan ID for the tenant network traffic.
    type: number
  TenantMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the
      Tenant network.
    type: number
  TenantInterfaceRoutes:
    default: []
    description: >
      Routes for the tenant network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  ExternalMtu:
    default: 1500
    description: The maximum transmission unit (MTU) size(in bytes) that is
      guaranteed to pass through the data path of the segments in the
      External network.
    type: number
  DnsServers: # Override this via parameter_defaults
    default: []
    description: >
      DNS servers to use for the Overcloud (2 max for some implementations).
      If not set the nameservers configured in the ctlplane subnet's
      dns_nameservers attribute will be used.
    type: comma_delimited_list
  DnsSearchDomains: # Override this via parameter_defaults
    default: []
    description: A list of DNS search domains to be added (in order) to resolv.conf.
    type: comma_delimited_list

resources:

  MinViableMtu:
    # This resource resolves the minimum viable MTU for interfaces, bonds and
    # bridges that carry multiple VLANs. Each VLAN may have different MTU. The
    # bridge, bond or interface must have an MTU to allow the VLAN with the
    # largest MTU.
    type: OS::Heat::Value
    properties:
      type: number
      value:
        yaql:
          expression: $.data.max()
          data:
            - {get_param: ControlPlaneMtu}
            - {get_param: StorageMtu}
            - {get_param: InternalApiMtu}
            - {get_param: TenantMtu}
            - {get_param: ExternalMtu}

  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
              - type: ovs_bridge
                name: br-isolated
                mtu:
                  get_attr: [MinViableMtu, value]
                use_dhcp: false
                dns_servers:
                  get_param: DnsServers
                domain:
                  get_param: DnsSearchDomains
                addresses:
                - ip_netmask:
                    list_join:
                    - /
                    - - get_param: ControlPlaneIp
                      - get_param: ControlPlaneSubnetCidr
                routes:
                  list_concat_unique:
                    - get_param: ControlPlaneStaticRoutes
                    - - default: true
                        next_hop:
                          get_param: ControlPlaneDefaultRoute
                members:
                - type: interface
                  name: nic4
                  mtu:
                    get_attr: [MinViableMtu, value]
                  # force the MAC address of the bridge to this interface
                  primary: true
                - type: vlan
                  mtu:
                    get_param: StorageMtu
                  vlan_id:
                    get_param: StorageNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: StorageIpSubnet
                  routes:
                    list_concat_unique:
                      - get_param: StorageInterfaceRoutes
                - type: vlan
                  mtu:
                    get_param: InternalApiMtu
                  vlan_id:
                    get_param: InternalApiNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: InternalApiIpSubnet
                  routes:
                    list_concat_unique:
                      - get_param: InternalApiInterfaceRoutes
                - type: vlan
                  mtu:
                    get_param: TenantMtu
                  vlan_id:
                    get_param: TenantNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: TenantIpSubnet
                  routes:
                    list_concat_unique:
                      - get_param: TenantInterfaceRoutes
              - type: ovs_bridge
                # This will default to br-ex, anything else requires specific
                # bridge mapping entries for it to be used.
                name: bridge_name
                mtu:
                  get_param: ExternalMtu
                use_dhcp: false
                members:
                - type: interface
                  name: nic3
                  mtu:
                    get_param: ExternalMtu
                  use_dhcp: false
                  primary: true
outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value:
      get_resource: OsNetConfigImpl

This configuration maps the networks to the following bridges and interfaces:

Network                                        Bridge        Interface
Control Plane, Storage, Internal API, Tenant   br-isolated   nic4
External                                       br-ex         nic3

Note

You can modify this configuration to suit the NIC configuration of your bare metal nodes.

To use this template in your deployment, copy the contents of the example to net-config-two-nic-vlan-compute.yaml in your custom_templates directory on your workstation.

8.10. Adding custom templates to the overcloud configuration

Archive your custom templates into a tarball file so that you can include these templates as a part of your overcloud deployment.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Create the custom templates that you want to apply to provisioned nodes.

Procedure

  1. Navigate to the location of your custom templates:

    $ cd ~/custom_templates
  2. Archive the templates into a tarball:

    $ tar -cvzf custom-config.tar.gz *.yaml
  3. Create the tripleo-tarball-config ConfigMap and use the tarball as data:

    $ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack

Verification

  • View the ConfigMap:

    $ oc get configmap/tripleo-tarball-config -n openstack
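
To check the contents in more detail, you can list the files that you archived and describe the ConfigMap to confirm that the tarball key is present:

    $ tar -tzf custom-config.tar.gz
    $ oc describe configmap/tripleo-tarball-config -n openstack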

8.11. Custom environment file for configuring networking in the director Operator

The following example is an environment file that maps the network software configuration resources to the NIC templates for your overcloud.

resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: net-config-multi-nic-controller.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: net-config-two-nic-vlan-compute.yaml
Note

Add any additional network configuration in a parameter_defaults section.

To use this template in your deployment, copy the contents of the example to network-environment.yaml in your custom_environment_files directory on your workstation.
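
For example, a sketch of the same file with a parameter_defaults section added to set the DnsServers and DnsSearchDomains parameters defined in the NIC templates; the addresses and domain shown here are placeholders for your environment:

    resource_registry:
      OS::TripleO::Controller::Net::SoftwareConfig: net-config-multi-nic-controller.yaml
      OS::TripleO::Compute::Net::SoftwareConfig: net-config-two-nic-vlan-compute.yaml

    parameter_defaults:
      DnsServers:
        - 10.0.0.253
      DnsSearchDomains:
        - example.com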

8.12. Custom environment file for configuring external Ceph Storage usage in the director Operator

The following example is an environment file that contains configuration to integrate with an external Ceph Storage cluster.

resource_registry:
  OS::TripleO::Services::CephExternal: deployment/ceph-ansible/ceph-external.yaml

parameter_defaults:
  # needed for now because of the repo used to create tripleo-deploy image
  CephAnsibleRepo: "rhelosp-ceph-4-tools"
  CephAnsiblePlaybookVerbosity: 3

  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  CinderEnableIscsiBackend: false

  # Change the following values for your external ceph cluster
  NovaRbdPoolName: vms
  CinderRbdPoolName: volumes
  CinderBackupRbdPoolName: backups
  GlanceRbdPoolName: images
  CephClientUserName: openstack
  CephClientKey: AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==
  CephClusterFSID: 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
  CephExternalMonHost: 172.16.1.7, 172.16.1.8, 172.16.1.9

This configuration enables the CephExternal and CephClient services on your nodes, sets the pools for different RHOSP services, and sets the following parameters to integrate with your external Ceph Storage cluster:

CephClientKey
The Ceph client key of your external Ceph Storage cluster.
CephClusterFSID
The file system ID of your external Ceph Storage cluster.
CephExternalMonHost
A comma-delimited list of the IPs of all MON hosts in your external Ceph Storage cluster.
Note

You can modify this configuration to suit your storage configuration.

To use this template in your deployment, copy the contents of the example to ceph-ansible-external.yaml in your custom_environment_files directory on your workstation.
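
If you have administrative access to the external Ceph Storage cluster, you can retrieve the values for these parameters with the ceph command line tool. A sketch, assuming a client.openstack user already exists on the external cluster:

    $ ceph fsid
    $ ceph mon dump
    $ ceph auth get-key client.openstack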

8.13. Adding custom environment files to the overcloud configuration

Upload a set of custom environment files from a directory to a ConfigMap that you can include as a part of your overcloud deployment.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Create custom environment files for your overcloud deployment.

Procedure

  1. Create the heat-env-config ConfigMap and use the directory that contains the environment files as data:

    $ oc create configmap -n openstack heat-env-config --from-file=~/custom_environment_files/ --dry-run=client -o yaml | oc apply -f -

Verification

  • View the ConfigMap:

    $ oc get configmap/heat-env-config -n openstack

8.14. Creating Compute nodes with OpenStackBaremetalSet

Compute nodes provide computing resources to your Red Hat OpenStack Platform environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.

The OpenStackBaremetalSet custom resource creates Compute nodes from bare metal machines that OpenShift Container Platform manages.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Use the OpenStackNet resource to create a control plane network and any additional isolated networks.

Procedure

  1. Create a file named openstack-compute.yaml on your workstation. Include the resource specification for the Compute nodes. For example, the specification for 1 Compute node is as follows:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackBaremetalSet
    metadata:
      name: compute
      namespace: openstack
    spec:
      count: 1
      baseImageUrl: http://host/images/rhel-image-8.4.x86_64.qcow2
      deploymentSSHSecret: osp-controlplane-ssh-keys
      ctlplaneInterface: enp2s0
      networks:
        - ctlplane
        - internalapi
        - tenant
        - storage
      roleName: Compute
      passwordSecret: userpassword

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the Compute node bare metal set, which is compute.
    metadata.namespace
    Set to the director Operator namespace, which is openstack.
    spec

    Set the configuration for the Compute nodes. For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstackbaremetalset CRD:

    $ oc describe crd openstackbaremetalset

    Save the file when you have finished configuring the Compute node specification.

  2. Create the Compute nodes:

    $ oc create -f openstack-compute.yaml -n openstack

Verification

  1. View the resource for the Compute nodes:

    $ oc get openstackbaremetalset/compute -n openstack
  2. View the bare metal machines that OpenShift manages to verify the creation of the Compute nodes:

    $ oc get baremetalhosts -n openshift-machine-api
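
Provisioning bare metal nodes can take a significant amount of time. To follow the progress, you can watch the provisioning state of the hosts, for example:

    $ oc get baremetalhosts -n openshift-machine-api -w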

8.15. Creating Ansible playbooks for overcloud configuration with OpenStackPlaybookGenerator

After you provision the overcloud infrastructure, you must create a set of Ansible playbooks to configure the Red Hat OpenStack Platform (RHOSP) software on the overcloud nodes. You create these playbooks with the OpenStackPlaybookGenerator resource, which uses the config-download feature in RHOSP director to convert heat configuration to playbooks.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Configure a git-secret Secret that contains authentication details for your remote Git repository.
  • Configure a tripleo-tarball-config ConfigMap that contains your custom heat templates.
  • Configure a heat-env-config ConfigMap that contains your custom environment files.

Procedure

  1. Create a file named openstack-playbook-generator.yaml on your workstation. Include the resource specification to generate the Ansible playbooks. For example, the specification to generate the playbooks is as follows:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackPlaybookGenerator
    metadata:
      name: default
      namespace: openstack
    spec:
      imageURL: registry.redhat.io/rhosp-beta/openstack-tripleoclient:16.2
      gitSecret: git-secret
      heatEnvConfigMap: heat-env-config
      tarballConfigMap: tripleo-tarball-config

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the playbook generator, which is default.
    metadata.namespace
    Set to the director Operator namespace, which is openstack.
    spec.gitSecret
    Set to the Secret resource that contains the Git authentication credentials, which is git-secret.
    spec.tarballConfigMap
    Set to the ConfigMap that contains the tarball with your custom heat templates, which is tripleo-tarball-config.
    spec.heatEnvConfigMap
    Set to the ConfigMap that contains your custom environment files, which is heat-env-config.

    For more descriptions of the values you can use in the spec section, view the specification schema in the custom resource definition for the openstackplaybookgenerator CRD:

    $ oc describe crd openstackplaybookgenerator

    Save the file when you have finished configuring the Ansible playbook generator specification.

  2. Create the Ansible playbook generator:

    $ oc create -f openstack-playbook-generator.yaml -n openstack

Verification

  • View the resource for the playbook generator:

    $ oc get openstackplaybookgenerator/default -n openstack
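
Playbook generation runs asynchronously. To follow its progress, you can inspect the full status of the resource, for example:

    $ oc get openstackplaybookgenerator/default -n openstack -o yaml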

8.16. Registering the operating system of your overcloud

Before the director Operator configures the overcloud software on nodes, you must register the operating system of all nodes to either the Red Hat Customer Portal or Red Hat Satellite Server, and enable repositories for your nodes. To register your nodes, you can use the redhat_subscription Ansible module with the inventory file that the OpenStackPlaybookGenerator resource creates.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Use the OpenStackControlPlane resource to create a control plane.
  • Use the OpenStackBaremetalSet resource to create bare metal Compute nodes.
  • Use the OpenStackPlaybookGenerator resource to create the Ansible playbook configuration for your overcloud.

Procedure

  1. Access the remote shell for openstackclient:

    $ oc rsh -n openstack openstackclient
  2. Change to the cloud-admin home directory:

    $ cd /home/cloud-admin
  3. Optional: Check the diff for the overcloud Ansible playbooks:

    $ ./tripleo-deploy.sh -d
  4. Accept the newest version of the rendered Ansible playbooks and tag them as latest:

    $ ./tripleo-deploy.sh -a

    The newest version of the playbooks includes an inventory file for the overcloud configuration. You can find this inventory file on the OpenStackClient pod at /home/cloud-admin/playbooks/tripleo-ansible/inventory.yaml.

  5. Create a playbook that uses the redhat_subscription modules to register your nodes. For example, the following playbook registers Controller nodes:

    ---
    - name: Register Controller nodes
      hosts: Controller
      become: yes
      vars:
        repos:
          - rhel-8-for-x86_64-baseos-eus-rpms
          - rhel-8-for-x86_64-appstream-eus-rpms
          - rhel-8-for-x86_64-highavailability-eus-rpms
          - ansible-2.9-for-rhel-8-x86_64-rpms
          - advanced-virt-for-rhel-8-x86_64-rpms
          - openstack-16.2-for-rhel-8-x86_64-rpms
          - fast-datapath-for-rhel-8-x86_64-rpms
      tasks:
        - name: Register system
          redhat_subscription:
            username: myusername
            password: p@55w0rd!
            org_id: 1234567
            release: 8.4
            pool_ids: 1a85f9223e3d5e43013e3d6e8ff506fd
        - name: Disable all repos
          command: "subscription-manager repos --disable *"
        - name: Enable Controller node repos
          command: "subscription-manager repos --enable {{ item }}"
          with_items: "{{ repos }}"

    This play contains the following three tasks:

    • Register the node.
    • Disable any auto-enabled repositories.
    • Enable only the repositories relevant to the Controller node. The repositories are listed with the repos variable.
  6. Run the playbook to register the overcloud nodes to the required repositories. For example, if you saved the playbook as rhsm.yaml:

    $ ansible-playbook -i /home/cloud-admin/playbooks/tripleo-ansible/inventory.yaml ./rhsm.yaml
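
After the playbook completes, you can confirm the subscription status of the nodes with an ad-hoc Ansible command against the same inventory. A minimal check:

    $ ansible Controller -b -i /home/cloud-admin/playbooks/tripleo-ansible/inventory.yaml -a "subscription-manager status"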

8.17. Applying overcloud configuration with the director Operator

You can configure the overcloud with the director Operator only after you have created your control plane, provisioned your bare metal Compute nodes, and generated the Ansible playbooks that configure the software on each node. When you create an OpenStackControlPlane resource, the director Operator creates an OpenStackClient pod that you access through a remote shell to run the tripleo-deploy.sh script and configure the overcloud.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Use the OpenStackControlPlane resource to create a control plane.
  • Use the OpenStackBaremetalSet resource to create bare metal Compute nodes.
  • Use the OpenStackPlaybookGenerator resource to create the Ansible playbook configuration for your overcloud.

Procedure

  1. Access the remote shell for openstackclient:

    $ oc rsh -n openstack openstackclient
  2. Change to the cloud-admin home directory:

    $ cd /home/cloud-admin
  3. Optional: Check the diff for the overcloud Ansible playbooks:

    $ ./tripleo-deploy.sh -d
  4. Accept the newest version of the rendered Ansible playbooks and tag them as latest:

    $ ./tripleo-deploy.sh -a
  5. Apply the Ansible playbooks against the overcloud nodes:

    $ ./tripleo-deploy.sh -p
Note

You can view a log of the Ansible configuration at ~/ansible.log within the openstackclient pod.
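
For example, to follow the configuration while tripleo-deploy.sh -p runs, you can tail this log from a second terminal:

    $ oc rsh -n openstack openstackclient
    $ tail -f ~/ansible.log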