Chapter 3. Creating networks with the director Operator

Use the OpenStackNet resource to create networks and bridges on OpenShift Virtualization worker nodes to connect your virtual machines to these networks. You must create one control plane network for your overcloud and additional networks to implement network isolation for your composable networks.

3.1. Understanding virtual machine bridging with OpenStackNet

When you create virtual machines with the OpenStackVMSet resource, you must connect these virtual machines to the relevant Red Hat OpenStack Platform (RHOSP) networks. The OpenStackNet resource includes a nodeNetworkConfigurationPolicy option, which passes network interface data to the NodeNetworkConfigurationPolicy resource in OpenShift Virtualization. The NodeNetworkConfigurationPolicy resource uses the nmstate API to configure the end state of the network configuration on each OpenShift Container Platform (OCP) worker node. Through this method, you can create a bridge on OCP worker nodes to connect your Controller virtual machines to RHOSP networks.

For example, if you create a control plane network and set the nodeNetworkConfigurationPolicy option to create a Linux bridge and connect the bridge to a NIC on each worker, the NodeNetworkConfigurationPolicy resource configures each OCP worker node to match this desired end state:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNet
metadata:
  name: ctlplane
spec:
  ...
  attachConfiguration:
    nodeSelector:
      node-role.kubernetes.io/worker: ""
    nodeNetworkConfigurationPolicy:
      ...
      desiredState:
        interfaces:
        - bridge:
            options:
              stp:
                enabled: false
            port:
            - name: enp6s0
          description: Linux bridge with enp6s0 as a port
          name: br-osp
          state: up
          type: linux-bridge

After you apply this configuration, each worker contains a new bridge named br-osp, which is connected to the enp6s0 NIC on each host. All RHOSP Controller virtual machines can connect to the br-osp bridge for control plane network traffic.

If you later create an Internal API network on VLAN 20, you can set the nodeNetworkConfigurationPolicy option to modify the networking configuration on each OCP worker node and connect the VLAN to the existing br-osp bridge:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNet
metadata:
  name: internalapi
spec:
  ...
  vlan: 20
  attachConfiguration:
    nodeNetworkConfigurationPolicy:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      desiredState:
        interfaces:
        - bridge:
            options:
              stp:
                enabled: false
            port:
            - name: enp6s0
          description: Linux bridge with enp6s0 as a port
          name: br-osp
          state: up
          type: linux-bridge

The br-osp bridge already exists and is connected to the enp6s0 NIC on each host, so no change occurs to the bridge itself. However, OpenStackNet associates VLAN 20 with this network, which means RHOSP Controller virtual machines can connect to VLAN 20 on the br-osp bridge for Internal API network traffic.

When you create virtual machines with the OpenStackVMSet resource, the virtual machines use multiple Virtio network devices, one connected to each network. OpenShift Virtualization sorts the network names in alphabetical order, except for the default network, which is always the first interface.

For example, if you create the default RHOSP networks with OpenStackNet, the interface configuration for Controller virtual machines resembles the following example:

interfaces:
  - masquerade: {}
    model: virtio
    name: default
  - bridge: {}
    model: virtio
    name: ctlplane
  - bridge: {}
    model: virtio
    name: external
  - bridge: {}
    model: virtio
    name: internalapi
  - bridge: {}
    model: virtio
    name: storage
  - bridge: {}
    model: virtio
    name: storagemgmt
  - bridge: {}
    model: virtio
    name: tenant

This configuration results in the following network-to-interface mapping for Controller nodes:

Table 3.1. Default network-to-interface mapping

Network        Interface

default        nic1
ctlplane       nic2
external       nic3
internalapi    nic4
storage        nic5
storagemgmt    nic6
tenant         nic7
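The mapping in Table 3.1 follows directly from the ordering rule: the default network is always the first interface, and the remaining networks are sorted alphabetically. A minimal sketch of that rule, using the default network names from the example above:

```python
# Networks attached to the Controller virtual machines (default RHOSP set).
networks = ["ctlplane", "external", "internalapi",
            "storage", "storagemgmt", "tenant"]

# The default network is always nic1; the rest follow in alphabetical order.
ordered = ["default"] + sorted(networks)
mapping = {name: f"nic{i}" for i, name in enumerate(ordered, start=1)}

for name, nic in mapping.items():
    print(f"{nic}  {name}")
# nic1  default
# nic2  ctlplane
# ...
# nic7  tenant
```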

When you create heat templates for your Controller NICs, ensure the templates contain the respective NIC configuration for each network:

...
resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:

              ## NIC2 - Control Plane ##
              - type: interface
                name: nic2
                ...

              ## NIC3 - External ##
              - type: ovs_bridge
                name: bridge_name
                ...
                members:
                - type: interface
                  name: nic3

              ## NIC4 - Internal API ##
              - type: interface
                name: nic4
                ...

              ## NIC5 - Storage ##
              - type: interface
                name: nic5
                ...

              ## NIC6 - StorageMgmt ##
              - type: interface
                name: nic6
                ...

              ## NIC7 - Tenant ##
              - type: ovs_bridge
                name: br-isolated
                ...
                members:
                - type: interface
                  name: nic7
                  ...

3.2. Creating an overcloud control plane network with OpenStackNet

You must create one control plane network for your overcloud. In addition to IP address assignment, the OpenStackNet resource includes information to define the network configuration policy that OpenShift Virtualization uses to attach any virtual machines to the network.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.

Procedure

  1. Create a file named ctlplane-network.yaml on your workstation. Include the resource specification for the control plane network, which is named ctlplane. For example, the specification for a control plane that uses a Linux bridge connected to the enp6s0 Ethernet device on each worker node is as follows:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackNet
    metadata:
      name: ctlplane
    spec:
      cidr: 192.168.25.0/24
      allocationStart: 192.168.25.100
      allocationEnd: 192.168.25.250
      gateway: 192.168.25.1
      attachConfiguration:
        nodeNetworkConfigurationPolicy:
          nodeSelector:
            node-role.kubernetes.io/worker: ""
          desiredState:
            interfaces:
            - bridge:
                options:
                  stp:
                    enabled: false
                port:
                - name: enp6s0
              description: Linux bridge with enp6s0 as a port
              name: br-osp
              state: up
              type: linux-bridge

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the control plane network, which is ctlplane.
    spec

    Set the network configuration for the control plane network. For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstacknet CRD:

    $ oc describe crd openstacknet

    Save the file when you have finished configuring the network specification.

  2. Create the control plane network:

    $ oc create -f ctlplane-network.yaml -n openstack

Verification

  • View the resource for the control plane network:

    $ oc get openstacknet/ctlplane -n openstack
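Before you apply the resource, you can sanity-check that the allocation range and gateway fall within the CIDR. A minimal sketch using Python's ipaddress module, with the address values from the example specification above:

```python
import ipaddress

# Values from the example ctlplane specification.
cidr = ipaddress.ip_network("192.168.25.0/24")
allocation_start = ipaddress.ip_address("192.168.25.100")
allocation_end = ipaddress.ip_address("192.168.25.250")
gateway = ipaddress.ip_address("192.168.25.1")

# The allocation range and gateway must lie within the CIDR,
# and the range must be ordered correctly.
assert allocation_start in cidr and allocation_end in cidr
assert gateway in cidr
assert allocation_start <= allocation_end

size = int(allocation_end) - int(allocation_start) + 1
print(f"{cidr} OK: {size} allocatable addresses")
# → 192.168.25.0/24 OK: 151 allocatable addresses
```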

3.3. Creating VLAN networks for network isolation with OpenStackNet

You must create additional networks to implement network isolation for your composable networks. To accomplish this network isolation, you can place your composable networks on individual VLAN networks. In addition to IP address assignment, the OpenStackNet resource includes information to define the network configuration policy that OpenShift Virtualization uses to attach any virtual machines to VLAN networks.

To use the default Red Hat OpenStack Platform networks, you must create an OpenStackNet resource for each network.

Table 3.2. Default Red Hat OpenStack Platform networks

Network        VLAN    CIDR             Allocation

External       10      10.0.0.0/24      10.0.0.4 - 10.0.0.250
InternalApi    20      172.16.2.0/24    172.16.2.4 - 172.16.2.250
Storage        30      172.16.1.0/24    172.16.1.4 - 172.16.1.250
StorageMgmt    40      172.16.3.0/24    172.16.3.4 - 172.16.3.250
Tenant         50      172.16.0.0/24    172.16.0.4 - 172.16.0.250

Important

To use different networking details for each network, you must create a custom network_data.yaml file.
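The same sanity check can be applied across all of the default networks in Table 3.2 before you create the resources. A sketch that also verifies the VLAN IDs are unique:

```python
import ipaddress

# Default RHOSP networks from Table 3.2: (VLAN, CIDR, allocation start, allocation end).
networks = {
    "External":    (10, "10.0.0.0/24",   "10.0.0.4",   "10.0.0.250"),
    "InternalApi": (20, "172.16.2.0/24", "172.16.2.4", "172.16.2.250"),
    "Storage":     (30, "172.16.1.0/24", "172.16.1.4", "172.16.1.250"),
    "StorageMgmt": (40, "172.16.3.0/24", "172.16.3.4", "172.16.3.250"),
    "Tenant":      (50, "172.16.0.0/24", "172.16.0.4", "172.16.0.250"),
}

# Each VLAN ID must be unique across the isolated networks.
vlans = [values[0] for values in networks.values()]
assert len(vlans) == len(set(vlans)), "VLAN IDs must be unique"

# Each allocation range must lie within its network's CIDR.
for name, (vlan, cidr, start, end) in networks.items():
    net = ipaddress.ip_network(cidr)
    assert ipaddress.ip_address(start) in net, f"{name}: start outside CIDR"
    assert ipaddress.ip_address(end) in net, f"{name}: end outside CIDR"

print("All default networks validated")
```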

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.

Procedure

  1. Create a file for your new network. For example, for the internal API network, create a file named internalapi-network.yaml on your workstation. Include the resource specification for the VLAN network. For example, the specification for an internal API network that manages VLAN-tagged traffic over a Linux bridge connected to the enp6s0 Ethernet device on each worker node is as follows:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackNet
    metadata:
      name: internalapi
    spec:
      cidr: 172.16.2.0/24
      vlan: 20
      allocationStart: 172.16.2.4
      allocationEnd: 172.16.2.250
      attachConfiguration:
        nodeNetworkConfigurationPolicy:
          nodeSelector:
            node-role.kubernetes.io/worker: ""
          desiredState:
            interfaces:
            - bridge:
                options:
                  stp:
                    enabled: false
                port:
                - name: enp6s0
              description: Linux bridge with enp6s0 as a port
              name: br-osp
              state: up
              type: linux-bridge

    When you use a VLAN for network isolation with the linux-bridge type, the following occurs:

    • The director Operator creates a NodeNetworkConfigurationPolicy for the bridge interface specified in the resource, which uses nmstate to configure the bridge on worker nodes.
    • The director Operator creates a NetworkAttachmentDefinition for each network, which defines the Multus CNI plugin configuration. When you specify the VLAN ID on the NetworkAttachmentDefinition, the Multus CNI plugin enables vlan-filtering on the bridge.
    • The director Operator attaches a dedicated interface for each network on a virtual machine. This means that the network template for the OpenStackVMSet is a multi-NIC network template.

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the network. For the internal API network, the name is internalapi.
    spec

    Set the network configuration for the VLAN network. For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstacknet CRD:

    $ oc describe crd openstacknet

    Save the file when you have finished configuring the network specification.

  2. Create the internal API network:

    $ oc create -f internalapi-network.yaml -n openstack

Verification

  • View the resource for the internal API network:

    $ oc get openstacknet/internalapi -n openstack
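To illustrate the vlan-filtering behavior described above, the following sketch builds the kind of CNI configuration that a NetworkAttachmentDefinition carries for an untagged and a VLAN-tagged network. The field names follow the upstream bridge CNI plugin; the exact configuration that the director Operator generates is an implementation detail and may differ:

```python
import json

def bridge_cni_config(name, bridge, vlan=None):
    """Sketch of a bridge CNI plugin configuration (illustrative only).

    When a VLAN ID is set, the bridge plugin tags traffic for that
    network, which corresponds to vlan-filtering on the Linux bridge.
    """
    config = {
        "cniVersion": "0.3.1",
        "name": name,
        "type": "bridge",   # upstream bridge CNI plugin
        "bridge": bridge,
    }
    if vlan is not None:
        config["vlan"] = vlan
    return json.dumps(config)

# The ctlplane network is untagged; internalapi is tagged with VLAN 20.
print(bridge_cni_config("ctlplane", "br-osp"))
print(bridge_cni_config("internalapi", "br-osp", vlan=20))
```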