Chapter 10. Configuring overcloud networking

To configure the physical network for your overcloud, create the following configuration files:

  • The network configuration file, network_data.yaml, that follows the structure defined in the network data schema.
  • The network interface controller (NIC) configuration files, which you create from the NIC template files in Jinja2 Ansible format (.j2).

10.1. Example network configuration files

The following are examples of the network data schema for IPv4 and IPv6.

10.1.1. Example network data schema for IPv4

- name: Storage
  name_lower: storage                     #optional, default: name.lower
  admin_state_up: false                   #optional, default: false
  dns_domain: storage.localdomain.        #optional, default: undef
  mtu: 1442                               #optional, default: 1500
  shared: false                           #optional, default: false
  service_net_map_replace: storage        #optional, default: undef
  ipv6: false                             #optional, default: false
  vip: true                               #optional, default: false
  subnets:
    subnet01:
      ip_subnet: 172.18.1.0/24
      gateway_ip: 172.18.1.254            #optional, default: undef
      allocation_pools:                   #optional, default: []
        - start: 172.18.1.10
          end: 172.18.1.250
      enable_dhcp: false                  #optional, default: false
      routes:                             #optional, default: []
        - destination: 172.18.0.0/24
          nexthop: 172.18.1.254
      vlan: 21                            #optional, default: undef
      physical_network: storage_subnet01  #optional, default: {{name.lower}}_{{subnet name}}
      network_type: flat                  #optional, default: flat
      segmentation_id: 21                 #optional, default: undef
    subnet02:
      ip_subnet: 172.18.0.0/24
      gateway_ip: 172.18.0.254            #optional, default: undef
      allocation_pools:                   #optional, default: []
        - start: 172.18.0.10
          end: 172.18.0.250
      enable_dhcp: false                  #optional, default: false
      routes:                             #optional, default: []
        - destination: 172.18.1.0/24
          nexthop: 172.18.0.254
      vlan: 20                            #optional, default: undef
      physical_network: storage_subnet02  #optional, default: {{name.lower}}_{{subnet name}}
      network_type: flat                  #optional, default: flat
      segmentation_id: 20                 #optional, default: undef

10.1.2. Example network data schema for IPv6

- name: Storage
  name_lower: storage
  admin_state_up: false
  dns_domain: storage.localdomain.
  mtu: 1442
  shared: false
  ipv6: true
  vip: true
  subnets:
    subnet01:
      ipv6_subnet: 2001:db8:a::/64
      gateway_ipv6: 2001:db8:a::1
      ipv6_allocation_pools:
        - start: 2001:db8:a::0010
          end: 2001:db8:a::fff9
      enable_dhcp: false
      routes_ipv6:
        - destination: 2001:db8:b::/64
          nexthop: 2001:db8:a::1
      ipv6_address_mode: null
      ipv6_ra_mode: null
      vlan: 21
      physical_network: storage_subnet01  #optional, default: {{name.lower}}_{{subnet name}}
      network_type: flat                  #optional, default: flat
      segmentation_id: 21                 #optional, default: undef
    subnet02:
      ipv6_subnet: 2001:db8:b::/64
      gateway_ipv6: 2001:db8:b::1
      ipv6_allocation_pools:
        - start: 2001:db8:b::0010
          end: 2001:db8:b::fff9
      enable_dhcp: false
      routes_ipv6:
        - destination: 2001:db8:a::/64
          nexthop: 2001:db8:b::1
      ipv6_address_mode: null
      ipv6_ra_mode: null
      vlan: 20
      physical_network: storage_subnet02  #optional, default: {{name.lower}}_{{subnet name}}
      network_type: flat                  #optional, default: flat
      segmentation_id: 20                 #optional, default: undef

10.2. Network isolation

Red Hat OpenStack Platform (RHOSP) provides isolated overcloud networks so that you can host specific types of network traffic in isolation. Traffic is assigned to specific network interfaces or bonds. Using bonds provides fault tolerance and, if the correct bonding protocols are used, can also provide load sharing. If no isolated networks are configured, RHOSP uses the provisioning network for all services.

Network configuration consists of two parts: the parameters applied to the network as a whole, and the templates used to configure the network interfaces on the deployed nodes.

You can create the following isolated networks for your RHOSP deployment:

IPMI
Network used for power management of nodes. This network is predefined before the installation of the undercloud.
Provisioning
Director uses this network for deployment and management. The provisioning network is normally configured on a dedicated interface. The initial deployment uses DHCP with PXE, and then the network is converted to static IP addresses. By default, PXE boot must occur on the native VLAN, although some system controllers allow booting from a VLAN. By default, the Compute and Storage nodes use the provisioning interface as their default gateway for DNS, NTP, and system maintenance.
Internal API
The Internal API network is used for communication between RHOSP services, including API communication, RPC messages, and database communication.
Tenant

The Networking service (neutron) provides each tenant (project) with its own networks using one of the following methods:

  • VLAN segregation, where each tenant network is a network VLAN.
  • Tunneling (through VXLAN or GRE).

Network traffic is isolated within each tenant network. Each tenant network has an IP subnet associated with it, and network namespaces mean that multiple tenant networks can use the same address range without causing conflicts.

Storage
Network used for Block Storage, NFS, iSCSI, and other storage traffic. Ideally, this network is isolated to an entirely separate switch fabric for performance reasons.
Storage Management
OpenStack Object Storage (swift) uses this network to synchronize data objects between participating replica nodes. The proxy service acts as the intermediary interface between user requests and the underlying storage layer. The proxy receives incoming requests and locates the necessary replica to retrieve the requested data. Services that use a Ceph backend connect over the Storage Management network, since they do not interact with Ceph directly but rather use the frontend service. Note that the RBD driver is an exception, as this traffic connects directly to Ceph.
External
Hosts the OpenStack Dashboard (horizon) for graphical system management and the public APIs for OpenStack services, and performs SNAT for incoming traffic destined for instances.
Floating IP
Allows incoming traffic to reach instances using 1-to-1 IP address mapping between the floating IP address and the IP address assigned to the instance in the tenant network. If you host the Floating IPs on a VLAN that is separate from the External network, you can trunk the Floating IP VLAN to the Controller nodes and add the VLAN through the Networking service (neutron) after overcloud creation. This provides a means to create multiple Floating IP networks attached to multiple bridges. The VLANs are trunked but are not configured as interfaces. Instead, the Networking service (neutron) creates an OVS port with the VLAN segmentation ID on the chosen bridge for each Floating IP network.
Note

The provisioning network must be a native VLAN; the other networks can be trunked.

The undercloud can be used as a default gateway. However, all traffic is behind an IP masquerade NAT (Network Address Translation), and is not reachable from the rest of the RHOSP network. The undercloud is also a single point of failure for the overcloud default route. If there is an external gateway configured on a router device on the provisioning network, the undercloud neutron DHCP server can offer that service instead.

10.2.1. Networks required for each role

You can create tenant networks by using VLANs, but you can create VXLAN tunnels for special use without consuming tenant VLANs. It is possible to add VXLAN capability to a deployment with a tenant VLAN, but it is not possible to add a tenant VLAN to a deployed overcloud without serious disruption.

The following table details the isolated networks that are attached to each role:

Role | Network

Controller

provisioning, internal API, storage, storage management, tenant, external

Compute

provisioning, internal API, storage, tenant

Ceph Storage

provisioning, internal API, storage, storage management

Cinder Storage

provisioning, internal API, storage, storage management

Swift Storage

provisioning, internal API, storage, storage management

10.2.2. Network definition file configuration options

Use the following tables to understand the available options for configuring your network definition file, network_data.yaml, in YAML format:

Table 10.1. Network data YAML options

Name | Option | Type | Default value

name

Name of the network.

string

 

name_lower

Optional: Lower case name of the network.

string

name.lower()

dns_domain

Optional: DNS domain name for the network.

string

 

mtu

Maximum Transmission Unit (MTU).

number

1500

ipv6

Optional: Set to true if using IPv6.

Boolean

false

vip

Create a VIP on the network.

Boolean

false

subnets

Contains the subnets for the network.

dictionary

 

Table 10.2. Network data YAML option for subnet definition

Name | Option | Type / Element | Example

ip_subnet

IPv4 CIDR block notation.

string

192.0.5.0/24

ipv6_subnet

IPv6 CIDR block notation.

string

2001:db6:fd00:1000::/64

gateway_ip

Optional: Gateway IPv4 address.

string

192.0.5.1

allocation_pools

Start and end address for the subnet.

list / dictionary

start: 192.0.5.100

end: 192.0.5.150

ipv6_allocation_pools

Start and end address for the subnet.

list / dictionary

start: 2001:db6:fd00:1000:100::1

end: 2001:db6:fd00:1000:150::1

routes

List of IPv4 networks that require routing through the network gateway.

list / dictionary

 

routes_ipv6

List of IPv6 networks that require routing through the network gateway.

list / dictionary

 

vlan

Optional: VLAN ID for the network.

number

 
Note

The routes and routes_ipv6 options contain a list of routes. Each route is a dictionary entry with the destination and nexthop keys, and both values are of type string.

routes:
  - destination: 198.51.100.0/24
    nexthop: 192.0.5.1
  - destination: 203.0.113.0/24
    nexthop: 192.0.5.1
routes_ipv6:
  - destination: 2001:db6:fd00:2000::/64
    nexthop: 2001:db6:fd00:1000:100::1
  - destination: 2001:db6:fd00:3000::/64
    nexthop: 2001:db6:fd00:1000:100::1

Table 10.3. YAML data options for network virtual IPs

Name | Option | Type / Element | Default value

network

Neutron network name.

string

 

ip_address

Optional: Pre-defined fixed IP address.

string

 

subnet

Neutron subnet name. Specifies the subnet for the virtual IP neutron port. Required for deployments using routed networks.

string

 

dns_name

Optional: FQDN (Fully Qualified Domain Name).

list / dictionary

overcloud

name

Optional: Virtual IP name.

string

$network_name_virtual_ip
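
The following example shows a minimal virtual IP definition that uses these options. The network and subnet names are assumptions based on the default network isolation samples, and <fixed_vip_address> is a placeholder for a fixed IP address:

- network: internal_api
  subnet: internal_api_subnet
  dns_name: overcloud
- network: external
  ip_address: <fixed_vip_address>
  dns_name: overcloud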

10.2.3. Configuring network isolation

To enable and configure network isolation, you must add the required elements to the network_data.yaml configuration file.

Procedure

  1. Create your network YAML definition file:

    $ cp /usr/share/openstack-tripleo-heat-templates/network-data-samples/default-network-isolation-ipv6.yaml /home/stack/templates/network_data.yaml
  2. Update the options in your network_data.yaml file to match the requirements for your overcloud networking environment:

    - name: Storage
      name_lower: storage
      vip: true
      ipv6: true
      mtu: 1500
      subnets:
        storage_subnet:
          ipv6_subnet: fd00:fd00:fd00:3000::/64
          ipv6_allocation_pools:
            - start: fd00:fd00:fd00:3000::10
              end: fd00:fd00:fd00:3000:ffff:ffff:ffff:fffe
          vlan: 30
    - name: StorageMgmt
      name_lower: storage_mgmt
      vip: true
      ipv6: true
      mtu: 1500
      subnets:
        storage_mgmt_subnet:
          ipv6_subnet: fd00:fd00:fd00:4000::/64
          ipv6_allocation_pools:
            - start: fd00:fd00:fd00:4000::10
              end: fd00:fd00:fd00:4000:ffff:ffff:ffff:fffe
          vlan: 40
    - name: InternalApi
      name_lower: internal_api
      vip: true
      ipv6: true
      mtu: 1500
      subnets:
        internal_api_subnet:
          ipv6_subnet: fd00:fd00:fd00:2000::/64
          ipv6_allocation_pools:
            - start: fd00:fd00:fd00:2000::10
              end: fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe
          vlan: 20
    - name: Tenant
      name_lower: tenant
      vip: false # Tenant networks do not use VIPs
      ipv6: true
      mtu: 1500
      subnets:
        tenant_subnet:
          ipv6_subnet: fd00:fd00:fd00:5000::/64
          ipv6_allocation_pools:
            - start: fd00:fd00:fd00:5000::10
              end: fd00:fd00:fd00:5000:ffff:ffff:ffff:fffe
          vlan: 50
    - name: External
      name_lower: external
      vip: true
      ipv6: true
      mtu: 1500
      subnets:
        external_subnet:
          ipv6_subnet: 2001:db8:fd00:1000::/64
          ipv6_allocation_pools:
            - start: 2001:db8:fd00:1000::10
              end: 2001:db8:fd00:1000:ffff:ffff:ffff:fffe
          gateway_ipv6: 2001:db8:fd00:1000::1
          vlan: 10

10.3. Composable networks

You can create custom composable networks if you want to host specific network traffic on different networks. Director provides a default network topology with network isolation enabled. You can find this configuration in the /usr/share/openstack-tripleo-heat-templates/network-data-samples/default-network-isolation.yaml file.

The overcloud uses the following pre-defined set of network segments by default:

  • Internal API
  • Storage
  • Storage management
  • Tenant
  • External

You can use composable networks to add networks for various services. For example, if you have a network that is dedicated to NFS traffic, you can present it to multiple roles.

Director supports the creation of custom networks during the deployment and update phases. You can use these additional networks for bare metal nodes, system management, or to create separate networks for different roles. You can also use them to create multiple sets of networks for split deployments where traffic is routed between networks.

10.3.1. Adding a composable network

Use composable networks to add networks for various services. For example, if you have a network that is dedicated to storage backup traffic, you can present the network to multiple roles.

You can find a sample file in the /usr/share/openstack-tripleo-heat-templates/network-data-samples directory.

Procedure

  1. List the available sample configuration files:

    $ ll /usr/share/openstack-tripleo-heat-templates/network-data-samples/
    -rw-r--r--. 1 root root 1554 May 11 23:04 default-network-isolation-ipv6.yaml
    -rw-r--r--. 1 root root 1181 May 11 23:04 default-network-isolation.yaml
    -rw-r--r--. 1 root root 1126 May 11 23:04 ganesha-ipv6.yaml
    -rw-r--r--. 1 root root 1100 May 11 23:04 ganesha.yaml
    -rw-r--r--. 1 root root 3556 May 11 23:04 legacy-routed-networks-ipv6.yaml
    -rw-r--r--. 1 root root 2929 May 11 23:04 legacy-routed-networks.yaml
    -rw-r--r--. 1 root root  383 May 11 23:04 management-ipv6.yaml
    -rw-r--r--. 1 root root  290 May 11 23:04 management.yaml
    -rw-r--r--. 1 root root  136 May 11 23:04 no-networks.yaml
    -rw-r--r--. 1 root root 2725 May 11 23:04 routed-networks-ipv6.yaml
    -rw-r--r--. 1 root root 2033 May 11 23:04 routed-networks.yaml
    -rw-r--r--. 1 root root  943 May 11 23:04 vip-data-default-network-isolation.yaml
    -rw-r--r--. 1 root root  848 May 11 23:04 vip-data-fixed-ip.yaml
    -rw-r--r--. 1 root root 1050 May 11 23:04 vip-data-routed-networks.yaml
  2. Copy the example network configuration file that best suits your needs:

    $ cp /usr/share/openstack-tripleo-heat-templates/network-data-samples/default-network-isolation.yaml /home/stack/templates/network_data.yaml
  3. Edit your network_data.yaml configuration file and add a section for your new network:

    - name: StorageBackup
      vip: false
      name_lower: storage_backup
      subnets:
        storage_backup_subnet:
          ip_subnet: 172.16.6.0/24
          allocation_pools:
            - start: 172.16.6.4
              end: 172.16.6.250
          gateway_ip: 172.16.6.1

    You can use the following parameters in your network_data.yaml file:

    name
    Sets the name of the network.
    vip
    Enables the creation of a virtual IP address on the network.
    name_lower
    Sets the lowercase version of the name, which director maps to respective networks assigned to roles in the roles_data.yaml file.
    subnets
    One or more subnet definitions.
    subnet_name
    Sets the name of the subnet.
    ip_subnet
    Sets the IPv4 subnet in CIDR format.
    allocation_pools
    Sets the IP range for the IPv4 subnet.
    gateway_ip
    Sets the gateway for the network.
    vlan
    Sets the VLAN ID for the network.
    ipv6
    Set to true if the network uses IPv6.
    ipv6_subnet
    Sets the IPv6 subnet.
    gateway_ipv6
    Sets the gateway for the IPv6 network.
    ipv6_allocation_pools
    Sets the IP range for the IPv6 subnet.
    routes_ipv6
    Sets the routes for the IPv6 network.
  4. Copy the sample network VIP definition template you require from /usr/share/openstack-tripleo-heat-templates/network-data-samples to your environment file directory. The following example copies the vip-data-default-network-isolation.yaml to a local environment file named vip_data.yaml:

    $ cp /usr/share/openstack-tripleo-heat-templates/network-data-samples/vip-data-default-network-isolation.yaml  /home/stack/templates/vip_data.yaml
  5. Edit your vip_data.yaml configuration file. The virtual IP data is a list of virtual IP address definitions, each containing the name of the network where the IP address is allocated.

    - network: storage_mgmt
      dns_name: overcloud
    - network: internal_api
      dns_name: overcloud
    - network: storage
      dns_name: overcloud
    - network: external
      dns_name: overcloud
      ip_address: <vip_address>
    - network: ctlplane
      dns_name: overcloud
    • Replace <vip_address> with the required virtual IP address.

    You can use the following parameters in your vip_data.yaml file:

    network
    Sets the neutron network name. This is the only required parameter.
    ip_address
    Sets the IP address of the VIP.
    subnet
    Sets the neutron subnet name. Use to specify the subnet when creating the virtual IP neutron port. This parameter is required when your deployment uses routed networks.
    dns_name
    Sets the FQDN (Fully Qualified Domain Name).
    name
    Sets the virtual IP name.
  6. Copy a sample network configuration template. NIC configuration templates are defined in Jinja2 format. Browse the examples in the /usr/share/ansible/roles/tripleo_network_config/templates/ directory. If one of the examples matches your requirements, use it. If the examples do not match your requirements, copy a sample configuration file and modify it for your needs:

    $ cp /usr/share/ansible/roles/tripleo_network_config/templates/single_nic_vlans/single_nic_vlans.j2 /home/stack/templates/
  7. Edit your single_nic_vlans.j2 configuration file:

    ---
    {% set mtu_list = [ctlplane_mtu] %}
    {% for network in role_networks %}
    {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
    {%- endfor %}
    {% set min_viable_mtu = mtu_list | max %}
    network_config:
    - type: ovs_bridge
      name: {{ neutron_physical_bridge_name }}
      mtu: {{ min_viable_mtu }}
      use_dhcp: false
      dns_servers: {{ ctlplane_dns_nameservers }}
      domain: {{ dns_search_domains }}
      addresses:
      - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
      routes: {{ ctlplane_host_routes }}
      members:
      - type: interface
        name: nic1
        mtu: {{ min_viable_mtu }}
        # force the MAC address of the bridge to this interface
        primary: true
    {% for network in role_networks %}
      - type: vlan
        mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
        vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
        addresses:
        - ip_netmask:
            {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
        routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
    {% endfor %}
  8. Set the network_config template in the overcloud-baremetal-deploy.yaml configuration file:

    - name: CephStorage
      count: 3
      defaults:
        networks:
        - network: storage
        - network: storage_mgmt
        - network: storage_backup
        network_config:
          template: /home/stack/templates/single_nic_vlans.j2
  9. Provision the overcloud networks. This action generates an output file that you use as an environment file when you deploy the overcloud:

    (undercloud)$ openstack overcloud network provision --output <deployment_file> /home/stack/templates/<networks_definition_file>.yaml
    • Replace <networks_definition_file> with the name of your networks definition file, for example, network_data.yaml.
    • Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-networks-deployed.yaml.
  10. Provision the network VIPs and generate the vip-deployed-environment.yaml file. You use this file when you deploy the overcloud:

    (undercloud)$ openstack overcloud network vip provision --stack <stack> --output <deployment_file> /home/stack/templates/vip_data.yaml
    • Replace <stack> with the name of the stack for which the network VIPs are provisioned. If not specified, the default is overcloud.
    • Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-vip-deployed.yaml.

10.3.2. Including a composable network in a role

You can assign composable networks to the overcloud roles defined in your environment. For example, you might include a custom StorageBackup network with your Ceph Storage nodes.

Procedure

  1. If you do not already have a custom roles_data.yaml file, copy the default to your home directory:

    $ cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml /home/stack/templates/roles_data.yaml
  2. Edit the custom roles_data.yaml file.
  3. Include the network name in the networks list for the role that you want to add the network to. For example, to add the StorageBackup network to the Ceph Storage role, use the following example snippet:

    - name: CephStorage
      description: |
        Ceph OSD Storage node role
      networks:
        Storage:
          subnet: storage_subnet
        StorageMgmt:
          subnet: storage_mgmt_subnet
        StorageBackup:
          subnet: storage_backup_subnet
  4. After you add custom networks to their respective roles, save the file.

When you run the openstack overcloud deploy command, include the custom roles_data.yaml file using the -r option. Without the -r option, the deployment command uses the default set of roles with their respective assigned networks.
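
For example, a deployment command that includes a custom roles file might look like the following. The environment files listed here are illustrative; include the environment files that apply to your deployment:

$ openstack overcloud deploy --templates \
  -r /home/stack/templates/roles_data.yaml \
  -e /home/stack/templates/overcloud-networks-deployed.yaml \
  -e /home/stack/templates/overcloud-vip-deployed.yaml \
  -e /home/stack/templates/overcloud-baremetal-deployed.yaml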

10.3.3. Assigning OpenStack services to composable networks

Each OpenStack service is assigned to a default network type in the resource registry. These services are bound to IP addresses within the network type’s assigned network. Although the OpenStack services are divided among these networks, the number of actual physical networks can differ as defined in the network environment file. You can reassign OpenStack services to different network types by defining a new network map in an environment file, for example, /home/stack/templates/service-reassignments.yaml. The ServiceNetMap parameter determines the network types that you want to use for each service.

For example, you can reassign the Storage Management network services to the Storage Backup network by modifying the following parameters:

parameter_defaults:
  ServiceNetMap:
    SwiftStorageNetwork: storage_backup
    CephClusterNetwork: storage_backup

Changing these parameters to storage_backup places these services on the Storage Backup network instead of the Storage Management network. This means that you must define a set of parameter_defaults only for the Storage Backup network and not the Storage Management network.

Director merges your custom ServiceNetMap parameter definitions with a pre-defined list of defaults that it obtains from ServiceNetMapDefaults, and your custom values override those defaults. Director returns the full list, including customizations, to ServiceNetMap, which is used to configure network assignments for various services.

Service mappings apply to networks that use vip: true in the network_data.yaml file for nodes that use Pacemaker. The overcloud load balancer redirects traffic from the VIPs to the specific service endpoints.

Note

You can find a full list of default services in the ServiceNetMapDefaults parameter in the /usr/share/openstack-tripleo-heat-templates/network/service_net_map.j2.yaml file.
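
For example, you can review these defaults directly on the undercloud. The number of context lines in this command is arbitrary:

$ grep -A 50 'ServiceNetMapDefaults' /usr/share/openstack-tripleo-heat-templates/network/service_net_map.j2.yaml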

10.3.4. Enabling custom composable networks

Use one of the default NIC templates to enable custom composable networks. In this example, use the Single NIC with VLANs template (custom_single_nic_vlans).

Procedure

  1. Source the stackrc undercloud credential file:

    $ source ~/stackrc
  2. Provision the overcloud networks:

    $ openstack overcloud network provision \
      --output overcloud-networks-deployed.yaml \
      custom_network_data.yaml
  3. Provision the network VIPs:

    $ openstack overcloud network vip provision \
        --stack overcloud \
        --output overcloud-networks-vips-deployed.yaml \
         custom_vip_data.yaml
  4. Provision the overcloud nodes:

    $ openstack overcloud node provision \
        --stack overcloud \
        --output overcloud-baremetal-deployed.yaml \
        overcloud-baremetal-deploy.yaml
  5. Construct your openstack overcloud deploy command, specifying the configuration files and templates in the required order, for example:

    $ openstack overcloud deploy --templates \
      --networks-file network_data_v2.yaml \
      -e overcloud-networks-deployed.yaml \
      -e overcloud-networks-vips-deployed.yaml \
      -e overcloud-baremetal-deployed.yaml \
      -e custom-net-single-nic-with-vlans.yaml

This example command deploys the composable networks, including your additional custom networks, across nodes in your overcloud.

10.3.5. Renaming the default networks

You can use the network_data.yaml file to modify the user-visible names of the default networks:

  • InternalApi
  • External
  • Storage
  • StorageMgmt
  • Tenant

To change these names, do not modify the name field. Instead, change the name_lower field to the new name for the network and update the ServiceNetMap with the new name.

Procedure

  1. In your network_data.yaml file, enter new names in the name_lower parameter for each network that you want to rename:

    - name: InternalApi
      name_lower: MyCustomInternalApi
  2. Include the default value of the name_lower parameter in the service_net_map_replace parameter:

    - name: InternalApi
      name_lower: MyCustomInternalApi
      service_net_map_replace: internal_api

10.4. Custom network interface templates

After you configure network isolation, as described in Section 10.2, "Network isolation", you can create a set of custom network interface templates to suit the nodes in your environment. For example, you can include the following files:

  • The environment file to configure network defaults (/usr/share/openstack-tripleo-heat-templates/environments/network/multiple-nics/network-environment.yaml).
  • Templates to define your NIC layout for each node. The overcloud core template collection contains a set of defaults for different use cases. To create a custom NIC template, render a default Jinja2 template and use it as the basis for your custom templates.
  • A custom environment file to enable NICs. This example uses a custom environment file (/home/stack/templates/custom-network-configuration.yaml) that references your custom interface templates.
  • Any additional environment files to customize your networking parameters.
  • If you customize your networks, a custom network_data.yaml file.
  • If you create additional or custom composable networks, a custom network_data.yaml file and a custom roles_data.yaml file.
Note

Some of the files in the previous list are Jinja2 format files and have a .j2.yaml extension. Director renders these files to .yaml versions during deployment.

10.4.1. Custom network architecture

The example NIC templates might not suit your network configuration. For example, you might want to create your own custom NIC template that suits a specific network layout, or you might want to separate the control services and data services onto separate NICs. In this situation, you can map the service to NIC assignments in the following way:

  • NIC1 (Provisioning)

    • Provisioning / Control Plane
  • NIC2 (Control Group)

    • Internal API
    • Storage Management
    • External (Public API)
  • NIC3 (Data Group)

    • Tenant Network (VXLAN tunneling)
    • Tenant VLANs / Provider VLANs
    • Storage
    • External VLANs (Floating IP/SNAT)
  • NIC4 (Management)

    • Management

10.4.2. Network interface reference

The network interface configuration contains the following parameters:

Interface

Defines a single network interface. The configuration defines each interface using either the actual interface name ("eth0", "eth1", "enp0s25") or a set of numbered interfaces ("nic1", "nic2", "nic3"):

  - type: interface
    name: nic2

Table 10.4. interface options

Option | Default | Description

name

 

Name of the interface.

use_dhcp

False

Use DHCP to get an IP address.

use_dhcpv6

False

Use DHCP to get a v6 IP address.

addresses

 

A list of IP addresses assigned to the interface.

routes

 

A list of routes assigned to the interface. For more information, see routes.

mtu

1500

The maximum transmission unit (MTU) of the connection.

primary

False

Defines the interface as the primary interface.

persist_mapping

False

Write the device alias configuration instead of the system names.

dhclient_args

None

Arguments that you want to pass to the DHCP client.

dns_servers

None

List of DNS servers that you want to use for the interface.

ethtool_opts

 

Set this option to "rx-flow-hash udp4 sdfn" to improve throughput when you use VXLAN on certain NICs.
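
For example, an interface entry that sets this option might look like the following minimal sketch; the interface name and MTU value are assumptions:

  - type: interface
    name: nic7
    mtu: 9000
    ethtool_opts: "rx-flow-hash udp4 sdfn"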

vlan

Defines a VLAN. Use the VLAN ID and subnet passed from the parameters section.

For example:

  - type: vlan
    device: nic{{ loop.index + 1 }}
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
    addresses:
    - ip_netmask:
      {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
    routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}

Table 10.5. vlan options

Option | Default | Description

vlan_id

 

The VLAN ID.

device

 

The parent device to attach the VLAN. Use this parameter when the VLAN is not a member of an OVS bridge. For example, use this parameter to attach the VLAN to a bonded interface device.

use_dhcp

False

Use DHCP to get an IP address.

use_dhcpv6

False

Use DHCP to get a v6 IP address.

addresses

 

A list of IP addresses assigned to the VLAN.

routes

 

A list of routes assigned to the VLAN. For more information, see routes.

mtu

1500

The maximum transmission unit (MTU) of the connection.

primary

False

Defines the VLAN as the primary interface.

persist_mapping

False

Write the device alias configuration instead of the system names.

dhclient_args

None

Arguments that you want to pass to the DHCP client.

dns_servers

None

List of DNS servers that you want to use for the VLAN.

ovs_bond

Defines a bond in Open vSwitch to join two or more interfaces together. This helps with redundancy and increases bandwidth.

For example:

  members:
    - type: ovs_bond
      name: bond1
      mtu: {{ min_viable_mtu }}
      ovs_options: {{ bond_interface_ovs_options }}
      members:
      - type: interface
        name: nic2
        mtu: {{ min_viable_mtu }}
        primary: true
      - type: interface
        name: nic3
        mtu: {{ min_viable_mtu }}

Table 10.6. ovs_bond options

Option | Default | Description

name

 

Name of the bond.

use_dhcp

False

Use DHCP to get an IP address.

use_dhcpv6

False

Use DHCP to get a v6 IP address.

addresses

 

A list of IP addresses assigned to the bond.

routes

 

A list of routes assigned to the bond. For more information, see routes.

mtu

1500

The maximum transmission unit (MTU) of the connection.

primary

False

Defines the interface as the primary interface.

members

 

A sequence of interface objects that you want to use in the bond.

ovs_options

 

A set of options to pass to OVS when creating the bond.

ovs_extra

 

A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bond.

defroute

True

Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6.

persist_mapping

False

Write the device alias configuration instead of the system names.

dhclient_args

None

Arguments that you want to pass to the DHCP client.

dns_servers

None

List of DNS servers that you want to use for the bond.

ovs_bridge

Defines a bridge in Open vSwitch, which connects multiple interface, ovs_bond, and vlan objects together.

The ovs_bridge network interface type takes a name parameter.

Note

If you have multiple bridges, you must use distinct bridge names; do not accept the default name of bridge_name for more than one bridge. If you do not use distinct names, then during the converge phase, two network bonds are placed on the same bridge.

If you define an OVS bridge for the external TripleO network, retain the bridge_name and interface_name values, because your deployment framework automatically replaces these values with an external bridge name and an external interface name, respectively.

For example:

  - type: ovs_bridge
    name: br-bond
    dns_servers: {{ ctlplane_dns_nameservers }}
    domain: {{ dns_search_domains }}
    members:
    - type: ovs_bond
      name: bond1
      mtu: {{ min_viable_mtu }}
      ovs_options: {{ bond_interface_ovs_options }}
      members:
      - type: interface
        name: nic2
        mtu: {{ min_viable_mtu }}
        primary: true
      - type: interface
        name: nic3
        mtu: {{ min_viable_mtu }}
Note

The OVS bridge connects to the Networking service (neutron) server to obtain configuration data. If the OpenStack control traffic, typically the Control Plane and Internal API networks, is placed on an OVS bridge, then connectivity to the neutron server is lost whenever you upgrade OVS, or the OVS bridge is restarted by the admin user or process. This causes some downtime. If downtime is not acceptable in these circumstances, then you must place the Control group networks on a separate interface or bond rather than on an OVS bridge:

  • You can achieve a minimal configuration by putting the Internal API network on a VLAN on the provisioning interface and placing the OVS bridge on a second interface.
  • To implement bonding, you need at least two bonds (four network interfaces). Place the control group on a Linux bond (Linux bridge). If the switch does not support LACP fallback to a single interface for PXE boot, then this solution requires at least five NICs.

Table 10.7. ovs_bridge options

Option | Default | Description

name

 

Name of the bridge.

use_dhcp

False

Use DHCP to get an IP address.

use_dhcpv6

False

Use DHCP to get a v6 IP address.

addresses

 

A list of IP addresses assigned to the bridge.

routes

 

A list of routes assigned to the bridge. For more information, see routes.

mtu

1500

The maximum transmission unit (MTU) of the connection.

members

 

A sequence of interface, VLAN, and bond objects that you want to use in the bridge.

ovs_options

 

A set of options to pass to OVS when creating the bridge.

ovs_extra

 

A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bridge.

defroute

True

Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6.

persist_mapping

False

Write the device alias configuration instead of the system names.

dhclient_args

None

Arguments that you want to pass to the DHCP client.

dns_servers

None

List of DNS servers that you want to use for the bridge.

linux_bond

Defines a Linux bond that joins two or more interfaces together. This helps with redundancy and increases bandwidth. Ensure that you include the kernel-based bonding options in the bonding_options parameter.

For example:

- type: linux_bond
  name: bond_api
  mtu: {{ min_viable_mtu }}
  use_dhcp: false
  bonding_options: {{ bond_interface_ovs_options }}
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  members:
  - type: interface
    name: nic2
    mtu: {{ min_viable_mtu }}
    primary: true
  - type: interface
    name: nic3
    mtu: {{ min_viable_mtu }}

Note that nic2 uses primary: true to ensure that the bond uses the MAC address for nic2.

Table 10.8. linux_bond options

Option | Default | Description

name

 

Name of the bond.

use_dhcp

False

Use DHCP to get an IP address.

use_dhcpv6

False

Use DHCP to get a v6 IP address.

addresses

 

A list of IP addresses assigned to the bond.

routes

 

A list of routes assigned to the bond. See routes.

mtu

1500

The maximum transmission unit (MTU) of the connection.

primary

False

Defines the interface as the primary interface.

members

 

A sequence of interface objects that you want to use in the bond.

bonding_options

 

A set of options when creating the bond.

defroute

True

Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6.

persist_mapping

False

Write the device alias configuration instead of the system names.

dhclient_args

None

Arguments that you want to pass to the DHCP client.

dns_servers

None

List of DNS servers that you want to use for the bond.

linux_bridge

Defines a Linux bridge, which connects multiple interface, linux_bond, and vlan objects together. The external bridge also uses two special values for parameters:

  • bridge_name, which is replaced with the external bridge name.
  • interface_name, which is replaced with the external interface.

For example:

  - type: linux_bridge
    name: bridge_name
    mtu: {{ min_viable_mtu }}
    use_dhcp: false
    dns_servers: {{ ctlplane_dns_nameservers }}
    domain: {{ dns_search_domains }}
    addresses:
    - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
    routes: {{ ctlplane_host_routes }}

Table 10.9. linux_bridge options

Option | Default | Description

name

 

Name of the bridge.

use_dhcp

False

Use DHCP to get an IP address.

use_dhcpv6

False

Use DHCP to get a v6 IP address.

addresses

 

A list of IP addresses assigned to the bridge.

routes

 

A list of routes assigned to the bridge. For more information, see routes.

mtu

1500

The maximum transmission unit (MTU) of the connection.

members

 

A sequence of interface, VLAN, and bond objects that you want to use in the bridge.

defroute

True

Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6.

persist_mapping

False

Write the device alias configuration instead of the system names.

dhclient_args

None

Arguments that you want to pass to the DHCP client.

dns_servers

None

List of DNS servers that you want to use for the bridge.

routes

Defines a list of routes to apply to a network interface, VLAN, bridge, or bond.

For example:

  - type: linux_bridge
    name: bridge_name
    ...
    routes: {{ [ctlplane_host_routes] | flatten | unique }}
Option | Default | Description

ip_netmask

None

IP and netmask of the destination network.

default

False

Sets this route to a default route. Equivalent to setting ip_netmask: 0.0.0.0/0.

next_hop

None

The IP address of the router used to reach the destination network.
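
For example, a routes list that defines one static route and a default route might look like the following minimal sketch; the addresses are illustrative:

  routes:
  - ip_netmask: 10.1.2.0/24
    next_hop: 172.17.0.1
  - default: true
    next_hop: 192.0.2.1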

10.4.3. Example network interface layout

The following snippet for an example controller node NIC template demonstrates how to configure the custom network scenario to keep the control group separate from the OVS bridge:

network_config:
- type: interface
  name: nic1
  mtu: {{ ctlplane_mtu }}
  use_dhcp: false
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
  routes: {{ ctlplane_host_routes }}
- type: linux_bond
  name: bond_api
  mtu: {{ min_viable_mtu_ctlplane }}
  use_dhcp: false
  bonding_options: {{ bond_interface_ovs_options }}
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  members:
  - type: interface
    name: nic2
    mtu: {{ min_viable_mtu_ctlplane }}
    primary: true
  - type: interface
    name: nic3
    mtu: {{ min_viable_mtu_ctlplane }}
{% for network in role_networks if not network.startswith('Tenant') %}
- type: vlan
  device: bond_api
  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
  vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
  addresses:
  - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}
- type: ovs_bridge
  name: {{ neutron_physical_bridge_name }}
  dns_servers: {{ ctlplane_dns_nameservers }}
  members:
  - type: linux_bond
    name: bond-data
    mtu: {{ min_viable_mtu_dataplane }}
    bonding_options: {{ bond_interface_ovs_options }}
    members:
    - type: interface
      name: nic4
      mtu: {{ min_viable_mtu_dataplane }}
      primary: true
    - type: interface
      name: nic5
      mtu: {{ min_viable_mtu_dataplane }}
{% for network in role_networks if network.startswith('Tenant') %}
  - type: vlan
    device: bond-data
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
    addresses:
    - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
    routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}

This template uses five network interfaces and assigns a number of tagged VLAN devices to the numbered interfaces. The template creates the OVS bridge on a bond that uses nic4 and nic5.

10.5. Additional overcloud network configuration

This chapter follows on from the concepts and procedures outlined in Section 10.4, “Custom network interface templates” and provides some additional information to help configure parts of your overcloud network.

10.5.1. Configuring custom interfaces

Individual interfaces might require modification. The following example shows the modifications that are necessary to use a second NIC to connect to an infrastructure network with DHCP addresses, and to use another NIC for the bond:

network_config:
  # Add a DHCP infrastructure network to nic2
  - type: interface
    name: nic2
    mtu: {{ tenant_mtu }}
    use_dhcp: true
    primary: true
  - type: vlan
    mtu: {{ tenant_mtu }}
    vlan_id: {{ tenant_vlan_id }}
    addresses:
    - ip_netmask: {{ tenant_ip }}/{{ tenant_cidr }}
    routes: {{ [tenant_host_routes] | flatten | unique }}
  - type: ovs_bridge
    name: br-bond
    mtu: {{ external_mtu }}
    dns_servers: {{ ctlplane_dns_nameservers }}
    use_dhcp: false
    members:
      - type: interface
        name: nic10
        mtu: {{ external_mtu }}
        use_dhcp: false
        primary: true
      - type: vlan
        mtu: {{ external_mtu }}
        vlan_id: {{ external_vlan_id }}
        addresses:
        - ip_netmask: {{ external_ip }}/{{ external_cidr }}
        routes: {{ [external_host_routes, [{'default': True, 'next_hop': external_gateway_ip}]] | flatten | unique }}

The network interface template uses either the actual interface name (eth0, eth1, enp0s25) or a set of numbered interfaces (nic1, nic2, nic3). The network interfaces of hosts within a role do not have to be exactly the same when you use numbered interfaces (nic1, nic2, etc.) instead of named interfaces (eth0, eno2, etc.). For example, one host might have interfaces em1 and em2, while another has eno1 and eno2, but you can refer to the NICs of both hosts as nic1 and nic2.

The order of numbered interfaces corresponds to the order of named network interface types:

  • ethX interfaces, such as eth0, eth1, etc. These are usually onboard interfaces.
  • enoX interfaces, such as eno0, eno1, etc. These are usually onboard interfaces.
  • enX interfaces, sorted alphanumerically, such as enp3s0, enp3s1, ens3, etc. These are usually add-on interfaces.

The numbered NIC scheme includes only live interfaces, for example, interfaces that have a cable attached to the switch. If you have some hosts with four interfaces and some with six interfaces, use nic1 to nic4 and attach only four cables on each host.

Customizing NIC mappings for pre-provisioned nodes

If you are using pre-provisioned nodes, you can specify os-net-config mappings for specific nodes by configuring the NetConfigDataLookup heat parameter in an environment file.

Note

The configuration of the NetConfigDataLookup heat parameter is equivalent to the net_config_data_lookup property in your node definition file, overcloud-baremetal-deploy.yaml. If you are not using pre-provisioned nodes, you must configure the NIC mappings in your node definition file. For more information on configuring the net_config_data_lookup property, see Bare-metal node provisioning attributes.

You can assign aliases to the physical interfaces on each node to pre-determine which physical NIC maps to specific aliases, such as nic1 or nic2, and you can map a MAC address to a specified alias. You can map specific nodes by using the MAC address or DMI keyword, or you can map a group of nodes by using a DMI keyword. The following example configures three nodes and two node groups with aliases to the physical interfaces. The resulting configuration is applied by os-net-config. On each node, you can see the applied configuration in the interface_mapping section of the /etc/os-net-config/mapping.yaml file.

Example os-net-config-mappings.yaml

NetConfigDataLookup:
  node1:  # 1
    nic1: "00:c8:7c:e6:f0:2e"
  node2:
    nic1: "00:18:7d:99:0c:b6"
  node3:  # 2
    dmiString: "system-uuid"  # 3
    id: 'A8C85861-1B16-4803-8689-AFC62984F8F6'
    nic1: em3
  # Dell PowerEdge
  nodegroup1:  # 4
    dmiString: "system-product-name"
    id: "PowerEdge R630"
    nic1: em3
    nic2: em1
    nic3: em2
  # Cisco UCS B200-M4
  nodegroup2:
    dmiString: "system-product-name"
    id: "UCSB-B200-M4"
    nic1: enp7s0
    nic2: enp6s0

1
Maps node1 to the specified MAC address, and assigns nic1 as the alias for the MAC address on this node.
2
Maps node3 to the node with the system UUID "A8C85861-1B16-4803-8689-AFC62984F8F6", and assigns nic1 as the alias for the em3 interface on this node.
3
The dmiString parameter must be set to a valid string keyword. For a list of the valid string keywords, see the DMIDECODE(8) man page.
4
Maps all the nodes in nodegroup1 to nodes with the product name "PowerEdge R630", and assigns nic1, nic2, and nic3 as the aliases for the named interfaces on these nodes.
Note

Normally, os-net-config registers only the interfaces that are already connected in an UP state. However, if you hardcode interfaces with a custom mapping file, the interface is registered even if it is in a DOWN state.

10.5.2. Configuring routes and default routes

You can set the default route of a host in one of two ways. If the interface uses DHCP and the DHCP server offers a gateway address, the system uses a default route for that gateway. Otherwise, you can set a default route on an interface with a static IP.

Although the Linux kernel supports multiple default gateways, it uses only the gateway with the lowest metric. If there are multiple DHCP interfaces, this can result in an unpredictable default gateway. In this case, it is recommended to set defroute: false for interfaces other than the interface that uses the default route.

For example, you might want a DHCP interface (nic3) to be the default route. Use the following YAML snippet to disable the default route on another DHCP interface (nic2):

# No default route on this DHCP interface
- type: interface
  name: nic2
  use_dhcp: true
  defroute: false
# Instead use this DHCP interface as the default route
- type: interface
  name: nic3
  use_dhcp: true
Note

The defroute parameter applies only to routes obtained through DHCP.

To set a static route on an interface with a static IP, specify a route to the subnet. For example, you can set a route to the 10.1.2.0/24 subnet through the gateway at 172.17.0.1 on the Internal API network:

    - type: vlan
      device: bond1
      vlan_id: 9
      addresses:
      - ip_netmask: 172.17.0.100/16
      routes:
      - ip_netmask: 10.1.2.0/24
        next_hop: 172.17.0.1

10.5.3. Configuring policy-based routing

To configure unlimited access from different networks on Controller nodes, configure policy-based routing. Policy-based routing uses route tables where, on a host with multiple interfaces, you can send traffic through a particular interface depending on the source address. You can route packets that come from different sources to different networks, even if the destinations are the same.

For example, you can configure a route to send traffic to the Internal API network, based on the source address of the packet, even when the default route is for the External network. You can also define specific route rules for each interface.

Red Hat OpenStack Platform uses the os-net-config tool to configure network properties for your overcloud nodes. The os-net-config tool manages the following network routing on Controller nodes:

  • Routing tables in the /etc/iproute2/rt_tables file
  • IPv4 rules in the /etc/sysconfig/network-scripts/rule-{ifname} file
  • IPv6 rules in the /etc/sysconfig/network-scripts/rule6-{ifname} file
  • Routing table-specific routes in the /etc/sysconfig/network-scripts/route-{ifname} file

Prerequisites

  • You have installed the undercloud successfully. For more information, see Installing director in the Director Installation and Usage guide.

Procedure

  1. Create the interface entries in a custom NIC template from the /home/stack/templates/custom-nics directory, define a route for the interface, and define rules that are relevant to your deployment:

      network_config:
      - type: interface
        name: em1
        use_dhcp: false
        addresses:
          - ip_netmask: {{ external_ip }}/{{ external_cidr }}
        routes:
          - default: true
            next_hop: {{ external_gateway_ip }}
          - ip_netmask: {{ external_ip }}/{{ external_cidr }}
            next_hop: {{ external_gateway_ip }}
            route_table: 2
            route_options: metric 100
        rules:
          - rule: "iif em1 table 200"
            comment: "Route incoming traffic to em1 with table 200"
          - rule: "from 192.0.2.0/24 table 200"
            comment: "Route all traffic from 192.0.2.0/24 with table 200"
          - rule: "add blackhole from 172.19.40.0/24 table 200"
          - rule: "add unreachable iif em1 from 192.168.1.0/24"
  2. Include your custom NIC configuration and network environment files in the deployment command, along with any other environment files relevant to your deployment:

    $ openstack overcloud deploy --templates \
    -e /home/stack/templates/<custom-nic-template> \
    -e <OTHER_ENVIRONMENT_FILES>

Verification

  • Enter the following commands on a Controller node to verify that the routing configuration is functioning correctly:

    $ cat /etc/iproute2/rt_tables
    $ ip route
    $ ip rule

10.5.4. Configuring jumbo frames

The Maximum Transmission Unit (MTU) setting determines the maximum amount of data transmitted with a single Ethernet frame. Using a larger value results in less overhead, because each frame adds data in the form of a header, so fewer frames are needed to carry the same amount of data. The default value is 1500, and using a higher value requires you to configure the switch port to support jumbo frames. Most switches support an MTU of at least 9000, but many are configured for 1500 by default.

The MTU of a VLAN cannot exceed the MTU of the physical interface. Ensure that you include the MTU value on the bond or interface.

The Storage, Storage Management, Internal API, and Tenant networks can all benefit from jumbo frames.

You can alter the value of the mtu in the Jinja2 template or in the network_data.yaml file. If you set the value in the network_data.yaml file, it is rendered into the NIC template during deployment.
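
For example, to set the value in the network_data.yaml file, update the mtu option in the network definition. The following minimal sketch is based on the Storage Management network used elsewhere in this chapter:

- name: StorageMgmt
  name_lower: storage_mgmt
  mtu: 9000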

Warning

Routers typically cannot forward jumbo frames across Layer 3 boundaries. To avoid connectivity issues, do not change the default MTU for the Provisioning interface, External interface, and any Floating IP interfaces.

---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in role_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: ovs_bridge
  name: bridge_name
  mtu: {{ min_viable_mtu }}
  use_dhcp: false
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
  routes: {{ [ctlplane_host_routes] | flatten | unique }}
  members:
  - type: interface
    name: nic1
    mtu: {{ min_viable_mtu }}
    primary: true
  - type: vlan
    mtu: 9000  # 1
    vlan_id: {{ storage_vlan_id }}
    addresses:
    - ip_netmask: {{ storage_ip }}/{{ storage_cidr }}
    routes: {{ [storage_host_routes] | flatten | unique }}
  - type: vlan
    mtu: {{ storage_mgmt_mtu }}  # 2
    vlan_id: {{ storage_mgmt_vlan_id }}
    addresses:
    - ip_netmask: {{ storage_mgmt_ip }}/{{ storage_mgmt_cidr }}
    routes: {{ [storage_mgmt_host_routes] | flatten | unique }}
  - type: vlan
    mtu: {{ internal_api_mtu }}
    vlan_id: {{ internal_api_vlan_id }}
    addresses:
    - ip_netmask: {{ internal_api_ip }}/{{ internal_api_cidr }}
    routes: {{ [internal_api_host_routes] | flatten | unique }}
  - type: vlan
    mtu: {{ tenant_mtu }}
    vlan_id: {{ tenant_vlan_id }}
    addresses:
    - ip_netmask: {{ tenant_ip }}/{{ tenant_cidr }}
    routes: {{ [tenant_host_routes] | flatten | unique }}
  - type: vlan
    mtu: {{ external_mtu }}
    vlan_id: {{ external_vlan_id }}
    addresses:
    - ip_netmask: {{ external_ip }}/{{ external_cidr }}
    routes: {{ [external_host_routes, [{'default': True, 'next_hop': external_gateway_ip}]] | flatten | unique }}
1
The mtu value is updated directly in the Jinja2 template.
2
The mtu value is taken from the network_data.yaml file during deployment.

10.5.5. Configuring ML2/OVN northbound path MTU discovery for jumbo frame fragmentation

If a VM on your internal network sends jumbo frames to an external network, and the maximum transmission unit (MTU) of the internal network exceeds the MTU of the external network, a northbound frame can easily exceed the capacity of the external network.

ML2/OVS automatically handles this oversized packet issue, and ML2/OVN handles it automatically for TCP packets.

However, to ensure proper handling of oversized northbound UDP packets in a deployment that uses the ML2/OVN mechanism driver, you must perform additional configuration steps.

These steps configure ML2/OVN routers to return ICMP "fragmentation needed" packets to the sending VM, where the sending application can break the payload into smaller packets.

Note

In east/west traffic, a RHOSP ML2/OVN deployment does not support fragmentation of packets that are larger than the smallest MTU on the east/west path. For example:

  • VM1 is on Network1 with an MTU of 1300.
  • VM2 is on Network2 with an MTU of 1200.
  • A ping in either direction between VM1 and VM2 with a size of 1171 or less succeeds. A ping with a size greater than 1171 results in 100 percent packet loss.

    With no identified customer requirements for this type of fragmentation, Red Hat has no plans to add support.

Procedure

  1. Set the following value in the [ovn] section of ml2_conf.ini:

    ovn_emit_need_to_frag = True
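
    For example, after you apply this setting, the [ovn] section of ml2_conf.ini contains the following entry:

    [ovn]
    ovn_emit_need_to_frag = True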

10.5.6. Configuring the native VLAN on a trunked interface

If a trunked interface or bond has a network on the native VLAN, the IP addresses are assigned directly to the bridge and there is no VLAN interface.

The following example configures a bonded interface where the External network is on the native VLAN:

network_config:
- type: ovs_bridge
  name: br-ex
  addresses:
  - ip_netmask: {{ external_ip }}/{{ external_cidr }}
  routes: {{ external_host_routes }}
  members:
  - type: ovs_bond
    name: bond1
    ovs_options: {{ bond_interface_ovs_options }}
    members:
    - type: interface
      name: nic3
      primary: true
    - type: interface
      name: nic4
Note

When you move the address or route statements onto the bridge, remove the corresponding VLAN interface from the bridge. Make the changes to all applicable roles. The External network is only on the controllers, so only the controller template requires a change. The Storage network is attached to all roles, so if the Storage network is on the default VLAN, all roles require modifications.

10.5.7. Increasing the maximum number of connections that netfilter tracks

The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) uses netfilter connection tracking to build stateful firewalls and to provide network address translation (NAT) on virtual networks. There are some situations that can cause the kernel space to reach the maximum connection limit and result in errors such as nf_conntrack: table full, dropping packet. You can increase the limit for connection tracking (conntrack) and avoid these types of errors. You can increase the conntrack limit for one or more roles, or across all the nodes, in your RHOSP deployment.

Prerequisites

  • A successful RHOSP undercloud installation.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the undercloud credentials file:

    $ source ~/stackrc
  3. Create a custom YAML environment file.

    Example

    $ vi /home/stack/templates/custom-environment.yaml

  4. Your environment file must contain the keywords parameter_defaults and ExtraSysctlSettings. Enter a new value for the maximum number of connections that netfilter can track in the variable, net.nf_conntrack_max.

    Example

    In this example, you can set the conntrack limit across all hosts in your RHOSP deployment:

    parameter_defaults:
      ExtraSysctlSettings:
        net.nf_conntrack_max:
          value: 500000

    Use the <role>Parameters parameter to set the conntrack limit for a specific role:

    parameter_defaults:
      <role>Parameters:
        ExtraSysctlSettings:
          net.nf_conntrack_max:
            value: <simultaneous_connections>
    • Replace <role> with the name of the role.

      For example, use ControllerParameters to set the conntrack limit for the Controller role, or ComputeParameters to set the conntrack limit for the Compute role.

    • Replace <simultaneous_connections> with the quantity of simultaneous connections that you want to allow.

      Example

      In this example, you can set the conntrack limit for only the Controller role in your RHOSP deployment:

      parameter_defaults:
        ControllerParameters:
          ExtraSysctlSettings:
            net.nf_conntrack_max:
              value: 500000
      Note

      The default value for net.nf_conntrack_max is 500000 connections. The maximum value is 4294967295.

  5. Run the deployment command and include the core heat templates, environment files, and this new custom environment file.

    Important

    The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.

    Example

    $ openstack overcloud deploy --templates \
    -e /home/stack/templates/custom-environment.yaml

10.6. Network interface bonding

You can use various bonding options in your custom network configuration.

10.6.1. Network interface bonding for overcloud nodes

You can bundle multiple physical NICs together to form a single logical channel known as a bond. You can configure bonds to provide redundancy for high availability systems or increased throughput.

Red Hat OpenStack Platform supports Open vSwitch (OVS) kernel bonds, OVS-DPDK bonds, and Linux kernel bonds.

Table 10.10. Supported interface bonding types

Bond type            Type value      Allowed bridge types          Allowed members
OVS kernel bonds     ovs_bond        ovs_bridge                    interface
OVS-DPDK bonds       ovs_dpdk_bond   ovs_user_bridge               ovs_dpdk_port
Linux kernel bonds   linux_bond      ovs_bridge or linux_bridge    interface

Important

Do not combine ovs_bridge and ovs_user_bridge on the same node.

10.6.2. Creating Open vSwitch (OVS) bonds

You create OVS bonds in your network interface templates. For example, you can create a bond as part of an OVS user space bridge:

- type: ovs_user_bridge
  name: br-dpdk0
  members:
  - type: ovs_dpdk_bond
    name: dpdkbond0
    rx_queue: {{ num_dpdk_interface_rx_queues }}
    members:
    - type: ovs_dpdk_port
      name: dpdk0
      members:
      - type: interface
        name: nic4
    - type: ovs_dpdk_port
      name: dpdk1
      members:
      - type: interface
        name: nic5

In this example, you create the bond from two DPDK ports.

The ovs_options parameter contains the bonding options. You can configure bonding options in a network environment file with the BondInterfaceOvsOptions parameter:

parameter_defaults:
  BondInterfaceOvsOptions: "bond_mode=active_backup"
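
The NIC template consumes this value through the ovs_options property of the bond. The following is a minimal sketch, assuming an OVS kernel bond named bond1 in a bridge named br-bond, with illustrative NIC names:

- type: ovs_bridge
  name: br-bond
  members:
  - type: ovs_bond
    name: bond1
    ovs_options: {{ bond_interface_ovs_options }}
    members:
    - type: interface
      name: nic2
      primary: true
    - type: interface
      name: nic3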

10.6.3. Open vSwitch (OVS) bonding options

You can set various Open vSwitch (OVS) bonding options with the ovs_options heat parameter in your NIC template files.

bond_mode=balance-slb
Source load balancing (slb) balances flows based on source MAC address and output VLAN, with periodic rebalancing as traffic patterns change. When you configure a bond with the balance-slb bonding option, there is no configuration required on the remote switch. The Networking service (neutron) assigns each source MAC and VLAN pair to a link and transmits all packets from that MAC and VLAN through that link. The balance-slb mode is similar to mode 2 bonds used by the Linux bonding driver. You can use this mode to provide load balancing even when the switch is not configured to use LACP.
bond_mode=active-backup
When you configure a bond using active-backup bond mode, the Networking service keeps one NIC in standby. The standby NIC resumes network operations when the active connection fails. Only one MAC address is presented to the physical switch. This mode does not require switch configuration, and works when the links are connected to separate switches. This mode does not provide load balancing.
lacp=[active | passive | off]
Controls the Link Aggregation Control Protocol (LACP) behavior. Only certain switches support LACP. If your switch does not support LACP, use bond_mode=balance-slb or bond_mode=active-backup.
other-config:lacp-fallback-ab=true
Set active-backup as the bond mode if LACP fails.
other_config:lacp-time=[fast | slow]
Set the LACP heartbeat to one second (fast) or 30 seconds (slow). The default is slow.
other_config:bond-detect-mode=[miimon | carrier]
Set the link detection to use miimon heartbeats (miimon) or monitor carrier (carrier). The default is carrier.
other_config:bond-miimon-interval=100
If using miimon, set the heartbeat interval (milliseconds).
bond_updelay=1000
Set the interval, in milliseconds, that a link must be up before it is activated, to prevent flapping.
other_config:bond-rebalance-interval=10000
Set the interval, in milliseconds, at which flows are rebalanced between bond members. Set this value to zero to disable flow rebalancing between bond members.
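
For example, the following is a sketch of a BondInterfaceOvsOptions value that combines several of these options; the values are illustrative:

parameter_defaults:
  BondInterfaceOvsOptions: "bond_mode=active-backup other_config:bond-detect-mode=miimon other_config:bond-miimon-interval=100 bond_updelay=1000"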

10.6.5. Creating Linux bonds

You create Linux bonds in your network interface templates. For example, you can create a Linux bond that bonds two interfaces:

- type: linux_bond
  name: bond_api
  mtu: {{ min_viable_mtu_ctlplane }}
  use_dhcp: false
  bonding_options: {{ bond_interface_ovs_options }}
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  members:
  - type: interface
    name: nic2
    mtu: {{ min_viable_mtu_ctlplane }}
    primary: true
  - type: interface
    name: nic3
    mtu: {{ min_viable_mtu_ctlplane }}

The bonding_options parameter sets the specific bonding options for the Linux bond.

mode
Sets the bonding mode, for example 802.3ad or LACP mode. For more information about Linux bonding modes, see "Upstream Switch Configuration Depending on the Bonding Modes" in the Red Hat Enterprise Linux 9 Configuring and Managing Networking guide.
lacp_rate
Defines whether LACP packets are sent every 1 second, or every 30 seconds.
updelay
Defines the minimum amount of time that an interface must be active before it is used for traffic. This minimum delay helps to mitigate outages caused by port flapping.
miimon
The interval in milliseconds that is used for monitoring the port state using the MIIMON functionality of the driver.
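
For example, the following is a sketch of a bonding_options string that combines these options; the values are illustrative and match the LACP example later in this section:

bonding_options: "mode=802.3ad lacp_rate=fast updelay=1000 miimon=100"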

Use the following additional examples as guides to configure your own Linux bonds:

  • Linux bond set to active-backup mode with one VLAN:

    - type: linux_bond
      name: bond_api
      mtu: {{ min_viable_mtu_ctlplane }}
      use_dhcp: false
      bonding_options: "mode=active-backup"
      dns_servers: {{ ctlplane_dns_nameservers }}
      domain: {{ dns_search_domains }}
      members:
      - type: interface
        name: nic2
        mtu: {{ min_viable_mtu_ctlplane }}
        primary: true
      - type: interface
        name: nic3
        mtu: {{ min_viable_mtu_ctlplane }}
    - type: vlan
      device: bond_api
      mtu: {{ internal_api_mtu }}
      vlan_id: {{ internal_api_vlan_id }}
      addresses:
      - ip_netmask: {{ internal_api_ip }}/{{ internal_api_cidr }}
      routes: {{ internal_api_host_routes }}
  • Linux bond on OVS bridge. Bond set to 802.3ad LACP mode with one VLAN:

    - type: linux_bond
      name: bond_tenant
      mtu: {{ min_viable_mtu_ctlplane }}
      bonding_options: "mode=802.3ad updelay=1000 miimon=100"
      use_dhcp: false
      dns_servers: {{ ctlplane_dns_nameservers }}
      domain: {{ dns_search_domains }}
      members:
        - type: interface
          name: p1p1
          mtu: {{ min_viable_mtu_ctlplane }}
        - type: interface
          name: p1p2
          mtu: {{ min_viable_mtu_ctlplane }}
    - type: vlan
      device: bond_tenant
      mtu: {{ tenant_mtu }}
      vlan_id: {{ tenant_vlan_id }}
      addresses:
      - ip_netmask: {{ tenant_ip }}/{{ tenant_cidr }}
      routes: {{ tenant_host_routes }}
    Important

    You must set up min_viable_mtu_ctlplane before you can use it. Copy /usr/share/ansible/roles/tripleo_network_config/templates/2_linux_bonds_vlans.j2 to your templates directory and modify it for your needs. For more information, see Adding a composable network, and refer to the steps that pertain to the network configuration template.
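
    For example, assuming that your custom templates are in /home/stack/templates:

    $ cp /usr/share/ansible/roles/tripleo_network_config/templates/2_linux_bonds_vlans.j2 /home/stack/templates/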

10.7. Updating the format of your network configuration files

The format of the network configuration yaml files has changed in Red Hat OpenStack Platform (RHOSP) 17.0. The structure of the network configuration file network_data.yaml has changed, and the NIC template file format has changed from yaml file format to Jinja2 ansible format, j2.

You can convert your existing network configuration file in your current deployment to the RHOSP 17+ format by using the following conversion tools:

  • convert_v1_net_data.py
  • convert_heat_nic_config_to_ansible_j2.py

You can also manually convert your existing NIC template files.

The files you need to convert include the following:

  • network_data.yaml
  • Controller NIC templates
  • Compute NIC templates
  • Any other custom network files

10.7.1. Updating the format of your network configuration file

The format of the network configuration yaml file has changed in Red Hat OpenStack Platform (RHOSP) 17.0. You can convert your existing network configuration file in your current deployment to the RHOSP 17+ format by using the convert_v1_net_data.py conversion tool.

Procedure

  1. Download the conversion tool:

    • /usr/share/openstack-tripleo-heat-templates/tools/convert_v1_net_data.py
  2. Convert your RHOSP 16+ network configuration file to the RHOSP 17+ format:

    $ python3 convert_v1_net_data.py <network_config>.yaml
    • Replace <network_config> with the name of the existing configuration file that you want to convert, for example, network_data.yaml.
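
    For example, to convert a file named network_data.yaml in the current directory, referencing the tool by its full path:

    $ python3 /usr/share/openstack-tripleo-heat-templates/tools/convert_v1_net_data.py network_data.yaml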

10.7.2. Automatically converting NIC templates to Jinja2 Ansible format

The NIC template file format has changed from yaml file format to Jinja2 Ansible format, j2, in Red Hat OpenStack Platform (RHOSP) 17.0.

You can convert your existing NIC template files in your current deployment to the Jinja2 format by using the convert_heat_nic_config_to_ansible_j2.py conversion tool.

You can also manually convert your existing NIC template files. For more information, see Manually converting NIC templates to Jinja2 Ansible format.

The files you need to convert include the following:

  • Controller NIC templates
  • Compute NIC templates
  • Any other custom network files

Procedure

  1. Download the conversion tool:

    • /usr/share/openstack-tripleo-heat-templates/tools/convert_heat_nic_config_to_ansible_j2.py
  2. Convert your Compute and Controller NIC template files, and any other custom network files, to the Jinja2 Ansible format, as shown in the example at the end of this procedure:

    $ python3 convert_heat_nic_config_to_ansible_j2.py \
     [--stack <overcloud> | --standalone] --networks_file <network_config.yaml> \
     <network_template>.yaml
    • Replace <overcloud> with the name or UUID of the overcloud stack. If --stack is not specified, the stack defaults to overcloud.

      Note

      You can use the --stack option only on your RHOSP 16 deployment because it requires the Orchestration service (heat) to be running on the undercloud node. Starting with RHOSP 17, RHOSP deployments use ephemeral heat, which runs the Orchestration service in a container. If the Orchestration service is not available, or you have no stack, then use the --standalone option instead of --stack.

    • Replace <network_config.yaml> with the name of the configuration file that describes the network deployment, for example, network_data.yaml.
    • Replace <network_template> with the name of the network configuration file you want to convert.

    Repeat this command until you have converted all your custom network configuration files. The convert_heat_nic_config_to_ansible_j2.py script generates a .j2 file for each yaml file you pass to it for conversion.

  3. Inspect each generated .j2 file to ensure the configuration is correct and complete for your environment, and manually address any comments generated by the tool that highlight where the configuration could not be converted. For more information about manually converting the NIC configuration to Jinja2 format, see Heat parameter to Ansible variable mappings.
  4. Configure the *NetworkConfigTemplate parameters in your network-environment.yaml file to point to the generated .j2 files:

    parameter_defaults:
      ControllerNetworkConfigTemplate: '/home/stack/templates/custom-nics/controller.j2'
      ComputeNetworkConfigTemplate: '/home/stack/templates/custom-nics/compute.j2'
  5. Delete the resource_registry mappings from your network-environment.yaml file for the old network configuration files:

    resource_registry:
      OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
      OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
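
The following is an example invocation for a Controller NIC template, as referenced in step 2. It is a sketch that assumes the --standalone option and illustrative file locations under /home/stack/templates:

$ python3 /usr/share/openstack-tripleo-heat-templates/tools/convert_heat_nic_config_to_ansible_j2.py \
 --standalone --networks_file /home/stack/templates/network_data.yaml \
 /home/stack/templates/nic-configs/controller.yaml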

10.7.3. Manually converting NIC templates to Jinja2 Ansible format

The NIC template file format has changed from yaml file format to Jinja2 Ansible format, j2, in Red Hat OpenStack Platform (RHOSP) 17.0.

You can manually convert your existing NIC template files.

You can also convert your existing NIC template files in your current deployment to the Jinja2 format by using the convert_heat_nic_config_to_ansible_j2.py conversion tool. For more information, see Automatically converting NIC templates to Jinja2 ansible format.

The files you need to convert include the following:

  • Controller NIC templates
  • Compute NIC templates
  • Any other custom network files

Procedure

  1. Create a Jinja2 template. You can create a new template by using the os-net-config schema, or copy and edit an example template from the /usr/share/ansible/roles/tripleo_network_config/templates/ directory on the undercloud node.
  2. Replace the heat intrinsic functions with Jinja2 filters. For example, use the following filter to calculate the min_viable_mtu:

    {% set mtu_list = [ctlplane_mtu] %}
    {% for network in role_networks %}
    {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
    {%- endfor %}
    {% set min_viable_mtu = mtu_list | max %}
  3. Use Ansible variables to configure the network properties for your deployment. You can configure each individual network manually, or programmatically configure each network by iterating over role_networks:

    • To manually configure each network, replace each get_param function with the equivalent Ansible variable. For example, if your current deployment configures vlan_id by using get_param: InternalApiNetworkVlanID, then add the following configuration to your template:

      vlan_id: {{ internal_api_vlan_id }}

      Table 10.12. Example network property mapping from heat parameters to Ansible vars

      yaml file format:

      - type: vlan
        device: nic2
        vlan_id:
          get_param: InternalApiNetworkVlanID
        addresses:
        - ip_netmask:
            get_param: InternalApiIpSubnet

      Jinja2 ansible format, j2:

      - type: vlan
        device: nic2
        vlan_id: {{ internal_api_vlan_id }}
        addresses:
        - ip_netmask: {{ internal_api_ip }}/{{ internal_api_cidr }}
    • To programmatically configure each network, add a Jinja2 for-loop structure to your template that retrieves the available networks by their role name by using role_networks.

      Example

      {% for network in role_networks %}
        - type: vlan
          mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
          vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
          addresses:
          - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
          routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
      {%- endfor %}

    For a full list of the mappings from the heat parameter to the Ansible vars equivalent, see Heat parameter to Ansible variable mappings.

  4. Configure the *NetworkConfigTemplate parameters in your network-environment.yaml file to point to the generated .j2 files:

    parameter_defaults:
      ControllerNetworkConfigTemplate: '/home/stack/templates/custom-nics/controller.j2'
      ComputeNetworkConfigTemplate: '/home/stack/templates/custom-nics/compute.j2'
  5. Delete the resource_registry mappings from your network-environment.yaml file for the old network configuration files:

    resource_registry:
      OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
      OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml

10.7.4. Heat parameter to Ansible variable mappings

The NIC template file format has changed from yaml file format to Jinja2 ansible format, j2, in Red Hat OpenStack Platform (RHOSP) 17.x.

To manually convert your existing NIC template files to Jinja2 ansible format, you can map your heat parameters to Ansible variables to configure the network properties for pre-provisioned nodes in your deployment. You can also map your heat parameters to Ansible variables if you run openstack overcloud node provision without specifying the --network-config optional argument.

For example, if your current deployment configures vlan_id by using get_param: InternalApiNetworkVlanID, then replace it with the following configuration in your new Jinja2 template:

vlan_id: {{ internal_api_vlan_id }}
Note

If you provision your nodes by running openstack overcloud node provision with the --network-config optional argument, you must configure the network properties for your deployment by using the parameters in overcloud-baremetal-deploy.yaml. For more information, see Heat parameter to provisioning definition file mappings.

The following table lists the available mappings from the heat parameter to the Ansible vars equivalent.

Table 10.13. Mappings from heat parameters to Ansible vars

Heat parameter              Ansible vars
BondInterfaceOvsOptions     {{ bond_interface_ovs_options }}
ControlPlaneIp              {{ ctlplane_ip }}
ControlPlaneDefaultRoute    {{ ctlplane_gateway_ip }}
ControlPlaneMtu             {{ ctlplane_mtu }}
ControlPlaneStaticRoutes    {{ ctlplane_host_routes }}
ControlPlaneSubnetCidr      {{ ctlplane_subnet_cidr }}
DnsSearchDomains            {{ dns_search_domains }}
DnsServers                  {{ ctlplane_dns_nameservers }}
NumDpdkInterfaceRxQueues    {{ num_dpdk_interface_rx_queues }}

Note

The ctlplane_dns_nameservers variable is populated with the IP addresses configured in undercloud.conf for DEFAULT/undercloud_nameservers and %SUBNET_SECTION%/dns_nameservers. The configuration of %SUBNET_SECTION%/dns_nameservers overrides the configuration of DEFAULT/undercloud_nameservers, so that you can use different DNS servers for the undercloud and the overcloud, and different DNS servers for nodes on different provisioning subnets.

Configuring a heat parameter that is not listed in the table

To configure a heat parameter that is not listed in the table, configure the parameter as a {{role.name}}ExtraGroupVars parameter. You can then use it in your new template. For example, to configure the StorageSupernet parameter, add the following configuration to your network configuration file:

parameter_defaults:
  ControllerExtraGroupVars:
    storage_supernet: 172.16.0.0/16

You can then add {{ storage_supernet }} to your Jinja2 template.
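
For example, you might reference the new variable in a route entry in your Jinja2 template. The next-hop address in this sketch is illustrative:

routes:
- ip_netmask: {{ storage_supernet }}
  next_hop: 172.16.0.1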

Warning

This process will not work if the --network-config option is used with node provisioning. Users requiring custom vars should not use the --network-config option. Instead, after creating the Heat stack, apply the node network configuration to the config-download ansible run.

Converting the Ansible variable syntax to programmatically configure each network

When you use a Jinja2 for-loop structure to retrieve the available networks by their role name by iterating over role_networks, you need to retrieve the lower case name for each network role to prepend to each property. Use the following structure to convert the Ansible vars from the above table to the required syntax:

{{ lookup('vars', networks_lower[network] ~ '_<property>') }}

  • Replace <property> with the property that you are setting, for example, ip, vlan_id, or mtu.

For example, to populate the value for each NetworkVlanID dynamically, replace {{ <network_name>_vlan_id }} with the following configuration:

{{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}

10.7.5. Heat parameter to provisioning definition file mappings

If you provision your nodes by running the openstack overcloud node provision command with the --network-config optional argument, you must configure the network properties for your deployment by using the parameters in the node definition file overcloud-baremetal-deploy.yaml.

If your deployment uses pre-provisioned nodes, you can map your heat parameters to Ansible variables to configure the network properties. You can also map your heat parameters to Ansible variables if you run openstack overcloud node provision without specifying the --network-config optional argument. For more information about configuring network properties by using Ansible variables, see Heat parameter to Ansible variable mappings.

The following table lists the available mappings from the heat parameter to the network_config property equivalent in the node definition file overcloud-baremetal-deploy.yaml.

Table 10.14. Mappings from heat parameters to node definition file overcloud-baremetal-deploy.yaml

Heat parameter                      network_config property
BondInterfaceOvsOptions             bond_interface_ovs_options
DnsSearchDomains                    dns_search_domains
NetConfigDataLookup                 net_config_data_lookup
NeutronPhysicalBridge               physical_bridge_name
NeutronPublicInterface              public_interface_name
NumDpdkInterfaceRxQueues            num_dpdk_interface_rx_queues
{{role.name}}NetworkConfigUpdate    network_config_update
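
For example, the following is a minimal sketch of how some of these properties might appear in the node definition file, under the defaults section of a role. The role name, count, and values are illustrative, and the sketch assumes that the network_config section also sets the template property to the NIC template path used elsewhere in this chapter:

- name: Controller
  count: 3
  defaults:
    network_config:
      template: /home/stack/templates/custom-nics/controller.j2
      physical_bridge_name: br-ex
      public_interface_name: nic1
      bond_interface_ovs_options: "bond_mode=active-backup"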

The following table lists the available mappings from the heat parameter to the property equivalent in the networks definition file network_data.yaml.

Table 10.15. Mappings from heat parameters to networks definition file network_data.yaml

<network_name>IpSubnet

IPv4 network_data.yaml property:

- name: <network_name>
  subnets:
    subnet01:
      ip_subnet: 172.16.1.0/24

IPv6 network_data.yaml property:

- name: <network_name>
  subnets:
    subnet01:
      ipv6_subnet: 2001:db8:a::/64

<network_name>NetworkVlanID

IPv4 and IPv6 network_data.yaml property:

- name: <network_name>
  subnets:
    subnet01:
      ...
      vlan: <vlan_id>

<network_name>Mtu

IPv4 and IPv6 network_data.yaml property:

- name: <network_name>
  mtu: <mtu>

<network_name>InterfaceDefaultRoute

IPv4 network_data.yaml property:

- name: <network_name>
  subnets:
    subnet01:
      ip_subnet: 172.16.16.0/24
      gateway_ip: 172.16.16.1

IPv6 network_data.yaml property:

- name: <network_name>
  subnets:
    subnet01:
      ipv6_subnet: 2001:db8:a::/64
      gateway_ipv6: 2001:db8:a::1

<network_name>InterfaceRoutes

IPv4 network_data.yaml property:

- name: <network_name>
  subnets:
    subnet01:
      ...
      routes:
        - destination: 172.18.0.0/24
          nexthop: 172.18.1.254

IPv6 network_data.yaml property:

- name: <network_name>
  subnets:
    subnet01:
      ...
      routes_ipv6:
        - destination: 2001:db8:b::/64
          nexthop: 2001:db8:a::1

10.7.6. Changes to the network data schema

The network data schema was updated in Red Hat OpenStack Platform (RHOSP) 17. The main differences between the network data schema used in RHOSP 16 and earlier, and the network data schema used in RHOSP 17 and later, are as follows:

  • The base subnet has been moved to the subnets map. This aligns the configuration for non-routed and routed deployments, such as spine-leaf networking.
  • The enabled option is no longer used to ignore disabled networks. Instead, you must remove disabled networks from the configuration file.
  • The compat_name option is no longer required as the heat resource that used it has been removed.
  • The following parameters are no longer valid at the network level: ip_subnet, gateway_ip, allocation_pools, routes, ipv6_subnet, gateway_ipv6, ipv6_allocation_pools, and routes_ipv6. These parameters are still used at the subnet level.
  • A new parameter, physical_network, has been introduced, which is used to create ironic ports in metalsmith.
  • New parameters, network_type and segmentation_id, replace the {{network.name}}NetValueSpecs parameter that was used to set the network type to vlan. An example of the new subnet-level settings follows this list.
  • The following parameters have been deprecated in RHOSP 17:

    • {{network.name}}NetCidr
    • {{network.name}}SubnetName
    • {{network.name}}Network
    • {{network.name}}AllocationPools
    • {{network.name}}Routes
    • {{network.name}}SubnetCidr_{{subnet}}
    • {{network.name}}AllocationPools_{{subnet}}
    • {{network.name}}Routes_{{subnet}}
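
For example, in the new schema you set the vlan network type at the subnet level. The following sketch uses illustrative network and segmentation values:

- name: Tenant
  name_lower: tenant
  subnets:
    tenant_subnet:
      ip_subnet: 172.16.0.0/24
      network_type: vlan
      segmentation_id: 50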