Chapter 12. Additional network configuration

This chapter follows on from the concepts and procedures outlined in Chapter 11, Custom network interface templates, and provides additional information to help you configure parts of your overcloud network.

12.1. Configuring custom interfaces

Individual interfaces might require modification. The following example shows the modifications that are necessary to use a second NIC to connect to an infrastructure network with DHCP addresses, and to use a third and fourth NIC for the bond:

network_config:
  # Add a DHCP infrastructure network to nic2
  - type: interface
    name: nic2
    use_dhcp: true
  - type: ovs_bridge
    name: br-bond
    members:
      - type: ovs_bond
        name: bond1
        ovs_options:
          get_param: BondInterfaceOvsOptions
        members:
          # Modify bond NICs to use nic3 and nic4
          - type: interface
            name: nic3
            primary: true
          - type: interface
            name: nic4

The network interface template uses either the actual interface name (eth0, eth1, enp0s25) or a set of numbered interfaces (nic1, nic2, nic3). The network interfaces of hosts within a role do not have to be exactly the same when using numbered interfaces (nic1, nic2, etc.) instead of named interfaces (eth0, eno2, etc.). For example, one host might have interfaces em1 and em2, while another has eno1 and eno2, but you can refer to both hosts' NICs as nic1 and nic2.

The order of numbered interfaces corresponds to the order of named network interface types:

  • ethX interfaces, such as eth0, eth1, etc. These are usually onboard interfaces.
  • enoX interfaces, such as eno0, eno1, etc. These are usually onboard interfaces.
  • enX interfaces, sorted alphanumerically, such as enp3s0, enp3s1, ens3, etc. These are usually add-on interfaces.

The numbered NIC scheme takes into account only the interfaces that are live, for example, those that have a cable attached to the switch. If some hosts have four interfaces and others have six, use nic1 to nic4 and plug in only four cables on each host.
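If you are not sure which interfaces are live on a particular host, you can check the link state with a standard command such as the following, shown here for illustration:

$ ip link show up

Interfaces with a cable connected typically show LOWER_UP in the output.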

You can hardcode physical interfaces to specific aliases. This allows you to predetermine which physical NIC is mapped as nic1, nic2, and so on. You can also map a MAC address to a specified alias.

Note

Normally, os-net-config registers only the interfaces that are already connected and in an UP state. However, if you hardcode interfaces using a custom mapping file, the interface is registered even if it is in a DOWN state.

Interfaces are mapped to aliases with an environment file. In this example, each node has predefined entries for nic1 and nic2.

Note

If you want to use the NetConfigDataLookup configuration, you must also include the os-net-config-mappings.yaml file in the NodeUserData resource registry.

resource_registry:
  OS::TripleO::NodeUserData: /usr/share/openstack-tripleo-heat-templates/firstboot/os-net-config-mappings.yaml
parameter_defaults:
  NetConfigDataLookup:
    node1:
      nic1: "em1"
      nic2: "em2"
    node2:
      nic1: "00:50:56:2F:9F:2E"
      nic2: "em2"

The resulting configuration is then applied by os-net-config. On each node, you can see the applied configuration in the interface_mapping section of the /etc/os-net-config/mapping.yaml file.
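For example, on node1 from the previous mapping, the interface_mapping section of this file might contain entries similar to the following. This is an illustrative sketch only; the exact contents depend on your hardware and on the mapping that you define:

interface_mapping:
  nic1: em1
  nic2: em2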

12.2. Configuring routes and default routes

You can set the default route of a host in one of two ways. If the interface uses DHCP and the DHCP server offers a gateway address, the system uses a default route for that gateway. Otherwise, you can set a default route on an interface with a static IP.

Although the Linux kernel supports multiple default gateways, it only uses the one with the lowest metric. If there are multiple DHCP interfaces, this can result in an unpredictable default gateway. In this case, it is recommended to set defroute: false for interfaces other than the one using the default route.

For example, you might want a DHCP interface (nic3) to be the default route. Use the following YAML to disable the default route on another DHCP interface (nic2):

# No default route on this DHCP interface
- type: interface
  name: nic2
  use_dhcp: true
  defroute: false
# Instead use this DHCP interface as the default route
- type: interface
  name: nic3
  use_dhcp: true
Note

The defroute parameter only applies to routes obtained through DHCP.

To set a static route on an interface with a static IP, specify a route to the subnet. For example, you can set a route to the 10.1.2.0/24 subnet through the gateway at 172.17.0.1 on the Internal API network:

    - type: vlan
      device: bond1
      vlan_id:
        get_param: InternalApiNetworkVlanID
      addresses:
      - ip_netmask:
          get_param: InternalApiIpSubnet
      routes:
      - ip_netmask: 10.1.2.0/24
        next_hop: 172.17.0.1

12.3. Configuring jumbo frames

The Maximum Transmission Unit (MTU) setting determines the maximum amount of data transmitted with a single Ethernet frame. Using a larger value results in less overhead because the same amount of data is carried in fewer frames, and each frame adds overhead in the form of a header. The default value is 1500, and using a higher value requires the configuration of the switch port to support jumbo frames. Most switches support an MTU of at least 9000, but many are configured for 1500 by default.

The MTU of a VLAN cannot exceed the MTU of the physical interface. Make sure to also set the MTU value on the bond or interface.

The Storage, Storage Management, Internal API, and Tenant networks all benefit from jumbo frames. In testing, a project’s networking throughput demonstrated substantial improvement when using jumbo frames in conjunction with VXLAN tunnels.

Note

It is recommended that the Provisioning interface, External interface, and any floating IP interfaces be left at the default MTU of 1500. Otherwise, connectivity problems are likely to occur because routers typically cannot forward jumbo frames across Layer 3 boundaries.

- type: ovs_bond
  name: bond1
  mtu: 9000
  ovs_options: {get_param: BondInterfaceOvsOptions}
  members:
    - type: interface
      name: nic3
      mtu: 9000
      primary: true
    - type: interface
      name: nic4
      mtu: 9000

# The external interface should stay at default
- type: vlan
  device: bond1
  vlan_id:
    get_param: ExternalNetworkVlanID
  addresses:
    - ip_netmask:
        get_param: ExternalIpSubnet
  routes:
    - ip_netmask: 0.0.0.0/0
      next_hop:
        get_param: ExternalInterfaceDefaultRoute

# MTU 9000 for Internal API, Storage, and Storage Management
- type: vlan
  device: bond1
  mtu: 9000
  vlan_id:
    get_param: InternalApiNetworkVlanID
  addresses:
  - ip_netmask:
      get_param: InternalApiIpSubnet

12.4. Configuring the native VLAN on a trunked interface

If a trunked interface or bond has a network on the native VLAN, the IP addresses are assigned directly to the bridge and there is no VLAN interface.

For example, if the External network is on the native VLAN, a bonded configuration looks like this:

network_config:
  - type: ovs_bridge
    name: bridge_name
    dns_servers:
      get_param: DnsServers
    addresses:
      - ip_netmask:
          get_param: ExternalIpSubnet
    routes:
      - ip_netmask: 0.0.0.0/0
        next_hop:
          get_param: ExternalInterfaceDefaultRoute
    members:
      - type: ovs_bond
        name: bond1
        ovs_options:
          get_param: BondInterfaceOvsOptions
        members:
          - type: interface
            name: nic3
            primary: true
          - type: interface
            name: nic4
Note

When moving the address (and possibly route) statements onto the bridge, remove the corresponding VLAN interface from the bridge. Make the changes to all applicable roles. The External network is only on the controllers, so only the controller template requires a change. The Storage network, on the other hand, is attached to all roles, so if the Storage network is on the default VLAN, all roles require modifications.
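For reference, the VLAN interface that you remove in this case looks similar to the following snippet for the External network. This is an illustrative example that reuses the parameter names from the earlier examples in this chapter:

- type: vlan
  vlan_id:
    get_param: ExternalNetworkVlanID
  addresses:
    - ip_netmask:
        get_param: ExternalIpSubnet
  routes:
    - ip_netmask: 0.0.0.0/0
      next_hop:
        get_param: ExternalInterfaceDefaultRoute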

12.5. Increasing the maximum number of connections that netfilter tracks

The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) uses netfilter connection tracking to build stateful firewalls and to provide network address translation (NAT) on virtual networks. There are some situations that can cause the kernel space to reach the maximum connection limit and result in errors such as nf_conntrack: table full, dropping packet. You can increase the limit for connection tracking (conntrack) and avoid these types of errors. You can increase the conntrack limit for one or more roles, or across all the nodes, in your RHOSP deployment.

Prerequisites

  • A successful RHOSP undercloud installation.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the undercloud credentials file:

    $ source ~/stackrc
  3. Create a custom YAML environment file.

    Example

    $ vi /home/stack/templates/my-environment.yaml

  4. Your environment file must contain the keywords parameter_defaults and ExtraSysctlSettings. Enter a new value for the variable net.nf_conntrack_max, which sets the maximum number of connections that netfilter can track.

    Example

    This example sets the conntrack limit across all hosts in your RHOSP deployment:

    parameter_defaults:
      ExtraSysctlSettings:
        net.nf_conntrack_max:
          value: 500000

    Use the <role>Parameters parameter to set the conntrack limit for a specific role:

    parameter_defaults:
      <role>Parameters:
        ExtraSysctlSettings:
          net.nf_conntrack_max:
            value: <simultaneous_connections>
    • Replace <role> with the name of the role.

      For example, use ControllerParameters to set the conntrack limit for the Controller role, or ComputeParameters to set the conntrack limit for the Compute role.

    • Replace <simultaneous_connections> with the quantity of simultaneous connections that you want to allow.

      Example

      This example sets the conntrack limit for only the Controller role in your RHOSP deployment:

      parameter_defaults:
        ControllerParameters:
          ExtraSysctlSettings:
            net.nf_conntrack_max:
              value: 500000
      Note

      The default value for net.nf_conntrack_max is 500000 connections. The maximum value is 4294967295.

  5. Run the deployment command and include the core heat templates, environment files, and this new custom environment file.

    Important

    The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence.

    Example

    $ openstack overcloud deploy --templates \
    -e /home/stack/templates/my-environment.yaml
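
    After the deployment completes, you can optionally confirm the applied value on an overcloud node with a standard sysctl query, for example:

    $ sudo sysctl net.nf_conntrack_max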