Chapter 11. Additional network configuration
This chapter follows on from the concepts and procedures outlined in Chapter 10, Custom network interface templates, and provides some additional information to help you configure parts of your overcloud network.
11.1. Configuring custom interfaces
Individual interfaces might require modification. The following example shows the modifications that are necessary to use a second NIC to connect to an infrastructure network with DHCP addresses, and to use a third and fourth NIC for the bond:
network_config:
# Add a DHCP infrastructure network to nic2
- type: interface
  name: nic2
  use_dhcp: true
- type: ovs_bridge
  name: br-bond
  members:
  - type: ovs_bond
    name: bond1
    ovs_options:
      get_param: BondInterfaceOvsOptions
    members:
    # Modify bond NICs to use nic3 and nic4
    - type: interface
      name: nic3
      primary: true
    - type: interface
      name: nic4
The network interface template uses either the actual interface name (for example, enp0s25) or a set of numbered interfaces (for example, nic3). The network interfaces of hosts within a role do not have to be exactly the same when you use numbered interfaces (nic1, nic2, and so on) instead of named interfaces (eno1, eno2, and so on). For example, one host might have interfaces em1 and em2, while another has eno1 and eno2, but you can refer to the NICs of both hosts as nic1 and nic2.
The order of numbered interfaces corresponds to the order of named network interface types:
- ethX interfaces, such as eth1, etc. These are usually onboard interfaces.
- enoX interfaces, such as eno1, etc. These are usually onboard interfaces.
- enX interfaces, sorted alphanumerically, such as ens3, etc. These are usually add-on interfaces.
The numbered NIC scheme includes only live interfaces, that is, interfaces that have a cable attached to the switch. If you have some hosts with four interfaces and some with six interfaces, use nic1 to nic4 and attach only four cables on each host.
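The ordering rules above can be sketched as follows. This is an illustrative approximation, not the actual os-net-config implementation, and the interface list format is invented for the example:

```python
import re

def number_nics(interfaces):
    """Assign nic1..nicN aliases: ethX names first, then enoX names,
    then the remaining enX names sorted alphanumerically. Only
    interfaces that are live (UP) are counted. Illustrative sketch."""
    live = [name for name, is_up in interfaces if is_up]
    eth = sorted(n for n in live if re.fullmatch(r"eth\d+", n))
    eno = sorted(n for n in live if re.fullmatch(r"eno\d+", n))
    rest = sorted(n for n in live if n.startswith("en") and n not in eno)
    return {f"nic{i}": n for i, n in enumerate(eth + eno + rest, start=1)}
```

For example, a host where eno1 and ens3 are up but enp4s0 is down maps nic1 to eno1 and nic2 to ens3, so the downed interface never receives an alias.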
You can hardcode physical interfaces to specific aliases so that you can pre-determine which physical NIC maps as nic1, nic2, and so on. You can also map a MAC address to a specified alias.
os-net-config registers only the interfaces that are already connected in an UP state. However, if you hardcode interfaces with a custom mapping file, the interface is registered even if it is in a DOWN state.
Interfaces are mapped to aliases with an environment file. In this example, each node has predefined entries for nic1 and nic2.
If you want to use the NetConfigDataLookup configuration, you must also include the os-net-config-mappings.yaml file in the NodeUserData resource registry.
resource_registry:
  OS::TripleO::NodeUserData: /usr/share/openstack-tripleo-heat-templates/firstboot/os-net-config-mappings.yaml

parameter_defaults:
  NetConfigDataLookup:
    node1:
      nic1: "em1"
      nic2: "em2"
    node2:
      nic1: "00:50:56:2F:9F:2E"
      nic2: "em2"
The resulting configuration is applied by os-net-config. On each node, you can see the applied configuration in the interface_mapping section of the /etc/os-net-config/mapping.yaml file.
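The alias targets in NetConfigDataLookup can be either device names or MAC addresses, as the node2 entry above shows. The sketch below tells the two forms apart; the function name and return format are invented for illustration and are not part of os-net-config:

```python
import re

# Six colon-separated hex octets, for example "00:50:56:2F:9F:2E"
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def classify_mapping(net_config_data_lookup):
    """Report, per node, whether each alias is pinned to a MAC
    address or to an interface name. Illustrative only."""
    return {
        node: {
            alias: ("mac" if MAC_RE.match(target) else "name")
            for alias, target in aliases.items()
        }
        for node, aliases in net_config_data_lookup.items()
    }
```

Applied to the node2 entry from the environment file above, nic1 classifies as a MAC address and nic2 as an interface name.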
11.2. Configuring routes and default routes
You can set the default route of a host in one of two ways. If the interface uses DHCP and the DHCP server offers a gateway address, the system uses a default route for that gateway. Otherwise, you can set a default route on an interface with a static IP.
Although the Linux kernel supports multiple default gateways, it uses only the gateway with the lowest metric. If there are multiple DHCP interfaces, this can result in an unpredictable default gateway. In this case, it is recommended to set defroute: false for interfaces other than the interface that uses the default route.
For example, you might want a DHCP interface (nic3) to be the default route. Use the following YAML snippet to disable the default route on another DHCP interface (nic2):

# No default route on this DHCP interface
- type: interface
  name: nic2
  use_dhcp: true
  defroute: false
# Instead use this DHCP interface as the default route
- type: interface
  name: nic3
  use_dhcp: true
The defroute parameter applies only to routes obtained through DHCP.
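The selection behaviour described above (the kernel keeps only the candidate default route with the lowest metric, and defroute: false removes an interface from contention) can be sketched as follows; the data shape is invented for the example:

```python
def active_default_gateway(candidates):
    """Return the gateway the kernel would use: among interfaces that
    still offer a default route (defroute not set to false), the one
    with the lowest metric wins. Illustrative sketch only."""
    usable = [c for c in candidates if c.get("defroute", True)]
    if not usable:
        return None
    return min(usable, key=lambda c: c["metric"])["gateway"]
```

With two DHCP interfaces, disabling defroute on one makes the outcome deterministic regardless of which DHCP server answers with the lower metric.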
To set a static route on an interface with a static IP, specify a route to the subnet. For example, you can set a route to the 10.1.2.0/24 subnet through the gateway at 172.17.0.1 on the Internal API network:
- type: vlan
  device: bond1
  vlan_id:
    get_param: InternalApiNetworkVlanID
  addresses:
  - ip_netmask:
      get_param: InternalApiIpSubnet
  routes:
  - ip_netmask: 10.1.2.0/24
    next_hop: 172.17.0.1
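One property worth checking in such a route entry is that the next hop is reachable on the interface's own subnet, because the kernel must be able to resolve the gateway directly. The snippet below is an illustrative sanity check (not part of os-net-config) using Python's ipaddress module:

```python
import ipaddress

def next_hop_on_link(interface_cidr, next_hop):
    """True if next_hop falls inside the subnet assigned to the
    interface, so the kernel can resolve the gateway directly."""
    network = ipaddress.ip_network(interface_cidr, strict=False)
    return ipaddress.ip_address(next_hop) in network
```

For the example above, a next hop of 172.17.0.1 is on-link for an Internal API address such as 172.17.0.10/24, while a gateway inside the 10.1.2.0/24 destination subnet would not be.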
11.3. Configuring policy-based routing
On Controller nodes, to configure unlimited access from different networks, configure policy-based routing. Policy-based routing uses route tables where, on a host with multiple interfaces, you can send traffic through a particular interface depending on the source address. You can route packets that come from different sources to different networks, even if the destinations are the same.
For example, you can configure a route to send traffic to the Internal API network, based on the source address of the packet, even when the default route is for the External network. You can also define specific route rules for each interface.
Red Hat OpenStack Platform uses the os-net-config tool to configure network properties for your overcloud nodes. The os-net-config tool manages the following network routing on Controller nodes:
- Routing tables in the /etc/iproute2/rt_tables file
- IPv4 rules in the /etc/sysconfig/network-scripts/rule-{ifname} files
- IPv6 rules in the /etc/sysconfig/network-scripts/rule6-{ifname} files
- Routing table specific routes in the /etc/sysconfig/network-scripts/route-{ifname} files
- You have installed the undercloud successfully. For more information, see Installing director in the Director Installation and Usage guide.
- You have rendered the default .j2 network interface templates from the openstack-tripleo-heat-templates directory. For more information, see Section 10.2, “Rendering default network interface templates for customization”.
Add interface entries in a custom NIC template from the ~/templates/custom-nics directory, define a route for the interface, and define rules that are relevant to your deployment:
network_config:
- type: route_table
  name: custom
  table_id: 200
- type: interface
  name: em1
  use_dhcp: false
  addresses:
  - ip_netmask: 192.0.2.1/24
  routes:
  - ip_netmask: 10.1.3.0/24
    next_hop: 192.0.2.5
    route_options: "metric 10"
    table: 200
  rules:
  - rule: "iif em1 table 200"
    comment: "Route incoming traffic to em1 with table 200"
  - rule: "from 192.0.2.0/24 table 200"
    comment: "Route all traffic from 192.0.2.0/24 with table 200"
  - rule: "add blackhole from 172.19.40.0/24 table 200"
  - rule: "add unreachable iif em1 from 192.168.1.0/24"
Set the run-os-net-config.sh script location to an absolute path in each custom NIC template that you create. The script is located in the /usr/share/openstack-tripleo-heat-templates/network/scripts/ directory on the undercloud:
resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
Include your custom NIC configuration and network environment files in the deployment command, along with any other environment files relevant to your deployment:
$ openstack overcloud deploy --templates \
-e ~/templates/<custom-nic-template> \
-e <OTHER_ENVIRONMENT_FILES>
Enter the following commands on a Controller node to verify that the routing configuration is functioning correctly:
$ cat /etc/iproute2/rt_tables
$ ip route
$ ip rule
11.4. Configuring jumbo frames
The Maximum Transmission Unit (MTU) setting determines the maximum amount of data transmitted with a single Ethernet frame. Using a larger value results in less overhead, because each frame adds a fixed amount of data in the form of headers. The default value is 1500, and using a higher value requires configuring the switch port to support jumbo frames. Most switches support an MTU of at least 9000, but many are configured for 1500 by default.
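To make the overhead argument concrete, this sketch counts the frames needed for a transfer at different MTUs, assuming typical untagged IPv4 and TCP header sizes (20 + 20 bytes per frame). The numbers are illustrative, not RHOSP-specific:

```python
IP_TCP_OVERHEAD = 20 + 20  # IPv4 header + TCP header, per frame

def frames_needed(payload_bytes, mtu):
    """Frames required to carry a payload when each frame loses
    IP_TCP_OVERHEAD bytes of its MTU to headers."""
    per_frame = mtu - IP_TCP_OVERHEAD
    return -(-payload_bytes // per_frame)  # ceiling division

# A 1 MiB transfer needs 719 frames at MTU 1500 but only 118 at
# MTU 9000, so far fewer header bytes cross the wire overall.
standard = frames_needed(1_048_576, 1500)
jumbo = frames_needed(1_048_576, 9000)
```

Roughly a sixfold reduction in frame count is why the storage-heavy networks listed below benefit most from jumbo frames.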
The MTU of a VLAN cannot exceed the MTU of the physical interface. Ensure that you include the MTU value on the bond or interface.
The Storage, Storage Management, Internal API, and Tenant networks all benefit from jumbo frames.
Routers typically cannot forward jumbo frames across Layer 3 boundaries. To avoid connectivity issues, do not change the default MTU for the Provisioning interface, External interface, and any floating IP interfaces.
- type: ovs_bond
  name: bond1
  mtu:
    get_param: [MaxViableMtu, value]
  ovs_options:
    get_param: BondInterfaceOvsOptions
  members:
  - type: interface
    name: nic2
    mtu: 9000
    primary: true
  - type: interface
    name: nic3
    mtu: 9000
# The external interface should stay at default
- type: vlan
  device: bond1
  vlan_id:
    get_param: ExternalNetworkVlanID
  addresses:
  - ip_netmask:
      get_param: ExternalIpSubnet
  routes:
    list_concat_unique:
    - get_param: ExternalInterfaceRoutes
    - - default: true
        next_hop:
          get_param: ExternalInterfaceDefaultRoute
# MTU 9000 for Internal API, Storage, and Storage Management
- type: vlan
  device: bond1
  mtu: 9000
  vlan_id:
    get_param: InternalApiNetworkVlanID
  addresses:
  - ip_netmask:
      get_param: InternalApiIpSubnet
11.5. Configuring ML2/OVN northbound path MTU discovery for jumbo frame fragmentation
If a VM on your internal network sends jumbo frames to an external network, and the maximum transmission unit (MTU) of the internal network exceeds the MTU of the external network, a northbound frame can easily exceed the capacity of the external network.
ML2/OVS automatically handles this oversized packet issue, and ML2/OVN handles it automatically for TCP packets.
But to ensure proper handling of oversized northbound UDP packets in a deployment that uses the ML2/OVN mechanism driver, you need to perform additional configuration steps.
These steps configure ML2/OVN routers to return ICMP "fragmentation needed" packets to the sending VM, where the sending application can break the payload into smaller packets.
In east/west traffic, a RHOSP ML2/OVN deployment does not support fragmentation of packets that are larger than the smallest MTU on the east/west path. For example:
- VM1 is on Network1 with an MTU of 1300.
- VM2 is on Network2 with an MTU of 1200.
A ping in either direction between VM1 and VM2 with a size of 1171 or less succeeds. A ping with a size greater than 1171 results in 100 percent packet loss.
With no identified customer requirements for this type of fragmentation, Red Hat has no plans to add support.
- RHEL 8.2.0.4 or later, with kernel-4.18.0-193.20.1.el8_2 or later.
Check the kernel version.
ovs-appctl -t ovs-vswitchd dpif/show-dp-features br-int
If the output includes Check pkt length action: No, or if there is no Check pkt length action string in the output, upgrade to RHEL 8.2.0.4 or later, or do not send jumbo frames to an external network that has a smaller MTU.
If the output includes Check pkt length action: Yes, set the following value in the [ovn] section of ml2_conf.ini:
ovn_emit_need_to_frag = True
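In context, the option sits in the [ovn] section of ml2_conf.ini (a configuration fragment; any other options already in that section are unchanged):

```ini
[ovn]
ovn_emit_need_to_frag = True
```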
11.6. Configuring the native VLAN on a trunked interface
If a trunked interface or bond has a network on the native VLAN, the IP addresses are assigned directly to the bridge and there is no VLAN interface.
For example, if the External network is on the native VLAN, a bonded configuration looks like this:
network_config:
- type: ovs_bridge
  name: bridge_name
  dns_servers:
    get_param: DnsServers
  addresses:
  - ip_netmask:
      get_param: ExternalIpSubnet
  routes:
    list_concat_unique:
    - get_param: ExternalInterfaceRoutes
    - - default: true
        next_hop:
          get_param: ExternalInterfaceDefaultRoute
  members:
  - type: ovs_bond
    name: bond1
    ovs_options:
      get_param: BondInterfaceOvsOptions
    members:
    - type: interface
      name: nic3
      primary: true
    - type: interface
      name: nic4
When you move the address or route statements onto the bridge, remove the corresponding VLAN interface from the bridge. Make the changes to all applicable roles. The External network is only on the controllers, so only the controller template requires a change. The Storage network is attached to all roles, so if the Storage network is on the default VLAN, all roles require modifications.
11.7. Increasing the maximum number of connections that netfilter tracks
The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) uses netfilter connection tracking to build stateful firewalls and to provide network address translation (NAT) on virtual networks. There are some situations that can cause the kernel space to reach the maximum connection limit and result in errors such as nf_conntrack: table full, dropping packet. You can increase the limit for connection tracking (conntrack) and avoid these types of errors. You can increase the conntrack limit for one or more roles, or across all the nodes, in your RHOSP deployment.
- A successful RHOSP undercloud installation.
Log in to the undercloud host as the stack user.
Source the undercloud credentials file:
$ source ~/stackrc
Create a custom YAML environment file.
$ vi /home/stack/templates/my-environment.yaml
Your environment file must contain the keyword ExtraSysctlSettings. Enter a new value for the maximum number of connections that netfilter can track in the variable net.nf_conntrack_max.
In this example, you can set the conntrack limit across all hosts in your RHOSP deployment:
parameter_defaults:
  ExtraSysctlSettings:
    net.nf_conntrack_max:
      value: 500000
Use the <role>Parameters parameter to set the conntrack limit for a specific role:
parameter_defaults:
  <role>Parameters:
    ExtraSysctlSettings:
      net.nf_conntrack_max:
        value: <simultaneous_connections>
Replace <role> with the name of the role.
For example, use ControllerParameters to set the conntrack limit for the Controller role, or ComputeParameters to set the conntrack limit for the Compute role.
Replace <simultaneous_connections> with the quantity of simultaneous connections that you want to allow.
In this example, you can set the conntrack limit for only the Controller role in your RHOSP deployment:
parameter_defaults:
  ControllerParameters:
    ExtraSysctlSettings:
      net.nf_conntrack_max:
        value: 500000

Note
The default value for net.nf_conntrack_max is 500000 connections. The maximum value is 4194304.
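As an illustration of how these pieces fit together, the following hypothetical helper renders the environment-file fragment and range-checks the requested limit against the documented maximum. The function name and interface are invented for this sketch:

```python
MAX_NF_CONNTRACK = 4_194_304  # documented maximum for net.nf_conntrack_max

def render_conntrack_env(limit, role=None):
    """Render a parameter_defaults YAML fragment that sets
    net.nf_conntrack_max, optionally scoped to one role."""
    if not 0 < limit <= MAX_NF_CONNTRACK:
        raise ValueError(f"limit must be between 1 and {MAX_NF_CONNTRACK}")
    lines = ["parameter_defaults:"]
    indent = "  "
    if role:
        lines.append(f"{indent}{role}Parameters:")
        indent += "  "
    lines.append(f"{indent}ExtraSysctlSettings:")
    lines.append(f"{indent}  net.nf_conntrack_max:")
    lines.append(f"{indent}    value: {limit}")
    return "\n".join(lines)
```

Calling render_conntrack_env(500000, role="Controller") reproduces the Controller-scoped example above, while omitting the role argument reproduces the deployment-wide form.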
Run the deployment command and include the core heat templates, environment files, and this new custom environment file.

Important
The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence.
$ openstack overcloud deploy --templates \
-e /home/stack/templates/my-environment.yaml