Chapter 19. Common administrative networking tasks

Sometimes you might need to perform administration tasks on the Red Hat OpenStack Platform Networking service (neutron) such as configuring the Layer 2 Population driver or specifying the name assigned to ports by the internal DNS.

19.1. Configuring the L2 population driver

The L2 Population driver is used in Networking service (neutron) ML2/OVS environments to enable broadcast, multicast, and unicast traffic to scale out on large overlay networks. By default, Open vSwitch GRE and VXLAN replicate broadcasts to every agent, including agents that do not host the destination network, which incurs significant network and processing overhead. The L2 Population driver instead implements a partial mesh for ARP resolution and MAC learning traffic, and creates tunnels for a particular network only between the nodes that host that network. The driver encapsulates this traffic as a targeted unicast and sends it only to the necessary agent.
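
For example, you can observe the partial mesh directly on a node that runs the OVS agent: with the L2 Population driver enabled, the tunnel bridge carries tunnel ports only for the peers that host ports on a network shared with that node. A minimal check, assuming the ML2/OVS default tunnel bridge name, br-tun:

    $ sudo ovs-vsctl list-ports br-tun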

Prerequisites

  • You must have RHOSP administrator privileges.
  • The Networking service must be using the ML2/OVS mechanism driver.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the undercloud credentials file:

    $ source ~/stackrc
  3. Create a custom YAML environment file.

    Example

    $ vi /home/stack/templates/my-environment.yaml

  4. Your environment file must contain the keyword parameter_defaults. Under this keyword, add the following lines:

    parameter_defaults:
      NeutronMechanismDrivers: ['openvswitch', 'l2population']
      NeutronEnableL2Pop: 'True'
      NeutronEnableARPResponder: true
  5. Run the deployment command and include the core heat templates, environment files, and this new custom environment file.

    Important

    The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.

    Example

    $ openstack overcloud deploy --templates \
    -e <your_environment_files> \
    -e /home/stack/templates/my-environment.yaml

Verification

  1. Obtain the IDs for the OVS agents.

    $ openstack network agent list -c ID -c Binary

    Sample output

    +--------------------------------------+---------------------------+
    | ID                                   | Binary                    |
    +--------------------------------------+---------------------------+
    | 003a8750-a6f9-468b-9321-a6c03c77aec7 | neutron-openvswitch-agent |
    | 02bbbb8c-4b6b-4ce7-8335-d1132df31437 | neutron-l3-agent          |
    | 0950e233-60b2-48de-94f6-483fd0af16ea | neutron-openvswitch-agent |
    | 115c2b73-47f5-4262-bc66-8538d175029f | neutron-openvswitch-agent |
    | 2a9b2a15-e96d-468c-8dc9-18d7c2d3f4bb | neutron-metadata-agent    |
    | 3e29d033-c80b-4253-aaa4-22520599d62e | neutron-dhcp-agent        |
    | 3ede0b64-213d-4a0d-9ab3-04b5dfd16baa | neutron-dhcp-agent        |
    | 462199be-0d0f-4bba-94da-603f1c9e0ec4 | neutron-sriov-nic-agent   |
    | 54f7c535-78cc-464c-bdaa-6044608a08d7 | neutron-l3-agent          |
    | 6657d8cf-566f-47f4-856c-75600bf04828 | neutron-metadata-agent    |
    | 733c66f1-a032-4948-ba18-7d1188a58483 | neutron-l3-agent          |
    | 7e0a0ce3-7ebb-4bb3-9b89-8cccf8cb716e | neutron-openvswitch-agent |
    | dfc36468-3a21-4a2d-84c3-2bc40f224235 | neutron-metadata-agent    |
    | eb7d7c10-69a2-421e-bd9e-aec3edfe1b7c | neutron-openvswitch-agent |
    | ef5219b4-ee49-4635-ad04-048291209373 | neutron-sriov-nic-agent   |
    | f36c7af0-e20c-400b-8a37-4ffc5d4da7bd | neutron-dhcp-agent        |
    +--------------------------------------+---------------------------+

  2. Using an ID from one of the OVS agents, confirm that the L2 Population driver is set on the OVS agent.

    Example

    This example verifies the configuration of the L2 Population driver on the neutron-openvswitch-agent with ID 003a8750-a6f9-468b-9321-a6c03c77aec7:

    $ openstack network agent show 003a8750-a6f9-468b-9321-a6c03c77aec7 -c configuration -f json | grep l2_population

    Sample output

        "l2_population": true,

  3. Ensure that the ARP responder feature is enabled for the OVS agent.

    Example

    $ openstack network agent show 003a8750-a6f9-468b-9321-a6c03c77aec7 -c configuration -f json | grep arp_responder_enabled

    Sample output

        "arp_responder_enabled": true,

19.2. Tuning keepalived to avoid VRRP packet loss

If the number of highly available (HA) routers on a single host is high, Virtual Router Redundancy Protocol (VRRP) messages might overflow the IRQ queues when an HA router failover occurs. This overflow stops Open vSwitch (OVS) from responding to and forwarding those VRRP messages.

To avoid VRRP packet overload, you must increase the VRRP advertisement interval using the ha_vrrp_advert_int parameter in the ExtraConfig section for the Controller role.

Procedure

  1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools.

    Example

    $ source ~/stackrc

  2. Create a custom YAML environment file.

    Example

    $ vi /home/stack/templates/my-neutron-environment.yaml

    Tip

    The Red Hat OpenStack Platform Orchestration service (heat) uses a set of templates, called a plan, to install and configure your environment. You can customize aspects of the overcloud with a custom environment file, which is a special type of template that provides customization for your heat templates.

  3. In the YAML environment file, increase the VRRP advertisement interval by setting the ha_vrrp_advert_int argument to a value specific to your site. (The default is 2 seconds.)

    You can also set values for gratuitous ARP messages:

    ha_vrrp_garp_master_repeat
    The number of gratuitous ARP messages to send at one time after the transition to the master state. (The default is 5 messages.)
    ha_vrrp_garp_master_delay
    The delay for the second set of gratuitous ARP messages after the lower-priority advertisement is received in the master state. (The default is 5 seconds.)

    Example

    parameter_defaults:
      ControllerExtraConfig:
        neutron::agents::l3::ha_vrrp_advert_int: 7
        neutron::config::l3_agent_config:
          DEFAULT/ha_vrrp_garp_master_repeat:
            value: 5
          DEFAULT/ha_vrrp_garp_master_delay:
            value: 5

  4. Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file.

    Important

    The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.

    Example

    $ openstack overcloud deploy --templates \
    -e <your_environment_files> \
    -e /home/stack/templates/my-neutron-environment.yaml
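
Verification

  • Optional: Confirm the new advertisement interval in the keepalived configuration of an HA router on a Controller node. A minimal check, assuming the default Networking service HA configuration path; substitute your own router ID for the <router_id> placeholder:

    $ sudo grep advert_int /var/lib/neutron/ha_confs/<router_id>/keepalived.conf

    Sample output

        advert_int 7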

19.3. Specifying the name that DNS assigns to ports

You can specify the name assigned to ports by the internal DNS when you enable the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) DNS domain for ports extension (dns_domain_ports).

You enable the DNS domain for ports extension by declaring the RHOSP Orchestration (heat) NeutronPluginExtensions parameter in a YAML-formatted environment file. Using a corresponding parameter, NeutronDnsDomain, you specify your domain name, which overrides the default value, openstacklocal. After redeploying your overcloud, you can use the OpenStack Client port commands, port set or port create, with --dns-name to assign a port name.

Important

You must enable the DNS domain for ports extension (dns_domain_ports) for DNS to internally resolve names for ports in your RHOSP environment. Using the NeutronDnsDomain default value, openstacklocal, means that the Networking service does not internally resolve port names for DNS.

Also, when the DNS domain for ports extension is enabled, the Compute service automatically populates the dns_name attribute with the hostname attribute of the instance during the boot of VM instances. At the end of the boot process, dnsmasq recognizes the allocated ports by their instance hostname.

Procedure

  1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools.

    Example

    $ source ~/stackrc

  2. Create a custom YAML environment file (my-neutron-environment.yaml).

    Note

    Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.

    Example

    $ vi /home/stack/templates/my-neutron-environment.yaml

    Tip

    The undercloud includes a set of Orchestration service templates that form the plan for your overcloud creation. You can customize aspects of the overcloud with environment files, which are YAML-formatted files that override parameters and resources in the core Orchestration service template collection. You can include as many environment files as necessary.

  3. In the environment file, add a parameter_defaults section. Under this section, add the DNS domain for ports extension, dns_domain_ports.

    Example

    parameter_defaults:
      NeutronPluginExtensions: "qos,port_security,dns_domain_ports"

    Note

    If you set dns_domain_ports, ensure that the deployment does not also use dns_domain, the DNS Integration extension. These two extensions are incompatible and cannot be defined simultaneously.

  4. Also in the parameter_defaults section, add your domain name (example.com) using the NeutronDnsDomain parameter.

    Example

    parameter_defaults:
        NeutronPluginExtensions: "qos,port_security,dns_domain_ports"
        NeutronDnsDomain: "example.com"

  5. Run the openstack overcloud deploy command and include the core Orchestration templates, environment files, and this new environment file.

    Important

    The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.

    Example

    $ openstack overcloud deploy --templates \
    -e <your_environment_files> \
    -e /home/stack/templates/my-neutron-environment.yaml

Verification

  1. Log in to the overcloud, and create a new port (new_port) on a network (public). Assign a DNS name (my_port) to the port.

    Example

    $ source ~/overcloudrc
    $ openstack port create --network public --dns-name my_port new_port

  2. Display the details for your port (new_port).

    Example

    $ openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port

    Output

    +-------------------------+----------------------------------------------+
    | Field                   | Value                                        |
    +-------------------------+----------------------------------------------+
    | dns_assignment          | fqdn='my_port.example.com',                  |
    |                         | hostname='my_port',                          |
    |                         | ip_address='10.65.176.113'                   |
    | dns_domain              | example.com                                  |
    | dns_name                | my_port                                      |
    | name                    | new_port                                     |
    +-------------------------+----------------------------------------------+

    Under dns_assignment, the fully qualified domain name (fqdn) value for the port contains a concatenation of the DNS name (my_port) and the domain name (example.com) that you set earlier with NeutronDnsDomain.
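
    To check resolution from inside an instance on the same network, you can query the DNS name directly. A minimal check, assuming that the instance uses the subnet's default resolver, which is the Networking service dnsmasq instance:

    $ host my_port.example.com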

  3. Create a new VM instance (my_vm) using the port (new_port) that you just created.

    Example

    $ openstack server create --image rhel --flavor m1.small --port new_port my_vm

  4. Display the details for your port (new_port).

    Example

    $ openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port

    Output

    +-------------------------+----------------------------------------------+
    | Field                   | Value                                        |
    +-------------------------+----------------------------------------------+
    | dns_assignment          | fqdn='my_vm.example.com',                    |
    |                         | hostname='my_vm',                            |
    |                         | ip_address='10.65.176.113'                   |
    | dns_domain              | example.com                                  |
    | dns_name                | my_vm                                        |
    | name                    | new_port                                     |
    +-------------------------+----------------------------------------------+

    Note that the Compute service changes the dns_name attribute from its original value (my_port) to the name of the instance with which the port is associated (my_vm).

19.4. Assigning DHCP attributes to ports

You can use Red Hat OpenStack Platform (RHOSP) Networking service (neutron) extensions to add networking functions. You can use the extra DHCP option extension (extra_dhcp_opt) to configure DHCP client ports with DHCP attributes. For example, you can add a PXE boot option such as tftp-server, server-ip-address, or bootfile-name to a DHCP client port.

The value of the extra_dhcp_opt attribute is an array of DHCP option objects, where each object contains an opt_name and an opt_value. IPv4 is the default version, but you can change this to IPv6 by including a third option, ip-version=6.
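
For example, a minimal sketch of attaching an IPv6 DHCP option to a port; the network name, option value, and port name are placeholders, and the option names that are valid for IPv6 depend on the dnsmasq version in your deployment:

    $ openstack port create --network public \
    --extra-dhcp-option name=dns-server,value=2001:db8::1,ip-version=6 \
    new_v6_port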

When a VM instance starts, the RHOSP Networking service supplies port information to the instance by using the DHCP protocol. If you add DHCP information to a port that is already connected to a running instance, the instance uses the new DHCP port information only after it restarts.

Some of the more common DHCP port attributes are: bootfile-name, dns-server, domain-name, mtu, server-ip-address, and tftp-server. For the complete set of acceptable values for opt_name, refer to the DHCP specification.
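
For example, a minimal sketch of a PXE boot port that combines several of these attributes; the network name, server address, and boot file name are placeholders for your own provisioning environment:

    $ openstack port create --network provisioning \
    --extra-dhcp-option name=tftp-server,value=192.0.2.90 \
    --extra-dhcp-option name=bootfile-name,value=pxelinux.0 \
    pxe_port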

Prerequisites

  • You must have RHOSP administrator privileges.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the undercloud credentials file:

    $ source ~/stackrc
  3. Create a custom YAML environment file.

    Example

    $ vi /home/stack/templates/my-environment.yaml

  4. Your environment file must contain the keyword parameter_defaults. Under this keyword, add the extra DHCP option extension, extra_dhcp_opt.

    Example

    parameter_defaults:
      NeutronPluginExtensions: "qos,port_security,extra_dhcp_opt"

  5. Run the deployment command and include the core heat templates, environment files, and this new custom environment file.

    Important

    The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.

    Example

    $ openstack overcloud deploy --templates \
    -e <your_environment_files> \
    -e /home/stack/templates/my-environment.yaml

Verification

  1. Source your credentials file.

    Example

    $ source ~/overcloudrc

  2. Create a new port (new_port) on a network (public). Assign a valid attribute from the DHCP specification to the new port.

    Example

    $ openstack port create --extra-dhcp-option \
    name=domain-name,value=test.domain --extra-dhcp-option \
    name=ntp-server,value=192.0.2.123 --network public new_port

  3. Display the details for your port (new_port).

    Example

    $ openstack port show new_port -c extra_dhcp_opts

    Sample output

    +-----------------+--------------------------------------------------------------------+
    | Field           | Value                                                              |
    +-----------------+--------------------------------------------------------------------+
    | extra_dhcp_opts | ip_version='4', opt_name='domain-name', opt_value='test.domain'    |
    |                 | ip_version='4', opt_name='ntp-server', opt_value='192.0.2.123'     |
    +-----------------+--------------------------------------------------------------------+

19.5. Loading kernel modules

Some features in Red Hat OpenStack Platform (RHOSP) require certain kernel modules to be loaded. For example, the OVS firewall driver requires you to load the nf_conntrack_proto_gre kernel module to support GRE tunneling between two VM instances.

By using a special Orchestration service (heat) parameter, ExtraKernelModules, you can ensure that heat stores configuration information about the kernel modules that features such as GRE tunneling require. Later, during normal module management, these kernel modules are loaded.

Procedure

  1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.

    Example

    $ vi /home/stack/templates/my-modules-environment.yaml

    Tip

    Heat uses a set of templates, called a plan, to install and configure your environment. You can customize aspects of the overcloud with a custom environment file, which is a special type of template that provides customization for your heat templates.

  2. In the YAML environment file under parameter_defaults, set ExtraKernelModules to the name of the module that you want to load.

    Example

    parameter_defaults:
      ComputeParameters:
        ExtraKernelModules:
          nf_conntrack_proto_gre: {}
      ControllerParameters:
        ExtraKernelModules:
          nf_conntrack_proto_gre: {}

  3. Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file.

    Important

    The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence.

    Example

    $ openstack overcloud deploy --templates \
    -e <your_environment_files> \
    -e /home/stack/templates/my-modules-environment.yaml

Verification

  • If heat has properly loaded the module, you should see output when you run the lsmod command on the Compute node:

    Example

    $ sudo lsmod | grep nf_conntrack_proto_gre
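
    Sample output

    The module name appears in the first column; the size and use-count values vary by kernel version:

        nf_conntrack_proto_gre    16384    0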

19.6. Configuring shared security groups

When you want one or more Red Hat OpenStack Platform (RHOSP) projects to be able to share data, you can use the RHOSP Networking service (neutron) RBAC policy feature to share a security group. You create security groups and Networking service role-based access control (RBAC) policies using the OpenStack Client.

You can apply a security group directly to an instance during instance creation, or to a port on the running instance.

Note

You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port.
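
For example, a minimal sketch of that sequence, in which the network, image, and flavor names are placeholders and ping_ssh is the shared security group that this section creates:

    $ openstack port create --network <network> --security-group ping_ssh sg_port
    $ openstack server create --image <image> --flavor <flavor> --port sg_port my_instance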

Prerequisites

  • You have at least two RHOSP projects that you want to share.
  • In one of the projects, the current project, you have created a security group that you want to share with another project, the target project.

    In this example, the ping_ssh security group is created:

    Example

    $ openstack security group create ping_ssh

Procedure

  1. Log in to the overcloud for the current project that contains the security group.
  2. Obtain the name or ID of the target project.

    $ openstack project list
  3. Obtain the name or ID of the security group that you want to share between RHOSP projects.

    $ openstack security group list
  4. Using the identifiers from the previous steps, create an RBAC policy using the openstack network rbac create command.

    In this example, the ID of the target project is 32016615de5d43bb88de99e7f2e26a1e. The ID of the security group is 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24:

    Example

    $ openstack network rbac create --target-project \
    32016615de5d43bb88de99e7f2e26a1e --action access_as_shared \
    --type security_group 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24

    --target-project
    Specifies the project that requires access to the security group.

    Tip

    You can share data between all projects by using the --target-all-projects argument instead of --target-project <target-project>. By default, only the admin user has this privilege.

    --action access_as_shared
    Specifies what the project is allowed to do.
    --type
    Indicates that the target object is a security group.
    5ba835b7-22b0-4be6-bdbe-e0722d1b5f24
    The ID of the particular security group to which access is being granted.

The target project can access the security group when running the OpenStack Client security group commands, and can bind the security group to its ports. No users other than administrators and the owner can access the security group.
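
To confirm that the policy exists, you can list the RBAC policies for security groups and check for an entry that references the target project:

    $ openstack network rbac list --type security_group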

Tip

To remove access for the target project, delete the RBAC policy that allows it using the openstack network rbac delete command.
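
A minimal sketch; substitute the ID of the policy, which you can obtain from the openstack network rbac list command, for the <rbac_policy_id> placeholder:

    $ openstack network rbac delete <rbac_policy_id>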
