7.5. Configure the Networking Service

7.5.1. Configure Networking Service Authentication

The Networking service must be explicitly configured to use the Identity service for authentication. To accomplish this, perform the following procedure while logged in as root on the network node (the host running the DHCP agent):

Procedure 7.6. Configuring the Networking Service to authenticate through the Identity Service

  1. Set the authentication strategy (auth_strategy) configuration key to keystone using the openstack-config command.
    # openstack-config --set /etc/neutron/neutron.conf \
       DEFAULT auth_strategy keystone
  2. Set the authentication host (auth_host configuration key) to the IP address or host name of the Identity server.
    # openstack-config --set /etc/neutron/neutron.conf \
       keystone_authtoken auth_host IP
    Replace IP with the IP address or host name of the Identity server.
  3. Set the administration tenant name (admin_tenant_name) configuration key to the name of the tenant that was created for the use of the Networking service. Examples in this guide use services.
    # openstack-config --set /etc/neutron/neutron.conf \
       keystone_authtoken admin_tenant_name services
  4. Set the administration user name (admin_user) configuration key to the name of the user that was created for the use of the Networking service. Examples in this guide use neutron.
    # openstack-config --set /etc/neutron/neutron.conf \
       keystone_authtoken admin_user neutron
  5. Set the administration password (admin_password) configuration key to the password that is associated with the user specified in the previous step.
    # openstack-config --set /etc/neutron/neutron.conf \
       keystone_authtoken admin_password PASSWORD
The authentication keys used by the Networking service have been set and will be used when the services are started.
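These commands result in entries equivalent to the following excerpt of /etc/neutron/neutron.conf (an illustrative sketch only; IP and PASSWORD stand for the placeholders used in the procedure above):
[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
auth_host = IP
admin_tenant_name = services
admin_user = neutron
admin_password = PASSWORD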

7.5.2. Configure RabbitMQ Message Broker Settings for the Networking Service

As of Red Hat Enterprise Linux OpenStack Platform 5, RabbitMQ replaces Qpid as the default (and recommended) message broker. The RabbitMQ messaging service is provided by the rabbitmq-server package.
This section assumes that you have already configured a RabbitMQ message broker. For more information, see Section 2.4.2, “Install and Configure the RabbitMQ Message Broker”.

Procedure 7.7. Configuring the Networking service to use the RabbitMQ message broker

  1. Log in as root to the system hosting the neutron-server service.
  2. In /etc/neutron/neutron.conf of that system, set RabbitMQ as the RPC back end.
    # openstack-config --set /etc/neutron/neutron.conf \
     DEFAULT rpc_backend neutron.openstack.common.rpc.impl_kombu
  3. Set the neutron-server service to connect to the RabbitMQ host:
    # openstack-config --set /etc/neutron/neutron.conf \
     DEFAULT rabbit_host RABBITMQ_HOST
    Replace RABBITMQ_HOST with the IP address or host name of the message broker.
  4. Set the message broker port to 5672:
    # openstack-config --set /etc/neutron/neutron.conf \
     DEFAULT rabbit_port 5672
  5. Set the RabbitMQ username and password created for the Networking service:
    # openstack-config --set /etc/neutron/neutron.conf \
     DEFAULT rabbit_userid neutron
    # openstack-config --set /etc/neutron/neutron.conf \
     DEFAULT rabbit_password NEUTRON_PASS
    Where neutron and NEUTRON_PASS are the RabbitMQ username and password created for Networking (in Section 2.4.2, “Install and Configure the RabbitMQ Message Broker”).
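These settings correspond to the following excerpt of /etc/neutron/neutron.conf (an illustrative sketch only; RABBITMQ_HOST and NEUTRON_PASS stand for the placeholders used above):
[DEFAULT]
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = RABBITMQ_HOST
rabbit_port = 5672
rabbit_userid = neutron
rabbit_password = NEUTRON_PASS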

7.5.3. Set the Networking Service Plug-in

Additional configuration settings must be applied to enable the desired plug-in. Below are the procedures for enabling the ML2, Open vSwitch (OVS), and Linux Bridge plug-ins.

Note

The monolithic Open vSwitch and linuxbridge plug-ins have been deprecated and will be removed in a future release; their functionality has instead been re-implemented as ML2 mechanisms.
OpenStack Networking plug-ins can be referenced in neutron.conf by their nominated short names, instead of their lengthy class names. For example:
core_plugin = neutron.plugins.ml2.plugin:Ml2Plugin
will become:
core_plugin = ml2

Note

Take care not to introduce errant whitespace characters, as these could result in parse errors.

Table 7.2.  core_plugin

Short name Class name
bigswitch neutron.plugins.bigswitch.plugin:NeutronRestProxyV2
brocade neutron.plugins.brocade.NeutronPlugin:BrocadePluginV2
cisco neutron.plugins.cisco.network_plugin:PluginV2
embrane neutron.plugins.embrane.plugins.embrane_ovs_plugin:EmbraneOvsPlugin
hyperv neutron.plugins.hyperv.hyperv_neutron_plugin:HyperVNeutronPlugin
linuxbridge neutron.plugins.linuxbridge.lb_neutron_plugin:LinuxBridgePluginV2
midonet neutron.plugins.midonet.plugin:MidonetPluginV2
ml2 neutron.plugins.ml2.plugin:Ml2Plugin
mlnx neutron.plugins.mlnx.mlnx_plugin:MellanoxEswitchPlugin
nec neutron.plugins.nec.nec_plugin:NECPluginV2
nicira neutron.plugins.nicira.NeutronPlugin:NvpPluginV2
openvswitch neutron.plugins.openvswitch.ovs_neutron_plugin:OVSNeutronPluginV2
plumgrid neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin:NeutronPluginPLUMgridV2
ryu neutron.plugins.ryu.ryu_neutron_plugin:RyuNeutronPluginV2
The service_plugins option accepts a comma-delimited list of multiple service plugins.

Table 7.3.  service_plugins

Short name Class name
dummy neutron.tests.unit.dummy_plugin:DummyServicePlugin
router neutron.services.l3_router.l3_router_plugin:L3RouterPlugin
firewall neutron.services.firewall.fwaas_plugin:FirewallPlugin
lbaas neutron.services.loadbalancer.plugin:LoadBalancerPlugin
metering neutron.services.metering.metering_plugin:MeteringPlugin
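For example, the short-name form can be applied with openstack-config rather than by editing neutron.conf directly. The following sketch assumes that the ML2 core plug-in together with the L3 router and LBaaS service plug-ins is wanted; adjust the list to suit your deployment:
# openstack-config --set /etc/neutron/neutron.conf \
   DEFAULT core_plugin ml2
# openstack-config --set /etc/neutron/neutron.conf \
   DEFAULT service_plugins router,lbaas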

Procedure 7.8. Enabling the ML2 plug-in

Follow these steps on the node running the neutron-server service.
  1. Install the openstack-neutron-ml2 package:
    # yum install openstack-neutron-ml2
  2. Create a symbolic link to direct Networking to the ML2 config file ml2_conf.ini:
    # ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  3. Add the appropriate configuration options to the ml2_conf.ini file. Available options are listed below. Refer to Section 7.2.7, “Modular Layer 2 (ML2) Overview” for further information on these settings.
    [ml2]
    type_drivers = local,flat,vlan,gre,vxlan
    mechanism_drivers = openvswitch,linuxbridge,l2population
    [agent]
    l2_population = True
    
  4. Enable the ML2 plug-in and L3 router in the neutron.conf file:
    core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
    service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
  5. Refer to Section 7.5.7, “Create the OpenStack Networking Database” to configure the ML2 database.
  6. Restart the Networking service:
    # service neutron-server restart
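If you prefer to avoid editing ml2_conf.ini by hand, the options shown in step 3 of the procedure above can also be applied with openstack-config (a sketch using the same example values):
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini \
   ml2 type_drivers local,flat,vlan,gre,vxlan
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini \
   ml2 mechanism_drivers openvswitch,linuxbridge,l2population
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini \
   agent l2_population True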

Procedure 7.9. Enabling the Open vSwitch plug-in

Follow these steps on the node running the neutron-server service.

Note

The monolithic Open vSwitch plug-in has been deprecated and will be removed in a future release; its functionality has instead been re-implemented as a ML2 mechanism.
  1. Create a symbolic link between the /etc/neutron/plugin.ini path referred to by the Networking service and the plug-in specific configuration file.
    # ln -s /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
       /etc/neutron/plugin.ini
  2. Update the value of the tenant_network_type configuration key in the /etc/neutron/plugin.ini file to refer to the type of network that must be used for tenant networks. Supported values are flat, gre, local, vlan, and vxlan.
    The default is local but this is not recommended for real deployments.
    Open vSwitch Tunneling allows virtual machines across multiple hosts to share a single layer 2 network. GRE and VXLAN tunnels are supported for encapsulating traffic between Open vSwitch endpoints on each host. Ensure that MTUs are an appropriate size from end-to-end, including those on the virtual machines.
    # openstack-config --set /etc/neutron/plugin.ini \
       OVS tenant_network_type TYPE
    Replace TYPE with the chosen tenant network type.
  3. If flat or vlan networking was chosen, the value of the network_vlan_ranges configuration key must also be set. This configuration key maps physical networks to VLAN ranges.
    Mappings are of the form NAME:START:END where NAME is replaced by the name of the physical network, START is replaced by the VLAN identifier that starts the range, and END is replaced by the VLAN identifier that ends the range.
    # openstack-config --set /etc/neutron/plugin.ini \
       OVS network_vlan_ranges NAME:START:END
    Multiple ranges can be specified using a comma separated list, for example:
    physnet1:1000:2999,physnet2:3000:3999
  4. Update the value of the core_plugin configuration key in the /etc/neutron/neutron.conf file to refer to the Open vSwitch plug-in.
    # openstack-config --set /etc/neutron/neutron.conf \
       DEFAULT core_plugin \
       neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
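As a worked example, the following sketch configures the Open vSwitch plug-in for VLAN tenant networks on a physical network named physnet1 using VLANs 1000 to 2999; the network name and VLAN range are illustrative values only:
# openstack-config --set /etc/neutron/plugin.ini \
   OVS tenant_network_type vlan
# openstack-config --set /etc/neutron/plugin.ini \
   OVS network_vlan_ranges physnet1:1000:2999
# openstack-config --set /etc/neutron/neutron.conf \
   DEFAULT core_plugin \
   neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2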
If you are using a Linux Bridge plug-in, perform the following procedure instead:

Procedure 7.10. Enabling the Linux Bridge plug-in

Follow these steps on the node running the neutron-server service.

Note

The monolithic linuxbridge plug-in has been deprecated and will be removed in a future release; its functionality has instead been re-implemented as a ML2 mechanism.
  1. Create a symbolic link between the /etc/neutron/plugin.ini path referred to by the Networking service and the plug-in specific configuration file.
    # ln -s /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini \
             /etc/neutron/plugin.ini
  2. Update the value of the tenant_network_type configuration key in the /etc/neutron/plugin.ini file to refer to the type of network that must be used for tenant networks. Supported values are flat, vlan, and local.
    The default is local but this is not recommended for real deployments.
    # openstack-config --set /etc/neutron/plugin.ini \
       VLANS tenant_network_type TYPE
    Replace TYPE with the chosen tenant network type.
  3. If flat or vlan networking was chosen, the value of the network_vlan_ranges configuration key must also be set. This configuration key maps physical networks to VLAN ranges.
    Mappings are of the form NAME:START:END where NAME is replaced by the name of the physical network, START is replaced by the VLAN identifier that starts the range, and END is replaced by the VLAN identifier that ends the range.
    # openstack-config --set /etc/neutron/plugin.ini \
       VLANS network_vlan_ranges NAME:START:END
    Multiple ranges can be specified using a comma separated list, for example:
    physnet1:1000:2999,physnet2:3000:3999
  4. Update the value of the core_plugin configuration key in the /etc/neutron/neutron.conf file to refer to the Linux Bridge plug-in.
    # openstack-config --set /etc/neutron/neutron.conf \
       DEFAULT core_plugin \
       neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2
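A comparable worked example for the Linux Bridge plug-in with VLAN tenant networks might look like the following sketch; again, physnet1 and the VLAN range are illustrative values only:
# openstack-config --set /etc/neutron/plugin.ini \
   VLANS tenant_network_type vlan
# openstack-config --set /etc/neutron/plugin.ini \
   VLANS network_vlan_ranges physnet1:1000:2999
# openstack-config --set /etc/neutron/neutron.conf \
   DEFAULT core_plugin \
   neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2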

7.5.4. VXLAN and GRE tunnels

Overlay links enable instances in different networks to communicate with each other by tunneling traffic through the underlying network. GRE and VXLAN are two supported tunnel encapsulation technologies:
GRE is an established encapsulation technology with general acceptance in the industry. The GRE headers (RFC 2784 and RFC 2890) encapsulate Layer 2 frames for transport across the underlying network.
VXLAN was submitted to the IETF in 2011 and is expected to become the default standard for multitenant cloud-scale networks. VXLAN uses UDP encapsulation with randomized UDP source port values, so any switch or router that uses a 5-tuple hash calculation automatically spreads the traffic across equal-cost paths. This allows VXLAN-encapsulated traffic to be load balanced by compatible physical network hardware.

Table 7.4.  VXLAN and GRE comparison

Segmentation
  VXLAN: 24-bit VNI (VXLAN Network Identifier).
  GRE: Uses different Tunnel IDs.
Theoretical scale limit
  VXLAN: 16 million unique IDs.
  GRE: 16 million unique IDs.
Transport
  VXLAN: UDP (default port 4789).
  GRE: IP Protocol 47.
Filtering
  VXLAN: Uses UDP with a well-known destination port; firewalls and switch/router ACLs can be tailored to block only VXLAN traffic.
  GRE: Firewalls and layer 3 switches and routers with ACLs will typically not parse deeply enough into the GRE header to distinguish tunnel traffic types; all GRE would need to be blocked indiscriminately.
Protocol overhead
  VXLAN: 50 bytes over IPv4 (8 bytes VXLAN header, 8 bytes UDP header, 20 bytes IPv4 header, 14 bytes Ethernet).
  GRE: 42 bytes over IPv4 (8 bytes GRE header, 20 bytes IPv4 header, 14 bytes Ethernet).
Handling unknown destination packets, broadcasts, and multicast
  VXLAN: Uses IP multicast to manage flooding from these traffic types. Ideally, one logical Layer 2 network (VNI) is associated with one multicast group address on the physical network. This requires end-to-end IP multicast support on the physical data center network.
  GRE: Has no built-in protocol mechanism to address these traffic types; they are simply replicated to all nodes.
IETF specification
  VXLAN: http://tools.ietf.org/html/draft-mahalingam-dutt-dcops-vxlan-01
  GRE: http://tools.ietf.org/html/rfc2784.html

Note

To avoid packet fragmentation, it is recommended to increase the MTU at the vSwitch and on all physical network devices that the VXLAN or GRE traffic will traverse.
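For example, on a Red Hat Enterprise Linux host the MTU of the physical interface carrying tunnel traffic could be raised as follows; the interface name eth1 and the value 1600 are assumptions, and the setting should also be made persistent (for example, with an MTU= entry in the interface's ifcfg file):
# ip link set dev eth1 mtu 1600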

7.5.5. Configure Open vSwitch tunneling

Tunneling encapsulates network traffic between physical Networking hosts and allows VLANs to span multiple physical hosts. Instances communicate as if they share the same layer 2 network. Open vSwitch supports tunneling with the VXLAN and GRE encapsulation protocols.

Figure 7.4. Example VXLAN tunnel

This diagram shows two instances running on separate hosts connected by a VXLAN tunnel. The required physical and virtual components are also illustrated. The following procedure creates a VXLAN or GRE tunnel between two Open vSwitches running on separate Networking hosts:

Procedure 7.11. Example tunnel configuration

  1. Create a virtual bridge named OVS-BR0 on each participating host:
    ovs-vsctl add-br OVS-BR0
    
  2. Create a tunnel to link the OVS-BR0 virtual bridges. Run the ovs-vsctl command on HOST1 to create the tunnel and link it to the bridge on HOST2:
    GRE tunnel command:
    ovs-vsctl add-port OVS-BR0 gre1 -- set Interface gre1 type=gre options:remote_ip=192.168.1.11
    
    VXLAN tunnel command:
    ovs-vsctl add-port OVS-BR0 vxlan1 -- set Interface vxlan1 type=vxlan options:remote_ip=192.168.1.11
    
  3. Run the ovs-vsctl command on HOST2 to create the tunnel and link it to the bridge on HOST1.
    GRE tunnel command:
    ovs-vsctl add-port OVS-BR0 gre1 -- set Interface gre1 type=gre options:remote_ip=192.168.1.10
    
    VXLAN tunnel command:
    ovs-vsctl add-port OVS-BR0 vxlan1 -- set Interface vxlan1 type=vxlan options:remote_ip=192.168.1.10
    
Successful completion of these steps results in the two instances sharing a layer 2 network.
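To verify the configuration, run the ovs-vsctl show command on either host; the OVS-BR0 bridge should list the gre1 or vxlan1 port together with the remote_ip option set above:
# ovs-vsctl show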

7.5.6. Configure the Networking Service Database Connection

The database connection string used by the networking service is defined in the /etc/neutron/plugin.ini file. It must be updated to point to a valid database server before starting the service.

Procedure 7.12. Configuring the OpenStack Networking SQL database connection

  • Use the openstack-config command to set the value of the connection configuration key.
    # openstack-config --set /etc/neutron/plugin.ini \
       DATABASE sql_connection mysql://USER:PASS@IP/DB
    Replace:
    • USER with the database user name the networking service is to use, usually neutron.
    • PASS with the password of the chosen database user.
    • IP with the IP address or host name of the database server.
    • DB with the name of the database that will be created for use by the networking service (Section 7.5.7, “Create the OpenStack Networking Database” uses neutron_ml2 as the example for the ML2 plug-in and ovs_neutron for the Open vSwitch plug-in).

    Important

    The IP address or host name specified in the connection configuration key must match the IP address or host name to which the neutron database user was granted access when creating the neutron database. Moreover, if the database is hosted locally and you granted permissions to 'localhost' when creating the neutron database, you must enter 'localhost'.
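As a complete example, assuming the neutron database user and the neutron_ml2 database described in Section 7.5.7, “Create the OpenStack Networking Database”, and a database hosted locally, the command might be (a sketch; substitute your own password):
# openstack-config --set /etc/neutron/plugin.ini \
   DATABASE sql_connection mysql://neutron:PASSWORD@localhost/neutron_ml2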

7.5.7. Create the OpenStack Networking Database

This procedure creates the database and database user that will be used by the Networking service. These steps must be performed while logged in to the database server as the root user, and prior to starting the neutron-server service.

Procedure 7.13. Creating the OpenStack Networking database

  1. Connect to the database service using the mysql command.
    # mysql -u root -p
  2. Create the database. If you intend to use the:
    • ML2 plug-in, the recommended database name is neutron_ml2
    • Open vSwitch plug-in, the recommended database name is ovs_neutron.
    • Linux Bridge plug-in, the recommended database name is neutron_linux_bridge.
    This example creates the ML2 neutron_ml2 database.
    mysql> CREATE DATABASE neutron_ml2 character set utf8;
  3. Create a neutron database user and grant it access to the neutron_ml2 database.
    mysql>"GRANT ALL ON neutron_ml2.* TO 'neutron'@'%';"
  4. Flush the database privileges to ensure that they take effect immediately.
    mysql> FLUSH PRIVILEGES;
  5. Exit the mysql client.
    mysql> quit
  6. Run the neutron-db-manage command:
    # neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf \
       --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
The OpenStack Networking database neutron_ml2 has been created. The database will be populated during service configuration.
See Section 7.5.6, “Configure the Networking Service Database Connection” to configure Networking to use the newly created database.
See Section 7.5.3, “Set the Networking Service Plug-in” for Networking plug-in selection and configuration.
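To verify that the schema was created by the neutron-db-manage command, the tables can be listed as the neutron database user (an optional check; the exact table names depend on the plug-in and release):
# mysql -u neutron -p neutron_ml2 -e "SHOW TABLES;"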

7.5.8. Launch the Networking Service

Once the required settings are configured, you can now launch the Networking service (neutron) using the service command:
# service neutron-server start
Enable the Networking service permanently using the chkconfig command.
# chkconfig neutron-server on
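To confirm that the service started successfully, check its status and, if necessary, inspect its log file (the log path shown assumes the default Red Hat packaging layout):
# service neutron-server status
# tail /var/log/neutron/server.log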
The OpenStack Networking service is configured and running. However, further action is required to configure and run the various networking agents that are also fundamental to providing networking functionality.

Important

By default, OpenStack Networking does not enforce Classless Inter-Domain Routing (CIDR) checking of IP addresses. This is to maintain backwards compatibility with previous releases. If you require such checks, set the value of the force_gateway_on_subnet configuration key to True in the /etc/neutron/neutron.conf file.
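For example, using openstack-config in the same way as the other configuration commands in this chapter (restart the neutron-server service afterwards for the change to take effect):
# openstack-config --set /etc/neutron/neutron.conf \
   DEFAULT force_gateway_on_subnet True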