Chapter 6. Networking

Learn OpenStack Networking concepts, architecture, and basic and advanced neutron and nova command-line interface (CLI) commands.

6.1. Introduction to Networking

The Networking service, code-named neutron, provides an API that lets you define network connectivity and addressing in the cloud. The Networking service enables operators to leverage different networking technologies to power their cloud networking. The Networking service also provides an API to configure and manage a variety of network services ranging from L3 forwarding and NAT to load balancing, edge firewalls, and IPSEC VPN.
For information on the Networking API, see the OpenStack Networking Installation Overview section in Deploying OpenStack: Learning Environments (Manual Setup) at

6.1.1. Networking API

Networking is a virtual network service that provides a powerful API to define the network connectivity and IP addressing that devices from other services, such as Compute, use.
The Compute API has a virtual server abstraction to describe computing resources. Similarly, the Networking API has virtual network, subnet, and port abstractions to describe networking resources.

Table 6.1. Networking resources

Resource Description
Network An isolated L2 segment, analogous to VLAN in the physical networking world.
Subnet A block of v4 or v6 IP addresses and associated configuration state.
Port A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.
You can configure rich network topologies by creating and configuring networks and subnets, and then instructing other OpenStack services like Compute to attach virtual devices to ports on these networks.
In particular, Networking supports each tenant having multiple private networks, and allows tenants to choose their own IP addressing scheme (even if those IP addresses overlap with those that other tenants use). The Networking service:
  • Enables advanced cloud networking use cases, such as building multi-tiered web applications and enabling migration of applications to the cloud without changing IP addresses.
  • Offers flexibility for the cloud administrator to customize network offerings.
  • Enables developers to extend the Networking API. Over time, the extended functionality becomes part of the core Networking API.
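For example, the network, subnet, and port abstractions map directly onto neutron CLI commands. The names, CIDR, and IDs below are illustrative, not taken from this guide:

```shell
# Create an isolated L2 network (name is illustrative)
$ neutron net-create web-net
# Add an IPv4 subnet to the network
$ neutron subnet-create --name web-subnet web-net 10.10.1.0/24
# Create a port on the network; neutron assigns a MAC and fixed IP
$ neutron port-create web-net
# Attach a Compute instance to that port
$ nova boot --image <image-id> --flavor m1.small --nic port-id=<port-uuid> web-server
```

Because tenants can reuse overlapping IP ranges, two tenants can both create a 10.10.1.0/24 subnet without conflict.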

6.1.2. Configure SSL support for networking API

OpenStack Networking supports SSL for the Networking API server. By default, SSL is disabled but you can enable it in the neutron.conf file.
Set these options to configure SSL:
use_ssl = True
Enables SSL on the networking API server.
ssl_cert_file = /path/to/certfile
Certificate file that is used when you securely start the Networking API server.
ssl_key_file = /path/to/keyfile
Private key file that is used when you securely start the Networking API server.
ssl_ca_file = /path/to/cafile
Optional. CA certificate file that is used when you securely start the Networking API server. This file verifies connecting clients. Set this option when API clients must authenticate to the API server by using SSL certificates that are signed by a trusted CA.
tcp_keepidle = 600
The value of TCP_KEEPIDLE, in seconds, for each server socket when starting the API server. Not supported on OS X.
retry_until_window = 30
Number of seconds to keep retrying to listen.
backlog = 4096
Number of backlog requests with which to configure the socket.
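Taken together, a neutron.conf fragment that enables SSL might look like the following. The certificate paths are placeholders for files you provision yourself:

```ini
[DEFAULT]
use_ssl = True
ssl_cert_file = /etc/neutron/ssl/server.crt
ssl_key_file = /etc/neutron/ssl/server.key
# Optional: require client certificates signed by this CA
ssl_ca_file = /etc/neutron/ssl/ca.crt
tcp_keepidle = 600
retry_until_window = 30
backlog = 4096
```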

6.1.3. Load Balancing-as-a-Service (LBaaS) overview

Load Balancing-as-a-Service (LBaaS) enables Networking to distribute incoming requests evenly between designated instances. This ensures the workload is shared predictably among instances, and allows more effective use of system resources. Incoming requests are distributed using one of these load balancing methods:
Round robin
Rotates requests evenly between multiple instances.
Source IP
Requests from a unique source IP address are consistently directed to the same instance.
Least connections
Allocates requests to the instance with the least number of active connections.

Table 6.2. LBaaS features

Feature Description
Monitors LBaaS provides availability monitoring with the ping, TCP, HTTP and HTTPS GET methods. Monitors are implemented to determine whether pool members are available to handle requests.
Management LBaaS is managed using a variety of tool sets. The REST API is available for programmatic administration and scripting. Users perform administrative management of load balancers through either the CLI (neutron) or the OpenStack dashboard.
Connection limits Ingress traffic can be shaped with connection limits. This feature allows workload control, and can also assist with mitigating DoS (Denial of Service) attacks.
Session persistence
LBaaS supports session persistence by ensuring incoming requests are routed to the same instance within a pool of multiple instances. LBaaS supports routing decisions based on cookies and source IP address.
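A typical LBaaS v1 workflow creates a pool, adds members, attaches a health monitor, and exposes a VIP. The addresses and ports below are illustrative:

```shell
# Create a pool that uses round robin on an existing subnet
$ neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id <subnet-uuid>
# Add two back-end instances as pool members
$ neutron lb-member-create --address 10.0.0.3 --protocol-port 80 mypool
$ neutron lb-member-create --address 10.0.0.4 --protocol-port 80 mypool
# Create an HTTP health monitor and associate it with the pool
$ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
$ neutron lb-healthmonitor-associate <healthmonitor-uuid> mypool
# Create the VIP that clients connect to
$ neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <subnet-uuid> mypool
```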

6.1.4. Firewall-as-a-Service (FWaaS) overview

The Firewall-as-a-Service (FWaaS) plug-in adds perimeter firewall management to Networking. FWaaS uses iptables to apply firewall policy to all Networking routers within a project. FWaaS supports one firewall policy and logical firewall instance per project.
Whereas security groups operate at the instance-level, FWaaS operates at the perimeter by filtering traffic at the neutron router.
FWaaS is currently in technical preview; because it is untested, production use is not recommended.
The example diagram below illustrates the flow of ingress and egress traffic for the VM2 instance:

Figure 6.1. FWaaS architecture

Enable FWaaS
Enable the FWaaS plugin in the neutron.conf file:
service_plugins =
driver =
enabled = True
FWaaS management options are also available in the OpenStack dashboard. Enable the option in the local settings file, which is typically located on the controller node at /usr/share/openstack-dashboard/openstack_dashboard/local/
'enable_firewall' = True
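The service_plugins and driver values above are release specific. In a Havana-era deployment, the reference iptables implementation is typically enabled with entries along these lines; treat the class paths as assumptions to verify against your installed release:

```ini
[DEFAULT]
service_plugins = neutron.services.firewall.fwaas_plugin.FirewallPlugin

[fwaas]
driver = neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
enabled = True
```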

Procedure 6.1. Configure Firewall-as-a-Service

First create the firewall rules and create a policy that contains them, then create a firewall that applies the policy:
  1. Create a firewall rule:
    $ neutron firewall-rule-create --protocol <tcp|udp|icmp|any> --destination-port <port-range> --action <allow|deny>
    The CLI requires a protocol value; if the rule is protocol agnostic, the 'any' value can be used.
  2. Create a firewall policy:
    $ neutron firewall-policy-create --firewall-rules "<firewall-rule IDs or names separated by space>" myfirewallpolicy
    The order of the rules specified above is important. You can create a firewall policy without any rules and add rules later, either with the update operation (when adding multiple rules) or with the insert-rule operation (when adding a single rule).
    FWaaS always adds a default deny all rule at the lowest precedence of each policy. Consequently, a firewall policy with no rules blocks all traffic by default.
  3. Create a firewall:
    $ neutron firewall-create <firewall-policy-uuid>
    The firewall remains in PENDING_CREATE state until a Networking router is created, and an interface is attached.
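As a worked example, the three steps above can allow inbound HTTP while relying on the implicit deny-all rule; the policy name is illustrative:

```shell
# Rule allowing TCP traffic to port 80
$ neutron firewall-rule-create --protocol tcp --destination-port 80 --action allow
# Policy containing that rule (use the rule ID returned by the previous command)
$ neutron firewall-policy-create --firewall-rules "<rule-uuid>" allow-http-policy
# Firewall applying the policy; it leaves PENDING_CREATE once a router interface exists
$ neutron firewall-create allow-http-policy
```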
Allowed-address-pairs

Allowed-address-pairs allow you to specify mac_address/ip_address(cidr) pairs that pass through a port regardless of subnet. This enables the use of protocols such as VRRP, which floats an IP address between two instances to enable fast data plane failover.
The allowed-address-pairs extension is currently only supported by these plug-ins: ML2, Open vSwitch, and VMware NSX.
Basic allowed-address-pairs operations
  • Create a port with a specific allowed-address-pairs:
    $ neutron port-create net1 --allowed-address-pairs type=dict list=true mac_address=<mac_address>,ip_address=<ip_cidr>
  • Update a port adding allowed-address-pairs:
    $ neutron port-update <port-uuid> --allowed-address-pairs type=dict list=true mac_address=<mac_address>,ip_address=<ip_cidr>
OpenStack Networking prevents setting an allowed-address-pair that matches the mac_address and ip_address of the port itself. Such a pair would have no effect, because traffic matching that mac_address and ip_address is already allowed to pass through the port.
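As a concrete VRRP-style example, two instance ports can each be allowed to carry a shared virtual address. The address is illustrative, and omitting mac_address (which the CLI permits) defaults the pair to the port's own MAC:

```shell
# Allow the floating VRRP address 10.0.0.200 on each of the two instance ports
$ neutron port-update <primary-port-uuid> --allowed-address-pairs type=dict list=true ip_address=10.0.0.200
$ neutron port-update <standby-port-uuid> --allowed-address-pairs type=dict list=true ip_address=10.0.0.200
```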

6.1.5. Plug-in architecture

The original Compute network implementation assumed a basic model of isolation through Linux VLANs and IP tables. Networking introduces support for vendor plug-ins, which offer a custom back-end implementation of the Networking API. A plug-in can use a variety of technologies to implement the logical API requests. Some Networking plug-ins might use basic Linux VLANs and IP tables, while others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow, to provide similar benefits.

Table 6.3. Available networking plug-ins

Plug-ins can have different properties for hardware requirements, features, performance, scale, or operator tools. Because Networking supports a large number of plug-ins, the cloud administrator can weigh options to decide on the right networking technology for the deployment.
In the Havana release, OpenStack Networking introduces the Modular Layer 2 (ML2) plug-in that enables the use of multiple concurrent mechanism drivers. This capability aligns with the complex requirements typically found in large heterogeneous environments. It currently works with the existing Open vSwitch, Linux Bridge, and Hyper-V L2 agents. The ML2 framework simplifies the addition of support for new L2 technologies and reduces the effort that is required to add and maintain them compared to earlier large plug-ins.
Plug-in deprecation notice
The Open vSwitch and Linux Bridge plug-ins are deprecated in the Havana release and will be removed in the Icehouse release. The features in these plug-ins are now part of the ML2 plug-in in the form of mechanism drivers.
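A minimal ml2_conf.ini sketch for the Open vSwitch mechanism driver with GRE tenant networks might look like the following; all values, including the tunnel ID range, are illustrative assumptions:

```ini
[ml2]
type_drivers = gre,flat,vlan
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000
```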
Not all Networking plug-ins are compatible with all possible Compute drivers:

Table 6.4. Plug-in compatibility with Compute drivers

Plug-in                   Libvirt (KVM/QEMU)   VMware   Hyper-V   Bare-metal
Big Switch / Floodlight   Yes
Brocade                   Yes
Cisco                     Yes
Cloudbase Hyper-V                                       Yes
Linux Bridge              Yes
Mellanox                  Yes
Midonet                   Yes
ML2                       Yes                           Yes
NEC OpenFlow              Yes
Open vSwitch              Yes
Plumgrid                  Yes                  Yes
Ryu                       Yes
VMware NSX                Yes                  Yes                Yes

Plug-in configurations

For configuration options, see the Configuration Reference Guide. These sections explain how to configure specific plug-ins.

Configure Big Switch, Floodlight REST Proxy plug-in

Procedure 6.2. To use the REST Proxy plug-in with OpenStack Networking

  1. Edit the /etc/neutron/neutron.conf file and add this line:
    core_plugin = neutron.plugins.bigswitch.plugin.NeutronRestProxyV2
  2. Edit the plug-in configuration file, /etc/neutron/plugins/bigswitch/restproxy.ini, and specify a comma-separated list of controller_ip:port pairs:
    server = <controller-ip>:<port>
    For database configuration, see the Create the OpenStack Networking Database section in Deploying OpenStack: Learning Environments (Manual Setup) at
  3. Restart neutron-server to apply the new settings:
    # service neutron-server restart

Configure Brocade plug-in

Procedure 6.3. To use the Brocade plug-in with OpenStack Networking

  1. Install the Brocade-modified Python netconf client (ncclient) library, which is available at
    $ git clone
    As root, execute:
    # cd ncclient; python setup.py install
  2. Edit the /etc/neutron/neutron.conf file and set the following option:
    core_plugin = neutron.plugins.brocade.NeutronPlugin.BrocadePluginV2
  3. Edit the /etc/neutron/plugins/brocade/brocade.ini configuration file for the Brocade plug-in and specify the admin user name, password, and IP address of the Brocade switch:
    username = admin
    password = password
    address  = switch mgmt ip address
    ostype   = NOS
  4. Restart the neutron-server service to apply the new settings:
    # service neutron-server restart

Configure OVS plug-in
If you use the Open vSwitch (OVS) plug-in in a deployment with multiple hosts, you must use either tunneling or VLANs to isolate traffic from multiple networks. Tunneling is easier to deploy because it does not require configuring VLANs on network switches.
This procedure uses tunneling:

Procedure 6.4. To configure OpenStack Networking to use the OVS plug-in

  1. Edit /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini to specify these values.
    # only required for nodes running agents
  2. If you use the neutron DHCP agent, add these lines to the /etc/neutron/dhcp_agent.ini file:
  3. Create /etc/neutron/dnsmasq/dnsmasq-neutron.conf, and add these values to lower the MTU size on instances and prevent packet fragmentation over the GRE tunnel:
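    Under GRE tunneling, the three files above might contain entries along these lines; all values, including the local_ip, are illustrative and must match your deployment:

    ```ini
    # /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
    [ovs]
    tenant_network_type = gre
    tunnel_id_ranges = 1:1000
    enable_tunneling = True
    integration_bridge = br-int
    tunnel_bridge = br-tun
    local_ip = 192.168.1.10   # only required for nodes running agents

    # /etc/neutron/dhcp_agent.ini
    dnsmasq_config_file = /etc/neutron/dnsmasq/dnsmasq-neutron.conf

    # /etc/neutron/dnsmasq/dnsmasq-neutron.conf
    dhcp-option-force=26,1400   # DHCP option 26 sets the instance MTU
    ```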
  4. Restart to apply the new settings:
    # service neutron-server restart

Configure NSX plug-in

Procedure 6.5. To configure OpenStack Networking to use the NSX plug-in

The instructions in this section refer to the VMware NSX platform, formerly known as Nicira NVP.
  1. Install the NSX plug-in, as follows:
    # yum install openstack-neutron-vmware
  2. Edit /etc/neutron/neutron.conf and set:
    core_plugin = vmware
    Example neutron.conf file for NSX:
    core_plugin = vmware
    rabbit_host =
    allow_overlapping_ips = True
  3. To configure the NSX controller cluster for the OpenStack Networking Service, locate the [default] section in the /etc/neutron/plugins/vmware/nsx.ini file, and add the following entries:
    • To establish and configure the connection with the controller cluster you must set some parameters, including NSX API endpoints, access credentials, and settings for HTTP redirects and retries in case of connection failures:
      nsx_user = <admin user name>
      nsx_password = <password for nsx_user>
      req_timeout = <timeout in seconds for NSX_requests> # default 30 seconds
      http_timeout = <timeout in seconds for single HTTP request> # default 10 seconds
      retries = <number of HTTP request retries> # default 2
      redirects = <maximum allowed redirects for a HTTP request> # default 3
      nsx_controllers = <comma separated list of API endpoints>
      To ensure correct operations, the nsx_user user must have administrator credentials on the NSX platform.
      A controller API endpoint consists of the IP address and port for the controller; if you omit the port, port 443 is used. If multiple API endpoints are specified, it is up to the user to ensure that all these endpoints belong to the same controller cluster. The OpenStack Networking VMware NSX plug-in does not perform this check, and results might be unpredictable.
      When you specify multiple API endpoints, the plug-in load-balances requests on the various API endpoints.
    • The UUID of the NSX Transport Zone that should be used by default when a tenant creates a network. You can get this value from the NSX Manager's Transport Zones page:
      default_tz_uuid = <uuid_of_the_transport_zone>
    • The UUID of the NSX L3 Gateway Service to use by default when a tenant creates a router:
      default_l3_gw_service_uuid = <uuid_of_the_gateway_service>
  4. Restart neutron-server to apply new settings:
    # service neutron-server restart
Example nsx.ini file:
default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c
To debug nsx.ini configuration issues, run this command from the host that runs neutron-server:
# neutron-check-nsx-config <path/to/nsx.ini>
This command tests whether neutron-server can log into all of the NSX Controllers and the SQL server, and whether all UUID values are correct.

Load Balancer-as-a-Service and Firewall-as-a-Service
The NSX LBaaS and FWaaS services use the standard OpenStack API with the exception of requiring routed-insertion extension support.
The main differences between the NSX implementation and the community reference implementation of these services are:
  1. The NSX LBaaS and FWaaS plug-ins require the routed-insertion extension, which adds the router_id attribute to the VIP (Virtual IP address) and firewall resources and binds these services to a logical router.
  2. The community reference implementation of LBaaS only supports a one-arm model, which requires the VIP to be on the same subnet as the back-end servers. The NSX LBaaS plug-in supports only a two-arm model for north-south traffic, which means that you can create the VIP only on the external (physical) network.
  3. The community reference implementation of FWaaS applies firewall rules to all logical routers in a tenant, while the NSX FWaaS plug-in applies firewall rules only to one logical router according to the router_id of the firewall entity.

Procedure 6.6. To configure Load Balancer-as-a-Service and Firewall-as-a-Service with NSX:

  1. Edit /etc/neutron/neutron.conf file:
    core_plugin = neutron.plugins.vmware.plugin.NsxServicePlugin
    # Note: comment out service_plugins. LBaaS & FWaaS are supported by core_plugin NsxServicePlugin
    # service_plugins =
  2. Edit /etc/neutron/plugins/vmware/nsx.ini file:
    In addition to the original NSX configuration, the default_l3_gw_service_uuid is required for the NSX Advanced plug-in and you must add a vcns section:
        nsx_password = admin
        nsx_user = admin
        nsx_controllers =
        default_l3_gw_service_uuid = aae63e9b-2e4e-4efe-81a1-92cf32e308bf
        default_tz_uuid = 2702f27a-869a-49d1-8781-09331a0f6b9e
        # VSM management URL
        manager_uri =
        # VSM admin user name
        user = admin
        # VSM admin password
        password = default
        # UUID of a logical switch on NSX which has physical network connectivity (currently using bridge transport type)
        external_network = f2c023cf-76e2-4625-869b-d0dabcfcc638
        # ID of deployment_container on VSM. Optional, if not specified, a default global deployment container is used
        # deployment_container_id =
        # task_status_check_interval configures status check interval for vCNS asynchronous API. Default is 2000 msec.
        # task_status_check_interval =

Configure PLUMgrid plug-in

Procedure 6.7. To use the PLUMgrid plug-in with OpenStack Networking

  1. Edit /etc/neutron/neutron.conf and set:
    core_plugin = neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2
  2. Edit /etc/neutron/plugins/plumgrid/plumgrid.ini under the [PLUMgridDirector] section, and specify the IP address, port, admin user name, and password of the PLUMgrid Director:
    director_server = "PLUMgrid-director-ip-address"
    director_server_port = "PLUMgrid-director-port"
    username = "PLUMgrid-director-admin-username"
    password = "PLUMgrid-director-admin-password"
  3. Restart neutron-server to apply the new settings:
    # service neutron-server restart

Configure Ryu plug-in

Procedure 6.8. To use the Ryu plug-in with OpenStack Networking

  1. Install the Ryu plug-in, as follows:
    # yum install openstack-neutron-ryu
  2. Edit /etc/neutron/neutron.conf and set:
    core_plugin = neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2
  3. Edit the /etc/neutron/plugins/ryu/ryu.ini file and update these options in the [ovs] section for the ryu-neutron-agent:
    • openflow_rest_api. Defines where Ryu listens for the REST API. Substitute ip-address and port-no based on your Ryu setup.
    • ovsdb_interface. Enables Ryu to access the ovsdb-server. Substitute eth0 based on your setup. The IP address is derived from the interface name. If you want to change this value irrespective of the interface name, you can specify ovsdb_ip. If you use a non-default port for ovsdb-server, you can specify ovsdb_port.
    • tunnel_interface. Defines which IP address is used for tunneling. If you do not use tunneling, this value is ignored. The IP address is derived from the network interface name.
    You can use the same configuration file for many compute nodes by using a network interface name with a different IP address:
    openflow_rest_api = <ip-address>:<port-no>
    ovsdb_interface = <eth0>
    tunnel_interface = <eth0>
  4. Restart neutron-server to apply the new settings:
    # service neutron-server restart

6.1.6. Configure neutron agents

Plug-ins typically have requirements for particular software that must be run on each node that handles data packets. This includes any node that runs nova-compute and nodes that run dedicated OpenStack Networking service agents such as neutron-dhcp-agent, neutron-l3-agent, neutron-metering-agent or neutron-lbaas-agent.
A data-forwarding node typically has a network interface with an IP address on the “management network” and another interface on the “data network”.
This section shows you how to install and configure a subset of the available plug-ins, which might include the installation of switching software (for example, Open vSwitch) and agents used to communicate with the neutron-server process running elsewhere in the data center.

Configure data-forwarding nodes

Node set up: OVS plug-in
This section also applies to the ML2 plug-in when Open vSwitch is used as a mechanism driver.
If you use the Open vSwitch plug-in, you must install Open vSwitch and the neutron-plugin-openvswitch-agent agent on each data-forwarding node:
Do not install the openvswitch-brcompat package because it prevents the security group functionality from operating correctly.

Procedure 6.9. To set up each node for the OVS plug-in

  1. Install the OVS agent package. This action also installs the Open vSwitch software as a dependency:
    # yum install openstack-neutron-openvswitch
  2. On each node that runs the agent, complete these steps:
    • Replicate the ovs_neutron_plugin.ini file that you created on the node.
    • If you use tunneling, update the ovs_neutron_plugin.ini file for the node with the IP address that is configured on the data network for the node by using the local_ip value.
  3. Restart Open vSwitch to properly load the kernel module:
    # service openvswitch-switch restart
  4. Restart the agent:
    # service neutron-plugin-openvswitch-agent restart
  5. All nodes that run neutron-plugin-openvswitch-agent must have an OVS br-int bridge. To create the bridge, run:
    # ovs-vsctl add-br br-int

Node set up: NSX plug-in
If you use the NSX plug-in, you must also install Open vSwitch on each data-forwarding node. However, you do not need to install an additional agent on each node.

Procedure 6.10. To set up each node for the NSX plug-in

  1. Ensure that each data-forwarding node has an IP address on the management network, and an IP address on the "data network" that is used for tunneling data traffic. For full details on configuring your forwarding node, see the NSX Administrator Guide.
  2. Use the NSX Administrator Guide to add the node as a Hypervisor by using the NSX Manager GUI. Even if your forwarding node has no VMs and is only used for services agents like neutron-dhcp-agent or neutron-lbaas-agent, it should still be added to NSX as a Hypervisor.
  3. After following the NSX Administrator Guide, use the page for this Hypervisor in the NSX Manager GUI to confirm that the node is properly connected to the NSX Controller Cluster and that the NSX Controller Cluster can see the br-int integration bridge.

Node set up: Ryu plug-in
If you use the Ryu plug-in, you must install both Open vSwitch and Ryu, in addition to the Ryu agent package:

Procedure 6.11. To set up each node for the Ryu plug-in

  1. Install Ryu:
    # pip install ryu
  2. Install the Ryu agent and Open vSwitch packages:
    # yum install openstack-neutron-ryu openvswitch python-openvswitch
  3. Replicate the ovs_ryu_plugin.ini and neutron.conf files created in the above step on all nodes running neutron-ryu-agent.
  4. Restart Open vSwitch to properly load the kernel module:
    # service openvswitch restart
  5. Restart the agent:
    # service neutron-ryu-agent restart
  6. All nodes running neutron-ryu-agent also require that an OVS bridge named "br-int" exists on each node. To create the bridge, run:
    # ovs-vsctl add-br br-int

Configure DHCP agent

The DHCP service agent is compatible with all existing plug-ins and is required for all deployments where VMs should automatically receive IP addresses through DHCP.

Procedure 6.12. To install and configure the DHCP agent

  1. You must configure the host running the neutron-dhcp-agent as a "data forwarding node" according to the requirements for your plug-in (see Section 6.1.6, “Configure neutron agents”).
  2. Install the DHCP agent:
    # yum install openstack-neutron
  3. Finally, update any options in the /etc/neutron/dhcp_agent.ini file that depend on the plug-in in use (see the sub-sections).

DHCP agent setup: OVS plug-in
These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the OVS plug-in:
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

DHCP agent setup: NSX plug-in
These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the NSX plug-in:
enable_metadata_network = True
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

DHCP agent setup: Ryu plug-in
These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the Ryu plug-in:
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

Configure L3 agent

The OpenStack Networking Service has a widely used API extension to allow administrators and tenants to create routers to interconnect L2 networks, and floating IPs to make ports on private networks publicly accessible.
Many plug-ins rely on the L3 service agent to implement the L3 functionality. However, the following plug-ins already have built-in L3 capabilities:
  • NSX plug-in
  • Big Switch/Floodlight plug-in, which supports both the open source Floodlight controller and the proprietary Big Switch controller.
    Only the proprietary BigSwitch controller implements L3 functionality. When using Floodlight as your OpenFlow controller, L3 functionality is not available.
  • PLUMgrid plug-in
Do not configure or use neutron-l3-agent if you use one of these plug-ins.

Procedure 6.13. To install the L3 agent for all other plug-ins

  1. Install the neutron-l3-agent binary on the network node:
    # yum install openstack-neutron
  2. To uplink the node that runs neutron-l3-agent to the external network, create a bridge named "br-ex" and attach the NIC for the external network to this bridge.
    For example, with Open vSwitch and NIC eth1 connected to the external network, run:
    # ovs-vsctl add-br br-ex
    # ovs-vsctl add-port br-ex eth1
    Do not manually configure an IP address on the NIC connected to the external network for the node running neutron-l3-agent. Rather, you must have a range of IP addresses from the external network that can be used by OpenStack Networking for routers that uplink to the external network. This range must be large enough to have an IP address for each router in the deployment, as well as each floating IP.
  3. The neutron-l3-agent uses the Linux IP stack and iptables to perform L3 forwarding and NAT. In order to support multiple routers with potentially overlapping IP addresses, neutron-l3-agent defaults to using Linux network namespaces to provide isolated forwarding contexts. As a result, the IP addresses of routers are not visible simply by running the ip addr list or ifconfig command on the node. Similarly, you cannot directly ping fixed IPs.
    To do either of these things, you must run the command within a particular network namespace for the router. The namespace has the name "qrouter-<UUID of the router>". These example commands run in the router namespace with UUID 47af3868-0fa8-4447-85f6-1304de32153b:
    # ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list
    # ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping <fixed-ip>

Configure metering agent

Starting with the Havana release, the neutron metering agent resides beside neutron-l3-agent.

Procedure 6.14. To install the metering agent and configure the node

  1. Install the agent by running:
    # yum install openstack-neutron-metering-agent
    Package name prior to Icehouse
    In releases of neutron prior to Icehouse, this package was named neutron-plugin-metering-agent.
  2. If you use one of the following plug-ins, you must also configure the metering agent with these lines:
    • An OVS-based plug-in such as OVS, NSX, Ryu, NEC, BigSwitch/Floodlight:
      interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    • A plug-in that uses LinuxBridge:
      interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
  3. To use the reference implementation, you must set:
    driver =
  4. Set this parameter in the neutron.conf file on the host that runs neutron-server:
    service_plugins =

Configure Load-Balancing-as-a-Service (LBaaS)

Configure Load-Balancing-as-a-Service (LBaaS) with the Open vSwitch or Linux Bridge plug-in. The Open vSwitch LBaaS driver is required when enabling LBaaS for OVS-based plug-ins, including BigSwitch, Floodlight, NEC, NSX, and Ryu.
  1. Install the agent:
    # yum install openstack-neutron
  2. Enable the HAProxy plug-in using the service_provider parameter in the /etc/neutron/neutron.conf file:
    service_provider =
  3. Enable the load balancer plugin using service_plugin in the /etc/neutron/neutron.conf file:
    service_plugins =
  4. Enable the HAProxy load balancer in the /etc/neutron/lbaas_agent.ini file:
    device_driver =
  5. Select the required driver in the /etc/neutron/lbaas_agent.ini file:
    Enable the Open vSwitch LBaaS driver:
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    Or enable the Linux Bridge LBaaS driver:
    interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
    Apply the new settings by restarting the neutron-server and neutron-lbaas-agent services.
    Upgrade from Havana to Icehouse
    The LBaaS server-agent communication changed in Icehouse. When upgrading from Havana to Icehouse, make sure to upgrade both the server and agent sides before using the load balancing service.
  6. Enable Load Balancing in the Project section of the Dashboard user interface:
    Change the enable_lb option to True in the /etc/openstack-dashboard/local_settings file:
    OPENSTACK_NEUTRON_NETWORK = {'enable_lb': True,
    Apply the new settings by restarting the httpd service. You can now view the Load Balancer management options in dashboard's Project view.
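The elided values in steps 2 through 4 are release specific. For the Havana-era HAProxy reference implementation they typically resemble the following; verify the class paths against your installed release before using them:

```ini
# /etc/neutron/neutron.conf
service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin
[service_providers]
service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

# /etc/neutron/lbaas_agent.ini
device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
```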