3.3. Networking with nova-network

Understanding the networking configuration options helps you design the best configuration for your Compute instances.
You can choose to either install and configure nova-network for networking between VMs or use the OpenStack Networking service (neutron) for networking. To configure Compute networking options with OpenStack Networking, see Chapter 6, Networking.

3.3.1. Networking concepts

This section offers a brief overview of networking concepts for Compute.
Compute assigns a private IP address to each VM instance. (Currently, Compute with nova-network only supports Linux bridge networking that enables the virtual interfaces to connect to the outside network through the physical interface.) Compute makes a distinction between fixed IPs and floating IPs. Fixed IPs are IP addresses that are assigned to an instance on creation and stay the same until the instance is explicitly terminated. By contrast, floating IPs are addresses that can be dynamically associated with an instance. A floating IP address can be disassociated and associated with another instance at any time. A user can reserve a floating IP for their project.
The network controller with nova-network provides virtual networks to enable compute servers to interact with each other and with the public network. Compute with nova-network supports the following network modes, which are implemented as “Network Manager” types.
Flat Network Manager
In Flat mode, a network administrator specifies a subnet. IP addresses for VM instances are assigned from the subnet, and then injected into the image on launch. Each instance receives a fixed IP address from the pool of available addresses. A system administrator must create the Linux networking bridge (typically named br100, although this is configurable) on the systems running the nova-network service. All instances in the system are attached to the same bridge; this is configured manually by the network administrator.
Configuration injection currently only works on Linux-style systems that keep networking configuration in /etc/network/interfaces.
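For illustration, the injected configuration is a Debian-style interfaces file; a minimal sketch of what Compute might write into the image follows (all addresses here are hypothetical):

# /etc/network/interfaces as injected by Compute (addresses hypothetical)
auto eth0
iface eth0 inet static
    address
    netmask
    gateway
```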
Flat DHCP Network Manager
In FlatDHCP mode, OpenStack starts a DHCP server (dnsmasq) to allocate IP addresses to VM instances from the specified subnet, in addition to manually configuring the networking bridge. IP addresses for VM instances are assigned from a subnet specified by the network administrator.
As in flat mode, all instances are attached to a single bridge on the compute node. Additionally, a DHCP server configures instances (depending on single-/multi-host mode, alongside each nova-network service). In this mode, Compute does a bit more configuration: it attempts to bridge into an ethernet device (flat_interface, eth0 by default). For every instance, Compute allocates a fixed IP address and configures dnsmasq with the MAC/IP pair for the VM. dnsmasq does not take part in the IP address allocation process; it only hands out IPs according to the mapping done by Compute. Instances receive their fixed IPs by issuing a DHCPDISCOVER. These IPs are not assigned to any of the host's network interfaces, only to the VM's guest-side interface.
In any setup with flat networking, the hosts providing the nova-network service are responsible for forwarding traffic from the private network. They also run and configure dnsmasq as a DHCP server listening on this bridge, usually on the bridge's IP address (see Section 3.3.2, “DHCP server: dnsmasq”). Compute can determine the NAT entries for each network, although sometimes NAT is not used, such as when the network has been configured with all public IPs, or when a hardware router is used (one of the HA options). Such hosts need to have br100 configured and physically connected to any other nodes that are hosting VMs. You must set the flat_network_bridge option or create networks with the bridge parameter in order to avoid raising an error. Compute nodes have iptables/ebtables entries created for each project and instance to protect against IP/MAC address spoofing and ARP poisoning.
In single-host Flat DHCP mode you will be able to ping VMs through their fixed IP from the nova-network node, but you cannot ping them from the compute nodes. This is expected behavior.
VLAN Network Manager
VLANManager mode is the default mode for OpenStack Compute. In this mode, Compute creates a VLAN and bridge for each tenant. For multiple-machine installation, the VLAN Network Mode requires a switch that supports VLAN tagging (IEEE 802.1Q). The tenant gets a range of private IPs that are only accessible from inside the VLAN. In order for a user to access the instances in their tenant, a special VPN instance (code named cloudpipe) needs to be created. Compute generates a certificate and key for the user to access the VPN and starts the VPN automatically. It provides a private network segment for each tenant's instances that can be accessed through a dedicated VPN connection from the Internet. In this mode, each tenant gets its own VLAN, Linux networking bridge, and subnet.
The subnets are specified by the network administrator, and are assigned dynamically to a tenant when required. A DHCP Server is started for each VLAN to pass out IP addresses to VM instances from the subnet assigned to the tenant. All instances belonging to one tenant are bridged into the same VLAN for that tenant. OpenStack Compute creates the Linux networking bridges and VLANs when required.
These network managers can co-exist in a cloud system. However, because you cannot select the type of network for a given tenant, you cannot configure multiple network types in a single Compute installation.
All network managers configure the network using network drivers. An example is the Linux L3 driver (l3.py and linux_net.py), which makes use of iptables, route, and other network management facilities, as well as libvirt's network filtering facilities. The driver is not tied to any particular network manager; all network managers use the same driver. The driver usually initializes (creates bridges and so on) only when the first VM lands on the host node.
All network managers operate in either single-host or multi-host mode. This choice greatly influences the network configuration. In single-host mode, a single nova-network service provides a default gateway for VMs and hosts a single DHCP server (dnsmasq). In multi-host mode, each compute node runs its own nova-network service. In both cases, all traffic between VMs and the outer world flows through nova-network.
All networking options require network connectivity to be already set up between OpenStack physical nodes. OpenStack does not configure any physical network interfaces. All network managers automatically create VM virtual interfaces. Some, but not all, managers create network bridges such as br100.
All machines must have a public and internal network interface (controlled by the options: public_interface for the public interface, and flat_interface and vlan_interface for the internal interface with flat / VLAN managers). This guide refers to the public network as the external network and the private network as the internal or tenant network.
The internal network interface is used for communication with VMs; the interface should not have an IP address attached to it before OpenStack installation (it serves merely as a fabric where the actual endpoints are VMs and dnsmasq). Also, you must put the internal network interface in promiscuous mode, because it must receive packets whose target MAC address is of the guest VM, not of the host.
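Putting these options together, a minimal nova.conf fragment might look like the following (the interface names are illustrative and depend on your hardware):

# public-facing interface
public_interface=eth0
# internal (tenant) interface for the flat/FlatDHCP managers
flat_interface=eth1
flat_network_bridge=br100
# internal interface when using the VLAN manager
vlan_interface=eth1
```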
For flat and flat DHCP modes, use the following command to create a network:
$ nova network-create vmnet \
 --fixed-range-v4= --fixed-cidr= --bridge=br100
  • --fixed-range-v4 specifies the network subnet.
  • --fixed-cidr specifies a range of fixed IP addresses to allocate, and can be a subset of the --fixed-range-v4 argument.
  • --bridge specifies the bridge device to which this network is connected on every compute node.
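With illustrative values filled in, a complete invocation might look like the following (the subnet and CIDR values are hypothetical):

$ nova network-create vmnet \
  --fixed-range-v4= --fixed-cidr= \
  --bridge=br100
```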

3.3.2. DHCP server: dnsmasq

The Compute service uses dnsmasq as the DHCP server when running with either the Flat DHCP Network Manager or the VLAN Network Manager. The nova-network service is responsible for starting dnsmasq processes.
The behavior of dnsmasq can be customized by creating a dnsmasq configuration file and specifying it with the dnsmasq_config_file configuration option. For an example of how to change the behavior of dnsmasq by using a configuration file, see the Configuration Reference Guide at https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/.
The dnsmasq documentation also has a more comprehensive dnsmasq configuration file example.
dnsmasq also acts as a caching DNS server for instances. You can explicitly specify the DNS server that dnsmasq uses by setting the dns_server configuration option in /etc/nova/nova.conf. For example, the following setting configures dnsmasq to use Google's public DNS server:
dns_server=
Logging output for dnsmasq goes to the /var/log/messages file. dnsmasq logging output can be useful for troubleshooting if VM instances boot successfully but are not reachable over the network.
A network administrator can run nova-manage fixed reserve --address=x.x.x.x to specify the starting point IP address (x.x.x.x) to reserve with the DHCP server. This reservation only affects which IP address the VMs start at, not the fixed IP addresses that the nova-network service places on the bridges.

3.3.3. Configure Compute to use IPv6 addresses

If you are using OpenStack Compute with nova-network, you can put Compute into IPv4/IPv6 dual-stack mode, so that it uses both IPv4 and IPv6 addresses for communication. In IPv4/IPv6 dual-stack mode, instances can acquire their IPv6 global unicast address by using a stateless address auto configuration mechanism [RFC 4862/2462]. IPv4/IPv6 dual-stack mode works with both VlanManager and FlatDHCPManager networking modes. In VlanManager, each project uses a different 64-bit global routing prefix. In FlatDHCPManager, all instances use one 64-bit global routing prefix.
This configuration was tested with VM images that have an IPv6 stateless address autoconfiguration capability. This capability is required for any VM that you want to run with an IPv6 address. You must use an EUI-64 address for stateless address autoconfiguration. Each node that executes a nova-* service must have python-netaddr and radvd installed.
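The EUI-64 derivation that stateless address autoconfiguration performs can be sketched in Python; this is an illustration of the mechanism, not Compute's own code, and the MAC address and prefix below are hypothetical:

```python
import ipaddress

def eui64_address(prefix: str, mac: str) -> str:
    """Derive an IPv6 address from a /64 prefix and a MAC (RFC 4291 EUI-64).

    The 48-bit MAC is split in half, 0xFFFE is inserted in the middle,
    and the universal/local bit of the first octet is flipped.
    """
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    iface_id = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.IPv6Network(prefix)
    return str(net.network_address + iface_id)

# Illustrative MAC and prefix:
print(eui64_address("fd00:1::/64", "52:54:00:12:34:56"))
```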

Procedure 3.2. Switch into IPv4/IPv6 dual-stack mode

  1. On all nodes running a nova-* service, install python-netaddr:
    # yum install python-netaddr
  2. On all nova-network nodes, install radvd and configure IPv6 networking:
    # yum install radvd
    # echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
    # echo 0 > /proc/sys/net/ipv6/conf/all/accept_ra
  3. Edit the nova.conf file on all nodes to specify use_ipv6 = True.
  4. Restart all nova-* services.
You can add a fixed range for IPv6 addresses to the nova network-create command. Specify public or private after the network-create parameter.
$ nova network-create public --fixed-range-v4 fixed_range_v4 --vlan vlan_id --vpn vpn_start --fixed-range-v6 fixed_range_v6
You can set the IPv6 global routing prefix by using the --fixed-range-v6 parameter. The default value for the parameter is fd00::/48.
  • When you use FlatDHCPManager, the command uses the original --fixed-range-v6 value. For example:
    $ nova network-create public --fixed-range-v4 --fixed-range-v6 fd00:1::/48
  • When you use VlanManager, the command increments the subnet ID to create subnet prefixes. Guest VMs use this prefix to generate their IPv6 global unicast address. For example:
    $ nova network-create public --fixed-range-v4 --vlan 100 --vpn 1000 --fixed-range-v6 fd00:1::/48

Table 3.1. Description of configuration options for ipv6

Configuration option = Default value Description
fixed_range_v6 = fd00::/48 (StrOpt) Fixed IPv6 address block
gateway_v6 = None (StrOpt) Default IPv6 gateway
ipv6_backend = rfc2462 (StrOpt) Backend to use for IPv6 generation
use_ipv6 = False (BoolOpt) Use IPv6

3.3.4. Metadata service


The Compute service uses a special metadata service to enable virtual machine instances to retrieve instance-specific data. Instances access the metadata service at The metadata service supports two sets of APIs: an OpenStack metadata API and an EC2-compatible API. Both APIs are versioned by date.
To retrieve a list of supported versions for the OpenStack metadata API, make a GET request to For example:
$ curl
latest
To list supported versions for the EC2-compatible metadata API, make a GET request to
For example:
$ curl
latest
If you write a consumer for one of these APIs, always attempt to access the most recent API version supported by your consumer first, then fall back to an earlier version if the most recent one is not available.
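The fallback logic described above can be sketched in Python; this is an illustrative helper (not part of any OpenStack library), which relies on the fact that the ISO-date version strings sort chronologically:

```python
def newest_supported(available, supported):
    """Return the newest metadata API version present in both lists.

    Versions are ISO dates, so lexicographic order is chronological;
    'latest' is an alias rather than a fixed version, so a consumer
    should pin to a dated version it knows how to parse.
    """
    usable = sorted(set(available) & set(supported))
    if not usable:
        raise ValueError("no mutually supported metadata API version")
    return usable[-1]

# Example: the server advertises the versions returned by the GET above
server_versions = ["2012-08-10", "latest"]
print(newest_supported(server_versions, ["2012-08-10", "2011-01-01"]))
```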

OpenStack metadata API

Metadata from the OpenStack API is distributed in JSON format. To retrieve the metadata, make a GET request to
For example:
$ curl
{
    "uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38",
    "availability_zone": "nova",
    "hostname": "test.novalocal",
    "launch_index": 0,
    "meta": {
        "priority": "low",
        "role": "webserver"
    },
    "public_keys": {
        "mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n"
    },
    "name": "test"
}
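Since the document is plain JSON, a consumer can parse it with any JSON library. A minimal Python sketch, using an abbreviated copy of the example document above (the key material is shortened for readability):

```python
import json

# Abbreviated version of the metadata document shown above
doc = json.loads("""
{
    "uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38",
    "hostname": "test.novalocal",
    "launch_index": 0,
    "meta": {"priority": "low", "role": "webserver"},
    "public_keys": {"mykey": "ssh-rsa AAAAB3Nza... Generated by Nova\\n"},
    "name": "test"
}
""")

print(doc["hostname"])             # the instance's hostname
print(sorted(doc["public_keys"]))  # names of the injected key pairs
```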
Instances also retrieve user data (passed as the user_data parameter in the API call or by the --user_data flag in the nova boot command) through the metadata service, by making a GET request to
For example:
$ curl
#!/bin/bash
echo 'Extra user data here'

EC2 metadata API

The metadata service has an API that is compatible with version 2009-04-04 of the Amazon EC2 metadata service; virtual machine images that are designed for EC2 work properly with OpenStack.
The EC2 API exposes a separate URL for each metadata element. You can retrieve a listing of these elements by making a GET query to
For example:
$ curl
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups
Instances can retrieve the public SSH key (identified by key pair name when a user requests a new instance) by making a GET request to
For example:
$ curl
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova
Instances can retrieve user data by making a GET request to
For example:
$ curl
#!/bin/bash
echo 'Extra user data here'

Run the metadata service

The metadata service is implemented by either the nova-api service or the nova-api-metadata service. (The nova-api-metadata service is generally used only when running in multi-host mode; it serves only instance-specific metadata.) If you are running the nova-api service, metadata must be one of the elements listed in the enabled_apis configuration option in /etc/nova/nova.conf. The default enabled_apis setting includes the metadata service, so you should not need to modify it.
Hosts access the service at; this address is translated to metadata_host:metadata_port by an iptables rule established by the nova-network service. In multi-host mode, you can set metadata_host to
To enable instances to reach the metadata service, the nova-network service configures iptables to NAT port 80 of the address to the IP address specified in metadata_host (default $my_ip, which is the IP address of the nova-network service) and the port specified in metadata_port (default 8775) in /etc/nova/nova.conf.
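With the defaults above, the resulting NAT rule looks roughly like the following sketch, in which stands in for the nova-network host's IP address ($my_ip); the exact rule on your host may differ:

-A nova-network-PREROUTING -d -p tcp -m tcp --dport 80 \
   -j DNAT --to-destination
```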
The metadata_host configuration option must be an IP address, not a host name.
The default Compute service settings assume that the nova-network service and the nova-api service are running on the same host. If this is not the case, you must make this change in the /etc/nova/nova.conf file on the host running the nova-network service:
Set the metadata_host configuration option to the IP address of the host where the nova-api service runs.

Table 3.2. Description of configuration options for metadata

Configuration option = Default value Description
metadata_host = $my_ip (StrOpt) The IP address for the metadata API server
metadata_listen = (StrOpt) The IP address on which the metadata API will listen.
metadata_listen_port = 8775 (IntOpt) The port on which the metadata API will listen.
metadata_manager = nova.api.manager.MetadataManager (StrOpt) OpenStack metadata service manager
metadata_port = 8775 (IntOpt) The port for the metadata API server
metadata_workers = None (IntOpt) Number of workers for metadata service. The default will be the number of CPUs available.
vendordata_driver = nova.api.metadata.vendordata_json.JsonFileVendorData (StrOpt) Driver to use for vendor data
vendordata_jsonfile_path = None (StrOpt) File to load json formatted vendor data from

3.3.5. Enable ping and SSH on VMs

Be sure you enable access to your VMs by using the euca-authorize or nova secgroup-add-rule command. These commands enable you to ping and ssh to your VMs:
You must run these commands as root only if the credentials used to interact with nova-api are in /root/.bashrc. If the EC2 credentials are in the .bashrc file for another user, you must run these commands as that user.
Run nova commands:
$ nova secgroup-add-rule default icmp -1 -1
$ nova secgroup-add-rule default tcp 22 22
Using euca2ools:
$ euca-authorize -P icmp -t -1:-1 -s default
$ euca-authorize -P tcp -p 22 -s default
If you still cannot ping or SSH to your instances after issuing the nova secgroup-add-rule commands, check the number of running dnsmasq processes. If you have a running instance, two dnsmasq processes should be running. If not, run the following commands as root:
# killall dnsmasq
# service nova-network restart

3.3.6. Configure public (floating) IP addresses

If you are using Compute's nova-network instead of OpenStack Networking (neutron) for networking in OpenStack, use the procedures in this section to configure floating IP addresses. For instructions on how to configure OpenStack Networking (neutron) to provide access to instances through floating IP addresses, see Section 6.8.2, “L3 routing and NAT”.

Private and public IP addresses

Every virtual instance is automatically assigned a private IP address. You can optionally assign public IP addresses to instances. The term floating IP refers to an IP address, typically public, that you can dynamically add to a running virtual instance. OpenStack Compute uses Network Address Translation (NAT) to assign floating IPs to virtual instances.
If you plan to use this feature, you must edit the /etc/nova/nova.conf file to specify the interface to which the nova-network service binds public IP addresses, by setting the public_interface option.
If you make changes to the /etc/nova/nova.conf file while the nova-network service is running, you must restart the service.
Traffic between VMs using floating IPs
Because floating IPs are implemented by using a source NAT (an SNAT rule in iptables), security groups can display inconsistent behavior if VMs use their floating IP to communicate with other VMs, particularly on the same physical host. Traffic from VM to VM across the fixed network does not have this issue, so this is the recommended path. To ensure that traffic does not get SNATed to the floating range, explicitly set:
dmz_cidr=x.x.x.x/y
The x.x.x.x/y value specifies the range of floating IPs for each pool of floating IPs that you define. This configuration is also required if the VMs in the source group have floating IPs.

Enable IP forwarding

To use the floating IP feature, you must enable IP forwarding.
You must enable IP forwarding only on the nodes that run the nova-network service. If you use multi_host mode, ensure that you enable it on all compute nodes. Otherwise, enable it on only the node that runs the nova-network service.
To check whether forwarding is enabled, run:
$ cat /proc/sys/net/ipv4/ip_forward
Alternatively, you can run:
$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
In the previous example, IP forwarding is disabled. To enable it dynamically, run:
# sysctl -w net.ipv4.ip_forward=1
# echo 1 > /proc/sys/net/ipv4/ip_forward
To make the changes permanent, edit the /etc/sysctl.conf file and update the IP forwarding setting:
net.ipv4.ip_forward = 1
Save the file and run the following command to apply the changes:
# sysctl -p
You can also update the setting by restarting the network service:
# service network restart

Create a list of available floating IP addresses

Compute maintains a list of floating IP addresses that you can assign to instances. Use the nova-manage floating create command to add entries to this list.
For example:
# nova-manage floating create --pool=nova --ip_range=
You can use the following nova-manage commands to perform floating IP operations:
  • # nova-manage floating list
    Lists the floating IP addresses in the pool.
  • # nova-manage floating create --pool=[pool name] --ip_range=[CIDR]
    Creates specific floating IPs for either a single address or a subnet.
  • # nova-manage floating delete [CIDR]
    Removes floating IP addresses using the same parameters as the create command.

Automatically add floating IPs

You can configure the nova-network service to automatically allocate and assign a floating IP address to virtual instances when they are launched. Add the following line to the /etc/nova/nova.conf file and restart the nova-network service:
auto_assign_floating_ip=True
If you enable this option and all floating IP addresses have already been allocated, the nova boot command fails.

3.3.7. Remove a network from a project

You cannot remove a network that has already been associated with a project simply by deleting it.
To determine the project ID, you must have administrative rights. You can disassociate the project from the network with a scrub command and the project ID as the final parameter:
# nova-manage project scrub --project=<id>

3.3.8. Multiple interfaces for your instances (multinic)

The multinic feature allows you to attach more than one interface to your instances, enabling several use cases:
  • SSL Configurations (VIPs)
  • Services failover/ HA
  • Bandwidth Allocation
  • Administrative/ Public access to your instances
Each VIF represents a separate network with its own IP block. Each network mode introduces its own set of changes to multinic usage:

Figure 3.4. multinic flat manager

Figure 3.5. multinic flatdhcp manager

Figure 3.6. multinic VLAN manager

Use the multinic feature

In order to use the multinic feature, first create two networks, and attach them to your tenant (still named 'project' on the command line):
$ nova network-create first-net --fixed-range-v4= --project-id=$your-project
$ nova network-create second-net --fixed-range-v4= --project-id=$your-project
Now every time you spawn a new instance, it gets two IP addresses from the respective DHCP servers:
$ nova list
|  ID |    Name    | Status |                Networks                |
| 124 | Server 124 | ACTIVE | network2=; private=|
Make sure to bring up the second interface on the instance; otherwise, the instance is not reachable through its second IP address. Here is an example of how to set up the interfaces within the instance (this is the configuration that must be applied inside the image):
# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet dhcp
If the OpenStack Networking service (neutron) is installed, you can specify the networks to attach to the respective interfaces by using the --nic flag when invoking the nova boot command:
$ nova boot --image ed8b2a37-5535-4a5f-a615-443513036d71 --flavor 1 --nic net-id=<id of first network> --nic net-id=<id of second network> test-vm1

3.3.9. Troubleshoot Networking

Cannot reach floating IPs

If you cannot reach your instances through the floating IP address, check the following:
  • Ensure the default security group allows ICMP (ping) and SSH (port 22), so that you can reach the instances:
    $ nova secgroup-list-rules default
    | IP Protocol | From Port | To Port | IP Range  | Source Group |
    | icmp        | -1        | -1      | |              |
    | tcp         | 22        | 22      | |              |
  • Ensure the NAT rules have been added to iptables on the node that nova-network is running on, as root:
    # iptables -L -nv -t nat
    -A nova-network-PREROUTING -d -j DNAT --to-destination
    -A nova-network-floating-snat -s -j SNAT --to-source
  • Check that the public (floating) address has been added to your public interface. You should see the address in the listing when you enter ip addr at the command prompt.
    $ ip addr
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
        link/ether xx:xx:xx:17:4b:c2 brd ff:ff:ff:ff:ff:ff
        inet brd scope global eth0
        inet scope global eth0
        inet6 fe80::82b:2bf:fe1:4b2/64 scope link
           valid_lft forever preferred_lft forever
    Note that you cannot SSH to an instance with a public IP from within the same server as the routing configuration won't allow it.
  • You can use tcpdump to identify if packets are being routed to the inbound interface on the compute host. If the packets are reaching the compute hosts but the connection is failing, the issue may be that the packet is being dropped by reverse path filtering. Try disabling reverse-path filtering on the inbound interface. For example, if the inbound interface is eth2, as root, run:
    # sysctl -w net.ipv4.conf.eth2.rp_filter=0
    If this solves your issue, add the following line to /etc/sysctl.conf so that the reverse-path filter is disabled the next time the compute host reboots:
    net.ipv4.conf.eth2.rp_filter=0

Disable firewall

To help debug networking issues with reaching VMs, you can disable the firewall by setting the following option in /etc/nova/nova.conf:
firewall_driver=nova.virt.firewall.NoopFirewallDriver
We strongly recommend you remove this line to re-enable the firewall once your networking issues have been resolved.

Packet loss from instances to nova-network server (VLANManager mode)

If you can SSH to your instances but network interactions with an instance are slow, or if certain operations (for example, sudo) are slower than they should be, there may be packet loss on the connection to the instance.
Packet loss can be caused by Linux networking configuration settings related to bridges. Certain settings can cause packets to be dropped between the VLAN interface (for example, vlan100) and the associated bridge interface (for example, br100) on the host running the nova-network service.
One way to check whether this is the issue in your setup is to open three terminals and run the following commands:
  1. In the first terminal, on the host running nova-network, use tcpdump on the VLAN interface to monitor DNS-related traffic (UDP, port 53). As root, run:
    # tcpdump -K -p -i vlan100 -v -vv udp port 53
  2. In the second terminal, also on the host running nova-network, use tcpdump to monitor DNS-related traffic on the bridge interface. As root, run:
    # tcpdump -K -p -i br100 -v -vv udp port 53
  3. In the third terminal, SSH inside of the instance and generate DNS requests by using the nslookup command:
    $ nslookup www.google.com
    The symptoms may be intermittent, so try running nslookup multiple times. If the network configuration is correct, the command should return immediately each time. If it is not functioning properly, the command hangs for several seconds.
  4. If the nslookup command sometimes hangs, and there are packets that appear in the first terminal but not the second, then the problem may be due to filtering done on the bridges. Try disabling filtering by running the following commands as root:
    # sysctl -w net.bridge.bridge-nf-call-arptables=0
    # sysctl -w net.bridge.bridge-nf-call-iptables=0
    # sysctl -w net.bridge.bridge-nf-call-ip6tables=0
    If this solves your issue, add the following lines to /etc/sysctl.conf so that these changes take effect the next time the host reboots:
    net.bridge.bridge-nf-call-arptables=0
    net.bridge.bridge-nf-call-iptables=0
    net.bridge.bridge-nf-call-ip6tables=0