Evaluating OpenStack: Simple Networking in Red Hat Enterprise Linux OpenStack Platform 5


OpenStack Networking Overview

This article describes the steps necessary to configure networking in your Packstack All-in-One deployment. All Compute and OpenStack Networking services reside on a single host. After using the steps in this article, all ingress/egress network traffic is bridged to the physical network infrastructure. In addition, this particular all-in-one configuration is intended for a single node deployment. As a result, the networking type is set to local, and does not make use of VXLAN or VLAN networking.

Note: This article is applicable to Red Hat OpenStack Platform 5. For more recent information, see the Red Hat OpenStack Platform 10 version.

Note: This article assumes an existing Packstack installation is present, using Red Hat Enterprise Linux OpenStack Platform 5. Refer to the Evaluating OpenStack: Single-Node Deployment article for instructions on building an all-in-one Packstack environment. Ensure NetworkManager has been disabled, and that the legacy network service will handle networking instead, before proceeding:

# systemctl disable NetworkManager.service
# systemctl stop NetworkManager.service
# systemctl enable network.service
# systemctl start network.service

Overview of Configuration Steps

  • The default Packstack topology is removed (network, subnet, and router).
  • A physical NIC is mapped to a virtual bridge. This step allows OpenStack Networking traffic to reach the physical network.
  • The virtual network topology is created with a new network, public subnet, and router:

Packstack networking


This article assumes that the destination host has the following network configuration. You can adjust the steps to suit your deployment:

  • 1 x physical NIC allocated with the following IP details:

Table 1. eth0

Setting Value
IP address
Subnet mask
Default gateway
DNS server

1. Clear Packstack's Default Network Configuration

The default Packstack configuration creates a public_subnet and configures a gateway on router1. These need to be recreated with the appropriate settings. Use the following steps to remove the defaults:

# cd ~
# source keystonerc_admin
# neutron router-gateway-clear router1
# neutron subnet-delete public_subnet
# neutron net-delete public

2. Map the Physical NIC to the Bridge

1) Map a physical NIC to the virtual Open vSwitch bridge. The virtual bridge acts as the intermediary between the physical network and any virtual networks:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0
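The edited file typically ends up looking something like the following sketch, which assumes the external bridge uses the Packstack default name br-ex; adjust DEVICE to match your NIC:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0
# Attach the physical NIC to the external Open vSwitch bridge.
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
```

Note that the NIC's IP settings are removed here; they move to the bridge in the next step.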

2) Configure the virtual bridge with the IP address details that were previously allocated to eth0:

# vi /etc/sysconfig/network-scripts/ifcfg-br-ex
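A minimal sketch of the bridge configuration follows. The addresses shown are examples only; reuse the values previously assigned to eth0 in your environment:

```shell
# /etc/sysconfig/network-scripts/ifcfg-br-ex
# The bridge takes over the IP details previously held by eth0.
# Example addresses only -- substitute your own.
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.120.10
NETMASK=255.255.255.0
GATEWAY=192.168.120.1
DNS1=192.168.120.2
ONBOOT=yes
```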

3) Restart your server for the changes to take effect:

# reboot

3. Create The New OpenStack Networking Topology

Recreate the Network Topology

Note: You can open the Network Topology tab in Dashboard to observe the results of the following commands.

# cd ~
# source keystonerc_admin
# neutron net-create public --router:external=True
# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation_pool start=,end= --gateway= public
# neutron net-create private_network
# neutron subnet-create private_network --name private_vmsubnet
# neutron router-create router2
# neutron router-gateway-set router2 public
# neutron router-interface-add router2 private_vmsubnet

You can now build an instance using Dashboard and assign it to private_network in the Networking tab.

4. SSH To An Instance

Enabling SSH access to an instance requires security group (firewall) configuration, the allocation of a floating IP address, and the generation of an SSH keypair. These steps are detailed in the following procedure:

Allow Incoming ICMP and SSH Traffic

List the security groups to determine which one to configure for SSH and ICMP access:

# neutron security-group-list
| id                                   | name    | description |
| 1822f9b1-f7bf-47d0-8e57-950a66036ce5 | default | default     |
| 71141e31-11f8-496e-aa1c-8ef1cc95eb77 | default | default     |
| f897bc94-66b8-4485-974d-0a4256ebe3c4 | default | default     |

Add Security Group Rules

The following neutron commands add rules to allow incoming ICMP and SSH traffic:

    # neutron security-group-rule-create --protocol icmp --direction ingress 1822f9b1-f7bf-47d0-8e57-950a66036ce5
    # neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress 1822f9b1-f7bf-47d0-8e57-950a66036ce5

Allocate a Floating IP Address

Floating IP addresses enable external network connectivity for instances. In the following procedure, a floating IP address is created and associated with the instance's network port.

1) Use the neutron command to create a floating IP address:

# neutron floatingip-create public
| Field               | Value                                |
| fixed_ip_address    |                                      |
| floating_ip_address |                       |
| floating_network_id | 7a03e6bc-234d-402b-9fb2-0af06c85a8a3 |
| id                  | 9d7e2603482d                         |
| port_id             |                                      |
| router_id           |                                      |
| status              | ACTIVE                               |
| tenant_id           | 9e67d44eab334f07bf82fa1b17d824b6     |

2) Locate the ID of the port associated with your instance. This will match the fixed IP address allocated to the instance. This port ID is used in the following step to associate the instance's port ID with the floating IP address ID. You can further distinguish the correct port ID by ensuring the MAC address in the third column matches the one on the instance.

# neutron port-list
| id     | name | mac_address | fixed_ips                                              |
| ce8320 |      | 3e:37:09:4b | {"subnet_id": "361f27", "ip_address": ""} |
| d88926 |      | 3e:1d:ea:31 | {"subnet_id": "361f27", "ip_address": ""} |
| 8190ab |      | 3e:a3:3d:2f | {"subnet_id": "b74dbb", "ip_address": ""} |

3) Use the neutron command to associate the floating IP address with an instance:

# neutron floatingip-associate 9d7e2603482d 8190ab

Generate a Security Keypair

The nova command below generates a keypair and adds it to the Compute service. The resulting sshaccess.pem file contains the private key, which can be presented when attempting to SSH to the instance.

# nova keypair-add sshaccess > sshaccess.pem
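Before using the key, restrict its file permissions; ssh refuses a private key that is readable by other users. A small sketch (the touch line is only a stand-in for the file that nova keypair-add writes above):

```shell
# Placeholder so this snippet is self-contained; in practice the file
# is written by `nova keypair-add sshaccess > sshaccess.pem`.
touch sshaccess.pem
# ssh rejects private keys readable by group/other, so lock them down.
chmod 600 sshaccess.pem
```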

You can now SSH to the instance's floating IP address by presenting the sshaccess.pem key:

# ssh -i sshaccess.pem root@
[root@corp-vm-01 ~]#

Network connectivity for the instance's operating system

You can now proceed to configure your instance's operating system for connectivity. You'll need to configure the default gateway and DNS settings manually if they're not served via DHCP:

  • The default gateway IP address needs to be on the same subnet as your instance.
  • At least one DNS server needs to be specified in the interface's settings for name resolution to work.
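On a RHEL-based guest, for example, these settings could be added to the instance's own interface file. All addresses below are hypothetical placeholders; the gateway must sit on the instance's own subnet, and the DNS server must be reachable from the instance:

```shell
# Appended to the instance's /etc/sysconfig/network-scripts/ifcfg-eth0.
# Hypothetical addresses -- use values valid for your private subnet.
GATEWAY=10.1.0.1
DNS1=10.1.0.2
```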


Hello, is it VXLAN, VLAN, or flat? By default, all-in-one will use VXLAN in the config.


This configuration refers to an all-in-one setup where only one compute node is being used, and VMs running on this node are not expected to communicate with VMs on other compute nodes. The networking type in this case defaults to 'local', so the VMs can only communicate locally within the host. However, this is not to be confused with the external provider network used to provide NAT and floating IPs; for that network you can use flat (untagged) or 802.1Q VLANs, depending on your environment.

Hope this helps,

Thanks for the feedback Nir, I've updated the article to clarify the intentions behind this particular all-in-one deployment type.

Just tested this using Fedora 20 and got it to work. I had been struggling with making external network work on a single node and this did the trick. Thanks!

Awesome, glad to hear it worked out for you.

Thanks Martin.
Also, to get internet access on the VMs, we need to add DNS :)

Hi Marian,

Thanks for the suggestion. I've added a mention of this to the final section: "Network connectivity for the instance's operating system".

Is there any plan to describe a multinode scenario? For me, the most difficult part is networking.

Hi Yixuan,

Yes, we're planning something similar for multinode deployments in the next release.

Hello Martin,
So I currently have an all-in-one setup with two interfaces. Both interfaces are on the 10.0.0.0/8 network. The IP assigned to OpenStack is the one that has now been bridged with br-ex. Following your instructions, how do I go about this setup? Thank you greatly

Hi Malik,

A key step would be to populate /etc/sysconfig/network-scripts/ifcfg-br-ex with the correct values. I might suggest consulting with your network administrator to determine which addresses to use for the DNS and gateway settings.

Hi Martin,

This is a most helpful KB :)
Furthermore, I have two things which I think should be fixed:
1. The subnet of br-ex is 192.168.120.XXX, while the command to create the public network uses 192.168.100.XXX. I think they should be the same.
2. I don't think you need two routers in this configuration; I think it is possible to skip:

neutron router-gateway-set router1 public

Please correct me if I'm wrong
Best regards,

Hi Roei,
You're right, thanks for catching that. I've made the example subnet a consistent 192.168.120.xxx throughout.
I'll check out the router suggestion too.

There is no need for this router1; you are effectively connecting the private network to the public network. In doing so, it creates an interface on the internal router of your private network, associated with the public network's IP allocation. The public network has its own default gateway that exists in your physical network topology.

This is not needed

neutron router-gateway-set router1 public

If you run this

neutron router-port-list router2

You will see there is an interface on router2 that has an IP address from the allocation pool of the public network; you can also ping this IP address from your existing LAN. Oddly, the web UI reports this interface as down, but you can ping it.

Hi Stuart, thanks for the suggestion. I've updated the article accordingly.

In Section "3. Create The New OpenStack Networking Topology"
This command failed on RHOSP 7.0:
[root@xxx01 ~(keystone_admin)]# neutron net-create public --router:external=True
usage: neutron net-create [-h] [-f {shell,table,value}] [-c COLUMN]
[--max-width ] [--prefix PREFIX]
[--request-format {json,xml}]
[--tenant-id TENANT_ID] [--admin-state-down]
[--shared] [--router:external]
[--provider:network_type ]
[--provider:segmentation_id ]
[--vlan-transparent {True,False}]
neutron net-create: error: argument --router:external: ignored explicit argument u'True'

Another question, why should we set gateway to an unused router router1?

neutron router-gateway-set router1 public


--enable-dhcp=false is incorrect; it should be --disable-dhcp. Also, I'm not sure why two routers are created. Only one is required.

Did you get an answer on running 'neutron net-create public --router:external=True'? I am hitting the same error.

OK, it seems the flag True is not required; it works with neutron net-create public --router:external


Ah, this article is only applicable to RHELOSP 5, which is why we're seeing some syntax changes. I've updated the note to make that clearer.

https://www.rdoproject.org/networking/neutron-with-existing-external-network/ explains the same thing

Also, it would be nice to show adding a second compute node to this setup.

Hi, is the VM supposed to be able to access the internet with this configuration? I did the configuration as in this tutorial. The VM can ping the host locally and ping VMs in the same subnet, but it can't ping the internet.

Hi Steven, I would expect the VMs to be able to access external networks: in the example above, the VMs should be using the same default gateway as the one configured on eth0 (later moved to ifcfg-br-ex). If your environment is also configured in this way, I might suggest testing that you can ping an IP address beyond the default gateway from the packstack host, just to confirm that the gateway is functioning as expected. I could also suggest comparing traceroute output from the packstack node with that of the VM, as this can give a starting point for seeing where things might be going wrong.

I tried OSP9 nested on VirtualBox 5.x, using the OSP5 steps above, but still could not access the VM. Finally I changed the VirtualBox NIC setting to 'Promiscuous Mode: Allow All', and only then could I access the VM through its floating IP from the physical network. Now the only problem is that from within the VM instance I still cannot ping or access my physical network.

Hi Kamarulzaman, it might be worth checking that the VM instance can ping its default gateway (and that the gateway is correctly configured in the guest OS). Or perhaps deploy another VM instance to the same network and see if they can ping each other.

I am doing an OSP10 single-node install using Packstack. Does this article apply to that too?

Hi Kashif, this article is only for OSP5, but we do have the broader Networking Guide for OSP10: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html-single/networking_guide/. Would this be useful to you, or were you specifically after an OSP10 version of this article?

As I am new to this, I was specifically after an OSP10 version of the topic. Actually, I was forwarded to this page from https://access.redhat.com/articles/1127153. I don't know if something has happened because of following the OSP 5 version :( as I have lost the Horizon dashboard.

Hi Pål, I've finished writing an updated version of this guide, and it's currently going through technical review with our SMEs. I'll be sure to post an update here once it's ready.

Thanks Martin. If the technical review will take a long time, can you share the changes w.r.t. the OSP5 manual here and I can give them a try. Of course you don't bear any responsibility :)

Hi Pål, I'm checking on this, will let you know.

Hi Pål, I've discussed this option with our support organisation, and it's instead been suggested that you consider opening a support case to discuss your situation further.

Hi Pål, the updated version for OSP10 has been published here: https://access.redhat.com/articles/3188582