Chapter 4. Planning IP address usage

An OpenStack deployment can consume a larger number of IP addresses than you might expect. This chapter helps you anticipate how many addresses you require, and explains where those addresses are used in your environment.

4.1. VLAN planning

When you plan your Red Hat OpenStack Platform deployment, you start with a number of subnets, from which you allocate individual IP addresses. When you use multiple subnets, you can segregate traffic between systems into VLANs.

For example, it is good practice to keep management or API traffic on a separate network from systems that serve web traffic. Traffic between VLANs travels through a router, where you can implement firewalls to govern traffic flow.

You must plan your VLANs as part of an overall design that includes traffic isolation, high availability, and IP address utilization for the various types of virtual networking resources in your deployment.

Note

The maximum number of VLANs in a single network, or in one OVS agent for a network node, is 4094. If you require more than the maximum number of VLANs, you can create several provider networks (VXLAN networks) and several network nodes, one per network. Each node can contain up to 4094 private networks.
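
If you script this planning, the standard Python ipaddress module can help you map candidate subnets to VLANs and confirm how many addresses each subnet provides. The following sketch is illustrative only; the supernet, prefix length, and VLAN IDs are assumptions, not values that this guide prescribes.

    # A minimal planning sketch, assuming a 172.16.0.0/16 supernet and four
    # example VLAN IDs; neither is required by Red Hat OpenStack Platform.
    import ipaddress

    supernet = ipaddress.ip_network("172.16.0.0/16")
    planned_vlans = [10, 20, 30, 40]  # e.g. internal API, storage, storage mgmt, tenant

    for vlan_id, subnet in zip(planned_vlans, supernet.subnets(new_prefix=24)):
        usable = subnet.num_addresses - 2  # exclude network and broadcast addresses
        print(f"VLAN {vlan_id}: {subnet} ({usable} usable addresses)")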

4.2. Types of network traffic

You can allocate separate VLANs for the different types of network traffic that you want to host. For example, you can have a separate VLAN for each of the network types described in this section. Only the External network must be routable to the external physical network. In this release, director provides DHCP services.

Note

You do not require all of the isolated VLANs in this section for every OpenStack deployment. For example, if your cloud users do not create ad hoc virtual networks on demand, then you might not require a project network. If you want each VM to connect directly to the same switch as any other physical system, connect your Compute nodes directly to a provider network and configure your instances to use that provider network directly.

  • Provisioning network - This VLAN is dedicated to deploying new nodes using director over PXE boot. OpenStack Orchestration (heat) installs OpenStack onto the overcloud bare metal servers. These servers attach to the physical network to receive the platform installation image from the undercloud infrastructure.
  • Internal API network - The OpenStack services use the Internal API network for communication, including API communication, RPC messages, and database communication. In addition, this network is used for operational messages between controller nodes. When planning your IP address allocation, note that each API service requires its own IP address. Specifically, you must plan IP addresses for each of the following services (a tally of these VIPs appears in the sketch after this list):

    • vip-msg (amqp)
    • vip-keystone-int
    • vip-glance-int
    • vip-cinder-int
    • vip-nova-int
    • vip-neutron-int
    • vip-horizon-int
    • vip-heat-int
    • vip-ceilometer-int
    • vip-swift-int
    • vip-keystone-pub
    • vip-glance-pub
    • vip-cinder-pub
    • vip-nova-pub
    • vip-neutron-pub
    • vip-horizon-pub
    • vip-heat-pub
    • vip-ceilometer-pub
    • vip-swift-pub
Note

When using High Availability, Pacemaker moves VIP addresses between the physical nodes.

  • Storage - Block Storage, NFS, iSCSI, and other storage services. Isolate this network to separate physical Ethernet links for performance reasons.
  • Storage Management - OpenStack Object Storage (swift) uses this network to synchronize data objects between participating replica nodes. The proxy service acts as the intermediary interface between user requests and the underlying storage layer. The proxy receives incoming requests and locates the necessary replica to retrieve the requested data. Services that use a Ceph back end connect over the Storage Management network, because they do not interact with Ceph directly but rather use the front-end service. Note that the RBD driver is an exception; this traffic connects directly to Ceph.
  • Project networks - Neutron provides each project with its own networks, using either VLAN segregation (where each project network is a network VLAN) or tunneling with VXLAN or GRE. Network traffic is isolated within each project network. Each project network has an IP subnet associated with it, and multiple project networks can use the same addresses.
  • External - The External network hosts the public API endpoints and connections to the Dashboard (horizon). You can also use this network for SNAT. In a production deployment, it is common to use a separate network for floating IP addresses and NAT.
  • Provider networks - Use provider networks to attach instances to existing network infrastructure. You can use provider networks to map directly to an existing physical network in the data center, using flat networking or VLAN tags. This allows an instance to share the same layer-2 network as a system external to the OpenStack Networking infrastructure.
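
As a planning aid, the following sketch tallies the VIPs listed above and reserves the internal (-int) VIPs from a candidate Internal API subnet. The subnet and the starting offset are assumptions for illustration; internal VIPs typically come from the Internal API network, and public (-pub) VIPs from the External network.

    # Tally of the per-service VIPs listed in the Internal API network item.
    # The subnet and starting offset below are illustrative assumptions.
    import ipaddress

    services = ["msg", "keystone", "glance", "cinder", "nova",
                "neutron", "horizon", "heat", "ceilometer", "swift"]
    internal_vips = ["vip-msg"] + [f"vip-{name}-int" for name in services[1:]]
    public_vips = [f"vip-{name}-pub" for name in services[1:]]

    internal_api = ipaddress.ip_network("172.16.1.0/24")   # assumed subnet
    pool = list(internal_api.hosts())[9:]                  # assumed: start at .10
    for vip, address in zip(internal_vips, pool):
        print(f"{vip}: {address}")

    print(f"Internal API VIPs to plan: {len(internal_vips)}")     # 10
    print(f"External (public) VIPs to plan: {len(public_vips)}")  # 9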

4.3. IP address consumption

The following systems consume IP addresses from your allocated range:

  • Physical nodes - Each physical NIC requires one IP address. It is common practice to dedicate physical NICs to specific functions. For example, allocate management and NFS traffic to distinct physical NICs, sometimes with multiple NICs connected to different switches for redundancy.
  • Virtual IPs (VIPs) for High Availability - Plan to allocate between one and three VIPs for each network that controller nodes share.
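
To turn these rules into a number for one network, you can total the physical NIC addresses and the VIPs on that network. The node counts, NIC assignment, and VIP count below are hypothetical examples, not recommendations.

    # Rough per-network address budget; all counts are example assumptions.
    controller_nodes = 3
    compute_nodes = 10
    nics_per_node_on_network = 1   # one NIC per node attached to this network
    vips_on_network = 3            # between one and three VIPs per shared network

    physical_addresses = (controller_nodes + compute_nodes) * nics_per_node_on_network
    print(f"Addresses to reserve on this network: {physical_addresses + vips_on_network}")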

4.4. Virtual networking

The following virtual resources consume IP addresses in OpenStack Networking. These resources are considered local to the cloud infrastructure, and do not need to be reachable by systems in the external physical network:

  • Project networks - Each project network requires a subnet that it can use to allocate IP addresses to instances.
  • Virtual routers - Each router interface plugging into a subnet requires one IP address. If you want to use DHCP, each router interface requires two IP addresses.
  • Instances - Each instance requires an address from the project subnet that hosts the instance. If you require ingress traffic, you must allocate a floating IP address to the instance from the designated external network.
  • Management traffic - Includes OpenStack services and API traffic. All services share a small number of VIPs. API, RPC, and database services communicate on the internal API VIP.
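
As a rough sizing aid, the following sketch estimates the smallest project subnet that can hold a given number of instances plus the router interface addresses described above. The instance count and the DHCP choice are assumptions for illustration.

    # Estimate the smallest project subnet for a planned instance count.
    # The instance count and DHCP setting are example assumptions.
    import math

    instances = 50          # planned instances on this project network
    dhcp_enabled = True     # with DHCP, each router interface requires two addresses

    required = instances + (2 if dhcp_enabled else 1)

    # Smallest prefix whose usable host count (2**bits - 2) covers the requirement.
    host_bits = max(2, math.ceil(math.log2(required + 2)))
    prefix = 32 - host_bits
    print(f"Required addresses: {required}; smallest subnet: /{prefix} "
          f"({2 ** host_bits - 2} usable)")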

4.5. Example network plan

This example shows a number of networks that accommodate multiple subnets, with each subnet being assigned a range of IP addresses:

Table 4.1. Example subnet plan

Subnet name                              Address range                      Number of addresses   Subnet mask
Provisioning network                     192.168.100.1 - 192.168.100.250    250                   255.255.255.0
Internal API network                     172.16.1.10 - 172.16.1.250         241                   255.255.255.0
Storage                                  172.16.2.10 - 172.16.2.250         241                   255.255.255.0
Storage Management                       172.16.3.10 - 172.16.3.250         241                   255.255.255.0
Tenant network (GRE/VXLAN)               172.16.4.10 - 172.16.4.250         241                   255.255.255.0
External network (incl. floating IPs)    10.1.2.10 - 10.1.3.222             469                   255.255.254.0
Provider network (infrastructure)        10.10.3.10 - 10.10.3.250           241                   255.255.252.0
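
If you want to check the arithmetic in Table 4.1, a short script can derive the address count of each range from its boundaries. The ranges below are copied from the table; only the verification code is new.

    # Verify the "Number of addresses" column in Table 4.1.
    import ipaddress

    ranges = {
        "Provisioning network": ("192.168.100.1", "192.168.100.250"),
        "Internal API network": ("172.16.1.10", "172.16.1.250"),
        "Storage": ("172.16.2.10", "172.16.2.250"),
        "Storage Management": ("172.16.3.10", "172.16.3.250"),
        "Tenant network (GRE/VXLAN)": ("172.16.4.10", "172.16.4.250"),
        "External network (incl. floating IPs)": ("10.1.2.10", "10.1.3.222"),
        "Provider network (infrastructure)": ("10.10.3.10", "10.10.3.250"),
    }

    for name, (first, last) in ranges.items():
        count = int(ipaddress.ip_address(last)) - int(ipaddress.ip_address(first)) + 1
        print(f"{name}: {count} addresses")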